Everything posted by Beq Janus

  1. It does not exist. We added it to Firestorm; there is no such feature in the default viewer. It is a Firestorm-specific feature.
  2. They are right, Rigify has nothing to do with Second Life and has no knowledge of the specific skeletal structures used by SL. If you want to make SL-targeted animations in Blender then you need to look at the Avastar product instead, or build the rig manually yourself, ensuring that you name every bone properly and adhere to the correct hierarchy as noted by Optimo. Avastar is specifically designed with SL/OpenSim animations and rigging in mind.
  3. Feb 26th is far too soon(tm). Thank you for all the support that you have given the open source side of Second Life over the years; it is a rare thing to be able to work actively on open source to help improve a commercial platform you enjoy, as we get to do with TPVs. I hope your legacy of open source support continues. If you want to keep your hand in on coding, we're open to patches, just raise a Jira 😉 All the very best Oz. Beq
  4. The additional functions came from the "Camera Presets" project from the Lab. The camera preset controls in the FS default skin are a lot smaller than those handed down from the Lab, and resizable. If many people feel that the new preset controls are not a worthwhile addition then it would be worth creating a Jira and we can consider it; be sure to be clear about what you want to see. Given the breadth of the user base it is very hard to please everyone and, as it happens, configurable UI is a real pain to manage in the code, so it is not something we rush to do.
  5. Double check that you have not got a local override being enforced, @SarahThe Wanderer. On your Quick Prefs, hit the X next to Personal Lighting to reset it.
  6. Any idea how long ago? I can go back through the viewer source code and see if we changed it in FS (or acquired changes from the Lab).
  7. To my knowledge this was never possible. There is only one name slot in the mesh asset upload, if I recall correctly, and thus all the others get defaulted; there have been a number of requests for this to be changed. I'll double check and post an update, but don't get your hopes up. Beq EDIT: @Whirly Fizzle beat me to it. See the above link to BUG-202864 and within that my analysis of the mesh asset format data. I am not 100% convinced that what I wrote there is totally consistent with reality, though the behaviour remains the same. I have a feeling that while the serialised version that the viewer stores has the name, it is stripped from the upload because the server cannot handle it.
  8. Ugh, typos plague me. As it won't let me edit: a² = b² + c², OR a = √(b² + c²)
  9. Sorry, got distracted away on other things. The radius of the BB is indeed defined by the equation you cited, except for the divide by 2 that @Aquila Kytori noted. It is an extension of the basic Pythagorean equation: the hypotenuse of a right triangle (the long edge) is the square root of the sum of the squares of the two sides (opposite and adjacent), a = √(b² + c²). This extends to three dimensions by adding the third length into the right-hand side. When we are using the overall dimensions of a box, this gives us the full diagonal, and thus we divide by two to get the radius. Don't be thrown by the bounding box being a box; what you are calculating is the radius of the smallest sphere that would completely enclose the bounding box. All the rest is mathematical shenanigans that are to some extent lost in time. The volume modifiers are, I believe, intended to compensate for the BB/radius issue. The human brain does a pretty good job of assessing size and volume: given a ball and a box of similar sizes we would expect the two to behave similarly over distance (LOD swap at the same time), however the BB radius of the cube is larger than that of the "similar size" ball and thus the ball would swap before the cube. There are all kinds of issues with such simple rules, but I hope this explains where the idea comes from.
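A minimal sketch of that calculation in Python (not viewer code; the function name is just for illustration):

```python
# Radius of the smallest sphere enclosing a bounding box, as described
# above: the 3D Pythagorean diagonal of the box, divided by two.
import math

def bounding_sphere_radius(x, y, z):
    # x, y, z are the overall dimensions of the object's bounding box.
    return math.sqrt(x * x + y * y + z * z) / 2.0

# A 2m x 2m x 2m box gives a radius of ~1.73m, noticeably larger than
# the 1.0m radius of a ball that looks a similar size, which is the
# mismatch described in the post above.
print(bounding_sphere_radius(2.0, 2.0, 2.0))  # ~1.732
```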
  10. The cost of the physics is derived from the area of the triangles that make up the mesh, with additional cost being assigned to triangles that are longer than they are wide (long thin triangles); this is because they increase the cost of collision detection. The extreme case of this is the degenerate triangle, which in mathematics is where two of the vertices are the same, but in the viewer a triangle is flagged when one of its dimensions is a small percentage of the total length of the sides (if I recall correctly, though it might be a small percentage of the average length). Degenerate triangles result in the red highlights in the preview and the instruction to reduce the complexity/remove thin triangles. I suspect that the small differences that you found were enough to skew a few of the triangles into a "longer, thinner" category and thus incur a penalty.
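A hedged sketch of that kind of test (not the viewer's actual code; the 2% threshold and the exact measure are assumptions for illustration):

```python
# Flag a triangle as "thin / near-degenerate" when its smallest height
# is only a small percentage of the total length of its sides. The
# measure and the 2% threshold are assumptions, not the viewer's values.
import math

def is_thin_triangle(p0, p1, p2, threshold=0.02):
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p2, p0)
    perimeter = a + b + c
    if perimeter == 0.0:
        return True  # fully degenerate: all three vertices coincide
    # Heron's formula gives the area; the smallest height of the
    # triangle is then 2 * area / longest side.
    s = perimeter / 2.0
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    smallest_height = 2.0 * area / max(a, b, c)
    return smallest_height / perimeter < threshold

# A long sliver is flagged; a roughly equilateral triangle is not.
print(is_thin_triangle((0, 0, 0), (10, 0, 0), (5, 0.05, 0)))  # True
print(is_thin_triangle((0, 0, 0), (1, 0, 0), (0.5, 0.9, 0)))  # False
```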
  11. Hi Finch, Can you be explicit please and state the actual version number that was running, and the version you have now? We have not expired any current release versions for a long time; we always allow the 3 most recent full releases. We do have beta releases, which can and do get blocked when a new version is deployed, so that might be one possibility. We have not moved from the last release version 6.3.9, though we have an optional beta release 6.4. Make sure you download the 6.3.9 release and try that. Having said that, if the SL viewer will not load there is something more afoot. Are there any errors? EDIT: Our support team have advised that if the SL viewer is failing as well then you should probably try to contact the Second Life support group and get the SL viewer up and running first. This is best done through the instructions on this link https://lindenlab.freshdesk.com/support/solutions/articles/31000131009-contact-support
  12. Physics issues appear to be related to the uplift of regions, whether directly or indirectly. There are a lot of crossing failures where bridges that were previously "helped" by large physical blocks are no longer being crossed properly. I've not seen this reported for situations other than region handover issues, where an avatar is transferred from region to region, so this may have nothing to do with the OP's issue. It would do no harm to see if a Jira exists and add your comments to it. The Lab can only fix things that they are aware of, and will fix things quicker the more aware they are of how many people are affected. Region crossing problems overall stem from how crossings are managed. When you "leave" one region it effectively sends a projection of your direction and speed to the new region so that you can continue your journey uninterrupted. This of course goes all kinds of wonky if the handover takes an extended amount of time, and I suspect that while we are still in the midst of the uplift to cloud there will be lots of rough edges.
  13. The viewers are predominantly single-threaded (see note below), but Windows is particularly poor at managing this (or is it particularly adept at it? you choose) and while all of the activity is happening on a single thread, Windows moves that thread about all over the shop. If you sum all of the per-core CPU usage it will almost certainly come to about 1.2 * 100/N, where N is the number of cores. e.g. my machine has 8 cores and if I sum all the usage it comes in at around 14%, which is a little over the 12.5% you'd expect of a 100% utilised single thread.

Mostly true but not entirely so; it is not so cut and dried. Viewers are multi-threaded, however the vast majority of work happens on a single thread and that includes all of your rendering, data marshalling to and from cache, etc. The reason we can also be confident that it uses 100% is that the main loop of the viewer is literally looping as fast as it can, drawing frames and servicing input devices. The exception to this is when you deliberately limit the frame rate or when you defocus the viewer window; the main loop then takes a deliberate sleep on each frame to reduce the load while you are doing other stuff. What happens on other threads is primarily network fetching of HTTP assets. It is also worth noting that your voice service (slvoice.exe), being a separate executable, can run on another core.

The viewer was never built for multi-threaded rendering and even in the places where threads are employed it has some rather peculiar traits. In particular OpenGL is not friendly towards threaded access, and even if it were, the manner in which the data is marshalled today would struggle to take proper advantage of the additional cores. There is light at the end of this tunnel though. The Lab are actively researching, and preparing for, a migration to a new pipeline and as part of that migration being able to scale to available computing power is a key aim. It's not a small undertaking (there is a reason why none of the third party viewers have such support); it requires a complete rewrite of the rendering internals and that extends its reach into how data is retrieved and stored, and even has its claws in the higher level functions of the UI. As such, the first steps towards this brave new world will likely be entirely invisible to users, as stricter borders are placed around architectural parts of the viewer to allow the surgery to take place.

The challenge of course is also making sure that updates don't lose more users than they benefit. I don't know what the typical user's machine looks like, but the Lab do have extensive data on this; I know anecdotally that on Firestorm we have a phenomenally wide range of users, from those with small laptops with slow CPUs and onboard graphics to those with multi-GPU, overclocked desktop beasts. It is inevitable that some older machines will simply no longer be usable as the minimum hardware spec is adjusted, but it would make no commercial sense for the Lab to do that if they lost more users than they gained. Time will tell where that takes us.
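Purely as a conceptual sketch of that main-loop behaviour (this is not the viewer's code; the function names are placeholders):

```python
# Conceptual sketch: the main loop spins as fast as it can, drawing
# frames and servicing input, and only sleeps when a frame-rate cap is
# set or the window loses focus.
import time

def service_input():
    pass  # placeholder: poll keyboard, mouse, etc.

def draw_frame():
    pass  # placeholder: render the scene, on this same thread

def main_loop(fps_cap=None, window_focused=True, max_frames=60):
    for _ in range(max_frames):  # bounded here so the sketch terminates
        frame_start = time.monotonic()
        service_input()
        draw_frame()
        if fps_cap or not window_focused:
            # Deliberate per-frame sleep: honour the cap, or back off
            # while the user is doing other things.
            target = 1.0 / fps_cap if fps_cap else 0.1
            elapsed = time.monotonic() - frame_start
            time.sleep(max(target - elapsed, 0.0))
        # With no cap and the window focused, loop again immediately,
        # which is why one core sits near 100% utilisation.

main_loop(fps_cap=60)
```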
  14. As others have noted, it is most likely a settings issue (and reinstalling may not help unless settings are wiped). I would suggest the following (screenshots are from the latest SL viewer on Windows but I believe they should correspond to the ancient version too): open Preferences (Ctrl-P, or Me->Preferences); on General, ensure that "Pressing letter keys" is set to "Affects movement"; on Move & View, "Arrow keys always move me" can be enabled (strictly speaking it does not need to be). If this only happens after she has opened a chat then the second item above is almost certainly her issue. ---- To be clear on the Linux position, your friend, or her husband, should almost certainly review the TPV options if she is likely to spend any time at all in SL. I will try to clarify the status here but @Oz Linden might want to keep me honest on this one and give a more "official" version to correct or balance my (undoubtedly) biased summary. Essentially, the Lab do not feel that the Linux community in SL is large enough to warrant maintaining a viewer for themselves. At one point the Lab had stated that community contributions were required for them to update their viewer, and in response one of our (Firestorm) team developers sent a number of updates in to help the Lab bring their distribution up to date and restore support, but these were never applied and by now would need to be re-created and re-submitted anyway. The current answer regarding Linux support is only slightly different, and is simply that they (the Lab) don't think that Linux support is worth the effort and that developer time spent on it would by definition be time not available to spend on new features/bug fixing for the majority of users. I can understand that as a commercial and practical decision but it remains true that even a small percentage of users is a reasonably large number of people given the SL user base. The result of this is that Linux support falls to those of us in TPV-land (there are a number of options listed on the TPV page earlier in this thread). This runs a longer-term risk that technology choices made by the Lab are not supportable on Linux because, while the Lab are not expecting to make such decisions, it can happen in the background; a good example of this is Vivox voice. Vivox dropped Linux support a few years ago and as such Linux voice support is limited to older versions and would be lost forever should the server-side support for that older API be lost. The LL Linux viewer is so old and out of date that if a TPV were to list it we would be marked down because it does not comply with any of the recent feature updates. These include, but are not limited to, Bento, Animesh, BOM, and EEP, to name but a few. Anyone running that viewer would have a seriously degraded experience. Ironically, the Lab's own support policy means that this viewer is not supported. https://releasenotes.secondlife.com/supported_viewers.html It is notable that the downloads page does not appear to give any indication that Linux is unsupported or will give a degraded experience. This is unfortunate and really ought to be fixed.
  15. Asset servers are in the cloud and have been for a while, not that that necessarily has any bearing on things here. There are all kinds of server interactions, such as money transactions etc., so it could be any number of things. As this is an HTTP error, it is being returned from the server. I would suggest that you raise a Jira ticket as it seems most likely an uplift-related issue and well worth escalating. The waters are of course muddied by the fact that the uploader has changed a bit, but in reality the underlying mechanics are the same, so the balance of probability is with the server side. That said, if you can reliably get the issue with the SL viewer but the Firestorm release version does not have the issue then that is important to add to the ticket, as it would suggest something has been messed up on the client side. (Charlotte appears to have had failures on the older FS too, so that very much points to a server-side issue.)
  16. It's an uncaught error, take a look in the log tab and see if there is anything of note in there. With no screenshots and no other information that's about all I can suggest.
  17. The old Ruth models used to be available on the SL wiki, but keep in mind that these are hopelessly old and do not have Bento support. You can find lots of old resources here: http://wiki.secondlife.com/wiki/Clothing_Tutorials While some more up-to-date (but often broken) resources for Bento are here: http://wiki.secondlife.com/wiki/Project_Bento_Testing Depending on your needs, though, you might want to take a look at the Ruth/Roth project: a fully open-source, enhanced avatar developed by some active creators in the OpenSim space, but available and fully compatible with Second Life. https://github.com/RuthAndRoth The latest Ruth and Roth releases include Bento and BOM support. If you want to try them out you can get them on the Marketplace in the Ruth and Roth store (for free) https://marketplace.secondlife.com/stores/228512
  18. It's a nice effect, really well done, but in the end it's just old-school mega-sculpts used as sim surrounds. It has been done in a more tasteful and visually appealing way than many of the high-detail surrounds we see, but apart from the designer's skill there is nothing new here. It still has the same problems, and exemplifies them as a use case. It is a fantastic example for @Vir Linden, @Ptolemy Linden and Euclid to look at to further the discussion of "We need to solve the sim-surround problem". I should probably throw all of the following into a Jira and link that to the various Jiras that relate to large scale prims. EDIT: JIRA @ https://jira.secondlife.com/browse/BUG-229551

What we have here is a great example, featured on the Lab's own blog, that relies upon the use of objects that were made in 2006 during a period in which server-side validation was not being performed. The megaprims have long been grumbled about by the Lab as undesirable but tolerated anomalies, with the recognition that there is no viable replacement. The primary driver is often LI, where the prim cap applies, so legacy accounting means that these objects cost just 1LI. However, the example shown is not just cheaper in sculpts, it is not possible without them. Thus modernising sim surrounds has at least the following problems:

    1. To do this as mesh would be implausibly expensive, time consuming and frankly unsightly, with 64m tiles even at low resolution. The cost is far higher: in this example we have four 256x256 sculpts placed on the corners, giving us 12 64m tiles of "offsim" space each. Quick mental math: the approximate LI cost of a triangle at HIGH LOD at 45m radius is 0.06; a sculpt is 2048 triangles, and for the sake of argument we'll use 1/12 of this per mesh, ~170 triangles or ~10LI, making it ~120LI per sculpt, of which there are numerous layers in use in the example. (A worked version of this arithmetic follows at the end of this post.)

    2. Complexity in assembly and use. Due to the linking limitation you cannot link these. You can, however, describe an entire scene in Blender and export it; it will arrive inworld as a collection of unlinked prims. You can then, if you are careful, manoeuvre it into place, but don't lose the selection or you are in trouble. This is exactly how "Hugh" was imported, and I wrote a special tool to allow me to do both the slicing into 64m hunks and the exporting to DAE as a whole scene for import. Not many creators have that option. Realigning the parts is painful, but harder still is ensuring that the seams work properly across LODs...see points 1 and 3.

    3. Unsightly. For all the issues with sculpt vomit that mesh addresses, aligning the seams of meshes that LOD at different points is a landscaping nightmare. A single 256m object does not have this problem; once it loads it is done, and at that scale it will never LOD swap. The 64m prims will, at normal settings, swap to Medium LOD at just 180m.

The fact that these render within draw distance is a side-effect of the fact that they are megaprims: the centres of the volumes are within draw distance while the geometry is significantly outside of it. In fact, the reality of this is that it is ONLY because of their scale that this can happen at all. Prims have to be anchored in the coordinate space of a region; this means that their centre point, or the centre of their linkset parent, must be within the region's bounds. The 56m linking distance limitation thus makes it impossible to achieve the level of overlap attained by the creator using sculpts here.
Putting aside the LI cost, there is no practical way to achieve this effect without sculpts. If we accept that there are valid uses for these "anomalies" then the case for meshes of similar size and complexity should be considered. They should, after all, have the same rendering cost as the comparable sculpts, which from the SL documentation is broadly equivalent to a hollow torus prim. Using sculpts and megaprims is not, however, a walk in the park either, especially for newcomers. You are limited to the specific range of sizes and shapes that were made before the stable door was bolted shut. In practice this means that you have to have access to the legacy objects, which has got more and more difficult over time (the loss of the SALT HUD was a sad day) and is something that newcomers would frequently struggle with. It also means that you frequently have to use the closest match rather than an ideal match. It is not as if content creation in SL needs additional learning hurdles to be thrown down. It really doesn't. Proposal: A "simple" solution here is to allow meshes to have up to 2048 triangles within 1LI cost irrespective of scale. This would put such meshes on an equal footing with the legacy sculpts. What this example does not do is move forward any discussion of truly long-range, low-resolution horizon rendering. It demonstrates how nice things like that might be; it doesn't realistically offer insights into achieving them. All that aside, it is, undoubtedly, gorgeously put together. Thanks @animats for the heads up.
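The worked arithmetic referenced above, as a quick sketch (the 0.06 LI-per-triangle figure is the rough approximation quoted in the post, not an exact streaming-cost formula):

```python
# Rough LI estimate for replacing one 1 LI 256m sculpt surround with
# twelve 64m mesh tiles, using the approximate figures from the post.
li_per_triangle_high_lod = 0.06   # approx LI per triangle at 45m radius
sculpt_triangles = 2048           # a sculpt is effectively 2048 triangles
tiles_per_sculpt = 12             # 64m tiles of "offsim" space per sculpt

triangles_per_tile = sculpt_triangles / tiles_per_sculpt      # ~170
li_per_tile = triangles_per_tile * li_per_triangle_high_lod   # ~10 LI
li_per_sculpt_replaced = li_per_tile * tiles_per_sculpt       # ~120 LI

print(round(triangles_per_tile), round(li_per_tile), round(li_per_sculpt_replaced))
# -> 171 10 123, i.e. roughly 120 LI to replace a single 1 LI sculpt
```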
  19. Silly question, but the texture mapping isn't set to planar for some bizarre reason, is it? What does it look like in the uploader if you toggle on the UV overlay? If you are using FS then that is here. The latest Lab viewer has incorporated my changes along with their own, so you should have that in the SL viewer too.
  20. While this does not surprise me in the least, I can agree that it is annoying in the sense that you cannot reliably know that the way that you have assembled/disassembled a design is the best and most efficient. I'll explain why I believe the discrepancy is arising and hopefully from that you can see that it is to a large extent unknowable in advance and thus a frustration we live with. In the general scheme of "my split thing costs less/more than my whole thing, why?" type discussions, there are generally a couple of things at play, but mostly, in this case, I suspect it is zip compression. There's no magic in mesh uploading and LI, no trickery, just some rather "simple" and "generalised" algorithms, which mean that while results across the full estate are more or less predictable, at the individual level there are corner cases and nuances.

The streaming cost is based on the amount of compressed data, and thus my expectation here is that splitting them is giving a more compressible result. The compression used is zip, which, without trying to go too deep into tech, looks for patterns in the data and replaces those patterns that occur most often with a "short" code. The more common the pattern the shorter the short codes (the most common ideally being stored as a single bit). This is a very poor illustration of the actual zip-deflate algorithm, so for those interested in the real bit-twiddling you can go here. Given the above, the more repetition/commonality in the data the more compressible it is. When you split the object, something about the data is making it more compressible; that could be any number of things, such as faces pointing in the same direction (making the normals compress better) or being more consistently planar (making all the X or Y coordinates compress better). It is mostly not worth second-guessing. I posted, once upon a time, about the oddity that can arise from using the "use SLM" debug setting, whereby you get a better compression. It is not guaranteed, and 'use SLM' has other annoying side effects that can make it more hassle than it's worth, but the gains are (as best as I can tell from limited actual investigation) the result of a reordering of the mesh data when you restore from SLM versus the "natural" ordering that occurs when loaded from Collada. NOTE: It can sometimes be worse! There is, potentially, an optimal sorting for any given mesh that makes it more compressible, but it's not one-size-fits-all.

EDIT - I forgot an important contributor to this saving...rounding "errors". When we mix things up in the compression we might make a 0.1 or 0.2 saving overall, but this then results in a full LI reduction. Why? In this case we were previously at or around 3.6LI and by fiddling around at the edges we made it 3.49LI; the magic of rounding then steps in and our 3.49 is 3LI instead of our 3.6 which was 4LI. This can be illustrated by linking two of the 3.6LI objects that were 4LI each and noting that the combined linkset is 7LI (3.6+3.6 becomes 7.2); conversely the 3.49 duplicated becomes 3.49+3.49 and will also be 7LI. (See the rounding sketch at the end of this post.) END EDIT

The other thing (and something I think @ChinRey circumvented) was the different sizes. It is clear that when you split a large mesh into smaller parts there are very large savings to be made. That is not the case here, but it is part of the larger story that people will observe. To explain this, consider a house, with windows and a door, and on the door is a very ornate brass doorknocker in the shape of a lion.
The door knocker is 8k verts; the entire rest of the house is, let's say, another 8k, so as a single object we have 16k verts @ 5m x 5m x 5m (I like my houses cubic, clearly). Big objects incur big fees on triangle/vert count and thus this will be a lot more expensive than if instead I split the two items, so I have an 8k 5x5x5 house and an 8k 0.5x0.5x0.5 door knocker. The catch with this (what Rey was avoiding) is that the small door knocker will LOD swap a lot sooner; in my example that is a reasonable expectation and a good trade-off. As I stated above, the equations are simple and consistent, but that is not to say that there are no apparent inconsistencies. Two small bushes will LOD swap far sooner than one large one, and thus the theory holds that a lower cost multiplier is used. If the large object gets a cost of X per triangle and the small objects get Y per triangle, there is nothing that enforces that n*X will be more than 2n*Y. If that works for you and you can handle the LOD swap distance then go for it. I can look at the specifics, but my guess here is that compressed_data(A) + compressed_data(B) < compressed_data(A+B). Keep in mind too that the streaming cost is for ALL the LODs, so the generated LODs (if you use them) for a combined mesh may be less compressible than the generated LODs of the individuals. Hope this answers some questions. It may be that something else, something I've neglected here, is responsible, but that's my opening gambit.
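The rounding sketch referred to above: a minimal illustration, assuming simple round-to-nearest on the linkset total, which matches the numbers in the post:

```python
# Displayed Land Impact is the rounded streaming cost, and the rounding
# happens on the linkset total rather than per object.
def displayed_li(raw_cost):
    return round(raw_cost)  # round-to-nearest, assumed for illustration

print(displayed_li(3.6), displayed_li(3.49))   # 4 3  (as single objects)
print(displayed_li(3.6 + 3.6))                 # 7.2  -> 7 LI linked
print(displayed_li(3.49 + 3.49))               # 6.98 -> 7 LI linked
```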
  21. As noted elsewhere, lag compared with some_random_game is not really an addressable problem. It might be useful here to list just a few of the reasons why such comparisons are not likely to yield sensible results. SL is quite different in performance constraints to a commercial game for many reasons, and thus what you can achieve in those games is not achievable in SL.

1) A commercial game for the most part has predictable rendering; the creators will have measured each scene or level and identified rendering issues that cause jitter in the frame rate.

2) The commercial game has content made by professional artists who care about, and perhaps more importantly are rewarded by, the performance-quality of their assets. SL content is produced by artists who generally care very little about performance, as SL shoppers do not reward creators for their diligence but instead shop based on some perception of visual quality.

3) In most games, the assets that are drawn are not having to be streamed from a remote server; they are part of the installation on your hard drive.

4) In a game, things are pretty much under the control of the producers; they know exactly when a blood-crazed zombie is going to appear, and thus it can be made ready. In SL, we literally do not know from frame to frame if a new object we've never seen before is going to appear, or if existing stationary objects are going to start moving around, growing, shrinking, spinning, changing colour....

What these all mean is that the viewer is not able to be a typical game engine; a large swathe of the optimisations commonly employed in games to increase and maintain framerates are simply not viable in a dynamic environment like ours. So what lag/performance comparisons do make sense? The correct comparisons are viewer to viewer.

* One viewer versus another, for example SL viewer versus Firestorm, or Firestorm versus Alchemy. Different viewers have different motivators and different user bases. Firestorm (which I work on, just to give full disclosure here 🙂 ) is not likely to be the fastest. We carry a large proportion of SL users and these users have a very broad range of hardware and network capabilities and a very broad set of requirements. We are a feature-rich, somewhat portly viewer. Meanwhile, Alchemy prides itself on performance, but may well not be able to provide all the configuration bells and whistles we give you. Pick your viewer based upon your needs.

* One viewer release versus another. With each new release of the viewers, new features are added. Sometimes these can improve performance; sometimes they will increase the demands on your hardware and may well be slower. Thus comparing lag between version numbers of the same viewer is a valid measurement too. A good example here is to compare the performance of EEP and non-EEP viewer releases. We have advised most people not to use the current preview because we feel that the performance, amongst other things, is simply not acceptable. There are a number of issues that are being actively worked upon by the Lab to improve some of the newly introduced overheads, and once those flow downstream we will review our stance and the release status. (The EEP performance issue is an interesting example: some people (the minority it seems) are not seeing the degradation that others are. I see about 20% lower frame rates on EEP. Some others do not, and that is very likely to do with where in the maze of viewer activity the bottleneck is occurring for that individual.)
  22. To my knowledge it is still an average, though not a straight mean; I have a notion that it removes all outliers from any calculation. The fact that Black Dragon reports completely different numbers to other viewers but does not skew the overall figures is an indicator of this. To be honest though, I could have completely dreamed that up 🙂. @Vir Linden is best placed to talk about this though, as he has worked most closely with the calculations lately and might also be able to explain a bit more about the future direction. @Wulfie Reanimator's link is good and appears mostly correct for viewers other than BD. The key addition that is not on that page is a base 1000 addition for Animesh, IIRC.
  23. Your complexity is calculated by the viewer using a well-defined, if pretty useless, algorithm. It can be found in the code (for those who can read such things) starting from the llvoavatar.cpp function calculateUpdateRenderComplexity(). The complexity is reported to the Lab and, as Wulfie says, they use an average on their side. In future, the expectation is that this calculation will move mostly server-side as part of the bake service. This is one of the changes in the ArcTan project; I don't know any more details than that though.
  24. If they are not ones that we can fix, then if and when LL fix them they'll be merged into the "next" release. The reason we have a preview is because we are not happy with the state of EEP and don't consider it fit for general release; there are far too many bugs. Those who want to try it can, and I am sure some will find it suits them fine, while everyone else will be able to wait until the most critical bugs are addressed (hopefully). The reality is that there are a lot of bugs still outstanding, from minor to critical, and once we get more users exposed to it that count will no doubt increase.