Everything posted by Beq Janus

  1. Sadly not. This is managed server-side and cannot be specified by the viewer. As others have noted, you can achieve some of this through scripts, and Liz (@polysail) and I have been prodding LL to fix a server-side problem so that the full names of components of linksets can be retained. In conjunction with this change, you will be able to export a list of positions and rotations (and any other attributes) from Blender, Maya etc. and paste those into a notecard to adjust positions of complex builds. See https://jira.secondlife.com/browse/BUG-202864 for more details on this.
  2. Yes, these apply across platforms. I've also fixed the block that prevents the OpenSim/non-Havok build from using Analyze. It will now correctly allow the non-Havok hull decomposition which, while not as good as the Havok one, is useful to have.
  3. Technically yes, but that would be extremely poor practice in most circumstances (and, to be honest, I have not tried that extreme case). The typical use case is impostering, where you flatten an object for the lowest LOD (or others when applicable), and the big advantage is that we no longer have to have a placeholder triangle hidden away. https://gyazo.com/88703cd123c6aa239a65953a82c8f029 This clip shows an adjustable monkey wrench that falls back to a flat imposter (it is actually adjustable, hence the two sets of planes). As @arton Rotaru said, no, the HIGH LOD must hold the super set of all textures. You can do your zooming-out trick with the cube though: use a single material on the high LOD mesh, another on the medium, another on the low, etc.
  4. Forcing users to jump through hoops in order to satisfy a special condition of the mesh format is lame, lazy and unprofessional, though I doubt it was ever intentionally left that way. Sadly, however, it has been that way for as long as we've been making meshes, and, as it turns out, unnecessarily so. Ironically, the mechanism to fix this has also existed since Mesh day 1 but was buried deep in some horribly convoluted code. When GLOD is used to auto-LOD it can remove an entire face; to deal with this, the uploader creates an empty triangle which, at the point of upload, gets translated into a special "NoGeometry" marker. The "NoGeometry" marker is also respected when the mesh is decoded later. There were a number of bugs layered on top of each other that stopped this being used for user-supplied LOD models. I rewrote the parser section that was at the heart of this tangle of knots and was then able to reconstruct it so that user LODs also support the "NoGeometry" marker. The result is perfectly valid meshes that do not have to waste very valuable (and expensive) triangles in the lower LODs and, hopefully, less confusion for new creators.
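As a rough illustration of the "NoGeometry" idea described above — storing a marker instead of a wasted placeholder triangle when a material face vanishes in a lower LOD. The names and data layout here are invented for the sketch and bear no relation to the real asset format:

```python
# Hypothetical sketch of the "NoGeometry" marker. Instead of a degenerate
# placeholder triangle for a material face with no geometry at this LOD,
# the encoder emits a marker that the decoder recognises later.

NO_GEOMETRY = {"NoGeometry": True}  # marker stored in place of mesh data

def encode_lod_face(triangles):
    """Encode one material face of a LOD model.

    An empty face is stored as the NoGeometry marker rather than a
    placeholder triangle, so no triangles are wasted.
    """
    if not triangles:
        return NO_GEOMETRY
    return {"triangle_count": len(triangles), "triangles": triangles}

def decode_lod_face(face):
    """Decode a face; a NoGeometry marker simply yields no triangles."""
    if face.get("NoGeometry"):
        return []
    return face["triangles"]

# A high LOD with two material faces; the low LOD drops the second entirely.
high_lod = [[(0, 1, 2), (2, 3, 0)], [(4, 5, 6)]]
low_lod = [[(0, 1, 2)], []]  # second face has no geometry at this LOD

encoded = [encode_lod_face(f) for f in low_lod]
print(encoded[1])                   # {'NoGeometry': True}
print(decode_lod_face(encoded[1]))  # []
```

The point of the sketch is only that the empty case is represented explicitly, so the lower LOD remains a valid mesh without spending any triangles on it.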
  5. No, that error is reported by the viewer, so nothing has happened on the server side. That error is unfortunately a pain, and I have made a number of changes to how it works for the next release of FS to remove some of the reasons it appears. There are a few different reasons: 1) the "mostly legitimate" reason, which is that one of your LOD models does not have the same materials as the high LOD (this is a thing I have changed next release to make it easier) 2) the High LOD is invalid or missing - it does not sound like it is this one for you 3) the model names in the lower LODs do not match the High LOD. Turn on detailed logging in the log tab and try again; it will give you a lot of info which might help narrow it down (it might add to your confusion too, mind). This blog http://beqsother.blogspot.com/2021/06/summarising-next-improvements-to.html and the one before it explain the forthcoming changes, not that they will help you right now. Post the output from the detailed logging and that might help us help you.
  6. It's not "fixable" in the sense that, as part of the upload process, all mesh is squished into a cube that is 1x1x1. Once it gets rezzed in Second Life, the object has a scale and the mesh is expanded to fill that "scale"; at that point (depending on the download) the only LOD available might be your lower LOD, so it always has to work with what it has. The actual upload could be adjusted to "do the right thing" but there are a few changes needed to get us to that place. In theory, the uploader could take the scale of the LOD object into account when it does the "squishing", so if a LOD is 50% the width of the reference at the time of upload then it would be encoded to be 1x0.5x1, and when it gets inworld that would still work. But, at present, this would still have significant shortcomings because the uploader requires no spatial relationship between LODs (that is to say, they do not have to be in the same location across the different collada files), so the behaviour would be entirely dependent on whether the geometry centres matched (which is highly unlikely). This is a problem that could be addressed; it needs a long overdue project that supports the uploading of the pivot point (https://jira.secondlife.com/browse/BUG-37617) and, more generally, the individual origins of models in a scene. Once we have that, the ability to retain the scale of LODs relative to the reference (High LOD) could be implemented. Raising a feature request Jira at https://jira.secondlife.com would be a great way to get this on the radar. Note: the origins issue, as per my comments on the Jira above, needs both viewer and server-side work to fix the physics alignment problems. After that, the rest "should" be viewer work.
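The "squish into a unit cube" step can be illustrated with a minimal sketch. This is not the actual uploader code; the per-axis normalisation and the re-applied scale are the point:

```python
# Illustrative sketch of unit-cube normalisation: vertices are squashed into
# a 1x1x1 box and the original extent is kept as the object's scale, to be
# re-applied when the object is rezzed in-world.

def normalise_to_unit_cube(verts):
    """Return (normalised_verts, scale): each axis mapped to the 0..1 range."""
    mins = [min(v[i] for v in verts) for i in range(3)]
    maxs = [max(v[i] for v in verts) for i in range(3)]
    scale = [maxs[i] - mins[i] for i in range(3)]
    norm = [tuple((v[i] - mins[i]) / scale[i] if scale[i] else 0.0
                  for i in range(3)) for v in verts]
    return norm, scale

# Two opposite corners of a 2m x 1m x 2m box:
high = [(0, 0, 0), (2, 1, 2)]
norm, scale = normalise_to_unit_cube(high)
print(scale)  # [2, 1, 2] -- re-applied as the prim's scale when rezzed
print(norm)   # [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
```

Because every LOD is normalised independently like this, a LOD modelled at half the width still gets stretched back out to the full bounding box in-world — exactly the "expand to fill the scale" behaviour described above.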
  7. All LODs expand to have the same bounding box. In this case you can pad this out by placing either a small triangle or (I think) a single vertex at the two opposite corners of the bounding box. This will stop the LOD stretching. This problem is common when people make a physics shape too.
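A minimal sketch of the padding trick (helper names are invented for illustration): compute the reference bounding box and add geometry at its two opposite corners, so the lower LOD's box matches the high LOD's and nothing gets stretched:

```python
# Illustrative sketch: pad a lower LOD so its bounding box matches the
# reference (high LOD) bounding box, preventing the stretch described above.

def bounding_box(verts):
    mins = tuple(min(v[i] for v in verts) for i in range(3))
    maxs = tuple(max(v[i] for v in verts) for i in range(3))
    return mins, maxs

def pad_to_reference(lod_verts, ref_verts):
    """Append the reference box's min and max corners to the LOD geometry."""
    mins, maxs = bounding_box(ref_verts)
    return lod_verts + [mins, maxs]

high = [(-2.0, -1.0, 0.0), (2.0, 1.0, 3.0), (0.0, 0.0, 1.5)]
low = [(-0.5, -0.5, 1.0), (0.5, 0.5, 1.0)]  # much smaller than the high LOD

padded = pad_to_reference(low, high)
print(bounding_box(padded) == bounding_box(high))  # True -- boxes now match
```

In practice you would add a tiny triangle (or single vertex, where accepted) at each of those two corners in your modelling tool before export.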
  8. I just posted a quick summary of a few features for builders/creators that are currently going through the QA process in the hope that they'll land in the next release of FS. The blog post can be read here: http://beqsother.blogspot.com/2021/06/summarising-next-improvements-to.html The very brief summary is: 1) You can now upload proper subsets of texture faces in your LODs without silly errors or wasting valuable triangles on placeholders. 2) The error reporting of mismatched materials should be more accurately tied to the real cause of the failure. 3) Two new default physics shapes can be selected from the drop-down on the physics tab, and a third "user defined" entry can be specified if you really need one. 4) You can now change the ambient lighting in the preview window and actually see things in the shadows! As always the release date is "soon", but honestly it should be sooner rather than later; our hardworking QA group are hammering at it as I type.
  9. Funny how this thread gets necro'd when I am about to post about this very subject. I have (I hope) some very good news: as of the next Firestorm release there should be a significant reduction in these heinous misreported errors. The "not a subset" error has traditionally appeared for a number of reasons, only one of which was actually related to the materials of the model. In some circumstances it would report as this when in fact it was simply unable to match the LOD model in the file to a valid mesh. Badly named objects and mismatched meshes between LOD models could all result in the same error. I recently wrote a blog post discussing at length how this has been completely rewritten to replace those errors with something at least a little more useful. When the HIGH LOD model cannot be correctly determined (which I suspect was the case here) it should report "Model <name of model> has no High Lod (LOD3)". If there is a name mismatch or other issue typically resulting from a mismatched LOD file, it should report "Model <name of model in DAE> was not used - mismatching lod models." There are always other cases, and issues with name parsing, bad characters in names etc. can cause all kinds of weirdness. So there remain less informative "parsing error" messages which sadly will not always tell you exactly why a DAE load failed, but if you have models that get a poor error report please feel free to raise a Jira for me (https://jira.firestormviewer.org) and attach the DAE files, and I will review it and see whether anything can be done in future. The story-style explanation can be found in this post http://beqsother.blogspot.com/2021/06/taming-mesh-uploader-improved-workflow.html and a shorter summary post in the latest blog (see separate thread in this forum).
  10. What version of Firestorm is that? Please grab the latest if it is not. There was a bug in the release from about 12 months ago that would cause such a hang in certain cases based on your inventory, IIRC. @Whirly Fizzle any other thoughts?
  11. That's very peculiar... My belief is that the slplugin issues are mostly on shutdown (of the plugin), it is dying badly instead of just exiting... As to why script editing triggers that... hmm do you have the "help" browser pages open by chance?
  12. Agreed, this is an annoying, constant irritation. The slplugin we use is mostly the same as the one in the LL viewer. A lot of the crashes are deep in the 3rd party libraries that are used. If you have a way to reliably trigger the crashes that would be useful; it seems to happen most often when you change region or TP, but you mentioned building/scripting. If you can find a way to force it to happen I will have a better chance of fixing it, or at least seeing what causes it.
  13. This is a different use case to many that get discussed. When people say that you cannot get a person's IP address from inside SL, that is a correct statement. The only way to associate users by IP is to go to an external service, where you as the user have bypassed the anonymity by visiting a website outside of SL. The problem here is that for many people this is a subtle distinction and beyond their understanding.
  14. Coming very late to this. But to answer the question asked of me directly by @Candide LeMay (Henri has answered the original question about "project interesting"): the changes added to Firestorm prioritise textures loaded from inventory specifically. It has nothing (sadly) to do with what is right in front of you, but rather "what you have in your handbag". We prioritise images that you are intentionally pulling out of your handbag (inventory), so clicking textures from your inventory, photos etc. will be prioritised. This includes things like outfit thumbnails in the outfit gallery display, which used to be so slow as to be unusable but is now tolerable for the most part. This solution makes sense because it is a direct response to an action you have taken, double clicking on the texture or opening the outfits floater. It has a separate priority queue for these because what used to happen was that, in heavy texture regions (and these days that can be pretty much anywhere), the "on-demand" textures like these would be drowned in the flood of trees and grass, and of course the pointlessly oversized textures slapped on nipple rings and brass aglets on your bootlaces.
  15. Just to clarify something regarding the group "inefficiencies", to be polite about it. The specific cases I referred to in the meeting that @animats mentioned were to do with the manner in which group notices are handled. This was especially bad during December and January of this year due to an error caused by the uplift project that meant that old notices were not being cleared down. While group chat is related to this, it is not the same set of problems. As @NiranV Dean says, the issue is largely server side with a helping of poor protocol design and (I don't doubt) a bunch of places where the viewer could improve. The server side is the biggest part of this though. One of the issues ties very well into what @Gabriele Graves says. Groups have been used and abused for too many reasons. We have groups that are used predominantly for messaging (consider Firestorm Support English, or Builders Brewery, for example) where we have tens of thousands of users, most of whom are silent most of the time. You have massive merchant groups such as Blueberry where people don't chat; they are there to get notices and updates. Then we have land groups that allow us access to the places we live. All of this overloading of the group system is a massive problem. The group chat problems are one side of it; consider your TP failures... How much of the time it takes to complete a TP is caused by the receiving server having to check through your list of groups and roles to determine whether you can enter the land you are moving to? The group system has been abused all of its life and it is very hard to roll back from that abuse, I fear.
  16. As noted by others, 8GB of RAM is going to be a problem, especially if you are running anything else. One of the reasons that the 32-bit viewer has low RAM use is that we force the max texture size to be 512; you can choose to do this in the 64-bit viewer too. 32-bit viewers by definition cannot use more than 4GB of RAM (the limit of a 32-bit address space). Your GPU is listed as a 1050 Ti; as noted by @KjartanEno, if you are using the advanced VRAM controls then the viewer will try to use more of the GPU RAM. This is a feature long requested by many users, but it puts a lot more pressure on your system RAM too, because of the way that OpenGL and the viewer work. Switching back to the basic mode and reducing the amount of texture memory will help release system RAM. Keeping your draw distance to a reasonable level will help reduce the number of textures being pulled into your RAM too. Remove HUDs that you do not need to have immediate access to. People frequently walk around wearing the HUDs that are used to change the alpha overlays of their mesh body; these are generally a massive hog of texture RAM, and removing them will further ease the pressure. The thing to keep in mind is that every texture that has to get to your graphics card has to go through your system RAM to get there. The fewer and smaller the textures that you try to draw on screen, the less your viewer will have to load. Of course, as a Firestorm person my advice is FS-centric (though some of it extends to other viewers as well; removing HUDs is just a good habit to get into), but do try other viewers too; the TPV directory is the safest place to find these. The great advantage of having TPVs is the choice it gives us. For anyone on less than 16GB RAM, I highly recommend trying the 512 limit; far too many creators slap a 1024 pixel texture on something without justification. 
In the example you give of flying a plane, I would expect you to barely notice the quality change as most things are far away anyway, but as textures are limited to 512x512, all of those 1024x1024 images will be restricted to a quarter of their former size.
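The "quarter of their former size" arithmetic is easy to check: an uncompressed 32-bit texture's memory footprint grows with the square of its edge length (mipmaps add roughly another third on top, ignored in this sketch):

```python
# Rough memory arithmetic behind the 512 texture cap: halving the edge length
# of a square texture quarters its pixel count, and so its memory footprint.

def texture_bytes(edge, bytes_per_pixel=4):
    """Uncompressed RGBA memory for a square texture of the given edge size."""
    return edge * edge * bytes_per_pixel

full = texture_bytes(1024)   # 4,194,304 bytes (~4 MiB)
capped = texture_bytes(512)  # 1,048,576 bytes (~1 MiB)
print(capped / full)         # 0.25
```

So every 1024x1024 texture that gets capped saves roughly 3 MiB of RAM along the whole path from disk to GPU, which adds up quickly in a texture-heavy scene.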
  17. The change would have been covered in the referenced notes from the Lab rather than our own. I implemented the change for FS, but when I came to commit it, it had already been added (slightly differently) by the Lab, so I discarded mine and went with theirs 🙂 I'm still inclined to revisit this and support the file name suffixes _MED, _LOW and _LOWEST, mostly because the backwards LOD naming of SL drives me nuts.
  18. Hmm not me, though I don't doubt there are similar stories to mine, it is not an unusual request. This was just a few months ago. Liz and I were thinking of a collaborative place where we might dust off old items we'd built and maybe new stuff we worked on. For now that is back on hold as we can't find a sensible way to make it work within the rules.
  19. As @Coffee Pancake says, if that reproduces on the default viewer then they NEED to have a Jira; it will not get fixed otherwise. There are a number of additional shader/pipeline changes post-EEP; the latest batch will drop in LMR5, which is currently an RC and will be landing "soon". The LL viewer release schedule got derailed a little by the texture cache update rollback. I know that LMR5 has a number of additional fixes that we did not include in the latest update (because we don't include stuff until LL releases it, and we did not want to wait to get the latest version out), and that at least some of them relate to the sky. Visually, that looks like the haze clipping at long distance. @Whirly Fizzle, is this an issue that you are aware of?
  20. I expressly raised a ticket to request permission to share a dedicated alt between two individuals for the purpose of maintaining a unified branding presence in a store. I contacted the lab because the TOS expressly states "without prior permission". I requested that permission and was told I could not have it! So even if you want to do it, and you ask nicely, and have a fully legitimate reason for doing so, it still gets declined. They did say that it was my risk, or something to that effect. But when I asked if they could confirm that this would (assuming no abuse took place) not result in any of the accounts being banned, I got no guarantee of that. Needless to say, the plans went no further.
  21. Nope, it exists in the Linden viewer. We never added it. We increased the default at some point in the past to deal with the poor experience of sculpts.
I don't understand this assertion. Land impact has nothing to do with LOD numerically and is not affected by it. The land impact is based upon a largely irrelevant evaluation of streaming cost, derived in large part from the fact that back in the day there was a literal cost to the region to "send" the mesh or other assets. This has not been true for many years, as things are sourced from the CDN and typically reside on an Akamai edge server far closer to you than the region ever was. Moreover, the bandwidth available to the majority of users is radically different to what it was then too. If LI had a measure of true rendering cost in it then it would be a different beast again. LI captures some essence of that notion of rendering cost in the fact that the more triangles in an object, the larger the data that has to be rendered, but it takes no account of the rigging overhead, the texture density or any of the other factors.
The argument back in the day was that you didn't want to pull all the data for all the objects in a region because it was hogging both your bandwidth and that of the server. In conjunction with LOD you'd only pull parts of the mesh data into cache on demand. A similar scheme does the same (equally poorly) for textures. As noted above, that network concern is no longer true to the same extent and is somewhat moot. LOD is stating what you as a viewer are willing and able to render.
The LOD factor exists and gets fudged around with in all kinds of places. With prims it was never one; it gets adjusted based on the proportional volume of the primitive relative to the bounding box, IIRC. It is too late at night to go in for code archaeology, but as I recall the LOD factor of a sphere is higher (or is it lower?) than that of a cube, because a cube of "radius" 2m is visibly smaller than a sphere of radius 2m, and the adjustment is "meant" to compensate so that they LOD at the visually appropriate point rather than the dimensionally accurate one.
If back in the day a typical screen had, say, 800 vertical pixels and now people are frequently operating at 4K, with most of us on or around 1200, then clearly we have more visual real estate. An object that was 10% of the screen high would have been 80 pixels and is now 120+. The screen, as you say, can also be physically larger, making low resolution models seem all the more coarse (not a problem I suffer, mind you; like most of us, I am not afflicted with the burden of a 50 inch 4K screen 🙂), but it is rendering resolution, reducing overdraw etc. that is of most value. LOD is about eliminating geometry; my point is that I can handle a lot more geometry now than I could then. And I don't believe for a second that the LOD algorithm ever came close to culling geometry as it approached the density limits of screen space; in fact that is one of its major flaws. It has no concept of screen space whatsoever. It is a rough approximation of how much virtual view space an object might take up, with no clue about your rendering resolution.
But the real point is that this is all blown out of the water by rigged mesh cockups. One of the major bottlenecks in mesh processing is managing the transformations from the unit space in which it is packed in the assets into world space, and ultimately view space, to be drawn. A lot of that maths happens on the CPU. For static mesh the inflation and rotation are pretty simple; for rigged mesh it is compounded by the chain of bones to which it is rigged, thus rigged mesh has a considerably higher overhead. Add to that the fact that mesh clothing is frequently very dense, both to facilitate movement but also due to poor optimisation.
Yes, we can all point at poorly made, excessively dense, overly bevelled static mesh, but it has a lower overhead than a comparable rigged mesh and yet there is a) no accountability for it whatsoever in terms of the revered "land impact" and b) rigged mesh barely ever decays beyond the medium LOD due to the aforementioned bugs. If Land Impact is meant to protect us from some kind of rendering bogey man, it is a very poor choice of weapon.
Maybe so. I don't think LODFactor as such is the problem, at least not at 2 vs 1 or 1.125 or whatever the LL default is. As I say, it has reasons to exist, and I don't generally agree with the view that it is best left at an arbitrary number just because of some handwavy nostalgia. I am all for solid evidence-based arguments, and you know very well that I am very pro anything that pushes content in the right direction. I don't think that this is the demon that needs to be slain, not the first one at least. I do think that the LOD mechanism is flawed and should be screen-space based; this removes some of the need for "factors" to correct between apparent and actual volumes but, more importantly, it respects the actual limits of your rendering based on personal hardware availability.
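The contrast drawn above — a distance-based LOD switch stretched by a "factor" versus a genuinely screen-space scheme — can be sketched as a toy model. The thresholds and formulas here are invented for illustration and are not the viewer's actual LOD equation:

```python
import math

# Toy comparison of two LOD selection strategies. LOD 3 = high, 0 = lowest.

def lod_distance_based(radius, distance, lod_factor, threshold=8.0):
    """Distance-based: the factor simply stretches how far away each
    switch happens; it knows nothing about your screen."""
    ratio = distance / (radius * lod_factor)
    if ratio < threshold:
        return 3
    if ratio < threshold * 2:
        return 2
    if ratio < threshold * 4:
        return 1
    return 0

def lod_screen_space(radius, distance, screen_height_px, fov_deg=60.0):
    """Screen-space: pick the LOD from the object's approximate on-screen
    height in pixels, automatically respecting rendering resolution."""
    angular = 2.0 * math.atan(radius / max(distance, 1e-6))  # angular size
    pixels = angular / math.radians(fov_deg) * screen_height_px
    if pixels > 100:
        return 3
    if pixels > 40:
        return 2
    if pixels > 10:
        return 1
    return 0

# The same 1m object, 100m away, on an 800px-tall screen vs a 4K screen:
print(lod_screen_space(1.0, 100.0, 800))   # fewer pixels: a coarser LOD is fine
print(lod_screen_space(1.0, 100.0, 2160))  # more pixels: a higher LOD is justified
```

The screen-space version needs no user-tweaked "factor": the same object at the same distance legitimately deserves more geometry on a 4K display than on an 800-pixel one, which is exactly the resolution-awareness the post argues the current mechanism lacks.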
  22. TL;DR (cos this turned out long even by my standards of blabbering): I don't disagree that we want to encourage better content. I wander around shopping events checking LODs on items and generally facepalming. The number of pianos I've seen crumple into a mess before you get more than a few metres away is ridiculous, and that is WITH a LOD factor of 2. While I understand the argument, and the reality that anything in FS is kind of the de facto standard by weight of numbers, I don't think pointing a finger at a historical choice made by FS is either correct or the right answer to fixing it. The Linden viewer default LOD factor is 1.25; in FS it is 2. The upper bound on the Linden viewer is currently unlimited, I believe, but it was FS that took the lead in making it harder for the ill-advised practice of creators telling their users to change settings. When we clamped the LOD factor to 4 we did so unilaterally to push the agenda forward. Others followed suit. It was around the same time (I think perhaps a bit earlier) that I added the ability to view in the edit tools exactly what the LODs of a mesh were composed of, exposing poor practice and working in conjunction with the other inspection tools that show texture density etc. I think we still had some whining remark about a "nanny state" in the comments on our most recent release blog, from someone still sore and bellyaching over that change 🙂 To get a taste of the kind of response, and how some applaud it and some very much do not, you can read the replies on my blog from that older release: https://beqsother.blogspot.com/2018/01/for-lods-sake-stop.html The assertion that the FS default (2) is somehow "wrong" and should be reverted requires a lot more justification than I have seen in this thread. I am not going to argue that it is right that we have 2, or contest the point, but I won't simply accept that 1.25 or 1 or any other magic number is right either. 
I would actually argue that the LL default (1.25) is not right at all; it was based on typical screen resolution and rendering capability in 2010 (if I am being generous) and is a leftover from earlier times. The real point here is that the LOD factor is simply one arbitrary input into an arcane, outdated and equally arbitrary calculation that is demonstrably flawed. I think this thread is chasing the wrong beast entirely. Yes, we need to reduce complexity in the HIGH LOD, and yes, yes, yes, we need to encourage more and better lower LODs, but as much as some might like to point the finger at our default for a setting, that is not really a correct apportionment of responsibility. The real problem lies partly in the unrepresentative cost imposed by the land impact algorithm, which penalises the use of lower LOD geometry far more harshly than it should and in turn prevents people using it, and partly in the layer upon layer of historic bugs that all viewers perpetuate (see later). The most extreme example of inappropriate cost is the Lowest LOD (aka the imposter): anything that uses more than a tiny handful of triangles in that model will be hit with an increase in LI disproportionate to the load that it causes. For at least 4 years now we have been pushing to get the land impact calculation reviewed to better reflect this and allow creators to take advantage of less costly lower LOD geometry. The way that I and others have proposed is to allow a "free" quota at each LOD that does not incur anything above the base LI cost. Let's say we allowed 200 triangles "free of charge" in the lowest LOD; we could then actually block-model a representation of the majority of use cases without incurring a significant penalty. 
On the other end of the scale you need the cost of abusing this to go close to asymptotic as it strays above a "reasonable" limit. This curve should allow people to push beyond the basic allocations, accepting the cost of their choices, but with a deterrent against straying into the crazy spaces of just reusing the higher LODs. The proposal to define "reasonable" complexity in terms relative to the LOD above it is a false design directive (as indicated by "Target tris will be expected to be within +/-5%"). It rewards creators who push the HIGH LOD to its limits and punishes those who follow good design practices such as imposter rendering for flat surfaces. Any scheme that proposes that my Medium LOD MUST be within +/- X percent of some fraction of the LOD above is encouraging mediocrity. Consider, for example, a model of a small submersible, a bathyscaphe perhaps: it has a nicely modelled interior and portholes through which that interior is seen by anyone peering in (high LOD) or sat inside. In the medium LOD, you are far enough from the object that modelling any of the interior is a waste, and so you remove it entirely. The remainder of the object is basically just a metal sphere now, and the LOD is a tiny fraction of its parent, and far better than anything autogenerated. This applies to all kinds of things that have interiors or which are seen from predominantly one perspective. A similar argument applies to all kinds of situations when building houses/stores, walls and large decor (trees are a classic example where low and lowest LODs are perfect for impostering). As soon as you slip to the Medium LOD, anything interior or high detail is no longer relevant, and anything that is ostensibly flat (windows, doors, paintings, curtains, or anything that is too small to warrant modelling) simply gets "impostered" and reduced to a billboard. 
This is good modelling practice and can be observed in the products of many of our more considerate creators; my go-to exemplar for great quality modelling is Faust Steamer of Contraption, whose buildings are an example to us all. It is entirely possible to force the LOD default back to 1.25. If the Lab felt the need to force this we would of course comply (it would be easier for us when dictated to do so), but without a well supported and qualified argument for the benefits of doing this I don't see that we would want to go down that path on our own. In moving the default you are just compelling the majority of people, for whom the current default has been the de facto standard forever, to change it after every update (and we'd reignite the spread of misinformation by ill-informed creators who demand that you see their product in all its glory by hacking away at debug settings), but more to the point, what does it achieve? If you revert to a LOD factor of 1.25 (the Lab's default), then what actually happens? Objects decay to lower LODs sooner, meaning a little less geometry to render; this in turn means that the importance of those LODs increases, as is the objective of the "proposal". The problem is that the underlying calculation of Land Impact, based on the flawed LOD-based equation, charges excessively for triangles in the LOW and LOWEST LODs in particular. It has long been the case that you cannot represent an asymmetrical model cost-effectively because the land impact of imposter (AKA lowest LOD) triangles is too high. By reducing the LOD factor we start to distribute display into the lower LODs; this means more people will see the lower quality models and creators will need to focus on those, which will drive up the land impact. And what impact on scene rendering? Very little, I suspect (see later). So we have lower quality being seen by more people, and higher land impact for the creators who care about this. Who has won here? 
What are the other side-effects of this? Those of us who try to create content that performs well and looks reasonable will typically exploit materials, taking a high poly model and "baking it down" onto a lower poly model with a normal map. The use of materials is a lower rendering overhead solution to get more visual detail into a model. But here's the thing... I've lost count of the number of creators who have told me that they MUST model every rivet and button and tooth in a zipper, because their customers don't use ALM (i.e. have materials enabled). This can be a downward spiral: person A disables ALM because they go faster without it, so the creators add more geometry to compensate, making others slower. The problem here is that many (and by general consensus with our support teams and the feedback I get from others, that "many" appears to be somewhere around 40-50% of us) have ALM disabled, but not all do it for the same, or even the right, reasons. ALM increases (threefold, arguably) the amount of texture data you are using: you go from a simple colour image to the colour image plus a normal map plus a specular map. If your personal viewer bottleneck is network, or disk IO, or RAM, then this is probably going to be bad for you. For many people, though, those are not the bottleneck (as noted elsewhere, CPU cycles are the most common resource issue), and while more textures does mean more CPU it is arguably lower than dealing with lots of rigged mesh. Other users incorrectly equate shadows with ALM and turn off ALM when the big performance hit for them is really just shadows, and disabling those alone would help. I think the fallacy in all this is creators trying to blame excessive detail and poor modelling practice on their customers. At some point we have to accept that users on lower power machines are going to get a lower quality experience; nobody would be surprised at this. 
You don't expect to get in a Ford Fiesta and drive at 180 mph, so why should a user on an ancient laptop realistically expect high quality performance or graphical fidelity? I am not saying throw them under the bus, far from it. But using their low-end hardware and lack of materials as an excuse for bloated high detail models is not constructive and does not do anyone any favours. Coming back to the issue of LODs and forcing the generation of "good" LODs: what exactly are we aiming to achieve here? What are we asserting is the problem that must be solved? If we are saying that the rendering cost of static mesh content is causing significant drops in frame rate, then I would strongly question that. The rendering of static mesh is a comparatively small proportion of the cost of what I would consider a "typical" scene. Then again, my "typical scene" may not be yours, and as such your mileage will differ. There is no definition of "typical scene" that everyone would agree on. I would suggest that a typical scene is one with a mixture of avatars and some scenery. For others it might be their home, unchanging and crammed with high quality textures in photo frames but few avatars; for others it might be a shopping event, jammed full of avatars and vendor textures. For many (perhaps most) their "typical scene" is observed from a comparatively stationary POV, but for a significant minority it is whizzing around on a bike, car or boat. Each of these situations puts different loads on the viewer and potentially hits different bottlenecks. I digress though; the question is what are we trying to achieve by forcing the display of lower poly models? If your definition of a "typical scene" is like mine and includes a couple of mesh avatars, materials and shadows, then frankly fiddling about with your static mesh rendering is a waste of effort. Rigged mesh rendering is far more expensive, and the LOD system is broken at so many points that it is beyond a joke. 
1) Your items do not LOD based on their own scale but on the scale of the avatar bounding box. https://jira.secondlife.com/browse/BUG-214736

2) They do not obey the LOD decay equation, but instead more than double the distances involved. https://jira.secondlife.com/browse/BUG-40665

3) There is a complete lack of any real penalty for having an excessively complex avatar. You will note that on one of the other related Jiras (https://jira.secondlife.com/browse/BUG-11627) Andrey has stated that this type of thing is accepted as being an issue but is on hold pending "better content policies", which brings us into the scope of this discussion (yay).

In any scene that has shadows enabled and a handful of avatars in those abhorrent alpha-cut mesh bodies (which the majority of users wear in spite of more performant options being available), the cost of rendering those avatars and their shadows quickly dwarfs the cost of all the scenery around them combined. It is certainly possible to create a static scene full of gorgeous trees and buildings that hurts your frame rate, but you have to work pretty hard and have a lot of scenery. An easier way to get the same effect is to invite a small circle of friends round for a chat and watch the fps plummet. The other problem that comes up time and again when we discuss adjusting build parameters and changing land impact is that people start running about with their hair on fire, panicking that all their decades-old crap content is suddenly going to be bulk returned. I would argue that the Land Impact calculation is so old now that even those users on tail-end hardware should be capable of handling more than the perceived limits from back then; we certainly should not be setting future goals in that context. As such I see no reason why any LI costs should go up in a way that forces content to be returned; you simply rescale things. 
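To make points 1 and 2 concrete, here is a rough sketch of radius-and-distance based LOD selection. The formula shape and the constant 4.0 are invented for illustration (the real viewer code differs); the bugs above amount to feeding the wrong radius in, and to switch distances ending up more than double what the decay equation implies:

```python
# Illustrative LOD selection: bigger objects (and higher user LOD factors)
# hold their detailed models out to greater distances. The sqrt(radius)
# shaping and the constant 4.0 are assumptions for this sketch only.

def select_lod(radius_m: float, distance_m: float, lod_factor: float = 1.25) -> int:
    """Return a LOD index: 3 = high detail, 0 = lowest."""
    if distance_m <= 0:
        return 3
    score = (radius_m ** 0.5) * lod_factor * 4.0 / distance_m
    return max(0, min(3, int(score)))

# A 0.5 m trinket sheds detail quickly; a 20 m building holds it far longer.
for d in (2, 10, 50):
    print(f"{d:>2} m: small={select_lod(0.5, d)} large={select_lod(20.0, d)}")

# Bug 1 in miniature: a rigged trinket evaluated with the avatar's ~1 m
# bounding-box radius stays at a higher LOD than its own size warrants.
```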
Don't make the old stuff "invalid" (it has been valid for many years after all); simply make new stuff "more valid and more desirable". I think this is where @Coffee Pancake and I agree completely. The "fast mesh" sales pitch is a way to make things more attractive, but in the end I think (if we can) a new land impact measure would be a far more resilient and motivating solution. It should be feasible for an old, badly made item that costs 1 LI to remain at the same relative cost in a new metric; new items would simply be lower cost if made well. To make that easier to understand: you recalibrate so that everything that costs 1 LI today costs 100 LI tomorrow (but your parcel limit is also 100 times larger, so nothing gets hurt). We've now made space for new, efficient content that costs <100 LI (<1 LI in old money). No ancient content is hurt in the making of these new rules. Of course, this is far easier to write down than it is to actually define and implement; the new rules need to be rigorous, comprehensive and consistent. And comprehensive and consistent, in a world where no two users have the same set of constraints, is near impossible. This latter fact is a large part of why @Vir Linden and his team have pondered over the knotty issue of complexity and the ArcTan project for so long, even when restricted to only the rigged mesh problem. Getting a "right" answer is not a simple thing.
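The recalibration idea is simple arithmetic; the factor of 100 is the example from the post, and the parcel allowance below is a hypothetical number chosen only for the sketch:

```python
# Recalibrating Land Impact without breaking old content: both item costs
# and the parcel allowance scale by the same factor, so legacy items keep
# exactly the same share of the parcel. Only new, efficient content can
# come in under the old floor of 1 LI.

SCALE = 100  # example factor from the post

def recalibrate(old_li: int) -> int:
    return old_li * SCALE

old_parcel_limit = 351            # hypothetical parcel allowance
new_parcel_limit = old_parcel_limit * SCALE

legacy_chair = recalibrate(1)     # old 1 LI item -> 100 new LI
efficient_chair = 40              # hypothetical well-made item: 0.4 LI in old money

# The legacy item consumes the same fraction of the parcel as before:
# legacy_chair / new_parcel_limit == 1 / old_parcel_limit
print(legacy_chair, "of", new_parcel_limit, "vs 1 of", old_parcel_limit)
```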
  23. The problem is more that, with the wide variation of hardware viewers have to run on, what improves things for one person makes them worse for another; it will also vary significantly from scene to scene. The majority of people are typically CPU bound in the viewer. This is because the viewer (as we all know) is predominantly single threaded, but also because it simply does an awful lot of pre-rendering preparation on the CPU. If you are basically hamstrung by your CPU, with your GPU barely ticking over, then all the settings that are GPU heavy (such as anisotropic filtering) won't have any noticeable impact, whereas someone with onboard graphics, where the imbalance is less distinct, will potentially see a slow down because the GPU is taking longer to draw a frame and the CPU is waiting on it. With regard to scene impact, rigged mesh is particularly hard on CPUs, especially if you have shadows on. Sun shadow calculation typically takes a good 10-20% of frame time on my machine, and that will be higher with more avatars and rigged mesh in a scene. I think the safest conclusion is that your mileage will be different to the next person's, and all we can reasonably say is "this knob, when twisted, changes stuff; you might win, you might not". A good example here is EEP related. One of the reasons we (FS) waited a long time to get EEP out was the water shader fiasco. Some users (not all by a long measure, but enough that it mattered) saw a significant drop in FPS with EEP, and water rendering was frequently the cause. One discovery was that occlusion culling had been disabled. Occlusion culling is a feature that removes from the rendering pipeline "stuff" that will be hidden in the final view. When you are drawing reflection/refraction, some of what is outside of visible range is within reflection range, and so occlusion culling is turned off to get better reflections. 
The upshot of disabling culling is that you are left with a LOT more stuff to send to the pipeline and ultimately to the GPU for drawing. So... re-enable culling, less stuff to draw... yay, must be faster, right? For some people, yes, a lot faster; mostly those with lower-end GPUs for whom the lack of culling had pushed the GPU draw time above whatever threshold made it the bottleneck. For others though, not so good, because occlusion culling requires the CPU to decide what is visible or not, and the probing for occlusion itself has an overhead; if you are CPU bound, then occlusion culling is probably not going to help. That is just one example, and those who know the full details will hopefully accept the overly simplified explanation for what it is. I've long wanted to have a revised "lag meter" that could tell you where your personal bottleneck was in a given scene. It's really not that simple, however. One clue to watch for, though: your GPU probably has an idle speed and a boost speed. When it has enough work to do it'll break into boost and use more power, fans whir, etc. When the CPU or disk or network is your limiting factor, the GPU will sulk and look bored. I've sat in a busy region clocking a few FPS with my GPU (infuriatingly) idling because my CPU is busy pulling down textures or stupidly complex mesh bodies, etc. Once we get to a more performant viewer design we'll hopefully have a nice balance where slow fps can be linked directly to hardware being overburdened.
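That "GPU sulking" clue can be turned into a crude heuristic. Everything here is hypothetical: the thresholds are invented for the sketch, and a real "lag meter" would sample utilisation counters from the OS or driver rather than take them as arguments:

```python
# A toy "lag meter" heuristic based on the observation above: low FPS with
# an idle GPU points at the CPU (or IO); low FPS with a saturated GPU points
# at the GPU. All thresholds are illustrative, not tuned values.

def guess_bottleneck(fps: float, cpu_util: float, gpu_util: float) -> str:
    """cpu_util/gpu_util are 0.0-1.0 utilisation estimates."""
    if fps >= 30:
        return "none (frame rate acceptable)"
    if gpu_util > 0.9:
        return "GPU-bound: reduce GPU-heavy settings (shadows, resolution)"
    if cpu_util > 0.9:
        return "CPU-bound: fewer avatars/rigged mesh will help more than GPU settings"
    return "likely IO-bound (network/disk): textures or assets still streaming"

# The busy-region scenario from the post: a few FPS, GPU idling, CPU flat out.
print(guess_bottleneck(12, 0.95, 0.20))
# The opposite case: weak GPU saturated by uncull-ed reflection geometry.
print(guess_bottleneck(14, 0.40, 0.98))
```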
  24. Given that the build in the thread linked above is mine, I can explain a few basic rules. Building underwater in SL is all about suspending disbelief and convincing your eye that things are not flooded. Underwater shaders have a fog setting that kicks in sooner than on land, and therefore the most important rule is to distract the eye from noticing the water fog by not having long views in large open spaces. A long, narrow, well-lit tunnel will still look OK, and a large room with lots of architectural substance will also add enough visual cues for your mind to accept that this is a living space. It is all optical illusion. Here is a short clip of the main undersea tunnels. These are a real challenge because they are long and straight; to highlight the difference between inside and out, the glass is tinted and the inner space is well lit. I take my camera outside into the sea and into my own "airlock". The airlock is flooded (don't worry, I am good at holding my breath) and if we cam to the airspace at the top of the airlock we can see a "fake" water surface. https://i.gyazo.com/d0fd30664178ef9a81a7d86c169f06fe.mp4 The next clip shows how the fake water "drains" when the airlock is closed. https://i.gyazo.com/5dbce46695375c0d1ccb7613bb51527c.mp4 This build is ancient and technically dated. I built it in ~2008. It is predominantly prims and some sculpties, but it demonstrates a few tricks we can still use. There is more that can be done these days to make the effect more visually compelling, but it works well enough for a 13-year-old build. What could be done in the viewer to make underwater work better? One option might be to allow a special volume type that could be used to identify interior spaces. This would not be water physics, just a trick to allow the viewer to know which shaders to use. It is not clear to me, though, whether that would ultimately achieve what we want. 
The viewer could in theory determine that it was inside one of these special volumes and push a different set of shaders to the usual underwater ones, but that would address just the "indoors" aspects. If you have spent any time underwater in RL as a scuba diver, then you will be aware that your perception of light is not the same as that of a camera. It gets dark very quickly; your brain automatically corrects (to some extent) for the attenuation of light and the elimination of certain parts of the spectrum as you go deeper. An underwater photo without artificial lighting will be a dark, blue-tinted, drab thing, and any decent photos need to be well lit. Would people actually want this to be how it worked? Some would; many would not, I suspect. With light attenuation we'd also want caustics, the lighting artefacts that even non-divers will have seen when snorkelling and swimming: ripples and shimmers upon the sand. These are quite onerous to render. Here is a little article on how Subnautica deals with this. Keep in mind that Subnautica is quite literally focussed on underwater and so can afford to throw a lot of rendering effort into this. In Second Life (sadly) the underwater world is not well loved by most residents, though we might argue this is self-perpetuating: all the time we neglect the undersea, we are not going to attract people to it. https://www.gamasutra.com/view/news/264997/How_Subnautica_plunges_deeper_into_rendering_realistic_water.php
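The wavelength-dependent darkening described above is essentially exponential (Beer-Lambert) attenuation applied per colour channel. The coefficients below are made up for the sketch, chosen only to show the well-known effect that red is absorbed far faster than blue as you descend:

```python
import math

# Beer-Lambert attenuation: I(d) = I0 * exp(-k * d), with a different
# absorption coefficient k per colour channel. Coefficients are illustrative
# per-metre values, not measured optical data.

K = {"red": 0.45, "green": 0.12, "blue": 0.05}

def attenuate(rgb, depth_m):
    """Scale an (r, g, b) colour in the 0-1 range by water depth in metres."""
    r, g, b = rgb
    return (r * math.exp(-K["red"] * depth_m),
            g * math.exp(-K["green"] * depth_m),
            b * math.exp(-K["blue"] * depth_m))

# White light rapidly becomes the blue-tinted drab of an unlit underwater photo.
white = (1.0, 1.0, 1.0)
for depth in (0, 5, 15):
    r, g, b = attenuate(white, depth)
    print(f"{depth:>2} m: R={r:.2f} G={g:.2f} B={b:.2f}")
```

A real shader would fold this into the fog calculation per fragment; the point here is just that the blue shift falls out of one exponential per channel.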
  25. There is no way to derender a group or selection, but you can tell the SL viewer not to render ALL avatars (Ctrl-Alt-Shift-4); note that this removes your own avatar as well.