Everything posted by Beq Janus

  1. @ChinRey the only change I know of that would be in this space is https://jira.secondlife.com/browse/BUG-231583 Paging @Whirly Fizzle: I think that this bug (Ubit's reported bug, not the one in this thread, which may or may not be related) was supposedly fixed in the SL viewer, but perhaps it has not landed yet? Worth checking.
  2. What viewer are you using, and how does this manifest?
  3. There is nothing in SL that requires you to attach those planes to each other. I am sure people will advise you of other/better ways to achieve what you are after, but if all you want is three "layered" planes on the back wall then just go for it. To keep your 1 LI dream alive when linking to something else, the total LI of the linked items must be 1.49 or less (technically more than 1 LI, but in display terms it gets rounded down). The lowest possible LI for any single linked object is 0.5. Look at the full LI cost using the "more info" link on the build tools; that will tell you what budget remains.
  4. What happens when you try to log in to the dashboard? Is your password rejected? Have you tried to reset it?
  5. I'll ask LL if they are likely to do anything with Niran's poser. If not, then the "wait for it to flow down from upstream" tactic is not valid. Based on what @NiranV Dean said, it seems likely that they're not going to go anywhere with it without addressing their concerns over the abuse. It is something we can pursue at the next TPV dev meeting.
  6. That's a bizarre suggestion, though it underlines the typical "don't understand how users actually use SL" issue we face. The pose export restriction is not that tough a problem to crack. It is exactly the same as the solution used by @Phate Shepherd in AnyPose. The restriction is effectively enforced for LSL solutions because there is no way to "read" the current pose position. As a result, AnyPose (and similar solutions) export a new animation that you play at the same time as the pose that is being modified. This allows you to protect the original creator and support the "tweak". The restriction actually required to get to this would be: store the starting animation state (A), change the pose (B), save the delta between A and B (see the sketch below). This would then be aligned to existing capabilities. If the creator of the poses for A was the current user then in theory you could relax that, but actually, given the complexity of animations today (heads and hands and bodies all running various layers), it is unlikely that you'll find a case where you are running all of them. (That probably blows the T-pose idea out of the water too; you'd literally have to stop all the animations in all the attachments.)
Ironically, the lack of "read" from the LSL side made the problem more readily addressable for the LSL-based solutions, as those problems never arose. Though IIRC, Phate used to have a disclaimer in the notecards to tell people that there was no way to steal poses using the tool, presumably to stop the whining by people who had not investigated it. Another advantage of the server-side solutions is that they avoid all the "OMG someone might make my pixels do silly things" fears, as any changes to another avatar's pose have to go through the permission system.
@Rowan Amore I should have asked this in the first place: what are the main advantages that you have found using the BD poser versus the HUD-based posers?
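To make the delta idea concrete, here is a minimal sketch of computing a per-joint delta that, played on top of pose A, reproduces pose B. This is not AnyPose's or any viewer's actual code; the joint map and quaternion type are my own illustrative assumptions.

```cpp
#include <map>
#include <string>

struct Quat {
    float x = 0, y = 0, z = 0, w = 1;
    Quat conjugate() const { return {-x, -y, -z, w}; }   // inverse, for unit quaternions
    Quat operator*(const Quat& r) const {                // Hamilton product
        return { w*r.x + x*r.w + y*r.z - z*r.y,
                 w*r.y - x*r.z + y*r.w + z*r.x,
                 w*r.z + x*r.y - y*r.x + z*r.w,
                 w*r.w - x*r.x - y*r.y - z*r.z };
    }
};

using Pose = std::map<std::string, Quat>;  // joint name -> local rotation

// Delta that, composed onto pose A, reproduces pose B:
// since B = A * delta, we have delta = inverse(A) * B per joint.
Pose makeDelta(const Pose& a, const Pose& b) {
    Pose delta;
    for (const auto& [joint, rotB] : b) {
        auto it = a.find(joint);
        Quat rotA = (it != a.end()) ? it->second : Quat{}; // identity if A lacks the joint
        delta[joint] = rotA.conjugate() * rotB;
    }
    return delta;
}
```

The exported "delta animation" then contains only the tweak, never the original creator's pose data, which is the property the LSL tools rely on.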
  7. Indeed, that was the most explicit, but with little further context. One of the limitations of posing and animation in SL today is that, short of sending an animation to everyone and telling their viewer "Wulfie is now playing animation X", we have no way to explicitly describe skeletal movements. Consider the in-world posers (AnyPose being probably the best known): they effectively work by playing lots of small animation files depending on the various states you click through. A brilliant example of the ingenuity of Second Life creators/coders, but far from how you'd ideally want things to work. If we do get these new interaction tools then they would (one would hope and pray) come with a new "API" to stream those updates in a more efficient manner; this would truly open the doors for better AO and posing tools. There has been no concrete news of this shared with the TPVs or through the Content Creator User Group. I do hope that the Lab don't present such a potentially powerful new tool without first consulting the users, then tell us it's too late to change things or incorporate our ideas. We've seen them do this in the past and it inevitably tarnishes what should be a great opportunity.
  8. I've wanted to do something like that. I use AnyPose a lot for tweaking poses and it would be nice to have something like that natively. I'm not a fan of the slider setup that Niran uses, though; that's not a criticism of the UI per se, more that I don't think it is a very intuitive solution. Niran contributed it to the Lab, but they have managed to swallow it into their black hole for at least three years now, perhaps more. If they were to pick that up then we'd automatically acquire it. I did try taking another approach and asked the Lab for access to the legacy ragdoll code (from the 2009-ish avatar puppeteering project), but the request was denied. What we really need is a click-and-drag type of pose control. There was a commit made to the LL git repo a few months ago that looked like someone was perhaps playing around with the code that I had asked for 12 months earlier. It might not be related to a poser function, but we have heard vague references to "avatar expression" from the Lindens a few times now, so perhaps they have something up their sleeves?
  9. It was added by TPVs and does not exist in the SL viewer at all. I know we have it in FS; I don't know the history of whether it started with us or one of the other TPVs, but either way it was never adopted by LL.
  10. We're currently in QA. Assuming nothing bites us and resets the timer on that, it should not be long now.
  11. @Jupitvr I'm a developer on the Firestorm viewer; would you be happy to send me the problematic object, please, and the diffuse texture too? @arton Rotaru has made some interesting observations regarding the automatic alpha masking and I'd like to have a closer look. @Whirly Fizzle do we know of any auto-alpha bugs (or even normal behaviour) that would lead to this?
  12. Just to reinforce this: if you do not need alpha, you are strongly encouraged to upload an RGB-only image, as @arton Rotaru suggests. When a texture has any form of transparency it has to go into special handling, which is slower, so by avoiding this where you can, your asset will be more efficient. Yep, very peculiar. When we apply materials we switch the alpha handling over to the materials-based alpha options (giving us alpha and emissive options); my best guess is that somehow, in doing this, the behaviour changes. Curious though.
  13. Simple answer: in TPVs at least, and I am pretty sure in the LL viewer too, assets in SL are multi-layered and fetched in parallel. That is in some regards a side-effect of the evolution we have gone through, but it is also in keeping with most 3D packages, like Blender.
At the top level we have what is effectively a transformation: a container that holds the scale and other metadata pertaining to this instance. On this level live the default colour texture UUIDs etc., and the type of asset ("prim" (with subtypes), "mesh", "sculpty", etc.). There is a UUID for the materials "pack" and another UUID for the sculpty/mesh. When the asset arrives it is decoded and the prim drawn. If the object has a UUID in the materials slot then that is requested (I am simplifying this a bit because the materials system is odd), and the standard textures are fetched (see below). If the type is mesh/sculpt then the UUID for the vertex data is requested; for a sculpt this is an image, of course, and for a mesh this is a mesh object. Mesh requests go via the mesh repo thread, which checks if it has knowledge already and fills the request from cache, or otherwise yanks it from the web (glossing over the LOD-by-LOD part). Texture requests go to texture fetch threads and are fetched in parallel (and in most TPVs decoded in parallel too).
An interesting aside from all this is that the top-level "container" knows (slightly indirectly) how many faces the mesh has, which may seem bizarre, but comes back to the default textures: the top level carries a UUID for each prim face's diffuse texture, so for a mesh with no materials the top-level object has to know how many textures to expect, entirely breaking the abstraction. Ironically, this also leads to the wonderful hexagons and triangles etc. that you see on the region maps. This is because the map tile software was never taught about mesh; it does not render it. Instead it draws the N-sided polygon associated with the nearest matching prim type.
And a final piece of SL trivia of the day: the materials "pack" tagged onto the top level as a separate UUID is also the basic switch on the legacy prim accounting cap. If the materials pack is empty then you get legacy accounting. If it is non-null, then you get "mesh accounting". The switch for alpha masking/blending/emission is contained in the same pack as the materials, and that is why switching a prim to alpha masking can screw up the physics costs by causing the prim cap to lift. (A sketch of this layering follows below.)
To get back to specifics: it is the abuse of the default diffuse texture that I would love to see changed as we explore PBR and a new materials regimen. The default "every object has them" colour textures should be exclusively kept for ALM-"off" fallback rendering; this would allow creators to bake lighting and do whatever fancy things they want there. The new materials should (I'd like to say MUST) have a dedicated diffuse/albedo so that the PBR materials do not have to do double duty and are not spoiled by having aspects of baked lighting in them. At the moment we have a single colour map shared between the use cases, and that basically makes all of them worse.
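As a rough illustration of the layering just described (the field names here are mine, not the viewer's actual classes):

```cpp
#include <string>
#include <vector>

struct UUID {
    std::string value;
    bool isNull() const { return value.empty(); }
};

struct AssetContainer {                 // the top-level "transformation" object
    float scale[3];                     // instance metadata lives at this level
    std::vector<UUID> faceDiffuseTex;   // one diffuse UUID per prim face -- the
                                        // container must know the face count
    UUID materialsPack;                 // null => legacy prim accounting
    UUID geometry;                      // sculpt image or mesh asset UUID
};

// The accounting switch described above: a non-null materials pack
// lifts the legacy cap and moves the object to "mesh accounting".
bool usesLegacyAccounting(const AssetContainer& a) {
    return a.materialsPack.isNull();
}
```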
  14. The problem here is the classic optimisation problem that we see everywhere in the viewer: speed up one aspect and it slows down another. This whack-a-mole optimisation technique is a zero-sum game if we are not careful. It is not typically the case that a slow mesh fetch affects FPS, for example; instead it affects how long we stand around staring at grey blobs and clouds. Improving the scene build delay can inadvertently add pressure to the FPS through CPU and RAM churn (though threading has helped with the former), and all of a sudden we have complaints about slowdowns.
The cache system is far from ideal. It requires lookups, and the lookups get slower the more assets are in the cache, so we can't just keep saying "hey, have more disk space"; however, it is already easy to max out the cache with a handful of region hops and a high DD. By stuffing more data unnecessarily into the cache we slow down cache fetches and increase cache maintenance and churn. I'd have to re-check the code to see whether the fetch from cache into RAM is an entirely separate request from the fetch from network to cache, too, as you don't want to be pulling data into RAM for mesh preparation (unzipping, rebuilding etc.) that is then not used; this causes more RAM pressure, (CPU) cache invalidation and whatnot. In the end, whether a given trade-off works for you and me is not the same as whether it works for the laptop user on a satellite downlink (yes, we have a number of those), or those poor souls still caching on spinning platters rather than SSDs (we have those too).
With fetching largely happening in parallel on multiple threads, and most content being relatively local thanks to the CDN edge network, the question of how much we are actually waiting for the network is worth asking. When you see clouds and blobs, can you be sure that is due to the network, or is it the mesh repo? One very important factor that could well suggest that whole-mesh fetches are far better, but which I have not yet measured, is whether Akamai throttle clients on a request basis or on a throughput basis. I would like to see some comment from the Lab (or ideally from Akamai). @Monty Linden, any thoughts? My suspicion is that they throttle by request (throttling by volume would imply some kind of session accounting and consolidation), and thus by consolidating many small requests into fewer larger ones we could get better response times from the CDN overall. This has a significant potential benefit. I believe that @animats did some research into this while developing his Rust-based viewer, but an authoritative answer from the providers would be ideal.
The current focus of DRTVWR-554 (the performance project viewer) is stable, higher FPS, but this is only part of the story. We all want the updates faster and more consistent, but the actual time to build a scene from zero is largely independent of frame rate and is all about data fetching, unpacking and prepping. A good example of this is the "OMG I hate seeing dismembered parts and naughty bits when I TP, please make them go away, my eyes my eyes...." complaint, which, drama aside, is indicative of an underlying issue in the speed of scene building. The last time one of those JIRAs got actioned, it was to add more delays and checks into the declouding, missing the point that what people really want is a faster resolving of disparate parts, not clouds for longer. It is an example of where knowing the use case and reasoning (flawed or otherwise) of users is important.
I would like to hear from LL (I have asked at meetings and not been answered) how many users have ALM disabled/enabled. My estimate (based purely on what we hear in support) is that about 40-45% have ALM turned off. There is then a follow-up question that we cannot answer through stats and metrics alone, which is "why?". Answering this involves user exposure and consultation; as TPVs we get it through our support teams/forums and general user engagement. I believe LL lack much of this contextual understanding (though some Lindens are very active residents too, so this is not at all a black-and-white matter). To answer the question "why turn off ALM?", two use cases quickly appear. 1) Some people do it because they think it helps their frame rate. While things are undoubtedly faster with ALM off, the improvement is mostly down to the side-effect that disabling ALM kills shadows, which are the majority of any scene's rendering cost. These people would have a better visual experience by reducing their shadows but keeping ALM active (or at least trying these in stages rather than simply on/off). In some cases turning off ALM (when shadows are already disabled) is a backward step. 2) Others, however, do this because their networks are awful and they'd rather not suck down the additional textures/meshes. The satellite downlink users are one example, but there are many other cases. SL has a global reach and is not limited to those with good network infrastructure.
  15. Ctrl-alt should give the same behaviour as the camera controls: rotating the camera in a circle around the current focus. I think that the OP wanted rotation of the camera in place (at the centre of the circle), but it's all inferred from the same post so any of the above could be true 🙂 (The sketch below shows the difference between the two.)
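For clarity, here is the difference between the two behaviours in plain vector math (illustrative only, not viewer code), rotating about the vertical axis:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 rotateAboutY(Vec3 v, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

// Ctrl-alt style: the camera position orbits the focus point.
void orbit(Vec3& camPos, const Vec3& focus, float angle) {
    Vec3 off = { camPos.x - focus.x, camPos.y - focus.y, camPos.z - focus.z };
    off = rotateAboutY(off, angle);
    camPos = { focus.x + off.x, focus.y + off.y, focus.z + off.z };
}

// Rotate-in-place: the camera stays put and the focus swings around it.
void panInPlace(const Vec3& camPos, Vec3& focus, float angle) {
    Vec3 off = { focus.x - camPos.x, focus.y - camPos.y, focus.z - camPos.z };
    off = rotateAboutY(off, angle);
    focus = { camPos.x + off.x, camPos.y + off.y, camPos.z + off.z };
}
```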
  16. Is this for taking panoramics? The next release of FS will have 360 snapshot support. I don't think we have anything that turns the camera in place, but there are definitely camera HUDs that can. Something like https://marketplace.secondlife.com/p/Wpano-HUD-Panorama-photo-assist-tool/10425972 should do the job. Here's a quick random 360 shot from the future Firestorm (soon); the same functionality is currently in the default LL release viewer if you need it right now. https://www.flickr.com/photos/beq/51870504277/in/dateposted/
  17. The video clearly indicates region crossing issues. This is not a viewer issue for the most part; it is a rather sad fact of life in SL that frustrates many drivers/sailors/pilots, and one that LL are aware of. That said, the reason region crossings are hazardous is in part because the entire context of the avatars and vehicles crossing the boundary has to be handed over to a new server. The more scripts, objects etc., the larger the handover task, so keeping things simpler may help. As has been said, reduce the texture VRAM allocation. The video card has to be able to hold all the other graphical components, so you want to strike a good balance; the viewer will try its best, but forcing it into a corner when it comes to resource management is not advisable. The Ultra setting governs many things, a number of which you have turned down to mitigate, but it still leaves other things higher than your GPU would typically be set to. You may have mentioned this and I missed it, but I would suggest that you ensure that you have your cache directory whitelisted: https://wiki.firestormviewer.org/antivirus_whitelisting Flying along causes a lot of churn, and if your AV is scanning every file just for fun then you're going to feel it. (Finally, I'd suggest reducing your LOD multiplier too, but that's more out of general good practice; I don't think it'll make a significant change in this case.)
  18. @Anakin Debevec TL;DR performance checklist:
For shopping: render friends only enabled, DD as low as you dare, water reflections set to None (Opaque), shadows off.
For exploring: render friends only (perhaps), shadows on, water reflections set to None unless you need them. DD depends on the scene; keep it under about 180 if you can.
For clubbing/socialising: shadows off, water reflections off, DD set to the size of the club/dance floor at the very most.
For photos: turn on all the things... then turn them off again.
Use graphics presets to store settings that work well for you in different types of scene.
Longer answer: the next version of Firestorm will have a feature I developed that I hope will help with adjusting settings for a given region/scene, or at least inform people of the causes of lag, and even (experimentally) try to auto-optimise things for you. However, the same logic that it will apply can also be applied by you manually today. There are actually not that many settings that make a fundamental, earth-shattering difference to performance (there are lots that make a minor difference). The main ones on current FS are: draw distance, shadows, water reflections and "render only friends". There is no single set of settings that will work for all places equally. Firestorm has the ability to save graphics presets (I can't recall if that's a feature common to all viewers or not); use it to create profiles and quickly switch between them.
In a busy region with many avatars, the primary cause of lag is likely to be the mesh avatars. Reducing the value of "MaxNonImposters" will help a little (though the impostors have been broken for six years or so if you have shadows enabled; I have fixed this in the forthcoming release, a patch that was also provided to LL). Better still, derendering all avatars or using the "render friends only" option will remove most of the lagatars from your scene; not ideal if you like to watch other people at a social event, but a means to an end if you want to speed up your shopping trip.
Shadows: the use of shadows triples the work done by the CPU during rendering. Disabling them is an instant boost; however, it can also look crap! Turning them off for shopping events is a good idea, but not so hot for exploring and enjoying SL at its best. Side note: don't kill ALM just to kill shadows. Too many people turn off advanced lighting (ALM), which has the side effect of disabling shadows and getting the performance boost, but it also removes materials and a whole heap of rendering optimisations that then have to be done on the CPU. Because they see the boost due to removing shadows, they often erroneously think it was ALM that was the cause of their problems. This is unlikely on any vaguely modern machine. There are a few valid reasons for disabling ALM, but these tend to be if you have significantly low network bandwidth and the download overhead of material textures is undesirable, or a very ancient low-memory video card. Most people are better served with ALM on, managing the shadows to suit their preference/use case.
Draw distance is your friend. Doing less work is always faster; reducing DD reduces the amount of stuff that has to be handled. (You appear to have this in hand with the setting at 64.) All those display kiosks are stuffed with mesh, and often these are highly inefficient; some creators do crazy things like modelling a decorative cornice on the tiny boxes that they use to display the colour swatches!
In order to deal with this, reduce the amount of stuff that you are rendering by keeping the draw distance to something sensible based on the environment you are in. As before, shopping events don't need a long DD; lounging on the lakeside looking over a forest is another case entirely.
Water reflections: unless you need water in your scene, keep water reflections on None (Opaque). Of all the water settings, this is the only one that makes a significant difference. This is because, like shadows, creating reflections means everything has to get drawn again; when you have "Everything" enabled it is rendering multiple times for both reflections (things above the water) and refractions (things below the water). Opaque looks passable for non-photographic use and will boost your FPS. Keep in mind that at most shopping events you cannot even see the water.
A few related things. How to read the CPU usage: as Ansariel pointed out, a typical SL viewer will always use at least 100% of a single core/thread, with two main exceptions: 1) you are focused on another app and the viewer is sleeping in the background; 2) you have set an FPS limit and the viewer is sleeping for part of the time (see the sketch at the end of this post). In most other circumstances the viewer is designed to simply hog a core and do as many frames as it can, so it will always use 100% of a single thread/core. Windows has a very bizarre way of reporting CPU usage: even though the viewer is burning a single thread, the Windows scheduler will move that thread all over the CPU at a whim. This means that you get a low average reported across all cores (25% on 4 cores, 12.5% on 8 cores, for example), and to see the total usage you have to add up all the parts! Users on Linux (not sure about Mac) will generally see a far clearer reporting of 100% single-core utilisation. Not so long ago this is where the CPU story ended, but in recent releases we have moved texture decoding to separate threads. This means that you will see a lot more thread activity and higher usage, but only during the initial period after arriving, while all the textures are being unpacked. This can be tested by observing the usage levels just after a TP and again once things have stabilised. Ultimately, no matter what you do, the main graphics work is all happening on a single thread, and that thread will use all of the CPU it can get (the maximum being 100% of one core); any additional usage is due to the extra threading. More and more stuff is being moved out to other threads. Also look at Help → About Firestorm and note the "concurrency" number. That is the number of cores that the OS is making available to FS; the image threading etc. use this number to decide how many threads to use.
There was a mention or two here of the new cache. Firstly, a little history to clear some apparent confusion. FS has had the new cache working and in our releases for over a year. This is because Ansariel took the broken cache that LL released back in late 2020 (IIRC) and fixed it up. When LL found that their release was broken they rolled back, and only released their cache update in the last few months (including Ansariel's fixes). (LL delayed their subsequent release because they had issues with the rollback of the cache in the first instance, and while investigating those they pushed other projects ahead in the release pipeline, meaning that their cache fixes finally got a release slot after the queue in front of them had all been released.) Does the new cache improve FPS directly? Unlikely.
The cache plays a part in speeding up the sourcing of resources from the network; as such, a faster cache will mean less time spent with grey blobs after a TP. It is not, however, intrinsically linked to rendering. That said, as always, there is a caveat. Remember all that "single thread" talk above? Some cache work happens on the main thread, and if you have a slow or highly contended/busy disk then time spent waiting for the disk will ultimately steal cycles from the FPS. If you have an SSD then read/write times are unlikely to be a significant issue.... UNLESS your AV software is constantly scanning and rescanning it. Always ensure that your antivirus software has your cache directories whitelisted.
Future: the LL performance viewer is looking very promising. It is not released yet (though some viewers have started to pull bits and pieces out of it for their own releases). It has a number of bugs that are being addressed. FS policy is that we do not release any code from LL before they have released it themselves, and then only depending on how stable things are post-release. This allows us to keep the periodic releases that our users like and manage the three-version rule that allows us to not force-upgrade everyone. This caution also means that we extremely rarely have to roll back a bad release. If LL release the perf viewer in the next few months then you can expect to see those changes in FS a few months later, depending on where in our release cycle it appears. For various reasons we've had fewer releases in the last year than we'd have liked, but a release is on its way *soon*, with the people signed up for our QA team running tests at the moment.
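On the FPS-limit point above: here is a minimal sketch of how a frame limiter of the kind described typically works (illustrative only, not Firestorm's actual implementation). If a frame finishes under budget, the viewer sleeps away the remainder, which is why a capped viewer stops pinning a full core:

```cpp
#include <chrono>
#include <thread>

void runFrames(int fpsLimit) {
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::nanoseconds(1'000'000'000 / fpsLimit);
    while (true) {
        auto start = clock::now();
        // renderFrame();  // hypothetical: one full frame of work
        auto elapsed = clock::now() - start;
        if (elapsed < budget)
            std::this_thread::sleep_for(budget - elapsed); // idle time shows up
                                                           // as reduced CPU usage
    }
}
```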
  19. I've not had time to read and digest all of this thread in detail, but I can confirm what @Aishagain says, which is that EEP environments are first-class assets, so it could well be that the assets in the asset cache were in a mess and clearing those cleaned it up.
  20. As @tomm55 says, the primary difference is going to be the use of KDU versus OpenJPEG; KDU remains significantly faster than OpenJPEG. The suggestion to check the cache location is very valid; in particular, make sure that you have the new cache locations whitelisted in your AV.
  21. You can run it from the Release folder in the build tree once the "package" build step has been run once. That step will create the appropriate copies of the configs/skins etc. in the build tree, and it will then run. This is especially useful if you want to run Firestorm-bin.exe, which is the unstripped binary and thus usable with the debugger (you can run it standalone then attach the debugger later if you want to inspect something). I frequently run from the build tree when I am working on a feature/change.
  22. There is occlusion culling on the shadow passes, but it is a lot less effective, because until you do the shadow projection you can't really know what is hidden (that's a gross oversimplification).
At the moment, no. The viewer can and does know what the cost of those attachments is; in an earlier development version I was able to list these, but because of the way that Second Life works we cannot tell what the name of the attachment is. The only way to get the necessary object details (as far as I have been able to determine, having asked both LL and other TPV devs) is through the mechanism that the "Inspect" function (which you find in Firestorm and other TPVs) uses, and this requires a selection to be sent to the server, which in turn would force the user's own selection to be lost, making it frustrating at best and unusable at worst. The overhead on the server and network would almost certainly not be worth the effort. Long story short... no.
ART is subjective to your hardware and your camera position. It is telling you who is dragging down your FPS right now. The worst avatar ever will still appear relatively low so long as they are behind you and not on camera. It would be hard to compile this into any kind of meaningful average. The problem any ARC-type number has is that it cannot be right for everyone. Do we want it to be a strict render-time mapping, or something more abstract that also tells us how long it will take to "decloud", how long to pull all the materials, and how much memory pressure it applies on both VRAM and system RAM? Personally, I think the Lab needs to issue concrete guidelines on what well-behaved content looks like: give examples, and keep that information up to date as both hardware and software evolve. If ARC had been modified to include a drawcall overhead then it would be a far better reflection of the reality for (most) users, but it has been left to rot for a decade, and three or more years of talking about "project ArcTan" yielded no apparent progress. I doubt we can get to a reliable number for scripts, mostly because I don't believe that there is a "one size fits all" number for complexity. Right now, December 2021, drawcall overhead is the number one problem, and if ARC included that it would be closer to reality (see the sketch below). BUT once the Lab release the performance changes and the new drawcall batching code comes into common use (let's say by summer 2022), then perhaps that will have changed. If there is a future for ARC (or some composite complexity number that replaces it) then it needs to be maintained actively and updated to reflect current hardware/software, based on real testing in a wide range of scenarios. It needs to be correct for the vast majority of users. The current ARC scripts eject people that are innocent while happily allowing the real lagatars to wander free. That is a broken system, and unless it can be fixed and proven to be correct most of the time it should simply be deprecated.
There are two parts at least to this... 1) This is in effect drawcall batching. Right now we do not have drawcall batching for rigged mesh; we do have it for static mesh, and that has always been the case. There is, however, an implementation being looked at as part of the performance improvement viewer that I have referred to frequently. This will shift the balance of things (I hope). There are many bugs still being squashed before we can call that done, but the Lab have some excellent work in that viewer and I am looking forward to seeing it.
2) There is a further extension to this, which is something like a bake service for mesh: take the avatar as a whole and "bake" a single composite mesh, merging the textures and the meshes to optimise them into a single object. This has the effect of reducing drawcalls, but it also culls hidden faces (the legs inside trousers, etc.). This is a bigger, harder problem and not currently on the cards.
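To illustrate the drawcall argument, a hypothetical shape for a drawcall-aware complexity score might look like the sketch below. This is not the actual ARC formula; every field and weight here is invented for illustration.

```cpp
struct AvatarStats {
    int triangles;
    int textureMemKB;
    int drawCalls;     // the term current ARC lacks
};

// Weights would need to be tuned against real measurements on current
// hardware, and re-tuned as the renderer evolves.
long complexity(const AvatarStats& s) {
    const long kTri = 1, kTexKB = 4, kDrawCall = 5000;
    return kTri * s.triangles
         + kTexKB * s.textureMemKB
         + kDrawCall * s.drawCalls; // dominant cost on 2021-era hardware
}
```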
  23. As soon as it has had some reasonable exposure to real users I will be contributing it. The interesting thing at the moment is that, with all the work on performance, things are changing with regard to what is good/bad/awful. ART will remain accurate as a guideline until we remove the CPU bottleneck for most people; in all cases, traditional ARC will just become more meaningless. In the longer term we will need renewed guidelines, because it'll become harder to identify the individual avatar impact inside the render batches once we're looking at the GPU as the main bottleneck.
I use my own RAII wrapper to capture the times (a sketch of the approach follows below). The times are recorded using the LL high-res timer (RDTSC on Windows; I've not looked at Linux or the other one yet. The LL implementation supports them, but I think that for Linux at least it is gettimeofday() based and thus will be less accurate; we can certainly make it use RDTSC on Linux in a future iteration). The captured timings are placed into a lock-free queue which is processed on a separate thread and written into a double-buffered set of maps (so that the UI reads one while the viewer updates the other). As always, capturing this data has a cost; at the moment the overhead of capture is way less than the noise in the rendering, and I do as much as I can to batch the updates. Hopefully there will come a time when we need to turn off the metrics because they are a statistically significant contribution to the frame time. Frankly, that would be an awesome problem to have 🙂
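The shape of that pipeline looks roughly like this. A sketch only: the names are mine, std::chrono::steady_clock stands in for the RDTSC-based LL timer, and a mutex-guarded queue stands in for the real lock-free one.

```cpp
#include <chrono>
#include <cstdint>
#include <mutex>
#include <queue>

struct Sample { const char* label; std::uint64_t nanos; };

// Stand-in for the lock-free queue (a mutex keeps the sketch short).
class SampleQueue {
    std::mutex m_;
    std::queue<Sample> q_;
public:
    void push(Sample s) { std::lock_guard<std::mutex> l(m_); q_.push(s); }
    bool pop(Sample& s) {
        std::lock_guard<std::mutex> l(m_);
        if (q_.empty()) return false;
        s = q_.front(); q_.pop(); return true;
    }
};

// The RAII wrapper: constructing it starts the clock, and the destructor
// records the elapsed time as the scope exits. A consumer thread drains
// the queue into the (double-buffered) maps the UI reads from.
class ScopedTimer {
    const char* label_;
    std::chrono::steady_clock::time_point start_;
    SampleQueue& out_;
public:
    ScopedTimer(const char* label, SampleQueue& out)
        : label_(label), start_(std::chrono::steady_clock::now()), out_(out) {}
    ~ScopedTimer() {
        auto ns = std::chrono::steady_clock::now() - start_;
        out_.push({ label_, static_cast<std::uint64_t>(ns.count()) });
    }
};
```

Usage is just `ScopedTimer t("renderAvatar", queue);` at the top of the block being measured; the timing work is pushed off the render thread, which is why the capture overhead stays below the rendering noise.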
  24. There are a couple of things to consider here. First and foremost, LODs are your friend and should be your weapon of choice. The interior of your buildings (as applied to the mesh of the building itself) can be removed in everything but the HIGH LOD (and possibly the MEDIUM; you'd need to experiment). The scale of the building means that the HIGH LOD will be showing under default settings any time the user approaches, and certainly if they are inside. The smaller items inside the buildings should by definition LOD-switch sooner as well, but the effectiveness of that depends a lot on the conscientiousness of the creator and the settings used by the user of the viewer. (The sketch below shows why scale matters here.)
For the individual items, then yes, viewers already apply object-to-object occlusion rules, though there are many caveats to this, in particular in shadow rendering (just because an object is hidden behind another does not mean its shadow will not be visible). As suggested by @Candide LeMay, the render metadata option shows this, but it is a developer tool, not intended for wide use and rather prone to blowing up in your face. Occlusion does not work as well as we would all like (terrain does not occlude things properly, for example). There is a lot of work being done in the current performance branch of the LL viewer, which hopefully will see a release sometime in Q1 2022; it will then reach Firestorm a few months later, depending on how it falls relative to our QA/release cycles. Some of the performance work changes aspects of occlusion, improving water reflection performance; other parts reduce draw calls and so forth. There are different types of culling and I don't know them all well enough to comment at length. It is, however, good practice to use impostor rendering on the lower LODs, as this can reduce the textures in use on the model and avoid additional draw calls.
I wrote a new blog post yesterday about the forthcoming performance floater. While this initial version is focused primarily on avatars, as things evolve it would be nice to add more information to help creators and region builders tune their products. This is not an easy task, mind you; the further we move along the optimisation path, the harder it becomes to separate distinct objects out (batching and coalescing of rendering means that we cannot attribute render time to a given object).
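As a back-of-envelope illustration of the scale point: the thresholds and constants below are invented for illustration, and the viewer's real LOD-switch distances differ, but they do grow with object radius and the user's LOD factor, which is why a large building holds its HIGH LOD much longer than the small items inside it.

```cpp
// Returns 3 = HIGH, 2 = MEDIUM, 1 = LOW, 0 = LOWEST.
int lodForDistance(float objectRadius, float lodFactor, float distance) {
    float d = objectRadius * lodFactor;   // bigger objects hold HIGH for longer
    if (distance < d * 4.0f)  return 3;
    if (distance < d * 8.0f)  return 2;
    if (distance < d * 16.0f) return 1;
    return 0;
}
```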
  25. I don't actually recall this, though I might well have done and totally forgotten 🙂 In general, triangles are less preferable than hulls, especially for things like walls, as hulls avoid both the 0.5m limit on thin walls and the "inside" problem. I think you've described exactly what is happening: as the physics engine has only a pair of two-dimensional planar triangles, there is every chance that, given the vagaries of server-side physics accuracy, you end up balancing along the edge of one triangle. I quite like the "leaning out" solution for those cases where we insist on/have no choice but to use planes. Tweaked, I am sure; the physics shape is slightly smaller than the visual prim, I think.