
Beq Janus

Resident

Everything posted by Beq Janus

  1. The scale cost thing has nothing to do with fairness; those are the rules as they stand, and nobody said they were fair 🙂. The rules are, however, out of step with current rendering technology, but that's a slightly different argument. The system and concept are simple enough: a small object is likely to be seen only up close, while a large object will be seen from afar and switches LOD later, meaning small details on the large object are likely to contribute to the render time for more people, from more camera positions and directions, and so they are charged more to encourage better efficiency. This catches people out because it seems peculiar that a house frontage with three windows and a door, and all their furnishings, costs less LI when the windows and doors are separate than when they are all linked; but the expectation is that the separate windows will switch to a lower LOD sooner, and as such their higher-density mesh parts will be rendered from fewer locations. There is a rough illustration of the idea in the sketch below.

The flip side of that is the reality that, as of this moment, most rendering is bottlenecked on the CPU and drawcalls drive the majority of the render cost. Within certain parameters the drawcall counts remain the same at all LODs, making the triangle obsession somewhat moot. However... when the performance updates land, rigged mesh batching becomes more of a thing. At that point the cost will be less observable (it is harder to measure the impact on the GPU than on the CPU), but the bottleneck may well move from CPU to GPU. If the cost moves to the GPU, then the CPU will be waiting for the GPU to catch up, and the GPU will be choking on all the triangles being jammed into it. The problem here is known as overdraw: having multiple triangles all fighting over the same on-screen pixel means you waste a lot of time drawing and redrawing the same point on the screen. The denser the mesh, the more the overdraw, and the longer the GPU spends. At the moment this is largely hidden by the heinous drawcall-induced CPU bottleneck.

As for batching... render batches reduce the drawcall overhead to some extent, though batches are limited by texture. Things with the same texture batch together, and some render passes can batch more than one texture. I think it may also only apply within the same linkset; I'd have to go and recheck.
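To make the scale point concrete, here is a rough, illustrative C++ sketch. It is not the Land Impact / streaming-cost formula, and the radii and multipliers are invented; it only demonstrates that a larger bounding radius keeps the dense high-detail LOD visible from much further away:

```cpp
#include <cstdio>

// Illustrative only: assumes the distance at which an object drops to a
// lower LOD grows roughly in proportion to its bounding radius.
// The multipliers below are invented for the example, not viewer constants.
static double lodSwitchDistance(double radius, double lodLevelMultiplier)
{
    return radius * lodLevelMultiplier;
}

int main()
{
    const double window_radius = 1.0;   // a small unlinked window, ~1m radius
    const double house_radius  = 10.0;  // the whole house linkset, ~10m radius

    // Hypothetical multipliers for dropping from High->Medium and Medium->Low.
    const double high_to_med = 8.0;
    const double med_to_low  = 16.0;

    std::printf("window keeps HIGH LOD out to ~%.0fm, MEDIUM out to ~%.0fm\n",
                lodSwitchDistance(window_radius, high_to_med),
                lodSwitchDistance(window_radius, med_to_low));
    std::printf("house  keeps HIGH LOD out to ~%.0fm, MEDIUM out to ~%.0fm\n",
                lodSwitchDistance(house_radius, high_to_med),
                lodSwitchDistance(house_radius, med_to_low));

    // The linked house keeps its dense high-LOD triangles visible from ten
    // times the distance, so many more camera positions pay for them --
    // which is what the higher charge is trying to account for.
    return 0;
}
```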
  2. As @Rolig Loon and @Quarrel Kukulcan note, this would seem to be confusion arising from unfamiliarity with the vagaries of SL. The shader that we use in the mesh upload preview is a very basic single-pass shader that has no alpha support. Once uploaded and rezzed in-world, your problems should vanish. You have two ways to test this without incurring costs. 1) Use the beta/test grid Aditi (see here for how to connect: https://wiki.secondlife.com/wiki/Preview_Grid); there you can upload anything without incurring real costs. 2) Use the local texture feature. This is more complex, but while the instructions seem long-winded, it is (I think) fairly obvious once you have done it once. Instructions for local textures: upload the mesh asset without textures, rez it inworld by dragging and dropping from your inventory, then right-click it and select Edit. From the dialogue box that appears, click the Texture tab and click the little thumbnail. In the texture picker you'll see a set of radio toggles: Inventory/Local/Bake. Inventory shows the textures that you have paid to upload; Local shows the textures that you have imported locally for testing (nobody but you can see these). At first this is empty; pick "Add", then find the texture on your hard drive and select it. That texture will now be listed on the right and can be clicked on to be applied to the mesh... phew. Now that you have this, every time you save/export that texture in Photoshop/Substance etc. it will auto-refresh inworld. Note: these are local to you; anyone in world with you will not be able to see what you are working on. The following gif shows the basic workflow up to the point of hitting "Add" for the local. Stupidly I picked an item with no transparency 🙂. The gif is a bit sparse, so the mp4 may be a little clearer: https://i.gyazo.com/73e107ae9ca6b756d9716b2f7047a815.mp4
  3. Does the LL viewer work? If not, then contact the Linden Lab Helpdesk and ask them. You'll need to try their viewer first because they'll not support you otherwise. If the LL viewer works, then we'll probably need to get your logs, so raising a Jira would be advisable. Though you've tried it all, the most common causes are the AV and malware systems; shutting those down completely and re-running the tests would rule them out for certain.
  4. Some images would help us help you: 1) the mesh in Blender or wherever you are creating it, 2) the uploader floater, 3) the "deformed" mesh.
  5. Lobbying is a good solution. I'd undoubtedly move to Maitreya if an efficient Lara-compatible body were available. As @Gabriele Graves says, there may be improvements just around the corner to ease our consciences, though it should be noted that these (so far) show similar improvements across the board, so an uncut body remains more efficient; all the numbers just get smaller. With that in mind, it defers the problem for later, which may be "good enough". Personally, I'd like to think that the creators who have made so much money out of SL might have some inclination to give back in the form of helping performance. I am ever optimistic.

A lot of people misunderstand the alpha cut workflow. I've seen many claims that "oh but the weights all change, it won't work", which is frankly nonsense. For one thing, just look at Slink: there is an example where we have both cut and uncut sharing the same clothing. It is possible. Yes, there is work involved for sure, but an uncut/low-cut body shares the same per-vertex weighting. My belief (which has no foundation in actual knowledge, so take this with a large pinch of salt) is that it is more complicated making alpha cut bodies because you have to take time fiddling with the edge vertex weighting to avoid gaps and glitches at altitude.

When I started to track body performance there was no work on the performance viewer; in part, focus on this area was invigorated by the numbers I showed, which illustrated the scale of the problem and ensured that some of the excellent work the Lab has done here was targeted at the problem space. With that in mind, if the performance viewer can and does minimise the issue, that particular battle has been won, I'll be happy, and who knows, I may even overcome my aversion to the inefficiency 🙂 The irony is that in fixing one problem we will likely move the bottleneck elsewhere; it remains to be seen who will fare best and worst in that. In the end, though, less is always more for performance. The more redundant vertices we can remove from meshes, the less overdraw happens on the graphics card; this will be the new battleground. Whether it is CPU-lagging drawcalls or GPU-heating overdraw, inefficiency reduces performance, limits the size of crowds, pushes up the minimum spec that is viable in SL, etc. I'd love to see these bodies get the option to wear a weight-compatible uncut version, just because it's the right thing to do and the need for alpha cuts has mostly passed.
  6. I feel your pain @Anna Nova. I wear Slink HG Redux with petite boobs. I'd love to have a better-supported body (or I'd love to see support return for Slink). All is not lost, depending on certain design limitations. I have found that tops rigged for Maitreya Perky can work well with Slink Petite on occasion. It is easier too when the creator provides an alpha layer to save you having to make your own. In this photo I am wearing a steampunk outfit from Silvery K. The top is made for Maitreya with small chests, and the skirt is the Legacy rigged one, which, coupled with the alpha layer that Silvery K's designer @gin Fhang kindly provides, allows me to buy and adapt the clothing. This is limiting in design terms; I get away with it because it is covering so much of me, but my point is that it is worth trying these things out. I would also keep an eye on things over the next 6 months. The forthcoming performance viewer update will improve all of our rendering, and while I will undoubtedly continue to be bitter and twisted about unnecessary, badly optimised bodies, they will (I hope) be a lot less damaging than they are right now.
  7. Sadly, the Jira that I raised, https://jira.secondlife.com/browse/BUG-134006, was accepted but never actioned (in either of the two options that I suggested), which has still left us with this silly situation where Firestorm and the Second Life server agree about what the physics state and behaviour is, but the LL viewer is just wrong. Because of the number of times this question comes up, both for new builders and even those who have been around a while, I am adding a new "warning" to the FS mesh uploader. It will display a message if the default scale of the uploaded object would lead it to be convex once inworld. https://gyazo.com/2bc8cc06c1fad953d20692a07cc11c66 It doesn't fix anything, but it might save wasted Linden dollars on uploads. I have chased up that Jira today; perhaps we will get better server-side behaviour going forward, which would be the best thing all round.
  8. Good feedback, thank you. "Synchronous" is probably too technical for that tip, and I suspect that is my fault. Synchronous in this case means it happens on the same thread, interleaved with other things, as opposed to asynchronous, meaning happening in parallel. Coders should not be allowed to write tooltips 😉
  9. There's not really a general "optimum setting". It depends a lot on what else you have running on the machine and so forth. What we're basically allowing you to do here is provide strong hints to OpenGL about how much VRAM you can use. Due to the way OpenGL works, actual VRAM usage is governed by the driver, so the viewer has only indirect influence. What you probably want to consider is how much VRAM other programs need as well, and make sure they have enough; if you over-allocate you'll just end up with the drivers discarding stuff anyway. But OpenGL does take most of the control, so in the end you are somewhat constrained, and if a busy scene is really pushing you to the max, that is sadly how it is and you have to let the OpenGL driver do its job. You might find the help text (click on the ? at the top of the floater) useful, though it isn't worded very clearly (suggested edits most welcome).

Here is a very explicit explanation of what those numbers get used for (I realise that this may not be helpful to you, but it hopefully builds on the help text a little). What we want to do is get a number that represents the maximum VRAM ask, based upon the user settings. Minimum Viewer Texture Buffer is used as our "minimum ask". We then add the two reserve amounts to that to get the minimum requested. This gives us a total for how much the user would like us to use at a minimum. Using this number we then have to work out what is actually viable, ensuring we have at least enough room to work and do not exceed the total available and so forth. So, given the minimum ask from above:

We limit that to the total VRAM that the system drivers tell us you have, in case you've been overly greedy 🙂. We then do a little juggling where we work out the maximum total texture memory available (by asking OpenGL what is free and adding the amount we are already using to that number). This basically tells us how much VRAM the driver has available given all other usage at that moment in time. If the user has asked for more than the total the GPU is saying it has available, then that is fine; the driver will try to expunge other things based on our demands (remember what I said about hints). Ideally you should avoid this, as it causes delays when the card has to swap stuff in and out with multiple things competing for VRAM, so just set sensible limits. So we take the larger of the two values (min requested by the user, max available) to determine the amount of texture memory we want, but we deduct the GPU reserve to ensure that there is room for the working space that we wanted. This number is the total texture memory the viewer will try to reserve. Finally, we deduct the Cache reserve (the amount of space we want to keep for "working" textures) from this number, to give us the amount that we are allowed to use for "resting" textures. There is a rough sketch of this arithmetic after this post.

So... ultimately these are advanced options, and "(ab)use with caution" should be stamped on them. But so long as you are sensible, and not trying to allocate all of the card on the misguided assumption that the rest of the system will not want any, you should be fine. Work out how much of your card you'd like to allow FS to use; this value is what you set the Minimum slider to (M). Work out how much of the total VRAM you'd like the viewer to keep for active work; this is the physical reserve (R). Work out how much of total VRAM you'd like to hold back for the active textures in the scene; this goes into the Additional Texture Memory reserve (T). I would advise that M + R + T should be comfortably less than the full VRAM, but you can experiment and see how it works.

Additional observation: you have the decode concurrency set to 1, is there any reason for this? You are forcing the viewer to use a single thread for all texture decoding; on a machine like yours that is a significant barrier to performance, and I would generally only suggest that for really old machines that have just a couple of cores. Setting the concurrency to zero will allow the viewer to use as many cores as it wants when decoding textures, which means the scene should be grey for less time when you arrive, as it will be able to unpack multiple textures at once instead of one at a time. (All of that happens on the CPU, by the way, and is independent of the GPU parameters.) Caveat: the above is written based on the actual code in Firestorm. However, I was not the dev that wrote those lines and it's 2am... so "best endeavours" applies. Hope this helps.
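In case a worked form helps, here is a rough C++ sketch of the arithmetic described above. It paraphrases my description rather than the actual Firestorm code, and all the names (and the MB units) are mine:

```cpp
#include <algorithm>
#include <cstdint>

// All values in MB. A paraphrase of the steps described above, not the real FS code.
struct TextureBudgetInputs
{
    int64_t min_texture_buffer;   // "Minimum Viewer Texture Buffer" slider (M)
    int64_t gpu_reserve;          // physical/GPU reserve (R)
    int64_t cache_reserve;        // Additional Texture Memory reserve (T)
    int64_t total_vram;           // total VRAM the drivers report
    int64_t free_vram;            // what OpenGL says is free right now
    int64_t viewer_vram_in_use;   // what the viewer is already using
};

struct TextureBudget
{
    int64_t total_texture_memory;   // what the viewer will try to reserve
    int64_t resting_texture_memory; // total minus the cache reserve
};

TextureBudget computeBudget(const TextureBudgetInputs& in)
{
    // Minimum ask = minimum buffer plus the two reserves...
    int64_t min_requested = in.min_texture_buffer + in.gpu_reserve + in.cache_reserve;

    // ...capped at the total VRAM the system reports, in case we were greedy.
    min_requested = std::min(min_requested, in.total_vram);

    // Maximum available right now = free VRAM plus what we already hold.
    const int64_t max_available = in.free_vram + in.viewer_vram_in_use;

    // Take the larger of the two (over-asking just makes the driver evict
    // other users of VRAM), then hold back the GPU working reserve.
    TextureBudget out;
    out.total_texture_memory = std::max(min_requested, max_available) - in.gpu_reserve;

    // Finally hold back the cache reserve for "working" textures; the rest
    // is what "resting" textures may occupy.
    out.resting_texture_memory = out.total_texture_memory - in.cache_reserve;
    return out;
}
```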
  10. We do not support displacement maps in SL, so no, that will not work, I'm afraid. As you predicted, you'd have to convert the resulting displaced mesh into a high-poly mesh, which would not be viable in SL.
  11. The standard viewer has never allowed this limit to be increased; that has only ever been supported in third-party viewers. However, as Gavin stated, the forthcoming performance viewer update will address this. You can test the latest pre-release build of that by going to https://releasenotes.secondlife.com/viewer.html and looking for the latest performance viewer.
  12. Haha, that's the name I gave my Blender addon for SL. I never quite get around to telling people about it, especially as this year RL has been throwing too many curve balls. https://github.com/beqjanus/slender My plan for this, which gets constantly derailed, is to use it as a bridge between SL and Blender. It is very much an active work in progress. I have started and shelved the project many times, but @Vaalith Jinn, who previously built the temporary textures facility that you may be familiar with, is working on it, and I hope to be able to integrate their work once they are ready. The external toolkit is not entirely alien as a concept; consider Roblox Studio. I don't think it is a viable product for SL, as the effort needed to maintain it barely justifies it. We struggle enough to find viewer developers, let alone people willing to throw their spare time into a toolkit. Roblox has the advantage of having very deep pockets.
  13. We shouldn't blame OpenGL for these shortcomings; these are entirely down to historical choices made in SL, for valid reasons that have been undone by time. OpenGL has many issues, but take a look at Blender: it is OpenGL and has absolutely no issues at all with rendering PBR, nor does it have issues with HDR. We have an awful lot of old cruft built around OpenGL, and we are apparently hamstrung because Apple made unilateral choices to undermine open standards (even though the actual number of Apple users is less than those on Windows 7/8).

Yep, no quibbles around that, though again I detach PBR from the renderer as an exclusive. Consider the Principled shader, developed originally by Pixar and commonly regarded as the standard for things such as VFX. It is at its heart adherent to PBR principles, but it offers simplified, artist-friendly inputs (see Pixar on Principled Shaders).

Yes, maybe... see later. If you want to pick a format then GLTF is one, and it is open and not tainted by Adobe or Autodesk stink. But it cannot be the only option, because for many, many content creators in SL, 3D tools are not the path of choice right now. Again, as an option this makes sense, but it can only be an option, because it is not fit for purpose for many (possibly most?) workflows and use cases. But you are mixing up different things here too.

1) Bringing things in together: a single-shot upload of a fully textured asset. Not every map that comes into SL starts in Blender/Maya/Substance. In fact, today the vast majority come via Photoshop/Gimp. Consider clothing makers. Very few creators make money out of a single item with a single colour. The time in creation is dominated by mesh design and creation, but the revenue is derived from the colour sets and designs, the multipacks and fatpacks. Very few designers will want to upload a new mesh with every colour of fatpack. In fact, if they did, it would not only cost them a fortune but be a content performance disaster. Consider also the extensive set of creators that are texture artists, those that buy template mesh, many of whom never have a Collada asset but receive a full perm inworld asset and full perm maps from which to work. This is how many creators started out; it is how many still earn an SL living today. Their tool of choice is Photoshop/Gimp or another texture painting program, and their preview (WYSIWYG) space is through temporary textures inworld. It is imperative that single-map, bulk upload of some form is supported. GLTF is almost certainly not the right vehicle for this, and it worries me a lot that LL are racing ahead to pick a single means of transport (GLTF) without actually knowing what the full range of cargo that needs to be transported looks like. It is like buying a new motorbike before working out how often you need to take your entire family out with you. I have appealed for a specification to be published ahead of any deliverables, but the Lab's record on such things is beyond poor. We have seen time and again that an idea is formulated, code is laid down, and then when the shortcomings are pointed out it is "too late, you should have said before". I really don't think we (or they) can afford for this to happen. This set of changes has to be done right, and right from the very outset. There is no halfway house on this. We need to see the list of use cases and make sure that it fits.

2) What you see in your tools is what you get in world. Yes, 150% agreed... and this has almost nothing to do with PBR explicitly.
You can call the new renderer Colin or Mildred; it does not matter one iota. If our creators can make something in Blender/Substance/Maya/Daz etc. and, through a simple well-defined workflow, transfer that creation to Second Life so that their artistic visions are realised inworld, nobody gives a damn what name us rendering geeks and nerds give it. "Wow, that new Mildred update is awesome, my stuff looks exactly like I designed it." It is the need to be able to realise our artistic visions that drives SL creators to push the envelope all the time, and why, therefore, a halfway-house, partial solution will fail to meet the actual needs of the creator community and cause more strife.

I voiced my concerns last week that the plans to create PBR assets and a lightweight PBR render pass BEFORE considering the lighting changes and other aspects are fundamentally flawed and dangerous. Consider the problem you have highlighted above. A lot of content looks shabby, poorly lit and dull. There is also a lot of really impressive content that works really hard to make the best of a bad lot; such content calls upon numerous features of the existing renderer, adds baked lighting and additional AO (to compensate for the very weak SSAO in the viewer). Such content, when placed into a PBR pipeline, is likely to be detrimentally impacted compared to the content that was less artistically designed (for want of a better term). We saw good evidence of this with EEP. A number of the issues that had to be "rolled back" and "toned down" were actually moving the lighting to more physically correct levels. The outcry of "OMG all my things are overexposed" was only drowned out by those who thought their shadows were now too dark. Applying your current content to a PBR shading model will work, as you have shown, and improving the lighting solutions will lift average content to a new level. But in doing so it will break the content that had already gone the extra mile to compensate.

So... if the Lab delivers upon the plan to roll out some halfway-house nod towards PBR that does not include proper lighting and appropriate calibrations, then the first thing that will happen is people will realise that the lovely items in their PBR workflow tools still do not look the same in SL; they'll perhaps look better, but they won't be right. The next thing that happens is that they come up with workarounds and solutions to compensate for these shortcomings (baked reflections and lighting will be the immediate ones). What we've created overnight is yet another set of content that is going to be ruined by the proper PBR when it arrives; another mob with pitchforks and torches to shout down the proper revision of the pipeline. This is the number 1 reason I am most agitated about all this pseudo-PBR talk. At the end of the day, maybe 10% of creators (I am probably being generous) actually care about PBR, in the sense that they understand the PBR concept and are fluent in what it does and why. A far, far larger proportion want to see PBR because they know that's what they are working in in Blender/Substance/Maya/Cinema4D/Daz etc. These combine with the third demographic, those who don't care about the acronyms, the physics models, the lights... the "I just want my stuff to look right when I upload it!!" crowd, which in the end is the real problem. You spend hours crafting an amazing model, then painting it in glorious materials, only to have the SL lighting take all the life out of it and make it look sad.
They just need Mildred to do what they want; they don't care how (those of us who do care want the same, but are more hung up on the science). Again, 150% agreed. This is needed more than any change in what maps we have. As you have shown (we are singing from the same songsheet here), by improving the shader, upgrading the lighting and modernising the rendering, we can lift up the dull, dark, muddy content we have today. Sure, PBR/Colin/Mildred/whatever will make it look even nicer and open up new possibilities, but this has to come first: partly because it will break content that already compensates and the sooner we understand those extents the better, but partly because we all win.

Yes, and to be fair this is what the Lab are proposing, in a different way. The plans that we have heard so far state that the "PBR" will be an additional render pass alongside the other passes we have today. Let's ignore my look of incredulity that PBR could be a single additional pass (I'm assuming that's a lost-in-translation problem); the concept being touted is to make the new rendering appear in parallel to the old. This preserves old content look and feel and introduces the new. In theory that can work, though I struggle here because of the simple fact that PBR is all about the light. This means that to deliver PBR we have to have updated lighting (see above, I believe we need this to be first, not last, in the plan); if the parallel legacy pipeline retains the legacy lighting, that will be incongruous at best. Another solution that does not have this problem is your path, where the new environmental models are used and we have a best-endeavours attitude to existing content as it gets passed through the new shaders. What I would not like to say is whether your solution is more acceptable for the majority of modern content and older content alike than the other one. Which causes the least screaming? This is going to upset some apple carts. I don't think that can be entirely avoided; the real challenge here is to find the solution that results in the fewest apples rolling in the gutter.

Henri raised the important concern about performance and inclusion. I did a lot of data mining recently to analyse the nature of the FS user base. We know that many users have ALM off and that there are multiple reasons for this (an entire separate thread could spawn from that); notable among them, though, are those with lower RAM quantities and those who are on limited (and even metered) networks. A fallback rendering solution is required. I have proposed this through the provision of an exclusive baked light map, distinct from any future albedo, which would allow those that only want the simplified view to choose to only ever render the baked light. I am sure there are many other (better) solutions to this too, but it does need to be a consideration somewhere. We've probably all forgotten the days when travel was a thing, but as the world returns to work and conferences, more and more of us will resume spending part of our time on less than typical networks and devices, and irrespective of the liquid-cooled Alienware, neon-meltdown super rig that is sitting at home, we'll be needing to connect to SL on the shoddy shared wifi of the downtown AirBnb or low-cost hotel. Fallbacks are good, if we can fit them in.

The other question, that I have asked and not really had a convincing answer to, and which you are screaming from the rooftops with much the same result, is: "Should we really be doing this on top of OpenGL at all?"
OpenGL can deliver all this, but we know it has limited miles left; we have the issues around rendering performance and platform support, to name just two reasons we expect OpenGL to be replaced. Sadly, we've been talking about it for at least three years with no actual progress (you've made more progress). If we build PBR on top of OpenGL, can we be sure that when we subsequently migrate to A.N.Other graphics API it will 100% match? Or are we going to have another round of breaking things? Would we be far better placed to bite the bullet and replace the pipeline, "do a Joe" and upgrade the lighting and rendering (but let's not call it PBR...), and then, once that's in place and we're all in a better, faster, lighter new world, introduce the new assets and new material support with full PBR (because our engine already does it)? Personally, I think that this is the best path and should have been the path taken before now. BUT consider it from the other side: is it better to give our long-suffering content creators something that helps them really push SL content forward NOW rather than later? The big problem LL have, and I do not envy them in this, is coordinating these disparate concerns and needs. Forming all the needs into a cohesive plan that can be delivered in a timeline that makes sense, shows continual progress and does not break more stuff in the process is no small task.
  14. You underplay the excellent work and truly impressive achievements that you have made, starting from scratch and building from first principles. What you have as a result is a very nice-looking, vivid scene, a great illustration of how, with a more capable renderer, things can look quite striking. The lighting is a truly massive uplift and the single biggest failing in the current renderer. However, I'm far from convinced that calling this PBR makes it PBR, though perhaps some of what you show here can help reduce the painful visual conflicts that will result if you simply place a proper PBR system alongside the poorly lit and lifeless SL system. In a sense, what you have is akin to the shader setup I use in Blender: a principled shader being fed with the existing SL maps in a way that mostly works inside the PBR concept but is not physically correct. I am intrigued by your spec colour averaging to grey, as I would imagine that it should break a lot of metal in SL, where the spec colour, combined with the inverse roughness channel in the normal map alpha, is used to give a present-day approximation of metalness, resulting in the "glint" of gold. I have to agree that the most important step is to fix the abhorrent mess that environmental reflection mixing causes; this alone accounts for why so many attempts to use materials in SL today wind up with things looking either wet or plastic-coated, rather than leather or polished. We've talked recently about the poor resolution of the cubemaps, but it is not even that. I have been running an FS with higher-resolution cubemaps for both the skymap and the shinymap, and whilst I cannot rule out oversights in my hacked-together test, it is my belief that the generation is too poor; e.g. the skymap rendering is sampling from the skydome but does not appear to sample the clouds. The shinymap is just a weird dirty grey to get around the fact that, as you note, we cannot differentiate indoor from outdoor, occluded from open. Great work, I love seeing how you are innovating within each cycle that you show us.
  15. Not a bug, per se. I suspect that you have a build up of notifications and when the viewer starts it is replaying them. Give this a go, hopefully less painful than a reinstall. Open the notices panel and clear them out. I don't have any system notices to demonstrate, but I have a lot of group notices and the method is identical. https://gyazo.com/fc61ffe8493624848b7a1c06e2a4d13a
  16. I'm far from convinced it ever went away. There have been various guesses as to what causes this (as per my comment above) but I don't think anyone has nailed it down.
  17. Out of interest, do you both have edit rights to the plants (i.e. do you have edit rights to your partner's objects and/or vice versa)? The potential causes of this bug revolve around material updates and shared edit rights and a perfect storm... It's not super important but does add an extra data point for the future.
  18. That does indeed look like the alpha texture bug. I assume that this is visible to everyone once it has been triggered. It happens in all viewers. It tends to happen to plants, possibly because these are more noticeable. There are Jiras for this; the only one I could quickly find was this one, which is set to "needs more information", so you might want to try adding a comment: https://jira.secondlife.com/browse/BUG-230211 @Whirly Fizzle will know if there is a canonical Jira that has all the history of this bug, I am sure. It appears to be independent of viewer. This is a better report: https://jira.secondlife.com/browse/BUG-8715
  19. The viewer uses a hardcoded string for the product name; the rest of the data is filled in from the OS:

```cpp
#include <CoreServices/CoreServices.h> // Gestalt lives here (long deprecated)

// Product name is a hardcoded constant; the version numbers are queried from the OS.
const char * DARWIN_PRODUCT_NAME = "Mac OS X";

SInt32 major_version, minor_version, bugfix_version;
OSErr r1 = Gestalt(gestaltSystemVersionMajor, &major_version);
OSErr r2 = Gestalt(gestaltSystemVersionMinor, &minor_version);
OSErr r3 = Gestalt(gestaltSystemVersionBugFix, &bugfix_version);
```
  20. Just for context, here is a graph of FS sessions in Feb 2022. What we can never discern from these is what the relative contribution to the economy is. 1.2% Linux and 5.6% Mac is still a decent number of users.
  21. I've mentioned this in my blogs (but probably after the point where most people have already fallen asleep): the majority of TPVs have an optimisation, originally provided by Niran, that will skip over any fully transparent attachments. Notably, these have to be 100% transparent at the material level (i.e. the transparency setting on the texture tab in the build floater); it does not work for fully transparent textures (because there is no fast way to confirm that a texture is 100% transparent). There is a rough sketch of the idea at the end of this post. It is for this reason that my table on bodies earlier in this thread shows the "typical visible" count of faces (which is based on some relatively arbitrary sampling at events and other crowded places). It is also the reason that I added the "visible faces" count to the inspect floater in an earlier release. If you do wear an alpha-segmented body, then ensuring that you toggle off as many faces as possible for the outfit you are wearing will help reduce your count. It is also why I am typically less concerned about multi-pose feet and multi-style hair than I once was: the invisible parts of the hair have very little render time overhead. However, a word of caution in this regard. The viewer still has to load the meshes into RAM and do various other items of prep work that typically happen on a separate thread, so while they do not affect the FPS outright, they do have an impact on memory consumption and the overall scene rendering time upon arrival (how long things are grey blobs for). Nothing is entirely free 🙂

I transitioned from Physique Original to HG when the Redux was released, primarily because of the clothing supply. As you say, you can get a reasonably decent shape, with a little more padding than we were perhaps used to (hey, we're all getting older). However, I do also have the petite boobs addon for Slink. This gives you a nice itty-bitty option, but does cut down on clothing choices further still. As such I have two sets of shapes that I use: my petite boob shape, which I wear with "proper" HG petite clothing, and a "mostly flat" non-petite shape, which gives the smallest acceptable breasts on the normal HG and is usable with all HG clothing that is fully covered, such as sweaters, overcoats and high-neck dresses (it is not a nice look when uncovered). I believe there is a flat butt option too, though I do not have it. Incidentally, bringing things back to the technical domain, the provision of the boob and butt options is why I listed the Redux as "low cut" count as opposed to "no cut": in order to provide the interchangeable boobies, the base body has to allow the larger boobs to be set invisible, and this is done as a single cut per body part.
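For the curious, the shape of that optimisation is roughly as in the sketch below. This is only an illustration of the idea, not the actual TPV code; the types and names are invented:

```cpp
// Illustrative sketch only -- invented types, not real viewer code.
struct Face
{
    float material_alpha;         // transparency set on the texture tab (1.0 = opaque, 0.0 = fully transparent)
    bool  texture_may_have_alpha; // deliberately NOT consulted below -- see note
};

// A face can be skipped cheaply only when the *material* says it is fully
// transparent. A fully transparent *texture* cannot take the same shortcut,
// because proving that every texel is transparent would be too slow.
bool canSkipFace(const Face& face)
{
    const float kEpsilon = 0.001f;
    return face.material_alpha <= kEpsilon;
}
```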
  22. There is some cost in them just being there; what you see is basically the cost of working out where they are and deciding that they are not relevant. It depends on a couple of factors. As I noted waaaay up higher in this thread (I don't expect anyone to have seen it), the current version of this feature has limited tuning capability for scenery; there are only a few things it can actually tweak in that regard (in an avatar-rich scene there is more it can and will do). Within those parameters that it does alter, I have tried to place "sensible" defaults; there is, for example, a limit on how far it will adjust the draw distance and the highest water reflection level. It tries to set these based on the existing settings (which implies that if you were already outperforming the limit, it will not, by default, improve the settings). You can adjust the upper limits by clicking the "gear icon" next to the auto-tune button; this takes you to more advanced settings, and in there you can change some of the targets. This is an area that I expect to evolve in the future. One of the problems with scene-wide settings changes is that they have a large impact. Water reflections, for example, can (for some people) take you from 35 FPS to 20 FPS, and one of the difficult tasks in trying to write an algorithm to manage this all is working out when you are "good enough": you really don't want water reflections flickering on and off every few frames as the frame rate hops to one side of the target and then the other. It needs to be able to settle, and I am far from happy with how that works in many situations at present (there is a toy sketch of the settling idea after this post). Yes, some hair is scary slow and I've lost a few of my faves over this 😞 On the plus side, we can demo them now and make more informed choices; at least that's the idea.
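Here is the toy settling sketch mentioned above. It is not the auto-tune implementation, just an illustration of the kind of hysteresis and dwell time needed so that an expensive setting such as water reflections does not flap on and off around the target; all thresholds and names are invented:

```cpp
// Toy hysteresis sketch, not the FS auto-tune code.
class ReflectionToggle
{
public:
    explicit ReflectionToggle(float target_fps) : target_(target_fps) {}

    // Call once per (smoothed) FPS sample; returns whether water
    // reflections should currently be enabled.
    bool update(float fps)
    {
        if (enabled_)
        {
            // Count consecutive samples clearly below target before disabling.
            frames_below_ = (fps < target_ * 0.9f) ? frames_below_ + 1 : 0;
            if (frames_below_ > kDwellFrames)
            {
                enabled_ = false;
                frames_below_ = 0;
            }
        }
        else
        {
            // Require a comfortable margin above target before re-enabling,
            // so the setting does not flap straight back off again.
            frames_above_ = (fps > target_ * 1.25f) ? frames_above_ + 1 : 0;
            if (frames_above_ > kDwellFrames)
            {
                enabled_ = true;
                frames_above_ = 0;
            }
        }
        return enabled_;
    }

private:
    static constexpr int kDwellFrames = 120; // a few seconds at 30-60 FPS (invented)
    float target_;
    bool  enabled_ = true;
    int   frames_below_ = 0;
    int   frames_above_ = 0;
};
```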
  23. Thanks for this @Echelon Alcott. I think I know what is happening here, and I think it is basically an oversight on my part in how I track the sum. It is kind of a bug, but it is not that the code is wrong; I just didn't think about there being a mismatch between the way that the sum is calculated and the smoothed costs of each item. In theory, you are correct, the total there should reflect the sum of the listed ones. There is a caveat (and within that, the bug): the numbers you see on the lists are "smoothed" to remove the jitter you see from frame to frame (I use the term smoothing as I got told off in the past for talking about statistical averages 🙂 - but I'll expand on it if anyone cares). The sum at the top right, on the other hand, is a point-in-time (this frame) sum; every frame the counter gets set to zero and the total is accumulated anew. As such these will differ; both are right in their own way, but they are not comparable (sorry). There is a tiny sketch of the mismatch after this post. What I should be doing, in order to make the sum and the parts correlate better, is to smooth the sum over time as well. It'll still be slightly astray because of the nature of the smoothing, but it will be a lot closer than now.
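Here is the tiny sketch of the mismatch. It assumes each listed item shows an exponentially smoothed cost while the headline total is rebuilt from zero every frame; the smoothing factor and numbers are invented purely for illustration:

```cpp
#include <cstdio>
#include <vector>

// Purely illustrative -- not the FS code. Each item keeps an exponentially
// smoothed cost, while the headline total is rebuilt from the raw per-frame
// costs, so the two can legitimately disagree from frame to frame.
int main()
{
    const float alpha = 0.1f;                 // smoothing factor (invented)
    std::vector<float> smoothed = {0.f, 0.f}; // per-item smoothed costs

    // Two frames of raw per-item costs (ms), with some frame-to-frame jitter.
    const float frames[2][2] = {{1.0f, 3.0f}, {2.0f, 1.0f}};

    for (int f = 0; f < 2; ++f)
    {
        float instantaneous_total = 0.f; // reset to zero every frame
        float smoothed_sum = 0.f;
        for (std::size_t i = 0; i < smoothed.size(); ++i)
        {
            instantaneous_total += frames[f][i];
            smoothed[i] = alpha * frames[f][i] + (1.f - alpha) * smoothed[i];
            smoothed_sum += smoothed[i];
        }
        std::printf("frame %d: headline total %.2f ms, sum of listed items %.2f ms\n",
                    f, instantaneous_total, smoothed_sum);
    }
    // Smoothing the headline total with the same factor would bring the two
    // numbers back into line, which is the fix described above.
    return 0;
}
```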
  24. Attached particles are (I believe) the one part of the CPU-based avatar rendering that I have not been able to isolate in this first version (though part of the point of this version is also to find other corner cases if we can). The particles issue was raised during the QA cycle by one of our testers (ironically also a Mac user - what is it with you Mac users and particles 😉?). At the moment their cost should be showing as "scenery" in the summary stats, because I have not associated them with an avatar. Is your particle avatar available? I'd be interested in having a look at it to see whether I can capture it properly for the next release.

In terms of "viewer load", the viewer was historically designed as a single continuous loop. When you are not "tabbed out" and have not deliberately limited the FPS, it will simply spin as fast as its little CPU legs will carry it. This results in the typical usage profile being 100% of a single core. The OS schedulers hide this a bit. Windows is the worst of the bunch, spreading your 100% of one core across all the available cores (thus if you have 8 cores, you'll typically see Windows showing an average core utilisation of 12.5%; see the small arithmetic sketch after this post). On Linux it tends to be more literal and you get the full 100% core usage shown. Mac, I have no idea, as I have no Mac, though due to some issues uncovered in the QA cycle I did recently discover that the viewer, when running on Apple Silicon, does not correctly determine the clock frequency, reporting 2.4GHz when the nominal frequency of the M1 is 3.2GHz. These days we've moved a bunch of ancillary tasks off onto other threads, but the main thread will still spin as fast as it can, constrained by single-core speed, IO latency and, where applicable, the GPU driver (see previous note on swap buffer). As a result, the total machine-wide CPU load of the viewer will be one and a bit cores' worth, the size of "a bit" depending a lot on whether we're busy decoding textures or doing one of the other "off main thread" jobs. But for all intents and purposes the viewer is always going to spin as fast as it can (it should throttle back a bit on the login screen IIRC, but I've not checked that lately, and it still spins, just not quite as maniacally as it would like to). I am interested to hear that the particles cause the GPU to spin high. It was my understanding that we handled particles in a very inefficient manner, and I would have expected them to be a CPU bottleneck, but that is something I can explore.
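The small arithmetic sketch mentioned above, for completeness (the core count is just an example):

```cpp
#include <cstdio>

// Toy illustration of why Task Manager under-reports the viewer's load:
// one fully busy thread spread across N logical cores shows as 100/N percent.
int main()
{
    const int    logical_cores = 8;   // example machine
    const double busy_cores    = 1.0; // the main thread pegs one core
    std::printf("average utilisation reported: %.1f%%\n",
                100.0 * busy_cores / logical_cores); // prints 12.5%
    return 0;
}
```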
  25. Right-click "zoom to" on the avatars in the nearby panel, which will zoom your camera to that avi. I think that will do what you want, though to be clear, none of this is load on the server but on the viewer. What my post should have said, to make it less ambiguous, is that I'll have a "self-facing" camera to help make sure you are getting a proper account of your own attachments. I'm not sure we have a good measure of server load at the present time, especially as the server efficiency when running scripts was apparently boosted a month or so back. Meshes, textures etc. have close to zero impact on the server because, apart from sending us small real-time updates on positions etc., everything else comes from the content delivery network (CDN) and is never touched by the server. A question worth exploring, though: do avatars constitute a significant load on a server? If so, what aspect of them? Scripts will no doubt be part of it, physics another, but what else?