Everything posted by ChinRey

  1. Sculpts are heavier for the cpu and to some extent the gpu than comparable meshes. This is not because sculpts are inherently heavy, it's mostly about poor implementation. Meshes also often require less heavy texturing than sculpts although it's very rare for SL mesh makers to take advantage of this nowadays. However, well made and properly optimized sculpts will usually have much smaller file sizes than the same shapes as mesh and that affects performance in several ways. How the LOD models end up is also an important factor for the performance difference between sculpts and mesh. That's a topic for a big discussion in itself and it depends a lot on the creator's care and technical skills. There is no absolute answer. Well optimized mesh will usually be better than even the best optimized sculpts but there are exceptions and when it comes to the usual poorly optimized sculpts and meshes, I'm not sure if we can even come up with a rough guideline. Btw, I have to add that I don't entirely blame the LL developers for the poor implementation of sculpts. They were obviously under a lot of pressure to finish the task with limited resources and time so they were forced to take some regrettable shortcuts. They really should have done a better task analysis though and that's their responsibility.
  2. It's about tri count. Prims are standardized multi-purpose shapes which means they will nearly always contain some tris and vertices that aren't actually needed but still keep the gpu busy. With well made mesh you reduce the gpu workload by eliminating all that extra geometry. If you look at the second test of mine, the prim linkset has 48,960 tris whilst the identical looking mesh has 3,060. There's no doubt whatsoever that it was much easier for the gpu to handle the mesh than the prims in that test. (The number of faces is also a minor factor. In this test the prim linkset has 1,530 different faces, the mesh only one. The SL software does consolidate draw calls for prims and static meshes - but not for rigged/fitted mesh unless something has changed recently - so the prims still only use a single draw call, but consolidating is of course a little bit of extra work for the computer itself.) However, the gpu can't draw all those tris before they have been loaded and processed by the cpu. Prims have considerably lower file sizes than meshes and are also easier to handle for the cpu in other ways. There is no doubt that prims will outperform mesh if the tri and vertex counts are the same for both. But with a mesh you can improve the performance by cutting down on the tris. The question is how much do you have to cut before it beats the prims? The way land impact is calculated suggests that 24-50% is more than enough. That's obviously not true, the land impact system is made to strongly favour mesh over prims, but 75% should do, right? This test suggests that even 95% is not enough (see the quick calculation after this post). Oh, and there's also no doubt that prim builds outperform the kind of quick-and-dirty unoptimized mesh we see far too often in SL. But I never do that kind of mesh (Note to self, remember to remove this link before submitting the post.) I have one important reservation though: This is a single test only. Both the scene as a whole and the client hardware are important factors here so if we want to determine an average optimal prim-to-mesh ratio, we need to perform such tests in different places with different hardware.
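A quick sanity check of the numbers quoted in that test (the tri counts come from the post above; the rest is plain arithmetic, not a new measurement):

```python
# Tri counts from the prim-vs-mesh test quoted above.
prim_tris = 48_960
mesh_tris = 3_060

# How much geometry the mesh version cut away.
reduction = 1 - mesh_tris / prim_tris
print(f"The mesh cut {reduction:.2%} of the tris")  # -> 93.75%

# Yet the prims still rendered faster in that test, so the
# break-even reduction (if one exists at all) must be higher still.
```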
  3. A little bit off topic but I'm not quite convinced about this. I don't know if you saw this thread, Beq. In one of the tests I made for it I had a 48,960 tri prim build outperforming a 3,060 tri mesh. It's hard to see how that can be explained unless cpu performance is still the bottleneck. But of course, the cpu vs gpu "power balance" is an important factor here and my computer - a 3.60 GHz Intel i7-11700K cpu and an Nvidia GeForce RTX 3090 gpu - may be significantly more slanted towards gpu power than what most people have. It would be great if we had similar, and less casual, tests with different hardware setups.
  4. That's brilliant! I'd love to hear more about how you modify the normal and specular maps to get such a good result!
  5. Except it doesn't do exactly that of course. The viewer has no way of knowing which avatars are heavy to render and which aren't so it more or less picks some avatars at random to "jellydoll" out.
  6. But then I would have had to do the same with the mesh to have a fair comparison. 😉
  7. Do you want the long answer? I wrote this a while ago and have already posted it on the forums. But it's so buried in all the old content here I think it's better to repost than to link. Just a very short explanation to one of the problems most builders run into every now and then: Why does the land impact (or "LI" or "prim count") suddenly rise or drop when objects are linked together?

1. The History of LI

Originally the land impact of a linkset (that is a set of objects linked together) was calculated very simply: 1 LI for each prim. That didn't work in the long run though. LI is supposed to indicate how much load an object puts on the servers and the network and different types of prims can be very different there. The problem became even more critical with the introduction of mesh - there simply is no sensible way to handle meshes with that old system. The only solution was to introduce a brand new model for calculating LI, based on three of the four "weights" an object has. This modern calculation is far from exact but it gives a much closer estimate of the actual load the object causes. There was still a problem though: Quite a few older builds would break under that new calculation method, that is their LI would increase beyond the limit allowed. The solution to that was to use both calculation methods in parallel: anything that could have been built before the new LI formula was introduced still has its LI calculated the old way, anything that includes features that didn't exist back then is calculated the new way. Quite confusing and hardly an ideal solution but there really was no alternative.

---

2. The Jumping LI Problem

One problem this dual model causes is that the LI of a linkset can suddenly jump up or down when objects are added or modified. It only takes a single object with a single modern feature to switch the whole linkset between the two formulas and the difference in LI can be huge. Usually the modern formula gives the best (that is lowest) result but there are exceptions and it's not that uncommon for LI to increase by several hundreds - or even thousands - if the modern LI formula is triggered.

---

3. The Solutions

a) To trigger modern LI calculation

This is what you usually want to do and the solution is simple: just introduce one modern feature. Usually what you do is "convex" the linkset, that is change the Physics Shape Type to convex hull. One minor warning though, some prims may act a little bit funny when convexed. If that is a problem, just keep the physics shape type of those prims as "Prim". You only need to convex a single prim in the linkset to trigger modern LI calculation.

b) To fix LI jumps

The reason why the old method of LI calculation is kept is that some older builds have very high actual LI - in extreme cases several hundred or even thousand times the number of prims they contain. There aren't that many of them but it can be a rather nasty surprise if the one you're working on is one and you do something that triggers modern LI calculation. The simplest and most obvious solution is of course to revert the build back to its original state. But maybe you'd want to fix the problem instead? As far as I know, huge LI jumps are always caused by physics weight. (In theory it can also be caused by download weight but I can't think of many realistic scenarios where that will actually happen.) So the first thing we should try to do is to reduce the physics weight.
No, the very first thing we should do is take a backup copy of the linkset into our inventory, *then* we take a look at the physics weight! An object in SL can have three different physics shape types:

• Prim: more or less the same as the shape you see.
• Convex Hull: a simplified convex shape wrapped around the object, as if it was shrink-wrapped, with all hollows and indentations filled in. Can give a much lower physics weight than Prim.
• None: no physics shape at all. More or less the same as phantom - except it works for individual objects within a linkset. Removes physics weight completely.

The physics shape type determines the object's interaction with an avatar. It's the shape you crash into or walk upon. It has no other function than that. To minimize physics weight, keep all objects that actually need a detailed physics shape (the ones with walkable surfaces, hollow prims you're supposed to walk inside or through etc.) as "Prim", change all objects you're not supposed to interact physically with to "None" and change everything else to "Convex Hull". Smaller LI jumps can be caused by download weight or physics weight. If it's physics weight, you can use the method above but usually there's no simple way to reduce download weight so if that is the problem, the only easy solution is to revert the build. That is, unless you're desperately short on LI, you can just leave it as it is. After all a modern LI calculation is just a more accurate estimate of the linkset's load. So unless you're running out of LI, the jump doesn't actually change anything.

c) When the LI count doesn't revert

Sometimes when you revert a linkset to use the old formula, the LI figure you get in the Build window stays high. It might be that there is some minor detail you missed when reverting the build but most likely it's just that the data isn't updated. If so, you can unlink and relink to force another recalculation but there's no real need to worry. The LI count you read in the Build window is calculated by your viewer and not the server and it shouldn't take long for it to be updated anyway.

---

4. Copyright Notice

This text was written 25.03.2014 by Rey (ChinRey Resident). Please feel free to distribute it any way you like as long as you don't change the text or charge any money for it.
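For those who want point 1 above in code form, here is a minimal sketch of the modern LI calculation as I understand it: the land impact is the largest of the three weights, rounded. The weight values in the example are made up, and the exact rounding rule is an assumption on my part:

```python
def modern_land_impact(download_weight: float,
                       physics_weight: float,
                       server_weight: float) -> int:
    """Sketch of the modern LI formula: the largest of the three
    weights, rounded, with a minimum of 1. The real servers compute
    the weights from the asset data and may round differently."""
    return max(1, round(max(download_weight, physics_weight, server_weight)))

# Made-up example weights: physics dominates here, so convexing the
# linkset (which lowers physics weight) would lower the LI too.
print(modern_land_impact(download_weight=4.2,
                         physics_weight=27.8,
                         server_weight=12.5))  # -> 28
```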
  8. You know me by now: I had to do a quick test of this of course:

No fps limit (in a setting that gave me about 420 fps): GPU use about 65%
Using VSync to limit the fps to 60: GPU use about 7-8%
Using the old fashioned fps limiter set at 120: GPU use about 9%
Using the old fashioned fps limiter at 60: GPU use about 7%

Standard disclaimer: This was a quick "snapshot" test and the result should only be taken as a rough indicator, not as the absolute answer.
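For anyone curious what the "old fashioned fps limiter" actually does, here is a minimal sketch of the idea (render_frame is just a stand-in for the viewer's real per-frame work, not an actual viewer function): the limiter simply sleeps away the unused part of each frame's time budget, which is why the GPU load drops so much.

```python
import time

def run_capped(render_frame, fps_cap: float) -> None:
    """Render loop with a simple frame limiter (sketch only)."""
    frame_budget = 1.0 / fps_cap  # seconds available per frame
    while True:
        start = time.perf_counter()
        render_frame()  # stand-in for the viewer's per-frame work
        # Sleep away whatever is left of the budget so the GPU idles
        # instead of drawing frames the monitor never shows.
        spare = frame_budget - (time.perf_counter() - start)
        if spare > 0:
            time.sleep(spare)
```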
  9. But you have all forgotten the most important question! Will the Upgrade Bees still perform all those small but crucial side tasks nobody else seems to take responsibility for? (Edit: Yes, I know Torley has left but surely LL has found a replacement who needs to be kept an eye on, right?) (Edit 2: On second thought, discovering Mrs. Higgs' bosom is probably a waste of time. From what I've been told it's very, very small so she wouldn't have fitted in with the SL crowd anyway. I suppose the task list does need a little bit of revision.)
  10. Yes, you see that in the net time too. I'm not sure why. Mesh is of course far heavier than prims when it comes to bandwidth but I did wait until things seemed to have stabilized before I took the snapshots. I hope this discussion will continue for a bit, it's quite intriguing and I don't think we have all the answers yet. But it seems to me there are three conclusions we can draw right away:

When it comes to actual performance, there is nothing to gain from converting prims to mesh, whether for the server, the bandwidth or the client.

In his blogpost "How Second Life Primitives [Really] Work" Avi Bar-Zeev mentioned that it's generally much easier for the client to draw algorithmically generated shapes with all their inevitable extra geometry than fully optimized (polylist) meshes. It seems this is an even more significant factor than I (and probably most others who thought of it at all) realized. It seems that as a rule of thumb we should expect one mesh triangle to be as render heavy as 20-30 (or even more) tris generated from prims.

The land impact system is even more broken and unbalanced than we thought. It's supposed to measure the work load on various parts of the system but when it comes to actual performance it seems that 1 LI of mesh can easily be as heavy on all parts of the system as 20-30 prims and even that may be a gross underestimation.
  11. There's always a question to what degree the minute differences in a small scale test like this relate to a real scenario. That is the main reason why I post here. It would be great to hear if others have some experience with this. When it comes to the server, there's also a question how relevant data from opensim and even from SL Beta are for the main SL grid. I'd love to do a proper full scale test on the main grid but that's not possible of course. Yes, but those factors should favour mesh over prims in these tests yet the results show exactly the opposite. When it comes to the client side I think the conclusion is already quite clear: replacing prims with identical looking meshes will never improve viewer performance. At best the performance reduction is small enough to be ignored, at worst it can be considerable. That doesn't mean prims are always preferable to mesh of course. There are things we just can't build with prims and besides, even I'm not that obsessed with performance. But I'd say we should always prioritise real gains over imaginary (i.e. LI reduction) ones in cases like this.
  12. Good point but the sim server still does quite a bit of work on each object even if the physics engine isn't engaged. I don't know why but it does and that's what the server weight part of the land impact calculation is supposed to represent. Then there's the frame rate. The second test from SL Beta is the clearest one here. The prim build to the left has 49,152 tris and 18,432 vertices. The mesh build to the right looks exactly the same but only has 3,072 tris and 6,144 vertices. And still the prim version renders considerably faster than the mesh. Of course, there's no practical difference between 530 and 500 fps but this is only one fairly simple object rezzed on a platform high up in the sky, well away from everything else.
  13. The only two ridiculous things are that LOD factor can be set globally by each user and that it can't be set for individual objects. The LOD factor determines the swap distances for an object and a good LOD model is always made to work with one - and only one - specific swap distance. Increase the swap distance and you're only getting worse performance and no visual gain, reduce it and you get collapsed meshes. This means the creator has to know what the LOD factor is. Is it 1? Fine, we make meshes that work for that. Is it 8? No problem, we adjust our meshes and work with it. What we can't possibly adjust our meshes to, is an unknown LOD factor. On top of that, ideally different meshes need different LOD factors and even different LOD factors for different LOD models of the same mesh. Here are two illustrations of Unity's LOD system, both from the official Unity manual: I hope you can see what's happening here: you can set the swap point for each LOD model for each and every mesh individually, and also the cutoff distance. You can even change the number of LOD levels for each mesh. This is how all modern game/virtual world engines do it and it's how SL should have done it. Unfortunately Second Life's mesh implementation was made by developers with little or no understanding of practical content creation and when some user who actually knew what they were talking about proposed a proper LOD handling mechanism, Andrew Linden shrugged it off as "too complicated". That was a HUGE mistake and the reason why we have all these LOD issues and arguments today. It's kind'a understandable they made that mistake if they assumed that prims were going to remain the main building tool and meshes would only be rarely used. But if that was the case, they were incredibly naive. Come to think of it, Andrew Linden is back. I wonder what he has to say about the issue now... Maybe he watches the forums... Paging @Leviathan Linden
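To make the comparison concrete, here is a rough sketch of per-object LOD selection of the kind the Unity illustrations show. The screen-coverage heuristic and the threshold numbers are illustrative assumptions of mine, not Unity's or SL's actual values:

```python
def pick_lod(distance: float, radius: float,
             thresholds: list[float]) -> int:
    """Pick a LOD level from per-object screen-size thresholds.

    thresholds[i] is the smallest approximate screen coverage at
    which LOD i may still be used (illustrative heuristic only).
    """
    coverage = radius / max(distance, 0.001)  # crude screen-size proxy
    for lod, minimum in enumerate(thresholds):
        if coverage >= minimum:
            return lod
    return -1  # below the last threshold: cull the object entirely

# Each mesh carries its own swap points, so the creator - not a
# global viewer setting - decides where each LOD model kicks in.
print(pick_lod(distance=50.0, radius=2.0,
               thresholds=[0.10, 0.04, 0.01, 0.002]))  # -> 1
```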
  14. A friend of mine has this lovely old prim house. It's rather heavy though - 1804 prims - so he asked me for help converting it into mesh. Yesterday I had finished replacing 275 of the prims with 25 highly optimized meshes, including 21 copies of the same (originally six-prim) object, and I thought it was time to check how much performance improvement I had achieved so far. There was none whatsoever. In fact both server and viewer performed marginally worse when dealing with the partly meshed house. The difference wasn't big enough to matter and well within the margin of error I should expect but it was consistent through several tests and the meshes certainly didn't improve performance: The graph on the Kitely web site showed a similar story but it's not really detailed enough to be useful for this; the server load it showed stayed at 1% throughout all the tests. I decided to do another test so I went over to Second Life Beta, made a linkset of 256 prim cubes and converted it into a 6 LI mesh. Here are the results: Again, the mesh performed slightly worse than the prim original. The difference was all but insignificant and still within the margin of error but there certainly was no improvement. I already knew that SL's land impact algorithm gives meshes an unfair advantage over prims but 256 prims performing at least as well as a 6 LI mesh??? Interestingly I once calculated that with improved asset handling it would be possible to get 20-25 prims to perform as well as 1 LI worth of mesh. These test results indicate that this is already the case and my calculation was well and truly on the conservative side. I really need some advice on this. Right now it looks as if converting prims to mesh is just a waste of time unless of course you're really struggling with the prim limit or want to add details that just can't be done with prims. But we need more info before we can draw a conclusion. What do you think? And has anybody else done similar tests?
  15. Oh yes, big unbroken surfaces are an exception of course but that's not the most common kind of mesh in SL.
  16. I'm quite sure there was but to be fair, I don't have any actual data about it. Maybe somebody else does?
  17. That's what I call "Mole mesh" and it's not a good idea either although not nearly as bad as butchering the LOD models. If it was that simple there would have been no point in having LOD models at all. You want to get rid of smaller details on objects that are viewed at some distance. Fewer tris and vertices improve the frame rate a little bit and improve load time quite a bit. But perhaps most important, with LOD all maxed out, you soon get tris so small the viewer has to find a way to squeeze several of them onto a single pixel on the screen and then you're really giving your poor GPU some work to do. I seem to remember that @Beq Janus posted an article about this in her blog a while ago. I can't find it now but maybe she can help.
  18. Thanks! That makes sense of course, no need for the gpu to update the picture faster than the screen can do it. But this needs to be documented and explained better. Most people don't read through the release notes and only a few of those who do will understand what "VSync" is.
  19. The latest Firestorm release has the frame rate capped at 60 fps. I'm not sure if this is one of the new performance improvement changes or if it's a bug, and I was wondering: is this the case with the LL viewer too?
  20. As Wulfie said, Animats can probably explain the details and so can @Coffee Pancake. However, one thing that is quite important in this context is that planar mapping completely ignores the object's UV map so there is nothing you can do in Blender to change the end result.
  21. And just for the H*** of it, here are my thoughts on how it could be improved. I really need to improve this article too. There are a few points I forgot to include.

Second Life's and Opensim's current texture map is brilliant in its simplicity but it dates back to 2002 and is not really up to today's standards. We have to be careful considering the various upgrades since some will increase the client side load significantly and others add several new parameters to edit. However, they will also provide a huge visual improvement and reduce the necessity for ground covering objects so they should all be well worth it.

• Precise elevation parameters
• Number of textures
• Perlin noise pattern
• Border blending
• Texture repeats and texel density
• Decals
• Synchronized corners

Precise elevation parameters

The elevations for the various textures are currently set with only eight user defined parameters, max. height for the lowest texture and min. height for the highest texture for each of the four region corners. This makes it very hard to control the blend. In particular it's impossible to keep a seabed texture from bleeding onto the land and a land texture from bleeding onto the seabed. To make matters worse, the specified values aren't even real, the highest texture extends way below its min. height and the lowest way above its max. height. Introducing user defined and precise min. and max. heights for every texture will make the UI more complicated but it doesn't add to the lag and should be well worth it. It may however be a good idea to keep an alternative simple UI similar to the existing one.

Number of textures

This is currently fixed to four which is a bit too much for some regions and not enough for others. This is especially a problem with coastal regions since you really want at least one texture for the seabed and one for the shoreline/tidal zone which leaves you with only one or two textures for the land. The solution is to increase the max. number of textures to at least six but also, to reduce the lag for regions that need simpler texturing, make it possible to disable unneeded texture layers.

Perlin noise pattern

The perlin noise pattern used by opensim is not ideal and can sometimes create annoying artifacts. Second Life does this better and it may be a good idea to look at which parameters it uses. However, maybe it's an even better idea to make the parameters user configurable. The three most relevant ones are roughness, scale and seed. It may also be a good idea to add x and y offset. Perlin noise blending is not always desirable so there should be an option to switch it off. One downside to making the perlin pattern user configurable is that it causes problems along region borders so this is only really a good idea if the border blending function is also implemented.

Border blending

Currently textures blend seamlessly across region borders if, and only if, the two regions use the same textures and matching values for the elevation parameters at both their shared corners. The obvious solution is to add blend zones along the borders where the textures from the neighbor regions are added and gradually faded out. The width of the blending zones can of course be set by the region owner, separately for each side, and the blending zones are deactivated where they aren't needed. It will take a bit of work to figure out exactly how to do it, especially how to handle region corners and borders where both sides are set up with a blending zone, and it will make the baking of the ground texture more complicated and time consuming for the client software but it would be such a huge improvement it is well worth considering. For larger continents it's almost mandatory.

Texture repeats and texel density

The original SL setup developed back in 2002 seems to have been based on 128x128 ground textures with 1/8 repeats/m, giving a texel density of 16 pixels/m. Current ground textures tend to be 512x512 (64 pixels/m) or even 1024x1024 (128 pixels/m). Today Firestorm at least uses a different repeat rate with 1/12 repeats/m which means there won't be a whole number of texture repeats across a region. With that repeat rate texel density is 42 2/3 pixels/m for a 512x512 texture and 85 1/3 for a 1024x1024. All those current texel densities are actually a bit too high for most purposes since a texture repeated as much as the ground's tends to produce obvious artifacts at high resolutions. Even the original 16 pixels/m is actually higher than what many modern games use and it's very rare for the ground to need more than 32 pixels/m. Most of the time we want to increase the texture resolution to reduce the repeat rate, not to increase the texel density. Not always though. Adjustable repeat rates (set separately for each ground texture) would be a great improvement. The options should probably be kept to values that give a whole number of repeats across the region (1/4, 1/8, 1/16 and 1/32) but we may need to include opensim's existing 1/12 for legacy reasons. This should not increase the lag significantly since people these days tend to (ab)use high resolution textures for the ground anyway. Allowing for adjustable texture repeats will only encourage region builders to use those textures more effectively. Even so and independent of this, it may be a good idea to add a client side option to substitute lower resolution textures when performance is a critical issue.

Decals

You want a sandy beach? Or a mountain top with a different texture than the surrounding landscape? Or a bright green lawn around your house? Today such features are done with mesh, sculpt or prim ground. Texture decals would be a much better solution since they would reduce the need for extra objects and are easier to blend into the surroundings. The blending can either be done with blend zones (allowing for tiled and larger decal textures) or with alpha decal textures (allowing for irregular shapes). Ideally we want both options available.

Synchronized corners

Matching the elevation parameters at region corners can be rather cumbersome so a function to do this semi-automatically would be really useful. The idea of the synchronize corners function is to give the region owner the option to select either of the (up to) four existing data sets for a corner, the average of them, or enter new data to be applied to all four regions where they meet.
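Since the repeats and texel density arithmetic above is easy to get wrong, here is the relationship as a trivial helper (the numbers just reproduce the figures quoted in the post):

```python
from fractions import Fraction

def texel_density(texture_px: int, metres_per_repeat: int) -> Fraction:
    """Pixels of ground texture per metre of terrain."""
    return Fraction(texture_px, metres_per_repeat)

print(texel_density(128, 8))    # 16 px/m - the original 2002 setup
print(texel_density(512, 8))    # 64 px/m
print(texel_density(512, 12))   # 128/3 = 42 2/3 px/m - Firestorm's repeat rate
print(texel_density(1024, 12))  # 256/3 = 85 1/3 px/m
```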
  22. As you know, Animats, I've been working on and off on my own documentation of SL/opensim and some futile thoughts on how they can be improved. Here's what I've got about the current ground texture system. Corrections and other comments are very welcome:

The ground texture is a blend of four tiled (one repeat per 12 m) textures. The blending uses masks made from a preset perlin noise pattern and gradients between the four region corners. The maximum elevation texture 1 is applied to, and the minimum elevation for texture 4, can be set independently for each corner of the region. Those numbers are way off though; texture 1 extends considerably higher and texture 4 considerably lower. To illustrate how bad this is: with max elevation for texture 1 set to 10 m and minimum elevation for texture 4 to 200 m, texture 1 is the dominating one up to c. 30 m and occurs up to c. 60 m while texture 4 is the dominating one down to c. 130 m and occurs down to c. 115 m. The perlin noise mask is shared across the entire grid, allowing for seamless texturing across region borders if, and only if, adjacent region corners are set to the same texture blend heights and adjacent regions share the same texture set. Compared to SL, opensim uses a different - and less satisfactory - perlin noise mask.
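As a rough illustration of the mechanism described above - not the actual viewer code, and the noise weighting, scaling and clamping are pure guesswork on my part - the per-point blend could be sketched like this:

```python
def bilerp(sw: float, se: float, nw: float, ne: float,
           u: float, v: float) -> float:
    """Bilinear interpolation between the four region-corner values."""
    return (sw * (1 - u) * (1 - v) + se * u * (1 - v)
            + nw * (1 - u) * v + ne * u * v)

def texture_index(x: float, y: float, height: float,
                  low_corners, high_corners, noise) -> float:
    """Blend position 0..3 along the four-texture stack (sketch only).

    low_corners/high_corners are the per-corner elevation settings
    (SW, SE, NW, NE); noise(x, y) stands in for the shared perlin
    pattern. All the scaling here is illustrative, not the real math.
    """
    u, v = x / 256.0, y / 256.0              # normalised region position
    low = bilerp(*low_corners, u, v)         # interpolated "texture 1 max"
    high = bilerp(*high_corners, u, v)       # interpolated "texture 4 min"
    t = (height - low) / max(high - low, 0.001)  # 0..1 along the range
    t += noise(x, y) * 0.3                   # perlin perturbation (made-up weight)
    return min(max(t, 0.0), 1.0) * 3.0       # 0..3 along the texture stack

# The integer part picks the lower of the two adjacent textures and
# the fractional part is the blend weight between them.
```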
  23. The default for Firestorm is 21 1/3 repeats across a region - one repeat per 12 m - but it can be adjusted to anything from 1 to 24 m/repeat: I'm not sure but if I remember right, the LL viewer uses one repeat per 8 m and I don't think it can be changed. Look at the ground texture sets in the library folder. There are two versions of each texture there - a regular one with 512x512 and a "Base" one with 128x128 resolution. I suppose that's the explanation: the "base" textures were simply lower resolution alternatives, probably for clients that couldn't handle 512s.
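The 21 1/3 figure is simply the region width divided by the repeat distance (plain arithmetic, shown here for clarity):

```python
from fractions import Fraction

REGION_SIZE_M = 256  # a standard region is 256 m on each side

print(Fraction(REGION_SIZE_M, 12))  # 64/3 = 21 1/3 repeats (Firestorm default)
print(Fraction(REGION_SIZE_M, 8))   # 32 repeats (the rate I recall for the LL viewer)
```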