6.4.23.562625 - Second Life Project Performance Floater


Mollymews

i like how the floater is laid out

i like that the numbers have been divided by 1000

my maitreya body says 5. the hdpro head I have been playing with says 6. And with all the other things I have attached, my weight is 69 and not 69,000. Which is a whole lot cleaner I think

i notice that one attachment is 16. And my hair is 12. Which is a lot when compared to my head and body. So that made me go oooo! better do something about them

only thing that I wonder about is that when I turn off Transparent Water then it also turns off Shadows. Seems it might be a bug? Shadows and Transparent Water appear in separate sections of the floater, and if there is a relationship between the two then the floater doesn't make this clear

other than this I like it quite a lot


I have very, very mixed feelings about this viewer. It needs a radical change to avoid making problems worse.

On 9/5/2021 at 8:44 AM, Mollymews said:

i notice that one attachment is 16. And my hair is 12. Which is a lot when compared to my head and body. So that made me go oooo! better do something about them

A great example. This is exactly why this viewer cannot be released: it has just convinced you to change something, quite possibly for the worse. The 12 ARC is not that badly skewed; you can get hair with 10 or 20 times that (incorrectly calculated). In this case it may be that you have a rigged hair with multiple styles incorporated and a number of textures. If it were higher, I would more confidently guess that your "problematic" hair was unrigged. Unrigged hair is penalised by ARC unequally compared to its rigged equivalents: given the same base mesh, the rigged ones will take at least as long to render, and yet their ARC can be an order of magnitude lower. Thus the entire premise of the data displayed in this floater is flawed.

Back to the floater itself.

I very much dislike the renaming of existing variables such that they have different names in preferences to those in the floater; this means that anything learned in the new floater does not easily transfer to the main preferences. These should be made the same.

The avatar list is a nice display; however, it comes to the very heart of my dismay. It uses ARC, which is so fundamentally wrong as to be misleading. The Maitreya body, as an example (but not the worst), is a significant rendering overhead because it clings to alpha segmentation.

This is not a poke at Maitreya; ALL multipart bodies are very bad performers. In the case of Maitreya, a full body will require in the region of 300-400 draw calls (batches of triangles sent to the GPU). Legacy is the worst, with Belleza close behind. Male bodies are typically worse than female ones too.

Compare this to bodies that have sensibly embraced BOM properly, such as Slink Redux or the Kupra, which have far fewer (typically 10-15% of the number of draw calls). The render time in my experiments is almost linear in the number of draw calls (almost irrespective of the triangle count, even on older laptops without GPUs). ARC does not come close to reflecting this; it is mired in concerns of triangles. Triangle counts do have an impact, it is just hidden in the noise when draw calls are overused.

The result is that you have a sorted list with supposedly the "worst" lagging avatars at the top, but which is generally misleading and in many cases utterly wrong. You can easily get into the situation where someone wearing a comparatively efficient SLink or Kupra body and a relatively low-overhead unrigged mesh hair appears with a very long line at the top of your list, while a person with a body full of alpha segments and a rigged mesh hair is way down lower. By using the tool you eliminate the efficient avatar and keep the inefficient one: utter madness. Moreover, we'll see this used to persuade people, either through finger pointing by their friends/associates or through well-meant self-improvement, to ditch their efficient outfits for worse ones, achieving the opposite effect to that intended.

I am in the process of developing a way of scoring these that will alter this and give more accurate data personalised to your machine. Assuming I can get it to work, I'll be contributing it to the Lab in the hope that we can avoid this new "finger pointing" disaster.

TL;DR The presentation of the floater is not bad and the idea is good, but the information it provides is completely flawed and in many cases counter-productive.

 


1 hour ago, Beq Janus said:

I am in the process of developing a way of scoring these that will alter this and give more accurate data personalised to your machine. Assuming I can get it to work, I'll be contributing it to the Lab in the hope that we can avoid this new "finger pointing" disaster.

TL;DR The presentation of the floater is not bad and the idea is good, but the information it provides is completely flawed and in many cases counter-productive.

 

i maybe never gave enough info in my first post

i like the idea that the numbers give me a way to compare to my body, in a simple way. I like that the numbers are low. Like body is 5. 1 ~ 5 is better. 16 ~ 5 is not all that good, relatively 

my tail is also 5. Which seems about right as it is 32 flexi prim cones (unrigged).

with the 16 (unrigged) attachment i remade it in a different way. Is now 3 (unrigged)

you are right about the 12 hair. Is unrigged. The rigged version is 2. I changed to another unrigged hair from a different provider with about the same visual detail. Is 7. I like unrigged hair as I can move the pieces to where I want them (same as old school prim hair)

so reduced my complexity from 69 to 51

 

if you come up with a better way to calculate each item then that will be good

I'd just add that many/some people judge what is good/bad using item numbers relative to their body. And display numbers that go up into the 100s or 1000s, while containing more information, from a presentation pov tend to make people's eyes glaze over. 69,000, 49,000, 89,000. What's a heap of extra zeros here or there when I am wearing a heap of attachments

69 down to 51 tho. I can get my eyes around this a whole lot easier. Which prompted me to do something about it. If I had wanted to lower the numbers for the sake of lowering numbers then I would wear the rigged hair, but this is not what I want to do. I want unrigged

 

Edited by Mollymews
to 51 not 49 as I had taken off another attachment, so put back on to compare

1 hour ago, Mollymews said:

i like the idea that the numbers give me a way to compare to my body, in a simple way. I like that the numbers are low. Like body is 5. 1 ~ 5 is better. 16 ~ 5 is not all that good, relatively 

I totally agree. The problem is that your body is not good relative to the hair. It will almost certainly be 100 times worse (making an assumption that a mesh hair is likely to be <5 draw calls). We desperately need these tools to raise awareness of the problems we have. Unless this gets fixed we will be teaching people the wrong lessons.

 

1 hour ago, Mollymews said:

69 down to 51 tho. I can get my eyes around this a whole lot easier. Which prompted me to do something about it. If I had wanted to lower the numbers for the sake of lowering numbers then I would wear the rigged hair, but this is not what I want to do. I want unrigged

This is my concern: you should be empowered to make choices based on good data. If you have a higher complexity because of an item, but you are content with that item, then you are making an informed choice. The problem is that already you are feeling dissuaded from the perfectly good unrigged hair towards the rigged hair because the tool is misleadingly showing it as being better. It almost certainly is not.

There is a reason why the unrigged hair shows higher. The unrigged hair has to have LODs that are populated because it switches LODs correctly. Rigged hair, due to ancient unfixed bugs, does not switch LODs when it "should", and as a result the creators can choose not to provide lower LOD models; this reduces the complexity score. What it does not do is change any aspect of the rendering time. In both cases the viewer will be drawing a few (<5) batches of data totalling ~25K triangles (or whatever) of hair. If anything, the otherwise identical rigged hair will take longer to display because there is a lot more mathematics involved in applying the weights to deform it to the rig. These are techy details that most people won't want to understand; the tools should be saying "This one good", "This one not so good" in a way that we can trust. Right now it is blatantly misinforming us. 😞


4 minutes ago, Beq Janus said:

I totally agree. The problem is that your body is not good relative to the hair. It will almost certainly be 100 times worse (making an assumption that a mesh hair is likely to be <5 draw calls). We desperately need these tools to raise awareness of the problems we have. Unless this gets fixed we will be teaching people the wrong lessons.

 

This is my concern: you should be empowered to make choices based on good data. If you have a higher complexity because of an item, but you are content with that item, then you are making an informed choice. The problem is that already you are feeling dissuaded from the perfectly good unrigged hair towards the rigged hair because the tool is misleadingly showing it as being better. It almost certainly is not.

There is a reason why the unrigged hair shows higher. The unrigged hair has to have LODs that are populated because it switches LODs correctly. Rigged hair, due to ancient unfixed bugs, does not switch LODs when it "should", and as a result the creators can choose not to provide lower LOD models; this reduces the complexity score. What it does not do is change any aspect of the rendering time. In both cases the viewer will be drawing a few (<5) batches of data totalling ~25K triangles (or whatever) of hair. If anything, the otherwise identical rigged hair will take longer to display because there is a lot more mathematics involved in applying the weights to deform it to the rig. These are techy details that most people won't want to understand; the tools should be saying "This one good", "This one not so good" in a way that we can trust. Right now it is blatantly misinforming us. 😞

TL;DR: LL needs to remove penalties for having correctly made LODs so creators don't have an "excuse" not to provide them.

(I'd much rather have a higher complexity and not have my avatar break at distances of 8m+, but so many creators are laser-focused on advertising a low complexity value at whatever cost, meaning their content breaks the second it isn't 'in focus')


1 hour ago, Beq Janus said:

There is a reason why the unrigged hair shows higher. The unrigged hair has to have LODs that are populated because it switches LODs correctly. Rigged hair, due to ancient unfixed bugs, does not switch LODs when it "should", and as a result the creators can choose not to provide lower LOD models; this reduces the complexity score. What it does not do is change any aspect of the rendering time. In both cases the viewer will be drawing a few (<5) batches of data totalling ~25K triangles (or whatever) of hair. If anything, the otherwise identical rigged hair will take longer to display because there is a lot more mathematics involved in applying the weights to deform it to the rig. These are techy details that most people won't want to understand; the tools should be saying "This one good", "This one not so good" in a way that we can trust. Right now it is blatantly misinforming us. 😞

I say, starting with this viewer, we FINALLY remove the stupid user bounding box reference in the attached rigged mesh LOD drop calculation and use the bounding box size of the mesh links themselves.

I don't care about the screams.

Edited by Lucia Nightfire

So i just downloaded and tried the new performance floater.

I'm getting very bad vibes from this floater.

I mean good lord, finally someone with some UI knowledge touched the UI, it looks clean and fancy and kinda modern too (still wastes a lot of space though).

But the floater itself really... doesn't do anything. It's completely pointless, a pointless rehash of a few very select options and some very generic tips that always say the same "this option may or may not cost frames". The complexity section is... barely functional to say the least and really not helpful at all... 21... 21 what? fishies? Also, why 21, what's making it 21? I want a full-on breakdown of what and why it's 21. What does it consist of, what does it use, what's the lion's share of these 21 fishies? It presents me yet another number that has no meaning, this time even more confusing than last time.

After spending 1700+ hours on VRChat, getting performance rankings, and working hundreds of hours on avatars and optimizing them, i'm really spoiled by their performance ranking system. I've implemented a basic version of that into my Viewer already:

image.png.c46d508c0241bab4c29fca68debdaf03.png

A full on breakdown is there too and even then i feel this is still too little:

image.png.67059bc7a8cef6d641e6b983ae69455d.png

 

Here's an example of the on-upload performance rating system:

image.thumb.png.71e822689bf09166d3b792df655912e6.png

Edited by NiranV Dean

Whoo it works!!

It needs some proper loving but essentially it works.

This (very early test build of Firestorm) has a measurement of the actual cost of rendering avatars. It builds upon the existing performance floater from LL, but I have integrated some proper accounting that measures the actual time spent on the CPU for each avatar. 

Of course CPU is not the whole story, but the stats I am using are effectively the proportion of each frame spent rendering a given avatar's geometry, shadows, etc. I'll be adding more...

For the purposes of this test I left the list sorted by complexity (ARC) and updated it so the graph (that white line) shows the true render cost. As a result you can see that the people at the top of the list are not typically the problem. 

0bca2def5c539733d563da9434b058f3.png

 

Now, this being a first run through of this integration, the values might be misleading, so the following video shows a test. 

https://i.gyazo.com/c98b325ebd4bfb22699458db20be7fb1.mp4

First I disable the Bea person. Notice how the FPS noticeably goes up (a couple of FPS when we're only running at 12). I re-enable them and down it drops. I then disable Poyi: there is no change. And even with snowflake, arguably the second largest render cost, we see only a slight change by comparison.

Early days but I think this highlights my concerns pretty well. 

Edited by Beq Janus

2 minutes ago, Beq Janus said:

This (very early test build of Firestorm) has a measurement of the actual cost of rendering avatars. It builds upon the existing performance floater from LL, but I have integrated some proper accounting that measures the actual time spent on the CPU for each avatar

Now this is something i'd be highly interested in. I tried doing this via fast timers, but fast timers do not allow dynamically naming each timer section according to the avatar (i'd have to name them manually). Actual CPU time spent on avatars would help a great deal in making the complexity floater and breakdown more accurate.

Also, i've said this a million times before: the current complexity values from LL are trash, outright wrong, and in no way, shape, or form even remotely accurate; they penalize actually good avatars and give free rein to the worst of the worst. (Which made me change it completely in the first place.)

Edited by NiranV Dean

23 minutes ago, Beq Janus said:

Of course CPU is not the whole story, but the stats I am using are effectively the proportion of each frame spent rendering a given avatar's geometry, shadows, etc. I'll be adding more...

The CPU is indeed only part of the problem, and neglecting the GPU might give you entirely false render cost figures on systems where the GPU is the actual bottleneck (*)... It would be great to get figures with CPU + GPU render time for each avatar. Not sure it is at all feasible however (perhaps by rendering only a given avatar and nothing else for a few frames in order to get its render cost).

(*) Typically, systems using iGPUs, since those are especially weak; with a discrete GPU, even as old as a GTX 660, the CPU becomes the bottleneck in SL.


24 minutes ago, NiranV Dean said:

Now this is something i'd be highly interested in. I tried doing this via fast timers, but fast timers do not allow dynamically naming each timer section according to the avatar (i'd have to name them manually). Actual CPU time spent on avatars would help a great deal in making the complexity floater and breakdown more accurate.

I'll be sharing my implementation once I have it in a sensible form; the main objective is to get something to LL that makes sure this new floater adds some value. This is literally the first step. I have a related "overlay" mode which is fun, but of course it does not work well with realtime stats, as the overlay itself is changing the avatar render 😄

15a2c816234cecba93158ba56c2e1589.jpg

6 minutes ago, Henri Beauchamp said:

The CPU is indeed only part of the problem, and neglecting the GPU might give you entirely false render cost figures on systems where the GPU is the actual bottleneck (*)... It would be great to get figures with CPU + GPU render time for each avatar. Not sure it is at all feasible however (perhaps by rendering only a given avatar and nothing else for a few frames in order to get its render cost).

Indeed, and therein lies a problem: we can do those kinds of things in benchmarks, but it is not an end-user tool. I don't think you can easily extract per-avatar costs at the GPU level. I'll take a look into that once I get further on with this.

The other, related problem is textures. I'd be interested to hear from other devs as to whether there is a way to fully account for the texture transfer cost per avatar. I don't think we can. What I do have (not yet integrated) is measurements of the swap-buffer latency, which to some extent relates to the volume of information being pushed to the card, but it is per frame and cannot be easily subdivided as far as I know.

Even so, this is a step in the right direction I hope. The more information we give users the better choices they can make. 


1 hour ago, Beq Janus said:

I'll be sharing my implementation once I have it in a sensible form; the main objective is to get something to LL that makes sure this new floater adds some value. This is literally the first step. I have a related "overlay" mode which is fun, but of course it does not work well with realtime stats, as the overlay itself is changing the avatar render 😄

15a2c816234cecba93158ba56c2e1589.jpg

Indeed, and therein lies a problem: we can do those kinds of things in benchmarks, but it is not an end-user tool. I don't think you can easily extract per-avatar costs at the GPU level. I'll take a look into that once I get further on with this.

The other, related problem is textures. I'd be interested to hear from other devs as to whether there is a way to fully account for the texture transfer cost per avatar. I don't think we can. What I do have (not yet integrated) is measurements of the swap-buffer latency, which to some extent relates to the volume of information being pushed to the card, but it is per frame and cannot be easily subdivided as far as I know.

Even so, this is a step in the right direction I hope. The more information we give users the better choices they can make. 

Not a viewer dev, but here's my 2 cents:

Texture transfer cost is a tricky one, but off the top of my head:

1. Count the number of textures used above the standard 'system' textures.

2. Find the size of the textures (fewer but larger being preferred, as this promotes the use of a texture atlas and reduces overall draw calls (?))

3. Account for if the texture has an alpha channel (if so, is it used as a mask (preferred) or a blend)

There are probably some nuances I missed, and I'm sure someone else with a bit more knowledge could suggest how to improve this, but it may be a good starting point

(P.s. Just realised that I had a brainfart and texture transfer cost could mean something entirely different from what I assumed when reading - if so, I'm sorry!)

Edited by Jenna Huntsman

2 hours ago, Jenna Huntsman said:

Not a viewer dev, but here's my 2 cents:

Texture transfer cost is a tricky one, but off the top of my head:

1. Count the number of textures used above the standard 'system' textures.

2. Find the size of the textures (fewer but larger being preferred, as this promotes the use of a texture atlas and reduces overall draw calls (?))

3. Account for if the texture has an alpha channel (if so, is it used as a mask (preferred) or a blend)

There are probably some nuances I missed, and I'm sure someone else with a bit more knowledge could suggest how to improve this, but it may be a good starting point

(P.s. Just realised that I had a brainfart and texture transfer cost could mean something entirely different from what I assumed when reading - if so, I'm sorry!)

That's easily implementable. I should do just that.


3 hours ago, Jenna Huntsman said:

Not a viewer dev, but here's my 2 cents:

Texture transfer cost is a tricky one, but off the top of my head:

1. Count the number of textures used above the standard 'system' textures.

2. Find the size of the textures (fewer but larger being preferred, as this promotes the use of a texture atlas and reduces overall draw calls (?))

3. Account for if the texture has an alpha channel (if so, is it used as a mask (preferred) or a blend)

There are probably some nuances I missed, and I'm sure someone else with a bit more knowledge could suggest how to improve this, but it may be a good starting point

(P.s. Just realised that I had a brainfart and texture transfer cost could mean something entirely different from what I assumed when reading - if so, I'm sorry!)

Indeed I meant something else entirely 🙂 , but as a texture accounting scheme this is ok.

In one sense I am not entirely sure what I am looking for here either. When we render a mesh, most of the CPU focus is on preparing the geometry for the GPU. This includes associating textures with faces and binding them. It is not clear to me whether the binding cost happens synchronously, i.e. at the time we do the bind in the code, or later on, when OpenGL decides to flush things. In the synchronous case we should be good; my stats have it covered. If it is happening asynchronously, then that cost would be missing from my stats and almost certainly un-attributable to the avatar.

 

I am not a graphics expert by any definition, so a lot of the time there are nuances that are missed. I am happy with these stats as I can prove that, while they may not be 100% fully accountable, they are highly representative of the costs. It's always nice to have a fuller picture though.

 

 


8 hours ago, NiranV Dean said:

So i just downloaded and tried the new performance floater.

I'm getting very bad vibes from this floater.

I mean good lord, finally someone with some UI knowledge touched the UI, it looks clean and fancy and kinda modern too (still wastes a lot of space though).

But the floater itself really... doesn't do anything. It's completely pointless, a pointless rehash of a few very select options and some very generic tips that always say the same "this option may or may not cost frames". The complexity section is... barely functional to say the least and really not helpful at all... 21... 21 what? fishies? Also, why 21, what's making it 21? I want a full-on breakdown of what and why it's 21. What does it consist of, what does it use, what's the lion's share of these 21 fishies? It presents me yet another number that has no meaning, this time even more confusing than last time.

After spending 1700+ hours on VRChat, getting performance rankings, and working hundreds of hours on avatars and optimizing them, i'm really spoiled by their performance ranking system. I've implemented a basic version of that into my Viewer already:

image.png.c46d508c0241bab4c29fca68debdaf03.png

A full on breakdown is there too and even then i feel this is still too little:

image.png.67059bc7a8cef6d641e6b983ae69455d.png

 

i get that people do want a full breakdown, as you are showing here. But for me personally I just want the headline number in my initial view. If it is made so that all the detail can be shown in a popup or pop-down accordion then that will be good. Could be a toggle: pick Headline or Detail view and the toggle position is remembered

i also get the concerns about ARC not being as accurate as it could be, and I appreciate all the dev people looking into it and how it might be done better

this said, I like the floater. It meets the form and function that I expect in a UI

 

edit ps. When this is all done then I would like a similar floater done for objects on parcel please.

 

Edited by Mollymews

@Beq Janus You talk about the issue of 'Alpha sliced' avatars creating more 'draw calls' for the viewer. You seem to have a good understanding of what is going on under the hood. What creates more draw calls in a body specifically? More texturable faces? Or the mesh being split into multiple links?

Related: in a recent JIRA, a Linden mentioned there is a magic texture UUID which is built into the viewer but not exposed in LSL: IMG_INVISIBLE, which I gather is "3a367d1c-bef1-6d43-7595-e88c1e3aadb3". The Linden mentioned that if a face uses this UUID, it is skipped from rendering 'very efficiently'. They said the effect is only known to be used in alpha wearables when 'Fully invisible' is checked, but could not comment on whether this efficient skipping is used elsewhere within the viewer. I would be very interested to know if the 'efficient skipping' applies to other worn attachments or rezzed objects, as I imagine this could be an FPS booster.

Beq also seems to advocate that bodies should not use alpha slices any more, but instead use BOM solely. I would like to draw attention to the fact that this would make products such as breast mods, or in my case Squeezy Stockings, which change the shape of thighs by hiding a part of the original body, impossible. I think that Second Life should have some way to make such mods possible without incurring a 'draw call' performance penalty, as that remains one of the big reasons why a creator is likely to supply their body split into many parts and faces, even with BOM.


42 minutes ago, Extrude Ragu said:

You talk about the issue of 'Alpha sliced' avatars creating more 'draw calls' for the viewer. You seem to have a good understanding of what is going on under the hood. What creates more draw calls in a body specifically? More texturable faces? Or the mesh being split into multiple links?

All of the above. 

At the pipeline level, every texturable face is a separate "mesh"; unfortunately the word gets overloaded here, so at times it can be hard to know what we are referring to. When I talk about a mesh for the rest of this reply, I will be talking about a single texture face on an object.

Every rigged mesh is processed separately. The render pipeline is constructed of multiple passes, and different types of mesh need to be in different passes; if you have alpha-blend transparency, you go into a different set of passes than when you have an opaque texture, for example. The implication of this is that every mesh gets dispatched to the GPU separately. It is this "dispatching" that I refer to as a draw call.

Because the draw-call overhead is so high, the more of them you have the slower things go. I will write up a full explanation tomorrow if I can. It is 1am now, so time to sleep, not to start a deep technical dive 🙂

54 minutes ago, Extrude Ragu said:

The Linden mentioned that if a face uses this UUID, it is skipped from rendering 'very efficiently'. They said the effect is only known to be used in Alpha Wearables when 'Fully invisible' is checked but could not comment on if this efficient skipping is used elsewhere within the viewer. I would be very interested to know if the 'efficient skipping' applies to other worn attachments or rezzed objects as I imagine this could be an FPS booster.

That Jira is actually on about something slightly different. I also happen not to believe that IMG_INVISIBLE does anything valuable at present. It is mostly used within the bake system not more generally. 

However, all the viewers, certainly all the TPVs have code that will drop fully transparent "meshes" (see above note on what I mean by mesh) before they get rendered. This does not fully eliminate the overhead, but in rendering terms it mitigates the vast majority of the cost.

57 minutes ago, Extrude Ragu said:

Beq also seems to advocate that bodies should not be using alpha slices any more, but instead use BOM Solely. I would like to draw attention to the fact that this would make products such as breast mods,

This is not really the case. SLink Redux uses BOM exclusively; it has no alpha slicing and yet it supports breast and buttock mods quite happily. Such meshes, which are enabled/disabled through transparency, fall into the same category as multi-pose feet and multi-style hairs. Which is to say that so long as the unused ones are fully transparent, the worst of the performance issues are avoided. I want to be very clear here that should some magic wand be waved and all of a sudden these disastrous bodies were all removed and replaced with lower-segment versions, we'd most likely be able to see the downside of these transparent ghosts; but right now, in a world of heinous mesh body designs, they are a very minor evil. (I.e. even if it is not being drawn, it is using RAM and takes CPU time to load and process; right now that cost is lost in the screaming nightmare of alpha cuts.)

1 hour ago, Extrude Ragu said:

in my case Squeezy Stockings which change the shape of thighs by hiding a part of the original body impossible.

For this you'd need to be able to wear and unwear an alpha layer. This would achieve the effect; in fact you have far greater control over things using this method, as you are not limited to where a body creator has placed the cuts. The problem is that I do not think we can add alpha layers from a script at present without the use of RLV, even with an experience. This is the only change that you'd need.

Going back to the general "Beq advocates removing alpha cuts": yes, Beq totally does, but that is not to say it is an absolute thing. The trick that you can see with SLink Redux or Inthium Kupra, both of which are pure BOM bodies, is that they minimise the number of meshes and as a result are far more efficient to draw. I am not at all saying "thou shalt make all things as a single mesh or be forever damned"; it is: use as few meshes as possible to achieve what you need. If one mesh body is made up of 240 meshes and another is made up of 24 meshes, the 24-mesh body will draw 10x faster. It is quite literally that simple and quite literally that linear for most people on most hardware.

Liz and I worked on a full set of benchmarks to test all of this and the results are pretty compelling, but they also need a lot of explaining as there's a lot of data in there. I will do my best to finish my blog post on this and link it here tomorrow or Sunday, RL permitting. I've been trying to get this out for a few months though 😞


Thanks for the information

8 hours ago, Beq Janus said:

For this you'd need to be able to wear and unwear an alpha layer.

This would not work - the creator's body modification would need to use BOM too, to match the body's skin. If the user wore an alpha layer, it would hide the skin on both the body and the mod. There is no way that I am aware of to have an alpha layer affect only one BOM attachment. That is why I am saying that even with BOM there is still currently a need for a body to have lots of texture faces, because it's the only way to control visibility without affecting other attachments that use BOM too.


7 hours ago, Extrude Ragu said:

Thanks for the information

This would not work - the creator's body modification would need to use BOM too, to match the body's skin. If the user wore an alpha layer, it would hide the skin on both the body and the mod. There is no way that I am aware of to have an alpha layer affect only one BOM attachment.

Set the alpha mode on the attachment to "none." If the body is set to "alpha masking", it will read the alpha, but the attachment set to "none" won't. This is why older mesh bodies that were converted to BOM with an applier can't use worn alphas - their alpha mode was "none."
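The "none" vs "alpha masking" behaviour described here can be sketched as simplified logic (my own illustrative model, not actual viewer code; the 0.5 cutoff is an assumption):

```python
def skin_pixel_visible(alpha_mode: str, baked_alpha: float, cutoff: float = 0.5) -> bool:
    """Simplified model: does a pixel of a BOM face survive a worn alpha layer?

    alpha_mode  -- the face's alpha mode, as set by the creator
    baked_alpha -- alpha baked from the system skin + worn alpha layers (0.0-1.0)
    """
    if alpha_mode == "none":
        # The face ignores the baked alpha channel entirely, so worn
        # alpha layers have no effect (the applier-era body case).
        return True
    if alpha_mode == "alpha masking":
        # The face honours the baked alpha: pixels under a worn alpha
        # layer fall below the cutoff and are discarded.
        return baked_alpha >= cutoff
    # "alpha blending" blends rather than discards; for this sketch,
    # treat any non-zero alpha as visible.
    return baked_alpha > 0.0

# Body set to "alpha masking": hidden where the worn alpha is transparent.
print(skin_pixel_visible("alpha masking", baked_alpha=0.0))  # False
# Mod attachment set to "none": unaffected by the worn alpha, stays visible.
print(skin_pixel_visible("none", baked_alpha=0.0))           # True
```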


6 hours ago, Beq Janus said:

Part #1 of my blog is finally up. I'll start writing up the more numbers-based second part today.

 

Excellent read, really curious about the numbers, even if all benchmarks are advanced examples of lying with statistics.

I guess there is no real chance to get the current render pipeline to use stuff like glMultiDrawElementsIndirect() to optimize the batches for those bodies, other than just rewriting the whole stack for Vulkan?

But I guess the OS X (non-)support for anything OpenGL means a solid no, as it stops at OpenGL 4.1.


9 hours ago, Kathrine Jansma said:

Excellent read, really curious about the numbers, even if all benchmarks are advanced examples of lying with statistics.

Part 2 just landed - https://beqsother.blogspot.com/2021/09/find-me-some-body-to-lovebenchmarking.html

It's not as easy on the eyes as the last one.

9 hours ago, Kathrine Jansma said:

But guess the OS X (non-)support for anything OpenGL means a solid no, as it stops with OpenGL 4.1.

Pretty much. No major overhaul is likely until we get a completely new pipeline.


Nice as well. But then, I'm trained in physics, so I'm not scared of graphs and statistics.

That's quite a significant FPS drop in your intro text:

Quote

The results were quite stark. Running with just my Alt alone (the baseline), I saw 105 FPS (remember, this is basically an empty region). With SLink Redux, it dipped a little then recovered to 104FPS. With Maitreya, it dropped to 93FPS.

Does that scale linearly? Like 2x Maitreya users => 81 FPS, 3x => 69, etc.?

Otherwise it looks pretty conclusive from the numbers and graphs.
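My own back-of-the-envelope take on the scaling question, using only the quoted figures: if each extra avatar adds a roughly fixed cost, it is additive in frame time (ms), not in FPS, so the FPS drop per avatar shrinks as FPS falls.

```python
# Quoted figures: empty-region baseline 105 FPS, one Maitreya avatar 93 FPS.
baseline_fps = 105.0
one_maitreya_fps = 93.0

# Work in frame time (ms), where the per-avatar cost is (assumed) additive.
baseline_ms = 1000.0 / baseline_fps
cost_per_avatar_ms = 1000.0 / one_maitreya_fps - baseline_ms  # ~1.23 ms

def fps_with_avatars(n: int) -> float:
    """Predicted FPS with n Maitreya avatars, assuming additive frame cost."""
    return 1000.0 / (baseline_ms + n * cost_per_avatar_ms)

for n in range(4):
    print(n, round(fps_with_avatars(n), 1))  # 2 avatars -> ~83.5, not 81
```

So under this simple model the answer would be "linear in milliseconds, sub-linear in FPS"; whether real avatars behave this additively is exactly the kind of thing the benchmarks should show.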

 

