mystery complexity quadrupling

1 minute ago, Kyrah Abattoir said:

Have you inspected their LODs?

I've done a few quick checks on some of the products from three of the brands I mentioned. Without going into details I'd say their content creators are people who have no idea how to make good LoD models but have actually read the official documentation and honestly tried to do their best based on that misleading information. In other words, not good but better than most and the mistakes they've made are LL's fault, not theirs.

There's a question of how relevant LoD is for fitmesh in its current state anyway. Two of the three fitmesh-specific errors effectively disable the entire LoD system, forcing the viewer to render fitted mesh at the highest LoD in all but a few very special situations. This is of course a major reason why LL can't simply fix those pesky fitmesh bugs. By the time they noticed (mostly thanks to Optimo and Beq, I think) there was already a lot of fitted mesh on the market and nearly all of it would have to be scrapped.


31 minutes ago, ChinRey said:

There's a question of how relevant LoD is for fitmesh in its current state anyway. Two of the three fitmesh-specific errors effectively disable the entire LoD system, forcing the viewer to render fitted mesh at the highest LoD in all but a few very special situations.

Of course, but do you think they are just gonna "fix" their meshes once the bug is fixed? Even LL says it themselves: don't rely on bugs or undocumented Second Life features in your products.

31 minutes ago, ChinRey said:

This is of course a major reason why LL can't simply fix those pesky fitmesh bugs.

They absolutely should though, it's not going to get better if they let it all rot. People aren't going to stop pushing for higher graphical fidelity; only the infrastructure to support it has fallen behind.

Edited by Kyrah Abattoir

On 7/27/2020 at 1:32 AM, Wulfie Reanimator said:

Complexity doesn't seem to take VRAM use into account at all. There is a comment that says "weighted attachment - 1 point for every 3 bytes" in a part of the calculation that concerns rigged mesh, but nothing points to that calculation actually being done. Rigged mesh gets a 20% increase to Complexity, that's it.

P.S. I can't for the life of me find the part that accounts for the triangle counts either. The getRenderCost is only concerned about textures, the other parts are... somewhere else... and they're not clearly paired up for some reason. I can find lots of functions that get the total/high LOD triangle counts, but those aren't used in any complexity calculations I can find.

The Viewer actually includes VRAM usage, you've even code-quoted that part. Each texture has a baseline cost, on top of which additional complexity is added for its size. It's not much though.

Triangles are counted in the getRenderCost() call in llvovolume.cpp.

if (has_volume)
{
  volume_params = getVolume()->getParams();
  path_params = volume_params.getPathParams();
  profile_params = volume_params.getProfileParams();

  LLMeshCostData costs;
  if (getCostData(costs))
  {
    if (isAnimatedObject() && isRiggedMesh())
    {
      // Scaling here is to make animated object vs
      // non-animated object ARC proportional to the
      // corresponding calculations for streaming cost.
      num_triangles = (ANIMATED_OBJECT_COST_PER_KTRI * 0.001 * costs.getEstTrisForStreamingCost())/0.06;
    }
    else
    {
      F32 radius = getScale().length()*0.5f;
      num_triangles = costs.getRadiusWeightedTris(radius);
    }
  }
}

if (num_triangles <= 0)
{
  num_triangles = 4;
}

...
  
// shame currently has the "base" cost of 1 point per 15 triangles, min 2.
shame = num_triangles  * 5.f;
shame = shame < 2.f ? 2.f : shame;

num_triangles is the key here, though getRadiusWeightedTris doesn't account for all triangles... I suppose what it does is basically take a sample of the polygon density/count and use that for further calculations, one that should roughly equal a sphere as big as the object. It goes without saying that this is pretty much useless and could be gamed.

Either you take all triangles into account or you don't take them into account at all.

Here's the same snippet from my Viewer. I'd suggest simply taking the LOD triangle count; I've chosen the highest since that's the one you'll be looking at most if not all of the time, and the other LODs shouldn't have any bearing on this calculation. You could instead calculate the actual current triangle count based on the current LOD being used, but then you'd see complexity constantly changing while moving your camera, and it would once again be gamed by the single-triangle LODs that have been used for some mesh bodies.

if (has_volume)
{
  volume_params = getVolume()->getParams();
  path_params = volume_params.getPathParams();
  profile_params = volume_params.getProfileParams();

  //BD - Punish high triangle counts.
  num_triangles = drawablep->getVOVolume()->getHighLODTriangleCount();
}

You could go one step further and calculate a reduction in complexity based on the LOD triangle counts and whether they roughly lie in a range you could consider somewhat optimal for each LOD level. This would encourage at least somewhat decent triangle counts for each LOD, and thus at least somewhat decent LODs. In reverse, you should then punish with higher complexity based on how far you stray from the optimal triangle count range for each LOD, which would heavily penalize bad LODs and single-triangle LODs.

Edited by NiranV Dean

3 hours ago, NiranV Dean said:

The Viewer actually includes VRAM usage, you've even code-quoted that part. Each texture has a baseline cost, on top of which additional complexity is added for its size. It's not much though.

Yes, that's right. LL doesn't seem to have considered VRAM when they created the render cost formula, but textures are rather significant to GPU load too, so they couldn't completely ignore them.

However, as NiranV says, it's not much and the calculation has one very obvious flaw and three rather dubious aspects. (For those who feel uncomfortable with programming code, there is a description in plain(ish) English(ish) here: http://wiki.secondlife.com/wiki/Mesh/Rendering_weight)

The obvious flaw is that it only takes diffuse maps and sculpt maps into account. Normal and specular maps are completely ignored.

The first dubious aspect is that it seems to underestimate the significance of higher resolution textures. Here's the relevant part of the formula:

256 + 16 * (resX/128 + resY/128)

Filling in the numbers we get:

Image resolution     Added render cost
256x256              320
512x512              384
1024x1024            512

A 512x512 texture has four times as many pixels as a 256x256. A 1024x1024 has four times as many pixels as a 512x512 (and 16 times as many as a 256x256), yet the differences in their calculated render costs are almost trivial. It does make sense to give higher resolution textures a little bit of "quantity discount" when calculating render cost (although not when calculating VRAM use), but this seems to be way too much. I've done variants of a simple reality check (cam in on a single surface, switch between different texture resolutions and see how they affect FPS) many times myself and also had others do it for me to compare results from different hardware and software setups. So far the results have always seemed to indicate that texture resolution is far more significant to performance than the render cost formula stipulates.

The second dubious aspect is that the formula is skewed towards favouring square textures. Both a 512x512 and a 1024x256 texture have 262,144 pixels, yet the calculated render costs are 384 and 416 respectively. That's hardly a big difference of course, but it still seems strange and I can't see any good reason for it.
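To make the scaling concrete, here is a small standalone sketch (not viewer code, just the quoted formula plugged into a throwaway helper) that prints the cost for a few resolutions:

#include <cstdio>

// Sketch only: the texture term of the render cost formula quoted above,
// cost = 256 + 16 * (resX/128 + resY/128). Function name and table are illustrative.
static float textureRenderCost(int resX, int resY)
{
    return 256.f + 16.f * (resX / 128.f + resY / 128.f);
}

int main()
{
    const int sizes[][2] = { {256, 256}, {512, 512}, {1024, 1024}, {1024, 256} };
    for (const auto& s : sizes)
    {
        printf("%4dx%-4d  %8d px  cost %.0f\n",
               s[0], s[1], s[0] * s[1], textureRenderCost(s[0], s[1]));
    }
    return 0;
}

// Output: 320, 384, 512 and 416 respectively. A 16x jump in pixel count
// (256x256 -> 1024x1024) only adds 60% to the cost, and the non-square
// 1024x256 is charged more than the equal-pixel 512x512.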

Lastly, the balance between geometry and textures seems to be off, with pixels being given far less significance relative to vertices and triangles than they should. I haven't seen any tests on this though, so I won't insist I'm right.

 

Edited by ChinRey

On 7/28/2020 at 11:09 AM, Kyrah Abattoir said:

Of course, but do you think they are just gonna "fix" their meshes once the bug is fixed? Even LL says it themselves: don't rely on bugs or undocumented Second Life features in your products.

What fitmesh creators would have to do to upgrade their works to comply with a potential bugfix is make brand new LoD models from their old files (assuming they've kept backups of their work files), reupload, and distribute free upgrades to all their customers. This is a huge job and I'm sure many will simply give up. Many of them won't even have the faintest idea how to make LoD models. Even those who are willing and able to (or to be more precise: feel they are forced to) do it will need a lot of time. Imagine how people will react when all mesh bodies suddenly break down and we have to wait weeks, maybe even months, for replacements.

 

On 7/28/2020 at 11:09 AM, Kyrah Abattoir said:

They absolutely should though, it's not going to get better if they let it all rot.

I agree. Fixing the core problems is the only solution that works in the long run, and the longer they wait, the harder it gets. But the short-term effect will be devastating, so I don't blame LL for dragging their feet.


4 hours ago, NiranV Dean said:

the other LODs shouldn't have any bearing on this calculation. You could instead calculate the actual current triangle count based on the current LOD being used, but then you'd see complexity constantly changing while moving your camera, and it would once again be gamed by the single-triangle LODs that have been used for some mesh bodies.

First of all: Thanks, I'm literally blind.

That said, I may be more charitable, but I think if a mesh body is currently being rendered as a single triangle (never happens), only that triangle should be considered for the current render complexity as far as the mesh goes.

4 hours ago, NiranV Dean said:

You could go one step further and calculate a reduction in complexity based on the LOD triangle counts and whether they roughly lie in a range you could consider somewhat optimal for each LOD level. This would encourage at least somewhat decent triangle counts for each LOD, and thus at least somewhat decent LODs. In reverse, you should then punish with higher complexity based on how far you stray from the optimal triangle count range for each LOD, which would heavily penalize bad LODs and single-triangle LODs.

LL does do this though, right? Just looking at getEstTrisForStreamingCost in llmeshrepository.cpp:

// in llMeshCostData::init
// mEstTrisByLOD[i] = llmax((F32)mSizeByLOD[i] - (F32)metadata_discount, (F32)minimum_size) / (F32)bytes_per_triangle;

F32 LLMeshCostData::getEstTrisForStreamingCost()
{
    LL_DEBUGS("StreamingCost") << "tris_by_lod: "
                               << mEstTrisByLOD[0] << ", "
                               << mEstTrisByLOD[1] << ", "
                               << mEstTrisByLOD[2] << ", "
                               << mEstTrisByLOD[3] << LL_ENDL;

    F32 charged_tris = mEstTrisByLOD[3];
    F32 allowed_tris = mEstTrisByLOD[3];
    const F32 ENFORCE_FLOOR = 64.0f;
    for (S32 i=2; i>=0; i--)
    {
        // How many tris can we have in this LOD without affecting land impact?
        // - normally an LOD should be at most half the size of the previous one.
        // - once we reach a floor of ENFORCE_FLOOR, don't require LODs to get any smaller.
        allowed_tris = llclamp(allowed_tris/2.0f,ENFORCE_FLOOR,mEstTrisByLOD[i]);
        F32 excess_tris = mEstTrisByLOD[i]-allowed_tris;
        if (excess_tris>0.f)
        {
            LL_DEBUGS("StreamingCost") << "excess tris in lod[" << i << "] " << excess_tris << " allowed " << allowed_tris <<  LL_ENDL;
            charged_tris += excess_tris;
        }
    }
    return charged_tris;
}

 

1 hour ago, ChinRey said:

What fitmesh creators would have to do to upgrade their works to comply with a potential bugfix is make brand new LoD models from their old files (assuming they've kept backups of their work files), reupload, and distribute free upgrades to all their customers. This is a huge job and I'm sure many will simply give up. Many of them won't even have the faintest idea how to make LoD models. Even those who are willing and able to (or to be more precise: feel they are forced to) do it will need a lot of time. Imagine how people will react when all mesh bodies suddenly break down and we have to wait weeks, maybe even months, for replacements.

This is a sacrifice I'm willing to endure, and I don't say that just because I'm not the one who has to do the work. Losing creators is bad for everybody, even people who just buy the stuff and nothing else, because they lose options. But considering how bad the situation is regarding content and performance, I would risk some creators leaving. If a creator who keeps dumping 100k-triangle clothing on MP like a factory (using Marvelous Designer or whatever, or buying ready-made models from somewhere else) decides to leave because they literally don't know and can't be bothered learning/making LODs... good. They cause significant harm to the whole.

And consider that any change (including no change) that could be perceived negatively will cause some people to leave, like raising the MP fees... or introducing mesh in the first place and making SL literally impossible to run for some people as a result (never mind creating a huge learning curve for content creation). Some problems don't have a "right" solution and every choice has its own negative consequences.

Edited by Wulfie Reanimator

2 hours ago, ChinRey said:

Lastly, the balance between geometry and textures seems to be off, with pixels being given far less significance relative to vertices and triangles than they should. I haven't seen any tests on this though, so I won't insist I'm right.

But that's correct behavior. While textures are important and many big ones do have an impact on your framerate, their impact is by far not as big as geometry's, because geometry, unlike textures, isn't just rendered once. Depending on which graphics effects you have enabled, geometry in SL is essentially rendered multiple times: once for the normal render pass, another time for water reflections, another 4 times for shadows, another 2 times for projectors, and another time for Motion Blur (in BD). What I mean by rendering it more than once is that not the entire scene is rendered multiple times, only the part relevant for each feature.

Shadows, for instance: if a mesh body is inside one of the 4 shadow maps, it has to be rendered into that shadow map, which means it has been rendered twice (although it doesn't need any additional calculations such as textures and so on, since a shadow map is only a greyscale image in SL). This is not only a problem but a big one. It's also the reason shadows nuke your framerate so hard when using higher resolution shadows: the geometry doesn't just need to be re-rendered, it also needs to be rendered at a different resolution which might be much higher (4K shadows, for example) depending on your chosen shadow resolution. This further increases the impact of objects with higher complexity, and it snowballs into a massive overhead when you have all the usual graphical features enabled and a complex object appears. A couple thousand more triangles suddenly become a couple tens of thousands, a couple tens of thousands suddenly become a hundred thousand, and so on.
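As a rough back-of-the-envelope sketch (the pass list mirrors the one above; the names and structure are otherwise assumptions, not viewer code), the effective per-frame triangle load grows with every pass an object lands in:

// Rough sketch: how many times an object's triangles go through the pipeline
// per frame, using the pass list from above. Names and fields are
// illustrative assumptions, not viewer code or viewer constants.
struct RenderPasses
{
    bool waterReflection;   // object visible in the water reflection
    int  sunShadowMaps;     // 0-4 sun/moon shadow map splits it falls into
    int  projectorShadows;  // 0-2 projector shadow maps
    bool motionBlur;        // extra pass in BD
};

static long effectiveTrianglesPerFrame(long triangles, const RenderPasses& p)
{
    int passes = 1;                        // the normal geometry pass
    if (p.waterReflection) passes += 1;
    passes += p.sunShadowMaps;             // re-rendered into each shadow map
    passes += p.projectorShadows;
    if (p.motionBlur) passes += 1;
    return triangles * passes;
}

// Example: a 100k-triangle rigged body caught by all 4 shadow maps, the water
// reflection and 2 projectors is pushed through ~8 passes, i.e. roughly 800k
// triangles per frame, before shadow resolution is even considered.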

Textures are not affected by this, on top of being much easier and faster to render than a big old pile of triangles, which also need to be transformed and bent with your shape and animations if they are rigged. Then there's the texture memory limit... before you start thrashing textures (unless you disable that). Triangles, on the other hand, can be as many as you want, as many as the rendering engine can keep rendering before the hardware simply gives up, with freezes long enough to either outright crash your viewer or at least make it basically unusable.

Textures should remain a supplementary value rather than the main source of complexity, since textures (especially on a single avatar) simply cannot, under normal circumstances (unless you absolutely insist on making a graphics crasher), reach a point where they have a bigger impact than the geometry that needs rendering; and in that case a supplementary value will overtake the triangle complexity naturally anyway.

[screenshot of the complexity display]

Currently I've set textures to count basically 1:1 of their memory usage. Checking against my total memory usage (9500 + 9500 + 10500 + 1300 + a couple hundred here and there) it seems to work just fine and pretty accurately depicts the actual memory usage. What it doesn't take into account is reuse of the same texture (it does in the total, but obviously not in the breakdown for each attachment).
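For anyone who wants to sanity-check a 1:1 memory figure like that, the uncompressed footprint of a texture is easy to estimate (a sketch; the 4 bytes per pixel and the one-third mipmap overhead are the usual rules of thumb, not values pulled from the viewer):

// Sketch: rough uncompressed VRAM footprint of a texture, which is roughly
// what a "1:1 of memory usage" charge would be based on. Assumes RGBA8
// (4 bytes per pixel) plus ~1/3 extra for the mipmap chain.
static long long textureBytes(long long width, long long height)
{
    long long base = width * height * 4;   // RGBA8
    return base + base / 3;                // + mipmaps
}

// 1024x1024 -> ~5.6 MB, 512x512 -> ~1.4 MB, 256x256 -> ~0.35 MB.
// A scene with 1500 unique 1024x1024 textures therefore sits somewhere in the
// 6-8 GB range, depending on whether mipmaps and compression are counted.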

Going through a few avatars, I feel like people get exactly what they deserve the way it is right now; that is, most of them get jellydolled and my framerate stays stable.

[screenshots: complexity breakdowns for several avatars]

 

8 minutes ago, Wulfie Reanimator said:

First of all: Thanks, I'm literally blind.

That said, I may be more charitable, but I think if a mesh body is currently being rendered as a single triangle (never happens), only that triangle should be considered for the current render complexity as far as the mesh goes.

LL does do this though, right? Just looking at getEstTrisForStreamingCost in llmeshrepository.cpp:


// in llMeshCostData::init
// mEstTrisByLOD[i] = llmax((F32)mSizeByLOD[i] - (F32)metadata_discount, (F32)minimum_size) / (F32)bytes_per_triangle;

F32 LLMeshCostData::getEstTrisForStreamingCost()
{
    LL_DEBUGS("StreamingCost") << "tris_by_lod: "
                               << mEstTrisByLOD[0] << ", "
                               << mEstTrisByLOD[1] << ", "
                               << mEstTrisByLOD[2] << ", "
                               << mEstTrisByLOD[3] << LL_ENDL;

    F32 charged_tris = mEstTrisByLOD[3];
    F32 allowed_tris = mEstTrisByLOD[3];
    const F32 ENFORCE_FLOOR = 64.0f;
    for (S32 i=2; i>=0; i--)
    {
        // How many tris can we have in this LOD without affecting land impact?
        // - normally an LOD should be at most half the size of the previous one.
        // - once we reach a floor of ENFORCE_FLOOR, don't require LODs to get any smaller.
        allowed_tris = llclamp(allowed_tris/2.0f,ENFORCE_FLOOR,mEstTrisByLOD[i]);
        F32 excess_tris = mEstTrisByLOD[i]-allowed_tris;
        if (excess_tris>0.f)
        {
            LL_DEBUGS("StreamingCost") << "excess tris in lod[" << i << "] " << excess_tris << " allowed " << allowed_tris <<  LL_ENDL;
            charged_tris += excess_tris;
        }
    }
    return charged_tris;
}

 

I thought about something much cleaner and easier. As an example:

LOD0 = 100%

LOD1 = 70-90%

LOD2 = 50-75%

LOD3 = 30-55%

Values need to be fixed and easy to understand for everyone; transparency is important. LL failed in that regard.
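A minimal sketch of what checking a mesh against ranges like those could look like; the percentages are the ones listed above (read here as fractions of the high LOD's triangle count), everything else is an illustrative assumption:

#include <array>

// Sketch only: check each LOD's triangle count against a fixed recommended
// range, expressed as a fraction of the high LOD's triangle count. The ranges
// are the ones proposed above; the rest is an illustrative assumption.
struct LodRange { float minPct; float maxPct; };

static const std::array<LodRange, 4> RECOMMENDED_RANGES = {{
    { 1.00f, 1.00f },   // LOD0 (high)
    { 0.70f, 0.90f },   // LOD1 (medium)
    { 0.50f, 0.75f },   // LOD2 (low)
    { 0.30f, 0.55f },   // LOD3 (lowest)
}};

static bool withinRecommendedRange(int lod, float lodTris, float highLodTris)
{
    float pct = lodTris / highLodTris;
    return pct >= RECOMMENDED_RANGES[lod].minPct
        && pct <= RECOMMENDED_RANGES[lod].maxPct;
}

// e.g. a 1000-triangle mesh whose LOD2 has 600 triangles (60%) passes;
// one whose LOD2 is a single triangle (0.1%) does not.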

8 minutes ago, Wulfie Reanimator said:

That said, I may be more charitable, but I think if a mesh body is currently being rendered as a single triangle (never happens), only that triangle should be considered for the current render complexity as far as the mesh goes.

In that case you'd make render complexity once again a varying value that is of no use to anyone, as it would differ wildly from person to person depending on their settings. Essentially you're doing what LL already did (and is going to do again) by testing against a wide range of hardware and calculating an average, which is what made the current calculation such a huge fail in the first place (aside from it not being punishing enough). Only one hardware setup should be tested, one that fulfills the average requirements for normal 1080p at ~45 FPS with Deferred Rendering, Shadows and Ambient Occlusion enabled. Take it as a baseline. It's much more accurate to test against recommended hardware than to take an average out of a range between a potato and a NASA PC, as the averaging will once again push wide gaps closer together, making differences between objects less punishing; features will all become "the same" performance-wise as the average pushes their wildly varying values closer together.

People should judge the impact of avatars based on their maximum impact, not their current impact. You want a clear, simple and easy to understand value, one that immediately tells the user at a glance whether something is bad for their performance or not; they shouldn't have to guess whether an avatar will increase tenfold in complexity if they zoom in closer.

Edited by NiranV Dean

53 minutes ago, Wulfie Reanimator said:

like raising the MP fees

Oh, please don't remind me of that misery. Fortunately it seems LL learned from their mistake this time and cancelled the next round.

 

48 minutes ago, Wulfie Reanimator said:

That said, I may be more charitable, but I think if a mesh body is currently being rendered as a single triangle (never happens), only that triangle should be considered for the current render complexity as far as the mesh goes.

I still think the biggest elephant in the room is when the render cost is calculated as if there was only one triangle when there are in fact tens of thousands. ;)

So yes, this is something that needs to be addressed, I'm not denying that. But it will be a very hard cure. Think of all the fashionistas who will have to make do with a system body for weeks while waiting for upgraded mesh bodies to appear. They're going to raise one helluva stink about it, and they are of course far more important than lowly content creators.

 

Edited by ChinRey

47 minutes ago, NiranV Dean said:

In that case you'd make render complexity once again a varying value that is of no use to anyone, as it would differ wildly from person to person depending on their settings. Essentially you're doing what LL already did (and is going to do again) by testing against a wide range of hardware and calculating an average, which is what made the current calculation such a huge fail in the first place (aside from it not being punishing enough). Only one hardware setup should be tested, one that fulfills the average requirements for normal 1080p at ~45 FPS with Deferred Rendering, Shadows and Ambient Occlusion enabled. Take it as a baseline. It's much more accurate to test against recommended hardware than to take an average out of a range between a potato and a NASA PC, as the averaging will once again push wide gaps closer together, making differences between objects less punishing; features will all become "the same" performance-wise as the average pushes their wildly varying values closer together.

People should judge the impact of avatars based on their maximum impact, not their current impact. You want a clear, simple and easy to understand value, one that immediately tells the user at a glance whether something is bad for their performance or not; they shouldn't have to guess whether an avatar will increase tenfold in complexity if they zoom in closer.

I don't know if I agree with the first part. Sure, the complexity would be "variable" in the sense that lower settings will experience lower complexities -- as you should expect -- but it's not variable in the sense that if/when you are viewing the highest LOD, you get exactly the same complexity result as everybody else on other settings. After all, an object with higher max complexity might be more efficient than the object next to it, depending on how their LODs are constructed. And I would definitely not defend calculating averages; complexity calculations should be based on objective measures, not averages (or estimates), and should not favor specific hardware.

Imagine, for example, some mesh object that is simple overall, but breaks all LOD rules by reusing the same LOD at all levels, or just doesn't optimize well. The complexity for this object would be penalized and hike up. Then, next to it, you have a relatively more complex mesh object that has great LODs (not affected by penalizing), but each LOD is more complex than on the simple object. If we only displayed max complexity, one might get the idea that the more simple object has the same or higher performance impact than the objectively more complex object. If the reported complexity was based on the current LOD instead, the differences would ideally be more distinct/realistic.

But I do agree that seeing the max complexity for an object is a useful metric for comparing objects at a glance, especially among people who tend to use higher settings and don't get LOD swapping as quickly. Displaying both (max by default, optionally/additionally current) seems like the best of both worlds.

Edited by Wulfie Reanimator

4 hours ago, NiranV Dean said:

You could go one step further and calculate a reduction in complexity based on the LOD triangle counts and whether they roughly lie in a range you could consider somewhat optimal for each LOD level

Yes, but how? The optimal amount of reduction varies wildly between different meshes. In some cases you can't really improve performance by simplifying a LoD model at all without significant reduction in the visual quality. In other cases you can easily do a 90+% reduction with no negative effect worth mentioning.

A fixed "optimal reduction amount" is very likely to cause more harm than good since it's bound to be taken as "the one and only truth" and no matter where you set it, it will be unsuitable for the vast majority of meshes.

I don't even dare think about all the complications a flexible optimal reduction calculation would cause.

 

32 minutes ago, NiranV Dean said:

But that's correct behavior. While textures are important and many big ones do have an impact on your framerate, their impact is by far not as big as geometry's, because geometry, unlike textures, isn't just rendered once.

I have no doubt that geometry is a more important factor for render load than textures, but the question is, is the difference really as big as the render cost formula stipulates? I'm not convinced but, as I said, I don't know, and it would be great to have some reliable data here.

 

42 minutes ago, NiranV Dean said:

Textures should remain a supplementary value rather than the main source of complexity, since textures (especially on a single avatar) simply cannot, under normal circumstances (unless you absolutely insist on making a graphics crasher), reach a point where they have a bigger impact than the geometry that needs rendering; and in that case a supplementary value will overtake the triangle complexity naturally anyway.

How about 20 avatars in the scene, each with a hundred unique 1024x1024 textures? Then add another 1500 1024s for the surroundings.


Just now, Wulfie Reanimator said:

I don't know if I agree with the first part. Sure, the complexity would be "variable" in the sense that lower settings will experience lower complexities -- as you should expect -- but it's not variable in the sense that if/when you are viewing the highest LOD, you get exactly the same complexity result as everybody else on other settings. After all, an object with higher max complexity might be more efficient than the object next to it, depending on how their LODs are constructed. And I would definitely not defend calculating averages; complexity calculations should be based on objective measures, not averages (or estimates), and should not favor specific hardware.

But I do agree that seeing the max complexity for an object is a useful metric for comparing objects at a glance, especially among people who tend to use higher settings and don't get LOD swapping as quickly. Displaying both (max by default, optionally/additionally current) seems like the best of both worlds.

The display every user gets to see should always be based upon the max complexity to get a fair comparison between objects. What's the point of one object having much lower complexity right now just because for some reason its LOD falsely drops 1-2 levels or doesn't go back up anymore? SL has too many bugs to rely on proper LODing; we shouldn't even consider basing calculations on it as long as it isn't working reliably. I know I'm opening a Pandora's box here, but mesh, especially rigged mesh, has big problems with LODing. Put a copy of someone's avatar down right beside them: the rezzed copy will be at a lower LOD for sure while the worn avatar stays at max LOD, and poof, you're seeing different values for the exact same object at the exact same distance (not counting that one is rigged and being transformed, of course). Which is why I highly recommend using the max LOD; it's a static number that can be compared to any other object regardless of distance or current state (or mixed states, if multiple objects of different sizes end up at different LODs).

"Current" complexity would be something you wouldn't show directly, at least not in the complexity display, unless you want to add even more clutter to it and make it even harder to read than it already is. It's something I'd add to the complexity window instead, together with info on each LOD level of each object.

The reason I'd choose average recommended hardware as a starting point is that I'd want a PC that produces a decent framerate, one capable of decently running all graphics and doing so in a way that shows clear impacts from certain features. A potato PC with only 1 FPS only has a single FPS to lose, so the impact of big amounts of anything will look far smaller in comparison to something with, let's say, 60 or even 100 FPS. Bigger numbers make it easier to see smaller impacts, but sadly they also make fluctuations bigger, and if the hardware is too strong it will potentially hide impacts again. What I mean is: put a NASA PC there and it will most likely laugh at your pitiful attempts to see any difference in framerate; it simply has so much horsepower available that it won't care if you give it something more to render. The opposite is a potato PC that is already so slow there's not much difference left to show; you're already crawling, and kicking it again isn't gonna do much, it's either going to stop entirely and give up or keep crawling along the floor. That's why I'd recommend something in between: decent hardware, not so powerful that it simply ignores additional load, but not so slow that you can't see the impact.

I've only roughly based my values on my own hardware and tried to take into account where it stands compared to something I'd personally recommend for a decently running SL (which would be somewhere around a GTX 600 series). Some deeper research and more sophisticated tests would be welcome there. My personal tests have been somewhat successful so far, but I've only been able to test on my own users, who have reported exactly what I'd have expected: lots of human avatars getting jellydolled but the framerate staying good; then people want to get rid of the jellydolling, get a warning from me that this will drastically lower performance, and come back telling me that exactly that just happened and that it's almost unusable having those crazy 500-800k+ complexity avatars rendered.


19 minutes ago, ChinRey said:

Yes, but how? The optimal amount of reduction varies wildly between different meshes. In some cases you can't really improve performance by simplifying a LoD model at all without significant reduction in the visual quality. In other cases you can easily do a 90+% reduction with no negative effect worth mentioning.

A fixed "optimal reduction amount" is very likely to cause more harm than good since it's bound to be taken as "the one and only truth" and no matter where you set it, it will be unsuitable for the vast majority of meshes.

I don't even dare think about all the complications a flexible optimal reduction calculation would cause.

Wrong. An optimal range is an optimal range because it is recommended, not necessary.

Take your example. A mesh that can easily be reduced by 90% at its first LOD level would get a penalty, yes, but that penalty would be insignificant, since you'd be reducing the LOD by 90% on top of the max LOD most likely being very low complexity in itself already (and if not, you might once again be doing something wrong). Even if you were to add a 10x multiplier here (going with absolute extremes), you'd mostly just be multiplying the max LOD, since the first LOD is already a 90% reduction of the original value and all further ones are even smaller, effectively making the mesh consist of the max LOD only. An object with 1000 triangles reduced down to 100 triangles and then multiplied by 10 would be as complex as a 1000-triangle object, which is laughably little to the point it's not worth mentioning, and that's with an extremely punishing calculation like mine. If you manage to find an object with 100k triangles and reduce it down to 10% on the second LOD, I'd highly question whether that first LOD really needs 100k triangles in the first place. The goal of LOD1 is that it should look ever so slightly less detailed, which at medium distance should become all but invisible. Yes, there will be edge cases, but you cannot fix them all, and as explained these edge cases would be... well, edge cases.

A flexible calculation would be hell to implement and impossible to get right (an algorithm would never be able to replace a human with knowledge, expertise and common sense, aside from it having to be extremely complex, which in itself would mean a massive performance cost just for calculating this thing). Getting it right here doesn't mean finding a decent average range but a very, very tight optimal range, much smaller than with fixed ranges, and you'd want to make it much more punishing given that you trust the calculation to do its job well; otherwise you could just not do it at all and we'd be back to square one.

19 minutes ago, ChinRey said:

How about 20 avatars in the scene, each with a hundred unique 1024x1024 textures? Then add another 1500 1024s for the surroundings.

So any generic adult furry hangout. I guarantee you that those 1500 1024x1024 textures will have laughably little impact on your framerate (especially since that would be around 6GB of VRAM if you could get all of them to fully load; I'd even doubt whether you can load all of them at the same time) compared to a single avatar sporting his average 500k to 1 million triangles flailing all over your screen, possibly half of which are alpha, flexi and whatnot, with most of the rest being rigged.

I've already run several tests in such places, turning on full-res textures just to make all textures fully load (to the point my 6GB were completely full). The impact was quite small until my GPU started texture swapping at 6GB, compared to just having 2 more avatars not being jellydolled. Not to mention that with all those textures still loaded, my framerate absolutely skyrocketed when I jellydolled them all (jellydolling keeps their textures loaded).

Edited by NiranV Dean

1 hour ago, NiranV Dean said:

Take your example. A mesh that can easily be reduced by 90% at its first LOD level would get a penalty, yes, but that penalty would be insignificant, since you'd be reducing the LOD by 90% on top of the max LOD most likely being very low complexity in itself already (and if not, you might once again be doing something wrong). Even if you were to add a 10x multiplier here (going with absolute extremes), you'd mostly just be multiplying the max LOD, since the first LOD is already a 90% reduction of the original value and all further ones are even smaller, effectively making the mesh consist of the max LOD only.

I'm not sure if I understand you right; it seems to me that what you describe is pretty much how it works at the moment. (For rigid mesh, that is; LoD is totally broken for fitted mesh.)

1 hour ago, NiranV Dean said:

If you manage to find an object with 100k triangles and reduce it down to 10% on the second LOD, I'd highly question whether that first LOD really needs 100k triangles in the first place. The goal of LOD1 is that it should look ever so slightly less detailed, which at medium distance should become all but invisible. Yes, there will be edge cases, but you cannot fix them all, and as explained these edge cases would be... well, edge cases.

A 90+% reduction from high to low doesn't sound realistic to me either, but it's not at all unusual for it to be achievable from mid to low or from low to lowest.

Let me show you what I mean.

This is an OPQ Poplar 03 L39-01. 3 LI, 948 render weight.

[image: the OPQ Poplar 03 L39-01]

The triangle counts for the canopy are (from high to lowest LoD) 80-80-80-80 and for the trunk 123-73-31-16, with % reductions between LoD models of 40-57.5-48.5.

---

Here's a copper bowl (unnamed since it was a custom build and not for sale) 1 LI, 527 render weight:

[image: the copper bowl]

 

Triangle counts: 572-572-80-12. % reduction: 0-86-85.

This is almost perfect optimisation for LoD factor 1 (except that the customer for the bowl wanted a commercial texture with higher resolution than necessary), but they're not at all unusual items. Yet the amount of reduction they need between different LoD models varies from 0% to 86%. I couldn't find an example of 90+% reduction in a hurry but I think 86 is close enough to illustrate my point.

Edited by ChinRey

1 hour ago, ChinRey said:

This is an OPQ Poplar 03 L39-01. 3 LI, 948 render weight.

The triangle counts for the canopy are (from high to lowest LoD) 80-80-80-80 and for the trunk 123-73-31-16, with % reductions between LoD models of 40-57.5-48.5.

---

Here's a copper bowl (unnamed since it was a custom build and not for sale) 1 LI, 527 render weight:

Triangle counts: 572-572-80-12. % reduction: 0-86-85.

This is almost perfect optimisation for LoD factor 1 (except that the customer for the bowl wanted a commercial texture with higher resolution than necessary), but they're not at all unusual items. Yet the amount of reduction they need between different LoD models varies from 0% to 86%. I couldn't find an example of 90+% reduction in a hurry but I think 86 is close enough to illustrate my point.

Keeping the canopy as-is and reducing only the trunk seems fine to me, and with triangle counts this low you wouldn't see much penalty, especially since these values are somewhat close to my example triangle count ranges for each LOD.

The copper bowl on the other hand seems kinda wonky. Why would you keep its max and first LOD at max triangles and then massively reduce it to what is essentially an unidentifiable mess? Wouldn't it make much more sense to keep the first LOD pretty high (~80-90% of the original) to preserve the distinctive look of a bowl and then slowly go down until you hit something like 40-50% at the last? There would be zero penalty (with my example ranges), the bowl would keep its shape even if you zoom out a bit and would be somewhat recognizable even at a distance, while still reducing the (already extremely low) polycount a reasonable amount.

But aside from that, you're showing exact examples of what I mean, which invalidates what you previously noted. Your examples have such extremely tiny triangle counts that even the craziest of crazy penalty multipliers wouldn't do much to punish you, since what you've created isn't really punishable in the first place; with the objects you've shown you don't need to worry about getting punished even if you did it wrong on purpose.

The point of what I explained is that we get sort of a guideline, an incentive to optimize LODs: the higher the baseline complexity, the higher the punishment for doing it wrong. That doesn't mean you can't do it, but you'll simply be penalized with worse complexity and/or upload cost (which is honestly quite low unless you upload super dense meshes... and even then it's quite low). Also, when straying from the expected "optimal range" you get an increasingly higher punishment; that means you don't get an instant 100x multiplier just for being a few % outside the expected optimal range. Something like a 1.5-2x multiplier on complexity in the worst case (like going all the way up to 100% triangle count when 30-55% is expected) would be totally enough. We could lower the penalty to, say, 1.25x and go the other way too: the closer your triangle count is to the middle of the range, the more complexity reduction you get, once again a simple multiplier ranging from 1.0 down to ~0.95, so you get to reduce your complexity by up to 5%, up to three times. Combine this with much, much harsher complexity/jellydolling calculations, make jellydolls harder to turn off completely to prevent bad creators from "recommending" turning them off, and you'll quickly see those creators in a new situation where they'll have to adapt and attempt to optimize their content in order to stay competitive or face getting culled by jellydolls.
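A small sketch of what that graded multiplier could look like per LOD level; the 0.95 floor and the 2x worst-case cap come from the numbers above, while the linear ramp and the handling of undershooting are assumptions:

#include <algorithm>
#include <cmath>

// Sketch only, intended for LOD1-3 (LOD0 is the high model itself).
// lodPct, rangeMin and rangeMax are fractions of the high LOD's triangle
// count. Inside the range you earn up to a 5% discount near the middle;
// above it the penalty ramps linearly up to 2x at 100% of the high LOD.
// The ramp shape and the treatment of undershooting are assumptions.
static float lodComplexityMultiplier(float lodPct, float rangeMin, float rangeMax)
{
    if (lodPct >= rangeMin && lodPct <= rangeMax)
    {
        float mid = 0.5f * (rangeMin + rangeMax);
        float halfWidth = 0.5f * (rangeMax - rangeMin);
        float distance = std::abs(lodPct - mid) / halfWidth;  // 0 at middle, 1 at edge
        return 1.0f - 0.05f * (1.0f - distance);              // 0.95 .. 1.0
    }
    if (lodPct > rangeMax)
    {
        float overshoot = (lodPct - rangeMax) / (1.0f - rangeMax);
        return 1.0f + std::min(overshoot, 1.0f);              // 1.0 .. 2.0
    }
    return 2.0f;  // far below the range (e.g. a single-triangle LOD): worst case
}

// Example with the LOD3 range 0.30-0.55: a LOD3 at 42% of the high LOD gets
// ~0.95, one left at 100% gets 2.0, and a single-triangle LOD also gets 2.0.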

Edited by NiranV Dean

5 hours ago, NiranV Dean said:

But that's correct behavior. While textures are important and many big ones do have an impact on your framerate, their impact is by far not as big as geometry's, because geometry, unlike textures, isn't just rendered once.

It would be more correct to say that textures and geometry impact performance in different ways. Excessive use of textures can have a significant (far more than LL's complexity calculations account for) impact on performance if they fill up your VRAM. Textures may be "rendered once" but they have to remain in memory to remain visible on your screen. When you have avatars wandering around with literally half a Gigabyte of textures (or more, I've seen avatars using well over a gig) while enjoying a low complexity score, it indicates a major disconnect between the complexity calculations and performance impact. LL is absolutely undervaluing textures in these calculations.

 

3 hours ago, NiranV Dean said:

So any generic adult furry hangout. I guarantee you that those 1500 1024x1024 textures will have laughably little impact on your framerate (especially since that would be around 6GB of VRAM if you could get all of them to fully load; I'd even doubt whether you can load all of them at the same time)

This is demonstrably incorrect. I've been building sims for years and reducing the texture load of those sims has resulted in significant performance gains. Not just increased FPS (and I'm talking doubling or even tripling my framerates) for myself and all visitors to my sims, but also faster rez times and the complete absence of "texture thrashing". 


2 hours ago, NiranV Dean said:

The copper bowl on the other hand seems kinda wonky. Why would you keep its max and first LOD at max triangles and then massively reduce it to what is essentially an unidentifiable mess?

Not at all:

[image: the copper bowl's LoD models]

2 hours ago, NiranV Dean said:

Wouldn't it make much more sense to keep the first LOD pretty high (~80-90% of the original)

Not in this case, for two reasons. One is that with such a small reduction there is very little performance to gain, and you may well reduce performance that way, since the switch between LoD models also adds load. Hyper Mole did some testing on this and recommended that if you can't reduce by more than 25%, it's generally better to use the same model. I haven't tested this myself but @arton Rotaru gave the same advice and he tends to know what he's talking about.

  

2 hours ago, NiranV Dean said:

But aside from that, you're showing exact examples of what I mean, which invalidates what you previously noted. Your examples have such extremely tiny triangle counts that even the craziest of crazy penalty multipliers wouldn't do much to punish you, since what you've created isn't really punishable in the first place; with the objects you've shown you don't need to worry about getting punished even if you did it wrong on purpose.


I wouldn't call this extremely tiny, but that brings us to another point of course: optimisation of the main model. When it comes to visual complexity, the 203-triangle tree in my picture is about the same as a 1000-2000 triangle tree by one of the better mesh plant makers, and I've seen plenty of trees that struggle to match it even with a five-digit triangle count. I know I shouldn't brag but the fact is, when it comes to optimising trees and other plants I don't know of anybody who is even near my level.

But it could be. This isn't rocket science and most of the techniques are freely shared on this forum for anybody to read and learn. But unless they're as obsessed with performance as I am, why should content creators bother going that extra mile to make their builds more performant? There's nothing in it for them. This is one reason why I'm a little bit sceptical about a fixed standard for LoD reduction. Set the level too low and only a few builders will be able to take advantage of it; set it too high and there will be even less reason for content creators to do serious optimisation. Who is going to define what those levels are anyway? As far as I know there isn't a single Linden or Mole who has the faintest idea what is actually possible when it comes to mesh geometry optimisation.

Another reason is that I'm afraid we're on a slippery slope once we start adding considerations other than actual documentable performance to the calculation of performance metrics. There are so many other reasons - good, bad and dubious - why somebody would want to tweak the figures to encourage certain kinds of builds.

Edited by ChinRey
Clicked the post button too soon

Another point here is that nothing exists in a vacuum. Sure, you might have a furry avatar loaded down with a million flexi and rigged triangles, and you could argue it has the bigger impact on performance, but the texture load of the environment is also impacting performance. When both are bad enough (say, 6+ gigs of textures in the environment and said high-poly furry avatar), saying one is more impactful than the other becomes meaningless because both are killing your performance. Get rid of one and you've still got a laggy mess. That's why so many of those generic furry adult hangouts are laggy framerate killers even when no one is around, and why I'd rather spend my time at the hangout that uses less than a gig of textures in its environment, because there my framerates are significantly higher even WITH avatars around. (I can jellydoll/derender those at least.)

Edited by Penny Patton

29 minutes ago, Penny Patton said:

When both are bad enough (say, 6+ gigs of textures in the environment...

Just to chime in that this is not at all unrealistic. I know there is at least one house with almost 1000 1024x1024 textures for sale on MP. And yes, that is just the house itself. Fill it up with furniture from brands like A... and T... and decorate the surroundings with plants with a similar amount of lavish texturing and you may well end up with ten gigs or more of textures even without a single avatar anywhere to be seen.


57 minutes ago, Penny Patton said:

This is demonstrably incorrect. I've been building sims for years and reducing the texture load of those sims has resulted in significant performance gains. Not just increased FPS (and I'm talking doubling or even tripling my framerates) for myself and all visitors to my sims, but also faster rez times and the complete absence of "texture thrashing". 

I'd like to see a place where you can clearly prove that textures can have such a drastic impact.

The only way this can be effectively proven is by creating the exact same region or place twice (a skybox would be great for that, although you could also disable water, clouds, reflections, terrain and so on if they aren't needed, to lessen the bloat from things not relevant to the test): one version with everything at 1024x1024, until your 6GB is completely full, and a copy with optimized textures, writing down the framerate for each. Even better if you repeat this with no textures at all, and another time with absolutely no geometry whatsoever, to get some control values. I doubt you'll triple your framerate unless you make heavy use of alphas.

Again, what I'm saying is that it would be hard to have such a huge impact with textures alone unless something is going on that shouldn't be. And you won't even be able to load everything at 1024x1024, which is the next thing, so any test would be kind of unrealistic; the test would absolutely require having the TextureLoadFullRes debug setting enabled, and that can completely ruin it.

Not to sound like an ***** but I heavily doubt this much of a performance impact from textures alone. You can do a quick and dirty test yourself.

I was just standing at bare bottom; there are several avatars here, and I checked my framerate with everything and everyone on screen: 26 FPS average. I opened debug settings, enabled the TextureLoadFullRes debug setting and watched it redownload all textures; many 256x256 and 512x512 textures were redownloaded as 1024x1024. My VRAM usage jumped from ~1.5GB total to 4.4GB. That's easily tripled. My framerate is still 26 FPS. Nothing changed (after all textures were loaded, and they did successfully load; I'm at 5.4GB usage right now with background apps counted). If texture size really had such a huge impact, I would have seen at least some kind of FPS change, 1 FPS at the very least. It tripled my VRAM usage, that's a big chunk of bigger textures... but nothing, no change whatsoever, which kind of makes me doubt that textures alone can (up to) triple your framerate. Which leads me to believe that something else is going awry there.

It wouldn't be the first time we've uncovered weird behavior... Jas and I found out that super low poly meshes, stacked into each other for "faked animations" (before animesh was a thing), caused an unbelievable FPS drop; we're talking about 113 FPS down to 16 FPS. Invisible meshes, super low poly. It was hard to even see what these meshes were supposed to represent, that's how low poly they were. Yet a chunk of them, ~30-40 or so, basically nuked our framerate. Making all states visible immediately restored the full 113 FPS in an instant. That was before not rendering invisible things was a thing, of course (the LL Viewer still renders invisible things btw, so it's still an issue). The reason for this behavior was simply that these meshes, although completely invisible, were still using alpha and considered for rendering, and since they were not 0% alpha they were shoved into the alpha pass, which is slow af.

46 minutes ago, ChinRey said:

Not at all:

It's hard for me to imagine that a bowl still looks somewhat like a bowl with 12 triangles, but I'm not a computer, I can only use my imagination, and in my imagination that bowl would be quite... well, triangular... a pyramid at best. 80 triangles... yes, kinda. Still, the reduction seems quite harsh and doesn't really make sense to me. I know you want to game the system by using 100% on the first and second LOD so it doesn't collapse as quickly, and that's fine with a super low poly item, but it just shows that the LOD system in itself isn't really good to begin with, which is why I don't want to base any calculations on anything but the maximum LOD.

Edited by NiranV Dean

14 minutes ago, NiranV Dean said:

It's hard for me to imagine that a bowl still looks somewhat like a bowl with 12 triangles, but I'm not a computer, I can only use my imagination, and in my imagination that bowl would be quite... well, triangular... a pyramid at best.

I suppose you saw my post before I added the pictures. (I accidentally held down the Ctrl key when adding a line break so the post went online before it was ready.) But I have to take back what I said about it being almost perfect. It turns out I actually managed to pick an early prototype, not the finished one. (I was on the beta grid and in a bit of a hurry. Sorry.) But even so, with the pictures in my next reply you can see what kind of LoD resilience it's possible to achieve with those triangle counts.

 

14 minutes ago, NiranV Dean said:

...

it just shows that the LOD system in itself isn't really good to begin with

You can say that again but we have to work with what we've got.

I think the biggest problem with it is that the swap distances are hardwired into the system. The way it is made, it's really only suitable for a very narrow range of meshes (mainly depending on size); for everything else you just have to try your best to tweak the LoD models to minimize the damage. RenderVolumeLODFactor doesn't help here since it is applied to all items, not just the ones that need a bit of a LoD boost.
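For context, the LOD pick boils down to something of this general shape: an angular-size style term built from object radius, camera distance and the one global factor, compared against fixed thresholds (the thresholds below are placeholders, not the viewer's actual constants):

// Sketch of the general shape of LOD selection. The threshold values are
// placeholders for illustration, not the viewer's actual constants.
static int selectLOD(float objectRadius, float cameraDistance, float lodFactor)
{
    float apparentSize = (lodFactor * objectRadius) / cameraDistance;
    if (apparentSize > 0.30f) return 3;   // high
    if (apparentSize > 0.15f) return 2;   // medium
    if (apparentSize > 0.07f) return 1;   // low
    return 0;                             // lowest
}

// Because lodFactor multiplies the term for every object alike, raising
// RenderVolumeLODFactor pushes all swap distances out globally; it can't be
// tuned per object, so it never helps only the meshes that actually need it.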

Edited by ChinRey

6 minutes ago, ChinRey said:

I suppose you saw my post before I added the pictures. (I accidentally held down the Ctrl key when adding a line break so the post went online before it was ready.)

Nope, I saw the bowl, which is exactly the reason I'm saying that. It already looks very low poly; taking away like 80% of its polycount would end up as, uh... an octagon-ish half-sphere? Stylish. But 12? Come on. Pretty sure something like 572, ~500, ~400, ~250 would be much more desirable, since the object is already super optimized and has very few triangles to begin with. It would even work and still look decent with LOD factor 2, although that would be stretching it.

Edited by NiranV Dean

4 minutes ago, NiranV Dean said:

Pretty sure something like 572, ~500, ~400, ~250 would be much more desirable

I see your point but that would increase the land impact to somewhere between 5 and 10. Keep in mind that this is a typical indoor object, so if you're in its "lowest LoD zone" it's almost certainly going to be hidden behind a wall or two anyway. That's another complication for the simplification baseline idea btw. Objects that are made for secluded places, such as inside a house, need different LoD handling than those that are made to be seen from far away.


1 hour ago, NiranV Dean said:

I'd like to see a place where you can clearly prove that textures can have such a drastic impact.

The only way this can be effectively proven is by creating the exact same region or place twice (a skybox would be great for that, although you could also disable water, clouds, reflections, terrain and so on if they aren't needed, to lessen the bloat from things not relevant to the test): one version with everything at 1024x1024, until your 6GB is completely full, and a copy with optimized textures, writing down the framerate for each. Even better if you repeat this with no textures at all, and another time with absolutely no geometry whatsoever, to get some control values. I doubt you'll triple your framerate unless you make heavy use of alphas.

I've done this with multiple sims, always noting my FPS before and after optimizing the textures. I wouldn't put the time and money into optimizing the textures unless I was consistently seeing a high return in performance. Unfortunately, SL being SL I can't exactly show you the sims before and after. To do this I would need to buy two sims, make the exact same build in both sims, then keep them both online to compare. I think we can all agree that's just not feasible. Still, I can point you towards a couple places in SL with optimized texture use and you can see for yourself how your performance fares.

Both are fairly "high detail" environments. I urge you to look at not only the FPS, but the consistency, by which I mean you should note far fewer, if any, "freeze-ups" after the sim loads, faster rez times, and little to no texture thrashing. Keep in mind that your experience will vary based on the avatars present. When people quote FPS alone it always makes me skeptical because you can have choppy performance while SL shows you a nice big happy FPS as long as you're standing still.

The Ohnaka Strip (A sci-fi RP sim, not only high detail but also full of animesh) (ETA: Well, I can only vouch for the main street area and the branching rooms contained within that sim, looks like they added TPs to other sims that aren't as optimized.)

http://maps.secondlife.com/secondlife/The Outer Rim/107/196/3208

 

The Hentai Arcade (a generic furry adult hangout, but one I know that is mindful of texture use)

http://maps.secondlife.com/secondlife/Bexington/18/223/403

 

 I'm on a PC with a 2GB GTX 960 videocard and 12GB of RAM and I see consistently higher performance in these sims than I do in similar sims.

Edited by Penny Patton

10 hours ago, ChinRey said:

Just to chime in that this is not at all unrealistic. I know there is at least one house with almost 1000 1024x1024 textures for sale on MP. And yes, that is just the house itself. Fill it up with furniture from brands like A... and T... and decorate the surroundings with plants with a similar amount of lavish texturing and you may well end up with ten gigs or more of textures even without a single avatar anywhere to be seen.

I keep telling people that a lot of those texture bakes on buildings are unnecessary but it isn't a discussion they want to have.


15 hours ago, ChinRey said:

I see your point but that would increase the land impact to somewhere between 5 and 10. Keep in mind that this is a typical indoor object, so if you're in its "lowest LoD zone" it's almost certainly going to be hidden behind a wall or two anyway. That's another complication for the simplification baseline idea btw. Objects that are made for secluded places, such as inside a house, need different LoD handling than those that are made to be seen from far away.

Yes, see, that's why we need a system like this that actually reduces impact for doing it right rather than punishing you.

15 hours ago, Penny Patton said:

Both are fairly "high detail" environments. I urge you to look at not only the FPS, but the consistency,

I'm sure I would have written something about it if the framerate consistency had changed; my Viewer shows the current framerate, not an average. You can also get a better framerate overview in the Fast Timers console, which will show hitches. I might have just been lucky and this place simply doesn't bring up enough "trash" to show a difference, although it should be sufficient as a test since I'm not changing the texture amount, just the size; if anything it should be worse for me, since optimizing can also mean removing textures altogether. On the other hand, that might also be the reason I saw no difference... SL is extremely unstable, and it's hard to say that something is X or Y when your framerate alone can vary between unusable and over-the-top in the same place with no change whatsoever.

I suspect it's your GPU that's at fault here. Also, whether textures actively remain rendered or need to be looked up in the cache can have a big influence. I did not turn around (since the entire place was in front of me on screen); the same goes for objects. I've heard the Viewer over-aggressively culls when it reaches around ~1.2GB memory usage (a leftover from pre-64bit times), which will quickly de-rez textures and objects, forcing the Viewer to reload and thus hitch on rezzing them again... but that's a whole different issue and should be addressed too.

EDIT: Also went to The Outer Rim. Framerate stable as long as I stand still, as expected; turn around and my framerate drops like crazy no matter where I'm standing, once again as expected. I doubt there is such a thing as a stable framerate when turning around; it seems you can't even stabilize it in the sense of not having huge random spikes. Repeated the test with TextureLoadFullRes for fun: no change. Given that the hangout I was in had a big ton of textures being updated to higher resolutions and this place barely 100, I would have been surprised if it was any worse here. In both tests I get around 40 FPS (37-43 FPS with occasional semi-periodic drops to ~30). Turning drops me down to 20 in both tests.

Conclusion: SL remains different for everyone.

Edited by NiranV Dean
