

Posted (edited)

I made a mesh avatar. I used it for weeks. It had a render complexity under 20,000.

Recently, I replaced just the hands with a higher-quality mesh taken from one of the many Bento reference models out there (I don't remember which). (The armature didn't change, just the hand mesh.) I uploaded it, put in all the same textures and scripts as the last version had, and wore it for a day. Then I noticed my avatar complexity was over 75,000.

I switched to a slightly older version of the new one that I still had lying around (it had a minor error I hadn't noticed on the test server). Its complexity was about 20,000. I put all the textures and scripts into it and noticed it stayed around 20,000. Eventually I quit. Next time I logged in, it had also changed to over 75,000.

I've confirmed I'm using the same 1024x512 diffuse texture and 512x256 normal texture. I've also tried wearing (copies!) of the original version and it's staying stable at the old complexity. They all have custom LODs and a cube physics mesh, created with the same software. I did recently start playing with Firestorm for the first time ever but all uploads were done with LL's viewer.

Does anyone have an idea what might be going on?

EDIT: Problem found! See replies.

Edited by Quarrel Kukulcan
update
Posted
32 minutes ago, Wulfie Reanimator said:

When you say "higher-quality mesh," what do you mean by that exactly?

More verts.

But never mind. PROBLEM IDENTIFIED!

It was alpha blending.

If any face is set to Alpha Blending mode or given a Transparency other than 0%, complexity shoots up.

If they're all set to Emission, Cutoff, or None, it drops to what I'm used to seeing.

Posted (edited)
On 7/25/2020 at 9:11 AM, Wulfie Reanimator said:

P.S. Is Complexity still based on an average of the reported complexity from surrounding viewers?

I think not, but what it is based on is anybody's guess.

Take a look at this:

[Screenshot: the lag monitor prototype rezzed in-world, showing its readout]

That thingy you see in the center is the prototype for the first render cost based lag monitor ever to be sold in SL. (I withdrew it from the market when I discovered that render cost based lag monitors are useless. Sadly, many other ignorant and/or unscrupulous sellers still foist them off on unsuspecting customers.)

The significant code is: llGetObjectDetails(ThisAviKey,[(integer) OBJECT_RENDER_WEIGHT,(integer) OBJECT_RUNNING_SCRIPT_COUNT,OBJECT_NAME]);

Perfectly standard in other words.

As you see, there is one avatar in a neighbor region, 2,480 meters away. His viewer will return 0 render weight for Bel since they are well and truly out of sight from each other. There were also three other avatars even further away in other neighbor regions when I took this picture. (Bel is one of my alts in case anybody wonders. She's supposed to be my scripter but most of the time she's just horsing around.)

That means, the way it used to be calculated, the script would have returned a render weight of 7448, i.e. (0+0+0+0+37242)/5, for Bel. Adding or removing avatars from the regions, with Bel inside or outside their draw distances, did not affect the readout. So obviously and fortunately, data from other avatars' viewers no longer counts.

But as you can see, it doesn't match the actual data from Bel's viewer either. That is, it didn't when I took the picture. The first reading I got was about 16000. Right now, about 25 minutes after I rezzed it and 26-27 minutes after I logged on, it says 37178. So what happened is that it started with a result that didn't seem to make any sense at all, then it slooooo....oooooo...............ooooooooooooo.....wly adjusted towards the actual input data.

Oh well. Those render cost reader scripts were bogus anyway. It's not as if one more fatal blow makes any difference to a corpse.

Edited by ChinRey
Posted

Your complexity is calculated by the viewer using a well-defined, if pretty useless algorithm. 

It can be found in the code (for those who can read such things) starting from llvoavatar.cpp function calculateUpdateRenderComplexity()

The complexity is reported to the lab and as Wulfie says they use an average on their side.

In future, the expectation is that this calculation will move mostly serverside as part of the bake service. This is one of the changes in the ArcTan project, I don't know any more details than that though.

 

 

 

Posted (edited)
18 hours ago, Beq Janus said:

It can be found in the code (for those who can read such things) starting from llvoavatar.cpp function calculateUpdateRenderComplexity()

It can also be found here in a more "human-readable" form: http://wiki.secondlife.com/wiki/Mesh/Rendering_weight

Reading the code is fun though. Object complexity is internally referred to as the "shame" value.

Edited by Wulfie Reanimator
Posted (edited)
11 hours ago, Beq Janus said:

The complexity is reported to the lab and as Wulfie says they use an average on their side.

The average of what?

It used to be the average of data reported from "nearby avatars", meaning all avatars in the same region and in neighbor regions. This was what took the whole thing from useless to hilarious. That is not the case anymore; data retrieved from other avatars is no longer included in the calculation.

If I remember right, the last time I dragged out that miserable ARC monitor (about a year ago, I think), it simply relayed the value from the one avatar straight away. This time it started with a ridiculously low figure and then it was gradually adjusted to the value actually reported by the viewer.

 

2 hours ago, Lucia Nightfire said:

It's slow-updated, long-trended data querying what viewers tell the server, plus there is a 500k cap which is done pre-trend calc, not post.

Add to the list of fatal flaws that there is no way to calculate a realistic render cost for fitted mesh.

Edited by ChinRey
Posted
2 hours ago, ChinRey said:

The average of what?

It used to be the average of data reported from "nearby avatars", meaning all avatars in the same region and in neighbor regions. This was what took the whole thing from useless to hilarious. That is not the case anymore; data retrieved from other avatars is no longer included in the calculation.

To my knowledge it is still an average, though not a straight mean. I have a notion that it removes all outliers from the calculation; the fact that Black Dragon reports completely different numbers from other viewers but does not skew the overall figures is an indicator of this. To be honest though, I could have completely dreamed that up 🙂 .

@Vir Linden is best placed to talk about this though as he has worked most closely with the calculations lately and might also be able to explain a bit more about the future direction. 

@Wulfie Reanimator's link is good and appears mostly correct for viewers other than BD. The key addition that is not on that page is a base 1000 addition for animesh IIRC. 

Posted (edited)
Quote

@Wulfie Reanimator's link is good and appears mostly correct for viewers other than BD. The key addition that is not on that page is a base 1000 addition for animesh IIRC. 

From the source code I linked:

// llmeshrepository.h
const F32 ANIMATED_OBJECT_BASE_COST = 15.0f;
// LLVOVolume::getRenderCost

// Streaming cost for animated objects includes a fixed cost
// per linkset. Add a corresponding charge here translated into
// triangles, but not weighted by any graphics properties.
if (isAnimatedObject() && isRootEdit())
{
    shame += (ANIMATED_OBJECT_BASE_COST/0.06) * 5.0f;
}

Render cost gets an additional 1250.

Edit: I found another bit of code. It seems like attached animesh gets the additional 1000.

// LLVOAvatar::accountRenderComplexityForObject

const F32 animated_object_attachment_surcharge = 1000;

if (attached_object->isAnimatedObject())
{
	attachment_volume_cost += animated_object_attachment_surcharge;
}

 

Edited by Wulfie Reanimator
Posted (edited)
2 hours ago, Beq Janus said:

To my knowledge it is still an average, though not a straight mean. I have a notion that it removes all outliers from the calculation; the fact that Black Dragon reports completely different numbers from other viewers but does not skew the overall figures is an indicator of this. To be honest though, I could have completely dreamed that up

None of us have access to the server code of course so we can't know for sure. I can only report the observable facts.

When I originally developed the script, it was very obvious that the result was skewed by distant avatars. I remember one specific incident when I was doing some tests 1000 m up in the sky in a perfectly empty quartet of premium sandboxes. Suddenly the reading dropped to half and when I checked, I discovered somebody had entered one of the neighbour sandboxes at ground level.

One thing that is absolutely certain is that although the script hasn't been modified in any way, its behaviour is totally different now so something must have been changed in the server-side formula. Currently I can't find anything to indicate that the output is affected by nearby avatars in any way.

Maybe Vir will notice you paging him and chime in with an answer.

Edit: I probably should stay away from this discussion because I find it really agitates me even now after so many years. I ... ummm I mean my alt ... spent a lot of time and effort on that script, mostly trying to figure out what the h**k was going on and why it didn't work as intended (and creating the documentation for a JIRA which of course was neglected) and Hattie Panacek did a great job making an amazingly cool looking mesh thingy to replace the prototype's simple prim container. But then of course, before we really could start marketing the monitor, we realized how badly LL had screwed up and being honest, we decided we just had to scrap the project, only to see less competent and/or less honest merchants moving into the market a few months later. I am mostly happy to hear there may be a solution on the horizon of course but it's certainly mixed with sadness and even a little bit of anger. By now the fakers own the market and those of us who actually chose to do the right thing aren't going to get anything in return for the time and effort we spent, not even an apology from LL who is to blame for this whole misery. Then again, it's not as if it's the first time LL has screwed over me and other honest content creators this way and it certainly isn't the last. I suppose it only proves the old saying that no good deed goes unpunished. :(

Edited by ChinRey
Posted (edited)
3 hours ago, Lucia Nightfire said:

Vir didn't implement OBJECT_RENDER_WEIGHT. Simon did.

I think the reason why Beq paged Vir rather than Simon was that he is in charge of ArcTan and upgrading/correcting OBJECT_RENDER_WEIGHT is part of that project.

But we don't want anybody to feel left out of course, so just in case: Paging @Simon Linden ^_^

Edited by ChinRey
Posted
15 hours ago, Wulfie Reanimator said:

Reading the code is fun though. Object complexity is internally referred to as the "shame" value.

Woops, I was stupid enough to take a look at the code and I can actually understand it. Does that mean I've become a nerd??? 🤢

Posted

 

On 7/25/2020 at 8:35 AM, Beq Janus said:

Your complexity is calculated by the viewer using a well-defined, if pretty useless algorithm. 

I usually tell people "high complexity is always bad, low complexity is not always good." Unless LL has changed the formula recently it largely undervalues things like VRAM use. An avatar with a lot of large textures can tank framerates just as badly as one with a high poly count. High VRAM use also sucks up bandwidth.

It would be more useful if LL displayed VRAM use and triangle count. Even better if the viewer gave people that info on their own individual attachments, so residents could pinpoint problematic attachments and optimize their appearance.

Posted
3 hours ago, Penny Patton said:

I usually tell people "high complexity is always bad, low complexity is not always good."

What is a rigged mesh object with a complexity of 20k when it's all visible but 75k when the hair is off? Good? Bad? Something in between? Is the "real" complexity closer to the low end in both cases? The high end? It seems like the complexity calculation is weighting based on the vertex count of the entire object instead of just the count of alpha-blended triangles.

Posted (edited)
15 hours ago, Quarrel Kukulcan said:

What is a rigged mesh object with a complexity of 20k when it's all visible but 75k when the hair is off? Good? Bad? Something in between? Is the "real" complexity closer to the low end in both cases? The high end?

It's hard to say but this online tool might be helpful: https://www.mathgoodies.com/calculators/random_no_custom. Set the lower limit to 2000 and the upper to 2000000 and the result is as good as it gets.

Seriously, there are at least six known factors that skew the render weight calculation; three of them are unique to fitmesh, the other three apply to all objects. In addition there are two factors that look a bit suspicious. Two of these factors can cause the calculated weight to be higher than the actual, the others work the other way.

I and others who have looked into the issue have tried to come up with a reliable way to estimate actual render weight for fitted mesh but it's very, very difficult. The best I can say is that typically you can expect it to be somewhere between two and 100 times the calculated weight but it can be much higher or much lower.

(Edit: Changed some numbers - I forgot about the alpha multiplier being applied to the entire object. Thank you for reminding me, Wulfie.)

Edited by ChinRey
Posted (edited)
9 hours ago, Penny Patton said:

I usually tell people "high complexity is always bad, low complexity is not always good." Unless LL has changed the formula recently it largely undervalues things like VRAM use. An avatar with a lot of large textures can tank framerates just as badly as one with a high poly count. High VRAM use also sucks up bandwidth.

It would be more useful if LL displayed VRAM use and triangle count. Even better if the viewer gave people that info on their own individual attachments, so residents could pinpoint problematic attachments and optimize their appearance.

Complexity doesn't seem to take VRAM use into account at all. There is a comment that says "weighted attachment - 1 point for every 3 bytes" in a part of the calculation that concerns rigged mesh, but nothing points to that calculation actually being done. Rigged mesh gets a 20% increase to Complexity, that's it.

Also, you can see the VRAM and triangle count in the Inspect window -- on Firestorm. The one in the LL Viewer does lack that information, but you can still "kind of" get that information from the Developer > Render Metadata > Triangle Count option. It's a pretty clunky way to view that information though.

Firestorm's inspector:

[Screenshot: Firestorm's object inspector, including VRAM and triangle counts]

Metadata, Triangle Count:

[Screenshot: the LL viewer's Render Metadata > Triangle Count overlay]

5 hours ago, Quarrel Kukulcan said:

What is a rigged mesh object with a complexity of 20k when it's all visible but 75k when the hair is off? Good? Bad? Something in between? Is the "real" complexity closer to the low end in both cases? The high end? It seems like the complexity calculation is weighting based on the vertex count of the entire object instead of just the count of alpha-blended triangles.

It's not about "alpha blended triangles" but "alpha-even-once." Again, in the source code: (I've redacted a lot of the more convoluted bits.)

shame = 0;

...

for (i = 0; i < num_faces; ++i)
{
	if (img)
	{
		texture_cost = 256 + (ARC_TEXTURE_COST * (FullHeight / 128 + FullWidth / 128));
	}
	if (PoolType == POOL_ALPHA)
	{
		alpha = 1;
	}
	else if (img && PrimaryFormat == GL_ALPHA)
	{
		invisi = 1;
	}
	if (face->hasMedia())
	{
		media_faces++;
	}
	if (textureEntry)
	{
		if (textureEntry->getBumpmap())
		{
			bump = 1;
		}
		if (textureEntry->getShiny())
		{
			shiny = 1;
		}
		if (textureEntry->getGlow() > 0)
		{
			glow = 1;
		}
		if (face->mTextureMatrix != NULL)
		{
			animtex = 1;
		}
		if (textureEntry->getTexGen())
		{
			planar = 1;
		}
	}
}

...
  
if (alpha)
{
	shame *= alpha * ARC_ALPHA_COST;
}

The code checks every face, and enables different multipliers for the whole object. Note that these are not increments, as in "if this face has alpha, add 1 alpha face," but "if this face has alpha, use the alpha multiplier for this object." You'll quadruple your total Complexity if even one tiny triangle has alpha blending on it.

P.S. I can't for the life of me find the part that accounts for triangle counts either. The getRenderCost function is only concerned with textures; the other parts are... somewhere else... and they're not clearly paired up for some reason. I can find lots of functions that get the total/high LOD triangle counts, but those aren't used in any complexity calculations I can find.

Edited by Wulfie Reanimator
Posted
6 hours ago, Wulfie Reanimator said:

It's not about "alpha blended triangles" but "alpha-even-once."

So if I want to abuse the system and maximize the number of other users who render me, use no alpha blending (or isolate all alpha panels and eyelashes and such into their own separate texture-optimized objects).

Posted
11 minutes ago, Quarrel Kukulcan said:

So if I want to abuse the system and maximize the number of other users who render me, use no alpha blending (or isolate all alpha panels and eyelashes and such into their own separate texture-optimized objects).

The "object" here is a single link of a bigger linkset (if we're talking about linksets).

The getRenderCost function is called from other functions, like here, which calculates the total cost of an attachment's linkset.

Posted (edited)
4 minutes ago, Wulfie Reanimator said:

The "object" here is a single link of a bigger linkset (if we're talking about linksets).

It's not a linkset. I'm talking about designing mesh objects to not cause jellydolling.

Edited by Quarrel Kukulcan
Posted (edited)
2 minutes ago, Quarrel Kukulcan said:

It's not a linkset.

I guess since it's a percentage increase, you would benefit from splitting the object to isolate the alpha texture from every other cost.

The same applies to every cost, really. I think there's no cost to additional links in a linkset, at least as far as I can tell.

Edited by Wulfie Reanimator
Posted (edited)
5 hours ago, Quarrel Kukulcan said:

So if I want to abuse the system and maximize the number of other users who render me, use no alpha blending (or isolate all alpha panels and eyelashes and such into their own separate texture-optimized objects).

That's not abuse. Avoiding alpha blending is a genuine saving and if you split the parts with alphas into separate meshes, you're simply eliminating the calculation error Wulfie mentioned, making the calculated render complexity more rather than less realistic.

It's fairly minor stuff anyway. If you really want to abuse the system, the fitmesh scale bug is the one that matters.

Scale is an important factor for render cost since the larger an object is, the longer it stays at higher (and more complex) LoD models. With unrigged meshes, the size used for calculating render cost is the object's actual size in-world, but for fitted mesh it's the size it was uploaded at. Upload size is of course irrelevant for the appearance of fitted mesh since it's scaled to fit the avatar anyway, and that means if you upload a fitted mesh at the minimum size possible in SL (1x1x1 cm) you can drastically reduce the nominal (but not the actual) render cost. In extreme cases we're talking about more than 99% reduction.

As I said, there are at least six - and probably eight or more - flaws in the calculation that skew the result but the fitmesh scale bug is the biggest by far. Fortunately, many fitmesh makers are honest and considerate people. I think I have to give special credit to Slink, Maitreya, Blueberry and TMP here. As far as I know, they have always tried their best to play fair and not cheat and when we think of how popular their products are (or were when it comes to TMP), imagine how bad it would have been if they too had given in to the temptation.

Unfortunately many others have succumbed to the Dark Forces, including at least one of the major mesh head brands. Most of them probably aren't even aware of what they're doing. The technical skill level among SL content creators at large is fairly low, so either they don't understand that the different calculated weights have real implications or they believe the doctored numbers are genuine. Many even cheat by accident. Maya uses a different method to determine size (yes I know, Wulfie and Optimo, that's hardly a technical explanation). If you build with Maya and are unaware of this you may well end up uploading at minimum size without even noticing.

Edited by ChinRey
Posted
20 hours ago, ChinRey said:

As far as I know, they have always tried their best to play fair and not cheat and when we think of how popular their products are (or were when it comes to TMP), imagine how bad it would have been if they too had given in to the temptation.

Have you inspected their LODs?

