Rigged mesh LoD bug


OptimoMaximo


Based on my testing of all this sort of stuff a while ago (and my memory is a bit hazy on it), this is how I think it works, so please correct me if any of it tests out to be incorrect.

The in-world avatar BB (bounding box) size is altered based on the SL dimensions of the rigged asset and where it is attached, despite how it visually looks. That is to say, the same formula for altering an avatar's bounding box is used regardless of the worn item's rigged status. The alterations to the avatar bounding box are based on the worn position and scale of the unrigged parameters of the worn item, whether rigging data is present for that item or it's simply a giant box prim; the same calculation is used. It's the same issue I originally dredged up when assessing the validity of the ARC calculations with respect to rigged assets.

Rigged mesh assets have a bounding box size based on their unrigged scale. I.e., if I rig a 0.5 m (per side) six-sided box to my head bone and upload it, the bounding box size of the rigged item (and also the effect that wearing it has upon my overall avatar bounding box) is based solely upon what size that object is in SL when it's placed on the ground. If I place it on the floor and scale it down to 0.01 m, then its associated bounding box on the avatar when worn will be that size; however, visually it will appear as a 0.5 m rigged mesh box over the avatar's head, because its vertex data dictates that's where it will appear. The ARC costs will be calculated as if I were wearing an unrigged asset of that size, and the overall avatar bounding box does not grow. If I scale that same box up to 64 m per side and wear it, it will now make the avatar's overall bounding box absolutely ginormous, and its effective ARC cost, for the box alone, is raised. However, the associated ARC costs for all the other worn attachments, which have now had their LOD swap rate changed by the fact that I'm wearing a 64 meter box (thus making my avatar bounding box gigantic), are not. Which is probably not intended behavior either. It's important to keep in mind that in both of these cases the box appears as a 0.5 m cube on the avatar's head at all times; it's just different "under the hood".
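For concreteness, here is a rough sketch of how switch distances are commonly reported to scale with the bounding radius in the SL viewer. The divisors (0.24, 0.06, 0.03) and the 1.125 default LOD factor are assumptions drawn from community write-ups, not confirmed viewer source:

```python
# Hedged sketch: approximate LOD switch distances, scaling linearly with the
# bounding radius. The divisors and the default LOD factor are assumptions.

def lod_switch_distances(radius_m: float, lod_factor: float = 1.125) -> dict:
    """Distances (m) at which the model drops to Mid, Low, and Lowest."""
    return {
        "high_to_mid": radius_m / 0.24 * lod_factor,
        "mid_to_low": radius_m / 0.06 * lod_factor,
        "low_to_lowest": radius_m / 0.03 * lod_factor,
    }

# Using half the largest extent as a stand-in for the bounding radius:
print(lod_switch_distances(0.005))  # the box scaled to 0.01 m on the ground
print(lod_switch_distances(32.0))   # the same box scaled up to 64 m per side
```

Under these assumptions the 0.01 m version collapses to Lowest within arm's reach, while the 64 m version keeps its High LOD out to 150 m, even though both look identical when worn.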

This however has nothing to do with how rigged assets are stored and rendered by SL, which I think was the original question! All vertex data are stored as offsets from the avatar skeleton. The offset of these vertices is stored in a rather generic system unit, which, based on what Beq tells me, is an integer distance that's divided into and interpreted as meters, effectively making your vertex distances from your avatar skeleton stored as measurements that are, for the sake of argument, meters. This offset position is multiplied at render time by all of the skeleton bone scale modifiers within the skeleton, i.e. joint sliders or, in the cases of giant avatars and tinies alike, a scale applied directly to the mPelvis bone (I confess I'm still not entirely certain how this scale modifier actually makes it in world and is applied! I've just seen it in action enough times to know that it works). This vector-based offset position is updated with skeleton deformations etc., and that's how our rigged content moves. However: if you export your scene from your external 3D app in cm without doing the proper conversion to meters, such that your vertex data is measured in cm, SL will still interpret those values as if they were in meters. This will effectively set every vertex offset from its parent bone to be different from your intended value by a factor of 100. There are internal limits to cap these values, and you'll wind up with a giant puffy avatar that's just a near-spherical blob of vertices, each following its respective bone, jiggling about like some sort of terrifying, vaguely humanoid koosh ball. This behavior, however, will have zero effect on the avatar's ARC cost or bounding box when compared to a correctly scaled and exported avatar that has the same SL object scale (size on the ground).
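A toy illustration of that factor-of-100 misread. The clamp value is a made-up placeholder; the real internal cap (and how SL applies it) is not something I can confirm:

```python
# Hedged toy: offsets authored in centimeters but read as meters come out
# 100x too large. CLAMP is a made-up placeholder, not SL's real limit.
CLAMP = 50.0  # hypothetical maximum offset from a bone, in meters

def interpret_offsets(offsets, converted_to_meters=False):
    if converted_to_meters:
        return [v / 100.0 for v in offsets]  # proper cm -> m conversion
    # No conversion: 2.5 cm is treated as 2.5 m, then clamped to the cap
    return [max(-CLAMP, min(CLAMP, v)) for v in offsets]

authored_cm = [2.5, -180.0, 30.0]  # sensible human-scale bone offsets
print(interpret_offsets(authored_cm))        # misread: [2.5, -50.0, 30.0]
print(interpret_offsets(authored_cm, True))  # correct: [0.025, -1.8, 0.3]
```

The misread values are human-scale numbers blown up to room-scale distances from each bone, which is exactly the "puffy blob" failure mode described above.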

It is worth mentioning as a final aside, though, that SL will interpret an upload that is 50 cm to a side as a 0.5 meter cube. But at the same time it will translate the vertex data from the object at upload time into meters, creating a 0.5 m cube in world whose rigged vertex offset data is specified at 50 meters away from a bone (which hits the integer limit for maximum allowable distance from a bone) and makes things a mess. Note: this effect is something I've noticed while browsing DAE files, but it's possible that different DAE-writing export plugins might do the unit conversion differently, such that SL interprets the data correctly despite an export in cm.
 


50 minutes ago, Beq Janus said:

I think I understood, and it would be a bug if true - and very worthy of a Jira - but I have a problem with the theory. Why do you state that the real scale of an SL avatar is cm?

Because it's how LL built it in the first place, and metric scale needs a compensation method, as shown by the export/import in both Blender Avastar and Maya itself. It's how LL distributed it in the first place for third parties to develop other stuff (like SL-related plug-ins). As I was also saying, and you confirm it, vertices get a position from the weighting and the scale factor is no longer taken into account, while 3D programs in general do need some attention to scale when it comes to hierarchies.

The difference in treatment between rigged and static meshes is very clear to me, as I have witnessed that, for animation encoding, it works on units, not specifically meters or centimeters. Which, for encoding, is more than fine, BUT... when it comes to transforming a hierarchy root node, scale IS a factor that may (or may not) influence the outcome.


16 hours ago, polysail said:

Note: that this effect is something I've noticed while browsing DAE files, but it's possible that different DAE writing export plugins might do the unit conversion differently such that SL interprets the data correctly, despite doing an export in cm.

Indeed, depending on the application, the Collada file will report the scene unit as 1.0 or 0.01. Again, this is a feature in order to keep the scaling consistent.

 

16 hours ago, polysail said:

It is worth mentioning as a final aside though ~ that SL will interpret an upload that is 50 cm to a side as a 0.5 meter cube.  But at the same time it will translate the vertex data from the object at upload time into meters, creating a 0.5 m large cube in world that has rigged vertex offset data specified at 50 meters away from a bone

This would happen if you changed the linear unit scale without compensation, say by manually editing a Collada exported in meters and setting it as centimeters. Which is a practice I often hear being used in reverse when exporting from Maya, unfortunately.
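As a concrete illustration of where that unit scale lives, here is a minimal sketch that reads the `<unit meter="...">` declaration from a COLLADA `<asset>` block and applies it to raw values. The tiny document below is fabricated for the example:

```python
# Hedged sketch: COLLADA declares its linear unit in <asset><unit meter="...">.
# An importer should multiply raw values by that factor to get meters; editing
# the attribute without rescaling the data is the mismatch described above.
import xml.etree.ElementTree as ET

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

dae = """<?xml version="1.0"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <asset><unit meter="0.01" name="centimeter"/></asset>
</COLLADA>"""

root = ET.fromstring(dae)
meter = float(root.find("c:asset/c:unit", NS).get("meter", "1.0"))

authored = [50.0, 50.0, 50.0]        # a 50-unit offset, in scene units
in_meters = [v * meter for v in authored]
print(in_meters)                      # 50 cm becomes 0.5 m when unit is cm
```

Flipping the `meter` attribute to `1.0` by hand, without rescaling `authored`, reproduces the 100x blow-up discussed in the previous posts.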


When exporting a file from 3ds Max, if the scene is set to cm and, upon export, the file is set to convert cm to meters, that item will appear in world at its appropriately converted scale. However, the rigging data will be incorrect, as it won't have been converted properly. If you look in the DAE file, the values are all off (I think? It's a vague recollection I have of this).


4 hours ago, polysail said:

When exporting a file from 3ds max ~ if the 3ds max scene is set to cm ~ and upon export, the file is set to convert cm to meters.

It all depends on how you began the scene in the first place. In 3ds Max, meter based, you will have a giant avatar when you set the scene to centimeters. However, if you group it and assign it a scale value of 100, you perform a scale compensation that brings everything back to relative distances.

Oddly enough, even if I set my Maya scene to meters, an FBX or Collada export will mark the scene as centimeters anyway.

 


On 3/16/2018 at 11:08 AM, polysail said:

The offset of these vertices is stored in a rather generic system unit ~ which based on what Beq tells me is an integer distance that's divided into and interpreted as meters ~ effectively making your vertex distances from your avatar skeleton stored as measurements that are ~ for the sake of argument: meters.

I missed this part and I wanted to specify something. Internally, the equivalence between generic unit and meter is done in world, implicitly or explicitly. As per my experience with .anim file serialization, I observed behavior pointing to plain "units" as being what matters: I had two options for anim files. Either keep the scene as in the rigging setup and animate at that size, exporting the linear values as they appeared in the transforms (in centimeters), or, as I did for animations on props, scale everything up to meter scale and convert the measurements from centimeters to meters. As long as some scale compensation is done within the hierarchies, nothing should really be different.


I have been making some rigged meshes and getting low ARC with decent LOD distances, using low triangle counts for Low and Lowest because I can work out that the triangle size is smaller than a screen pixel at those distances.

Now some of you guys are saying low triangle counts are wrong.

I have the feeling that a huge part of the problem is crap documentation, internal to Linden Lab and published.

Can I trust anything on this? Can I trust you, J. Random Linden, or my viewer?

At least I can say that I have tried to keep triangle counts small. 


There's nothing wrong with low triangle counts themselves, unless the LOD model that you can still see is so horribly degraded that it is a jumbled mess of triangles. The whole idea of having four LOD levels is that your model should appear to have less detail as you view it from farther and farther away, but you shouldn't carry the idea of "less detail" to such an extreme that you see garbage. (Parenthetically, one of the first "modern" artists to understand and use the concept of LOD was Leonardo da Vinci, although he saw it as an analog concept rather than defining discrete levels. If you study his paintings, you'll see that he not only reduces the size and the color saturation of things at a distance, but also includes fewer details. That was a departure from the style of his contemporaries.) So, if you expect people to still see the Low or Lowest LOD models of your object at a reasonable distance, you should keep the vertex count (or triangle count) high enough that they actually see a simplified version of the object. That means doing more than just watching the number of vertices; you have to make LOD models that actually look like something.


As Rolig correctly says, low poly is good as long as it's not a poor model. If you manage to make your content look decent at all LoD levels, it's fine; that's what you should aim for. The problem at hand is that, for rigged content, the distances at which each LoD level gets displayed are badly mismatched between the rezzed (static) state and the worn state. It is OK to use the avatar bounding box as the reference size so that all mesh pieces switch LoD at the same time, but the calculations give out a different result from what they are supposed to output. Therefore the ARC calculations aren't realistic: the mesh is treated as bigger than it actually is, which makes the overall avatar bounding box artificially bigger, increasing by a LOT the threshold distances at which each LoD kicks in for ALL of the rigged attachments. Even a tiny pinky ring would keep displaying its high LoD at useless distances, given its actual size on the avatar.

I'd rather base the calculations on the avatar's hitbox, making sure it resizes along with the skeleton's joint positions (in the case of custom joint positions for a 3 meter tall/long character/creature). This way it may become a relatively consistent bounding box replacement, consistent with the actual avatar size. Not to mention that collisions would improve on such "shaped" avatars.


If I can't trust the numbers for the LOD distance that I get, how can I judge whether the detail of the lower LOD models is too low?

That's part of what I do. I am told that the Low-Lowest switching distance on the rigged mesh I made is 107.1 m (I love the excessive precision...), which means it subtends about 3 milliradians. Figuring out what that is in screen pixels depends on the physical screen and the actual angle of view (a lot of people confuse zoom and dollying, and right from the start camera-distance changes have been wrongly called zooming in the documentation). I've never found any plausible numbers for the default angle, but, from the mini-map in Firestorm and the view indicator shown, it's not far from 2 radians. With my screen, and erring on the side of detail, I decided 3 milliradians was 6 screen pixels.

Yeah, excessive precision. The difference between 6 screen pixels and 5, at that distance, is about 18 m.
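The arithmetic above can be sketched as follows. The 2.0 rad field of view and 1920 px screen width are assumptions standing in for one particular setup, and the 0.32 m object size is back-derived from the 3 mrad / 107.1 m figures:

```python
# Hedged sketch of the back-of-envelope pixel estimate: subtended angle as a
# fraction of the field of view, times the screen width. FOV and screen width
# are assumptions, not SL defaults.
import math

def pixels_on_screen(object_size_m: float, distance_m: float,
                     fov_rad: float = 2.0, screen_px: int = 1920) -> float:
    """Approximate on-screen width in pixels of an object at a distance."""
    subtended = 2.0 * math.atan((object_size_m / 2.0) / distance_m)  # radians
    return subtended / fov_rad * screen_px

# An object of roughly 0.32 m at the reported 107.1 m switch distance:
print(pixels_on_screen(0.32, 107.1))
```

With these assumed numbers the object covers only about 3 pixels at the switch distance, which is the same order as the estimate in the post; a narrower field of view pushes the figure toward 6 pixels.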

Anyway, when I dug through the JIRA I found a bug report suggesting that some Linden had used the wrong calculation, and doubled the switching distances for rigged mesh. And I read the conclusion as that nothing had been done to fix it.

But if that is the way it works, where is it documented? Where's the link from the LOD explanations to that particular JIRA entry? 

I can see the point of all rigged mesh on an avatar using the avatar bounding box as the base for the LOD switching, but when you dig into it, it's the radius of the bounding box. OK, but then what about people who want to do the giant robot thing? Use the largest bounding box of the set. It still makes sense, but it's all rather pointless if you don't bother to tell people.

Apart from the giant robots, just using the avatar height would work pretty well, but how quickly can you get that number?

 


8 minutes ago, arabellajones said:

If I can't trust the numbers for the LOD distance that I get, how can I judge whether the detail of the lower LOD models is too low?


If you are using my tools in Firestorm you can trust the numbers under the following conditions.

1) You are examining either a single mesh or are looking explicitly at a link in the linkset using "edit linked". 
2) You are comparing to a typical setup. The table allows for the LL default on mid through ultra settings, i.e. 1.125; likewise the same settings range for the "FS default", 2.0. The final column is your personal setting.

The number shown is the value that the viewer pipeline calculates as part of the LOD calculation process; thus it is the value used to do the actual LOD switch. There may well be some use cases that I missed, but for the common options that number is right. The value for rigged mesh is very wrong, and yes, nothing is likely to be done about that directly, because to do so would cause outrage due to all the mesh bodies that would suddenly look like garbage.

If you have a particular problem with the precision, raise me a Jira on the Firestorm bug system (jira.phoenixviewer.com) and I'll consider it when I get back to coding after Fantasy Faire.

The commentary around the number of triangles in LOD models is not (or should not be) about how low they are; to some extent, it is the opposite. Your approach is at the scientific end of the spectrum, far removed from what most people are doing; the focus for good content is to get the balance of detail right. The problems being discussed in these threads tend to be around items (typically non-rigged) that, under all normal user settings, fail to display nicely in a perfectly reasonable scene because the creator has minimised the lower LOD models and expects the user to compensate by cranking up the LOD multiplier. While that is bad practice in non-rigged mesh, it is very common in rigged mesh because there is almost no chance of the LOD decaying at anything approaching the visible thresholds, as you noted for your 100+ metre visible item. Frankly, in that regard it is "excusable": there is a good counter-argument that if a LOD is never going to be shown by the system, then wasting a few megabytes of data to send (and manage) the LOD models is utterly pointless. Of course, we then have the fact that some people make items extremely complex in the High LOD "just in case" someone spends time to zoom in close and photograph their shoes.

The current LI equation "charges" you for every triangle you use, with a higher cost per tri in the lower LOD models. This leads to the situation where a creator can offset their overly complex high LOD model by excessively minimising the lower LOD models. What is needed is a balance: a well-constructed high LOD model that uses the tools we have, such as materials, to efficiently boost the details, and a good set of lower LOD models that work well for the expected use case and have an appropriate volume/profile/silhouette as the object slips into the distance.
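A deliberately toy model of that trade-off. The weights below are invented for illustration and are NOT LL's real streaming-cost formula; they only capture the shape of the incentive, where lower LODs are "charged" more per triangle:

```python
# Toy model only: invented per-triangle weights that grow for lower LODs,
# loosely mimicking the idea that lower LODs are charged more per tri.
WEIGHTS = {"high": 1.0, "mid": 2.0, "low": 4.0, "lowest": 8.0}  # made up

def toy_li(tri_counts: dict) -> float:
    """Sum weighted triangle counts; /1000 just keeps the numbers readable."""
    return sum(WEIGHTS[lod] * n for lod, n in tri_counts.items()) / 1000.0

balanced = {"high": 8000, "mid": 4000, "low": 2000, "lowest": 1000}
gamed = {"high": 20000, "mid": 1, "low": 1, "lowest": 1}  # gutted lower LODs
print(toy_li(balanced), toy_li(gamed))
```

Even with 2.5x the high-LOD triangles, the "gamed" set scores lower in this toy scheme, which is the perverse incentive the paragraph describes.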

As you say, the guidance is poor and mixed; there is little to no consistency around best practice because of a number of legacy conditions affecting the content we have. As the Lab continues its experiments to redefine how our stuff gets costed, we should all continue to lobby for quality official guidance.

 

 


3 hours ago, arabellajones said:

I can see the point of all rigged mesh on an avatar using the avatar bounding box as the base for the LOD switching, but when you dig into it, it's the radius of the bounding box. OK, but then what about people who want to do the giant robot thing? Use the largest bounding box of the set.

The problem here is that each single rigged attachment contributes to the overall avatar bounding box. So, as shown in a post above, if a content creator manages to upload a rigged mesh with a gigantic bounding box, that inflates the whole avatar's BB size too, PLUS the LoD is calculated on the diameter instead of the radius. Basically, you already start with a doubled LoD threshold distance because the calculations use the BB diameter (like increasing the LoD factor by 2 on each rigged mesh), and on top of that any single rigged attachment can make the overall BB artificially bigger than it should be.
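A minimal sketch of that doubling claim, assuming the commonly reported 0.24 divisor and 1.125 default factor for the High-to-Mid switch (both assumptions, not confirmed viewer source):

```python
# Sketch of the radius-vs-diameter claim: feeding the diameter into a
# radius-based formula doubles every switch distance. The 0.24 divisor and
# the 1.125 factor are assumed, commonly reported values.

def mid_switch_distance(extent_m: float, lod_factor: float = 1.125) -> float:
    return extent_m / 0.24 * lod_factor

radius, diameter = 1.0, 2.0  # a roughly 2 m avatar bounding box
correct = mid_switch_distance(radius)
buggy = mid_switch_distance(diameter)
print(correct, buggy, buggy / correct)  # ratio is exactly 2.0
```

Because the formula is linear in the extent, passing the diameter is indistinguishable from silently doubling every rigged attachment's LOD factor.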

 

3 hours ago, arabellajones said:

Apart from the giant robots, just using the avatar height would work pretty well., but how quickly can you get that number?

This is why I was saying:

16 hours ago, OptimoMaximo said:

I'd rather make the calculations on the avatar's hitbox, making sure it resizes along with the skeleton's joints position (in case of custom joint position for a 3 meters tall/long charater/creature). This way it may become a relatively consistent bounding box replacement, consistent with the actual avatar size. Not to mention that collisions would improve on such "shaped" avatars.

 


4 hours ago, arabellajones said:

Apart from the giant robots, just using the avatar height would work pretty well., but how quickly can you get that number?

That "number" is the avatar scale and is used in other calculations. It is a simple, arguably far simpler, number to get. There are potential side effects, though: the giant robots are one, but animals and beasts are another, and they are not uncommon. And the real side effect of fixing this stuff is, of course, that things are suddenly going to start breaking when people didn't expect them to. It needs some kind of watershed, a bit like the legacy prim cap, where you can keep your 1 LI sculpted prim for all the horror it hides, but you cannot use any post-2012 feature on it. It's a complex thing to manage though.

