To join or not to join, that is the ...


Robin Talon


This is probably a real noob question, but it has me scratching my head. Hopefully it's an easy one for y'all to answer. And feel free to point and laugh, too. :matte-motes-big-grin:

I built a house in Blender. Fairly large but not too terribly complex. Imported said house to the test grid, decided that the LI was too high (a tad over 100), and proceeded to separate it into several logical pieces in Blender. I then simplified each piece as much as I possibly could. 

So here's what has me a bit baffled: If I select all of the pieces, unjoined, export them together to .dae, and import to the test grid, they import as a coalesced object with a comfortable 27 LI. If I join them first in Blender, export them to .dae, and import to the test grid with the exact same export and import settings, they import with almost exactly twice (54) the LI. 

Da heck?


The download weight, which usually becomes the LI, depends not only on the amount of geometry, but also on the size of the object(s). This is because of the LOD (level of detail) system, which switches to lower resolution versions of the mesh when the camera gets farther than certain distances from the object. If it has switched to a lower LOD, there is less data for the viewer to download and fewer triangles for the GPU to render, so fewer resources are used. The LOD switch distances are multiples of the diagonal of the mesh's bounding box, so smaller objects switch at shorter distances.

When your house is a collection of smaller objects, the areas within which cameras will see them at higher detail are smaller than when they are joined into one object, because the one object has a larger bounding box than the small objects. Consequently, on average for many observers at random distances, the collection of small objects uses fewer download and rendering resources than the single larger object. The LI calculation takes account of that difference, so the LI of the single large object is higher. The trade-off for the lower LI is loss of detail at shorter distances and uncoordinated switching of the different sized parts.
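To make the size dependence concrete, here is a rough Python sketch of how the switch distances scale with the bounding box. The proportionality to the half-diagonal ("radius") and to the viewer's RenderVolumeLODFactor is what the explanation above describes; the specific 0.24 / 0.06 / 0.03 divisors and the example sizes are my own assumptions of commonly quoted values, so treat the numbers as illustrative rather than exact:

    import math

    def lod_switch_distances(size_x, size_y, size_z, lod_factor=1.0):
        """Estimate the camera distances (metres) at which a mesh with the
        given bounding-box dimensions drops to Medium, Low and Lowest LOD."""
        radius = math.sqrt(size_x**2 + size_y**2 + size_z**2) / 2.0  # half the bounding-box diagonal
        return {
            "High -> Medium": radius / 0.24 * lod_factor,  # divisors are assumed values, see note above
            "Medium -> Low":  radius / 0.06 * lod_factor,
            "Low -> Lowest":  radius / 0.03 * lod_factor,
        }

    # A whole house versus one of its pieces (sizes made up for illustration):
    print(lod_switch_distances(10, 10, 5))  # joined house: switches much farther from the camera
    print(lod_switch_distances(5, 5, 3))    # single piece: switches much closer to the camera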


Ohhh, okay - that actually makes perfect sense. Thank you so much! 

That correlates with something else I was noticing, too: That parts of the model were losing detail at a much shorter distance in the unjoined version. 

This becomes a fun balancing act, doesn't it? 


Hi :)


Robin Talon wrote:


This becomes a fun balancing act, doesn't it? 

Standing on one leg may be a fun idea to start with, but it can become tiring after a while, and that's when the real fun can begin  :)

You don't have to accept the default auto generated Medium, Low and Lowest LOD meshes. They often don't look very good.

Try experimenting with making your own LOD meshes, and load them into the LOD slots as you did the High LOD mesh.

Also remember that parts that can only be seen from the inside of your building don't need to be seen from a long way off, which could mean they can be eliminated completely from the lower LOD meshes. Just a note: all the materials you assigned to your High LOD mesh must appear somewhere in each of the lower LOD meshes.

Here is a recent post that covers a little about creating lower LOD meshes and keeping any UV unwrapping intact across the lower LOD meshes.  https://community.secondlife.com/t5/Mesh/Any-advice-on-ways-to-line-up-UV-islands-for-the-different-LODs/td-p/2935850

 



Aquila Kytori wrote:

 

Standing on one leg may be a fun idea to start with, but it can become tiring after a while, and that's when the real fun can begin 
:)

By day I'm a software engineer, so this really is my idea of fun. :matte-motes-nerdy:

Great link, thanks! I've been working on learning how to make my own LODs efficiently. This afternoon I've focused primarily on baking high-poly to low-poly. Brain officially bleeding. Good fun! 

You guys have no idea how grateful I am that you exist and are so helpful here. I obsessively read this forum (when I'm not poking at/cussing at Blender). 



Robin Talon wrote:

This afternoon I've focused primarily on baking high-poly to low-poly. Brain officially bleeding. Good fun!


Don't forget you can also use this for your highest LoD model! In fact in many cases (depending on object size and type) I would consider the highest LoD to be the most important candidate for this treatment.



Aquila Kytori wrote:

You don't have to accept the default auto generated Medium, Low and Lowest LOD meshes. They often don't look very good.

I have to disagree with you there, Aquila. The default auto generated LOD models never look very good. ;)

There are some occasions when they look acceptable but those are rare indeed and they'll never be as good as manually made LOD models or even LOD models autogenerated by Blender or Mesh Studio or Maya or Mesh Generator or Mesh Lab or...

You can compensate by increasing the number of tris in the autogenerated models but at the cost of significantly increased LI.

There are two reasons why the uploader performs so badly here:

  • The algorithm used (named GLOD, btw) is a rather crude and primitive one that apparently only eliminates triangles, leaving your model full of holes, whereas the more advanced algorithms used by mesh software attempt to merge them.
  • A computer algorithm can never really know which details are essential and which can be eliminated from each model anyway. That's one of the few things the human brain still does better than a computer.

To create better LOD models than the ones the uploader makes, use the simplification functions in the mesh editor of your choice. To create really good LOD models, manual editing is the only option.

 


anselm Hexicola wrote:

 

(...because I know!!) For the non-expert, optimising mesh upload is a bit of an art and an inexact science.

One of the reasons it's an inexact science is that many mesh makers seem to ignore one of the most crucial factors: the switch points. The switch points are the distances at which the rendering switches between the various LOD models. They're quite easy to calculate, and once you know them, it's much easier to determine which details to keep and which to leave out of each model. Then of course there's the matter of visual perception, some tough decisions about compromises between LOD and LI, some creative solutions to cover up the holes and of course the Black Art of optimizing for compressibility.

And balancing the weights of course, that's important too.

And compensating for the countless LOD affecting bugs.

And...

But if you know how to calculate the switch points and are willing to invest a few minutes manually optimizing your LOD models, you're already miles ahead of the majority of Second Life mesh makers.



Robin Talon wrote:

So here's what has me a bit baffled: If I select all of the pieces, unjoined, export them together to .dae, and import to the test grid, they import as a coalesced object with a comfortable 27 LI. If I join them first in Blender, export them to .dae, and import to the test grid with the exact same export and import settings, they import with almost exactly twice (54) the LI.

Re-reading the thread, it seems none of us actually gave a full answer to that original question. ;)

Part of the explanation here is the LOD of course but it's also about the "weight balancing" I mentioned.

The land impact an object is assigned is the highest of three weights, rounded off to the nearest whole number (a short sketch of the calculation follows the list):

  • Server weight: based on the number of parts and active scripts in the linkset. 0.5 for each part, somewhere between 0.2 and 0.3 for each active script.
  • Physics weight: the complexity of the physics model. Another Black Art of mesh making but usually you can make it so simple it doesn't affect the LI.
  • Download weight: the amount of data that needs to be transferred, weighed against an estimate of how often the model is needed. Calculated for each LOD model separately and then summed up to give the mesh's total download weight.
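As a minimal Python sketch of that "highest of the three, rounded" rule, using the 0.5-per-part figure from the list above (the 0.25 per script is my assumed midpoint of the 0.2-0.3 range, and all the numbers in the example are illustrative, not real uploader output):

    def land_impact(download_weight, physics_weight, num_parts, num_scripts=0):
        """Land impact is the highest of the three weights, rounded to a whole number."""
        server_weight = 0.5 * num_parts + 0.25 * num_scripts  # 0.25 per script is an assumption
        return round(max(download_weight, physics_weight, server_weight))

    # e.g. a house split into 12 unscripted pieces, trivial physics, 26.6 download weight:
    print(land_impact(26.6, 3.1, 12))  # server weight is 6.0, so the download weight wins -> 27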

Now, for several reasons, splitting a big mesh into several smaller ones tends to reduce the download weight even after we've compensated for the LOD loss with more detailed LOD models. But of course that means more parts in the linkset, which again means a higher server weight.

Balancing the weights usually means finding the sweet spot where those two weights are the same (rounded off to an integer, that is), or sometimes splitting until there is no more download weight to gain from it.
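To put rough, purely illustrative numbers on that: a build split into ten parts carries 10 × 0.5 = 5 server weight, so the split costs nothing as long as the summed download weight stays at or above 5. Split the same build into sixty parts and the 30 server weight will set the LI no matter how lean the download weight becomes.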

Sometimes you have to take the physics weight into account too. That makes things a bit more complicated but fortunately that's rarely necessary, at least for something like a house.

 



ChinRey wrote:

But if you know how to calculate the switch points and are willing to invest a few minutes manually optimizing your LOD models, you're already miles ahead of the majority of Second Life mesh makers.

Calculating switch points is not that hard, but it's something I never do. For a reason.

First of all, some people have their graphics settings at minimum, some have them at ultra, some even have them on "ultra+" by changing debug settings like RenderVolumeLODFactor. You could calculate the switch points for all those settings, but that doesn't really save you time over uploading a test model in its natural surroundings and seeing how it acts with different settings. More importantly, how much you can reduce, and which features, while still representing the highest LoD convincingly from any given distance depends strongly on the object.

By all means, if it works for you, calculate the switch points. I'd rather look inworld.



Kwakkelde Kwak wrote:

By all means, if it works for you, calculate the switch points. I'd rather look inworld.


It amounts to the same really. It's just that spending ten seconds typing three numbers into a spreadsheet saves you half an hour or more of trial and error. But of course, you always have to do a virtual reality check before the work is done. :)

As for optimizing for different LOD settings, I honestly don't see the point. If it looks good at LOD factor 1 it certainly looks good at 2 and the LI you can save is minute compared to what you can save by other less destructive methods.



ChinRey wrote:

 

It amounts to the same really. It's just that spending ten seconds typing three numbers into a spreadsheet saves you half an hour or more of trial and error. But of course, you always have to do a virtual reality check before the work is done.
:)

I don't see why you need to know the distance. Having the distance doesn't tell you how big the object is on screen or how it looks.

What I consider logical is the following:

Build the object you want on the highest LoD.

Upload it and determine how it looks just before the LoD switches to the next.

Determine what you can change without altering the looks too much.

Using that information, build the next LoD.

etc.

(actually I guesstimate most things, which usually works out just fine, but if I had to do it logically, I'd do it as described above)

 


As for optimizing for different LOD settings, I honestly don't see the point. If it looks good at LOD factor 1 it certainly looks good at 2 and the LI you can save is minute compared to what you can save by other less destructive methods.

So your starting point is 1 then? For someone else it might be 0.25 (which is the setting on minimum I think).

More importantly, there are a lot more settings than the RenderVolumeLODFactor (mesh detail in the settings) in the graphic preferences.



Kwakkelde Kwak wrote:
I don't see why you need to know the distance. Having the distance doesn't tell you how big the object is on screen or how it looks.


Oh, you get the hang of it fairly soon. :)

But we're splitting hairs here really. The important point is to actually know the switch points. Whether you find them by maths or trial and error isn't that important.


Kwakkelde Kwak wrote:
More importantly, there are a lot more settings than the RenderVolumeLODFactor (mesh detail in the settings) in the graphic preferences.


The Mesh Detail setting is the RenderVolumeLODFactor actually. LL just renamed it when they added it to the viewer prefs.



ChinRey wrote:

Oh, you get the hang of it fairly soon.
:)

But we're splitting hairs here really. The important point is to actually know the switch points. Whether you find them by maths or trial and error isn't that important.

The important thing is to get the least amount of geometry on screen without letting your objects turn into a mess. I don't care about switch points at all. But I fully agree it's splitting hairs.


ChinRey wrote:

Kwakkelde Kwak wrote:
More importantly, there are a lot more settings than the RenderVolumeLODFactor (mesh detail in the settings) in the graphic preferences.


The Mesh Detail setting is the RenderVolumeLODFactor actually. LL just renamed it when they added it to the viewer prefs.

Ehhh... Do I hear an echo? :)

(only difference is the mesh detail slider stops at 2.0)

