Split mesh lesson and frustration


ChinRey

You are about to reply to a thread that has been inactive for 1543 days.

Please take a moment to consider if this thread is worth bumping.


Starting with a lesson. It's one I've posted here many times before.

One of the most important techniques for reducing land impact is balancing the weights by splitting up a large mesh into several smaller ones. With all other factors equal, several smaller meshes will have a lower download weight but a higher server weight. Since it's the largest of those weights that counts as the land impact, we want to get them as equal as possible.

Here's a row of four poplars I'm working on right now:

[Image: a row of four poplars]

If I upload it as a single mesh, the download weight is 6 and the server weight 0.5, so the land impact becomes 6. If I split it with the trunks and the foliage as separate meshes, the download weight drops to about 3 and the server weight is increased to 1, which gives 3 LI. If I split the group into four separate meshes, I can get the download weight down to about 2 with the server weight increased to 2 - a perfect balance and the lowest land impact possible for this without butchering the LoD or simplifying the mesh.
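The arithmetic above can be replayed in a few lines. This is a sketch under my assumption that land impact is the largest of the download, physics and server weights, rounded; the physics weight of 0.5 is also an assumption, since the post only quotes the other two.

```python
def land_impact(download, physics, server):
    """Assumed rule: land impact is the largest of the three
    weights, rounded to the nearest whole number."""
    return round(max(download, physics, server))

# The three configurations from the post (physics weight assumed):
print(land_impact(6.0, 0.5, 0.5))  # single mesh        → 6
print(land_impact(3.0, 0.5, 1.0))  # trunks + foliage   → 3
print(land_impact(2.0, 0.5, 2.0))  # four meshes        → 2
```

The four-mesh split is the sweet spot because pushing the download weight any lower would just let the growing server weight take over as the largest of the three.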

This is a very important trick for every serious SL mesh maker to know.

---

But then there's the frustration: why is it so?

Download weight is determined by the mesh file sizes, with the significance of the various LoD models determined by the mesh's overall size. In this case the LoD/size factor is insignificant, so it's all about file size. So two files combined take up half the kB of the same amount of data in a single file? That doesn't make any sense at all. Is there anybody familiar with the quirks of the SL software who can explain this?
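For readers who haven't seen it, the "LoD/size factor" works roughly like this: each LoD model's byte size is weighted by how much of the view area would show that model, and the switch distances grow with the object's radius. The sketch below only illustrates that idea; the divisors, the distance cap and the scaling are made-up stand-ins, not the viewer's actual constants.

```python
import math

MAX_DISTANCE = 512.0  # assumed draw-distance cap, not the real constant

def download_weight(bytes_per_lod, radius, cost_scale=1.0):
    """Sketch: weight each LoD model's byte size by the share of the
    view disc in which that model would be displayed.

    bytes_per_lod: sizes for the [high, mid, low, lowest] LoD models.
    radius: rough bounding-sphere radius of the object, in metres.
    """
    # Assumed switch distances: bigger objects keep their detailed
    # models visible from further away.
    d_mid = min(radius / 0.24, MAX_DISTANCE)
    d_low = min(radius / 0.06, MAX_DISTANCE)
    d_lowest = min(radius / 0.03, MAX_DISTANCE)

    total = math.pi * MAX_DISTANCE ** 2
    areas = [
        math.pi * d_mid ** 2,                    # high LoD shown here
        math.pi * (d_low ** 2 - d_mid ** 2),     # mid LoD shown here
        math.pi * (d_lowest ** 2 - d_low ** 2),  # low LoD shown here
        total - math.pi * d_lowest ** 2,         # lowest LoD elsewhere
    ]
    avg_bytes = sum(b * a for b, a in zip(bytes_per_lod, areas)) / total
    return avg_bytes * cost_scale

# A small object shows its lowest LoD over almost the whole view area,
# so the heavy high-LoD file barely registers; a big one pays for it.
sizes = [40_000, 10_000, 2_500, 600]
print(download_weight(sizes, radius=2.0) < download_weight(sizes, radius=20.0))  # → True
```

For the poplars the radius hardly changes between the split and unsplit versions, which is exactly why this factor drops out of the comparison and the puzzle is purely about file size.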

 


16 minutes ago, Wulfie Reanimator said:

Just shooting from the hip, but could it be that the uploader is looking at the individual/average file sizes?

No, it's not the uploader's fault for once. There's no difference between uploading those meshes separately and linking them inworld, and uploading them as a ready-made linkset.

It's also quite clear it's a genuine difference, not just a quirk in the calculations. For some reason, with SL's internal mesh format, doubling the amount of data in a file quadruples its size. Or something like that; it's not a linear relation, of course. It's totally weird and I can't think of a single other file format that works that way.
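For what it's worth, an ordinary stream compressor behaves the opposite way: doubling the input at most doubles the output, because the second half can reference the first. A quick check with zlib (which is commonly said to be the compressor inside SL mesh assets, though that's an assumption on my part):

```python
import os
import zlib

chunk = os.urandom(4096)  # 4 KB of incompressible data
single = len(zlib.compress(chunk))
double = len(zlib.compress(chunk + chunk))

# The second copy is encoded as back-references into the first,
# so doubling the input comes nowhere near quadrupling the output.
print(double < 2 * single)  # → True
```

So whatever causes the superlinear growth, it isn't the compression stage behaving normally.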


3 hours ago, ChinRey said:

For some reason, with SL's internal mesh format, doubling the amount of data in a file quadruples its size. Or something like that; it's not a linear relation, of course.

Shooting from the hip myself too, but I think it's due to the encoding. AFAIK, a bounding box relative to the mesh size is used to create a sort of grid, each grid intersection corresponding to an integer value between 0 and 65535 on each axis. The bigger the object, the finer the vertex coordinates get and therefore the more numbers in the range get used to represent the mesh; the smaller the object, the coarser this grid needs to be to represent those vertex coordinates, and therefore far fewer numbers of the bounding-box grid get used.
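If that description is right (I'm assuming here that the 0–65535 range spans the bounding box on each axis), the effect is easy to demonstrate: the same 0.1 m step between two vertices lands on very different integer values depending on how big the box is.

```python
def quantize(value, lo, hi):
    """Map a coordinate in [lo, hi] onto the assumed 16-bit grid."""
    return round((value - lo) / (hi - lo) * 65535)

# Two vertices 0.1 m apart on the x axis, encoded inside bounding
# boxes of different widths:
for width in (1.0, 10.0, 100.0):
    a = quantize(0.0, -width / 2, width / 2)
    b = quantize(0.1, -width / 2, width / 2)
    print(f"bbox {width:5.1f} m: {a} -> {b}, gap {b - a}")
```

In a 1 m box the two values are 6553 grid steps apart; in a 100 m box the same pair is only 65 steps apart. So the numeric gap between neighbouring vertices shrinks as the object grows, even though the mesh itself is unchanged.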

Some time ago I wanted to make an SL mesh exporter for Maya, and I had the conversion figured out already when I came across the showstopper: the data that needs to be included in the file header has to be written at upload time. Each vertex had a converted value spread apart from the previous in the list by larger gaps when the mesh was smaller than it would be if the same mesh was bigger, even if just along one axis. I never got to the point of actually writing that data to file, though; I abandoned the idea before I could get there, for the reason above. I just know that the vertex lists containing the converted coordinate values got longer the bigger the same object was. So I think this might be the reason for the behavior you're reporting?


14 minutes ago, OptimoMaximo said:

Shooting from the hip myself too, but I think it's due to the encoding. AFAIK, a bounding box relative to the mesh size is used to create a sort of grid, each grid intersection corresponding to an integer value between 0 and 65535 on each axis. The bigger the object, the finer the vertex coordinates get and therefore the more numbers in the range get used to represent the mesh; the smaller the object, the coarser this grid needs to be to represent those vertex coordinates, and therefore far fewer numbers of the bounding-box grid get used.

That's an interesting theory but I'm not sure I understand exactly what you mean. Wouldn't each vertex still need 24 bits for its coordinates regardless?

Besides, the size of the objects doesn't seem to matter. I tested the row of poplars with full LoD so that the download weight would stay the same regardless of size. DL for a single mesh was 8.284; split into four meshes it dropped to 5.928. That's a smaller relative difference than with sensible LoD models, but still significant.

 

26 minutes ago, OptimoMaximo said:

Each vertex had a converted value spread apart from the previous in the list by larger gaps when the mesh was smaller than it would be if the same mesh was bigger, even if just along one axis.

Ummmm, do I misunderstand you, or are you saying the coordinates of each vertex are defined relative to the previous vertex on the list?


1 hour ago, ChinRey said:

do I misunderstand you, or are you saying the coordinates of each vertex are defined relative to the previous vertex on the list?

It's all relative to the grid number spacing and where each mesh vertex falls in it. The smaller the object, the bigger the distance between adjacent vertices in terms of that 3D grid used to encode their coordinates; their coordinate values, once converted, show a bigger gap between their actual numbers than if the object was bigger. Say we have two vertices whose converted values on the x axis alone are 17434 and 17655; the same object, but bigger, showed those same vertices as 17434 and 17595 (I'm making up numbers here, but the example fits). So to describe the same object at a bigger size, we use more numbers in the range, which means more data to store even though the vertex count is the same. The scenario becomes worse when the bounding box is elongated along one axis.

I think that at some point I planned to do a rescaling on the fly in that exporter I had in mind, so that the exported object would be shrunk into a perfectly cubic bounding box before querying the vertex lists and their coordinates for conversion, then reverted back to its original state right after, exactly for this reason.
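The rescaling idea reads something like this (a sketch of the plan as described, not actual exporter or Maya code): squash the mesh into a unit cube before quantising, remember the per-axis offset and scale, and restore them afterwards.

```python
def to_unit_cube(verts):
    """Non-uniformly scale vertices into the unit cube; return the
    scaled vertices plus the offset and scale needed to undo it."""
    mins = [min(v[i] for v in verts) for i in range(3)]
    spans = [max(max(v[i] for v in verts) - mins[i], 1e-9)
             for i in range(3)]
    scaled = [tuple((v[i] - mins[i]) / spans[i] for i in range(3))
              for v in verts]
    return scaled, mins, spans

def from_unit_cube(verts, mins, spans):
    """Invert to_unit_cube."""
    return [tuple(v[i] * spans[i] + mins[i] for i in range(3))
            for v in verts]

# An elongated box becomes a cube for encoding, then comes back intact:
verts = [(0.0, 0.0, 0.0), (8.0, 1.0, 0.5)]
scaled, mins, spans = to_unit_cube(verts)
print(scaled[1])                               # → (1.0, 1.0, 1.0)
print(from_unit_cube(scaled, mins, spans)[1])  # → (8.0, 1.0, 0.5)
```

With every axis normalised to the same span, the quantisation step is the same on all three axes, which is the point of the exercise for elongated bounding boxes.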


6 hours ago, Arduenn Schwartzman said:

I suspect the extra penalty for size comes with the increase in Level of Detail that larger mesh objects have, and thus cost more calculation power on the client side.

No, it has nothing to do with LoD. It's the same even with full-LoD items, where the LoD models don't affect the download weight at all.

