Mesh - highest LOD doesn't rez at all - caused by traffic shaping by internet provider


MaxTux Wonder

Interesting. I opened up a Collada file and it looks like almost two thirds of it is seven-digit normal data...

Converting my 1.4.0 to 1.4.1 by exporting as FBX and then turning it into a DAE with Autodesk's converter does save some data, but 10-20%, not 67%. Of course, converting to a format is not the same as exporting to it, and an identical file format doesn't mean the export process is the same. For SL it doesn't matter anyway, since SL uses its own format. I guess I will just have issues with uploading models over 80,000 faces; hmm, I can live with that :)
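
(If anyone is curious, you can get a rough estimate of the normal-data share in your own file with a few lines of Python. This is just a sketch: it assumes the exporter gives its normal <float_array> elements an id containing "normal", which is common but not guaranteed, and "model.dae" is a placeholder file name.)

import os
import xml.etree.ElementTree as ET

# Rough estimate: what fraction of a Collada file's text is normal data?
# Assumes normal <float_array> ids contain "normal" (exporter-dependent).
path = "model.dae"  # placeholder
tag = "{http://www.collada.org/2005/11/COLLADASchema}float_array"
normal_chars = sum(
    len(fa.text or "")
    for fa in ET.parse(path).iter(tag)
    if "normal" in (fa.get("id") or "").lower()
)
print("normals: {:.0%} of file".format(normal_chars / os.path.getsize(path)))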


Of course, I will for sure follow the ipse dixit of the advocatus diaboli, and just take all the arrogance from someone who tells me that my avies are not good and that he can do better, even when he shows me a picture of a rendering and not an in-world picture with the property settings in it.

 

Yes, right.

 

Anyway, I filed a full bug report on JIRA with all my experiences. Of course I'm not an expert on streaming; that's why I opened this thread. I will not ask the Lindens "how much data a mesh is compared to the vertex count when sent between peers", because I don't know what that means.

 

You do it; there is a link in the first post of this topic.

 

And I don't want to make a cheaper version of my avies. I can make a multi-part version for anyone affected by the problem, but never a cheaper version; that's just nonsense. I want to make bigger and better avies with incredible detail, not cheap, cut-down avies that could just as well be made with an old technology like sculpt maps.

 

What's mesh for?

I leave the 9k-polycount avies to Chosen; I'm making mine bigger and uncut, the American way.



MaxTux Wonder wrote:

 

Anyway, I filed a full bug report on JIRA with all my experiences. Of course I'm not an expert on streaming; that's why I opened this thread. I will not ask the Lindens "how much data a mesh is compared to the vertex count when sent between peers", because I don't know what that means.

And I don't know whether a 200 kb/s limit would stop a 200 kb file from passing through; I don't see why it would, actually. And I don't know whether the streaming limit is at all responsible for the problem you have. I know very little about this.
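
(For what it's worth, a throttle is a rate, not a size cap, so on that reading the file should still get through, just more slowly. A back-of-the-envelope sketch, assuming the limit really means 200 kilobits per second and the file is 200 kilobytes:)

# Back-of-the-envelope: a 200 kB file through a 200 kbit/s throttle.
# Assumes the limit is a rate cap, not a maximum object size.
file_bits = 200 * 1000 * 8      # 200 kB file, in bits
rate = 200 * 1000               # 200 kbit/s
print(file_bits / rate, "seconds")  # -> 8.0 seconds: it passes, just slowly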


MaxTux Wonder wrote:

 

What's mesh for?

There is no single answer to that question, but I agree that a big part of it is so you can make your items look better. That does not mean, however, that one should stretch the system just because one can. Mesh can be used for saving resource cost as well, as you demonstrated yourself with the cube. It can be used for lots of things, and there are lots of things to keep in mind... People don't have to agree, but it's good to see and hear various approaches to mesh building, with different goals and different methods.

 

 


"....it's 128 bits per vertex.  Assuming that's accurate, then the mesh cube would be 1536 bits"

The 128 bits per vertex is right*, for the vertex table, excluding the triangle table that indexes it. However, for a sharp-edged cube there are three vertex-table entries for each of the eight corners, one per distinct normal. So that's 24 × 128 = 3072 bits. (UV mapping won't increase this, because the corners already have separate entries.) There are 12 triangles, each with three 16-bit indices into the vertex table: 12 × 3 × 16 = 576 bits. So the total is 3072 + 576 = 3648 bits (I think).

For the crossover point, it's a bit more complex. Sharp edges and UV seams all count against the mesh relative to the sculpty. The mesh competes best for a smooth sphere with the same (horrible) UV map as the sculpty. This reaches the same uncompressed size as the uncompressed sculpty sphere at about 21 × 21 segments, just a few more vertices than the 768 you quote, compared with the 32 × 32 of the sculpty. More realistic meshes, with sensible UV mapping and some sharp edges, will compete less well. In addition, I suspect that the sculpt map data will generally compress better. That is a guess, though, and in extreme cases where there are many repeated normals etc., it may well not hold.

* 16-bit values: three for position, three for normal, two for UV; so 8 × 16 bits = 128.
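
(For anyone who wants to play with other vertex and triangle counts, here is that arithmetic as a tiny sketch; it uses the 128-bit vertex entries and 16-bit indices from above, and counts uncompressed size only.)

# Uncompressed streaming-size arithmetic from the post above:
# 128 bits per vertex-table entry, three 16-bit indices per triangle.
def mesh_bits(vertex_entries, triangles):
    return vertex_entries * 128 + triangles * 3 * 16

# Sharp-edged cube: 8 corners x 3 normals = 24 entries, 12 triangles.
print(mesh_bits(24, 12))  # -> 3648 bits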

 


I don't think the size of the Collada file has any direct effect on the upload data size. The file is only read by the viewer, which converts it into the LL internal format before anything goes to the server. Since the same upload data can come from Collada files of greatly different sizes, the Collada file size is generally a very poor guide to the upload data size, especially when Collada files from different source software are compared. When you compare the file sizes of different meshes made with the same software, exporter and settings, it is probably a much better guide.


Hmmm, you have the upload to the uploader, then the upload from the uploader to the server... or something like that :)

Hard disk - server - viewer - server? The initial upload has to be the full DAE file, I'd say... Anyway, that only happens once and shouldn't have any lasting impact on anything.

EDIT: Or is the upload to the uploader 100% local? In that case it feels very slow to me...


I am sure the Collada never goes anywhere near the server. It is converted in the viewer into the internal Collada document-model data format, and then from that into the LL streaming format. You can see all the code for that in the viewer source. That is all done before you press the calculate button. At that point, the viewer asks the server to calculate the weights and cost. I imagine it uploads the whole thing temporarily for that, but it could be a subset of the data. Then, when you press the upload button, it does the actual upload.

Now that you draw my attention to it, I wonder why the final upload would take an age if the same data had already been uploaded for the weight calculation. Hmm. Maybe that indicates the calculation is done on partial data? Or maybe the server just discards the data as soon as it can?

The Collada file, which is a text file, is enormously larger than the same thing in the binary streaming format. It would be very wasteful to send the Collada to the server. It would also burden the server with the format conversion, which would affect everyone using the server instead of just the user doing the upload. It's similar to texture uploading, where images are squashed to power-of-two dimensions and converted to JPEG2000 before they are sent to the server.
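
(On that last point, a minimal sketch of the power-of-two squashing, assuming the uploader simply snaps each dimension to the nearest power of two and clamps at 1024; the viewer's exact rounding rule may differ.)

# Minimal sketch: snap a texture dimension to the nearest power of two,
# clamped to 1024. The viewer's actual rounding rule may differ.
def power_of_two(dim, max_dim=1024):
    p = 1
    while p * 2 <= min(dim, max_dim):
        p *= 2
    if 2 * p <= max_dim and dim - p > 2 * p - dim:
        p *= 2  # the next power of two is closer
    return p

print(power_of_two(600), power_of_two(900))  # -> 512 1024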


@Chosen: thank you so much for your great clarification in post #21. I realise now how ingenious the Linden team were when they developed sculpties. Maybe it's neither a good nor an efficient way to build things, but when not abused it can certainly have a lower streaming cost than the now-widespread and often-abused meshes.

 

Back to the thread... I just have to add my own wondering now: how much lag do we need to spread over the grid before we learn what is efficient and what is not?

