forget it closed topic, go away


tabletopfreak Toocool

You are about to reply to a thread that has been inactive for 2329 days.

Please take a moment to consider if this thread is worth bumping.


8 hours ago, OptimoMaximo said:

For what matters, .anim format is also proprietary to LL, no other company uses this specific type of .anim.

Oh, I thought it was the same as the anim format used elsewhere.

 

9 hours ago, OptimoMaximo said:

Right now the viewer repository online at https://bitbucket.org/lindenlab/viewer-release/src/8579cefad3049e139efaa1b40a94f0357fcd0274/indra/ appears to be unavailable at the time of this writing

It's back now. Don't expect me to examine it in detail though. I do have some programming experience but this is way out of my league. ;)

 

3 minutes ago, Beq Janus said:

I can talk a bit about it.

Welcome to the discussion Beq. ^_^

 

28 minutes ago, Beq Janus said:

The size differences can come from a number of sources, most typically they arise from people using generated LOD models. The GLOD library that is used to produce the simplified meshes has some random seed in it (for reasons I have never understood, nor really investigated) and as a result, the generated LOD models can vary a bit with each upload.

LoD is not relevant to my examples here since they were all done with full LoD, that is with all levels set to "Use LoD above".

 

33 minutes ago, Beq Janus said:

When the material meshes are made they are individually compressed using an equivalent of "gzip -9". Zip compression works on repeated patterns and some data is more compressible than others. In some objects it is plausible that a mesh rotated by 90 degrees is more easily compressed.

Yes but then it wouldn't be consistent and the difference is nearly always in Mesh Studio's favor. I know I said always in an earlier post but I just stumbled across an exception. I'm working on this little forest right now:

[screenshot: the forest build]

It's made from two meshes, one with 20 and one with 16 trees. Both were created with Mesh Studio and uploaded with full LoD, so LoD models are not a factor. I tried to upload both the raw MS output and a version cleaned up in Blender (half the vertex count with smooth normals, a better UV map and tris merged into quads). The 16-tree mesh is a very simple one and it has 0.398 DL straight from MS and only 0.06 when cleaned up in Blender. The 20-tree mesh is a little bit more complex, but not much, and it has 1.02 DL the way MS made it and 1.192 after it has been cleaned up in Blender.

I'm not going to post the dae files here, partly because it's a commercial build, partly because I think I've already spent my quota for huge blocks of code in this thread, but if either of you wants to have a look at them, let me know.


I just had time to read the docs I linked yesterday, and Beq has already summarized them. The only problem I would see here is the generation of data that is currently handled upon upload, like the creator name and the rendering cost assignment; one is part of the header and the other is the last block in the file. So there would have to be an uploader that handles these chunks of data, while the user feeds in the rest from their software.

6 hours ago, ChinRey said:
  5 hours ago, Beq Janus said:

When the material meshes are made they are individually compressed using an equivalent of "gzip -9". Zip compression works on repeated patterns and some data is more compressible than others. In some objects it is plausible that a mesh rotated by 90 degrees is more easily compressed.

So basically, you're saying that the more repeatable patterns we can create, the easier the data is to compress and therefore the lighter the weight? For example, if I managed to have all my materials include the same number of vertices, so that all submeshes are equally sized, would that be the case?

 

6 hours ago, ChinRey said:
16 hours ago, OptimoMaximo said:

For what matters, .anim format is also proprietary to LL, no other company uses this specific type of .anim.

Oh, I thought it was the same as the anim format used elsewhere.

I wish it were! The original .anim format back in the day was the internal Maya animation exchange format, but the specs were different; first off, it's not a binary format. Unity also uses .anim for the animations created and saved within the editor itself; this one is a binary format, but of course the encoding is totally proprietary, made to work in Unity with the specific content you created it from/for. File extensions really can be arbitrary as heck.

6 hours ago, ChinRey said:
7 hours ago, Beq Janus said:

The size differences can come from a number of sources, most typically they arise from people using generated LOD models. The GLOD library that is used to produce the simplified meshes has some random seed in it (for reasons I have never understood, nor really investigated) and as a result, the generated LOD models can vary a bit with each upload.

LoD is not relevant to my examples here since they were all done with full LoD, that is with all levels set to "Use LoD above".

@ChinRey LoDs always keep some relevance. Guess what LI and DL you'd have got with proper LoDs. Granted, they wouldn't be as visually stable as they are in your example, I know.

@Beq Janus why is it that if I feed in my own LoDs, regardless of the methods I've tried so far, the resulting LI and DL are always higher than with generated LoDs? Even if slight, the difference always favors the generated LoDs. Considering that the uploader doesn't really care about retention of UV/mesh material borders, when I make my LoDs I make sure to keep them as intact as I possibly can, to avoid holes in the mesh or UVs ending up outside of a UV shell. I've tried a few methods: keeping a quad-based mesh, both making it manually and with reduction tools, and with free triangulation from the reduction tool.

So far, oddly enough, the one which gave me the best results was triangulation with NO symmetry, against:

triangulation WITH symmetry

keeping quads both with and without symmetry preservation

Hence my previous question, but it's still seen by the uploader as heavier than a higher-vertex-count generated LoD. In my last test I managed to get my LoDs down to around 10 to 15% fewer vertices than those autogenerated in the uploader, and the final LI and DL weights were still higher. I should mention that most of my builds are organic/rounded models. Structures made of sharp-edged cubes aren't a useful example, as for those it is quite straightforward to make well-working LoDs.

I also ran a sort of benchmark/test to see how these models' LoDs worked in other game engines. I tried both Unity and UnrealEngine4, of which the latter is the pickier with regard to geometry. Both engines accepted my models and their custom LoDs no problem, showing a drawing resource reduction of around 120% at each LoD during runtime profiling (profiled in an empty scene, running on the model alone). Unity is more forgiving, but UnrealEngine didn't complain either: no warnings were thrown for inconsistent geometry materials/UVs or vertex orders/normals. The Skyrim mod tools didn't complain about them either; they passed through the NIF tools consistently and I could inject my models into the game no problem. I don't understand why my lower-poly-than-generated LoDs result in a higher DL and LI than those crappy LoDs the uploader makes. It's not a BIG difference, like 1 or 2 LI, but it still puzzles me why this happens. There must be a condition to be met for better "compatibility" of my LoDs with what the uploader would generate and expect, in order to get an optimal LI.

Edited by OptimoMaximo
space after the @

5 hours ago, OptimoMaximo said:

So basically, you're saying that the more repeatable patterns we can create, the easier it would be to compress and therefore lighter weight?

That is correct but it's patterns in the raw data and with only a puny human brain you can't always reliably predict how it will turn out.

Gzip is based on identifying identical strings. Say you have five four-bit binaries:

0100
1010
0010
1001
0101

That's 20 characters. Replace 010 with x and you get

xx100x10x101

Only 12 characters. Not bad for a crude compression algorithm like this.

However, resort the five binaries to:

1001
0100
1010
0101
0010

and then replace the string 10010 with x and you get:

xxxx

only four characters.

Add a second step and replace xx with y and you end up with only two characters:

yy

This is a very simplified example of course (and yes, Pamela, it really is!) and it doesn't take into account that you also have to include data to define the values of x and y. But it should still illustrate the basic idea and why the order in which the elements and array data in the dae file are sorted can affect the download weight.
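You can try this substitution game against a real deflate implementation too. A minimal sketch (the data and variable names here are mine, not from the thread) showing that the order of the exact same bytes changes the compressed size:

```python
import random
import zlib

# The same bytes in two different orders: one grouped into a repeating
# pattern, one shuffled.  Deflate (the algorithm behind "gzip -9") finds
# the long repeated strings in the grouped version.
grouped = b"10010" * 400          # 2000 bytes, highly repetitive
chars = list(grouped)
random.seed(42)                   # fixed seed so the run is repeatable
random.shuffle(chars)
shuffled = bytes(chars)           # exact same bytes, scrambled order

grouped_size = len(zlib.compress(grouped, 9))
shuffled_size = len(zlib.compress(shuffled, 9))
print(grouped_size, shuffled_size)  # the grouped order compresses far better
```

The inputs are byte-for-byte the same multiset, yet the compressed sizes differ a lot, which is exactly why reordered dae data can change the download weight.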

 

5 hours ago, OptimoMaximo said:

@ChinRey LoDs always keep some relevance.

Well, usually what we mean when we say that LoD isn't relevant for full LoD meshes is that their land impact is unaffected by size. In this case, however, I simply meant that since all the examples treated the LoD model creation in exactly the same way, with no GLOD or other random factors involved, there won't be any differences in the LoD handling to explain the differences in DL.

 

5 hours ago, OptimoMaximo said:

@Beq Janus why is it that if i feed in my own LoDs, regardless of the methods i tried so far, the result LI and DL is always higher than using generated LoDs? Even if slight, the difference always leans best toward the generated LoDs.

That's very interesting. My experience is the opposite. Even with "zero LoD" models - a single triangle for each material - I can usually get a lower DL with manually created models than with the ones GLOD comes up with.

 

5 hours ago, OptimoMaximo said:

So far, oddly enough, the one which gave me the best results was triangulation with NO symmetry, against:

triangulation WITH symmetry

keeping quads both with and without symmetry preservation

That is very odd indeed. I don't know about Maya but with Blender it is nearly always better to export with as little triangulation as possible and leave the rest to the uploader.

 

5 hours ago, OptimoMaximo said:

I should mention that most of my builds are organic/rounded models. Structures made of sharp edged cubes aren't a useful example as those are quite straightforward to make well working LoDs.

Maybe but even then you have to take into account the lack of precise control GLOD gives you.

I'm sorry, I don't have time to make a complex shape with custom LoD models exactly matching the poly and vertex counts of GLOD's. I have to use one I made earlier and it may not demonstrate the point clearly (yes, I know this is hardly news for either Optimo or Beq, but others may be unfortunate enough to stumble across this discussion too, and maybe some nice pictures may give them a little bit of pain relief ^_^):

[screenshot: the example mesh]

 

Upload data with manually made LoD models (no physics model btw):

[screenshot: uploader data with manually made LoD models]

2.356 DL

with the closest match GLOD could come up with:

[screenshot: uploader data with GLOD-generated LoD models]

Less than half the DL, but note that the vertex counts for the mid and lowest models are considerably lower (it was the closest match I could get to my old LoD models).

 

It seems good so far but if we look at the mid models:

[screenshot: the two mid LoD models side by side]

The one to the right is the one generated by GLOD and it's clearly not good enough. It looks OK on its own, but that distortion right at the center of the top is very noticeable when the LoD switches between high and mid. In this case, to get an acceptable mid LoD model without making it manually, I would have had to keep it identical to the high LoD one.

When I work with complex irregular shapes I usually try GLOD first - it is the lazy option after all. I can't remember a single occasion when it gave a lower land impact with acceptable LoD than manually made models did.

 

5 hours ago, OptimoMaximo said:

Hence my previous question, but it's still seen by the uploader as heavier than a higher-vertex-count generated LoD.

On a side note, I haven't really tested or checked it and I may well be wrong, but I have the impression that the poly count is far more significant to the DL than the vertex count is.

 

5 hours ago, OptimoMaximo said:

I don't understand why my lower-poly-than-generated LoDs result in a higher DL and LI than those crappy LoDs the uploader makes. It's not a BIG difference, like 1 or 2 LI, but it still puzzles me why this happens. There must be a condition to be met for better "compatibility" of my LoDs with what the uploader would generate and expect, in order to get an optimal LI.

Yes, I think that sums it up very well. I notice you were away from the forum for a while, Optimo, and we have discussed it several times during that period. I think we've gained a little better understanding (I certainly have this time) but we're still not there. I really wish Drongle was still here. He seems to be the one who has done the most research on the topic and should have a lot of info. It would also of course be great to hear what info Linden Lab has and is willing to share. Unfortunately they may not know much either. If I understand correctly, the developers responsible for mesh did very little documentation and the current LL programmers basically have to reverse engineer their own software to make sense of it. Then again, there's no harm in trying to page ... @Vir Linden perhaps.

Edited by ChinRey

54 minutes ago, ChinRey said:
6 hours ago, OptimoMaximo said:

I should mention that most of my builds are organic/rounded models. Structures made of sharp edged cubes aren't a useful example as those are quite straightforward to make well working LoDs.

Maybe but even then you have to take into account the lack of precise control GLOD gives you.

Here I meant that making my own LoD models works fine when it's for non-rounded shapes.

 

59 minutes ago, ChinRey said:

On a side note, I haven't really tested or checked it and I may well be wrong but I have the impression that the poly count is far more significant to the DL than the vertice count is.

The two things are tied together in my opinion, where the vertices represent the actual data and the surfaces (faces) go toward render cost. For download, to me it would be more relevant to count the vertices, since those are the ones carrying position values in 3D and UV space.
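For what it's worth, a back-of-envelope count under the commonly described SL mesh asset layout (16-bit quantized position/normal/UV components and 16-bit triangle indices; treat the exact figures as an assumption rather than a spec quote) does suggest each vertex carries more raw bytes than each triangle:

```python
# Assumed layout: 16-bit quantized attribute components, 16-bit indices.
BYTES_PER_COMPONENT = 2

# position (x, y, z) + normal (x, y, z) + UV (u, v) per vertex
vertex_bytes = (3 + 3 + 2) * BYTES_PER_COMPONENT
# three vertex indices per triangle
triangle_bytes = 3 * BYTES_PER_COMPONENT

print(vertex_bytes, triangle_bytes)  # 16 bytes/vertex vs 6 bytes/triangle
```

Either way, what actually ships is the gzipped form of that data, so raw byte counts only loosely predict the final DL.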

 

1 hour ago, ChinRey said:

That is very odd indeed. I don't know about Maya but with Blender it is nearly always better to export with as little triangulation as possible and leave the rest to the uploader.

In some cases it's advisable in Maya to triangulate the model yourself before the export: sometimes a few triangles on the high LoD go missing, and triangulating with Maya's own triangulate tool, NOT the fbx export triangulation (which works unreliably in comparison), fixes that. Otherwise, exporting with everything left as quads and as few triangles as possible works in Maya as well. But this doesn't make any difference in my case.

 

At this point I guess the only thing that might lead to a more optimal upload through Collada would be to find a specific parsing order that results in better gzip compression. As I pointed out in an earlier post, there is some flexibility shown in the scene unit section, where data is fed as string = float in the Mesh Studio (lighter DL) Collada, as opposed to Blender's version where it's like unit = meter scale = 1, and this might make a difference when converted to binary and then compressed. If it happens for one attribute, it may as well happen in others.
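The point about serialization form is easy to demonstrate. In this sketch the two strings are purely hypothetical (neither is the actual output of Mesh Studio or Blender): the same numbers written two ways give different raw sizes, and which form wins after deflate is not obvious in advance:

```python
import zlib

# The same 300 values serialized two hypothetical ways (illustrative
# only; neither string matches any real exporter's output).
values = [i * 0.125 for i in range(300)]

short_form = " ".join(str(v) for v in values).encode()     # "0.125 0.25 ..."
long_form = " ".join(f"{v:.6f}" for v in values).encode()  # "0.125000 0.250000 ..."

print(len(short_form), len(zlib.compress(short_form, 9)))
print(len(long_form), len(zlib.compress(long_form, 9)))
```

The longer fixed-precision form has more repeated digit patterns, so its compressed size can end up closer to the terse form's than the raw lengths suggest, which is the kind of non-obvious effect that could show up as small DL differences between exporters.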


2 minutes ago, OptimoMaximo said:

I don't know, it will be a random array of questions :P

I thought we were going to ask them to write a brand new and better cross-platform viewer (with a dae-optimizing precompiler) in machine code? Or was that next week's homework?

Edited by ChinRey

1 minute ago, ChinRey said:

I thought we were going to ask them to write a brand new and better cross-platform viewer (with a dae-optimizing precompiler) in machine code? Or was that next week's homework?

Oh, I thought we were going to ask them to develop and distribute an SL-optimized Collada exporter for every single 3D program, ready within the next two hours, including all possible software like Wings3D and ArtOfIllusion of course, and a new viewer written directly in binary code so it would lag less clientside, also to be released in the next two hours. But I took too long to finish writing this post, so they've got 1 hour and 50 minutes left from now.


4 minutes ago, OptimoMaximo said:

Oh, I thought we were going to ask them to develop and distribute an SL-optimized Collada exporter for every single 3D program, ready within the next two hours, including all possible software like Wings3D and ArtOfIllusion of course, and a new viewer written directly in binary code so it would lag less clientside, also to be released in the next two hours. But I took too long to finish writing this post, so they've got 1 hour and 50 minutes left from now.

That was the group assignment. Something like that needs to be designed by a committee to make sure all bases are covered and all points of view are respected.

Edited by ChinRey

1 minute ago, ChinRey said:

That was the group assignment. Something like that needs to be designed by a committee to make sure all bases are covered and all points of view are respected

That's right! I didn't think of it; of course there's need for a committee. It would never do for the option to switch the world axis to Y up instead of Z not to be there for me! And I want it server side.


11 minutes ago, Rolig Loon said:

As usual, students are saved from the nastiest questions by the fact that the faculty have to answer them first, before they can grade the exam.  B|

You're right, unfortunately. And they also have to actually read and check the answers before grading them.

One trick that does work for (or is it against?) some teachers is to write such long and elaborate answers that they lose track and give you a good grade just in case.


31 minutes ago, Rolig Loon said:

As usual, students are saved from the nastiest questions by the fact that the faculty have to answer them first, before they can grade the exam.  B|

In the meantime you might want to develop an LSL script to make the server-side up-axis change in one click from a HUD... oh no wait, the committee says they want Python as the scripting language. And C#, just to be no less than Sansar.


1 minute ago, ChinRey said:
6 minutes ago, OptimoMaximo said:

oh no wait, the committee says they want Python as scripting language. And C# just to be no less than Sansar

But Ruby is the programming language with the prettiest name!

Hm, here the roleplayers' representatives in the committee say they want Rust too, because it sounds rough and tough enough.


Considering how it started, this thread has been the best entertainment in the Forums all week.

Thankee kindly, Chin, Beq and Optimo.

P.S.  I vote for Algol60, or Fortran56.  (No, it doesn't show my age, I read a history of compilers article.)

