The importance of low poly count


ChinRey


It was too late at night for useful work, so I decided to do some simple testing instead.

Here is a wall made from plain prims - 32 of them in the picture. Each visible face has 18 triangles and 16 vertices, so the whole picture has 576 triangles and 512 vertices.

[Image: wall of 32 plain cube prims - 70.4 fps]

The frame rate reading at the top right corner is barely readable but it's 70.4.

Because of some peculiarities in how cube prims are made, if I taper the cubes, the triangle and vertex counts drop to 2 and 4 per prim - 64 and 128 for the whole set. It looks pretty much the same:

[Image: the same wall made from 32 tapered cube prims - 74.4 fps]

But the frame rate goes up to 74.4 - that's more than a five percent improvement.
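The numbers above can be sanity-checked with a few lines of arithmetic (a quick sketch; the per-prim figures are taken from the post, and the fps values are measured readings rather than anything computable):

```python
# Figures from the test above: 32 cubes, per-prim geometry for each variant.
prims = 32
plain_tris, plain_verts = 18, 16    # plain cube prim, per visible face
taper_tris, taper_verts = 2, 4      # tapered cube prim

print(prims * plain_tris, prims * plain_verts)  # 576 512
print(prims * taper_tris, prims * taper_verts)  # 64 128

# Measured frame rates -> relative improvement.
fps_plain, fps_tapered = 70.4, 74.4
print(round((fps_tapered - fps_plain) / fps_plain * 100, 1))  # 5.7
```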

I was really surprised by this myself. I did hope to get a measurable difference, but a five percent improvement from eliminating 512 triangles and 384 vertices - that's crazy. There are single meshes with more than ten times as much geometry as that.

I did several tests and this was the one with the biggest difference but they all showed an improvement far better than I expected from such a small reduction.

So to all content creators who really want to give SL users the smoothest and most pleasant experience possible: Remember that every vertex and every triangle counts.

Edited by ChinRey

10 hours ago, ChinRey said:

I was really surprised by this myself. I did hope to get a measurable difference, but a five percent improvement from eliminating 512 triangles and 384 vertices - that's crazy. There are single meshes with more than ten times as much geometry as that.

There could be a difference though: prims are parametric meshes and may take a shorter or longer time to render depending on the parameters you chose and the mesh changes they produce. An imported mesh, on the other hand, is not parametrized - nor can it be - and the result may be different in terms of FPS.

10 hours ago, ChinRey said:

Remember that every vertex and every triangle counts.

Absolutely true, use the geometry you need and no more.

EDIT: Absolutely true, use the geometry your model needs and no more.

Edited by OptimoMaximo

2 hours ago, OptimoMaximo said:

There could be a difference though: prims are parametric meshes and may take a shorter or longer time to render depending on the parameters you chose and the mesh changes they produce. An imported mesh, on the other hand, is not parametrized - nor can it be - and the result may be different in terms of FPS.

Yes, that is true. I forgot to say that the specific purpose of this test was a bit narrower than how I presented it here. I wanted to find out if the effect of the taper-to-reduce-polycount trick I've mentioned earlier actually was noticeable. I think it is safe to say that it is. :)

The rendering itself shouldn't be different for prims and mesh. Everything must be converted into OpenGL-friendly mesh before it can be displayed, after all. But the preprocessing is different. I've done some tests earlier that might indicate that sculpts are slightly heavier to render than mesh with exactly the same geometry. I would assume the viewer handles prims better than sculpts or mesh, since prims were in the original SL while both sculpts and meshes were added later, at a time when the quality of LL's development work was at its very bottom.

Here is a similar test with mesh. This gray square is made the simplest way, two triangles and four vertices:

[Image: screenshot of the simple two-triangle mesh square]

Fps: 94.3

The same square done as a 32x32 grid - 2048 triangles, 1089 vertices:

[Image: screenshot of the same square as a 32x32 grid mesh]

gave me 88.9 fps.

So it's the same trend here: those extra vertices and triangles reduce the frame rate significantly.
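For anyone who wants to reproduce the counting: a regular n x n quad grid split into triangles has 2n² triangles and (n+1)² vertices, which matches both test squares above. A minimal sketch:

```python
def grid_counts(n):
    """Triangles and vertices for an n x n quad grid, each quad split in two."""
    return 2 * n * n, (n + 1) ** 2

print(grid_counts(1))   # (2, 4)       - the single-quad square
print(grid_counts(32))  # (2048, 1089) - the 32x32 test square
```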

 

2 hours ago, OptimoMaximo said:

Absolutely true, use the geometry your model needs and no more.

With one reservation of course: remember that every pixel counts too. Replacing geometry with more complex textures and normal and specular maps isn't always a good idea.

Edited by ChinRey

5 hours ago, ChinRey said:

With one reservation of course: remember that every pixel counts too. Replacing geometry with more complex textures and normal and specular maps isn't always a good idea.

Indeed, the model is what needs actual volumes, and that's what needs geometry. Since every pixel counts, visually, as much as every triangle does, texturing work should only account for the detail that really doesn't have a noticeable volume of its own.

 

5 hours ago, ChinRey said:

I've done some tests earlier that might indicate that sculpts are slightly heavier to render than mesh with exactly the same geometry.

As far as I know, sculpts have always been heavier than prims, for a few reasons. First off, it's a stitched conversion to a single surface with the same maximum number of faces that a prim can have, right off the bat. Then the sculpt map comes into play. It is a one-unit-cube RGB displacement map, which makes it very similar to a vector displacement map, with the difference that the mesh data is assumed instead of being embedded (that's why the normalized unit cube assumption). So to me it makes total sense that a sculpt prim is heavier to render than a prim - probably more than it could have been if the development quality at the time had been better. To me, sculpts don't have any use anymore. But that is me, I never liked sculpts :P Mesh is another story, as I think you can get that to compress and unpack more efficiently, and its render time performs better than sculpts.

Edited by OptimoMaximo

1 hour ago, OptimoMaximo said:

So to me it makes total sense that a sculpt prim is heavier to render than a prim, probably more than it could have been ...

Oh yes but I was talking about sculpts vs. mesh, not vs. prims.

A test I once did: I rezzed a sculpt (a sculpt plant field, in case that's interesting) and checked the fps. Then I saved it as a dae with Firestorm, uploaded it with no modifications and replaced the sculpt with the new mesh. The difference was marginal but I did actually get a slightly higher fps with the mesh than with the sculpt.

 

1 hour ago, OptimoMaximo said:

... if the development quality at the time was better.

Sadly "the time" in this case means from at least 2005 all the way through 2013, and SL's implementation of mesh is certainly seriously harmed by it too, although not quite as badly as sculpts.

I do believe that the sculpt as a concept is a great idea with some serious potential. But it was ruined by the poor implementation and the even poorer documentation.

 

1 hour ago, OptimoMaximo said:

First off, it's a stitched conversion to a single surface with the same max number of faces that a prim can have, right off the bat. Then, the sculpt map comes into play

That is essentially how meshes are handled in SL too. A mesh is first rendered as one of eight prim shapes - which one depends on the number of faces - then the vertices from the lowest LoD model are "stitched" onto it using a modified version of the sculpt mapping code, and then it cycles up through the LoD models to the one it's supposed to have. Beq once told me that the mesh (and sculpt) data even includes all the prim twist parameters, even though they're not used of course.

 

1 hour ago, OptimoMaximo said:

To me, sculpts don't have any use anymore.

There is this land impact bonus though. Even with modern LI accounting, the download weight of a sculpt is capped at c. 2.1. The render load isn't that much higher than a mesh of the same complexity, and I do think sculpts can still be a relevant option for a large object with several hundred vertices. I sometimes use them for rock clusters and tree trunks, and for grass fields and flower fields I think sculpt is the only real option. I've seen those made from mesh with poor ground coverage and worse LoD to keep the LI down to a reasonable and sellable level, and no - those are not interesting at all. And of course, with sizes beyond 64 m a single mesh isn't an option at all, and one sculpt may well be better than several meshes.


3 minutes ago, ChinRey said:
1 hour ago, OptimoMaximo said:

To me, sculpts don't have any use anymore.

There is this land impact bonus though.

Yes, I know a sculpt has a capped LI in comparison to a mesh. What I meant there is that, to me, sculpts don't have a use anymore. I don't find any use for them in what I usually make. I don't like them; I never liked to work with NURBS, and sculpts are their direct descendants.

 

8 minutes ago, ChinRey said:

Beq once told me that the mesh (and sculpt) data even includes all the prim twist parameters even though they're not used of course.

Yeah, and this is what I said about sculpts:

1 hour ago, OptimoMaximo said:

Then, the sculpt map comes into play. It is a one unit cube RGB displacement map, which makes it very similar to a vector displacement map with the difference that the mesh data is assumed, instead of being embedded

Assuming the original mesh data doesn't require it to be embedded in the sculpt map, but it's data that needs to be there for the base system anyway. When mesh first came in (beta stage), meshes were initially treated as sculpts: there was an uploaded object and an asset inventory item (the little pyramid item) which had to be dropped into the build tool, just as sculpt maps need to be in order to work, and it turned the prim into your mesh. Which makes total sense, because it's an asset that has to root into the item storage base system and hence requires some data that doesn't actually have to be there because it's unused. However, those initial parameters on the base prim did actually affect the final LI of a prim turned into mesh, depending on whether you were rezzing the uploaded object or turning a regular prim into a mesh using an asset item. I don't recall it very well, but I seem to remember that the upload was limited to the object, with no corresponding asset item, because of these prim-torture-related parameters on the base prims the user wanted to turn into mesh. I guess this was handled by defaulting to the set of parameters that compresses to the lowest byte usage.

From this, I would infer that mesh should be the item type that, with an equal amount of geometry, is able to render faster, not having to read or update any of the prim torture data. Plus, the internal format splits all LoD models' material faces into submeshes, while a sculpt is a single, stitched surface. In many cases I have had lower LI on models with multiple materials than on the same model carrying only one material - I'm guessing because of the submesh split.


1 hour ago, OptimoMaximo said:

Assuming the original mesh data doesn't require it to be embedded in the sculpt map, but it's some data that needs to be there for the base system anyway.

The sensible way to do that would be to have ready-made mesh grids - one for each grid configuration - hardcoded into the viewer and then adjust the vertex positions according to the data from the sculpt map. That could actually be a very efficient method, but for some reason I find it hard to believe that's how it's done in SL.

 

1 hour ago, OptimoMaximo said:

I never liked to work with NURBS, and sculpts are their direct descendants.

Not really. A sculpt is a grid while a NURBS is a procedural shape. The sculpt map itself contains two pieces of information: the grid configuration (4x256, 8x128, 16x64, 32x32 etc.) and the coordinates of the vertices (as eight-bit integers). Some data is included in the prim properties asset - the same as prims and meshes have, but with three extra parameters (that shouldn't be more than one byte of data combined, but you never know with old LL code). The rest is all preset.

In theory that should add up to only a few kB of data transfer and a very fast and simple way to convert into renderable mesh. But something went horribly wrong, and the implementation of sculpts is poor even by the low standards of LL's development team at that time.
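As a rough illustration of how compact that data is: if a sculpt map really is just a list of 8-bit RGB coordinates, decoding one pixel into a vertex position could look roughly like this (a sketch based on the description above; the exact mapping SL uses is an assumption here, and `decode_sculpt_pixel` is a hypothetical helper, not viewer code):

```python
def decode_sculpt_pixel(r, g, b, size=(1.0, 1.0, 1.0)):
    """Map one 8-bit RGB sculpt-map pixel to a vertex position: each colour
    channel covers one axis, with 0..255 spanning -0.5..+0.5 of the prim's
    bounding box on that axis (assumed encoding, per the description above)."""
    return tuple((c / 255.0 - 0.5) * s for c, s in zip((r, g, b), size))

# Mid-grey decodes to (approximately) the centre of the bounding box,
# pure white to the (+x, +y, +z) corner.
print(decode_sculpt_pixel(128, 128, 128))  # close to (0.0, 0.0, 0.0)
print(decode_sculpt_pixel(255, 255, 255))  # (0.5, 0.5, 0.5)
```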

Edited by ChinRey

6 hours ago, ChinRey said:

Not really. A sculpt is a grid while a NURBS is a procedural shape

Sorry, but it's very much really the case, ChinRey. The very first sculpt exporter was made by Q Linden, for Maya, starting from a NURBS surface, which by definition is a square planar UV surface created procedurally. NURBS converted to polygons get a polygon UV that is the same as the NURBS's and covers the whole UV range - exactly what a sculpt is supposed to have. Therefore, sculpts in SL are direct descendants of NURBS surfaces. Don't look at this account's birth date, I'm older than that - old enough to have seen the introduction of sculpts.


2 hours ago, OptimoMaximo said:

Sorry, but it's very much really the case, Chinrey. The very first sculpt exporter was made by Q Linden, for Maya, starting from a NURBS surface which, by definition, is a square planar UV surface created procedurally.

Oh. Maya has a rather unconventional definition of NURBS then. Or, more likely, LL did some rather weird things with them. The sculpt as we have it is a mesh grid, not a curve, and there are certainly no splines involved.

Edit: I hope my "unconventional definition" joke didn't offend - I just couldn't resist it. ;)

It is of course possible to make a polygon mesh from a NURBS, but it is a very awkward method, and if that actually is how the rendering software handles sculpts, it would explain why they are so strangely inefficient.

Edited by ChinRey

5 hours ago, ChinRey said:

It is of course possible to make a polygon mesh from a NURBS, but it is a very awkward method, and if that actually is how the rendering software handles sculpts, it would explain why they are so strangely inefficient.

http://wiki.secondlife.com/wiki/Sculpted_Prims:_FAQ Qarl Linden, that's the exact name, is the developer who came up with this idea, starting from the basic feature a NURBS surface gives: a planar square UV regardless of the shape. The Maya workflow is very similar to the workflow a sculpt generator had in the past. I can't remember its name; it worked with a stack of "slices".

NURBS curves are the ones which create the main "low res" set of isoparms, the "grid" as you call it. You can also set custom isoparms to fix a grid line in place. NURBS surfaces are then converted into polygon meshes with the same UV grid derived from the NURBS coordinates. The conversion to polygons sets the subdivision rules, and a specific setting makes them 100% accurate, with the same subdivisions a sculpt needs. Strangely, using this method wisely gives the most LoD-resistant sculpts.


1 hour ago, OptimoMaximo said:

http://wiki.secondlife.com/wiki/Sculpted_Prims:_FAQ Qarl Linden, that's the exact name, is the developer who came up with this idea, starting from the basic feature a NURBS surface gives: a planar square UV regardless of the shape.

In case you misunderstood, I didn't doubt what you say. But I was surprised, because it's a very weird and seemingly inefficient way of doing it. A sculpt map is nothing but a list of xyz vertex coordinates, and their UV coordinates are hardcoded constants. Why make it more complicated than that? Then again, this does seem to explain some of the peculiarities of sculpts.
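To illustrate the "UV coordinates are hardcoded constants" point: for a given grid configuration the UVs follow mechanically from the row and column indices, so nothing needs to be stored or computed from a NURBS at all. A sketch (`sculpt_uvs` is a hypothetical helper, not SL code):

```python
def sculpt_uvs(cols, rows):
    """Constant UV grid for a sculpt: vertex (i, j) simply maps to
    (i / cols, j / rows), so no UV data needs to be stored at all."""
    return [(i / cols, j / rows)
            for j in range(rows + 1) for i in range(cols + 1)]

uvs = sculpt_uvs(32, 32)
print(len(uvs))          # 1089 vertices for a 32x32 grid
print(uvs[0], uvs[-1])   # (0.0, 0.0) (1.0, 1.0)
```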

 

1 hour ago, OptimoMaximo said:

The Maya workflow is very similar to the workflow a sculpt generator had, in the past. I can't remember its name, it worked with a stack of "slices".

That would be Sculpt Studio. It's still available, not on its own but bundled with Mesh Studio.


1 hour ago, ChinRey said:

In case you misunderstood, I didn't doubt what you say.

I didn't misunderstand what you said, I just pointed you to more in-depth info for your understanding. The flaws and general issues that sculpts have always had can certainly be better explained by looking at the feature's history.

 

1 hour ago, ChinRey said:

it's a very weird and seemingly inefficient way of doing it. A sculpt map is nothing but a list of xyz vertex coordinates, and their UV coordinates are hardcoded constants

It would be inefficient if the sculpt map was baked at too high a resolution or with the wrong conversion-to-polygons settings. By making sure that the conversion to polygons occurs with a specific set of settings, you get the grid UV as you need it for sculpts, and a matching-size sculpt map is generated. The difference is in the ability to generate the polys from the NURBS surface interactively: as long as the final number of vertices is the same, you can generate the same sculpt as a square map or as an oblong one, depending on the spline's number of spans, just by changing a few parameters in the spline's history. Besides, the fact that it is a constant grid makes it possible to use a folded mesh plane that respects the same number of vertices for the type of sculpt map you're making. That's probably an easier approach, but it forces you to recreate the sculpt-compliant object if you want to change the output map size. Using NURBS surfaces, instead, you can get that change from the surface's construction history with no effort, because the polygon object gets generated as a folded plane with flat 0-1 UVs.

Edited by OptimoMaximo
typo

22 minutes ago, OptimoMaximo said:

It would be inefficient if the sculpt map was baked at too high a resolution

Unfortunately, they often are. At least three of the most popular old sculpt applications - Prim Generator, Tatara and Wings 3D - generate oversampled maps with 23,296 rather than 4,096 pixels. Those maps are horrendously laggy and also far more prone to render errors than properly made ones.

Edited by ChinRey

3 minutes ago, ChinRey said:

Unfortunately, at least three of the most popular old sculpt applications, Prim Generator, Tatara and Wings 3D generate oversampled maps with 23,296 rather than 4,096 pixels. Those maps are horrendously laggy and also far more prone to render errors than properly made ones.

That's the fault of whoever wrote the exporter, I assume. How can one export an image of 152.63 pixels per side? How did the application accept that in the first place....


Just now, OptimoMaximo said:

That's the fault of whoever wrote the exporter, I assume.

No, it's my fault - aka a typo. I was too lazy to do the math so I used a spreadsheet and typed 128*182 rather than 128*128 and didn't even think twice about the strange result I got. :$

Apart from that, there is a perfectly good explanation why I managed to make such a newbie mistake, I just haven't made it up yet.
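For the record, the arithmetic behind the typo and the numbers quoted earlier in the thread:

```python
import math

print(128 * 182)  # 23296 - the accidental spreadsheet figure
print(128 * 128)  # 16384 - the intended oversampled-map pixel count
print(64 * 64)    # 4096  - a properly sized sculpt map

# The "152.63 pixels per side" joke is just the square root of the typo.
print(round(math.sqrt(128 * 182), 2))  # 152.63
```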


9 minutes ago, ChinRey said:

No, it's my fault - aka a typo. I was too lazy to do the math so I used a spreadsheet and typed 128*182 rather than 128*128 and didn't even think twice about the strange result I got. :$

Apart from that, there is a perfectly good explanation why I managed to make such a newbie mistake, I just haven't made it up yet.

That explains it. So in the end it's still the exporter coders' fault if they even allowed sizes not dictated by the "grid" sizing.


33 minutes ago, OptimoMaximo said:

That explains it. So in the end it's still the exporter coders' fault if they even allowed sizes not dictated by the "grid" sizing.

Hmmmm...

A team of professional developers who really should have known better does a bodged-up, left-handed, sub-any-standard job and then hands the result over - without any proper documentation - to a bunch of ignorant dabblers who have managed to fool themselves and others into believing they know what they're doing. Whose fault is it when things go wrong? Hard to say, but really, anybody with enough intelligence and skill to write a working script or program at all ought to know that superfluous data is just that.

Edited by ChinRey

9 minutes ago, ChinRey said:

Hmmmm...

A team of professional developers who really should have known better does a bodged-up, left-handed, sub-any-standard job and then hands the result over - without any proper documentation - to a bunch of ignorant dabblers who have managed to fool themselves and others into believing they know what they're doing. Whose fault is it when things go wrong? Hard to say, but really, anybody with enough intelligence and skill to write a working script or program at all ought to know that superfluous data is just that.

The lack of documentation and of clear, example-based explanation has been the main reason, to me. We simply had the option of a custom oblong sculpt map right out of the box, with no clue anywhere about its proper use. Of course, anyone with no tech mind would think "texture encoding: higher res = higher precision" - which ended up being a question someone once asked me: "how can a 64-pixel-square image have more precision than one at higher resolution?". Somewhere they had read about the size, but they just knew better, as you pointed out ;)


It might be worth noting the specific contexts for which this is actually true and relevant - or rather, underlining and stressing that. It is nice that you've taken the time to do a basic test and find something out, but to be realistic, it is missing some fundamental points.

A stripped-down basic flat shader will render hundreds of thousands to millions of triangles without sneezing, on old, outdated hardware. Though if the controlling program or shader is written particularly poorly, it might not. But that is a hard one to get wrong.

In terms of SL, different windlight settings will impact the same viewed content, even exponentially. The content is not the problem in those cases, clearly. That is to say, the same content that renders fine at 30-60 fps can, by changing scene settings, crawl to a snail's pace. We can observe that out in the wild.

Other teams and software packages set the bar for the now-outdated "next-gen" at 10k triangles per character, years ago.

You're talking about 4.4 frames per second. No one ever realistically feels that unless you are under fifteen fps - though some may argue twenty. I've stubbornly played an FPS game averaging 8 frames per second; I know from experience when and what that kind of pain is.

The number of triangles you are talking about in your test is fundamentally tiny and doesn't relate to the issue of using high-poly content that is not designed for real-time rendering - which is probably the more significant topic.

Textures, being images, are just tables of data. A shader need never use all of it; I doubt the underlying library even does, explicitly, all the time. OpenGL has had many revisions over the years and is supported/implemented by nvidia. That doesn't necessarily mean that SL does the same, I guess - I haven't actually looked at the code to be sure - but it will come down to how it is written and used. In that sense, texture size for those of us who do not touch any of the viewer's rendering pipeline is mainly a time-to-download concern. Circle of control and circle of concern. Of course, most reasonable creators are not totally suicidal, and if it doesn't work on their own viewer and machine, it clearly doesn't work.

Just a few things to consider.

It is worth being reasonable and wise with triangle counts, but it is not imperative or necessary to stress over them.


2 hours ago, NaomiLocket said:

A stripped down basic flat shader will render hundreds of thousands to millions of triangles without sneezing on old outdated hardware. Though if the controlling program, or shader is written particularly poorly, it might not. But that is a hard one to get wrong.

Not really that hard, if you're writing your own. SL got its materials based on OpenGL's basic Blinn/Phong model, modified with custom map data and encoding. There is no shader I know of that has an "environment" entry; I know of "ambient", but it doesn't do what SL's environment does. The closest type of data in a shader is the f0, which is supposed to indicate a surface's reflectance at normal incidence. But I suppose that the work LL is doing in parallel with the main, more famous Animesh project is key to reducing ALM's impact on render stats. I think Rider Linden is in charge of renewing the Windlight system, and for a good reason: windlight doesn't actually have an environment component, and the areas marked with white in an environment map just show a dull default skydome in the object's reflections, with no trace of the windlight itself. So, to me materials are cool, but the shader could be optimized and improved by feeding windlight through the environment map.

As for polygon count: as I often say, you should use the amount of geometry that actually makes the object's volume clear, also from a distance. The fact that ChinRey pushes so hard towards optimization is due to the other known fact that people will upload sculpted models straight from ZBrush. A flat shader can help an old machine render a few million polygons, but it can't when those "few million polygons" are found on every single item in a scene.

Making a good model is a matter of compromises: I need to keep the polygon count as low as I possibly can because a) fewer polygons make for easier editing, and b) too much fine-detail modeling makes UV mapping imprecise and more difficult, and the model becomes more prone to modeling artifacts (like non-planar/overlapped/too-small faces) in areas where textures might have done the job perfectly well, as long as the detail doesn't need actual volume. Why waste time and polygons on polygon detailing if the shape doesn't really have to be part of the object's silhouette? Hence it's a matter of general optimization, where polycount makes its contribution in a virtual world where "high poly = high quality" in most end users' minds.

I once got a reply like: "But it looks better, it's not like games where you can see the jagginess of the polygons, this is Second Life and it has to look life-like, hence I don't care if it's heavier, I choose this. Otherwise we could just stick with the classic avatar, it doesn't make sense." It's easier to smooth/subsurface/turbosmooth a character and whip it into SL as is, rather than optimizing the vertex locations so as to have roundness where needed. So that's what the so-called "designer" did: easier and faster. Plus, all the small wrinkles on your skirt/shirt are modeled one by one, which shows you're a professional and a great artist, totally worth those 30K polygons per piece of clothing - not those fake/painted folds those 3D noobs make (sarcasm here). Assets so much better than those from the "3D noobs" that people can't rez them for something like five minutes. Should I remind you of the famous high-quality mesh head with a brain model inside, to show people that it actually attached but was just taking its time to rez? (more sarcasm here)

To me, as I stated in the animesh forum already, it always boils down to the general design of your item. Every part counts, from polygon count to textures (shaders also, but we have no power over that) and scripts. Doing your best on all sides ensures your best, most optimal product (for the time being, until you improve even further), and that's what every content creator in SL should aim for, for the platform's health in the first place.


By and large, yes. This is why I tend to push more for optimisation and performance through code first, before we push for people to be too picky with their triangles. Once there is a better foundation and a reasonable offering to work and build with in the first place, being fussy about triangles matters more - not just for performance considerations, but because the responsibility becomes more mutually shared in general.

Facilitating good practices helps waste less effort in promoting them. We know full well that people are up against three levels of shader programs with radically different appearances that every asset is bound to; that actual uploads outside the test grid are fixed copies, monetising one-shot database entries almost WordPress-publish style (where have we heard that not long ago); that we cannot define multiple geometries based on graphics preference (you need more triangles for vertex shadow painting); and that the upload process is generally painful. So people will always take shortcuts, even if they can or know how to do better. It's a time management and sanity thing.

I was actually happy to hear animesh was happening, instantly knowing there would (hopefully) be less reason to alpha/mask flip, instantly reducing the number of links and the geometry needed for given effects. A step in the right direction. It sadly should have been in the design before SL went public, given UT 03/04 at the same time and general shared features that were largely pre-existing concepts.


6 hours ago, NaomiLocket said:

It might be worth noting the specific contexts for which this is actually true and relevant. Or rather underline and stress that. It is nice that you've taken the time to try and do a basic test, and find something out. But to be realistic, it is missing some fundamental points.

Yes, it's an illustration of course. It's not going to tell the whole story.

 

2 hours ago, NaomiLocket said:

By and large yes. This is why I tend to push more to optimisation and performance through code first, before we push for people to be too picky with their triangles.

Ideally yes, but we have no control over the code and, realistically, it's going to take years for LL to clean up all the old mess and get proper resource accounting routines in place. The latter may not even be possible anymore.

I can tell you why I brought up the poly count aspect at this point. Just before Christmas, one of my tenants rezzed a Christmas gift she had picked up at one of the bigger clothes stores - I think it was Blueberry. It was a winter landscape thingy - a garden with a cliché cottage and a bunch of trees - trees apparently made the quick-and-dirty way with the Blender tree plugin. The thing was so heavily textured it was a miracle it didn't collapse into a black hole, but that was not the only reason for the lag. It's those trees.

We have been warned against uploading those Blender plugin trees, and for very good reason: each of them can easily have 100,000 or more triangles. Of course, a Christmas gift thingy that is only going to be rezzed once and then forgotten in an inventory isn't going to do much damage, but since then I've seen similar trees for sale from several relatively popular plant brands. Imagine a grove with, say, 20 such trees - each with 100,000 triangles angled in a crazy semi-random "pattern" requiring rather complex shading.

I may be wrong but I don't think there's ever been anything as triangle heavy as that in SL before.

Edited by ChinRey

Oh, there has been, just not as well disseminated - but that isn't your point. Thank you for the clearer picture. I am well aware of our limitations in affecting the code where it counts, but that also plays into why I still push for the code. If we get so efficient at compromises, and take forever to do things as efficiently as possible, we eliminate any reason for the code to change. We've gone through these many years with selection highlighting unable to cope with the system's sheer volume of vertex data in some cases (needing to be turned off to restore performance). It is not like certain things have changed regardless of us being efficient or not. I am pretty sure there has been an ancient article somewhere on how to engineer application code to use shaders more efficiently for exactly the purpose of handling selection feedback. While we do not control that, we do have the needs, and there are benefits to be had.

I don't agree with the practice of those trees, however, and in principle I understand your efforts and the need. I wasn't too keen, way back when clothing tools were being discussed, simply from seeing the resulting topology and knowing how the system tends to behave in general. While I do stand somewhat on the logician's side in these topics, I would still keep to "reasonable" application and freedom of design, partially because of the rigidity of the system itself. Outside of that rigidity, yes, I agree.

Perhaps somewhere down the line, a more relatable example and demonstration would help - something that solves one or two particular constraints of the system, that may itself influence the decisions and principles in its construction, and that demonstrates a wider performance gap in general. Though I'd like clarification on what the FPS counter is actually counting; I have heard it isn't strictly rendering performance and includes most if not all of the application's activity.

Your example in the other thread, making purposeful dual use of the shadow plane, is actually a decent point to keep people thinking about the hows and whys of doing something a particular way (even if ALM comes with its own shadows).


3 hours ago, NaomiLocket said:

Oh there has been, just not as well disseminated, but that isn't your point.

My point is certainly not that poly count is the only reason for lag, or even the biggest. But it is still an important factor, and one we keep underestimating and overlooking.

Since I mentioned trees: a typical poly count for a tree in Unity is a few hundred, maybe a few thousand for games intended only for high-powered computers. But that is with very heavy LoD reduction, down to an impostor even at fairly moderate view distances. Impostors add to the texture count, but used well they still reduce render cost because of the polys they save. I often wish we had more of them in Second Life, but they don't seem to work very well with SL shading, so maybe it's just as well we don't.

The reason we have LoD in the first place is of course to reduce the number of triangles in the scene. I really hope the myth that increasing RenderVolumeLODFactor has no negative effect has been well and truly killed by now; the negative effect is all about the number of active triangles, nothing else.
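A rough sketch of why that setting matters. The viewer's actual constants live in its source code, so the function below is only the commonly described proportional relationship (LoD switch distance scales with object radius and with RenderVolumeLODFactor), not the real formula:

```python
def high_lod_distance(radius, lod_factor, base=1.0):
    # Hypothetical proportional model: the distance out to which the
    # full-detail (full triangle count) model stays active. The linear
    # scaling is the point; the constants are illustrative.
    return radius * lod_factor * base

# Doubling RenderVolumeLODFactor doubles that distance...
d1 = high_lod_distance(0.5, 2.0)
d2 = high_lod_distance(0.5, 4.0)
print(d2 / d1)        # 2.0

# ...which roughly quadruples the ground area filled with full-detail
# objects, and with it the number of active triangles in the scene.
print((d2 / d1) ** 2)  # 4.0
```

That quadratic growth in active triangles is the hidden cost behind "just turn up your LoD factor" advice.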

Look at this:

5a6ac6e8716c3_Skjermbilde(1016).png.b5df8fd663a2ab01e9bc1c0a0a4f3162.png

This is how the end of a long, narrow cylinder looks at point-blank range. To save some triangles, the LoD reduction formula for cylinder prims is different from that for other objects. Grumpity Linden commented on it in https://jira.secondlife.com/browse/BUG-40629. It's not his own words but something he found important enough to quote:

"The LoD scale bias for cylinders ignores the z-axis because, in the case of actual cylinders stretched along the z-axis, it saves a ton of triangles with no visual degradation (silhouette edges are preserved)."

(That JIRA is about mesh, btw. By accident, LL ended up applying the reduced-LoD cylinder formula to three- and four-face meshes too, where it is most certainly not appropriate. It took them six years and at least three JIRAs to notice...)
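A minimal sketch of the bias the quote describes. The structure and numbers are illustrative, not viewer source code: LoD switching is driven by an object "radius", and for stretched cylinders the z-axis is left out of that calculation:

```python
import math

def lod_radius(scale, cylinder_bias=False):
    # Bounding "radius" used for LoD switch distance (illustrative sketch).
    x, y, z = scale
    if cylinder_bias:
        # The bias from the quote: ignore the z-axis for cylinder prims,
        # so a long thin cylinder counts as a tiny object.
        return math.sqrt(x * x + y * y) / 2
    return math.sqrt(x * x + y * y + z * z) / 2

# A 0.1 x 0.1 x 10 m cylinder prim:
print(round(lod_radius((0.1, 0.1, 10.0)), 2))                      # 5.0
print(round(lod_radius((0.1, 0.1, 10.0), cylinder_bias=True), 2))  # 0.07
```

With the bias, the stretched cylinder is treated as a few centimetres across rather than metres, so it drops to lower LoDs far sooner - harmless for a true cylinder silhouette, ugly when the same rule hits three- and four-face meshes.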

Linden Lab is not consistent, of course - that would be asking for too much. It's easy enough to find examples where they waste triangles as if there were no tomorrow. But they do recognise that it's a significant lag factor, and every now and then they even remember it.

I think I've mentioned this at least twice here and I try not to repeat myself, but I once met a professional 3D modeller who was looking at Second Life as a place to build what he wanted to, rather than what the movies and games he usually worked on asked of him. I don't think he even said hello before he asked, "Why are meshes in Second Life so high poly?"


3 minutes ago, ChinRey said:

I think I've mentioned this at least twice here and I try not to repeat myself, but I once met a professional 3D modeller who was looking at Second Life as a place to build what he wanted to, rather than what the movies and games he usually worked on asked of him. I don't think he even said hello before he asked, "Why are meshes in Second Life so high poly?"

Did you point him in the direction of the graphics preferences, the shaders, the state of the OpenGL implementation, and the asset design principles SL banked on? If he is a professional, he'll answer his own question in less than a minute.
