
Why Does PE Have To Go Up with Larger Mesh?


Cathy Foil



Can anyone explain to me why mesh PE has to go up when a mesh is scaled up?

 

I know they can be seen from farther away, therefore placing a higher load on the servers, but come on.  Even at 64 meters it is just like moving the prim 32 meters closer to the avatar.

 

Is it the LOD?  Does it make the large mesh look better from too far away?  If so, why not just give mesh over a certain size different LOD-change distances from smaller mesh objects?

 

Is it that at ground level, with large meshes combined with all the other standard prims and sculpties, LL is afraid they will cause just too much lag?

 

Well, here are a few of my suggestions if this is the case.  One I already said: set the LOD distances differently for larger mesh.  Two: set larger mesh to different loading priorities at different altitudes.  I am not sure I am saying this properly; I mean the order in which things are loaded into your viewer within the draw distance you have set.  Perhaps have a second draw distance option just for mesh in the viewer's preferences menu.  If you are going that far, why not separate draw distance settings for prims and sculpties as well?  Let the residents decide which settings are best for them.

 

So at ground level, mesh could be set at, say, half of what the draw distance is set at.  To clarify: if someone's draw distance was set at 200 meters, any mesh that was 100 meters or farther away would not be loaded into the viewer.  But at higher altitudes, where things are a lot less crowded, all mesh within the 200 meters would be visible.

 

If larger meshes are too much of a burden on the physics engine, why not simply make all mesh over a certain size automatically phantom?  If someone needs it to be solid they can simply use invisible regular prims, as we do with large sculpties now.

 

Is it the texture on larger mesh that is the problem?  If so, why not lower the size limit of the texture allowed on larger mesh?

I know these suggestions are possible.  Anything is possible with computers and programming.  It is just a matter of will and time.



The explanation (supposedly) is that the distances where LOD switches happen are different for different-sized objects. If a mesh gets twice as big, the distances are doubled, and the area where avatars see the highest LOD is quadrupled. This means that the bigger high-LOD data has to be downloaded, and the greater number of triangles has to be rendered, four times as often. Since they try to make the PE reflect the resource usage, the PE goes up sharply. It is more complicated because of the effects of the draw distance, but this is the basic principle. Physics weight is very different, and for the Prim physics shape type, the weight actually decreases as the object gets larger. That's not something I understand at all.
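To spell out the arithmetic: the highest-LOD zone is a disc whose radius is the LOD switch distance $d$, and that distance scales with object size, so

$$A = \pi d^2 \quad\Rightarrow\quad A' = \pi (2d)^2 = 4A.$$

Doubling the size doubles $d$ and quadruples the area over which the full-detail data is streamed and rendered.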


It weighs more because it's hollow in the middle. Like the dreams of yesteryear, lol.

 

 

Besides all the techie gobbledygook, the basic factor is that more folks see it, therefore more resources of said sim are used. It's not a money issue, but more the realistic impact of server load, weighing its value against, let's say, folks who rent land and hog up larger portions versus those that would otherwise rent elsewhere.


This is the part I can't figure out: is it sim resources or is it viewer resources? Tris, I expect, wouldn't make much of a difference for server resources; it's a case of mathematics, not rendering. A 10-foot tree compared to a 100-foot tree is a basic calculation in terms of sim resources. If it's a factor of viewer resources, then this doesn't make any sense at all, because residents can change their settings to accommodate their hardware; if we are being charged more money because of this, then surely it's an added cost purely for the sake of an added cost.

We are being limited due to a median based on a basic middle ground for computer hardware. It's the same with LOD costs; these can also be controlled independently via the viewer settings. It's like charging people for shadows. Or do I have it wrong?


  • Lindens

The streaming cost component of PE is based on the average number of bytes visible within the area of a circle that circumscribes a region.  Larger objects LoD sooner, increasing the average number of bytes visible over that area, making them use more bandwidth.  While it's true the actual displayed LoD is dependent on user settings, the streaming cost calculation is based on a constant LoD ratio and is therefore viewer setting independent.  The target triangle budget is 250 thousand triangles visible from the center of a maxed out region, but you can increase that budget for your viewer by increasing your detail settings in preferences or increasing your draw distance.
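A rough sketch of that weighting in Python (the LOD switch divisors and per-LOD byte counts below are made-up illustrative values, not LL's actual constants; only the area-weighting over the circumscribing circle follows the description above):

```python
import math

# Area of the circle circumscribing a 256m x 256m region (radius ~181m).
REGION_CIRCLE_AREA = math.pi * 181.0 ** 2

def weighted_avg_bytes(radius, lod_bytes):
    """Average bytes visible over the circumscribing circle.

    lod_bytes: download sizes for the (high, mid, low, lowest) LODs.
    The divisors below are illustrative; they just make the LOD
    switch distances scale linearly with object radius.
    """
    d_mid = radius / 0.24     # camera distance where high LOD drops to mid
    d_low = radius / 0.06     # mid -> low
    d_lowest = radius / 0.03  # low -> lowest
    # Annulus areas over which each LOD is the one being streamed.
    a_high = min(math.pi * d_mid ** 2, REGION_CIRCLE_AREA)
    a_mid = min(math.pi * d_low ** 2, REGION_CIRCLE_AREA) - a_high
    a_low = min(math.pi * d_lowest ** 2, REGION_CIRCLE_AREA) - a_high - a_mid
    a_lowest = REGION_CIRCLE_AREA - a_high - a_mid - a_low
    areas = (a_high, a_mid, a_low, a_lowest)
    return sum(b * a for b, a in zip(lod_bytes, areas)) / REGION_CIRCLE_AREA

lods = (20000, 5000, 1200, 300)        # bytes per LOD, purely for the demo
print(weighted_avg_bytes(0.5, lods))   # 1m object: ~318 bytes on average
print(weighted_avg_bytes(4.0, lods))   # 8m object: ~1431 bytes on average
```

The larger object keeps its heavier LODs visible over a much bigger area, so its average bytes (and hence its streaming cost) come out several times higher even though the asset itself is byte-for-byte identical.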

The current budget of 250 thousand was chosen by examining the triangle count of various inworld locations and looking at the performance characteristics and capabilities of target systems (which are slower than you might think).  The budget is also biased a bit low for the initial release because it will be easier to raise the limit later than to lower it.

 

 


Then why would there be a scaling upload charge in L$? If upload costs are directly related to PE, then we are being charged real money on the basis of viewer settings, which are highly variable. I understand PE and its limitations on the grid, that's a given, but L$ costs? As someone mentioned before, with the new lower limit of L$150 and a scaling cost, they would need to pay around $200 (not L$) to upload their model; that is beyond excessive and completely unrealistic.

Then add in the mesh vs. sculptie comparisons (a sculpt costs L$10, has a drastically lower prim count and in some cases looks better): where is the incentive to upload any meshes? If they don't sell, then the creator has not only lost a lot of time and effort but is also out of pocket. Something has to give here; let it be upload costs. Make it a single set fee regardless of PE. Linden will still make money via the marketplace and extra L$ purchases.


The longer I think about it, the more it appears to me that the "punishment of mesh" could indeed be a way to keep people from swamping the grid with non-optimized mesh objects. As far as I can tell from the last few months, many (many!) people just do not know exactly how to get along with mesh development, and especially with optimization.

So the current cost settings seem to keep most people from working with meshes. Just a few enthusiasts might want to pay the high bills. Maybe the cost system makes people learn optimization strategies to actually get mesh objects cheaper (e.g. my kettle quest was mainly developed because I wanted to learn how to get the most out for the least possible cost).

And punishing larger objects in the first place may even be a wise idea, so we can all learn at the small scale before we go on to the big builds...

Well, this is just my 2 cents, trying to get some sense out of the current cost settings.


It still does not make any sense to me to put such a high upload fee on meshes. That is just too much, especially because the only way to check whether something works as expected in the environment where it will be used is by uploading it. So if LL would allow temporary meshes, the situation would be a bit more comfortable.

But actually, my proposal is to "Make it easy":

 

  • 10 L$ for each mesh in an upload
  • 10 L$ for each added texture, so at most 90 L$ per prim (assuming it can have up to 8 textures)

All the rest will most probably sort itself out; i.e., we won't want to rezz high-poly meshes due to PE. So what is the issue???


The problem is that low-poly optimized meshes are being given very high PE if they are large objects, even though they have WAY fewer triangles than equivalent giant sculpty objects.

Sculpties should never have been given their 1-prim status; that's plainly obvious now.

 


I cannot say what actually makes LL set the costs as they do. The only reason I can see for this move is to avoid getting large mesh objects into SL for now. I do not know why this is so. I do not know if it will ever be changed. IMHO it makes no sense to compare sculpties with meshes to understand LL's decision, even if all numbers indicate that meshes are the better option.

LL would be quite insane if they discouraged the usage of the "better technology" in the long term... It's just a pity that they do not tell us the whole story (in simple-to-understand words) ;-(


The only reason I can think of, other than pure profit, is to discourage outlandish meshes; however, the PE is punishment enough for this. Regardless of what the cost is to upload, if your messy mesh is a resource hog you will have to sacrifice a significant number of prims to have it.

And I completely agree with your pricing suggestion, Gaia: L$10 for the mesh + L$10 per texture. It's reasonable, fair and logical. Anything else is based on profits, and in this case, when you start charging content creators... well, it's nothing short of biting the hand that feeds you.


I don't think it's necessary to look for any ulterior motive for the size-dependence of PE. It is an automatic consequence of the drive to make the triangle count the limiting factor. Apart from what I consider to be the misguided setting of max_area* in the calculation, the calculations do just about exactly what they are supposed to do. Now, this is a very real problem in that it seriously hamstrings meshes for anything bigger than a couple of meters. The sensible solution is a model-specific LOD factor, for which I filed a jira. This would allow the creator and/or user to specify sensible LOD switches and achieve reasonable PE, while remaining entirely consistent with the triangle-limit philosophy.

If we accept that we are not going to get them to change the triangle-limit philosophy, then the only thing that leaves for debate is what the target is: how many triangles, and for what graphics setting. In my view the present choices are destructively constraining, but again, that is more likely simply over-cautious rather than reflecting ulterior motive. I think the excessive restraint will not serve to increase income from mesh adoption; on the contrary, because it will strangle mesh, it must be destined to minimise any such income.

* The present max_area effectively sets the draw distance of the low-end user, who the limit is designed for, to 181m, when the medium graphics setting that is supposed to be the target is 96m. The effect of this inconsistency is about a doubling of PE, and a disproportionate increase in PE for larger meshes, but I don't think that is its intention. I think it is just a mistake.

 


Although, starting out, I don't think that was their direct intention, I do think it is serving the purpose you describe. My thoughts are that people are not using the proper workflow to achieve a mesh that is close to where it needs to be, such that optimization actually gets it to where it will be workable. I am speaking of the non-enthusiasts, or the people that have a basic understanding of mesh but are not on this forum reading about all the things the hardcore testers are finding to work for them. (In other words, the info on how to get good results is scattered and not all in one place, in, as Ele puts it, "layman's terms" :D, much like how it is for sculpties.)

With programs such as ZBrush, where people are making million-plus-poly models, something like a 64K-triangle gun that was decimated to reach 64K cannot be decimated further down to where it will work and have proper topology. It has to be re-topologized to achieve that, but I do not believe that most people will be doing that, or will understand the complete workflow to get it done. I am in the process of working on video tutorials, similar to the ones you do, but for Maya and the other programs I use, to get models that are acceptable for upload.

I was thinking hard about the L$150 charge, and at first it seems way high when you compare it to the upload of a sculpt image; but that, regardless of tris or quads or anything, is just one image. On the other hand, with mesh, if you do not use anything auto-generated and use all your texture faces, you would be uploading basically 13 different things: 5 meshes and 8 textures. If that were priced the way prim uploads are now, it would be L$130, so really it's only L$20 too high IF you are using all the upload options. [Edited] The only problem left is that 5 of those uploads are actually one thing in-world, so they could be combined into one, similar to how sculpties work now: you upload one image and all LOD is handled by the server. That would make the maximum upload L$90; the creator would pay up to that if they want more features, more colors or whatever, but it could be as low as L$20. Now that sounds nice :D

So I agree with breaking it into L$10 for each part; this would make it straightforward and IMHO very fair. That only leaves it up to the creator to optimize, to get a product with an acceptable PE such that they are able to sell it, or to wear it without bringing sims to a standstill. But as many have said, that is already the case now with sculptie shoes and hair and such.



Drongle McMahon wrote:

If we accept that we are not going to get them to change the triangle-limit philosophy, then the only thing that leaves for debate is what the target is: how many triangles, and for what graphics setting. In my view the present choices are destructively constraining, but again, that is more likely simply over-cautious rather than reflecting ulterior motive. I think the excessive restraint will not serve to increase income from mesh adoption; on the contrary, because it will strangle mesh, it must be destined to minimise any such income.

* The present max_area effectively sets the draw distance of the low-end user, who the limit is designed for, to 181m, when the medium graphics setting that is supposed to be the target is 96m. The effect of this inconsistency is about a doubling of PE, and a disproportionate increase in PE for larger meshes, but I don't think that is its intention. I think it is just a mistake.

 

Interesting. I sincerely hope that it is a mistake and that current PEs are in fact around double what they should effectively be. I must admit, last night when Dave did that test with the rock-face pools, I was trying to figure out what the problem was; even with them filling the sandbox we were on, I didn't lag in the slightest, and my graphics settings are quite high while I consider my computer to be mediocre at best.

It seems silly to me to base PEs on low-end users rather than mid-range users. Just the fact that you need a graphics card for SL is enough for most people to know and understand about upgrading hardware. Not to mention sculpties blow all that out of the water anyway; they usually cause lag regardless of how powerful your computer is.

 

Just to add further support for halving the current PE limits: it's not LODs, textures and tri counts that cause huge rendering issues, it's shadows, AA and other graphics features that effectively double, triple or quadruple the rendering work. Just by turning some of these features down or off you can increase rendering performance significantly.


  • Lindens


Drongle McMahon wrote:

* The present max_area effectively sets the draw distance of the low-end user, who the limit is designed for, to 181m, when the medium graphics setting that is supposed to be the target is 96m. The effect of this inconsistency is about a doubling of PE, and a disproportionate increase in PE for larger meshes, but I don't think that is its intention. I think it is just a mistake.

 

You keep bringing this up, but look at how the math works out and you'll see that the larger max_area is, the lower the streaming cost is, because the streaming cost is based on the average number of bytes visible over max_area.  Effectively, the average number of triangles visible over max_area approaches the number of triangles in the lowest LoD as max_area approaches infinity.
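In symbols (a restatement of the limit described above, with $b_i$ the bytes, or triangles, of LOD $i$ and $A_i$ the annulus area over which it is streamed): every $A_i$ except the lowest is capped by the fixed LOD switch distances, so only the lowest-LOD term keeps growing with max_area, giving

$$\lim_{A_{\max}\to\infty}\frac{\sum_i b_i A_i}{A_{\max}} = b_{\text{lowest}}.$$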


First, I want to thank Runitai Linden for responding to my initial posting.  You've answered my questions quite nicely and I think I now have a good idea of why LL has taken this approach to PE costs.

 

The current approach, I think, is going to hinder and hurt mesh development.  I do personally believe that SL has to embrace mesh in order to survive.  In the other virtual worlds and gaming platforms which are SL's main competitors, graphics are getting better and better every year, leaving SL behind.

 

Here are a few ideas I had that might be a better approach to dealing with mesh and lag.

 

There could be different tier levels on initial uploads for mesh of different in-world physical sizes and/or scalability.  What do I mean by this, and how would it work?

 

You could have two or more different kinds of mesh.  For instance, you could have mesh that is scalable in-world; its PE would go up dramatically as the size is increased, just as it does now.  Then you could have fixed-size mesh, whose in-world size is fixed upon upload.  The bigger the mesh, the more it costs to upload and the fewer triangles one can have per mesh object.  Even smaller fixed-size mesh should be given a break, allowed more triangles per object, and given a smaller PE than it gets now.

 

This approach would discourage lazy people from just going to the Google SketchUp 3D Warehouse and uploading huge buildings or ships that have a high poly count.  Lazy people are usually cheap when it comes to actually having to pay for something.

 

The second approach I would take is giving the individual resident more control over draw distance, so they can tailor their lag experience to their machine and internet connection.  As I suggested in my previous post, have more than one draw distance slider: one for prims, one for sculpties and one for mesh.  Come to think of it, how about one for avatars as well, seeing as it appears that avatars and what they wear may be causing more lag than just about anything else in SL?  Let consumers determine how much lag they are willing to put up with by giving them more control.

 

My last thought is to wonder whether bandwidth usage actually costs LL more real dollars, cutting into their bottom line.  Does mesh mean considerably more bandwidth than prims or sculpties?

 

If real bottom-line costs are behind what I consider harsh PE on mesh, then why not offset those costs?  How?

I have been suggesting this for years: why not get some NATIONAL ADVERTISERS, at least on the opening splash screen when you start up your viewer?  Heck, if I were the owners of Phoenix I would sell that space for advertising, perhaps making enough money so the developers could work on it full time.  Hey, if it meant lower PE or land tiers, I would happily look at a Coca-Cola ad as SL loads.


Yes, I know. In the context of the current maths for the triangle budget, that is true. It doesn't change the plateau at high sizes, but the plateau is approached faster with a smaller max_area. In that context, it's a "mistake" I like. I suppose the rationale for using it is that it is the area of the smallest circle that includes all the objects on the region (if you are at the centre). If the region is an isolated island (as most are), that is the same thing as all the triangles you would see if your draw distance was the radius of that circle (half the region diagonal, 256·√2/2 ≈ 181m). So you can quite fairly say that that is the perceived triangle count for an avatar at the center with draw distance just big enough to see everything on the region (and renderVolumeLODFactor of 1). Then, when an isolated island region is filled to capacity with the object, uniformly distributed, that should be equal to the triangle budget. There is nothing wrong with that, and that is what the current calculation does. With this approach, max_area is not supposed to represent an actual likely draw distance, and therefore there is no mistake.

The part I have problems with is that the user with a low-end machine, who I presume is the user for whom we want to limit triangle numbers, will not generally have his draw distance set so high as to see an entire region's content. So I would define the PE as the cost that means an avatar on medium graphics settings would see 250,000 triangles in a region uniformly filled with copies of the object. On medium graphics settings (I think) his draw distance is 96m. Then, standing in the center of the same region, he will only see pi*96^2 = 28953 sqm, which is slightly less than half the region area. Thus he will only see 28953/65536 of the objects. (He can't see to the edges, so we can ignore the question of whether the region is isolated or not.) The weighted average number of triangles he sees per object is now given by substituting 28953 for max_area in the calculations. At first sight, that looks like 3.5 times as many, but as it approaches the inflection points faster, and reaches the plateau sooner with increasing size, that is only true for the smallest sizes. Now the number of objects in the region can be as many as will make him see 250,000 triangles in the 28953 sqm area, as he cannot see any outside of that. That means the weighted average count applies to only 28953/65536 of the 15000 PE region prim budget; in other words, the weighted average triangle count only accounts for 0.44 of the 15000 region PE allocation. This compensates for the faster increase of PE with size.
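A quick check of the arithmetic in the paragraph above (Python, assuming nothing beyond a 256m region and the quoted 96m draw distance):

```python
import math

REGION_AREA = 256 * 256   # 65,536 sqm per region
DD_MEDIUM = 96            # quoted default medium-graphics draw distance (m)

visible = math.pi * DD_MEDIUM ** 2
print(round(visible))                   # 28953 sqm
print(round(visible / REGION_AREA, 2))  # 0.44 of the region's area
```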

These two are compared in the graph I made earlier, redone here with a small correction for renderVolumeLODFactor. The orange line is the relative PE with the present calculation and the green one is the one described in the second paragraph here. The essence is that (a) the present dotted orange one is up to about 2 times as high, depending on size, and (b) the other, green, reaches a plateau sooner, so that the reduction is highest at the largest sizes*. The blue line on the graph is for the same argument as the second paragraph here, but using the high graphics settings instead of the medium (dd=128, so edge effects are still missing). This is really very similar to the present calculation. So if we say the triangle limit is supposed to apply to an avatar on high graphics settings, then there is little to choose between the two definitions. Black and red are for ultra and low settings.

[Graphs: cfschemes.png, cfsch_4x.png — relative PE against size; orange: present calculation, green: medium-settings definition, blue: high settings, black: ultra, red: low.]

In summary, you are right (of course); it is not a mistake if you formulate the definition of PE as in the first paragraph; it would be if you used the definition of the second paragraph, which seems more natural to me. Of course, if you have based the triangle budget on performance at default settings, then if you adopted a different calculation the difference would probably have to be compensated for by a change in the budget. Then the only really effective difference would be the size at which the cost stops increasing.

I don't mean to suggest you need to change the calculation. There are assumptions embedded in the maths that are quite crude approximations, notably the assumption of uniform and independent spatial distributions of avatars and objects, and in only two dimensions at that. The effects of these approximations are very likely greater than the differences between these two versions of the mathematics. I am just being needlessly pedantic. I will try to restrain myself.

* Which is why I like it, of course.

ETA: I should have said, the graph is for an object with exactly four-fold LOD data size reductions at each step.

ETA: Whoops, that was the two-fold graph. Now the 4-fold one is next to it. No difference in conclusions, except that the medium settings are now closer to the existing calculation. So the existing one is giving stronger LOD discrimination.

Also: the graphs all have almost the same shape, except that the x axis is stretched by d/f and the y axis by d^2, where d = draw distance and f = renderVolumeLODFactor. They aren't drawn that way; it's just the effect of the maths.
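In other words (restating the scaling just described, with $g$ an assumed common shape function):

$$\mathrm{PE}_{d,f}(s) \propto d^2 \, g\!\left(\frac{s f}{d}\right),$$

where $s$ is the object size, $d$ the draw distance and $f$ the renderVolumeLODFactor.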


Here are two formal definitions, which I hope clarify the differences. A is the existing calculation.

A) If N copies of an object uniformly distributed in a region cause 250,000 triangles to be rendered by a camera at the center of the region, with a draw distance set to encompass the whole region and renderVolumeLODFactor equal to 1.0, then the prim equivalence of that object shall be 15000/N, where the region is deemed to have a capacity of 15000 prims.

B) If N copies of an object uniformly distributed over an area including that (draw_area) in which the object is visible to a camera with default medium graphics settings*, cause 250,000 triangles to be rendered by that camera, then the prim equivalence of that object shall be 15000*(draw_area/region_area)/N, where region_area, the area of a region, is deemed to have a capacity of 15000 prims.

* that is, a draw distance of 96m and renderVolumeLODFactor of 1.125

Note that a camera with default medium graphics settings, as in (B), will generally have to render fewer triangles than one with the settings in (A).
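As formulas, with N as defined in each case:

$$\mathrm{PE}_A = \frac{15000}{N}, \qquad \mathrm{PE}_B = \frac{15000}{N}\cdot\frac{\mathrm{draw\_area}}{\mathrm{region\_area}}, \qquad \mathrm{draw\_area} = \pi\cdot 96^2 \approx 28953\ \mathrm{sqm}.$$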


I believe the budget test is for graphics setting "High" with the Mesh Detail 'Objects' slider set to Mid, which results in a 128m DD and 1.0 renderVolumeLODFactor. I TP'd into our sim at coordinates 128/128 with this graphics setting, gave it some time to load, did a few 360s and opened the Scene Statistics: ~480k triangles. And hey, it's a Homestead. :smileyvery-happy: And half of the sim's sqm is just plain water. I got 40 to 60 fps with a GTS 250, 4x AA, no shadows.


"Which results in 128m DD and 1.0 rendervolumeLODFactor."

It has to be DD=181 to fit the current calculations, and then it has to be on an isolated region* so that you don't get any triangles from neighbouring regions. If it were for the settings you give, it would be like the blue graph, but stretched 9/8 along the horizontal axis... actually not very different from the existing calculation, just with different inflection points and a bit cheaper for the largest meshes.

But I think the point then would be: why limit what high-end machines are allowed to see by insisting that low-end users must be able to use high settings? What are the medium and low settings for then?

* Or neighbouring regions are empty... same effect.


I think Runitai said that the scene statistics display only counts objects inside a sim.



Drongle McMahon wrote:


But I think the point then would be: why limit what high-end machines are allowed to see by insisting that low-end users must be able to use high settings? What are the medium and low settings for then?

I dunno.

