
Download weight and size. Some graphs.


Drongle McMahon


Download weight, rather than physics weight or server weight, will be the PE for most complex meshes with simple physics shapes. This depends on the size of the data download required for each level of detail (LOD), but is also very dependent on the size of the object. This thread is to describe that size-dependent variation.

Size is represented by the "radius" of the object. That is the distance from the center to a corner of the x,y,z bounding box. You can calculate the radius from the dimensions on the modifiers tab of the upload dialog, or from the dimensions in the edit dialog: it is sqrt(x*x + y*y + z*z)/2. For a perfect cube, a radius of 1 means a dimension of 1.155, as shown in the first picture. For a flat square it is 1.414, and for a thin rod it is 2.0.
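For anyone who wants to check these numbers, here is a minimal sketch in Python (not the R used to draw the graphs); the 1.155, 1.414 and 2.0 factors are just the ones quoted above:

```python
import math

def radius(x, y, z):
    """Half the diagonal of the x,y,z bounding box: sqrt(x*x + y*y + z*z)/2."""
    return math.sqrt(x*x + y*y + z*z) / 2

print(radius(1.155, 1.155, 1.155))  # perfect cube of side 1.155 -> radius ~1
print(radius(1.414, 1.414, 0.0))    # flat 1.414 x 1.414 square -> radius ~1
print(radius(2.0, 0.0, 0.0))        # thin rod of length 2 -> radius ~1
```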

pe_fig4.png


The red line in the second figure shows the general way the download cost (vertical axis) increases with increasing radius (horizontal axis). It has four segments which always have the same spans on the radius axis, so that there are sharp "inflection" points always at radius = 5.43, 10.86 and 43.44 meters. The first three are parabolic segments in which the weight increases faster and faster with increasing radius. The last segment is a flat plateau, for radii greater than 43.44 m. The lowest and highest weights and the inflection points are related to the sizes of the download data for each LOD, as shown on the right.
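The three inflection radii can be recovered from the constant Q ≈ 0.000530 given in the maths post further down; here is a quick Python check, using the r*r = 1/(Q*k) form derived there:

```python
import math

Q = 0.000530  # pi/(max_area*0.24*0.24), from the maths post further down

# Each LOD stops contributing where its clamped area term saturates,
# at r*r = 1/(Q*k) with k = 64, 16 and 1:
r1 = math.sqrt(1 / (Q * 64))  # lowest LOD becomes irrelevant
r2 = math.sqrt(1 / (Q * 16))  # LOD 2 becomes irrelevant
r3 = math.sqrt(1 / Q)         # plateau: only the highest LOD matters
print(r1, r2, r3)  # ~5.43, ~10.86, ~43.44
```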

pe_fig2.png

The exact shape of the graph depends on the ratio of the LOD data sizes. Multiplying all of them by the same number, as the mesh gets more complex, just stretches the vertical axis. So this and the following graphs are "normalized" so that the weight of the plateau is 100%. The "100" thus means 100% of the plateau level*. This plateau level is the weight of a mesh that has the same mesh in all four LOD slots, and it does not vary with size.

The inflection points for a perfect cube with side d are roughly d = 6.25, 12.5 and 50. For a flat square, they are about d = 7.5, 15 and 60, and for a rod, they are about d = 11, 22 and 88.*

The next post will describe the effects of different choices in providing LOD meshes.

* In practice, the plateau cannot be reached for a rod-like mesh, where the length (max 64) = 2 x radius (max 32).

Note: corrections and suggestions for improvement are welcome.


A digression, because I think I have found a particularly useful graph for optimising download weights. Here it is....


pe_fig7a.png

What it shows is the change in download weight for adding or removing an equal amount of data from each LOD mesh, which is nearly the same thing as adding or removing a triangle from each LOD, and how that depends on the size of the object. Thus the height of each curve shows how effective geometry reduction will be when applied to the different LOD meshes. That is not comparing proportionate reductions, but removal of the same number of triangles from each mesh. The dashed vertical lines are the inflection points where each LOD becomes irrelevant (previous post). Between each pair of these there is a crossover point at which triangle removal from the higher LOD becomes more effective than from the lower LOD. These are marked with the vertical dotted lines. The advantage of this graph is that it does not change at all when you change the LOD meshes. So it always applies, whatever your LOD steps are.
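The crossover radii can be estimated by equating the marginal weights of adjacent LODs (the partial derivatives in the maths post further down). This little Python sketch is my own back-of-envelope version, not taken from the viewer code:

```python
import math

Q = 0.000530  # scaling constant from the maths post further down

# Equate the marginal weights of adjacent LODs within each segment:
r_dc = math.sqrt(1 / (112 * Q))  # 1 - 64*Q*r^2 = 48*Q*r^2 (lowest vs LOD 2)
r_cb = math.sqrt(1 / (31 * Q))   # 1 - 16*Q*r^2 = 15*Q*r^2 (LOD 2 vs LOD 1)
r_ba = math.sqrt(1 / (2 * Q))    # 1 - Q*r^2 = Q*r^2 (LOD 1 vs highest)
print(r_dc, r_cb, r_ba)  # roughly 4.1, 7.8 and 30.7
```

Each crossover falls between the corresponding pair of inflection points, as the graph shows.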

Another way of looking at this is to show the actual proportion of the download weight accounted for by each LOD mesh. This is less useful, as it does change with changes to the LOD mesh complexities. The second picture shows the percentages of weight accounted for by each LOD for some models with the relative LOD data sizes (~ vertex counts) shown in the square brackets.


pe_fig8a.png


Back on track, although maybe not so useful, some graphs that show the effects of different departures from constant LOD complexity ratios. In all of these, the left two graphs are based on a model with four-fold LOD reductions, and the right on two-fold reductions. The full range of sizes is at the top while magnified views of the smaller sizes are below. Numbers in brackets like [a,b,c,d] specify the relative data sizes (~vertices) at each decreasing LOD.

 1. Effect of omitting lower LODs. In the first picture, the red curve shows the download weight for a model with all four LOD meshes specified [64,16,4,1]. It is hidden under the blue and green curves for radius greater than 5.43. The blue curve is for a model with the lowest LOD omitted or identical to the LOD 2 mesh [64,16,4,4], and the green curve is for the model with only one LOD step [64,16,16,16]. The case where there is no LOD reduction is not shown, because it gives the constant value of 100 that does not change with radius.

 pe_fig1.png

The first thing to notice is that as the radius reaches each inflection value, the curves become identical. Thus for r > 5.43, the lowest LOD no longer makes any difference to the download weight. For r > 10.86, LOD 2 is irrelevant, and for r > 43.44, only the highest LOD makes any difference. As the radius increases towards the inflection points, the weight reduction for the lowest relevant LOD becomes less, and the next LOD becomes more important. Thus at r = 1 the lowest LOD effectively determines the weight, while at r = 5 it is almost irrelevant.
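The same comparison can be reproduced numerically with a small Python version of the weight formula from the maths post further down (a sketch only, with min_area taken as 0 and no normalization, so the plateau is f*a rather than 100):

```python
import math

Q = 0.000530  # constant from the maths post (min_area taken as 0)

def weight(a, b, c, d, r, f=1.0):
    """Piecewise download weight for LOD data sizes a (high) .. d (lowest)."""
    if r <= math.sqrt(1 / (Q * 64)):   # r <= ~5.43
        return f * (Q*r*r*(a + 15*b + 48*c - 64*d) + d)
    if r <= math.sqrt(1 / (Q * 16)):   # r <= ~10.86
        return f * (Q*r*r*(a + 15*b - 16*c) + c)
    if r <= math.sqrt(1 / Q):          # r <= ~43.44
        return f * (Q*r*r*(a - b) + b)
    return f * a                       # plateau

# Above r = 5.43 the lowest LOD makes no difference:
print(weight(64, 16, 4, 1, 6.0), weight(64, 16, 4, 4, 6.0))  # equal
# At r = 1 the lowest LOD dominates the weight:
print(weight(64, 16, 4, 1, 1.0), weight(64, 16, 4, 4, 1.0))
```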

 

2. Delaying LOD steps. Here, the same basic models (red) are compared with those where the LOD reduction is delayed for one ([64,64,4,1], blue) or two ([64,64,64,1], green) steps.

 pe_fig5.png

This is an expensive strategy for any but the very smallest meshes. The weight rises much more rapidly to the maximum plateau level. This shows that it is nearly always important to have effective LOD reductions at all LOD steps.

 

3. Accelerated LOD reduction. Here the red curves are again the same basic models, but this time the lowest LOD is used in the two lowest LOD slots ([64,16,1,1], blue), or in the three lowest ([64,1,1,1], green).

 pe_fig6.png

The difference between the red and blue curves is small, except over a small size range. The green curve, in contrast, shows a much lower weight for a substantial range of sizes. So using the lowest LOD in the two lower slots is not very useful, but dropping the LOD vertex count as low as possible at the first LOD step is very effective in reducing weight even for meshes up to 30 m radius.

That's enough for now, unless anyone wants something else.


In case anyone is interested in the maths: The graphs are all made using a function that works exactly the same as the C++ function shown in the wiki. It is implemented in R, which is mainly used for statistics and bioinformatics, but also provides effective graph-drawing functions. In the wiki function there is min_area = 1. If this is set instead to 0, it makes only imperceptible differences to the weights, but greatly simplifies the mathematical description of the function. With this change, if we call the weight "w" and the data sizes of the LOD meshes "a" (high), "b", "c" and "d" (lowest), then the weights can be written as....

for:  0.00 <= r <=  5.43 :  w = f*{ Q*r*r*(a + 15*b + 48*c - 64*d) + d }
for:  5.43 <= r <= 10.86 :  w = f*{ Q*r*r*(a + 15*b - 16*c) + c }
for: 10.86 <= r <= 43.44 :  w = f*{ Q*r*r*(a - b) + b }
for: 43.44 <= r <= 64.00 :  w = f*a

or, revealing the role of the differences between LODs....

for:  0.00 <= r <=  5.43 :  w = f*{ Q*r*r*( (a - b) + 16*(b - c) + 64*(c - d) ) + d }
for:  5.43 <= r <= 10.86 :  w = f*{ Q*r*r*( (a - b) + 16*(b - c) ) + c }
for: 10.86 <= r <= 43.44 :  w = f*{ Q*r*r*( (a - b) ) + b }
for: 43.44 <= r <= 64.00 :  w = f*a

...where Q = pi/(m*0.24*0.24) and m = max_area; Q = 0.000530
...and f is a scaling factor
...(range constants rounded to two places)
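The two sets of equations are algebraically identical, since (a - b) + 16*(b - c) + 64*(c - d) = a + 15*b + 48*c - 64*d. A quick Python check of the lowest segment, as a sketch:

```python
Q = 0.000530  # as above

def w_first(a, b, c, d, r, f=1.0):
    # first form of the lowest-segment equation
    return f * (Q*r*r*(a + 15*b + 48*c - 64*d) + d)

def w_diff(a, b, c, d, r, f=1.0):
    # same equation written in terms of the LOD differences
    return f * (Q*r*r*((a - b) + 16*(b - c) + 64*(c - d)) + d)

print(w_first(64, 16, 4, 1, 3.0), w_diff(64, 16, 4, 1, 3.0))  # identical
```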

From the first equations, it can be seen that the partial derivatives for each of a, b, c and d are simple quadratic functions of r, or zero, and do not depend on any of a, b, c or d.

for:  0.00<=r<= 5.43 : dw/da = f*Q*r*r; dw/db = 15*f*Q*r*r; dw/dc = 48*f*Q*r*r; dw/dd = f*(1-64*Q*r*r)
for:  5.43<=r<=10.86 : dw/da = f*Q*r*r; dw/db = 15*f*Q*r*r; dw/dc = f*(1-16*Q*r*r); dw/dd = 0
for: 10.86<=r<=43.44 : dw/da = f*Q*r*r; dw/db = f*(1-Q*r*r); dw/dc = 0; dw/dd = 0
for: 43.44<=r<=64.00 : dw/da = f; dw/db = 0; dw/dc = 0; dw/dd = 0

This explains why the segments of the curves in the change/triangle graph are parabolic and do not change with changes to the data sizes of the LOD meshes.

When the bracketed term in the last non-zero equation in each line is set equal to zero, the value of r at each discontinuity can be calculated, for example from r*r = 1/(Q*64). This is equivalent to hitting the upper bound in statements like...

high_area = llclamp(high_area, min_area, max_area);


