Size matters more than expected


animats



I'm over on the beta grid, uploading escalators in different sizes. They're all the same, except that the center part has been stretched in Blender.

[Image: upload comparison of the four escalator frames]

7m, 3m, 30m, and 60m escalator frames. The dimension is floor to floor.

Height   Land impact
3 m      10
7 m      16
30 m     116
60 m     205

These all have exactly the same number of triangles: 5040 at high LOD, 92 at medium, low, and lowest LOD. This is a linkset of two objects, uploaded together. The LI for the 30m and 60m escalator frames is enormous.

[Image: upload cost estimate for the 60 m escalator]

Here's the upload estimate for the big 60m one. (No, I don't really need a 60m escalator, that's just for testing.) The cost is mostly "Download", which is strange, because the mesh data isn't any larger just because the object is bigger.

The big 60m upload was very strange. The escalator frame became thinner. It's normally 1m, but somehow it shrank in width during upload. The wood cube is 1m^3, for a size reference.

So what's the deal with download weight going up this fast?


2 hours ago, animats said:

So what's the deal with download weight going up this fast?

The significance given to each LoD model in the download weight calculation depends on how much area it "covers" (within a c. 184 m radius, or something like that) with the LoD factor set to 1.

For example, for an object of 1x1x1 m, the theoretical swap distances are 2.88, 11.55 and 23.1 m respectively. That means the high model is supposed to cover 26.2 m2, the mid 392.7 m2, the low 1256.64 m2 and the lowest 104,686 m2. In other words, the lowest LoD model covers about 4,000 times as much area as the highest, so it's given 4,000 times as much significance in the download weight calculation.

However, if we increase the size to 10x10x10 m, the theoretical swap distances become 28.9, 115.5 and 231 m. At that size the high LoD model covers 2618 m2 while the lowest isn't included in the weight calculation at all. At max size (64x64x64 m) even the mid model is eliminated from the weight calculation.

This is a real pain since you can't really use the LoD system effectively for meshes smaller than about 2 m or bigger than about 5 m, and I strongly believe LL's failure to provide an adequate LoD system for meshes and sculpts is one of the biggest problems we have in SL today. But we have to make the most out of what we've got.
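To make the arithmetic concrete, here is a rough Python sketch of the principle (an illustration only, not viewer code; the 0.24/0.06/0.03 ratios and the ~181 m cap are taken from later in this thread, and the radius values passed in are arbitrary examples rather than the viewer's exact radius convention):

import math

MAX_RADIUS = 181.0  # approximate cap on the coverage circle discussed in this thread

def circle(r):
    # Area of a circle, clamped at the maximum coverage radius.
    return math.pi * min(r, MAX_RADIUS) ** 2

def coverage_areas(radius):
    d_mid = radius / 0.24     # high -> mid swap distance
    d_low = radius / 0.06     # mid -> low swap distance
    d_lowest = radius / 0.03  # low -> lowest swap distance
    return {
        "high": circle(d_mid),                           # 0 .. d_mid
        "mid": circle(d_low) - circle(d_mid),            # d_mid .. d_low
        "low": circle(d_lowest) - circle(d_low),         # d_low .. d_lowest
        "lowest": circle(MAX_RADIUS) - circle(d_lowest), # d_lowest .. cap
    }

print(coverage_areas(0.7))   # small object: the lowest LoD covers by far the largest area
print(coverage_areas(10.0))  # larger object: the lowest LoD's swap distance exceeds the cap, so it adds nothing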

 


Yes, this is fundamental to the streaming cost algorithm. The concept is derived from the old days when the server had to stream these: a larger build is seen by more people and thus will be downloaded more often. Combined with the LOD setting, the dominance of a given LOD in the weighting changes, and thus the optimisation required alters.

Consider an example I have used in the past. A house is a large object: you can see it from afar, but you cannot see the window latches and door knockers. As such it makes no sense for them to be included in the same object, because they should LOD independently. This scale-related algorithm addresses this.

My blog post, linked below, explains the algorithm. My forthcoming addon will eventually have an accurate cost predictor too.

http://beqsother.blogspot.com/2016/07/the-truth-about-mesh-streaming.html

 


1 hour ago, Beq Janus said:

My blog post, linked below, explains the algorithm. My forthcoming addon will eventually have an accurate cost predictor too.

http://beqsother.blogspot.com/2016/07/the-truth-about-mesh-streaming.html

I didn't examine your blog post in detail, so I may misunderstand you, but it seems to me you stipulate a 512 m max view distance:

def getLODRadii(object):
    # Distances at which each lower LOD model takes over, derived from the
    # object's bounding radius and clamped to a 512 m maximum view distance.
    max_distance = 512.0
    radius = get_radius_of_object(object)  # bounding-sphere radius (defined elsewhere)
    dlowest = min(radius / 0.03, max_distance)  # low -> lowest swap distance
    dlow = min(radius / 0.06, max_distance)     # mid -> low swap distance
    dmid = min(radius / 0.24, max_distance)     # high -> mid swap distance
    return (radius, dmid, dlow, dlowest)

If so, that is not correct. It's supposed to be 181 m but for some reason it's set a little bit higher, not more than a few meters though.

 

1 hour ago, Beq Janus said:

Yes, this is fundamental to the streaming cost algorithm. The concept is derived from the old days when the server had to stream these: a larger build is seen by more people and thus will be downloaded more often. Combined with the LOD setting, the dominance of a given LOD in the weighting changes, and thus the optimisation required alters.

Would you say streaming cost is irrelevant with CDN? I know AWS charges by bandwidth use and I'd be very surprised if Akamai doesn't too.

(For those who don't speak Geekish:

  • CDN: Content Delivery Network. Rather than have everything stored on their own servers, LL has much of it stored at several locations (20 last time I checked) across the world.
  • Akamai: The world's biggest CDN provider and the one LL currently uses.
  • AWS: The company that actually owns the servers; Akamai only rents space on them. (At least they're supposed to own the servers. The way the Cloud works is that everybody rents from everybody else, and if we follow the tracks all the way to the root, the entire Cloud is actually hosted on an old Windows XP computer with a dial-up connection in Oblivion, North Dakota.)

11 hours ago, ChinRey said:

I didn't examine your blog post in detail, so I may misunderstand you, but it seems to me you stipulate a 512 m max view distance:


def getLODRadii(object):
    # Distances at which each lower LOD model takes over, derived from the
    # object's bounding radius and clamped to a 512 m maximum view distance.
    max_distance = 512.0
    radius = get_radius_of_object(object)  # bounding-sphere radius (defined elsewhere)
    dlowest = min(radius / 0.03, max_distance)  # low -> lowest swap distance
    dlow = min(radius / 0.06, max_distance)     # mid -> low swap distance
    dmid = min(radius / 0.24, max_distance)     # high -> mid swap distance
    return (radius, dmid, dlow, dlowest)

If so, that is not correct. It's supposed to be 181 m but for some reason it's set a little bit higher, not more than a few meters though.

 

Would you say streaming cost is irrelevant with CDN? I know AWS charges by bandwidth use and I'd be very surprised if Akamai doesn't too.

(For those who don't speak Geekish:

  • CDN: Content Delivery Network. Rather than have everything stored on their own servers, LL has much of it stored at several locations (20 last time I checked) across the world.
  • Akamai: The world's biggest CDN provider and the one LL currently uses.
  • AWS: The company that actually owns the servers; Akamai only rents space on them. (At least they're supposed to own the servers. The way the Cloud works is that everybody rents from everybody else, and if we follow the tracks all the way to the root, the entire Cloud is actually hosted on an old Windows XP computer with a dial-up connection in Oblivion, North Dakota.)

For the 512... the code doesn't lie, even if it is a bug. I double checked, cos when I wrote that I was not a viewer dev and I suspect I used the code referenced on the wiki (and we all know how reliable that can be). However, it is still there and the same value is still in use. What is more, the calculation method that was used back then (getStreamingCost()) is now known as getStreamingCostLegacy(), because Vir Linden refactored it to introduce the new animesh calculation option. The refactored version double checks that it is consistent with the old version, and we don't get those errors, so we can assume it is using the same numbers. If we trace through the new code we find that a new method (getRadiusWeightedTris()) has been introduced, and it also uses 512.

I suspect that the 181 (ish) value you are thinking of is the radius of a circle encompassing the region (square root of 32768 = 181.0193). The area is used as a clamp on the LOD distance when calculating the visible LOD.

Would I say the streaming cost was irrelevant with CDN? Purely my own views here of course, and I think the answer is not plainly yes or no. It is certainly less directly relevant, in the sense that it is amortised over the total number of views for that asset, whether it be present on 1 region or 1,000; it is no longer trying to discourage heavy downloads because people have slow networks and the poor server has to send it out. It is far more complicated now: a popular mesh item will get downloaded more than a one-off no-copy item that sits on a private estate. On the other hand, if we consider the field of view of an avatar, at 120 m that covers a lot more objects than at 5 m, so you could argue that you'd like to encourage the models at 120 m to be less complex, and larger objects are more likely to fall into that category. This is deep into conjecture and these arguments can be pushed one way or the other quite easily, but if we were asking "Do I think it makes sense that scale affects the 'mesh cost'?" (avoiding the streaming connotation for a moment) then yes, I do. "Is the breakdown right for modern SL?" Probably not; that is in large part what I want to see from ArcTan, and I would argue strongly that the cost of tris in the lowest/impostor LOD was always far, far too high.

 

43 minutes ago, ChinRey said:

That's lovely. We really should compile a list of all these old informative posts here so they wouldn't get so easily lost.

I agree. In all the blogs I wrote back in 2014-2016 or whenever it was, Drongle's graphs and stuff were my go-to. You have to be careful at times as things do change a little, but they remain correct for the most part. An excellent resource; I've long been a @Drongle McMahon fan.

 


34 minutes ago, Beq Janus said:

I suspect that the 181 (ish) value you are thinking of is the radius of a circle encompassing the region (square root of 32768 = 181.0193).

That was @arton Rotaru 's explanation, not mine. I don't know the reasoning behind whatever cutoff value LL chose, but I never doubted Arton because it fits the observable facts. Drongle's graphs are also fairly close to this, and I checked right before I wrote this post just to make sure there haven't been any recent changes. A 512 m cutoff distance isn't even close to matching any of our tests or explanations, though, not even if we take into account a certain former Linden developer's problems understanding the difference between radius and diameter. So apparently we have a piece of software that uses a different value for a constant than what is actually written into its code. Am I the only one who finds that a teeny-weeny bit weird?

 


 

6 minutes ago, ChinRey said:

That was @arton Rotaru 's explanation, not mine. I don't know the reasoning behind whatever cutoff value LL chose, but I never doubted Arton because it fits the observable facts.

I think you are confusing two different things, or I have lost track of what this conversation is about. If we are still talking about how the viewer (and the server) determine the streaming cost part of the land impact then the 181 value has no part to play.

The 512 is used in determining the so-called streaming cost of the LI. The 181 clamp has nothing to do with the LI, it has a part to play in the LOD.

I think I'm going to have to do some tests now before I start doubting my own eyes.

10 minutes ago, ChinRey said:

So apparently we have a piece of software that uses a different value for a constant than what is actually written into its code. Am I the only one who finds that a teeny-weeny bit weird?

Viewers cannot make numbers up. If they did, then your LI would differ from the next person's.


OK so test completed.

What we have here is a mesh lamp of mine. I dropped an LSL script into it that dumps mesh information. It is derived from one by Linda Hellendale, but with some of my own changes.

There are a couple of things to note. Firstly, the script correctly estimates the LI, and it does this using the 512 max_distance. One (minor) mistake in the script is that it uses the incorrect area 102932, which is roughly pi * 181^2, whereas the viewer uses 102944, which is pi * (sqrt(2 * 128^2))^2, i.e. the area of a circle whose radius (~181.02 m) is the hypotenuse of a right triangle with two sides of 128 m, and thus the radius of the circle that would enclose the region. The viewer used to have the incorrect value; I submitted a JIRA to get it fixed a couple of years ago.
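Just to make the geometry explicit, the viewer's figure is simply the area of a circle whose radius is that half-diagonal (a quick sanity check, nothing more):

import math

# Half-diagonal of a 256 m x 256 m region: sqrt(128^2 + 128^2) ~ 181.019 m.
radius = math.sqrt(2 * 128.0 ** 2)
area = math.pi * radius ** 2
print(radius, area)  # ~181.019, ~102944 (the value the viewer uses)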

 

[Image: script output showing the LI estimate for the lamp]


18 hours ago, Beq Janus said:

 

I think you are confusing two different things

We may misunderstand each other but what I'm talking about is at which object sizes each LoD model becomes irrelevant for download weight. The answer to that is that it happens when its stipulated swap distance exceeds a little bit over 180 m. That's what you use in your calculations too and I don't know where the 512 m value comes into the picture at all. So:

  • If the mesh is larger than c. 8 m, you won't save any download weight by reducing the lowest LoD model
  • If the mesh is larger than c. 16 m, you won't save any download weight by reducing the low LoD model
  • Once the mesh gets close to 64 m (you can't go higher than that in SL of course), you won't save any download weight by reducing the mid LoD model.

Here's a simpler test than Beq's, one that anybody can try: make a mesh with identical, relatively high poly high, mid and low models and a very low poly lowest model. Then check the download weight at various sizes. Once the object radius is increased to somewhere between 3.5 and 4.5 m, which equals a stipulated swap distance between 160 and 200 m (you have to expect that much margin of error for a quick test as simple as this), you will see that the download weight no longer increases if you scale it up further. With a more detailed test, you can narrow the margin of error down to somewhere between 181 and 184 m.


On 4/26/2020 at 2:31 AM, ChinRey said:

That was @arton Rotaru 's explanation, not mine. I don't know the reasoning behind whatever cutoff value LL chose, but I never doubted Arton because it fits the observable facts. Drongle's graphs are also fairly close to this, and I checked right before I wrote this post just to make sure there haven't been any recent changes. A 512 m cutoff distance isn't even close to matching any of our tests or explanations, though, not even if we take into account a certain former Linden developer's problems understanding the difference between radius and diameter. So apparently we have a piece of software that uses a different value for a constant than what is actually written into its code. Am I the only one who finds that a teeny-weeny bit weird?

 

I remember that I once asked about that number. So I tried to find that post of mine, and found it in this thread. :SwingingFriends:

https://community.secondlife.com/forums/topic/51368-food-for-thought-warning-long-with-maths/


6 hours ago, arton Rotaru said:

Thanks.

This is off topic but I had a look at some of the other posts too and found this:

  

On 7/3/2011 at 4:08 AM, DanielRavenNest Noe said:

I tried rezzing multiple objects until I had 1.2 million triangles in the scene, with the default low graphics setting I am getting 160 fps.  That is for an Nvidia GTX-260.  So scaling that performance down to whatever you consider a "low" graphics card to target for should give you an acceptable frame rate and triangle count combination.  Unfortunately I don't have a low end card any more to test with.

That was 2011. Judging by the benchmark I found for the GTX-260, you'd probably get 20-30 fps if you redid that test today - 50 at most. Visual quality has improved of course but not that much so it's mostly overhead that has built up over the years.


13 hours ago, ChinRey said:

Thanks.

This is off topic but I had a look at some of the other posts too and found this:

  

That was 2011. Judging by the benchmark I found for the GTX-260, you'd probably get 20-30 fps if you redid that test today - 50 at most. Visual quality has improved of course but not that much so it's mostly overhead that has built up over the years.

Such a hard test. Asking a computer to render 1.2 million copies of the same thing. The processor was probably bored doing it and the graphics card didn't have much to do either. You can't use that as a comparison to something you'd find on a typical region now.


13 hours ago, ChinRey said:

That was 2011. Judging by the benchmark I found for the GTX-260, you'd probably get 20-30 fps if you redid that test today - 50 at most. Visual quality has improved of course but not that much so it's mostly overhead that has built up over the years.

Fortunately it's not very hard to repeat that test these days.

You can get to 1.6 million with just two Legacy bodies and nothing else.


I put up 36 copies of an object that had a mere 33,916 triangles for a total of a little over 1.2 million. I got 320 fps with my GTX1060. Yeah, it's way better than the 260 but I've had that card for 4 years. It's not the latest & greatest by far. I'd try the 260, but I think I sent it to electronics recycling a couple years ago.


3 hours ago, Parhelion Palou said:

Such a hard test. Asking a computer to render 1.2 million copies of the same thing. The processor was probably bored doing it and the graphics card didn't have much to do either. You can't use that as a comparison to something you'd find on a typical region now.

That's true but it is a bit beside my point. I don't think you can get that performance even in a completely empty region these days.

I don't know, though, since I don't have a GTX-260 to test. What I did was compare benchmark tests of it with ones for GPUs I do know, and that isn't reliable enough to draw firm conclusions of course. It would be really useful if we had a set of tests over the years using exactly the same hardware to render exactly the same scene. I'm sure LL, as a serious software developer, does something like this or something else to keep track of how software updates affect performance (ummmm.... errr... they are and they do, right?) but they're not telling us the results.


40 minutes ago, ChinRey said:

That's true but it is a bit beside my point. I don't think you can get that performance even in a completely empty region these days.

I don't know, though, since I don't have a GTX-260 to test. What I did was compare benchmark tests of it with ones for GPUs I do know, and that isn't reliable enough to draw firm conclusions of course. It would be really useful if we had a set of tests over the years using exactly the same hardware to render exactly the same scene. I'm sure LL, as a serious software developer, does something like this or something else to keep track of how software updates affect performance (ummmm.... errr... they are and they do, right?) but they're not telling us the results.

OK, here's a test:

My usual settings: Firestorm Ultra with draw distance 304m, shadows on everything
Low setting: Firestorm Low (draw distance 64m, no ALM, pretty much nothing interesting)

Test Location 1
Note: There's an avatar at the far left in the scene, but definitely not wearing anything mesh. I'm in the scene, so there is a mesh avatar, but a Dinkie is pretty efficient.

[Image: framerate test, location 1]

Usual: 74 fps
Low: 109 fps

Test Location 2

[Image: framerate test, location 2]

Usual: 124 fps
Low: 270 fps

The second location is at 3300 meters with content I control, so it's not terribly far off from an empty region.


2 minutes ago, Parhelion Palou said:

OK, here's a test:

That's interesting, but is that with the same GPU as the 2011 test? That's the real question here: how the software performs with the same hardware. We already know that GPUs are more powerful today than they were ten years ago. The GTX-260 may have been a top-range gaming GPU back then; today it's lower-midrange home computer level.


Yah, but the graphics card isn't that important anyway. Why aren't we comparing processors, ISP speeds, disk access times (loving my SSDs)?

I get better framerates now with infinitely better scenery than I did in 2006 though my current computer cost less than the old one. I'm not going to insist that SL be able to run on a 2006 computer. I'll let someone else tilt at that windmill.


Once the assets are loaded into RAM, ISP and disk access times are largely irrelevant, so you don't need to manage those as controls except to keep the test scene from changing during the run. What seems to make the biggest difference from one machine to another is easily seen in a software profiler. SL uses the CPU, then the GPU, then the CPU, then the GPU, etc. Not both at the same time.

Imagine a lone worker carrying a product back and forth between two workshops to use the tools in each workshop to complete the project. SL rendering is damn near that: a single render process hauling ass back and forth between the CPU and the GPU, doing work synchronously. There is some help in the form of other processes for texture fetching and decompression and some other I/O, but they are not the ones cutting a groove in the floor. SL content makes this poor worker run back and forth many times to render a single frame. If any of the tasks in either 'workshop' can be sped up, the end result is a slight increase in the framerate (the number of products produced per second).

If the work done in the CPU workshop takes too long, the GPU manager slows the GPU clock down to save energy due to idleness, and we also see the GPU "utilization" statistic decrease. Conversely, if the work done in the GPU workshop takes too long, we see the CPU "utilization" statistic decrease. It's like a weird street race between the workshops: accelerate hard on the straights and slam on the brakes before the corners. Or like city traffic when the traffic control lights are not pipelined. Hey, somebody fix this please! @Rider Linden  @Ptolemy Linden  @Euclid Linden
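A toy back-of-the-envelope sketch of the analogy (made-up numbers, nothing measured): when the CPU and GPU stages run strictly one after the other, the frame time is the sum of the two, whereas overlapping them would be limited only by the slower stage.

# Hypothetical per-frame costs, purely to illustrate the "lone worker" analogy.
cpu_ms = 12.0  # CPU-side work per frame
gpu_ms = 8.0   # GPU-side work per frame

serial_frame = cpu_ms + gpu_ms          # each side waits for the other
overlapped_frame = max(cpu_ms, gpu_ms)  # stages working on different frames at once

print(1000 / serial_frame)      # ~50 fps when the work is strictly serialized
print(1000 / overlapped_frame)  # ~83 fps if the stages could overlap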


 

On 4/26/2020 at 2:59 AM, ChinRey said:

We may misunderstand each other but what I'm talking about is at which object sizes each LoD model becomes irrelevant for download weight. The answer to that is that it happens when its stipulated swap distance exceeds a little bit over 180 m. That's what you use in your calculations too and I don't know where the 512 m value comes into the picture at all. So:

  • If the mesh is larger than c. 8 m, you won't save any download weight by reducing the lowest LoD model
  • If the mesh is larger than c. 16 m, you won't save any download weight by reducing the low LoD model
  • Once the mesh gets close to 64 m (you can't go higher than that in SL of course), you won't save any download weight by reducing the mid LoD model.

Here's a simpler test than Beq's, one that anybody can try: make a mesh with identical, relatively high poly high, mid and low models and a very low poly lowest model. Then check the download weight at various sizes. Once the object radius is increased to somewhere between 3.5 and 4.5 m, which equals a stipulated swap distance between 160 and 200 m (you have to expect that much margin of error for a quick test as simple as this), you will see that the download weight no longer increases if you scale it up further. With a more detailed test, you can narrow the margin of error down to somewhere between 181 and 184 m.

As per the thread from @arton Rotaru, the 181 is the radius of a circle surrounding the region, as I said. That is the input into the weight calc. The 512 is a clamp on the distance at which a model will swap (assuming no LOD factor multiplier). The 512 is correct and used in all viewers; the ~181 is correct as a derived number but is not referenced directly anywhere, because the code only cares about the area of that circle (which is the value I showed).

The 512 is used when determining the distance at which a swap occurs based on the radius of the object. Thus the LOWEST LOD will swap at 512 m for any object with radius > ~15.3 m (15.3/0.03 = 510). By the same logic, the LOW LOD will switch in at R/0.06 until that value is > 512; for this to be the case R would have to be > 30.72 m. Medium will NEVER hit the cap, not in SL with normal mesh, as 512 * 0.24 gives us 122.88, so a mesh would have to be of radius ~123 m before that LOD swap would exceed the upper limit.

You can test this easily for yourself. Get a mesh, use Firestorm and edit the item; on the object tab you will see the table of LOD swaps. Now, as you scale the object up, you'll see that they cap at 576 for a LOD factor of 1.125, and in my example, where I have my factor at 1, they cap at 512.

[Image: Firestorm edit window showing the LOD swap distance table]
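For anyone who wants to check the numbers, here is a small sketch of the clamp described above (assuming, as the caps Beq reports suggest, that the LOD factor is applied after the 512 m clamp, which is why Firestorm shows 576 at factor 1.125):

def swap_distances(radius, lod_factor=1.0, max_distance=512.0):
    # Distance at which each lower LOD model takes over, clamped at 512 m,
    # then scaled by the viewer's LOD factor (RenderVolumeLODFactor).
    return {
        "mid": min(radius / 0.24, max_distance) * lod_factor,
        "low": min(radius / 0.06, max_distance) * lod_factor,
        "lowest": min(radius / 0.03, max_distance) * lod_factor,
    }

print(swap_distances(16.0))                    # lowest is already capped: 16 / 0.03 > 512
print(swap_distances(16.0, lod_factor=1.125))  # the same cap shows up as 576 in Firestorm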

 

The area of a circle (which is where your 181 comes into play) comes in later, when we are determining the triangle weight. This uses the LOD switch distances as per the above, with a cap clamped at the area of a circle of radius 181 m. This is in my blog but the value is not shown explicitly, as it is stored in a setting. The MaxArea (and MinArea for that matter) ensure that we clamp our values into "sensible" ranges.

def getWeights(object):
    # Swap distances for this object, clamped at 512 m (see getLODRadii above).
    (radius, LODSwitchMed, LODSwitchLow, LODSwitchLowest) = getLODRadii(object)

    # MaxArea is the area of the circle enclosing a region (the ~181 m radius);
    # MinArea is the lower clamp. Both are stored as addon settings.
    MaxArea = bpy.context.scene.sl_lod.MaxArea
    MinArea = bpy.context.scene.sl_lod.MinArea

    # Area "covered" out to each swap distance, clamped into a sensible range.
    highArea = clamp(area_of_circle(LODSwitchMed), MinArea, MaxArea)
    midArea = clamp(area_of_circle(LODSwitchLow), MinArea, MaxArea)
    lowArea = clamp(area_of_circle(LODSwitchLowest), MinArea, MaxArea)
    lowestArea = MaxArea  # the lowest LOD stays visible all the way out to the cap
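Purely as a hedged guess at the shape of the next step (my wording, not Beq's actual addon code): those clamped circle areas get turned into the ring each LOD covers and used to weight that LOD's triangle count, which is why a heavy lowest LOD hurts small objects so much. The triangle-count inputs and the final scaling constants are placeholders here.

def area_weighted_triangles(tris, highArea, midArea, lowArea, lowestArea):
    # Convert the cumulative circle areas from getWeights() into the ring each
    # LOD actually covers (cf. the annulus figures earlier in the thread).
    lowestArea -= lowArea
    lowArea -= midArea
    midArea -= highArea
    total = highArea + midArea + lowArea + lowestArea
    # Weight each LOD's triangle count by the area over which it is displayed.
    weighted = (tris["high"] * highArea + tris["mid"] * midArea +
                tris["low"] * lowArea + tris["lowest"] * lowestArea)
    return weighted / total  # the streaming cost then scales this by further constants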

 

I hope this helps put the two of these in context. In effect, the visual LOD switch is clamped to 512 but the calculation of triangle cost is clamped at 181, which I think is the root of our talking at cross purposes.

 


Just now, Beq Janus said:

I hope this helps put the two of these in context. In effect, the visual LOD switch is clamped to 512 but the calculation of triangle cost is clamped at 181, which I think is the root of our talking at cross purposes.

Yes. Misunderstanding cleared. :)

