
Too Many 1024 Textures and the NEW (please hurry) Land Impact rules


Chic Aeon

I just wanted to comment that some folks making avatar garb (clothes, shoes, hair etc) are doing an excellent job -- while others are not. 

I needed to rez some shoes for a photo shoot a couple of days ago. Wandering through my inventory I pulled out some bootlets. They were 87 land impact. Well OBVIOUSLY THAT WAS A BAD PLAN.  Another pair of shoes was 1 when rezzed. Some creators have no clue. Others simply don't care and some farm out their mesh work to jobbers and apparently don't give any technical rules on the making of items. 

Consumers now have some tools to let them know what their rezzed and worn items "cost" but most are not going to do much with those tools. I don't inspect everything I am given, except for checking the LODs, as that is number ONE on my personal list.

A few folks are paying attention and changing their ways but as a blogger I see WAY too much stuff that is crazy over the top in both textures and mesh optimization.  

WE NEED SOME GUIDELINES --- AND WE NEED TO KNOW HOW THINGS WORK SO WE CAN OPTIMIZE ACCORDINGLY.

 


1 minute ago, Penny Patton said:

But I think when it does come to the main grid it shouldn't be enforced right away. Give people plenty of time to change their habits.

Patch did say in one thread here that the two calculation methods would run side by side for a long time.


2 minutes ago, Penny Patton said:

No, no! Sorry if I wasn't clear. I think "Avatar Impact" should be a thing. LL has never said anything about it and I don't believe they're considering it. 

But I think when it does come to the main grid it shouldn't be enforced right away. Give people plenty of time to change their habits.

OK on part one. Thanks. On part two, while I would LIKE to agree with you, I can't see that even a minority of folks will make changes by themselves. We have sort of proven that as a community over the last five years or so. I am in the "rip off the band-aid" camp on that one -- but yes, with warning for sure.


ARC & LI have done nothing to encourage responsible content for the most part. They are just meaningless subjective numbers in a sea of meaningless subjective numbers.

ARC places the onus squarely on the consumer. Either don't wear what you want and have purchased, or don't care if other people can see you. Makers of individual avatar accessories have not changed their approach in any way. There is no guidance for shoppers, just arbitrary subjective numbers. A minority of creators have released updates, but these typically only involve removing any transparent prim parts, as those are heavily punished. The actual visible content remains unchanged.

The adult community is actually ARC-averse in its behavior, with market-leading creators making and selling updated products with insanely huge ARC costs. The consumer's choice boils down to "suck it up".

LI costs have led directly to gaming the system to push everything through with as low an LI as possible, forcing the consumer to either not use their purchases or crank their LOD setting to the max and suffer poor performance everywhere. Complain about a product's LOD and you will more often than not get a snotty creator telling you to use Firestorm and set LOD to max or, worse, go into debug settings and set it even higher manually.

 

Limits on textures, or textures being linked to ARC/LI, will result in one outcome: hackery, bad consumer advice and another decade of us moaning about it here.

Consumer demand and market forces are entirely based on aesthetic quality; most consumers wouldn't even know where to start when inspecting something prior to purchase to assess the technical skill of its creator (nor should they be expected to). Meaningless numbers are still meaningless. LI only matters because it's directly related to financial cost, but as we have witnessed in SL since sculpt maps, the answer to consumer demand for low LI is hackery and tinkering with viewer settings.


1 minute ago, Chic Aeon said:

OK on part one. Thanks. On part two, while I would LIKE to agree with you, I can't see that even a minority of folks will make changes by themselves. We have sort of proven that as a community over the last five years or so. I am in the "rip off the band-aid" camp on that one -- but yes, with warning for sure.

Oh nononono. I agree it will have to be enforced, I'm just thinking (and like Callum says, LL is planning to do this) that before the new LI is enforced, it should be displayed for a period of time, letting people know these new calculations will be enforced starting at a set future time. Just to give people time to start changing their habits. Sure, some will wait until the new LI goes into effect, but many content creators will try to stay ahead of things.


2 minutes ago, CoffeeDujour said:

ARC & LI have done nothing to encourage responsible content for the most part. They are just meaningless subjective numbers in a sea of meaningless subjective numbers.

All good points but it would still be foolish to dismiss ARC and LI as concepts entirely.

ARC was totally useless until jelly dolls, and while I agree an enforced ARC limit would have done much more than the jelly doll feature we got, it still convinced many content creators to optimize their ARC costs more. Whether or not that translates into better performance is a matter of how ARC is calculated, but it works well enough as a "rule of thumb" guide, except when creators abuse the loopholes to get a lower ARC cost.

 And that's the problem with the old LI calculations too. But again, that's a matter of LL patching said loopholes, which they are currently preparing to do. And people will always try to game any system you throw at them, but that doesn't mean you should make it easy for them to do so.

Likewise with the VRAM jelly doll feature, I agree it would be better if it were a hard limit enforced by LL, but using the feature gave me an enormous fps boost, and if that pushes even some creators to better optimizing their texture use that's a huge win. You might be able to game ARC and LI, but a lower VRAM cost is a lower VRAM cost no matter how you achieved it.


LL needs to enforce a standard set of object/avatar detail sliders in all viewers and remove the ability of users to manually hack a debug setting higher. Provide photographers with an "insane detail level" option that doesn't persist between parcel changes/teleports etc.

Let products live and die by their visual quality on a level playing field between all clients. 

(As the initial outcome of this will be a 50% reduction in object detail for Firestorm users, the trade-off between hacked low LI and visual quality will become instantly apparent.)

LL provided guidelines and workflows should be written up for creators spelling out the steps expected of quality content in a way that is accessible to everyone. Newbie friendly guides make good reference material for the pros.

As a side note, as prims are a fundamental part of Second Life, they should be given more of a pass.


I agree with most of that. I think we just disagree on what the effect of stop-gap measures that are in the reach of the TPV community would achieve, but we agree they're not ideal ways to deal with the problem.

As an aside on prims, I still think LL should have invested more in improving the in-world building tools. Mesh is great and if you know what you're doing you have total creative freedom, but easy and accessible in-world tools that can be used to create content that can at least stand with the mesh content in terms of visual quality would be a huge deal for SL. When prims were all we had, anyone could jump in and start creating. It was a nice, level playing field and that was a huge part of SL's initial charm and appeal.

I don't think SL needs a full on Blender-esque 3D modeling suite built in, and the in-world tools should be easy and intuitive, but I do think there's a lot of room for potential improvement with the current in-world tools.


1 hour ago, Penny Patton said:

As an aside on prims, I still think LL should have invested more in improving the in-world building tools.

 

Another quote from Avi Bar-Zeev's blog then:

Quote

The most success to date I’ve had in my 20 year dream to obsolete polygons was with Second Life, where I wrote their 3D Prim generation system, still in use today. I wanted to do much more than the simple convolution volumes we ultimately shipped, but it was a good step in the right direction and at least proved the approach viable. However, one doesn’t create technology for its own sake — you always need to do what’s right for the product.

The ultimate vision I was hoping for then was more like what Uformia is now doing — giving us the ability to mash up and blend 3D models with ease.

Avi Bar-Zeev: Death to Poly (2012)

Bar-Zeev is a bit biased of course, prims were his baby after all. But even so, reading the two articles I linked to it's easy to see how nerfed the prim system is. That probably made sense back in the earliest days of SL when computing power was seriously limited but it should have been continuously developed. Offset holes would only require two more bytes of prim property data and shouldn't add much to the computing time. As far as the software is concerned, a prism (with a triangular profile) and a cube (square profile) are just cylinders with low curve resolution; how easy it would have been to extend that to include other polygons too. Then add CSG to the mix...
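To illustrate the profile point with a toy sketch (my own illustration, not SL's actual prim code): a procedural extrusion only needs a vertex count for the profile, so a triangle, a square and a 24-segment "circle" fall out of the same few lines, and extending the prim system to arbitrary polygon profiles would have been almost free.

#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Profile of an n-sided "cylinder": n = 3 is a prism's triangular
// cross-section, n = 4 is a cube's square, n = 24 looks like a circle.
std::vector<Vec2> makeProfile(int n) {
    const float kTwoPi = 6.2831853f;
    std::vector<Vec2> pts;
    for (int k = 0; k < n; ++k) {
        float a = kTwoPi * k / n;
        pts.push_back({ std::cos(a), std::sin(a) });
    }
    return pts;
}

int main() {
    for (int n : { 3, 4, 24 }) {
        std::printf("%d-segment profile has %zu vertices\n", n, makeProfile(n).size());
    }
    return 0;
}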

I know, this is a rant about water under the bridge and it's about geometry, not textures, but there is an important point. I've said it before but I think it should be said again: procedural geometry as a general principle is one of the two most fundamental ideas behind modern high-speed 3D rendering (the other one is PBR). In 2003 Linden Lab was one of the pioneers of the concept and that was one of the main reasons why SL worked as well as it did. The SL software is at its very core specially made to take advantage of procedural geometry. But then, just as others were starting to catch up, LL ... forgot all about it. There is no sign they abandoned procedural geometry on purpose; it was all by accident - a case of collective amnesia.


11 hours ago, Chic Aeon said:

OK. So if I understand this correctly ---- if you have four bushes with the same textures and you LINK them to be one object, then the texture IS reused without a separate call for each bush.  Otherwise not.

Reusing textures even across linksets does reduce the load significantly and is strongly recommended whenever possible. But that's probably only because of more efficient texture caching, not consolidated draw calls.

As for linksets, I always thought that not only each part but even each face was a separate draw call even if the texture is the same. I never really looked into it though, so I don't know.
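For what it's worth, here's a toy sketch (not viewer code) of why texture reuse helps even if every face keeps its own draw call: what the reuse saves is texture state changes and cache misses, and those only pile up when consecutive faces are drawn against different textures.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Face { int textureId; };   // one entry per textured face in a linkset

// Count texture switches the renderer would perform. Draw calls are the
// same either way; sorting by texture just collapses the rebinds.
int countTextureSwitches(std::vector<Face> faces, bool sortByTexture) {
    if (sortByTexture)
        std::sort(faces.begin(), faces.end(),
                  [](const Face& a, const Face& b) { return a.textureId < b.textureId; });
    int switches = 0, bound = -1;
    for (const Face& f : faces)
        if (f.textureId != bound) { ++switches; bound = f.textureId; }
    return switches;
}

int main() {
    // Four bushes, six faces each: five leaf faces share texture 7, one trim uses 9.
    std::vector<Face> faces;
    for (int bush = 0; bush < 4; ++bush)
        for (int face = 0; face < 6; ++face)
            faces.push_back({ face < 5 ? 7 : 9 });
    std::printf("switches, arbitrary order: %d\n", countTextureSwitches(faces, false));
    std::printf("switches, sorted by texture: %d\n", countTextureSwitches(faces, true));
    return 0;
}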

I've always linked as much as possible for practical reasons, to simplify backups and object positioning. My land at Coniston is considerably less laggy than any comparable SL scene I know of but I've done so much else to reduce lag, I can't say if the linking has had any effect there.


On 7/8/2018 at 11:10 PM, Callum Meriman said:

It can be demonstrated as mentioned above, texture a prim at 1024x1024, shrink to a pinhead and duplicate, swing the camera around. Do the same at 512x512 and lower.

I've been looking at the texture code in Firestorm. If a high-rez texture on a small object brings down the frame rate, it may be a bug. The code supports mip mapping, but the policy on when to use it is hard to check. ("usemipmap" is passed down through many layers of calls.) With mip mapping on, and multiple texture levels loaded into the graphics card (a standard OpenGL feature), the graphics card should automatically use a lower-rez texture. I'd suggest duplicating that bug on the beta grid and filing a bug report.
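For reference, this is the stock OpenGL pattern that paragraph alludes to, as a generic sketch rather than the viewer's actual texture pipeline (it assumes a GL 3.0+ context and a loader header such as GLEW or GLAD is already set up):

// Upload decoded RGBA8 pixels and let the driver build the mip chain.
// With a mipmapped minification filter, the GPU samples a small level
// for a pinhead-sized object instead of the full 1024x1024 base image.
GLuint uploadMipmappedTexture(const unsigned char* pixels, int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);                        // build levels 1..n
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

If the viewer skips the glGenerateMipmap step (or its equivalent) for some textures, the GPU has nothing smaller to fall back on, which would match the symptom Callum described.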

The rendering code in the viewer is impressive. Just about every optimization you can think of has been implemented. (Except impostors for non-avatar objects.) But the policies on which ones to use in what order are spread across different modules, not tunable, and lack anything that checks how well they're doing.

By "policy", I mean:

  • Which texture files should be fetched first?
  • What resolution of a texture file should be fetched from the servers? (The JPEG 2000 files are progressive; you can read a small part and get a low-rez version; see the sketch after this list.)
  • What resolution of an already fetched texture file should be sent to the graphics card?
  • What texture already in the graphics card memory should be evicted to make room for a new texture?
  • Should a texture already in the graphics card be reduced in resolution to free texture memory?
  • Should a texture already fetched at low resolution be re-fetched at higher resolution?
  • How many simultaneous HTTP connections should be used for texture download?
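As promised in the second bullet above, a rough sketch of the low-rez fetch idea using the OpenJPEG 2.x API (the file path is made up and error handling is omitted; the real viewer streams bytes from the CDN rather than reading a local file):

#include <openjpeg.h>   // OpenJPEG 2.x

// Decode a JPEG 2000 file at 1/4 of its linear resolution by discarding
// the two finest wavelet resolution levels; only part of the codestream
// is actually needed for this.
opj_image_t* decodeLowRes(const char* path) {
    opj_stream_t* stream = opj_stream_create_default_file_stream(path, OPJ_TRUE);
    opj_codec_t*  codec  = opj_create_decompress(OPJ_CODEC_JP2);

    opj_dparameters_t params;
    opj_set_default_decoder_parameters(&params);
    params.cp_reduce = 2;                 // 0 = full res, each step halves width/height
    opj_setup_decoder(codec, &params);

    opj_image_t* image = nullptr;
    opj_read_header(stream, codec, &image);
    opj_decode(codec, stream, image);     // a 1024x1024 asset comes back as 256x256
    opj_end_decompress(codec, stream);

    opj_stream_destroy(stream);
    opj_destroy_codec(codec);
    return image;                         // caller frees with opj_image_destroy()
}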

The code can do all that. The developers did their homework. Plus there's the level of detail system, which is separate from the texture loading system. But there's no central control of which optimizations to use when. There's some statistics collection (sent to the LL servers), but no automatic feedback and self-tuning. There's nothing that checks "frame rate dropping, take action to reduce graphics card overload".
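A purely hypothetical sketch of the kind of closed loop described above (nothing like it exists in the viewer as far as I can tell): watch a smoothed frame time and bias the texture policy toward lower resolutions before the card starts thrashing.

#include <algorithm>

// Hypothetical: discardBias is how many mip levels the fetch/upload policy
// is asked to drop globally. A real controller would add hysteresis and a
// cooldown so it doesn't oscillate every frame.
struct TextureLoadController {
    int    discardBias     = 0;     // 0 = request full resolution
    double smoothedFrameMs = 16.0;

    void onFrame(double frameMs) {
        smoothedFrameMs = 0.9 * smoothedFrameMs + 0.1 * frameMs;  // low-pass filter
        if (smoothedFrameMs > 33.0)          // below ~30 fps: back off texture detail
            discardBias = std::min(discardBias + 1, 4);
        else if (smoothedFrameMs < 20.0)     // comfortably fast: restore detail
            discardBias = std::max(discardBias - 1, 0);
    }
};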

Just finding all the policy code, which is spread across multiple modules, and writing down what it's doing would help. There are lots of options, and they have direct visual effects. You could choose to bring up the world at low-rez out to the view distance, then fill in locally, or bring up the nearby world at high resolution first, then bring up the background. Or, from Callum's report, find out and prevent bringing in a large texture for a small object.

This area may not have been re-tuned since the textures were moved to a CDN back-ended by AWS. Originally, texture files were served by the sim servers. So trying to avoid bogging them down was a priority. Now they're far from the sim servers, in different data centers. The optimal strategy for fetching is different. Making a large number of simultaneous HTTP requests is not a problem. (Or is it? It's using a CDN that thinks the SL viewer is a web browser. There may be throttling at the CDN if requests from a single IP address come in faster than a web browser would ever do. That's part of how a CDN protects itself against denial of service attacks. If you look at Firestorm logs, there are a lot of rejected HTTP requests and retries. Also, while reading part of a JPEG 2000 file and then closing the connection is a valid operation, a CDN might view that as a hostile act intended to waste CDN resources. Anybody able to look into this? There really are too many HTTP errors in logs.)
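One way to read just the start of a texture without an abrupt close is an ordinary HTTP range request, which a CDN treats as a normal, cacheable operation. A minimal libcurl sketch (the 64 KiB figure is arbitrary, and whether the CDN honours ranges for these assets is an assumption):

#include <curl/curl.h>
#include <string>

static size_t appendBytes(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

// Fetch only the first 64 KiB of a progressive JPEG 2000 asset; that is
// typically enough for a low-resolution decode.
std::string fetchTexturePrefix(const std::string& url) {
    std::string body;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_RANGE, "0-65535");          // Range: bytes=0-65535
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendBytes);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return body;
}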


2 hours ago, CoffeeDujour said:

Tuning the HTTP requests to match the CDN could resurrect a lot of the problems that people had with domestic routers getting overwhelmed.

That's a good point. The viewer needs to throttle the requests to the asset servers if the sim ping time, which the viewer knows, goes up. If the "Network" lag indicator is not green, bulk asset loading needs to slow down. There may be code to do that already.

Whether you're going to have trouble with this is easy to check with this speed and bufferbloat test. Watch the "bufferbloat" meter during the download and upload phases of the test. If it goes into the red during download, the CDN can overload you. If the meter stays near the low end during download, you're good. If it goes into the red during upload, that's usually tolerable, because SL traffic is almost all download. (Just don't have anything else on your local net doing heavy upload while you're in SL.)

This is gradually getting fixed. Look up "bufferbloat", "fair queuing", "fq_codel", and "DOCSIS 3",  if you're interested.

 


35 minutes ago, CoffeeDujour said:

The problem was nothing to do with network speed or bandwidth, just the sheer volume of http calls.

The router shouldn't care about that. They're just packets to the router. Unless it has some weird stateful firewall or something. Windows has some rate limiting on HTTP requests, partly to prevent machines infected with malware from causing too much damage, and partly to make you buy the server version of Windows for high-volume usage.

Here's a connection report from DSLreports:

[Screenshot: DSLreports speed test result showing bufferbloat]

Bufferbloat on the uplink side. Try to do anything while an upload is in progress and performance will be very bad. Download, no problem.


25 minutes ago, animats said:

The router shouldn't care about that. They're just packets to the router.

It was a huge problem, LL spent a lot of time testing lots of routers and even found some that were wholly incompatible with SL. You wouldn't want to run a small office over the cheapest domestic router your ISP gives away and it turned out a single SL client was generating a comparable amount of http calls. People don't tend to upgrade their router just for fun, so it's fair to say a sizable chunk of the SL userbase are still connecting with the same lowest-bidder hardware they have been for years.


2 hours ago, CoffeeDujour said:

It was a huge problem, LL spent a lot of time testing lots of routers and even found some that were wholly incompatible with SL

Hm. Here's a typical texture URL:

http://asset-cdn.glb.agni.lindenlab.com/?texture_id=d449da3c-6fc8-5755-5414-6250b2e09369 (random texture from Zindra, NSFW. JPEG 2000, so you need something like GIMP to read it).

The domain is the same for all textures, so a persistent HTTP connection could be used and reused. One way around routers mucking with content is to go HTTPS. Then the router has no idea what you're doing over that connection. One of the big reasons to go HTTPS is obnoxious middle boxes.

That URL will not work in HTTPS mode; the CDN server is not set up for it. Akamai supports HTTPS over TLS now; they didn't a few years ago, which may be why this is not HTTPS. There may be an extra charge for HTTPS.


8 hours ago, animats said:

What resolution of a texture file should be fetched from the servers? (The JPEG 2000 files are progressive; you can read a small part and get a low-rez version.)

JPEG2000 doesn't have "mip mapping" or "low rez versions".

What it has is, as you said, a progressive encoding system for its lossless compression; it's a bit like a stepped pyramid.

When the original image has been subjected to lossless compression, it creates a second step to the pyramid, SAME image resolution (width in pixels by height in pixels) but with a fuzzy, lossy, colour-averaged-across-blocks-of-pixels preview version, then it creates another step above that, with even worse quality AT THE SAME image resolution.

JPEG2000 wasn't designed as a mipmapped game format, like DDS, it was designed for WEB pages, and the progressive system was supposed to be an improvement on the old progressive interlaced "alternate rows of pixels" idea, to give you SOMETHING to look at while you waited for the rest of the file to load over your 2400 baud dialup modem.

Standard decoding process is... Read image size from header, reserve that amount of memory then read progressive previews and stretch-fill them into that blank full sized image reservation parking lot, one after the other as more of the file arrives.

Strictly speaking, it's a stupid feature, people had pretty much abandoned the use of progressive interlacing due to improved bandwidth speeds before the new version was added to JPEG2000.

It's also partly responsible for the way texture thrashing looks in SL.

Render engine sucks down a texture, throws up the low quality blurry as hell full sized preview layer 1, as more of the file arrives, replaces that with a less crap version, then shows the actual image, then drops the texture, then says oh crap you are still standing on the floor that used that, reloads the texture, redisplays the ultra-crap full sized preview, then the regular crap preview, then the full image, then drops it, then says oh crap you are still standing on that floor...

Rinse and repeat.

Worthless progressive feature plus bad "which texture to drop" code = standard SL focus/defocus texture thrashing.

SL can't use the openGL mipmapping code to select mip maps, because SL doesn't HAVE mip maps, tht code only works in opengl if your application has mip maps to send to the gpu. No Mipmaps, no mipmapping selection. QED really.

1 hour ago, animats said:

(random texture from Zindra, NSFW. JPEG 2000, so you need something like GIMP to read it).

You can open it with IrfanView, which is a lot quicker and easier to install than GIMP, and can convert it to damn near any format you want...



 


22 minutes ago, Klytyna said:

SL can't use the openGL mipmapping code to select mip maps, because SL doesn't HAVE mip maps, tht (sic) code only works in opengl if your application has mip maps to send to the gpu. No Mipmaps, no mipmapping selection.

Go look at the viewer code, if you can read C++. Read llviewertexture.cpp. There's lots of code there to support mipmapping. The graphics board does most of the work. OpenGL will even create the mipmaps for you. It's not clear when it's turned on, but there's clearly code for it. Mipmapping increases VRAM consumption for each texture by about 1.5x, so there's a tradeoff.
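For the record, the overhead of a full mip chain can be pinned down exactly: each level is a quarter the size of the one below it, so the total memory is a geometric series,

1 + \frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \cdots = \frac{1}{1 - \frac{1}{4}} = \frac{4}{3} \approx 1.33\times

which puts the cost closer to a third extra than a half.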

This beginner tutorial on mipmaps in OpenGL may be useful.

15 minutes ago, CoffeeDujour said:

There is nothing wrong with progressive textures, the code that chooses what to drop or when to downgrade could do with some love.

Exactly. The heavy machinery to do the job is all there. The policy on when to use it could use some attention.

(Just guessing, but I wonder if the antialiasing settings interact with mipmapping control. Antialiasing implies rendering at a higher resolution and then blurring down, so extra texture resolution helps. Try turning off antialiasing and see if the problem Callum mentioned goes away.)


21 minutes ago, CoffeeDujour said:

There is nothing wrong with progressive textures

You mean apart from the pointless overhead creating them, transmitting them, and decoding them? And the bloody awful effect they have on texture quality?

File header with image resolution and colour depth, followed by...

50kb Supercrap lossy jpeg of a 150kb Crap lossy jpeg of a 350kb png file, followed by...

150 kb Crap lossy jpeg of a 350 kb png file, followed by...

The lossless 350kb png file you should have transmitted instead of the progressive low quality preview crap...

Why waste cpu/gpu time rendering a crap preview for a couple of seconds, when it's almost as fast to render the actual texture, once you strip away the progressive bs data-waste.

5 minutes ago, animats said:

OpenGL will even create the mipmaps for you

If it's told to, but there's an overhead... Generating mipmaps on the fly burns GPU time, that you want used for rendering frames... It's wasteful, that's why mip-mapped systems normally use premade mip-maps, which SL doesn't have, because it's built around a web page file format not a game texture file format.

8 minutes ago, animats said:

Mipmapping increases VRAM consumption for each texture by about 1.5x, so there's a tradeoff.

And our problem with 1024 textures and having too many of them is...

VRAM overfill, needing textures to be dropped, so mip-mapping where you load all the mips into VRAM for the GPU to choose from, with its 1.5x memory-use multiplier overhead, simply makes the problem worse.

A better option would be separate mip files, and the cpu chooses which mip version (just the one) to send to vram based on range etc. but that too has overhead.
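A hypothetical sketch of that "CPU picks the one mip to send" idea (not how any current viewer works): estimate the object's on-screen size and upload only the level whose texel density roughly matches it.

#include <algorithm>
#include <cmath>

// Pick a single mip level for a square texture of baseSize texels per side
// (e.g. 1024), given a rough estimate of the object's on-screen width in
// pixels. Level 0 is full resolution; each level halves width and height.
int chooseMipLevel(int baseSize, float screenPixels) {
    float texelsPerPixel = baseSize / std::max(screenPixels, 1.0f);
    int level = static_cast<int>(std::floor(std::log2(std::max(texelsPerPixel, 1.0f))));
    int maxLevel = static_cast<int>(std::log2(static_cast<float>(baseSize)));
    return std::min(level, maxLevel);
}

// Example: a 1024 texture on an object roughly 40 pixels wide on screen
// gives level 4, i.e. a 64x64 upload instead of the full 1024x1024.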

13 minutes ago, animats said:

Antialiasing implies rendering at a higher resolution and then blurring down, s extra texture resolution helps. Try turning off antialiasing and see if the problem Callum mentioned goes away.)

The AA settings are NOT used when ALM is enabled... ALM uses its own AA system, which isn't controlled from the preferences panel at all. So setting AA to off has no noticeable effect on anything if you use ALM/Materials.



 


3 minutes ago, CoffeeDujour said:

If we didn't have them, SL would be a lot more grey.

We're old hands, we're used to SL being grey, and frankly shorter periods of grey might well be preferable to longer periods of endless repeated autoblur.

The problem of too many 1024's, and vram overfill, is largely due to...

Piss poor UV maps, resulting from the use of the "generate uv" auto-fail button in Blunder-3D, by people too lazy and stupid to learn how to UV map.

Classic example, actual case in SL, a ribbon choker, in mesh.

Now, if I was making such a thing, in C4D, I'd use a spline path extrusion, and the uv map that comes with that would basically be a square, where ALL the pixels get used, top and bottom edges of the map might be a seam along the back of the ribbon, left and right edges would be the ends of the ribbon, easy.

I could even tile the uv along the length of the ribbon multiple times, OR use a non square texture, whose aspect ratio matches that of the ribbon, say 1024 x 64 perhaps.

But when this item was made, in Blunder-3D, the creator chose to use the auto-fail button, and the actual uv map is...

A strip about 900 x 32, centered in a 1024 x 1024 texture. WASTE.
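Putting a number on that, using the figures above:

\frac{900 \times 32}{1024 \times 1024} = \frac{28\,800}{1\,048\,576} \approx 2.7\%

In other words, only about 2.7% of the texels in that 1024 x 1024 ever get sampled; the rest are downloaded, decoded and held in VRAM for nothing, which is exactly what a 1024 x 64 (or tiled) layout would avoid.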

Then you add progressive "reduced quality previews" to that mostly wasted 1024 x 1024 for slower load and decode times, and thus slower rezzing. WASTE.

Then Animats adds on the fly GPU generated mip-mapping, lowering frame rates in the process. WASTE.

Then the loading of the mip-maps into vram increases vram usage by 50%. WASTE.

And the use of mip-maps by the rendering engine reduces the already bloody awful texture quality, so the creator sees their stuff looking all blurry and crap and decides to revise the thing so instead of using a SINGLE 1024x1024, badly, it uses FOUR 1024x1024's VERY BADLY! 

EVEN MORE WASTE!

Congratulations, the plan to use gpu generated mip-maps of progressive load textures has increased vram usage overfill by 50%, thus increasing the need for texture dropping by 50%, and increased texture thrashing by 50%, and that's BEFORE the creators respond by using MORE large textures in a futile bid to reclaim image quality.

Your cure is worse than the disease...

Maybe, just maybe, our time would be better spent actually trying to teach creators HOW TO BLOODY UV MAP PROPERLY.



 


31 minutes ago, animats said:

It's become tiresome and pathetic.

That was my exact thought when I recently read a patronising suggestion (complete with "if you can read C++" and "here's a noob's guide to hardware rendering") by a tech-illiterate dev-team wannabe, to cure VRAM overfill and texture thrashing not by encouraging better content creation practices, but by misusing automated features to increase VRAM usage by 50% per texture, while decreasing viewed texture quality 75% by forcing the use of half-sized mip-maps.

What a co-incidence...
 

