
This is why we can't have nice things.



19 hours ago, CoffeeDujour said:

I'm less concerned with rezzed stuff; that will always be contained by LI.

The complexity, at least. LI doesn't take texture use into account at all, which is why texture use has exploded to ridiculous levels. LL needs to find a way to rein that in. I think the best way would be for texture use to affect LI. It's not a perfect way of dealing with it, but I think it will provide the right encouragement content creators need to watch their texture use more carefully.

And before anyone worries this would mean lower res textures, I want to point out a few things.

  • A lot of the mesh content I've seen in SL uses textures with lots of wasted pixel space, meaning most of that texture's memory isn't actually being used in the content you see on screen. Tighter UV wrapping would fix this without altering the appearance of the object at all.
  • A lot of content creators use numerous high-res textures on small, barely noticeable objects in a scene. That drinking cup on the table should not be using as many textures as (sometimes more than) the table itself.
  • We need to be smarter about how we do simple, common texture effects, like lights and shadows. LL should really provide a library of common textures, like shadows and light effects, so that instead of every object in a scene having its own separate yet nearly identical square floor/wall shadow, they could all be using the same single texture.
21 hours ago, ChinRey said:

So, right after he had read my offer, he told the world he had yet to see any of us suggest how to.

I have to admit, that stung. Still, I think a lot of that has to do with communication at the lab. When I point out nasty graphics issues on the Jira, does someone in Patch's position ever see it? Or does it just get dismissed by whoever is cleaning up the Jira issues that day? And no individual can be everywhere and read every forum post at once. Not to mention turnover. I've been in SL since 2005, documenting issues since at least 2007. I can't imagine that when LL hires a new employee they tell them to scan 10+ years of forum posts and user blogs to see what needs to be fixed.

In an ideal world LL would have an art team that, collectively, would know all this stuff we post about. As far as I know it's just Patch and the Moles right now and to get their attention we either need to show up at an office hour being held at noon on a work day, or wave our arms while shouting into the forum and hope the right Linden sees it.

 

 

 


1 hour ago, Penny Patton said:

I have to admit, that stung. Still, I think a lot of that has to do with communication at the lab. When I point out nasty graphics issues on the Jira, does someone in Patch's position ever see it? Or does it just get dismissed by whoever is cleaning up the Jira issues that day?

In my experience, when I post a JIRA, it takes on average about two years from when it's been rejected until they are forced to implement it anyway - with no mention of or reference to the JIRA, of course.

 

1 hour ago, Penny Patton said:

And no individual can be everywhere and read every forum post at once.

At least one Mole (possibly two but the second one is probably an ex-Mole) is also a very active forumite. She's mainly active elsewhere in the forum but she's certainly read and even occasionally replied to some of the mesh optimization discussions. So obviously it's not only a lack of communication between departments at LL, the LDPW people don't even communicate with each other.

Oh well.

When Arctan is launched, it can be a symbolic band-aid tailor-made to suit the dominating merchants and builders, or it can be a bona fide attempt to make the most realistic measurement of actual load that is possible. Most likely it will be something in between. Anything too far away from realism would be a disaster though, and the developers know that, so I'm sure they won't do too bad a job. That means the big players will have to come here, because this is actually the only place where you can find a decent volume of reliable information about optimizing mesh for Second Life. So they will take notice. And they will take the profit. And the credit. Leaving the people who did all the hard work without as much as a half-hearted "thank you".

But I'd rather not think about that. Because when I do, I can't help wishing I had never even heard about the Banana Republic of Second Life. I'm not quite ready for that yet so if it's ok by you, I'd rather stay in the comfort of the denial phase for a little bit longer.


1 hour ago, ChinRey said:

At least one Mole (possibly two but the second one is probably an ex-Mole) is also a very active forumite.

I'm not certain what the relationship between LL and the Moles actually is. It seems to me that they're more like contractors than employees and I'm under the impression they have no more input on SL development than we do.


1 minute ago, Penny Patton said:

I'm not certain what the relationship between LL and the Moles actually is. It seems to me that they're more like contractors than employees and I'm under the impression they have no more input on SL development than we do.

Yes, but it seems they don't have much input on the policies they are told to follow for their own work either. The Mole I was referring to here is mainly a scripter and I have yet to see a single Mole mesh made by her. But she does have at least some of the knowledge the other Moles and the LDPW Lindens need to know. Does this mean the employees and contractors at LDPW don't discuss building with each other at all?


3 hours ago, Penny Patton said:

When I point out nasty graphics issues on the Jira, does someone in Patch's position ever see it? Or does it just get dismissed by whoever is cleaning up the Jira issues that day?

Rendering/graphics/all viewer-side issues are triaged by Linden Lab viewer QA every single day of the working week.
They have a daily meeting in which viewer issues reported in BUG are triaged.
BUG triage consists of going through all newly filed viewer bugs in the BUG project, as well as all other updates to BUG issues where the reporter or another Resident has added further information.

If the reported bug is obviously a bug & can be reproduced when tested in triage, the JIRA issue is imported into the LL internal JIRA & is then essentially on the "to-fix" list.
If there is not enough information in the bug report to tell if it's a real bug or not, the filer will be asked for more information & when that has been provided, that JIRA issue will fall into the triage queue again.

The viewer QA Lindens who triage the JIRA issues are brilliant & take a lot of care to investigate all the bug reports.
It's really unfair to say that those JIRA issues "just get dismissed by whoever is cleaning up the Jira issues that day".


One way to look at graphics load is to bring up the statistics bar and look at KTris per second. That's the number of triangles drawn on screen. The biggest numbers you see there indicate how fast your machine can draw. My Linux machine with a GeForce 640 maxes out around 30 million triangles per second when the GPU is near 100%.  You may run out of GPU time or main-thread CPU time first. Whichever you run out of first will put a ceiling on KTris per second.

The statistics bar also shows KTris per frame. If you're graphics-bound, frame rate times KTris per frame is KTris per second. So divide your maximum KTris per second by 30 FPS or so to get the maximum KTris per frame your machine can handle before it chokes. For my machine, that's about 1000 KTris/frame, or a million triangles.
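To make that arithmetic concrete, here's a quick back-of-the-envelope calculation using the example figures above (30 million triangles per second, 30 FPS); plug in your own numbers from the statistics bar:

```python
# Rough per-frame triangle budget from the example figures above.
# Illustrative only; measure your own maximum with the statistics bar.
max_tris_per_second = 30_000_000   # ~30,000 KTris/sec when the GPU is maxed out
target_fps = 30                    # frame rate you want to hold

budget_per_frame = max_tris_per_second / target_fps
print(f"Triangle budget per frame: {budget_per_frame:,.0f}")   # ~1,000,000

# And the other way around: a scene drawing 3 million triangles per frame
# on the same machine can only manage about 10 FPS.
print(f"Expected FPS for a 3M-tri scene: {max_tris_per_second / 3_000_000:.0f}")
```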

Once the frame rate drops, the viewer starts to behave badly. Typing echo slows down. Events to and from the server are processed slowly and motion becomes jerky. Measured ping time goes up. The viewer indicates that the network is slow, even when it's not. This may result in unnecessary throttling of texture and mesh loads. (Not sure about that one.) The user experience becomes very poor.

The job of the viewer is thus to pick the million triangles that best represent the scene. Currently, what the viewer chooses to load and display is based on preset draw distance and level of detail values. The viewer does not react automatically to scene complexity. It should. Like this:


Tralala's Diner, usual settings. 10 FPS. User interface and movement sluggish. The outdoor market here is a good test of SL. All those stalls are full of highly detailed items. But movement is like swimming in mud and you don't want to go from stall to stall.



Tralala's Diner, lower settings. 25 FPS. User interface not sluggish. 32 m draw distance. Advanced lighting off. Level of detail factor reduced from 2 to 1. Now you can look around and move around, get close to the objects and check them out. Visual quality is down a little, but not too much.

The viewers can handle a scene that complex. They just need to automatically downshift to lower settings. This alone would probably make Fashion Week not choke.
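As a sketch of what "automatically downshift" could look like, here's a minimal control loop. This is not how any current viewer works; the setting names, step sizes and thresholds are made up for illustration:

```python
# Hypothetical auto-tuning loop: not actual viewer code, just the idea.
# Settings are adjusted one step at a time so the change isn't jarring.
TARGET_FPS = 25
RECOVER_FPS = 40  # only raise quality again when there's clear headroom

def adjust_quality(settings, measured_fps):
    """Nudge one setting per call, cheapest visual loss first."""
    if measured_fps < TARGET_FPS:
        if settings["draw_distance"] > 32:
            settings["draw_distance"] -= 16          # pull in draw distance first
        elif settings["lod_factor"] > 1.0:
            settings["lod_factor"] -= 0.25           # then reduce LOD factor
        elif settings["advanced_lighting"]:
            settings["advanced_lighting"] = False    # last resort: drop ALM
    elif measured_fps > RECOVER_FPS:
        # Restore quality in the reverse order, slowly.
        if not settings["advanced_lighting"]:
            settings["advanced_lighting"] = True
        elif settings["lod_factor"] < 2.0:
            settings["lod_factor"] += 0.25
        elif settings["draw_distance"] < 128:
            settings["draw_distance"] += 16
    return settings
```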


3 hours ago, Whirly Fizzle said:

The viewer QA Lindens who triage the JIRA issues are brilliant & take a lot of care to investigate all the bug reports.
It's really unfair to say that those JIRA issues "just get dismissed by whoever is cleaning up the Jira issues that day".

 It's been a long time since I bothered with the Jira. I was pretty active on it for years and it is fair to say a lot went ignored or dismissed out of hand in that time. A lot of the bugs and issues I reported then are still problems now. But again, I don't know if the same people handling the Jira now are the same people who were handling it back before I gave up on it. Gawd, I just realized how long it's been since I posted anything there. Well, if things have improved, that's great. Maybe I'll try and bump one of my old issues if it's still listed there.


4 hours ago, CoffeeDujour said:

Dynamically switching advanced lighting on and off is not practical unless your avatar is wearing roller-skates.

The screen shots look similar side by side, but you do that in real time on screen and the effect is the lights appear to strobe.

Yes, you have to be gentle about the switching to make it look good. Lights in SL are always going on and off anyway as you move, because OpenGL has some severe limits on number of lights.

There's a good discussion of viewer design philosophy in the "Culling" wiki article. A key paragraph:

"We've (somewhat intentionally) done a very poor job at Linden Lab of educating builders on how to be performance conscious, which I think plays a large part in the overall sluggishness of the viewer. This is part of a philosophy that says it should be possible to build a content authoring system where artists don't need to care about performance implications, as the software should just deal with whatever comes its way appropriately. "

OK, fine. That's LL's design philosophy. Don't blame the creators, fix the technology. Now, how to make it work.

Land impact calculation alone cannot solve frame rate problems. On big parcels, it's easy to put too much complexity into a small area. That's what Tralala's Diner does; it's on a private sim, and most of the complex objects are concentrated in a small area. The LI system did what it could. The frame rate didn't drop all the way into low single digits, where things start breaking. So the scene works, but slowly. That gives us something to work with.

It's the job of the viewer to dial back the scene complexity to get the frame rate up, into the 25-30 range. That's within reach. You can get a 3:1 change in frame rate by tweaking the quality settings. The viewer itself could do a better job of this than the user can, changing the settings slowly and noting whether avatars or objects are the worst part of the problem. There are also tuning knobs that aren't exposed in the user interface, such as the distance at which shadows appear.

Has anyone been down this road in a viewer? It seems an obvious thing to try.

The "Culling" article continues:

"What we've seen, however, is that people (even technically minded people) consume as many resources as are available to them until performance reaches a level they deem is unacceptable, so the more efficient your renderer becomes, the more ludicrous the demands become. In SL, this usually means the performance becomes that of the lowest common denominator in terms of what is acceptable. That is, if you think performance is important and build a nice house with opaque window exteriors and few textures, you'll still have a bad experience if your neighbor thinks a flexi-prim bamboo forest is the bees knees."

Dealing with that is more complicated, but not impossible. The level of detail system tries to equalize the number of pixels per triangle across the screen.  It could try harder. Too many small triangles should mean being dropped to a lower LOD sooner. So the flexi-prim bamboo forest would be forced down to "lowest" LOD as the viewer cuts the LOD factor to speed up the rendering. If your object has too much detail and bad low-LOD models, under overload it should look worse.
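A rough sketch of that idea, choosing the LOD by pixels per triangle instead of distance alone. The threshold and the triangle counts are arbitrary example values, not anything the viewer actually uses:

```python
# Sketch: pick a LOD level so each triangle covers at least N pixels on screen.
# Entirely illustrative; the real viewer uses radius-based switch distances instead.
MIN_PIXELS_PER_TRIANGLE = 4.0

def choose_lod(tri_counts, screen_area_px):
    """tri_counts: triangles per LOD, highest first.
    screen_area_px: how many pixels the object covers on screen."""
    for lod in ("high", "mid", "low", "lowest"):
        if screen_area_px / tri_counts[lod] >= MIN_PIXELS_PER_TRIANGLE:
            return lod
    return "lowest"

# A 20k-triangle object covering only 10,000 pixels gets forced down:
print(choose_lod({"high": 20000, "mid": 5000, "low": 1200, "lowest": 300}, 10_000))  # -> 'low'
```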

What to do about objects with no lowest LOD? Maybe drop them all the way down to grey bounding boxes. That would be a useful debug option for finding the performance hogs in a scene. Billboard impostors would be better. Some of us are trying to make impostors work, but it's at an early stage.


4 hours ago, Penny Patton said:

 It's been a long time since I bothered with the Jira. I was pretty active on it for years and it is fair to say a lot went ignored or dismissed out of hand in that time. A lot of the bugs and issues I reported then are still problems now. But again, I don't know if the same people handling the Jira now are the same people who were handling it back before I gave up on it. Gawd, I just realized how long it's been since I posted anything there. Well, if things have improved, that's great. Maybe I'll try and bump one of my old issues if it's still listed there.

Triage's job is to sort out the "must fix" issues; that doesn't mean that everything that doesn't meet their internal criteria is just ignored, even if the copy-paste issue-closed message might as well just say "Nice find! We have put it with the rest of the fire."

1 hour ago, animats said:

"We've (somewhat intentionally) done a very poor job at Linden Lab of educating builders on how to be performance conscious, which I think plays a large part in the overall sluggishness of the viewer. This is part of a philosophy that says it should be possible to build a content authoring system where artists don't need to care about performance implications, as the software should just deal with whatever comes its way appropriately. "

I absolutely agree with this 100%

In a world made of textured prims, this should be absolutely achievable in the viewer.

I am confident that the VRAM texture issues will be solved in time, entirely in software, and require no changes to behavior on the part of content creators. Better processing and use of smaller textures in the viewer will fix this.

Unlike a texture, you can't easily make a 50% resolution mesh on the fly and expect it to be anything you would want to look at in all circumstances. This is why we have separate LOD meshes rather than just the top mesh and some magic to make it less detailed as needs be. Maybe a machine learning solution ...
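The asymmetry is easy to put numbers on: a texture already carries its own ready-made 50% version (the next mip level), while a mesh has nothing equivalent. A minimal illustration of the memory side, assuming plain uncompressed RGBA textures:

```python
# Memory for an uncompressed RGBA texture, with and without dropping one mip level.
# Real viewers use compressed formats, so absolute numbers differ; the ratio holds.
def rgba_bytes(width, height, with_mipmaps=True):
    base = width * height * 4                            # 4 bytes per pixel (RGBA8)
    return int(base * 4 / 3) if with_mipmaps else base   # mip chain adds ~1/3

full = rgba_bytes(1024, 1024)   # ~5.6 MB with mips
half = rgba_bytes(512, 512)     # ~1.4 MB with mips
print(full / half)              # dropping one mip level cuts memory to a quarter
```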

In an ideal world, that is what we would have. All the lower LOD meshes could be flushed away and your device, whatever it was, would just do the right thing.

Leaving the lid off complexity should be done AFTER such a system is in place.

1 hour ago, animats said:

"......................That is, if you think performance is important and build a nice house with opaque window exteriors and few textures, you'll still have a bad experience if your neighbor thinks a flexi-prim bamboo forest is the bees knees."

Yet we have never been provided the tools to hide the proverbial giant ? on the plot next door, because the commitment to a shared contiguous space overrides everything else.

1 hour ago, animats said:

Yes, you have to be gentle about the switching to make it look good. Lights in SL are always going on and off anyway as you move, because OpenGL has some severe limits on number of lights.

This sadly doesn't work.

For example ... You walk into an area, the viewer is under pressure so ALM is off. Things settle down and ALM gets flipped on & set up in such a way that there are no visual differences (starfox: good luck!), the idea being to gradually raise the lighting over a couple of seconds and have things just improve.

The result is a sudden hard change in frame rate before any visual change has been seen. There is no way to predict what that change will be, only that it will happen.

The viewer is then faced with a choice: switch ALM off to get the frame rate back up to some user-defined acceptable minimum, or stick with it and hope that as the user continues to move or cam about, it doesn't get worse.

A smooth even experience is better than a choppy one. A solid 10fps is preferable to it jumping about seemingly at random even if the average is much higher.


11 hours ago, CoffeeDujour said:

This sadly doesn't work.

For example ... You walk into an area, the viewer is under pressure so ALM is off. Things settle down and ALM gets flipped on & set up in such a way that there are no visual differences (starfox: good luck!), the idea being to gradually raise the lighting over a couple of seconds and have things just improve.

The result is a sudden hard change in frame rate before any visual change has been seen. There is no way to predict what that change will be, only that it will happen.

Yes, the connection between settings and frame rate is difficult to predict.

Turning ALM on and off is a bit drastic, but it gets about a 70% change in frame rate. Same for shadows.

Changing the LOD factor doesn't take effect immediately. The avatar has to cross an existing LOD threshold before it does anything.

Changing draw distance has immediate effect but may not help much if the near scene is cluttered. It has almost no effect in the street market at Tralala's Diner.

Turning off "Hardware skinning" makes the frame rate go up on my machine.


As a sidenote on the opening topic, I'd like to point out that I sincerely believe the heavy and laggy content we are getting is conditioning current and future uses of Second Life.

There is a reason why other platforms seem just as successful with just a fraction of Second Life's features: with the content we have, using SL to do anything but chat while sitting on dance poses is extremely difficult.

As for the texture issue, I think ONE of the possible factors is that, from a purely "creation" side of things, why would you pay 10L$ to upload a 32x32 when you can upload a 1024x1024 for the same price? I'm not sure how much this factors in, but a lot of items I end up having to retexture after purchase just use 1024 after 1024, and I can't imagine they never realized it was completely overkill.

I think it also pushes away creators who could push SL outside of this whole "retirement home activities" kind of deal, which in turn pushes out users looking for a more "intense" experience.

 

I don't know if SL will still be there in 10 years, but I really hope we find some sort of solution to raise region health and framerate enough that creating play experiences and things that are not just avatar accessories will be more viable.


5 hours ago, Penny Patton said:

I guess my comment wasn't so unfair after all. Oz just dismissed the bumped Jira out of hand.

I'm trying to add my take on his response to that JIRA report, since I saw it come up in my feed and it really *really* feels like Oz completely ignored, or is maybe somehow unaware of, how important a reliable, across-the-board baseline measuring system is for content creation involving multiple creators making all sorts of things that need to work well together across SL. But for some reason the JIRA doesn't seem to be letting me comment on it and I'm not entirely sure why. Showing such a total disregard for such an integral factor really made my hope in SL plummet.


Here's a simple way to see what has good lower levels of detail. In Firestorm, go to Advanced->Debug Settings and set RenderVolumeLODFactor  to a value below 1, like 0.25. This is the same value you set in Graphics Preferences as level of detail, but in Debug Settings, you can set it below 1. (It will switch back to its regular value the next time you change it in Preferences.) Now you switch to a lower level of detail at short ranges, and can see what's done well and what isn't.

Prim objects have a built-in LOD model, and they generally look OK at all LOD levels.

Complex sculpties often look terrible at low-LOD. They turn into junk, with vertices all over the place.

Buildings with bad lowest-LOD models become see-through. That's probably the worst defect in mesh objects, because it's very visible from far away. I went to a major seller of prefab houses to look at their buildings. The older prim-based buildings look OK in this mode. The mesh buildings look awful. 

My own bikes look awful. I knew that; they're built from off the shelf parts. I'm going to have to start working from models in Blender to get something that looks good at low LOD.

Looking around in forced low-LOD mode is very frustrating. I'm not putting up pictures, since naming and shaming is frowned upon. But it's not looking good.

 

 


5 hours ago, animats said:

Here's a simple way to see what has good lower levels of detail. In Firestorm, go to Advanced->Debug Settings and set RenderVolumeLODFactor  to a value below 1, like 0.25.

There's no need to go to the debug settings for it. You have that in the quick preferences menu (bottom right corner of the window) too.

I have to add one warning though: too strong LoD models can be just as bad as too weak ones. What you effectively do by over-strengthening the LoD models, is to "hardwire" a high LoD factor into the model, adding all the disadvantages of high LoD settings and leaving the user with no way to avoid them.

The secret of good LoD is to reduce the amount of details as much as possible without affecting the visual appearance.

The core problem of the RenderVolumeLODFactor isn't actually that it's abused by content creators to cheat on land impact, it's the fact that it exists at all. The LoD swap distances could be set to anything. Half of what it is today or ten times as high - either would be fine as long as the content creators could adjust to those distances and the LI calculation took it into account. But when such a crucial factor becomes an unknown variable, it's very difficult to optimize well and impossible to come up with a realistic way to calculate download cost.
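To illustrate why that unknown variable hurts so much: the swap distances scale with the object's bounding radius multiplied by the user's RenderVolumeLODFactor, so the same model swaps at completely different distances on different clients. The divisors below are made-up stand-ins, not the viewer's actual constants:

```python
# Illustrative only: LOD swap distance scales with object radius and the user's
# RenderVolumeLODFactor. The divisors are invented for the example, NOT the
# viewer's real constants.
ILLUSTRATIVE_THRESHOLDS = {"mid": 0.24, "low": 0.06, "lowest": 0.03}

def swap_distances(radius_m, lod_factor):
    return {lod: round(radius_m * lod_factor / t, 1)
            for lod, t in ILLUSTRATIVE_THRESHOLDS.items()}

print(swap_distances(0.5, 1.0))   # one user's swap distances for a 0.5 m object
print(swap_distances(0.5, 4.0))   # another user's: four times as far, same object
```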


That's not what breaks visual quality, though. What looks awful is low-LOD models which are quite visible and totally wrong. Random single triangles, distorted textures because vertices were removed, that sort of thing. Sometimes worse than a grey blob. One face of the right size with a blurry texture would be a big improvement for distant objects.


At yesterday's content meeting I suggested scaling the price of texture uploads based on pixel area. I wonder if a change like that would encourage people to use smaller textures and/or make better use of the UV space? Even if it's not an actual money deterrent (upload fees have never been a deterrent), at least it would acknowledge that a 32x32 is not equal to a 1Kx1K.
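Purely as an illustration of that suggestion (not anything LL has proposed), a pixel-area scaled fee anchored at the current flat L$10 for a 1024 might look like:

```python
import math

# Hypothetical upload fee scaled by pixel area, anchored at L$10 per 1024x1024.
BASE_FEE_L = 10
BASE_PIXELS = 1024 * 1024

def upload_fee(width, height, minimum=1):
    return max(minimum, math.ceil(BASE_FEE_L * (width * height) / BASE_PIXELS))

for size in (32, 128, 256, 512, 1024):
    print(f"{size}x{size}: L${upload_fee(size, size)}")
# 32x32: L$1, 128x128: L$1, 256x256: L$1, 512x512: L$3, 1024x1024: L$10
```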


This is a great discussion. I have seen a lot of these items too, and I cringe sometimes; one outfit has more verts than a whole sim would need, lol. It inspires me to try and be far more optimized with anything I build.

I think it'd be nice if the marketplace (and also the object inspector) gave us a reading of tri count & texture count, and also showed a view of the item at all LOD levels. People could choose products more wisely if they could see more details about them, and a product listing date would be helpful for avoiding laggy legacy stuff. And a big red warning sign on anything with sculpties, alpha blending, or particles; those can kill a whole area if they're around, not just visually but in performance too.


On 9/14/2018 at 6:26 AM, Kyrah Abattoir said:

At yesterday's content meeting I suggested scaling the price of texture uploads based on pixel area. I wonder if a change like that would encourage people to use smaller textures and/or make better use of the UV space? Even if it's not an actual money deterrent (upload fees have never been a deterrent), at least it would acknowledge that a 32x32 is not equal to a 1Kx1K.

There is no way to know what a texture will be used for prior to upload. Smaller textures might be better individually, but using one large texture as an atlas is significantly faster than 4 x 512s (etc).
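A quick sanity check on that point: four 512s and one 1024 atlas contain exactly the same number of pixels, so the win isn't memory; it's fewer separate textures for the renderer to fetch, decode and bind:

```python
# Same pixel budget either way; the atlas just means one texture object instead of four.
four_512s = 4 * 512 * 512
one_1024_atlas = 1024 * 1024
print(four_512s == one_1024_atlas)       # True: 1,048,576 pixels in both cases
print("separate textures to manage:", 4, "vs", 1)
```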

There is no reason that texture memory use cannot be entirely solved in software, allowing for unlimited texture detail on everything.

I'd very much like to see 4K textures and some code to better handle texture degradation ... but that will have to wait till we have the cache changes from LL as they are quite literally the wheels this fun-bus rides on. We have some back-of-napkin ideas but the exact method depends on how the cache performs, worst case scenario being the new cache is as terrible as the old cache and we leverage large amounts of system ram as a buffer. (yes 4K textures, or bigger, really really, bring it on, challenge accepted)

Mesh detail on the other hand .. eeeeehhhh, unless someone comes up with a generic FOSS GPU-based decomposition library that performs acceptably (make that exceptionally) in all circumstances...


40 minutes ago, Love Zhaoying said:

Is there “texture abuse” like “prim torture”?

Your computer has a limited amount of memory, even more so since SL will not use all of your available memory.

Textures are stored in that memory while being displayed on your screen.  If that memory is filled, several things happen:

  • Sharp decline in framerate
  • Texture thrashing (when textures keep derezzing due to being shuffled in and out of memory)
  • Stuttering (where SL freezes up as you try to move the camera around, because it's desperately trying to move textures in and out of memory)

There are objects, from tiny attachments to larger environmental objects, that use hundreds of MB worth of textures. To the point where it's not uncommon to see avatars using a couple hundred MB to nearly a full GB of textures.
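To put rough numbers on that, here's the standard back-of-the-envelope math for uncompressed RGBA textures with mipmaps (compressed formats shrink the absolute figures, but the proportions are what matter):

```python
# How quickly 1024x1024 textures add up, assuming uncompressed RGBA8 plus mipmaps.
def texture_mib(width, height):
    base_bytes = width * height * 4          # 4 bytes per pixel (RGBA8)
    with_mips = base_bytes * 4 / 3           # full mip chain adds roughly 1/3
    return with_mips / (1024 * 1024)

per_1024 = texture_mib(1024, 1024)           # ~5.3 MiB each
print(f"one 1024x1024: {per_1024:.1f} MiB")
print(f"avatar wearing 50 of them: {50 * per_1024:.0f} MiB")    # ~267 MiB
print(f"avatar wearing 150 of them: {150 * per_1024:.0f} MiB")  # ~800 MiB
```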

 In addition, textures need to be downloaded, and you have a limited amount of space dedicated to your SL texture cache. This means excessive amounts of bandwidth are used not just to download all these textures, but to repeatedly redownload them. This results in several issues:

  • Excessive bandwidth use
  • Excessively long rez times
  • That problem where rigged mesh bodyparts appear floating around the space where your avatar should be for several minutes before finally snapping to your avatar

Despite what some people think, there is no way we're going to see some magical software fix that allows for unlimited texture detail. Seriously, if anyone could figure out how to pull that literal miracle off and patent it, they'd be rich. Every videogame developer in the world would be licensing it off of them. If there were an existing method to do this, then videogames would employ this miracle rather than carefully managing texture use (which is what they actually do). There are technologies employed to manage or reduce the memory use of textures where possible, but these technologies are always paired with efficient use of textures, not used as a replacement for it.


30 minutes ago, Penny Patton said:

Textures are stored in that memory while being displayed on your screen.

Computer or graphics card memory?

*Edit* In addition, I saw massive improvement on machines with SSD. I suspect “disk thrashing” for non-SSD systems is a major issue.

