You are about to reply to a thread that has been inactive for 1459 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

7 hours ago, Jennifer Boyle said:

LL could also monitor the quality of content using algorithms, and ban the sale of content that doesn't meet announced current standards.

LL has been working on project ArcTan, a better way to monitor content "weight", for a few years now. After all those years of neglecting performance issues, they have a lot of catching up to do so I don't think ArcTan will be launched anytime soon but if they get it right, it should help a lot.

I don't think banning poor quality content is a good idea, not even if we somehow managed to find a way to clearly define what is good and what is poor quality. But with a good way to measure content performance and clear limits to how much load is allowed, it shouldn't be a problem. If people want to use poorly optimized heavy items, it would only mean they could have less content.


11 hours ago, Jennifer Boyle said:

LL could also monitor the quality of content using algorithms, and ban the sale of content that doesn't meet announced current standards.

LL can't even get algorithms for checking and banning offensive language right.  I don't think they would do any better with offensive content.


23 minutes ago, Lindal Kidd said:

LL can't even get algorithms for checking and banning offensive language right.  I don't think they would do any better with offensive content.

Yes but let's be fair to LL, even giants like Alphabet Inc. and Facebook can't get their AIs to evaluate content properly.


4 hours ago, Ardy Lay said:

I think everybody needs a place to go where they can see what SL rendering is like when all content present is "optimized" for SL rendering.  I suspect most will have to take stuff off to keep themselves from spoiling the results.  Empty regions need not apply.  😉 

It's not really that simple.

Any object can be evaluated independent of everything else and shown to be well optimized or not.

But SL doesn't die because of one object; it dies because there are thousands of objects. Take away the one un-optimized thing, and SL still struggles exactly the same.

Generally speaking, rezzed objects are only going to impact performance if they make excessive use of blended alpha textures, or emit blended alpha particles.

As a rule, the sheer amount of stuff impacts performance much sooner than the "quality" of the stuff, which brings us back to draw distance: the lower you set it, the less stuff your viewer has to worry about.
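To put a rough number on that last point (my own back-of-envelope assumption of roughly uniform content density, not anything LL publishes): the amount of stuff the viewer has to consider grows with the area of the draw-distance circle, so it scales with the square of the distance. Halving draw distance cuts the workload to about a quarter.

```python
# Back-of-envelope sketch only: assumes content is spread roughly evenly,
# so objects in range scale with the area of the draw-distance circle (~r^2).

def relative_load(draw_distance_m, baseline_m=256):
    """Scene workload relative to a 256 m draw distance, assuming ~r^2 scaling."""
    return (draw_distance_m / baseline_m) ** 2

for dd in (256, 128, 64, 32):
    print(f"{dd:>3} m draw distance -> ~{relative_load(dd):.0%} of the baseline load")
```

Obviously real regions aren't uniform, but the square-law intuition is why dropping draw distance helps so dramatically compared to other settings.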

 

OK, but what about avatars? Those tend to be hyper-detailed.

Places with lots of avatars run slower, but that is in large part due to avatars not just being the sum of their attachments. Your viewer has to deal with a long list of indirect avatar related stuff - position updates, dozens of layered animations and so on. A crowd full of simple avatars is still a crowd.

You can easily have a screen full of beautiful avatars, a high end graphics card that's barely doing anything, and terrible performance.

 

This isn't to say over-detailed content isn't a problem, or that fixing it isn't worth pursuing, but in the grand -death_by_a_thousand_cuts- / -SL_is_a_Russian_doll_made_entirely_from_bottlenecks- scheme of things, it's nowhere near the entirety of the problem, and solving just that won't magically make SL run appreciably better (especially since the viewer spends most of its time fetching and decoding content).


AMD RX 5700XT and Ryzen 2700X with 32 GB RAM. In a region with most prims used but no people around, I get about 190 FPS; add 20 people and it drops to around 80 FPS, both on max settings. Everything would still look the same if I used an i5 with just integrated graphics; only the FPS would go down for me.


An interesting thing about where compression/decompression of digital assets is heading:

The Sony PS5 has a dedicated chip for decompression of the ZLIB and Kraken formats. The principal designer of Kraken is Charles Bloom, who is a legend in the field of compression.

From the specs, it appears that this chip has a Kraken decompression rate comparable to nine Zen 2 cores of a conventional CPU.

I think this chip (and/or similar decompression chips) is going to start appearing in other devices, which will be quite good.

I assume that Linden will take these chips into account when they rebuild the SL codebase on Vulkan, which I think they will do. Not this year, because of all the current cloud transition work, but maybe a start on it in earnest next year.

An article on the PS5 is here:

https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision


4 hours ago, Mollymews said:

I assume that Linden will take these chips into account when they rebuild the SL codebase on Vulkan, which I think they will do. Not this year, because of all the current cloud transition work, but maybe a start on it in earnest next year.

Consoles present a very homogeneous environment for developers, who will only ever have to support a very limited number of very specific configurations. The manufacturer (Sony, whoever) goes to great lengths to make sure hardware revisions function exactly the same as previous versions. Dedicated special chips can happen in that environment because they are baked into the core spec, and they are probably invisible to the applications/games being run, as the OS will just hand the work off.

I would absolutely not expect dedicated support for compression chips from LL, especially as SL has to run on every random Windows device going back over a decade.

This alone makes a decent case for not moving the rendering environment off OpenGL. If it weren't for Apple deprecating it, this discussion probably wouldn't even be on the table, and you can be sure LL are going to explore every possible option to get out of doing it if they can get away with it.


10 minutes ago, CoffeeDujour said:

I would absolutely not expect dedicated support for compression chips from LL, especially as SL has to run on every random Windows device going back over a decade.

Compression libraries also come in software, so programs on a general-purpose computer can switch: use the chip if present, else use the software.

A bit like how it was when FPUs came out: some computers had them, some didn't, and over time all general-purpose computers came with one. Same with dedicated graphics chips/cards back in the day: use the hardware when present, else use the software emulator.
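That "use the chip if present, else use the software" switch can be sketched in a few lines. The hardware probe below is entirely hypothetical (there is no standard API for such a chip today); only the software fallback via Python's real zlib module is concrete:

```python
import zlib

def hardware_decompressor_available():
    """Hypothetical probe for a dedicated decompression chip/driver.
    Always False here; a real implementation would query the OS or driver."""
    return False

def decompress(data: bytes) -> bytes:
    """Decompress zlib-format data, preferring hardware offload when available."""
    if hardware_decompressor_available():
        # Hypothetical fast path: hand the buffer to the dedicated chip.
        raise NotImplementedError("hardware path is platform-specific")
    # Software fallback; the wire format is identical either way.
    return zlib.decompress(data)

payload = zlib.compress(b"asset bytes " * 1000)
assert decompress(payload) == b"asset bytes " * 1000
```

The point is that the compressed format doesn't change, only who does the work, which is exactly why the FPU comparison holds.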


13 hours ago, Derek Torvalar said:

Out of curiosity, the people reporting their CPU speeds, are you overclocking them or just running at default?

I am also curious to know whether these systems are bottlenecking somewhat because of their hardware configs. Are they store bought or home made rigs?

Finally, as most who have been here for a decade or more know, this issue has always been here, i.e. SL has never utilized the latest hardware capabilities, and that is not likely ever to change.

Full disclosure: I have no complaints, mainly because I know complaining won't help, but also because I never see FPS below 30.

Mine is custom-built. It is overclocked, but by software; it has software that adjusts the CPU clock speed according to the CPU temperature. I am not technically adept enough to know where the bottlenecks are. I can say that each instance of Firestorm uses about 20-25% of my GPU's power, so I can always run three without much change in performance, often four, and never five. CPU load is never very high.


1 minute ago, Mollymews said:

Compression libraries also come in software, so programs on a general-purpose computer can switch: use the chip if present, else use the software.

A bit like how it was when FPUs came out: some computers had them, some didn't, and over time all general-purpose computers came with one. Same with dedicated graphics chips/cards back in the day: use the hardware when present, else use the software emulator.

You've been able to buy dedicated gzip cards for years (e.g. from AHA), but good luck offloading anything onto them outside of specific environments. It tends to only happen on Linux (or *nix-like platforms), where a single common application (which gets replaced by a vendor one) provides system-wide compression functionality. Even then... YMMV.

Not going to happen on Windows.


4 minutes ago, Jennifer Boyle said:

Since my CPU speed is limited by CPU temperature, would I get better SL performance by disabling all but a few cores? I think the remaining few could run faster if the others were disabled. I suppose it would help, too, if the viewer were the only thing using one core. Right?

 

That might help.  It seemed to help on mine just a little, when I tried.  I found that providing better airflow through the case helped a lot more.


23 minutes ago, CoffeeDujour said:

Not going to happen on Windows.

I would say most likely it will.

Companies like NVIDIA and AMD will be keeping a close eye on the Sony effort. At the moment, most of the high-throughput decompressors use graphics card cores to help decompress data blocks in parallel.

When NVIDIA and AMD start sticking decompression chips on their graphics cards, which I think they will, then it will come to Windows.


1 hour ago, Jennifer Boyle said:

Since my CPU speed is limited by CPU temperature, would I get better SL performance by disabling all but a few cores? I think the remaining few could run faster if the others were disabled. I suppose it would help, too, if the viewer were the only thing using one core. Right?

 

Simply put: "I'm only taking left turns with my car. I think I could preserve the wheels on the right and simply take them off. That's a good idea, right?" ... no. Make sure your computer has decent cooling and leave it as it is.

And as for that other post about not seeing a difference between the look of low-end integrated graphics and a decent GPU: really?


@Derek Torvalar
I never overclock. It doesn't interest me, and it can just be too hot here in summer.
I also have some huge CPU fan thingy that came in the box of parts we ordered. I didn't order it lol (thx bro).
I don't know how it attaches, and surely it would need a support bracket.
The only thing I would add is that I save and save and work my guts out for my PCs.

[Attached image: aftermarket CPU cooler]


Thanks to those who posted answers to my queries.

Both of you cite heat-related issues concerning your CPUs. Heat is the killer, of course, of both GPU and CPU speeds. I am going to assume both of you are using air cooling, so maybe switching to a water cooling system for your CPU could be an inexpensive way to keep temps down. I got a 9900K last year; it runs very hot, comparatively speaking, but I still manage to keep temps under 60°C despite it being OCed to 5.2 GHz, even in summer. Jennifer, software OCing is fine if you are uncomfortable doing it manually, but it does not usually yield the best results.

Both GPUs are the EVGA Hybrid version as well, OCed to 2150, and again I never see temps go above 50°C. Converting a GPU to water cooling is a little more trouble, but as GPU speed isn't as critical for SL, it may not really be worthwhile anyway.

However, SL is gonna be what it always has been. Any minor attempts at corrections to improve it are really going to have negligible effects, I think.


The days of stable desktop monster CPU overclocks are long gone.

Get a good cooler, AIO if you're wanting to be fancy, then leave it alone.

At best you're going to eke out a couple extra percent of CPU speed, and then the scheduler will laugh in your face and put SL on the slowest, coolest core.

If you're really, really determined to go ham, play with your memory speed; it's way more important.


23 minutes ago, CoffeeDujour said:

The days of stable desktop monster CPU overclocks are long gone.

Get a good cooler, AIO if you're wanting to be fancy, then leave it alone.

At best you're going to eke out a couple extra percent of CPU speed, and then the scheduler will laugh in your face and put SL on the slowest, coolest core.

If you're really, really determined to go ham, play with your memory speed; it's way more important.

Yep, got that covered too: 32 GB at 4000. But that is stock speed; I haven't really had the need to push it yet.


On 4/20/2020 at 5:06 PM, Sassy Kenin said:

AMD RX 5700XT and Ryzen 2700X with 32 GB RAM. In a region with most prims used but no people around, I get about 190 FPS; add 20 people and it drops to around 80 FPS, both on max settings. Everything would still look the same if I used an i5 with just integrated graphics; only the FPS would go down for me.

What monitor re*****ion are you using?

ETA: Why does the forum software not want me to say the word that sounds like REZ-O-LOO-SHUN?

Let me rephrase. What is the product of the number of vertical pixels and the number of horizontal pixels on your monitor?


We have a new example of LL's misguided priorities: EEP is Out! Introducing the Environmental Enhancement Project.

I need better performance a lot more than I need that. Yes, it is really nice. Yes, I would absolutely love that they did it, IF SL PERFORMED DECENTLY WITH THE FEATURES IT ALREADY HAD.

Why do they keep adding bells and whistles instead of giving us better performance?


Could a viewer -- not necessarily a Linden viewer -- find its own performance bottlenecks while running and basically critique a configuration? The goal would be to suggest simplistic but actionable stuff such as:

  • "Add more RAM to boost average speed by 10% until you need to upgrade your GPU for further gains," or
  • "Improve the network to speed texture downloads before any rendering improvements will be noticeable."

The idea would be to go beyond static configuration analysis [*] to also take into account a specific session's actual dynamic usage (is the user frequenting high-rendering-complexity environments, or places with simple geometry but textures that never load, or real-time motion updates, or... whatever differs across sessions and users).

There used to be several little colored performance indicators in the menubar... maybe some viewer still has them, but it must have been ages since I had them enabled... were they sampling data that could be useful for something like this?

[* ETA: Such "static configuration analysis" might be a starting point. Why don't we have that at least, based on some "normal" usage profile?]
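For what it's worth, the dynamic half of the idea could be crudely sketched like this. Everything below is invented for illustration: the counter names, the sample numbers, and the thresholds are not real viewer telemetry, just the shape a "critique this session" pass might take.

```python
from statistics import mean

# Hypothetical per-frame samples a viewer could collect (illustrative numbers).
samples = [
    {"frame_ms": 40, "tex_wait_ms": 25, "gpu_busy": 0.35, "ram_pressure": 0.2},
    {"frame_ms": 38, "tex_wait_ms": 22, "gpu_busy": 0.30, "ram_pressure": 0.2},
    {"frame_ms": 45, "tex_wait_ms": 30, "gpu_busy": 0.40, "ram_pressure": 0.3},
]

def critique(samples):
    """Very naive triage: blame whichever cost dominates the averaged frame."""
    frame = mean(s["frame_ms"] for s in samples)
    tex = mean(s["tex_wait_ms"] for s in samples)
    gpu = mean(s["gpu_busy"] for s in samples)
    ram = mean(s["ram_pressure"] for s in samples)
    if tex > frame * 0.5:
        return "Improve the network: texture fetches dominate frame time."
    if gpu > 0.9:
        return "GPU-bound: lower quality settings or upgrade the GPU."
    if ram > 0.8:
        return "Memory pressure is high: adding RAM may help."
    return "No single obvious bottleneck in this session."

print(critique(samples))
```

In this made-up session, texture waits eat more than half the frame, so the suggestion would point at the network rather than the GPU, which is roughly the kind of actionable output I had in mind.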

