a debug setting that may really help speed things up



RenderGLMultiThreadedTextures: Allow OpenGL to use multiple render contexts for loading textures (may reduce frame stutters, doesn't play nice with Intel drivers).

By default this is set to FALSE

It can be switched to TRUE

Not sure why this is 'off' by default - I suppose there must be a reason, but on a PC, at least with a decent graphics card, I can't see any downside to turning it 'on'.


4 minutes ago, Jackson Redstar said:

That would, I presume, be the drivers for Intel integrated graphics, which all of SL doesn't play nice with anyway, lol

Another possible reason not to enable it by default, maybe:

52 minutes ago, Jackson Redstar said:

But on a PC at least with a decent graphics card, I can't see any downside to turning it 'on'

It will be interesting if someone with knowledge of the setting reports any downside to using it, but possibly if someone has a "not-so-decent graphics card", then it could go poorly?  You'd think if this setting is GOOD, then there'd be some "auto-detect" or a checkbox in Advanced settings..!

 


2 hours ago, Jackson Redstar said:

SL has always been like this though. There are settings in the debug settings that can, under the right circumstances, help performance, but nobody really knows anything about them other than, of course, the developers.

If this were any other company, they would have already tested out a variety of hardware with this and determined which hardware to enable/disable it with. Unfortunately this is not the case.

As for what RenderGLMultiThreadedTextures actually does, it uses multithreading to load textures into memory. The performance gain from this is going to depend more on your CPU, internet connection, and disk read/write speed than on your GPU. PCIe bandwidth for the GPU connection will of course be a factor too, but most motherboards run the primary GPU slot at x16 anyway.
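To illustrate the general pattern (a rough sketch only, not the viewer's actual code; every name in it is made up for illustration): worker threads decode texture data off the render thread and queue the results, and then either the render thread or, with a multi-context scheme like this setting enables, the workers themselves hand the pixels to the driver.

```cpp
// Rough sketch of multi-threaded texture loading - NOT the viewer's actual code.
// Workers "decode" textures and push them onto a queue; the render thread
// drains the queue and hands the pixels to the driver. With a setting like
// RenderGLMultiThreadedTextures, the upload step could instead run on the
// workers themselves, each using a GL context shared with the main one.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct DecodedTexture { int id; std::vector<unsigned char> pixels; };

std::queue<DecodedTexture> g_ready;
std::mutex g_mutex;
std::condition_variable g_cv;

// Stand-in for glTexImage2D()/glTexSubImage2D(); a real upload needs a GL context.
void uploadToGpu(const DecodedTexture& tex) {
    std::printf("uploading texture %d (%zu bytes)\n", tex.id, tex.pixels.size());
}

void decodeWorker(int first, int count) {
    for (int i = first; i < first + count; ++i) {
        DecodedTexture tex{i, std::vector<unsigned char>(64 * 64 * 4, 0)}; // fake decode
        std::lock_guard<std::mutex> lock(g_mutex);
        g_ready.push(std::move(tex));
        g_cv.notify_one();
    }
}

int main() {
    std::thread w1(decodeWorker, 0, 4), w2(decodeWorker, 4, 4);
    for (int uploaded = 0; uploaded < 8; ++uploaded) {   // "render thread" loop
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return !g_ready.empty(); });
        DecodedTexture tex = std::move(g_ready.front());
        g_ready.pop();
        lock.unlock();
        uploadToGpu(tex);
    }
    w1.join();
    w2.join();
}
```

Because the decode and queueing work scales with CPU cores, network, and disk speed, that is where most of any gain would come from.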


4 hours ago, Jackson Redstar said:

...there are settings in the debug settings that can, under the right circumstances, help performance but nobody really knows anything about them...

It's like a game within a game.


16 minutes ago, Monty Linden said:

It's like a game within a game.

Messing around with the debug settings is my favorite hobby, turn X to true, increase Y, decrease Z, etc 

Until my 750ti has a fit and the screen turns purple. 


FWIW I tried setting the value to "true" and I saw no change in FPS at all, but I notice that on arrival after a TP the scene appears to rez more quickly than it did, so maybe it helps a bit.

My system is an Nvidia GTX 1660 Ti 6GB GPU with an AMD Ryzen 9 3900X CPU and 32GB of 3200MHz system RAM.  Nothing is overclocked.  I currently have UDP bandwidth set at 2500 on a 70Mbps FTTC connection (BT Infinity).

Edited by Aishagain

2 hours ago, gwynchisholm said:

Messing around with the debug settings is my favorite hobby, turn X to true, increase Y, decrease Z, etc 

Until my 750ti has a fit and the screen turns purple. 

Firestorm has "save" and "restore" settings options, which makes it easy to experiment.

Edited by SandorWren

3 hours ago, Aishagain said:

FWIW I tried setting the value to "true" and I saw no change in FPS at all, but I notice that on arrival after a TP the scene appears to rez more quickly than it did, so maybe it helps a bit.

My system is an Nvidia GTX 1660 Ti 6GB GPU on an AMD Ryzen 9 3900X CPU and 32GB system RAM.  Nothing is overclocked.  I currently have UDP bandwidth set at 2500 on a 70Mbps FTTC connection (BT Infinity).

It is hard to say if this is a CPU- or GPU-related setting - but anything that lets our systems use more threads, assuming they are capable of it, seems to be a good thing.


I tested this for a couple of hours today on my Nvidia 3080, and it's probably off by default for a reason on this config. No measurable difference in FPS, but in practice camming became micro-stuttery. Smooth and no hitches with it off.


9 minutes ago, Love Zhaoying said:

I suspect if the setting was "generally beneficial", @Monty Linden would not have given a cryptic, playful answer!

No, I was just having fun.  We often abandon debug settings in place, or they become relevant only to certain platforms and configurations.  Unless a setting is known to have an effect, the tweaking in here may just be the placebo effect.


9 minutes ago, Monty Linden said:
20 minutes ago, Love Zhaoying said:

I suspect if the setting was "generally beneficial", @Monty Linden would not have given a cryptic, playful answer!

No, I was just having fun.  We often abandon debug settings in place, or they become relevant only to certain platforms and configurations.  Unless a setting is known to have an effect, the tweaking in here may just be the placebo effect.

...which is what I meant!!! 😻


Well, who knows. Some people on the FS Beta group mentioned it, saying it could help. But honestly this is part of a much bigger issue. There are probably settings in there that may be fine for, or even help, higher-end systems - which would explain why, by default, things are tuned for lower-end systems. I remember even years and years ago Strawberry Singh (now a Linden) had a blog post on some mesh debug settings that supposedly improved performance.

I get that nobody wants the general public rooting around in debug settings, but it sure would be nice if there was a list of changes that could help those on higher-end systems, or at least a suggested list that we could try. The biggest issue performance-wise is other avatars and what they wear. Sure, we can jellydoll them, turn them off, or make them impostors, but a way to get them rendered faster on higher-end systems would be nice as well.


Played around with this setting a bit - keep in mind this is with an N100 and a 4GB GTX 750 Ti - and the only thing I really noticed it do was that textures seemed to load in slightly faster when camming around? And that's "slightly", emphasized. I wasn't sure there was any change for a while, but with a lot of back and forth, it's definitely an improvement, just a very small one. The downside is, as mentioned by others already, there's a bit of stutter while it does that. Also very subtle, not game-breaking by any means. But this is all still in the realm of a potential coincidence - maybe my cache is just being loaded faster for whatever reason.

I'm gonna try it on the X1 Nano with the Intel Iris Xe graphics and see if the warning there has any merit.

 


As the official viewer only uses a single thread for this, it is only slightly useful. It also needs proper support from the GPU driver to be of any real use (see https://github.com/secondlife/viewer/blob/5c16ae13758bdfe8fe1f13d5f67eabbb6eaa30a1/indra/llrender/llimagegl.cpp#L2568 ). This may be reasonable, as the whole texture decode pipeline probably does not produce more textures to bind than one thread can handle.

In the Cool VL Viewer, a similar setting uses as many threads as there are texture decode threads running, to bind multiple textures in parallel. And there are some optimizations in place to make some callbacks far less costly too, which may reduce stuttering (see the message thread here http://sldev.free.fr/forum/viewtopic.php?f=10&t=2335 and the resulting code; it's basically a scheme to only check when a texture is actually needed, instead of blocking on the check when it is bound).
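One common way to implement that kind of deferred, non-blocking check (a sketch of the general technique, not necessarily exactly what either viewer does) is a GL fence sync: the upload thread drops a fence right after handing the texture to the driver, and the render thread only polls it, with a zero timeout, when it first wants to use the texture. Roughly, assuming an OpenGL 3.2+ context on each thread and a loader such as GLAD providing the function pointers:

```cpp
// Sketch only: assumes a current OpenGL 3.2+ context on each thread involved
// and that your GL loader header (e.g. GLAD/epoxy) is already included.

// Upload thread: called right after glTexImage2D()/glTexSubImage2D().
GLsync markUploadDone()
{
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();                                  // make sure the fence reaches the driver
    return fence;
}

// Render thread: called only when the texture is actually needed for a draw.
bool textureReady(GLsync& fence)
{
    if (!fence)
        return true;                            // no upload pending
    GLenum r = glClientWaitSync(fence, 0, 0);   // timeout 0 = poll, never block
    if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED)
    {
        glDeleteSync(fence);
        fence = 0;
        return true;                            // safe to bind and render with it
    }
    return false;                               // not ready yet; keep the placeholder
}
```

The blocking variant would wait on the fence (or call glFinish()) right at upload time, which is where the stutter tends to come from.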

What are the risks of enabling it?

  • Temporary spikes in RAM/VRAM usage while the GPU driver churns through all the bind requests. I saw random spikes of 5-8 GB with AMD's drivers.
  • Stuttering and misbehaving GPU drivers. A driver can either process the texture bind requests in parallel (e.g. NVIDIA and newer AMD drivers), or it can simply serialize them. In the latter case, this would make things worse.
  • Crashes if the GPU driver is bad.
  • Overloading your CPU with too many threads if you have a slow machine without many CPU cores (see the sketch after this list).
  • Extra RAM usage. Every thread needs some RAM, so spawning more threads on a machine without enough of it (e.g. 8 GB and an iGPU) is a bad idea.
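A simple guard against the last two points (a sketch, not anything either viewer actually does; the cap of half the hardware threads is an arbitrary illustrative choice) is to derive the worker count from what the machine reports:

```cpp
#include <algorithm>
#include <thread>

// Pick a texture-bind worker count that will not swamp a small CPU.
// hardware_concurrency() may return 0 if the value is unknown.
unsigned pickBindThreadCount()
{
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0)
        return 1;                                // unknown hardware: stay conservative
    return std::max(1u, std::min(4u, hw / 2));   // illustrative cap, not a tuned value
}
```

On an 8-thread desktop this lands at 4 workers; on a dual-core laptop it stays at 1, which is roughly the behaviour the bullets above argue for.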
Edited by Kathrine Jansma
