Linden Performance viewer and Texture thrashing



On 7/26/2022 at 8:39 AM, Gavin Hird said:

macOS (not OS X) does not really give you any tools to query how much VRAM is available, because all current Apple systems are marketed with unified memory, where the CPU and GPU share and allocate from the same pool. So depending on your system configuration, the GPU can allocate close to 120 GB of memory if it wants to on a system with 128 GB of main memory.

The above also means that GPU-allocated memory is subject to system paging, so in theory the GPU memory used by all applications running on the system could be overcommitted.

What the Apple programming guidelines advise is for the application to subscribe to low-memory system events and act accordingly, either reducing memory use or terminating gracefully. Because of other issues with the viewer code, subscribing to such events is not really possible...

 

How does that matter? All you need from the OS (or your GPU) is the total VRAM; you scale the maximum usable texture memory according to that, set it to 90% by default (leaving 10% spare for everything else), and give the user a simple, direct, non-convoluted option (unlike the current one) that gives absolute and precise control over how much VRAM SL is allowed to use, and you're set. I don't see why it's complicated, really.

I've been smashing my head against a wall over this forever. Ever since I added automatic memory usage and allowed all memory to be used, I was worried that AMD (and their nonexistent VRAM reporting) would screw it up and cause issues; it hasn't. Most issues still come from users who don't understand a simple "Max Texture Memory" option, for some reason turn it down to unusably small values (like 512 MB), and then act surprised when texture thrashing happens... or alternatively from those who turned automatic memory management off and never touched the setting (leaving it at a low value), despite the Viewer's clear warnings that turning it off is bad. The Intel users I had didn't have memory issues either, just Deferred not working (because some shader is unsupported on almost all Intel GPUs).

https://git.alchemyviewer.org/NiranV.Dean/blackdragon/-/blob/master/indra/newview/llviewertexturelist.cpp#L1474

Here's the very simple function to do that.
I hooked it up here:
https://git.alchemyviewer.org/NiranV.Dean/blackdragon/-/blob/master/indra/newview/llviewertexture.cpp#L584

And I modified the rest a bit to give the Viewer a small grace timer so it doesn't go into texture-thrashing mode the split second memory usage exceeds the automatically calculated maximum. This was needed because the recalculation doesn't happen with the utmost priority and only runs after textures are already allocated, so texture memory usage rises first and THEN a new maximum is calculated.

It's very simple and has been doing the very job LL has failed to do for a long time now, and I'm sure it could be simplified further, to the point where it literally just takes whatever it can get minus a small margin. It currently also includes an option to set a maximum manually.
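To make the idea concrete, here is a minimal sketch of that scheme: a budget derived from total VRAM (90% by default), a manual override, and a short grace period before texture discarding kicks in. All names, the fraction, and the grace period are illustrative; the actual implementation is in the files linked above.

```cpp
// Sketch only: caps texture memory at a fraction of total VRAM, honors a
// manual override, and applies a short grace period before declaring the
// budget exceeded. All names and values here are illustrative.
#include <chrono>

struct TextureMemoryBudget
{
    int   totalVRAMMB  = 0;     // reported by the GPU / OS
    int   manualMaxMB  = 0;     // 0 = automatic
    float autoFraction = 0.90f; // leave ~10% for everything else
    std::chrono::steady_clock::time_point overBudgetSince{};
    bool  overBudget   = false;

    int maxTextureMemoryMB() const
    {
        if (manualMaxMB > 0)
            return manualMaxMB;                       // user override
        return static_cast<int>(totalVRAMMB * autoFraction);
    }

    // Returns true only after usage has stayed above the limit for the whole
    // grace period, so a momentary spike (textures allocated before the
    // budget is recalculated) does not immediately trigger discarding.
    bool shouldDiscardTextures(int usedTextureMemoryMB,
                               std::chrono::seconds grace = std::chrono::seconds(5))
    {
        using clock = std::chrono::steady_clock;
        if (usedTextureMemoryMB <= maxTextureMemoryMB())
        {
            overBudget = false;
            return false;
        }
        if (!overBudget)
        {
            overBudget = true;
            overBudgetSince = clock::now();
            return false;
        }
        return clock::now() - overBudgetSince >= grace;
    }
};
```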


4 hours ago, NiranV Dean said:

How does that matter? All you need from the OS (or your GPU) is the total VRAM; you scale the maximum usable texture memory according to that, set it to 90% by default, and give the user a simple option with precise control over how much VRAM SL is allowed to use, and you're set. [...]

No, on macOS, where GPU RAM = installed system RAM, you only need to make normal memory requests to the operating system and leave it to the OS to fulfill them, even if the system needs to swap to do so.

You only need to handle low-memory situations as signaled by the operating system and reduce your memory requirements, which can be done by flushing caches, temporarily turning off features like shadows, ALM and so on. If you run out of options to reduce memory consumption and the system still signals a low-memory situation, you terminate gracefully.

Of course you need to be sensible about how much memory you request, meaning you request less GPU memory on a machine with 8 GB of system memory than on a machine with 64 GB. Consequently you set your texture memory limits accordingly.

 

For Mac models with Intel processors it is pretty simple to set some general rules, because Apple has been consistent in how they equip their machines with VRAM.

All Intel-based portable machines, and the Intel-based Mac minis, supported by macOS 10.14 or higher have Intel integrated graphics with 1536 MB of graphics memory, so you only need to check for the presence of Intel graphics.

All desktop Macs with Intel processors and AMD graphics cards that are supported by macOS 10.14 or higher have a minimum of 4 GB of graphics memory.

All Macs with Apple Silicon support as much graphics memory as there is installed system RAM, which is a minimum of 8 GB.

No Macs supported by 10.14 or higher have NVIDIA graphics cards.

If you set the viewer's deployment target (minimum system requirement) to 10.14 or higher, you know you will never encounter a system with less than 1536 MB of graphics memory, and you can set the defaults accordingly.

If you run on a machine with Apple Silicon, you know you can safely allocate 2048+ MB of graphics memory. You can safely do the same on a system with an AMD GPU (up to 4 GB).
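Those rules are simple enough to sketch as a default-budget heuristic. The hw.memsize and hw.optional.arm64 sysctls are real macOS keys; the GPU-vendor check is left as a hypothetical flag the caller would fill in, and the thresholds are just the numbers quoted above.

```cpp
// Sketch of a macOS default VRAM heuristic based on the rules above.
// Assumes a 10.14+ deployment target; isIntelIntegratedGPU is a hypothetical
// flag the caller would fill in from its own GPU detection.
#include <sys/sysctl.h>
#include <cstdint>

static bool isAppleSilicon()
{
    int arm64 = 0;
    size_t size = sizeof(arm64);
    // hw.optional.arm64 is 1 on Apple Silicon and absent on Intel Macs.
    if (sysctlbyname("hw.optional.arm64", &arm64, &size, nullptr, 0) != 0)
        return false;
    return arm64 == 1;
}

static int defaultGraphicsMemoryMB(bool isIntelIntegratedGPU)
{
    if (isAppleSilicon())
    {
        // Unified memory: graphics memory == installed RAM (8 GB minimum).
        uint64_t memsize = 0;
        size_t size = sizeof(memsize);
        sysctlbyname("hw.memsize", &memsize, &size, nullptr, 0);
        return static_cast<int>(memsize / (1024 * 1024));
    }
    if (isIntelIntegratedGPU)
        return 1536;   // Intel iGPU Macs on 10.14+: 1536 MB
    return 4096;       // Intel Macs with an AMD dGPU: at least 4 GB
}
```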


On 7/27/2022 at 8:06 PM, Gavin Hird said:

No, on macOS, where GPU RAM = installed system RAM, you only need to make normal memory requests to the operating system and leave it to the OS to fulfill them [...] If you run out of options to reduce memory consumption and the system still signals a low-memory situation, you terminate gracefully.

I'ma stop you right there.

I had a good laugh. What you describe sounds like a utopia. You do realize that apps, and games specifically (which SL very much counts as), do not simply "back off" and free up their memory again? Especially not to make space in a low-memory situation! SL would rather crash than free up memory. SL and many other apps will simply keep allocating more memory until the OS gives in and explodes (or the app crashes, whichever comes first).


11 hours ago, NiranV Dean said:

I'ma stop you right there. I had a good laugh. What you describe sounds like a utopia. [...] SL would rather crash than free up memory.

I know, but a properly coded Mac app does. The SL viewer is a hack from that standpoint too.


That's like checking every malloc() return carefully and then still getting killed randomly by the OOM killer. Usually not worth the trouble.

You technically CAN handle low-memory situations, and the usual app compatibility guidelines recommend testing for and handling them (Windows Application Verifier tests low-memory situations too, e.g. https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/application-verifier-tests-within-application-verifier#low-resource-simulation). But most applications do not care and just crash or shut down on their first malloc failure.

Handling all malloc failures is doable, sure, especially if you override the platform allocator and keep some memory pool you can throw away in emergencies. But that's not always done and usually needs dedicated effort to make it happen.
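For illustration, a minimal sketch of that emergency-pool idea (sometimes called a memory ballast), hooked into operator new via set_new_handler rather than into malloc itself; the reserve size and the reactions are arbitrary placeholders, not anything the viewer actually does.

```cpp
// Sketch: reserve a ballast block at startup and release it when an
// allocation fails, buying headroom to flush caches or shut down cleanly.
#include <cstdlib>
#include <new>

namespace {
    void*  g_ballast      = nullptr;
    size_t g_ballast_size = 64 * 1024 * 1024;   // arbitrary 64 MB reserve

    void on_out_of_memory()
    {
        if (g_ballast)
        {
            // Release the reserve so the retry (and any cleanup) can succeed.
            std::free(g_ballast);
            g_ballast = nullptr;
            return;            // operator new retries the failed allocation
        }
        // Reserve already spent: nothing left to give back, bail out.
        std::abort();
    }
}

void install_memory_ballast()
{
    g_ballast = std::malloc(g_ballast_size);
    std::set_new_handler(on_out_of_memory);   // called when ::operator new fails
}
```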

 


57 minutes ago, Kathrine Jansma said:

That's like checking every malloc() return carefully and then still getting killed randomly by the OOM killer. Usually not worth the trouble. [...]

 

No, you subscribe to system events such as applicationDidReceiveMemoryWarning (UIKit on iOS) or, on macOS, a dispatch memory-pressure source (DISPATCH_SOURCE_TYPE_MEMORYPRESSURE) with the event flags DISPATCH_MEMORYPRESSURE_WARN, DISPATCH_MEMORYPRESSURE_CRITICAL and DISPATCH_MEMORYPRESSURE_NORMAL.

You don't monitor every malloc(), which on macOS you don't really have full control over anyway.
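For reference, a small sketch of registering such a memory-pressure dispatch source from C++; the reactions in the handler (what a viewer would actually flush or disable) are placeholders, not anything the SL viewer does today.

```cpp
// Sketch: listen for macOS memory-pressure events via libdispatch.
// What to do in each state (flush caches, disable shadows/ALM, exit) is up
// to the application; the reactions below are placeholders.
#include <dispatch/dispatch.h>
#include <cstdio>

static void memory_pressure_changed(void* context)
{
    dispatch_source_t source = static_cast<dispatch_source_t>(context);
    uintptr_t pressure = dispatch_source_get_data(source);

    if (pressure & DISPATCH_MEMORYPRESSURE_CRITICAL)
        std::puts("critical pressure: free everything possible or exit gracefully");
    else if (pressure & DISPATCH_MEMORYPRESSURE_WARN)
        std::puts("warning pressure: flush caches, reduce the texture budget");
    else if (pressure & DISPATCH_MEMORYPRESSURE_NORMAL)
        std::puts("pressure back to normal: budgets can be relaxed again");
}

dispatch_source_t install_memory_pressure_source()
{
    dispatch_source_t source = dispatch_source_create(
        DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0,
        DISPATCH_MEMORYPRESSURE_NORMAL | DISPATCH_MEMORYPRESSURE_WARN |
        DISPATCH_MEMORYPRESSURE_CRITICAL,
        dispatch_get_main_queue());

    dispatch_set_context(source, source);   // handed to the handler as context
    dispatch_source_set_event_handler_f(source, memory_pressure_changed);
    dispatch_activate(source);
    return source;
}
```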

