Second Life isn't using all of the resources? (GPU,CPU and Ram)



Hello!
So this is more of an experiment: I want to see what the highest graphics settings are that I can run while keeping a high frame rate. This is probably the most-asked question on the forums, but hey, here we are 🤣

My GPU is an AMD Radeon RX 6900.
My CPU is an AMD Ryzen 9 3900 12-Core Processor.
My RAM is 32 GB.
My download speeds are usually 350+, while upload speeds tend to be around 200 during peak hours; both tend to be higher off-peak as well.

I also usually install the viewers on an SSD, just in case, to see if that helps as well.

I've been jumping between different viewers (Second Life Viewer, Firestorm, Black Dragon, Alchemy, etc.) and they all seem to have wildly different results, with Black Dragon seemingly having the best results at 50+ fps even on its max settings, while Firestorm poops out 20 frames. Though I've noticed a recurring theme: none of the viewers are actually using all of the resources that are available to them. My CPU usage is often around 9%, RAM around 35%, and my GPU only 25%.

I'm curious if there's any way to force the viewer(s) to use more resources. I don't see any settings in the viewers that allow me to, other than whacking everything up to max, and my PC isn't chugging or anything. It demands more.

Any advice would be appreciated! I've been testing performance when visiting the Linden Newbrooke sims, since it's all built by the Lindens (except for the interiors and such), so I thought it might be a good benchmark. I understand that SL is not a "game" so to speak, but it wouldn't hurt to see what can be achieved.



I'm running

AMD 5950X

RTX 3060

32 GB of RAM

similar type systems, but at the end of the day... SL doesn't really give a crap. It's CPU bound, utilizing the fastest CPU core, so core clock speed is key. I know that sounds kind of crap, but it's what enables SL to be a platform with a wide variety of specs across its user base. Although, I personally would love to see SL move into the here and now and be able to utilize more cores on the CPU. For myself, when I go into Ultra on Firestorm or Black Dragon, my GPU maxes at 100%, with dips into the 90s... but I think I only get a max of 20% or so on the CPU. I don't know if this gives you a new perspective or not, but food for thought.
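That single-core behaviour can be sketched with some back-of-the-envelope math. A rough model (the numbers below are purely illustrative, not measurements from any viewer):

```python
# Rough model of a CPU-bound renderer: if one render thread is the
# bottleneck, frame rate scales with single-core clock speed, not
# with core count. Illustrative numbers only.
base_clock_ghz, base_fps = 4.0, 40.0

for clock_ghz in (4.0, 4.6, 5.2):
    est_fps = base_fps * clock_ghz / base_clock_ghz
    print(f"{clock_ghz} GHz -> ~{est_fps:.0f} fps")
```

Which is why overclocking (or a CPU with a higher boost clock) moves the needle in SL more than adding cores does.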


2 minutes ago, J Canucci said:

similar type systems, but at the end of the day... SL doesn't really give a crap. It's CPU bound, utilizing the fastest CPU core, so core clock speed is key. [...]

Ahhh, so it's only really using one core? That's interesting then. I'm going to assume there's no way of "brute forcing" it to use more than one core?


No, not really... and if there is, I've yet to find out how, because I definitely would. Now, if you're one of those whiz kids who knows how to overclock your CPU, that is a way to increase your clock speed. My CPU sits on a B550 motherboard, which has overclocking ability, but not as much as an X570 motherboard... and I'm not adventurous enough to attempt that. I'd probably break something.


2 hours ago, charlottethegiantess said:

I've been jumping between different viewers (Second life Viewer, Firestorm, Black Dragon, Alchemy etc) and they all seem to have wildly different results.

You forgot the Cool VL Viewer... 😜

2 hours ago, charlottethegiantess said:

Though I've noticed a recurring theme: none of the viewers are actually using all of the resources that are available to them. My CPU usage is often around 9%, RAM around 35%, and my GPU only 25%.

The current viewers all share the same characteristic: they use a mono-threaded renderer, meaning you basically get limited by the performance of a single CPU core (with the exception of very simple scenes, such as in a skybox, where the CPU will be fast enough to saturate the GPU itself: with the Cool VL Viewer, on a Ryzen 7900X and RTX 3070, I top out at 1500 fps in my skybox, at which point the GPU is 100% loaded).
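A rough frame-time budget shows why the single render thread is the cap in busy scenes but not in a skybox (illustrative numbers, assuming the render thread's per-frame CPU cost scales with scene complexity):

```python
# Frame-time budget view of the single-core bottleneck: the frame
# rate can never exceed 1000 ms divided by the CPU milliseconds the
# render thread spends per frame. Illustrative numbers only.
def fps_cap(cpu_ms_per_frame):
    return 1000.0 / cpu_ms_per_frame

print(fps_cap(25.0))  # busy scene: capped at 40.0 fps, CPU-bound
print(fps_cap(0.5))   # near-empty skybox: 2000.0 fps cap, so the GPU
                      # becomes the limit first
```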

Also, your GPU is not really a good performer for OpenGL (AMD's OpenGL drivers suck rocks), and you'd get 30 to 50% better performance with an equivalent NVIDIA card.

2 hours ago, charlottethegiantess said:

I'm curious if there's any way to force the viewer(s) to use more resources.

One thing some viewers can do is use more threads while rezzing scenes, to fetch, cache and decode meshes and textures faster: with the Cool VL Viewer, for example, you may configure a multi-threaded GL image thread, and it will by default use as many threads as your CPU has (virtual) cores to decode textures, minus two reserved for the mono-threaded renderer and the GPU driver threads (when the driver can be and is configured to use threads).
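As a rough illustration of that scheme, here is a hypothetical Python sketch (not the viewer's actual C++ code; `decode_texture` is a stand-in for the real JPEG2000 decode work):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def decode_texture(asset_id):
    # Stand-in for decoding fetched texture bytes; here it just
    # returns a tag so the example is self-contained.
    return f"decoded-{asset_id}"

# Reserve two (virtual) cores for the mono-threaded renderer and the
# GPU driver threads, as described above; use the rest for decoding.
workers = max(1, (os.cpu_count() or 4) - 2)

with ThreadPoolExecutor(max_workers=workers) as pool:
    decoded = list(pool.map(decode_texture, range(8)))

print(decoded[0], decoded[-1])  # decoded-0 decoded-7
```

The point is that decode work parallelizes cleanly even though the draw calls themselves stay on one thread.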

Edited by Henri Beauchamp

You can easily load up the GPU, but it doesn't change some rendering being CPU bound and not utilizing multiple cores. Cranking shadow resolution at 4K can get Firestorm to use 10-11 GB of video memory and almost tap full usage of my Arc A770. But it's still not doing everything it could, because it's just pegging one core of my 11900K all the way and not using the other ones.
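For a sense of why cranking shadow resolution eats video memory so quickly, a back-of-the-envelope estimate (illustrative numbers; actual buffer formats and cascade counts vary by viewer):

```python
# Rough cost of high-resolution shadow maps: four cascaded 4096x4096
# depth buffers at 4 bytes per texel. Doubling the resolution
# quadruples this figure, which is why shadow settings dominate
# video-memory usage so fast.
cascades, resolution, bytes_per_texel = 4, 4096, 4
total_bytes = cascades * resolution * resolution * bytes_per_texel
print(total_bytes / 1024**2, "MB")  # 256.0 MB
```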

It's kinda just the limits of the old engine SL runs on; it was designed for hardware from 20 years ago. Sure, over two decades it's seen a lot of additions and improvements, but it doesn't change that at its core, past Havok, it's an in-house engine that was made to run on a Pentium 4 and a GeForce 6 series card. Which is evident in exactly how small the performance difference is in SL across drastically different hardware: I lose maybe 30% average overall framerate jumping from my main PC to a 10-year-old Haswell i5-4570 and a GTX 780. I can go even further: this game is still comfortably playable on a 15-year-old PC with a Core 2 Quad Q8200 and a 9800 GT.

It's already to this game's detriment, imo. People come here and, despite the game looking OK, it doesn't look like it's from 2023, yet it doesn't take advantage of whatever hardware it's being played on, so regardless of what you use, your framerate is through the floor and nothing loads properly.


3 hours ago, Henri Beauchamp said:

You forgot the Cool VL Viewer... 😜

The current viewers all share the same characteristic: they use a mono-threaded renderer, meaning you basically get limited by the performance of a single CPU core. [...]

Ooooo, I'll have to give the Cool VL Viewer a go as well then, and see if I notice any changes! As for OpenGL, that performance increase alone is tempting enough to get an NVIDIA card. Shame that the AMD drivers suck, though.


10 hours ago, Henri Beauchamp said:

You forgot the Cool VL Viewer... 😜

One thing some viewers can do is use more threads while rezzing scenes, to fetch, cache and decode meshes and textures faster: with the Cool VL Viewer, for example, you may configure a multi-threaded GL image thread. [...]

I've heard of the Cool VL Viewer, but I didn't know it had that functionality... I will definitely have to check that out.


10 minutes ago, charlottethegiantess said:

So I've noticed a good amount of performance gain coming from Cool VL! But all of that goes away when shadows/projections get turned on, whereas the other viewers keep higher performance with them still turned on.

The more you load the GPU, the less the optimizations that went into the C++ code (i.e. on the CPU side) are noticeable. However, with all settings set to equal values (*), the Cool VL Viewer will never perform worse than the others just because you turn shadows on in them all.


(*) Be careful about the default FOV of the camera, for example, which is different in my viewer: instead of making you look a bit down at the ground, it makes you look parallel to the ground, causing more objects to be encompassed in your FOV (more objects to render, more textures loaded). There are also other settings added to improve how the world looks but which impact frame rates (e.g. the mesh objects boost factor, Linden tree LODs at 3.0 instead of 1.0, higher LOD textures, etc.). Also, for shadows, some settings (only accessible via the debug settings) have been changed to give better shadows (they now match what LL's PBR viewer is using), so shadows are indeed costlier with them...
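The FOV effect can be quantified roughly: the width of the view frustum at a given distance grows with the FOV angle, so more objects fall inside it. A simple geometric sketch (the specific angles and distance below are illustrative, not the viewers' actual defaults):

```python
import math

# Width of the view frustum at a given distance: a wider FOV means a
# wider slice of the world to render and more textures to load.
def frustum_width(distance_m, fov_deg):
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

print(round(frustum_width(100, 60), 1))  # ~115.5 m wide at 100 m
print(round(frustum_width(100, 75), 1))  # ~153.5 m wide at 100 m
```

And that is before counting the extra objects a level camera sees at long range that a downward-tilted one would cut off at the ground.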

Edited by Henri Beauchamp
