Firestorm - Low Performance but not utilizing hardware?


Dazashi Graves

So... I've had this issue for months now, and nothing seems to fix it. On some sims I'm getting between 10 and 15 FPS, sometimes as low as sub-5, which is insane and makes even typing unbearable, since it seems linked to FPS somehow.

Simply put, this shouldn't be happening. For reference, I'm running a 6700K and a GTX 1080 with 32 GB of RAM at 3200 MHz. Certainly not a top-spec monster or anything, but still extremely capable.

The part of all this that's confusing me is that when I check my utilization, everything is going basically unused. None of my cores are peaking above 60%, my GPU utilization peaks at 45%, and I'm using less than a third of my RAM as well.

So what the heck is going on? For some reason, it's like SL just isn't utilizing any of my hardware, and I don't understand why. The strange thing is I have a friend who has far inferior hardware and is actually always getting better performance, sometimes dramatically, on the order of 2-3x my FPS with matched settings. If they're getting 40-50s, I'm usually sitting around 15.

I've done everything I can think of: tanking settings, making sure FPS limits aren't on, dumping the cache, fresh Firestorm reinstalls. It rarely gets me more than another 10 FPS on sims. The thing is, it wasn't always like this either; it used to run quite fluidly, but it's been so long I can't exactly remember when that was. And it doesn't improve with time either, the sim can be fully loaded around me and still be running terribly.

 

My main concern is the wild underutilization of literally everything. It's like the client isn't even trying to leverage my hardware anymore, and I simply can't think of why it wouldn't. Nothing seems to be bottlenecking anywhere as far as I can tell, not even close.

 

I'm at a loss. Any advice, or at least an explanation of what's going on would be really helpful.


SL has always been more CPU-bound and never gets the most out of a powerful graphics card like the 1080 (I know, I have the 1080 Ti and otherwise similar specs). I have certainly seen results like yours at times. There is no PC powerful enough that SL cannot drag it down to a crawl in some places.

To be able to tell whether this is a new problem, you would have to compare what you are seeing now against a known baseline. A single frequently visited place at a single common time of day would be the closest way to do this.

It is very difficult to tell whether we are really seeing a new performance issue, because we are almost never in like-for-like circumstances; hence my recommendation to pin it down to a frequently visited place at a common time. Usually it is more of a "feel" that things are much slower. Those "feels" can certainly be new problems, but they can also be misleading, and you might just be seeing your viewer struggling with more resources than you think.

All it takes is for a place to be busier than usual, have more render-heavy avatars than usual, or have had its build changed in places, and it will "feel" bad. It seems like the same place, but it is actually putting more strain on your system, and you may not notice anything has changed except the speed.

On your side, be mindful of whether you installed any new software recently, especially sneaky Microsoft updates to drivers and the like.
Also check your network performance: if you are downloading lots of new textures and objects but your network performance is suddenly terrible, it will affect your FPS.
This is especially so if you have cleared your texture or object cache recently.
Did you change your network recently? Install a new modem or router? Even upgrade and/or reboot your router?

These could all point to issues on your side. Troubleshooting takes time and is painstaking and frustrating.

That said, you could be affected by region/server performance, as the grid uplift to the Amazon cloud is in progress. Read about it here: Uplift Update

We are not exactly sure what is going on with it, but there have been issues in the last few weeks, including at least two days of problems with the Bake server.

 


Yeah, troubleshooting is a drag. Thing is, this isn't a new issue. I actually spend most of my time at my home, so I didn't really notice, but when I went back to familiar sims, performance tanked.

 

I do know that SL is horribly inefficient. The thing that's got me confused is that, seemingly, nothing is pegged. The highest CPU usage I saw was only about 65%. If that were holding it back, I'd expect at least one core to be walled. In fact, I would like to see that, because then it'd just be "mystery solved, get a new CPU already." But that doesn't seem to be the case? I dunno =/

As far as the network goes, I do know that performance always tanks when loading in new textures. That said, one of the things I monitor is my network usage, and I don't think that's the issue either, because once everything loads in I'll see network usage slow down but still be getting single-digit FPS.

Right now my best guess is that, for whatever reason, Firestorm itself might be what's causing the issue. It honestly feels to me exactly like something that's not requesting the resources it actually needs. I'm starting to wonder if there's some external issue preventing it from pulling all available resources, of which, frankly, there are still plenty. Thinking on it does give me some ideas, though; I'll try a few experiments and see what changes that brings.


4 hours ago, Dazashi Graves said:

So... I've had this issue for months now, and nothing seems to fix it. On some sims I'm getting between 10 and 15 FPS, sometimes as low as sub-5, which is insane and makes even typing unbearable, since it seems linked to FPS somehow.

Simply put, this shouldn't be happening. For reference, I'm running a 6700K and a GTX 1080 with 32 GB of RAM at 3200 MHz. Certainly not a top-spec monster or anything, but still extremely capable.

...

I'm at a loss. Any advice, or at least an explanation of what's going on would be really helpful.

I experienced something similar to your situation with my GeForce 1060, where I normally had excellent FPS and no problems. See Viewers and GPU memory usage.

I reverted back to the GeForce driver version 456.71 from 2020-07-10; further, I did a clean uninstall of Firestorm and reinstalled it. I also made sure the Nvidia Control Panel settings for Firestorm used my 1060 with maximum performance.

After the above steps I now have normal, good FPS again: 130+ FPS in my skybox, and 20-30 FPS in high-end clubs with graphics set between High and Ultra in Firestorm; set to Ultra with everything maxed out I get 100+ FPS in my skybox. I suspect it was the latest drivers causing my problems, but no one else reported the same problems, so it could also have been my Firestorm install gone "bad", or the settings in the Nvidia Control Panel somehow not being used. I simply have no idea; I just know my performance is back, with excellent FPS and no problems.

No Windows 10 updates from Microsoft caused any problems for me. As a matter of fact, if the system were forced to use Microsoft's Nvidia drivers, you would still get excellent performance and good FPS, but with lower quality in OpenGL programs such as Second Life, and some modern games that require Nvidia Game Ready drivers would not run well.

FYI:

Firestorm 6.3.9 (58205) May 27 2020 01:20:51 (64bit) (Firestorm-Releasex64) with Havok support
CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (2808 MHz)
Memory: 16341 MB
OS Version: Microsoft Windows 10 64-bit (Build 19041.572)
Graphics Card Vendor: NVIDIA Corporation
Graphics Card: GeForce GTX 1060/PCIe/SSE2

Windows Graphics Driver Version: 27.21.14.5167
OpenGL Version: 4.6.0 NVIDIA 451.67


5 hours ago, Dazashi Graves said:

None of my cores are peaking above 60%, my GPU utilization peaks at 45%, and I'm using less than a third of my RAM as well.

That's about normal. Most of the viewer CPU time usage is in one thread. It will normally use 100% of one CPU, but not much of the others. There's some minor stuff going on in other threads (texture decompression, streaming media, etc.) but the viewer won't really use all the cores of a modern CPU. It was designed in the single-core era.


8 hours ago, Rachel1206 said:

I experienced something similar to your situation with my GeForce 1060... I reverted back to the GeForce driver version 456.71 from 2020-07-10, did a clean uninstall and reinstall of Firestorm, and made sure the Nvidia Control Panel settings for Firestorm used my 1060 with maximum performance. ...

Hmm, I'll look into this. I ran a debloater and it seems to have helped a bit. I'm trying other things, like choosing which core it's going to sit on.

5 hours ago, animats said:

That's about normal. Most of the viewer CPU time usage is in one thread. It will normally use 100% of one CPU, but not much of the others. There's some minor stuff going on in other threads (texture decompression, streaming media, etc.) but the viewer won't really use all the cores of a modern CPU. It was designed in the single-core era.

Yes, I know that SL tends to only prioritize one core/thread. My issue is that it's not even utilizing the entirety of just one. It pegs one core at like 60-70% and doesn't go above that. Restricting SL to a less-used core seems to have improved things, yet it still won't fully utilize it.
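In case anyone wants to try the same affinity experiment, here's roughly what I'm doing, sketched with Python's psutil. The executable name is just my assumption from the Firestorm version string, so check Task Manager for whatever yours is actually called, and pick whichever cores you like:

import psutil

TARGET_NAME = "Firestorm-Releasex64.exe"  # assumed executable name - check Task Manager
PINNED_CORES = [2, 3]                      # arbitrary choice of "less used" cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET_NAME:
        proc.cpu_affinity(PINNED_CORES)    # same effect as Task Manager's "Set affinity"
        print(f"Pinned PID {proc.pid} to cores {PINNED_CORES}")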


3 weeks later...
On 10/20/2020 at 12:48 AM, Dazashi Graves said:

Hmm, I'll look into this. I ran a debloater and it seems to have helped a bit. I'm trying other things, like choosing which core it's going to sit on.

Yes, I know that SL tends to only prioritize one core/thread. My issue is that it's not even utilizing the entirety of just one. It pegs one core at like 60-70% and doesn't go above that. Restricting SL to a less-used core seems to have improved things, yet it still won't fully utilize it.

The viewers are predominantly single-threaded (see note below), but Windows is particularly poor at managing this (or is it particularly adept at it? you choose): while all of the activity is happening on a single thread, Windows moves that thread about all over the shop. If you look at the overall CPU usage, which is averaged across all cores, it will almost certainly come to about 1.2 * (100/N)%, where N is the number of cores. For example, my machine has 8 cores, and the overall usage comes in at around 14%, which is a little over the 12.5% you'd expect from a single thread running at 100%.
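If you want to see that arithmetic for yourself, here is a minimal sketch using Python's psutil (purely an illustration, nothing to do with the viewer code):

import psutil

n = psutil.cpu_count(logical=True)
per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # one-second sample
overall = sum(per_core) / n

print(f"{n} logical cores, per-core sample: {per_core}")
print(f"overall usage: {overall:.1f}%  (one fully busy thread ~= {100 / n:.1f}%)")

Run that while the viewer is busy and the overall figure hovers near 100/N even though one thread is flat out, which is exactly the effect described above.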

On 10/20/2020 at 1:12 AM, bigmoe Whitfield said:

Viewers are single threaded too. 

Mostly true, but not entirely; it is not so cut and dried. Viewers are multi-threaded, but the vast majority of the work happens on a single thread, and that includes all of your rendering, data marshalling to and from cache, etc. The reason we can also be confident that it uses 100% of that thread is that the main loop of the viewer is literally looping as fast as it can, drawing frames and servicing input devices. The exception is when you deliberately limit the frame rate, or when you defocus the viewer window; the main loop then takes a deliberate sleep on each frame to reduce the load while you are doing other stuff. What happens on other threads is primarily network fetching of HTTP assets. It is also worth noting that your voice service (slvoice.exe), being a separate executable, can run on another core.
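Conceptually the main loop looks something like this toy sketch (my own illustration in Python, not viewer source; the two helpers are stand-ins for the real work):

import time

def render_frame():
    pass  # stand-in for the real per-frame rendering work

def service_input_devices():
    pass  # stand-in for input handling, also on the main thread

def main_loop(frames=300, frame_cap_hz=None, window_focused=True):
    for _ in range(frames):
        frame_start = time.perf_counter()
        render_frame()
        service_input_devices()

        if not window_focused:
            time.sleep(0.05)  # back off hard while the viewer is in the background
        elif frame_cap_hz:
            budget = 1.0 / frame_cap_hz
            elapsed = time.perf_counter() - frame_start
            if elapsed < budget:
                time.sleep(budget - elapsed)  # burn the leftover frame budget
        # otherwise: no sleep at all - spin as fast as possible, pegging one core

main_loop(frame_cap_hz=60)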

The viewer was never built for multi-threaded rendering, and even in the places where threads are employed, it has some rather peculiar traits. In particular, OpenGL is not friendly towards threaded access, and even if it were, the manner in which the data is marshalled today would struggle to take proper advantage of the additional cores. There is light at the end of this tunnel, though. The Lab are actively researching, and preparing for, a migration to a new pipeline, and as part of that migration, being able to scale to available computing power is a key aim. It's not a small undertaking (there is a reason why none of the third-party viewers have such support): it requires a complete rewrite of the rendering internals, and that extends its reach into how data is retrieved and stored, and even has its claws in the higher-level functions of the UI. As such, the first steps towards this brave new world will likely be entirely invisible to users, as stricter borders are placed around architectural parts of the viewer to allow the surgery to take place.

The challenge, of course, is also making sure that updates don't lose more users than they benefit. I don't know what the typical user's machine looks like, but the Lab do have extensive data on this. I know anecdotally that on Firestorm we have a phenomenally wide range of users, from those with small laptops with slow CPUs and onboard graphics to those with multi-GPU, overclocked desktop beasts. It is inevitable that some older machines will simply no longer be usable and the minimum hardware spec will be adjusted, but it would make no commercial sense for the Lab to do that if they lost more users than they gained. Time will tell where that takes us.

 


4 hours ago, Beq Janus said:

There is light at the end of this tunnel, though. The Lab are actively researching, and preparing for, a migration to a new pipeline, and as part of that migration, being able to scale to available computing power is a key aim. ...

You seem to know something I don't. Don't get my hopes up; they usually come crashing down on me like an inferno.

I've also looked into multithreading (at least some basics) and implemented some basic multithreading in the preferences window to stop it from locking up the viewer for an increasing amount of time that scales with the number of custom avatar render settings you have. It was a small experiment, and I can see why large-scale multithreading is an issue: it needs to be done absolutely carefully, and it's a surprise that multithreaded games don't crash more often, given that even the slightest mistake could make everything blow up.
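The pattern itself is nothing exotic; a generic sketch of the idea (illustrative Python, not the actual viewer change, and the function name is just a stand-in for the slow work) looks like this:

from concurrent.futures import ThreadPoolExecutor
import time

def load_avatar_render_settings():
    time.sleep(2.0)              # stand-in for the slow work that used to block the UI
    return ["settings loaded"]

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(load_avatar_render_settings)
    while not future.done():     # the "UI loop" keeps ticking while the work runs
        print("UI still responsive...")
        time.sleep(0.5)
    print("result:", future.result())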


27 minutes ago, Profaitchikenz Haiku said:

That is one of the two big misbehaviours that put me off using the V2/3-style viewers; the other is almost identical: the long delay when you try to view somebody's profile.

The Legacy Profiles project viewer restores the old V1-style profile page, which shows on first click. User profiles come up pretty quickly compared to web profiles. Still quite a way to go with this viewer, though; not sure when it will be finished.


9 hours ago, Beq Janus said:

The viewer was never built for multi-threaded rendering... In particular, OpenGL is not friendly towards threaded access, and even if it were, the manner in which the data is marshalled today would struggle to take proper advantage of the additional cores. ...

That part about OpenGL rendering is a particular sore point for AMD GPUs. The official AMD drivers for Windows have abysmal performance in OpenGL applications compared to Nvidia's. While the open-source AMD drivers for Linux are ironically much better than what AMD offers on Windows, my RX 580 8 GB is still 10-50% slower in Second Life, depending on the scene, than my GTX 960 2 GB with CPUs of similar capability. It wasn't until recently that I managed to cobble together a second computer from parts in an old case so that I could compare an all-AMD system against an Intel/Nvidia system. I use Ubuntu 20.04 on both computers. I used Rickslab GPU Utils to verify that the AMD GPU was indeed operating at maximum load levels (and frequencies) in my tests while still underperforming next to the GTX 960. This isn't a CPU thread issue, as far as I can tell.
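For anyone who wants to spot-check the same thing without installing extra tools, the amdgpu kernel driver exposes a busy percentage in sysfs. Here's a rough sketch; the path is driver- and kernel-dependent, so treat it as an assumption to verify on your own system:

from pathlib import Path
import time

busy_file = Path("/sys/class/drm/card0/device/gpu_busy_percent")  # amdgpu driver only

for _ in range(5):
    if not busy_file.exists():
        print("gpu_busy_percent not exposed by this driver/kernel")
        break
    print(f"GPU busy: {busy_file.read_text().strip()}%")
    time.sleep(1)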

Now, also ironically, the AMD system scores much higher in the Unigine Valley OpenGL benchmark than the Intel/Nvidia system, but lower in some glmark2 tests. I do have to wonder if some of the viewer rendering pipeline has been streamlined to favor Nvidia in Second Life. Case in point, the Cool VL Viewer is faster than all other viewers on my Intel/Nvidia computer but slower than all others on my all AMD system. Henri optimizes his code for his own Nvidia-based computers, which isn't a bad thing since it's his viewer to do with as he pleases. Would it be possible to optimize for AMD? At this point, would it even matter since the Lab is going to change the rendering pipeline anyway?

The RX 580 is also far superior to the GTX 960 in most games I play on Steam, but that has more to do with the AMD GPU having better Vulkan (Vulkan & DXVK) rendering capability. If the viewer code is updated, which I imagine is being motivated by Apple deprecating OpenGL soon, then Vulkan is the way to go as a cross platform graphics API.

If I had a system like the original poster's, but on Linux, it would be giving me incredible framerates no matter what viewer I used. Most likely the issue isn't the viewer; it's Windows, or some setting, driver, or power state related to the operating system.


4 hours ago, KjartanEno said:

That part about OpenGL rendering is a particular sore point for AMD GPUs. The official AMD drivers for Windows have abysmal performance in OpenGL applications compared to Nvidia's. ... Most likely the issue isn't the viewer; it's Windows, or some setting, driver, or power state related to the operating system.

There is no specific optimization done for any GPU/CPU vendor, only bug fixes. AMD generally has a lot of issues GPU-wise, and shaders often need extra fixes or workarounds to accommodate their sometimes weird implementations. Generally you'll see better performance on an Intel CPU than on an AMD CPU; the GPU doesn't make a difference unless you are using an Intel GPU or your GPU is maxed out, in which case an AMD card will perform worse due to AMD's well-known bad OpenGL support. As long as your GPU never runs at 100%, you should see roughly the same performance, plus or minus the usual SL discrepancies. Any "optimization" you think may be there isn't.

HOWEVER, compiling on different vendors may result in better performance for that specific vendor, although I cannot confirm that; I've just heard several times that AMD users see much better performance on my viewer compared to another viewer compiled either via automation (Build City, probably with an Intel CPU) or manually on an Intel machine. Simply taking the official viewer as-is and compiling it on my Ryzen without any changes nets a good chunk of extra performance, though this could again just be SL being SL, or simply the difference between compiling manually versus compiling via automation.


6 hours ago, NiranV Dean said:

Simply taking the official viewer as-is and compiling it on my Ryzen without any changes nets a good chunk of extra performance, though this could again just be SL being SL, or simply the difference between compiling manually versus compiling via automation.

OK, since I have compiled Firestorm for Linux using their developer wiki instructions, I'm aware that some libraries, such as OpenJPEG, are precompiled and the build system downloads them from a server. If those precompiled libraries come optimized for Intel CPUs, would that have a performance effect even if the main viewer code itself is compiled on a Ryzen system? When it came to the Cool VL Viewer, I saw only a couple of frames per second of improvement by compiling it with optimizations on my Ryzen CPU, but again using Henri's precompiled libraries.

I must stress, I am not complaining or denigrating any viewer or the person(s) who code them. Programmers do a wonderful job. I'm just an end user who wants to learn more, and I have more time than money, so there simply won't be any new computers in my near future.


7 hours ago, KjartanEno said:

... If those precompiled libraries come optimized for Intel CPUs, would that have a performance effect even if the main viewer code itself is compiled on a Ryzen system?

Most likely. But I don't know anything about coding for any specific CPU, and so far I haven't noticed any differences or special "optimizations" done for either AMD or Intel CPUs. I'd say it probably doesn't matter unless we're talking about optimizations like AVX/AVX2, which could make a big difference and, at least for a while, only be available on certain CPUs. AMD only just got AVX2 with Ryzen, but has had AVX since 2011, and AVX alone can make some difference if code is optimized for it.
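If you're curious whether your own CPU even advertises those instruction sets, here's a quick and dirty Linux-only check (purely illustrative; on Windows you'd use something like CPU-Z instead):

flags = set()
try:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split())  # the kernel lists supported ISA extensions here
                break
except FileNotFoundError:
    print("/proc/cpuinfo not available (not Linux?)")

for isa in ("avx", "avx2", "avx512f"):
    print(f"{isa}: {'yes' if isa in flags else 'no'}")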

