Firestorm - Significant FPS drop in 'Sunset' environment setting?


AmeliaJ08

As per the title: does anyone else experience a significant drop in FPS when a sim is in the 'Sunset' environment setting? It happens when the setting is selected manually, too.

Hardware is an i7-10750H, a GTX 1650 Ti, and 16GB of RAM, which is perfectly sufficient for 40-80 FPS on all other environment settings... until it switches to 'Sunset' and the frame rate tanks, usually by about half!

I have been googling around to try and work out what is going on, but couldn't find much beyond one mention of an old Firestorm bug where this was observed... nothing recent.

Advanced lighting is on, along with shaders, shadows, reflections, ambient occlusion, etc. Pretty high settings all round, and I'm happy with performance in everything except the (very pretty) Sunset environment setting.

Just wondering if anyone else has run into this. I'm going to experiment with some older Nvidia drivers (currently running the latest, 528.49) to see if it's some sort of strange driver bug.


Advanced lighting with shadows on takes serious CPU power. When the sun shifts and the shadows cascade, and you are not using an i7 or i9 with serious single-thread speed, it will dip. Sunset does cause a heavier load (as do some other similar settings), but the hit should quickly even out. The textures and shadowing may be hindered by the lower-end GPU; see if you can force a setting in the Nvidia control panel to have the CPU rather than the GPU handle offsets.

Edited by Kendawe

Though I can't quantify it, I observe a drop in FPS plus an increase in GPU utilization during both sunset and, to a lesser extent, sunrise if I have shadows on.

This should come as no real surprise, since shadow-drawing is more intense at those times, though quite why the load is heavier at sunset I have no idea.

Perhaps someone with more knowledge of OpenGL could explain it.


6 hours ago, Kendawe said:

Advanced lighting with shadows on takes serious CPU power. When the sun shifts and the shadows cascade, and you are not using an i7 or i9 with serious single-thread speed, it will dip. Sunset does cause a heavier load (as do some other similar settings), but the hit should quickly even out. The textures and shadowing may be hindered by the lower-end GPU; see if you can force a setting in the Nvidia control panel to have the CPU rather than the GPU handle offsets.

Do you have any more details on what you mean by forcing settings in the Nvidia control panel?

 


Alright, I'll try explaining what's happening here to the best of my ability:

Shadows are prepared and rendered, driven by the CPU, into shadow maps. A shadow map acts as a sort of "depth" map for shadows; there are four of them for the sun (plus two for projectors). The GPU then checks against these shadow maps in a shader, and shaders run solely on the GPU.

A shader works by going through every single pixel, checking whether that pixel needs some work, then moving on to the next one. For shadows, the GPU checks every pixel on screen against one of the shadow maps (sometimes two, on overlaps) to decide which areas should be shadowed. It skips out early if it's safe to assume there can't be any shadows, or if all the primary checks for whether a shadow map should be used at all fail. This saves a lot of cycles, but how much depends heavily on the amount of shadow in the scene: the more the shadow maps fill up with shadows, the fewer pixels the shader can skip, and the more expensive checks and calculations it has to do to paint those pixels as shadowed.

Low-angle lighting such as sunset/sunrise casts long shadows across the entire area, and the sun is potentially at such an angle that no pixel can be ruled out purely by the light angle. That means each and every pixel needs to be checked, and GPU load quickly ramps up as it now has to run millions of checks and calculations. This is why the more shadows you see on screen, the higher the GPU usage: the shader skips out of fewer iterations. The way shadows work in SL is not constant; they are dynamic, and their GPU cost depends solely on how much of the scene is shadowed.
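
To make that skip logic concrete, here's a rough sketch of the per-pixel decision written as CPU-side C++ rather than GLSL. Every name and number in it is an illustrative stand-in, not Firestorm's actual shader code:

```cpp
#include <cstdio>

struct Pixel {
    float sunFacing; // > 0 if the surface could face the sun at all (hypothetical)
    float depth;     // scene depth used for the shadow-map comparison
};

struct ShadowMap {
    float storedDepth; // stand-in for one cascade's depth texture
};

// Expensive path: sample the relevant cascade(s) and compare depths.
float sampleShadow(const Pixel& p, const ShadowMap maps[4]) {
    // A real shader picks a cascade by distance and may blend two on
    // overlaps; this toy version just compares against the first one.
    return (p.depth > maps[0].storedDepth) ? 0.3f : 1.0f; // shadowed : lit
}

float shadePixel(const Pixel& p, const ShadowMap maps[4]) {
    // Early skip: if the light angle already rules shadows out, bail
    // before touching any shadow map. This is where the savings live.
    if (p.sunFacing <= 0.0f)
        return 1.0f; // fully lit, no lookups needed

    // At sunset almost nothing gets ruled out, so this expensive branch
    // runs for nearly every one of the ~2 million pixels at 1080p.
    return sampleShadow(p, maps);
}

int main() {
    ShadowMap maps[4] = {{0.5f}, {0.5f}, {0.5f}, {0.5f}};
    Pixel skipped{-1.0f, 0.9f}; // ruled out cheaply by the light angle
    Pixel checked{ 1.0f, 0.9f}; // takes the expensive path
    std::printf("skipped: %.1f  checked: %.1f\n",
                shadePixel(skipped, maps), shadePixel(checked, maps));
    return 0;
}
```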

The CPU part (and thus your baseline performance) depends on the amount and complexity of the objects rendered into the shadow maps: the CPU effectively renders every object a second time into the shadow map. Think of that Maitreya body and how bad it is to render; now render it a second time into the shadow map (potentially twice, if shadow maps overlap).

In short, CPU shadow cost is based on shadow resolution, shadow map count, and the amount and complexity of the objects being rendered into the shadow maps, and can potentially spike toward infinity.

GPU shadow cost is based solely on the number of pixels being altered, with a minimum of none and a maximum of whatever your screen resolution is.

EDIT: To give some context and statistics:

The common screen resolution, Full HD, is 1920 × 1080; that's 2,073,600 pixels. The GPU effectively has to scan through all 2,073,600 pixels and, for each one, determine whether it lies in shadow by comparing against the four shadow maps we have, depending on where the point being checked sits in the scene. Does the light angle permit this pixel to be in shadow? No -> skip || Yes -> determine whether it is and by how much (expensive).
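
Putting those two summaries into a toy cost model (the functions and constants below are invented for illustration; only the proportional relationships reflect the explanation above):

```cpp
#include <cstdio>

// CPU side: every object is rendered a second time into each shadow map
// it lands in, so cost scales with object complexity and map overlap.
double cpuShadowWork(long triangles, int mapsTouched) {
    return static_cast<double>(triangles) * mapsTouched;
}

// GPU side: cost scales with how many screen pixels take the expensive
// shadowed path, capped at the full screen resolution.
double gpuShadowWork(long screenPixels, double shadowedFraction) {
    return screenPixels * shadowedFraction;
}

int main() {
    const long px = 1920L * 1080L; // Full HD: 2,073,600 pixels

    // Midday: short shadows; suppose only ~15% of pixels take the slow
    // path. Sunset: long shadows everywhere; close to 100% of pixels do.
    std::printf("GPU work  midday: %.0f   sunset: %.0f  (~%.1fx)\n",
                gpuShadowWork(px, 0.15), gpuShadowWork(px, 1.0), 1.0 / 0.15);

    // CPU side: a heavy 1M-triangle body that lands in two overlapping
    // shadow maps is simply drawn again, doubling its cost.
    std::printf("CPU work  1 map: %.0f   2 maps: %.0f\n",
                cpuShadowWork(1000000, 1), cpuShadowWork(1000000, 2));
    return 0;
}
```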

Edited by NiranV Dean
