
Frame Rate settings in viewers: what suggested changes still work?


You are about to reply to a thread that has been inactive for 229 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

This doesn't seem specific to any particular viewer.

There are various Preferences which affect the frame rate. One of the most obvious is Draw Distance. Shadows can knock the frame rate down a lot, and some things depend on the graphics card and drivers used.

I can run several different viewers, and with EEP all have been updated in the last year. Some of the options are managed differently. Some viewers have more help popups available than others.

If the Help says some option affects frame rate, it's worth trying, but I was trying various things today, and some of the help info seems out of date. Of course it depends on your hardware, but several times I found the recommendation was wrong: the Help says switching off an option increases the frame rate, yet when I do, the frame rate drops.

And I am still trying to figure out what the help on some things actually means. The way that caching can be done in video RAM has changed a lot.

I don't feel able to trust the Help built into the viewer, except as a warning that something might have an effect. The whole graphics game has changed a lot, and it's hardly fair to blame any particular developer for not keeping pace with the changes.

I know there's some talk of new viewer versions in the works. That could change everything for you. But it looks worth saving a copy of your settings before your next viewer upgrade. And, when you do upgrade, check you have current graphic drivers. That could pay off more than any settings change.


I think this is complicated now by the presence of mesh, both massive mesh buildings and tiny mesh jewellery with large transparent prims attached to them to try and prevent them deteriorating into triangles if other people are not within draw distance. Also I suppose mesh avatars are going to have a significant impact on how much time is spent by your viewer preparing the scene rather than displaying it and letting you move around.

What it means is that a set of graphics settings that gives you a reasonable time in one place won't in another. Go to an old-school prim-built place and then an ultra-modern mesh-with-baked-shadows place, and things are not comparable despite the settings staying the same.

I've tried all I can to get things so I can move around without trouble, see most people nearby, and see the structures around me, and there's nothing I can find that gives a 100% satisfactory result in any place.

That said, I never bother looking at the frame rate figures; I judge things entirely by how they feel around me: do things move smoothly, do I struggle to walk around? Very subjective, I know.

There are some much newer debug settings popping up in the TPVs with no documentation, but they appear to let you set limits on how big a thing can be before your viewer says "you'd better not see this", and some others which are even less comprehensible to me.

@Niran V Dean would be the one I think is going to give the most detailed technical answer here, but I suspect it will apply mostly to Black Dragon and not to the majority of the TPVs.


15 minutes ago, arabellajones said:

I don't feel able to trust the Help built into the viewer, except as a warning that something might have an effect. The whole graphics game has changed a lot, and it's hardly fair to blame any particular developer for not keeping pace with the changes.

The problem is more that, with the wide variation of hardware viewers have to run on, what improves things for one person makes them worse for another. It will also vary significantly from scene to scene.

The majority of people are typically CPU-bound in the viewer. This is because the viewer (as we all know) is predominantly single-threaded, but also because it simply does an awful lot of pre-rendering preparation on the CPU. If you are basically hamstrung by your CPU, with your GPU barely ticking over, then all the settings that are GPU-heavy (such as anisotropic filtering) won't have any noticeable impact, whereas someone with onboard graphics, where the imbalance is less distinct, will potentially see a slowdown because the GPU is taking longer to draw a frame and the CPU is waiting on it. With regard to scene impact, rigged mesh is particularly hard on CPUs, especially if you have shadows on. Sun shadow calculation typically takes a good 10-20% of frame time on my machine, and that will be higher with more avatars and rigged mesh in a scene.
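A rough way to picture why this happens: per frame, the CPU and GPU work overlap, so the frame is gated by whichever side takes longer. This is a toy model of my own with made-up numbers, not viewer code, but it shows why a GPU-heavy option makes no measurable difference on a CPU-bound machine.

```python
# Toy frame-time model: a frame takes roughly as long as the slower of
# the CPU's scene-prep work and the GPU's draw work. Numbers invented.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-bound machine: heavy scene prep, GPU barely ticking over.
# Halving GPU work changes nothing, because the CPU is the gate.
print(fps(cpu_ms=40.0, gpu_ms=10.0))  # 25.0
print(fps(cpu_ms=40.0, gpu_ms=5.0))   # 25.0

# Onboard-graphics machine: GPU is the long pole, so the same
# option change actually helps.
print(fps(cpu_ms=15.0, gpu_ms=30.0))
print(fps(cpu_ms=15.0, gpu_ms=20.0))
```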

I think the safest conclusion is that your mileage will be different to the next person's and all we can reasonably say is "this knob when twisted changes stuff, you might win, you might not".

A good example here is EEP-related. One of the reasons we (FS) waited a long time to get EEP out was the water shader fiasco. Some people (not all, by a long measure, but enough that it mattered) saw a significant drop in FPS with EEP, and water rendering was frequently the cause. One discovery was that occlusion culling had been disabled. Occlusion culling is a feature used to remove "stuff" from the rendering pipeline that will be hidden in the final view. When you are drawing reflection/refraction, some of what is outside visible range is within reflection range, so occlusion culling is turned off to get better reflections. The upshot of disabling culling is that you are left with a LOT more stuff to send to the pipeline, and ultimately to the GPU for drawing. So... enable culling, less stuff to draw... yay, must be faster, right? For some people, yes, a lot faster: mostly those with lower-end GPUs, for whom the lack of culling had pushed the GPU draw time above whatever threshold made it the bottleneck. For others, though, not good, because occlusion culling requires the CPU to decide what is visible or not, and the probing for occlusion itself has an overhead; so if you are CPU-bound, then occlusion culling is probably not going to help.
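The culling trade-off can be mocked up with toy numbers (my own invention, not how the viewer actually costs things): culling adds per-object CPU probe work but cuts how many objects the GPU draws, so which side wins depends on which side was already the bottleneck.

```python
# Toy model of the occlusion-culling trade-off. All per-object costs
# and the visible fraction are invented for illustration.

def frame_ms(objects: int, cull: bool,
             cpu_ms_per_obj: float, gpu_ms_per_obj: float,
             probe_ms_per_obj: float = 0.002,
             visible_fraction: float = 0.4) -> float:
    """Rough per-frame cost: culling adds CPU probing but reduces
    the number of objects the GPU has to draw."""
    if cull:
        cpu = objects * (cpu_ms_per_obj + probe_ms_per_obj)
        drawn = int(objects * visible_fraction)
    else:
        cpu = objects * cpu_ms_per_obj
        drawn = objects
    gpu = drawn * gpu_ms_per_obj
    return max(cpu, gpu)  # frame gated by the slower side

# Lower-end GPU (expensive draws): culling is a big win.
print(frame_ms(5000, cull=False, cpu_ms_per_obj=0.002, gpu_ms_per_obj=0.01))
print(frame_ms(5000, cull=True,  cpu_ms_per_obj=0.002, gpu_ms_per_obj=0.01))

# CPU-bound machine (fast GPU): the probe overhead just adds CPU work.
print(frame_ms(5000, cull=False, cpu_ms_per_obj=0.01, gpu_ms_per_obj=0.001))
print(frame_ms(5000, cull=True,  cpu_ms_per_obj=0.01, gpu_ms_per_obj=0.001))
```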

Just one example and those who know the full details will hopefully accept the overly simplified explanation for what it is. 

I've long wanted a revised "lag meter" that could tell you where your personal bottleneck was in a given scene. It's really not that simple, however. One clue to watch for, though: your GPU probably has an idle speed and a boost speed. When it has enough work to do it'll break into boost and use more power, fans whirring, etc. When the CPU or disk or network is your limiting factor, the GPU will sulk and look bored. I've sat in a busy region clocking a few FPS with my GPU (infuriatingly) idling because my CPU was busy pulling down textures or stupidly complex mesh bodies. Once we get to a more performant viewer design we'll hopefully have a nice balance where slow FPS can be linked directly to hardware being overburdened.
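That "watch whether the GPU sulks" clue can be written down as a rough heuristic. This is my own sketch, not a real lag meter; the utilisation threshold is a guess, and you'd read the utilisation figure from a tool such as nvidia-smi or your OS's GPU monitor.

```python
# Crude bottleneck guess from two observables: current FPS and GPU
# utilisation (0.0-1.0). Threshold values are arbitrary assumptions.

def likely_bottleneck(fps: float, gpu_utilisation: float,
                      target_fps: float = 30.0) -> str:
    """Very rough guess at the limiting factor in a scene."""
    if fps >= target_fps:
        return "none"              # fast enough, nothing to diagnose
    if gpu_utilisation > 0.9:
        return "gpu"               # GPU boosting flat out and still slow
    return "cpu/disk/network"      # GPU idling while FPS is low

# The busy-region case above: a few FPS, GPU bored.
print(likely_bottleneck(fps=8.0, gpu_utilisation=0.2))
# -> cpu/disk/network
```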

 

