Viewer + Multicore CPUs



Hi All,

I am sure this has been brought up before, but I figured I would ask since it's a long-standing annoyance. A LOT of the viewer's workload seems to be CPU bound...

Is anything being done by the Lab to improve the viewer's use of modern CPUs?

In my case, I get horrific slowdowns at venues that may have more than 15 avatars - we're talking 10 fps, sometimes worse. I know for a fact I am not alone with this sort of issue.

Before the masses say "turn down your graphics settings", "jellydoll everyone", "render only friends", "turn off particles", "turn down your DD", "turn off ALM", "set your max complexity slider for avatars to a really low value", etc., hear me out... I have an NVIDIA GTX 1080 with plenty of VRAM and grunt, plenty of system RAM, and a decent 6-core i7 CPU.

On my system, the bottleneck isn't my video card or system memory. The viewer seems to slam one CPU core at 100% constantly while the remaining 5 cores on my Intel i7 are doing NOTHING. That is the bottleneck... Why not improve the threading of the viewer so these other cores can share some of the workload (I am guessing it's all rendering pipeline stuff that's tied to one core)?
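
Just to illustrate what I mean by "share some of the workload", here's a toy C++ sketch - nothing to do with the actual viewer source. The updateAvatar() busy-work function is completely made up; it only stands in for any self-contained per-frame job (skinning, texture decode, whatever). Done serially it pegs one core; handed to std::async it spreads across all of them. The hard part in the real viewer is presumably that most of the pipeline isn't independent work like this.

```cpp
// Toy illustration only - not Second Life viewer code.
// updateAvatar() is a hypothetical stand-in for any CPU-heavy,
// self-contained job; the point is serial vs std::async.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

double updateAvatar(int id) {
    double acc = 0.0;
    for (int i = 1; i < 2000000; ++i) acc += 1.0 / (i + id);  // fake work
    return acc;
}

int main() {
    const int avatars = 16;
    using Clock = std::chrono::steady_clock;
    using ms = std::chrono::duration<double, std::milli>;

    // Serial: everything on the one "main" thread - one core pegged.
    auto t0 = Clock::now();
    double serial = 0.0;
    for (int i = 0; i < avatars; ++i) serial += updateAvatar(i);
    auto t1 = Clock::now();

    // Parallel: the same jobs farmed out so every core gets a share.
    std::vector<std::future<double>> jobs;
    for (int i = 0; i < avatars; ++i)
        jobs.push_back(std::async(std::launch::async, updateAvatar, i));
    double parallel = 0.0;
    for (auto& j : jobs) parallel += j.get();
    auto t2 = Clock::now();

    std::cout << "serial:   " << ms(t1 - t0).count() << " ms\n"
              << "parallel: " << ms(t2 - t1).count() << " ms\n"
              << "hardware threads: " << std::thread::hardware_concurrency() << "\n";
}
```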

 

 

Edited by mygoditsfullofstars
typos etc.

Not into the hardware stuff unless I am in the market for a new computer, BUT there has been a lot of discussion and problems on this forum about the 1080 -- so you may have a particular issue that others do not see. I have a two-year-old computer with no issues even in crowded places.

AND it could be the places that you frequent. I had one roleplay sim that (almost) instantaneously crashed my computer each time I visited. It was "heavy mesh".

 

If you want to make a 'feature request', you can do that in a JIRA :D.

 

Firestorm 5.0.11 (53634) Jan  9 2018 18:51:07 (Firestorm-Releasex64) with Havok support
Release Notes

You are at 126.8, 168.0, 3,860.7 in LEA6 located at sim10336.agni.lindenlab.com (216.82.50.58:13006)

Second Life Server 18.05.25.515749
Release Notes

CPU: Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz (3298.09 MHz)
Memory: 16286 MB
OS Version: Microsoft Windows 10 64-bit (Build 16299)
Graphics Card Vendor: NVIDIA Corporation
Graphics Card: GeForce GTX 980/PCIe/SSE2

Windows Graphics Driver Version: 23.21.13.8813
OpenGL Version: 4.6.0 NVIDIA 388.13

RestrainedLove API: (disabled)
libcurl Version: libcurl/7.47.0 OpenSSL/1.0.1i zlib/1.2.8
J2C Decoder Version: KDU v7.9.1
Audio Driver Version: FMOD Ex 4.44.61
LLCEFLib/CEF Version: 1.5.3-(CEF-WIN-3.2526.1347-32)
LibVLC Version: 2.2.4
Voice Server Version: Not Connected
Settings mode: Firestorm
Viewer Skin: Firestorm (Grey)
Window size: 2560x1377 px
Font Used: Deja Vu (96 dpi)
Font Size Adjustment: 0 pt
UI Scaling: 1.5
Draw distance: 176 m
Bandwidth: 1500 kbit/s
LOD factor: 2
Render quality: High-Ultra (6/7)
Advanced Lighting Model: Yes
Texture memory: 2048 MB (1)
VFS (cache) creation time (UTC): 2018-4-30T14:22:54 
Built with MSVC version 1800
Packets Lost: 46/165,555 (0.0%)
June 11 2018 20:11:42 SLT


6 minutes ago, Chic Aeon said:

Not into the hardware stuff unless I am in the market for a new computer, BUT there has been a lot of discussion and problems on this forum about the 1080 -- so you may have a particular issue that others do not see.

Hi :)

I had the same issue even on my older rig that had a 780 Ti - it was always the CPU (one core out of many) getting pegged at 100% constantly. This is not a recent issue for me. And yes, I can fix my frame rate by doing the usual things, jellydolls etc., but having to stare at a room of jellydolls just to get a decent frame rate is silly in this day and age. CPU contention issues with the viewer have been around a long time.

 

Edited by mygoditsfullofstars

LL is always fiddling with various viewers to try different things, and they've created what I think have been 2 totally new viewers over time - the V2 and the V3, although I believe the V2 was outsourced. My thinking is that, since they are not strangers to creating viewers from the ground up, and because the viewer isn't anywhere near as massive a project as the other end of SL, they could create a new multi-core viewer - IF not using multiple cores really is a significant problem.


I'm really hoping we get this someday. CPU utilization with SL is trash. My i5 4570 and GTX 970 can max this game, but it quickly gets to the point where changing settings does nothing because my CPU is being underutilized. Render distance at max, 30 fps. Render distance at minimum, 32 fps.

Turn reflections on and off, bump mapping and shiny things on and off - the 970 has no problem with any of it, and there's no difference in frame rate since the CPU is holding everything back.
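
Rough numbers to spell that out (completely made up, just picked to land near my 30 vs 32 fps observation): in a pipelined renderer the frame rate is set by whichever stage takes longer, so once the main thread needs ~31 ms per frame, cutting GPU work barely moves the needle.

```cpp
// Back-of-the-envelope only - hypothetical timings, not measurements.
#include <algorithm>
#include <initializer_list>
#include <iostream>

int main() {
    const double cpuMs = 31.0;                    // assumed main-thread cost per frame
    for (double gpuMs : {25.0, 12.0, 4.0}) {      // "ultra" down to "low" GPU settings
        double frameMs = std::max(cpuMs, gpuMs);  // the slower stage sets the pace
        std::cout << "GPU " << gpuMs << " ms -> " << 1000.0 / frameMs << " fps\n";
    }
}
```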

I shouldn't need the best of the best single-threaded performance to play this game smoothly.

And if I didn't care about graphics or frame rate, I'd play on my Pentium 4 system from 2003.


10 hours ago, CoffeeDujour said:

Spoiler alert ... That's not how viewer development has ever worked. 

Explain please.

All programmes are developed from the ground up, even when using pre-written, not necessarily in-house, code, and a viewer can't be any different.

Edited by Phil Deakins

You can get a low frame rate in a static scene with the graphics card not at 100% load, but the viewer's main thread at 100%. This can happen in an environment with one avatar, nothing moving, no disk accesses, and little network traffic, but lots of objects. In that situation, the GPU should be working hard, replaying display lists from retained-mode OpenGL calls, and the CPU should just be starting and managing the GPU. So what's the CPU doing?

The viewer is mostly using retained mode calls, I hope. Or isn't it? Are there too many draw calls because  each prim is drawn separately? Is there other processing which is redone on every frame even when nothing is changing?
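
If it is a draw-call problem, the usual answer is to sort by GPU state so one call covers a whole batch of prims instead of one call each. A toy sketch of just that grouping step - no real OpenGL here, and none of it is taken from the viewer; the prim, texture and shader IDs are invented:

```cpp
// Illustrative only: group hypothetical prims by (texture, shader) so each
// batch could be submitted with a single draw call.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

struct Prim {
    std::uint32_t textureId;  // made-up material inputs
    std::uint32_t shaderId;
    int           triangles;
};

int main() {
    // A pretend scene: thousands of prims, only a handful of distinct materials.
    std::vector<Prim> scene;
    for (int i = 0; i < 5000; ++i)
        scene.push_back({static_cast<std::uint32_t>(i % 7),
                         static_cast<std::uint32_t>(i % 3), 100});

    // Naive path: one draw call per prim.
    std::size_t perPrimCalls = scene.size();

    // Batched path: merge everything that shares the same GPU state.
    std::map<std::pair<std::uint32_t, std::uint32_t>, long> batches;
    for (const Prim& p : scene)
        batches[{p.textureId, p.shaderId}] += p.triangles;

    std::cout << "per-prim draw calls: " << perPrimCalls << "\n"
              << "batched draw calls:  " << batches.size()
              << " (same triangles, far less per-call CPU overhead)\n";
}
```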


1 hour ago, Callum Meriman said:

https://bitbucket.org/lindenlab/viewer-release/commits/all

Go back in time in the commits Phil.

I'm several pages in and I'm still looking at 2018 commits, so I assume you are referring to the way the viewer's 'tracks' develop and merge. If that's what you mean, it's nothing to do with what I said, which was that a viewer that makes use of multiple cores can be written from the ground up, just like viewers (and any new programme) are always written (I mentioned the V2 and V3 but I forgot to include the V1); i.e. from a blank page. Making use of pre-written code when suitable helps, of course, but the page is still blank at the outset.

Coffee first mentioned that a multicore viewer would need to be written from the ground up, and later, when I said that LL could do it, she said that that's not how viewers are written, which didn't make any sense, and is why I asked for an explanation.

If you are both talking about the way the viewer is normally developed, it's different to what I'm talking about. I'm talking about what Coffee mentioned - "from the ground up" - from a blank page.

Note: The V2 was outsourced and was possibly written from a blank page. The V3 may have been from a blank page or developed from the V1, or even from the V2. Even so, the V1 was like any new programme - written from scratch, possibly making use of some chunks of external pre-written code, but written from a blank page - from the ground up.

Edited by Phil Deakins

I believe the opening 'axe riddle' scene from 'John Dies at the End' sums up how software development works. (I can't post it here as the film is R rated and the section in question is somewhat gory.) One of my favorite films.

Needless to say, there has never been a blank-slate development of a new viewer since the very first one. Most of the time changes are small and obviously incremental; other times, huge chunks are replaced. Keep in mind that what you understand as milestone releases are simply when key visual elements changed.


That doesn't matter, Coffee. YOU said that a viewer such as the OP suggested would need to be written from the ground up. And it's self-evident that a viewer like that can be written from the ground up, and that LL could do it as they've done it before, which is what I said. So what are you disagreeing with? Heck, even I could do it (but it would take me so long that I don't think I'll bother lol).

Note: You're probably right that the V1 was the only viewer that was written from a blank sheet. I thought the V2 might have been, since I'm sure it was outsourced, but maybe not.

Edited by Phil Deakins

3 hours ago, animats said:

You can get a low frame rate in a static scene with the graphics card not at 100% load, but the viewer's main thread at 100%. This can happen in an environment with one avatar, nothing moving, no disk accesses, and little network traffic, but lots of objects. In that situation, the GPU should be working hard, replaying display lists from retained-mode OpenGL calls, and the CPU should just be starting and managing the GPU. So what's the CPU doing?

The viewer is mostly using retained mode calls, I hope. Or isn't it? Are there too many draw calls because  each prim is drawn separately? Is there other processing which is redone on every frame even when nothing is changing?

In SL the CPU is actually handling a lot more of the graphics work than you would think. Most games use the CPU for graphics at some level, and SL does the same, but because the game is so bad at utilizing multiple cores and threads, it quickly becomes a bottleneck at higher settings no matter what hardware you have. Absolutely NOTHING running on an 8700K should ever dip below 60 fps at 1080p, yet there are people on this forum with that problem.

The CPU mainly handles lighting and shading effects; the GPU is more for geometry and textures, surface effects and filtering, and various forms of AA.

The GPU does share some of the shader effects, however, and they're tied into texturing and surfaces.

If you disable basic shaders, for example, it will decimate your FPS, because pretty much everything except the geometry is immediately offloaded to the CPU.

Since lighting is a big part of SL, even without advanced lighting, the underutilization of the CPU is a serious issue that's really holding the game back. Even if it IS a super unoptimized mess of content with insane polygon counts and super high-res textures, modern hardware should be able to handle it fine.


  • 2 weeks later...

I caught the tail-end of the v1 era. Several things from that era, including a choice of colour themes, vanished from the v2 viewer, never to return. The SL Viewer has a poor choice of colours, but it seems you have to use it if you are reporting any problem to Linden Lab. I know I am not colour blind, but the choices make it difficult for me.

v2 was outsourced, at a time when multiple cores were becoming common enough to need thinking about. But one core doing the OS work, one doing the work for the program, and a third for things such as SLVoice was the low-hanging fruit.

I know I can use a separate browser, which is essentially the same trick, and it helps, but http and https need to be in the viewer.
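
That "extra core for the slow stuff" idea is just producer/consumer threading. A minimal sketch, assuming the slow work (pretend HTTP fetches here) can be queued off to a worker thread so the main loop never stalls - illustrative only, not how the viewer or SLVoice are actually wired up:

```cpp
// Minimal producer/consumer sketch: a worker thread services blocking
// requests through a queue so the main/render loop never stalls.
// Purely illustrative - not Second Life viewer code.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> requests;
std::mutex              m;
std::condition_variable cv;
bool                    done = false;

void worker() {                       // runs on its own core
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return done || !requests.empty(); });
        if (requests.empty() && done) return;
        std::string url = requests.front();
        requests.pop();
        lock.unlock();
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // fake slow fetch
        std::cout << "fetched " << url << "\n";
    }
}

int main() {
    std::thread t(worker);
    for (int frame = 0; frame < 5; ++frame) {
        {   // main loop just enqueues work and carries on "rendering"
            std::lock_guard<std::mutex> lock(m);
            requests.push("http://example.invalid/asset/" + std::to_string(frame));
        }
        cv.notify_one();
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // pretend 60 fps frame
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    t.join();
}
```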

There are bits and pieces which come from outside, such as the SLVoice module. On Linux that still relies on v0.10 of GStreamer, and v1.0 of that program has been out for 4 years.

With so much attention devoted to Sansar, I have my doubts that these long-running problems will see any change. It's arguable that Cool VL Viewer may be the best available TPV - it didn't go down the v2 rabbit hole, but it still seems to be stuck with using SLVoice. Right now it looks like the best available answer to the Linux problem: it installs, it works, and it doesn't seem to choke so easily. I haven't been able to get the LL official viewer working on Linux for a long while (and so I don't bother trying to report problems for anything - too much of the TPV community isn't interested until you can prove something also happens in the official viewer; understandable, but they push it too far).

 


It's not just CPUs where the viewer has issues - it's APUs also. I've got a Toshiba Satellite laptop with an AMD A4 APU and AMD graphics card (can't tell you which one off the top of my head, but the one that's compatible with the APU), and my system is more than enough to handle SL, but I still have the same issues the OP has in the busier sims. It is an SL issue, and it's not just with the viewer, but the viewer does play a good part in the problem.

