Recommended Posts

  • SL does not take full advantage of modern GPUs.
  • SL content is constantly changing, whereas games with pre-made content that rarely changes can take advantage of some performance-enhancing tricks.

Still, even with this in mind, many of SL's performance issues are due largely to the lack of optimization in SL content. I already get 2-5 times the FPS in sims I built myself, compared to pretty much anywhere else in Second Life, because I put in some effort to optimize.

LL needs to provide tools that will guide content creators towards making better choices with the content they produce: tools and features that will reduce the texture and polygon bloat that so much SL content needlessly suffers from. If that ever happens, you will see drastic performance increases for everyone in SL, whether they have a high-end computer or a low-end one. Not overnight, certainly, but over time, as old content is phased out in favour of new, better-optimized content.

  • Like 1
  • Thanks 1

  • 2 months later...

I understand your story very well; I am facing the same trouble. From what I can tell of this forum, though, most users are on inexpensive PCs and cannot understand your suffering. They probably use inexpensive graphics cards, purchased for under $500, run at resolutions below FHD on modest settings, go to places with no more than about 5,000 objects, and settle for a low rendering rate of 100,000 KTris per second or less. Their problem is that they cling to SL being the best community and deliberately avoid looking at SL's poor use of modern hardware.

My current PC is ultra high end: a Core i7-7700K, a GTX 1080 Ti, an NVMe SSD, and a 4K monitor, so I understand that what you are saying is right. My graphics card has 11 GB of graphics memory, and the PC has 32 GB of main memory.

SL only uses 2 GB of graphics memory. This is the root of all the problems; it is roughly the technology level of ten years ago.

The graphics processor must spend most of its performance on loading, unloading, and compressing textures, because it is only permitted 2 GB. Other games automatically detect the amount of memory on the graphics card and load up to that maximum, so the graphics processor can concentrate on drawing. In Final Fantasy XIV, for example, up to about 8 GB of graphics memory is loaded. Textures should be loaded up to whatever is required; handling advanced graphics at 4K resolution needs at least 5 GB loaded, yet SL can handle only 2 GB.
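For reference, the auto-detection other games do is essentially a one-line driver query. A minimal sketch, assuming an NVIDIA card, an active OpenGL context, and the GL_NVX_gpu_memory_info extension (an AMD card would need GL_ATI_meminfo or a platform API instead):

    // Ask the driver for total and currently free video memory (in KiB).
    // Enums are defined manually in case the system header lacks them.
    // On Windows, include <windows.h> before <GL/gl.h>.
    #include <GL/gl.h>
    #include <cstdio>

    #ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
    #define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX   0x9048
    #define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
    #endif

    void printVramBudget() {
        GLint totalKiB = 0, freeKiB = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalKiB);
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &freeKiB);
        std::printf("VRAM: %d MiB total, %d MiB free\n",
                    totalKiB / 1024, freeKiB / 1024);
    }

A viewer that sized its texture budget from a query like this, instead of from a fixed 2 GB cap, could scale to cards like ours.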

I can guess what the development team is thinking: that SL should work comfortably even on very old 32-bit PCs. That is probably why the graphics resource limits are kept so low.

  • Like 1
  • Haha 1
Link to comment
Share on other sites

  • 1 year later...
On 6/16/2018 at 9:37 AM, Eriko said:

SL only uses 2 GB of graphics memory. This is the root of all the problems. ...

The graphics processor must spend most of its performance on loading, unloading, and compressing textures, because it is only permitted 2 GB. ...

I have practically the same card as you, also with 11 GB, and 32 GB of PC memory. The fact that SL is only willing to use 2 GB of my video memory is, indeed, a bit absurd. Not to do the whole previous discussion all over again, but I learned earlier that this limitation is basically just a passive-aggressive way of forcing people not to use large textures, to put it bluntly.

You're astoundingly wrong about what puts load on your GPU, though, when you say the graphics processor must spend most of its performance on loading, unloading, and compressing textures. For one, textures are hardly ever compressed (when it happens, it is visible as 'bias' occurring), and at any rate it is a trivial operation that puts next to no load on your GPU, and it is decided upon by your viewer anyway (Firestorm, for instance, has nice built-in statistics for that). Believe me, almost all of the load is caused by fancy post-processing: anti-aliasing, processing light sources, shadows, and so on. Swapping out textures can cause small delays, but those are caused either by streaming time/bandwidth (which is why LL dislikes the use of larger textures) or, when cached locally, by slower hard disks.

Edited by kiramanell
Few typos
  • Like 1

Hey, wait a second!

It appears the OP is making two very different arguments.

1.  SL has an unacceptably low frame rate, even when you're using a good graphics card, and
2.  SL does not make good use of all the capabilities of modern graphics cards.

Or maybe that is actually one argument, which would sum up as

3.  SL should revamp the code to take advantage of the new cards, so we could all get better performance (even with all the shaders and shadows on).

She's not wrong.  However, "revamping the code" in this case probably means "switch from OpenGL to DirectX", and that would be a HUUUUUGE project.  Maybe that's why they hired some more graphics engineers recently?  We can only hope.


10 minutes ago, Lindal Kidd said:

...

She's not wrong.  However, "revamping the code" in this case probably means "switch from OpenGL to DirectX", and that would be a HUUUUUGE project.  Maybe that's why they hired some more graphics engineers recently?  We can only hope.

There is no reason whatsoever to use DirectX unless you're trying to lock Second Life into a particular OS ecosystem.

  • Like 3

3 minutes ago, Solar Legion said:
14 minutes ago, Lindal Kidd said:

She's not wrong.  However, "revamping the code" in this case probably means "switch from OpenGL to DirectX", and that would be a HUUUUUGE project.  Maybe that's why they hired some more graphics engineers recently?  We can only hope.

There is no reason whatsoever to use DirectX unless you're trying to lock Second Life into a particular OS ecosystem.

Adoption of DirectX would force the exodus of macOS users, including me. That may happen anyway: as Apple moves away from x86, they may also abandon OpenGL completely. LL has work to do.

Edited by Madelaine McMasters

5 minutes ago, Madelaine McMasters said:

Adoption of DirectX would force the exodus of macOS users, including me.

And any Linux user that refuses to run the viewer through Wine/Lutris. Hi.

Seriously though, if anything they ought to be updating Second Life to use modern OpenGL or Vulkan. The latter is wholly OS-agnostic.
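To make the "wholly OS-agnostic" point concrete, here is a minimal sketch (not viewer code) of a Vulkan bootstrap; the same source compiles unchanged on Windows and Linux, and runs on macOS via the MoltenVK translation layer. Error handling is trimmed, and newer macOS loaders may additionally want the VK_KHR_portability_enumeration extension enabled:

    // Sketch only: create a Vulkan instance and list the GPUs the loader
    // can see. Nothing here is OS-specific; window-system integration is
    // the only part that differs per platform.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.pApplicationName = "sl-vulkan-probe";   // hypothetical name
        app.apiVersion = VK_API_VERSION_1_1;

        VkInstanceCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS)
            return 1;                               // no Vulkan loader/driver

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpu, &props);
            std::printf("GPU: %s\n", props.deviceName);
        }
        vkDestroyInstance(instance, nullptr);
        return 0;
    }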

Edited by Solar Legion
  • Like 1

SL is CPU bound.

The current SL architecture puts a huge load on the CPU downloading and processing all the assets and textures. They aren't individually super bad (before anyone gets over excited about optimized content), but collectively, they leave little CPU time left over to get your GPU out of bed. Think 'death by a thousand cuts' rather than 'zomg that thing makes me lag'.

CPUs haven't gotten a lot faster in recent years, but they have added more cores to allow more throughput. SL as it stands can't make use of more than one core.

You would think that needing to download stacks of individual textures and objects would be a perfect fit for a CPU with many cores, and you would be right ... except that everything has to come together at a single point so your GPU can render a frame. This sucks, and while it would be nice (and attempts have been made) to use more cores to process stuff in parallel, the overhead of collecting everything together so it can be passed to the GPU wipes out all of the benefits.
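To picture that bottleneck, here's a toy sketch (not LL's code; all names are illustrative): asset decoding spreads happily across worker threads, but every result must pass through one queue that only the render thread drains, because only that thread owns the graphics context.

    // Toy model of the convergence point: parallel decode, serial hand-off.
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    struct DecodedAsset { std::string name; /* pixels, vertices, ... */ };

    std::queue<DecodedAsset> g_ready;   // the single hand-off point
    std::mutex g_mutex;

    // Any number of these can run at once...
    void workerDecode(std::string name) {
        DecodedAsset asset{std::move(name)};     // stand-in: download + decode
        std::lock_guard<std::mutex> lock(g_mutex);
        g_ready.push(std::move(asset));          // ...but they all end up here
    }

    // ...while exactly one thread may feed the GPU.
    void renderFrame() {
        std::lock_guard<std::mutex> lock(g_mutex);
        while (!g_ready.empty()) {
            std::printf("uploading %s\n", g_ready.front().name.c_str());
            g_ready.pop();                       // stand-in for a GPU upload
        }
        // ...then issue the frame's draw calls.
    }

    int main() {
        std::vector<std::thread> workers;
        for (const char* n : {"tree.mesh", "brick.jpg", "avatar.anim"})
            workers.emplace_back(workerDecode, std::string(n));
        for (std::thread& w : workers) w.join();
        renderFrame();                           // everything converges here
    }

However many workers you add, the lock and the single render thread cap the throughput, which is why extra cores help so little.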

The fix is for LL to switch to a rendering engine that's designed to take advantage of multiple CPU cores, and to rework the fetching and processing of stuff. Vulkan seems the obvious choice, being both the successor to OpenGL (which the viewer currently uses) and cross-platform.

Till that happens, the best you can do is reduce your draw distance. This has a dramatic impact on the volume of stuff that needs to be processed, and the sooner it's all processed, the sooner your CPU can get busy asking your GPU to draw more frames.

Personally, I find it worthwhile to make a few graphics presets and then switch between them as the need arises.

  • Like 4

I'm glad this thread was revived. I, too, am frustrated by the poor performance of SL viewers, and I have tried everything I could think of to improve it. I got my current computer specifically to (hopefully) get better performance, but I was disappointed.

My computer:

CPU: AMD Ryzen Threadripper 2950X 16-core (3493.43 MHz)
Memory: 130946 MB
OS: Microsoft Windows 10 64-bit (Build 18363.778)
Graphics card: NVIDIA GeForce RTX 2080 Ti (11 GB), PCIe/SSE2

The reason I have such an outlandish amount of memory is that I hoped that running SL viewers from a ramdrive would help. It didn’t.

The OS is on an SSD. The viewer program files and chat logs are on a different SSD, and the cache, which is set to the maximum possible, is on a third SSD.

I have a 300 Mbps fiber internet connection. I use a wired Ethernet connection directly to the six-month-old router.

What else can I possibly do to make SL viewers perform better? I use Firestorm 99 percent of the time.

Two things are more frustrating than any others. One is slow loading of textures. Many times, I teleport to one of the two sims where I spend 90 percent of my online time and, even though most textures should be cached, loading often still takes a very long time. It varies a lot from time to time when I have changed nothing, so it’s not on my end.

The other is low frame rates. I recently spent several hours experimenting with graphics settings and comparing frame rates. This was not a scientific experiment: I stood on the roof of my house and looked east, over a developed area with very few avatars. What I found was interesting and, I think, may be useful to others, so I'll share. Only three settings had a noticeable effect: monitor resolution, shadows, and draw distance. None of the other graphics settings made enough difference to notice.

I tested every monitor resolution available to me that had the same aspect ratio as the highest: 3840 x 2160, 2560 x 1440, 1920 x 1080, 1366 x 768, and 1280 x 720, with draw distances of 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096 meters. After every change in resolution, I restarted the viewer (Firestorm). I used the wide range thinking that extreme values might reveal more. The results:

  • At the three longest draw distances, performance was poor at all resolutions, with no frame rate higher than eight and none lower than three, and little change as the resolution changed.
  • At the highest resolution, frame rates held at about 11 to 12 up through 256 m; at 512 m they dropped to around 6 and stayed there at all longer draw distances.
  • At the lower resolutions, frame rates at short draw distances were very good. At all resolutions but the highest, the frame rate didn't drop much as draw distance increased from 16 m to 128 m, then dropped dramatically at 256 m and again at 512 m, but not much with further increases.
  • Turning off shadows, with all other graphics settings maximized, produced a dramatic increase in frame rate. (I tested this at 1920 x 1080, because I had decided that resolution gave me the best trade-off of appearance versus performance.)
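A plausible explanation for why the cliff sits between 256 m and 512 m (my own inference, not something the viewer reports): the amount of content the viewer must track grows roughly with the area inside the draw distance, which scales with its square. Each doubling quadruples the load, since (512/256)^2 = 4; by 512 m you are covering 16 times the area you were at 128 m.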

My “sweet spot” is 1920 x 1080, where I get a frame rate of 47 at a draw distance of 128 m, with shadows off and all other settings maximized; it doesn't increase with further decreases in draw distance. I get an OK frame rate of 36 at 256 m, so that's usable if I need to see farther.

I love shadows, but not enough to endure the hit on performance they cause. In the future, I’ll use them only for photography.

I hope others find this helpful.

I have two other performance-related questions for people who know more than I do. Would adding another graphics card help? Why is the cache limited in size? There are directions on the Firestorm web site for increasing maximum cache size, but they are over my head.

Added after seeing new posts: Multi-core processors have been out for ten years or so. Is it too much to expect LL to update its software to be able to utilize their power after that long? It seems to me that that should be more important than adding features like experiences.
  • Like 1

16 minutes ago, Jennifer Boyle said:

Added after seeing new posts: Multi-core processors have been out for ten years or so. Is it too much to expect LL to update its software to be able to utilize their power after that long? It seems to me that that should be more important than adding features like experiences.

Please re-read @CoffeeDujour's post just above yours. And no, a second video card isn't going to help. A GTX 1060 runs SL just about as well as an RTX 2080.

  • Like 2

1 hour ago, CoffeeDujour said:

==> SL is CPU bound.  <==

...

I find it quite interesting to watch my GPU utilization go up as I increase the core clock frequency of my CPU.

Edited by Ardy Lay
emphasizing the point
  • Like 1

1 hour ago, Parhelion Palou said:

Please re-read @CoffeeDujour's post just above yours. And no, a second video card isn't going to help. A GTX 1060 runs SL just about as well as an RTX 2080.

Sorry. It was posted while I was typing. Damn, I sure wasted a bunch of money. It angers me that LL didn't give me good advice to the effect of, "Don't buy a high-end computer hoping it will help SL performance. It won't, so don't waste your money."

I'll bet that, in the aggregate, residents have wasted millions of dollars buying better computers they hoped would run SL better, because they didn't know it wouldn't help; money that would otherwise have been available to spend in SL, increasing LL's revenue.

It's hard to believe that SL couldn't be rewritten to utilize multiple cores; practically everything else has been. If the problem is that content is dynamic, how did the people writing software for video editing, facial recognition, and so on manage to overcome it when LL can't? Perhaps LL needs to redirect its emphasis from adding bells and whistles to just making the product work better.

Edited by Jennifer Boyle
  • Like 2

On 3/26/2018 at 7:33 PM, Penny Patton said:

... many of SL's performance issues are due largely to the lack of optimization in SL content. I already get 2-5 times the FPS in sims I built myself, compared to pretty much anywhere else in Second Life, because I put in some effort to optimize.

LL needs to provide tools that will guide content creators towards making better choices with the content they produce: tools and features that will reduce the texture and polygon bloat that so much SL content needlessly suffers from. If that ever happens, you will see drastic performance increases for everyone in SL, whether they have a high-end computer or a low-end one. Not overnight, certainly, but over time, as old content is phased out in favour of new, better-optimized content.

Great idea. But why not have the server optimize content that needs it? Surely algorithms exist, or can be created, that can do that. I am not (much of) a coder, but surely it's not an insurmountable problem for a program to evaluate the size and shape of an object, the number of its polygons, and the size of its textures, and then reduce the polygon count and texture sizes to what is needed to produce a nice image. I imagine it would be easy to provide users with image-quality settings that would govern the number of polygons and the size of the textures sent to their viewer.

What am I missing?

LL could also monitor the quality of content using algorithms, and ban the sale of content that doesn't meet announced current standards.
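For what it's worth, the texture half of that idea comes down to a simple heuristic: never send more texels than the object can actually show on screen. A hypothetical sketch (the function, the 1024 cap, and the numbers are illustrative only, not anything LL has implemented):

    // Hypothetical optimizer rule: clamp a texture's resolution to the
    // on-screen footprint of the object wearing it. Never upscale.
    #include <algorithm>
    #include <cstdio>

    // Smallest power-of-two edge that covers roughly coveragePx pixels,
    // capped at 1024 (SL's maximum texture edge).
    int targetTextureEdge(float coveragePx, int currentEdge) {
        int needed = 1;
        while (needed < coveragePx && needed < 1024)
            needed *= 2;
        return std::min(needed, currentEdge);
    }

    int main() {
        // A 1024 x 1024 texture on an object about 90 px wide on screen
        // could be served at 128 x 128: a 64x saving in texture memory.
        std::printf("%d\n", targetTextureEdge(90.0f, 1024));  // prints 128
    }

The polygon half is harder: automatic mesh decimation exists, but it can mangle carefully made models, so a server-side pass would have to be conservative.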

Edited by Jennifer Boyle
  • Like 1

5 hours ago, Jennifer Boyle said:

Great idea. But why not have the server optimize content that needs it? ... What am I missing?

LL could also monitor the quality of content using algorithms, and ban the sale of content that doesn't meet announced current standards.

There would be much screaming.


Mmm why not:
Early 2019 build (to a family member's spec; no M.2 drives yet).

Win10/64 Pro
H370-F motherboard
i7-8700 3.20 GHz Coffee Lake
64 GB Corsair DDR4 RAM (I forget what speed)
Radeon RX 580 8 GB (Polaris)
A couple of Samsung Evo SSDs (2 x 500 GB)
2 Seagate Barracuda mechanical hard drives (2 x 1 TB)
2 LaCie Quadra 2 TB external hard drives

Poor thing is getting so old it will need a spray of WD40 soon 😆☺️


Funnily enough, my machine tends to load most things in Second Life quickly enough (about a minute at most for most content).

It's running a Ryzen 7 2700X CPU, 16 GB RAM, an NVIDIA 980 Ti (6 GB of VRAM, I think), a 500 GB SSD (system drive), and a 7200 RPM 1 TB data drive.

OS? Linux. Variant? Manjaro. Desktop Environment? XFCE. Kernel? Zen 5.6 (5.6.4 currently).

That hardware and OS run Second Life (Firestorm) on the stock Ultra preset, with draw distance reduced to 128 meters and anti-aliasing set to maximum.

I only see severe slowdown (loading time, anyway) when there are more than twenty or thirty avatars present, or when the environment is badly optimized. FPS? Usually between thirty and sixty on average, dipping down to ten where optimization is bad.

ETA: That is with Discord, Telegram, Pidgin, Steam, Brave, Audacious and Pulse Effects running (usual programs, started up as the computer boots into Linux or shortly after and just left running).

Yep, a swap to Vulkan would be nice. Yep, better optimization would also be nice. No, doing it via AI would not be the proper way to do so; it will flub up (yes, I am aware that humans can do so as well; don't care). No, penalizing the end user for what content creators do is not a viable option either.

The first step ought to be swapping to Vulkan, see what effect that has on overall performance. Then begin planning the next steps based on that data.

Edited by Solar Legion
  • Like 2

I think everybody needs a place to go where they can see what SL rendering is like when all content present is "optimized" for SL rendering.  I suspect most will have to take stuff off to keep themselves from spoiling the results.  Empty regions need not apply.  😉 

  • Like 1

Out of curiosity, the people reporting their CPU speeds, are you overclocking them or just running at default?

I am also curious to know whether these systems are bottlenecked somewhat by their hardware configurations. Are they store-bought or home-built rigs?

Finally, as most who have been here for a decade or more know, this issue has always been with us; SL has never utilized the latest hardware capabilities, and that is not likely ever to change.

Full disclosure: I have no complaints, mainly because I know it won't help, but then I never see FPS below 30.

  • Like 2
