
Second Life - Performance Thread



Why is SL a slideshow on powerful PCs?

 

Hello all,

I just want to start this thread off by saying thank you for reading or even participating. I believe the community can make SL better.

SL, Second Life, "the old metaverse": it is known by many names and by many people around the world. It is also known for one thing: bringing PCs to their knees.

For years I thought it was just my anecdotal experience that SL ran badly, but research and time have taught me this is not the case.

TheBenchmarker isn't just me, or you, or any one person. It should be something that everyone on SL can contribute to: a knowledge base of what makes SL run smoothly and what doesn't.

 

I want to start this topic off with some of my anecdotal experience from being on SL for nearly 10 years at this point.

SL didn't run as poorly years ago, mostly because content wasn't as developed and the textures and other assets used by creators weren't as taxing. SL now runs poorly on everything.

My personal system is as follows;

  • Ryzen 7 5800X3D
  • 32GB of RAM
  • RTX 3090

I run SL at 2560x1440. Because SL runs in a window, I hadn't taken into account that it is actually rendering at that size, but it is. A previous system I had, with a 1700X and a GTX 1070, ran SL like butter... for two weeks, until I updated a driver and it started running badly again.

I also ran an RX 6800 XT up until a few months ago, and SL performed significantly better with AMD's Pro drivers installed, which I elaborate on further down (most of this post relates to NVIDIA).

SL isn't a game!

With the way SL is designed around user-submitted content, textures and all, it can't be thought of as a game when considering performance, or when putting together a computer specifically for SL.

Unoptimized textures? Massive meshes? Sounds like CAD.

SL can be thought of as more like a CAD program than a game. I suspect the same type of system that runs a CAD application like Maya or Revit well would be exactly what SL needs, but this needs further testing to actually be proven.

Currently, I have built a test system to see what works and what doesn't, and I'll post more detailed how-tos and findings in the responses.

Specifications;

  • Ryzen 7 1700x
  • 16GB DDR4@2666
  • SATA SSD
  • Secondary HDD (cache location)
  • GTX 1070 Ti.

With this system, running at 1440p on High while walking through London City, SL ran like hot garbage, as expected. Switching everything to Low helped slightly, but not by much: the average framerate at 1440p on Low was 28 FPS (anecdotal; I don't know a better way to measure FPS than watching the statistics bar), and while walking around it dipped to 5-7 FPS. With a lot of avatars on screen it runs badly, though alone it will run at 60+ FPS. Everyone can agree, though, that nobody plays SL only on their own land doing their own thing *all* the time, so let's be real: it's unacceptable that this is how it plays on still fairly mid-range hardware. That said, trying to optimize SL on the development side is a rock-and-a-hard-place situation.

Also, to dispel some rumors: SL takes more than two cores to run well. The LL viewer seems to target four cores, which makes sense.

But...But... This is promising.

Doing a few small tweaks to SL brought the frame rate up from 30 FPS at Low to 35 FPS at Ultra, with dips down to only 20 FPS.

The tweaks were:

  • Moving the cache to the secondary drive and maxing it out at 9984 MB
  • Maxing the networking slider
  • A few tweaks inside the NVIDIA Control Panel (elaborated in the next post)

Doing these few things brought the framerate, and more importantly the frame times, up to a steady, smooth level. Being curious, I then tried the NVIDIA Studio driver, the GeForce driver branch that apparently has extra optimization for creator-type workloads. Using the Studio driver improved the frame times even more, which leads me to think...

SL may run better on professional-grade hardware, like a Quadro or Tesla. Quadros, Titans, and Teslas from NVIDIA have specific optimizations in the driver path for CAD and professional applications, such as the ones mentioned earlier: Maya, 3ds Max, Revit, and so on. All of these applications use OpenGL in one way or another, and all of them recommend Quadros. I know somewhere on LL's site it says Quadros aren't recommended, but there's also a current thread from LL talking about how SL runs better on Windows Vista, so I think we can safely ignore all of the written documentation for now.

Thinking about the type of workstations that run these programs well, it all seems to come down to pro-level graphics and single-core performance, which lines up with my research about SL.

So, questions still to be answered.

  • What does SL run best on?
  • What does LL target hardware-wise in development? Is it current low-end hardware like the GTX 1650 or 1650 Super, or is it still older hardware?
  • If SL does run better on older hardware, we can hopefully just recommend going with older hardware. It would make sense, much like how DX9 games run better through DXVK because the older APIs simply can't take full advantage of modern hardware.
  • What are the specifications of the workstations LL uses for development? They must know what runs better on their own machines. The only currently available knowledge on this is old YouTube videos showing LL running mostly Macs, which I'd hardly guess they're still using, considering OpenGL was deprecated on macOS a while back.
  • Does SL run better when put through a translation layer such as DXVK? Would running it on Linux be better? (I know there's one person on the forums who develops their own viewer on Linux and says it's better; if you're reading this, hi! I'll give it a shot.)

Thank you for reading this far, and I hope I can invite some of you to comment below, because that is what this is all about. I'm hoping this kind of research excites more than just me.

 


What SL runs best on is changing. The viewers are starting to really use the GPU. Amusingly, users are reporting that their GPU fans now wind up to speed when running Second Life viewers.

In Firestorm, try Advanced->Performance Tools->Improve Graphics Speed. That lets you set the desired frames per second, and adjusts rendering quality to maintain it.
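Under the hood, that kind of auto-tuner is just a feedback loop: measure the recent frame rate, shed rendering quality when it drops below the target, and restore it when there is headroom. This is not Firestorm's actual code, just a minimal Rust sketch of the control loop, with made-up quality knobs:

```rust
use std::time::{Duration, Instant};

// Hypothetical quality knobs the tuner is allowed to adjust (made-up names).
struct Quality {
    draw_distance_m: f32,
    shadows: bool,
    max_avatar_complexity: u32,
}

struct AutoTuner {
    target_fps: f32,
    last_adjust: Instant,
}

impl AutoTuner {
    // Called once per frame with a smoothed FPS value; adjusts at most twice per
    // second so the scene has time to settle between changes.
    fn update(&mut self, smoothed_fps: f32, q: &mut Quality) {
        if self.last_adjust.elapsed() < Duration::from_millis(500) {
            return;
        }
        self.last_adjust = Instant::now();

        if smoothed_fps < self.target_fps * 0.9 {
            // Too slow: shed load, cheapest cuts first.
            if q.shadows {
                q.shadows = false;
            } else if q.draw_distance_m > 64.0 {
                q.draw_distance_m -= 32.0;
            } else if q.max_avatar_complexity > 50_000 {
                q.max_avatar_complexity -= 25_000;
            }
        } else if smoothed_fps > self.target_fps * 1.2 {
            // Plenty of headroom: restore quality gradually, in reverse order.
            if q.draw_distance_m < 256.0 {
                q.draw_distance_m += 32.0;
            } else if !q.shadows {
                q.shadows = true;
            }
        }
    }
}

fn main() {
    let mut quality = Quality { draw_distance_m: 256.0, shadows: true, max_avatar_complexity: 350_000 };
    let mut tuner = AutoTuner { target_fps: 30.0, last_adjust: Instant::now() };
    std::thread::sleep(Duration::from_millis(600)); // let the adjust interval elapse
    tuner.update(22.0, &mut quality); // a slow moment: shadows are switched off first
    println!("shadows now: {}", quality.shadows);
}
```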


17 hours ago, animats said:

What SL runs best on is changing. The viewers are starting to really use the GPU. Amusingly, users are reporting that their GPU fans now wind up to speed when running Second Life viewers.

In Firestorm, try Advanced->Performance Tools->Improve Graphics Speed. That lets you set the desired frames per second, and adjusts rendering quality to maintain it.

I haven't tried Firestorm in a while, simply because I couldn't get on with its interface; it reminded me of years gone by. I'll give it a shot, though.


On 2/15/2023 at 5:46 AM, TheBenchmarker said:

Moving the cache to the secondary drive and maxing it out at 9984 MB

This might be a side effect of SATA SSDs, which have a much smaller command queue than NVMe ones. The other effect might be that the fsync()/flush file buffers calls are typically per drive.

A typical piece of advice is to add a virus scanner exclusion for the cache directory, as that helps quite a bit.


My recent experience seems to show that SL's bottlenecks can shift quite a bit depending on your configuration (I know that sounds like a trivial statement). For me it's tough to determine what the actual bottleneck is at any given time; the only thing I can do is change things around and see how it behaves in scenarios that can be reproduced.

 

On 2/15/2023 at 5:46 AM, TheBenchmarker said:

Secondary HDD (cache location)

This is one example. After upgrading my main components I decided not to go with a RAM drive at first, since I had moved to a high-end M.2 SSD. However, when trying to stress the system, I couldn't say I was CPU- or GPU-bottlenecked, yet it wasn't running 100% smoothly in the scenario I was testing. So I went for a RAM drive again (on 6400 CL32 memory), and in my tests I went from occasional stuttering to 100% smooth in a very demanding scenario. (I'm not a scientist, and there's always the chance that something unexpected had an influence here, but I tried to get reproducible results as best I could. It's always a good idea to take anyone's benchmarks with a grain of salt.)
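For anyone who wants to compare cache locations (HDD vs. SATA SSD vs. NVMe vs. RAM drive) the same way, a crude approach is to time a burst of small writes and reads in the candidate directory, which roughly mimics a texture cache's access pattern. A minimal Rust sketch, not viewer code; point it at an empty test directory on the drive you want to measure:

```rust
use std::fs;
use std::io::Write;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // First argument: the candidate cache location (SSD, HDD, RAM drive, ...).
    let dir = std::env::args().nth(1).unwrap_or_else(|| ".".into());
    let chunk = vec![0u8; 64 * 1024]; // 64 KiB, roughly a small cached texture

    let start = Instant::now();
    for i in 0..512 {
        let path = format!("{dir}/probe_{i}.bin");
        let mut f = fs::File::create(&path)?;
        f.write_all(&chunk)?;
        f.sync_all()?; // force it to the device, like a per-file flush would
    }
    let write_time = start.elapsed();

    let start = Instant::now();
    for i in 0..512 {
        let _ = fs::read(format!("{dir}/probe_{i}.bin"))?;
    }
    let read_time = start.elapsed();

    // Clean up the probe files.
    for i in 0..512 {
        let _ = fs::remove_file(format!("{dir}/probe_{i}.bin"));
    }

    println!("512 x 64 KiB: write+sync {write_time:?}, read {read_time:?}");
    Ok(())
}
```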

 

On 2/15/2023 at 5:46 AM, TheBenchmarker said:

What does SL run best on?

I can only contribute my specific experience after my upgrade where I went from

5950X + RTX4090 + DDR4 3600 CL16 + RAM drive

to

13900KS + RTX4090 + DDR5 6400 CL32 + RAM drive

and the performance difference is HUGE (in semi-objective, non scientist terms.)

 


It's interesting stuff. I'm at that "maybe I'll upgrade my PC" stage, but there's so little posted showing the benefits of a high-end system with Second Life. My PC is about at the end of its upgrade path: the processor doesn't support Win 11 and the power supply doesn't support a 3080 or above, so I'm still using a 1080. What did make a ton of difference was upping the RAM to 32 GB; I was amazed how much RAM SL will use when given it. I want to see a high-end PC log into a full club like Peak Lounge with everything on full, to see if the wading-through-soup effect still happens. SL needs a benchmarking sim with a standardized experience and a way to post specs and FPS.

I'll throw some money at it to get a wow-factor improvement. But if spending 3 grand gets me an extra 2 FPS, I'll spend the money on cake instead ;)


[Image: Tracy trace of the viewer's frame timeline]

Where the time goes in a viewer. Probably more than you wanted to know.

There's no simple answer to performance problems in SL. But it's not unknowable. So here's a developer R&D perspective. This is rather detailed, but since people are discussing this, I decided to say something.

There are developer tools for this sort of thing. This happens to be my experimental Rust viewer, but similar tools can be applied to viewers based on the LL code. This one uses Rust, PBR, and Vulkan, and has shadows but not reflections.

The graph is from Tracy, which is a performance measurement tool for game-type programs. It's a timeline. Note the frame number and time at the top. The Tracy tool lets you pan and zoom through minutes of that timeline, so you can find and examine slow frames in detail. This type of tracing slows the program by 20%-30%, so performance is below normal here.
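For anyone who hasn't used a profiler like this: the core idea is simply timestamping every frame (and spans within it) so the slow ones can be found and examined later. A bare-bones Rust sketch of that kind of frame timing, with no actual Tracy integration and a fake workload standing in for rendering:

```rust
use std::time::Instant;

struct FrameStats {
    frame: u64,
    worst_ms: f32,
}

fn main() {
    let mut stats = FrameStats { frame: 0, worst_ms: 0.0 };
    let budget_ms = 1000.0 / 60.0; // 16.7 ms per frame for 60 FPS

    loop {
        let frame_start = Instant::now();

        // ... render, handle messages, update objects ...
        std::thread::sleep(std::time::Duration::from_millis(12)); // stand-in for real work

        let ms = frame_start.elapsed().as_secs_f32() * 1000.0;
        stats.frame += 1;
        if ms > stats.worst_ms {
            stats.worst_ms = ms;
        }
        if ms > budget_ms {
            // A tool like Tracy records every span; here we just flag the slow frames.
            eprintln!("frame {} took {:.1} ms (budget {:.1} ms)", stats.frame, ms, budget_ms);
        }
        if stats.frame >= 300 {
            break;
        }
    }
    println!("worst frame: {:.1} ms", stats.worst_ms);
}
```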

This is a multi-thread viewer, so you can see the various threads doing their thing.

  • "Main thread" is doing the actual rendering, and very little else. This is an older version of Rend3/WGPU, and I will soon be using a newer and faster version. Eventually this stack should support multiple threads sharing the rendering load. It's all retained mode; the GPU is doing most of the work here.
  • The "asset fetch" threads are decoding JPEG 2000 textures, rather inefficiently right now. It always loads the full texture, puts it in a cache, and then cuts it down to the resolution currently needed on screen. That's inefficient, and I need to optimize that part. It loads the textures in order of screen area covered, so that important optimization is already working (see the sketch after this list). If you get close to something, it quickly gets loaded.
  • The "Client" thread is processing incoming UDP messages from the server. It's not very busy. If the server sent object updates faster, that would improve performance here, but might choke the single-thread viewers, which steal time from the main thread to handle incoming messages. If too much comes in per frame, they drop messages, which get resent later. Some of this is a holdover from the days when assets (meshes, textures, materials, sounds, and animation) came in over the UDP message path. The effect is that you wait for objects to appear longer than is really required on more powerful machines. Everybody (I think) currently gets the same object appearance rate, regardless of machine speed. As viewers speed up, that might be something to consider negotiating between viewers and servers. It's not really that big a load if it's not stealing time from rendering.
  • The "Movement" thread is not very busy here, because not much in this region is moving. Also, the server doesn't tell the viewer much about moving objects far from the camera or outside the viewing window, which keeps the irrelevant traffic down.
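The screen-area ordering mentioned in the asset-fetch bullet above is essentially a priority queue keyed on projected area. A minimal Rust sketch of the idea, not the actual fetch-thread code (the texture IDs and areas are made up):

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

/// A pending texture fetch, prioritized by how much screen area the texture covers.
#[derive(PartialEq)]
struct FetchRequest {
    texture_id: u64,
    screen_area_px: f32,
}

impl Eq for FetchRequest {}

impl Ord for FetchRequest {
    fn cmp(&self, other: &Self) -> Ordering {
        // Larger on-screen area means higher priority.
        self.screen_area_px
            .partial_cmp(&other.screen_area_px)
            .unwrap_or(Ordering::Equal)
    }
}

impl PartialOrd for FetchRequest {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(FetchRequest { texture_id: 1, screen_area_px: 300.0 });     // distant sign
    queue.push(FetchRequest { texture_id: 2, screen_area_px: 250_000.0 }); // avatar you walked up to
    queue.push(FetchRequest { texture_id: 3, screen_area_px: 12_000.0 });  // nearby wall

    // Fetch threads pop the largest-on-screen texture first.
    while let Some(req) = queue.pop() {
        println!("fetch texture {} ({} px on screen)", req.texture_id, req.screen_area_px);
    }
}
```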

In the GPU window, you can see what the GPU, an NVidia 3070 with 8GB, is doing. It's 71% full and 78% busy. Yes, the fan speeds wind up. Currently, meshes are taking up more space than they should, because the Rend3 renderer supports rigged meshes but, in this version, allocates space for rigging data for all of them. 90 bytes per vertex. That's about to be fixed.
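To put that 90 bytes per vertex in perspective, here is a rough back-of-the-envelope calculation (the vertex count is illustrative, not measured):

```rust
fn main() {
    // Hypothetical scene size, just to show the scale of the overhead.
    let vertices_in_scene: u64 = 4_000_000;
    let rigging_bytes_per_vertex: u64 = 90; // per-vertex rigging data allocated for every mesh

    let wasted = vertices_in_scene * rigging_bytes_per_vertex;
    println!(
        "rigging data reserved for every mesh: {} MB",
        wasted / (1024 * 1024)
    );
    // 4M vertices * 90 bytes is roughly 343 MB of an 8 GB card spent on data
    // that static (non-rigged) meshes never use.
}
```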

So there's a sense of what's going on at a low level.

This is region "Nymphai", a shopping event, full of highly detailed objects. This is a stress test, with all meshes at "highest" LOD and full region draw distance. It's getting about 44 FPS right now. Should be able to do better than that. This gives a sense of how SL ought to perform on a gamer PC.

For the Firestorm viewer, Beq's Other Blog has good coverage of what goes fast and slow in Firestorm.


[In-world snapshot]

Processor: Intel Core i7-6700K @ 4.00 GHz (4 cores, 8 logical processors)
OS: Microsoft Windows 10 Home
GPU: NVIDIA GeForce GTX 1080
Installed RAM: 32.0 GB

[Screenshot]

 

With all of Firestorm's graphics maxed out I'm getting a blistering 12.5 FPS in the same location. But nothing seems maxed out in a (less scientific) look at Task Manager.

So it looks like a 3070 may double my FPS.

What processor do you have?

 

 


8 hours ago, Judas Shuffle said:

So it looks like a 3070 may double my FPS.

What processor do you have?

6 cores, 12 hyperthreads, AMD.

You're looking at results from my R&D project. This is not a standard viewer you can use, yet. It's a tech demo, to demonstrate that SL can perform like an AAA title if we use a more modern architecture. I did this because I was tired of the "nothing can be done" line that we used to hear from LL.  The LL-based viewers are now improving in performance. ALM now, PBR in test, some multithreading now, Vulkan in a year or so, we're hearing from the Linden devs. LL has been staffing up on the graphics side. We keep seeing new Lindens show up at the meetings.

Meanwhile, go read Beq's Other Blog and find out how to use the new automatic performance tuner in Firestorm. It will get the frame rate up by automatically adjusting quality down as needed.

 


8 minutes ago, animats said:

"I did this because I was tired of the "nothing can be done" line that we used to hear from LL"

I'm very thankful someone has said this, because this is the exact problem with SL these days: it's running on old code. Vulkan has been around for quite a few years now, so LL's slow transition is part of why SL might be losing market share; it simply runs terribly.

 

On 2/18/2023 at 2:17 PM, Arluelle said:

So I went for a RAM drive again (on 6400 CL32 memory), and in my tests I went from occasional stuttering to 100% smooth in a very demanding scenario.

 

I can only contribute my specific experience after my upgrade where I went from

5950X + RTX4090 + DDR4 3600 CL16 + RAM drive

to

13900KS + RTX4090 + DDR5 6400 CL32 + RAM drive

and the performance difference is HUGE (in semi-objective, non scientist terms.)

 

I have also noticed this. I was using a SATA HDD for my secondary drive and it was getting accessed constantly. I'm curious whether Intel Optane might be a fix for this, since its latency is about half that of NAND flash. SL also seems to depend on having lots of RAM: the PC I'm testing on only has 8 GB, running a debloated Windows 11, and it's constantly sitting at near 90% usage.

I have also acquired a Quadro card to test with, a Kepler-based card, just to see if my theory was correct. Currently, with the lowest settings but with shadows and advanced lighting turned on, I was getting 50 FPS at the Firestorm starter place, which was quite impressive. Going back to London City, the FPS tanked to around 15. It seems the benefits Quadro cards get in ISV-certified applications like CAD packages don't apply to SL, but maybe someone who is developing a viewer *cough* could apply those driver pathways to SL.

I also tried Firestorm but saw no difference between the default LL viewer and the Firestorm viewer.

It seems SL is simply bottlenecked by a few things that only more research will uncover.

  • Is SL bottlenecked by storage speed/latency? (Will Intel Optane help?)
  • Does SL prefer pure single-core/multi-core performance, or does it prefer a particular generation/platform of processors? (Since SL is strange, we can put normal logic aside.)
  • Is SL bottlenecked by RAM speed? (From what Arluelle has said, possibly.)
  • Finding the specs of LL's dev workstations would help quite a lot, since I assume they know their software runs like garbage on almost everything.

The ultimate goal of this thread is to work out a configuration that is best for SL and hopefully stop people wasting money. I know AMD's cards used to be terrible for OpenGL performance: I had a really rough time with an AMD Vega 64 a few years ago, getting 10 FPS at most, which made me sell the card. It has me wondering now whether SL would benefit from the massive memory bandwidth that comes with HBM2; more testing would be needed.
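One thing that would help with comparing configurations is recording results the same way. As a sketch of what I mean (a made-up format, nothing official), even just appending average and worst frame times plus a configuration label to a shared CSV would be enough to compare runs:

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Append one benchmark run to a shared CSV so different configurations can be
// compared later. The file name and column layout here are made up.
fn log_run(config: &str, frame_times_ms: &[f32]) -> std::io::Result<()> {
    let avg = frame_times_ms.iter().sum::<f32>() / frame_times_ms.len() as f32;
    let worst = frame_times_ms.iter().cloned().fold(0.0_f32, f32::max);

    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("sl_benchmarks.csv")?;
    // config label, average frame time (ms), average FPS, worst frame time (ms)
    writeln!(file, "{config},{:.2},{:.2},{:.2}", avg, 1000.0 / avg, worst)?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Stand-in data: in practice these numbers would come from the viewer's
    // statistics bar or a capture tool, sampled over the same walk through the
    // same region for every configuration being compared.
    let mut samples = Vec::new();
    for i in 0..120 {
        samples.push(16.0 + (i % 7) as f32);
    }
    log_run("Ryzen 7 5800X3D + RTX 3090, 1440p, Ultra, London City", &samples)
}
```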


I consider myself a "power user" but also mostly tech-unaware. There are two main issue areas in SL: one inside SL itself and one on your own computer, and it practically takes a master's in computing to use SL correctly. You have to be able to find cache directories, whitelist files and folders, optimize processes, add exclusions for security scans... it is never-ending. One does not simply install SL and "play".

Inside SL you have to know all the ins and outs of the viewer, how to try to overcome lag issues when they arrive, how to set all the preferences, and learn a few debug settings. You have to learn all about EEPs, how to change lighting, local lights, attached lights... again, SL is not something a novice can just install and play and expect good out-of-the-box performance from, even with a higher-end computer. When SL works, it works well and it's amazing. When it sucks, it sucks badly and you want to take a hammer to your computer. SL has good days and bad days. At least Firestorm has gone a long way toward making the UI (mostly) usable.


11 hours ago, Judas Shuffle said:

With all of Firestorm's graphics maxed out I'm getting a blistering 12.5 FPS in the same location.

I guess you are running at 4K resolution, correct?

I have one machine with the same i7-6700K, a GTX 1080 and 32 GB of memory (LOD set to 4.0, everything else also maxed out in settings), using one 144 Hz monitor at 1080p (1920x1080). With draw distance at 256 m I'm getting 56 FPS in a very similar position/view to your screenshot at Nymphai, and with DD at 128 m I get the same FPS (the area is up in the sky). These numbers were measured with the current official viewer. Also worth noting: with the official viewer, FPS drops a bit once you open any UI window; a drop of 10 to 15 FPS was observed using Firestorm, and I assume the same happens with the official viewer (which cannot show FPS without Ctrl+Shift+1 open).

With Firestorm I get very similar readings when the Ctrl+Shift+1 window is open... it feels "smoother" thanks to the VRAM usage and less texture blurring. (The PBR viewer has a tweak for VRAM usage, similar to current Firestorm.)

Using another machine, an 11700K + RTX 3060 + 64 GB of memory (three 1080p monitors), that same area shows 110 FPS with DD at 128 m and 105 FPS with DD at 256 m.

If you have a dedicated 3D graphics card below the 3060 series or the equivalent AMD card (the ones advertised as 4K graphics cards), try to keep the resolution at 1080p or lower. To me, the 6700K machine is still very smooth even at Nymphai at 1080p, even though the 6700K isn't really enough to keep a GTX 1080 fed; a balanced machine also helps and may save money.

No extra tweaking on either machine: NVIDIA 527.37 driver, a plain installation of the official viewer, max settings, and the debug setting RenderVolumeLODFactor set to 4.0.

My two cents for SL: a desktop with a recent i7 or a fast i5 and an RTX 3060 or above (or the AMD equivalent), running a 1080p monitor (Full HD, 1920x1080), is very smooth and a great experience, without breaking the bank "too much" or "losing hair". (Blue team, meaning a recent Intel i7 plus a recent NVIDIA card at 1080p, is hard to beat for SL.)

Edited by Andred Darwin

On 2/20/2023 at 8:20 PM, Judas Shuffle said:

I'm at that "maybe I'll upgrade my PC" stage, but there's so little posted showing the benefits of a high-end system with Second Life.

I've recorded some videos of places from the destination guide so you can see what to expect from a high-end system these days. I show the graphics settings at the beginning of each video. When shadows are enabled, shadow quality is set to 2. The recording was done on a 4K screen. You'll notice some videos run smoother than others, but it's definitely at a decent level.

 

 


On 2/21/2023 at 10:32 AM, Judas Shuffle said:

With all of Firestorm's graphics maxed out I'm getting a blistering 12.5 FPS in the same location. But nothing seems maxed out in a (less scientific) look at Task Manager.

The problem might be that, with a middle-range, aging PC, you are using a high-resolution monitor: your snapshot is 3840x1961 pixels, meaning that unless you took a ”high res snapshot” (which doubles the native resolution), you have a high-DPI screen...

If this is the case, then on a ”standard” (Full HD) screen you'd get much better frame rates... And yes, with a high-res monitor an RTX 3070 would fare better, but not by that much either (I'd say you'd get twice the FPS or so in this spot), because the CPU single-core performance would become the bottleneck.
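To put rough numbers on it, frame rate in a purely fill-rate-bound scene scales roughly with pixel count (a rule of thumb only; SL is rarely purely fill-bound, which is why the real gain is smaller):

```rust
fn main() {
    // Pixel counts for the resolutions discussed in this thread.
    let resolutions = [
        ("1920x1080 (Full HD)", 1920 * 1080),
        ("2560x1440", 2560 * 1440),
        ("3840x1961 (the snapshot)", 3840 * 1961),
    ];
    let base = resolutions[0].1 as f32;
    for (name, pixels) in resolutions {
        println!("{name}: {:.2}x the pixels of 1080p", pixels as f32 / base);
    }
    // The snapshot is about 3.6x the pixels of 1080p, so a GPU-fill-bound 12.5 FPS
    // there would be roughly 40-45 FPS at 1080p if fill rate were the only limit,
    // which it rarely is: the CPU takes over as the bottleneck well before that.
}
```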

With a 1920x1200 screen (i.e. slightly over Full HD, which is 1920x1080), a 9700K @ 5.0GHz (locked on all cores), an RTX 3070 (2025MHz graphics clock, 16000MT/s VRAM clock), running the Cool VL Viewer under Linux with all graphics settings maxed out, including shadows on, I get 60 FPS with the draw distance set to 256m.

[Screenshot: Nymphai, shadows on]

 

And 150 FPS with shadows off (not much of a visual difference in this spot, so I could not tell from your screenshot whether you had shadows on or off):

[Screenshot: Nymphai, shadows off]

 

As for the CPU and GPU usage, with shadows on, it was 36% CPU (~ 2.9 cores loaded), and 42% GPU.

Edited by Henri Beauchamp

14 hours ago, Jackson Redstar said:

I consider myself a "power user" but also mostly tech-unaware. There are two main issue areas in SL: one inside SL itself and one on your own computer, and it practically takes a master's in computing to use SL correctly. [...] One does not simply install SL and "play".

This is sadly a big annoyance with SL: it doesn't seem to follow the normal convention of "faster is better".

The research that I and everyone else in this thread are conducting will hopefully lead to a better understanding of why SL runs badly and what we can do to fix it. I'm thankful people have started to comment and reply, since this needs more and more discussion to get anywhere.


35 minutes ago, Henri Beauchamp said:

The problem might be that, with a middle-range, aging PC, you are using a high-resolution monitor [...] with a high-res monitor an RTX 3070 would fare better, but not by that much either (I'd say you'd get twice the FPS or so in this spot), because the CPU single-core performance would become the bottleneck.

Does SL simply run better on Linux because of Linux's better implementation of OpenGL or is it more of a driver issue?


1 hour ago, TheBenchmarker said:

Does SL simply run better on Linux because of Linux's better implementation of OpenGL or is it more of a driver issue?

In the case of NVIDIA GPUs (with NVIDIA's proprietary drivers), it runs better (about +5% to +20% in frame rates, depending on the rendered scenes) because Linux (the kernel itself) has less overhead and (much) more efficient I/O (far fewer frame-rate ”hiccups” while moving around and crossing sim borders, for example).

In the case of AMD GPUs, it is both because of the above and because of AMD's deficient OpenGL implementation in their own drivers, which is replaced by Mesa under Linux.

Edited by Henri Beauchamp

11 minutes ago, Henri Beauchamp said:

In the case of NVIDIA GPUs (with NVIDIA's proprietary drivers), it runs better (about +5% to +20% in frame rates, depending on the rendered scenes) because Linux (the kernel itself) has less overhead and (much) more efficient I/O (far fewer frame-rate ”hiccups” while moving around and crossing sim borders, for example).

In the case of AMD GPUs, it is both because of the above and because of AMD's deficient OpenGL implementation in their own drivers, which is replaced by Mesa under Linux.

I'll give Linux a try. I know you develop your own Linux viewer and I'm happy to give it a shot. I have noticed that the official SL viewer for Linux seems to be ancient, which is strange because, as you said, most of the open-source libraries SL runs on are developed on Linux.

I'm a Linux user, but for gaming I have always gone back to Windows because of the hoops you usually have to jump through to get games working. To be fair, if Windows is stripped down the way Windows Server Core builds are, both Windows and Linux can be about as streamlined as each other.


6 hours ago, Arluelle said:

I've recorded some videos of places from the destination guide so you can see what to expect from a high-end system these days

On 2/18/2023 at 3:17 PM, Arluelle said:

I can only contribute my specific experience after my upgrade where I went from

5950X + RTX4090 + DDR4 3600 CL16 + RAM drive

to

13900KS + RTX4090 + DDR5 6400 CL32 + RAM drive

and the performance difference is HUGE (in semi-objective, non scientist terms.)

Regrouping your posts, since when reading the second one (cited first here) I was wondering what your ”high end system” was... 😛

I am not surprised you got better frame rates with the 13900KS compared with the 5950X (RAM speed should not make a huge difference, but it does of course contribute): at this level of GPU performance (the RTX 4090 is a monster !), the bottleneck sits entirely at the CPU single-core performance level, and the P-cores of the newer and higher-clocked 13900KS definitely beat the (one generation older and lower-clocked) 5950X cores hands down... You could have tried overclocking the latter, however (even with only two cores overclocked and the viewer's affinity set to those cores), since every percent of clock speed translates into the same percentage of frame rate.
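On the affinity point: under Linux, the viewer can be pinned to the fastest (or overclocked) cores with taskset; the sketch below is just a tiny wrapper doing that, with the core list and viewer path as placeholders:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Pin the viewer to cores 0-3 (substitute the cores you overclocked, or the
    // fastest P-cores). "taskset" is the standard Linux CPU-affinity tool; the
    // viewer path below is only a placeholder.
    let status = Command::new("taskset")
        .args(["-c", "0-3", "/path/to/viewer"])
        .status()?;
    println!("viewer exited with {status}");
    Ok(())
}
```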

This said, and sadly for most of us, ”poor” SLers, not everyone can afford a system such as yours !

Edited by Henri Beauchamp

37 minutes ago, TheBenchmarker said:

To be fair, if Windows is stripped down the way Windows Server Core builds are, both Windows and Linux can be about as streamlined as each other.

Nope... The figures I gave correspond to comparisons done with carefully stripped-down Windows installations (Win7 and Win11), with all the cruft removed and all the ancillary background tasks and services prevented from running (this of course includes Defender, Search, SmartScreen, Security Center, etc.).


3 hours ago, Henri Beauchamp said:

Nope... The figures I gave correspond to comparisons done with carefully stripped-down Windows installations (Win7 and Win11), with all the cruft removed and all the ancillary background tasks and services prevented from running (this of course includes Defender, Search, SmartScreen, Security Center, etc.).

Which is why I gave the example of Windows Server Core, which is a command-line-only version of Windows.


Many people, inside and outside LL, are now working on increasing performance, and it's getting better on all fronts. I'm encouraged. Two years ago, I heard "nothing can be done, user content is too complicated to optimize". Now there's progress. It helped that some of the companies making "metaverse" systems do have a clue and have advanced the technology.

The next big frontier is avatar optimization.


19 hours ago, Henri Beauchamp said:

You could have tried overclocking the latter, however (even with only two cores overclocked and the viewer's affinity set to those cores), since every percent of clock speed translates into the same percentage of frame rate.

I haven't been a huge fan of overclocking in the past. What I did on the 5950X was use the Curve Optimizer to apply per-core undervolting. I spent a lot of time on that in the beginning and was pretty hyped about it, but eventually turned it off completely, since the tricky part is getting it stable in low-load scenarios, which is impossible to test in a straightforward way. It just kept happening that after some longer time, while the PC was idling, it would suddenly reboot, and eventually I didn't want to put up with that anymore. On that topic I'm loving the Intel (I had the 3950X and the 5950X, and I'm completely unbiased ideologically towards either Intel or AMD), since the very basic overclocking/undervolting I've done on it (I'm no expert, this is just from a power user's point of view) is a much better experience than on the 5950X. I just raise the multiplier and lower the adaptive voltage offset, and in my experience it either works after a successful Cinebench test or it fails. I don't have the issue of finding out two or three days later that my settings are unstable.

Also, on a side note, I never run into a usage scenario where the advertised 2-core max frequency would kick in. The max frequency only kicks in when those are the only active cores, and that just doesn't happen with SL running, or in any other scenario relevant to me.

Edited by Arluelle

4 hours ago, Arluelle said:

I haven't been a huge fan of overclocking in the past.

I always have been, in the past, simply because it was possible to get great gains from it. My best overclock achievements have been obtained with an Intel Core Quad Q6600 (3.4GHz OC instead of 2.66GHz stock frequency) and an i5-2500K (4.6GHz locked on all cores instead of 3.3GHz base / 3.7GHz turbo at stock). I also overclocked old 486-SX/DX and Cyrix 6x86/M2/MX processors before (just like you, I am unbiased towards any brand, and just choose the best performances for my money at the time I buy new hardware), but while I did own AMD CPUs (K6-2, K6-2+, K6-III, Athlon XP, Athlon64), none of them provided sufficient overclocking headroom for it to be worth the time spent tuning the knobs and testing the stability.

With the i7-9700K however, I found out that modern Intel CPUs do not have any headroom any more (or so little that it is anecdotal: only 100MHz for my 9700K when the 2500K had 900MHz of headroom over the turbo frequency), and all you could achieve is running all cores at the turbo frequency.

4 hours ago, Arluelle said:

apply per-core undervolting. I spent a lot of time on that in the beginning and was pretty hyped about it, but eventually turned it off completely, since the tricky part is getting it stable in low-load scenarios, which is impossible to test in a straightforward way.

Undervolting won't let you achieve a stable overclock on its own. It is only good when you have won the silicon lottery and your CPU can work stably at the same frequency with a lower Vcore (meaning less heat), and if you can achieve this, then your CPU is also a good candidate for overclocking.

4 hours ago, Arluelle said:

in my experience it either works after a successful Cinebench test or it fails.

Testing the stability of an overclock is much more demanding than just running Cinebench !

I do the following to ensure a stable overclock (everything done under Linux):

  • Running compilation of large programs (the viewer code is a good candidate for this, since it can load all cores at 100% with just one short mono-core ”pause” during its whole compilation) in an infinite loop (I run such loops at night, so it's 8+ hours of compilation). gcc (the GNU compiler) is an excellent unstable CPU crasher ! 😄
  • Running Prime95 in torture mode with ”smallest FFT” and ”small FFT” modes for an hour or so, test runs repeated in both SSE2 and AVX modes (important for Intel), with a variable amount of cores (all cores, 6, 4, 2), to ensure an adequate voltage is provided by the VRMs in various loads conditions.
  • Running BOINC tasks (various projects, various loads, with SSE2, AVX, AVX2, etc) during a few nights: any computation error reported by the project could be the sign of an instability (but must be careful since some project tasks do error out ”naturally”: just look at what other BOINC participants for that task got on that result)
  • As you found out, idling can also be the cause of issues, so I also test an idle PC at night !

With a Zen CPU, I would likely attempt locking all cores at turbo frequency as well, and should it fail, I would try locking only the best cores at the max turbo, and the rest at a slightly lower frequency; this is easy to do with good motherboards BIOS/UEFI (I'm sure you can afford buying one such MB 😜 ), or under Linux, via the /sys/devices/system/cpu/* controls...

Note that locking core frequencies allows you to achieve the best overclocks, because it avoids the Vcore dropouts and overshoots which happen when the frequency (and with it the power consumption) changes abruptly and the VRMs must catch up (there is always a delay, causing transitory voltage variations and possible resulting crashes). It also avoids the latencies seen when a CPU core must re-enable its caches and other parts after being handed a thread to run when it was idling a few milliseconds earlier, thus providing even better performance.

Quote

Also, on a side note, I never run into a usage scenario where the advertised 2-core max frequency would kick in. The max frequency only kicks in when those are the only active cores, and that just doesn't happen with SL running, or in any other scenario relevant to me.

Well, the 13900KS is not a good candidate for any overclock (i.e. running it over the max turbo, or even running all cores at turbo): it is already pushed to its best by Intel at the factory, and no amount of personal effort will ever provide better results than what Intel got... All you can hope for (provided you cool your CPU very, very well) is to run it at a higher base frequency on all cores (and let Intel's algorithm deal with turbo)...

Edited by Henri Beauchamp

8 hours ago, animats said:

The next big frontier is avatar optimization.

It's the avatars that are killing SL. With a good machine almost all sims perform well: you can walk around with all the shaders and shadows on and a decent draw distance. Add in a few avatars and most places start turning into pea soup. Get 30-50 avatars together for an event and you've got a bowl of frozen molasses.

