
Fastest 2023 CPU for SL? (Ryzen 7 7800X3D)



In many recent tests, the Ryzen 7 7800X3D has come out as the fastest gaming CPU of 2023. It achieves this not by having the fastest cores but by having a huge L3 cache, which for most (but not all) games gives this processor an advantage over everything else. Unfortunately there are no benchmarks for SL comparing this processor with something similar that lacks the large L3 cache, so it is hard to know whether SL benefits from the large L3, although I think the indications from my testing are that it probably does. 

I've tested it against a computer running an Intel i9-9900K (which was supposedly the fastest gaming CPU at the start of 2020).

PC1: Intel i9-9900K @ 4.7GHz (single core Geekbench score: 1655), 40GB RAM, AMD Radeon Pro Vega 48 graphics with 8GB vRAM, 1TB SSD, Windows 10

PC2: Ryzen 7 7800X3D @ 4.8GHz (single core Geekbench score: 2700), 32GB RAM, Nvidia 4070 Ti graphics with 12GB vRAM, 2TB SSD, Windows 11

All tests were conducted with SL in an FHD-resolution window with the graphics settings slider set 1 step back from Ultra. Open Hardware Monitor was used to measure CPU load and power.

Test 1: Peak nightclub with view of dancefloor containing 23 avatars (SL frame rate CPU limited)
Test 2: Skybox with only 1 other avatar (SL frame rate GPU limited)

 

Results:
In test 1, the 7800X3D managed 102fps with the CPU consuming 33W, whereas the i9-9900K managed 33fps consuming 64W.

In test 2, the 7800X3D managed 340fps with the CPU consuming 33W, whereas the i9-9900K managed 114fps consuming 78W.


Conclusions:
The comparison isn't like for like, notably the graphics cards are different (AMD versus Nvidia), but I've tried to isolate a situation where the SL frame rate is CPU limited rather than GPU limited (i.e. SL running with a single maxed-out core).

With that caveat, in a CPU limited situation the 7800X3D manages 3 times the frame rate of the i9-9900K while consuming half the power: roughly 3.1 fps per watt (102fps at 33W) versus 0.5 fps per watt (33fps at 64W), or about 6 times the efficiency. 

The pairing of the Ryzen 7 7800X3D with an Nvidia 4070 Ti graphics card seems pretty good for SL at 4K resolution, since at that resolution the CPU and GPU both max out at similar frame rates. Plus, the low single core power consumption of the 7800X3D means the fans remain pretty quiet when running SL.

Edited by filz Camino

18 minutes ago, MarissaOrloff said:

The 9900K is 5 generations old now. Of course something modern is going to perform better.

So you would expect a processor that has only a 60% higher single core Geekbench score to give an SL frame rate that is three times as high?

There's also the point, which I've seen made many times in these forums, that SL does not make use of the improvements in modern CPUs, so there's not much point in having one. Clearly SL doesn't make much use of the ever-increasing number of cores in modern CPUs, but it seems a modern CPU with higher IPC and other tech improvements can nevertheless improve SL performance significantly.

Edited by filz Camino

6 hours ago, filz Camino said:

Test 2: Skybox with only 1 other avatar (SL frame rate GPU limited)

In test 2, the 7800X3D managed 340fps with the CPU consuming 33W, whereas the i9-9900K managed 114fps consuming 78W.

Those are strange results, and you don't mention some crucial details: ALM on or off? Shadows? Anisotropic Filtering? Antialiasing (off, 2x, 4x...)?

 

 


I doubt it makes any real sense to compare CPUs for this when also switching the GPU around.

Comparing the CPUs with the same GPU, drivers, OS, and amount of RAM might be useful, but with a setup like this you introduce too many differences that have nothing to do with the CPU itself.

You have vast system differences:

  • DDR4 vs DDR5 memory
  • 1 TB SSD vs 2 TB SSD (probably also NVMe vs SATA, or different PCIe generations)
  • Different OS
  • Different GPU

So basically you compare systems, not CPUs.


SL doesn't even seem to utilize the CPU particularly well (just like it doesn't utilize the GPU particularly well), so... I guess what we're seeing is the strong single thread performance of the latest and greatest Ryzen or whatevers. That is a surprising uplift though when you consider the utilization issues; I don't see SL taxing my 10th gen i7 particularly.

I think what you are really seeing here is the difference between Nvidia's OpenGL performance and AMD's OpenGL performance, to be honest. People do like to ignore the GPU when it comes to SL, but there's no denying that AMD has issues with OpenGL performance, even in an application like SL which isn't really doing the best job of actually using the GPU resources it's given.

Throw enough GPU at SL and you do see significant performance improvements; Nvidia holds the crown for best SL GPUs. I'm not sure the gaming idea of "CPU or GPU limited" really applies here, since SL is never CPU or GPU limited in modern systems; the performance uplift people see from Nvidia cards could very well be something deeper, related to combined performance.

All of that said: compare the latest and greatest Ryzen to the latest and greatest i9 if we're actually trying to find the best performing SL CPU. It probably is the Ryzen, due to its very high single thread performance, but we don't truly know. You would need to use the same GPU, same memory speed, etc. as well, of course.

 

 

Edited by AmeliaJ08

1 hour ago, Nofunawo said:

Those are strange results, and you don't mention some crucial details: ALM on or off? Shadows? Anisotropic Filtering? Antialiasing (off, 2x, 4x...)?

 

 

As I said, the graphics slider was set 1 step back from Ultra in both cases (all the settings you mention are set by that, and will therefore be the same on both systems).


1 hour ago, Kathrine Jansma said:

I doubt it makes any real sense to compare CPUs for this when also switching the GPU around.

Comparing the CPUs with the same GPU, drivers, OS, and amount of RAM might be useful, but with a setup like this you introduce too many differences that have nothing to do with the CPU itself.

You have vast system differences:

  • DDR4 vs DDR5 memory
  • 1 TB SSD vs 2 TB SSD (probably also NVMe vs SATA, or different PCIe generations)
  • Different OS
  • Different GPU

So basically you compare systems, not CPUs.

Yes, I did put that in there as a caveat, although as you will notice, I've made an effort to test the setups in a situation where the frame rate is CPU limited, not SSD limited nor GPU limited.

By CPU limited, I mean 1 core running constantly at 100% load. In that situation, with SL's single-threaded render loop, it is highly likely that it is the CPU itself that is limiting the frame rate. 

I personally think the only significant source of error here might be that the AMD graphics drivers are taking resources from the core running the render loop while the Nvidia drivers aren't.
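
To make "CPU limited" vs "GPU limited" concrete: the distinction can be checked by timing the two halves of a render loop separately. This is just a simulation sketch I'm adding for illustration; the two functions are hypothetical stand-ins, not viewer code:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical stand-in for the CPU side of a frame (scene traversal,
// culling, issuing draw calls). Here it just spins for ~5 ms.
static void build_and_submit_frame() {
    auto end = Clock::now() + std::chrono::milliseconds(5);
    while (Clock::now() < end) { /* busy CPU work */ }
}

// Hypothetical stand-in for the buffer swap, which blocks until the GPU
// catches up (or vsync fires). Here it just sleeps for ~2 ms.
static void swap_buffers_blocking() {
    std::this_thread::sleep_for(std::chrono::milliseconds(2));
}

int main() {
    for (int frame = 0; frame < 10; ++frame) {
        auto t0 = Clock::now();
        build_and_submit_frame();
        auto t1 = Clock::now();
        swap_buffers_blocking();
        auto t2 = Clock::now();
        double cpu_ms  = std::chrono::duration<double, std::milli>(t1 - t0).count();
        double wait_ms = std::chrono::duration<double, std::milli>(t2 - t1).count();
        // When cpu_ms dominates the frame time, the frame rate is CPU
        // limited; when wait_ms dominates, the GPU (or vsync) is the
        // bottleneck.
        std::printf("frame %2d: cpu %.2f ms, gpu/vsync wait %.2f ms\n",
                    frame, cpu_ms, wait_ms);
    }
}
```

In the nightclub test the CPU half dominates on both machines, which is why I treat it as a CPU benchmark.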

Edited by filz Camino

1 hour ago, AmeliaJ08 said:

SL doesn't even seem to utilize the CPU particularly well (just like it doesn't utilize the GPU particularly well), so... I guess what we're seeing is the strong single thread performance of the latest and greatest Ryzen or whatevers. That is a surprising uplift though when you consider the utilization issues; I don't see SL taxing my 10th gen i7 particularly.

I think what you are really seeing here is the difference between Nvidia's OpenGL performance and AMD's OpenGL performance, to be honest. People do like to ignore the GPU when it comes to SL, but there's no denying that AMD has issues with OpenGL performance, even in an application like SL which isn't really doing the best job of actually using the GPU resources it's given.

Throw enough GPU at SL and you do see significant performance improvements; Nvidia holds the crown for best SL GPUs. I'm not sure the gaming idea of "CPU or GPU limited" really applies here, since SL is never CPU or GPU limited in modern systems; the performance uplift people see from Nvidia cards could very well be something deeper, related to combined performance.

All of that said: compare the latest and greatest Ryzen to the latest and greatest i9 if we're actually trying to find the best performing SL CPU. It probably is the Ryzen, due to its very high single thread performance, but we don't truly know. You would need to use the same GPU, same memory speed, etc. as well, of course.

 

 

As I mentioned, one of the tests measures frame rate when the rate is CPU limited. 

That removes the effect of the GPU hardware on the measurement.

I also did a test where the frame rate is GPU limited, and in fact the Nvidia GPU is faster than the AMD GPU by the same factor as it is in benchmarks.


OK, then the 7800X3D is not a game changer.

Resolution: FHD

CPU: 12th Gen Intel(R) Core(TM) i5-12600K (3686.4 MHz)
Memory: 32560 MB (Used: 1258 MB)
OS Version: Microsoft Windows 11 64-bit (Build 22621.2428)
Graphics Card: NVIDIA GeForce RTX 4060 Ti/PCIe/SSE2
Graphics Card Memory: 8188 MB

Quality: High, Skybox - 2 AVAs: 420 fps - CPU 45W - GPU 96W

Quality: Ultra, Skybox - 2 AVAs: 370 fps - CPU 45W - GPU 96W

 

 

 


29 minutes ago, Nofunawo said:

OK, then the 7800X3D is not a game changer.

Resolution: FHD

CPU: 12th Gen Intel(R) Core(TM) i5-12600K (3686.4 MHz)
Memory: 32560 MB (Used: 1258 MB)
OS Version: Microsoft Windows 11 64-bit (Build 22621.2428)
Graphics Card: NVIDIA GeForce RTX 4060 Ti/PCIe/SSE2
Graphics Card Memory: 8188 MB

Quality: High, Skybox - 2 AVAs: 420 fps - CPU 45W - GPU 96W

Quality: Ultra, Skybox - 2 AVAs: 370 fps - CPU 45W - GPU 96W

 

 

 

OK, that's interesting, although from the experiments I've done, I think you are probably GPU limited in that (skybox) test.

I'd be very interested to hear what you get with that hardware if you TP into Peak nightclub and stand at the corner of the dance area so you have all the avatars in view. At the moment there are 29 avatars there, and in FHD with graphics set to High I'm getting 121fps.


On Ultra in my skybox... 300fps with restrictions off.

CPU: 12th Gen Intel(R) Core(TM) i7-12800HX (2304 MHz)
Memory: 32436 MB (Used: 1668 MB)
Concurrency: 24
OS Version: Microsoft Windows 11 64-bit (Build 22621.2428)
Graphics Card Vendor: NVIDIA Corporation
Graphics Card: NVIDIA GeForce RTX 3060 Laptop GPU/PCIe/SSE2
Graphics Card Memory: 6144 MB


Update

It seems we have an answer to the large L3 cache debate. Nofunawo and I just stood in the same location in Peak with the same settings, and got virtually identical frame rates. 

Since we have very similar Nvidia graphics cards, and the single core Geekbench score of his Intel processor is more or less the same as the 7800X3D's, I think we can conclude that the large L3 cache in the 7800X3D probably does not make any difference in SL.


So, we did the test in the club. Same settings - same view. Both around 80 fps.

Conclusions:

  • As expected, since the 12600K and the 7800X3D are twins when it comes to single core power
  • The extra cache is no benefit in SL
  • Because of the CPU limit, the 4070 Ti doesn't perform better than the 4060 Ti

Now back to my normally activated limiter. Save energy - save humanity! 😀

 

 


Thanks for running through that.  I did suspect the cache would still be missing a lot simply because the Second Life Viewer main render loop is a monstrous thing.  Come back with 2GB of on-die cache and try again. 😉


The 3D V-cache would not bring any benefit to SL viewers. Viewers are already small enough to fit all the critical parts of their code into the L3 cache of the non-3D parts, and those parts boost higher in frequency (the SL viewer is very sensitive to single-core CPU performance).

I myself bought a Ryzen 7900X for my new main PC, which replaced one with a 9700K. I'm not interested in a 7900X3D; the 7900X performs beautifully in SL, and is a beast for compiling large programs. E.g. the Cool VL Viewer compiles in less than 3 minutes on it, which is a huge time saver compared to the 10 minutes it took on the 9700K. Even CEF (the embedded browser library used for the viewer web plugin) builds from scratch (with downloads, git "deltas" etc., which take about 35 minutes to complete) in less than 1h35 (against 2h10 for the 9700K)...

Edited by Henri Beauchamp

11 hours ago, Henri Beauchamp said:

The 3D V-cache would not bring any benefit to SL viewers. Viewers are already small enough to fit all the critical parts of their code into the L3 cache of the non-3D parts, and those parts boost higher in frequency (the SL viewer is very sensitive to single-core CPU performance).

I myself bought a Ryzen 7900X for my new main PC, which replaced one with a 9700K. I'm not interested in a 7900X3D; the 7900X performs beautifully in SL, and is a beast for compiling large programs. E.g. the Cool VL Viewer compiles in less than 3 minutes on it, which is a huge time saver compared to the 10 minutes it took on the 9700K. Even CEF (the embedded browser library used for the viewer web plugin) builds from scratch (with downloads, git "deltas" etc., which take about 35 minutes to complete) in less than 1h35 (against 2h10 for the 9700K)...

Ah, so now it's the opposite of what I remember from running the Second Life Viewer on the ICE I had access to. It was functionally equivalent to an Intel W3550 and was reporting around 40% cache misses when running the Second Life Viewer on Windows XP, and around 44% when running it on Linux. Looking that CPU up, I see it had 8MB of "Intel Smart Cache". I currently run an i9-13900K, which apparently has 36MB of "Intel Smart Cache"; I see 32MB of L2 cache listed too. Very different environment. Foolish of me to assume the contemporary CPU cache is still too small. I no longer live in a design lab and don't know how to get CPU cache hit/miss statistics on my home computer. I may try to look that up.


20 hours ago, Ardy Lay said:

Ah, so now it's the opposite of what I remember from running the Second Life Viewer on the ICE I had access to. It was functionally equivalent to an Intel W3550 and was reporting around 40% cache misses when running the Second Life Viewer on Windows XP, and around 44% when running it on Linux. Looking that CPU up, I see it had 8MB of "Intel Smart Cache". I currently run an i9-13900K, which apparently has 36MB of "Intel Smart Cache"; I see 32MB of L2 cache listed too. Very different environment. Foolish of me to assume the contemporary CPU cache is still too small. I no longer live in a design lab and don't know how to get CPU cache hit/miss statistics on my home computer. I may try to look that up.

Cache misses will happen, especially on data, but for instructions it is likely that all the critical parts of the viewer code (the parts that run at every frame) will be kept in the CPU caches: they will of course migrate between the L1, L2 and L3 caches (especially during context switching by the OS scheduler), but today's L3 caches are so large that the probability of seeing this critical code evicted from them is very slim...

Also, be careful: L1 and L2 caches are per CPU (full, i.e. non-virtual) core (so the total quoted amount is to be divided by the number of cores, and by two again for the actual per-core size of the L1 instruction/data caches), while L3 is shared (though maybe split in two, e.g. for AMD's Zen 4 CCDs, with 2x32MB for non-3D parts and 32MB/96MB for 3D ones); Intel recently made it even more confusing with their "smart cache", which may allow cores to use some L2 cache from other inactive cores...
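
If anyone wants to see those cache levels for themselves, here is a rough pointer-chasing sketch (my own illustration, nothing to do with viewer code): it walks a single random cycle through a buffer, so every load depends on the previous one and the prefetcher cannot hide the latency. The nanoseconds per hop step up each time the working set outgrows a cache level; on a 96MB 3D V-cache part, the L3-to-DRAM step would simply happen at a much larger size.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Nanoseconds per dependent load when chasing a random cycle through
// n_elems 4-byte elements (working set = n_elems * 4 bytes).
static double ns_per_hop(std::size_t n_elems) {
    // Build a random single-cycle permutation: following next[] from any
    // start visits every element, so the working set is the whole buffer.
    std::vector<std::uint32_t> order(n_elems);
    std::iota(order.begin(), order.end(), 0u);
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    std::vector<std::uint32_t> next(n_elems);
    for (std::size_t i = 0; i + 1 < n_elems; ++i)
        next[order[i]] = order[i + 1];
    next[order[n_elems - 1]] = order[0];    // close the cycle

    const std::size_t hops = 20'000'000;
    volatile std::uint32_t idx = order[0];  // volatile: keep the loads alive
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < hops; ++i)
        idx = next[idx];
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
}

int main() {
    // Working sets from 256 KiB (L2-resident) up to 512 MiB (DRAM).
    for (std::size_t kib = 256; kib <= 512 * 1024; kib *= 2)
        std::printf("%7zu KiB: %6.2f ns/hop\n", kib, ns_per_hop(kib * 256));
}
```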

It would be interesting to do some benchmarking on a Zen 4 with 3D V-cache (and two CCDs) and see whether assigning the viewer main thread to a (full = 2 SMT virtual cores) core on the 3D CCD gives better results than assigning it to a core on the other CCD (with the smaller L3 cache)... This can easily be done with the Cool VL Viewer, using the "MainThreadCPUAffinity" setting.
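
For viewers without such a setting, the same experiment can be approximated by pinning a thread (or the whole process) yourself. A minimal Windows sketch with SetThreadAffinityMask follows; note that which logical processors map to which CCD varies between parts, so the masks below are only an assumption to be checked against your actual topology:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Example mask: logical processors 0-15, which on many dual-CCD Zen 4
    // parts is the first CCD. Verify with a topology tool before trusting it.
    DWORD_PTR mask = 0xFFFF;
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), mask);
    if (previous == 0) {
        std::fprintf(stderr, "SetThreadAffinityMask failed: %lu\n",
                     GetLastError());
        return 1;
    }
    std::printf("Thread pinned; previous mask was 0x%llx\n",
                (unsigned long long)previous);
    // ... run the frame rate test here, then repeat with the other CCD's
    // mask (e.g. 0xFFFF0000) and compare ...
    return 0;
}
```

The same thing can be done without any code via Task Manager's "Set affinity" on the running viewer process, at the cost of pinning every viewer thread rather than just the main one.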

Edited by Henri Beauchamp

  • 2 weeks later...

Very interesting discussion, and something I have been thinking about recently. I am considering pairing a 7800X3D with a standard 4070 12GB if I build something new. I am wondering if the 7950X3D might offer any improvements with SL specifically. For standard gaming it doesn't seem to in tests, but SL is hard to predict, since it is generally unoptimized and also is, I believe, more processor dependent than a lot of games. I get a lot of slow and janky rendering in SL. I currently have a 3060 12GB, but the rest of my current tower's hardware is pretty dated.


On 11/7/2023 at 3:11 PM, Callieleaf said:

I am wondering if the 7950X3D might offer any improvements with SL specifically

Yeah, but just because of the faster single core speed; it's kinda brute forcing better performance. No other attribute of that CPU would really impact performance any more than the CPU simply being faster would.

And I would only recommend getting a CPU like that for SL if the only thing you use your PC for is SL and money is no object. It's really not going to help that much in a way that justifies the expense to most people. SL doesn't really care what hardware you have; it performs awfully on anything, and higher end hardware just has it perform slightly less awfully.


SL isn't performing awfully at all. Just saying.
Differently than most games, yes. That is because there is no prerendering, because of all the user-created content.
There will always be rezz times. It is how the system is built, and it gives everybody almost unlimited possibilities.

But compared to the early days, the speed is so high at times, I'm almost warped into the future.   😁

That said, IMHO the Internet connection and the graphics card are far more important.
Modern CPUs don't have their hands full mastering SL.


1 hour ago, gwynchisholm said:

SL doesn't really care what hardware you have; it performs awfully on anything, and higher end hardware just has it perform slightly less awfully.

Yes, some "problems" of SL, caused by "WorksAsDesigned", can't be solved by any hardware....

BUT with a good hardware basis, SL doesn't show a lot of the issues people are always complaining about. I haven't had a crash in years, and I can also move at events... 

Of course a good internet connection is also, maybe more, important for performance and stability, starting with the ISP (IPv4 vs IPv6, peering...) and ending with using a LAN rather than a WLAN/WiFi connection. 

 

Edited by Nofunawo


