AMD Ryzen 7 5800X3D: Will SL take advantage of the extra 64 MB of 3D cache?


You are about to reply to a thread that has been inactive for 372 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

Nope, not even a little. SL doesn't take advantage of any modern technology in computers, really. Multiple cores, SLI, CrossFire... It doesn't even take advantage of multiple network connections like even ancient downloading software does.

Stuff like this is only worth it if you also play other games.


  • 2 months later...
On 4/15/2022 at 12:53 PM, AgmortenDK said:

I just read the first review of the new AMD CPU. In games it will give a boost. I was wondering if it also boosts SL.

https://hothardware.com/reviews/amd-ryzen-7-5800x3d-review-and-benchmarks

 

I'm curious about this too... SL likes strong single-core performance, so I guess the most suitable AMD CPUs would be something like the 5800X or 5900X, which both have the highest boost clocks in single-threaded workloads and very strong IPC. The 5900X has the extra advantage of more L3 cache per core, which helps keep more preloaded data close to the cores and away from the slower memory sticks; that means more stable FPS and less stutter.

Of course, all of these CPUs are kind of overkill for Second Life alone, and it's a mistake in the first place to build a machine specifically for SL. On the plus side, with these CPUs you'll be able to stream and do side tasks with almost no impact while logged in, so it's only a waste of money if it's money you can't really spare.

Now, the 5800X3D is a weird mix. In theory it should do miracles: in games like GTA 5 and Flight Simulator, which are vast rendered worlds with a lot of multiplayer, it really shines, beating the CPUs above that clock higher and were considered among the best gaming CPUs last year. The huge L3 cache makes the experience butter-smooth, with gains of up to 50% in lows and 15-40% in average FPS depending on the game. But SL likes both L3 cache and, especially, high clocks, and high clocks aren't the best feature of the 5800X3D, so I'm not 100% sure it will improve the SL experience. I'd personally go for the 5900X, which is cheaper and has 2/3 of the 5800X3D's cache, so it's not missing too much in that area, while having much higher single-core boost clocks that SL loves. If you're on a tight budget, better to go for the 5800X.

If you don't have fast internet, though, SL won't care much about your computer hardware; it will still lag and freeze. I have an overkill-spec PC, and although I see huge max FPS, I still get lag and freezes in full areas or when doing sails/races in heavy sims because of my ADSL internet.
I'm waiting for 200 Mbps fiber to hopefully enjoy a much smoother experience in all instances.

 

Edited by Gio1984Vr
syntax corrections
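The cache argument above is easy to demonstrate outside SL. Here is a minimal Python sketch (purely illustrative, nothing SL-specific): both runs do identical work, but the cache-friendly visit order typically finishes noticeably faster than the shuffled one, because far fewer accesses have to go out to main memory.

```python
import random
import time

def traverse(order, data):
    """Sum the array elements in the given visit order."""
    total = 0
    for i in order:
        total += data[i]
    return total

N = 1_000_000
data = list(range(N))

seq_order = list(range(N))   # cache-friendly: neighbouring objects were allocated together
rand_order = seq_order[:]
random.shuffle(rand_order)   # cache-hostile: most accesses land on a cold cache line

t0 = time.perf_counter(); s1 = traverse(seq_order, data); t_seq = time.perf_counter() - t0
t0 = time.perf_counter(); s2 = traverse(rand_order, data); t_rand = time.perf_counter() - t0

assert s1 == s2              # identical work, different memory-access pattern
print(f"sequential: {t_seq:.3f}s  shuffled: {t_rand:.3f}s")
```

A bigger L3 does for real workloads roughly what the sequential order does here: it raises the fraction of accesses that never have to touch the memory sticks.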

  • 10 months later...
On 4/21/2022 at 9:57 PM, SarahKB7 Koskinen said:

SL runs off an ancient Havok based graphics engine and only uses a single core of a CPU. Buying a new 8 core AMD Ryzen 5800X3D CPU just for SL is extreme overkill and a waste of money.

For starters, Havok is not a graphics engine.

I am not sure why you chose to come on the forums and spread disinformation like this, which has less value than methane gas released by cows.

Havok is a state-of-the-art physics engine, used in some of the best triple-A titles. We could safely say that, in general, Havok is the best physics engine available.

You also need to note that being able to use Havok implies paying a very hefty licensing fee. I am not sure how much it is today, but in 2010 a Havok license cost north of half a million US dollars a year.

You know nothing of how SL works, you know nothing of how computers work, and yet you have the guts to come on the forums and talk BS in a thread about whether 3D L3 cache improves SL client performance.


On 4/15/2022 at 11:53 AM, AgmortenDK said:

I just read the first review of the new AMD CPU. In games it will give a boost. I was wondering if it also boosts SL.

https://hothardware.com/reviews/amd-ryzen-7-5800x3d-review-and-benchmarks

 

Hello

I know it is over one year late, but I'd like to know if you did commit to an AMD 3D CPU, whether you had any chance to compare it to other contemporary CPUs, and with what results.

That being said, if you still have not made a computer upgrade, I will say the following in the hope it can be of use to you:

1. First of all, when building ANY system, you must first assess what resolution you wish to run, so the display dictates everything you choose when building a PC dedicated to gaming. (If building a workstation, other factors become the priority and the display is irrelevant to a certain extent, as you could very well run a multi-GPU workstation with one GPU solely dedicated to driving your monitor setup.)

The best results come at 1080p, which is starting to become obsolete, and I do not find the cost of going X3D justifiable for a 1080p experience, as SL is not competitive gaming and you cannot, e.g., run at 240Hz. As a matter of fact, SL's true FPS is capped at around 40, even if you see more reported.

Ideally, 1440p is where you can justify the purchase of a more expensive CPU like the 7800X3D for SL or gaming in general.

At 4K, which is what I am trying to figure out myself, it is a mystery: I cannot find any documentation on whether, for single-threaded performance on OpenGL engines, X3D at 4K will cash in extra performance or not. In theory you should have better lows...

 

There is another factor to consider about SL that is different from any other game, regardless of the engine it runs on: SL has an editable runtime. This means that you, together with any number of users on the same simulators, can edit the game while running it. This is the UNIQUE, fantastic feature of SL, but it has its drawbacks...

 

In other games, before you boot into the scene you will play in, you have a preload time where all the textures and assets of the game are loaded into your memory and frame buffer.

In Second Life this happens ONLY after you, to put it simply, either spawn or teleport into a region, which is when you experience what we know as rezzing and rez time.

 

So to rez faster you need lots of internet bandwidth, and you should set your viewer to maximum viewer bandwidth so it can grab server-baked assets faster. At the same time you want a drive with very quick IOPS and read speeds (an M.2 drive), along with a CPU that is very quick at unzipping your cached assets into the world you spawned in.
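That CPU-side "unzipping" step can be sketched as follows. This is illustrative only: zlib here is a hypothetical stand-in for whatever codec a viewer's cache actually uses, and the asset bytes are fabricated.

```python
import os
import time
import zlib

# Hypothetical stand-in for one cached asset: compressed bytes on disk.
raw = os.urandom(512) * 4096                 # ~2 MB of pseudo-asset data
cached = zlib.compress(raw, level=6)         # what would sit in the viewer's disk cache

t0 = time.perf_counter()
restored = zlib.decompress(cached)           # the CPU-bound step during rezzing
dt = time.perf_counter() - t0

assert restored == raw
print(f"decompressed {len(raw) / 1e6:.1f} MB in {dt * 1000:.2f} ms")
```

Multiply that per-asset cost by the hundreds of assets in a busy region and the CPU's decompression speed becomes a visible part of rez time.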

The system memory, the GPU memory and your non-volatile storage will all be put to work during rez time.

Once your assets and textures are loaded onto your GPU, it is the combination of CPU and GPU that has to handle the massive amount of geometry you deal with in SL. You can easily deal with tens of millions of triangles in a club where all avatars are fully customized, dancing, and live streaming is ongoing.

 

So in my opinion, there are two scenarios where performance needs to be considered:

1. How long does it take to rez?

2. How well are you interacting with anything in-world once it has all rezzed? How many FPS?

 

Conclusion.

AMD 3D cache is as if you were running your RAM roughly five times faster: e.g. 4800 memory would behave as if it ran at 4800×5, and 6000 memory as if at 6000×5. That is because CPU cache is always faster than RAM, no matter what, and it is usually about five times faster.
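The "five times faster" figure is a rough rule of thumb, but the mechanism can be shown with the standard average-memory-access-time formula. The latencies below are made-up illustrative numbers, not measurements of any specific CPU; the point is that a bigger L3 mostly buys you a higher hit rate:

```python
def avg_access_ns(hit_rate, cache_ns=10.0, ram_ns=80.0):
    """Average memory access time for a given L3 hit rate (illustrative latencies)."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * ram_ns

# Tripling the L3 (32 MB -> 96 MB on the 5800X3D) mainly raises the hit rate:
for hit in (0.80, 0.90, 0.97):
    print(f"hit rate {hit:.0%}: {avg_access_ns(hit):.1f} ns average")
```

Even a modest jump in hit rate cuts the average access time sharply, which is why the X3D parts win big in games whose working sets just barely overflow a normal L3.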

At lower resolutions, your CPU/GPU/memory interactions are many small ones, while at, e.g., 4K there are fewer but bigger ones. Therefore CPU clocks and instructions per clock matter less, ONLY AND SOLELY WHEN THEY ARE NOT LOW ENOUGH TO bottleneck your graphics card. That is to say, if you run a 7700K Intel CPU with a 4090, the generational gap is so big that you will suffer a lot in FPS and cam-navigation performance, because the GPU is always waiting on a CPU that can't deliver in time, even at 4K.

 

So once you have a state-of-the-art powerful GPU and a state-of-the-art fast SSD, when choosing what CPU and platform to run, it all comes down to: HOW LONG DO YOU WANT your system to last and still offer decent performance?

So, by logic, buying a 3D-cache CPU might not give you benefits compared to, e.g., a 7700X at 4K today. But as time passes, as you upgrade your GPU and SL becomes more and more complex, you might pull one generation more of CPU performance out of the X3D CPU than the non-X3D; in the sense that if you are getting, e.g., 60 FPS today, you might still get 60 FPS for one more year out of a larger-cache CPU like these AMD 3D-cached ones.

If SL is all you play, I would set up a nice big 1440p smart TV as a monitor and calibrate it, get a nice, very quick M.2 SSD with the lowest latency (Samsung, WD and Kingston Fury Gen 5.0 are the best at this), and then buy an 8-core CPU with its proper EXPO RAM. Just aim at a GPU that has at least 16 GB of memory; I would even recommend buying a used 3090 or 3090 Ti because of the memory size. The reason is that if your GPU starts to run out of memory, Windows will start paging virtual memory, which results in much, much slower performance and eventually crashing, as this kind of virtual memory is harder to get freed.

You have seen this many times: when a friend tells you "I think I am going to crash soon", that is when their system has started paging, and the paging is getting saturated and not freed in time to run the game, resulting in a hard crash.

 

Bottom line: for a stable SL, get overkill SSD, system memory and GPU; for squeezing out more FPS, buy a good recent mid-tier 8-core CPU that is not too expensive, and every 2-3 years upgrade one of those components when benchmarks show the replacement is at least twice as fast.

For rezzing, I think the NON-X3D CPUs will give you a much faster rez time, because their decompression performance (unzipping the cache at run time) is faster.

At run time, so during gameplay, X3D CPUs might not give you much better FPS, but rather a better optimization of how much memory you are loading and unloading in your RAM and GPU; I cannot know for sure, and it is only an educated guess. I just think that the 7900X, costing pretty much the same as a 7800X3D, is a better buy, as it will compensate for its lower cache with its single-core clocks and much faster decompression.

 

Edited by COCA Yven

I mean, it'll take advantage of it (its cache), but I doubt you'll notice any difference. For what it's worth, the "3D" is just a reference to the physical structure of the chip: the extra cache die is stacked vertically on top of the compute die.

SL isn't exactly great at using a CPU and certainly doesn't take full advantage of anything remotely modern. It'll run about as well on just about every mid-range and above CPU made in the past decade with precious little difference between them. The latest and greatest CPU will perform more or less the same as the latest and greatest of many years prior.

GPU is what counts most if the basic requirement of a moderately fast CPU is covered and even then it's just a case of faster GPUs being able to brute-force performance when rendering dated Second Life OpenGL.

 


11 hours ago, AmeliaJ08 said:

GPU is what counts most if the basic requirement of a moderately fast CPU is covered and even then it's just a case of faster GPUs being able to brute-force performance when rendering dated Second Life OpenGL.

You are mistaken. Currently, with its mono-threaded OpenGL renderer, what counts most for frame rates, once you have a powerful enough GPU (GTX 1070 or better), is the mono-thread performance of the CPU. SL is "light" on the GPU compared with AAA games, but it is very heavy on the CPU (or, more exactly, on the only CPU core it uses to render, where modern AAA games use several cores for the same task); for example, increasing the frequency of the latter translates into an almost proportional increase in your frame rates.

Things will change, eventually, when a multi-threaded Vulkan renderer is implemented...

This also means that non-3D variants of Ryzen CPUs will actually perform better in SL, since they have the same IPC as, and a higher core frequency than, their 3D counterparts, while still being able to keep the time-critical code of the SL viewers in their caches, which is small in size: the full executable program size is currently between 26 MB (Windows build of the Cool VL Viewer; 49 MB for the Linux build) and 87 MB (Linux build of Firestorm; 51 MB for their Windows build), and only a small part of that size actually represents the renderer and other code called every frame (which is the part that will be kept in the CPU caches).

Edited by Henri Beauchamp
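The "almost proportional" behaviour described above follows from a simple pipelined frame-time model: the slower of the two sides sets the pace. A toy sketch (the millisecond figures are invented for illustration, not measured from any viewer):

```python
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """Frame rate when CPU and GPU work overlap: the slower side sets the pace."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# CPU-bound case (typical for SL's mono-threaded renderer):
base       = fps(cpu_ms_per_frame=20.0, gpu_ms_per_frame=5.0)   # 50.0 FPS
faster_cpu = fps(cpu_ms_per_frame=10.0, gpu_ms_per_frame=5.0)   # 100.0 FPS
faster_gpu = fps(cpu_ms_per_frame=20.0, gpu_ms_per_frame=2.5)   # still 50.0 FPS
```

In the CPU-bound case, halving the CPU time per frame doubles the FPS, while doubling GPU speed changes nothing, which is why a higher-clocked core matters more for SL than a bigger graphics card, once the GPU is "good enough".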

Given all this technical gobbledy-*****, which I use simply as a placeholder for "everything I don't know and don't have the bandwidth to master", what would be the best "rig" for SL on the value scale at the current time? I'm not asking for indiscriminately piling on horsepower across the board, just a reasonable build that gives the best bang for the buck, from a moderate "gaming builds" perspective. I'm not in the market for that, but I think spec'ing a machine like that would give a good view into where to focus attention (and money) and where it delivers diminishing returns.

OMG I got flagged for the latter part of gobbledy-STUFF because it's an asian insult. Good on SL.

Edited by Thecla

55 minutes ago, Thecla said:

Given all this technical gobbledy-*****, which I use simply as a placeholder for "everything I don't know and don't have the bandwidth to master", what would be the best "rig" for SL on the value scale at the current time? I'm not asking for indiscriminately piling on horsepower across the board, just a reasonable build that gives the best bang for the buck, from a moderate "gaming builds" perspective. I'm not in the market for that, but I think spec'ing a machine like that would give a good view into where to focus attention (and money) and where it delivers diminishing returns.

OMG I got flagged for the latter part of gobbledy-STUFF because it's an asian insult. Good on SL.

Any contemporary midrange gaming PC. Which right now is the i5-13400F on a B660 or B760 motherboard, with 16GB of 3600MHz CL16/18 DDR4, or if you go for a DDR5 board, 32GB of anything 5800MHz or better (DDR5 in low capacity is pretty poor value, and 32GB kits are the most common).

Any midrange current GPU, so either the Radeon RX 6600, 6600 XT or 6650 XT; on the NVIDIA side the RTX 3060, 3060 Ti or 4060 Ti, though the 4070 Ti is also proving to be very popular. There's a lot of debate over the VRAM limitations of the 4000 series, which brings the third player into question: the Intel Arc A770 16GB is a great choice IF most of what you do is somewhat "modern". Arc has issues with DX9/10 titles, so it's really only a good choice if your use case is mostly DX11/12, Vulkan or OpenGL.

Case, PSU, heatsink and drives depend on a large variety of factors, though SSDs are extremely cheap now, and a 1TB+ NVMe SSD is the default recommendation since a decent one is $40.

AM5 isn't recommended; it's very early into the platform, and it's hot, expensive and really not that much better than the Ryzen 5000 offerings on AM4. Future generations may change that.

tl;dr: current i5, midrange GPU, can't go wrong for most stuff.

edit: basically this

https://pcpartpicker.com/list/VZJ8W4

If you brought that to PC-oriented communities and asked "is this good", most people are gonna say yeah. It'll likely spark endless debate on specific part choices, but not on the hardware combination of the 13400, 6650 XT, 16GB of DDR4 and a 1TB NVMe.

That is the current average PC build recommendation. All you do to go lower is swap to the i3-12100F and usually a used GPU, because new entry-level is poor value; to go higher you do the i7/i9 with a better GPU, sometimes swapping to Z690/790 for better overclocking support.

Edited by gwynchisholm

For what it's worth, Firestorm is currently using 41% of my GPU and 11% of my CPU at my home region (AMD Ryzen 9 7950X3D / NVIDIA GeForce RTX 3080 Ti), getting about 450 FPS.

Of course, this will tank depending on what region you're on. The main issue with Second Life is that it's all custom content. Lots and lots of custom (often very unoptimised) content is really hard to render efficiently. Games that you play will have huge amounts of precalculated baking, geometry shaders to do the heavy lifting, etc.

However, if you run your viewer at 4k resolution you're certainly going to make more use of your GPU.

 

 


I was the one who started this post.

I upgraded to a Ryzen 7 5800X3D. The plan was to run some tests to compare, but I forgot to do them. Still, I don't see any change in speed under normal use!

Then I upgraded from a GTX 1080 to an RTX 3070 Ti graphics card; it was better, but not a big difference! But when I changed my CPU from 3.2 GHz to 4.7 GHz, I saw a big change in speed. SL only uses a single core, so when Casper Warden sees only 11% CPU usage, it is because SL only uses one core; if you measured only that core, it would be at 100%.

My simple conclusion is that raw CPU GHz is what makes the biggest change.


20 minutes ago, Aishagain said:

Multi-thread maybe but still single core only.

That sentence sounds strange, but has some truth in it.

Modern viewers can use extra cores to run threads that decode textures and bind OpenGL textures. Current NVIDIA and AMD OpenGL drivers also use multiple cores for rendering. But the main rendering loop of the viewer still runs on a single core.

So more CPU cores help to rezz things faster, but do not help all that much with getting more FPS once textures are loaded.
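That division of labour (parallel rezzing work, serial render loop) can be sketched with a worker pool. This is illustrative only: zlib decompression stands in for the real texture decode, and the "textures" are fabricated blobs.

```python
import concurrent.futures
import os
import zlib

def decode_texture(blob):
    """Stand-in for the real decode step: CPU work that can run off the main thread."""
    return zlib.decompress(blob)

# Hypothetical fetched cache entries (compressed blobs), 32 of them.
textures = [zlib.compress(os.urandom(4096)) for _ in range(32)]

# Extra cores chew through decodes in parallel, as modern viewers do while rezzing...
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    decoded = list(pool.map(decode_texture, textures))

# ...while the single main thread keeps running the render loop untouched.
assert all(len(tex) == 4096 for tex in decoded)
```

Adding workers speeds up the rezzing phase almost linearly, but once everything is decoded, the frame rate is back in the hands of that one main-loop core.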


3 hours ago, Casper Warden said:

I believe the viewers have had multithreaded rendering for quite some time.

It’s more like there are certain parts of the modern viewers that can use more cores, but the cpu bound parts of graphical rendering and lighting are still single core bound.

If LL could manage to get their engine to render multi core lighting alone that would dramatically increase performance. 


20 hours ago, gwynchisholm said:

the cpu bound parts of graphical rendering and lighting are still single core bound

Hm, I remember Firestorm in 2010 had the menu option Advanced -> Rendering -> Run Multiple Threads

And I remember some chat about deferred rendering at the same time..

(in before "ok boomer")

Edit: I see now this only referred to background threads; the menu option was a bit misleading, I suppose. But... yeah, multi-threaded deferred rendering is absolutely possible, maybe one day :)

Edited by Casper Warden

Y'know, this all still leaves me pondering... yes, the render process uses (or CAN use) multiple threads, and in my case does, but that still leaves me unconvinced that these other referenced threads are on other cores.

I have carefully, nay, obsessively, watched Task Manager on my 12 core CPU and I'm damned if I see any activity outside of almost transient spikes of activity in any other than the first core, which is running at or near 100% all the time I am using the viewer.

What am I missing?

ETA: I am a Firestorm user...I detest the UI and functionality of the Default viewer.  I only use it when I need to convince LL of a bug.

Edited by Aishagain
Full disclosure of my FS habit

14 hours ago, Aishagain said:

I have carefully, nay, obsessively, watched Task Manager on my 12 core CPU and I'm damned if I see any activity outside of almost transient spikes of activity in any other than the first core, which is running at or near 100% all the time I am using the viewer.

What am I missing?

The renderer is mono-threaded on the viewer side, and also runs in the main thread (where all input/output processing happens, among other things) so it will take up only one CPU core at most (and only a single virtual core on SMT CPUs). However, modern OpenGL graphics drivers can use a few more threads (meaning the rendering itself will be done on several threads despite the mono-threaded C++ code in the viewer), so provided that you enabled multi-threading in your driver settings, you should see the viewer use 120 to 200% of a single core (i.e. up to two cores loaded at 100%, virtual ones for SMT processors), depending on the scene being rendered.

Then while rezzing (after login, after a TP, or while moving or camming around), with modern viewers you should notice an increase in the number of cores in use by other threads of the viewer (fetching threads, decoding threads, and even GL image creation threads and cache write threads for viewers implementing them). A viewer such as the Cool VL Viewer can easily saturate an 8-core non-SMT CPU while rezzing (it easily saturates my Core i7-9700K-based PC); however, it no longer saturates my shiny new Ryzen 7900X (12 cores with SMT = 24 "threads"), or only for extremely short periods of a few seconds (1 to 5) at most (everything rezzes so fast that the work is already done before you notice it happening).

Edited by Henri Beauchamp

On 7/6/2023 at 7:54 PM, gwynchisholm said:

It’s more like there are certain parts of the modern viewers that can use more cores, but the cpu bound parts of graphical rendering and lighting are still single core bound.

If LL could manage to get their engine to render multi core lighting alone that would dramatically increase performance. 

It's honestly crazy that the renderer hasn't had a re-write to properly utilise all available hardware.

It's something significant that could be done without the usual excuse of breaking decades of content. I know the answer to "does SL use more than one core" is technically "yes" but it's also not as simple as that and so much more could be done.

 

 


On 7/8/2023 at 1:54 PM, AmeliaJ08 said:

It's honestly crazy that the renderer hasn't had a re-write to properly utilise all available hardware.

It is a tradeoff, as all engineering is.

It is not a matter of "re-writing" the current code to use hardware better through gradual improvement. Switching to a different API like Vulkan is more or less a complete change of rendering architecture, so you need to learn new paradigms, rediscover basic errors with the new API, and so on.

In addition, you have an existing userbase on a somewhat aging hardware setup. Going straight for shiny new stuff might leave a lot of paying customers behind. 

So either you invest heavily, get developers who know the new APIs, and spend person-months or years rewriting the whole engine, with the associated risk; or you try to make small improvements to ease the pain.

 


