
True or False: CPU and RAM specs for SL specifically.




I am guessing I am trying to narrow down a conversation that is as old as SL itself. However, I am trying to find information that is current, or at least not 100 days old. Some PC issues have required me to replace the motherboard in my PC, so I am thinking of using this problem as an excuse to upgrade some things. I am looking for guidance, or at least some clarification on things I have read over the years. Just really looking for some current information and opinions.

 

True or false: SL relies more on your CPU than your GPU? I hear this a lot, and my experience has sort of confirmed this thought. If I am lagging badly, I can look at Task Manager and see that my GPU (GTX 1060) is doing fine while my poor little CPU (Ryzen 3 1300X) is howling in pain. Assuming your GPU is average to optimal, does SL rely heavily on your CPU?

True or False: SL cannot use more than one core at a time, so four good CPU cores are better than eight average ones? I have heard this repeatedly, and it sort of sounds "wrong" but not impossible where SL is concerned. Multi-core CPUs were certainly not a big thing when the foundation blocks of SL were put in place. I am looking at bumping up to a Ryzen 5 series and was wondering if four faster cores would be better than, say, six slower ones?

 

True or False: Two sticks of 8 GB RAM are better than one stick of 16 GB RAM when using SL? Again, I have been told this often and have no idea if it is true or why. I would only add that we are talking about DDR4 3600 RAM.

 

Note: Everything would apply to SL specifically. I am not much of a gamer; the other uses of the PC are basically photo editing and music production. Any and all information is helpful. Thanks!

Edited by sodasullivan

48 minutes ago, sodasullivan said:

True or false: SL relies more on your CPU than your GPU? I hear this a lot, and my experience has sort of confirmed this thought. If I am lagging badly, I can look at Task Manager and see that my GPU (GTX 1060) is doing fine while my poor little CPU (Ryzen 3 1300X) is howling in pain. Assuming your GPU is average to optimal, does SL rely heavily on your CPU?

As of now, this is true. This might change with the upcoming release of the performance viewer.

48 minutes ago, sodasullivan said:

True or False: SL cannot use more than one core at a time, so four good CPU cores are better than eight average ones? I have heard this repeatedly, and it sort of sounds "wrong" but not impossible where SL is concerned. Multi-core CPUs were certainly not a big thing when the foundation blocks of SL were put in place. I am looking at bumping up to a Ryzen 5 series and was wondering if four faster cores would be better than, say, six slower ones?

Mostly true, but see the answer to the first question: the performance viewer and ongoing work based on it will make more use of threading, so having more cores will be an advantage here.

48 minutes ago, sodasullivan said:

True or False: Two sticks of 8 GB RAM are better than one stick of 16 GB RAM when using SL? Again, I have been told this often and have no idea if it is true or why. I would only add that we are talking about DDR4 3600 RAM.

That's actually true, and not just for SL: two sticks allow dual-channel memory access, which is faster than single-channel.


CPU vs GPU:  True. SL runs on OpenGL graphics and Havok physics engines, which are mostly CPU based.

Cores: True. SL was originally coded in the early 2000s, when most people only had single-core CPUs.

RAM: True. Dual-channel RAM (paired sticks in paired slots) is faster than single-channel (a single stick in a single slot). Most ATX motherboards have four RAM slots, so make sure you're using the correct two slots; they're usually colour coded in pairs.

 

Edited by SarahKB7 Koskinen
Link to comment
Share on other sites

27 minutes ago, SarahKB7 Koskinen said:

CPU vs GPU:  True. SL runs on an OpenGL Havok based graphics engine, which is mostly CPU based.

Havok is a physics engine and has nothing to do with OpenGL. The fact that SL is CPU bound is purely down to the age of the application itself; LL did little to nothing to modernize their OpenGL implementation, probably so people on 10+ year old toasters could still run SL.


3 hours ago, Ansariel Hiller said:

Mostly true, but see the answer to the first question: the performance viewer and ongoing work based on it will make more use of threading, so having more cores will be an advantage here.

First time I've heard the performance viewer will be multi-threaded, but that is good news if it's true. I've been purposefully clinging onto my older i7 with fewer cores and a higher clock speed just for SL.

I'd be interested to find out how many threads the new viewer will be able to take advantage of, if nothing else to help inform the next upgrade. I imagine there are diminishing returns on splitting the workload out, but it's not an area I'm familiar with.
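On the diminishing-returns point, Amdahl's law gives a rough ceiling: if only a fraction p of the frame work can run in parallel, the best speedup on n cores is 1 / ((1 - p) + p/n). A quick back-of-the-envelope sketch (the parallel fractions here are made-up illustrations, not measured viewer numbers):

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallel fraction of the work and n the number of cores.
int main() {
    const double fractions[] = {0.3, 0.5, 0.7}; // hypothetical parallel fractions
    const int core_counts[] = {2, 4, 8, 16};
    for (double p : fractions) {
        std::printf("parallel fraction %.1f:", p);
        for (int n : core_counts) {
            double speedup = 1.0 / ((1.0 - p) + p / n);
            std::printf("  %2d cores -> %.2fx", n, speedup);
        }
        std::printf("\n");
    }
    return 0;
}
```

Even with a generous 70% parallel fraction, 16 cores only buy about a 3x speedup, which is why single-core speed keeps mattering.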

Nerd talk: I've always wondered why CPU/mobo designs are limited to one clock speed. It's always made sense in my mind to have a CPU that behaves as two separate CPUs: one with a low clock speed and many cores, and another with a high clock speed and a few cores. You could then get the best of both worlds. Some programs will just never be multi-threaded, particularly business applications where transactions happen synchronously.


6 hours ago, sodasullivan said:

True or false: SL relies more on your CPU than your GPU? I hear this a lot, and my experience has sort of confirmed this thought. If I am lagging badly, I can look at Task Manager and see that my GPU (GTX 1060) is doing fine while my poor little CPU (Ryzen 3 1300X) is howling in pain. Assuming your GPU is average to optimal, does SL rely heavily on your CPU?

Certainly seems true to me. I'm surprised at how well my rather recent i5 laptop with only Intel integrated graphics handles SL compared to my trusty workhorse of a desktop with an i7 and an NVIDIA GeForce. The desktop is better overall, of course, but the laptop is decent enough for SL. I see the same things as you in Task Manager.


6 hours ago, sodasullivan said:

True or false: SL relies more on your CPU than your GPU?

SL does a lot of work on the CPU, work that games don't.

It still needs a good GPU, but it's more important how quickly it can deal with data coming from the CPU, rather than how much grunt it has to work with that data after it's got it.

 

6 hours ago, sodasullivan said:

True or False: SL cannot use more than one core at a time, so four good CPU cores are better than eight average ones?

SL is not massively threaded. Some viewers claim to thread more .. in practice, it's not enough to change the advice at this time.

I would recommend a Ryzen 5 5000 series chip.

Boost speed is what you need to look at.

If it has more cores than you think you need, don't worry about them. Certainly don't pick fewer cores thinking that's going to be automatically better. Just buy the latest, fastest Ryzen you can afford; any extra cores will come in handy.

 

6 hours ago, sodasullivan said:

True or False: Two sticks of 8 GB RAM are better than one stick of 16 GB RAM when using SL?

Two 8 GB sticks mean your CPU can use both at the same time. This is pretty important for getting the most out of modern systems, especially Ryzen.

It will benefit everything you do on your computer, not just SL.

 

 

 


2 hours ago, Extrude Ragu said:

Nerd talk: I've always wondered why CPU/mobo designs are limited to one clock speed. It's always made sense in my mind to have a CPU that behaves as two separate CPUs: one with a low clock speed and many cores, and another with a high clock speed and a few cores. You could then get the best of both worlds. Some programs will just never be multi-threaded, particularly business applications where transactions happen synchronously.

Synchronizing CPUs running at different clock rates takes transistors and time, so you don't want to introduce different clocks unless there's a clear benefit. Since modern CPU cores can pause all or part (like floating point/vector units) of their operation instantaneously, that's the preferred method for reducing power consumption. When system load in a multi-core system does not require all the cores, some can be idled and their portion of the thermal power budget becomes available for those still running, allowing the clock frequency to be increased. Since idling can be done instantaneously, tasks that don't need all their CPU core's ability can "sleep" when they've completed their work, waiting for some event to wake the core again. Though idle cores still dissipate some power, and increasingly so at higher clock speeds, they don't dissipate nearly as much as active cores. From a thermal perspective, idling is indistinguishable from clock scaling.

Over the years, it became clear that there are kinds of algorithms that don't ever need all the facilities of high-end CPU cores, and the big/little or performance/efficiency class split was introduced. My M1 Pro based laptop has eight high-performance cores and two efficiency cores. I don't know if both classes share the same clock, but even if they don't, only one synchronizing bridge would be required between the two classes.

Finally, synchronous operation doesn't preclude multi-threading. Sync/Async and single/multi-threading are different concepts.
https://www.baeldung.com/cs/async-vs-multi-threading
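A tiny C++ illustration of that distinction; decode_texture here is just a made-up placeholder for some chunk of work, not anything from the viewer code:

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

// Stand-in for some chunk of work, e.g. decoding one texture (hypothetical).
int decode_texture(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return id * 2;
}

int main() {
    // Synchronous, single-threaded: the caller blocks until the work is done.
    int a = decode_texture(1);

    // Asynchronous-looking API, still single-threaded: std::launch::deferred
    // runs the work lazily on the calling thread when .get() is called.
    auto deferred = std::async(std::launch::deferred, decode_texture, 2);

    // Asynchronous AND multi-threaded: std::launch::async runs the work on
    // another thread while the caller carries on.
    auto parallel = std::async(std::launch::async, decode_texture, 3);

    std::printf("results: %d %d %d\n", a, deferred.get(), parallel.get());
    return 0;
}
```

The deferred case shows an asynchronous-style API that still runs everything on one thread, while the last case is both asynchronous and multi-threaded.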


CPU clock speed is now largely variable on a per-core basis; cores spin up and down depending on workload to save power, and not all of them will be capable of hitting the very highest speeds. Hence why we have both single-core and all-core overclocking .. good luck!
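If you want to watch that per-core variation yourself on Linux, here is a minimal sketch reading the cpufreq sysfs files (assumes a kernel with cpufreq enabled; on Windows the per-core graphs in Task Manager or a tool like HWiNFO show the same thing):

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

// Print the current clock of every logical core via the cpufreq sysfs
// interface (values reported in kHz). Linux only.
int main() {
    unsigned cores = std::thread::hardware_concurrency();
    for (unsigned i = 0; i < cores; ++i) {
        std::string path = "/sys/devices/system/cpu/cpu" + std::to_string(i) +
                           "/cpufreq/scaling_cur_freq";
        std::ifstream f(path);
        long khz = 0;
        if (f >> khz)
            std::cout << "cpu" << i << ": " << khz / 1000 << " MHz\n";
        else
            std::cout << "cpu" << i << ": (no cpufreq data)\n";
    }
    return 0;
}
```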


3 hours ago, HeathcliffMontague said:

Certainly seems true to me. I'm surprised at how well my rather recent i5 laptop with only Intel integrated graphics handles SL compared to my trusty workhorse of a desktop with an i7 and an NVIDIA GeForce. The desktop is better overall, of course, but the laptop is decent enough for SL. I see the same things as you in Task Manager.

Mine is a laptop from 2018 with an i7 and a GTX 1050 GPU. It works great with the Intel 660p M.2 SSD and dual-channel 32 GB DDR4 RAM I have in it. It's also why I have a 5-fan cooling pad to run it on, not just for SL though LOL! I run a few VMs on it when I have the laptop at work (not every day, but it's nice, usually via VirtualBox in Linux or Windows 11 :) ).


9 hours ago, sodasullivan said:

True or false: SL relies more on your CPU than your GPU? I hear this a lot, and my experience has sort of confirmed this thought. If I am lagging badly, I can look at Task Manager and see that my GPU (GTX 1060) is doing fine while my poor little CPU (Ryzen 3 1300X) is howling in pain. Assuming your GPU is average to optimal, does SL rely heavily on your CPU?

8 hours ago, Ansariel Hiller said:

As of now, this is true. This might change with the upcoming release of the performance viewer.

@Ansariel This won't change a bit! The CPU is still the bottleneck in the performance viewer, and it is not the threading of the GL image creation (which only happens on rezzing, and in just one thread in LL's perf viewer, while I use several threads in my backport) that will change anything at all: a modern GPU (GTX 960 or better) would be capable of absorbing at least twice what the fastest CPU could throw at it while running the SL renderer. To lift the bottleneck currently residing at the CPU level, we would need a truly multi-threaded renderer...

@sodasullivan Your best bet is to get a CPU capable of the best single-core performance, since the rendering engine is mono-threaded. Note that there are also things you could try with your current CPU to improve performance, such as overclocking it (though Ryzen CPUs are not really good at this exercise), or using thread affinity (the Cool VL Viewer offers this possibility) to assign the best-performing core to the viewer's main thread.
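For anyone curious what thread affinity looks like in practice, here is a minimal Windows-only sketch of pinning the calling thread to a single core; it shows the general OS mechanism only, not the Cool VL Viewer's actual implementation, and the choice of core 2 is purely a made-up example:

```cpp
#include <windows.h>
#include <cstdio>

// Pin the calling thread to one logical core (zero-based index).
// General OS mechanism only; not how any particular viewer implements it.
bool pin_current_thread_to_core(unsigned core) {
    DWORD_PTR mask = DWORD_PTR(1) << core;
    return SetThreadAffinityMask(GetCurrentThread(), mask) != 0;
}

int main() {
    // Hypothetical: assume core 2 is the best-boosting core on this CPU.
    if (pin_current_thread_to_core(2))
        std::printf("main thread pinned to core 2\n");
    else
        std::printf("failed to set thread affinity\n");
    return 0;
}
```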

 

10 hours ago, sodasullivan said:

True or False: SL cannot use more than one core at a time, so four good CPU cores are better than eight average ones? I have heard this repeatedly, and it sort of sounds "wrong" but not impossible where SL is concerned. Multi-core CPUs were certainly not a big thing when the foundation blocks of SL were put in place. I am looking at bumping up to a Ryzen 5 series and was wondering if four faster cores would be better than, say, six slower ones?

Yes, more cores is the way to go (my viewer can, for example, use a multi-threaded image decoder and, with the perf viewer improvement I backported, multi-threaded GL image creation, meaning the more cores, the faster you see textures rezzing, which is definitely important when riding a vehicle across the mainland, for example).

But NOT at the cost of single-core performance, since that is what will determine your frame rates in the end. On my main PC I have a 9700K (8 cores, no SMT) overclocked to 5.0 GHz on all cores (the CPU is "locked" at this frequency, i.e. only the C0 and C1 states are permitted), and a GTX 1070 Ti; this system is plenty powerful enough to run SL, even in the most stressful rendering conditions.
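To make that threading split concrete, here is a toy sketch of the general producer/consumer pattern, not actual viewer code: decode work is spread across worker threads (so extra cores make textures ready sooner), while a single "render" loop consumes the results (so frame rate still hinges on that one thread):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct DecodedTexture { int id; };   // toy stand-in for real pixel data

std::mutex mtx;
std::condition_variable cv;
std::queue<DecodedTexture> ready;    // results handed to the main loop
int remaining = 32;                  // textures left to decode

void decode_worker() {
    for (;;) {
        int id;
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (remaining == 0) return;
            id = remaining--;
        }
        // Pretend this is the expensive image decode.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        {
            std::lock_guard<std::mutex> lock(mtx);
            ready.push({id});
        }
        cv.notify_one();
    }
}

int main() {
    // More cores -> more decode workers -> textures become ready sooner.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back(decode_worker);

    // The single "render" loop: consumes whatever has been decoded so far.
    for (int uploaded = 0; uploaded < 32; ) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !ready.empty(); });
        while (!ready.empty()) {
            std::printf("main thread uploads texture %d\n", ready.front().id);
            ready.pop();
            ++uploaded;
        }
    }
    for (auto& w : workers) w.join();
    return 0;
}
```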

 

10 hours ago, sodasullivan said:

True or False: Two sticks of 8 GB RAM are better than one stick of 16 GB RAM when using SL? Again, I have been told this often and have no idea if it is true or why. I would only add that we are talking about DDR4 3600 RAM.

Yes. In general you should buy RAM in pairs (or quads) of sticks, and avoid mixing brands or models (i.e. all sticks should be the same model from the same brand), because discrepancies in timings and performance will cause the motherboard's memory "training" algorithm to relax the timings to match the worst-performing stick... That said, it won't significantly change the actual performance of the viewer (RAM speed only accounts for a few percent at best, like 1-3%).

Just make sure your system has enough DRAM: 16 GB is today's "must have", and 8 GB barely suffices to run an SL viewer.


1 hour ago, Coffee Pancake said:

CPU clock speed is now largely variable on a per-core basis; cores spin up and down depending on workload to save power, and not all of them will be capable of hitting the very highest speeds. Hence why we have both single-core and all-core overclocking .. good luck!

Oooh, that shows how far I've fallen behind on Intel architectures. I imagine the same is then true of the M1, with synchronization happening at the L2 cache/main memory interface.

