
Second Life - Performance Thread



1 hour ago, AmeliaJ08 said:

I've seen some beautifully made things that could have only come from someone with extensive experience

The person who uploaded the mesh into SL (and butchered the LODs in the process) is not necessarily the same person who created the model. There's a staggering amount of third-party content in SL, at various levels of legality ranging from public domain to assets ripped from PC games.


  • 8 months later...
On 2/14/2023 at 10:46 PM, TheBenchmarker said:

SL may run better on professional-grade hardware, like a Quadro or Tesla. Quadros, Titans and Teslas from Nvidia have specific optimizations in the driver path for CAD and professional applications such as the ones mentioned earlier, like Maya, 3DS Max, Revit etc. All of these applications use OpenGL in one way or another, and all of them recommend Quadros. I know somewhere on LL's site it says Quadros aren't recommended, but there's also a current thread from LL talking about how SL runs better on Windows Vista, so I think we can safely ignore all of the written documentation for now.

This is a fun thread to see necrobumped, but I'll hop in anyway. This is something I tested a little, ages ago, and in my experience the result differs wildly depending on the hardware you use for the comparison.
Most of what makes a Quadro a professional card is that its drivers and the hardware itself are certified for stability. It's not that it's inherently more capable; you're paying more for the name because Nvidia is putting their certification behind it, showing you it will work reliably with certain applications.

A GTX 980 will have no problems rendering in Blender. But a Quadro M6000, along with being a bit more suited to it with more video memory (more on that in a moment), is guaranteed to work without any problems in Blender. For big companies that's important, and that's what they pay for. It's why you'll usually see consumer equivalents offered alongside professional cards when speccing out something like a prebuilt workstation. They're asking "do you need or want certification?" more than they're asking whether you want a different performance tier.

And then, as I mentioned, they tend to have more video memory, because most professional applications will utilize more of it on average. A card in the tier of a GTX 980 probably doesn't need more than 6GB of video memory for gaming, because most games weren't using VRAM like that, at least at the time. But its Quadro equivalent came in 12 and 24GB variants, because even if the GPU wasn't much faster comparatively, the applications it would be used for could use that kind of video memory. Right tool for the right job.

But SL does still see benefits from workstation GPUs. I think a lot of it has to do with how it uses video memory and, as you mentioned, how it shares a lot of attributes with 3D software rather than games. The difference is small and variable in my limited testing, though, which is why I kind of leave it at that. I don't have the money to throw at modern high-end workstation cards, and I don't recommend anyone else do that either: unless you have some other primary use for it, spending thousands of dollars on a GPU for SL is absurd.
My testing was limited to a few generations: Tesla (the architecture), the Fermi refresh, and early Maxwell.
The Tesla comparison was the consumer 8800 series vs the Quadro FX line. The Quadros performed better by a thin margin, except the FX 5600 wiped the floor with everything else for one crucial factor: 1.5GB of VRAM vs 768/896MB. That nullified the test at the time, since VRAM was a limit and one card simply had a higher limit.

For Fermi: a Tesla C2050, a Quadro 5000 and a GTX 570. Similar story again: the Tesla beat the rest because it had 3GB of VRAM, and this time by a massive margin (relatively speaking) of 10-15fps higher on average in almost every scenario.
But then we get to Maxwell, where VRAM is no longer a limiting factor at 1080p at least, and the differences dwindled: the GTX 980 vs the Titan X vs the M6000. The Titan X was just faster overall, but the M6000 and 980 were within margin of error of each other. And that was a 6GB 980 vs a 24GB M6000; neither tapped out its VRAM, so having so much didn't matter. They performed very similarly.

A lot of that comes down to the fact that this is still just an OpenGL game. There's only so much that can be done with it, and hardware support for OpenGL has been a built-in, taken-for-granted feature of every GPU made in the last 20 years. There aren't really any OpenGL optimizations that haven't already been done and aren't already in effect. It's why my Intel Arc A770 plays SL fine, and about the same as it did on launch day: while Intel has been scrambling to get DX9/10/11 working smoothly and optimizing Vulkan and such, OpenGL has just been fine and has stayed the same.
And changing the hardware won't change that, regardless of the driver and hardware optimizations that workstation cards offer.


One point on performance that I haven't seen mentioned - on busy/complex sims I was seeing my 12GB graphics card vRAM maxing out, and this was causing noticeable texture thrashing. 

I fixed this and got much better performance by ticking "Enable Lossy Texture Compression" in Preferences/Graphics/Hardware Settings.

I had read dire warnings that this feature should not be used because of how it degrades texture quality, so I did some tests and found I could not tell the difference between how compressed and uncompressed textures look. So perhaps the compression algorithm has recently been improved, or perhaps it happens in the GPU drivers and is card-dependent.

But turning on this option definitely resulted in a very worthwhile improvement on my RTX 4070ti.

And I'm guessing a lot of SL users are working with much less than 12GB vRAM, so may benefit from this setting even more.
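
For what it's worth, the arithmetic behind the saving is roughly as follows, assuming the option uploads textures in a DXT/S3TC-style compressed format (my assumption about what the setting does, not something I've confirmed in the viewer source):

```python
# Rough VRAM cost of one texture, uncompressed vs DXT-style compressed.
# The 4:1 ratio for RGBA is an assumption about what the "lossy
# compression" option does; the exact format is driver/card dependent.

def texture_vram_mb(width, height, bytes_per_pixel, mipmaps=True):
    """Approximate VRAM for a texture; a full mip chain adds ~1/3."""
    base = width * height * bytes_per_pixel
    return (base * 4 / 3 if mipmaps else base) / 2**20

print(f"1024x1024 RGBA8 + mips: {texture_vram_mb(1024, 1024, 4):.1f} MB")  # ~5.3 MB
print(f"1024x1024 DXT5  + mips: {texture_vram_mb(1024, 1024, 1):.1f} MB")  # ~1.3 MB
```

A 4:1 saving per texture, applied across the thousands of textures in a busy scene, would comfortably explain the roughly 2x difference I see overall (not everything in vRAM is texture data).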

I've also read a few posts on this thread from people unsure where the frame rate bottleneck is in their system. If you install "MSI Afterburner", you can get some system data as an overlay on your screen while SL is running (see attached image).

As a rule of thumb (as per the attached image), if GPU utilisation is around 100% then the GPU is likely the bottleneck in that particular situation; otherwise the bottleneck is the CPU. You'll never see overall CPU utilisation go very high because of how few cores SL uses, but even if total utilisation is only in single-digit percentages, if one of your many cores is maxed out, that is probably the bottleneck.

As a rule, I've found that if there are lots of avatars nearby, the CPU is usually the bottleneck; otherwise it is usually the GPU.
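
Written out as a rough sketch (the thresholds are just my guesses; the utilisation figures are whatever the Afterburner overlay shows):

```python
# Rule-of-thumb bottleneck check from MSI Afterburner overlay readings.
# Thresholds are guesses, not calibrated values.

def likely_bottleneck(gpu_util_pct, per_core_cpu_pct):
    """gpu_util_pct: GPU utilisation %; per_core_cpu_pct: list of per-core CPU %."""
    if gpu_util_pct >= 95:
        return "GPU-bound"
    # Overall CPU % stays low because SL uses few cores,
    # so look at the busiest single core instead.
    if max(per_core_cpu_pct) >= 90:
        return "CPU-bound (one maxed-out core)"
    return "neither maxed: suspect vsync, disk, or network"

print(likely_bottleneck(99, [40, 20, 15, 10]))  # GPU-bound
print(likely_bottleneck(60, [97, 25, 10, 5]))   # CPU-bound (one maxed-out core)
```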

On the different graphics drivers nVidia provides: I did read somewhere that the only difference between the studio and gaming drivers is that for the latter the emphasis is on keeping them very up to date, whereas for the former the emphasis is on keeping them very stable (i.e. the two drivers don't contain different code optimised for the different scenarios).

 

Screenshot 2023-12-03 045852.png


1 hour ago, filz Camino said:

on busy/complex sims I was seeing my 12GB graphics card vRAM maxing out, and this was causing noticeable texture thrashing.

1 hour ago, filz Camino said:

on my RTX 4070ti.

If you are getting texture thrashing on a 12GB VRAM 4070...

Either your card's VRAM is fubared, or your viewer settings are fubared.

Most obvious culprit would be draw distance. People do insane stuff like setting their DD to 1 KM. Then they are not rendering "a busy sim", they are rendering that sim, and the 8 sims around that, and the 16 sims around those, and the 24 sims around those, and half of the 32 sims around those.

EVERY SINGLE TEXTURE ON EVERY SINGLE FACE OF EVERY PRIM OF EVERY LINKSET.
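
The ring arithmetic, for anyone who wants to check it (assuming a fully tiled grid of 256 m regions):

```python
# Regions pulled in by a 1 KM draw distance on a fully tiled grid of
# 256 m regions: ring k around your region contains 8*k regions.
region_m = 256
dd_m = 1024
rings = dd_m // region_m                    # 4 full rings
counts = [1] + [8 * k for k in range(1, rings + 1)]
print(counts)                               # [1, 8, 16, 24, 32]
# The outermost ring is only partly within DD (its corners sit past
# 1024 m on the diagonal), hence "half of the 32".
print(sum(counts[:-1]) + counts[-1] // 2)   # 65 regions' worth of content
```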

 

Second most likely culprit, assuming you didn't do something insane like a 1 KM draw distance, would be simple oversight: forgetting to tell Firestorm to use HALF your 12 GB of VRAM, instead of pretending it's the official LL fail-viewer and using 512 MB.

 

1 hour ago, filz Camino said:

I've also read a few posts on this thread from people unsure where the frame rate bottleneck is in their system.

Of course the unspoken bottleneck here is always going to be the monitor. If your "reported fps" is HIGHER than your monitor refresh rate, you're just melting the silicon for nothing, because the MAXIMUM FPS you will REALLY EVER see is limited by the refresh rate of the monitor.
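
In arithmetic terms (the numbers are picked purely for illustration):

```python
# Frames rendered vs frames you can actually see on screen.
rendered_fps = 140   # what the viewer's FPS counter claims
refresh_hz = 60      # what the monitor can physically display

visible_fps = min(rendered_fps, refresh_hz)
wasted = rendered_fps - visible_fps
print(f"visible: {visible_fps} fps, wasted: {wasted} fps "
      f"({wasted / rendered_fps:.0%} of the GPU's work)")   # 57% wasted
```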

 


1 hour ago, Zalificent Corvinus said:

If you are getting texture thrashing on a 12GB VRAM 4070...

Either your card's VRAM is fubared, or your viewer settings are fubared.

Most obvious culprit would be draw distance. People do insane stuff like setting their DD to 1 KM. Then they are not rendering "a busy sim", they are rendering that sim, and the 8 sims around that, and the 16 sims around those, and the 24 sims around those, and half of the 32 sims around those.

EVERY SINGLE TEXTURE ON EVERY SINGLE FACE OF EVERY PRIM OF EVERY LINKSET.

 

Second most likely culprit, assuming you didn't do something insane like a 1 KM draw distance, would be simple oversight: forgetting to tell Firestorm to use HALF your 12 GB of VRAM, instead of pretending it's the official LL fail-viewer and using 512 MB.

 

Of course the unspoken bottleneck here is always going to be the monitor. If your "reported fps" is HIGHER than your monitor refresh rate, you're just melting the silicon for nothing, because the MAXIMUM FPS you will REALLY EVER see is limited by the refresh rate of the monitor.

 

I do wish people would check their assumptions before launching off into unhelpfully incorrect truth claims.

Firstly, it is not reasonable to assume that vRAM showing as being fully allocated means it might be faulty. Faulty memory produces errors, not over-allocation reports, and no hardware errors are being reported. Plus, the card works fine in situations that demand 10GB+ of vRAM (playing Cyberpunk 2077 at 4K resolution with ray tracing turned on).

Secondly, this occurs on a busy sim (e.g. some of the dance areas of the Peak nightclub) with draw distance set to 128m. In fact, it occurs with all the standard graphics settings obtained by simply moving the slider to one stop below Ultra, so there is nothing unusual or "fubared" about my graphics settings.

Thirdly, the GPU memory management options are left at the Firestorm defaults, which allow SL to use 90% of the total vRAM. And indeed, MSI Afterburner reports that at least 90% of the GPU's vRAM is fully allocated when the texture thrashing happens. TP'ing out of the sim makes vRAM use fall considerably, and closing Firestorm returns vRAM use to the normal level, so it is pretty obvious how and in what situation the vRAM is being used up.

So to reiterate the facts here: there are situations in SL where 12GB of vRAM may not be enough, and this can be cured by ticking "Enable Lossy Texture Compression".

 

 

Edited by filz Camino

2 hours ago, filz Camino said:

I do wish people would check their assumptions before launching off into unhelpfully incorrect truth claims.

Standard diagnostic procedure involves making assumptions off the back of available evidence. I stated two of the MOST LIKELY causes of somebody getting texture thrashing in SL with a 12GB card.

In fact, if you had been paying attention to the forums, you would have seen somebody only a short while ago getting texture thrashing because their copy of FS was only using 512MB of VRAM on their 8GB card.

 

2 hours ago, filz Camino said:

Firstly, it is not reasonable to assume that vRAM showing as being fully allocated means it might be faulty.

Depends on the fault: if the fault means the card isn't reporting 12GB, then everything will base its usage off what's reported.

 

2 hours ago, filz Camino said:

MSI Afterburner reports that at least 90% of the GPU's vRAM is fully allocated when the texture thrashing happens

Afterburner reports vRAM usage by everything, including the OS, Afterburner itself, and whatever other apps you are running. That's why viewers do not grab 90% of total vRAM by default.

 

2 hours ago, filz Camino said:

the card works fine in situations that demand 10GB+ of vRAM (playing Cyberpunk 2077 at 4K resolution

Ahhhh, 4k resolution... I'll make an assumption as to why the card's using so much vram.

 


47 minutes ago, Zalificent Corvinus said:

Standard diagnostic procedure involves making assumptions off the back of available evidence.

I didn't ask for a diagnosis; I'm perfectly happy with my own diagnosis. Although "standard diagnostic procedure" does not involve simply guessing when insufficient evidence is provided: it involves asking questions and gathering sufficient information to make a correct diagnosis.

 

49 minutes ago, Zalificent Corvinus said:

Ahhhh, 4k resolution... I'll make an assumption as to why the card's using so much vram.

And that would be another incorrect assumption. I'm starting to see a pattern here!

Testing this again right now (I previously tested this myself because I thought that would be more useful than simply guessing that the resolution was the problem), vRAM usage at Peak at different resolutions is as follows:

  • 4K resolution: 7.1GB
  • FHD resolution: 6.8GB

So just 0.3GB of difference between the very common 1920x1080 resolution and the 4K I'm using. And those figures are with compressed textures turned on; if I turned that off, vRAM usage would double, so even at FHD resolution I'd have insufficient vRAM to completely avoid texture thrashing in that particular situation.
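
The small difference is what the framebuffer arithmetic predicts. A rough sketch, assuming 4-byte render targets and a guessed-at number of screen-sized buffers (the viewer's actual render-target layout will differ):

```python
# Why 4K vs FHD barely moves vRAM usage: screen-sized buffers are small
# next to texture memory. The six 4-byte targets are a guess at a
# deferred-rendering layout, not the viewer's actual configuration.
def framebuffers_mb(w, h, targets=6, bytes_per_px=4):
    return w * h * bytes_per_px * targets / 2**20

print(f"FHD: {framebuffers_mb(1920, 1080):.0f} MB")  # ~47 MB
print(f"4K:  {framebuffers_mb(3840, 2160):.0f} MB")  # ~190 MB
# A difference of ~0.14GB, in the same ballpark as the 0.3GB measured;
# the remaining several GB is textures, which don't care about window size.
```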

Curious to see what other incorrect assumptions you come up with!

 


1 hour ago, filz Camino said:

Although "standard diagnostic procedure" does not involve simply guessing when insufficient evidence is provided, it involves asking questions and gathering sufficient information to make a correct diagnosis. 

That you get texture thrashing on a 12GB card with standard textures, when most people do not, is in itself enough information to show that something is wrong: either your hardware or your software settings.

People with higher draw distances than you, running on Ultra on 8GB cards, are not complaining in droves about texture thrashing.

This is a You problem, not an EVERYONE problem.

1 hour ago, filz Camino said:

Curious to see what other incorrect assumptions you come up with!

I've correctly assumed there is no point trying to help somebody who has assumed that their setup is perfect because they found a low-quality workaround for the real problem.

 


13 minutes ago, Zalificent Corvinus said:

This is a You problem, not an EVERYONE problem.

Where is your evidence for this claim? Is this issue something you've actually investigated? Or is your claim just another meaningless, fact-free assumption?
 

What investigations have you done to justify your position? What facts and data have you got that refutes my position? Present your work, present the facts and present your reasoning. 

Because all I've had from you so far is bad guesswork and incorrect assumptions. 

Based on the evidence of your endless incorrect reasoning and false assumptions, it is starting to become reasonable for me to conclude that you don't know how to troubleshoot and don't really know what you're talking about.

I'm quite happy to accept that there may be something about my setup which means my situation isn't generally applicable; you'll need to come up with an actual argument to convince me, though.

 

 

 

Edited by filz Camino

8 hours ago, filz Camino said:

One point on performance that I haven't seen mentioned - on busy/complex sims I was seeing my 12GB graphics card vRAM maxing out, and this was causing noticeable texture thrashing. 

I fixed this and got much better performance by ticking "Enable Lossy Texture Compression" in Preferences/Graphics/Hardware Settings.

I had read dire warnings that this feature should not be used because of how it degrades texture quality, so I did some tests and found I could not tell the difference between how compressed and uncompressed textures look. So perhaps the compression algorithm has recently been improved, or perhaps it happens in the GPU drivers and is card-dependent.

But turning on this option definitely resulted in a very worthwhile improvement on my RTX 4070ti.

And I'm guessing a lot of SL users are working with much less than 12GB vRAM, so may benefit from this setting even more.

On the different graphics drivers nVidia provides: I did read somewhere that the only difference between the studio and gaming drivers is that for the latter the emphasis is on keeping them very up to date, whereas for the former the emphasis is on keeping them very stable (i.e. the two drivers don't contain different code optimised for the different scenarios).

 

 

As a user still on a GTX 1050, I have texture compression turned on in all the SL viewers I use. In Firestorm I also have the texture rendering switch checked that reduces textures to 512px (64-bit builds only); it's located in the Graphics > Rendering tab under Preferences.


15 hours ago, filz Camino said:

I fixed this and got much better performance by ticking "Enable Lossy Texture Compression" in Preferences/Graphics/Hardware Settings.

This would definitely help if you were tapping out your VRAM, so I would recommend the same to people running out of video memory, but I think if you have more VRAM than that, you wouldn't need to worry about it.

Another point worth mentioning: yeah, you can easily tap 12GB of VRAM if you're playing in 4K in some places, and most of it has to do with shadows and such.

Examples. 4K, max settings, shadows at 1x detail scaling:

0YNj6Tj.jpg

4.3GB of VRAM used. Nothing crazy; looks nice though.

4K, max settings, shadows at 4x detail scaling:

khddrty.jpg

Using over 10GB of VRAM, and it really only looks better up close IMO.

4K, max settings, 4x shadows, but with lossy texture compression:

dpUsdeL.jpg

9.5GB of VRAM used, so there's definitely less being used. If you needed to save 0.5 to 1GB of VRAM I could see this being helpful, and considering I'm most commonly seeing my GPU at 9-11GB anyway, this could be very beneficial for 12GB cards.

I have a 16GB A770, so I don't really hit my VRAM limit, but I've definitely seen it go over 12GB in some locations.

Here's another cool bit I didn't actually expect: lowering render distance doesn't really impact VRAM usage much at all. It's mostly using VRAM for the shadows and lighting. Here's 32m render distance after a client restart and a cleared cache, just to be thorough:

71S4a5r.jpg

Still 9.5GB of VRAM, which is roughly in line with what I expected: the textures aren't actually using that much video memory, but compressing them can help anyway.

It's not the textures using 8GB+ of video memory; they're maybe using 2GB at worst. Most of it is lighting.
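
For a sense of why the shadow slider hits VRAM so hard: if detail scaling multiplies the shadow map resolution, which is my assumption, then memory grows with the square of the setting. A rough sketch with guessed numbers:

```python
# Shadow map VRAM grows with the *square* of the resolution scale.
# Base resolution, cascade count, and depth format are all guesses;
# the viewer's actual shadow setup may differ.
def shadow_vram_mb(scale, base_res=2048, cascades=4, bytes_per_px=4):
    res = base_res * scale
    return res * res * bytes_per_px * cascades / 2**20

print(f"1x: {shadow_vram_mb(1):.0f} MB")   # ~64 MB
print(f"4x: {shadow_vram_mb(4):.0f} MB")   # ~1024 MB -- a 16x jump
```

That alone doesn't account for the whole 4.3GB to 10GB jump, so presumably other lighting buffers scale up along with it.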


It's also worth mentioning how much people impact VRAM.

I don't normally go here, but London City always has a bajillion people in it. I got Nic to hop on for a second to add to the crowd, and yeah, you can zip past 12GB of VRAM like this without even trying.

Y79ty9B.jpg

I don't know why you would play like this, though. The higher-detail shadows look great in photos, but for just playing normally I usually keep it at 1.

So if you were playing like this for whatever reason and you had a 12GB card, compressing the textures or limiting them to 512x512 would potentially reduce your VRAM usage enough to not run out (which tanks performance), but then I'd still just turn the shadows down and cut the VRAM in half like that.

It's still worth noting here that render distance doesn't really change VRAM usage much, though it does tank framerate in other ways, 1024m or 32m.
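
A back-of-the-envelope for why crowds blow past 12GB so easily (every number below is a guess, real avatars vary wildly):

```python
# Rough avatar texture budget -- every number below is a guess.
textures_per_avatar = 30   # body, head, clothing, attachments...
avg_texture_mb = 5.3       # 1024x1024 RGBA8 with mips, uncompressed
avatars = 70

total_gb = textures_per_avatar * avg_texture_mb * avatars / 1024
print(f"~{total_gb:.1f} GB of avatar texture data")  # ~10.9 GB before the scene itself
```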


SL viewers often seem pretty bad at managing texture memory resources.

Obviously, given the almost entirely unoptimized content, it is very easy to use all available VRAM, but what happens in that situation is quite inelegant. It feels like system RAM is badly utilized, for one, and in many cases what should be neatly moved from RAM to cache (and potentially back again) is handled poorly, so you get all kinds of texture pop-in, blurring, missing textures, constant reloading of nearby items, etc.

Not to say this is an easy job, but there seems to be an almost random quality to which texture data the viewer considers most important, and a sufficiently busy scene makes it quite evident that things are not being handled efficiently. Keeping the most important (nearest) textures in VRAM would seem the logical ideal, but this often doesn't seem to happen.

I'm no developer though. Just my observations and I don't think for a moment that this is an easy problem to solve given how hard on resources SL content is.

Edited by AmeliaJ08

8 hours ago, gwynchisholm said:

It's also worth mentioning how much people impact VRAM.

I don't normally go here, but London City always has a bajillion people in it. I got Nic to hop on for a second to add to the crowd, and yeah, you can zip past 12GB of VRAM like this without even trying.

Yes, the times I've noticed high vRAM usage have all been in extremely crowded places, where there might be 70-80 avatars in the immediate vicinity (e.g. nightclubs). It does seem quite easy to go over 12GB in those situations.

And that is without shadows enabled - I usually don't bother with shadows because of the frame rate / fan noise impact they have. 

Fortunately, with lossy texture compression enabled I don't think I've seen vRAM use go over 7GB, so (given that there is no perceivable downside) that sorts the problem out for me. In fact, I don't quite know why this is not the default setting, although it sounds as if it used to have a significant impact on quality, or perhaps it still does on some cards. But I wonder if it is off by default for historical reasons that are no longer valid in 2023.

AFAIK, texture compression is standard practice in modern games, and modern cards are able to decompress the textures on the fly when using them. 

 

Edited by filz Camino

Just got a new Winmax 2 laptop and thought I'd test out its vRAM usage in SL. The computer has 64GB of RAM, with 16GB permanently allocated to the integrated GPU.

In Peak nightclub with 77 other avatars, the laptop maxed out all 16GB of video memory, so this does suggest people might benefit from "Enable Lossy Texture Compression" even if they have 16GB or more of vRAM.

Unfortunately, enabling this option on the Winmax 2 causes Firestorm to crash. 

IMG_20231208_142308.jpg


1 hour ago, Nofunawo said:

@gwynchisholm 

Is the A770 really such a lame duck?

Your shots show <20FPS in a scene which isn't very demanding, with settings not even on Ultra.

<10 with some avatars @ 183 watts - ouch

That’s in 4k, at absolute max settings in some of those shots. The preset slider doesn’t update based on what the rest of the settings are.

Arc definitely doesn’t understand “power efficiency” in the slightest, but that performance is pretty in line with its equivalents like the 3060ti and RX 6600xt if you were to set everything to their max settings in 4k

dropping shadows to 1x scaling helps a lot, and then not playing in 4k to begin with can help depending on the area, I’ve found some places don’t care about resolution at all, some places 1080p to 4k can cut the framerate by 3/4


1 hour ago, gwynchisholm said:

Dropping shadows to 1x scaling helps a lot, and not playing in 4K to begin with can also help depending on the area. I've found some places don't care about resolution at all, while in others going from 1080p to 4K can cut the framerate by 3/4.

Wasn't it you who told me that resolution doesn't matter in SL - LOL.

O.K. London City - 4060 Ti - all settings to max

Resolution: UWQHD

about 50 FPS using 70-90 watts

The A770 sucks!

https://gyazo.com/602116b6158418d87c99bca25988367c

 

Edited by Nofunawo

11 hours ago, gwynchisholm said:

IMG_0983.thumb.jpeg.5a724bd1737b50f79505b4669f603220.jpeg

 

I keep looking at the A770 16GB and wondering... it's a shame there are basically none on the used market yet; it's genuinely hard for it to compete with a used 3060 12GB, which is still very capable with more than 8GB of VRAM.

I know OpenGL performance is a work in progress for Intel, but seeing the great strides they're making with their drivers makes me hope Intel goes from strength to strength and really shakes up the industry. I know prices will begin to rise if they do, though...

 


5 hours ago, AmeliaJ08 said:

I keep looking at the A770 16GB and wondering... it's a shame there are basically none on the used market yet; it's genuinely hard for it to compete with a used 3060 12GB, which is still very capable with more than 8GB of VRAM.

I know OpenGL performance is a work in progress for Intel, but seeing the great strides they're making with their drivers makes me hope Intel goes from strength to strength and really shakes up the industry. I know prices will begin to rise if they do, though...

 

There just aren’t many of them, they’re definitely not a popular gpu. Oddly enough I see more used A380’s than 770’s. I think a lot of people buy the A380 because on paper is the perfect GPU for an older pc, low wattage, low profile options, 6gb of vram, it usually matches the GTX 1060 6gb in performance, so it seems like a good pairing for an old office pc.

Except the A380 relies on pcie 4.0 and rebar, and if your system doesn’t have those, the performance is cut in half. Meaning it’s not viable for pre 11th gen intel systems which makes up 99% of the old budget pc market. Rip.

I have an A380 on my GPU shelves because I got it on launch day just to see what it’s all about, and from that launch day one thing it did exceedingly well was Minecraft, and nothing else even loaded.

IMG_0994.thumb.jpeg.b74db4a0a611914cedb9c8f96aad78e1.jpeg

it’s hiding back there next to the eclaro

Battlemage will be interesting. They've put a lot of consistent effort into the drivers, and the card is far more capable than it was at launch; in some specific scenarios it hits well above its tier. I get 120fps at 4K max settings in Halo MCC across every title except Halo 4. A recent driver update claimed "up to a 750% uplift in performance in Halo MCC", and they were absolutely not lying: it tripled my average framerate and removed all stutter.

For reference, that's higher framerates than my 3070 Ti gets at the same settings. That's not a 3060-tier card in that scenario; it's well above it. But then the inverse happens: it gets 60fps at 4K max settings in GTA V, which is about on par with a 3060. In CS2 it matches the 3060 Ti, and in some picky older games like GTA IV or whichever Skyrim remaster, it does worse than a 3060.

Edited by gwynchisholm

6 hours ago, Jackson Redstar said:

8 gigs of VRAM is simply not enough for SL in a crowded environment any more. Wish most cards had 16 or even 32 gigs.

Sharpview has somewhat better texture management, which helps. Even if you have a 1K x 1K texture on something, Sharpview won't load it into the GPU at that resolution unless you're so close to the object that it occupies almost 1K x 1K of screen pixels. The general idea is to have one texture pixel per screen pixel. This requires background threads frantically loading and unloading the GPU.
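
The selection logic is roughly as follows; this is a sketch of the idea in Python, not Sharpview's actual code:

```python
# "One texel per screen pixel": pick the mip level whose resolution
# matches the texture's on-screen footprint. A sketch of the idea,
# not Sharpview's implementation.
import math

def desired_mip(full_res, screen_px):
    """full_res: texture width in texels; screen_px: pixels it covers."""
    if screen_px >= full_res:
        return 0                        # close up: need full resolution
    max_mip = int(math.log2(full_res))  # smallest mip is 1x1
    return min(int(math.log2(full_res / screen_px)), max_mip)

# A 1024x1024 book cover seen from across the road, covering ~64 px:
mip = desired_mip(1024, 64)
print(mip, 1024 >> mip)   # mip 4 -> only a 64x64 copy needs to be in the GPU
```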

Take a look at my recent Sharpview demo video, starting here. This is a walkthrough of parts of southern Heterocera, using about 5GB of GPU space. There's an outdoor roadside bookstore with book covers on display. Each book cover is a high-resolution texture, and there are hundreds of them, but they're not all in the GPU at high resolution at the same time. If you watch carefully, you might see a resolution change: the cover of "I Want My MTV" is blurry for almost a full second. I need to speed that up.

You can buy a 32 GB GPU from Nvidia for US$3,674, but that's too much. 8 GB is a reasonable desktop hardware target today: mainstream gamer hardware, available for about US$250, probably less after the holiday season.

