
Please, LL, put developing a multi-threaded SL viewer on your roadmap.


DNArt

You are about to reply to a thread that has been inactive for 172 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

46 minutes ago, Henri Beauchamp said:

For a start, you are diverting this thread from its original topic which is not about the GPU load, but about multi-threading.

I'd say multi-threading to keep the graphics pipeline supplied with data is the whole point of the thread. If the GPU isn't being 'fed', there's something amiss in the pipeline.

8 minutes ago, Henri Beauchamp said:

Second, look at the About boxes, and see how the CPU frequency jumps up and down for the same viewer or between viewers. Obviously, there is something amiss with your system(s), since as soon as any viewer is logged in and rendering any scene, the CPU freq should go to turbo for the core running the main thread and not budge from it!

You know that the internal viewer frequency measurement is inaccurate. The 2nd image posted (CVLV 1.28.2.5, Ryzen system) shows the internal viewer measurement at 4.14 GHz with a framerate of 69 fps. The 4th image posted (Singularity 1.8.9.8419) shows the internal viewer measurement at 3.6 GHz with a framerate of 83 fps. I've made comparisons on two different computers running Ubuntu 20.04 with the standard LTS kernel. I believe I've provided sufficient data to make my point that the issue is unique to Cool VL Viewer. If attaining the best results requires 'locking' cores at maximum frequency, how is this viewer intended for general use by people who aren't hardware-savvy?

17 minutes ago, Henri Beauchamp said:

The improvement brought to the latest Cool VL Viewer version won't change how high the GPU load is (or the frame rates) in static scenes, but how fast textures rez on login, after a far TP, or while moving around (the latter even more obviously when the new "Boost textures fetches with speed" feature is enabled, in Advanced -> Rendering -> Textures).

There are two videos posted showing how the viewers compare in rezzing moving scenes. At one time I believed your viewer was 'smoother' with boating and driving. That was last summer. Then I looked carefully at video recordings I made months ago which showed CVLV didn't rez objects as quickly (at that time) as Firestorm. Fewer rezzed objects in a moving scene would correlate with a higher average moving framerate. I have noticed an improvement in rezzing speed with the current version, as the videos in my post above show.

I like your viewer. I have used it almost exclusively for my mesh uploads over the past couple of years, for example. I want to see it get even better, and believe me I'll be first in line to sing its praises when it does.


Quote

If the GPU isn't being 'fed', there's something amiss in the pipeline.

Nothing amiss in the pipeline: just your CPU slowing down when it should not, which causes the "drop" in the GPU load, since it's not fed as much work.

Quote

You know that the internal viewer frequency measurement is inaccurate.

It *is* accurate (at least as much as /proc/cpuinfo is) in the Cool VL Viewer's case: my code detects which core it is running on (this is even reported in the log file) at the very moment it reads the frequency for that core. At that point, the core should already be in turbo mode!!! It certainly is on all the systems (5 different PCs) I tested it on!
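Henri's description of the measurement (detect which core the thread is on, then read that core's frequency) can be sketched on Linux roughly like this. This is a hypothetical illustration, not the viewer's actual code:

```python
import ctypes
import re

def core_mhz(cpuinfo_text: str, core: int) -> float:
    """Parse the 'cpu MHz' entry for one core out of /proc/cpuinfo text."""
    current = -1
    for line in cpuinfo_text.splitlines():
        m = re.match(r"processor\s*:\s*(\d+)", line)
        if m:
            current = int(m.group(1))
            continue
        m = re.match(r"cpu MHz\s*:\s*([\d.]+)", line)
        if m and current == core:
            return float(m.group(1))
    raise ValueError(f"core {core} not found")

def current_core_mhz() -> float:
    """Frequency of the core this thread runs on *right now* (Linux only)."""
    libc = ctypes.CDLL("libc.so.6")
    core = libc.sched_getcpu()  # which core are we on at this instant?
    with open("/proc/cpuinfo") as f:
        return core_mhz(f.read(), core)
```

Note the inherent race: the scheduler may migrate the thread between the sched_getcpu() call and the /proc/cpuinfo read, which is one reason such readings can jitter between samples.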

Since, again, you seem to be the one and only person who tested my viewer and found it "slower", I will not even bother wasting my time replying to the rest (nor to any other posts of yours): I simply urge everyone reading you to never take your words for granted, and to make their own tests instead!


Firestorm with the CPU 'locked' at 3.7 GHz

[screenshot: FS-6.4.13-CPU_LOCKED.jpg]

Cool VL Viewer with the CPU 'locked' at 3.7 GHz

[screenshot: CVLV-1_28.2.15-CPU_LOCKED.jpg]

Cool VL Viewer reports 15 MHz less than Firestorm. I opened Help > About repeatedly to confirm this; it's well within the margin of error. NVTOP still reports that Cool VL Viewer is not fully utilizing the GTX 960.
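For scale: assuming both viewers were targeting the 3.7 GHz lock, a 15 MHz gap is a fraction of a percent (illustrative numbers, since the exact readings aren't given):

```python
fs_mhz = 3700.0            # assumed: Firestorm reading at the locked frequency
cvlv_mhz = fs_mhz - 15.0   # Cool VL Viewer reading 15 MHz lower
rel_diff = (fs_mhz - cvlv_mhz) / fs_mhz
print(f"{rel_diff:.2%}")   # about 0.41%, i.e. measurement noise
```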

5 hours ago, Henri Beauchamp said:

Since, again, you seem to be the one and only person who tested my viewer and found it "slower", I will not even bother wasting my time replying to the rest (nor to any other posts of yours): I simply urge everyone reading you to never take your words for granted, and to make their own tests instead!

Oh, and by the way, your Windlight renderer has had an obvious bug for some months now: the moon is in the sky, but there is no light or shadow from it at night. I would have reported it by now, but... you know why. I'm surprised nobody else has posted about this bug in your forum yet. The moon works as expected (as well as LL has made it, anyway) in your EEP renderer.


49 minutes ago, KjartanEno said:

Firestorm with the CPU 'locked' at 3.7 GHz

[screenshot: FS-6.4.13-CPU_LOCKED.jpg]

Cool VL Viewer with the CPU 'locked' at 3.7 GHz

[screenshot: CVLV-1_28.2.15-CPU_LOCKED.jpg]

Cool VL Viewer reports 15 MHz less than Firestorm. I opened Help > About repeatedly to confirm this; it's well within the margin of error. NVTOP still reports that Cool VL Viewer is not fully utilizing the GTX 960.

Oh, and by the way, your Windlight renderer has had an obvious bug for some months now: the moon is in the sky, but there is no light or shadow from it at night. I would have reported it by now, but... you know why. I'm surprised nobody else has posted about this bug in your forum yet. The moon works as expected (as well as LL has made it, anyway) in your EEP renderer.

You may want to check your CPU thermals, as 'locking' a CPU to its max frequency (usually) won't stop it from thermal throttling. CoolVL uses more threads than FS (especially in the latest update!), so more cores are loaded, and a weak cooling setup will end up showing exactly this kind of behaviour.


59 minutes ago, Jenna Huntsman said:

You may want to check your CPU thermals, as 'locking' a CPU to its max frequency (usually) won't stop it from thermal throttling. CoolVL uses more threads than FS (especially in the latest update!), so more cores are loaded, and a weak cooling setup will end up showing exactly this kind of behaviour.

CPU temperature does indeed fluctuate more with Cool VL Viewer than with Firestorm. Nevertheless, both viewers are rendering fairly static scenes, and the temperatures stay well below the 80 C throttling threshold of an Intel i3-6100 (2c/4t) desktop part. It's not as if I'm rendering a scene in Blender with Cycles or compiling a viewer.
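A quick way to sanity-check this on Linux is to read the kernel's thermal zones from sysfs (a sketch; the zone paths and the 80 C figure from the post are platform-specific assumptions):

```python
import glob

THROTTLE_C = 80.0  # assumed throttle point, per the i3-6100 figure above

def millideg_to_c(raw: int) -> float:
    """sysfs thermal zones report millidegrees Celsius."""
    return raw / 1000.0

def zone_temps_c() -> dict:
    """Read every thermal zone the kernel exposes."""
    temps = {}
    for path in sorted(glob.glob("/sys/class/thermal/thermal_zone*/temp")):
        with open(path) as f:
            temps[path] = millideg_to_c(int(f.read().strip()))
    return temps

if __name__ == "__main__":
    for path, t in zone_temps_c().items():
        status = "near throttle!" if t >= THROTTLE_C - 5 else "ok"
        print(f"{path}: {t:.1f} C ({status})")
```

Sampling this once a second while a viewer runs shows immediately whether the frequency dips line up with temperature spikes.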

I see no need to worry about the Ryzen 5 3600, since it is a more robust CPU with 6 cores and 12 threads. Cool VL Viewer shouldn't need more power than other viewers to accomplish the same tasks anyway.

I do not intend to run any viewer 'locked' at maximum frequency on a regular basis. I've configured my operating system to go into 'Performance' mode when I run certain programs and return to 'On Demand' mode when those applications are closed. However, since Henri made a claim, I tested it and found it had no basis in fact. Locking the CPU frequency did not result in Cool VL Viewer having a higher framerate than Firestorm in a fairly static scene. I see no need to revisit the boating tests on the Ryzen system using 'locked' CPU frequencies where it can be seen that both viewers rez objects and textures in a similar way while moving through multiple sims. And what about my rigged mesh body disappearing when I cross sim boundaries using Cool VL Viewer but not in Firestorm?

To be quite frank, it's not my computers and operating system that are at fault here. From the first time I mentioned the performance issues with Henri, it was one thing after another. He was so sure it was my AMD adaptive sync. Well, then what about my Intel/Nvidia tests? Oh it must be my CPU isn't 'locked' to maximum frequency? Nope. Thermal throttling? No.


7 hours ago, KjartanEno said:

Oh it must be my CPU isn't 'locked' to maximum frequency? Nope. Thermal throttling? No.

Any double-digit framerate is fine for my purposes, but nonetheless I'm trying to understand the point of this discussion.

If I understand the story so far, Cool VL Viewer has somewhat different threading code and appears to produce lower "unlocked" CPU demand and more fluctuating GPU utilization, so (I gather) the implication is that this threading code might stand some further optimization. That seems pretty plausible.*

Or maybe some measurements are contaminated by any of an unbounded set of potential artefacts, and we're now playing Whac-a-Mole. 

Might be easier to get somebody** to replicate KjartanEno's results. And if confirmed, maybe somebody*** to look at the code.

Or am I missing the point?

________________
*... although I gather the whole intended difference in threading is isolated to rezzing textures, not static scene rendering, so if the viewer's resource utilization differences are confirmed rendering static scenes, the source may not be the threading code.
** not me
*** also not me


2 hours ago, Qie Niangao said:

Any double-digit framerate is fine for my purposes, but nonetheless I'm trying to understand the point of this discussion.

If I understand the story so far, Cool VL Viewer has somewhat different threading code and appears to produce lower "unlocked" CPU demand and more fluctuating GPU utilization, so (I gather) the implication is that this threading code might stand some further optimization. That seems pretty plausible.*

Or maybe some measurements are contaminated by any of an unbounded set of potential artefacts, and we're now playing Whac-a-Mole. 

Might be easier to get somebody** to replicate KjartanEno's results. And if confirmed, maybe somebody*** to look at the code.

Or am I missing the point?

________________
*... although I gather the whole intended difference in threading is isolated to rezzing textures, not static scene rendering, so if the viewer's resource utilization differences are confirmed rendering static scenes, the source may not be the threading code.
** not me
*** also not me

Many people express a desire for higher framerates overall and better performance when moving through sims, rezzing new objects and decoding textures. And what might be enough for some could be a struggle for others on laptops or older computers. Every optimization that helps the low end also gives a bit more room to push the envelope for those fortunate enough to have top of the line hardware. Taking a load off the render thread is a good idea, thus multi-threading.

I'm all for people doing tests and showing results. Why has no one besides me told Henri that his viewer has had performance issues since at least the EEP integration, up to his latest release? What does the average user do? What settings do they use? Are they almost exclusively in clubs and skyboxes with shadows off? Why did nobody tell Henri that his Windlight renderer has a bug where the moon doesn't shine or cast shadows, and has been that way for months now?

Maybe most people just use midday sky settings. They may not know or care enough to worry about a framerate difference. Maybe they don't go outside clubs, skyboxes or home parcels to explore the continents, taking pictures and playing with lighting all the time. They pick a viewer that they like, for whatever reason: it has a cool interface, or some feature they can't live without. Word of mouth gets around that such-and-so viewer is 'the fastest', so they download it and try it. Maybe it doesn't have some feature they're used to, like client-side animation re-syncing, so they move on to the next viewer on the list.

Someone willing to go to the trouble of poking and testing as much as I have... I'd love to meet you!


23 minutes ago, KjartanEno said:

Many people express a desire for higher framerates overall and better performance when moving through sims, rezzing new objects and decoding textures. And what might be enough for some could be a struggle for others on laptops or older computers. Every optimization that helps the low end also gives a bit more room to push the envelope for those fortunate enough to have top of the line hardware. Taking a load off the render thread is a good idea, thus multi-threading.

I'm all for people doing tests and showing results. Why has no one besides me told Henri that his viewer has had performance issues since at least the EEP integration, up to his latest release? What does the average user do? What settings do they use? Are they almost exclusively in clubs and skyboxes with shadows off? Why did nobody tell Henri that his Windlight renderer has a bug where the moon doesn't shine or cast shadows, and has been that way for months now?

Maybe most people just use midday sky settings. They may not know or care enough to worry about a framerate difference. Maybe they don't go outside clubs, skyboxes or home parcels to explore the continents, taking pictures and playing with lighting all the time. They pick a viewer that they like, for whatever reason: it has a cool interface, or some feature they can't live without. Word of mouth gets around that such-and-so viewer is 'the fastest', so they download it and try it. Maybe it doesn't have some feature they're used to, like client-side animation re-syncing, so they move on to the next viewer on the list.

Someone willing to go to the trouble of poking and testing as much as I have... I'd love to meet you!

You're assuming that everyone has the same results as yourself.

For me, Firestorm is hands-down the worst-performing viewer in active development (the LL viewer is around 10 FPS faster; other TPVs often 15+). Fastest is a tossup between Alchemy and CoolVL; Alchemy is able to run faster at slightly higher settings due to some AMD-specific optimizations (AMD CPU / AMD GPU here), but it loads assets considerably slower and has more bugs than CoolVL.

Re: the bug you found, I haven't tested it; I always have EEP enabled, on parcel settings, and the moon casts shadows as expected. But you're better off posting a bug report on the CoolVL forum than here.


3 hours ago, Jenna Huntsman said:

Fastest is a tossup between Alchemy and CoolVL; Alchemy is able to run faster at slightly higher settings due to some AMD-specific optimizations (AMD CPU / AMD GPU here), but it loads assets considerably slower and has more bugs than CoolVL.

Well, I'm glad to hear that. This boost in speed for Cool VL Viewer happened after you installed Alchemy per your post in Henri's forum on 2021-03-15:

Quote

 

Hey,

After many relogs and TPs around (as I couldn't quite believe it), my performance on CoolVL has pretty much doubled after installing the latest Alchemy (ShrewdShepard) version.

I've genuinely got no explanation for this, I can't quite believe it, but I'm wondering if Alchemy has installed/updated/overwritten something that CoolVL depends on, which has doubled my performance (on an AMD GPU here, so a pretty major thing for SL). Whatever it is, it should probably be included with CoolVL if at all possible.

I've attached a log, in case that can give any clue as to how this has happened

(p.s. Same GPU driver version, no hardware changes, no Windows updates, etc - just installing the new Alchemy version (6.4.12.727) has done this for CoolVL)

 

I don't run Windows anymore, so I can't check whether that would help in my case. The last time I had Windows 10 installed on a dual-boot setup, my tests showed that the AMD open-source drivers on Linux ran all viewers around 50% faster than the official AMD drivers on Windows, on the exact same hardware. This is on Polaris GPUs (RX 580). It would be interesting to see how Navi 2 (RX 6xxx) is faring with Linux kernel 5.11 and Mesa 21.1-dev.

If I installed Windows 10 again, I'd be taking a big step backwards in viewer performance. Linux uses faster file systems by default than Windows, and installing one viewer would not have any effect at all on how other viewers run due to how the binaries are compiled with their associated libraries. Faster caching and multi-threading would likely benefit viewers compiled for Windows more than Linux, since Linux is already faster by design.


6 hours ago, KjartanEno said:

I'm all for people doing tests and showing results. Why has no one besides me told Henri that his viewer has had performance issues since at least the EEP integration and up to his latest release? What does the average user do? What settings do they use?

why?

probably because people on lower-spec computers find that, compared to other viewers, Cool VL Viewer absolutely motors in terms of raw FPS. Like on my computer.

some discussion with pics about viewer settings is here: 

 


3 hours ago, KjartanEno said:

Well, I'm glad to hear that. This boost in speed for Cool VL Viewer happened after you installed Alchemy per your post in Henri's forum on 2021-03-15:

I don't run Windows anymore, so I can't check whether that would help in my case. The last time I had Windows 10 installed on a dual-boot setup, my tests showed that the AMD open-source drivers on Linux ran all viewers around 50% faster than the official AMD drivers on Windows, on the exact same hardware. This is on Polaris GPUs (RX 580). It would be interesting to see how Navi 2 (RX 6xxx) is faring with Linux kernel 5.11 and Mesa 21.1-dev.

If I installed Windows 10 again, I'd be taking a big step backwards in viewer performance. Linux uses faster file systems by default than Windows, and installing one viewer would not have any effect at all on how other viewers run due to how the binaries are compiled with their associated libraries. Faster caching and multi-threading would likely benefit viewers compiled for Windows more than Linux, since Linux is already faster by design.

Eh, that post isn't all that relevant here in terms of the performance difference, as the difference is the same now as it was before, albeit with a higher average framerate for both. I still don't have any idea what exactly went on there, but Henri's guess of Alchemy playing with driver flags seems like a good bet.

I'm inclined to say Windows *probably* wouldn't change anything, as OGL on AMD (on Windows) sucks. I tried Linux early last year, but I couldn't get my sound drivers to work properly. I've since been told that they've been fixed, but I've yet to find the time/motivation to try again.


Blame AMD for their utterly shoddy Open GL drivers (especially on Windows).

I use an RX Vega 56 on Windows, and watching the Visual Studio Profiler on an AVX2-optimized build of the Cool VL Viewer is kind of depressing. The profiler shows maybe 10-20% of the time still spent in viewer code, spread across lots of methods (i.e. nothing nice to optimize), but about 80-90% inside the crappy AMD driver or the OS. So even if one could optimize the viewer perfectly, the fps would not increase by more than maybe 10-20%.

Henri tends to get 5-10x my fps with his NVIDIA/Intel/Linux setup, compared to my AMD Ryzen 2700X + RX Vega 56 + Win10.

With the multi-threaded texture fetcher I actually get near 100% CPU usage when teleporting to a new region full of AVs with a 512 m draw distance, or when flying over the mainland, and it is kind of awesome to see the texture console race through textures as the viewer sucks in 60+ Mbit/s of data. But the frame rate doesn't change.
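The behaviour described here (decode threads saturating the cores while the render loop stays put) is the classic worker-pool shape. A generic sketch of the pattern, not the viewer's actual code:

```python
import queue
import threading

def decode(blob: bytes) -> bytes:
    """Stand-in for a JPEG2000 decode; real viewers call OpenJPEG or KDU here."""
    return blob[::-1]  # dummy work

def run_decoder_pool(jobs, n_workers=4):
    """Decode all jobs on worker threads; returns the queue of results."""
    in_q, out_q = queue.Queue(), queue.Queue()
    for blob in jobs:
        in_q.put(blob)

    def worker():
        # Drain the input queue; exit when no work is left.
        while True:
            try:
                blob = in_q.get_nowait()
            except queue.Empty:
                return
            out_q.put(decode(blob))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out_q

# A render loop would drain out_q once per frame instead of decoding inline,
# which is why CPU usage climbs without the frame rate changing.
```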

 

 


24 minutes ago, Kathrine Jansma said:

Blame AMD for their utterly shoddy Open GL drivers (especially on Windows).

I use an RX Vega 56 on Windows, and watching the Visual Studio Profiler on an AVX2-optimized build of the Cool VL Viewer is kind of depressing. The profiler shows maybe 10-20% of the time still spent in viewer code, spread across lots of methods (i.e. nothing nice to optimize), but about 80-90% inside the crappy AMD driver or the OS. So even if one could optimize the viewer perfectly, the fps would not increase by more than maybe 10-20%.

Henri tends to get 5-10x my fps with his NVIDIA/Intel/Linux setup, compared to my AMD Ryzen 2700X + RX Vega 56 + Win10.

With the multi-threaded texture fetcher I actually get near 100% CPU usage when teleporting to a new region full of AVs with a 512 m draw distance, or when flying over the mainland, and it is kind of awesome to see the texture console race through textures as the viewer sucks in 60+ Mbit/s of data. But the frame rate doesn't change.

I'm very glad you're poking around and experimenting with the code! And you're quite right about AMD's official OGL drivers. Sometimes I wish I had gone for the GTX 1060 6GB instead of the RX 580. My EVGA GTX 960 FTW has been running well for 5 years. If I put the Nvidia card in my Ryzen 5 3600 system, it might actually beat the RX 580, at least in terms of [OpenGL] framerate, though not in memory capacity. I guess, being on Linux, I wanted to go with the open-source alternative, which is AMD, and my Sapphire RX 580 has been a solid card for Vulkan/DXVK gaming on Steam.


1 hour ago, Kathrine Jansma said:

Blame AMD for their utterly shoddy Open GL drivers (especially on Windows).

I use an RX Vega 56 on Windows, and watching the Visual Studio Profiler on an AVX2-optimized build of the Cool VL Viewer is kind of depressing. The profiler shows maybe 10-20% of the time still spent in viewer code, spread across lots of methods (i.e. nothing nice to optimize), but about 80-90% inside the crappy AMD driver or the OS. So even if one could optimize the viewer perfectly, the fps would not increase by more than maybe 10-20%.

Henri tends to get 5-10x my fps with his NVIDIA/Intel/Linux setup, compared to my AMD Ryzen 2700X + RX Vega 56 + Win10.

With the multi-threaded texture fetcher I actually get near 100% CPU usage when teleporting to a new region full of AVs with a 512 m draw distance, or when flying over the mainland, and it is kind of awesome to see the texture console race through textures as the viewer sucks in 60+ Mbit/s of data. But the frame rate doesn't change.

 

 

I think any and all efforts to make the viewer more multithreaded are a good thing, even if the net framerate gain is only small.

Anyway, re: AMD OGL on Windows, I'm pretty sure it's never going to be fixed; it would have been fixed years ago if it were. Then again, LL should probably have given us a Vulkan viewer some time ago too.


This should put an end to the fake news about viewer performance, and demonstrate the potential of the multi-threaded image decoder: ViewersTest.mkv

Testing protocol:

Since the choice of available (release) Linux viewers is slim, I tested Firestorm and Kokua (the very latest EE releases) against the Cool VL Viewer. All viewers ran on the same PC under Linux v5.11.8 (a vanilla, self-compiled kernel with all mitigations off):

  • CPU: 9700K @ 5.0 GHz, locked on all cores (i.e. only C0 & C1 states are allowed). No thermal throttling (well-cooled CPU).
  • GPU: GTX 1070 Ti @ 1980 MHz for the GPU and 9200 MT/s (= 4600 MHz) for the VRAM. Current NVIDIA proprietary drivers (v460.56).
  • Exact same graphics settings across viewers: e.g. no Classic Clouds and a 1.00 Mesh LOD multiplier in the Cool VL Viewer, since these don't exist in the others; Objects LOD limited to 3.0 and Terrain LOD to 2.0 in all viewers (FS can do more), etc.
  • EE rendering mode for everyone (the Cool VL Viewer can do both EE and WL). Midday default setting.
  • All the viewers' caches and log files held on a RAM disk.
  • No other software running besides the viewer and the screen capture, which does disadvantage the Cool VL Viewer, since the capture software loads one or two cores (shown in dark green in MATE's load monitor) that would otherwise be used by an image decoder thread or two.
  • Of course, the "background yield time" setting was set to 0 for all viewers.
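The "locked on all cores" condition in a protocol like this can be verified by reading every core's current frequency from cpufreq sysfs and checking the spread (a sketch; the paths and tolerance are assumptions):

```python
import glob

def read_khz(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def all_locked(freqs_khz, target_khz, tol_khz=50_000):
    """True if every core sits within tol_khz of the locked target."""
    return bool(freqs_khz) and all(
        abs(f - target_khz) <= tol_khz for f in freqs_khz
    )

if __name__ == "__main__":
    paths = glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")
    freqs = [read_khz(p) for p in sorted(paths)]
    # A 5.0 GHz lock is 5,000,000 kHz in cpufreq units.
    print(freqs, all_locked(freqs, 5_000_000))
```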


A scene with a "relatively heavy" outdoor setting was chosen pretty much at random, just by looking at the map for a sim without any avatar around (within draw distance), with a bit of everything: sky, a water body, land, trees with alpha textures, buildings, meshes, etc. Only one avatar on screen (alas, imposed for 100% reproducibility reasons), wearing a BoM body, mesh hair, prim attachments, and a flexi cape (i.e. a bit of everything).

The viewers' caches are primed with a first dry run (not shown), so that all textures are already cached on the RAM disk for the measurement run.

The "Texture time" HUD info and the texture console are shown. The scene is considered fully rezzed when fewer than half a dozen fetches are active (some moving objects within draw distance may cause new fetches after the initial rezzing, and since those are random, we won't count them).

The rezzing time is taken with ALM off, so that less time is used by the render pipeline and the viewers rez faster (the texture fetcher gets more CPU time in the main thread for itself). This especially helps FS and Kokua, which do not have a multi-threaded image decoder; they also use KDU, which is much faster than the Cool VL Viewer's OpenJPEG...

At the end of each rezzing pass, I switched ALM and full shadows on (SSAO and DOF off), so that you can appreciate the FPS differences as well (and so that I won't hear again some total nonsense about my viewer's performance)... The Cool VL Viewer especially shines with ALM off, since I optimized most of the C++ code but not the shaders (they are almost exactly the same for all these viewers), and very little of the render pipeline (llpipeline.cpp) itself; the CPU-hungry shadow code therefore makes the speed gain less obvious (not 80% and over, but closer to 50%).
Also note the memory consumption...
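The "ALM off frees main-thread time for the fetcher" argument above can be made concrete with a toy budget model, for viewers whose decoder shares the main thread (all numbers illustrative, not measured):

```python
def frames_to_rez(n_textures, decode_ms, render_ms, frame_budget_ms=16.7):
    """Frames needed to drain a texture queue when decodes share the
    main thread with rendering, given a fixed per-frame time budget."""
    spare_ms = max(frame_budget_ms - render_ms, 0.0)
    if spare_ms <= 0.0:
        return float("inf")  # rendering eats the whole frame: nothing rezzes
    decodes_per_frame = spare_ms / decode_ms
    return n_textures / decodes_per_frame

# Heavier rendering (ALM/shadows on) leaves less spare time per frame,
# so the same texture queue takes roughly 3x longer to drain here:
light = frames_to_rez(600, decode_ms=2.0, render_ms=8.0)   # ~138 frames
heavy = frames_to_rez(600, decode_ms=2.0, render_ms=14.0)  # ~444 frames
```

A threaded decoder sidesteps this model entirely, since decodes no longer compete for the spare main-thread milliseconds.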
 


On 3/23/2021 at 10:54 PM, KjartanEno said:

I'm very glad you're poking around and experimenting with the code! And you're quite right about AMD's official OGL drivers. Sometimes I wish I had gone for the GTX 1060 6GB instead of the RX 580. My EVGA GTX 960 FTW has been running well for 5 years. If I put the Nvidia card in my Ryzen 5 3600 system, it might actually beat the RX 580, at least in terms of [OpenGL] framerate, though not in memory capacity. I guess, being on Linux, I wanted to go with the open-source alternative, which is AMD, and my Sapphire RX 580 has been a solid card for Vulkan/DXVK gaming on Steam.

I run Linux with a GTX 960, 4GB RAM, using the current direct-from-NVIDIA driver NVIDIA-Linux-x86_64-460.56.run. (You have to stop the X server and run that file from the console to install. I had a fun couple of days last week and had to learn entirely too much about setting up screen resolution from the command line, but installing that driver came after that stage.)

My CPU reports as a Xeon W5580, which is old, but respectable at 3.2 GHz. I set Firestorm to limit at 30fps, and while it struggles a little with EEP, compared to Windlight, it's OK. Benchmark scores put it close to the level of a 1050 card, and I am not sure you will see much advantage.

I can be grumpy at tedious length about LL, proper threading of viewer code, and their geriatric devotion to OpenGL. Apart from the video card, my box is high-end tech from 2009, and LL still haven't caught up.

 


On 3/23/2021 at 7:06 PM, KjartanEno said:

I don't run Windows anymore, so I can't check whether that would help in my case. The last time I had Windows 10 installed on a dual-boot setup, my tests showed that the AMD open-source drivers on Linux ran all viewers around 50% faster than the official AMD drivers on Windows, on the exact same hardware. This is on Polaris GPUs (RX 580). It would be interesting to see how Navi 2 (RX 6xxx) is faring with Linux kernel 5.11 and Mesa 21.1-dev.

If I installed Windows 10 again, I'd be taking a big step backwards in viewer performance. Linux uses faster file systems by default than Windows, and installing one viewer would not have any effect at all on how other viewers run due to how the binaries are compiled with their associated libraries. Faster caching and multi-threading would likely benefit viewers compiled for Windows more than Linux, since Linux is already faster by design.

The standard OpenGL drivers in Windows are way faster than the drivers provided by NVIDIA/AMD. If someone never plays AAA games and doesn't need more advanced functions, be it hardware support or software for editing/video recording, etc., Second Life and all basic OpenGL apps run fast and extremely well with the standard drivers.



