filz Camino Posted November 29 (edited)

From past exploration of this topic, let me start by saying that anyone who responds with ideological claims about the past, like "no, Nvidia is always the best", without pausing to critically reflect on the evidence I'm presenting here is going to get quite severely ridiculed by me. So please take the time to fully consider the 2024 evidence before replying!

It has already been mentioned a few times in this forum, but for anyone who hasn't heard: AMD has recently introduced two technologies into their drivers that were previously only available in games specially coded to take advantage of them, but which on AMD GPUs now work with any game, including Second Life. These technologies have been widely used in the gaming community for a few years now to significantly improve performance, both in terms of the resolution and quality of the displayed image and in terms of dramatically improving the frame rate. The technologies are:

1/ AI upscaling, where 1080p content is upscaled to a higher 1440p or even 4K resolution. The upscaled result isn't as good as native high resolution, but it is a big improvement on low-resolution content that has not been upscaled, and there are performance benefits to rendering at a low resolution and upscaling (you might potentially double the frame rate by doing this).

2/ Frame generation, where in between each frame the graphics card inserts an extra frame created using motion interpolation, thereby doubling the frame rate. Because the extra frames are interpolated rather than "real", this technology does not improve reaction times in fast first-person shooters, so not all gamers are fans of it, but in something like Second Life, where millisecond reaction times are irrelevant, it basically doubles your frame rate without any significant downside. (A small sketch of the arithmetic behind both ideas follows at the end of this post.)

Both Nvidia and AMD support the above technology, but at the moment Nvidia only supports it in games that are specially coded to take advantage of it, and has no plans to do this for OpenGL. AMD, on the other hand, supports driver-level upscaling and frame generation that can be applied to any game whatsoever, and that includes Second Life.

So, how well does it work? Here's a video clip taken at a party this morning where there were about 70 avatars on the sim. This is using a 2023 laptop with AMD integrated graphics, so it is a low-power 28W APU. Screen resolution is 2560 x 1600 (so around 1440p), and I have set max avatars to the highest value possible, which is 66, so pretty much all the avatars are dancing.

Link to test video: Max avatars = 66

As you can see, even at 2560 x 1600 with all avatars animated, the driver is still reporting 25fps on this low-power APU with integrated graphics (Firestorm thinks the frame rate is about half that, because the frame generation is happening in the graphics driver). It doesn't look perfect, but I think it is quite usable.

Turning down max avatars to a somewhat more normal (but still extremely high for a low-powered laptop with integrated graphics) value of 20 gives the following:

Link to test video: Max avatars = 20

The driver is now reporting a very respectable 55fps, and I think it looks great!

Sadly the party ended before I could take a video with both of the above technologies turned off, but I've taken a video of how that looks on another sim, which is obviously not as busy. Here I'm only getting around 12fps without the assistance of upscaling and frame generation, and with the resultant motion judder it doesn't really look that great:

Link to test video: Max avatars = 20, upscaling and frame generation turned off

It may be true that if we ignore upscaling and frame generation, Nvidia still has a slight edge over AMD, but once we take these technologies into consideration, I think it is hard to argue in Nvidia's favour for SL. Especially considering that AMD dedicated GPUs typically have far more video RAM than Nvidia's, and that Second Life definitely benefits from having plenty of video memory available...

Edited November 29 by filz Camino
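For anyone who wants a feel for the arithmetic behind the two techniques described in the post above, here is a minimal Python sketch. It is purely illustrative: the function names are made up for this example, and the real driver-level features use motion estimation and far more sophisticated filtering than the plain pixel blend shown here.

```python
# Illustrative only -- not how AMD's driver-level upscaling or frame
# generation actually work internally; function names are hypothetical.
import numpy as np

def shading_cost_ratio(internal=(1920, 1080), native=(2560, 1600)):
    """Fraction of pixels shaded when rendering at a lower internal
    resolution and upscaling, versus rendering at native resolution."""
    return (internal[0] * internal[1]) / (native[0] * native[1])

def fake_interpolated_frame(prev, nxt, t=0.5):
    """Crude stand-in for frame generation: fabricate an in-between
    frame from two rendered frames. A real implementation estimates
    motion vectors rather than averaging pixels."""
    return ((1 - t) * prev.astype(np.float32)
            + t * nxt.astype(np.float32)).astype(prev.dtype)

if __name__ == "__main__":
    print(f"1080p vs 2560x1600 shading cost: {shading_cost_ratio():.2f}x")
    # ~0.51x -- roughly where "potentially double the frame rate" comes from.
    prev = np.zeros((1600, 2560, 3), dtype=np.uint8)      # black frame
    nxt = np.full((1600, 2560, 3), 255, dtype=np.uint8)   # white frame
    mid = fake_interpolated_frame(prev, nxt)
    print("Interpolated frame mean:", mid.mean())  # ~127, halfway between
```

This also accounts for the frame-rate discrepancy mentioned in the videos: Firestorm only counts the frames it renders itself, while the driver also counts the interpolated frames it injects, so the driver reports roughly double the viewer's figure.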
MarissaOrloff Posted November 29 (edited)

I bought a 7900 XTX to replace an RTX 3080. While its DirectX and Vulkan support was excellent, in OpenGL it was a 25% reduction in SL performance compared to the RTX 3080, and that was in both Windows and Linux. I ended up returning the 7900 XTX and replacing it with an RTX 4090, which (duh...) performed much better, at about double the performance of the 7900 XTX in OpenGL (Second Life).

If all you do is play AAA games without ray tracing, in DirectX or Vulkan, then the differences are minimal. However, for OpenGL, ray tracing, and hardware acceleration (GPU compute), which is an absolute must if you want to do local AI generation, Nvidia is much better. I run Ollama, ComfyUI, and Automatic1111 all locally with my 4090. AMD simply falls short in all areas with the exception of DirectX and Vulkan. Some time ago I posted Blender Benchmark results here in the forums, which you can search for if you like.

Edited November 29 by MarissaOrloff
Wulfie Reanimator Posted November 29

I have an XTX and love frame generation, but to be fair, Nvidia users can also get the benefit in any game with external programs like Lossless Scaling, which only costs a couple bucks.
Fraser Lisle Posted November 29 (edited)

Given SL is an OpenGL application, I don't see how that could be true; perhaps in Linux it is, but on Windows AMD's OpenGL driver falls short. Frame gen is nice etc., but... in terms of raw performance, Nvidia wins.

Edited November 29 by Fraser Lisle
filz Camino (Author) Posted November 29

@MarissaOrloff Thanks, but you don't say when you did these tests, so your post is disqualified. (E.g. were your tests before or after the AMD driver improvements that have happened relatively recently?)

@Wulfie Reanimator That's interesting and worth knowing about! Why on earth is this never suggested as a solution to people's problems? The great thing about frame generation is that it works well even when the frame rate is CPU limited, as is often the case in SL (see the sketch just below).

@Fraser Lisle Your post is disqualified because you provide no evidence to back up your claims.
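As a back-of-the-envelope illustration of that point about CPU-limited frame rates, here is a toy model in Python. The numbers and the function are invented for the example, not measurements; the only claim it encodes is the one in the post above, that driver-side interpolation roughly doubles what reaches the display regardless of where the render-rate bottleneck sits.

```python
# Toy model of why driver-side frame generation still helps when the
# viewer is CPU-bound. All numbers are made up for illustration.

def presented_fps(cpu_fps_cap, gpu_fps_cap, frame_generation=False):
    """The viewer's rendered rate is limited by the slower of CPU and
    GPU. Driver-side frame generation roughly doubles what reaches the
    display, because the extra frames are interpolated on the GPU after
    the viewer's (often CPU-heavy) work for the frame is finished."""
    rendered = min(cpu_fps_cap, gpu_fps_cap)
    return rendered * 2 if frame_generation else rendered

# A CPU-bound scene: the GPU could manage 60 fps but the CPU feeds it 20.
print(presented_fps(20, 60))                          # 20
print(presented_fps(20, 60, frame_generation=True))   # 40
```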
Qie Niangao Posted November 29

Disqualified? Gosh. Must be heartbreaking for those contestants. And they didn't even use performance enhancing drugs (as far as we know).
MarissaOrloff Posted November 29 (edited)

4 hours ago, Qie Niangao said:
Disqualified? Gosh. Must be heartbreaking for those contestants. And they didn't even use performance enhancing drugs (as far as we know).

I think reality can be a hard pill to swallow for some people. Some people need to reassure themselves that the product they purchased is the best, so they make an "impossible" set of comparison rules in their mind.

Edited November 30 by MarissaOrloff
MarissaOrloff Posted November 29 (edited)

9 hours ago, Fraser Lisle said:
Given SL is an OpenGL application, I don't see how that could be true; perhaps in Linux it is, but on Windows AMD's OpenGL driver falls short. Frame gen is nice etc., but... in terms of raw performance, Nvidia wins.

AMD and OpenGL are not a good match, especially in Linux. Though you will find some who will swear by AMD because their Linux graphics drivers are open source and built into the kernel.

Edited November 30 by MarissaOrloff
gwynchisholm Posted November 30

Performance between the two has nearly always been comparable within a given product tier for as long as the two have existed; really, the only viable products we ever saw from either manufacturer were in the same performance brackets, because if they weren't, they wouldn't succeed. It's no surprise at all that for an OpenGL game there aren't many major differences between the manufacturers at the same performance tier.

However, the reason I moved away from AMD cards as my primary GPU several years ago is simply product support. AMD throws their products into the legacy support bin so fast it's not even funny, and long-term driver support in general is hit and miss.

Let me explain. I have close to 100 different GPUs; here are my R9 Fury Nano and a 4GB GTX 750 Ti. The R9 Fury Nano is a $649 2015 flagship-adjacent tier GPU, meant to compete with the GTX 980, and in its time it did so very well. The 750 Ti is an entry-level $139 GPU from 2014, meant for less demanding titles, advertising 1080p esports performance or, I'm not even joking, 720p AAA game expectations. A cost-effective and very popular entry-level option.

The R9 Nano has a hard limit of 4GB of HBM memory, which left it unappealing in the years to come, because even though it was extremely fast, 4GB became a limit very quickly after 2015/2016, when 1440p and 4K gaming became more common and games got more demanding on video memory. It received limited driver support even in its prime, and by the time Polaris launched, the specialized driver support for the R9 Fury series in specific games slowed down a lot. AMD saw this product wasn't going to be as successful as others and it got backburnered. In early 2021 they deemed the hardware obsolete. By 2022 it got its last driver update, and that is just an Adrenalin legacy driver that only works on Windows 10, with no game-specific optimizations and nothing for newer titles at all.

Meanwhile, over on team green, here's the up-to-date Windows 11 mainstream driver for the 750 Ti: https://www.nvidia.com/en-us/geforce/drivers/results/235904/ It's not even on legacy yet.

Guess which one performs better in any modern scenario? The 750 Ti, by a lot, and it's entirely down to driver support. If you bust out games from 2016 on Windows 8.1 and run era drivers, yeah, the R9 Nano will smoke it. But it's not 2016, and nobody except me runs Windows 8.1 anymore. In order to be relevant in the mainstream you must be supported in it: modern Windows and modern games. And the R9 Nano isn't supported in modern Windows or modern games at all. Your best bet is legacy drivers on Windows 10, and even then the 750 Ti will do better in newer titles that came out after the end of support for the Nano. CS2 on these two cards is night and day: the 750 Ti plays it without thinking, while the Nano is a stuttery mess at best and hard-crashes at worst. SL is a similar story on a modern OS where driver support matters: on Windows 10 with legacy Adrenalin drivers, the Nano gets smoked by the 750 Ti. And if you have Windows 11, the Nano isn't even on the table, because it doesn't have Windows 11 drivers.

That's why I can't recommend AMD cards anymore. You won't really know if you bought a card at a bad time until it's too late, and then you have a product AMD is going to push into legacy purgatory in a few years, while Nvidia is fully supporting decade-old hardware because it's still in use by enough people that support for it matters. And it's not just the one generation: all of the 200/300 series R9 cards went EOL at the same time, while the Nvidia 700/900 series are still fully supported. Even Vega and the Radeon VII aren't really getting driver priority anymore. I don't really care if the hardware features of a modern AMD GPU are better than the Nvidia options; it's a coin flip whether the hardware will be usable at all in five years.
filz Camino (Author) Posted November 30 (edited)

@Qie Niangao Yes, you are disqualified too, I'm afraid! (Your post is completely off-topic! Kindly stick to the topic, please?)

@MarissaOrloff Unlike some in this forum, I don't have any ideological preference for any particular manufacturer, and if you tell me that your tests were conducted after AMD improved the OpenGL performance of their drivers, i.e. that the results of your tests are still valid today, then I am happy to accept your argument. Too many people in this forum believe that technological truths are eternal and that because manufacturer X was once better, they must always be better for eternity. So, I would be interested to know the date you did your tests?

On "bitter pills": the graphics card in my PC is an RTX 4080 Super, which might give you a hint about how attached I am to AMD. My laptop is an ultra-miniature handheld PC with a 10" screen that I use when I'm travelling, and there are no computers available in that category that have a discrete GPU. Given the actual available options, I think I made the right choice with it, but if you can show me a 10" laptop that gets more than 55fps on a busy sim with graphics set to high and max avatars set to 20, perhaps I'll change my mind on that too?

Edited November 30 by filz Camino
filz Camino (Author) Posted November 30

@gwynchisholm Interesting! Good driver support is certainly part of the equation. Although on the longevity side of things, a problem I have with Nvidia is lack of video memory, which is also something that limits the useful lifetime of the product.