
NiranV Dean

Resident

Everything posted by NiranV Dean

  1. The issue is well known (there have been several threads here on the forum reporting it). It's a graphics driver issue; you will need to update your graphics drivers to fix this.
  2. No. You can, however, select Object View from the Camera floater, but you will have to do this every time after a relog, and it needs to be turned off when you try to interact with things.
  3. Just another thing I noticed while reading through the OP: the FPS display (at least in FS, as far as I recall) does not show you your actual FPS, it shows you your average FPS. A 30 FPS average doesn't mean you have a smooth 30 FPS; you could be jumping between 15 and 45 FPS if those jumps happen within the measured timeframe and get averaged out. This is why in BD I chose to show the actual current FPS (hence why it jumps around a lot and seems super unstable). Most games I know show you either your current framerate or at least a very tight average, which usually catches smaller FPS instabilities but still shows you frame spikes (see the sketch below for the difference). As for the camera: that's called camera smoothing and can be configured in Preferences - Camera. It's set higher compared to other Viewers by default. I do not like the jerky direct camera in third person; that's something for Mouselook or Build mode.
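A minimal sketch (in C++, not actual viewer code, with a made-up averaging window) of the difference between the two readouts described above:

```cpp
#include <deque>
#include <numeric>

// Illustrative only: "current" FPS as shown in BD vs. a windowed average
// like other viewers show. The 60-frame window is an assumption.
struct FPSCounter
{
    std::deque<double> frame_times;          // seconds per frame, newest last
    static constexpr size_t WINDOW = 60;     // assumed averaging window

    void addFrame(double dt_seconds)
    {
        frame_times.push_back(dt_seconds);
        if (frame_times.size() > WINDOW)
            frame_times.pop_front();
    }

    // Instantaneous FPS: jumps around, exposes every spike.
    double currentFPS() const
    {
        return frame_times.empty() ? 0.0 : 1.0 / frame_times.back();
    }

    // Averaged FPS: a 15 FPS dip and a 45 FPS burst inside the same window
    // can still read as a "smooth" 30.
    double averageFPS() const
    {
        if (frame_times.empty()) return 0.0;
        double sum = std::accumulate(frame_times.begin(), frame_times.end(), 0.0);
        return frame_times.size() / sum;
    }
};
```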
  4. That's amazing news, now let's just get everyone on AMD to update their GPU drivers and hope they didn't introduce a million new issues (again again again again again again).
  5. You could 7zip/gzip the UI to make it "compacter". Jokes aside, I've already been told that my UI is too small/compact... despite my best efforts to give widgets clear layouts and groupings while keeping empty space to a minimum. But people also complain that my UI is too complex, the same people who didn't complain when I simply dropped the Firestorm skin into my Viewer (for the lols) and showed them the exact same Viewer UI except with the Firestorm skin (grey on grey with some touches of orange)... I learned a valuable lesson: not using Firestorm's skin makes everything more complicated! I mean come on, what's not to understand here? It even has the Low-Medium-High thing everyone loves so much.
  6. There is no option; this needs to be done in code. Black Dragon Viewer has a fix for this implemented.
  7. Actually, on further inspection, my Lowest quality setting is 0.8 (unlike what the tooltip says -> 0.25), hence why everything remains high quality. I did see a small handful of objects reducing their LOD very slightly, so I tested further: on dropping the object quality all the way down to 0.2 they do immediately update and drop LODs completely, as seen here. Also yes, size seems to play a role; as I gradually increase the quality, more and more objects cycle through their 4 LODs until all of them arrive at the highest one. Although only smaller objects seem to update immediately, the big tree doesn't update at all unless I move my camera.
  8. Dynamic LOD on @ Object Quality 2. Next step: turned Dynamic LOD off and set Object Quality to 4; as you can see, the objects did not update. Now zooming closer to force an update. And now zooming out again to see that dynamic LOD is still off and they do not change their LOD, as expected; they stay at maximum quality as desired by the Object Quality 4 option. Unless LL bugged LOD updates between 6.6.0 (which BD is currently based on) and 6.6.8, it should be the exact same behavior. BD has no changes in how LOD works; I have never even looked into the LOD code to begin with, I'm using stock LOD code here. Note though, after further testing, that they only update up, not down: they will only increase their LOD level, and once at their maximum they will stay there regardless of whether you lower your Object Quality setting or not, so it's a one-way trip.
  9. That is not the experience I have had in any other Viewer. That sounds like a Firestorm bug to me, then.
  10. That is literally what I said. Also, turning Dynamic LOD off does not freeze objects at their loaded LOD. Dynamic LOD simply makes objects use LOD levels based on the distance to your camera and the object's size, multiplied by your Object Quality setting. Turning Dynamic LOD off simply eliminates the range component (I do not know whether it eliminates the size component too, but I'd assume so, at least from experience) and instead makes Object Quality directly and universally control the LOD levels used. To put it simply: Object Quality 2 with Dynamic LOD makes small objects and far-away objects switch to lower LODs, whereas with Dynamic LOD off they will simply use their highest or second-highest LOD (they should not require setting it higher, but some objects seem a bit... weird in that regard, which might be because the size component is not disabled). Turning off Dynamic LOD does not immediately show any effects, however, since LOD updates only happen while your camera is moving (or when an object first determines which LOD to use, possibly also when certain other object updates are triggered). After toggling Dynamic LOD you will have to move your camera around, namely zoom closer, to force all LODs to update to their correct levels. A rough sketch of the idea is below.
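Roughly the idea as I understand it; this is an illustrative C++ sketch with made-up thresholds and names, not the viewer's actual LOD code:

```cpp
#include <algorithm>

// With Dynamic LOD on, the chosen LOD depends on object size and camera
// distance scaled by Object Quality (RenderVolumeLODFactor); with it off,
// the quality setting drives the LOD directly. Thresholds are invented.
int selectLOD(bool dynamic_lod, float object_radius, float camera_distance,
              float object_quality /* RenderVolumeLODFactor */)
{
    float detail;
    if (dynamic_lod)
    {
        // Bigger and closer objects get more detail; the quality setting
        // scales the whole curve.
        detail = (object_radius / std::max(camera_distance, 0.01f)) * object_quality;
    }
    else
    {
        // No range component: the setting alone decides the LOD.
        detail = object_quality;
    }

    // Map the detail value onto the four LOD levels (0 = lowest, 3 = highest).
    if (detail > 1.0f)  return 3;
    if (detail > 0.5f)  return 2;
    if (detail > 0.25f) return 1;
    return 0;
}
```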
  11. If that "dynamic adjust" is another one of FS's dumb ways of saying Dynamic LOD, a feature all viewers have, then it's controlled by the Object Quality setting (RenderVolumeLODFactor).
  12. Black Dragon does not have an option to turn off the automatic camera reset on movement. There are however two ways to do this: 1) You zoom on something and hold down the left mouse button while moving; this will keep your camera in place, but it might wiggle around a bit when walking (I think this can be fixed by lowering the camera lag, but I have never tried it). 2) Recommended: you use the flycam (simply start it, you do not need to move it) to lock your camera in place.
  13. It's important to note that Black Dragon kept the original Windlight Edit window layouts (it only added the new EEP options and slightly rearranged things to make space for them).
  14. It's a well-known issue that has been reported and around since last year's server switch. It very commonly happens when you do something (change, manipulate, rez, delete items) on Viewer #1 and then switch to Viewer #2. Viewer #2 will not know about the changes for some reason, and most if not all of your changed or modified objects will simply be invisible. When that happens, as you already found out, simply teleporting away and back fixes the issue. If this happens to your attachments, you will have to detach and reattach them to fix it. Some TPVs and LL have been banging their heads against a wall for over a year now, and as far as I can tell they have not found out what is causing this nor how to fix it. https://jira.secondlife.com/browse/BUG-231582
  15. I've gotten several reports from people using remote solutions that any sort of mouse-driven looking is pretty broken (right-click drag in BD is affected by this too). Sadly there is no way to fix this that I could think of. I suspect that either the remote transmission itself sends garbage mouse position data, or the fact that you are essentially "translating" between two desktops causes weird position offsets to be reported. Nothing the Viewer can account for, though, if not even Windows can (the Viewer requests the mouse position and velocity from Windows directly).
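For illustration only (an assumed sketch, not the viewer's actual input code), this is the kind of delta calculation that breaks when a remote-desktop layer reports offset or stale cursor positions:

```cpp
#include <windows.h>

// The viewer-style pattern described above: ask the OS for the absolute
// cursor position each frame and turn the difference into camera rotation.
// If the remote layer feeds Windows wrong positions, these deltas become
// garbage and the application has no way to tell.
void updateMouseLook(POINT& last_pos, float& yaw, float& pitch)
{
    POINT pos;
    if (!GetCursorPos(&pos))                        // position as reported by Windows
        return;

    const float sensitivity = 0.1f;                 // made-up value
    yaw   += (pos.x - last_pos.x) * sensitivity;    // horizontal delta
    pitch += (pos.y - last_pos.y) * sensitivity;    // vertical delta
    last_pos = pos;
}
```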
  16. To be fair, multiple too-quick relogs have always caused (100% reproducibly for me) all kinds of issues. Whether this is actually cache corruption I never really checked, but it would make sense given how the Viewer works and what Beq explained about how the cache functions. This is nothing new in the performance Viewer, it's just more noticeable because the shutdown process is a lot faster now. Though even before the performance Viewer I noticed that even shortly after the Viewer window got destroyed it was still doing something for a second or two. What I find even more interesting is that running multiple instances of the same Viewer does not seem to have any adverse effects like this, whereas a quick relog does. You'd think attempting to use the cache with two Viewer instances at the same time, preventing the first instance from closing it properly, should in theory cause issues. Either way I hope this gets fixed; I'm seeing quite a lot of inventory-related issues pop up in user reports lately.
  17. I've gotten a couple reports that this is also happening for a couple people on Black Dragon, sometimes only on specific accounts, sometimes on all accounts.
  18. That's why there are options to change the quality of each subfeature of each rendering path. If deferred with shadows and AO is too much, you can always turn off shadows and AO and keep the slightly better look, materials support and infinite light support at a very slight performance cost. But this extreme inclusiveness has always been detrimental to SL's development as a platform, and to each and every user not on the low end of the spectrum. I can give you a content example: baked AO and prim shadows. They are used for lower-end hardware, predominantly for forward rendering, but if you, like me, have deferred rendering with realtime shadows and AO enabled, you will see them stack, creating really bad-looking stuff. It looks nice on low end, but it looks like absolute garbage otherwise (and is actually slower and more wasteful for deferred, because not only do you render a shadow prim, which also casts a shadow of its own, it is another texture that needs to be loaded and is completely unnecessary). Why did we have to accept the downgrades all this time and no one gave a crap, but now that we are finally moving forward after 10+ years everyone gets riled up? Why do we insist on supporting even the oldest and most outdated of things when all it does is hamper SL's development and content creators? Inclusiveness only goes so far. It's so incredibly hypocritical of the community to complain that SL doesn't make any progress, but at the same time demand that their outdated 2007 stuff still functions and looks exactly the same 15 years down the line, and then when SL finally decides to make progress the community is in uproar, every single time. The lack of SL's progress is a problem of our own making, a problem that I have seen in no other community. I find it incredibly sad that I had to take a beating for 10+ years now because of said "inclusiveness". Where is MY inclusiveness? Did I get anything the past 10 years? No, I was shunned, my stuff was denied, improvements (free ones at that) were denied. Why? Because of low-end users. I have as much sympathy for them as they did for me; now they get the shaft.
  19. I'm not. You said it's hard making low-poly models; I gave you a guide that shows it's actually pretty easy. I learned these optimisation "tricks" from a professional modeller, you may recognise his name: Krample (Kampffisken) aka Gordon (the maker of Orange Nova avatars). He was streaming while preparing his Kobold for a VRChat release, I tuned in and he showed off some easy tricks to quickly optimise models very effectively. And yes, optimising for an actual game engine is more involved; I know that because, as the pictures above show, I'm also working in Unity (for VRChat) and I very much care about having a good performance ranking. The good performance ranking is quite limiting and requires a lot of optimising depending on the model: no more than 70k polys ever, no more than 8 faces (materials) total, no more than 8 meshes or 2 skinned meshes, practically no particles, and god have mercy on you if you want to do something like physics or dynamic bones (flowing hair, a moving tail etc.). I've also optimised my model for the Quest version, which imposes very strict limits for a good rating: less than 10k polys, no more than a single mesh or skinned mesh, no more than a single face (material), no sounds, no more than 90 bones (SL's Bento skeleton would already fall through), no particles whatsoever, no shaders, no nothing. If you compare this to Second Life, where avatar bodies commonly have hundreds of faces (possible materials), it's a nightmare. But SL is much simpler in that regard: all you need to do in SL to achieve an optimised model is reduce the polycount, reduce faces, reduce the separate item count (keep the mesh as one single thing with 8 faces and make good use of them), offer LOD options (although for avatars this sadly doesn't really matter...) and make some intelligent use of textures (don't use 1024x1024 on fingernails, which also links back into making better use of the 8 available faces). That's really it. Achieving 40k polys, one single mesh with 8 faces and decent memory usage (~10-20MB VRAM) for an avatar is quite easy. I know what you mean. I have a 40" TV right in front of me (1m away). I can essentially count pixels, but the texture blur does not faze me at all. It is absolutely minimal and by no means unacceptable for something that is essentially a free option. I don't expect FXAA to be the be-all and end-all. It's just a simple edge-detection post-process shader that doesn't even make use of depth (see the sketch below for the basic idea). It's not meant to be perfect, it's not meant to deliver the best quality, it is meant to deliver some basic AA that eliminates the worst offenders of jagged edges at practically zero cost. Nothing more than that. Also, didn't you show me that MSAA is available in deferred? (Although I had to relog for it to take effect.) Right now, after the performance update, I can't get MSAA to work anymore at all. But if you want to go ahead and bring TAA to SL, be my guest: https://de45xmedrsdbp.cloudfront.net/Resources/files/TemporalAA_small-59732822.pdf
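To illustrate what "simple edge-detection post-process" means, here is a heavily reduced C++ sketch of the principle; real FXAA is considerably more sophisticated, so treat this as the idea rather than the actual algorithm:

```cpp
#include <algorithm>
#include <vector>

// Works only on the final colour image (no depth buffer): find edges via
// luma contrast between neighbouring pixels and blend across them. This is
// why it is nearly free, and also why it slightly softens texture detail.
struct Image
{
    int width = 0, height = 0;
    std::vector<float> rgb;                       // 3 floats per pixel, row-major

    const float* pixel(int x, int y) const
    {
        x = std::clamp(x, 0, width - 1);          // clamp at the borders
        y = std::clamp(y, 0, height - 1);
        return &rgb[(y * width + x) * 3];
    }

    float luma(int x, int y) const                // perceived brightness
    {
        const float* p = pixel(x, y);
        return 0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2];
    }
};

Image edgeBlendAA(const Image& in, float threshold = 0.1f)
{
    Image out = in;
    for (int y = 0; y < in.height; ++y)
        for (int x = 0; x < in.width; ++x)
        {
            // Local luma contrast against the four direct neighbours.
            float l  = in.luma(x, y);
            float lo = std::min({ l, in.luma(x, y - 1), in.luma(x, y + 1),
                                     in.luma(x - 1, y), in.luma(x + 1, y) });
            float hi = std::max({ l, in.luma(x, y - 1), in.luma(x, y + 1),
                                     in.luma(x - 1, y), in.luma(x + 1, y) });
            if (hi - lo < threshold)
                continue;                          // no visible edge, keep pixel

            // Blend the pixel with its neighbours: softens the jagged edge,
            // but also slightly blurs any texture detail crossing it.
            for (int c = 0; c < 3; ++c)
            {
                float sum = in.pixel(x, y)[c] + in.pixel(x, y - 1)[c] +
                            in.pixel(x, y + 1)[c] + in.pixel(x - 1, y)[c] +
                            in.pixel(x + 1, y)[c];
                out.rgb[(y * in.width + x) * 3 + c] = sum / 5.0f;
            }
        }
    return out;
}
```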
  20. You'd think that, but it's incredibly easy actually. It's so easy that I made a quick tutorial for what is essentially a couple of clicks.
  21. Counter argument: mesh creation is actually quite easy and quick to learn (making bodies, however, is a different thing; that requires decent sculpting skills and some know-how about human anatomy). Mesh content is everywhere because it is actually quite easy to get into SL... unless it's a fitted mesh, but that's because the tools (or lack thereof) are simply bad; they are basic at best and often require paid extra plugins (such as Avastar). I tried it: downloaded the official skeleton example for Blender, smacked my model on it, retargeted the bones, everything was matching up just fine, brought it into SL, and it was twisted, broken and stretched, the axes were wrong... requiring a lot of additional extra work to fix, work that shouldn't be necessary if SL wasn't so... limited. Look at VRChat and NeosVR for example. NeosVR, akin to SL, offers all tools ingame to create everything, and I mean ALL tools; we are talking about editing things on engine level through the ingame UI. We are changing internal mesh file parameters via the ingame UI; that is practically unseen levels of freedom, SL is a joke compared to it. NeosVR is incredibly unintuitive, much more so than SL, due to the extreme complexity of said tools (mostly due to the lack of documentation and tooltips, though). Did it stop people from creating content there? No. VRChat offers less customization than NeosVR, but does so in the engine it is built upon -> Unity. It's an external engine that you have to download and work with in addition to texturing, image editing and modelling tools, but it offers everything in one place; all the tools for setting your stuff up are right there. Sure, you cannot live-edit them (unless you script options in the world or add customisation to your avatar), but did that stop anyone from making stuff for it? No. I wouldn't mind if SL started offering external tools similar to Unity Engine at the cost of non-live edits. They would make setting up stuff a lot easier or more centralised and would also allow for deeper edits. Both NeosVR and VRChat use PBR, and as borked as Materials is, your Materials content translates very well to PBR and the other way around. I straight up imported a model made for VRChat's PBR into SL with the original PBR textures and it looks pretty decent (apart from the missing 4K texture support).
  22. I don't demand anyone run with shadows, and neither do I demand anyone run a 150m render distance; in the latter case that's pretty excessive. Deferred rendering, however, is not just a sub-part, it's not a feature, it's an entirely different rendering path which has been around for over a decade now (see the sketch at the end of this post for the basic difference between the two paths). I would like everyone to run on deferred though, as it puts everyone on the same basic page. We obsoleted turning off vertex shaders (aka pre-Windlight). Did it hurt SL? No. Deferred is marginally slower than the full Windlight forward rendering (if shadows and ambient occlusion are off it hardly matters on any mid-range hardware of the past 10+ years; I ran deferred and shadows back when it was only publicly available through Kirstens Viewer, on a GTX 260 at roughly 15-20 FPS). Any and all GPUs should be able to run deferred nowadays, and all the way back 10+ years. If any GPU cannot run deferred for whatever reason (such as Intel iGPUs, or partly AMD GPUs with bad performance due to bad OpenGL support) then you simply made a wrong choice (I bet you paid more money for that than a PC would cost nowadays). Also, you are using SL's bad optimisation as a reason not to enable deferred, which is funny considering that the reason SL is so badly optimised is the very choice you defend so much. Many of SL's problems are attributable to a choice we have, a choice that in turn means a compromise, and this compromise has always hit the higher end of the spectrum. You simply cannot expect even the oldest systems that barely run SL to be supported forever. I have never in my life complained when any of my games updated, reworked and enhanced their engine or visuals and in the process obsoleted older hardware, even if it made the game unplayable for me; I either endured it, found a way to continue playing (through mods or ini tweaks) or simply came back later when I had a better PC. Also, I don't see why you are making such a fuss about this; you have a choice too, you can keep supporting the option to turn deferred off. Your Viewer is known for old hardware support and is commonly one of the first Viewers recommended when it comes to older hardware. After all, users have a choice: either leave SL and come back later, or use a different Viewer that caters to their needs. It is not a true choice, as you say, but what is the difference anyway? Compromise on graphics or compromise on Viewer choice; the compromise stays, which is simply unavoidable. 10+, Henri, it's 10+ years. Not 5. I ran deferred + shadows back in 2009, when it was a crashy hellhole, with a GTX 260 and an AMD Phenom II; the GTX 260 is 14 years old. 14 years, Henri. I ran deferred with a GTX 260, a 460, a 670 and now a 1060, and apart from having more VRAM available for more textures to load (which LL still doesn't even support) the performance has never really improved. Not a single time did I get something that classifies as "better" performance by upgrading the GPU (and neither did I by upgrading my Phenom II to an FX; in fact performance was slightly worse...). Any and all improvements came from LL's side (and an upgrade to a Ryzen). This video of mine is 12 years old. It's recorded with a GTX 260 (again, 14 years old now; it was 2 years old at the time of recording). Back then I recorded with trash like Fraps and Camtasia, both of which absolutely murdered my framerate while recording (until I got a 460 and it was less impactful).
Deferred with shadows can run on 14-year-old hardware, Henri, and somewhat "usable" by SL standards; turning off shadows and ambient occlusion would easily give you a decent framerate. That was 12 years ago; we've got massive performance improvements now, we've got a lot more options to adjust things to our liking, and we've also got more ways to keep our framerate somewhat intact. I suppose that's the point you are trying to make, but what you are saying doesn't make any sense. You cannot achieve any "nice shadows", since shadows are something that is done by the Viewer; modelling a tree for its shadows (what would nice tree shadows even look like?) would most certainly result in a tree that doesn't look nice. And the reason it runs at 3 FPS is not because of shadows, it's because the tree is trash and the person doesn't know how to optimise, which has been the case for practically the entirety of SL as a whole. But again, we have choices, right? We can choose to ignore optimisation, we can choose to ignore LODs, we can choose to spam 1024x1024 textures on everything, we can choose to upload an animated tree with a million polygons and 256x8 faces. Deferred actually runs quite decently for being so old and unoptimised, considering what deferred has to put up with in SL; I can tell you that Unity breaks a lot faster with far less, and that isn't even with full realtime lighting! I think you are missing the point of FXAA and me as a whole. FXAA was never meant to replace proper MSAA. No other AA except SSAA can truly replace proper MSAA. FXAA was used because it was a free and easy solution to offer, and it does its job decently enough. It's not meant to be perfect, it's just meant to reduce jagged edges to an acceptable level at virtually no performance cost, and it does so damn well, at the cost of texture blurriness, but I'm willing to take that compromise. In fact, I have always preferred FXAA over practically any other AA method: SMAA is slow and does a worse job at the very thing it's supposed to do, TXAA is absolutely horrid, MSAA is slow and often not available (due to deferred rendering), SSAA is a complete no-go unless you have over-the-top hardware and hundreds of FPS to spare. A good TAA implementation + FXAA has been the best so far; it's relatively fast, doesn't have the ghosting of TXAA and smoothes edges extremely well, both in motion and at a standstill. The only big downside is that it blurs even more than FXAA alone. Does it matter, though? No. I prefer the "blurred" textures with FXAA because it has a similar effect to water mipmapping: without mipmapping the water texture is incredibly sharp and noisy at a distance, which looks incredibly bad; mipmapping helps with that. FXAA does a similar thing with textures; a lot of high-repeat textures look super noisy at anything but very close range, noisy to the point they become hard to look at, so FXAA blurring these textures is actually a good thing. For less repetitive textures it is hardly a problem, the blurring is barely noticeable, and with SL's 1024x1024 texture limit it doesn't blur much of anything anyway. But again, we got FXAA as a compromise, performance vs quality... deferred was incredibly slow. It was for the longest time (until the performance update).
Deferred has never really seen any improvements, neither in optimisation nor in quality (evident in how bad shadows and depth of field still look), because very few people were using it, because no one saw a reason to: it was slower, it didn't offer much new, and lots of people couldn't run it at their desired framerate (and because lots of people assume that deferred automatically means shadows, which it doesn't). Largely due to the CHOICE of not needing to use deferred it was never further improved, which in turn meant fewer people used it... it's a vicious cycle. It's why a lot of new things fail: if they don't get popular they die, but how do you make something "niche" popular? By making it better and more readily available, but that doesn't happen because there is very little market, because, you guessed it, very few people are using it. You don't spend time and money on something no one uses, and no one uses something no one spends time and money on. A first step has to be taken, and this first step must be an incentive to use the new thing. Materials could have been this incentive, but its bad and dumbed-down implementation turned a lot of people away from it, as does the missing optimisation for it; you can't run Materials anyway, not with 1GB of max texture memory allocated to SL, just look at all the LL Viewer reports that the Viewer keeps constantly reloading textures! Everything bad that ever happened to SL I can attribute all the way back to compatibility or choice. Oz was, in a sense, right: options are bad. Sometimes. Some options should simply not be given. I will certainly not support turning off deferred (in fact I never did; the only reason I still offer the option is to turn deferred on, in case it ever got disabled for some reason) and my users didn't have a problem with that. In fact they are glad; even people who can't use my Viewer because their hardware is too weak don't complain. They simply chose to use a different Viewer until they got an upgrade, which according to them never even came to mind before, because prior to using my Viewer (which is commonly known for requiring better hardware) they never had any reason to. Firestorm doesn't give them a reason to upgrade; it outright crashes and refuses to work for most people who try to do fancy stuff on old hardware, whereas weirdly enough my Viewer manages to do that just fine, only at a suboptimal framerate. I'm pretty confident that A: SL will not die (remember, it has been "dead" for 10+ years) and B: it will hurt people less than you think. We have seen LL make much worse decisions for much worse reasons and SL hasn't really changed, despite people complaining and threatening to leave. What LL is doing wrong (for probably the 100th time) is not officially announcing their decision to remove the deferred toggle. They are not giving people a warning that old hardware support will eventually be phased out, and that's where I see the problem. Something like this should have been officially announced soon after the decision was made, to give people a warning and some extra time.
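For anyone wondering what "an entirely different rendering path" means in practice, here is a purely conceptual C++ sketch (placeholder types and functions, not SL viewer code) of why deferred decouples light cost from scene complexity, which is also why forward tends to fall behind once many lights are involved:

```cpp
#include <vector>

struct Object {};
struct Light  {};
struct GBuffer {};   // albedo, normal, depth render targets (assumed layout)

void shadeForward(const Object&, const Light&) {}        // placeholder
void writeGBuffer(const Object&, GBuffer&) {}            // placeholder
void shadeScreenSpace(const GBuffer&, const Light&) {}   // placeholder

void renderForward(const std::vector<Object>& objects, const std::vector<Light>& lights)
{
    // Shading cost grows roughly with objects * lights.
    for (const Object& o : objects)
        for (const Light& l : lights)
            shadeForward(o, l);
}

void renderDeferred(const std::vector<Object>& objects, const std::vector<Light>& lights)
{
    GBuffer gbuffer;
    // Geometry pass: each object is rasterised once into the G-buffer.
    for (const Object& o : objects)
        writeGBuffer(o, gbuffer);
    // Lighting pass: each light is a comparatively cheap screen-space pass,
    // independent of how many objects are in the scene.
    for (const Light& l : lights)
        shadeScreenSpace(gbuffer, l);
}
```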
  23. You don't see how leaving an option to toggle back to an entirely outdated and "compatibility"-focused renderer at this point is holding back SL's development? Look at the past 10 years! What noteworthy visual improvements has SL made? None. And I'm not even talking just about graphics. Meshes are a good example of how development has stagnated outside of graphics. They were added... and that's it. They have never been improved, they have never been touched, they have never gotten new features. Meshes are still the same thing they were in 2010. Did we ever get blendshapes? Nope; we could be adding custom shape sliders to our meshes, or we could replace the ones from SL with custom ones (to keep "compatibility" but offer better quality). Animations. Have there been any improvements there... NOPE! Other file support? Future-proof formats? Nope. Can we remap bones? Nope. Skeleton? We got Bento... which was literally just adding a couple of extra bones, something that took so long to add but shouldn't have taken longer than 10 seconds. Have we learned something from Bento? Did we improve something from Bento? No. We didn't get custom skeletons, we didn't get loosened skeleton rules, the animation formats are still trash, meshes are still only barebones supported, and all of that because we have a CHOICE. Compatibility: the choice to use old and new, both always "viable", but in order to stay compatible and keep both viable we have to hold the future down; we can't further develop anything because it would either out-develop old content and obsolete it, or straight up break compatibility. This has always been the reason SL stagnated so hard: SL's hard push to keep old things intact, to keep the option to revert back, offering the choice between two systems, the old original and the new shiny one that is just a polished turd because it had to be dumbed down to stay compatible with the old system. Windlight, EEP, meshes, animations, Materials, graphics: everything, everywhere is constantly being bogged down by the choice we have to use something old. In case it's not obvious, I'm ranging so widely across SL because I honestly don't even know where to start; there is not one example, there aren't 1000 examples, SL as a whole is the example. It would be easier to ask me to list the things that haven't stagnated, because that list would be pretty much empty (not counting the upcoming PBR rework and GLTF support). And we didn't even talk about the elephant in the room: offering the option to turn off deferred means they officially support non-deferred. People not using deferred will want to have a piece of the cake too; they want to stay in SL at all times, and again they will not see a reason to ever upgrade because LL supports them no matter what. Why should they upgrade? SL works fine as it is! This will give off the wrong impression to users, this will lead to people complaining, eventually taking up LL's time again for something that is unsupported and is only there to give people a barebones way of staying in SL, which they won't see as such; they will see it as full support and thus will ask for further support. They will want fixes for their non-deferred rendering too. LL can't just tell them that they don't care because non-ALM is officially unsupported now, because it's clearly available as an option, which makes it look like a supported feature. You have to understand that people had 10+ years (~13 now, since deferred) to get an upgrade. Not a big one, not even a decent one.
A really small one; again, a fully deferred-capable machine isn't/wasn't that expensive. You don't need to buy an 800€ GPU. You can get a perfectly deferred-capable GPU for far less than 200€, and that's with the outrageous prices. Lots of PCs are fine except for the GPU, which is usually what's lacking, so spending a couple of hundred on a long-lasting upgrade over 10 years should be possible. This isn't visual dictatorship, this is simply the truth: people had more than ample time, and everyone with capable machines had to endure this painstaking, non-existent progress, the constant denial of new features due to "oldtimers", far longer than necessary. Sometimes compromises have to be made, and the compromises we have been making for the past 10 years have driven a lot of people away; skilled people, good people, and they will never come back, they have found better places to be. To be perfectly honest, if it wasn't for my own Viewer I'd be gone too. SL holds nothing of value to me anymore; its development has been spiting me for 10 years, Oz's bad decisions and unwillingness to support and make improvements have been turning me away from trying to help LL, and SL itself simply doesn't offer anything anymore that is in any way special or unique. It's convoluted, it has stagnated, it didn't give me any more candy in 10 years, and now it's simply too late, I don't care anymore. I'm just sitting here keeping my Viewer alive for people, because there are people who like it. It's not even for me anymore at this point. Anything beyond this is simply a waste of my time and skills. And that's coming from someone who has been enthusiastic about SL for over a decade; it is simply heartbreaking. Many feel like this, many felt like this, even Lindens feel this; some of the Lindens are really great people and they really want to develop SL, they see the potential, the future, but they are constantly held back. I just don't know anymore... it hurts.
  24. No, I'm fully supportive of dropping forward rendering in favor of deferred rendering (although I wish they would put some more love into it); it is simply time. We've had deferred rendering for more than 10 years now (roughly 13 years and counting). Deferred rendering also does not by any means need a gaming PC, as seems to be commonly believed; that is absolutely wrong. Before I was using a Ryzen and a GTX 1060 I was on an NVidia GTX 670 with an AMD FX 6200. When the GTX 670 died (due to a power outage frying my PSU and damaging my GPU) I had to switch to my old GTX 460 for a while, and I was surprised to see SL run at a "stable" 30 FPS with deferred and shadows enabled. I say "stable" because those 30 FPS were... weird; it was like I had more FPS (probably around 40-50) but the Viewer was hard-capping it at 30 FPS for some reason. No matter what I did it would hard-stop at exactly 30 FPS, but even those 30 FPS with a poor man's 460 were astonishing, which just showed that SL didn't need a powerful GPU at all, as long as the little shader work can be done in time by the GPU. Remember, that was with an AMD FX; the FX series was known to be complete trash single-core-performance-wise (sadly), and before that I had an AMD Phenom X4 which had slightly better performance. The FX is 10 years old at this point. The Phenom is way more than 10 years old, and these can run SL just fine, and that was BEFORE the performance update; now, with the performance update, I'd imagine you'd get even better FPS. Sorry, but if you can get 30+ FPS in deferred rendering WITH shadows, with ambient occlusion, with SSR, with DoF and everything else enabled in MY Viewer, which is arguably the slowest of them all, on 10+ year old hardware, then there is simply NO excuse. If you couldn't afford to get a PC capable of running SL with deferred rendering on in the past 10+ years (even just without shadows; again, I had 30+ FPS WITH shadows, the biggest FPS killer) BUT you could somehow afford a trashy Mac or laptop that EASILY cost more, if not outright twice the amount of a normal PC, then it's your own fault. That 460/670 + AMD FX was literally already a budget PC, both together costing around ~300€ at the time of release; add the 16GB RAM I had at the time for another 100€ and we are looking at 400€ for a really, really basic, deferred-rendering-capable PC. Even if we have to build the PC from scratch (assuming you already have a mouse, keyboard and a monitor or TV you can use) we are looking at maybe 600€ (with an expensive case). Yes, it has been... hard to get GPUs lately due to cryptomining and other reasons, but again, you had 10+ years to scrape together ~500-600€ to get something that works for SL well into the future. If you spent 500-1000€ on a laptop or Mac that can't run SL, then by all means that was a bad investment. I simply cannot find any more excuses to keep the forward renderer around (unless they were going to upgrade it to forward+ and get rid of deferred). On top of all that, forward has been slower for me for over 10 years. Turning deferred off either drops me a couple of frames or straight up halves my framerate in some cases; it has been like that for many years (roughly since I've had the GTX 670). I feel like it could be an edge case due to forward being slower with certain things (example: lights), but it has been happening on 2 different GPUs with 2 different CPUs and on practically every Viewer 2+.
Yes, it didn't happen on Cool Viewer, but Cool Viewer was also slower in deferred than most other Viewers for me, which I'd attribute to Henri having deferred off most of the time (according to himself) and making changes and optimisations mainly in and around forward rendering. I just find it weird that they waited so long for this. Having done this much sooner could have made this decision (like many others) a lot easier. The longer LL keeps waiting on things, the more people get used to something, the more they will see what we have as the de facto standard, and the harder it gets down the line to make a change. 10 years after deferred was introduced people are like "but I never needed that for 10 years!", "it has been like this for 10 years, why now?". It's because it was about time (~8-9 years ago already); the sooner we had made the switch, the sooner we would have given people an incentive to look for something capable of running SL with deferred. If we had gone deferred-only 10 years ago, people would have been much less likely to buy something that isn't capable of running SL with deferred. This has nothing to do with selfishness. This has to do with doing the right thing for the platform going forward, which in the end means profit for everyone. I find it incredibly shortsighted to immediately put this down as selfishness; the goal of this is to make SL better for everyone involved in the long run, and that means you, me, LL, everyone. How is this selfish? Isn't it selfish to keep bogging SL down (like we have for the past 10 years) with compatibility with old stuff, for those people who want their old stuff to look as bad as it did 10 years ago? (Yeah, looking at that one Linden who got that deferred-shiny change rolling.) Their "we need to keep everything as it is" mentality has been destroying quite a lot of my things; they were meant to preserve something, right, but they were meant to preserve old content, and in the process they destroyed the new. That's the other side of the coin. If going full deferred for the future is selfish, then staying fully compatible with the old is selfish too; they are two sides of the same coin, and both will impact the other side negatively.
  25. In your picture the diffuse isn't blank. The diffuse also can't be blank; it's the main texture after all, so at the very least it is plain white.