Everything posted by Henri Beauchamp

  1. This bug is tracked in JIRA as BUG-232037. It happens systematically 2 days and 21 hours after each sim restart in my home sim, and is darn annoying (it means the sim gets restarted thrice a week: once on the rolling restart and twice after it, to "fix" this missing-friends-on-login issue). I regularly nag Lindens (@Rider Linden and @Simon Linden) about it at the SUG. Sadly, and despite the added burden it imposes on SL support (with the resulting many sim restart requests), it does not seem to be very high on LL's priority list. ☹️ Feel free to nag with me (post in the JIRA, come and complain at the SUG, etc.)!...
  2. Your best bet, when you notice some weird new behaviour after a driver update, is to roll back to the former version... It is not unusual for new versions to introduce new (and sometimes very painful) bugs... The latest is not always the greatest! For people running (or trying to run) Windows (yuck!), I'd recommend using NVCleanstall to install the driver for NVIDIA cards: it makes it easy to roll back to any version, and it also lets you disable unwanted "features" among all the extras the normal installer would otherwise pile onto your system disk and keep running in memory without your approval (which include the so-called "telemetry" spying stuff)...
  3. Saving the VAT on the membership fee is indeed a relief, but clearly, land ownership in SL remains more expensive for an EU resident than for a US one... So please, do "reconsider it in the future"! 😛 Also, it would have been great to add intermediate mainland use fee tiers (instead of keeping one tier for each doubling of the area), because the jump from, say, 4096 m² to 8192 m² is a large one (around $130 a year with the VAT), and I would have appreciated (and likely used, thanks to the savings from the membership fee VAT removal) a plan for 6144 m², for example... Finally, there is something unclear in the knowledge base about stipend grandfathering. I have been considering for a long time switching from the quarterly to the annual membership plan, but this knowledge base article, if correct, means I would trade my L$400 stipend for a L$300 one by doing so (losing L$100 a week, i.e. roughly $20 a year), because the knowledge base says that the L$400 stipend is reserved for Premium plans opened before November 1, 2006, and I'm pretty sure mine (which does have the L$400 stipend) was opened after that (likely early 2007, since I joined SL with a basic account in October 2006)...
  4. The current version can be found by following the link for the "GLTF PBR Materials" project viewer on the Alternate Viewers page. Note that this first alpha release lacks occlusion culling, so it is slower than it should be: to do a fair comparison with LL's current release viewer, you must disable occlusion in the latter. But as you will see, the PBR viewer is in no way faster than the release viewer in ALM mode, and it is certainly not able to match the latter with ALM off on "weak" GPUs... I do not expect Firestorm to do anything but strictly follow what LL is doing in this respect, but I am not involved in Firestorm development.
  5. You are totally missing the point... It's not about holding back progress, but about keeping something which exists and allows people to enjoy SL, people who won't be able to enjoy it any more if it is removed!!! First, I have always been an eager early adopter, and since you mention meshes, may I remind you that I backported mesh rendering to the Cool VL Viewer only a few weeks after LL released their own mesh viewer, when some people said it was an impossible task... Here, it is not even such a hard thing to do: it is just a matter of keeping the existing code and shaders alongside the new ones, just like I did for the WL+EE renderer (and in that case, things were much more complex, not because of the renderers themselves, but because I had to implement real-time translation between WL and EE day/sky/water settings so that both types would render fine in both renderers: no such issue here). And so far, Dave has failed to deliver, based on what you can already experience today in the first alpha release of the PBR viewer. I would love to be proven wrong, but I'm afraid I won't be...
  6. The Cool VL Viewer can indeed run on a Pi 4B or a RockPro64, for example... And yes, that's thanks to the forward rendering mode. 😜 Running it on Chromebooks would likely be doable too, after the necessary changes to adapt the Linux code and port it to Android... Such devices can run SL, even if painfully. But what would change, should LL persist in their suicidal plan of removing the forward renderer, is that these devices won't be able to run it at all any more (or so slowly, or in such a degraded way, that nobody would stand running SL on them). Add to this the current difficulty of upgrading or buying a new computer (computer parts prices, budget constraints due to inflation), and you can see how badly timed it is to raise the hardware entry level for SL... Also, the future of SL depends on whether a true client (with a full 3D renderer) gets ported to mobile platforms or not... Anything that makes the viewer unable to run on modest hardware makes this goal harder to attain, or outright impossible... The thing is, the "effort" needed to just keep (freeze) the forward renderer as it is, while still developing ALM for PBR and beyond, is close to zero, as I already demonstrated with the Cool VL Viewer v1.28 when I kept the WL renderer alongside the EE one (because EEP got pushed to release status too soon and was so much slower, until, at last, the performance viewer fixed the broken EE renderer months later; WL was a life saver for slow hardware back then).
  7. Exactly my fear, and what is likely to happen if LL removes forward rendering, making entry-level laptops unusable (or miserable) at rendering SL. Hey, LL, I told you so! I really do hope @Vir Linden and the other Lindens involved in viewer development will read this!
  8. A "good rate" is one equal to or above your monitor's refresh rate. Mine is a 60 Hz monitor with VSync, and anything above 60 fps is good and smooth. However, you must also account for frame rate drops, which happen a lot when you move around: while rezzing new objects and decoding textures, the CPU load increases a lot, and the tasks linked to rezzing and texture decoding also take some time during each iteration of the viewer's render "frame" loop, so even though they are partly threaded, each frame still takes longer to render while rezzing is in progress. A quick worked example with illustrative numbers follows below.
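Here is a minimal sketch (in Lua, with purely illustrative numbers, not measurements) of why that extra per-frame work translates into a visible fps drop:

```lua
-- Purely illustrative numbers: 60 fps means a ~16.7 ms budget per frame.
local frame_time = 1 / 60        -- seconds per frame at a steady 60 fps
local rezzing_overhead = 0.008   -- assume rezzing/decoding adds ~8 ms of work per frame
local fps_while_rezzing = 1 / (frame_time + rezzing_overhead)
print(string.format("%.0f fps", fps_while_rezzing))  --> "41 fps": the steady 60 fps drops to about 40
```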
  9. I have a GTX 1070 Ti and it has strictly no issue with SL graphics, unless I switch shadows on (at which point the fps rate might fall below 60 in various scenarios, which I would find unacceptable). And I'm using my viewer (the Cool VL Viewer, of course) with the graphics settings maxed out, and with the draw distance set to 256 m (in mainland sims with neighbouring sims) or 512 m (on islands, or while sailing or flying). But I also have a good CPU (a 9700K locked at 5.0 GHz on all cores), and this is why my system works fine with SL: as long as you have a good enough GPU, the bottleneck of the mono-threaded renderer found in the viewer is actually at the CPU level! If you change your graphics card for something super-powerful (and an RTX 3060 falls in that category, for SL) without changing your CPU, then you will indeed see little to no difference in fps rates (though, in my case, I would likely get better rates with shadows on). A balanced system is the key: do not put an over-sized GPU in a system with an old CPU, and vice versa.
  10. I suppose you mean ALM (the advanced lighting model, AKA deferred rendering). Be aware that LL is currently planning to remove the forward rendering mode (i.e. the "ALM off" mode) in the future PBR viewer (whose alpha release already has it removed), and although I am fighting this bad move, its removal would leave such a computer totally helpless at rendering even the simplest scenes at acceptable frame rates (above 15 fps), even with the shortest draw distances, the lowest graphics settings and just a few avatars on screen. For a new purchase (and should I fail to convince LL to keep the forward rendering mode in their future viewers), you should really avoid any computer with integrated graphics (an Intel iGPU, such as the Iris in this model, or even an AMD APU), and buy one with a discrete NVIDIA GPU instead (even a mobile GTX 1060 will do); also avoid AMD GPUs, because their OpenGL performance is sub-par.
  11. This protocol is just one for encapsulating communications data (it is a data exchange format/protocol): it does not deal at all with the hardware infrastructure and architecture needed for these communications, nor with the network layer involved in transmitting the data and its own communication protocols. As such, it has strictly nothing to do with scalability or performance, and it won't solve SL's issues with IM chat and groups.
  12. Zen 4 is indeed too expensive for now; I'm waiting until next year to upgrade my system, likely with a Zen 4, but not until AMD gets more reasonable with their prices, which they might soon be forced to do if they don't want Intel to grab all the desktop PC market share with their Raptor Lake CPUs... Plus, Zen 4 means DDR5, so that is expensive too, at least for now... Again, next year might bring lower prices for DDR5. Finally, the Zen 4 motherboards (with the X-series chipsets) are for now also too expensive (we will see what the B-series chipset boards sell for when they hit the market)... That's why I mentioned the 5800X (Zen 3, DDR4, reasonably priced motherboards), which also happens to run neck and neck with the 12400F (or even slightly better, when overclocked) in single-core performance, while offering more cores and threads (good for rezzing faster in SL) and more cache (good for holding more viewer code in cache and running it faster as a result)...
  13. Ah, forum post signatures?... I have these turned off (no cruft on my screen!). In any case, I won't trust any specs given that way: what if you are running the viewer on another computer? Or forgot to update the specs in your signature after upgrading one of your computer's parts? The convention is to show the hardware specs via the About floater in the screen shots you post, or to list the current specs of the PC you are using in the text of your post...
  14. For OpenGL applications/games, and therefore for the SL viewers, NVIDIA beats AMD hands down in performance (at equivalent card prices), and its OpenGL drivers are also 100% OpenGL compliant (meaning no need for "workarounds" in the code, unlike with AMD and Intel drivers). The RTX 3060 will work like a charm for SL, and with such a powerful card the bottleneck will be on the CPU side anyway (for a given CPU generation, the more GHz on the CPU core running the viewer's main thread, the more fps you will get; the improvement is roughly proportional to the frequency increase, as the sketch below illustrates). So I'd put part of the $140 you considered adding for a 3060 Ti into the CPU instead... And that CPU could just as well be an AMD one... A Ryzen 5800X would probably perform better than the 12400F (equivalent IPC, better turbo frequency, more cores to run more threads while rezzing stuff and decoding images in SL).
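A minimal sketch of that proportionality claim, in Lua with purely hypothetical clock speeds and frame rates (not benchmark figures):

```lua
-- Purely hypothetical figures: when the viewer's main thread is the bottleneck,
-- the fps rate scales roughly with the clock of the core running it.
local old_clock, new_clock = 4.4, 5.0   -- GHz
local old_fps = 50
local new_fps = old_fps * new_clock / old_clock
print(string.format("%.0f fps (about +%.0f%%)", new_fps, (new_clock / old_clock - 1) * 100))
--> "57 fps (about +14%)"
```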
  15. You know, using the debug settings floater to enter huge numbers serves no purpose: the number of particles is limited (clamped) in the viewer code to 8192, because they must all fit into a single vertex buffer, whose size is also limited... Most other render settings are clamped as well (if they were not, you'd get magnificent crashes): use the graphics settings sliders and spinners to set the parameters (a tiny illustration of that clamping follows below). And a 64 m draw distance now? And still no info about your hardware...
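Just to illustrate the principle (this is not the viewer's actual C++ code, merely a Lua sketch of the kind of clamping it applies):

```lua
-- Illustrative only: the real clamp lives in the viewer's C++ renderer.
local MAX_PARTICLES = 8192   -- hard limit, so all particles fit in one vertex buffer

local function clamp_particle_count(requested)
  -- Whatever huge value gets typed into the debug settings floater,
  -- the renderer never uses more than MAX_PARTICLES.
  return math.min(requested, MAX_PARTICLES)
end

print(clamp_particle_count(1000000))  --> 8192
```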
  16. With what hardware?... I have always had them set at 8192... I never experienced particle-related crashes either, even during griefing attacks. If your PC cannot render 4096 particles, then you have a problem... And particles are important, for chains, ropes, etc. (with this clue you can tell what places I frequent 😄)... OK, so here is what I get at the exact same place as yours, but with all graphics settings pushed to the max, including Classic Clouds on (something other viewers do not have), meaning even more (giant) particles, a mesh boost factor of x3 (again a Cool VL Viewer exclusive), no impostors, no limit on avatar complexity (and one other avatar in close FOV), all shadows, etc., and a 512 m draw distance: 46 fps, i.e. over twice your figure; but then, we do not know what your hardware is (mine is listed in the About box you can see in the screen shots)... And with a 256 m draw distance: 118 fps... But as Rowan remarked, since there are almost no objects to see there, this is largely a non-significant benchmark!
  17. Glad to see linkset data back on the RC sims this week; however, the release notes page for this release is missing:
  18. A well known race condition bug (between the receipt of the interest list sent by the server and the viewer pipeline's spatial group refreshes/rebuilds after an agent sim change)... The Cool VL Viewer has an automatic "objects visibility refresh" feature that gets triggered a few seconds after login, after each TP and after each sim border crossing to work around that bug; you may also trigger it manually with ALT SHIFT R (a sort of "objects rebake"). Another known bug (or rather an "odd feature", most likely put into place to fight griefing objects) is due to the maximum node size (in number of vertices) of a spatial group: when it overshoots the maximum configured value, the whole group is skipped by the rendering pipeline; this is seen in mesh-heavy places, when too many highly detailed objects are grouped together. You may increase the "RenderMaxNodeSize" value to fit them into the node and get them to render: try 65536 or even 131072 KB (a sketch of that skip logic follows below). Of course, the Cool VL Viewer also has a workaround for this (via a "mesh boost factor" setting).
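A minimal sketch of the skip behaviour described above (illustrative only, in Lua rather than the viewer's actual C++, and leaving aside the exact unit the limit is counted in):

```lua
-- Illustrative only: the real test is done in the viewer's C++ render pipeline,
-- against the "RenderMaxNodeSize" debug setting.
local render_max_node_size = 65536

local function should_render_group(group_size)
  -- When a spatial group overshoots the limit, the whole group is skipped,
  -- which is why tightly packed, highly detailed meshes can fail to render.
  return group_size <= render_max_node_size
end

print(should_render_group(80000))  --> false: nothing in that group gets drawn
print(should_render_group(40000))  --> true
```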
  19. You could actually script that, using Lua, with the Cool VL Viewer... 😛 For example, I have a small bit of Lua code that I can activate (via a Lua side bar button in the UI) to automatically adjust the draw distance (between 256 and 512 m, in 32 m steps) so as to keep the fps rate between 60 and 120 fps: I use it while sailing/boating to get the optimal draw distance and see as far as possible along coasts or when sailing across a channel (with few objects in the FOV), without getting a bad fps rate when approaching the coast. A rough sketch of the idea follows below.
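This is not my actual script, just a minimal sketch of the idea; the function names (get_fps, get_draw_distance, set_draw_distance, register_timer) are hypothetical placeholders, so check the Cool VL Viewer's Lua documentation for the real API calls:

```lua
-- Hypothetical API names: adapt them to the actual Cool VL Viewer Lua functions.
local MIN_DD, MAX_DD, STEP = 256, 512, 32   -- draw distance bounds and step, in metres
local LOW_FPS, HIGH_FPS = 60, 120           -- target frame rate window

local function adjust_draw_distance()
  local fps = get_fps()                     -- hypothetical: current frame rate
  local dd = get_draw_distance()            -- hypothetical: current draw distance
  if fps < LOW_FPS and dd > MIN_DD then
    set_draw_distance(dd - STEP)            -- too slow: pull the draw distance in
  elseif fps > HIGH_FPS and dd < MAX_DD then
    set_draw_distance(dd + STEP)            -- plenty of headroom: see farther
  end
end

-- Hypothetical: run the check every few seconds from a timer or idle callback.
register_timer(5.0, adjust_draw_distance)
```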
  20. OpenGL and Vulkan do not do things exactly alike... I'm pretty sure it would be possible to achieve better AA quality in deferred rendering with OpenGL, but the current shaders (be it FXAA or SMAA) just cannot do it...
  21. Pictures and videos are better than mere words, since most people (and I am among them) will only believe what they can see. Let's first deal with the anti-aliasing problems in deferred rendering mode (ALM), which are the main reason why I (and likely many other SLers) am not using ALM most of the time (while my PC is plenty powerful enough). Here are three pictures, taken in Port Babbage, where you can see the masts and rigging of pretty ships disfigured by ALM: ALM + FXAA (LL's AA shader), ALM + SMAA (Alchemy Next's backported shader), and forward rendering (native 4x SSAA). As you can see, only forward rendering is capable of properly rendering the thin edges that make up the masts' rigging... FXAA is also so blurry that it is outright unacceptable (it blurs the details in the textures themselves), while SMAA only blurs the edges (which, sadly, also blurs a bit the edges inside textures with transparency, such as tree leaves in a foliage texture, for example). (*) Now, here is a video that also demonstrates another extremely annoying artifact induced by anti-aliasing (both with FXAA and SMAA) in deferred rendering, which appears when you move around; I call it "edges wobbling", and it is outright ugly, as you will see by watching this video (note: since it is hosted on my ISP's site and I have to respect a quota, it will likely be removed in a few weeks). As long as LL does not come up with a solution for these annoyances (TAA, perhaps?), I am afraid ALM will stay off 80% of the time for me (I only use it where it makes sense, i.e. where there actually are some interesting materials to see). ----------- (*) Users with a high DPI screen (120 dpi and over) are of course less likely to see/notice the blur, but with a 92 dpi screen (such as a standard Full HD 24" monitor), the blur is obvious (and unacceptable).
  22. It is no more "outdated" than turning off shadows!!! This is just a graphics setting, damn it! Do you realize how totally illogical and nonsensical (and egoistic) your reasoning is?... Basically, you are saying: "hey, I'm creating super nice shiny stuff with materials and all, so I demand that everyone runs with ALM on because I want them to see all my shinies, always, even 150 m away, even if they can't actually see them because they are outside the building my shinies are in, and I do not care if their Second Life becomes miserable as a result, because they would run at 8 fps instead of 25 (1)!" This is exactly as if you were a mesh tree creator who had worked very hard to obtain super-nice shadows and were demanding that everyone turn shadows on to be sure they will see your pretty shadows, even if they actually never use SL in outdoor settings (where your trees are), and even if they run at 3 fps as a result!!! Besides, and mind you, I rarely ever use ALM myself, and not because it would run at lower fps rates (it actually performs slightly better in some scenes), but simply because I rarely use SL in places where there are materials around, or where they would actually matter or make any visible difference, while for those use cases (outdoors on the mainland, for example) I can really benefit from better AA (2) and the lower texture memory consumption that lets me increase the draw distance, by using the forward rendering mode. I turn ALM on only in places where it does matter (a club, for example, where a lot of avatars are wearing stuff with materials, or some scenic sims making heavy use of materials in their buildings). It is just a matter for me of typing CTRL ALT D to toggle it on or off in my viewer, and I can immediately see whether it is desirable or not. It's my choice for my various use cases, and I deny you or anyone else the right to dictate how I should set up my graphics in SL! (1) And yes, that's exactly what you get with an iGPU or an old card. (2) ALM's FXAA is just plain sh*tty, and SMAA, which AFAIK is only available in Alchemy Next and the Cool VL Viewer, is merely acceptable for textures (it does not blur them, unlike FXAA), but is no better than FXAA at anti-aliasing objects in the distance and/or while you move around. You have to understand that some people just cannot afford to upgrade a 5-year-old computer (no need to go 10 years back to find GPUs that run ALM like sh*t, not to mention iGPUs), or even to buy anything better than an entry-level computer without a discrete GPU (which, too, will run ALM like sh*t, even with a "modern" iGPU). You also have to understand that no, ALM does not always look nicer than forward rendering (AA is sh*t in ALM, period), and that some use cases (which are far from being "niche" ones) benefit from turning ALM off. This is true even with a modern gaming computer. Should LL manage to raise ALM quality and add, for example, a toggle to turn off materials in the distance to save texture memory (since they don't matter when seen from far away), then it would allow me (and many others with good computers) to indeed keep ALM on all the time; it is simply not the case for now! It's tough for users as well if they can't get decent performance and/or good rendering quality with ALM on. You are not in control of what people can afford as a computer to run SL... And I'm pretty sure that you'd rather see them buy your stuff instead of using that money to upgrade their PC... 😜 You must understand that the "choice" they make by turning ALM off is not a true choice, but the result of a compromise; if the drawbacks of enabling ALM are just unacceptable (be it because of the PC's power or because of a particular use case), I do not see why we should impose it on them.
  23. This is selfish because, by removing the option to turn ALM off in order to be able to run the viewer better (or at all) for a particular use case (or for everything, depending on your PC), you simply exclude people from SL or degrade their experience in their particular use cases, and this just because you (and some others) think that everyone shall see the same thing on their screen: this is visual dictatorship! I also do not see how leaving a simple choice (just like you have the choice to turn shadows on or off in ALM) could be at all harmful or hold back SL's development!!! You know me better than that, and you know perfectly well that I am an eager early adopter of every new shiny feature (just look at the Cool VL Viewer and how fast I backport every new feature to it: no other TPV is updated faster to integrate new stuff; the latest was Puppetry, which I released only a week after LL got an RC viewer out!). However, I am also totally unforgiving towards regressions (for example, the anti-aliasing quality in ALM, even with an SMAA shader to replace LL's FXAA, is just too poor for my taste: it looks plain ugly when compared to genuine 4x AA in forward rendering). I am also very worried that raising the bar in terms of computing power (*) will just shrink the SL user base. I do not demand that SL stop being developed (much to the contrary), but I demand freedom for everyone to use SL as they see fit, with the PC they can afford to buy! As it is, removing the forward renderer is an impediment to SL adoption and user base growth. --------- (*) And at an especially bad time, with computer parts having reached astronomical prices during COVID and not even returning to their "normal" prices now, plus the inflation and the energy crisis, which will deter people from buying non-essential goods such as a new computer or video card to play in SL!
  24. Failed, apparently... Still no trace of LSD on the main grid (I just tested the RC sim sandboxes too, on Agni, just to be sure)...