Henri Beauchamp

Resident
  • Posts: 1,273
  • Joined

Everything posted by Henri Beauchamp

  1. Fixed in this commit for Maint-U viewer. The bug was related to the new capability-based offline message retrieval (it did not affect the old method based on UDP messaging)...
  2. Drat ! I missed an excellent occasion to stress-test my viewer... 😜 I suppose you meant 8GB... But any viewer will have issues fitting any ”heavy scene” on a PC with only 8GB of RAM (especially since at least 1GB of that RAM will be used by the OS and the services it runs). This said, provided you reduce your ”Texture memory” to 512MB (and switch off the buggy ”Dynamic” setting, for FS), and reduce your draw distance to 64m or so, you might be able to attend large venues with an 8GB PC. Also, under Windows, do make sure to allocate a large swap file (say, 16GB) of a fixed size: otherwise that Mickey Mouse OS will reallocate the swap file on the fly to fit the viewer's used memory, and should the viewer request an allocation while this is happening, it will get back NULL pointers ==> Crash ! I would also suggest testing the Cool VL Viewer in the same tight-memory conditions: I worked a lot on the texture fetching/prioritizing/decoding algorithms, and it will use much less texture memory than other viewers while avoiding thrashing and blur everywhere... But let's face it: for today's SLing, 16GB of RAM is a strict minimum (with 32GB highly recommended).
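The memory arithmetic in the post above can be sketched as follows. All the figures are illustrative assumptions (the 1GB OS reserve and 512MB texture budget come from the post; the 2.5GiB ”other viewer allocations” figure is a made-up placeholder, not a measurement):

```python
# Rough RAM budget for running a viewer on a small-memory PC.
# All figures are illustrative assumptions, not measured values.

def viewer_ram_headroom(total_ram_gib, os_reserve_gib=1.0,
                        texture_mem_mib=512, other_viewer_gib=2.5):
    """Return the RAM (in GiB) left over once the OS, the viewer's
    texture budget, and its other allocations are accounted for."""
    texture_gib = texture_mem_mib / 1024
    return total_ram_gib - os_reserve_gib - texture_gib - other_viewer_gib

# An 8 GiB PC with the reduced 512 MiB texture budget suggested above:
print(viewer_ram_headroom(8))                         # 4.0 GiB of slack, at best
# A 16 GiB PC with a more comfortable 2 GiB texture budget:
print(viewer_ram_headroom(16, texture_mem_mib=2048))  # 10.5 GiB
```

The point of the fixed-size swap file is precisely to cover the case where that headroom runs out: with a pre-allocated page file, an over-budget allocation gets paged out instead of failing.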
  3. Bad idea and totally useless for a medium-power CPU... You'd be better off replacing the air cooler with a better one, such as a Noctua NH-U12. Easier to deal with than liquid cooling (and no leakage risk either), much less noisy at idle (almost silent), and same performance as (or even better than) many entry-level AIO liquid coolers... Even today with my brand new Ryzen 7900X (a beast, but it runs damned hot), I am still using an air cooler (an NH-D15S with two 120mm fans, from Noctua too): the motherboard's ”AI” cooler evaluation algorithm ranks my air cooling system at the same level as a high-performance liquid cooling system !
  4. Or other ”condemned” features I won't name here, to avoid seeing them disappear while I still use them in my viewer... 😜
  5. What about trying your ISP's DNS server(s) instead of Google's ?... As an alternative, if you are worried about spying (ISP DNS as a result of national laws, Google DNS for commercial/profiling or spying (NSA) purposes) or censorship (ISP DNS, as a result of national law enforcement), then you could try dnscrypt. Be aware, however, that using a DNS server not located in your own country may cause the viewer to connect to CDN servers not located near you (this is not the issue here, since all SL sim servers involved in the connection process are hosted in the US on AWS, not on CDN servers), which may mean a slower rezzing experience.
  6. The physics engine is not even aware of what animation is played on the avatar (animations are only played on the viewer side): for this engine, the avatar is just a ”bounding box” (centered on the avatar's in-world position, which is itself offset by the playing animation on the viewer side), and that's it. The engine therefore cannot know where the avatar's limbs are, or which limbs are supposed to collide with the ground/floor/objects...
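A minimal sketch of what such an engine actually ”sees”: a hypothetical axis-aligned bounding-box overlap test (this is illustrative only, not the actual physics code used by SL simulators). Note that no limb position ever enters the test:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: all the physics engine knows of an avatar."""
    cx: float; cy: float; cz: float   # center (the in-world avatar position)
    hx: float; hy: float; hz: float   # half-extents along each axis

def overlaps(a: AABB, b: AABB) -> bool:
    """Two boxes collide iff they overlap on every axis. An animation
    that moves a hand outside the box is invisible to this test."""
    return (abs(a.cx - b.cx) <= a.hx + b.hx and
            abs(a.cy - b.cy) <= a.hy + b.hy and
            abs(a.cz - b.cz) <= a.hz + b.hz)

avatar = AABB(0.0, 0.0, 1.0, 0.25, 0.25, 1.0)   # ~2 m tall box at the origin
floor  = AABB(0.0, 0.0, -0.5, 50.0, 50.0, 0.5)  # a large, thin slab
print(overlaps(avatar, floor))  # True: the box bottom touches the floor top
```

Whatever pose the animation puts the limbs in, only the box center (offset viewer-side by the animation) moves; the collision result is unchanged.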
  7. Not in the Cool VL Viewer, no... I removed useless Hypos well over a decade ago from its code...
  8. It's barely Poser 3 quality, yes... without the IK that Poser can do ! Though, for the latter, there might be some hope with the work done around puppetry (IK is actually the feature I find the most promising/desirable in the puppetry project). Another big issue with SL's animations is that they cannot auto-adapt to the avatar's ”real” size (counting meshes and other attachments). Well, there are a couple of workarounds (RLV's @adjustheight or playing ”leveling” animations), but they are crude, to say the least.
  9. The problem is not with the new CPU: the i7-4770K is quite old (2013) and cannot match a Ryzen 4500 (2022), whatever program it runs. See this comparison. Define ”lag”, please... As for ”halting”, do you mean while rendering in SL, or while running non-3D programs ? The usual suspects for graphics hiccups are memory-related: you only have 16GB of RAM, and the viewer can become quite memory-hungry when configured to use a lot of texture memory (each texture in VRAM gets two copies in RAM: one ”raw”, and one decoded). With Firestorm's current release, disabling the ”dynamic” texture memory setting may help lower the memory usage, like so: With Windows, you will also want to configure the swap file for a fixed size, to avoid seeing it resized while the viewer runs (which could cause slowdowns or even crashes). See this post about how to do it. There is also a possibility of ”generic lag” (non-SL related) with power management under Windows: make sure to enable the ”Performance” mode, to avoid seeing your CPU slowed down to save power and then taking time to switch back to turbo when you need it. Also, do disable the ”game mode” of Windoze 10: it sucks rocks and will actually negatively impact performance. Finally, in NVIDIA's driver settings, do enable threaded rendering (+30 to +50% fps in SL) and disable VSync if it is enabled (use triple buffering instead to avoid tearing), like so:
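The ”two copies in RAM per texture” point above can be turned into a rough per-texture estimate. The sizes below are illustrative assumptions (a ~0.5 byte/texel compressed ”raw” copy and an uncompressed 4 byte/texel RGBA decoded copy), not figures measured from any viewer:

```python
def texture_ram_cost_mib(width, height, raw_bytes_per_texel=0.5):
    """Rough RAM cost, in MiB, for one texture resident in VRAM: one
    compressed 'raw' copy (JPEG2000 data, assumed ~0.5 byte/texel)
    plus one decoded RGBA copy (4 bytes/texel). Illustrative only."""
    texels = width * height
    raw_copy = texels * raw_bytes_per_texel
    decoded_copy = texels * 4
    return (raw_copy + decoded_copy) / (1024 * 1024)

# A single 1024x1024 texture costs ~4.5 MiB of RAM on top of its VRAM copy:
print(texture_ram_cost_mib(1024, 1024))  # 4.5
# A thousand such textures resident at once -> ~4.4 GiB of RAM
# just for the texture copies, which is why the texture memory
# setting matters so much on a 16GB machine.
```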
  10. In modern viewers ?... It does nothing at all. (*) This is a pre-Windlight, totally deprecated setting (I found traces of it back in my archived v1.19.0.5 SL viewer sources, that is, the last non-Windlight SL viewer version). It was used to override the time of day and Sun angle transmitted by the simulator to the viewer, but this code was removed from the new Extended Environment code (the debug setting and menu code are therefore inoperative remnants); I removed that setting entirely in the Cool VL Viewer when I cleaned up the Windlight (WL) code while backporting the Extended Environment (EE) one. Yes, you can change the Sun direction and elevation using the Local environment editor (and not that deprecated Sun override setting); however, with EE viewers, you cannot change the ”time of day” (which varies the other settings beyond just the Sun) any more while preserving the parcel environment settings (you need to apply a fixed setting instead). With the Cool VL Viewer I therefore preserved the WL settings editor, and using it, you can still apply the WL settings you want, adjust the time of day, and then hit ”Preview frame” so that they get translated into their EE counterparts and used as a local fixed setting... (*) For Lindens and TPV devs: have a look at process_time_synch() in llviewermessage.cpp, which used to deal with that setting: you will also see that the remnants in that function do nothing at all (only _PREHASH_UsecSinceStart is extracted, but its value is sent to the LLWorld::setSpaceTimeUSec() method, setting mSpaceTimeUSec, which is never used anywhere any more in the viewer)... process_time_synch() and the associated ”SimulatorViewerTimeMessage” UDP message could be removed entirely for SL (they are still useful for OpenSim, however, on the condition that the OpenSim-compatible viewer kept the WL code making use of the values in that UDP message).
  11. You are mistaken. Currently, with its mono-threaded OpenGL renderer, what counts most for frame rates, once you have a powerful enough GPU (GTX 1070 or better), is the mono-thread performance of the CPU. SL is ”light” on the GPU compared with AAA games, but it is very heavy on the CPU (or more exactly, on the only CPU core it uses to render, whereas modern AAA games use several cores for the same task) and, for example, increasing the frequency of that core translates into an almost proportional increase of your frame rates. Things will change, eventually, when a multi-threaded Vulkan renderer is implemented... This also means that non-3D variants of Ryzen CPUs will actually perform better in SL, since they have the same IPC as, and a higher core frequency than, their 3D counterparts, while still being able to keep the time-critical code of SL viewers in their caches, which is small in size: the full executable program size is currently between 26MB (Windows build of the Cool VL Viewer, 49MB for the Linux build) and 87MB (Linux build of Firestorm, 51MB for their Windows build), and only a small part of that size actually represents the renderer and the other code called every frame (which is the part that will be kept in the CPU caches).
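The ”almost proportional” claim above amounts to a simple CPU-bound frame-time model. The cycles-per-frame figure below is a made-up placeholder, purely to show the scaling, not a measurement of any viewer:

```python
def fps_estimate(core_ghz, cycles_per_frame=8e7):
    """CPU-bound frame-rate model: with a mono-threaded renderer the
    GPU waits on one CPU core, so fps scales ~linearly with that
    core's clock. cycles_per_frame is an arbitrary workload figure."""
    return core_ghz * 1e9 / cycles_per_frame

print(fps_estimate(4.0))  # 50.0 fps under this toy model
print(fps_estimate(5.0))  # 62.5 fps: +25% clock -> +25% fps
```

In a GPU-bound AAA game this linearity breaks down; in current SL viewers it holds surprisingly well, which is why mono-thread CPU performance dominates.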
  12. Poor people in rich countries don't buy Apple either. And even when you are not poor (I am not, even if I am definitely not rich either, by far) but do not have much disposable income (just like me), you do not spend your money on overpriced stuff; on the contrary, you seek the best performance*durability/price ratio. Note that for ”durability”, in the PC world, parts replacement plays a big role, so you want to be able to upgrade (or repair) just one part of your PC in the future, and not be forced to buy a whole new PC every 5 years.
  13. Avoid APUs/iGPUs at all costs ! They are too slow, and they do not have VRAM (the RAM is then used both for graphics and programs); you would have a miserable experience in SL, especially with future viewers: forward rendering has (sadly) been removed from the PBR viewer, and the latter can use up to 5 textures per object face (4 PBR textures), instead of just one diffuse texture per face in forward rendering, meaning it will eat up VRAM (and RAM) quite fast, and there will be no check box in the Preferences to help you reduce the memory consumption... So, buy a PC with a discrete GPU, and preferably an NVIDIA one: their graphics drivers are way faster (+50% and more) and way more stable than AMD's (the current Windows drivers for AMD cards are very, very, very crash-prone). Anything equal to or above a GTX 1070 (with 8GB VRAM) would do quite fine, so you have ample choice in graphics card cost. As for the CPU, six cores are a strict minimum: like Monty said, viewers have pushed more and more things to threads (not to mention that modern graphics drivers can also use threads of their own), and you can now easily saturate an 8-core CPU while rezzing in SL. Plus, the future of SL will be Vulkan, and a Vulkan renderer would be able to use more than just one thread as well... So, 8 cores + SMT (i.e. 16 ”threads”) is the way to go. AMD or Intel, it does not really matter. Do not forget the RAM: a viewer can easily gobble up over 16GB when pushed to its limits; anything below 16GB of RAM is therefore not enough, and 32GB is the right amount of RAM for the future (PBR, again).
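The 5-textures-per-face vs 1-texture-per-face point above translates into a quick VRAM estimate. The per-face texture counts come from the post; the 1024x1024 size, uncompressed RGBA format and mipmap overhead are my assumptions (real viewers may use compressed texture formats):

```python
def face_vram_mib(textures_per_face, width=1024, height=1024,
                  bytes_per_texel=4, mipmaps=True):
    """Rough VRAM cost of one object face. 'textures_per_face' is 1
    for the old forward renderer (diffuse only) and up to 5 for PBR
    materials. Uncompressed RGBA assumed; mipmaps add ~1/3."""
    base = width * height * bytes_per_texel
    if mipmaps:
        base = base * 4 // 3
    return textures_per_face * base / (1024 * 1024)

forward = face_vram_mib(1)   # forward renderer: 1 diffuse texture
pbr     = face_vram_mib(5)   # PBR: up to 5 textures per face
print(f"forward: {forward:.1f} MiB, PBR: {pbr:.1f} MiB per face")
# -> forward: 5.3 MiB, PBR: 26.7 MiB per face (under these assumptions)
```

Multiply that worst-case 5x factor by the thousands of faces in a busy scene, and the ”no VRAM, shared RAM” situation of an APU/iGPU becomes untenable.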
  14. Then this is abnormal (and likely a viewer bug)... However, do make sure the spammer did not just register a new avatar under a slightly different name (which is a common occurrence among spammers/griefers accounts).
  15. If you get that message on login, it might be the result of a race condition between the queued offline messages retrieval and the mute list retrieval/initialization; the mute/block feature is entirely a viewer-side process (only the mute list is stored server-side on log-off, and sent to the viewer at its request, after login), and all viewers are not created equal in this matter...
  16. It does not work, it crawls ! Come on !... For Linux you got native Linux TPVs to run, and they run faster than their Windows and macOS builds, and faster than LL's viewer.
  17. Courtesy starts with accepting others' opinions and hearing their arguments, instead of getting them to shut up and keep quiet (which is called censorship). 'nuff said, and end of the discussion for me since arguing with you seems totally pointless anyway.
  18. 🤪 I will voice my opinion without asking your permission... ever !
  19. In Apple ?... Really ?... Are you a worm ? 😜 If you worked for Apple, then it explains quite well your fanaticism... I am sorry, but Apple is no better (and actually worse) than Micro$oft about privacy. Geez, I had a hard time blocking every hidden request from my Hackintosh VM (which I use exclusively to check that my viewer still compiles for Macs) to Apple ”services” (spying tools, ad pushes and other annoyances and privacy breaches); it was harder than for Windoze 11 (which sucks rocks as well about privacy, but is much easier to ”convince” to just stop phoning home). Apple also makes closed hardware, with deliberate measures to prevent changing parts for non-Apple ones (forget about adding RAM to an M1/M2 Mac, for example: it's simply impossible 😲 ), or to opt to buy ”compatible” parts instead of Apple's overpriced ones. Then there is macOS itself, and the fact it is compatible with nothing outside of Apple stuff (and becomes even less compatible as time passes), with proprietary APIs that only impair the portability of programs written for other OSes (e.g. OpenGL, and even Vulkan). Special mention as well for the total lack of documentation about Apple hardware, forcing Open Source developers to spend their time reverse-engineering everything. Finally, Apple is for the rich (overpriced, fragile hardware) and the snobs who prefer to buy the brand rather than more performant hardware for the same amount of money. Oh, and I almost forgot... Apple is the super champion of tax evasion !... When they pay the taxes they owe to my country (and to the EU in general), then I will, perhaps, consider investing some of my free time (like in free beer) in supporting Apple stuff... Till then, they will not get a single cent from me !
  20. I did not say this, but what I do say, however, is that I do not personally care the least about macOS...
  21. I don't... Several macOS users (see the credits in the viewer's About floater) have provided and still provide macOS builds of my viewer, and they also contributed some necessary changes to keep it compiling with successive Xcode versions. This said, macOS compatibility is not a valid constraint/criterion for me (if at some point my viewer stops working under macOS, well... so be it !). But frankly, running a viewer (any TPV and SL viewer alike) under macOS is simply a waste of time: viewers are waaaaay faster when running under Linux, or even (horror !) under Windoze...
  22. Indeed: a macOS user certainly does not want freedom, safety and privacy... 🤣
  23. Of course it is... Relax, man, I was just joking (see the smiley)... This said, once Linux is properly supported on M* Macs, the option of a dual boot will be open... Hopefully, by then, LL will have come up with a Vulkan renderer and macOS will be able to run it as well...
  24. Yes, there is: follow the link I gave...
  25. Soon time to install Linux for M1/M2 on your Apple to keep using OpenGL ?... 🤣