Henri Beauchamp

Resident
  • Posts

    1,295
  • Joined

Everything posted by Henri Beauchamp

  1. Use a viewer that properly loads textures and has a texture memory setting (found in the Graphics settings of the Preferences floater) that can go above the 512MB maximum offered by LL's viewer... You do not need 30 minutes to load everything (2 minutes should be more than enough). The Cool VL Viewer is a super-fast rezzer (with texture rezzing boost features as well: see the Advanced -> Rendering -> Textures sub-menu). Be careful however: increasing the texture memory too much may also cause large hiccups and slowdowns (and sometimes seconds-long freezes !). I usually use 2048MB to 3072MB maximum for texture memory. Depending on how texture-heavy the sim is, you might have to adjust the draw distance so that all textures can load in the configured memory while keeping the texture discard BIAS at 0.00 in the texture console (CTRL SHIFT 3 to open it, normally, or find the entry for that console in the advanced menus, depending on the viewer); if the BIAS increases above 0.00, some textures will be downgraded to a lower Level Of Detail (LOD) and appear blurry (that's why LL's viewer cannot cope with only 512MB of texture memory).
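To see why a non-zero discard BIAS makes textures blurry while saving so much memory, here is a small illustrative sketch (my own numbers and function names, not viewer code), assuming the standard mipmap behaviour where each discard level halves both texture dimensions:

```python
# Illustrative sketch (not actual viewer code): how the texture discard
# BIAS saves memory. Each discard level halves both texture dimensions,
# so each level costs roughly 1/4 of the previous one.

def texture_bytes(width, height, discard_level, bytes_per_pixel=4):
    """Approximate VRAM used by one texture at a given discard level."""
    scale = 2 ** discard_level          # each level halves each dimension
    return (width // scale) * (height // scale) * bytes_per_pixel

full = texture_bytes(1024, 1024, 0)     # 4 MB at full resolution
blurry = texture_bytes(1024, 1024, 2)   # two discard levels: 16x smaller
print(full // blurry)                   # 16
```

This is why a BIAS above 0.00 frees memory so aggressively: every downgraded level quarters the texture's footprint, at the cost of visible blur.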
  2. For Linux, you may use the last (working) beta version (515.43.04), or any of the legacy versions (for older cards): 470.129.06, 390.151, etc... The list is available here.
  3. Stop it, please... You are being thick. I do not tell you how your viewer works, so please don't tell me how mine has been working for the past 15 years !... NO, my viewer DOES NOT list ANY instant message in the chat history floater. It *may* list them in the chat console in the "IM: First Last: Text" form, but this is just a configuration option ("Preferences" -> "IM & Logs" -> "Include IM in chat console", and by the way this option did exist in v1.1x viewers as well).
  4. Typically a race condition between viewer spatial partition updates and server "interest list" message sending... This may also be worked around by right-clicking in the empty place where you know some object should be showing: it instantly causes the object to "pop" into existence. This race condition bug has existed ever since the "interest list" got implemented by LL. Some viewers (the Cool VL Viewer is one of those, of course) allow you to "rebake" or "refresh visibility" of objects: ALT SHIFT R shortcut for the Cool VL Viewer, which also auto-triggers this "object rebake" after login, far-TPs and region border crossings (since these are also victims of this race condition). However, I noticed that LL's latest "performance viewer" was more prone to NOT rezzing some objects until you got your camera closer to them (I noticed this when comparing FPS performance with my viewer: the texture consoles showed that only half the number of textures were loaded by LL's viewer compared with mine, and looking closer, I indeed found that some objects simply were not rezzing at all until I zoomed in on them)...
  5. Heat should make no difference whatsoever with regard to the number of rendered objects: it is the CPU which selects what object data is sent to the GPU (even if the latter is also used to "cull" objects hidden behind others, but even this cull result is processed by the CPU side of the code before the final draw calls are sent to the GPU). GPU overheating would cause bugs, either in the form of graphics "artifacts" (bogus textures on objects, distorted objects, parasites/noise on the screen) or outright crashes. Note however that all modern GPUs will simply lower their frequency automatically when reaching temperatures close to their "absolute maximum ratings"; artifacts seen nowadays are usually the result of excessive over-clocking of the GPU or VRAM, not overheating. If your GPU overheats (or rather heats up too much for your taste), then consider improving the air exhaust from your computer (for desktop computers), lowering the operating frequency of your GPU (for laptops), or using the frame-limiting features of your viewer (the simplest and most performance-hurting method being to enable V-sync: it sadly slows everything else down, and not just the frame rate); I recently added a frame-rate-limiting feature to the Cool VL Viewer because it became so fast that it made my GPU fans spin noisily for no purpose (rendering at 300+ fps is not really useful). This feature is coded smartly enough to avoid hurting network/rezzing performances (the "free time" being used to rez textures and pump the network instead of having the CPU just waiting and twiddling thumbs until the next V-sync pulse).
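The idea of a frame limiter that spends its spare time on useful work, rather than idling like V-sync does, can be sketched as follows (a minimal illustration of the principle with names of my own; the actual viewer code is C++ and more involved):

```python
# Hedged sketch of a "smart" frame limiter: instead of just sleeping
# until the next frame deadline, spend the leftover frame budget on
# useful background work (texture rezzing, network pumping, etc).

import time

def run_frames(target_fps, render, do_useful_work, n_frames):
    """Render n_frames at most at target_fps, filling spare time with work."""
    frame_budget = 1.0 / target_fps
    for _ in range(n_frames):
        start = time.monotonic()
        render()
        # Use the remaining frame budget for work instead of idling.
        while time.monotonic() - start < frame_budget:
            if not do_useful_work():      # returns False when nothing left
                time.sleep(0.001)         # only then yield the CPU
```

The key difference from V-sync is that the "wait" loop is productive: the renderer is capped, but decoding and networking keep running in the gap.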
  6. Once and for all: there is no such mix ! But better than words, here is a screen shot with a legend for the UI elements: As a genuine role player, I make a clear distinction between chat (for In Character exchanges between avatars) and IMs (Out Of Character discussions between role-players), and separating the two kinds of textual exchanges is essential. It should also be noted that the v1 UI is by far the one making the best use of screen real estate, with high information density and small floaters (unlike the super-cumbersome v2 UI floaters).
  7. The "not invented here" syndrome which has always struck LL... They hardly ever accept a contribution "as is", even one better coded and more performant than their own. 🙄 For the mini-map, I would cite the better (way faster) algorithm for drawing parcel boundaries, for example... Instead of reusing what has existed for years in some TPVs (IIRC, this was first seen in Catznip), i.e. a fast image overlay recalculated only every few frames, LL implemented them using a slow algorithm redrawing every single border segment with GL lines at each frame !... Just laaaaaaaame !
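The contrast between the two approaches boils down to caching: recompute the expensive overlay every few frames and reuse it, versus redoing all the per-segment work every frame. A minimal sketch of the cached-overlay idea (names and the refresh interval are mine, not Catznip's or LL's actual code):

```python
# Hedged sketch of the "fast image overlay" approach: the expensive
# parcel-border drawing runs only every REFRESH_EVERY frames, and the
# cached result is reused (blitted) on all the frames in between.

class MiniMap:
    REFRESH_EVERY = 8                    # recompute overlay every N frames

    def __init__(self, compute_overlay):
        self._compute = compute_overlay  # the expensive per-segment work
        self._cached = None
        self._frame = 0

    def draw(self):
        if self._cached is None or self._frame % self.REFRESH_EVERY == 0:
            self._cached = self._compute()
        self._frame += 1
        return self._cached              # cheap reuse on the other frames
```

With a per-frame GL-lines algorithm, the cost is paid 60+ times per second; here it is paid a few times per second at most, which is why the overlay approach is so much faster.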
  8. This is just a configuration choice: you can perfectly well keep IMs out of the chat floater and console (and yes, this is what I always did)... Note also that the Cool VL Viewer offers a great deal of configuration options for the chat and IM floaters (e.g. with or without a chat input line for the chat floater) and the chat console. Also, there are two kinds of "v1 chats": the v1.18.0 one, i.e. the "good one" (the one I have always been using for the Cool VL Viewer) with separate Friends and Groups floaters and no chat input line (v1 viewers have a separate chat input line, so you really do not need one in the chat floater), and the ruined v1.18.2 (first "voice viewer") "chatterbox", which is just as horrible as v2 and newer... v2 chat is a horrendous thing that I thoroughly hate !... To each their own taste (or lack thereof)... 😛
  9. Which prompted a change in how my viewer (the Cool VL Viewer, a v1 viewer that did not have the v2+ viewers' auto-creation of calling cards on login) handles calling cards. Quoting the release notes:
  10. If you recently updated to NVIDIA's driver 515.48.07 for Linux, be aware that it is totally borked and fails to render properly in ALM mode, with culling issues (disappearing or flickering objects & avatars, bad shadows and reflections, etc). The bug has already been reported on NVIDIA's forum by a couple of people. Until NVIDIA fixes it, revert to the previous driver (515.43.04, which is the former beta driver, is working just fine).
  11. Check with the task manager for background tasks that would eat up your CPU and/or GPU: right-click on the task bar, choose "Task manager" (or whatever it is called in your language), then "More details" in it (if not already in detailed view). You will get something that will look like this (in French, here, sorry): Sorting by clicking on the column title for the CPU or GPU will allow you to see the top consumers; here, there is just the SL viewer running and consuming lots of CPU and GPU, and this is what you ideally should see as well. You may also use the "Performance" tab in the task manager, which will tell you how much of your CPU, GPU, memory, etc is consumed in total (it will also allow you to check the actual frequency of your CPU, in case it would fail to go into turbo mode due to a bad configuration or overheating, for example)...
  12. Your PC specs are just fine and should allow viewers (even slow ones) to run at 50+ fps. Check that there is not another program using the CPU and/or GPU in the background (mining malware for example)...
  13. The commit for SL-17244 (a crash fix) won't explain any slow down (and I certainly did not notice any slow down in my viewer after backporting it)...
  14. A rather unusual request, which should go to the Cool VL Viewer support forum... (*) But you can do it yourself, by editing just one line in skins/[default|dark|silver]/colors_base.xml: <DefaultBackgroundColor value="62, 62, 62, 140" /> <!-- Background color for unfocused floaters --> Replace 140 with 255 and voilà ! ---------- (*) You will find many answers to questions already asked on that forum too... Including how to create your own custom skin (which may differ only by the default colours) or override the existing ones...
  15. Hey, I'm not THAT old ! 👴 And the v1 UI is by far the best anyway (unbeatable for productivity and screen real estate)... 😜 As for checking the performances, well, you should really give it another try now... But be careful, because speed can be addictive ! 😛
  16. I agree 100% with the last two sentences, however I strongly disagree with the idea of regulating metaverses like you would regulate public spaces in a town... This would ruin all the fantasy aspects of SL, for example (and would pretty much mean banning all "adult" activities in it) ! Privacy is privacy, period. What happens in adult sims shall not be "regulated" as long as it happens between consenting adults and follows the rules set by the sim owner/manager. As such, a privately run sim, or a private parcel in a mainland sim, should be considered the same as the sim/parcel owner's RL home: they invite whoever they want and do whatever they want in it, and what happens in their home (as long as it is legal) is certainly not the government's business !
  17. Verify that the V-sync setting is disabled in LL's viewer.
  18. Thanks ! This apparently mirrors Cinderblocks' git repo, which is also still available...
  19. Not sure what happened to the site, but for me, the last properly and fully working Radegast version under Linux/Mono is v2.18 (newer versions have serious issues with fonts and button labels, among other less visible things); Radegast v2.18 is still available from an old git repository (used by the original and alas deceased developer, Latif Khalifa). Sadly, this old version makes use of a login procedure that LL said will be removed "soon"... Not too sure how "soon" that will be, but at some point, v2.18 will become unable to connect to the grid. 😞
  20. @Ansariel This won't change a thing ! The CPU is still the bottleneck in the performance viewer and it is not the threading of the GL image creation (which only happens on rezzing, and in just one thread in LL's perf viewer, while I use several threads in my backport) that will change anything at all: modern GPUs (GTX 960 or better) are capable of absorbing at least twice what the fastest CPU can throw at them while running the SL renderer. To lift the bottleneck currently residing at the CPU level, we would need a truly multi-threaded renderer... @sodasullivan Your best bet is to get a CPU capable of the best single-core performance, since the rendering engine is mono-threaded. Note that there are also things you could try with your current CPU to improve the performances, such as over-clocking it (but Ryzen CPUs are not really good at this exercise), or using thread affinity (the Cool VL Viewer offers this possibility) to assign the best performing core to the viewer main thread. Yes, more cores is the way to go (my viewer can for example use a multi-threaded image decoder and, with the perf viewer improvement I backported, multi-threaded GL image creation, meaning the more cores, the faster you see textures rezzing, which is definitely important when riding a vehicle across the mainland, for example). But NOT at the cost of single-core performance, since this is what will determine your frame rates in the end. On my main PC I have a 9700K (8 cores, no SMT), which I overclocked to 5.0GHz on all cores (the CPU is "locked" at this frequency, i.e. only the C0 and C1 states are permitted), and a GTX1070Ti; this system is plenty powerful enough to run SL, even in the most stressful rendering conditions. Yes, in general, you should buy RAM in pairs (or quads) of sticks, and avoid mixing brands or models (i.e. all sticks should be the same model from the same brand), because discrepancies in timings and performances will cause the "training" algorithm of the motherboard to relax the timings to match the worst stick's performance... This said, it won't significantly change the actual performance of the viewer (the RAM speed only accounts for a few percent, at best, like 1-3%). Just make sure your system has enough DRAM: 16GB would be today's "must have" and 8GB barely suffices to run a SL viewer.
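Why extra cores help so little when the renderer is mono-threaded can be quantified with Amdahl's law. A short worked example (the 30% parallel fraction is an illustrative assumption of mine, not a measured figure for any viewer):

```python
# Illustrative Amdahl's-law estimate: when the render loop is
# mono-threaded, extra cores only speed up the parallel parts
# (image decoding, etc.), so single-core speed dominates frame rates.

def speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when the parallel fraction of the
    work is spread over `cores` cores and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If, say, only 30% of a frame's work can be threaded, even 16 cores
# yield under 1.4x overall:
print(round(speedup(0.3, 16), 2))   # 1.39
```

By contrast, a 20% faster single core speeds up the whole frame by 1.2x, which is exactly why single-core performance determines the frame rate in the end.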
  21. That's why I wrote "if you are into AAA games, then it is easier to run them under Windows". I never said you could not run them under Linux, but it requires more know-how, fiddling and sometimes outright "hacking" for some games... Plus, DX games will run slower under Linux+Wine (or its Proton/Lutris/whatever flavours) than under native Windows... So, yes, for a "gamer" (not a SLer), a dual-boot is usually the best solution.
  22. SL viewers (all of them) are *much* slower under macOS than under Linux or even Windows for equivalent hardware (or prices); the reason is that OpenGL is stuck at v2.1 under macOS and in very bad shape (with lots of glitches and workarounds to avoid them in the viewer code). If I were you, I would stay away from Macs... at least for SLing. As for the supposed issues between Linux and laptops, it's also largely a cliché, as long as you stay away from the newest hardware that may not yet be supported under Linux. If you choose among OS-less laptops, they are usually designed to be fully Linux-compatible (but just like for everything, and not just computing, do make sure before buying). There are also laptops sold with Linux pre-installed...
  23. V-sync is bad for everything. Period. Tearing won't happen at all, as long as you enable "triple buffering". Please read this (old but) excellent article about it. V-sync is a PITA as soon as your "unlimited frame rate" (i.e. the inverse of the actual time it takes for your PC to render a single frame) drops below 60 FPS (or the refresh rate of your monitor), the worst case scenario being when your render time is about 33% higher than a monitor frame time: the driver then "freezes" the program until the next V-sync pulse happens on the monitor. This causes frame rate "hiccups" that do not happen with triple buffering. In SL, this artificial slowing down also causes slower network transfer rates, object data, texture and mesh decoding, etc, because all these operations are executed in the same thread as the renderer: if you slow down the latter, you also slow down all the rest !
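The V-sync penalty described above can be made concrete with a little arithmetic, assuming a 60 Hz monitor and double buffering (the numbers are illustrative):

```python
# Worked example of the v-sync penalty: with double-buffered v-sync,
# a finished frame is held until the NEXT vertical refresh pulse, so
# the effective frame rate snaps down to an integer divisor of 60.

import math

def vsync_fps(render_ms, refresh_hz=60.0):
    """Effective frame rate under double-buffered v-sync."""
    pulse_ms = 1000.0 / refresh_hz                 # ~16.67 ms at 60 Hz
    pulses = math.ceil(render_ms / pulse_ms)       # pulses waited per frame
    return 1000.0 / (pulses * pulse_ms)

# A 22 ms frame (about 33% over the 16.67 ms budget) could run at ~45 fps
# unlimited, but v-sync snaps it down to 30 fps:
print(round(1000.0 / 22))        # 45  (unlimited)
print(round(vsync_fps(22)))      # 30  (v-sync, double buffering)
```

So a frame that misses the refresh deadline by even a little loses nearly a whole refresh interval to waiting, which is exactly the hiccup that triple buffering avoids.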
  24. A total cliché !... There is nothing that can be done under Windows which is not doable under Linux. True, if you are into AAA games, then it is easier to run them under Windows, but Linux will run *everything* else better and faster, and at no cost ! I will keep recommending it and do not need your permission to do so ! I have been running Linux for over 25 years, and there have never been as many reasons to use it in place of Windows as today: security (no viruses, no malware !), speed, hardware support (with Windows 11 many PCs suddenly became "obsolete" just because Micro$oft decided so), privacy, freedom... Take your pick !