Henri Beauchamp

Everything posted by Henri Beauchamp

  1. This is just a disconnection, and you simply have no other choice than quitting the viewer (the program itself should never ever crash in this case: if it does for you, please report it with a crash dump and logs, since even such crashes are unacceptable to me). Another good reason for choosing NVIDIA: their drivers are rock-solid and never crash for me, even if some versions have had rendering bugs (reporting them on their forum gets them promptly fixed). As for viewer-side bugs, these are promptly fixed as soon as you report them to me. In fact, with the exception of when I am developing and testing new code, I never crash with my viewer; or more exactly, on the rare occasions when I do crash unexpectedly, I fix the corresponding bug immediately so that it never crashes there again. 😛 This is in stark contrast to what happened with 32-bit viewers in the past, when you always ended up crashing at some point, provided you stayed connected long enough, due to the limited virtual address space and its fragmentation...
  2. For SL, you do not need such a powerful video card, and 8GB of VRAM would be enough for modern SLing, even if more is always better (especially when visiting texture-heavy places with large draw distances). The choice of an NVIDIA card is however indeed the best (way faster drivers than the competition for OpenGL, and even Vulkan). I am currently using an RTX 3070, and it is under-used (with this card, I would need a faster CPU than my 9700K @ 5.0GHz). Do care about the CPU single-core performance, since it currently is the bottleneck for viewer frame rates. Also, I would recommend at least 32GB of RAM (when pushed to its limits, with 256 or 512m draw distances and in texture/mesh heavy places, the viewer alone can easily gobble up to 24GB of RAM), with 64GB as a very comfortable option (this is what I currently have).
  3. Yes, the profile picture ratio used to be (approximately) 4:3 in v1 viewers and in TPVs that re-implemented (or kept) the floater-based profiles when LL made the mistake of migrating to web profiles, where the profile picture was displayed with a 1:1 ratio... And now that LL (finally !) understood their mistake and reverted to floater-based profiles in their own viewer, they kept the web profile 1:1 ratio... Annoying, isn't it ?... 🙄
  4. You can, using the Cool VL Viewer to export and re-import it (once done, the newly imported list will be used by any viewer, since it will have been updated server side). Proceed as follows:
     Launch the Cool VL Viewer, log in with the avatar you want to export the list from (let's call it ”Avatar1 Resident”), then log out. If you never logged in with your second avatar (let's call it ”Avatar2 Resident”) with either the Second Life official viewer or the Cool VL Viewer, then log in now with it, and log out (this ensures the proper per-account directory is created). From the settings directory (~/.secondlife/ under Linux, %AppData%\SecondLife\ under Windows, not sure which under macOS), inside the per-account sub-directory (here ”avatar1_resident/”), copy the ”mute_list.txt” file, and use it to replace the one (or add it when absent) inside the destination avatar directory (here ”avatar2_resident/”); see the copy command sketched at the end of this post. Log in with the second avatar using the Cool VL Viewer: the mute list (AKA block list for v2+ viewers) should have been updated (”View” -> ”Mute list” menu entry to check) with the entries coming from your first avatar. You may now log out (the list will get uploaded to the server by the viewer) and may then relog with any other viewer.
     It is also possible to entirely replace the second avatar's mute list with the one of the first: simply set the ”MuteListIgnoreServer” debug setting to TRUE (CTRL ALT S to open the debug settings editor) before logging out from Avatar1, recover its mute_list.txt file to replace the one of Avatar2, and log in with the latter using the Cool VL Viewer (the mute list of Avatar2 is then set to the exact copy of the one of Avatar1), at which point, do not forget to reset ”MuteListIgnoreServer” to FALSE before logging out.
     When using Linux (this should work with macOS as well: just replace ”~/.secondlife” with whatever directory name macOS is using) and the Cool VL Viewer only, you may also make it so that all your alts share the same mute list, thanks to this feature and to the OS symbolic file links support (something Windows never properly implemented). Proceed as follows: copy the mute_list.txt file from one avatar's per-account directory (e.g. ~/.secondlife/avatar1_resident/mute_list.txt) to ~/.secondlife/mute_list.txt, then create a symbolic link inside every per-account directory to that mute list file. E.g.:
        cd ~/.secondlife/avatar1_resident/
        rm -f mute_list.txt
        ln -s ../mute_list.txt
        cd ../avatar2_resident/
        rm -f mute_list.txt
        ln -s ../mute_list.txt
     Then log in with each avatar in turn with the Cool VL Viewer and log out. This will cause the common mute list file to get updated with each avatar's mutes, and each time you update the list from one avatar account, it will propagate to the other avatars on their next login.
     Caveat: when running two or more instances of the Cool VL Viewer simultaneously with avatars sharing the mute list, the latter will be updated with the list currently known by the last logged out session, so you might temporarily miss a recent update to that list (but it will get re-synced on subsequent logins, eventually, since the server side lists are kept separate and are used to update the mute list file as well with missing entries).
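     For illustration, here is what the copy step described above boils down to under Linux, assuming the example avatar names used in this post (adapt the per-account directory names to your own avatars; under Windows, copy the same file between the matching sub-directories of %AppData%\SecondLife\ instead):
        # Replace (or add) Avatar2's mute list with a copy of Avatar1's one.
        # Do this while neither avatar is logged in.
        cp ~/.secondlife/avatar1_resident/mute_list.txt ~/.secondlife/avatar2_resident/mute_list.txt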
  5. The GTX 660 and the Radeon HD 7000 were released in 2012, so this should really not pose any issue for PCs equipped with a dGPU. Intel Gen9 (Skylake) was released in 2015, so PCs with just an iGPU would be the most impacted by the abandonment of OpenGL in favor of Vulkan. You must also take into account that the upcoming PBR viewer will require at least the cited GPUs to work at acceptable frame rates; forward rendering got removed from it, and older GPUs, such as the GTX 460 or equivalent, are very bad at rendering in deferred mode (ALM now, PBR tomorrow)... I can only speak for myself, but I will likely make a poll on the Cool VL Viewer forum when the time comes, and if any user without Vulkan-capable hardware shows up, I will perhaps maintain a legacy OpenGL branch for some time...
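     If you are unsure whether your GPU/driver combo already exposes Vulkan, a quick check (a sketch, assuming the vulkan-tools package is installed under Linux, or the LunarG Vulkan SDK under Windows) is:
        # Lists the Vulkan API version and the devices/drivers found, if any
        vulkaninfo --summary
     (older versions of vulkaninfo may not know the --summary option; plain vulkaninfo works too, just with a much longer output).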
  6. No, this bug still plagues some (many, all, depending on traffic, perhaps ?) simulators (come and see in my home sim, Hunburgh, if you want), and does not depend in the least on what viewer you are using. It appears about two days after the last sim restart (and as a result my region gets restarted twice or thrice a week).
  7. Chat excerpt from last Server Users Group: Feel free to nag LL harder about it, since this bug is now over one year old ! 😲
  8. I actually considered implementing a Lua function for it... It was a (long) while ago, and I do not remember all the gory details which made me push the idea farther down my ”ToDo” list, but chances are they are related to how touch events are dealt with between the viewer and the server (asynchronous tasks are involved, with network messaging, which, while not impossible to deal with, does complicate things).
  9. There is already a ”HUD objects scale” setting in viewers, and while it applies to all worn HUDs, nothing would prevent making it a per-HUD setting (based on the HUD inventory name, for example). That won't be hard for LL to implement (the viewer name is provided to the login server: it's just a matter of setting a flag on the connected avatar as a result, and making it available to scripts). I really think this is an over-dramatization, since I do not see LL abandoning desktop PC SLers any time soon, even in the event of a massive mobile viewer adoption. Totally wrong ! By this principle, art would not exist at all, buildings would all be of the same model, etc... And innovation could not even exist ! Again, I do not see the mobile viewer and its future users as a menace to existing SL contents, features, and users ! Much to the contrary, I see it as an opportunity to develop SL in new ways that will benefit everyone.
  10. While I am personally not interested at all in a mobile viewer (being the old fart I am, I do not even own a smartphone; perhaps I'd need a fartphone ? 🤪), I am nonetheless quite happy that LL is finally developing one ! Why ? Because, let's face it, SL urgently needs a way to attract more users to survive. LL cannot rely only on old-time SLers to keep the business going. The mobile platform, while quite limited and imperfect for enjoying the full range of activities SL offers (I really don't see how you could para-RP on a smartphone, for example, or build, or script...), will give an opportunity for younger and/or less ”geeky” users to come to SL, and stay (user retention has always been a BIG issue in SL). So yes, there will be newcomers with different views on what SL is to be used for, but we should not fear it, and on the contrary embrace it as a chance to see SL thrive and develop over the next two decades, instead of slowly dying before closing down for good... I'm not worried: SL is big enough to expand the diversity of its usages without predating on the existing ones. Long live Second Life !
  11. Vulkan, yes. 😛 ”Next year” was just a vague mention. I won't bet on it, even if I am looking forward to it. Between the PBR viewer (and the few more months needed to finalize it) and the mobile viewer, I bet LL's programmers have enough work for quite a few months before they can dedicate their time to such a big task. As long as you have a Kepler (GTX 660) or newer GPU for NVIDIA, a GCN (Radeon HD 7000) or newer GPU for AMD, or a Gen9 (found in Skylake) or newer iGPU for Intel, you are good to go. Yes, with likely a threaded renderer, meaning the CPU single-core performance bottleneck would disappear...
  12. It shows that your texture memory is under high pressure (the bias is very high, at 4.5, for a maximum of 5.0, meaning the textures must be quite blurry since they are decoded at a lower resolution to fit in the 384MB of max bound textures). I would bet this is LL's viewer and its old 512MB maximum for the texture memory setting (which is quite low for modern graphics cards that have gigabytes of VRAM)... But this is not really a worry as far as freezes are concerned (those would happen when the VRAM is full, but here ”GL free” shows 8700MB, so that's fine). In conclusion, I doubt the freezes are due to the hypothesis I first made (but you can still have a look at the texture console when they happen, to see if those figures get worrisome). From the Windows task manager, you should be able to see the GPU usage figure, per process... For more detailed info, you could use (from a terminal) nvidia-smi.exe (/Program Files/NVIDIA Corporation/NVSMI/nvidia-smi.exe), which should list the VRAM usage, per application; see the example below. EDIT: I just saw this article about a bug that could cause high CPU usage in NVIDIA driver v531.18, so you might want to try the previous version of the driver, or reinstall v531.18 without the telemetry (which would apparently be the culprit); NVCleanstall can do that for you...
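      For example (a sketch; the exact path may vary with the driver version, and recent drivers may also place nvidia-smi.exe directly in the PATH):
        # Default output: GPU/VRAM utilization plus a per-process VRAM usage table
        nvidia-smi
        # Or poll the overall VRAM usage every 5 seconds:
        nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 5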
  13. Here is, for the record, the question I posted via the form: When I joined SL (2006), we could build everything with just the viewer build tools and a very basic texture editing software. Nowadays, short of using hyper-complex software such as Blender, we cannot build modern contents any more (and this is going to get even worse with PBR materials); we urgently need some in-viewer tools to perform basic mesh modeling (with, e.g., pre-made basic mesh prims and a way to model them like clay, or a mesh hull generation tool that would create an optimized mesh hull out of an SL-prims based build), and materials texture maps generation (and baking, for the compatibility diffuse map). Is there any plan to bring this in the future, or will the Sansar mistakes be repeated and content creation be reserved for experts and professionals ?
  14. What is the texture console saying (CTRL SHIFT 3) ?... Bias ? Bound GL textures ? VRAM usage ? This is a wild guess, but it looks like the VRAM is getting full and spilling over into RAM (causing the second-long freezes). FS (and LL, for the PBR viewer, at least) recently changed how they account for texture memory usage, and it may cause issues that were not previously seen happening... Also, try the Cool VL Viewer (it got fixes and uses different algorithms to work around such issues). There was an issue with some NVIDIA drivers and Discord, but it affected NVIDIA RTX 30* cards only, and got fixed in their recent drivers (your 531.18 version should be fine). Yet, you could have a look at the GPU frequency and VRAM usage (per application) when the freezes happen, to see if any similar issue would be arising...
  15. This problem is most likely due to bad/insufficient virtual size re-evaluation as the camera FOV changes (only a small part of the texture virtual sizes is re-evaluated at each frame). Try the Cool VL Viewer (I reworked the texture fetcher and cache quite a bit): you won't see this issue, unless a texture somehow got corrupted (it happens, sometimes, due to network issues), at which point you can still force-reload it (right click on the object and press CTRL SHIFT U). I also implemented a feature to load textures at the proper resolution faster when moving around: ”Advanced” -> ”Rendering” -> ”Textures” -> ”Boost proportional to active fetches”. And you may also ”Boost textures fetches now” manually (CTRL B), which also happens automatically on login and after far TPs, to rez the world hyper-fast.
  16. Unlike what you are suggesting, the viewer does use the full range of LODs of JPEG2000 textures: it is thanks to this mechanism that it is able to use so little memory, compared with what would happen if it had to display all the textures at discard level 0 (i.e. max LOD/full resolution). And yes, it makes a HUGE difference in both RAM and VRAM consumption, because it basically means that for each increased discard level, the displayed texture is four times smaller in size (pixel height and width are both divided by two at each level). E.g. a 1024x1024 pixels texture (as defined by the builder of the object using it) can therefore end up being used at 512x512, 256x256, 128x128, etc, depending on how important it is to the camera, i.e. how large its ”display area” is in the currently rendered scene; at 1024x1024, the (decoded) texture uses 4MB of RAM and 4MB of VRAM, while at 256x256, it only uses 256KB (see the arithmetic below)... If you want to see what would happen if the viewer were to load the full-res textures, enable the ”TextureLoadFullRes” debug setting, and get ready for a crash (unless you are in a place where there are not too many textures), because the memory consumption will soon skyrocket, till your RAM and VRAM are exhausted. Also, the viewer is ”clever enough” to use HTTP range requests so as to avoid having to re-download the low LOD part of the texture files (i.e. the already downloaded part) when it needs a higher LOD for them, and its texture cache is used intensively to avoid re-downloading a texture when increasing its discard level to make room in memory for another, more prominent texture in your FOV; instead, the already cached raw texture is fetched from the disk cache and simply re-decoded at a lower LOD.
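      The arithmetic, assuming the usual 4 bytes per pixel for a decoded RGBA texture:
        1024 x 1024 x 4 bytes = 4,194,304 bytes ≈ 4 MB   (discard level 0, full resolution)
         512 x  512 x 4 bytes = 1,048,576 bytes = 1 MB   (discard level 1)
         256 x  256 x 4 bytes =   262,144 bytes = 256 KB (discard level 2)
      Each additional discard level halves both dimensions, and therefore divides the memory footprint by four.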
  17. Indeed, for a role-player like me, voice is an absolute no-no: when I roleplay, I care about how the characters look, feel or ”sound” (via the posed text), not about who the actual, real-life persons behind them are, and I certainly do not want what is left to my imagination to be ruined by a RL voice that won't match the character being played... Not to mention the (spoken) language barrier: see below. I wish voice would be banned from those meetings... When they are held (even partly) on voice I cannot attend them; not being a native English speaker, I have the greatest difficulty understanding spoken English, especially when spoken with thick accents, a high speech rate, mumbled words, or through a microphone and speakers, with saturation, static and the like. Attending voice meetings would just be a source of a lot of frustration and a total waste of my time.
  18. Yes, indeed... My (still running) oldest (third, computing-power wise) PC got a Core2 Quad Q6600 (OCed @ 3.4GHz) with a GTX 660. Yours is barely, a wee bit, more ”powerful”, and I would definitely not use it for everyday SLing any more, even with the fastest viewer. Forget it ! You would be CPU-bound... For today's (and tomorrow's, considering the PBR impact) SLing, I would recommend, at the strict minimum, an i7-4770K (or equivalent) and a GTX 960 (forget about Intel/AMD (i)GPUs: their OpenGL drivers are simply not on par with NVIDIA's at an equivalent hardware price level), with 16GB of RAM.
  19. You should read better what I wrote: ”Linux got almost no malware and virus”. Yes, there are examples, such as injections via ”pip” with a hacked Python extension on github... However, if you limit yourself to your distro's packages and the Open Source software you compile yourself, you just never have any issue; I have been using Linux since 1993 and have never, ever encountered any malware, rootkit or virus on any of the many PCs I have been running it on.
  20. You are mistaken: I did not post the videos in this thread: @Arluelle did. 😄
  21. So you are running it under Windows... Yes, AMD's proprietary OpenGL driver is a likely culprit... AMD rewrote their driver a few months ago (because the old version sucked rocks and was dead slow), but their new driver is riddled with bugs. If it gets confirmed that your problem stems from the graphics driver, then, short of replacing your GPU with an NVIDIA one, your other option to avoid crashes and freezes would be to run the viewer under Linux (whose Mesa Open Source drivers run well and are stable); the good news is that the viewer would then run even faster and smoother !
  22. Signed binaries are not any guarantee that the binary does not contain malicious code (it could have been compiled with secret, malicious code, then signed). If you want the guarantee that the viewer you run does not contain any malicious code, then compile it from sources yourself (it is much harder to hide malicious code in the sources when the said sources are open; in fact, it is impossible to hide it in the long term). This is one of the strengths of, and one of the main reasons why, Linux got almost no malware and virus: everything is compiled from Open Source software, unlike what happens with Windows and macOS. <shameless self-promotion> If you want an easy-to-build, RLV-enabled viewer, then the Cool VL Viewer is the best candidate; once the necessary build tools are installed (gcc & Co under Linux, VS2022 under Windows, Xcode under macOS, with everything explained in detail in the viewer's sources doc/*BuildHowto.txt files), you just need to launch a single script/batch file command from a terminal to build it. </shameless self-promotion>
  23. I always have been, in the past, simply because it was possible to get great gains from it. My best overclocking achievements have been obtained with an Intel Core2 Quad Q6600 (3.4GHz OC instead of the 2.66GHz stock frequency) and an i5-2500K (4.6GHz locked on all cores instead of 3.3GHz base / 3.7GHz turbo at stock). I also overclocked old 486-SX/DX and Cyrix 6x86/M2/MX processors before (just like you, I am unbiased towards any brand, and just choose the best performance for my money at the time I buy new hardware), but while I did own AMD CPUs (K6-2, K6-2+, K6-III, Athlon XP, Athlon64), none of them provided sufficient overclocking headroom for it to be worth the time spent tuning the knobs and testing the stability. With the i7-9700K however, I found out that modern Intel CPUs do not have any headroom any more (or so little that it is anecdotal: only 100MHz for my 9700K, when the 2500K had 900MHz of headroom over the turbo frequency), and all you can achieve is running all cores at the turbo frequency.
      Undervolting won't allow you to achieve any stable overclock. It is only good when you won the silicon lottery and your CPU can work stably at the same frequency with a lower Vcore (meaning less heating), and if you can achieve this, then your CPU is also a good candidate for overclocking.
      Testing the stability of an overclock is much more demanding than just running Cinebench ! I do the following to ensure a stable overclock (everything done under Linux; see the sketch at the end of this post):
      - Running the compilation of large programs (the viewer code is a good candidate for this, since it can load all cores at 100% with just one short mono-core ”pause” during its whole compilation) in an infinite loop (I run such loops at night, so that's 8+ hours of compilation). gcc (the GNU compiler) is an excellent unstable-CPU crasher ! 😄
      - Running Prime95 in torture mode with the ”smallest FFT” and ”small FFT” modes for an hour or so, test runs repeated in both SSE2 and AVX modes (important for Intel), with a variable number of cores (all cores, 6, 4, 2), to ensure an adequate voltage is provided by the VRMs under various load conditions.
      - Running BOINC tasks (various projects, various loads, with SSE2, AVX, AVX2, etc) during a few nights: any computation error reported by the project could be the sign of an instability (but be careful, since some project tasks do error out ”naturally”: just look at what other BOINC participants got for that task's result).
      - As you found out, idling can also be the cause of issues, so I also test an idle PC at night !
      With a Zen CPU, I would likely attempt locking all cores at the turbo frequency as well, and should it fail, I would try locking only the best cores at the max turbo, and the rest at a slightly lower frequency; this is easy to do with good motherboards' BIOS/UEFI (I'm sure you can afford to buy one such MB 😜 ), or under Linux, via the /sys/devices/system/cpu/* controls... Note that locking the core frequencies allows you to achieve the best overclocks, because it avoids the Vcore drop-outs and overshoots which happen when the frequency (and the power consumption with it) brutally changes and the VRMs must catch up (there is always a delay, causing transitory voltage variations and possible resulting crashes). It also avoids the latencies seen when a CPU core must re-enable its caches and other parts when it gets assigned a thread to run while it was idling a few ms sooner, thus providing even better performance.
      Well, the 13900KS is not a good candidate for any overclock (i.e. running it over the max turbo, or even running all cores at turbo): it is already pushed to its best by Intel, at the factory, and no amount of personal effort will ever provide you with better results than what Intel got... All you can hope for (on the condition of cooling your CPU very, very well) is to run it at a higher base frequency on all cores (and let Intel's algorithm deal with turbo)...
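      For illustration, a minimal sketch of the kind of commands involved (assumptions: a Linux box exposing the cpufreq sysfs controls, any large source tree at hand to rebuild, and a 4.6GHz target used purely as an example; this is not a tuning guide):
        # Infinite rebuild loop: an unexpected compiler or linker error on a
        # known-good source tree is a strong hint of CPU instability.
        while true; do make clean && make -j$(nproc) || break; done

        # Request a fixed frequency range (values in kHz) on all cores; whether the
        # CPU actually holds it depends on the cpufreq driver and power/thermal limits.
        for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
            echo 4600000 | sudo tee $c/scaling_min_freq $c/scaling_max_freq
        done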
  24. Nope... The figures I give correspond to comparisons done with carefully stripped-down Windows installations (Win7 and Win11), with all the cruft removed and all the ancillary background tasks and services prevented from running (this of course includes Defender, Search, Smartscreen, Security center, etc).
  25. Regrouping your posts, since I was wondering, reading the second (cited first here), what your ”high end system” was... 😛 I am not surprised you got better frame rates with the 13900KS compared with the 5950X (the RAM speed should not make a huge difference, but does of course contribute): at this level of GPU performance (the RTX 4090 is a monster !), the bottleneck lies squarely in the CPU single-core performance, and the P-cores of the newer and super-clocked 13900KS definitely beat the (one generation older and lower-clocked) 5950X cores hands down... You could however have tried and overclocked the latter (if only two cores are overclocked, with the viewer affinity set to these overclocked cores; see the note below), since every % of clock speed translates into the same percentage gain in frame rates. This said, and sadly for most of us, ”poor” SLers, not everyone can afford a system such as yours !
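      As an illustration of the affinity part (a sketch for Linux; ”./run-viewer” is just a placeholder for whatever launcher script your viewer uses, and cores 0,1 stand for the two overclocked cores; under Windows, the same can be done from the Task Manager ”Details” tab via ”Set affinity”, or with ”start /affinity”):
        # Launch the viewer pinned to CPU cores 0 and 1
        taskset -c 0,1 ./run-viewer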