
Henri Beauchamp

Everything posted by Henri Beauchamp

  1. Indeed, for a role-player like me, voice is an absolute no-no: when I roleplay, I care about how the characters look, feel or ”sound” (via the posed text), not about who the actual, real life persons behind them are, and I certainly do not want what is left to my imagination to be ruined by a RL voice that won't match the character being played... Not to mention the (spoken) language barrier: see below. I wish voice were banned from those meetings... When they are held (even partly) in voice, I cannot attend them; not being a native English speaker, I have the greatest difficulty understanding spoken English, especially when spoken with a thick accent, at a high speech rate, with mumbled words, or through a microphone and speakers, with saturation, static and the like. Attending voice meetings would just be a source of a lot of frustration and a total waste of my time.
  2. Yes, indeed... My oldest (still running, and third in computing power) PC has a Core2 Quad Q6600 (OCed @ 3.4GHz) with a GTX660. Yours is barely a wee bit more ”powerful”, and I would definitely not be using it for everyday SLing any more, even with the fastest viewer. Forget it ! You would be CPU-bound... For today's (and tomorrow's, considering the impact of PBR) SLing, I would recommend as a strict minimum an i7-4770K (or equivalent) and a GTX960 (forget about Intel/AMD (i)GPUs: their OpenGL drivers are simply not on par with NVIDIA's at an equivalent hardware price level), with 16GB of RAM.
  3. You should read more carefully what I wrote: ”Linux got almost no malware and virus”. Yes, there are examples, such as injections via ”pip” with a hacked Python extension on GitHub... However, if you limit yourself to your distro's packages and the Open Source software you compile yourself, you just never have any issue; I have been using Linux since 1993 and have never, ever encountered any malware, rootkit or virus on any of the many PCs I have been running it on.
  4. You are mistaken: I did not post the videos in this thread: @Arluelle did. 😄
  5. So you are running it under Windows... Yes, AMD's proprietary OpenGL driver is a likely culprit... AMD rewrote their driver a few months ago (because the old version sucked rocks and was dead slow), but the new driver is riddled with bugs. If it gets confirmed that your problem stems from the graphics driver, then short of replacing your GPU with an NVIDIA one, your other option to avoid crashes and freezes would be to run the viewer under Linux (where the Mesa Open Source drivers work well and are stable); the good news is that the viewer would then run even faster and smoother !
  6. Signed binaries are no guarantee that the binary does not contain malicious code (it could have been compiled with secret, malicious code, then signed). If you want a guarantee that the viewer you run does not contain any malicious code, then compile it from sources yourself (it is much harder to hide malicious code in the sources when the said sources are open; in fact, it is impossible to hide in the long term). This is one of the strengths of Linux and one of the main reasons why it has almost no malware and viruses: everything is compiled from Open Source software, unlike what happens with Windows and macOS. <shameless self-promotion> If you want an easy to build, RLV-enabled viewer, then the Cool VL Viewer is the best candidate; once the necessary build tools are installed (gcc & Co under Linux, VS2022 under Windows, Xcode under macOS, with everything explained in detail in the viewer's sources doc/*BuildHowto.txt files), you just need to launch a single script/batch file command from a terminal to build it (see the sketch below). </shameless self-promotion>
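     As a rough illustration of how simple the Linux build is (the actual prerequisites, script name and options are the ones documented in doc/LinuxBuildHowto.txt in the sources; the package list and script name below are only placeholders for them):
         # Illustrative sketch only: see doc/LinuxBuildHowto.txt in the sources
         # for the real prerequisites and the real build script name/options.
         sudo apt install build-essential cmake python3   # gcc & Co and friends
         tar xjf CoolVLViewer-src-*.tar.bz2               # unpack the sources
         cd CoolVLViewer-*/
         ./build-linux.sh                                 # hypothetical single-command build wrapper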
  7. I always have been, in the past, simply because it was possible to get great gains from it. My best overclocking achievements were obtained with an Intel Core2 Quad Q6600 (3.4GHz OC instead of the 2.66GHz stock frequency) and an i5-2500K (4.6GHz locked on all cores instead of 3.3GHz base / 3.7GHz turbo at stock). I also overclocked old 486-SX/DX and Cyrix 6x86/M2/MX processors before (just like you, I am unbiased towards any brand, and just choose the best performance for my money at the time I buy new hardware), but while I did own AMD CPUs (K6-2, K6-2+, K6-III, Athlon XP, Athlon64), none of them provided sufficient overclocking headroom for it to be worth the time spent tuning the knobs and testing the stability. With the i7-9700K however, I found out that modern Intel CPUs do not have any headroom any more (or so little that it is anecdotal: only 100MHz for my 9700K where the 2500K had 900MHz of headroom over the turbo frequency), and all you can achieve is running all cores at the turbo frequency. Undervolting won't allow you to achieve any stable overclock. It is only good when you have won the silicon lottery and your CPU can work stably at the same frequency with a lower Vcore (meaning less heat), and if you can achieve this, then your CPU is also a good candidate for overclocking. Testing the stability of an overclock is much more demanding than just running Cinebench ! I do the following to ensure a stable overclock (everything done under Linux):
     • Running the compilation of large programs (the viewer code is a good candidate for this, since it can load all cores at 100% with just one short mono-core ”pause” during its whole compilation) in an infinite loop (I run such loops at night, so it's 8+ hours of compilation). gcc (the GNU compiler) is an excellent unstable-CPU crasher ! 😄
     • Running Prime95 in torture mode with the ”smallest FFT” and ”small FFT” modes for an hour or so, with the test runs repeated in both SSE2 and AVX modes (important for Intel) and with a variable number of cores (all cores, 6, 4, 2), to ensure an adequate voltage is provided by the VRMs under various load conditions.
     • Running BOINC tasks (various projects, various loads, with SSE2, AVX, AVX2, etc) over a few nights: any computation error reported by the project could be a sign of instability (but you must be careful, since some project tasks do error out ”naturally”: just look at what the other BOINC participants got for that task on that result).
     • As you found out, idling can also be the cause of issues, so I also test an idle PC at night !
     With a Zen CPU, I would likely attempt locking all cores at the turbo frequency as well, and should it fail, I would try locking only the best cores at the max turbo and the rest at a slightly lower frequency; this is easy to do with a good motherboard's BIOS/UEFI (I'm sure you can afford buying one such MB 😜 ), or under Linux, via the /sys/devices/system/cpu/* controls (see the sketch after this post)... Note that locking frequencies on cores allows the best overclocks to be achieved, because it avoids the Vcore dropouts and overshoots which happen when the frequency (and the power consumption with it) changes abruptly and the VRMs must catch up (there is always a delay, causing transitory voltage variations and possible resulting crashes). It also avoids the latencies seen when a CPU core must re-enable its caches and other parts after it gets assigned a thread to run while it was idling a few ms earlier, thus providing even better performance. Well, the 13900KS is not a good candidate for any overclock (i.e. running it over the max turbo, or even running all cores at turbo): it is already pushed to its best by Intel, at the factory, and no amount of personal effort will ever provide you with better results than what Intel got... All you can hope for (on the condition of cooling your CPU very, very well) is to run it at a higher base frequency on all cores (and let Intel's algorithm deal with turbo)...
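     To illustrate the Linux side of the above, here is a minimal sketch (assuming the kernel exposes the usual cpufreq sysfs controls for your CPU; values are in kHz, the core numbers and the 4.6GHz target are only examples, and it must be run as root) of locking a couple of cores at a fixed frequency, then stress-testing with an overnight compile loop:
         # Lock cores 0 and 1 at 4.6GHz via the cpufreq sysfs controls (kHz values).
         for c in 0 1; do
             echo 4600000 > /sys/devices/system/cpu/cpu$c/cpufreq/scaling_min_freq
             echo 4600000 > /sys/devices/system/cpu/cpu$c/cpufreq/scaling_max_freq
         done
         # Overnight stress test: rebuild a large program in an infinite loop and
         # stop as soon as a build fails, which would be a sign of CPU instability.
         while make -j$(nproc) clean all; do :; done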
  8. Nope... The figures I give correspond to comparisons done with carefully stripped-down Windows installations (Win7 and Win11), with all the cruft removed and all the ancillary background tasks and services prevented from running (this of course includes Defender, Search, SmartScreen, Security Center, etc).
  9. Regrouping your posts, since I was wondering, while reading the second one (cited first here), what your ”high end system” was... 😛 I am not surprised you got better frame rates with the 13900KS compared with the 5950X (RAM speed should not make a huge difference, but does of course contribute): at this level of GPU performance (the RTX 4090 is a monster !), the bottleneck is fully at the CPU mono-core performance level, and the P-cores of the newer and super-clocked 13900KS definitely beat the (one generation older and lower-clocked) 5950X cores hands down... You could have tried overclocking the latter, however (even with only two cores overclocked, and the viewer affinity set to these overclocked cores; see the example below), since every % of clock speed translates into the same % of frame rate. This said, and sadly for most of us ”poor” SLers, not everyone can afford a system such as yours !
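     Under Linux, pinning the viewer to specific (e.g. overclocked) cores is a one-liner with taskset; a sketch, where the core numbers and the launcher name are only placeholders for your own setup:
         taskset -c 2,3 ./cool_vl_viewer   # run the viewer on cores 2 and 3 only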
  10. In the case of NVIDIA GPUs (with NVIDIA's proprietary drivers), it runs better (about +5 to +20% fps, depending on the rendered scenes) because Linux (the kernel itself) has less overhead and (way) more efficient I/O (far fewer frame rate ”hiccups” while moving around and crossing sim borders, for example). In the case of AMD GPUs, this is both because of the above and because of AMD's deficient OpenGL implementation in their own drivers, which get replaced with Mesa under Linux.
  11. The problem might be that, with a middle-range, aging PC, you are using a high resolution monitor: your snapshot is 3840x1961 pixels wide, meaning that unless you took a ”high res snapshot” (which doubles the native resolution), you have a high DPI screen (see the quick pixel count below)... If this is the case, then on a ”standard” (full HD) screen you would get much better frame rates... And yes, with a high res monitor, an RTX 3070 would fare better, but not that much either (I'd say you would get twice the fps or so in this spot), because the CPU single core performance would become the bottleneck. With a 1920x1200 screen (i.e. slightly over full HD, which is 1920x1080), a 9700K @ 5.0GHz (locked on all cores), an RTX 3070 (2025MHz graphics clock, 16000MT/s VRAM clock), running the Cool VL Viewer under Linux, with all graphics settings maxed out, including shadows on, I get 60fps for a draw distance set to 256m, and 150fps with shadows off (not much of a visual difference in this spot, so I could not tell from your screen shot whether you had shadows on or off). As for the CPU and GPU usage, with shadows on, it was 36% CPU (~2.9 cores loaded) and 42% GPU.
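     As a quick pixel count (assuming the snapshot resolution is indeed your native screen resolution): 3840 x 1961 ≈ 7.5 million pixels per frame, versus 1920 x 1080 ≈ 2.1 million for full HD, i.e. about 3.6 times more pixels to shade on every frame.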
  12. I come back to this thread and... Wow ! I never thought such a minor ”issue” could inflame people like that ! 🤣 But, please:
  13. What is already worn at the same spot, of course... Maybe ”Wear/replace” then...
  14. Rename it ”Replace”, perhaps ? Or ”Wear & replace” ?
  15. Maybe, depending on what is the root cause for the crashes (if it's a bug in the graphics driver, it won't solve it)... Always worth a try, anyway.
  16. No, the full contents of the About floater. Without any info about your system (especially GPU brand/model, RAM, GPU driver version, etc), we cannot even start making hypotheses...
  17. Using LL's official viewer, what happens when you right-click and ”Edit” the unresponsive HUDs ?... If you can edit them, they are ”touchable” but their script(s) is(are) ”simply” stuck (or maybe crashed). If, when right-clicking, another (invisible) HUD object gets selected (outlined), then just ”Detach” it with the context/pie menu... Oh, also... You are in a parcel/land allowing scripts, aren't you ?...
  18. There might be two issues at play:
     • Your viewer has registered a RLV touch-restriction and is simply reapplying it on login.
     • The scripts don't restart, or restart too late (i.e. you lose patience before they do), and the said restriction can therefore not be refreshed/reevaluated.
     Your viewer should have a RestrainedLove log/restrictions floater of sorts, where you would be able to see the current restrictions in force. Otherwise, and as suggested above, log in with RLV disabled or with a RLV-less viewer (such as LL's official viewer), and see if it solves your issue...
  19. This issue has strictly nothing to do with RLV or the viewer in use. It is a script deserialization (*) issue, server-side.
     ------------
     (*) When your avatar changes region, a handover is done between the departure and arrival sims. That handover involves pausing the scripts and serializing them (kind of ”zipping” their data/state), the serialized data then being deserialized (”unzipped”) on the arrival sim. Once the scripts are ready, they are un-paused to continue their execution where it was suspended. This process normally takes a second or two (the more scripts, the longer it takes, of course), while here it can take a full minute.
  20. I have noticed this issue happening from time to time for the past two weeks or so, especially after far TPs. It is ”just” a long delay (several seconds to almost one full minute) between the arrival in a sim and the restoring of the script states; just be patient and wait for the script states to be deserialized... I will mention this issue at the next SUG though, since it might be related to a recent server-side change.
  21. This specific bug is a race condition between the interest list data sent by the server and the rebuilding of spatial groups in the viewer render pipeline; it is normally ”seen” (no pun intended) happening after login, a TP, or a sim border crossing. Some viewers (TPVs, not LL's) have workaround(s) for it (usually ”manual” ones, involving selecting a ”Refresh visibility of objects” or equivalent item in a menu); the Cool VL Viewer implements an auto-refresh of objects visibility for the three cases when such an issue might happen (login, TP, sim border crossing). For viewers not implementing any specific workaround, the most reliable way to make such ”invisible” objects pop into existence is to toggle the wire-frame render mode on/off. As Animats wrote, there are other possible issues that can lead to ”invisible” objects, but the one you described is only due to what I just explained.
  22. Most viewers do not even bother distributing the MSVC runtimes (or they distribute them with missing DLLs, or without the repeated copies in the llplugin/ sub-folder, because yes, Windoze sucks big time and SLPlugin.exe won't find the runtimes if they are not placed in its own ”home” folder; see the layout sketch below), resulting in such errors when the runtimes are not installed system-wide (and one viewer I tested, and will not name out of pure charity, is, at the other extreme, unconditionally installing system-wide runtimes even when they are already present on the system). There are also Micro$oft bugs in their own runtimes, e.g. with the vc143 (VS2022) runtimes that complain about a missing ”api-ms-win-core-com-l1-1-0.dll” on Win7, because the corresponding runtime DLLs (*140.dll) reference that (Win10+ only) library (which is not part of the redistributable VS2022 runtimes) while they do not even make any use of it (I can tell this for sure, because the viewer nonetheless starts and runs just fine after ignoring the missing DLL Windoze error dialog)... 🤪 This is why, for the Cool VL Viewer, I always test my installer on a VS-runtime-free Win7 VM, to ensure no such problem will ever occur.
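     For clarity, here is roughly what the install layout looks like once the runtime DLLs are duplicated where SLPlugin.exe expects them (the folder and executable names are illustrative; the x64 VS2022 runtime normally consists of msvcp140.dll, vcruntime140.dll and vcruntime140_1.dll):
         ViewerInstallFolder/
             viewer.exe
             msvcp140.dll
             vcruntime140.dll
             vcruntime140_1.dll
             llplugin/
                 SLPlugin.exe
                 msvcp140.dll        <- repeated copies, so that SLPlugin.exe
                 vcruntime140.dll    <- finds the runtimes in its own ”home”
                 vcruntime140_1.dll  <- folder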
  23. I have never seen any UDP packet loss due to too high a UDP traffic (and I have my limit set to 16Mbps in my viewer) ! This is an urban legend that needs to cease. UDP message processing takes very little time compared to the rest of the ”ancillary” tasks the viewers must deal with. One of the heaviest such tasks is by far texture prioritization, now that almost all the rest got pushed to threads.
  24. You might be (alas) a tad bit optimistic about the release date for a Vulkan viewer... As for Vulkan-capable GPUs: that's a pretty big list, and most (i)GPUs released in the past 8 years or so can do it... Plus, there will likely be a transition period with OpenGL viewers still maintained for some time. I'm afraid the bar will be raised much sooner however, with the future PBR viewer (which, at least for now, got rid of the less demanding forward rendering mode)...