Everything posted by Kathrine Jansma

  1. +1. Regressions and especially (usefully documented, logged) crash bugs get fixed pretty rapidly. In all the years I have used Henri's excellent viewer, it never kept crashing for more than two weeks (and that's easily worked around by using the previous version, unless you are masochistic and like to collect crash dumps and stack traces like me...). Other bugs, well, if LL broke the design, there is only so much one can do to fix things. It usually just gets better with new versions. Some minor hiccups may happen when major new features appear, usually in the unstable/experimental branch.
  2. That's quite a significant number. That is more than one outfit per day for 10 years. Even for modest image formats, we are talking about 400-500 MB of image data here. So with a 1 GBit/s fibre connection, the optimal case would still take at least 4-5 seconds, and it's rarely optimal. With 100 Mbit/s internet, we are already talking about around a minute of download time alone (see the back-of-the-envelope sketch below). With some luck there is a client-side cache, as this data would be trivially cacheable. That said, something like that shouldn't crash. But if done badly, it might do all kinds of fun stuff, like saturating your network link so other communication fails to reach you, or slowing down the main rendering loop too much. Did you double-check your antivirus exclusions too? I would imagine a virus scanner going crazy scanning the downloads might make things even worse.
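     A quick back-of-the-envelope sketch of those numbers (a minimal C++ example; the ~450 MB figure is my rough assumption from above, and real throughput is always below line rate):

        #include <cstdio>

        int main() {
            const double megabytes = 450.0;             // assumed size of the texture downloads
            const double megabits  = megabytes * 8.0;
            const double links[]   = { 1000.0, 100.0 }; // link speeds in Mbit/s
            for (double mbps : links)
                printf("%6.0f Mbit/s link: ~%.0f s minimum download time\n",
                       mbps, megabits / mbps);
            return 0;
        }

     That prints roughly 4 s for the 1 GBit/s link and 36 s for the 100 Mbit/s one, before any protocol overhead, server-side limits or contention.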
  3. As the official viewer only uses a single thread for this, it is only slightly useful. It also needs proper support by the GPU driver to be of any real use (see https://github.com/secondlife/viewer/blob/5c16ae13758bdfe8fe1f13d5f67eabbb6eaa30a1/indra/llrender/llimagegl.cpp#L2568 ). This may be reasonable, as the whole texture decode pipeline probably does not produce more textures to bind. In the Cool VL Viewer, a similar setting uses as many threads as there are texture decode threads running, to bind multiple textures in parallel (see the threading sketch below). And there are some optimizations in place to make some callbacks far less costly too, which may reduce stuttering (see the message thread here http://sldev.free.fr/forum/viewtopic.php?f=10&t=2335 and the resulting code; it's basically a scheme to only check when a texture is needed, not blocking on the check when it is bound). What are the risks of enabling it?
     - Temporary spikes in RAM/VRAM usage while the GPU driver churns through all the bind requests. I saw random spikes of 5-8 GB with AMD's drivers.
     - Stuttering and misbehaving GPU drivers. A driver can either process the texture bind requests in parallel (e.g. NVIDIA and newer AMD drivers), or it could just process them serialized. In the latter case, this would be worse.
     - Crashes if the GPU driver is bad.
     - Overloading your CPU with too many threads, if you have a slow machine with few CPU cores.
     - Extra RAM usage. Every thread needs some RAM, so spawning more threads on a machine with not enough RAM (e.g. 8 GB and an iGPU) is a bad idea.
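     A minimal sketch of that threading pattern (illustration only, not actual viewer code; all names are made up, and real GL binding needs a context shared with the main one per worker, which is elided here):

        #include <algorithm>
        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        struct DecodedImage { int id; /* pixel data elided */ };

        std::queue<DecodedImage> g_queue; // filled by the decode threads
        std::mutex g_mtx;
        std::condition_variable g_cv;
        bool g_done = false;

        void bindWorker() {
            for (;;) {
                std::unique_lock<std::mutex> lk(g_mtx);
                g_cv.wait(lk, [] { return !g_queue.empty() || g_done; });
                if (g_queue.empty()) return; // done and drained
                DecodedImage img = g_queue.front();
                g_queue.pop();
                lk.unlock();
                // glBindTexture()/glTexImage2D() would happen here.
                (void)img;
            }
        }

        int main() {
            // One bind worker per decode thread, as described above.
            unsigned n = std::max(1u, std::thread::hardware_concurrency() / 2);
            std::vector<std::thread> pool;
            for (unsigned i = 0; i < n; ++i) pool.emplace_back(bindWorker);
            // ... decode threads push into g_queue and call g_cv.notify_one() ...
            { std::lock_guard<std::mutex> lk(g_mtx); g_done = true; }
            g_cv.notify_all();
            for (auto& t : pool) t.join();
        }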
  4. There will be some mess in the transition period anyway, as nothing forces users to update to a WebRTC-enabled viewer. But the link to Inara Pey's post in the first post of the thread had this snippet from the SL SUG meeting: So that is some level of coordination. Even if "majority of users" doesn't mean a lot for other TPVs, given Firestorm's current userbase.
  5. Known and fixed Razer vulnerability. That one even needs cooperation from the logged-on user and physical access. But yes, that Synapse stuff is pretty crappy. The main attack vector for a viewer might be CEF, the embedded web browser. It can leak your IP address, and it is usually not as up to date as the desktop Chrome browser either. So it may still have vulnerabilities that are already fixed in desktop Chrome, which could in theory be exploited by malicious web pages on Media-on-a-Prim surfaces, or just by web links in chat that you clicked on. But overall, the report sounds like multiple issues at work. Some might be in-world, some might be on your machine. Hard to guess.
  6. AV adds latency to every file access. The periodic scans are not the issue; on-access scans are the problem. That's often not an issue if you have enough I/O requests in flight in parallel, which is exactly what the Windows I/O subsystem is great at. But if you do sequential I/O, your pipeline dries up rapidly: you need to wait for AV to scan those tiny files, and AV takes longer for that than the whole file operation would otherwise take. The viewer's cache I/O pattern is mostly sequential, so it gets hit hard by AV, even when using a single I/O thread to make it less bad (see the sketch below). Really fixing it would need a different I/O and cache model. Microsoft noticed the issue for its own development tools too and added an improvement called "Dev Drive" on Windows 11, which reduces some of the issues with AV (https://learn.microsoft.com/en-us/windows/dev-drive/ ). It has a kind of performance mode that delays AV scanning so it does not affect performance as badly (see Protect Dev Drive using performance mode - Microsoft Defender for Endpoint | Microsoft Learn).
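     A minimal C++ sketch of the difference (file names are made up): reading cache files one by one stalls on every on-access AV scan, while keeping several requests in flight lets the OS overlap the scans:

        #include <fstream>
        #include <future>
        #include <iterator>
        #include <string>
        #include <vector>

        static std::string readFile(const std::string& path) {
            std::ifstream in(path, std::ios::binary);
            return std::string(std::istreambuf_iterator<char>(in), {});
        }

        int main() {
            std::vector<std::string> paths = { "cache/a.tex", "cache/b.tex", "cache/c.tex" };

            // Sequential: each read waits for the AV scan of its file to finish.
            for (const auto& p : paths) readFile(p);

            // Parallel: several reads (and thus scans) are in flight at once.
            std::vector<std::future<std::string>> jobs;
            for (const auto& p : paths)
                jobs.push_back(std::async(std::launch::async, readFile, p));
            for (auto& j : jobs) j.get();
        }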
  7. It will be interesting once AMD Strix Halo APUs hit the market, as AMD claims they reach GPU performance at NVIDIA RTX 4070 levels.
  8. In theory, you can add some AV exclusion configuration logic for the default Microsoft Defender AV into your installer, as it runs with administrative permissions during installation. It's just a call to Add-MpPreference (https://learn.microsoft.com/en-us/powershell/module/defender/add-mppreference?view=windowsserver2022-ps) after all; a sketch follows below. Microsoft automatically adds some rules for certain server roles (e.g. https://learn.microsoft.com/en-us/defender-endpoint/configure-server-exclusions-microsoft-defender-antivirus#automatic-server-role-exclusions ), but it is a grey area. In practice this would probably be a reason to flag the installer as malware most of the time...
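     A hedged sketch of what that installer step could look like (the cache path is hypothetical, the call needs administrator rights to succeed, and as said, doing this may itself get the installer flagged):

        #include <cstdlib>

        int main() {
            // Register a Defender exclusion for the (made-up) cache folder
            // by shelling out to PowerShell's Add-MpPreference cmdlet.
            const char* cmd =
                "powershell.exe -NoProfile -Command "
                "\"Add-MpPreference -ExclusionPath "
                "'C:\\Users\\Me\\AppData\\Local\\MyViewerCache'\"";
            return std::system(cmd);
        }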
  9. Why is a program using resources that are available a bad thing? This only becomes an issue if you try to do a lot of things in parallel and need that extra RAM for something else. Second Life has a really huge amount of assets and textures in a typical scene, so caching them in RAM is a useful thing to do, as reloading them from even a fast SSD would be worse.
  10. Yes, but determining whether that is actually the case is surprisingly hard and error-prone across all those broken and weird graphics drivers. OpenGL does not have a built-in, standard function to tell you how much memory is free. So every driver has its own weird way to tell you the part of the answer that you do not really care about (see the sketch below). And if you have an AMD APU or an Intel GPU inside your CPU, the answer is even harder: theoretically your GPU has as much VRAM as your system has RAM on those. Add some broken algorithms that are too trigger-happy and you get the chaos seen.
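     A sketch of that vendor-specific mess (the extension enums are real, from GL_NVX_gpu_memory_info and GL_ATI_meminfo; context creation and platform setup are elided):

        #include <GL/gl.h>   // platform setup (e.g. windows.h first) elided
        #include <cstdio>
        #include <cstring>

        #define GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049 // NVIDIA
        #define TEXTURE_FREE_MEMORY_ATI                      0x87FC // AMD

        void printFreeVRAM() { // assumes a current GL context
            const char* vendor = (const char*)glGetString(GL_VENDOR);
            GLint kb[4] = { 0 };
            if (strstr(vendor, "NVIDIA")) {
                glGetIntegerv(GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, kb);
                printf("NVIDIA reports %d KB available\n", kb[0]);
            } else if (strstr(vendor, "AMD") || strstr(vendor, "ATI")) {
                // Returns four values; [0] is the free pool in KB, and on
                // APUs it may well include plain system RAM.
                glGetIntegerv(TEXTURE_FREE_MEMORY_ATI, kb);
                printf("AMD reports %d KB free for textures\n", kb[0]);
            } else {
                printf("No portable way to ask; guess and hope.\n");
            }
        }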
  11. Not easily. You would need to find a misbehaviour that leads to the certificate being blacklisted under the CA's policy. But you can rather easily Denial-of-Service the OCSP responder (see https://www.imperialviolet.org/2014/04/19/revchecking.html ). That's one reason OCSP stapling was invented, and another reason that browsers are planning to remove mandatory OCSP support (in addition to the privacy issues with OCSP). See for example: https://letsencrypt.org/2024/07/23/replacing-ocsp-with-crls.html
  12. Not really. There is some YAGNI ("You ain't gonna need it") style of development, where you try to radically simplify and throw out all the stuff that looks too complex, because simpler code runs faster, is easier to optimize, and makes the CPU and compiler happy. That's generally a good thing. It works just fine if you make a good guess about how your system behaves. In a simple lab situation with no failing textures and decent throughput on a high-end computer, the FIFO queue might actually be better, as it tends to have nicer memory access patterns. But handling the error cases correctly is often the really hard part. So it seems LL optimized for the "happy case" here and neglected the much, much harder to analyze error case (see the illustration below). If you never see failing textures, because your system decodes and loads them rapidly enough, the two are indistinguishable. So the old saying: "As simple as possible, but not simpler." In this case LL probably erred on the side of too simple.
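     An illustration of the point (made-up code, not LL's): with a plain FIFO, a texture that keeps failing is simply re-queued and retried at full cost forever, so handling the error case needs at least a retry cap on top of the simple happy path:

        #include <cstdio>
        #include <deque>

        struct Request { int textureId; int retries = 0; };

        // Fake decoder: every texture id divisible by 7 "fails".
        bool tryDecode(const Request& r) { return r.textureId % 7 != 0; }

        int main() {
            std::deque<Request> fifo = { {1}, {7}, {2}, {14}, {3} };
            while (!fifo.empty()) {
                Request r = fifo.front();
                fifo.pop_front();
                if (tryDecode(r)) continue;       // happy case: done
                if (++r.retries < 3)
                    fifo.push_back(r);            // retry later, behind fresh work
                else
                    printf("texture %d marked missing after %d tries\n",
                           r.textureId, r.retries);
            }
        }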
  13. I run all the time with an RX Vega 56, driver 24.3.1 and a Ryzen 5600G. It doesn't crash with SL. I did have some crashes with older drivers and older versions of the Cool VL Viewer. But I see reproducible sudden system shutdowns in other (I think) Unity-based games, which seem to point to some high power spikes. Those crashes go away if I go to Optimization and ask the AMD driver to do GPU undervolting. Maybe give that a try.
  14. It could be interesting on Mac OS X to try to use the Apple ImageIO framework instead of OpenJPEG, as Apple seems to have (or had, not sure?) a Kakadu license for that system library. A sketch of what that could look like is below.
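     A rough sketch through the ImageIO C API (macOS only; whether current macOS versions still ship a licensed JPEG 2000 codec in ImageIO is exactly the open question):

        // Build with: clang++ decode.cpp -framework ImageIO -framework CoreGraphics -framework CoreFoundation
        #include <ImageIO/ImageIO.h>

        CGImageRef decodeJ2C(const void* bytes, size_t len) {
            CFDataRef data = CFDataCreate(kCFAllocatorDefault,
                                          static_cast<const UInt8*>(bytes), len);
            CGImageSourceRef src = CGImageSourceCreateWithData(data, nullptr);
            CFRelease(data);
            if (!src) return nullptr;
            CGImageRef img = CGImageSourceCreateImageAtIndex(src, 0, nullptr);
            CFRelease(src);
            return img; // caller releases with CGImageRelease()
        }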
  15. With NVIDIA mostly occupied chasing AI profits and hype, there is probably pretty much a standstill in GPU performance ahead. NVIDIA's RTX 5xxx will probably not be such a massive leap forward, and AMD has already said their RX 9xxx series will not bother with chasing the performance crown. Same with CPUs: you do not see that massive progress anymore, since the "4 cores" barrier fell. Frequency is mostly maxed out by physics and cooling, and the smaller processes are needed for AI products currently, so no die shrinking either. So unless Intel Arc's next iteration suddenly finds a massive performance breakthrough, there will not be much progress in GPUs for the next 4-5 years. A few more AI cores and maybe raytracing, but not much else. So the prospect of that machine lasting 10 years isn't bad.
  16. There are a few dimensions to the question:
     - How large (in resolution) is the screen you want to play on? Full HD (1920x1080), WQHD (2560x1440) and 4K are common sizes these days. The larger your screen, the more performance the computer needs to make it pretty.
     - What is your approximate budget? Computers meant for gaming tend to cost >= $1000, mostly due to the prices for decent GPUs.
     - What is your current system? Sometimes you can just add a GPU, RAM or an SSD to a system to get from utter potato to something OK.
     - What kinds of things are you doing in SL? (And what other things do you need to do with the system?)
     - How long should it last?
  17. Yes, wait and see. The kind of regulation you linked to has been in place in Germany since the 1990s at least. It doesn't help a lot against children accessing porn: the big players do the compliance theatre, and the children simply visit sites outside Germany. I would be positively surprised if they could support the pseudonymous age verification API of the German identity card. So far, each time I needed age verification or similar, I had to visit the post office so they could make a paper photocopy of the digital ID card and collect an in-person signature, instead of a digital signature that would be good enough for eIDAS qualified signatures. So I highly doubt age verification actually works. But legislation and compliance rules may sometimes mandate investing in theatrical plays to soothe the gods of bureaucracy. I guess LL can just skip age verification for all accounts that are 18+ years old on SL, as those cannot be minors anymore?
  18. True. But both have some know-your-customer compliance topics to deal with, Tilia obviously more so, due to money laundering legislation.
  19. ID verification is expensive: at least $1-2 per check, often more, and good, reliable ID verification is probably 10x as much. In addition, all those documents are kind of toxic waste from a PII perspective, exposing you to massive risk in a data breach (like 600 € per user for basic GDPR issues, or worse). So as long as LL/Tilia can get away with it from a money laundering/know-your-customer compliance perspective, they will not bother, unless the damage to their bottom line from fraudulent trades becomes a real issue. Otherwise it is just part of the cost of doing business. It surely is painful for a vendor to be caught in some fraud. But that's not much different from being caught up in fraud in RL situations. Cost of doing business. ID verification doesn't kill that kind of fraud either; it just changes the way it is done (e.g. luring people into acting as money couriers or something like that). On the other hand, raising the entry barriers for Second Life users even further sounds like an extremely stupid thing to do. Then you end up with zero perfectly validated customers and zero fraud, because you have no business anymore.
  20. You do not need a phone to run TOTP-based 2FA. You can use a password manager with TOTP support, like the KeePassXC password manager, to handle that. It is less secure if you run it on the same computer, but still much, much more secure than just a simple password. See the KeePassXC User Guide. (A sketch of how little magic is involved in TOTP is below.)
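     A TOTP code is just HMAC-SHA1 over the current 30-second counter (RFC 6238), which any computer can do. A minimal C++ sketch using OpenSSL, assuming the shared secret is already base32-decoded to raw bytes:

        // Build with: g++ totp.cpp -lcrypto
        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <cstdint>
        #include <cstdio>
        #include <ctime>

        uint32_t totp(const uint8_t* key, size_t keyLen, time_t now) {
            uint64_t counter = (uint64_t)now / 30;  // 30-second time step
            uint8_t msg[8];
            for (int i = 7; i >= 0; --i) { msg[i] = counter & 0xFF; counter >>= 8; }

            uint8_t mac[EVP_MAX_MD_SIZE];
            unsigned int macLen = 0;
            HMAC(EVP_sha1(), key, (int)keyLen, msg, sizeof msg, mac, &macLen);

            int off = mac[macLen - 1] & 0x0F;       // dynamic truncation (RFC 4226)
            uint32_t bin = ((mac[off] & 0x7F) << 24) | (mac[off + 1] << 16)
                         | (mac[off + 2] << 8) | mac[off + 3];
            return bin % 1000000;                   // 6 digits
        }

        int main() {
            const uint8_t key[] = "12345678901234567890"; // RFC 6238 test secret
            printf("%06u\n", totp(key, 20, time(nullptr)));
        }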
  21. Take a look at your graphics options in the AMD Adrenalin software: I found those worked reasonably well for my Vega 56. Still much worse than what NVIDIA cards of the same price point can do. http://sldev.free.fr/forum/viewtopic.php?f=6&t=2445 Especially the "Smart Access Memory" thing is worth a try; it might need the proper settings in your UEFI/BIOS (Resizable BAR or so). The VRAM estimates given by the AMD driver can be bogus and include parts of your main memory, especially if you enable stuff like HBCC. So setting a sane explicit maximum may be better than keeping it on Auto and getting absurdly high values (which just make the driver spill data into your RAM, which is dog slow, or may even crash). I do get some 100+ fps when alone and with shadows disabled in the Cool VL Viewer; it drops to less in busy places, of course.
  22. I do like posteo.de: they allow anonymous payments, do not want any personal data for opening an account, and have a lot of good encryption options and useful 2FA options.
  23. You can do weird stuff with gcc and its -e/--entry and -nostartfiles options to change that, but that's not standard C of course. You still need some kind of entry point. But just because the entry point happens to be an event handler does not make the program really event-based. E.g. you can write code like this (which should of course use a timer event) to stay in a non-event regime, unless you need outside input of course:
     default
     {
         state_entry()
         {
             while (1)
             {
                 llSay(0, "The time is: " + llGetTimestamp());
                 llSleep(1.0);
             }
         }
     }
  24. That's a bit of a bogus argument. state_entry() can be seen as a 'main()' entry point if you want to look at it with a C hat on. You could compare LSL in part with some Tcl, even if LSL is much, much less powerful for metaprogramming. But a lot of idioms look familiar. In fact, it would be mostly trivial to implement the LSL syntax on top of Tcl, if someone wanted to. As Tcl can also be run event-driven, it feels pretty similar at times.