Kathrine Jansma

Everything posted by Kathrine Jansma

  1. As the simulators run on AWS and the CDN probably isn't, you could use the published AWS IP ranges (AWS Public IP Address Ranges Now Available in JSON Form | AWS News Blog (amazon.com)) to determine whether a connection targets a simulator or the CDN. Map that to the source IP and you are basically done. If the source IP isn't unique, start a small SOCKS5/HTTP proxy in a container, assign it a fixed local IP, and point your viewer's connections at it; then you have everything you need to determine routing rules. If your router can use Lua or Tcl or something similar to set up such dynamic routing, this should be trivial. Like: fetch the AWS JSON file to get the current AWS IPs, then: local SL proxy source IP => any AWS IP: pin to one connection; local SL proxy source IP => any non-AWS IP: allow multipath.
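A minimal sketch of that classification step in Python. The URL is AWS's published ip-ranges.json file; the inline sample prefix is just illustrative test data so the sketch runs offline:

```python
import ipaddress
import json
from urllib.request import urlopen

# AWS publishes its current IP ranges here (see the AWS News Blog post above).
AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def load_aws_networks(data):
    """Turn the ip-ranges.json document into network objects."""
    return [ipaddress.ip_network(p["ip_prefix"]) for p in data["prefixes"]]

def is_aws_ip(ip, networks):
    """True if the address falls inside any published AWS range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

# In practice you would fetch the live file:
#   data = json.load(urlopen(AWS_RANGES_URL))
# Here a tiny inline sample stands in for the real document.
data = {"prefixes": [{"ip_prefix": "52.94.76.0/22", "service": "AMAZON"}]}
nets = load_aws_networks(data)

print(is_aws_ip("52.94.76.10", nets))  # AWS (simulator-style) address -> True
print(is_aws_ip("192.0.2.1", nets))    # non-AWS (CDN-style) address -> False
```

A router-side Lua or Tcl script would do the same membership test and then pick the pinned route or the multipath route accordingly.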
  2. That's extremely unlikely for anything fetched from a CDN. Otherwise the CDN would need to do some form of channel binding, which is hard and breaks easily. It may be true for anything connecting directly to a region simulator. So if you used L7 routing you could probably detect the CDN cases and fully load-balance them, while keeping the region connections on one link.
  3. If you have Layer 7 routing you could probably load-balance the content delivery network traffic (e.g. textures), which should give a nice boost to rezzing performance.
  4. The 6 GB of VRAM on your card are needed for more than just textures: e.g. framebuffers to render into (maybe 100 MB, depends), shader programs, vertex data, textures, depth buffers, and so on. So clamping use at 4 GB for just textures is probably reasonable. The stuttering is more likely a different issue, related to how new textures are loaded, decoded, bound and transferred to the GPU while old textures are discarded and removed, which is a little bottlenecked in the viewer's architecture.
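To make that budget concrete, here is a toy sum; every figure except the 6 GB total is a rough guess, not a measured value:

```python
# Toy VRAM budget for a 6 GB card; all reserved figures are rough guesses.
total_mb = 6 * 1024  # 6144 MB

reserved_mb = {
    "framebuffers": 100,     # render targets (the ~100 MB mentioned above)
    "depth_buffers": 50,
    "shader_programs": 50,
    "vertex_data": 1024,     # meshes can be heavy in SL
    "driver_overhead": 512,
}

left_for_textures = total_mb - sum(reserved_mb.values())
print(f"left for textures: {left_for_textures} MB")  # 4408 MB, roughly 4 GB
```

Even with generous guesses for the non-texture consumers, roughly 4 GB remains for textures, which is why the clamp is plausible.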
  5. Well, Discord doesn't exactly have the best track record for GDPR compliance: https://www.cnil.fr/en/discord-inc-fined-800-000-euros But it seems Discord is the tool everyone defaults to these days. Popular often wins... /me feels old considering Usenet groups, IRC and Jabber/XMPP servers as antediluvian options. Did anyone consider something like GitHub Discussions? It might have some benefits, as it makes it easy to link to the Linden viewer code on GitHub and other code-focused tools that could help when discussing code-related topics: Communicating on GitHub - GitHub Docs
  6. I saw such crashes with current AMD drivers, but I have a peculiar setup with two AMD graphics cards, which confuses the memory estimation code (and it wasn't Firestorm either). You could disable Firestorm's dynamic memory settings and set a more conservative memory limit, as the card has "only" 2 GB of VRAM available.
  7. Maybe check if that aligns with the keepalive duration of the HTTP connection? The server might have killed off the connection and the client failed to notice. Sometimes the connection state can get lost in the byzantine maze of TCP handshakes, TLS handshakes and HTTP keepalive & pipelining.
  8. If you really want to dig deep into that kind of hiccup and slowdown on Windows, you can unleash tools like ETW and WPR/WPA. https://randomascii.wordpress.com/2015/09/24/etw-central/ has a few fairly deep dives into Windows system profiling. This kind of analysis shows all the interactions between system components like the network and the filesystem. SL viewers lack the necessary ETW trace points to really hook into that system. Usually one can use the Tracy profiler to get some insights into the viewer, but that basically stops at the process border (apart from also looking at the GPU). If one hooked up ETW, it would allow a deep dive from the network driver through the virus scanner and filesystem driver right into the GPU parts. Most games do not have such intense network and disk activity for streaming and discarding random new content, so pure game profilers might fall short for system-wide effects.
  9. Do you have an exception in place for your Virus Scanner to avoid touching the Viewer Cache directory?
  10. I doubt DirectStorage would be such a big thing for SL-style viewers. Games have assets on NVMe storage, so you can optimize to get the 3-15 GB/s streamed from those devices instead of the shoddy 100-500 MB/s you often get with single-threaded access. And if you need to decompress a 3 GB/s stream of assets in real time, some GPU support is super helpful, of course. But unless you have the whole scene cached in an SL setup, your network pipe is typically far away from anything like 3 GB/s; maybe you get 10 Gbit/s in rare cases with fast fibre setups. If you have extra CPU cores around, you can easily do the decoding and I/O in a thread pool with an efficient async I/O API like io_uring or IOCP and get decent throughput. But getting the textures into the GPU for use is crappy right now with OpenGL and the need to bind all the textures etc. Sure, if you raised the minimum OpenGL version to something newish (and not working on OS X) and rewrote the rendering loop to use bindless textures and other modern tricks, you could benefit from a fast pipe that pumps data into GPU memory for decoding/processing. But it's more likely the viewer would switch to Vulkan APIs first, so maybe LL will have a look at the pipeline from network/cache to the GPU at that time and profile it a bit.
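A minimal sketch of the thread-pool decoding idea in Python; zlib stands in for the real JPEG2000 texture decode, and the asset names and sizes are made up:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "cached assets": compressed blobs standing in for textures.
assets = {f"texture_{i}": zlib.compress(bytes([i]) * 4096) for i in range(8)}

def decode(item):
    """Decompress one asset; a real viewer would do JPEG2000 decoding here."""
    name, blob = item
    return name, zlib.decompress(blob)

# Overlap I/O and CPU work by decoding on a small worker pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    decoded = dict(pool.map(decode, assets.items()))

print(len(decoded))  # 8
```

With io_uring or IOCP the reads themselves would also be submitted asynchronously instead of blocking a pool thread per file, but the structure is the same: keep many operations in flight.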
  11. It could be nice for systems with an APU & GPU, e.g. an Intel or AMD CPU with a graphics core plus some external GPU. A bit like Intel's video transcoder.
  12. With Henri's Cool VL Viewer I tend to crash for only three reasons: network hiccups/disconnects (nothing to fix there, usually); AMD GPU driver issues (it just crashes deep inside the driver sometimes); and bugs in new features that have not been found and killed yet. That's fairly good. If you crash a lot more often, try updating your drivers and the viewer.
  13. With the Cool VL Viewer I frequently hit 10+ GB of RAM usage in texture/mesh-heavy areas with a large draw distance. More RAM doesn't hurt, usually. Windows will use it for filesystem caches and other things.
  14. If you want to add it to an NSIS installer, it's not all that much trouble to run a VC redist installer:
      !define VC_RUNTIME_2019_X64 "vcredist_2019_x64.exe"
      Section $(SECT_vcredist) vcredist
        SectionIn RO
        SetOutPath $TEMP
        nsExec::ExecToLog '"${VC_RUNTIME_2019_X64}" /q'
        Delete "${VC_RUNTIME_2019_X64}"
      SectionEnd
  15. You might want to clean the fans/cooler after 2 years. Usually works wonders. I had an older Acer Aspire and cleaning basically rejuvenated it quite a bit (it felt like 20% faster/no more thermal throttling).
  16. Try it. Especially with a good network pipe and enough cores for the texture decoding, it makes visiting some shops with lots of textures awesome. Only all the shredded meshes of avatars that need time to assemble themselves properly are a bit annoying, when all the rest of the scene appears rapidly.
  17. Depends on your AV solution. At least Windows Defender does not care about signatures. It might warn, but in the details it lets you install whatever you want. Code signing is really just an indicator that a binary is really by the organization it claims to be created by. But if that organization is Evil Inc., it will not help against malware at all. Unless you explicitly have a list of trusted organizations configured or check it manually, it's basically useless as a defense.
  18. If you just want to silence the antivirus, you have a few options: configure the AV to ignore the signing part; convince someone to create signed binaries; shell out around 200 €/$ per year for an Authenticode signing certificate (there are a few commercial options for that); or sign the application yourself... See for example: Using Powershell Authenticode Certificates to Self-Sign Application - Ipswitch
  19. Just lock the current attachment via RLV ;-). "Wear" basically stopped being useful (enough) once "Add" was added. It just clutters the menu and is bad UX. LL should really have some decent UX people streamline a bunch of the UI flows in the viewer to follow modern SL habits. But maybe make "Add" the default and only "Wear/Replace" when pressing Shift?
  20. This might be a side effect of SATA SSDs, which have a much smaller command queue than NVMe ones. The other effect might be that the fsync()/flush-file-buffers calls are typically per drive. A typical piece of advice is to add a virus scanner exclusion for the cache directory, as that helps quite a bit.
  21. One reason that happens is when some mandatory body part is missing from the restored outfit, e.g. the shape, eyes, or the old-style hair. Once you put it on, the avatar rezzes fully. Maybe viewers could double-check that the mandatory parts of an avatar are restored on load and offer some help when that fails? But that's probably not the only reason it can happen.
  22. GL_ATI_meminfo is reporting bogus numbers for the combination of a 5700G APU + discrete AMD GPU (on Windows; on Linux it looked fine). I saw 22 GB of VRAM reported for my 5700G + Vega 56. So if Firestorm depends on that extension to determine available VRAM, it might drastically overcommit the VRAM.
  23. A mid-range NVIDIA card would have been a better choice for SL. The Vega burns quite some energy; if you run it on Linux it is halfway decent, and on Windows it gets better with the newer drivers. The performance is kind of acceptable, but NVIDIA cards usually blow it out of the water for OpenGL applications like SL. For example, when visiting some club with 20+ AVs around, my FPS tends to drop to 15-25 or so (with AMD's older drivers it was like 5-12 fps), while Henri reported 70 fps with his older 1070 card and the same graphics settings (but a slightly faster CPU) when we looked at some issue a few months back. I would probably get an NVIDIA card instead if it is specifically for SL.
  24. At least AMD Driver 22.11.2 fails to report proper memory on Windows if both a 5700G APU & Vega 56 card are in the system. I get stuff like 22 GB of VRAM reported in that combination.
  25. In theory you can break every kind of disk performance if you issue frequent fsync() calls to the drive. Or if you disable the write cache on HDDs, e.g. if you tried to use SL on a Windows domain controller. 😉 Other example: the Firefox browser used a gazillion fsync() calls for its internal SQLite database (https://bugzilla.mozilla.org/show_bug.cgi?id=421482), which made it a total mess. The cache layer of the SL viewer might have similar lurking fsync() issues. Pushing the cache to a different drive might help if you have such fsync() storms running. The generic filesystem code of the viewer is pretty dumb; it does not really use the high-performance async I/O options you would need to really get the benefits of a modern NVMe SSD. Usually that does not matter, but sometimes it does. Modern I/O works best when you have lots of operations in flight via thread pools or modern async APIs like IOCompletionPorts/io_uring. The viewer just runs a small thread pool for disk access, far away from the queue depth of 65536 possible parallel operations a typical NVMe SSD supports. Most of the time that's just fine and fast enough. But you could do better.
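A small experiment that makes the fsync() cost visible; the file names and chunk sizes are arbitrary, and absolute timings depend entirely on the drive and filesystem:

```python
import os
import tempfile
import time

def write_chunks(path, chunks, sync_each=False):
    """Write chunks sequentially; optionally fsync() after every write."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())  # force the data down to the device
    return time.perf_counter() - start

chunks = [b"x" * 4096] * 200  # 200 writes of 4 KiB each

with tempfile.TemporaryDirectory() as tmp:
    buffered = write_chunks(os.path.join(tmp, "buffered.bin"), chunks)
    synced = write_chunks(os.path.join(tmp, "synced.bin"), chunks,
                          sync_each=True)

print(f"buffered: {buffered:.4f}s  fsync-per-write: {synced:.4f}s")
```

On a typical HDD, or an SSD with the write cache disabled, the fsync-per-write variant is dramatically slower, which is exactly the failure mode the Firefox bug above documented.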