
Kathrine Jansma

Posts posted by Kathrine Jansma

  1. 3 hours ago, Jesiris said:

    RAM memory: 8,00 GB

    If you can, add another 8 GB for happiness; it's about 12 € for a single 8 GB DDR3 module if you have a slot free, or double that if you replace your old modules.

    For efficiency, Cool VL Viewer and Genesis are both decent, especially if you tweak the settings a bit, but they are not V2 style.

     

    Not sure why you crash all the time with Firestorm; there are a few things that could go wrong or could overwhelm your machine.

    You might want to tweak some settings due to your low VRAM/RAM/CPU core count.

    You could try to disable "Dynamic Texture Memory" in Preferences -> Graphics -> Hardware Settings, as some AMD GPUs/drivers report bogus values for free memory, which causes crashes.

    You could also try to reduce Image Decode Concurrency to 1 on the same page; that might free up CPU cores (but slows down rezzing).

    You could also try enabling Preferences -> Graphics -> Rendering -> Restrict Maximum Texture Resolution and setting it to 512px.

     

    • Like 1
  2. 19 minutes ago, Love Zhaoying said:

    Conceptually, in RL you get a blur when things are far away.

    You may also get blur-like effects due to air movement (e.g. heated air over asphalt roads), dust, fog and other particles in the air that cause light to bend.

    So some amount of blur over dusty places like cities is probably okay.

    But I agree with others that a cutoff at 128m is a bit harsh on the eyes.

    I think there are a few things to consider here:

    1) Speed of rezzing/loading the whole static scene.

    After all, even with a 256m draw distance, the viewer needs to ingest a huge amount of textures and meshes to render the full view. That's okay if I am stationary and just want to gaze into the distance. But depending on network speed (e.g. "just" 100 MBit/s) it might take some time to flush all that data down the pipe for rendering, so some intelligent z-depth sorting and blurring might give a nicer result while loading happens.

    2) Speed of rezzing/loading while on the move

    A huge draw distance is nice, but I am just fine with a few imposters in the distance when moving around faster.

    3) FPS-preserving band-aids

    If the sheer amount of textures, meshes and other data overwhelms the viewer and/or the network, there should be some optimization to rescue a somewhat usable framerate.

    It would be interesting to have some statistics about bandwidth consumption for typical usage with different draw distances and scenarios: busy club, empty roads on deserted mainland, typical island, New Babbage with a few avatars in sight. Of course this differs when the area is already cached, so cache hit rate would also be interesting. Not sure if LL has any kind of data about that. I assume they have a vague idea about bandwidth usage per user.

    • Like 1
  3. 7 hours ago, Love Zhaoying said:

    These charges - and any future charges by AWS is a risk LL took when migrating to AWS from their own data centers. I didn't see anyone mention it yet, but "obviously" LL "should" be paying less overall with AWS, or they would not have made the migration (unless other tangible benefits like scalability make up for it).

    A cloud provider like AWS is always more expensive for "always-on" servers when you do not need the dynamic scaling or other services.

    But you gain a few nice things that may or may not be worth it.

    • A different allocation of money: you do not need to sink a lot of capital into operating a datacenter or estimate your growth for proper sizing, so you can avoid taking on debt for the investment.
    • Flexibility to experiment with technologies, for example the new event regions
    • Commodity infrastructure options that could replace your aging home-grown systems
    • Theoretical options to geo-distribute your regions
    • Easier ramp-up / hiring of admins
    • etc.

    I'm not sure if LL regrets the move to AWS. But I am pretty sure they have not yet used all the opportunities the infrastructure could offer, so currently it feels a little worse for some systems.

     

  4. Sadly there are still stupid ISPs that only offer IPv4 access (mine, for example 😞). So going IPv6-only is still not a good option in all cases.

    IPv6 has a whole list of changes, but for most simple uses adapting code to do IPv6 is mostly trivial. The tricky parts come in when you try to do dual-stack IPv4/IPv6 and things like "happy eyeballs", which tries parallel connections via IPv4 & IPv6 and uses the quicker one (see the sketch below).

    The other stuff is on the routing and DNS layer and not really interesting to simple client applications. So if the regions used IPv6, I'm sure viewers would have no real trouble adapting to that. But getting all users onto IPv6 is more of a problem.
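
    For illustration, a minimal Python sketch of the "happy eyeballs" idea (my own example, not viewer code; the hostname is just a placeholder): resolve both address families and use whichever connection wins. A real implementation (RFC 8305) staggers the attempts instead of firing them all at once.

    import socket
    import concurrent.futures

    def connect(addrinfo):
        family, type_, proto, _canon, sockaddr = addrinfo
        sock = socket.socket(family, type_, proto)
        sock.settimeout(5)
        sock.connect(sockaddr)  # raises OSError on failure
        return sock

    def happy_eyeballs(host, port):
        # getaddrinfo returns both AAAA (IPv6) and A (IPv4) results on dual-stack hosts
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        with concurrent.futures.ThreadPoolExecutor(max_workers=max(len(infos), 1)) as pool:
            futures = [pool.submit(connect, info) for info in infos]
            for fut in concurrent.futures.as_completed(futures):
                try:
                    winner = fut.result()  # first successful connection wins
                except OSError:
                    continue
                # close any losers that already finished; a real implementation
                # would also cancel the still-pending attempts
                for other in futures:
                    if other is not fut and other.done() and not other.exception():
                        other.result().close()
                return winner
        raise OSError("no address worked")

    sock = happy_eyeballs("example.com", 443)  # placeholder host
    print("connected via", "IPv6" if sock.family == socket.AF_INET6 else "IPv4")
    sock.close()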

    • Like 2
    • Thanks 1
  5. 21 hours ago, Lucia Nightfire said:

    Why is there not someone designated to keep those concurrent?

    The only right solution is to automate that stuff away. Just throw in some ACME or AWS magic and the simhosts or load balancers should be able to refresh their certificates on restart.
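
    A minimal sketch of that automation idea (my own illustration; it assumes certbot is installed, the host can answer the ACME challenge, and the hostname is a placeholder): on restart, check how long the current certificate is still valid and renew if it is getting close.

    import socket
    import ssl
    import subprocess
    import time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])  # seconds since epoch, UTC
        return (expires - time.time()) / 86400

    if days_until_expiry("sim1234.example.com") < 30:  # placeholder simhost name
        # certbot only replaces certificates that are actually close to expiry
        subprocess.run(["certbot", "renew", "--quiet"], check=True)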

    • Like 1
  6. The logfile might show more insight into why you crash. Inventory/cache file corruption, as mentioned by Ardy Lay, is surely one issue that might happen.

    Another possible issue would be lack of VRAM. A 1050 Ti has just 4 GB of VRAM. Depending on how well the viewer's free VRAM detection works, you might run out of VRAM and crash. If this is the case, you might be able to set a fixed amount that's good enough for two sessions.

     

  7. 10 hours ago, Casper Warden said:

    OpenGL supports context sharing, it's possible to fill vertex buffers and such on multiple threads.

    The current viewer does use that already, for loading and binding textures. It helps a bit but can grind to a halt when the OpenGL driver blocks on the necessary synchronization primitives. The most interesting features are also either modern (e.g. OpenGL 4.x, so not available on OS X) or vendor-specific extensions.
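
    For illustration, a minimal sketch of the shared-context approach (my own example using the glfw and PyOpenGL packages, not the viewer's actual code): a hidden second context shares objects with the main one, so a worker thread can upload texture data while the main thread keeps its own context current.

    import threading
    import numpy as np
    import glfw
    from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D, glFinish,
                           GL_TEXTURE_2D, GL_RGBA, GL_UNSIGNED_BYTE)

    glfw.init()
    main_win = glfw.create_window(800, 600, "main", None, None)

    # Hidden helper window whose GL context shares objects with main_win.
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)
    upload_win = glfw.create_window(1, 1, "upload", None, main_win)

    def upload_texture(pixels, width, height, out):
        # Runs on a worker thread with the shared helper context current.
        glfw.make_context_current(upload_win)
        tex = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels)
        glFinish()        # make sure the upload has finished before handing it over
        out.append(tex)   # the texture id is valid in the main context as well

    glfw.make_context_current(main_win)
    pixels = np.zeros((64, 64, 4), dtype=np.uint8)  # placeholder texture data
    out = []
    worker = threading.Thread(target=upload_texture, args=(pixels, 64, 64, out))
    worker.start()
    # ...the main render loop keeps running here and binds out[0] once it appears...
    worker.join()

    The catch, as mentioned above, is exactly that glFinish()/synchronization step: if the driver serializes those calls internally behind its own locks, the extra thread does not buy much.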

    • Thanks 1
  8. On 7/8/2023 at 1:54 PM, AmeliaJ08 said:

    It's honestly crazy that the renderer hasn't had a re-write to properly utilise all available hardware.

    It is a tradeoff, as all engineering is.

    It is not a matter of "re-writing" the current code to use hardware better by gradual improvement. Switching to a different API like Vulkan is more or less a complete change of rendering architecture. So you need to learn new paradigms, rediscover basic errors with the new API and so on.

    In addition, you have an existing userbase on a somewhat aging hardware setup. Going straight for shiny new stuff might leave a lot of paying customers behind. 

    So either you invest heavily, get developers that know the new APIs, and spend some person-months or years to rewrite the whole engine, with the associated risk. Or you try to do small improvements to ease the pain.

     

    • Like 2
  9. 20 minutes ago, Aishagain said:

    Multi-thread maybe but still single core only.

    That sentence sounds strange, but has some truth in it.

    Modern viewers can use extra cores to run threads that decode textures and bind OpenGL textures. Current NVIDIA and AMD OpenGL drivers also use multiple cores for rendering. But the main rendering loop of the viewer still runs on a single core.

    So more CPU cores help to rezz things faster, but do not help all that much with getting more FPS once textures are loaded.

    • Like 2
    • Thanks 1
  10. On 6/14/2023 at 9:14 AM, Titan Varela said:

    Getting back to the OP the account name (username) is not personal information with regards to data protection as it is not the name of a living person (unless you managed to obtain a name that does actually match your own RL name).

    For GDPR that does not matter.

    This blog article describes the difference between the usual PII concept and the definition in the EU GDPR, which is broader and does cover usernames.

    https://techgdpr.com/blog/difference-between-pii-and-personal-data/

    But the risk involved is usually small enough that, besides some extra paperwork, not much happens.

  11. 23 hours ago, Nalates Urriah said:

    While it is a good idea to change one's password periodically,

    It is not a good idea. That idea is dead in 100% of sane authentication guidelines by now (e.g. the NIST Digital Identity Guidelines and others).

    The current recommendation everywhere is more like:

    • Have a unique, strong password (long, i.e. a 15-25 character passphrase or so)
    • Use MFA
    • Change the password only IF you assume some breach has happened

    Edit: Well, doing a voluntary change is fine. Forcing periodic changes is useless.

     

     

    • Like 2
  12. 2 hours ago, Henri Beauchamp said:

    this will however have to wait till a multi-threaded renderer is implemented (Vulkan, you know)

    Just in theory: in a system with two GPUs (e.g. a CPU like the AMD 7xxx series with built-in graphics plus a dedicated graphics card), could one use offscreen rendering on the secondary GPU for that too?

    e.g. with https://registry.khronos.org/OpenGL/extensions/AMD/WGL_AMD_gpu_association.txt or https://registry.khronos.org/OpenGL/extensions/NV/WGL_NV_gpu_affinity.txt ?

  13. At least while Apple still shipped x86 hardware, the MBPs were quite popular as nice company-paid notebooks, even when the job needed Windows stuff.

    Right now it's easier to rationalize some fat NVIDIA 4090-equipped Windows laptop "for AI stuff..." in the workplace.

     

    • Like 1
  14. 13 hours ago, KydronAegis said:

    the viewer simply broke

    Define "broke". That could be anything.

    What does not work anymore?

    Does the viewer crash on startup?

    Does it just look bad?

     

    I use the Cool VL Viewer, Firestorm and Genesis with driver 23.4.3 on a Vega 56, and it still works as well or as badly as with most previous drivers.

    If you suspect the driver is the culprit, rolling back to a known-good driver might be a good idea.

  15. 3 hours ago, Love Zhaoying said:

    Is sarcasm, or is it really much different than using a solid-state drive?

    It obviously depends on how good your system is with its disk cache.

    The hardware differences are still huge:

                  Latency       Throughput
    DDR5 RAM      20 ns         80 GB/s
    NVMe SSD      200,000 ns    2-15 GB/s (PCIe 3 to 5)

    Add to that the extra latency of the driver stack and it's a massive difference. BUT... if your OS has a decent disk cache, that may make the difference vanish. However, it is pretty easy to get data kicked out of the disk cache, so unless you have giant amounts of RAM and a sufficiently huge disk cache, chances are that a dedicated RAM disk is faster.

    Does it really matter? Rarely.

    It is squeezing out the last extra bits of performance. With a RAM disk you do not really need to multithread your file access all that much, as you can do a lot more sequential I/O calls in the same timeframe, while with NVMe you would need to drive the concurrency up to make up for the slower hardware. As NVMe allows a huge number of parallel requests (>> 64k), you can recover a lot of the latency difference if you are able to keep the I/O queue full (see the sketch below). But viewers do not really do that, so RAM wins.
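
    A rough illustration of the queue-depth point (my own sketch; the file names are placeholders): the same 1,000 small reads issued one after another versus from a thread pool. On an NVMe SSD the concurrent version narrows the gap to a RAM disk, because many requests are in flight at once instead of paying the full latency per request.

    import time
    from concurrent.futures import ThreadPoolExecutor

    FILES = [f"cache/texture_{i:04d}.bin" for i in range(1000)]  # placeholder paths

    def read_one(path):
        with open(path, "rb") as f:
            return len(f.read())

    def sequential_reads():
        return [read_one(p) for p in FILES]

    def concurrent_reads(workers=64):
        # ~64 requests in flight keeps the NVMe queue reasonably full
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(read_one, FILES))

    for name, fn in (("sequential", sequential_reads), ("64-way concurrent", concurrent_reads)):
        start = time.perf_counter()
        fn()
        print(f"{name}: {time.perf_counter() - start:.3f}s")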

    Telling your AV solution to keep its hands off your disk cache is typically an order of magnitude more important. 

    • Thanks 1
  16. 1 hour ago, animats said:

    Same region, Firestorm. Download bandwidth averages below 20Mb/s.

    You should really give Henri's Cool VL Viewer a run for comparison. It frequently manages to saturate my 100 Mb/s link and fetches textures and meshes with pretty similar bandwidth/throughput to Sharpview, despite the legacy architecture (just with a threadpool running texture decoding and a threaded FS cache to put the downloaded data on disk).

    I am pretty sure using modern io_uring/IOCP would be even better than simple threaded filesystem access, but at the fairly low bandwidth/access frequency involved (compared to theoretical NVMe SSD IOPS) it doesn't matter.

  17. 17 hours ago, animats said:

    Something is badly wrong.

    Sure. The AMD drivers. 

    AMD reworked the whole Windows OpenGL driver to work similarly to their Vulkan driver.

    It IS much faster and multithreaded now, but the first few versions were horribly unstable. The current ones (23.4.1) are a little better.

    I sometimes see the RAM usage of the viewer shoot up from 7 GB to around 12-14 GB, with intense CPU usage, nearly all inside the AMD driver on multiple cores. The GPU memory usage looks stable though, at least as far as Windows shows. This runs for a while, from a few seconds to maybe 1-2 minutes, freezing the whole viewer, then crashing to desktop. And the crash dump stacktrace is basically completely inside the AMD driver each time (of the 30+ ones I looked at).

    It crashed with other OpenGL games too, reproducibly, with stable crash stacktraces, after driver updates.

    • Like 3
  18. 53 minutes ago, Paul Hexem said:

    it could speed up fetching of multiple assets at once like sounds and textures

    Even that should only matter if you max out one connection or if the CDN throttles you based on source IP. The viewer doesn't care. If you have enough CPU cores to handle it, a viewer could probably saturate a 1 GBit/s pipe.

    Link aggregation/multipathing is usually only worth the trouble for failover, or for throughput optimization when you cannot buy a faster link. So unless you already have at least 1 GBit/s it is probably a waste of time to do load balancing instead of just getting a faster link. And if you go faster, you might discover that the cheap TP-Link router is starved of CPU power to keep up with the bandwidth and encoding requirements (e.g. PPPoE).

    And if you use it to bridge networks, you are typically better off with some MPTCP bridge setup. The only other real use case where multipathing is frequently used is mobile devices, where you multipath over WiFi and the LTE connection; Apple does that with MPTCP, for example.

  19. As the simulators run on AWS and the CDN probably doesn't, you could take the published AWS IP ranges (AWS Public IP Address Ranges Now Available in JSON Form | AWS News Blog (amazon.com)) to determine whether a connection goes to a simulator or to the CDN.

    Now map that to the source IP and you are basically done. And if the source IP isn't unique, start a small SOCKS5/HTTP proxy in a container, assign it a fixed local IP, have your viewer connect through it, and you have all you need to determine routing rules.

    If your router can run Lua or Tcl or something to set up such dynamic routing rules, this should be trivial (a rough Python sketch of the AWS-range check follows below). Something like:

    Fetch AWS JSON file to get the current AWS IPs.

    Local SL Proxy Source IP => ANY AWS IP : pin to one connection

    Local SL Proxy Source IP => ANY Non AWS IP: allow multipath
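
    And the rough sketch of the AWS-range check (my own illustration, IPv4 only; the destination IPs are just examples): download the published ip-ranges.json and test whether a destination falls into an AWS prefix.

    import ipaddress
    import json
    import urllib.request

    AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

    def load_aws_networks():
        with urllib.request.urlopen(AWS_RANGES_URL) as resp:
            data = json.load(resp)
        return [ipaddress.ip_network(p["ip_prefix"]) for p in data["prefixes"]]

    def is_aws(ip, networks):
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in networks)

    networks = load_aws_networks()
    # pin simulator traffic (AWS destinations) to one uplink, multipath the rest
    for dest in ["54.200.10.10", "151.101.1.1"]:  # example destinations
        rule = "pin to one connection" if is_aws(dest, networks) else "allow multipath"
        print(dest, "->", rule)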

    • Like 1
    • Thanks 1