
It's time for a place for technical discussions on viewers


animats


11 minutes ago, bigmoe Whitfield said:

@animats 100% usable on a 980 SC. Love it.

Good to know. Thanks. That's a 4GB board. You might be able to find some very cluttered SL regions where the GPU will fill up. An NVIDIA 640, with 2GB, is definitely not big enough. I haven't been able to find a region in SL that won't fit in an 8GB NVIDIA 3070 GPU.

Still early days. There's nothing in Sharpview yet to turn down the quality for low-end GPUs. Current test versions will slow way down or fail completely.

My goal is to make SL look like an AAA game title on machines that can run modern AAA game titles. Here are the system requirements for Cyberpunk 2077:

Cyberpunk 2077 Recommended Requirements

  • CPU: Intel Core i7-4790 or AMD Ryzen 3 3200G
  • RAM: 12 GB
  • VIDEO CARD: NVIDIA GeForce GTX 1060 or AMD Radeon RX 590
  • DEDICATED VIDEO RAM: 6 GB
  • PIXEL SHADER: 5.1
  • VERTEX SHADER: 5.1
  • OS: Windows 10 64-bit
  • FREE DISK SPACE: 70 GB

That's roughly what you'll need.

The typical Steam user today has a 6GB NVIDIA GTX 1060-level machine. Steam publishes data on this, and that's what the industry targets.


CPU: Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz (3317.27 MHz)
Memory: 32679 MB
Concurrency: 12
OS Version: Microsoft Windows 10 64-bit (Build 19045.2846)
Graphics Card Vendor: NVIDIA Corporation
Graphics Card: NVIDIA GeForce GTX 980/PCIe/SSE2
Graphics Card Memory: 4096 MB

Love how Firestorm is like "you are at 3.30 GHz" when in reality I've been at 4.5 GHz since 2015, after I built it, lol. Anyway, yeah, that's my setup.


3 hours ago, bigmoe Whitfield said:

Love how Firestorm is like "you are at 3.30 GHz" when in reality I've been at 4.5 GHz since 2015, after I built it, lol.

This is not Firestorm's fault: Windoze does not offer any means to find out the actual frequency of the CPU core running a piece of software, and believe me, I tried hard to get it for my own viewer... "Modern" processors with turbo mode simply get reported at their TSC frequency, sadly.

Amusingly, I managed to find a method to get the turbo frequency while the Windows build of my viewer runs via Wine under Linux (i.e. the Wine folks got the corresponding system call right, unlike Micro$oft)! 🤣


Will "Mesa+Zink" fit the topic of the thread as well?

I have been playing around with using Mesa+Zink to 'translate' OpenGL calls into Vulkan.

Works quite well for me so far (Kokua Viewer + AMD Ryzen 3700U [Vega 10]). Very stable, smoother movements, etc.

Very infrequent glitches (none in the places I frequented, thankfully).

The only drawback is when changing graphics settings (e.g. from Medium to Ultra, when I'm gonna take some selfies): Mesa needs to throw away its state and rebuild everything. I've been experiencing a 6-10 second delay changing graphics settings and back again. But after that delay, everything works smoothly again.

 


7 hours ago, primerib1 said:

Will "Mesa+Zink" fit the topic of the thread as well?

Sure. For what target machine?

"The Zink driver ... emits Vulkan API calls instead of targeting a specific GPU architecture. This can be used to get full desktop OpenGL support on devices that only support Vulkan."

Another one of those intermediate graphics plumbing layers. Programs need something to call to get the graphics system to do things. Down at the bottom, there's a layer for talking to the GPU. Programs can talk to it directly, but then they're not very portable. So there's a large collection of plumbing components to connect the two layers in a cross-platform way. There's Zink, which is part of Mesa, a decades-old 3D plumbing layer. Unreal Engine and Unity have their own internal layers. There's WGPU, which I use, which connects to Vulkan (Linux and Windows), DirectX (Microsoft), Metal (Apple), and Android. There are others, but those are the main ones in active use right now.
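To make that concrete, here is a minimal sketch in Rust of what such a layer buys you, using the wgpu and pollster crates (API names follow wgpu circa 0.16 and shift between versions, so treat it as illustrative): one adapter request, and the library picks Vulkan, DirectX, or Metal underneath.

    fn main() {
        // Ask the cross-platform layer for a GPU adapter; wgpu decides
        // which backend (Vulkan, DX12, Metal, GL) to use underneath.
        let instance = wgpu::Instance::new(wgpu::InstanceDescriptor {
            backends: wgpu::Backends::all(),
            ..Default::default()
        });
        let adapter = pollster::block_on(instance.request_adapter(
            &wgpu::RequestAdapterOptions {
                power_preference: wgpu::PowerPreference::HighPerformance,
                compatible_surface: None,
                force_fallback_adapter: false,
            },
        ))
        .expect("no suitable GPU adapter found");
        // The same binary reports Vulkan on Linux, Dx12 on Windows, Metal on macOS.
        println!("backend: {:?}", adapter.get_info().backend);
    }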

They're all complicated systems for papering over the differences in the underlying platforms. I was chatting recently with a developer who has a sailboat racing game written in Rust, and I asked what he used. He just targets "good old" Microsoft DirectX 11. So his program is limited to Windows, but straightforward. Life is simple.

This plumbing has to be developed by someone who has all the target machines on their desk and the knowledge to use them. Or by a closely cooperating team. There is a shortage of such people in the Open Source world. It's not very fun to work on. Yet it's essential to getting anything to be portable across platforms. The target today can be a desktop, a laptop, a limited laptop such as a Chromebook, a tablet, a phone, a game machine, or a VR headset. The days when you could just target 32-bit Windows and be done with it are over.

Such plumbing issues have taken more than half my time in my own viewer development. This is why the Unity-based people are getting results sooner.


23 hours ago, animats said:

Another one of those intermediate graphics plumbing layers. Programs need something to call to get the graphics system to do things. Down at the bottom, there's a layer for talking to the GPU. Programs can talk to it directly, but then they're not very portable. So there's a large collection of plumbing components to connect the two layers in a cross-platform way. There's Zink, which is part of Mesa, a decades-old 3D plumbing layer.

Mesa3d + Zink actually does some heavy lifting trying to 'simulate' the OpenGL state machine in Mesa, with Zink doing the additional heavy lifting to target Vulkan.

Here's what the two did, in principle (though the article is about OpenGL-to-Direct3D): https://www.collabora.com/news-and-blog/blog/2020/07/09/deep-dive-into-opengl-over-directx-layering/

For me, running Kokua on top of Mesa3d+Zink does not negatively impact performance. Though the in-viewer FPS meter does not show any increase, I can subjectively say that movements are smoother. Probably because of something similar to the "batching" that the OpenGL-to-Direct3D layer did.

What I can objectively vouch for is an increase in stability. My group has a dance party every Saturday, and before deploying Mesa3d+Zink, if I looked into the center of the dancing room, at the throng of dancers, I would almost always crash within minutes. First the textures go haywire (black / noise), textures blink in and out of existence, then the display freezes and my driver crashes (followed by the viewer crashing). In one dance party I could crash 4-5 times within the span of 2 hours.

After deploying Mesa3d+Zink, I no longer crash in the same setting. Even if I look towards the center of the room with all dancers in view. Sure, my measly GPU (AMD Radeon Vega 10 iGPU on a Ryzen 3700U) struggles to animate people beyond a PowerPoint slide show, but no crashes. Nada. Zilch.

This might also be due to what @Henri Beauchamp mentioned elsewhere ... that AMD's driver sucks and lies about available texture memory. Somehow Mesa3d+Zink "covers" that egregious lie and just makes things work.

So the tl;dr: Mesa3d+Zink goes way beyond just mapping calls; it simulates/emulates the OpenGL state machine in a way that improves stability and smooths performance.


36 minutes ago, primerib1 said:

After deploying Mesa3d+Zink, I no longer crash in the same setting.

Well, I (quickly) tried Mesa3d+Zink (v23.0.3) today, and with my NVIDIA card, it almost immediately crashes the viewer after login... It might be due to the core profile and multi-threaded shared GL workers, though...


5 hours ago, primerib1 said:

..., I will almost always crash within minutes. First the textures go haywire (black / noise), textures blinking in and out of existence, then the display freezes and my driver crashes (followed by the viewer crashing). In one dance party I can crash like 4-5 times within the span of 2 hours.
 

Something is badly wrong. As I wrote the last time this came up:

Quote

First, try a standard WebGL demo, "Aquarium".  This shows a nice 3D scene of fish swimming, and exercises the GPU. If there's any crashing, freezing, or glitching on that standard demo, you have a graphics system problem on your computer. Let that run for 5-10 minutes.

Second, try this tough GPU test: https://benchmark.unigine.com/valley

This is a benchmark and stress test built on the UNIGINE engine. It's 10 years old, so it needs resources comparable to what SL needs today. Download it and let it run for a while. It looks like a game, but it's just a tour of some nice forest scenery. There's no gameplay. It's intended to exercise the GPU. It shows the GPU temperature and frame rate. The GPU temperature should level off after a few minutes, and the frame rate should not drop. Any crashing, freezing, or glitching means a system graphics problem.

If you can run Valley, your hardware and software are in good shape. Then it's time to look for viewer bugs.

 


18 hours ago, Henri Beauchamp said:

Well, I (quickly) tried Mesa3d+Zink (v23.0.3) today, and with my NVIDIA card, it almost immediately crashes the viewer after login... Might be due to core profile and multi-threaded shared GL workers though...

Oof, I forgot, there needs to be some tweaking.

First, I used the installer from here: https://github.com/pal1000/mesa-dist-win

Next I used the "per-app-deploy" script and made sure to follow the instructions here:

(I use 45COMPAT btw.)

Here's the complete batch file I used to start Kokua:

@echo off
set MESA_GL_VERSION_OVERRIDE=4.5COMPAT
set MESA_GLSL_VERSION_OVERRIDE=450
set GALLIUM_DRIVER=zink
start /MAX KokuaViewer.exe --set InstallLanguage en


13 hours ago, animats said:

Something is badly wrong.

Yeah, very likely so.

Still, Mesa3d+Zink objectively stops the crashes.

I even tried "uninstalling" Mesa3d+Zink several times, and I was again crashing within minutes.

So I've decided to no longer try to troubleshoot things and just run Kokua with Mesa3d+Zink. I'm happy, I've been enjoying it, and that's what counts.


17 hours ago, animats said:

Something is badly wrong.

Sure. The AMD drivers.

AMD reworked the whole Windows OpenGL driver to work similarly to their Vulkan driver.

It IS much faster and multithreaded now, but the first few versions were horribly unstable. The current one (23.4.1) is a little better.

I sometimes see the RAM usage of the viewer shoot up from 7GB to around 12-14GB, with intense CPU usage, nearly all inside the AMD driver on multiple cores. The GPU memory usage looks stable, though, at least as far as Windows shows. This runs for a while, a few seconds to maybe 1-2 minutes, freezing the whole viewer, then crashing to desktop. And the crash dump stacktrace is basically completely inside the AMD driver each time (in the 30+ dumps I looked at).

It crashed with other OpenGL games too, reproducibly, with stable crash stacktraces, after driver updates.


5 hours ago, primerib1 said:

Oof, I forgot, there needs to be some tweaking.

First, I used the installer from here: https://github.com/pal1000/mesa-dist-win

This is what I did. I chose the MinGW build, since it is supposed to be faster... Perhaps the MSVC one is better/stabler?

5 hours ago, primerib1 said:

Next I use the ”per-app-deploy” script

Yep, what I did too.

5 hours ago, primerib1 said:

Here's the complete batch file I used to start Kokua

It would be simpler to set the environment variables for your account... No need for a wrapper script then.
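For example, on Windows something like this in a command prompt persists them for the user account (setx is a standard Windows command; values taken from primerib1's batch file, and they only apply to processes started afterwards):

    setx MESA_GL_VERSION_OVERRIDE 4.5COMPAT
    setx MESA_GLSL_VERSION_OVERRIDE 450
    setx GALLIUM_DRIVER zink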

5 hours ago, primerib1 said:

set MESA_GL_VERSION_OVERRIDE=4.5COMPAT

Hmm, it would force the Core GL profile off... Not exactly a good thing for speed (the viewer is 50% faster, at least, on NVIDIA hardware/drivers under core profile).

5 hours ago, primerib1 said:

set MESA_GLSL_VERSION_OVERRIDE=450

This might be what was missing for me (IIRC it reported a v3.2 GL version in the About floater when I tried).

5 hours ago, primerib1 said:

set GALLIUM_DRIVER=zink

IIRC, it did select Zink properly and automatically... I will try again later, time permitting, but I do not expect any improvement compared to NVIDIA's proprietary OpenGL drivers...

Still, it is a good thing that it works well for you on AMD.

 

EDIT: I just gave it another quick try with MESA_GL_VERSION_OVERRIDE=4.6, MESA_GLSL_VERSION_OVERRIDE=460 and GALLIUM_DRIVER=zink set as system-level environment variables. It did work without crashing this time, and I can choose between core profile or not from the viewer graphics settings (but I did not notice any significant speed difference). However, and as I anticipated, the Mesa+Zink drivers render at half the speed of the proprietary NVIDIA OpenGL drivers... A no-go for NVIDIA.


Some comments on performance. Mostly about my Sharpview viewer, but it gives some insight into what happens as you enter a region.

[Performance graphs: London City, cold cache.]

A bit of insight into what a region load looks like. Draw distance is the entire region.

  • Frame time - average frame time for the last second. Scale is 0..100 milliseconds.
  • Worst frame time - longest frame time for the last second. Scale is 0..100 milliseconds.
  • Incoming msg queue - UDP messages not processed yet. Scale is 0..10. That queue is not supposed to get very long.
  • Asset fetch queue - queued requests to the asset servers. Scale is 0..10000. That queue gets big.
  • Moving objects - number of objects in the region that are moving.
The graphs are 100 seconds wide.

So what is this telling us? First, for all graphs, smaller is better. I'm getting 60 FPS when the region is fully loaded, but average only 40 FPS while the region loads. Some frames are slower, and to the user this shows as jerkiness. This is due to a locking bug at the WGPU level, which you can read about in detail if you're interested in how concurrent interfaces to Vulkan work down at that level. A key feature of Vulkan is that you can load assets into the GPU while also rendering. So, if you do everything right, asset loading should not impact frame rate. I'm getting there, but am not there yet.

Second, you're seeing what happens when the viewer has enough concurrency that things are mostly not waiting for other, unrelated things. All those metrics are mostly independent. They're much more interdependent in the single-thread viewers, which is why it's hard to see what's holding things up there. With more concurrency comes clarity.

The asset queue shows how asset loading is progressing. Both meshes and textures are in the queue, with meshes generally having priority. Note how the graph goes up and then down. The sim server is sending out object updates at a metered rate, with a delay of about eight seconds at the start. It's not clear why it takes so long to get started, but it does. The server may want to get all the terrain updates out before starting on object updates.

Asset loading and decompression are all done in background threads. Textures are prioritized by how much screen space they fill (see the sketch below), so a big backlog of distant textures is not a problem. While it took about a minute to get the whole region loaded, everything the user really needed to see was in within the first 10 seconds. The loading queue is re-ordered as the camera moves around. That's why there's no waiting around for something to load while you're standing in front of it. This enormously improves the in-world shopping experience: you can always see the merchandise and the vendors.
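A minimal sketch of that kind of screen-space prioritization, in Rust (the names and the UUID/area types are illustrative, not Sharpview's actual code):

    use std::cmp::Ordering;
    use std::collections::BinaryHeap;

    // Illustrative fetch request: priority is the screen area the
    // texture currently covers, recomputed as the camera moves.
    struct FetchRequest {
        asset_id: u128, // stand-in for a texture UUID
        screen_area_px: u64,
    }

    impl PartialEq for FetchRequest {
        fn eq(&self, other: &Self) -> bool {
            self.screen_area_px == other.screen_area_px
        }
    }
    impl Eq for FetchRequest {}
    impl PartialOrd for FetchRequest {
        fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
            Some(self.cmp(other))
        }
    }
    impl Ord for FetchRequest {
        // Max-heap: the biggest on-screen texture is fetched first.
        fn cmp(&self, other: &Self) -> Ordering {
            self.screen_area_px.cmp(&other.screen_area_px)
        }
    }

    fn next_to_fetch(queue: &mut BinaryHeap<FetchRequest>) -> Option<FetchRequest> {
        queue.pop()
    }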

You can see the peaks in "Incoming msg queue". Those are all small peaks, no higher than 6 messages. We don't want that queue to build up; otherwise the user interface becomes unresponsive. A case can be made for having two queues, one for movement updates and one for everything else (see the sketch below), but so far that does not seem necessary. The servers are careful to throttle UDP messages, because the single-thread viewers will drop packets if they come in too fast, and on slow links the network will congest.
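If that two-queue design ever became necessary, the shape of it is simple; a hypothetical sketch with illustrative message types:

    use std::sync::mpsc;

    // Hypothetical message split: movement updates go on their own
    // channel so a burst of other traffic can't delay them.
    enum UdpMessage {
        ObjectUpdate(Vec<u8>), // avatar/object movement
        Other(Vec<u8>),        // everything else
    }

    fn route(msg: UdpMessage,
             movement_tx: &mpsc::Sender<UdpMessage>,
             other_tx: &mpsc::Sender<UdpMessage>) {
        match msg {
            m @ UdpMessage::ObjectUpdate(_) => movement_tx.send(m).unwrap(),
            m @ UdpMessage::Other(_) => other_tx.send(m).unwrap(),
        }
    }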

There are more optimizations to come.

[Performance graphs: content loading finished. Stable at 60 FPS.]

Same scene, a bit later. Asset loading is done, and all the graphs go flat at peak frame rate. Steady-state is easy; the GPU is doing all the work. All this is designed for a gamer PC comparable to what the average Steam user has. Sharpview is currently much worse on low-end hardware than the LL-based viewers. You can't even reduce the draw distance yet.

So what does all this tell us? It tells us where the servers are throttling to prevent overloading the viewers. They do a good job at that. UDP message overload is not a problem. The default throttling level seems to be set for rather low-end network connections. Some of those parameters were more appropriate twenty years ago, when a 20Mb/s DSL line was fast. It tells me where the bottlenecks are in my own system. Those FPS graphs need to be made flat.


2 hours ago, animats said:

So what does all this tell us? It tells us where the servers are throttling to prevent overloading the viewers. They do a good job at that. UDP message overload is not a problem. The default throttling level seems to be set for rather low-end network connections. Some of those parameters were more appropriate twenty years ago, when a 20Mb/s DSL line was fast. It tells me where the bottlenecks are in my own system. Those FPS graphs need to be made flat.

Did you also try using a non-default TCP congestion strategy? It doesn't affect UDP, of course, but for me it helps with HTTP texture streaming.

On Windows I need to change the congestion strategy to CTCP; residents using Linux have a gaggle of congestion strategies like CUBIC, Hamilton, Vegas, and Westwood. Maybe one of them is better suited for SL.

ETA: To be honest, I really think SL would benefit from implementing SCTP, even if in userspace on top of UDP ...
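For reference, switching the congestion strategy is an OS-level one-liner on both platforms. These are standard OS commands, run with admin/root rights; availability of a given algorithm depends on the OS version and loaded kernel modules:

    :: Windows 10/11 (admin command prompt), CTCP as suggested above:
    netsh int tcp set supplemental template=internet congestionprovider=ctcp

    # Linux (root), e.g. Westwood, if the tcp_westwood module is available:
    sysctl -w net.ipv4.tcp_congestion_control=westwood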


On 5/4/2023 at 2:19 AM, Henri Beauchamp said:

Mesa+Zink drivers render at half the speed of the proprietary NVIDIA OpenGL drivers... A no-go for NVIDIA.

Yeah. NVIDIA has had good OpenGL drivers for quite some time. AMD's are ... very meh.

Also, since I have an anemic iGPU (Radeon Vega 10), trading some CPU cycles to try to optimize OpenGL gave me a net benefit.


13 hours ago, primerib1 said:

Did you also try using non-default TCP congestion strategy? Doesn't affect UDP of course, but for me it helps with HTTP texture streaming.

Not much of an issue. I have 16 threads (number of CPUs x 1.5) fetching and decoding assets from the servers. Asset server bandwidth is consistently about 200Mb/s. The bottleneck is still JPEG 2000 decoding, even with 6 CPUs working on that.

SL content storage is centralized on AWS, front-ended by hundreds of Akamai cache servers all over the world. It's straight HTTP/HTTPS, just like web pages. The Akamai servers seem to accept that level of traffic. They have an "anti-DDOS" system with "AI", according to Akamai's documents. At some point they're going to throttle. Thresholds are customer-settable, so either the defaults are OK or LL set them high enough that this is not a problem. About once per thousand requests, the asset servers just fail to answer the HTTP request at all, and it has to be retried. The request works the second time around. It's unclear what's failing, since there's no status code.

The bottleneck in the system is probably Akamai to AWS, not user to Akamai. Distributed caching assumes many users are looking at the same content. The ideal case for Akamai is a huge number of users looking at the same pop-culture thing, such as the Olympics or football. But SL doesn't have such a concentrated load. For caching to help, there have to be several users in the same region served by the same Akamai cache server. So most of the time requests won't be served from cache.

So a viewer should have a large number of requests in flight. I normally have 12 requests in flight and 6 in decoding. I've tried having 48 in flight, and I can pull 400Mb/s from the servers. But it doesn't help the user experience much, so I'm holding at 200Mb/s for now.
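Capping in-flight requests is straightforward in an async design. A sketch in Rust using a tokio semaphore (the fetch_asset stub is a hypothetical stand-in for the actual HTTP download):

    use std::sync::Arc;
    use tokio::sync::Semaphore;

    // Hypothetical stand-in for the actual HTTP asset download.
    async fn fetch_asset(_url: &str) {}

    // At most 12 fetches run concurrently; the rest wait for a permit.
    async fn fetch_all(urls: Vec<String>) {
        let permits = Arc::new(Semaphore::new(12));
        let mut tasks = Vec::new();
        for url in urls {
            let permits = Arc::clone(&permits);
            tasks.push(tokio::spawn(async move {
                let _permit = permits.acquire().await.unwrap();
                fetch_asset(&url).await;
            }));
        }
        for t in tasks {
            let _ = t.await;
        }
    }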

I'm not sure what the Akamai throttling system will do once multiple regions are supported in Sharpview and you can go driving around mainland, pulling 200Mb/s from the asset servers for an hour. Akamai's system is intended to serve web pages, and browsers don't demand content at such a high sustained rate.

I'm looking at something else to reduce texture traffic substantially. This is to have files which contain all the texture UUIDs for a region and their average color. One UUID, one RGBA value. Load those when connecting to a region and use those as initial colors instead of grey. For small, distant objects, it may not be necessary to fetch the real texture at all, especially if the user is just passing through an area.
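A hypothetical sketch of what that per-region summary could look like viewer-side (types and names are illustrative only):

    use std::collections::HashMap;

    type TextureId = [u8; 16]; // stand-in for an asset UUID
    type Rgba = [u8; 4];

    // One UUID, one RGBA value, loaded when connecting to the region.
    struct RegionColorSummary {
        avg_color: HashMap<TextureId, Rgba>,
    }

    impl RegionColorSummary {
        // Color to draw with before (or instead of) the real texture.
        fn initial_color(&self, id: &TextureId) -> Rgba {
            *self.avg_color.get(id).unwrap_or(&[128, 128, 128, 255]) // grey fallback
        }
    }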

[Screenshot: monocolor mode. There are no textures. This would be applied only to small or distant objects, ones which take up only a few pixels of screen space.]

The viewer might keep these as cached data from previous visits, or those summaries might be collected by a mainland mapping bot and published on a server.

The goal of all this is solid immersion. The SL world should not flicker. Users should not notice loading. You don't see that in AAA games. It's just not acceptable any more.


  • Lindens
9 minutes ago, animats said:

Asset server bandwidth is consistently about 200Mb/s.

And Akamai's edge throughput runs around 250Tb/s for comparison.  Landing in a new region is like a very, very badly constructed web page full of javascript and duplicate images.  :)


52 minutes ago, Monty Linden said:

Landing in a new region is like a very, very badly constructed web page full of javascript and duplicate images.

Yes. Remember when web pages with a thousand thumbnail pictures were a thing? That's a region landing.

My concern was that Sharpview would hit the asset servers hard enough that some server throttling mechanism would trigger and cut the bandwidth way down or deny requests. That does not seem to be happening.

[Bandwidth graph: loading Calleta region. Much content, but little visitor traffic. Averaging around 150Mb/s.]

[Bandwidth graph: same region, Firestorm. Download bandwidth averages below 20Mb/s.]

So Sharpview is loading content at about 8x the bandwidth of Firestorm. This tells us that the primary bottleneck is viewer-side. The asset servers can handle much higher bandwidths.

[Bandwidth graph: Fairelands Junction region. A complex region currently seeing heavy use.]

I landed on the frozen ground, not at the official landing point. Bandwidth is higher, probably because that content is already in the asset server caches.

Loading prioritization, both via the interest list and the viewer's asset priority queue, is working well enough that after the first few seconds, the scene is stable for the user. This reflects a ground-level viewpoint. If we were flying over, we'd see some artifacts. That's where the monocolor mode would pay off. It would also cut bandwidth consumption. Right now, Sharpview always loads the entire JPEG 2000 file, even if it only needs part of it for a lower resolution. That needs to be optimized.
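The usual fix is an HTTP Range request: pull only the first part of the JPEG 2000 stream, which is enough to decode a low-resolution level. A sketch using the reqwest crate (the byte budget is illustrative, and this assumes the CDN honors Range headers, which is standard HTTP):

    use reqwest::blocking::Client;
    use reqwest::header::RANGE;

    // Fetch only the first `budget` bytes of a JPEG 2000 stream;
    // that prefix is enough to decode a low-resolution level.
    fn fetch_low_res(url: &str, budget: usize) -> Result<Vec<u8>, reqwest::Error> {
        let resp = Client::new()
            .get(url)
            .header(RANGE, format!("bytes=0-{}", budget - 1))
            .send()?;
        Ok(resp.bytes()?.to_vec())
    }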

Note that Sharpview isn't loading avatars yet. So this tells us nothing about avatar clothing delays and the "pink cloud" problem. That lies ahead.

Note that "Total Received 2.1 Tb." figure. If your "unlimited" data plan isn't really "unlimited', heavy use of high performance viewers may cause overage charges or throttling by your network provider.

Conclusion: asset loading delays are mostly a viewer-side problem which can be fixed.


1 hour ago, animats said:

Same region, Firestorm. Download bandwidth averages below 20Mb/s.

You should really give Henri's Cool VL Viewer a run for comparison. It is frequently able to saturate my 100Mb/s link and fetches textures and meshes with pretty similar bandwidth/throughput to Sharpview, despite the legacy architecture (just with a threadpool running texture decoding and a threaded FS cache to put the downloaded data on disk).

I am pretty sure using modern io_uring/IOCP would be even better than simple threaded filesystem access, but for a fairly low bandwidth/access frequency (compared to theoretical NVMe SSD IOPS) it doesn't matter.


39 minutes ago, Kathrine Jansma said:

You should really give Henri's Cool VL Viewer a run for comparison. It is frequently able to saturate my 100Mb/s link and fetches textures and meshes with pretty similar bandwidth/throughput to Sharpview, despite the legacy architecture (just with a threadpool running texture decoding and a threaded FS cache to put the downloaded data on disk).

I am pretty sure using modern io_uring/IOCP would be even better than simple threaded filesystem access, but for a fairly low bandwidth/access frequency (compared to theoretical NVMe SSD IOPS) it doesn't matter.

And if you use a RAM-disk for the cache, with a 1 Gbps FTTH link like I do, you often climb to 300-400 Mbps while rezzing in mesh- and texture-heavy sims you never visited (i.e. not yet cached)...


15 hours ago, Henri Beauchamp said:

And if you use a RAM-disk for the cache, with a 1 Gbps FTTH link like I do, you often climb to 300-400 Mbps while rezzing in mesh- and texture-heavy sims you never visited (i.e. not yet cached)...

Ooh that's a great idea, using a RAM-disk for cache!

19 hours ago, animats said:

Not much of an issue. I have 16 threads (number of CPUs x 1.5) fetching and decoding assets from the servers. Asset server bandwidth is consistently about 200Mb/s. The bottleneck is still JPEG 2000 decoding, even with 6 CPUs working on that.

SL content storage is centralized on AWS, front-ended by hundreds of Akamai cache servers all over the world. It's straight HTTP/HTTPS, just like web pages. The Akamai servers seem to accept that level of traffic. They have an "anti-DDOS" system with "AI", according to Akamai's documents. At some point they're going to throttle. Thresholds are customer-settable, so either the defaults are OK or LL set them high enough that this is not a problem. About once per thousand requests, the asset servers just fail to answer the HTTP request at all, and it has to be retried. The request works the second time around. It's unclear what's failing, since there's no status code.

Without proper tuning of TCP, you might end up trapped in a "TCP retransmit avalanche": you see network usage maxed out, but the effective TCP stream is actually just a trickle, because most of the traffic is TCP retransmissions.

If you're sharing the internet connection with lots of devices, your traffic can be further suppressed by other traffic on the ISP side.

And the trigger for a "TCP retransmit avalanche" can be anywhere between your endpoint and the other endpoint (in this case, Akamai).

Hence why I suggested using a different TCP congestion avoidance method: the 'standard' method is so simplistic that it can easily turn you into a victim of a TCP retransmit avalanche.


45 minutes ago, Love Zhaoying said:

Is that sarcasm, or is it really much different from using a solid-state drive?

Not sarcasm at all, IMO...

With a proper RAM-disk (one for which you can save the contents on shutdown/reboot and restore them at boot, something which is trivial to obtain with a simple "SysV init" script under Linux), you benefit both from persistent caching of already/often visited sims and from the fastest possible file I/O your system can provide, and you also preserve your SSD's endurance by avoiding a gazillion writes to it!

Since I have 64GB of RAM, I have no issue allowing a RAM-disk of up to 16GB: I also use it to compile the viewer, for example (thus its "huge" size, while 4GB would be more than enough for a viewer cache RAM-disk), here again avoiding wearing out my SSDs too soon...
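Creating the RAM-disk itself is a one-liner under Linux (tmpfs is standard; the size and mount point here are just examples, and the save/restore-at-boot part still needs the script mentioned above):

    mount -t tmpfs -o size=4g tmpfs /mnt/viewer_cache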

