
SecondLife tech myths



3 hours ago, filz Camino said:

"Expert" opinions do not trump facts, the reverse is the case. 

Oh dear. You had to say that. "Facts" are open to interpretation and individual bias. They can be, and often are, cherry-picked.

"Expert opinions" are mostly derived from extensive personal experience.

I know which, in this day of marginal honesty, I prefer to accept.


4 minutes ago, elleevelyn said:

yes. This is the standard Linden release viewer

Although it turns out my later test with the Linden viewer is only 4 fps slower than the non-PBR Firestorm (see above).

I agree that 'good' is subjective, though! I normally run SL with the Graphics setting one notch above High and the frame rate limited to 30 fps (having shadows and higher frame rates doesn't justify the increased power consumption and fan noise for me). With those settings, my computer is barely ticking over in pretty much all situations in SL.

 


4 minutes ago, Aishagain said:

Oh dear.

So when an expert tells you that it is pouring with rain in a certain location, even though you are actually in the prescribed location and know it to be dry and sunny, you will trust the expert because "'Facts' are open to interpretation and individual bias. They can be, and often are, cherry-picked. 'Expert opinions' are mostly derived from extensive personal experience."

Good luck living successfully with that heuristic!

 

 


On the topic of "experts", I will trust an expert opinion over a lay opinion or even my own un-expert opinion.

But only until that expert's opinion flies in the face of my own direct experience. 

And especially when the expert fails to handle such a discrepancy in a rational and open-minded manner.

I guess it requires an ability in critical thinking to discern when that is happening, though. And if you can't think critically, I guess simply trusting the expert, even over your own direct experience, may well be the safer option.


6 hours ago, filz Camino said:

OK I've just downloaded that viewer and run the same test - I get 20 fps with what I think are the same settings as you.

https://gyazo.com/3ede58647e3e86fe7a2aeb95a9e27709

It is possible that my AMD 7800X3D is giving me an advantage in this situation. 

20 fps is not brilliant, but it is still perfectly playable.

Up in the sky, with nothing but my avatar to render, I get 292 fps. That's with Vsync turned off (my monitor only goes up to 144 Hz).

https://gyazo.com/5844d6354ccf1a257d4da3a2c0917281

It's nice that you were willing to try the official viewer.


8 minutes ago, Aishagain said:

@filz Camino Indeed your data convinces YOU, of course. But I am capable of critical thinking, and I'll manage to get through my life, one way or another, by relying on experts.

Fair enough. Rather than talk about hypotheticals, though, I'm curious to understand how you would respond to these two situations:

 

1/ You have a laptop with integrated graphics, and find that in a packed nightclub with 47 avatars nearby you get a solid 24 fps at 2560 x 1600 resolution, while in a less demanding situation you get 160 fps at the same resolution. However, an SL Viewer expert tells you that for SL, a computer "very much does need a discrete GPU". So do you:

(a) Sell and replace the laptop, because the expert has told you it is no good for SL

(b) Ignore the expert opinion, in the face of your own direct experience to the contrary

 

2/ You have a GPU that supports AMD's "Radeon Super Resolution", a form of driver-level upscaling that (unlike the in-application AMD FSR and Nvidia DLSS integrations) can be applied to any game without needing explicit support in the application code. You enable it for Firestorm and find that it works.

Regarding driver-level upscaling, however, the expert tells you that "SL doesn't use those fancy tricks".

How do you reconcile the expert's truth claim with your direct experience of it working? 

And how do you reconcile her expert claim with the contradicting claims of AMD experts, saying that it does work?

 https://www.amd.com/en/technologies/radeon-super-resolution

 

Because those are the actual discernments I've made in practice, which have attracted a critical comment from your good self.


34 minutes ago, Love Zhaoying said:

It's nice that you were willing to try the official viewer.

i like to walk my talk - i'm interested in the truth, so if someone posts facts that challenge my truth claim, i feel i've actually got to look at that and consider whether my claim has survived contact with reality. if it hasn't, my claim needs to be modified.

as it is, i agree elleevelyn's experience of 12 fps is hardly a "high frame rate and smooth user experience", but it turns out that elleevelyn's processor is a couple of generations old, and my 2023 processor manages a more reasonable 20 fps. my original post does mention 2023, and there have been some quite big leaps forward this year. also, i think setting all the graphics settings to the maximum possible is quite a demanding test - i can't really think of any game on my PC that would not grind to a halt if i did that, except for older titles written in an era of less powerful hardware.


10 hours ago, filz Camino said:

I always know I'm going to get out of date information when someone does that.

I can wait while you get the source code; I'm sure it won't take you long to get a firm grasp of what it actually does.

10 hours ago, filz Camino said:

you don't actually know because you haven't bothered to test it,

I have spent considerable time performance testing the viewer with actual tools.

10 hours ago, filz Camino said:

you're just relying on historical knowledge. Let's check the reality right this very minute - here's what my 2023 computer

Shame LL doesn't ship your computer.

 

 

 


8 hours ago, filz Camino said:

My guess would be that 1 core is handling the rendering loop in Firestorm and the other core is possibly being used by the graphics drivers, but whatever the case, I don't think it is reasonable to call this a single threaded application.

No.

Of course.

No reason at all to call that "bloated drivers".

But ok ... 


14 minutes ago, Coffee Pancake said:

I can wait while you get the source code; I'm sure it won't take you long to get a firm grasp of what it actually does.

I have spent considerable time performance testing the viewer with actual tools.

Shame LL doesn't ship your computer.

None of that changes the actual reality of 2023 hardware performance.

I've posted screenshots and experimentally obtained framerate numbers of Firestorm running on 2023 hardware that refute your performance claims.

For this to be a rational debate, you've actually got to engage with the evidence I have posted, and either provide a counter-argument or modify your own position.

If you fail to do that, then my arguments stand.

 



More important than expertise is wisdom btw. 

It takes wisdom to be aware of where the limits and edges of one's own knowledge lie, to know that one should proceed over that line carefully, and not to simply dismiss information that contradicts our pre-existing opinions just because it comes from someone we think must know less than we do.

When someone is talking about their own direct experience, we are already stepping over reasonable epistemological boundaries if we make claims of certainty about what they have actually experienced.

Unless we were there when it happened, or unless we are omniscient and actually have direct access to their experience. 

Are you omniscient, Coffee? Because you certainly weren't here when I did the tests!

 

 


7 hours ago, filz Camino said:

On the topic of "experts", I will trust an expert opinion over a lay opinion or even my own un-expert opinion.

But only until that expert's opinion flies in the face of my own direct experience. 

And especially when the expert fails to handle such a discrepancy in a rational and open-minded manner.

I guess it requires an ability in critical thinking to discern when that is happening, though. And if you can't think critically, I guess simply trusting the expert, even over your own direct experience, may well be the safer option.

 

1 hour ago, filz Camino said:

More important than expertise is wisdom btw. 

It takes wisdom to be aware of where the limits and edges of one's own knowledge lie, to know that one should proceed over that line carefully, and not to simply dismiss information that contradicts our pre-existing opinions just because it comes from someone we think must know less than we do.

When someone is talking about their own direct experience, we are already stepping over reasonable epistemological boundaries if we make claims of certainty about what they have actually experienced.

Unless we were there when it happened, or unless we are omniscient and actually have direct access to their experience. 

Are you omniscient, Coffee? Because you certainly weren't here when I did the tests!

 

 

So, which is it? Experts or Wisdom?

Did you switch to Wisdom being more important because an Expert disagreed with you?

I'm very confused at this point.

 


9 minutes ago, Love Zhaoying said:

So, which is it? Experts or Wisdom?

false dichotomy - the two aren't mutually exclusive. 

9 minutes ago, Love Zhaoying said:

Did you switch to Wisdom being more important because an Expert disagreed with you?

i raised the point when it became obvious the "expert" would rather double down with outdated, incorrect beliefs than learn something new. 

what sort of expert thinks it is better to save face than keep their knowledge updated to match the changing landscape? 

wisdom and humility support lifelong learning, which is a prerequisite for maintaining expertise - that is why i think it is more important.

11 minutes ago, Love Zhaoying said:

I'm very confused at this point.

i can tell.


1 hour ago, filz Camino said:
1 hour ago, Love Zhaoying said:

Can you please also test the other Third Party Viewers, now that you tested the Official viewer?

nope - you can do that if you want it done.

Ok then, I guess it's fair to say that your statements are based on conclusions from just the Firestorm and Official viewers. Between them, those should cover the majority of the user-base.

Are just 2023 gaming computers part of your conclusions, or do you think your assertions sometimes apply to 2022 (and possibly older) computers and/or non-gaming computers?

Some of us are "just fine" with non-gaming computers from 2022 and even older. Do our experiences count, or are they not part of your claims?

I got the general impression that for some reason, you were referencing "only 2023 gaming computers".

 


14 hours ago, filz Camino said:
18 hours ago, Coffee Pancake said:

SL is single threaded, it uses one core. If it manages to burp a second core into life, well .. that second core isn't doing half the work. 

And what about the graphics drivers?

Both the newer AMD and the NVIDIA graphics drivers are multi-threaded by now, even for OpenGL.

It is true that OpenGL has significant limits for multi-threaded use, and that the SL viewer's main render loop is still single-threaded (especially the shadow rendering, which hurts). So yes, the viewer dies a slow death of a thousand cuts when its main loop has to issue a gazillion draw calls due to badly optimized content. But at least extra cores help free up the main thread to do the rendering without getting context-switched away all the time.

BUT:

1. Texture decoding and OpenGL texture binding use extra threads, so if you need to decode textures, more cores help (in Firestorm, Cool VL Viewer and maybe others, and in the modern SL Viewer too - see the sketch after this list).

      See: https://github.com/secondlife/viewer/blob/a592292242e29d0379ee72572a434359e1e892d1/indra/llimage/llimageworker.cpp#L64 

2. Cache operations and filesystem I/O get pushed to the threadpool as well

3. A few other things may get pushed to threads too.

In a mostly stationary scene, the extra cores do not help much (besides the driver stuff). But when moving around or shuffling lots of textures in and out, they help quite a bit.
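
To make point 1 concrete, here is a minimal C++ sketch of the general pattern: the render thread posts decode jobs to a pool of worker threads so it can keep drawing while textures are decompressed on other cores. This is an illustration of the technique only, not the actual viewer code (the names here are invented); the real implementation is in the llimageworker.cpp file linked above.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Minimal worker pool: the render thread enqueues decode jobs and keeps
    // drawing; workers pick jobs up as cores become available.
    class DecodePool {
    public:
        explicit DecodePool(unsigned n) {
            for (unsigned i = 0; i < n; ++i)
                workers_.emplace_back([this] { run(); });
        }
        ~DecodePool() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
            for (auto& w : workers_) w.join();
        }
        void post(std::function<void()> job) {
            { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                    if (jobs_.empty()) return;  // done_ set and queue drained
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                job();  // e.g. decode JPEG2000 data to raw pixels
            }
        }
        std::vector<std::thread> workers_;
        std::queue<std::function<void()>> jobs_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    };

    int main() {
        DecodePool pool(std::thread::hardware_concurrency());
        pool.post([] { /* decode one texture here */ });
        // The render loop keeps drawing meanwhile; finished textures are
        // later bound to OpenGL on the main thread.
    }

The point of the pattern is simply that the expensive decode never runs on the render thread, which is why extra cores help most when lots of textures are streaming in.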

Some extra points:

  • AMD iGPUs aren't that bad. Sure, a modern $200+ GPU will run circles around them (while burning 3-10x the energy) and usually delivers maybe 100% more fps. But an iGPU is quite okay for most cases, unless you really want to visit that busy club or need ultra-high resolution or draw distance.
  • Notebook APUs tend to have fast soldered-on memory, and memory speed is pretty important for APUs.
  • The AMD (Windows) drivers improved massively in OpenGL performance over the last few years, especially if the GPU allows ReBAR/Smart Access Memory.

18 hours ago, animats said:

"Packet loss" in SL is strange. What the viewers report as "packet loss" includes packets dropped by the viewer because the viewer is falling behind. Also, "ping time" as reported by the viewer reflects time until a packet is processed along with the next display frame. This is why slow frame rates result in higher reported ping times.

If you slow the network down, it may reduce the packet loss rate in the viewer, because the viewer has fewer packets to process per frame.

Latency is a separate issue. Excessive delay will break some things. A round trip time of 1 second will break double region crossings with vehicles every time.

UDP packets are (mostly) retransmitted if lost, by SL's "reliable UDP" system. There are a limited number of retries. There are also "unreliable" UDP messages, mostly the ones that indicate something moved. You want the last one of those, not one from the past, so retransmitting where the avatar was ten frames ago is not useful.

It's recently been discovered that the event poller, which is a slow TCP poll, is much less reliable than it should be. Monty Linden is working on this. Henri Beauchamp and I have some workarounds.

If you dig deeply enough into this, some unexplained SL behavior turns out to be low level bugs.

I did not use any of the Second Life Viewer's metric displays to evaluate the performance of my network or the performance of the viewer when I deliberately delayed delivery of packets via my network. I used the same external tools I use when measuring performance of voice and video transports.

The Second Life Viewer's behavior as experienced by the user is all that I cared about. As data was delayed on the way to the viewer, I, the user, could 'feel' the delays in on-screen updates to the locations of objects and avatars. There are two interpolation mechanisms in the viewer that have some effect on this. You know about them, as you have described them in great detail in the past. I left them at their defaults.

It's interesting to note that, yes, some packets may arrive too late to be used and may be counted as "lost" by the viewer, and that some message types sent via UDP will be resent if missed, either through loss or excessive delay. Mostly I was trying to get a feel for how network characteristics affected the end-user experience in Second Life. I ran all the same tests on several other applications. Telnet and SSH really sucked to use when delay was introduced. Voice was okay with some delay, except for the time-shift issues that caused participants to overlap and collide when the delay was more than 120 ms. One-way video gives no hoots about delay. ALL of them sucked when loss was introduced along with the delay, because of the timeouts and additional delays from retransmissions. Most voice and video transports tested did not handle loss well. SRT introduced delays by buffering the video and requesting retransmission of missing data. It tolerated up to 20% loss, unless the available bandwidth introduced so much delay that the retransmitted data arrived too late to be played, limited by the allocated buffer size. SRT with elastic buffering just kept increasing the depth of the buffer until it hit a limit and crashed. HLS, however, just makes the user wait for the data to arrive, no matter how long that might take, as it is downloading files to play sequentially.

With the Second Life Viewer, which I personally have used for many hours a day since March 16, 2008, loss in flight ALWAYS sucks when that which is lost is required. You know the messages... The ones that result in retransmissions and timeouts, up to and including disconnects. However, if the round-trip time is reasonable and there is sufficient bandwidth to carry the retransmissions without causing additional loss, some retransmission activity can be tolerated. What seems to annoy me the most is the loss or delay of "coarse updates", which results in increasing inaccuracy of avatar positions, and the loss of "full updates", which results in objects, including avatars, sticking around in the viewer when they should be gone, and in some objects, and yes, sometimes entire avatars, not appearing when they should be visible.

I, however, wasn't adding the complexity of teleporting and region crossing. As you have described in excruciating detail, these activities are far more sensitive to network foibles and infidelities, resulting in much anguish over disconnections. I greatly appreciate the efforts being made to find the causes of these sensitivities and correct or mitigate them.


8 hours ago, filz Camino said:

as it is, i agree elleevelyn's experience of 12 fps is hardly a "high frame rate and smooth user experience", but it turns out that elleevelyn's processor is a couple of generations old, and my 2023 processor manages a more reasonable 20 fps.

i have a 12th gen processor. This year's available is 13th gen. The price difference for me is approx. NZ$300. Pay this amount to maybe get from 12 to 20 fps under high load? I am not sure this is worth the money, given that under light and medium load it motors along quite nicely.

my advice to people when buying computers for SL is to get the highest-rated graphics card they can afford within their budget - be that NVidia or AMD. Highest-rated meaning by manufacturer specification. For example, an NVidia xx90 is rated higher than an xx70.

way back, the best graphics card I could afford was a 230. it trundled along at about 6 fps, 12 on a good day, with everything graphics-wise dialed down, and I was happy to at least be able to log in and potter about. Then some years later (about 11 years ago) I could afford to put together a box with a 660. Then some years later I was able to upgrade the box to a 1050Ti. Recently I was able to afford a new box with a 4070Ti.

and my current box is capable of being upgraded (hopefully) to a 50xx or 60xx at some time in the future when I can afford it - same as I planned when I bought the 660 box. The price difference today for me between a 4070Ti and a 4090 is approx. NZ$2,400. A few months ago it was approx. NZ$1,000. Who knew the price would go as nuts as it has! But even then I don't have another $1,000, never mind $2,400, even though I really, really want a 4090.

so we can't just blithely say "buy a modern computer" and we're good to go. We do have to pay for it, and our budget is what it is. All the wanting in the world isn't going to change this.


2 hours ago, Ardy Lay said:

I did not use any of the Second Life Viewer's metric displays to evaluate the performance of my network or the performance of the viewer when I deliberately delayed delivery of packets via my network. I used the same external tools I use when measuring performance of voice and video transports.

The Second Life Viewer's behavior as experienced by the user is all that I cared about. As data was delayed on the way to the viewer, I, the user, could 'feel' the delays in on-screen updates to the locations of objects and avatars. There are two interpolation mechanisms in the viewer that have some effect on this. You know about them, as you have described them in great detail in the past. I left them at their defaults.

It's interesting to note that, yes, some packets may arrive too late to be used and may be counted as "lost" by the viewer, and that some message types sent via UDP will be resent if missed, either through loss or excessive delay. Mostly I was trying to get a feel for how network characteristics affected the end-user experience in Second Life. I ran all the same tests on several other applications. Telnet and SSH really sucked to use when delay was introduced. Voice was okay with some delay, except for the time-shift issues that caused participants to overlap and collide when the delay was more than 120 ms. One-way video gives no hoots about delay. ALL of them sucked when loss was introduced along with the delay, because of the timeouts and additional delays from retransmissions. Most voice and video transports tested did not handle loss well. SRT introduced delays by buffering the video and requesting retransmission of missing data. It tolerated up to 20% loss, unless the available bandwidth introduced so much delay that the retransmitted data arrived too late to be played, limited by the allocated buffer size. SRT with elastic buffering just kept increasing the depth of the buffer until it hit a limit and crashed. HLS, however, just makes the user wait for the data to arrive, no matter how long that might take, as it is downloading files to play sequentially.

With the Second Life Viewer, which I personally have used for many hours a day since March 16, 2008, loss in flight ALWAYS sucks when that which is lost is required. You know the messages... The ones that result in retransmissions and timeouts, up to and including disconnects. However, if the round-trip time is reasonable and there is sufficient bandwidth to carry the retransmissions without causing additional loss, some retransmission activity can be tolerated. What seems to annoy me the most is the loss or delay of "coarse updates", which results in increasing inaccuracy of avatar positions, and the loss of "full updates", which results in objects, including avatars, sticking around in the viewer when they should be gone, and in some objects, and yes, sometimes entire avatars, not appearing when they should be visible.

I, however, wasn't adding the complexity of teleporting and region crossing. As you have described in excruciating detail, these activities are far more sensitive to network foibles and infidelities, resulting in much anguish over disconnections. I greatly appreciate the efforts being made to find the causes of these sensitivities and correct or mitigate them.

That's about what I see. I've also tested introducing delay in the past. See my 5-year-old video. Back then, LL people were complaining that region cross failures could not be reproduced, and so they could not fix the problem. So I came up with a way to generate a double region crossing failure every time, by adding a full second of network latency using Linux tools.
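
(For anyone wanting to reproduce that kind of test: the usual tool for this on Linux is the netem queueing discipline. That's an assumption on my part - the exact tool isn't named above - but something along these lines adds a flat one-second delay to all outgoing packets on an interface, and the second command removes it again:

    sudo tc qdisc add dev eth0 root netem delay 1000ms
    sudo tc qdisc del dev eth0 root

Substitute your actual network interface for eth0.)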

The retransmission time calculation in the "reliable UDP" system may need some work. In Sharpview, I start retransmits at 2*RTT and back off from there. I think the LL code has a retransmission time constant, set decades ago, and it may be too large. Something to be looked at after event poller reliability gets fixed. Monty Linden has been looking at the event poller, and finding ancient low-level code that needs work.
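
As a rough sketch of what "start retransmits at 2*RTT and back off" can look like (illustrative C++ only, not Sharpview's actual code; the function name and the 8-second ceiling are invented for the example):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>

    using ms = std::chrono::milliseconds;

    // Resend schedule: first retransmit after 2*RTT, then double the wait
    // on each further attempt, clamped to a ceiling.
    ms retransmit_timeout(ms rtt, int attempt) {
        ms timeout = 2 * rtt;
        for (int i = 0; i < attempt; ++i)
            timeout *= 2;                    // exponential backoff
        return std::min(timeout, ms(8000));  // invented ceiling
    }

    int main() {
        const ms rtt(120);  // a measured round-trip time of 120 ms
        for (int attempt = 0; attempt < 4; ++attempt)
            std::printf("attempt %d: wait %lld ms\n", attempt,
                        static_cast<long long>(retransmit_timeout(rtt, attempt).count()));
    }

With a 120 ms RTT this waits 240, 480, 960, then 1920 ms - the schedule tracks the measured network rather than relying on a fixed constant chosen long ago.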

Some of this is a viewer architecture issue. The viewer tries to cope with message overload that would otherwise reduce the frame rate too much. Some of that is done by dropping some messages and letting the retransmit system send them again. That's why, when the frame rate goes down, "packet loss" goes up. That's the viewer, not the network. In a mostly single-threaded viewer, the question is always how much time you can steal from the rendering thread. That has to be limited. Remember, when SL was designed, nobody had multiple CPUs on consumer desktops. Today, everybody has lots of them, even on mobile.

In Sharpview, incoming messages are queued and handled in a separate thread. Received packets are never dropped. One statistics graph I display shows the queue length. It's usually 0 or, occasionally, 1, indicating that updates to the world are caught up. If that queue length starts to climb, something is wrong. Same problem, different approach - one that scales to machines with multiple CPUs, which is the point.
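
A minimal sketch of that queue-and-worker pattern (again illustrative C++ rather than Sharpview's actual code; the message type and names are invented): a receive thread pushes every update onto a queue, a worker thread drains it, and the queue depth itself is the health statistic.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    // Message pump: the network side enqueues every received update,
    // a worker applies them; queue depth shows whether we're keeping up.
    struct MessageQueue {
        void push(std::string msg) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
            cv_.notify_one();
        }
        std::string pop() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            std::string msg = std::move(q_.front());
            q_.pop();
            return msg;
        }
        std::size_t depth() {
            std::lock_guard<std::mutex> lk(m_);
            return q_.size();  // ~0 means world updates are caught up
        }
    private:
        std::queue<std::string> q_;
        std::mutex m_;
        std::condition_variable cv_;
    };

    int main() {
        MessageQueue q;
        std::thread worker([&q] {
            for (;;) {
                std::string msg = q.pop();
                if (msg == "quit") break;
                // apply the object/avatar update here; nothing is dropped
            }
        });
        q.push("ObjectUpdate");
        std::printf("queue depth: %zu\n", q.depth());
        q.push("quit");
        worker.join();
    }

If that depth statistic starts climbing instead of hovering near zero, the consumer is falling behind - the same signal the viewer's "packet loss" figure gives, but without discarding data.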

There's a standards group working on metaverse standards. Linden Lab is not a member. But everyone from Autodesk to W3C is. Their big push for 2023 was on glTF and USD standardization for metaverse content. There's a networking subgroup, and I've been sending in some comments.


7 hours ago, Love Zhaoying said:

Ok then, I guess it's fair to say that your statements are based on conclusions from just the Firestorm and Official viewers. Between them, those should cover the majority of the user-base.

Are just 2023 gaming computers part of your conclusions, or do you think your assertions sometimes apply to 2022 (and possibly older) computers and/or non-gaming computers?

Some of us are "just fine" with non-gaming computers from 2022 and even older. Do our experiences count, or are they not part of your claims?

I got the general impression that for some reason, you were referencing "only 2023 gaming computers".

 

I'm only talking about 2023 computers because that is all I have available on my desk to actually test.

If you'd like to include older computers in the discussion, why don't you run some of the tests I've done on your own computer and let us know how it compares?

