Would Second Life Benefit from an Engine Rebuild?



@Oz Linden At the risk of annoying loads of people, maybe it would go better (for the client at least) if some of the best coders and developers were co-opted from the leading TPV teams.  The downside here might well be that they simply won't work within the Linden Lab system (or is it the other way around...I don't know).

Back in the day I didn't have a clue what was going on (and knew it) and I delude myself now occasionally by pretending that I do, so I am emboldened to make really daft suggestions!


1 hour ago, Oz Linden said:

because that went over so well last time....

That was mostly just a UI rebuild by people who didn't use SL. The concept of a sidebar was great; the implementation forced everything to use it and bottlenecked everything everyone did.

This time round, let's start with a new asset fetch-decode-VRAM-render pipeline, something that might make use of all these cores we have lying about collecting dust, a bit of Vulkan with a little RTX on the side so our fancy-pants GPUs have something to do.

Modern PCs are wasted on SL.

 

  • Like 2

1 hour ago, Coffee Pancake said:

That was mostly just a UI rebuild by people who didn't use SL. The concept of a sidebar was great; the implementation forced everything to use it and bottlenecked everything everyone did.

This time round, let's start with a new asset fetch-decode-VRAM-render pipeline, something that might make use of all these cores we have lying about collecting dust, a bit of Vulkan with a little RTX on the side so our fancy-pants GPUs have something to do.

Modern PCs are wasted on SL.

 

The implementation wasn't as flawed as everyone makes it out to be. It had only one main issue: not being able to open multiple panels at the same time, which was fixed later (in 2.1 already) by letting you detach panels and use them like you did before. For some basic things I found the sidebar to be very fast. Kinda sad it's gone now.

Other than that, the UI was a massive improvement. I'm actually far more annoyed that over the years the usual "LL" style has been shoved back into the V2 UI... now we have huge windows, huge spaces between each option and zero screen real estate. Kinda sad that the third-party devs who did the V2 UI with no knowledge of what SL is actually about did a better job at making a somewhat consistent and clean (though boring) looking UI.

Edited by NiranV Dean
  • Like 2

1 hour ago, Coffee Pancake said:

That was mostly just a UI rebuild by people who didn't use SL. The concept of a sidebar was great; the implementation forced everything to use it and bottlenecked everything everyone did.

This time round, let's start with a new asset fetch-decode-VRAM-render pipeline, something that might make use of all these cores we have lying about collecting dust, a bit of Vulkan with a little RTX on the side so our fancy-pants GPUs have something to do.

Modern PCs are wasted on SL.

 

I don't know what made me search for "Vulkan" in the forums, but I found this active thread, and I think you're right on the money with this. Vulkan can multi-thread rendering a lot better than regular OpenGL. I kind of went down a rabbit hole, but I found this regarding performance: http://wiki.secondlife.com/wiki/Culling. The edit history is quite old, so I don't know how much is still relevant.

Some of it may be outdated, but the Performance section identifies draw calls as a bottleneck, and the wiki considers adding more triangles per draw call. OpenGL's draw-call throughput is much lower than Vulkan's; I've seen benchmarks with Vulkan being 2 to almost 4 times faster on draw-call-heavy workloads, with reduced CPU usage.

I wonder what the draw call bottleneck looks like for SL. It's dynamically loading content from the web, so mesh is slowly trickling in, but I don't know if it would make a difference. I don't know enough about this stuff, but I'd think it has a lot of potential to increase performance. It'd be good to get the SL viewers more multi-thread friendly as a long-term goal, especially since phones usually have eight fairly weak CPU cores to work with.
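To make the multithreaded-submission idea concrete, here is a minimal, hypothetical C++ sketch of the pattern Vulkan enables: several worker threads each record their own list of draw commands for a slice of the scene, and the main thread submits the lists in order. This is not viewer code; DrawCommand and recordSlice are made-up names purely to show the structure. In real Vulkan the per-thread lists would be secondary command buffers executed from the primary one.

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for a recorded draw call (not a Vulkan type).
struct DrawCommand { int meshId; };

// Each worker records commands for its slice of the scene into its own list,
// so no locking is needed while recording.
static void recordSlice(int begin, int end, std::vector<DrawCommand>& out) {
    for (int meshId = begin; meshId < end; ++meshId)
        out.push_back(DrawCommand{meshId});
}

int main() {
    const int numMeshes = 10000;
    const unsigned numThreads = std::max(2u, std::thread::hardware_concurrency());
    std::vector<std::vector<DrawCommand>> perThread(numThreads);
    std::vector<std::thread> workers;

    // Split the scene across threads; each thread records independently.
    const int chunk = (numMeshes + static_cast<int>(numThreads) - 1) / static_cast<int>(numThreads);
    for (unsigned t = 0; t < numThreads; ++t) {
        const int begin = static_cast<int>(t) * chunk;
        const int end = std::min(numMeshes, begin + chunk);
        workers.emplace_back(recordSlice, begin, end, std::ref(perThread[t]));
    }
    for (auto& w : workers) w.join();

    // Main thread "submits" the per-thread lists in order
    // (in Vulkan this would be vkCmdExecuteCommands on the primary buffer).
    size_t total = 0;
    for (const auto& list : perThread) total += list.size();
    std::printf("recorded %zu draw commands across %u threads\n", total, numThreads);
}
```

The point is that recording, which is the CPU-heavy part of issuing draws, scales across cores; only the final submission stays on one thread.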


23 hours ago, Flea Yatsenko said:

I wonder what the draw call bottleneck looks like for SL. It's dynamically loading content from the web, so mesh is slowly trickling in, but I don't know if it would make a difference...........

SL's bottlenecks are many and complex, although the biggest is that everything is tied to a single thread.

One core does the decoding, and that same core tells the GPU what to do (and everything else). It's very easy to fully utilize a single core just telling the GPU what to render, and equally easy to max it out decoding some textures, pulling files from the cache, or moving data to VRAM.

There is just too much for a single thread to do. 

Trying to add more threads to the existing pipeline is a waste of time, as all the data still has to come back together at one point before the next step in the process can be completed (if anything, this tends to make the end result slower).

Vulkan and SL are almost a perfect mix on paper, but it's going to take a rebuild with that in mind from the ground up. This is not a trivial task and beyond what a few hobbyist developers making third party viewers can be expected to accomplish.
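The limit being described here is essentially Amdahl's law: if only part of the frame can run in parallel, the serial remainder caps the speedup no matter how many cores you add. A tiny illustrative C++ calculation follows; the 30% parallel fraction is invented for the example, not a measurement of the viewer.

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallelisable fraction and n is the core count.
static double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    // Assume, purely for illustration, that 30% of the frame work can be threaded.
    const double p = 0.30;
    for (int cores : {2, 4, 8, 16})
        std::printf("%2d cores -> %.2fx speedup\n", cores, amdahl(p, cores));
    // Even with 16 cores the result stays under 1.4x, which is why bolting
    // threads onto a mostly serial pipeline barely moves the needle.
}
```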

  • Like 1

 

On 1/4/2021 at 2:11 PM, Oz Linden said:

because that went over so well last time....

I think the biggest problems with Sansar were the UX design, the fact that it was VR for some reason, and, according to someone on Glassdoor, an "every man for himself", "everyone working on different stuff, uncoordinated" culture. I got the idea that it was kind of a mess.

IMO there are a few changes that could be made to majorly improve the metaverse we already have, though. I listed them on page 2, 7th post down:

 

On 12/7/2020 at 1:38 PM, Nikkesa said:

More complicated scripted animation controls, and a proper 2D user interface API for people to make decent HUDs. I feel like the average person starts up Second Life and is immediately turned off by the fact that the avatar has the same default animations as it did in 2004, and moves around feeling like it did back then too. If they just made it fun to play the game - made it satisfying to do something as simple as running around without glitching out and crashing during a sim crossing, or with even just a minute, barely palpable amount of lag during one... If you started them off in a smooth, optimized, cool-looking city with a decent vehicle and showed them the possibilities, way more people would get into this game.

  and here I went into more detail 

Not to focus entirely on animations, since that's just what the original post I was replying to was about; my point is mostly about the "new user experience", which should be less about "here's some cool stuff you can be" and more about "here's some new stuff you can do".

The game needs a "how it _feels_ to play" uplift more than anything else. Starting up the client and then being able to run around like you're playing Mirror's Edge or Breath of the Wild or Super Mario 64 or NieR:Automata or some wicked hoverbike racing game.

https://hopefulhomies.com/2017/02/18/movement-mechanics/ <- some more research on the topic

 

I understand you guys probably already know this stuff and there are probably tons of leaps and hurdles you'd have to get through to make this kind of thing a reality, but idk man, I'm really hoping that we get to see the metaverse fully realized at some point. I feel like if Linden Lab doesn't do it themselves, someone's going to realize how much potential there is on a platform like Stadia and just do it there... or some rich VRChat player will invest and it'll get a major tech overhaul that puts it in the same league, or something.


4 hours ago, Nikkesa said:

I'm really hoping that we get to see the metaverse fully realized at some point.

Me too. The technology is here to build a metaverse. SL comes closest. It's just too sluggish.

Technically, we know how to solve every problem except large numbers of avatars in a small area.


  • 2 weeks later...
On 1/11/2021 at 5:14 PM, animats said:

Me too. The technology is here to build a metaverse. SL comes closest. It's just too sluggish.

Technically, we know how to solve every problem except large numbers of avatars in a small area.

The solution to that is simply to lower resolution locally, i.e. do what they already do. They can dynamically allocate further server resources now that they're running on AWS, so it definitely shouldn't be a problem server-side.


3 minutes ago, Nikkesa said:

The solution to that is simply to lower resolution locally, i.e. do what they already do. They can dynamically allocate further server resources now that they're running on AWS, so it definitely shouldn't be a problem server-side.

Server-side is mostly single-threaded. It doesn't inherently have to be, but it's 20-year-old technology.

There are ways to architect around this. See SpatialOS, from Improbable. Doing it cost-effectively is tough, though. All three free-to-play big-world games on SpatialOS shut down because they were too expensive to run.

A more technical view:

You can sort of see from the performance problems what's wrong server-side. Most CPU consumption is script time, and much of that is just the ~3 µs per frame wasted on each script that's doing nothing. With 5,000 idle scripts in a sim, that's on the order of 15 ms of a roughly 22 ms frame gone, and you're out of script time.

Scripts could potentially be run in parallel on multiple CPUs. LSL's API has the interesting property that most calls either get data or put data, but don't do both, and the ones that do usually involve a delay. Someone was thinking ahead when they designed that. That allows more concurrency. The way this would be done is that all scripts read the state at the beginning of the frame. All the world-state changes a script makes would be accumulated in a queue and applied all at once, after all scripts for the frame have run. If a script does a set followed by a synchronous get, it has to lose a turn and wait one frame. That eliminates most locking problems.
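A minimal C++ sketch of that scheme, assuming an invented WorldState map and two pretend scripts (none of this is simulator code): scripts run in parallel against a read-only snapshot taken at the start of the frame, queue their writes, and the queues are applied serially once every script has finished.

```cpp
#include <cstdio>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// Hypothetical world state: key/value pairs standing in for object properties.
using WorldState = std::unordered_map<std::string, double>;
struct Write { std::string key; double value; };

int main() {
    WorldState world = {{"door.open", 0.0}, {"lamp.bright", 0.2}};

    // Pretend scripts: each reads the snapshot and queues one change.
    auto scriptA = [](const WorldState& s, std::vector<Write>& q) {
        if (s.at("door.open") < 0.5) q.push_back({"door.open", 1.0});
    };
    auto scriptB = [](const WorldState& s, std::vector<Write>& q) {
        q.push_back({"lamp.bright", s.at("lamp.bright") + 0.1});
    };

    // --- one frame ---
    const WorldState snapshot = world;      // read phase: immutable copy
    std::vector<Write> queueA, queueB;      // one queue per script, no locks needed
    std::thread tA(scriptA, std::cref(snapshot), std::ref(queueA));
    std::thread tB(scriptB, std::cref(snapshot), std::ref(queueB));
    tA.join(); tB.join();

    // Commit phase: apply all queued writes in one place, in a fixed order.
    for (const auto* q : {&queueA, &queueB})
        for (const auto& w : *q) world[w.key] = w.value;

    std::printf("door.open=%.1f lamp.bright=%.1f\n",
                world["door.open"], world["lamp.bright"]);
}
```

The fixed apply order is what makes the result deterministic even though the scripts themselves ran concurrently.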

The other big bottleneck is that when an avatar enters a sim, there's a huge transient load that lasts seconds. Go to a big shopping event and open the performance window and the nearby users window. Watch the performance hit as new avatars enter. This is what causes super-slow walking in busy sims. Why this is the case I don't know.


  • 5 months later...
On 12/5/2020 at 3:10 PM, Aishagain said:

I see suggestions that we need an SL2.0.  Really?  LL tried that already, it was called Sansar and we all know how well THAT went!

Don't get me wrong, SL WOULD be better if it were rebuilt around a new, better base programme. The simple fact is that no one can or will spend the resources to make asset transfer between the "old" and the "new" SL possible, and as a result it's a non-starter.

They were GOING to build 2.0, but two things happened. The SL version 0.00000 purists were screaming about losing their prim flexi dresses and that greenie's cat that cost 11,534 prims, or their freebie-dump wardrobe that didn't even look good in the dark; plus there was the cast of Rosedale haters working at Linden Lab who were still mad because, while we wanted the Life 2.0 that Philip promised, they wanted a monetized Web 3.0. "We" lost that fight, and the Web Threepers dreamt up something they didn't want us to want (because they think all we do is throw dildos at each other), but that they KNEW the world would love and flock to en masse... a place where you could build your own static 3D museum that would be soooo popular, because, like... there were so many other virtual museums that all folded within six months of sign-on.

Yeah.  No one called that SL 2.0 the day after they made an announcement essentially laughing at that prospect, and said they would offer us "experiences" instead.

But yes, a REAL 2.0 would be lovely.  And, I'm not going to scream about my 2007 wardrobe ... because I deleted it all the moment mesh came along.  My wish if it happens ... Let us keep the creative control (let us build this one like we did the current one), and let us make the avatars we want.  We love the world we have now ... but we know it can be better ... not different ... not something else ... but this one, better.

  • Haha 1

The OP wants an engine rebuild and then goes on about scripts like that's going to make some big difference. That's not the rendering engine, that's the simulator.

Second Life's rendering engine is mostly built around a single-threaded model, which was fine back in the day when multithreading wasn't such a big thing and chips gradually just got faster and faster.

Today, chip cores aren't really getting faster; we're just getting more of them. In fact, Second Life sometimes gets slower on newer chips, because these new 'faster' chips actually trade per-core speed for more cores.

The biggest performance gain to be made would be improving the rendering engine to take advantage of the extra cores, a.k.a. multithreading. It's certainly possible. Other third-party viewers have done it to an extent, and there are noticeable performance gains when using them.

Edited by Extrude Ragu
correction
  • Like 1

39 minutes ago, Extrude Ragu said:

Other third-party viewers have done it to an extent, and there are noticeable performance gains when using them.

Our own performance testing suggests claimed multithreading improvements are extremely situational at best and straight-up confirmation bias most of the time. Touted multithreading changes did not uniformly improve performance when tested in isolation and actively reduced performance in some (very typical) use cases. Observed end-user differences can easily be attributed to mismatched settings, JPEG2000 decoder performance (KDU vs. specific OpenJPEG versions), variation in test conditions in SL, or a lack of consistency or sample size in testing.

Actual benchmarking in SL is difficult and time-consuming. Most users tend to read off the average FPS and assume a higher number is better, when really, due to the massive spread of frame times, perceptual performance is what actually matters. As an example, the viewer might report an average of 14 fps while the 1% lows are actually 8 fps and the 0.1% lows are 7 fps; SL feels far closer to the lower numbers.
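For anyone wanting to compute those numbers themselves, here is a small hypothetical C++ sketch of how 1% and 0.1% lows are typically derived from a frame-time log (the usual GamersNexus-style method: average the worst 1% or 0.1% of frame times and convert back to FPS). The sample frame times are made up.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

// Average of the worst `fraction` of frame times (in ms), reported as FPS.
static double percentileLowFps(std::vector<double> frameTimesMs, double fraction) {
    std::sort(frameTimesMs.begin(), frameTimesMs.end(), std::greater<double>());
    const size_t count =
        std::max<size_t>(1, static_cast<size_t>(frameTimesMs.size() * fraction));
    const double worstAvgMs =
        std::accumulate(frameTimesMs.begin(), frameTimesMs.begin() + count, 0.0) / count;
    return 1000.0 / worstAvgMs;
}

int main() {
    // Invented log: mostly ~70 ms frames, a few 130 ms hitches, one 200 ms stall.
    std::vector<double> frameTimesMs;
    for (int i = 0; i < 1000; ++i)
        frameTimesMs.push_back(i == 500 ? 200.0 : (i % 100 == 0 ? 130.0 : 70.0));

    const double avgMs =
        std::accumulate(frameTimesMs.begin(), frameTimesMs.end(), 0.0) / frameTimesMs.size();
    std::printf("average : %.1f fps\n", 1000.0 / avgMs);                     // ~14 fps
    std::printf("1%% low  : %.1f fps\n", percentileLowFps(frameTimesMs, 0.01));  // ~7 fps
    std::printf("0.1%% low: %.1f fps\n", percentileLowFps(frameTimesMs, 0.001)); // ~5 fps
}
```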

I want the days spent running those A/B performance tests back.

Hoping to get some better stats into the viewer; in the meantime, using FRAPS + FRAFS is a nice independent way for end users to get some numbers (see GamersNexus).

  • Thanks 2

Just slapping multithreading on an old engine isn't going to work. You need to make major changes instead of just making some parts of it multithreaded; otherwise you just keep profiling and chasing the next single-threaded bottleneck. Multithreading has a lot of potential to make SL faster, but it's not like you can make parts of the existing SL multithreaded and have it be some cure-all magic performance boost.

Think of it this way: the SL client is a huge amount of single-threaded code. So you look at the slowest parts you can find and multithread them. But everything else is still single-threaded, so maybe you speed up a few specific things while everything you didn't touch stays the same. Meanwhile, you've rewritten a bunch of code that worked fine and risked adding a ton of bugs with new code that's much more complex. That leads to situations like "performance increased up to 15%" while people are mad that things that worked fine got broken.

But SL's greatest strength is also its biggest weakness. There is near-endless content in SL for anyone, more than in any other virtual world by far. But some of that content has been around for a very long time, and deprecating it can cause problems. Realistically, though, people are upset over losing something they probably spent a dollar or two of real-world money on. If there is demand for a deprecated product, someone will fill the supply void to meet the new demand.

If LL were to do something with the client, they'd need to do a full engine rebuild from the ground up. I've always been a fan of splitting the client into a simplified, high-performance "regular user" client that doesn't allow building but still lets you customize your avatar, and a "content creator" client, which is basically exactly what SL is now. When I've mentioned it before, people got quite bothered by it, and I don't know why: it's a chance to build a new client that's much easier on new users and runs a lot better, with 100% of everything new users need to stay in SL. And when you want to build, you just fire up the content-creator client and have all the tools and features you always had. I'd wager the vast majority of people in SL only need an interface to teleport around and change their avatar.

  • Like 2
  • Haha 1

The basic loop the client runs makes multithreading the existing client a waste of time. There are some minor gains under certain conditions, or quality-of-life stuff, but nothing that's going to move the performance needle in any meaningful way for everyone.

It's like building a 6 lane super highway with only a single lane on/off ramp at either end.

A simple Vulkan port of the existing code would not change anything (aside from maintaining Apple support and enabling some new visual shiny).

The entire fetch -> decode -> render pipeline needs to be redesigned to be multithreaded from scratch, which, as much work as that sounds, will probably be simpler in the end than trying to mash Vulkan onto the existing codebase.
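Purely as an illustration of that structure (not viewer code, with invented stage names), a redesigned pipeline would look less like one big loop and more like a chain of queues, where fetch, decode, and GPU upload each run on their own worker threads and only hand completed work forward:

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

// Minimal thread-safe queue used to hand work between pipeline stages.
template <typename T>
class WorkQueue {
public:
    void push(T item) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(item)); }
        cv_.notify_one();
    }
    // Returns std::nullopt once the queue is closed and drained.
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
    void close() {
        { std::lock_guard<std::mutex> lock(m_); closed_ = true; }
        cv_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
    bool closed_ = false;
};

int main() {
    WorkQueue<int> fetched, decoded;   // asset ids standing in for real payloads

    // Stage 1: "fetch" assets (e.g. over HTTP) and hand them to the decoder.
    std::thread fetcher([&] {
        for (int assetId = 0; assetId < 5; ++assetId) fetched.push(assetId);
        fetched.close();
    });
    // Stage 2: "decode" (e.g. JPEG2000 -> raw texture) and hand to the uploader.
    std::thread decoder([&] {
        while (auto assetId = fetched.pop()) decoded.push(*assetId);
        decoded.close();
    });
    // Stage 3: "upload" to VRAM / issue draw work on the render thread.
    std::thread uploader([&] {
        while (auto assetId = decoded.pop())
            std::printf("asset %d ready for rendering\n", *assetId);
    });

    fetcher.join(); decoder.join(); uploader.join();
}
```

Each stage could be given several workers; the key property is that no stage ever blocks the render thread while it waits on the network or the decoder.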

<rant> This is especially frustrating as LL have known for years and years about the impending loss of Apple support and instead chose to 'wait and see', before switching to 'gathering statistics'. Time is now short, money is short, the Lab is a smaller company, and the grand announcement from SL18B was 'getting ready to do something'. If they had spent a fraction of SL's income rather than diverting it all down the sucksar toilet, chasing non-existent magical new customers who explicitly weren't us, this would not be a problem...... (there is no end to this rant. 😠)

1 hour ago, Flea Yatsenko said:

But SL's greatest strength is also its biggest weakness. There is near-endless content in SL for anyone, more than in any other virtual world by far. But some of that content has been around for a very long time, and deprecating it can cause problems. Realistically, though, people are upset over losing something they probably spent a dollar or two of real-world money on. If there is demand for a deprecated product, someone will fill the supply void to meet the new demand.

<unpopular opinion ahead>

Let's start with LSO scripts: remove that functionality entirely. There's nothing that can't be replaced. (Oh no... that 18-year-old free pop gun won't work anymore; however will people manage without their vintage 'editing appearance' prim sex furniture...)

Sculpts: a terrible stopgap feature that's now replaced entirely by mesh. Dropping them would come with a sweeping loss of legacy content (and kill off massive, cheaty low-Li full-sim surrounds). It would be brutal, everyone would lose something... which is why it probably should be done. There would be a huge incentive for the creators of those sculpt assets to re-engage with SL and make new mesh versions of all their content, which most will already have the required source files to do. Losing hacky sim surrounds (which depend on hacked-in region-sized megaprims) sounds a lot like the start of an awesome new feature from LL for region owners. Converting sculpts to mesh isn't viable; ignoring any IP issues, there would be massive Li changes and opportunities to abuse the transition period to cheese in huge 1 Li mesh objects.

There would be a lot of grumbling. But the net effect, both on how LL approaches big problems and on residents rising to the challenge of replacing everything that was lost, would be a huge gain for the platform.

1 hour ago, Flea Yatsenko said:

I've always been a fan of splitting the client into a simplified, high-performance "regular user" client that doesn't allow building

It's more difficult here to specify what the end result would look like. Where is the line between editing and tweaking? How do you separate pick-and-place activities from building? Is decorating a Linden Home building? How about tinting something or adjusting an unrigged attachment?

I'm more tempted to go the other way. By all means burn the edit floater, though; that's just bad UI design and a nightmare to work with from a development perspective... like, seriously, this hellish floater is the reason the build tools in the viewer haven't changed in a decade.

  • Like 3
  • Thanks 1

I thought up a way that I think could improve viewer performance without too much development from LL.

In essence, we can reduce texture memory usage by supporting vertex colors. OpenGL supports them, so the viewer technically does; Blender and Collada support them too. It would mean that a creator could, for example, tint a 128 px repeating brick texture using vertex colors to do ambient occlusion, etc., instead of using 1024 px textures for every section of a building just because of the ambient occlusion. I go into more detail over here.
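To make the idea concrete, here is a small hypothetical C++ sketch of the data side: a per-vertex RGBA color baked from an ambient-occlusion pass rides along with the position and UVs, and the shader (not shown) multiplies the sampled tile texture by it. The Vertex struct and makeVertex function are invented for illustration, not anything in the viewer.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative vertex layout: a baked AO tint travels with each vertex,
// so a small repeating texture can still show per-building shading.
struct Vertex {
    float position[3];
    float uv[2];        // tiles a small (e.g. 128 px) brick texture
    uint8_t color[4];   // baked ambient occlusion / tint, RGBA
};

// Pretend AO bake: darker where "occlusion" is higher (0..1).
static Vertex makeVertex(float x, float y, float z, float u, float v, float occlusion) {
    const uint8_t shade = static_cast<uint8_t>(255.0f * (1.0f - occlusion));
    return Vertex{{x, y, z}, {u, v}, {shade, shade, shade, 255}};
}

int main() {
    // Corner vertices sit in a crevice, so they get more occlusion (darker).
    std::vector<Vertex> quad = {
        makeVertex(0, 0, 0, 0, 0, 0.6f),  // corner: noticeably darkened
        makeVertex(1, 0, 0, 4, 0, 0.1f),  // open wall: nearly full brightness
        makeVertex(1, 1, 0, 4, 4, 0.1f),
        makeVertex(0, 1, 0, 0, 4, 0.6f),
    };
    // In the fragment shader the final color would be
    // texture(brick, uv) * vertexColor -- no 1024 px baked texture needed.
    for (const auto& v : quad)
        std::printf("uv=(%.0f,%.0f) tint=%u\n", v.uv[0], v.uv[1], v.color[0]);
}
```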

  • Haha 1

2 hours ago, Extrude Ragu said:

I thought up a way that I think could improve viewer performance without too much development from LL.

In essence, we can reduce texture memory usage by supporting vertex colors. 

I'm feeling like the baddy here :(

VRAM isn't the problem. Once textures and mesh are in VRAM, all the hard work has already been done. So long as you have enough VRAM for the scene the viewer is trying to render, it's all fine. Using less won't make it faster.

If you don't have enough VRAM, the viewer gets stuck swapping data in and out; the constant, never-ending decode cycle hurts performance, and the texture swapping is visually very annoying. How all this works could be a lot better.

Sorry :( 


Looking at SL visual quality, the highest level of detail for most objects is pretty good. Lower levels of detail, not so much. What can be done about that?

Here's an idea I haven't seen discussed: a mesh-reducing edge server.

Suppose you had a server front-ending the asset servers, but with application-specific processing capability. This is called an "edge server" in web jargon. The idea is that it sometimes makes the lower mesh LODs itself, starting from the highest LOD version. This allows using more modern algorithms to improve the LODs of existing content. The new LODs would be cached, so this doesn't get done on every fetch. This doesn't require modifying either the simulator servers or the viewers. It does mean more computers are required.
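A rough sketch of what that edge server's request path could look like, purely to illustrate the caching idea (LodEdgeServer, fetchHighestLod, and reduce are hypothetical names, and the real mesh-reduction step would be an external tool rather than a one-liner):

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <utility>

// Hypothetical mesh payload; LOD level 0 is the highest detail.
struct Mesh { std::string data; };
using AssetId = std::string;

// Stand-ins for the real pieces: fetch from the asset store, and run an
// offline mesh-reduction algorithm on the highest-LOD mesh.
Mesh fetchHighestLod(const AssetId& id) { return Mesh{"full-detail mesh for " + id}; }
Mesh reduce(const Mesh& m, int lod)     { return Mesh{m.data + " reduced to LOD " + std::to_string(lod)}; }

class LodEdgeServer {
public:
    // Serve a lower LOD: use the cached regenerated version if present,
    // otherwise build it once from the highest LOD and cache the result.
    const Mesh& getLod(const AssetId& id, int lod) {
        const auto key = std::make_pair(id, lod);
        auto it = cache_.find(key);
        if (it == cache_.end()) {
            Mesh improved = reduce(fetchHighestLod(id), lod);
            it = cache_.emplace(key, std::move(improved)).first;
        }
        return it->second;
    }
private:
    std::map<std::pair<AssetId, int>, Mesh> cache_;
};

int main() {
    LodEdgeServer server;
    std::printf("%s\n", server.getLod("chair-1234", 2).data.c_str()); // generated and cached
    std::printf("%s\n", server.getLod("chair-1234", 2).data.c_str()); // served from the cache
}
```

Because the expensive reduction happens once per asset and LOD rather than per viewer, the cost stays near the asset storage instead of on every client.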

(The details of this are complicated. First, you need a good mesh-reduction system. There are good ones, like Simplygon and the one in UE4/5, and cheap ones as free software on GitHub. I've tried some of the free ones. They're very brittle; if a mesh isn't entirely correct and watertight, some of the algorithms fail, because they want to work on a volume with a clear inside and outside. That's fixable, but a pain.

Second, most of the newer mesh-reduction algorithms do a terrible job on SL clothing. That's because they can't handle thin sheets well. Also, applying mesh reduction to clothing can result in the blockier lower-LOD mesh pushing through outer layers. Clothing needs special handling.

Third, in SL, normal maps and meshes are totally independent. You'd like to take out fine mesh detail and replace it with normal maps to get the same look. But with SL's formats, that's difficult. The asset servers have no idea which mesh goes with which material; that information takes a different path. For now, that's out as something that can be done as a retrofit.

Fourth, there's no reason to push mesh reduction too hard. Never force the triangle count below 25-100 just to shrink the mesh; it doesn't really speed up drawing and often doesn't even reduce the land impact. Keeping that floor prevents objects from disappearing at a distance due to crap lowest LODs.

Fifth, you only want to control this in the edge server, not do the mesh reduction there. You want to do it only once for all users, so that effort belongs near the asset storage on AWS.)

 

  • Haha 1

8 hours ago, animats said:

Second, most of the newer mesh-reduction algorithms do a terrible job on SL clothing. That's because they can't handle thin sheets well. Also, applying mesh reduction to clothing can result in the blockier lower-LOD mesh pushing through outer layers. Clothing needs special handling.

One thing to consider is that user-made LODs, when people actually make them, can be much better than anything a computer could generate. Automatically generating LODs for items like these would waste the work of creators who did upload custom LODs for their models in the past.

You could have a prim property to switch between generated and user LODs, but I would guess it would cause the object's Land Impact to increase when the setting is changed, so the end user will stick with the bad LODs.

All in all, it's a case of incentivizing good building practices, and I think whatever system is created will be gamed. A more human approach is needed.


11 hours ago, animats said:

Looking at SL visual quality, the highest level of detail for most objects is pretty good. Lower levels of detail, not so much. What can be done about that?

Hence my post about putting real-time decomposition in the viewer and only using provided LOD models if they were explicitly added by the creator.

Even if it were no better than the uploader at the start, it would at least leave the door open to future software improvements and allow local performance to dictate how aggressive such an engine was at runtime.


4 hours ago, Extrude Ragu said:

One thing to consider is that user-made LODs, when people actually make them, can be much better than anything a computer could generate. Automatically generating LODs for items like these would waste the work of creators who did upload custom LODs for their models in the past.

You could have a prim property to switch between generated and user LODs, but I would guess it would cause the object's Land Impact to increase when the setting is changed, so the end user will stick with the bad LODs.

All in all, it's a case of incentivizing good building practices, and I think whatever system is created will be gamed. A more human approach is needed.

A more evaluated approach is needed.

There are many image-matching programs available; ones that compare two images and tell you how closely they match are easy today. This makes LODs testable: render from a few viewpoints at the highest LOD, downscale the image with a little blur, and compare that with similar images of the lower LODs. That tells you whether a lower LOD is no good.

Then focus automatic LOD re-generation on the ones where the image comparer reports a poor match between LODs. That should catch all the see-through lower LODs.
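A minimal sketch of the comparison step, assuming the high-LOD and low-LOD renders have already been rasterised to same-sized grayscale buffers. The rmse function, the tiny 4x4 buffers, and the threshold are all invented for illustration; a production check would more likely use a perceptual metric such as SSIM.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Root-mean-square error between two same-sized grayscale renders (0-255).
static double rmse(const std::vector<unsigned char>& a,
                   const std::vector<unsigned char>& b) {
    double sum = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        const double d = static_cast<double>(a[i]) - static_cast<double>(b[i]);
        sum += d * d;
    }
    return std::sqrt(sum / a.size());
}

int main() {
    // Invented 4x4 downscaled renders of the same object at two LODs.
    std::vector<unsigned char> highLod = {200, 200, 180, 180,
                                          200, 190, 180, 170,
                                          160, 150, 140, 130,
                                          120, 110, 100,  90};
    std::vector<unsigned char> lowLod  = {200, 200, 180,   0,   // a chunk of the
                                          200, 190,   0,   0,   // silhouette has
                                          160, 150, 140, 130,   // vanished
                                          120, 110, 100,  90};

    const double error = rmse(highLod, lowLod);
    const double threshold = 30.0;  // invented cutoff for "looks wrong"
    std::printf("RMSE = %.1f -> %s\n", error,
                error > threshold ? "regenerate this LOD" : "LOD is acceptable");
}
```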

  • Haha 1
