
What do UE5-like technologies mean for Second Life Viewers?


You are about to reply to a thread that has been inactive for 74 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

I've seen a few threads talking about Unreal Engine, UE5, and game-engine technology with interest in seeing this kind of innovation in Second Life Viewers.

I thought I would spend a moment to take a shallow dive into these technologies and put them into the context of Second Life.

1. Lumen is an incredible feature that could add significant lighting realism to Second Life, if there were a UE5-based SL Viewer, because it could add a real-time global illumination approximation to existing SL geometry. It does require a pretty beefy modern GPU, and existing builds would need some lighting adjustments to rebalance around secondary bounce light. However, the results can be quite impressive, even with existing legacy geometry. For a taste of how legacy low-poly geometry can look with Lumen, check out some of the many "World of Warcraft remastered in UE5" videos. Lumen is a runtime feature and can handle runtime-generated (and downloaded) geometry, such as occurs in SL.

How is this different from current SL lighting? Current Second Life viewers are built around only primary light sources. The reason SL looks "flat" compared to games is that most games use an expensive "lighting bake": a slow, edit-time raytrace of all secondary light bounces into a "lightmap" for the entire scene. However, this not only freezes the time of day, it also freezes the position of every object in the scene. This is not done in Second Life for many reasons, but mostly because objects can be moved, and each time one moved, the lightmaps would look wrong and broken until some computer ran a potentially long lighting bake to produce new ones. Lumen allows approximate raytraced lighting to be computed in real time, even for dynamically generated geometry.
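To make the contrast concrete, here's a toy sketch (pure illustration, not SL or UE5 code; all names and numbers are invented) of why a baked lightmap goes stale the moment anything moves, while a Lumen-style approach just recomputes the bounce estimate every frame:

```python
# Toy illustration (not SL or UE5 code; all names invented): why baked
# lightmaps break when objects move, while real-time GI recomputes bounce
# light every frame.

def direct_light(patch, sun):
    # Trivial stand-in for a direct-lighting term.
    return max(0.0, sun - patch["height"])

def one_bounce(patches, sun):
    # Each patch also receives a fraction of every other patch's direct light.
    out = {}
    for name, p in patches.items():
        indirect = 0.1 * sum(direct_light(q, sun)
                             for qname, q in patches.items() if qname != name)
        out[name] = direct_light(p, sun) + indirect
    return out

# "Bake": computed once, valid only for this exact arrangement and sun level.
scene = {"floor": {"height": 0.0}, "crate": {"height": 1.0}}
lightmap = one_bounce(scene, sun=2.0)

# Move the crate (or change the time of day) and the bake is stale; a
# Lumen-style renderer would simply re-run the estimate next frame.
scene["crate"]["height"] = 0.5
fresh = one_bounce(scene, sun=2.0)
assert lightmap["floor"] != fresh["floor"]  # the baked lightmap no longer matches
```

The real techniques are vastly more sophisticated, but the core trade-off is the same: bake once and freeze the world, or approximate fast enough to redo it every frame.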

Is it possible to implement Lumen-like tech in existing SL Viewers? Of course, but it would take years of funding and expert graphics developers. Lumen merges previous research into voxel global illumination, screen-space global illumination, and light probes into a really crafty solution. While each of those technologies is relatively simple (and open source), combining them into Lumen took years of work by expert graphics engineers at Epic, who found creative solutions to the edge cases. The result is a real-time GI solution that not only has an acceptable level of artifacts but is also incredibly efficient. To get a sense of its complexity, you can watch the 2021 SIGGRAPH presentation of Lumen by one of its developers. The next milestone to watch for is *any* other engine shipping technology comparable to Lumen. It is possible this might not happen before we have hardware capable of full real-time path tracing (probably in 3-6 years).

2. Nanite geometry is an alternative to polygon meshes, allowing efficient runtime tessellation for perfect automated LOD. This technology is not easy for a Second Life viewer to use, because Nanite geometry cannot be generated at runtime inside a UE5 game client. Conversion is a slow step that only happens in the UE5 Editor.

What is Nanite? Nanite allows both ultra-high-detail objects and incredible scene complexity to be rendered with reasonable performance. This would be an incredible addition to Second Life, but it's not one that can come "for free" in a Second Life Viewer. To use Nanite in UE5, every SL object, after it is uploaded, would need to be processed into the Nanite format **inside the Unreal Engine Editor**. While these objects could also exist in legacy mesh form for other clients, this kind of reprocessing of meshes into an alternate format (and serving that alternate format to a Nanite-capable viewer) is a much more extensive project than writing a Second Life Viewer client.
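For intuition about what automated LOD buys you, here's a hypothetical sketch of the core idea: show the coarsest mesh whose geometric error, projected onto the screen, stays under about a pixel. Nanite does this per cluster of triangles and streams the data; this toy version does it per object, with made-up numbers:

```python
import math

# Hypothetical sketch of automatic LOD selection: pick the coarsest mesh
# whose geometric error, projected to screen space, stays under a pixel
# threshold. All meshes and error values here are invented for illustration.

def projected_error_px(world_error_m, distance_m, fov_deg=90.0, screen_h_px=1080):
    # Approximate on-screen size, in pixels, of a world-space error at a distance.
    view_height_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return world_error_m / view_height_m * screen_h_px

def pick_lod(lods, distance_m, max_error_px=1.0):
    # lods: list of (triangle_count, world_error_m) pairs.
    for tris, err in sorted(lods):  # ascending triangle count = coarsest first
        if projected_error_px(err, distance_m) <= max_error_px:
            return tris
    return max(lods)[0]  # nothing coarse is good enough: use the finest mesh

lods = [(100_000, 0.001), (10_000, 0.01), (1_000, 0.1)]
near = pick_lod(lods, distance_m=2.0)    # close up: fine mesh required
far = pick_lod(lods, distance_m=200.0)   # far away: coarse mesh suffices
assert near > far
```

The hard part of Nanite is not this selection rule but building the hierarchical cluster data it selects from, which is exactly the precomputation that only happens in the Editor.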

3. Don't be fooled by the path-tracing examples. We've all seen shocking UE5 scenes, sometimes with absolutely massive numbers of trees (millions!), like the "UE5 Sierra Nevada" scene. These images are *not* rendered in real time. They use the UE5 Editor's progressive path tracer, a technology similar to Blender's Cycles progressive path tracer. If you watch these users working in the UE5 Editor, you can see "path tracing" enabled, and when they move the scene the image pixelates before filling in over time. Equally important: when scenes have an absolutely huge number of instanced objects, such as millions of trees, they *cannot* be rendered in real time in a game. In other words, just because you see people building these absolutely massive scenes in the Unreal Editor does *not* mean they would work in real time in an Unreal Engine based game.

4. Large View Distances. Modern game technology, including UE4 and UE5, can support fast rendering of vastly larger view distances than you see in Second Life. However, this is as much a product of the way those terrains are built as it is of the game-engine technology. Many Second Life locations are built by users in performance-inefficient ways: for example, using tons of uniquely different "ground" objects instead of the terrain height map, or placing a massive number of unique objects to give the scene a certain visual look. Scenes built like this could cause any game engine (without Nanite) to struggle and render at low FPS. When working on a game, designers make choices for performance as much as aesthetics, and SL does not require, enforce, or even encourage a workflow with those kinds of limitations. The SL solution is to always keep a relatively short view distance, which lets users add scene complexity right in front of them.

5. Large Object Count. In game rendering, one thing that slows down rendering is the number of uniquely drawn objects (the draw call count). Modern graphics APIs, like Vulkan, are lowering this cost, but there is still a cost. In an engine like Unreal, the many pieces of an object are often merged into a single mesh in the Unreal Editor, and many objects placed in a scene are merged into the scene's background mesh. This is one reason so many games have "bookshelves" and "props" that are immovable and indestructible: they are already merged into the background mesh. It is possible for any Second Life Viewer to do this kind of "mostly static" mesh merging at runtime to optimize drawing (and it's possible some do; I have no idea), but this isn't something game engines do out of the box, so it's very possible that a "Second Life Viewer implemented in UE5 without runtime mesh merging" would have performance issues rendering scenes with very large numbers of uniquely placed objects. This is related to #4 above, as it is another reason I suspect view distances are kept relatively short in SL.
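At the data-structure level, runtime static-mesh merging could look roughly like this hypothetical sketch (names invented): objects that share a material and never move get baked into one combined vertex buffer, so the whole group costs a single draw call instead of one per object:

```python
# Hypothetical sketch of runtime "static mesh merging": static objects that
# share a material are combined into one big vertex list, so the renderer
# submits one draw call for the batch. All names here are invented.

def merge_static_meshes(objects):
    # objects: list of dicts with "material", "static", "vertices" (world space).
    merged = {}          # material -> one combined vertex list
    draw_calls = []      # what the renderer would actually submit
    for obj in objects:
        if obj["static"]:
            merged.setdefault(obj["material"], []).extend(obj["vertices"])
        else:
            draw_calls.append(("dynamic", len(obj["vertices"])))
    for material, verts in merged.items():
        draw_calls.append((material, len(verts)))
    return draw_calls

scene = [
    {"material": "wood", "static": True,  "vertices": [(0, 0, 0)] * 3},
    {"material": "wood", "static": True,  "vertices": [(1, 0, 0)] * 3},
    {"material": "wood", "static": True,  "vertices": [(2, 0, 0)] * 3},
    {"material": "wood", "static": False, "vertices": [(3, 0, 0)] * 3},  # movable
]
calls = merge_static_meshes(scene)
# Four objects, but only two draw calls: one merged static batch + one dynamic.
assert len(calls) == 2
```

The catch in SL is the "static" flag: any object might be moved at any time, so a viewer doing this would have to detect movement and re-merge the affected batch.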

I'm sorry for how long that turned out, and how technical some of it is.

I hope some of that explanation is helpful to set expectations for this technology showing up in SL Viewers, and also to inspire some possibilities of things we might see in the future.


Edited by Dave Emerald

LL does not have the time, money, staffing, or hardware to port this entire game over to UE5 without losing content. So while other game engines are extremely capable, SL is kinda stuck where it is due to the nature of the game: it's locked into supporting legacy content because it's all legacy content.

 


I've posted on this topic before, so I'll be brief today.

  • Lumen. I wonder how close we can come with reflection probes. Once everybody gets bored with shiny, some good work may be done with low but nonzero reflection surfaces to get nice wood effects in interiors. There's an example of this over at Rumpus Room, but the shiny is turned up too high. Think subtle.
  • Nanite geometry is all about pre-computation. It's basically a way of making huge meshes that contain all the levels of detail in pre-computed form. It takes a really weird renderer. It relies on instancing within the mesh, such as a brick wall where the bricks are modeled but mostly duplicates. You could still have a hole in the wall. Probably not that helpful for SL.
  • Don't be fooled by the path-tracing examples. Well, maybe.
  • Large View Distances. You've got to have impostors. I've written on this before. I'm looking at using OpenDroneMap, which is free software, to make mesh impostors of entire regions, to be displayed like off-sim meshes you can never get close to. When you get close, the real region replaces the impostor. These would be made by having a bot take some 2D pictures of each region once a week or so, then running them through OpenDroneMap. Anybody interested in working on this?
  • Large Object Count. How large? Actually, with a gamer-level GPU, we're in good shape on object count. Triangle count for avatars is a bigger problem. The real implication of user-created content is far fewer instances of objects. SL Marketplace has about 50,000 different chairs and maybe a hundred different road guardrails. Look at a video game closely, and you see the same items showing up over and over. The effect is that a metaverse needs about 3X the GPU memory of a game.
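The memory point in that last bullet can be back-of-enveloped like this (all numbers invented for illustration): GPU memory holds one copy per *unique* mesh, so a world of one-off user creations costs far more than a game scene that reuses a small prop set:

```python
# Back-of-envelope sketch (invented numbers): GPU memory scales with the
# number of *unique* meshes, not the number of placed instances.

def gpu_mesh_memory_mb(placements, mb_per_unique_mesh=2.0):
    # placements: list of mesh ids actually placed in the scene.
    return len(set(placements)) * mb_per_unique_mesh

game_scene = ["chair_a"] * 40 + ["rail_a"] * 60                  # heavy reuse
sl_scene = [f"chair_{i}" for i in range(40)] + ["rail_a"] * 60   # one-off chairs
assert gpu_mesh_memory_mb(sl_scene) > gpu_mesh_memory_mb(game_scene)
```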

6 hours ago, animats said:
  • Large View Distances. You've got to have impostors. I've written on this before. I'm looking at using OpenDroneMap, which is free software, to make mesh impostors of entire regions, to be displayed like off-sim meshes you can never get close to. When you get close, the real region replaces the impostor. These would be made by having a bot take some 2D pictures of each region once a week or so, then running them through OpenDroneMap. Anybody interested in working on this?

I unfortunately can't really contribute, but I can't stress enough how genius this is. As an aviator in Second Life, the biggest issue I've faced has been draw distance. Faking a large DD by building impostors of the SL world outside the current region or its neighbors would seriously pave the way toward improving SL aesthetically, and in circumstances like aviation, functionally. Computationally and otherwise, I don't see it being too difficult to implement, and I'd say make sim mesh updates a nightly thing. Oi - Lindens, if you're looking for a nice thing to do after Puppetry, you've got one right here.

Edited by Rathgrith027

28 minutes ago, Rathgrith027 said:

I unfortunately can't really contribute, but I can't stress enough how genius this is. As an aviator in Second Life, the biggest issue I've faced has been draw distance. Faking a large DD by building impostors of the SL world outside the current region or its neighbors would seriously pave the way toward improving SL aesthetically, and in circumstances like aviation, functionally. Computationally and otherwise, I don't see it being too difficult to implement...

There's the easy approach, which looks like this:


Putting the SL map onto an elevation mesh. Simple version.

This is just a proof of concept. Others have made models like this; someone had a few at SL20B. You don't really want to use the SL map, ad layers and all, for this. Also, it looks terrible from ground level, because you don't have imagery of the sides of buildings. It's enough to let you find runways when landing and river mouths when sailing. This is easy to do, and I'll probably do it as a test in Sharpview.


Open Drone Map output, after a pass through Blender. This is closer to the goal. This is all reconstructed from 2D images. You can do this with real drones in RL, so we can do it with bot drones in SL. The end result looks like Google Earth.

I'd really like to find some people into photogrammetry to work on this. Open Drone Map exists but needs some work to do this job for SL. The goal is to generate textured SL mesh objects much like off-sim terrain objects. Then have viewers display them in the right places. IM me if interested.
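The hard part is the photogrammetry; the swap logic itself is simple. A hypothetical sketch (names invented) of the per-region decision a viewer would make, assuming some fixed swap threshold like the 128 m figure mentioned in this thread:

```python
# Hypothetical sketch of impostor swapping: a viewer keeps one cheap
# photogrammetry mesh per distant region and swaps in the real region once
# the camera comes within a threshold. Names and threshold are assumptions.

SWAP_DISTANCE_M = 128.0

def choose_representation(region_distance_m):
    """Return which model of a region the viewer should display."""
    if region_distance_m <= SWAP_DISTANCE_M:
        return "full_region"    # stream real objects from the simulator
    return "impostor_mesh"      # one low-detail textured mesh, like off-sim terrain

assert choose_representation(50.0) == "full_region"
assert choose_representation(4000.0) == "impostor_mesh"
```

A real implementation would want a fade or hysteresis band around the threshold so regions don't pop or flicker as the camera hovers near the boundary.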


6 hours ago, Candide LeMay said:

How are the pre-rendered impostors going to work if every parcel and every avatar can potentially have a different EEP applied? If I'm on a parcel with a misty, dark EEP and see a brightly sunlit impostor in the distance, that's worse than seeing nothing.

The impostors would be low-detail 3D models, so they get lit and processed through whatever environment is currently active. Their textures would probably need to be generated twice: once in sunlight, and once in dark night. The dark version becomes the emissive texture, so that at night, lit areas show up. Fog and lighting effects apply to the impostor models, too.

You never get closer than 128 meters to an impostor region before the real region starts to appear. This is only for distant content; it only replaces staring off into emptiness.

It won't be as good as modern Microsoft Flight Simulator, but that's the general idea.
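The day/night texture idea could work roughly like this hypothetical sketch (names and blend formula are assumptions, not a spec): the impostor mesh is shaded by the current environment's sun, and the separately captured night image drives an emissive term that takes over as the sun goes down:

```python
# Hypothetical sketch of impostor shading: the current environment's sun
# intensity lights the daytime texture, while a captured night image acts
# as an emissive term that dominates in darkness. Names are invented.

def shade_impostor(albedo, night_emissive, sun_intensity):
    # sun_intensity: 0.0 (midnight) .. 1.0 (noon), taken from the active EEP.
    lit = tuple(c * sun_intensity for c in albedo)
    glow = tuple(c * (1.0 - sun_intensity) for c in night_emissive)
    return tuple(l + g for l, g in zip(lit, glow))

# At noon the daytime texture dominates; at midnight the window lights do.
noon = shade_impostor((0.8, 0.7, 0.6), (0.3, 0.3, 0.1), sun_intensity=1.0)
midnight = shade_impostor((0.8, 0.7, 0.6), (0.3, 0.3, 0.1), sun_intensity=0.0)
```

Because the impostor is real geometry, fog, mist, and the rest of the EEP pipeline apply to it automatically; only the emissive night term needs the second capture.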


  • 3 months later...
On 12/6/2023 at 9:10 PM, gwynchisholm said:

LL does not have the time, money, staffing, or hardware to port this entire game over to UE5 without losing content. So while other game engines are extremely capable, SL is kinda stuck where it is due to the nature of the game: it's locked into supporting legacy content because it's all legacy content.

 

Noobs, viewers have nothing to do with the Lindens' servers. Go on YouTube and find the Crystal Frost viewer; it's made on the Unity engine. Now you get an idea that viewers don't depend on servers.

Yeah, some team should work on an Unreal Engine 5 viewer for Second Life. UE5 has kick-ass optimization for low-end to high-end PCs.


On 3/20/2024 at 4:03 AM, randakong said:

Noobs, viewers have nothing to do with the Lindens' servers. Go on YouTube and find the Crystal Frost viewer; it's made on the Unity engine. Now you get an idea that viewers don't depend on servers.

Yeah, some team should work on an Unreal Engine 5 viewer for Second Life. UE5 has kick-ass optimization for low-end to high-end PCs.

The viewers do depend on the servers; without the servers, the viewers have nothing to connect to.


On 3/21/2024 at 11:15 PM, bigmoe Whitfield said:

The viewers do depend on the servers; without the servers, the viewers have nothing to connect to.

You have no idea what I was talking about. Learn some coding before you go farting around.


1 minute ago, bigmoe Whitfield said:
18 hours ago, randakong said:

You have no idea what I was talking about. Learn some coding before you go farting around.

Then read what you said, lol. Seriously.

"He who smelled it, dealt it!"

