Making Second Life look like an AAA game


animats

You are about to reply to a thread that has been inactive for 840 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

3 hours ago, InternetHITS said:

 

It would be awesome if we can use SL as a test environment for machine learning.

So we can test it virtual before going RL.

Imagine how many customers you would get.


I used to make animated network diagrams in Second Life and produce presentations using them. Most people asked why the presenter in the video was a bipedal skunk in a top hat and green dress or suit. They got so distracted by the avatar that they didn't remember to study the diagrams. So I went back to using Visio. The only presentation that got positive feedback was the one produced for Fox Media explaining how multicast IP networks worked and how they could safely replace encrypted RF coaxial networks for end-user distribution. My contact there liked the presentation method. She said it was unique and informative and made people sit up and take notice when a female woodland creature showed them something new in a concise, polite manner. "It's cute too."


Second Life was never designed specifically to be a game. It was originally a 3D virtual environment named Linden World, which had game-like interaction but was created by Linden Research, Inc. (Linden Lab) specifically for testing prototype haptics technologies.

After a while, the haptics development was dropped by LL in favour of exploiting their virtual environment for social and economic uses instead. This led to the rebranding of Linden World as Second Life.

A video of Linden World that was created in August 2001:

 

Edited by SarahKB7 Koskinen

2 hours ago, SarahKB7 Koskinen said:

Second Life was never designed to be specifically a game, [blah blah, yakkety schmakkety, snipped for history lesson]

Might one possibly hope that you have a point to make beyond semantics? The title of this thread, after all, is "Making Second Life look like an AAA game" not "Making Second Life into an AAA game". Dragging the age-old game/not game debate (on which I happen to agree with you, but that's irrelevant here) into this subject seems actively unhelpful and more like an attempt to derail the discussion than contribute to it.

Adding pretty to SL sounds marvellous. More power and the best of fortune to the OP's efforts.

Edited by Spartacus Morningstar

@animats I don't even know how to ask my question, so, I am just gonna throw raw thoughts out here because I have no idea what is palatable or feasible.

There is a Second Life client that has a decent user interface and little scene rendering, and there is your rendering work.  Any way to use them together?


2 hours ago, Ardy Lay said:

@animats There is a Second Life client that has a decent user interface and little scene rendering, and there is your rendering work.  Any way to use them together?

Probably not. My code is all in Rust, and the internal architecture is completely different. The LL viewer is in C++: mostly single-threaded, object-oriented, with a homebrew "coroutine" structure, what would be called "async" today. Its underlying world object is the prim, with sculpts and meshes bolted on later. My code is 100% Rust, multi-threaded, data-oriented, and uses interprocess channels rather than "async". Its underlying world object is the mesh, with prims and sculpts translated to meshes early in processing. Trying to make these two incompatible systems connect would be a big job on the C++ side. I talk to the Firestorm and Catznip developers regularly, but we don't see bolting these things together as feasible.
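The "channels rather than async" structure described above can be sketched in Rust. This is a toy illustration, not the viewer's actual internals: the message types (`AssetRequest`, `SceneUpdate`) and function names are hypothetical. The idea is that a loader thread owns its own data and talks to the render side only through channels, with no shared state.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message types for illustration only.
pub enum AssetRequest {
    Mesh { id: u64 },
    Texture { id: u64 },
}

pub struct SceneUpdate {
    pub id: u64,
    pub ready: bool,
}

// Spawn a loader thread that consumes requests and sends updates back
// over a channel -- each thread owns its data; nothing is shared.
pub fn run_pipeline(requests: Vec<AssetRequest>) -> Vec<SceneUpdate> {
    let (req_tx, req_rx) = mpsc::channel::<AssetRequest>();
    let (upd_tx, upd_rx) = mpsc::channel::<SceneUpdate>();

    let loader = thread::spawn(move || {
        // "Load" each asset and report completion to the render side.
        for req in req_rx {
            let id = match req {
                AssetRequest::Mesh { id } | AssetRequest::Texture { id } => id,
            };
            upd_tx.send(SceneUpdate { id, ready: true }).unwrap();
        }
        // upd_tx is dropped here, closing the update channel.
    });

    for r in requests {
        req_tx.send(r).unwrap();
    }
    drop(req_tx); // closing the request channel lets the loader thread exit

    let updates: Vec<SceneUpdate> = upd_rx.iter().collect();
    loader.join().unwrap();
    updates
}
```

The render loop never blocks on I/O; it just drains whatever updates have arrived between frames.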

LL tried to do something like what I'm doing with asset loading some years back. It's in a dead project viewer from the "Project Interesting" era. They bolted on a priority queue from Boost, which was the right idea. But they didn't finish the job.  The graphics Lindens would like to go to Vulkan and PBR, but that effort has not been funded. So, much of what I've done has been considered by LL, but not successfully executed.
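The priority-queue idea — load whichever asset covers the most screen area first — can be sketched with Rust's standard `BinaryHeap`. The type and field names here are hypothetical illustrations; Boost's priority queue would play the same role on the C++ side.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Hypothetical load request, prioritized by projected screen area:
// the object covering the most pixels is fetched first.
#[derive(Debug, Eq, PartialEq)]
pub struct LoadRequest {
    pub asset_id: u64,
    pub screen_area_px: u64, // projected area at the current camera position
}

impl Ord for LoadRequest {
    fn cmp(&self, other: &Self) -> Ordering {
        // BinaryHeap is a max-heap, so the largest screen area pops first;
        // ties broken by asset id for a deterministic order.
        self.screen_area_px
            .cmp(&other.screen_area_px)
            .then(self.asset_id.cmp(&other.asset_id))
    }
}

impl PartialOrd for LoadRequest {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

// Drain the queue in priority order, returning asset ids biggest-first.
pub fn load_order(mut queue: BinaryHeap<LoadRequest>) -> Vec<u64> {
    let mut order = Vec::new();
    while let Some(req) = queue.pop() {
        order.push(req.asset_id);
    }
    order
}
```

As the camera moves, priorities would be recomputed, so what's directly in front of you always jumps the queue.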

It's up to LL to set their own priorities and level of effort. As customers, we can judge them by their results.

I'm interested in this partly to find out how to build a good metaverse client, using SL and Open Simulator as testbeds. Both John Carmack (Doom, Oculus) and Tim Sweeney (Epic, Fortnite) have written that this has many unsolved problems. So it's a good project to work on. I'm a computer scientist by training, so I work on unsolved problems.


8 minutes ago, animats said:

Probably not. My code is all in Rust, and the internal architecture is completely different. The LL viewer is in C++: mostly single-threaded, object-oriented, with a homebrew "coroutine" structure, what would be called "async" today. Its underlying world object is the prim, with sculpts and meshes bolted on later. My code is 100% Rust, multi-threaded, data-oriented, and uses interprocess channels rather than "async". Its underlying world object is the mesh, with prims and sculpts translated to meshes early in processing. Trying to make these two incompatible systems connect would be a big job on the C++ side. I talk to the Firestorm and Catznip developers regularly, but we don't see bolting these things together as feasible.

LL tried to do something like what I'm doing with asset loading some years back. It's in a dead project viewer from the "Project Interesting" era. They bolted on a priority queue from Boost, which was the right idea. But they didn't finish the job.  The graphics Lindens would like to go to Vulkan and PBR, but that effort has not been funded. So, much of what I've done has been considered by LL, but not successfully executed.

It's up to LL to set their own priorities and level of effort. As customers, we can judge them by their results.

I'm interested in this partly to find out how to build a good metaverse client, using SL and Open Simulator as testbeds. Both John Carmack (Doom, Oculus) and Tim Sweeney (Epic, Fortnite) have written that this has many unsolved problems. So it's a good project to work on. I'm a computer scientist by training, so I work on unsolved problems.

Yeah, I probably would not suggest using Second Life Viewer (LL) code. I was thinking about Radegast when I wrote that, but that may not be any more feasible.

I was just thinking it would be nice to have a little bit of stuff like navigation and maybe local chat.  Just the stuff I would use when making machinima.

Edited by Ardy Lay

The majority of SL content is somewhat AAA only while it's being viewed in the external 3D software used to create it. The version imported to SL is unusable for PBR rendering because it contains light and shader information baked into diffuse textures, to satisfy users with low-end hardware. I've linked a video showcasing content produced to work with PBR shaders and lighting; you can see how 'flat' it looks before the final passes with shaders and lighting are applied (much like the all-time-favorite shadowless CalWL preset in SL). Nobody can untick a 'lighting and shaders' checkbox in GTA V's settings, since it is a fundamental part of the render; you can only lower its fidelity.

 


15 minutes ago, Beev Fallen said:

The majority of SL content is somewhat AAA only while it's being viewed in the external 3D software used to create it. The version imported to SL is unusable for PBR rendering because it contains light and shader information baked into diffuse textures, to satisfy users with low-end hardware.

Game content also has baked lighting, it's just not baked right into the image maps, as every asset would end up with its own unique texture dependent on context.

SL can't do this: the moment you rez another object, the baked data is junk and needs to be fully recalculated, or the new object will stick out like a sore thumb. This is the price of a fully dynamic environment and the precise reason why literally no other platform works like SL. Even Sansar avoided this fundamental part of SL and went with a lengthy bake process as part of publishing an experience, leading to a static world.

Real-time PBR rendering (as would be needed for SL content) is hardware intensive; if you don't have a discrete GPU from the last couple of years, you're SOL. It would look truly awesome with the right 'neutral' assets, but don't expect the vast majority of SL users to be able to use it. For everyone falling back to older rendering, SL would just take a huge visual downgrade.

This Vulkan project is the only render engine with a hope of achieving real-time PBR, and it's still going to look terrible, as SL users tend not to place light sources.

PBR in SL from LL is likely to end up a halfway house, expanding on what we have now but still requiring a general AO map to be baked into the texture.


1 hour ago, Coffee Pancake said:

Game content also has baked lighting, it's just not baked right into the image maps, as every asset would end up with its own unique texture dependent on context.

SL can't do this: the moment you rez another object, the baked data is junk and needs to be fully recalculated, or the new object will stick out like a sore thumb. This is the price of a fully dynamic environment and the precise reason why literally no other platform works like SL. Even Sansar avoided this fundamental part of SL and went with a lengthy bake process as part of publishing an experience, leading to a static world.

Real-time PBR rendering (as would be needed for SL content) is hardware intensive; if you don't have a discrete GPU from the last couple of years, you're SOL. It would look truly awesome with the right 'neutral' assets, but don't expect the vast majority of SL users to be able to use it. For everyone falling back to older rendering, SL would just take a huge visual downgrade.

This Vulkan project is the only render engine with a hope of achieving real-time PBR, and it's still going to look terrible, as SL users tend not to place light sources.

PBR in SL from LL is likely to end up a halfway house, expanding on what we have now but still requiring a general AO map to be baked into the texture.

Maybe have PBR incorporated through the settings, so you can choose to enable or disable it. I mean, if that is possible.


On 10/28/2021 at 8:31 PM, animats said:

I've mentioned occasionally that I'm working on a new viewer. Here's some video from an early test version.

Second Life at full detail.

This is what Second Life and Open Simulator should look like. No more standing in front of a blurry object and waiting for it to load. Waiting, and waiting. And wondering if it's worth the wait. This changes the whole SL experience, for the better. Now Second Life looks like an AAA game.

Second Life content does not have too much detail. It just needs a more effective graphics system to display it.

What's going on here? This is an all-new viewer, with no Linden Lab code. It's written in Rust, and uses Vulkan for graphics. It has physically based rendering. It's multi-threaded. One CPU is just refreshing the screen, at 50 to 60 FPS here. The other CPUs are making changes to the scene as the camera moves. All those high-detail textures are being loaded from cache just before the camera gets close enough to see them. If everything is in cache, this viewer can stay ahead of camera movement, even for very high-detail content like this. If the content has to come from the server, the objects that cover the most screen area are always loaded first. So what's in front of you is never blurry for more than a very brief period.

All this is very experimental. This is just the rendering part of the viewer. There's no user interface other than moving the camera. All this can do is look.

I'm working through the hard problems of building a high-detail metaverse here. The underlying technology is cutting-edge: the Rust programming language, Vulkan, WGPU for cross-platform graphics, and Rend3 to make WGPU usable. The lower-level libraries are not yet stable or complete. (For example, WGPU doesn't implement rigged mesh yet, so no avatars are shown.) I'm doing this to see what's possible medium-term, not to produce a new SL viewer in the near term.

Linden Lab tried to do something like this once, as part of Project Interesting. But it was a tough retrofit for the old viewer code, and they were not successful.

Okay, why are you not working for LL to help improve the graphics side of things?


1 hour ago, Sammy Huntsman said:

Maybe have PBR incorporated through the settings, so you can choose to enable or disable it. I mean, if that is possible.

If there is render code needed to make it work, then yes, it will possibly be user selectable. However, from an object perspective it's just going to be a couple of additional image maps expanding on the existing materials.

We're not going to get a whole new real-time lighting engine for PBR materials, so we will still have to prebake AO into the primary texture map.

For us, in practical terms, PBR is going to mean materials 2.0.


12 hours ago, Coffee Pancake said:

If there is render code needed to make it work, then yes, it will possibly be user selectable. However, from an object perspective it's just going to be a couple of additional image maps expanding on the existing materials.

Exactly. Adding more material layers isn't that complicated. They're just images. Material layers are named items in LLSD, and having more of them is possible within the existing protocols. Older viewers would just ignore the new layers.

The video I posted is using PBR rendering. The translation from SL specular to PBR roughness and metalness isn't right yet, though. Shiny will work a lot better in PBR. The SL rendering system can add diffuse and specular and get all the way to full bright, which is why chrome turns white in bright light. In PBR systems, you never get out more light from a surface than you put in with lights.

Still no mirrors, though; that's a different problem.
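That energy-conservation difference can be shown with a toy calculation. This is a deliberately simplified sketch, not any viewer's actual shader code; the function names and the single-channel light model are illustrative assumptions. The additive model can return more light than arrives, while an energy-conserving split cannot.

```rust
// Legacy-style shading: diffuse and specular simply add, so the output
// can exceed the incoming light -- bright chrome blows out to white.
pub fn legacy_shade(light: f32, diffuse: f32, specular: f32) -> f32 {
    light * diffuse + light * specular // may be > light
}

// Energy-conserving shading: reflected energy is split between the
// specular and diffuse lobes, so the surface never emits more light
// than it receives (for albedo <= 1).
pub fn pbr_shade(light: f32, albedo: f32, specular_fraction: f32) -> f32 {
    let spec = light * specular_fraction;
    let diff = light * (1.0 - specular_fraction) * albedo;
    spec + diff // always <= light
}
```

With `light = 1.0`, a shiny bright surface like `legacy_shade(1.0, 0.9, 0.8)` exceeds the input and clips, while `pbr_shade(1.0, 0.9, 0.8)` stays below it.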

Probably the most useful layer to add is subsurface scattering. This is for materials where light goes a little way into the surface, bounces around a few times, and comes back out nearby. Like skin. This is why skin sort of "glows" in real life. Without subsurface scattering, skin rendering is only adjustable along a range from "dead" to "plastic". Second Life avatars are close enough to photorealistic that this matters.
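One cheap way real-time engines approximate that soft, glowing look without full subsurface scattering is "wrap lighting," which lets diffuse light bleed past the shadow terminator. This is a generic technique sketched here for illustration; it is not something any SL viewer is confirmed to use.

```rust
// Standard Lambert diffuse clamps to zero at the terminator (n_dot_l = 0),
// which makes skin look hard-edged and "plastic".
pub fn lambert_diffuse(n_dot_l: f32) -> f32 {
    n_dot_l.max(0.0)
}

// Wrap lighting shifts and renormalizes the diffuse term so light "wraps"
// around the terminator, faking shallow subsurface scatter.
// wrap = 0.0 reduces to plain Lambert; wrap around 0.5 gives a soft falloff.
pub fn wrap_diffuse(n_dot_l: f32, wrap: f32) -> f32 {
    ((n_dot_l + wrap) / (1.0 + wrap)).max(0.0)
}
```

At the terminator the wrapped term is still lit, which is what softens the dead-looking hard edge on skin.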

There are about twenty additional PBR layers, but few materials need more than one or two of them.

Blender and Maya already know how to create and render all those layers. Changes would be mostly to the viewer UI and the uploader. Probably the best way to do this is to have the uploader understand GLTF / USD format in addition to COLLADA. Blender speaks GLTF. Maya has a GLTF plugin. Everybody seems to be standardizing on that.

6 hours ago, Beev Fallen said:

LL has said a couple of times that they are not interested in SL having cutting-edge graphics.

Open Simulator, though...


2 minutes ago, animats said:

Exactly. Adding more material layers isn't that complicated. They're just images. Material layers are named items in LLSD, and having more of them is possible within the existing protocols. Older viewers would just ignore the new layers.

The video I posted is using PBR rendering. The translation from SL specular to PBR roughness and metalness isn't right yet, though. Shiny will work a lot better in PBR. The SL rendering system can add diffuse and specular and get all the way to full bright, which is why chrome turns white in bright light. In PBR systems, you never get out more light from a surface than you put in with lights.

Still no mirrors, though; that's a different problem.

Probably the most useful layer to add is subsurface scattering. This is for materials where light goes a little way into the surface, bounces around a few times, and comes back out nearby. Like skin. This is why skin sort of "glows" in real life. Without subsurface scattering, skin rendering is only adjustable along a range from "dead" to "plastic". Second Life avatars are close enough to photorealistic that this matters.

There are about twenty additional PBR layers, but few materials need more than one or two of them.

Blender and Maya already know how to create and render all those layers. Changes would be mostly to the viewer UI and the uploader. Probably the best way to do this is to have the uploader understand GLTF / USD format in addition to COLLADA. Blender speaks GLTF. Maya has a GLTF plugin. Everybody seems to be standardizing on that.

Open Simulator, though...

When can we expect a full release, and will this be hardware intensive like most viewers? I mean, Firestorm is pretty hardware intensive, and if this one isn't as bad as Firestorm, I may switch to it as my daily driver.


17 minutes ago, Sammy Huntsman said:

When can we expect a full release, and will this be hardware intensive like most viewers? I mean, Firestorm is pretty hardware intensive, and if this one isn't as bad as Firestorm, I may switch to it as my daily driver.

What I'm doing is a long way off. It's really R&D into how to build the metaverse. I'll have a login, move, and view viewer at some point.


1 hour ago, Sammy Huntsman said:

I mean, Firestorm is pretty hardware intensive, and if this one isn't as bad as Firestorm, I may switch to it as my daily driver.

This is built differently from Firestorm; it stands a decent chance of meaningfully maxing your PC out. I wouldn't expect this to be a lighter solution on older hardware.


1 minute ago, Coffee Pancake said:

This is built differently from Firestorm; it stands a decent chance of meaningfully maxing your PC out. I wouldn't expect this to be a lighter solution on older hardware.

I mean, I don't think my hardware is that old. My PC is only 4 years old.

CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz (3408 MHz)
Memory: 16328 MB
Concurrency: 8
OS Version: Microsoft Windows 10 64-bit (Build 19043.1348)
Graphics Card Vendor: NVIDIA Corporation
Graphics Card: NVIDIA GeForce GTX 1070 Ti/PCIe/SSE2
Graphics Card Memory: 8192 MB
This is what I run under the hood. 


8 minutes ago, Coffee Pancake said:

This is built differently from firestorm, it stands a decent chance of meaningfully maxing your PC out. I wouldn't expect this to be a lighter solution on older hardware.

Correct. The target hardware is what the Steam hardware survey says the average user has, which is an NVIDIA GTX 1060, five years old, or better.


Here's a sense of what a modern game looks like.

A walking tour of Cyberpunk 2077's world. There's no gameplay; this is just exploring the 3D world. It's a lot like roaming mainland in SL, so this is a good way to look at rendering and modeling in a current AAA title.

Items of note:

  • The handling of light and shadow is very good. That's REDengine at work. The columbarium area, with bright sunlight and dark areas adjacent, is especially well rendered. Watch what happens as the player moves from dark to bright areas. Everything is always clear, yet you get a strong experience of lightness or darkness. SL could improve in that area.
  • The reflectivity of water on cement is a nice touch, although what's wet and what isn't doesn't seem to be consistent.
  • Night areas with lots of glowing neon look much better than they do in SL. That's physically based rendering at work.
  • There's a lot of instancing. The fence made of horizontal metal slats keeps reappearing, as do the garbage bags and the pile-of-garbage-bags impostor. SL, with user-created content, doesn't have much instancing, which increases download bandwidth needs but reduces the cliché effect of seeing the same thing over and over. In Cyberpunk 2077, you do see the same items over and over, but in action gameplay users don't notice too often.
  • Few of the big buildings can be entered. They're just dummies. Same with most of the shops. In SL, it's more like Hangars Liquides than New Babbage in that respect.
  • The NPC movement is no better than SL's. This has been criticized in reviews.

It's useful to look at this as a target to shoot for.

