
Making Second Life look like an AAA game


animats

You are about to reply to a thread that has been inactive for 855 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

7 minutes ago, Beev Fallen said:

Cyberpunk 2077, haha! We don't even have cubemaps in SL. I remember we had screen space reflections a few years ago - why was it dumped? And even earlier, before I arrived in SL, there was a global illumination solution; I saw it on YouTube. Why was it dumped too?

Probably performance. Unless the demo starts with a blind teleport, you can't ever really tell just how much chewing the viewer had to do before the demo starts.


1 hour ago, Coffee Pancake said:

Probably performance. Unless the demo starts with a blind teleport, you can't ever really tell just how much chewing the viewer had to do before the demo starts.

But that would include the arriving-naked part!


28 minutes ago, Sammy Huntsman said:

Isn't LL planning on updating from OpenGL to Vulkan in the near future? So what you did would be possible when they change over to Vulkan.

They have mentioned they are going to look at an OpenGL wrapper to maintain Mac compatibility, and may look at Vulkan at some point. Right now they are focused on fixing generic performance problems, something they will have to do whatever path they choose next.


I'm using WGPU, which is a Rust cross-platform wrapper for Vulkan. On Linux, it's basically a pass-through to Vulkan. It can also do that on Windows, where it talks to Vulkan, DX11, or DX12. (You get to pick. Some of the options work.) On Mac, it translates to Apple's Metal, which is comparable to Vulkan, but Apple just had to Think Different and annoy everybody writing graphics software.

WGPU is maybe half complete. Right now it can't do rigged mesh, some of the fancier shader stuff, or all the targets.

For C++ land, there's something called MoltenVK, which is basically a translation layer from Vulkan to Apple's Metal. LL could conceivably use that; I haven't looked at it in detail.

Unreal Engine works cross-platform, but at a higher level than Vulkan. To use it, you basically have to do things their way, plus pay them 5% of your revenue once you pass $1M. Unity has something similar, I think.

This sort of thing is why Mac support for many 3D programs comes later, if at all.


  • 2 weeks later...

That's fairly amazing. By all rights the SL viewer should be doing the things you've done by now: multithreaded and running on Vulkan or DirectX. As it is, the bar for hardware remains pretty low because the viewer literally can't take advantage of new top-end systems. Also, the caching algorithm sucks (Oz admitted that years ago). If you can get this working as a full viewer, it will be wonderful. Even better as plug-in code for TPVs.


  • 2 weeks later...

Awesome work, Animats! I'd love to poke at this and tinker with it some myself, possibly contribute some if I can find the time. I've been considering doing this very same thing lately; it all started when I got a Mac, and with no native client SL runs like even more of a garbage fire than it already did... I considered doing a Vulkan port of the standard client and got to looking over the sources, then got to thinking about Rust as well since I dabble with that too, and then thought to Google it, and lo and behold, you're already the man!

Still, it would take a monumental amount of work and effort, but it's a really great start tbh.

If I can find some spare time after the holidays I could definitely start handling some of the GUI.

And I agree, there is absolutely no reason why we shouldn't have a more modern client by now... and in the future, when SL eventually bites the dust (and it will), OpenSim will be the saving grace for this community. That time is probably sooner than later, especially with the momentum in the "metaverse" scene. Most will just move on to the next hot thing, and others will migrate to OpenSim.

Edited by ST33LDI9ITAL
finish thought

I'm not impressed yet. The video really doesn't show much of what you are saying the Viewer is doing.

The framerate (in your video) is absolutely not even 30 FPS, that's more like 15 FPS.

Loading from server is not shown here either; this is a fully cached scene, and while some cache improvements would be nice they aren't exactly world-moving.

The only major difference I see is the shadows seemingly being either prebaked or independent of the camera (i.e. world-spaced rather than camera-spaced), which gets around the ugly "crawling" and constantly changing shadow corners. I wish that was something SL would do.


Well, it's certainly impressive to make a non-standard client renderer, much less in Rust, given the current infancy of its ecosystem. Rust will definitely be a much more accepted language for game dev once it matures over the next year or two and gains support along the way. As with most things nowadays, it's coming sooner than later. WGPU is a really interesting project and people have been exploring it for all sorts of things.

Edited by ST33LDI9ITAL

7 hours ago, NiranV Dean said:

I'm not impressed yet. The video really doesn't show much of what you are saying the Viewer is doing.

The framerate (in your video) is absolutely not even 30 FPS, that's more like 15 FPS.

Loading from server is not shown here either; this is a fully cached scene, and while some cache improvements would be nice they aren't exactly world-moving.

The only major difference I see is the shadows seemingly being either prebaked or independent of the camera (i.e. world-spaced rather than camera-spaced), which gets around the ugly "crawling" and constantly changing shadow corners. I wish that was something SL would do.

Live, it runs about 55 FPS on an AMD Ryzen 5 with an NVidia 3070, on Ubuntu 20.04 LTS. The video capture isn't full speed, because Kazam, the capture program, can't keep up.

Shadows are not prebaked. Sun only, though. Shadow crawling was a problem in an earlier version of Rend3, but has been fixed.

Asset loading from the server isn't full speed yet. Using a JPEG 2000 decoder from Rust is currently a problem; for now, decoding runs through a command-line decoder in subprocesses, which is a temporary solution. Here's what startup currently looks like with loading delays visible: http://www.animats.com/sl/misc/accessmalltest.mp4 That's an early test, not final. As you get closer to objects, their loading priority increases and the priority queue is updated, so you get the high-detail version fast as you approach something. There was an LL attempt to do that in the SL viewer years ago, but it was abandoned.
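That distance-driven reprioritization can be sketched with a standard binary heap that's rebuilt as the camera moves. This is a minimal illustration, not the actual viewer's code; all the names (`Asset`, `reprioritize`, `dist2`) are made up for the example:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Hypothetical pending-asset record: an id plus a world position.
#[derive(Clone, Copy)]
struct Asset {
    id: u32,
    pos: (f32, f32, f32),
}

// Squared distance between two points (no need for the sqrt when
// we only compare distances).
fn dist2(a: (f32, f32, f32), b: (f32, f32, f32)) -> f32 {
    let (dx, dy, dz) = (a.0 - b.0, a.1 - b.1, a.2 - b.2);
    dx * dx + dy * dy + dz * dz
}

/// Rebuild the load queue so the nearest assets pop first.
/// Called whenever the camera has moved far enough to matter.
fn reprioritize(
    camera: (f32, f32, f32),
    pending: &[Asset],
) -> BinaryHeap<(Reverse<u32>, u32)> {
    // Reverse turns the max-heap into a min-heap on distance.
    pending
        .iter()
        .map(|a| (Reverse(dist2(camera, a.pos) as u32), a.id))
        .collect()
}

fn main() {
    let pending = [
        Asset { id: 1, pos: (100.0, 0.0, 0.0) },
        Asset { id: 2, pos: (5.0, 0.0, 0.0) },
        Asset { id: 3, pos: (40.0, 0.0, 0.0) },
    ];
    // Camera at the origin: the closest asset (id 2) pops first.
    let mut queue = reprioritize((0.0, 0.0, 0.0), &pending);
    println!("load first: {}", queue.pop().unwrap().1);
}
```

A real viewer would also re-sort on texture area and cancel fetches for objects that leave the view frustum, but the heap-plus-rebuild shape is the core idea.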

 


4 hours ago, animats said:

Live, it runs about 55 FPS on an AMD Ryzen 5 with an NVidia 3070, on Ubuntu 20.04 LTS. The video capture isn't full speed, because Kazam, the capture program, can't keep up.

Shadows are not prebaked. Sun only, though. Shadow crawling was a problem in an earlier version of Rend3, but has been fixed.

Asset loading from the server isn't full speed yet. Using a JPEG 2000 decoder from Rust is currently a problem; for now, decoding runs through a command-line decoder in subprocesses, which is a temporary solution. Here's what startup currently looks like with loading delays visible: http://www.animats.com/sl/misc/accessmalltest.mp4 That's an early test, not final. As you get closer to objects, their loading priority increases and the priority queue is updated, so you get the high-detail version fast as you approach something. There was an LL attempt to do that in the SL viewer years ago, but it was abandoned.

 

I'd highly suggest using Shadowplay/Share then, since you have an NVidia GPU; it captures at live speed and without performance loss.

On Linux... well, to have some kind of measurement we'd have to go to the place ourselves, on a Linux viewer (Linux viewers are known to be much faster, up to twice the framerate in some cases). It would be interesting to see how the usual viewer fares against this performance-wise. We should keep in mind, though, that a normal viewer is doing a lot more work than your viewer currently is.


3 hours ago, NiranV Dean said:

I'd highly suggest using Shadowplay/Share then, since you have an NVidia GPU; it captures at live speed and without performance loss.

On Linux... well, to have some kind of measurement we'd have to go to the place ourselves, on a Linux viewer (Linux viewers are known to be much faster, up to twice the framerate in some cases). It would be interesting to see how the usual viewer fares against this performance-wise. We should keep in mind, though, that a normal viewer is doing a lot more work than your viewer currently is.

I should probably use OBS for recording for public release. Mostly I just record video for debug purposes, and sometimes edit that into something to show.

Right now, the lower levels (Rend3, WGPU, some other stuff) aren't working right for cross-compile from Linux, so I can't yet build for Windows easily.

Yes, the normal viewers are doing more work. The render loop, though, is on a CPU by itself, not slowed down by what the rest of the program is doing.

There's no magic here. It's just routine modern rendering technology. Now, UE5, that's magic. Probably by UE6 or so you'll be able to use that for a virtual world viewer; UE5 still requires too much asset prep with UE tools.

Avatars are still tough. I haven't gotten there yet; WGPU doesn't do rigged mesh yet. This is a long way from release.

Edited by animats

Is this going to remain just a personal project? I have an interest in Rust and in this; I'd really just like to tinker with it some myself and not have to start from scratch. I'm curious how the Metal rendering works out, or even using Vulkan and testing MoltenVK to see differences in performance with WGPU. I'd also like to see your culling technique. Have you looked into rafx or rg3d?

Edited by ST33LDI9ITAL

  • 2 weeks later...

Wouldn't all of this depend on LL forcing textures to be optimized? I mean, eight 1024×1024 textures for a row of buttons on a shirt would essentially crap out frame loading rates, no? Kind of like they do now: doorknobs, buttons, nails, hinges, all with 1024×1024 textures. It took me a while to figure out why these tiny things were grey for so long when rendering a new region...

On a funny side note, my first thought on seeing this thread was, "Sooner or later SL will break down and call AAA."


1 hour ago, Drake1 Nightfire said:

Wouldn't all of this depend on LL forcing textures to be optimized? I mean, eight 1024×1024 textures for a row of buttons on a shirt would essentially crap out frame loading rates, no? Kind of like they do now: doorknobs, buttons, nails, hinges, all with 1024×1024 textures. It took me a while to figure out why these tiny things were grey for so long when rendering a new region...

This has nothing to do with why tiny things stay grey.

The viewer tries to prioritize things it thinks are bigger; tiny stuff has to wait. The total size of the full-resolution image has no impact on this, as the viewer progressively downloads and decodes images, stepping through ever-increasing resolutions until it reaches the one that best fits the object's on-screen size, or the maximum size of the image.

It does not download the whole 1024 right from the start. This is why textures start out fuzzy and get sharper the longer they are on screen / the closer you get.

Object has a texture!

  1. Get XX bytes of data.
  2. Is this enough for the size of the object on screen?
  3. No? .. GOTO step 1.
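The resolution-stepping part of that loop can be sketched as picking the smallest level of a discard ladder that covers the object's on-screen size. This is illustrative only; the ladder values and the function name `needed_level` are made up, not the viewer's actual schedule:

```rust
/// Hypothetical resolution ladder a progressive JPEG 2000 decode
/// steps through, from coarsest to full size.
const LEVELS: [u32; 6] = [32, 64, 128, 256, 512, 1024];

/// Smallest resolution on the ladder that covers the object's
/// on-screen size, capped at the texture's full resolution.
fn needed_level(on_screen_px: u32, full_res: u32) -> u32 {
    for &res in LEVELS.iter() {
        if res >= on_screen_px || res == full_res {
            return res;
        }
    }
    full_res
}

fn main() {
    // A doorknob covering ~20 px never needs more than the 32 px
    // level, even if the uploaded texture is 1024x1024.
    println!("{}", needed_level(20, 1024)); // 32
    // A wall filling 700 px of screen steps up to the 1024 level.
    println!("{}", needed_level(700, 1024)); // 1024
}
```

This is why an oversized texture on a tiny object mostly wastes the creator's upload fee rather than everyone's bandwidth: the viewer stops fetching once the on-screen size is satisfied.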

Huge textures aren't a huge problem. The viewer won't go ahead and fully load a 1024 unless you shove your cam up REAL CLOSE and force it to. It could do a better job of how it handles this, butt (🍑) that can easily end up more computationally expensive.

The texture (at whatever level it's been loaded to so far) will stick around in VRAM till the object is gone, partially because stepping backwards through the decode levels is more expensive than just leaving it and letting your GPU worry about it.

 

There is also another system that impacts how textures get loaded. If your avatar or camera is in motion, the viewer is in panic mode: get something, anything, on screen as fast as we can! It will not waste time on small things, and it will not load the full resolution for anything till you stop.

 

In short, textures are cheap once you have them downloaded, and the viewer won't go mad wasting time decoding stuff it doesn't think you can see. It tries to be a little smart about it. The bulk of the work is CPU-bound, so getting too clever ends up being more expensive than being wrong from time to time.

Meshes eat up video RAM far more aggressively, and animating avatars gobbles CPU time like it's going out of fashion, which in turn impacts how much time texture decoding gets.


19 hours ago, Coffee Pancake said:

Meshes eat up video RAM far more aggressively, and animating avatars gobbles CPU time like it's going out of fashion, which in turn impacts how much time texture decoding gets.

In viewers with concurrent texture decode threads (Firestorm and the Cool VL Viewer) it might also depend on the concurrency level and CPU count chosen. Try the texture fetch/decode boost after TP/movement in the Cool VL Viewer; it massively improves the time for textures to show up. Meshes still take quite some time to materialize, but I get close to zero grey textures with it, even moving.


2 hours ago, Kathrine Jansma said:

Try the texture fetch/decode boost after TP/movement in the Cool VL Viewer; it massively improves the time for textures to show up. Meshes still take quite some time to materialize, but I get close to zero grey textures with it, even moving.

This is a result of a boost in the parameters of the texture fetcher, not in the multi-threaded texture decoder; but thanks to this boost, the multi-threaded decoder can be fed many more textures, massively decreasing the rezzing time.

Also, the fetcher parameters boost causes a higher rate of texture area recalculations, allowing the viewer to adjust the LODs faster and more accurately whenever the camera moves closer to or farther from objects, reducing the amount of blurry textures.

There is a drawback, however: since the texture area is calculated in the main viewer thread (also responsible for the CPU side of the rendering), this eats into the frame rate and makes it more "hiccupy". This is why, by default, the boost only happens in the Cool VL Viewer for a limited amount of time (adjustable, defaulting to 30s) after login and TPs (so as to rez the scene much faster than in any other viewer).

You may also play with the Advanced -> Rendering -> Textures settings, where you will find "Boost textures fetches now" (which basically triggers the same boost as after TP/login, for the same duration), "Boost fetches with speed" (the faster the camera moves, the higher the boost), and a very interesting one: "Boost proportional to active fetches". The latter will increase the boost factor as more textures are being fetched, and you will get much fewer grey/blurry textures in pretty much all conditions, still with a "hiccupy" frame rate as a drawback. You could even play with the "TextureFetchBoostRatioPerFetch" debug setting and reduce the default value (200 fetched textures for a 1.0 boost ratio increase) to a tenth of its value, obtaining almost the same results as Animats' nifty demonstrator, but at the cost of low frame rates while textures rez...
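Reading that description literally, the "proportional to active fetches" boost works out to something like the following. This is a sketch inferred from the stated ratio (1.0 of boost per 200 textures in flight), not the Cool VL Viewer's actual code, and `fetch_boost` is an invented name:

```rust
/// Boost ratio grows by 1.0 for every `ratio_per_fetch` textures
/// currently being fetched. The stated default for
/// TextureFetchBoostRatioPerFetch is 200; a tenth of that (20)
/// makes fetching far more aggressive, at the cost of frame rate.
fn fetch_boost(active_fetches: u32, ratio_per_fetch: u32) -> f32 {
    1.0 + active_fetches as f32 / ratio_per_fetch as f32
}

fn main() {
    // With 400 textures in flight:
    println!("{}", fetch_boost(400, 200)); // default ratio -> 3
    println!("{}", fetch_boost(400, 20));  // tenth of default -> 21
}
```

The trade-off described in the post falls straight out of the formula: a smaller ratio means a much larger boost for the same fetch load, so textures rez faster while the main thread spends correspondingly more time on area recalculations.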

What we lack in current viewers is a way to refresh the textures' required LOD in a child thread (to free the main thread from this burden)...

Edited by Henri Beauchamp
