
Big world impostor test, 128m.


animats



I'm not as keen to knock the blur as Henri is. Whilst I agree the screenshots are not the prettiest, the general concept is something SL needs, and I imagine we can get it looking better. The blur may well be useful for objects in the distance, to obscure the fact that they were rendered from a lower LOD level.

One could perhaps use alpha masking to achieve hard edges on buildings etc., to make the visuals seem more natural at a glance, whilst still leaving the content itself blurred to obscure any weirdness in the rendering. Also, for people like me, a draw distance of 300m+ is achievable and normal, so it might be the case that rather than the next sim over, it's the sim after that that's drawn like this, making the transition much less noticeable.


I also agree that the blur, while not the best, isn't as bad as suggested. I enjoy the technical posts of both @animats and @Henri Beauchamp a lot; I get to learn things. But isn't the key issue here ...

Would a normal user rather sort of see everything and have a sense of a much wider world at 30fps, or keep the current limited worldview and get 5-9fps?

  • Like 2

7 minutes ago, Katherine Heartsong said:

Would a normal user rather sort of see everything and have a sense of a much wider world at 30fps, or keep the current limited worldview and get 5-9fps?

Idea: Only show Premium-Plus-Plus users the full picture; that would reserve precious server processing and increase the overall server throughput for ALL users.

ETA: I didn't say it was a GOOD idea.

Edited by Love Zhaoying
  • Haha 3

8 minutes ago, Katherine Heartsong said:

I also agree that the blur, while not the best, isn't as bad as suggested. I enjoy the technical posts of both @animats and @Henri Beauchamp a lot; I get to learn things. But isn't the key issue here ...

Would a normal user rather sort of see everything and have a sense of a much wider world at 30fps, or keep the current limited worldview and get 5-9fps?

I think a 1024m draw distance is far too much unless you are at a vantage point such as a tower, as in the picture. At ground level, 256m is more than adequate in my opinion. As I've said previously, I keep my viewer at 256m most of the time and get frame rates in excess of 30fps unless there are lots of avatars, so that's doable. If it helps with rendering avatars then it would seem like a good idea. Perhaps two impostors could be generated, one with and one without blur. The user could then select which ones they want via a viewer setting.

Another thing to consider is selecting impostor distance based on the user's camera altitude relative to the view.

  • Like 1

19 minutes ago, Love Zhaoying said:

Conceptually, in RL you get a blur when things are far away.

You may also get blur-like effects due to air movements (e.g. heated air over asphalt roads), dust, fog and other particles in the air that cause light to bend.

So some amount of blur over dusty places like cities is probably okay.

But I agree with others that a cutoff at 128m is a bit harsh on the eyes.

I think there are a few things to consider here:

1) Speed of rezzing/loading the whole static scene.

After all, even with a 256m draw distance, the viewer needs to ingest a huge amount of textures and meshes to render the full view. That's okay if I am stationary and just want to gaze into the distance. But depending on network speeds (e.g. "just" 100 MBit/s) it might take some time to flush all that data down the pipe for rendering, so some intelligent z-depth sorting and blur effect might help to get a nicer effect while loading happens (see the sketch at the end of this post).

2) Speed of rezzing/loading while on the move

A huge draw distance is nice, but I am just fine with a few impostors in the distance when moving around faster.

3) FPS preserving bandaids

If the sheer amount of textures, meshes and other data overwhelms the viewer and/or the network, there should be some optimization to rescue a somewhat usable framerate.

It would be interesting to have some statistics about bandwidth consumption for typical usages with different draw distances and scenarios. Busy club, empty roads on deserted mainland, typical island, New Babbage with few avatars in sight. Of course this differs when the area is already cached, so cache hit rate would also be interesting. Not sure if LL has any kind of data about that. I assume they have a vague idea about bandwidth usage per user.
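For point 1 above, here is a minimal Rust sketch of what distance-prioritized loading could look like: a heap keyed on inverted camera distance, so the nearest assets download first. The `Fetch` type and the idea of a single global queue are assumptions for illustration, not any viewer's actual design.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

/// A pending download, prioritized by camera distance (hypothetical type).
struct Fetch {
    asset_id: u64,
    distance_m: f32, // distance from camera to the object using this asset
}

impl PartialEq for Fetch {
    fn eq(&self, other: &Self) -> bool {
        self.distance_m == other.distance_m
    }
}
impl Eq for Fetch {}
impl PartialOrd for Fetch {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Fetch {
    fn cmp(&self, other: &Self) -> Ordering {
        // BinaryHeap is a max-heap, so invert: smaller distance = higher priority.
        other
            .distance_m
            .partial_cmp(&self.distance_m)
            .unwrap_or(Ordering::Equal)
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(Fetch { asset_id: 1, distance_m: 412.0 });
    queue.push(Fetch { asset_id: 2, distance_m: 8.5 });
    queue.push(Fetch { asset_id: 3, distance_m: 96.0 });
    // Fetches pop nearest-first: asset 2, then 3, then 1.
    while let Some(f) = queue.pop() {
        println!("fetch asset {} at {} m", f.asset_id, f.distance_m);
    }
}
```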

  • Like 1

1 hour ago, Love Zhaoying said:

Conceptually, in RL you get a blur when things are far away.

You are confusing blur caused/introduced by optical defects/imperfections (such as the limited depth of field of a camera lens) with the eye's resolution (the minimum angle between two objects/points needed for the eye to tell them apart).

The resolution of the human eye allows you to tell apart two objects 30cm from each other at 1km (0.3m / 1000m = 3×10⁻⁴ radians, about one arcminute). If you see them blurred, you need glasses (at my age, I alas do need them, but with them I see just as well in the distance as I did 30 years ago).

In SL, we are speaking about 512m max draw distance... There should simply be no blur at all on rendered objects larger than 15cm. Period.

1 hour ago, Kathrine Jansma said:

You may also get blur like effects due to air movements (e.g. heated air over asphalt roads), dust, fog and other particles in the air that cause light to bend.

This is a different (environmental) aspect, which is already possible to simulate in SL (via Extended Environment settings, particles, etc.), but it should not be "hard-coded" in the renderer...

1 hour ago, Kathrine Jansma said:

1) Speed of rezzing/loading the whole static scene.

After all, even with a 256m draw distance, the viewer needs to ingest a huge amount of textures and meshes to render the full view. That's okay if I am stationary and just want to gaze into the distance. But depending on network speeds (e.g. "just" 100 MBit/s) it might take some time to flush all that data down the pipe for rendering, so some intelligent z-depth sorting and blur effect might help to get a nicer effect while loading happens.

An acceptable kind of "blur" would be to use lower resolution textures for distant (>256m) objects; that kind of blur won't affect the contours of the objects at all, and contours play a big role in how "sharp" an image appears to the eye. The viewer is already doing this via texture LODs, but maybe pushing down one lower LOD for far textures won't be too noticeable: then you would save some bandwidth, but only for objects you are not getting closer to (since those will need the full LOD at some point anyway). I have some doubts about the benefits in terms of bandwidth.
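A sketch of how simple such a policy could be: a distance-based mip bias. The function name and the cutoffs are made-up assumptions for illustration, not anything an SL viewer actually ships.

```rust
/// Extra mip levels to drop for distant objects (hypothetical policy).
/// Each dropped mip quarters the number of texels fetched.
fn texture_lod_bias(distance_m: f32) -> u32 {
    match distance_m {
        d if d < 256.0 => 0, // full detail, as the viewer already does
        d if d < 512.0 => 1, // one mip down
        _ => 2,              // two mips down: 1/16 the texels
    }
}

fn main() {
    for d in [50.0f32, 300.0, 800.0] {
        println!("{} m -> drop {} mip level(s)", d, texture_lod_bias(d));
    }
}
```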

1 hour ago, Kathrine Jansma said:

2) Speed of rezzing/loading while on the move

A huge draw distance is nice, but I am just fine with a few impostors in the distance when moving around faster.

The viewer had this kind of option in its code (it loaded higher LODs more slowly the faster the camera was moving): it was never used and, when enabled, it made things much worse in terms of rezzing time for objects you were getting closer to as you moved... It finally got removed.

1 hour ago, Kathrine Jansma said:

3) FPS preserving bandaids

If the sheer amount of textures, meshes and other data overwhelms the viewer and/or the network, there should be some optimization to rescue a somewhat usable framerate.

Texture resolution does not impact frame rates in the least. On the other hand, using better optimized (or slightly simplified) meshes would definitely help (the well known mesh LODs issue in SL).

 

  • Like 1

I'm amazed people survive Mainland with such long draw distances. Whenever I crank up my cam to take a deep 360 or something, I have to spend the next five minutes derendering garbage floating in the sky.

Admittedly, I barely skimmed the thread so I may have missed it: does this envision a mechanism to "curate" the impostor images—and the 3D scene—to hide stuff like floating debris?

  • Like 2

22 hours ago, animats said:

And to flexible/rigged mesh, says one source.

That is most likely not to happen, at least in the short/mid term. There is a series of implications in how those features work at their base level that renders the Nanite conceptual process a destructive hell. I mean, they introduced the control for skinned mesh influence reduction in the editor, but still, like a decade later, that reduction results in mesh collapse in 80% of the cases... We'll see, but I won't hold my breath waiting for that to happen.


Good comments.

I'm writing this from the viewpoint of a third party viewer developer who has to deal with existing SL content.

First, looking down from that tower in New Babbage is one of the hardest cases in Second Life. There's a whole city down there, with much detail, and you can see most of it from the Albatross's docking tower. At street level, most objects are hidden (occluded) by buildings. So let's look at what current viewers can do.

LOD factor 3, Nvidia 3070 with 8 GB, 32 GB RAM, gigabit fiber networking. A good gamer PC, more than most SL users have. About US$2000.

[Image: babbage128m.jpg]

Where did everything go? Where's my glorious vista of the city? This is a 128m draw distance. 70 FPS.

Looks bad from up here on the tower, but very playable. The standard New Babbage environment has been turned off for this test, so we see a rare sunny day in New Babbage.

Let's try 256m.

[Image: babbage256m.jpg]

That's better. But we can't see City Hall, the big tower. 256m. 20 FPS.

Frame rate has dropped to a barely acceptable level. We can see about two blocks. At ground level, 256m looks pretty good.

 

[Image: babbage0.jpg]

Let's see the whole city. 1024m draw distance. 4 FPS. Minutes of loading time.

This is unplayable, but looks great. The usual problem of staged SL photography.

So these are the options we have right now. Can we do better?

 

 

Edited by animats
  • Like 4
  • Haha 1

Let's look at some options for improving this:

  • Throw hardware at the problem. There are better GPUs. If you have to ask how much they cost, you can't afford them. Also, most of that compute power is going into drawing background objects you can barely see. This is not cost-effective.
  • Pre-render impostor images. This works if you can keep the viewpoint from getting too close to a flat image. That's what I've discussed above. We don't have to blur the impostor images. I've been doing that to keep image size down, but it's not essential. "Too close" is an issue. 256m looks pretty good. 128m is pushing it.
  • Identify distant hero objects and give them more graphics resources. This is common in games. The distant castle on the hill that's important to gameplay may be manually assigned more resources. In SL, the viewer has no idea what's important, but it can at least tell what's big. So the viewer might pick 5 to 10 distant but large linksets and render them at a higher LOD than usual (see the sketch after this list).
  • Identify lesser objects and cut their resources down. If something is small and distant, it might just be rendered as a single-color mesh cube, scaled to the original dimensions. This works well for buildings. The color is the "1x1 texture", or what you get when you reduce the texture down to 1x1, which is a single color with an alpha value. The GPU can draw a huge number of little cubes without problems.
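Here is a minimal Rust sketch of those last two ideas together: picking the N largest distant linksets as heroes, and averaging a texture down to its "1x1" color for everything else. The `Linkset` type and the size-based selection rule are illustrative assumptions, not any viewer's actual API.

```rust
/// Hypothetical summary of a distant linkset; not a real viewer type.
struct Linkset {
    name: &'static str,
    bounding_radius_m: f32,
}

/// The "1x1 texture": average all texels down to one RGBA color.
/// `pixels` is assumed to be tightly packed RGBA8.
fn one_by_one(pixels: &[[u8; 4]]) -> [u8; 4] {
    let n = pixels.len().max(1) as u32;
    let mut sum = [0u32; 4];
    for p in pixels {
        for c in 0..4 {
            sum[c] += p[c] as u32;
        }
    }
    [0, 1, 2, 3].map(|c| (sum[c] / n) as u8)
}

/// Pick the N largest distant linksets as "hero objects"; everything
/// else would be drawn as a scaled single-color cube.
fn pick_heroes(mut distant: Vec<Linkset>, n: usize) -> (Vec<Linkset>, Vec<Linkset>) {
    distant.sort_by(|a, b| b.bounding_radius_m.total_cmp(&a.bounding_radius_m));
    let lesser = distant.split_off(n.min(distant.len()));
    (distant, lesser) // (heroes at higher LOD, lesser as cubes)
}

fn main() {
    let scene = vec![
        Linkset { name: "city hall tower", bounding_radius_m: 60.0 },
        Linkset { name: "mailbox", bounding_radius_m: 0.6 },
        Linkset { name: "clock tower", bounding_radius_m: 35.0 },
    ];
    let (heroes, cubes) = pick_heroes(scene, 2);
    println!("heroes: {:?}", heroes.iter().map(|l| l.name).collect::<Vec<_>>());
    println!("cubes:  {:?}", cubes.iter().map(|l| l.name).collect::<Vec<_>>());
    println!("avg color: {:?}", one_by_one(&[[200, 80, 40, 255], [100, 40, 20, 255]]));
}
```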

I often use Grand Theft Auto V as an example, because it's a very successful big-world game and they use all kinds of cheats to cram a big chunk of Los Angeles into a fast-moving game. So here's a close-up from the GTA V scene I posted above.

[Image: gtablur1.jpg]

A very close look at a GTA V background. This is from the same image posted above.

Now, this is definitely blurred. But notice the nice hard edges on the large buildings. Those are "hero objects" that received special handling. They get parallax effects against the background as the viewpoint moves, which distracts from the fact that the background is flat.

This is all standard game technology.

Edited by animats
  • Thanks 3

SL has its own special problems, of course. Some comments on those.

1 hour ago, Qie Niangao said:

I'm amazed people survive Mainland with such long draw distances. Whenever I crank up my cam to take a deep 360 or something, I have to spend the next five minutes derendering garbage floating in the sky.

Admittedly, I barely skimmed the thread so I may have missed it: does this envision a mechanism to "curate" the impostor images—and the 3D scene—to hide stuff like floating debris?

That's a problem. One reason I use New Babbage and Bellisseria as examples is that both prohibit low-altitude sky junk. I posted a long draw range image a few weeks ago, taken from SL's highest mountain, and what ought to be a beautiful vista looks awful.

So I'm considering a sky junk filter. If it's not attached to the ground, it doesn't appear in impostor images. (Technical definition: "attached to the ground" means that the object's bounding sphere reaches the terrain, or intersects some other bounding sphere that is attached to the ground. This is a recursive definition. If there's a chain of objects down to the ground, it's not sky junk.) Tall towers will still show, but stuff just floating, no.
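A small Rust sketch of that recursive definition, written as an iterative flood fill. The `Obj` record and the `touches_terrain` flag are assumptions for illustration; a real viewer would get these from its scene graph.

```rust
/// Hypothetical object record: bounding-sphere center and radius.
struct Obj {
    center: [f32; 3],      // x, y, z (z up)
    radius: f32,
    touches_terrain: bool, // bounding sphere reaches the ground mesh
}

fn spheres_intersect(a: &Obj, b: &Obj) -> bool {
    let d2: f32 = (0..3).map(|i| (a.center[i] - b.center[i]).powi(2)).sum();
    let r = a.radius + b.radius;
    d2 <= r * r
}

/// Flood-fill "attached to the ground" through chains of intersecting
/// bounding spheres. Anything not reached is sky junk and would be
/// left out of the impostor image.
fn grounded(objs: &[Obj]) -> Vec<bool> {
    let mut attached: Vec<bool> = objs.iter().map(|o| o.touches_terrain).collect();
    let mut changed = true;
    while changed {
        changed = false;
        for i in 0..objs.len() {
            if attached[i] {
                continue;
            }
            if (0..objs.len()).any(|j| attached[j] && spheres_intersect(&objs[i], &objs[j])) {
                attached[i] = true;
                changed = true; // keep iterating until a fixed point
            }
        }
    }
    attached
}

fn main() {
    let objs = [
        Obj { center: [0.0, 0.0, 5.0], radius: 6.0, touches_terrain: true },   // tower base
        Obj { center: [0.0, 0.0, 14.0], radius: 4.0, touches_terrain: false }, // tower top
        Obj { center: [50.0, 0.0, 300.0], radius: 3.0, touches_terrain: false }, // sky junk
    ];
    println!("{:?}", grounded(&objs)); // [true, true, false]
}
```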

1 hour ago, Henri Beauchamp said:

(the well known mesh LODs issue in SL).

Yes. A "crap LOD" detector is needed. A first cut is just to look at the triangle counts. If they look like High=2000, Lowest=2, that's a crap LOD item. I'm tempted to have Sharpview replace those with a single-color cube if distant, or push them up to High LOD if close. It's not a great solution, but it's something.
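As a sketch, that triangle-count first cut could be as simple as the following. The 500-triangle and 1% thresholds are my assumptions, not tested values.

```rust
/// Triangle counts per LOD, highest to lowest (hypothetical input).
struct LodTris {
    high: u32,
    med: u32,
    low: u32,
    lowest: u32,
}

/// First-cut "crap LOD" heuristic from the post: flag meshes whose
/// lowest LOD is a tiny fraction of the highest.
fn is_crap_lod(l: &LodTris) -> bool {
    let _ = (l.med, l.low); // a fuller check would examine these too
    l.high >= 500 && (l.lowest as f32) < (l.high as f32) * 0.01
}

fn main() {
    let suspicious = LodTris { high: 2000, med: 500, low: 60, lowest: 2 };
    let reasonable = LodTris { high: 2000, med: 1000, low: 400, lowest: 100 };
    println!("{} {}", is_crap_lod(&suspicious), is_crap_lod(&reasonable)); // true false
}
```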

(In Firestorm, you can set "LOD factor" to 0, which shows everything at lowest LOD. Most large objects hold up OK. Some don't. Large buildings and trees are the worst offenders, because they blight a whole area. Try that, at least with your own stuff. Look for trees that turn into a bare trunk, and see-through buildings with loose triangles. If it's yours, please fix it. If you're a landlord, talk to your tenants. Thank you.)

  • Like 2
  • Thanks 3

1 hour ago, animats said:

Good comments.

I'm writing this from the viewpoint of a third party viewer developer who has to deal with existing SL content.

First, looking down from that tower in New Babbage is one of the hardest cases in Second Life. There's a whole city down there, with much detail, and you can see most of it from the Albatross's docking tower. At street level, most objects are hidden (occluded) by buildings. So let's look at what current viewers can do.

LOD factor 3, Nvidia 3070 with 8 GB, 32 GB RAM, gigabit fiber networking. A good gamer PC, more than most SL users have. About US$2000.

Similar hardware here (Ryzen 7900X, RTX 3070), also fiber (but Babbage is definitely not a challenge for the network). Graphics pushed to the max except no shadows, and water reflections set to terrain and trees "only", with an additional 2.0x multiplier on mesh LODs (i.e. LOD=3 for everything but meshes, LOD=6 for meshes). I get 50+ fps at 1024m draw distance with the Cool VL Viewer (current release), and everything rezzed pretty fast: with a cleared cache (i.e. not "cheating", so fetching everything from the network), half a minute for 256m, and barely two more minutes after increasing DD to 1024m...

But better than words, here is a video.

Frankly, Babbage is not a challenge to render...

Note: you will notice I took great care to use almost exactly the same FOV as yours, so that we render the same objects...

Edited by Henri Beauchamp

28 minutes ago, animats said:

If they look like High=2000, Lowest=2

Lowest = 2 is pretty much a "requirement", because it allows you to dramatically lower the LI (and the upload cost with it)... I'd blame LL for a poor algorithm for mesh LI scaling.

That said, a mesh with three well designed higher LODs and a 2-triangle lowest LOD will still rez just fine at all but very far distances (where it will pretty much vanish): the lowest LOD is rarely ever seen rendered, especially considering how much we must push the RenderLODFactor setting to get truly crappy meshes to render properly at all.


Cool VL Viewer is doing very well here. I can't get Firestorm much above 10 FPS with comparable settings, even with shadows off. I should download Cool VL Viewer and try that, too.

Here's the same scene in Sharpview. One region only. This discussion is all about what to do about distant regions, and I haven't implemented that yet.

[Image: babbagesv0.png]

New Babbage in Sharpview, single region only. 60 FPS. Shadows on. GPU is 43% busy. About 200m to the region boundary from this point.

(Note the funny things sticking upwards out of the flying submarine atop the central tower. That's an old sculpt object, and it does something strange with sculpt coordinates that Sharpview doesn't emulate properly. I know of three such objects in-world, all from the pre-mesh past of SL.)

So that's where I am now. Resolution for near objects is fine. This discussion is all about what to draw in the distance that won't slow things down much. The goal is to be able to see distant landmarks so you have a sense of place. Nice for sailing, flying, and to a lesser extent driving. Or just walking around a city.

It's possible to use more resources drawing the stuff that's barely visible than the close-up stuff. That's no good; you sacrifice local detail and responsiveness for a minor improvement in distant stuff.

I've commented on GTA V's backgrounds. Turns out they are not flat images. There are custom-built low-poly models of each large area in that game. This is something that would be hard to do for SL, although not impossible. You can generate simplified meshes of entire regions. Someone had some of those on display at SL20B. I've made some myself. Those are just the SL map projected onto an elevation map. It's possible to do much better, but it comes close to copy-botting if it gets too good. So I've been planning on using flat images, which are permitted per the SL terms of service.

  • Like 3

3 hours ago, animats said:

This discussion is all about what to draw in the distance that won't slow things down much.

Depending on how your viewer is coded and whether it can do true multi-threaded rendering or not, instead of blurring distant objects or using 2D impostors, I'd push their rendering to a low frame rate rendering thread. It would be fine to render a far building, say, once every second, and reuse the result in the final, "real time" render.

You could vary the threaded frame rates depending on how close objects are and how fast you are getting closer to or farther from them as the camera moves around...

You could use several such threads too, e.g. one for 128m to 256m at 5fps (with fully textured objects), one for 256m to 512m at 2fps (with very low texture LOD) and one for 512m to 1024m (no texture, just a suitable "averaged" color) at 1fps...
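A Rust sketch of that scheduling idea, using the example bands above against a 60 fps main loop. The `Band` struct and the frame intervals are illustrative assumptions; a real viewer would render each band into its own offscreen layer on a worker thread rather than just logging.

```rust
/// Hypothetical distance band: objects in [near, far) are re-rendered
/// into a cached offscreen layer every `interval_frames` frames.
struct Band {
    near_m: f32,
    far_m: f32,
    interval_frames: u64,
}

fn main() {
    // The bands suggested above (5 fps, 2 fps, 1 fps at a 60 fps main loop).
    let bands = [
        Band { near_m: 128.0, far_m: 256.0, interval_frames: 12 },  // ~5 fps
        Band { near_m: 256.0, far_m: 512.0, interval_frames: 30 },  // ~2 fps
        Band { near_m: 512.0, far_m: 1024.0, interval_frames: 60 }, // ~1 fps
    ];
    for frame in 0u64..61 {
        for (i, b) in bands.iter().enumerate() {
            if frame % b.interval_frames == 0 {
                // A real viewer would kick off a render of layer i here.
                println!("frame {frame}: refresh band {i} ({}-{} m)", b.near_m, b.far_m);
            }
        }
        // ...then composite the cached band layers behind the real-time scene.
    }
}
```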

Quote

I've commented on GTA V's backgrounds.

IMHO, you are too geared towards reproducing what AAA games are doing to solve their own issues within their own constraints.

You know perfectly well that SL is not an AAA game with pre-calculated and pre-rendered assets, very limited camera "paths", limited environment settings, etc... The solutions for SL cannot be the ones used in AAA games, even if the latter can be inspiring when looking for more suitable algorithms.

What you suggest would imply LL putting into place specific servers for pre-rendering impostors, backgrounds, etc.: while it would be doable, I do not see LL doing such a thing, since it would cost a lot in terms of server power (and here, we are speaking about servers capable of 3D rendering, unlike what SL's servers are right now): way too costly, especially now, with AWS as the server host...

Edited by Henri Beauchamp
  • Like 2

4 hours ago, Henri Beauchamp said:

IMHO, you are too geared towards reproducing what AAA games are doing to solve their own issues within their own constraints.

You know perfectly well that SL is not an AAA game with pre-calculated and pre-rendered assets, very limited camera "paths", limited environment settings, etc... The solutions for SL cannot be the ones used in AAA games, even if the latter can be inspiring when looking for more suitable algorithms.

What you suggest would imply LL putting into place specific servers for pre-rendering impostors, backgrounds, etc.: while it would be doable, I do not see LL doing such a thing, since it would cost a lot in terms of server power (and here, we are speaking about servers capable of 3D rendering, unlike what SL's servers are right now): way too costly, especially now, with AWS as the server host...

I only understand the technical discussions of @Henri Beauchamp, @animats and others from a very high-level point of view (and many thanks for all the info and back-and-forth!), but I'd like to jump in here from a serious UX point of view.

You are correct in saying that AAA games, with their pre-rendered assets etc., keep being brought up as something SL isn't. And yet, this is what people who play immersive, realistic-looking 3D games want and expect these days. They don't care how it's done technically; whether it's GTA, The Sims, World of Tanks, Cyberpunk 2077, Life by You (releasing next month), or SL, they want the environments in games set in a real-world type of setting that humans are familiar with to act and look as real as possible.

That's the expectation these days.

When new users (or the press, etc.) look at the default that is SL and make fun of the graphics, or see that what LL shows on its homepage (I posted said pics yesterday) is not the reality they get when they arrive in-game (even with the barely passable looking NUX), they are disappointed. Their expectations are not met, and that's part of why so many won't stick with the "game" that is SL.

So, in the end, it doesn't matter how we get SL looking more like a modern game, any more than I care how Penn and Teller perform their illusions, but we cannot keep falling further and further behind the curve on how SL looks if we want to attract and retain new users. The tech needs updating, but how? Unless that's not LL's plan, and just keeping a steady 40K users who put up with all of the rickety tech limitations that have been showing their age over the past 20 years is good enough.

  • Like 4

I don't mind this effect in games (nor would I mind it in Second Life) for one important reason: games that use this effect are mostly third-person view, and so is Second Life.

It's almost a subconscious thing: if you can see your own character, you are viewing the world as either a camera operator or a god. Controlling the avatar (and even calling it an avatar) doesn't quite overcome the feeling of being some sort of observer. Given this, the effect looks okay in outdoor areas because you're used to seeing something similar from real-world cameras, in movies, etc. Sometimes it's a little too strong in games, but camera-style effects, artifacts and so on feel quite normal with a third-person camera view.

I like the idea personally. I think SL has become quite an 'indoor' environment in general, though, and I'm not sure there's a whole lot of demand for the giant open world view that this enables. It's a shame, and maybe there are good reasons people have migrated to the smaller, more confined spaces in SL that this effect tries to mitigate, but just look around mainland... it's mostly deserted. Most people don't explore SL on this kind of scale and don't seem to have much desire to even observe its vastness.

 

Edited by AmeliaJ08
  • Like 3

2 hours ago, Katherine Heartsong said:

And yet, this is what people who play immersive, realistic-looking 3D games want and expect these days.

And I am not against this or denying it.

However, what I am saying is that, to achieve the AAA-games-like quality they expect, you just cannot use the same techniques as the ones used in those games, because of the specific constraints of SL (or rather the lack thereof, since you can pretty much upload anything into SL and pile it up in an infinite number of ways around the corners of a sim).

2 hours ago, Katherine Heartsong said:

we cannot keep falling further and further behind the curve on how SL looks if we want to attract and retain new users. The tech needs updating, but how?

One word: Vulkan.

Once the renderer is converted to use Vulkan, many more things will be possible, especially related to parallel (threaded) rendering, like I suggested in my post above...

Apparently (from what I understood of what we were told during the SL20B meetings), LL also started to implement some on-the-fly conversion techniques so as to be able to render SL on mobiles: this could also benefit the "standard" viewer. For example, I'd envision some on-the-fly mesh optimization, so that the viewers (mobile and desktop alike) could render simpler objects (which could then be used, at least at far distances, in the desktop viewer)...

Edited by Henri Beauchamp

6 hours ago, Katherine Heartsong said:

And yet, this is what people who play immersive, realistic-looking 3D games want and expect these days. They don't care how it's done technically; whether it's GTA, The Sims, World of Tanks, Cyberpunk 2077, Life by You (releasing next month), or SL, they want the environments in games set in a real-world type of setting that humans are familiar with to act and look as real as possible.

That's the expectation these days.

Yes. That's the whole point of all this.

4 hours ago, Henri Beauchamp said:

One word: Vulkan.

Sharpview uses Vulkan. It helps rendering speed considerably, which is why Sharpview consistently gets 60FPS on SL content on a reasonably good GPU. It's not magic. It doesn't help with GPU memory space, for example. I could go into more detail, but it gets boring. IM me if you really want to talk Vulkan. The general idea is that you don't want to use most of your graphics resources rendering stuff so far away it can barely be seen. Hence levels of detail. Most of what I'm talking about here involves coping strategies for bad lower LOD models.

4 hours ago, Henri Beauchamp said:

LL also started to implement some on-the-fly conversion techniques so as to be able to render SL on mobiles: this could also benefit the "standard" viewer.

That's the right way to do it. The region impostor images I've been talking about here are one example of that sort of thing. That's the simplest form and easy to do. The next step up is automatically making low-poly models of large areas. Or of avatars, a subject I've discussed elsewhere. That's harder.

  • Like 3
  • Thanks 1

  • 2 weeks later...

What's striking about all this is that SL, with long draw distances, is putting more rendering work into the distant stuff than the near stuff. If we can somehow get past that, the big world thing will work better.

Here's more of how GTA V does it. (I use GTA V as an example because, although 10 years old, it remains a popular big-world game that looks good. Its tricks aren't too complicated to consider as improvements to SL.)

So, an alternative to flat sim surrounds is 3D sim surrounds, like the off-sim 3D terrain people add to their islands.

[Image: vinewoodhillslowpoly.jpg]

Rolling terrain, the easy case. This is one mesh with one texture. There are tools for SL to make sculpts like this. Someone was showing examples like this at SL20B.

[Image: littleseoullowpoly.jpg]

Urban terrain, the hard case. Vertical surfaces are hard. Those buildings, and those orange cranes at top center, will be hard to do automatically. It matters. Distant hard edges against the sky need to look right. That's how people navigate visually.

Ten years ago, this was almost certainly done by hand. Today, we see Google Earth and Microsoft Flight Simulator doing this automatically from aerial imagery and depth info. Anyone know of a tool for this? If we had a library of low-rez SL region models like this, updated once a week or so, the compute load for looking at the big world would come way down. Beyond draw distance, the viewer would switch over to these low-rez models.

  • Like 1
  • Thanks 1

I can sort of see ways to do this. Suppose we had orthographic images with depth from straight down, and angled down from north, south, east, and west. That's something I could generate with a version of Sharpview. Now we have some 2D images with depth, like aerial photographs. Google Earth puts that kind of data together very well. You can look down into narrow canyons between buildings in Manhattan. The canyon has to be really narrow before they are unable to generate a good side image of the building.

Now the trick is to crunch that down into a low-poly 3D model. There's open source photogrammetry software, usually used for making 3D models from drone images, that can do this. The "low-poly" part may be hard. Most of the software for this generates point clouds. We want the faces of buildings to be a very small number of triangles, with the outer edges correctly aligned.
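As a starting point, here is a minimal Rust sketch of the easy half: turning one straight-down orthographic depth image into a grid mesh, one vertex per sample and two triangles per cell. The hard part described above (collapsing this into a few well-aligned triangles per building face) would be a decimation pass on top; the function and its shape are mine, for illustration only.

```rust
/// Turn an orthographic depth image into a simple grid mesh.
/// `depth[y][x]` is height in meters; `cell_m` is ground spacing.
/// Returns (vertices, triangles as vertex-index triples).
fn depth_to_mesh(depth: &[Vec<f32>], cell_m: f32) -> (Vec<[f32; 3]>, Vec<[u32; 3]>) {
    let (h, w) = (depth.len(), depth[0].len());
    let mut verts = Vec::with_capacity(w * h);
    for y in 0..h {
        for x in 0..w {
            verts.push([x as f32 * cell_m, y as f32 * cell_m, depth[y][x]]);
        }
    }
    let mut tris = Vec::new();
    for y in 0..h - 1 {
        for x in 0..w - 1 {
            // Two triangles covering the cell with corners i, r, d, d+1.
            let i = (y * w + x) as u32;
            let (r, d) = (i + 1, i + w as u32);
            tris.push([i, r, d]);
            tris.push([r, d + 1, d]);
        }
    }
    (verts, tris)
}

fn main() {
    // 3x3 depth samples: flat ground with a small bump in the middle.
    let depth = vec![
        vec![0.0, 0.0, 0.0],
        vec![0.0, 2.5, 0.0],
        vec![0.0, 0.0, 0.0],
    ];
    let (v, t) = depth_to_mesh(&depth, 4.0);
    println!("{} verts, {} tris", v.len(), t.len()); // 9 verts, 8 tris
}
```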

  • Like 2
  • Thanks 1

Some background.

This is prep work for my experimental Sharpview viewer. (New release 0.4.6 is now available. If you want the download password, IM me. It's a tech demo at this point. Move and view in one region only. Avatars draw as blocks. Use a low-value alt.) I'm trying to work through the technical issues of making SL-type metaverses go fast with high detail. I'm trying out various exotic ideas, some too risky for a mainstream viewer. Sharpview is a development platform for that. It's all new code, 100% safe Rust, with an architecture quite different than the C++ viewers. Think of this as R&D work on how to build a metaverse for real.

Currently, I draw one full region at High LOD at a steady 60 FPS. This is like running at maximum LOD factor. The Vulkan-based graphics and concurrent rendering are fast enough to handle a full region at full resolution. Textures are sized to be one texture pixel per screen pixel, and constantly loaded and unloaded as needed to maintain that. Priority queuing makes that work smoothly. The annoying problem where it takes 30 seconds to a minute for some texture right in front of you to load does not appear in Sharpview.
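A rough Rust sketch of that one-texel-per-screen-pixel sizing, under simplifying assumptions (a square face viewed head-on, a pinhole camera with vertical FOV). The function and its parameters are illustrative, not Sharpview's actual code.

```rust
/// Given a face's world size, its distance, and the camera, compute which
/// mip level of the stored texture delivers about one texel per screen pixel.
fn needed_mip(full_res: u32, face_size_m: f32, distance_m: f32,
              screen_h_px: f32, vfov_rad: f32) -> u32 {
    // Projected height of the face on screen, in pixels.
    let px = face_size_m / (2.0 * distance_m * (vfov_rad / 2.0).tan()) * screen_h_px;
    // Each mip halves resolution; never go negative, never below 1x1.
    let needed = px.max(1.0);
    let mip = ((full_res as f32 / needed).log2().floor()).max(0.0) as u32;
    mip.min(full_res.trailing_zeros())
}

fn main() {
    // A 4 m face with a 1024px texture, on a 1080p screen with a 60-degree FOV.
    for d in [10.0_f32, 100.0, 400.0] {
        let mip = needed_mip(1024, 4.0, d, 1080.0, 60f32.to_radians());
        println!("at {d} m: fetch mip {mip} ({}px)", 1024 >> mip);
    }
}
```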

I can't continue drawing meshes at High LOD once more than one region is drawn. Not without requiring some giant 24GB GPU or something. That's not the right answer, anyway. It's inefficient to use all those resources on distant background objects. We need to save drawing capacity for close-ups of overdressed, or underdressed, avatars.

Now, the standard SL viewer solution is the LOD system and the draw distance. The LOD system suffers from too much content with bad lower LODs. The draw distance limit means that you can never see far-away objects. Or even, if you have to reduce the draw distance, objects that are not that far away. Both of these approaches often look bad.

I'm trying to come up with something better that will work with current SL / OS content. I've discussed better LOD generators, flat images of distant regions used as sim surrounds, and low-poly 3D models of entire regions. All these things are used in modern games. Some combination of those approaches should be sufficient to pull this off.

The hard part for most of this is automation. SL doesn't have an art director. Major game studios have a small army of technical artists who use Unreal Editor and similar tools to clean up game assets created by the creation artists. Polishing of game assets for AAA titles was, until recently, mostly manual, but the level of automation is increasing. I've been watching the advances in technology there to see what can be adapted for SL content. That's why I write about such things as silhouette protection in the Unreal Engine 5 level of detail generator, advances in convex hull decomposition, and related technologies.

There are enough people on these forums with game dev backgrounds that it's worth discussing this in detail. Some of the criticisms are good, such as the dislike of blur. Can we do all this at a high enough resolution that blurring stays below screen pixel size? On high end machines, maybe. We may have to cut corners on lower end hardware.

So that's how all this fits together.

  • Like 2
  • Thanks 1

More impostor-related experiments. There may be an app for this.

[Image: basicterrain1.jpg]

Reconstruction of real world terrain using Open Drone Map. 2D images go in, 3D model comes out. Mesh reduced to 3100 triangles in Blender. Area is about a quarter of an SL region. Compare this with the GTA V images above.

Creating a low-rez 3D model of distant regions from SL screenshots is the same problem as reducing drone images to models like this one. There's open source software for doing this for drone images. It's easier for SL, because we know exactly where the camera is.

This looks promising. Lots of work goes into processing drone imagery. It's a busy field with software being actively developed.

Anyone into photogrammetry or drone data reduction? If so, let's talk.

  • Like 1
  • Thanks 1
