
Making the big world look big - some progress




I've talked about sim-scale impostors in the past. Here's what I'm talking about.

[Image: entire sim impostor, blurred]

Beyond draw distance, the world would look like this. You'd have a sense of place, more immersion, and the world would feel big.

The idea is to show a 2D image, sort of like a skydome surround, showing the SL world beyond the draw limit. See for miles and miles. Something I'm working on for my experimental viewer, which is  still a long way from usability, but can take pictures like this.

Technical details:

  • Near objects would be rendered normally. This only shows objects that would normally be out of range.
  • Connected regions only, of course. If your region is isolated, there are no distant sims to see.
  • The image is blurred on purpose, partly as a sort of depth-of-field effect, and partly because the illusion is not perfect, as with a skydome.
  • In actual use, the image would be warped a bit to adjust for the viewpoint. When you're flying, you'd see the "above" images. Imagine it looking like Google Earth, for SL.
  • These images would be taken once a week or so by a land survey bot, so they'd often be a little out of date.
  • Take a day image and a night image.
  • Only ground-level pictures would be saved. Sky platforms would not get this effect.
  • The above image was made with my own experimental viewer. It's the Vallone region, rendered in isolation, as seen from the east.

The un-blurred version.

[Image: entire sim impostor, unblurred]

To blur, or not to blur, that is a question.

[Image: entire sim impostor, another viewpoint]

Another viewpoint. 45 degrees to the left of the previous one.

These images need to be taken with an isometric camera to get the clipping right. One image every 45 degrees, plus one from above, probably.
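
To make the capture scheme concrete, here's a minimal sketch (hypothetical Rust, not from any existing viewer code) of how a viewer might pick which of the eight 45-degree captures, or the top-down one, to show for a given camera direction. The angle convention and the cutoff for switching to the top-down image are assumptions for illustration.

```rust
/// Hypothetical sketch: choose one of eight azimuth impostor images
/// (captured every 45 degrees) or the top-down image, based on the
/// camera's direction relative to the distant region.
fn pick_impostor_image(cam_azimuth_deg: f64, cam_elevation_deg: f64) -> usize {
    // If the camera is looking steeply down (flying high), use the
    // "above" image, stored here at index 8.
    if cam_elevation_deg < -60.0 {
        return 8;
    }
    // Otherwise snap the azimuth to the nearest 45-degree capture.
    // Images 0..8 correspond to 0, 45, 90, ... 315 degrees.
    let wrapped = cam_azimuth_deg.rem_euclid(360.0);
    ((wrapped / 45.0).round() as usize) % 8
}

fn main() {
    // Looking roughly east (90 degrees), level with the horizon.
    assert_eq!(pick_impostor_image(92.0, 0.0), 2);
    // Looking almost straight down: use the top-down capture.
    assert_eq!(pick_impostor_image(10.0, -80.0), 8);
    println!("impostor selection sketch OK");
}
```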

Comments?

 


Nice idea(s) !

I thoroughly hate blur, however. Even though, given my age, I wear glasses, I always see things hyper-crisply when I look at anything, be it in the distance or right before my nose.

Blur is a defect of camera lenses and is in no way how you normally see things with your eyes (your eyes adjust to the subject distance, so there is never any blur, except in peripheral vision): why the hell people think it is so cool to reproduce camera defects is totally beyond me!

Edited by Henri Beauchamp

4 hours ago, Maitimo said:

How does it handle alpha glitching?

Good question. What I want to try is taking pictures that have a depth channel. Then, when they're composited into the world, real in-world objects appear in front of or behind the backdrop, depending on distance. This isn't a perfect illusion, but it beats looking off into emptiness.
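
A minimal sketch of that depth-channel compositing idea (hypothetical Rust; the `Sample` struct and names are invented for illustration): per pixel, the nearer of the live 3D fragment and the impostor sample wins, so in-world objects can sit in front of or behind the backdrop.

```rust
/// Hypothetical sketch of compositing an impostor that carries a depth
/// channel: for each pixel, the nearer of the live 3D fragment and the
/// impostor sample wins, so in-world objects can sit in front of or
/// behind the backdrop.
#[derive(Clone, Copy)]
struct Sample {
    color: [u8; 3],
    depth: f32, // distance from the eye, in meters; f32::INFINITY = empty sky
}

fn composite(live: &[Sample], impostor: &[Sample]) -> Vec<[u8; 3]> {
    live.iter()
        .zip(impostor.iter())
        .map(|(l, i)| if l.depth <= i.depth { l.color } else { i.color })
        .collect()
}

fn main() {
    let live = [
        Sample { color: [200, 200, 200], depth: 30.0 },    // nearby building
        Sample { color: [0, 0, 0], depth: f32::INFINITY }, // nothing drawn here
    ];
    let impostor = [
        Sample { color: [90, 110, 140], depth: 900.0 },    // distant hillside
        Sample { color: [90, 110, 140], depth: 900.0 },
    ];
    // Pixel 0 keeps the live building; pixel 1 shows the impostor.
    println!("{:?}", composite(&live, &impostor));
}
```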


GTA V distant impostors. That's the goal here.

In games, levels are designed to make impostors like that work. If you look at GTA V images, you can often find the break between 3D foreground and 2D background.


Notice where fog starts.


Notice how the scenery beyond the dark freeway ramp that runs from left to right looks different.

It's a trick of design and technical art to bring this off well. Parts of SL are designed to help, by not having long sightlines. A hill, or a turn in the road, and the transition from detail to background (for SL, nothingness) is hidden.

The stuff I'm looking into is standard game technology and has been for a decade.


3 hours ago, animats said:

In games, levels are designed to make impostors like that work. If you look at GTA V images, you can often find the break between 3D foreground and 2D background.


Notice where fog starts.

 

I think I see it...

 

 

[Image attachment]


Great thread, it's about time SL offered something new here. Sim surrounds are nice, but they can't grow larger than 2048m (for lack of usable megaprims), and since view distance is capped at 1024m they hardly make sense anyway. I created a JIRA request many years ago for SL to add a "skybox" feature to textures that would, for example, let you build a space station, put a starfield texture right on the windows, and have it appear infinitely far away (when you look at it while moving). Doom from '93 could do it, so 30 years later SL should be able to do it.
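
For illustration, here's a minimal sketch (hypothetical Rust, not an SL API) of the standard trick being asked for: render the backdrop with the translation stripped out of the view matrix, so camera movement produces no parallax and the texture reads as infinitely far away.

```rust
/// Hypothetical sketch of the standard skybox trick: zero out the
/// translation in the view matrix before drawing the backdrop, so only
/// the camera's rotation affects it and it appears infinitely distant.
/// Matrices are row-major 4x4 with translation in the last column.
fn strip_translation(view: [[f32; 4]; 4]) -> [[f32; 4]; 4] {
    let mut sky_view = view;
    // Zero the translation column; only the camera's rotation remains.
    sky_view[0][3] = 0.0;
    sky_view[1][3] = 0.0;
    sky_view[2][3] = 0.0;
    sky_view
}

fn main() {
    // A view matrix with some rotation and a nonzero translation column.
    let view = [
        [0.0, 0.0, 1.0, 10.0],
        [0.0, 1.0, 0.0, 5.0],
        [-1.0, 0.0, 0.0, 2.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    let sky_view = strip_translation(view);
    // The rotation part is untouched; the camera position no longer matters.
    assert_eq!(sky_view[0][3], 0.0);
    println!("{:?}", sky_view);
}
```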


I think you're confusing the concepts of a painted matte and a background impostor.

The first, a painted matte, is scenery baked into the environment, placed so far away that the map you're playing in never lets you reach it by walking or driving your character there.

The second, a background impostor, is part of a different system of LOD groups, where an actual hierarchy of objects is needed to make full use of the feature. A connection to the player camera then changes the opacity of the objects tied to each viewing angle as the billboard rotation approaches the next angle's impostor. I have seen up to 64 impostor groups for buildings towering in background scenery; the LOD group that reaches zero opacity is turned off, so at any time there are only 2 to 4 visible planes in view, and distance fog in the middle does the rest of the work.
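
A rough sketch of that opacity cross-fade (hypothetical Rust, names invented for illustration): as the view angle moves from one captured impostor angle toward the next, the current plane fades out and the next fades in, so only a couple of planes are ever visible at once.

```rust
/// Hypothetical sketch of cross-fading between angle impostors: given
/// the current view angle and the capture spacing, return the indices
/// of the two neighboring impostors and a blend factor between them.
fn crossfade_weights(view_angle_deg: f64, step_deg: f64) -> (usize, usize, f64) {
    let a = view_angle_deg.rem_euclid(360.0);
    let count = (360.0 / step_deg) as usize;
    let lower = (a / step_deg).floor() as usize % count;
    let upper = (lower + 1) % count;
    // 0.0 = fully on the lower-angle impostor, 1.0 = fully on the next one.
    let blend = (a % step_deg) / step_deg;
    (lower, upper, blend)
}

fn main() {
    // With captures every 45 degrees, a 100-degree view sits between
    // impostor 2 (90 deg) and impostor 3 (135 deg), about 22% of the way.
    let (lo, hi, t) = crossfade_weights(100.0, 45.0);
    println!("blend impostor {lo} -> {hi} at t = {t:.2}");
}
```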


I like the general idea but think background blur should not be baked in. If you look at land/cityscape photographs, distance is inferred by haze, not blur. DOF is proportional to focal distance, and doesn't blur out the horizon until you pull focus in fairly close. If you were cam-flying, blurring the horizon would make it pop out, not blend in. My biggest complaint with skydomes and backdrops is that they're too blurry.

I just watched the TV series "The Mandalorian" and was fascinated by Disney's creation of the "Stagecraft" production workflow using "The Volume". The idea started out as a way to have the "green screens" generate scene-realistic ambient lighting, but the video wall technology advanced in resolution so quickly that they realized they could do away with post-production scene insertion entirely and film live against a real, realtime background.

For good reason, you are working in the opposite direction, replacing real 3D scenery with 2D backdrops. Even if your idea really only works well for static photography, it's attractive.


3 hours ago, Madelaine McMasters said:

I like the general idea but think background blur should not be baked in.

Agree. Making distance haze play nicely with EEP presents some problems, but you're right.
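
For reference, a minimal sketch (hypothetical Rust) of the usual exponential distance fog that would stand in for baked-in blur; the density value and haze color are just example numbers.

```rust
/// Hypothetical sketch of standard exponential distance fog, the usual
/// alternative to baked-in blur: each fragment's color is pushed toward
/// the haze color by a factor that grows with distance from the eye.
fn fog_factor(distance_m: f32, density: f32) -> f32 {
    // 0.0 = no haze (close up), approaching 1.0 = fully hazed out.
    1.0 - (-density * distance_m).exp()
}

fn apply_haze(color: [f32; 3], haze: [f32; 3], f: f32) -> [f32; 3] {
    [
        color[0] * (1.0 - f) + haze[0] * f,
        color[1] * (1.0 - f) + haze[1] * f,
        color[2] * (1.0 - f) + haze[2] * f,
    ]
}

fn main() {
    let haze = [0.7, 0.75, 0.8]; // pale blue-grey sky tint
    let hillside = [0.2, 0.4, 0.2];
    // A hillside 1 km away with a light haze density.
    let f = fog_factor(1000.0, 0.0015);
    println!("fog factor {f:.2}, hazed color {:?}", apply_haze(hillside, haze, f));
}
```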

It will never look as good as an AAA title, given the limitations of SL, but distance can look much better than it does now. The real trick is pulling this off with SL's limited draw distances. In those GTA pictures, the 3D draw distance is 300 to 400 meters. In SL, you'd like to draw the region you're in and one region beyond that in each direction, then switch to a distant impostor. That's 9 regions, which takes a lot of resources, though a gamer PC can do that now. You're never closer than 256m to a distant impostor, so that should look pretty good.

If the viewer doesn't have the resources for that, the next step down is to draw the four regions to which you are closest. When you pass the center of a region, the region ahead turns on, replacing its distant impostor, and the region behind you drops out and is replaced by one. You can get as close as 128m to a distant impostor, and you'll probably be able to tell when that happens.
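
A minimal sketch of the "four closest regions" fallback (hypothetical Rust; the grid math assumes standard 256m regions and the names are invented):

```rust
/// Hypothetical sketch of the "four closest regions" fallback: regions
/// are a 256 m grid, the 2x2 block nearest the avatar is drawn in full
/// 3D, and everything else falls back to a distant impostor. Crossing
/// the center of a region changes which neighbors are in the set.
const REGION_SIZE: f64 = 256.0;

fn four_nearest_regions(global_x: f64, global_y: f64) -> [(i64, i64); 4] {
    // Shift by half a region so flooring picks the 2x2 block whose shared
    // corner is closest to the avatar.
    let base_x = ((global_x - REGION_SIZE / 2.0) / REGION_SIZE).floor() as i64;
    let base_y = ((global_y - REGION_SIZE / 2.0) / REGION_SIZE).floor() as i64;
    [
        (base_x, base_y),
        (base_x + 1, base_y),
        (base_x, base_y + 1),
        (base_x + 1, base_y + 1),
    ]
}

fn main() {
    // Avatar just past the center of region (4, 4): the block ahead is live.
    println!("{:?}", four_nearest_regions(4.0 * 256.0 + 130.0, 4.0 * 256.0 + 130.0));
    // Avatar just before the center: the block behind is still live.
    println!("{:?}", four_nearest_regions(4.0 * 256.0 + 120.0, 4.0 * 256.0 + 120.0));
}
```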

There's another, related trick.

[Image: monocolor mode]

Monocolor mode. No textures. Each face has a single color. Not too bad in the distance, bad in close-up.

This is the color you get if you reduce each texture to 1x1 size. The plan is, when collecting impostor images, to also collect and store these average colors. Then, when it's time to draw a new region, the viewer downloads a small file with the average colors of most of the objects in the region and uses those instead of the current grey.

This is for areas out near the limit of draw distance, but not far enough away for the distant impostor. It allows showing more distant objects with fewer resources and less network usage, as a midway step between full textures and grey. Add a bit of haze, and it should work.

Yes, distant objects that change color will show the wrong color until the texture loader catches up. That still beats grey.
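
A minimal sketch of the 1x1 reduction (hypothetical Rust): averaging all the texels of a texture gives the single color monocolor mode would show for each face until real textures arrive. A real implementation would probably average in linear color space; this ignores gamma for brevity.

```rust
/// Hypothetical sketch of "reduce each texture to 1x1": averaging all
/// the texels of a texture gives the single color that would stand in
/// for the face until the real texture loads. Gamma is ignored here; a
/// real implementation would average in linear space.
fn average_color(texels: &[[u8; 3]]) -> [u8; 3] {
    let n = texels.len().max(1) as u32;
    let mut sum = [0u32; 3];
    for t in texels {
        for c in 0..3 {
            sum[c] += t[c] as u32;
        }
    }
    [(sum[0] / n) as u8, (sum[1] / n) as u8, (sum[2] / n) as u8]
}

fn main() {
    // A tiny 2x2 "texture": two brick-red texels and two mortar-grey ones.
    let texels = [[170, 60, 50], [170, 60, 50], [120, 120, 120], [120, 120, 120]];
    // Averages to a dull brick color, good enough for a distant wall.
    println!("{:?}", average_color(&texels));
}
```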

This trick exploits a property of human vision - color resolution is much less than greyscale resolution. Old-style analog TV exploited this. You could only have about 10 full range color changes across the entire width of the screen. So nobody wore brightly colored stripes on camera.

All this is to create big-world ambience. You can see what's in the distance, and, as in real life, you can go there and take a closer look.

(This is a design discussion for an experimental viewer I've been working on for the last year. It's a long way off, but there is steady progress.)


15 hours ago, animats said:

Monocolor mode. No textures. Each face has a single color. Not too bad in the distance, bad in close-up.

This is the color you get if you reduce each texture to 1x1 size. The plan is that, when collecting impostor images, also collect and store these average colors. So, when it's time to draw a new region, download a small file with the average colors of most of the objects in the region, and use those instead of the current grey.

Can you use the average color value stored in the metadata field of the JPEG2000 image header?

I don't like haze because it's usually caused by pollution.  I am sure this varies geographically.

"During summer, the central and eastern United States often is blanketed by a murky veil of haze that may last for days. The haze shrouds the sky and dims the sun --- sometimes making it altogether disappear before the end of the day. To some, haze is so commonplace that it is assumed to be natural and is accepted as a fact of life.

But haze is, in fact, not predominantly natural, and its presence warrants attention for several reasons. For example, haze affects human health. Researchers at New York University Medical Center have determined that the acid droplets found in haze are hazardous to exposed tissues of the lungs and breathing passages. They also concluded that when haze occurs in association with smog --- the brownish, photochemically-enhanced form of air pollution derived from automobile exhausts --- the destructive nature of that phenomenon significantly is increased.

Haze also affects aviation. While not posing an overwhelming risk to jet aircraft, haze shrouds visual cues important to pilots of small planes. When widespread, haze may significantly reduce visual range over thousands of square miles. Haze, in fact, likely was a factor in the fatal crash of the small plane piloted by John F. Kennedy, Jr., near Martha's Vineyard, Massachusetts in July 1999." -- Haze over the Central and Eastern United States (noaa.gov)


On 3/7/2022 at 4:46 AM, Ardy Lay said:

Can you use the average color value stored in the metadata field of the JPEG2000 image header?

Yes, but if you have to read each image file header to find that out, it takes too long to start up a new region. You need a "hint file" of colors, either as a local cache or on a "hint server".

A useful way to think about SL content is that everything can change, but mostly it doesn't. You can afford to cache some wrong guesses about distant stuff, as long as they're not visually jarring.
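
A minimal sketch of such a hint cache (hypothetical Rust; the structure and names are invented for illustration): average colors keyed by texture ID, loadable from one small file per region, consulted before the full JPEG2000 assets arrive.

```rust
use std::collections::HashMap;

/// Hypothetical sketch of a local "hint" cache: average colors keyed by
/// texture UUID, consulted before the full textures arrive. Entries may
/// be stale; as noted above, a slightly wrong guess about distant
/// content beats grey.
struct ColorHintCache {
    hints: HashMap<String, [u8; 3]>,
}

impl ColorHintCache {
    fn new() -> Self {
        Self { hints: HashMap::new() }
    }

    fn insert(&mut self, texture_id: &str, color: [u8; 3]) {
        self.hints.insert(texture_id.to_string(), color);
    }

    /// Returns the hinted color, or the viewer's placeholder grey.
    fn color_or_grey(&self, texture_id: &str) -> [u8; 3] {
        *self.hints.get(texture_id).unwrap_or(&[128, 128, 128])
    }
}

fn main() {
    let mut cache = ColorHintCache::new();
    cache.insert("aabb-1234", [145, 90, 85]); // hypothetical texture UUID
    println!("{:?}", cache.color_or_grey("aabb-1234")); // hinted brick color
    println!("{:?}", cache.color_or_grey("ffee-9999")); // unknown: grey
}
```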

 

On 3/4/2022 at 1:47 PM, Maitimo said:

I think I see it...

Yes. There are other tricks in those GTA V pictures. Look at the one of the city with the freeway ramps. Note that beyond the left to right ramp, you're looking at an impostor image. (You can get closer and the detailed version will load, which is the difference between an impostor and a backdrop or skydome.) Notice the lighting. Lights are white dots in the impostor image. Although the sky and near view show dusk, the impostor image represents night. It looks like they have a day impostor and a night impostor, but not more than that. I was thinking noon, midnight, dawn, and sunset images, but that may be more than is necessary. As long as the sky and sun look as good as they do now, distant impostors probably don't need more than two time versions.
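
A minimal sketch of picking between just two time-of-day captures (hypothetical Rust): the live sky and sun still come from the normal renderer; only the distant backdrop switches based on sun elevation.

```rust
/// Hypothetical sketch of choosing between a day and a night impostor
/// capture from the sun's elevation, as suggested above.
#[derive(Debug, PartialEq)]
enum ImpostorTime {
    Day,
    Night,
}

fn impostor_for_sun(sun_elevation_deg: f64) -> ImpostorTime {
    // Treat anything above the horizon as daytime; everything else,
    // including dusk and dawn, falls back to the night capture.
    if sun_elevation_deg > 0.0 {
        ImpostorTime::Day
    } else {
        ImpostorTime::Night
    }
}

fn main() {
    assert_eq!(impostor_for_sun(35.0), ImpostorTime::Day);
    assert_eq!(impostor_for_sun(-5.0), ImpostorTime::Night);
    println!("day/night impostor selection sketch OK");
}
```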

There are a lot of other tricks used in games for distant content, but many rely on pre-analyzing the scene to determine which sightlines are possible, and on designing scenes to avoid long sightlines to high detail. That's not much of an option for an SL viewer. Unreal Engine 5 has "regions". Here's a technical discussion. They give the game developer a lot of control over when regions turn on, which isn't that helpful for SL. Interestingly, rather than a draw distance measured from the user, they use a turn-on distance measured from the region, so low-detail regions could turn on at longer distances than high-detail ones. Interesting to think about in terms of impostor policy.

This, as I've said before, is design discussion for parts of my own viewer that haven't been started yet. However, there's nothing prohibiting someone from doing this in LL-based viewers.

