Physically based rendering, and all that


animats

Physically-based rendering has come up quite a bit recently at Creator User Group. Since I have an experimental renderer that uses PBR, I'm posting some pictures as examples of what this looks like. (This is a long, long way from being usable as a viewer. It's just the "view" part at this point.) This is mostly to support technical discussions.


Port Babbage docks.

So what's different here? This is high dynamic range rendering, which looks good for brightly lit scenes. The only light here is the sun, and some default ambient light which is not physically-based.

The translation from SL colors is simple, perhaps too simple. SL "diffuse" becomes PBR "albedo". SL "specular" color is averaged to a grey value, which becomes PBR "metallic".
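
The translation above can be sketched in a few lines (a hypothetical illustration of the mapping described, not the renderer's actual code; `sl_to_pbr` and its argument conventions are invented here):

```python
def sl_to_pbr(diffuse_rgb, specular_rgb):
    """Naive SL-to-PBR channel translation, as described above.

    SL "diffuse" is passed through unchanged as PBR albedo; the SL
    specular colour is collapsed to its grey average, which stands
    in for the PBR "metallic" scalar. Channels are floats in 0..1.
    """
    albedo = diffuse_rgb
    r, g, b = specular_rgb
    metallic = (r + g + b) / 3.0
    return albedo, metallic
```

A fully white specular map reads as fully metallic and a black one as a pure dielectric; collapsing a tinted specular colour to grey necessarily discards the hue.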


Food, glorious food. Nice light and shadow effects.


Storefronts, Babbage Palisade.


Animats cafe, Vallone.


Dessert cafe, Port Babbage. No sun, no lights, no joy. This looks like SL viewers with ALM off.

So this is a demo of basic, backwards-compatible PBR. Items to note:

  • Lighting really matters. Well lit, things look great. Unlit, things look dead. This is the most important thing to know about PBR.
  • No environmental mapping here yet. SL rendering has environmental mapping, which means shiny objects with a material environment value above 0 reflect the "sky". The "sky" is just a single built-in image. If you make a shiny object, give it a high "environment" value, and put it indoors, it will still show a "sky" reflection, not the room. This is called a "static environment". The next step up is a "baked environment map", where a pre-built low-rez picture of each area is constructed, somehow. Many games do this. It's hard for SL, because there's no "level build" phase in making SL content, where global things like that get done. Maybe it could be done occasionally, like pathfinding mesh updates. A fully dynamic environment map is possible, looks great, but increases rendering effort substantially. That's when mirrors start working. (Mirrors are easy to implement but can double rendering load.)
  • The high dynamic range rendering here doesn't have auto-exposure yet. The "tone mapping" is fixed. What will happen eventually is that as you go from dark places to light places, over a second or two your "eyes" will adjust and the screen light level will rebalance, as in real life. Many games, especially Cyberpunk 2077, do this. It's subtle and very effective in making rendering look real.
  • All this is what's possible without changing or breaking content. It's possible to do more with content in new formats. Especially for skin, which needs a "subsurface reflection" texture layer to capture the way that light makes skin glow a bit. SL "Facelights" are an attempt to get that effect, but since they don't respond to illumination, they only look correct in specific lighting situations.
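
The auto-exposure behaviour described in the tone-mapping bullet can be modelled very simply: track the average scene luminance, ease an exposure value toward the one that would map it to middle grey over a second or two, then apply a fixed tone curve. A toy sketch (hypothetical; the class and constants here are illustrative, not the demo's code):

```python
import math

class AutoExposure:
    """Toy eye-adaptation model: exposure eases exponentially toward
    the value that maps average scene luminance to middle grey."""

    def __init__(self, key=0.18, adapt_seconds=1.5):
        self.key = key                    # target "middle grey" level
        self.adapt_seconds = adapt_seconds
        self.exposure = 1.0

    def update(self, avg_luminance, dt):
        """Advance adaptation by dt seconds; returns current exposure."""
        target = self.key / max(avg_luminance, 1e-4)
        alpha = 1.0 - math.exp(-dt / self.adapt_seconds)
        self.exposure += (target - self.exposure) * alpha
        return self.exposure

def reinhard(luminance):
    """Fixed Reinhard-style tone map: HDR luminance -> 0..1."""
    return luminance / (1.0 + luminance)
```

Walking from a dark interior into sunlight, `avg_luminance` jumps, `target` drops, and the exposure drifts down over `adapt_seconds` — the "eyes adjusting" effect.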

So this provides some concrete examples, as PBR for SL is discussed.


22 minutes ago, animats said:

A fully dynamic environment map is possible, looks great, but increases rendering effort substantially. That's when mirrors start working. (Mirrors are easy to implement but can double rendering load.)

I've seen it done before in other games that the env map is fully dynamic, but rendered at a much lower resolution using a lower LOD - as a hypothetical, would it be possible to do something similar, and (bonus points) implement some sort of upscaling algorithm (FSR, DSR etc) to upscale the image into a higher resolution?

Maybe some sort of hybrid approach would be better, wherein static objects use the baked env maps and only physical objects use the fully dynamic map.

(It's also possible I'm being stupid and that approach wouldn't take enough load off to be worth doing, but I figured I'd put it in type anyway).


2 minutes ago, Jenna Huntsman said:

baked env maps

This brings up a whole other area - when do you do the stuff that needs to be done infrequently? This is where Second Life is so different from game development. In game development, artists and level designers create objects. Then there's a polishing step, where people using tools such as Unreal Editor tighten up the content. They reduce mesh complexity, create lower levels of detail, create shadow maps, lighting maps, and environment maps. This used to be a manual process, but today, it's mostly automated. So when someone changes the content during game development, the polishing step can be re-done automatically.

Second Life lacks much of that. We put raw creator content into the world, and expect the viewer to cope. The viewer doesn't have much time to do that, and doesn't have time to build global summary assets, which it couldn't share with other viewers anyway.

SL is starting to do some infrequent work. Bakes on Mesh is an example. When you change clothes, all the stacked texture layers are composited into one image. So a job that used to be done on every frame is now done only when you log in or change clothes. This is progress.
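
The bake itself is just ordinary alpha-over compositing of the stacked layers, run once at change-of-clothes time instead of every frame. A minimal per-pixel sketch (hypothetical; the real bake service works on whole textures and more channels):

```python
def alpha_over(under, over):
    """Composite one straight-alpha RGBA pixel over another.
    Channels are floats in 0..1."""
    ur, ug, ub, ua = under
    o_r, o_g, o_b, oa = over
    out_a = oa + ua * (1.0 - oa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)

    def blend(u, o):
        return (o * oa + u * ua * (1.0 - oa)) / out_a

    return (blend(ur, o_r), blend(ug, o_g), blend(ub, o_b), out_a)

def bake_layers(layers):
    """Flatten a bottom-to-top stack of layers into one result."""
    result = (0.0, 0.0, 0.0, 0.0)
    for layer in layers:
        result = alpha_over(result, layer)
    return result
```

The point is not the blending math, which is standard, but that the loop runs once per outfit change rather than once per frame.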

The next logical step in that direction is not just baking on meshes, but baking the meshes themselves. Combine most of the meshes of an avatar into one big mesh when you get dressed. Discard all the triangles that you can't see, merge meshes and faces, and generate a game-type avatar with levels of detail that looks just like it did, but is far cheaper to render. There's also an opportunity to adjust clothing fit at this time. Roblox's new avatar system does this. (Roblox is not just blocks any more.) This is hard to do, but could fix two of SL's big problems: too many avatars in one place kill the frame rate, and clothing fitting is a pain.
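
The mechanical core of such a bake — concatenating the worn meshes into one vertex/index buffer — is straightforward; the hard parts are the ones glossed over here (hidden-triangle removal, seam welding, LOD generation). A minimal sketch with hypothetical names:

```python
def merge_meshes(meshes):
    """Combine a list of (vertices, indices) meshes into one.

    vertices: list of (x, y, z) tuples; indices: flat triangle list.
    Each mesh's indices are shifted by the number of vertices already
    emitted, so they keep pointing at their own vertices.
    """
    all_verts, all_indices = [], []
    for verts, indices in meshes:
        base = len(all_verts)               # index offset for this mesh
        all_verts.extend(verts)
        all_indices.extend(i + base for i in indices)
    return all_verts, all_indices
```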

Another infrequent task is generating the world map. That could be extended to generating 2D pictures of regions from multiple directions to be used as background impostors. See the mountains in the distance. See the land across the sea when you're sailing. (See all the sky junk for miles around. We're going to need a sky junk filter.) I've previously posted some pictures from GTA V showing that they do that.

If SL had an "infrequent work" system, it could be used to generate environment maps. But that's for the future. For now, I'm looking at what can be done within the existing architecture.


5 hours ago, animats said:

SL is starting to do some infrequent work. Bakes on Mesh is an example. When you change clothes, all the stacked texture layers are composited into one image. So a job that used to be done on every frame is now done only when you log in or change clothes. This is progress.

The next logical step in that direction is not just baking on meshes, but baking the meshes themselves. Combine most of the meshes of an avatar into one big mesh when you get dressed. Discard all the triangles that you can't see, merge meshes and faces, and generate a game-type avatar with levels of detail that looks just like it did, but is far cheaper to render. There's also an opportunity to adjust clothing fit at this time

It doesn't even have to be every time you change clothes.  This can be cached with saved outfits (caches invalidated when outfit modified). More rapid clothing changes and less processing. Lots of opportunity for caching at multiple levels of detail here.  

Additionally, baking the meshes themselves across the outfit makes it harder to rip complete original meshes from passers-by (it doesn't help when the avatar wearing the mesh is participating in the ripping process, though). Might help protect creators.
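
The caching idea could look something like keying the baked result on a hash of the outfit's contents, so any modification changes the key and the stale bake simply stops being hit. A hypothetical sketch (`OutfitBakeCache` and its interface are invented for illustration):

```python
import hashlib

class OutfitBakeCache:
    """Cache baked outfits keyed by a content hash, so editing the
    outfit changes the key and the stale entry is never hit again."""

    def __init__(self):
        self._cache = {}
        self.bakes = 0          # how many real bakes we had to do

    def _key(self, outfit_items):
        digest = hashlib.sha256()
        for item in sorted(outfit_items):   # order-independent key
            digest.update(item.encode())
        return digest.hexdigest()

    def get_bake(self, outfit_items, bake_fn):
        key = self._key(outfit_items)
        if key not in self._cache:
            self._cache[key] = bake_fn(outfit_items)
            self.bakes += 1
        return self._cache[key]
```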


1 hour ago, Quistess Alpha said:

Just out of curiosity, when is the nav-mesh updated? Would it make sense to do other region baking at the same time?

I think not, as when someone is actively working on the nav mesh (a rare occurrence) they are making a batch of changes and rebaking to see the combined effect; also, rebaking in the middle of testing pathfinding would be a royal pain in the butt.

I've spent a lot of time playing with the nav mesh back when I had a region. The intent was to have roaming events that would slowly catch up with avatars, but it was an absolute nightmare to make work in a real environment, and it would randomly just break, even with very clearly defined wide roads and everything else off limits.

There is also a huge script time penalty for having even a single nav mesh agent active on a region. I actually put a few of them in an enclosed environment and managed to impact scripting so heavily that the neighbor's "entire 5000 LI build in a temp rezzer" just flickered in and out of existence.

It's fine for test setups on empty regions with a few prims and limited scripting, but the moment you get to a more complicated, actually lived-in SL space, it's garbage.

 


2 hours ago, Quistess Alpha said:

Just out of curiosity, when is the nav-mesh updated?

When you click on the squiggly icon in the toolbar. Anyone who can edit any object in the region can do this. An update takes a few seconds.

The navmesh and the active pathfinding system are separate. Even if you don't have any pathfinding characters, you can still do llGetStaticPath, and it's cheap. My NPCs do that; they are not using LL-type pathfinding, but use the navmesh as a starting guess for paths.

If you want to talk pathfinding, we should do that in another topic. Here I wanted to discuss PBR rendering, which has been talked about a lot lately, but not seen much.

 

 


33 minutes ago, animats said:

When you click on the squiggly icon in the toolbar. Anyone who can edit any object in the region can do this. An update takes a few seconds.

Yeah, I didn't mean to drive off topic; I was more suggesting/wondering if a similar system would make sense for baking an environmental low-poly model or anything else.


A PBR implementation would also require a texture type filter: for instance, normal maps being flagged as such, so they get uploaded and converted to JPEG2000 as 16-bit, which is required for normal maps that actually work as intended. And perhaps also scalar-value texture packing as used in Unreal Engine, with the greyscale images each packed into a texture channel to get ambient occlusion, roughness and metalness in one single color texture. The good thing is that these textures don't even need to be 16-bit. Optionally, a height map could also be placed in the alpha channel to be used for parallax displacement, as seen in the Unity engine.
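
The channel-packing scheme described (Unreal's usual "ORM" layout) is mechanically trivial per pixel: one greyscale map per colour channel, with an optional height map in alpha. A hedged sketch of the convention (function names invented here):

```python
def pack_orm(ao, roughness, metalness, height=None):
    """Pack greyscale scalars into one pixel: R = ambient occlusion,
    G = roughness, B = metalness, optional A = height for parallax.
    Inputs are floats in 0..1."""
    pixel = (ao, roughness, metalness)
    return pixel + (height,) if height is not None else pixel

def pack_orm_image(ao_map, rough_map, metal_map):
    """Zip three greyscale images (flat lists) into one RGB image."""
    return [pack_orm(a, r, m) for a, r, m in zip(ao_map, rough_map, metal_map)]
```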


You underplay the excellent work and truly impressive achievements that you have made starting from scratch and building from first principles. What you have as a result is a very nice looking, vivid scene, a great illustration of how with a more capable renderer things can look quite striking.

The lighting is truly a massive uplift and the single biggest failing in the current render. 

However, I'm far from convinced that calling this PBR makes it PBR. Though perhaps, some of what you show here can help reduce the painful visual conflicts that will result if you simply place a proper PBR system alongside the poorly lit and lifeless SL system. In a sense, what you have is akin to my shader setup I use in Blender. A principled shader being fed with the existing SL maps in a way that mostly works inside the PBR concept but is not physically correct.

I am intrigued by your spec colour averaging to grey, as I would imagine it should break a lot of metal in SL, where the spec colour, combined with the inverse roughness channel in the normal map alpha, is used to give a present-day approximation of metalness, resulting in the "glint" of gold.

I have to agree that the most important step is to fix the abhorrent mess that environmental reflection mixing causes; this alone accounts for why so many attempts to use materials in SL today wind up with things looking either wet or plastic coated, rather than leather, or polished. We've talked recently about the poor resolution of the cubemaps, but it is not even that. I have been running an FS with higher resolution cubemaps for both the skymap and the shinymap, and whilst I cannot rule out oversights in my hacked-together test, it is my belief that the generation is too poor, e.g. the skymap rendering is sampling from the skydome but does not appear to sample the clouds. The shinymap is just a weird dirty grey to get around the fact that, as you note, we cannot differentiate indoor from outdoor, occluded from open.

Great work, I love seeing how you are innovating within each cycle that you show us.

 


It's a limited form of PBR, yes. I'm using Rend3 and WGPU underneath, and they're not finished yet. They don't yet have a full set of PBR features. No light maps, no environment maps, etc.

What WGPU does is provide a common interface to DirectX (Windows), Vulkan (Windows and Linux), and Metal (Apple). It basically makes them all look like Vulkan. I can compile on Linux and generate a Windows program. So this is a way to get multiple platform support.

On top of WGPU is Rend3. Rend3 is mostly about memory allocation. The Vulkan/Metal interface gives you access to GPU memory, and something has to manage that, and interlock it for multiple threads. That's what Rend3 does.

WGPU and Rend3 are about 50%-70% implemented. They are not ready for prime time, and it's too early to bet a must-work product on them, which is what I've told the Lindens who have expressed interest. Those projects are getting more attention, more users, and more developers, so progress is being made. Another year or so, and this approach will be more viable. For now, I have a tech demo.

My main points here are 1) PBR can work on existing SL content, 2) the look is not a perfect match, but it's not that far off, 3) it will never be worse than rendering with ALM off, and is usually better, and 4) future content built for PBR, probably uploaded in the now-standard GLTF format, can use full PBR features alongside old content. Think of this as a demo of a migration path to improved technology.


That's the thing Joe ~ I've been fairly mute in mentioning it myself until now, but now that the Lindens are loudly tooting their horns about "We're going to be implementing PBR" seemingly without any understanding of what that actually entails ~ I am starting to get increasingly pedantic about what PBR means, and Beq is as well.  PBR by definition isn't just 'a set of maps' and 'a reflection model' and 'light is calculated in lumens' ~  When someone sticks a "PBR" label on something, they are promising to the world that:
 

"If I feed this system the PBR standardized values for glossy varnished wood, and shine 800 lumens of soft yellow light on it, it will look as close to it as it possibly can to a wooden surface with a 60 watt bulb shining on it in Real Life."

"If I feed it the system the PBR standardized values for a polished gold material in 20,000 lumens per square meter, to simulate broad sunlight and I snap a screenshot of that render with an Shutter Speed , ISO etc, to match a Real Life camera, and I compare it to an actual photo taken with a camera at that rating, of a gold ring, in broad sunlight where the light-meter for the photo registers 20,000 lumens, it's going to match up."  I'll be able to composite those two images together and it has a prayer of looking believable.

PBR ~ at its core ~ is an equation. I give you a known data value for a material ~ and I add it to a known render environment, and I get a pretty picture that matches how it should look in the real world.  The promise is ~ that in "this PBR world we've created":

2 ( accurate PBR texture data ) + 2 ( the accurate rendering environment ) == 4 ( the correct pretty picture )

The promise of PBR is that 2+2 = 4, it always will, and that's what makes it PBR.

 

If a system has the "PBR" label, it means that if I take these values and cram them into the input of this lighting model, I'm going to get known results.  Rend3 is a PBR system: if you feed it correct values and correct lighting information, it will produce those known results.  However, when it comes to SL, we don't have light measured in lumens, and the texture input data you're using is back-converted: Diffuse, Normal, Spec data that is littered with pre-baked lighting information ((which, by the way, to echo Beq's sentiment ~ I am so so so very impressed that you were able to do. The adaptability you display is continually astounding, both with your pathfinding project and this project.)), lit by SL sun and SL lighting info, which are entirely guesswork numbers. Looking at what the PBR system of Rend3 spits out when you feed it nonsense inputs can be fun!! ~~ Beautiful even!  But it's not "Implementing PBR for Second Life".  We don't have a reflection system to reflect the surroundings, we don't have light measured in lumens; all we're doing is feeding the Rend3 PBR systems a "mystery meat" data dump and seeing what it does with it.  It lacks the promise of equivalence, which by its very definition means it's not PBR.

I would never bother to correct you on a detail so minute were it not for the fact that the Lindens are now looking and saying "Well Joe implemented PBR for Second Life, so we can too", when the fundamental promise that the data and math of PBR offer ~ "wood on a sunny day in RL is going to look like wood on a sunny day in SL" ~ is entirely absent.  The way the Lindens are talking about implementing "PBR" right now is basically "yes, what you see in the Substance Painter PBR lighting environment and what you see in SL will be totally different, but we're going to call it 'PBR' anyway."  That breaks the fundamental mathematical promise of PBR.

The Lindens offering to "implement PBR without doing a lighting and reflections overhaul" is tantamount to saying that:

2 + Y = N, where the values of Y and N are somewhere between 0 and infinity. N might be somewhere near 4 if you're lucky, but it might be 6 or 15 or 0.002, because the environment is nonstandardized, so who @#)% knows?  They don't get to call this "PBR"; it breaks the fundamental promise of PBR, and calling it such is misleading at best, and false advertising and prosecutable in court at worst.

What you're doing is the opposite ~ you have the Rend3 environment, so:

X + 2N = X + 2N

At least your equation is correct, but what X is ~ is entirely a mystery.  You can feed actual PBR data (2) into X and get 4, but since you're using SL data as inputs, by definition you don't know when it will actually be 2, so the promise is broken as well.

Neither model offers the required solidity of 2+2=4

If 2+2 sometimes equals 5 then that's not PBR, and I'm getting somewhat exhausted trying to explain that to the Lindens.

I'm terribly sorry to bring that exhausted irritation to your doorstep, as I really do admire the work you're doing.  It's fantastic! It looks amazing!  You're cramming SL data through a Rube-Goldberg device you built so that on the other end it produces pretty pictures.  The Rube-Goldberg device in and of itself is fascinating; the fact that it makes pretty pictures is frankly astounding.  It's a magic trick of the highest order.  But it's not PBR.  The fact that you keep calling it that makes my life difficult when I have to try and explain to the Lindens that when they're talking about "doing half of a PBR project", that's not PBR either.


I will add to Polysail's legitimate concern a simple question: what would PBR bring at all to SLers at large?...

I mean, we already got ”Extended Environment” crammed down our throats when no one ever asked for it in the first place (*), so I'm a ”little” worried that we would equally get (a so-called) PBR imposed on us with a negative performance impact (or higher hardware requirements, which is not exactly the same thing but just as bad), for a disputable ”benefit” (if any).

Here is what I get today without PBR, just with the Cool VL Viewer v1.29.0.3, all graphics settings pushed to the max, imposed local environment/time and scene gamma of 1.5 (to try and match your PBR screenshot lighting) and with Classic Clouds (for a better steampunk ambience :P).

[screenshot: Port Babbage rendered in the Cool VL Viewer v1.29.0.3]

--------------------------------------------------

(*) And because EE had been unreasonably rushed out, it has taken months for it to become acceptable in performance terms (not to mention the many render glitches and discrepancies, a few of them still unsolved), compared to Windlight (thanks to the improvements that went into the ”performance viewer” project; it must be noted, however, that those improvements could just as well have benefited the Windlight renderer, which would then have remained faster).


Going to join the pedantry train here (sorry in advance!), but accuracy is important when it comes to lighting -

1 hour ago, polysail said:

soft yellow light

3200 kelvin, 0 Δuv, assuming the source is a tungsten-halogen filament lamp. (Translated to RGB in Rec. 709 space, which is what SL uses, that's <255,190,121>.)

Incandescent lamps are slightly warmer, at ~2800 kelvin, 0 Δuv; ideally 2856K (also known as CIE Illuminant A; in SL terms, that's <255,178,99>).

1 hour ago, polysail said:

simulate broad sunlight

Sunlight can be really tricky to simulate as its colour temperature isn't fixed - it can range from 2400K ('horizon' sunlight) to ~6500K (overcast afternoon sunlight); though in the film industry direct sunlight is often assumed to be 5600K for simplicity. (For accuracy's sake, I'm using 0.002 Δuv, as daylight does not lie directly on the Planckian locus; in SL terms, this translates to <255,240,228>.)
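
For reference, RGB values in this ballpark can be generated from a colour temperature with a well-known empirical blackbody curve fit (Tanner Helland's approximation). This sketch is hypothetical and approximate — not necessarily how the numbers above were produced — and it ignores Δuv and proper chromatic adaptation:

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate a blackbody colour temperature as 8-bit RGB using
    Tanner Helland's published curve fit. The constants are empirical;
    treat the output as a rough approximation, not colorimetric truth."""
    t = kelvin / 100.0
    if t <= 66:
        r = 255.0
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        r = 329.698727446 * (t - 60) ** -0.1332047592
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda x: int(max(0.0, min(255.0, round(x))))
    return clamp(r), clamp(g), clamp(b)
```

At 3200 K this fit lands close to the <255,190,121> quoted above; treat it as a quick approximation, not colorimetry.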

 

Anyway, nonetheless a very interesting writeup :)


Elizabeth Jarvenen (polysail) is right. I'm hammering SL data not designed for PBR into a half-implemented PBR renderer, and I know this.

Consider the alternatives:

  1. We're stuck with the OpenGL model of diffuse color, specular color, and lights from 0 to 1.0 intensity, because old content won't work with a modern renderer. So nothing can be done.
  2. Old content has to be manually rebuilt using modern versions of Blender and Maya which do proper illumination, reflection, and materials, uploaded in GLTF format, and rendered with a proper PBR renderer, yielding images that match the renderers in Blender and Maya. That's what Sansar did. Sansar failed.

Those are the extreme cases, neither of which is useful.

What I'm suggesting is this:

  • A real PBR renderer in the viewer, with a reflection model that at least understands outdoors and indoors.
  • Uploads optionally in GLTF format, with a few more layers, at least subsurface reflection for skin and maybe clearcoat color for cars.
  • Upload both mesh and materials together, so what you see in Blender/Maya is what you get in world.
  • High dynamic range lighting, where sunlight is far brighter than most other lights, and automatic "tone mapping" to adjust scene brightness as you move from dark to bright areas. (Look at this video of Cyberpunk 2077 to see this done well. There's a huge lighting range as the viewpoint walks through the city, going from bright sunlight to dark interiors and back. It looks right.)
  • Reasonable behavior for existing content.

What I'm demonstrating here is a proof of concept for "reasonable behavior for existing content".

Second Life's great asset is all that detailed content, developed over two decades, creators who know how to make it, and a culture that appreciates it. I'm proposing engineering solutions which preserve that content while allowing forward progress.


5 hours ago, animats said:

We're stuck with the OpenGL model of diffuse color, specular color, and lights from 0 to 1.0 intensity, because old content won't work with a modern renderer. So nothing can be done.

We shouldn't blame OpenGL for these shortcomings; these are entirely down to historical choices made in SL, for valid reasons that have been undone by time. OpenGL has many issues, but take a look at Blender: it is OpenGL and has absolutely no issues at all with rendering PBR, nor does it have issues with HDR. We have an awful lot of old cruft built around OpenGL, and we are apparently hamstrung because Apple made unilateral choices to undermine open standards (even though the actual number of Apple users is less than those on Windows 7/8).

 

5 hours ago, animats said:

A real PBR renderer in the viewer, with a reflection model that at least understands outdoors and indoors.

Yep, no quibbles around that, though again I detach PBR from the renderer as an exclusive. Consider the principled shader, developed originally by Pixar and commonly regarded as the standard for things such as VFX. It is at its heart adherent to PBR principles, but it offers simplified, artist-friendly inputs. (Pixar on Principled Shaders)

5 hours ago, animats said:

Uploads optionally in GLTF format, with a few more layers, at least subsurface reflection for skin and maybe clearcoat color for cars.

Yes, maybe... see later. If you want to pick a format then GLTF is one, and it is open and not tainted by Adobe or Autodesk stink. But it cannot be the only option, because for many, many content creators in SL, 3D tools are not the path of choice right now.

5 hours ago, animats said:

Upload both mesh and materials together, so what you see in Blender/Maya is what you get in world.

Again, as an option this makes sense, but it can only be an option, because it is not fit for purpose for many (possibly most?) workflows and use cases. But you are mixing up different things here too.

1) Bringing things in together, a single shot upload of a fully textured asset.

Not every map that comes into SL starts in Blender/Maya/Substance. In fact today the vast majority come via Photoshop/Gimp.

Consider, clothing makers. Very few creators make money out of a single item with a single colour. The time in creation is dominated by mesh design and creation, but the revenue is derived from the colour sets and designs, the multipacks and fatpacks. Very few designers will want to upload a new mesh with every colour of fatpack. In fact, if they did it would not only cost them a fortune but be a content performance disaster. 

Consider also the extensive set of creators that are texture artists, those that buy template mesh, many of whom never have a Collada asset but receive a full-perm inworld asset and full-perm maps from which to work. This is how many creators started out; it is how many still earn an SL living today. Their tool of choice is Photoshop/Gimp or another texture painting program, and their preview (WYSIWYG) space is through temporary textures inworld.

It is imperative that single-map bulk upload of some form is supported. GLTF is almost certainly not the right vehicle for this, and it worries me a lot that LL are racing ahead to pick a single means of transport (GLTF) without actually knowing what the full range of cargo that needs to be transported looks like. Like buying a new motorbike before working out how often you need to take your entire family out with you.

I have appealed for a specification to be published ahead of any deliverables but the lab's record on such things is beyond poor. We have seen time and again that an idea is formulated, code is laid down and then when the shortcomings are pointed out it is "too late, you should have said before". I really don't think we (or they) can afford for this to happen. This set of changes has to be done right, and right from the very outset. There is no halfway house on this. We need to see the list of use cases and make sure that it fits.

2) what you see in your tools is what you get in world

Yes, 150% agreed... and this has almost nothing to do with PBR explicitly. You can call the new renderer Colin or Mildred; it does not matter one iota. If our creators can make something in Blender/Substance/Maya/Daz etc. and, through a simple well-defined workflow, transfer that creation to Second Life so that their artistic visions are realised inworld, nobody gives a damn what name us rendering geeks and nerds give it.

"Wow that new Mildred update is awesome, my stuff looks exactly like I designed it."

And it is the need to be able to realise our artistic visions that drives SL creators to push the envelope all the time, and why, therefore, a halfway-house partial solution will fail to meet the actual needs of the creator community and cause more strife.

I voiced my concerns last week that the plans to create PBR assets and a lightweight PBR render pass BEFORE considering the lighting changes and other aspects are fundamentally flawed and dangerous. 

Consider the problem you have highlighted above. A lot of content looks shabby, poorly lit and dull. There is also a lot of really impressive content that works really hard to make the best of a bad lot; such content calls upon numerous features of the existing renderer, adds baked lighting and additional AO (to compensate for the very weak SSAO in the viewer). Such content, when placed into a PBR pipeline, is likely to be detrimentally impacted compared to the content that was less artistically designed (for want of a better term). We saw good evidence of this with EEP. A number of the issues that had to be "rolled back" and "toned down" were actually moving the lighting to more physically correct levels. The outcry of "OMG all my things are overexposed" was only drowned out by those who thought their shadows were now too dark. Applying your current content to a PBR shading model will work, as you have shown; improving the lighting solution will lift average content to a new level. But in doing so it will break the content that had already gone the extra mile to compensate.

So... if the Lab delivers upon the plan to roll out some halfway-house nod towards PBR that does not include proper lighting and appropriate calibrations, the first thing that will happen is people will realise that the lovely items in their PBR workflow tools still do not look the same in SL; they'll perhaps look better, but they won't be right. The next thing that happens is that they come up with workarounds and solutions to compensate for these shortcomings (baked reflections and lighting will be the immediate ones). What we've created overnight is yet another set of content that is going to be ruined by the proper PBR when it arrives. Another mob with pitchforks and torches to shout down the proper revision of the pipeline.

This is the number 1 reason I am most agitated about all this pseudo-PBR talk. 

At the end of the day, maybe 10% of creators (I am probably being generous) actually care about PBR, in the sense that they understand the PBR concept and are fluent in what it does and why. A far, far larger proportion want to see PBR because they know that's what they are working with in Blender/Substance/Maya/Cinema4D/Daz etc. These combine with the third demographic, those who don't care about the acronyms, the physics models, the lights... the "I just want my stuff to look right when I upload it!!" crowd, which in the end is the real problem. You spend hours crafting an amazing model, then painting it in glorious materials, only to have the SL lighting take all the life out of it and make it look sad.

They just need Mildred to do what they want; they don't care how. (Those of us who do care want the same, but are more hung up on the science.)

6 hours ago, animats said:

High dynamic range lighting, where sunlight is far brighter than most other lights, and automatic "tone mapping" to adjust scene brightness as you move from dark to bright areas.

Again, 150% agreed. This is needed more than any change in what maps we have. As you have shown (we are singing from the same songsheet here), by improving the shader, upgrading the lighting and modernising the rendering, we can lift up the dull, dark, muddy content we have today. Sure, PBR/Colin/Mildred/whatever will make it look even nicer and open up new possibilities, but this has to come first: partly because it will break content that already compensates, and the sooner we understand those extents the better, but partly because we all win.

6 hours ago, animats said:

Reasonable behavior for existing content.

Yes, and to be fair this is what the Lab are proposing, in a different way. The plans we have heard so far state that the "PBR" will be an additional render pass alongside the other passes we have today. Let's ignore my look of incredulity that PBR could be a single additional pass (I'm assuming that's a lost-in-translation problem); the concept being touted is to make the new rendering appear in parallel to the old. This preserves the old content's look and feel while introducing the new.

In theory that can work, though I struggle here because of the simple fact that PBR is all about the light. This means that to deliver PBR we have to have updated lighting (see above; I believe this needs to come first, not last, in the plan). If the parallel legacy pipeline retains the legacy lighting, that will be incongruous at best.

Another solution that does not have this problem is your path, where the new environmental models are used and we take a best-endeavours attitude to existing content as it gets passed through the new shaders.

What I would not like to say is whether your solution is more acceptable than the other for modern and older content alike. Which causes the least screaming? This is going to upset some apple carts; I don't think that can be entirely avoided. The real challenge here is to find the solution that results in the fewest apples rolling in the gutter.

Henri raised the important concern about performance and inclusion. I did a lot of datamining recently to analyse the nature of the FS user base. We know that many users have ALM off and that there are multiple reasons for this (an entire separate thread could spawn from that); notable among them are those with lower RAM quantities and those who are on limited (and even metered) networks. A fallback rendering solution is required. I have proposed this through the provision of an exclusive baked light map, distinct from any future albedo, which would allow those who only want the simplified view to choose to only ever render the baked light. I am sure there are many other (better) solutions to this too, but it does need to be a consideration somewhere. We've probably all forgotten the days when travel was a thing, but as the world returns to work and conferences, more and more of us will resume spending part of our time on less-than-typical networks and devices and, irrespective of the liquid-cooled AlienWare neon-meltdown super rig sitting at home, will need to connect to SL on the shoddy shared wifi of a downtown AirBnb or low-cost hotel. Fallbacks are good, if we can fit them in.

The other question, which I have asked without really getting a convincing answer, and which you are screaming from the rooftops with much the same result, is: "Should we really be doing this on top of OpenGL at all?"

OpenGL can deliver all this, but we know it has limited miles left; we have the issues around rendering performance and platform support, to name just two reasons we expect OpenGL to be replaced. Sadly, we've been talking about it for at least three years with no actual progress (you've made more progress).

If we build PBR on top of OpenGL, can we be sure that when we subsequently migrate to A.N.Other graphics API it will 100% match? Or are we going to have another round of breaking things?

Would we be far better placed to bite the bullet, replace the pipeline, "do a Joe" and upgrade the lighting and rendering (but let's not call it PBR...)? Then, once that's in place and we're all in a better, faster, lighter new world, we can introduce the new assets and new material support with full PBR (because our engine already does it).

Personally, I think that this is the best path and should have been the path taken before now. BUT consider it from the other side: is it better to give our long-suffering content creators something that helps them really push SL content forward NOW rather than later? The big problem LL have (and I do not envy them in this) is coordinating these disparate concerns and needs. Forming all the needs into a cohesive plan that can be delivered in a timeline that makes sense, shows continual progress and does not break more stuff in the process is no small task.

 

 

 


8 hours ago, animats said:

 

  1. We're stuck with the OpenGL model of diffuse color, specular color, and lights from 0 to 1.0 intensity, because old content won't work with a modern renderer. So nothing can be done.

I wouldn't be opposed to a new lighting model with PBR; a workaround to translate simple light values to the new system isn't the end of the world. We're always going to need some way to make a quick and dirty RGB source with an amount of brightness. Will it be accurate? Who cares; certainly not the people using those lights.

Seriously... there are literal day stars on sticks claiming to be tiki torches that wouldn't know which way up the inverse square law even goes. That content is not a serious blocker, and any change from translating it to a new system can't make it worse.
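
For the sake of discussion, here is a minimal sketch of what such a translation could look like. The calibration constant (how many lumens a legacy 1.0-intensity light maps to) is entirely made up for illustration, as are the function names; the point is just that a linear 0-1 value can be remapped onto a physical scale with inverse-square falloff without any per-object content changes.

```python
# Sketch: mapping a legacy SL light (intensity 0-1.0, linear falloff)
# onto a physical point light with inverse-square attenuation.
# LUMENS_PER_UNIT is a hypothetical calibration constant, not an SL value.

LUMENS_PER_UNIT = 600.0  # assume: legacy 1.0 intensity ~ 600 lumens

def legacy_to_lumens(intensity: float) -> float:
    """Map a 0-1.0 legacy intensity onto a lumen scale."""
    return intensity * LUMENS_PER_UNIT

def attenuation(distance: float, radius: float = 0.01) -> float:
    """Inverse-square falloff, clamped near the source so the
    result stays finite as distance approaches zero."""
    d = max(distance, radius)
    return 1.0 / (d * d)

# A legacy 1.0 light delivers a quarter the energy at 2 m that it does at 1 m:
lm = legacy_to_lumens(1.0)
print(lm * attenuation(1.0))  # 600.0
print(lm * attenuation(2.0))  # 150.0
```

Whether the made-up lumen value is "accurate" matters less than it being consistent, which is exactly the point made above.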

As for the rest: change the renderer and the content will look different, and we have been through that more than once. Different isn't bad, it's just different; everything we actually care about can be updated, and the platform moves forward.

The entire SL economy revolves around people constantly buying new versions of things they already own; if it didn't, there would be no need for a constant stream of events selling nothing but body suits for kupra users. Being able to upcycle some old models with a lick of new paint is a gift to creators.


I have been advocating for doing a lighting and reflections overhaul before any new content-creation-pipeline graphics changes, loudly, stubbornly, even belligerently, for quite some time. It's been met with a deafening silence from the decision makers at the Lab. The only break in that silence was Vir, after much indignant protestation on my part, asking in what I perceived to be a somewhat irritated tone, "what would updating reflections [[ and by extension lighting ]] actually help with", while we were discussing the topic of PBR at the most recent Content Creator Meeting... or something to that general effect. I don't remember the exchange word for word, so my recounting of it might be somewhat flawed...

Either way though, my advocacy for such a change has not waned.

As far as I'm concerned, there is not much to be lost from creating a new "Experimental High Definition" render mode with a fairly short list of new features:

  • Better sky interpolation and an actual reflection model that isn't "Environment Shiny overwrites Specular Color"
  • Lumen (LLumen?) based lighting values. Though we're still in SL, so there will have to be some auto-convert option from 0-1.0 intensity into a lumen scale; but add a "Lumens" spinner next to the Intensity dial in the UI that spins with the Intensity slider. This might force us to allow greater-than-1.0 intensity, but I'm pretty sure that's okay, 'cause in the present system if I put two 1.0-intensity lights right next to each other, I get a nice 2.0-intensity light... We can fiddle with it; this is largely just a UI change. The nice part is that we get to just make up a lumen value for 1.0 intensity, so that we can implement a proper inverse-square lighting falloff and have things not look too poo.
  • A user-adjustable "Default World Glossy Value" that stands in for all assets that are diffuse-only. Does an asset lack a glossiness parameter? That's fine: make one up. I nominate 0.9 roughness, aka 0.1 glossy, as the world default value. That's mostly matte but will reflect a small amount of light, and it'll do Fresnel things, which is cool. Most fabric, unpolished wood, asphalt, brick, unpolished stone and boring paint will render out quite fine with a 0.1 glossy value.
  • Auto camera exposure ( like Joe highlighted in the Cyberpunk 2077 walkthrough )
  • Fresnel reflections on surfaces where the lighting angle requires it (which we STILL don't have, despite the current SL renderer being littered with code written with this notion in mind).
  • Possibly some sort of reflection probe, or a 360-snapshot cube-map "volume prim" that can be plopped down in an area as an override for skymap reflections. (People already make these sorts of things with hacked-together methods using six projection lamps, one from each side.)
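
To illustrate how small some of these pieces are: the auto camera exposure item can be as little as picking a scale factor from the log-average scene luminance and running a simple tone-map curve over the result. The sketch below uses the classic Reinhard "key value" approach; it is an assumption for discussion, not anything from the SL codebase, and the 0.18 mid-grey key is just a conventional default.

```python
import math

def auto_exposure(luminances, key=0.18):
    """Pick an exposure scale from the log-average scene luminance
    (Reinhard-style). 'key' is the mid-grey target; 0.18 is a
    common photographic default."""
    eps = 1e-4  # avoid log(0) on pure-black pixels
    log_avg = math.exp(sum(math.log(eps + l) for l in luminances) / len(luminances))
    return key / log_avg

def reinhard(l):
    """Simple Reinhard tone-map: compresses HDR luminance into [0, 1)."""
    return l / (1.0 + l)

# A bright, sun-lit scene gets scaled down into displayable range:
scene = [4.0, 8.0, 16.0]  # HDR luminance samples
e = auto_exposure(scene)
mapped = [reinhard(e * l) for l in scene]
assert all(0.0 <= m < 1.0 for m in mapped)
```

A dim interior would produce a larger exposure scale, which is exactly the dark-to-bright adaptation behaviour discussed earlier in the thread.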

Doing this removes the expectation on the part of the users that "yes, your stuff will look the same", which was the downfall of EEP. No more of that: new model, it's gonna look different!! It SHOULD look different!! If you don't like it? No big deal, just turn it off and use regular ALM. Call it EEP Try #2, or something, who knows... though that might leave a bad taste in people's mouths~

None of this stuff is OpenGL or Vulkan or Metal or DirectX specific. It's a short list of fixes, and it lays the groundwork for everything else. It's just simple math additions to the existing environment. (Okay, not that simple: blending between cubemap spaces is actually kinda annoying and difficult to implement, but that's the only 'actually hard thing' that requires a change to the actual SL environment assets. Everything else is low-hanging fruit.)
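
As an example of how simple the math additions are: the Fresnel item above is usually implemented with Schlick's approximation, which is a one-line formula. This is a sketch; the 0.04 normal-incidence reflectance is a common convention for dielectrics (it pairs naturally with the 0.1-glossy world default suggested above), not an SL value.

```python
def fresnel_schlick(cos_theta: float, f0: float = 0.04) -> float:
    """Schlick's approximation of Fresnel reflectance.
    f0 is the reflectance at normal incidence; 0.04 is a
    typical assumed value for dielectric materials."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Viewed head-on, a matte dielectric reflects only ~4% of light...
print(fresnel_schlick(1.0))  # 0.04
# ...but at a grazing angle reflectance climbs toward 100%.
print(fresnel_schlick(0.0))  # 1.0
```

That grazing-angle brightening is the effect the current renderer gestures at but never delivers, and it is cheap enough to run per-pixel on any of the graphics APIs mentioned.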

This lays the groundwork for PBR. It should be done first. It's not a huge, massive project; it's a small, bite-size thing that the Lindens can toss a few hundred man-hours at and (hopefully?) get results. Or Beq could probably do most of it in her rare moments of spare time in... like, a couple of months, 'cause she's awesome like that.

It completely boggles my mind that this is not the 'next project'.

EEP 1.0 : Make sure we get parity for everyone who wants it.

EEP 2.0 : Blow people away with cool new graphics stuff Woooo!

EEP 2.0 will be required for viewing PBR assets. "But Liz, what about all the people who don't like it ~ then we have a new asset type that can't.." ~~ Yeah, yeah, I know; that's why I've also been demanding that any implementation of PBR include a "Diffuse-only Baked Lighting Slot", which the content creator fills with a DIFFUSE texture fallback for their PBR asset.

Why isn't this the roadmap?  It's so obvious to me this is how you do this...

Edited by polysail

i wonder sometimes if pretty much all of the issues in getting stuff to look 'correct' in the viewer is because there is no SL Content Editor

a SL Content Editor program which can not only make meshes, but also textures and animations

if there was a SL Content Editor then creators would be able to make stuff which always looks exactly the same in the SL Viewer

then when we start thinking about adding some new feature to the SL Viewer, the same feature gets added to the SL Content Editor

if we did this then all the other tools like Blender, Maya, Photoshop, Gimp, QAvimator, etc would all be dumped in the bin by the large majority of the creative user group

learning to make stuff for SL would be as easy as it once was when we used to make objects using the SL Prim Editor

Edited by Mollymews
objects

24 minutes ago, Mollymews said:

i wonder sometimes if pretty much all of the issues in getting stuff to look 'correct' in the viewer is because there is no SL Content Editor

Creating a whole entire piece of secondary external software is a largely obtuse solution to an easily solvable problem.  The notion of "Local Mesh" has been discussed and even been partially developed by a number of Third Party Viewer Devs to allow people to just view a DAE file inside SL without actively uploading it.  This would allow for the addition of textures as well.  Whole entire external software packages are largely a silly idea when it's not entirely difficult to just add a temporary upload feature.   Yes the content creation pipeline needs work, but not with a software package.


33 minutes ago, Mollymews said:

i wonder sometimes if pretty much all of the issues in getting stuff to look 'correct' in the viewer is because there is no SL Content Editor

Having played with a lot of modelling software and game engines over the years... with the best of intentions, there are always differences between content in the editing software and content rendered at run time. Some of these differences are fundamental to the real-time renderer; sometimes it's just artistic tweaking and messing with shaders. You just have to get a feel for what stuff is going to look like.

SL has a style and it's certainly not ultra realism.

Replacing industry-standard tools and methods with a proprietary suite is a mistake; we can't hope to keep up with what's going on with professional and studio-level tools, nor can we really afford to be without those advancements.

I would really like to see clear, documented workflows for a core set of accessible (and affordable) tools that build on established industry practices. Blender is by far the best candidate for this, as it's not only free and extremely competent but extendable with plugins and well covered from an educational perspective. The skills learnt working with it are transferable, which is a huge advantage: people can start making hobby content for SL and end up with real skills that can be applied elsewhere.

I totally appreciate that Blender is a monster and some commitment is required to learn its strengths, but I would rather we focus around it than, say, Maya, which is $1700 a year, or "simpler" SL-focused tools that just end up being a halfway house between professional workflows and SL.

Edited by Coffee Pancake
words things wrong

55 minutes ago, polysail said:

Creating a whole entire piece of secondary external software is a largely obtuse solution to an easily solvable problem.  The notion of "Local Mesh" has been discussed and even been partially developed by a number of Third Party Viewer Devs to allow people to just view a DAE file inside SL without actively uploading it.  This would allow for the addition of textures as well.  Whole entire external software packages are largely a silly idea when it's not entirely difficult to just add a temporary upload feature.   Yes the content creation pipeline needs work, but not with a software package.

says the person with the chops to invent liquid mesh clothing :)

 

56 minutes ago, Coffee Pancake said:

I totally appreciate that blender is a monster and some commitment is required to learn it's strengths

i was thinking of Slender myself

a chopped down version for us amateurs who don't have the chops. Us amateurs who want things with the least amount of effort and commitment. Like this

On 4/23/2022 at 7:21 AM, animats said:

Then there's a polishing step, where people using tools such as Unreal Editor tighten up the content. They reduce mesh complexity, create lower levels of detail, create shadow maps, lighting maps, and environment maps. This used to be a manual process, but today, it's mostly automated

 

ps. We are amateurs who just want to make stuff to enhance our SL, we are not looking for a realworld job


2 hours ago, Mollymews said:

i was thinking of Slender myself

haha, that's the name I gave my Blender addon for SL. I never quite got around to telling people about it, especially as this year RL has been throwing too many curve balls.

https://github.com/beqjanus/slender

My plan for this, which keeps getting derailed, is to use it as a bridge between SL and Blender.

3 hours ago, polysail said:

The notion of "Local Mesh" has been discussed and even been partially developed by a number of Third Party Viewer Devs to allow people to just view a DAE file inside SL without actively uploading it. 

Very much an active work in progress. I have started and shelved the project many times, but @Vaalith Jinn, who previously built the temporary-textures facility you may be familiar with, is working on it, and I hope to integrate their work once it is ready.

4 hours ago, Mollymews said:

if there was a SL Content Editor then creators be able to make stuff which always looks exactly the same in the SL Viewer

The external toolkit is not entirely alien as a concept; consider Roblox Studio. I don't think it is a viable product for SL, as the effort needed to maintain it would barely be justified. We struggle enough to find viewer developers, let alone people willing to throw their spare time into a toolkit. Roblox has the advantage of very deep pockets.

 

