
Shader Modding and Physically Based Rendering


Biggay Koray


Recommended Posts

So, hello everyone.

I have been mucking around with Second Life's lighting model, and I thought I would share some of my results.

Current Iteration of Cook Torrance Physically Based BRDF

Video of it in Action

A PBR rock texture, taken straight from a scan-data website and brought into SL.

Video of an older Revision

My goal has been to slowly work towards unraveling the demon that is the rendering engine and to attempt to improve what we get out of this beast. Currently, I find it very difficult to produce good-looking metals and glass in Second Life due to the abysmal environmental reflections. Additionally, I have noticed that many people do not exploit the additional specular exponent/gloss map that can be encoded in the normal map's alpha channel, so part of my motivation was to raise awareness with this project and really push the limits of what Second Life can do.

I have been primarily recoding and modifying the shaders found in the app_settings folder of the client and have managed to port the lighting model used in spotlights, point lights, and environment lighting over to Cook-Torrance, since it is one of the industry's de facto standards nowadays. At the moment I am still trudging through learning OpenGL, but my current goals are to add support for screen-space reflections and screen-space directional occlusion, and to recycle the diffuse alpha channel, when it is not used for emissive masking or transparency, as a displacement map for parallax occlusion mapping.
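(For readers unfamiliar with parallax occlusion mapping, here is a minimal, textbook-style sketch of the core ray-marching loop; it is purely illustrative, not code from this mod, and uHeightMap, uHeightScale and the layer count are made-up names.)

```glsl
uniform sampler2D uHeightMap;    // height/displacement map (illustrative name)
uniform float     uHeightScale;  // depth scale of the effect (illustrative name)

// Returns displaced texture coordinates; viewDirTS is the view direction in tangent space.
vec2 parallaxOcclusionUV(vec2 uv, vec3 viewDirTS)
{
    const float numLayers  = 16.0;
    const float layerDepth = 1.0 / numLayers;

    // March the UVs along the view direction, one depth layer at a time.
    vec2  deltaUV    = (viewDirTS.xy / viewDirTS.z) * uHeightScale / numLayers;
    vec2  currUV     = uv;
    float layerAccum = 0.0;
    float surfDepth  = 1.0 - textureLod(uHeightMap, currUV, 0.0).r;

    while (layerAccum < surfDepth)
    {
        currUV     -= deltaUV;
        surfDepth   = 1.0 - textureLod(uHeightMap, currUV, 0.0).r;
        layerAccum += layerDepth;
    }
    // A real implementation would also interpolate between the last two steps.
    return currUV;
}
```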

For those curious about how this is all implemented (a rough shader sketch follows the list):

  • Diffuse is being used for albedo.
  • Specular alpha / environmental reflection intensity is being used for metallic.
  • Normal alpha / specular exponent / glossiness is inverted and used as roughness for the Cook-Torrance BRDF.
  • Specular RGB is mixed with the specular output of the BRDF to preserve specular tinting.
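(A minimal fragment-shader sketch of that mapping, purely for illustration; the texture names, the cookTorrance() helper, and the N/V/L/lightColor inputs are assumptions, not the mod's actual code.)

```glsl
// Illustrative mapping of SL material channels into Cook-Torrance inputs.
vec3  albedo    = texture(diffuseMap,  uv).rgb;  // diffuse RGB -> albedo
vec3  specTint  = texture(specularMap, uv).rgb;  // spec RGB kept as a specular tint
float metallic  = texture(specularMap, uv).a;    // spec alpha / env intensity -> metallic
float gloss     = texture(normalMap,   uv).a;    // normal alpha -> glossiness
float roughness = 1.0 - gloss;                   // inverted for the Cook-Torrance BRDF

// Hypothetical helper evaluating the Cook-Torrance specular term.
vec3 specular = cookTorrance(N, V, L, metallic, roughness) * specTint;
vec3 diffuse  = albedo * (1.0 - metallic) * max(dot(N, L), 0.0) / 3.14159265;
vec3 color    = (diffuse + specular) * lightColor;
```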

So far, I have not noticed any content broken by these shader changes. Additionally, I have ditched a significant number of branching statements in my version of the shaders, so there is a mild performance uplift.

Edited by Biggay Koray
typos
  • Like 3

I've been trying to advocate for better use of glossiness and environment maps ever since materials were introduced. Nobody ever listened.

Anyway, since the Cook-Torrance model is NOT the industry standard: in order to get decent metals from the base shader model it is built on (Blinn, aka old gen), the diffuse component should be fully black and the color should be transferred over to the specular map. The environment map you can use as a metalness map is useful for achieving the black diffuse if you use it as a subtractive map to darken the base diffuse.

Why isn't the Cook-Torrance model the industry standard? Because it is, and has always been, a flavor of the Blinn model. Blender has had that shader since its inception, back when there was the Blender Render engine, way before Cycles was ever even thought of. The fact that Blender still uses it as the base model on which the current BRDF models build doesn't make it an industry standard. Indeed, the engines that use it (Unity and Sansar, for example) are usually subpar in comparison to the others using the actual standard model (Unreal, Redshift, Arnold, to name a few), originally developed by Disney for their RenderMan render engine.


4 hours ago, Biggay Koray said:

...

So far, I have not noticed any content broken by these shader changes. Additionally, I have ditched a significant number of branching statements in my version of the shaders, so there is a mild performance uplift.

A performance improvement would be good. I've played around with alpha channels in normal maps using the information in the SL Wiki, http://wiki.secondlife.com/wiki/Material_Data

Is the information here up to date? http://wiki.secondlife.com/wiki/Second_Life's_light_model_for_materials


41 minutes ago, KjartanEno said:

A performance improvement would be good. I've played around with alpha channels in normal maps using the information in the SL Wiki, http://wiki.secondlife.com/wiki/Material_Data

Is the information here up to date? http://wiki.secondlife.com/wiki/Second_Life's_light_model_for_materials

It is still relevant, as no changes have occurred in the meantime. The OP also states that he could not notice any broken content from the shader changes because the Cook-Torrance model is another flavor of the same type of shader mentioned in the light-model page you pointed at: both are a Lambertian diffuse mixed with a Gaussian specular. The difference sits in the exposed properties, their names, and their interactions with each other.

I really would welcome a shader change like that, seriously. Metals kind of suck in their current state, and other materials are less than convincing. True physically based shader models are probably too expensive and prone to break older content, but a good BRDF shader flavor could do this platform a lot of good, also offering a more standardized base for a slightly more modern workflow.


5 hours ago, OptimoMaximo said:

I've been trying to advocate for better use of glossiness and environment maps ever since materials were introduced. Nobody ever listened.

Anyway, since the Cook-Torrance model is NOT the industry standard: in order to get decent metals from the base shader model it is built on (Blinn, aka old gen), the diffuse component should be fully black and the color should be transferred over to the specular map. The environment map you can use as a metalness map is useful for achieving the black diffuse if you use it as a subtractive map to darken the base diffuse.

Why isn't the Cook-Torrance model the industry standard? Because it is, and has always been, a flavor of the Blinn model. Blender has had that shader since its inception, back when there was the Blender Render engine, way before Cycles was ever even thought of. The fact that Blender still uses it as the base model on which the current BRDF models build doesn't make it an industry standard. Indeed, the engines that use it (Unity and Sansar, for example) are usually subpar in comparison to the others using the actual standard model (Unreal, Redshift, Arnold, to name a few), originally developed by Disney for their RenderMan render engine.

Firstly, the UE4 devs looked at the Disney BRDF, but nowhere did they implement it, and I can provide proof of that in the form of the SIGGRAPH paper they wrote on the subject. That said, while the Disney BRDF is more commonly used overall, it is used in a lot more applications, like visualization and real-time rendering outside of games. In regard to the industry-standard comment, I was speaking with respect to modern game engines, as Second Life is a game, not a 3D modelling/rendering engine being run on an expensive workstation card. You will find that implementing subsurface scattering, anisotropy, clearcoat, and sheen, which the UE4 devs completely passed up on, is overkill for most games, and what they end up with is something closer to the aforementioned Cook-Torrance. Moreover, if you look at Guerrilla Games, Frostbite, and yes, Unity, you will find many of them using something a lot like Cook-Torrance there too. I can provide papers on the subject if you would like.

As a tidbit, my borked environmental reflections, using some jury-rigged image-based lighting, are actually modified from the approach discussed in the Unreal paper, and my other go-to resources have been LearnOpenGL, Real-Time Rendering (Fourth Edition), Physically Based Shader Development in Unity, and Physically Based Rendering: From Theory to Implementation.

Edited by Biggay Koray
typos

3 minutes ago, Biggay Koray said:

Firstly, the UE4 devs looked at the Disney BRDF, but nowhere did they implement it, and I can provide proof of that in the form of the SIGGRAPH paper they wrote on the subject. That said, while the Disney BRDF is more commonly used overall, it is used in a lot more applications, like visualization and real-time rendering outside of games. In regard to the industry-standard comment, I was speaking with respect to modern game engines, as Second Life is a game, not a 3D modelling/rendering engine being run on an expensive workstation card. You will find that implementing subsurface scattering, anisotropy, clearcoat, and sheen, which the UE4 devs completely passed up on, is overkill for most games, and what they end up with is something closer to the aforementioned Cook-Torrance. Moreover, if you look at Guerrilla Games, Frostbite, and yes, Unity, you will find many of them using something a lot like Cook-Torrance there too. I can provide papers on the subject if you would like.

As a tidbit, my borked environmental reflections, using some jury-rigged image-based lighting, are actually modified from the approach discussed in the Unreal paper, and my other go-to resources have been LearnOpenGL, Real-Time Rendering (Fourth Edition), Physically Based Shader Development in Unity, and Physically Based Rendering: From Theory to Implementation.

Yes, as I was saying, the Cook-Torrance model is still a flavor of the basic Blinn. The difference with what the Disney BRDF implements, and what Unreal Engine implements, is the so-called specular tail, which in Cook-Torrance is set to a Gaussian, while the Disney and derived models instead use a particular distribution called GGX. Now, the fact that the fully fledged RenderMan model, with clearcoat, sheen, bells and whistles, was stripped of its excessively resource-demanding features doesn't change the fact that their shader model is a lot different from the ordinary Cook-Torrance, especially in their implementation of the specular level input.
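(For anyone following along, here is a minimal sketch of the two specular distributions being contrasted, in their standard textbook forms; it is illustrative only and not taken from either engine's code.)

```glsl
// Two common microfacet normal distribution functions (NDFs).
// NdotH = clamp(dot(N, H), 0.0, 1.0); roughness is in [0, 1].

// Beckmann (Gaussian-style) distribution, as used in classic Cook-Torrance.
float distributionBeckmann(float NdotH, float roughness)
{
    float a    = max(roughness * roughness, 1e-3);
    float cos2 = max(NdotH * NdotH, 1e-4);
    float tan2 = (1.0 - cos2) / cos2;
    return exp(-tan2 / (a * a)) / (3.14159265 * a * a * cos2 * cos2);
}

// GGX (Trowbridge-Reitz) distribution, the longer "tail" used by Disney/UE4-style models.
float distributionGGX(float NdotH, float roughness)
{
    float a  = max(roughness * roughness, 1e-3);
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265 * d * d);
}
```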

Don't take my comment as an opposing argument; it's not. I would welcome a shader change like that as if it were the messiah! It would automatically work with the content I've been making so far, so please keep going, and team up with one of the main viewers' devs!

  • Thanks 1

5 hours ago, KjartanEno said:

A performance improvement would be good. I've played around with alpha channels in normal maps using the information in the SL Wiki, http://wiki.secondlife.com/wiki/Material_Data

Is the information here up to date? http://wiki.secondlife.com/wiki/Second_Life's_light_model_for_materials

It seems to be. I just think the way it's explained is a bit technical for the layman, lol. I really wish they would add some pictures of how to set up the extra alpha channels in Photoshop or something using the color channels, as that would probably help a lot of people. One of my jobs when I was working at a tech company was technical writing and documentation, and man oh man, they need to hire some interns or something at Linden Lab and just get them to document stuff.

 

Hint hint, I'm available, Linden Lab, as long as the pay is decent for part-time work!

Edited by Biggay Koray

  • 1 month later...

Just a mild poke.

https://i.gyazo.com/75df3048684a095ab5d2b6a8eb3957d3.mp4

https://i.gyazo.com/462d2db1effc304b09b0adfc9614ed5b.mp4

I have been reading about image-based lighting and the split-sum approximation used in certain lighting models, and I managed to add Second Life's pre-integrated indirect lighting to my reflectance model. As a side note, this whole project has kind of ballooned, and I have been working on my own fork of Firestorm. Currently my goal is to gut anything pre-OpenGL 4.0 and move towards a more modern pipeline; testing so far has shown promising results.
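(For context, here is a minimal sketch of the split-sum specular IBL lookup in its usual UE4-style form, assuming a prefiltered environment cubemap and a 2D BRDF LUT already exist; uPrefilteredEnv, uBrdfLUT and MAX_REFLECTION_LOD are illustrative names, not the fork's actual code.)

```glsl
uniform samplerCube uPrefilteredEnv;    // radiance prefiltered per roughness mip level
uniform sampler2D   uBrdfLUT;           // pre-integrated BRDF scale/bias lookup
const float MAX_REFLECTION_LOD = 4.0;   // top mip of the prefiltered map

vec3 specularIBL(vec3 N, vec3 V, float roughness, vec3 F0)
{
    vec3  R     = reflect(-V, N);
    float NdotV = clamp(dot(N, V), 0.0, 1.0);

    // First sum: prefiltered incoming radiance; roughness selects the mip level.
    vec3 prefiltered = textureLod(uPrefilteredEnv, R, roughness * MAX_REFLECTION_LOD).rgb;

    // Second sum: pre-integrated environment BRDF (scale and bias applied to F0).
    vec2 envBRDF = texture(uBrdfLUT, vec2(NdotV, roughness)).rg;

    return prefiltered * (F0 * envBRDF.x + envBRDF.y);
}
```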

I really would like to add some sort of global illumination system back in, like bounce lighting and the baked GI lightmaps that Second Life had years ago. However, I am intrigued by the results of using screen-space directional occlusion to approximate radiosity. Additionally, I've been working on sorting out and modernizing the shaders with layout qualifiers to make them more deterministic, doing full optimization passes utilizing fused multiply-add functions, and I have gutted anything that checks for the fixed-function pipeline at the moment. I'm hoping to fix up the way the current transformation matrices are handled, because oh man is it god awful.
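(As an illustration of that kind of cleanup, here is a tiny vertex shader using explicit layout qualifiers and fma(); the names are made up and it is not code from the fork.)

```glsl
#version 400 core

// Explicit attribute locations instead of driver-assigned bindings.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;

out vec2 vTexCoord;

uniform mat4 uModelViewProj;
uniform vec2 uUVScale;
uniform vec2 uUVOffset;

void main()
{
    // fma(a, b, c) computes a*b+c as a single fused multiply-add where supported.
    vTexCoord   = fma(inTexCoord, uUVScale, uUVOffset);
    gl_Position = uModelViewProj * vec4(inPosition, 1.0);
}
```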

Edited by Biggay Koray
  • Like 3

6 hours ago, Fritigern Gothly said:

Would Vulkan be a possible avenue of exploration in this case?

 

I literally started learning OpenGL just a few months ago. To put things in perspective, rendering a single triangle in OpenGL takes maybe 100 lines of code; rendering a triangle in Vulkan usually takes 500+ lines of code. So nooo.

Edited by Biggay Koray
  • Like 2

7 hours ago, Biggay Koray said:

I'm hoping to fix up the way the current transformation matrices are handled because oh man is it god awful.

49 minutes ago, Biggay Koray said:

To put things in perspective, rendering a single triangle in OpenGL takes maybe 100 lines of code; rendering a triangle in Vulkan usually takes 500+ lines of code.

Truer words have never been spoken.

  • Like 1

If we could get subsurface scattering into skin rendering, that would be a big help. Skin looks either dead or too glossy.

Bakes on Mesh makes this harder. You don't have a "skin layer" in the viewer; you have a merged layer of skin and some clothing. So it's hard to do this from the viewer side without more help from the assets. Without BOM, you could identify the skin layer and give it special treatment.


9 hours ago, animats said:

If we could get subsurface scattering into skin rendering, that would be a big help. Skin looks either dead or too glossy.

Bakes on Mesh makes this harder. You don't have a "skin layer" in the viewer; you have a merged layer of skin and some clothing. So it's hard to do this from the viewer side without more help from the assets. Without BOM, you could identify the skin layer and give it special treatment.

BOM layer 0 is de facto the skin; this could be used by a viewer, getting the layer info sent by the server when the user applies it.

  • Like 1

On 2/2/2021 at 3:30 AM, Biggay Koray said:

...

Brilliant, not only potential speed optimizations but dramatic visual upgrades too. I'd bet removing the pre-OpenGL 4 stuff makes a future transition to Vulkan smoother as well.
I'm looking forward to seeing what more you come up with here; the examples are impressive so far. 🤩

  • Haha 1

  • 2 weeks later...
On 2/2/2021 at 9:30 AM, Biggay Koray said:

...

Glad to see somebody else doing it correctly! What you did is basically a proper workflow for converting PBR maps into specular maps. With specular maps we can achieve physically correct shading more precisely than with PBR, but we face other limitations. The only SL engine limitations are the number of projector lights that can be rendered at the same time and how the engine handles texture memory. Plus, creators haven't picked up the knowledge to do a proper roughness+metal to specular conversion, as it takes extra time and the willpower to improve your skills beyond making a profit. That's why many "materials" in SL just look shiny overall, with no "selective" specularity or physical correctness, but it is possible with a correct conversion of the maps.

I barely see any creator doing this. The potential of specular maps is belittled because the PBR workflow is easy to handle while working on the textures, with quick but often inaccurate results. "Creators" jumped on the Substance Painter train without understanding light and how materials react to it, just like "creators" are too lazy to help the SL engine by learning to retopologize their meshes properly. It can't be that a lunar top has 900k tris across multiple faces; that's worth a small but detailed landscape in Unity in a medium-sized studio game. Now think of a club with 10 people wearing that. A full Assassin's Creed Origins character has 10k tris, just saying... In SL we probably need to be around 30k, as we cannot play with displacement and its perks.

Specular maps and multiple faces have their benefits in various older engines, if the knowledge is there and you don't just lazily invert a roughness map in Photoshop and end up with a vague shininess. It's all about selectively converting the parts that need to react to light.

My suggestions for people, and I sincerely hope this adds to SL improving:

1. Learn to convert PBR maps correctly into specular maps (a rough sketch follows below the list).

(helpful link: https://marmoset.co/posts/pbr-texture-conversion/ )

2. Learn to retopo your meshes correctly.

(Helpful Link: https://www.flickr.com/photos/33077515@N07/49207937263/in/dateposted/ )
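(As a rough per-texel illustration of the conversion in step 1, in the spirit of the Marmoset guide linked above but not its exact math; 0.04 is the usual dielectric reflectance assumption, and the texture names are made up.)

```glsl
uniform sampler2D uBaseColorMap;   // PBR base color
uniform sampler2D uMetallicMap;    // PBR metallic (single channel)
uniform sampler2D uRoughnessMap;   // PBR roughness (single channel)

void convertTexel(vec2 uv, out vec3 diffuse, out vec3 specular, out float gloss)
{
    vec3  baseColor = texture(uBaseColorMap, uv).rgb;
    float metallic  = texture(uMetallicMap,  uv).r;
    float roughness = texture(uRoughnessMap, uv).r;

    // Metals get a (near) black diffuse; their color moves into the specular map.
    diffuse  = baseColor * (1.0 - metallic);
    specular = mix(vec3(0.04), baseColor, metallic);

    // Gloss (destined for the normal map's alpha channel) is inverted roughness here.
    gloss = 1.0 - roughness;
}
```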

 

Best wishes!

  • Thanks 1

