
Normal & Specular Maps



I have no idea how these will be controlled in preferences, and so what combinations will be available. I think what some people do is use full bright with their baked textures, which effectively removes the inworld lighting effects. Otherwise it's an ugly compromise. Usually ambient occlusion doesn't conflict too much with the inworld effects, but directional lighting does. Of course it then all depends on how the use of the new maps interacts with the full bright setting. I have to expect that it turns them off; otherwise it's a complete change in meaning. That would mean you either have to sacrifice specular and normal maps or risk conflict with inworld lighting. As far as I understand it (which is not very far at all), baking is not a good substitute for normal/specular maps, as their effects depend on the position of the camera as well as that of the lights.



Qie Niangao wrote:

I'm sure that for a long time there will be folks using rigs for which dynamic lighting and shadows yield unacceptable performance. But similar to an earlier question: is deferred rendering required for these material effects to work, and if so, does that imply dynamic lighting and shadows will always be present anyway?

Normal and specular mapping require per-pixel lighting but not deferred rendering. The water plane in SL viewer 1 was normal-mapped, even before WindLight arrived.

WindLight added a dynamic environment map to the water, which requires an extra off-screen render pass of the entire scene. Deferred rendering added dynamic shadow maps, which require yet another off-screen render pass. That's what made these features so taxing. Normal and specular mapping are cheap in comparison because they are static. So is light mapping.
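To illustrate the difference, here is a rough sketch of forward (non-deferred) per-pixel lighting with normal and specular maps in GLSL. The uniform and varying names are made up for illustration; this is not the viewer's actual shader code, and it assumes the light direction has already been transformed into tangent space.

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;    // tangent-space normals packed into RGB
uniform sampler2D specularMap;  // grayscale specular intensity
uniform vec3 lightDirTS;        // light direction, assumed already in tangent space
varying vec2 uv;

void main()
{
    // Unpack the stored normal from [0,1] back to [-1,1].
    vec3 n = normalize(texture2D(normalMap, uv).rgb * 2.0 - 1.0);
    vec3 l = normalize(lightDirTS);
    vec3 v = vec3(0.0, 0.0, 1.0);   // crude view vector approximation
    vec3 h = normalize(l + v);      // Blinn-Phong half vector

    float diff = max(dot(n, l), 0.0);
    float spec = pow(max(dot(n, h), 0.0), 32.0) * texture2D(specularMap, uv).r;

    vec3 albedo = texture2D(diffuseMap, uv).rgb;
    gl_FragColor = vec4(albedo * diff + vec3(spec), 1.0);
}

Everything happens in a single pass per object; no extra scene-wide render passes are needed, which is why this stays cheap compared to environment maps or shadow maps.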



Drongle McMahon wrote:

Yes. If we are correct in interpreting the wiki as saying that each of the maps is already going to have independent scaling and offset parameters, then I would guess it should not be difficult to add this, even if it's not there at the outset. People are going to go on using baked lighting, and the potential for saving total texture downloads is HUGE.

We need these features now, not at some point in the future. The materials system is already late to the party. The longer we wait for normal mapping, the more meshes with excessive geometry get uploaded because in 2012 people will not settle for a vertex-shaded low-poly look any more. The longer we wait for light mapping, the more oversized diffuse maps with baked shadows end up inworld because people will not settle for a world without depth.

Excessive geometry and excessive texture sizes are the two primary sources of lag in SL. You can educate creators to use resources efficiently, but you can't persuade consumers to spend money on ugly stuff.



Normal and specular mapping require per-pixel lighting but not deferred rendering. [...]

Thanks, Masami. Obviously I'm well out of my depth here, and appreciate the patience. I'm now wondering how lightmaps would interact (if at all) with dynamic shadow maps, when they're enabled in the viewer.

Naively, I can think of three possibilities:

  1. Dynamic shadows disable lightmap rendering altogether.
  2. Lightmaps for a surface are rendered along with dynamic shadows, as if they were an additional "light source".
  3. Dynamic shadows are not rendered at all on surfaces with lightmaps.

Assuming that 3 is what you get only when fullbright is switched on, I think I can imagine advantages to either 1 or 2, but probably one or the other is "standard". (2 seems most what one might expect, but 1 would be kind of like being able to lose shadow prims and baked lighting that look so awful when the rest of the scene is lit dynamically.)



Qie Niangao wrote:

I'm now wondering how lightmaps would interact (if at all) with dynamic shadow maps, when they're enabled in the viewer.

Naively, I can think of three possibilities:
  1. Dynamic shadows disable lightmap rendering altogether.
  2. Lightmaps for a surface are rendered along with dynamic shadows, as if they were an additional "light source".
  3. Dynamic shadows are not rendered at all on surfaces with lightmaps.

Assuming that 3 is what you get only when fullbright is switched on, I think I can imagine advantages to either 1 or 2, but probably one or the other is "standard". (2 seems most what one might expect, but 1 would be kind of like being able to lose shadow prims and baked lighting that look so awful when the rest of the scene is lit dynamically.)

The video I linked in post #24 shows dynamic shadows and static lightmaps in a single scene. They obviously work very well together. You can see that the sunlight is bright enough to eliminate any baked shadows, while the lightmaps are very effective in those spaces where the sun doesn't shine.

How exactly this is implemented in the pixel shader is up to the developer. If you look at Blender's texture influence panel, you can see that a texture can affect multiple shader properties at the same time. The panel is basically a graphical UI for GLSL shader programs -- the same type that is used by the SL viewer. It would be awesome if we had the same flexibility in SL, instead of a few fixed implementations.

My example above uses the lightmap to attenuate the intensity of diffuse and specular reflection. It does not touch any other parameters, so the floor is not emissive. It still requires an external light source. Fullbright is not an option at all because it eliminates all shading effects and renders the normal and specular maps useless.
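As a rough sketch of that idea (illustrative names only, not an actual SL shader), the lightmap simply scales whatever the dynamic lighting produces:

uniform sampler2D diffuseMap;
uniform sampler2D lightMap;   // baked occlusion / indirect light, 1.0 = unoccluded
varying vec2 uv;

void main()
{
    float bake = texture2D(lightMap, uv).r;
    vec3 albedo = texture2D(diffuseMap, uv).rgb;

    // Placeholders for the dynamic per-pixel lighting result.
    float diff = 0.8;
    float spec = 0.2;

    // The bake only attenuates what the lights contribute; with no light
    // source the surface stays dark, i.e. it is not emissive.
    gl_FragColor = vec4(albedo * diff * bake + vec3(spec * bake), 1.0);
}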



Drongle McMahon wrote:

I would have given light maps the highest priority.

I am genuinely surprised by their absence from the project's "user stories" on the wiki page. It claims there has been a review process involving the "Content Creation Improvement Informal User Group". And development has been outsourced to the Exodus viewer team. Yet no one has considered light maps? They've got to be kidding.

All I can say is, they better get this material system right the first time, because there may be no second time to fix it.



Drongle McMahon wrote:

"We need these features now"

I do agree. I would have given light maps the highest priority. I guess they are even more important as the amount of data in maps with baked lighting can increase threefold with the new maps.

You might want to consider the extreme amount of geometry one can save by using normal maps. Where you can save 75% in textures by going one resolution lower, the example here saves 95% in geometry:

normal mapped buddha

The extra light map would be very useful and can save tons of memory, but I would never give it higher priority than normal mapping.

Personally I would like to see the lightmap as well, with a 256 or 512 max. Specular and normal would be better off with 512 as a max as well, I think. I really do fear people are going to fill every object they make with three 1024 textures.



Drongle McMahon wrote:

Can anyone give, or point me to, a clear explanation of how the RGB values in a tangent space normal map are converted into object/world normals? (Essentially I'm asking: how are the G and B axes set in the tangent plane relative to object and/or UV map space?)

 

Not sure why you are excluding R and combining G and B.

The RGB values in a normal map are XYZ direction values, compared to the default normal of the actual face they are on. It's a bit like a sculpt map. Here's an example with a hemisphere:

Hemisphere Normals.PNG

On the left a real hemisphere, on the right the matching normal map. The direction of the normals on the left is represented by RGB values on the right.

XYZ RGB Normal.PNG

 

So if we look at a 2D section to make it more readable, we can see how the angle of the normal is calculated. (I'll leave out the maths because the picture shows it much more clearly.)

The normal is the vector from (128,128) to the point given by the R and B values in the normal map. So at R=64 the normal is at a 60 degree angle to the surface.
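In shader terms (just a sketch of the convention in the diagram, not SL-specific code), the sampler already returns each byte as a value in [0,1], so the remap from the unsigned-byte convention is one multiply and subtract:

vec3 decodeNormal(vec3 rgb)
{
    // 128/255 ~ 0.5 means "no tilt" in that axis.
    // Example: R = 64 -> 64/255 ~ 0.25 -> x ~ -0.5, i.e. the normal leans
    // 30 degrees away from vertical, or 60 degrees from the surface.
    return normalize(rgb * 2.0 - 1.0);
}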

Ehh, is this what you were asking?


"Ehh, is this what you were asking?"

Partly. My mistake - I meant to say: what are R and G? I knew B (Z) was the normal to the actual geometric surface. I'm puzzled why your diagrams treat the bytes as unsigned. It's much easier to understand if you treat them as signed, so that what was 128 is now 0 and means no component in that axis. However, the thing I remain confused about is how the directions of X and Y (R and G) are set in the tangent plane. Without that, the diagrams could be spun around the normal without changing anything. What decides which direction is the X axis here? My guess would be that it's the U and V axes of the UV map. So I want to know if that's true.


"Yet no one has considered light maps?"

Well, it was mentioned in this thread over a year ago (at least in the context of AO). I think that thread was started after a suggestion at the Content tools group. Then again a month ago in the thread we are in now (same context). I have no idea whether it was considered by the people doing the development or not. Maybe they had a reason for not doing it.



Drongle McMahon wrote:

I knew B (Z) was the normal to the actual geometric surface.

Blue is only one of the three components; you really need Red and Green as well to pinpoint the vector, which is the direction of the normal. What you determine with the three components is the end position of the normal. That's why you can't use all colours: the normal has a set length.

 

 


Drongle McMahon wrote:

I'm puzzled why your diagrams treat the bytes as unsigned. It's much easier to understand if you treat them as signed, so that what was 128 is now 0 and means no component in that axis.

The thing is, without any normal mapping or smooth shading, the vector (normal) is perpendicular to the surface. One could argue that rotating over more than 180 degrees is really rotating the other way, but I think it makes sense that you can go both ways, so 128 is the middle then.

Like the diagram shows, 128 is dead center of all possible vectors; you can't have an RGB colour that's not on the circle. If you start at 0 and point 45 degrees counterclockwise, you'd be in the negative, which is not possible with RGB.

I don't understand what you mean by signed or unsigned bytes.

 

 


Drongle McMahon wrote:

However, the thing I remain confused about is how the directions of X and Y (R and G) are set in the tangent plane. Without that, the diagrams could be spun around the normal without changing anything. What decides which direction is the X axis here? My guess would be that it's the U and V axes of the UV map. So I want to know if that's true.

 

If that was your question I could have saved myself some time :)

Yes, the UV layout determines the UV or XY direction of all maps applied to it, normal maps being one of them. As far as a 3D program is concerned, there is little difference between all the maps that can be applied. All need UV mapping.

You can test it for yourself by rotating the UV layout 90 degrees and keeping the same normal map. Or keep the same UV layout and compare the two different normal maps they'd need in order to produce the same result.

 

EDIT: maybe I now understand your confusion; forgive me if I'm wrong.

R and G don't specify a place on the surface; it looks like that's what you are thinking... and my example doesn't help a lot then, since in this particular example they match 100%.

R, G and B are vector components (or, like I said, really the endpoint of the vector); the place on the map (pixel) determines the place on the UV map. If you make three of the hemispheres on the normal map, they will all look exactly the same, with R, G and B exactly the same for all three.


No problem with vectors. I did some experiments and am beginning to get the idea of how it works with RG = UV.

Unsigned byte : 00000000 = 0, 10000000 = 128, 11111111 = 255

Signed byte : 00000000 = -128, 10000000 = 0, 11111111 = 127 ***

So then with signed, R=0 is no U component, R<0 = negative, R>0 = positive, etc.

ETA - and many thanks for your patient explanations.

ETA - *** this is wrong :matte-motes-agape: see below.



Drongle McMahon wrote:

Signed byte : 00000000 = -128, 10000000 = 0, 11111111 = 127

Also known as 8 bit excess-128.

However, I'd rather think of percentages here, since RGB components are not necessarily bytes. Nor integers.

By the way, since the length of the vector in a tangent space normal map is always 1, the B channel is redundant and can be used for something else. For example, it's perfectly possible to implement a pixel shader that uses R and G for the surface normal and B to control specular reflection.
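A sketch of that two-channel scheme (hypothetical, not an existing SL shader): since a tangent space normal has length 1 and never points into the surface, Z can be rebuilt from X and Y, freeing B for something else such as a specular weight.

vec4 decodeTwoChannelNormal(vec4 texel)
{
    vec2 xy = texel.rg * 2.0 - 1.0;                  // R, G -> X, Y
    float z = sqrt(max(1.0 - dot(xy, xy), 0.0));     // reconstruct Z
    float specular = texel.b;                        // the freed-up channel
    return vec4(normalize(vec3(xy, z)), specular);
}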


@drongle

I think I get the idea of the signed and unsigned bytes; I just made the diagram from what I noticed, though :)

@masami

Interesting, but aren't you overlooking the fact that for every RB value there are two G values possible, and for every GB value two R values? (128+G, 128-G and 128+R, 128-R). Am I missing something?

What you can do (and is used already) is to use the fourth (alpha) channel for specular maps. No idea if this saves a lot of memory compared to a separate grayscale picture, as long as that's stored just like that internally.

EDIT... I see I read it the wrong way. The B or Z value only covers half a circle, not a full one... you're absolutely right. I then wonder why the B is still used; maybe faster calculation? Might it be possible to flip normals using a normal map (something that's not very useful I'd say, but WOULD mean B covers a full circle as well)? I also seem to remember something about RG being used for normals, then B and A for specular. Any idea what the second channel for specular does?


Actually, I got it quite wrong! What was I thinking of??? :matte-motes-agape:

Signed byte is  [0] 00000000 = 0 ... [127] 01111111 = 127 ; [128] 10000000 = -128 ... [255] 11111111 = -1

So that doesn't work nicely the way I said. Masami was right. What I described is a biased 8-bit integer!

I should be ashamed :matte-motes-confused:

They are sort of similar if you think of the numbers -128 .. +127 as points around the perimeter of a circle. Just start at a different place. That's my excuse, anyway!

bytes.png           bytes2.png


Glad you got that figured out; to me it's all confusing though.

Trigonometry is where I surpassed my high school teacher, but all this very abstract number stuff is beyond me... :) I think my brain needs to be able to "paint a picture". How the data is stored in the file is not something one needs to understand in order to understand how the normal is calculated.

Using the RGB values in the normal map to draw a diagram, the result is a hemisphere with a radius of 128, where the direction of the normal is the vector from (128,128,128) (the centre of the hemisphere) to any place on the surface of the hemisphere. The place on the surface is the RGB (XYZ) value on the map. That's how it makes sense to me.

I guess the two outer circles in your diagram visualise signed and unsigned?

Still wondering why "they" decided to use the B value for normals as well. Masami? Chosen? Anyone?



Kwakkelde Kwak wrote:

Still wondering why "they" decided to use the B value for normals as well.

I guess this goes all the way back to the fixed function pipeline, before shaders became programmable. Object and world space normal maps actually need the third component.

Today you can mix & match data in those RGBA channels in any way you like. Here's one example:

http://www.polycount.com/forum/showpost.php?p=1506050&postcount=212



Masami Kuramoto wrote:

Object and world space normal maps actually need the third component.

Well, the question remains.

As you said yourself, with an x and y (or red and green) plus the set length of the normal, the third component is not needed to calculate the vector. There are only two values possible for z (or blue), and one of them returns a negative normal, so that can be discarded.


Mathematical laws haven't changed since then, so it should have been possible back then as well.

I can understand that the decision was made back then, for some reason, to use all three channels though. I suspect it's because the calculation is faster using the Z component instead of calculating the two possible Z's and then discarding one of them. After all, the blue channel is already there for use.

No point in overthinking this, we'll have to work with what SL swallows anyway.



Kwakkelde Kwak wrote:

No point in overthinking this, we'll have to work with what SL swallows anyway.

For better or worse.

From Nalates' blog:


#SL CCIIUG Week 37

These meetings are getting smaller. There were about 8 to 10 people, and this one had 2 griefer idiots. Blocking took care of the clowns, who eventually wandered off. I'm pretty sure User Interface design is not a popular subject in SL.

The problem is that this is not just about user interfaces. It involves modifications on the server, and my concern is that whatever these few people come up with, we'll be stuck with for years.

Maybe lightmaps were thrown under the bus because the Lindens drew a line at three textures per material. However, in this case a 4th texture can make the other three considerably smaller, as shown earlier. Unlike the other three maps, baked lighting is low-frequency, non-repetitive content. It can rarely be tiled, but it can be stored at very low resolutions. There is a reason why tileable material textures are usually high-pass filtered, while soft-edged shadows are not only acceptable but often desirable.

Mixing high and low frequency image content leads to large, non-repeating textures. These are already being used in too many places in SL, but the current materials proposal does nothing to address this problem.

Another problem with the proposal is the idea to use the diffuse alpha channel to control either opacity or emission/glow. Again, emission/glow is low frequency content and should be controlled by the alpha channel of a lightmap. It should be possible to use transparency and emission/glow in the same material. The current proposal rules that out.



Masami Kuramoto wrote:

For better or worse.


I'll adapt, I think; even the way it is proposed right now means a great improvement.

__

 

So what should be possible with three textures is this I guess:

1 High: RGB Diffuse, A transparency (like we have now)

2 High: RGB Normal, A Specular

3 Low: R Light, A Emission/Glow

Where "high" would have a max of 1024 and "low" either 512 or even 256. The third map even has two spare channels and like you pointed out earlier, the B channel of the second map could be used for something else aswell. You could put the emission/glow in the second texture and use RGB of the third for color intensity or something.

The second texture could also be 512 I think, clothes textures shouldn't be more than 512 and for brick walls for example you could use a 2x2 or higher repeat in normals to match a 1x1 diffuse map. It should be possible to have different offsets and repeats for different channels within one texture.

I was also wondering if it would be possible to have "the system" read one of the channels and use a resolution one step lower than the entire map. If that's the case, two textures would be enough for everything LL wants to implement. The emission/glow could go on the B channel of the normal map at a lower resolution. One huge drawback in using all these seperate channels is ofcourse Blender/3ds max/Maya etc do not produce these maps by default.

I can see how seperate textures are easier to work with, especially with different sizes. That must have been a reason to propose it the way they did.
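For what it's worth, a shader consuming that three-texture layout might look roughly like this (the packing is only the proposal above, not an actual SL materials spec, and the lighting term is a placeholder):

uniform sampler2D tex1;   // RGB diffuse, A transparency
uniform sampler2D tex2;   // RGB tangent-space normal, A specular
uniform sampler2D tex3;   // R baked light, A emission/glow (lower resolution)
varying vec2 uv;

void main()
{
    vec4 diffuse = texture2D(tex1, uv);
    vec4 ns      = texture2D(tex2, uv);
    vec4 lg      = texture2D(tex3, uv);

    vec3  n    = normalize(ns.rgb * 2.0 - 1.0);
    float spec = ns.a;
    float bake = lg.r;
    float glow = lg.a;

    float diff = max(n.z, 0.0);   // placeholder for the dynamic lighting term
    vec3 lit = diffuse.rgb * diff * bake + vec3(spec * diff * bake);
    gl_FragColor = vec4(lit + diffuse.rgb * glow, diffuse.a);
}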

 



Kwakkelde Kwak wrote:

So what should be possible with three textures is this I guess:

1 High: RGB Diffuse, A transparency (like we have now)

2 High: RGB Normal, A Specular

3 Low: R Light, A Emission/Glow

Let's make the third one RGB light + A emission/glow. RGB lightmaps are actually quite useful for radiosity effects or coloured shadows. Quick example following...

No lightmap:

screen1.png

Intensity lightmap:

screen2.png

RGB lightmap:

screen3.png

The textures:

diffuse2.png light2.png

specular2.png normal2.png

