
Normal & Specular Maps



The announcement by the Lab is here: Materials System

Oz has created a JIRA for the development process and bug reporting. Discussion should occur in the forum, preferably here in the Building and Texturing section.

JIRA: Add support for Normal & Specular Maps - STORM-1905

Feature requests should be entered as separate JIRA items in the VWR (Viewer) Project. Oz has recommended you add feature requests in the forum before making a JIRA request.

Some threads on normal maps have started in the Mesh section. Those may get moved to here, I suppose it depends on how busy the forum mods are.

Oz advises that this first pass is not likely to see many feature requests picked up. They will more likely be additions and fixes in the second pass. So, patience...

Information is from the 8/20 Open Source meeting.


Fullbright (or "emissive" as I like to call it) functionality is somewhat up in the air right now.  Currently I'm considering the following approaches:

A) Make fullbright objects accept a normal map, and apply "shiny" lighting to them like all other objects, instead of having fullbright handled as its own separate effect (since this will be a controllable set of parameters, it may make the most sense to go this route)

B) Keep fullbright objects as-is, and follow up with a more comprehensive solution later for emissive materials


In case Geenz wants to answer more questions :) ...

I would like to know whether the size, repeats, offsets and orientation of the proposed maps will (initially) be locked to those of the applied texture. I guess making them independent might have to be a feature request?

Secondly, even more than specular and normal maps, I would like to be able to superimpose (multiply) a tiled texture with a (smoothed) low resolution untiled AO mask. That would avoid sacrificing the high detail of the tiled texture to use AO, and allow both textures to be smaller than the large baked texture they would replace. Is there anything in the plans that would lend itself to that application? Or would that have to be a feature request too?


I second Drongle's suggestion of a multiply-effect overlay - it would make creating faked AO shadow effects HUGELY easier.

Currently, I'm forced to work with smaller surface areas to achieve a decent texture resolution AND faked AO shadowing, but this tends to be very limiting (in large architectural builds, in order to reduce texture overhead, I have to constantly repeat the same textures with included AO shadow effect, which gets repetitive very quickly, even if I have a couple of variations to mix things around).
If some kind of multiply mask were possible (even if limited to low resolution, say 256x256), it would go a long way toward solving the issue, especially if the multiply AO masking were independent of the underlying diffuse texture (with standard tiling/repeat capabilities). I could keep the AO shadow in place while changing up the diffuse texture's rotation, repeats and so on, which would vastly increase visual diversity while still maintaining a small texture palette overall.
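The multiply-mask idea described above is easy to sketch. The following Python snippet (purely illustrative; the function names and tiny example textures are made up, and nothing here is an SL or viewer API) shows a high-repeat tiled diffuse texture darkened by a single low-resolution AO mask that is stretched once across the surface:

```python
def sample_tiled(tex, u, v, repeats):
    """Sample a square texture with wrap-around tiling."""
    n = len(tex)
    x = int(u * repeats * n) % n
    y = int(v * repeats * n) % n
    return tex[y][x]

def sample_stretched(tex, u, v):
    """Sample a texture stretched once across the surface (clamped)."""
    n = len(tex)
    x = min(int(u * n), n - 1)
    y = min(int(v * n), n - 1)
    return tex[y][x]

def composite(diffuse, ao, u, v, repeats):
    """Multiply blend: tiled diffuse darkened by an untiled AO mask."""
    return sample_tiled(diffuse, u, v, repeats) * sample_stretched(ao, u, v)

# 2x2 diffuse tile and a 2x2 AO mask (1.0 = unshadowed, 0.25 = corner shadow)
diffuse = [[0.8, 0.6], [0.6, 0.8]]
ao      = [[0.25, 1.0], [1.0, 1.0]]

print(composite(diffuse, ao, 0.1, 0.1, repeats=4))  # shadowed corner
print(composite(diffuse, ao, 0.9, 0.9, repeats=4))  # unshadowed area
```

Because the mask is sampled with its own (stretched) mapping, the diffuse tiling can be changed freely without disturbing the baked shadow, which is exactly the workflow benefit being asked for.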

(Still, I am drooling over the possibilities of normal and specular maps, definitely!) :)


I can't wait to test this feature out. I have wanted normal and specular maps in SL for years now. 

I'd also like to add a +1 request for an AO shader. I brought up the need for this material in the informal user group meeting.  I won't repeat everything I said there since Drongle pretty much covered why it is immensely useful. It just needs to be repeated that the AO function in the viewer does not give the fine detail control over AO that an AO map does. Plus, there are many that can't even use the viewer feature because of hardware issues.


At the mesh UG, Geenz indicated that the initial implementation would have the specular and normal maps tied to the same size and parameters as the texture. So anything with independent parameters will presumably have to wait for consideration as added features. One step at a time, I guess. Without knowing the details of the implementation, it isn't possible to see how hard it might be to make such additions.



Drongle McMahon wrote:

At the mesh UG, Geenz indicated that the initial implementation would have the specular and normal maps tied to the same size and parameters as the texture.

I'm by no means an expert of what goes on inside a server or graphics card or anything, but I have the vague idea a texture has to be baked onto a surface before it can be rendered onto your screen. This would mean when you add a layer of occlusion to a diffuse map (with a different UV), all the textured surfaces either become unique or turn into one huge surface instantly.

Even 3ds max won't allow you to show two "effects" (diffuse, ambient, normal, bump, specular etc) at the same time in your viewport.

Maybe someone with some more technical knowledge could shed some light on this? After all, it is possible to have shadows and occlusion through the renderer in realtime.

 


I can see how that makes sense as a starting point, but it straight-off loses some options. Why shouldn't a normal-map have a different resolution to the visible texture?

Being able to use different parameters needs to be something the code does not prevent by accident. There needs to be distinct storage in place for the parameters of all the textures, a standard data structure, even if, initially, the values are matched automatically. It needs to be in the programmer's mind now, or we risk another bodged UI change.

 

Just an example: you want to create some natural surface, rough stone, perhaps. Currently, tiling the visible texture can leave an obvious pattern. A normal-map on the same parameters doesn't change that, but a slight difference in the tiling rate will break up the repetition. That regular blob in the tiled texture doesn't have the same normals every time.

 

Don't resort to a quick hack on the first stage.
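The "distinct storage, matched by default" idea above can be sketched concretely. This is a hypothetical data model, not the viewer's actual one: each map channel gets its own transform record, initialized as a copy of the diffuse channel's, so the first-pass locked behavior falls out naturally and later independence is just editing one record.

```python
from dataclasses import dataclass, replace

@dataclass
class MapTransform:
    repeats_u: float = 1.0
    repeats_v: float = 1.0
    offset_u: float = 0.0
    offset_v: float = 0.0
    rotation: float = 0.0

@dataclass
class FaceMaterial:
    diffuse: MapTransform
    normal: MapTransform
    specular: MapTransform

    @classmethod
    def locked(cls, base):
        """First-pass behavior: every map starts as a copy of the
        diffuse parameters, so they render identically until edited."""
        return cls(replace(base), replace(base), replace(base))

face = FaceMaterial.locked(MapTransform(repeats_u=4.0, repeats_v=4.0))

# Later independence is just one assignment, e.g. a slightly different
# normal-map repeat to break up visible tiling:
face.normal.repeats_u = 3.7
```

Because each channel is a separate record rather than a shared reference, adding a UI or script control for independent parameters later would not require reworking the storage.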


At the immediate level, yes, but as long as the maps are provided as separate images at the start, they could be independently scaled, offset, repeated etc before being combined, and a UI could be added to control that. Obviously that would add significant extra computation, which is why I rather expected the answer that I was given. Nevertheless, substantial texture volume savings are possible if the parameters can be made independent, such as the use of smaller (interpolated) normal maps to give gentle shaping effects to larger detailed surfaces.

Not sure what you are saying about 3ds Max. In Blender, it is very easy to combine diffuse and/or specular and/or normal maps at different repeats etc, even to use different mappings/projections. It's also easy to use different modes of superimposition (mix, multiply, add, overlay, difference etc...). You have to bake them into one texture to see them in real time in the 3d editing window though. Maybe that's what you mean? So you could say it's that sort of baking that I would like to see in the viewer. Probably too resource hungry*?

*eta: that's to say, the trade-off between increasing cpu/gpu resource and saving texture download resource may weight the balance against this.


I'm thinking we need a short glossary, so we can use the same terms for the same things.

If you go look at the older viewers, the texture layer on the AV skin was labelled "Tattoo", and skin colour and such was set by sliders, for a rather cel-animation look. Content creators soon figured out that they could use this original Tattoo layer for texture-based skins, but the name of the layer didn't change.

And then, a couple of years ago, new layers were added, a new sort of inventory item, called a Tattoo. It was confusing.

I think we want to avoid that.

 



Drongle McMahon wrote:

 

Not sure what you are saying about 3Ds. In Blender, it is very easy to combine diffuse and/or specular and/or normal maps at different repeats etc, even to use different mappings/projections. It's also easy to use different modes of superimposition (mix, multiply, add, overlay, difference etc...). You have to bake them into one texture to see them in real time in the 3d editing window though. Maybe that's what you mean? So you could say it's that sort of baking that I would like to see in the viewer. Probably too resource hungry*?

That is exactly what I meant, yes. Of course 3ds Max can have various UVs, but it won't show them in realtime in the viewport.

 


I've mentioned it in another forum, but just to add here: It's critical that there be a scripting interface to the server side of this as soon as it's available in the viewer. At the very least, we need to be able to set the maps. Assuming they aren't locked to textures (and they sure better not be), they'll also need to be animated, aligned, and scaled by script, too.

(Futile though it may be to mention: it's not necessary to further clutter-up the viewer's build tool with any UI for this.  I don't actually expect that viewer devs (LL and TPV) will be able to stop themselves from hanging yet more floating complexity off what are already UI abominations, but it's an option to instead simply let scripts define how users manipulate these.)



Qie Niangao wrote:

 

(Futile though it may be to mention: it's not necessary to further clutter-up the viewer's build tool with any UI for this.  I don't actually expect that viewer devs (LL and TPV) will be able to stop themselves from hanging yet more floating complexity off what are already UI abominations, but it's an option to instead simply let scripts define how users manipulate these.)

I couldn't disagree more (and I doubt LL will skip the UI for this). Writing a script, finding the face number, finding out I made a typo, having to resave the script over and over and over when trying to find a nice repeat, my god, that's the last thing I need when I simply want to apply a texture.

I don't see where you find all this UI clutter when building? Pretty much everything you need is in a couple of tabs in one single menu.

 


I don't personally care whether or not they keep adding more floaters to sprout from the Build tool, as the texture/sculptmap and color palettes do now. But somebody evidently cares about that: There's a whole initiative to re-engineer the Build floater to handle all the complexity with which it's being laden.

I do very much care that there be the ability to make more flexible and powerful tools through scripting than any viewer-side-only UI would support. Consider, for example, tools used to manipulate particles and animated textures, which would be a waste of developer time to put in the viewer.

There are, of course, other reasons for wanting script interface to these features -- notably, dynamic and interactive content -- the same reasons scripters have been so disappointed for two years now by the lack of such an interface to projected textures.  See SVC-6390 and SCR-163.


I'm well aware of the possibilities with scripts, but that doesn't mean a script is easier to use than the UI, even if they make the UI ten times more complicated. I would like to see control by script as well.

Particles controlled by UI would be a blessing to a lot of people. Not everybody likes the coloured text input which, although I more or less know how to use it, is nowhere as intuitive or fast as buttons one can click. Same for animated textures. People are more likely to experiment with a menu than with scripting which might frighten them.

As stated in the other answer to your post, you must be out of your mind saying the UI is complex when you're comparing it to scripting, imperfect as the building menu may be.


I'm not saying that the end result of having a script interface to features is that everybody has to write scripts, but that scripts can serve end users to interact with the features. The particle example wasn't to advocate everyone write llParticleSystem scripts, but rather to demonstrate one area where many are already familiar with using scripted UIs to get the results they want.

As a scripter myself, I'm selfishly interested in LL expending effort to expand the palette of UI features available to scripts--the most recent of which was llTextBox, still crippled in some viewers these many years hence--so that scripts can be the preferred tools for manipulating features of the world, rather than always having to incorporate everything into the viewer.

In any case, I'm 100% sure that TPV devs--and, for that matter, LL viewer devs--will build in the functionality for these particular -map features, and that's fine. In some alternate universe, however, where LL invested more in scripting, nobody would want those features in the viewer because those better scripts could produce better, more flexible and user-customizable UIs, at lower development cost, than can viewers. Granted, that's not entirely realistic, given the current state of SL scripting support.


No one said anything about not being able to use different resolution texture maps on a surface, just that you won't be able to set separate offset and scale parameters for each individual texture map in a material in the first version of materials. That's not to say that it won't happen in a future version, however.

There won't be anything that'll block this from being supported in the future, either by accident or intentionally.



Kwakkelde Kwak wrote:

I'm by no means an expert of what goes on inside a server or graphics card or anything, but I have the vague idea a texture has to be baked onto a surface before it can be rendered onto your screen. This would mean when you add a layer of occlusion to a diffuse map (with a different UV), all the textured surfaces either become unique or turn into one huge surface instantly.

Even 3ds max won't allow you to show two "effects" (diffuse, ambient, normal, bump, specular etc) at the same time in your viewport.

Maybe someone with some more technical knowledge could shed some light on this? After all, it is possible to have shadows and occlusion through the renderer in realtime.

It's all a matter of shader programming. A shader can be implemented so that it reads multiple textures using different UV maps and mixes them in realtime for the final rendering. A static lightmap has to be pre-baked of course, but it remains separate from the diffuse map, so you can use various diffuse maps on different parts of the model and apply a single global lightmap to all of them. This can considerably reduce texture memory usage because lightmaps with soft shadows look good even at low resolutions, while diffuse/normal/specular maps need to be hi-res but can be tiled or otherwise re-used through clever UV mapping.
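The memory-saving claim above is easy to check with back-of-the-envelope arithmetic. The snippet below (RGBA8, no mipmaps; the counts and resolutions are made up for illustration) compares one unique baked texture per surface against shared tiled detail maps plus a small per-surface lightmap:

```python
def tex_bytes(w, h, bpp=4):
    """Uncompressed texture size in bytes (RGBA8 = 4 bytes/pixel)."""
    return w * h * bpp

faces = 10  # distinct surfaces in a hypothetical build

# Approach A: one unique 1024x1024 bake per surface
baked = faces * tex_bytes(1024, 1024)

# Approach B: shared tiled diffuse/normal/specular maps, plus one
# low-res lightmap per surface (soft shadows survive low resolution)
shared = 3 * tex_bytes(512, 512)
lightmaps = faces * tex_bytes(128, 128)
separate = shared + lightmaps

print(f"baked:    {baked / 2**20:.1f} MiB")
print(f"separate: {separate / 2**20:.1f} MiB")
```

Even with made-up numbers the gap is an order of magnitude, which is why keeping the lightmap separate from the tileable detail maps pays off so well.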

Blender's viewport has supported programmable shaders since 2008 (version 2.48), so it can preview all these realtime effects just like they would appear in a game. The only requirement is that the graphics card supports GLSL (OpenGL Shading Language). See here for more info: http://www.blender.org/development/release-logs/blender-248/realtime-glsl-materials/

For an example of how programmable shaders can save texture memory, check this out:

http://vimeo.com/35470093

The scene in that video uses only two 256x512 textures for diffuse, normal and specular mapping. The rest is baked light and environment mapping. Support for multiple UV maps is a key feature here, because it allows clever reusing of texture details at multiple places while keeping the lightmap entirely separate. If the developers are serious about the upcoming materials system, flexible UV mapping should be near the top of their to-do list.


Very nice, now that you mentioned "shaders" I was able to find realtime shaders for 3ds Max; it seems they've been around for a couple of years. That might come in handy when the new materials are introduced.

For the normal high poly scenes I build, I suspect those shaders will kill even my brand new computer.

Either way, it's good to know it can be done. If I understand correctly, nothing is standing in the way of adding these features to the new SL material system then?

What I don't understand about your example video is your mention of baked lights. Does this mean the textures sent to your graphics card are not the 256x512 ones, but a whole load of unique textures baked using those small textures? That may save bandwidth, but it sounds like it would result in far higher memory usage on the graphics card.

If this baking is done viewer side that shouldn't be a real issue I assume. People with a slower computer can then turn it off like they now can with shadows and other fancy features.

