
Masami Kuramoto

Everything posted by Masami Kuramoto

  1. Kwakkelde Kwak wrote: No point in overthinking this, we'll have to work with what SL swallows anyway. For better or worse. From Nalates' blog: #SL CCIIUG Week 37 These meetings are getting smaller. There were about 8 to 10 people and this one had 2 griefer idiots at this one. Blocking took care of clowns that eventually wandered off. I’m pretty sure User Interface design is not a popular subject in SL. The problem is that this is not just about user interfaces. It involves modifications on the server, and my concern is that whatever these few people come up with, we'll be stuck with for years. Maybe lightmaps were thrown under the bus because the Lindens drew a line at three textures per material. However, in this case a 4th texture can make the other three considerably smaller, as shown earlier. Unlike the other three maps, baked lighting is low-frequency non-repetitive content. It can rarely be tiled, but it can be stored at very low resolutions. There is a reason why tileable material textures are usually high-pass filtered, while soft-edged shadows are not only acceptable but often desirable. Mixing high- and low-frequency image content leads to large, non-repeating textures. These are already being used in too many places in SL, but the current materials proposal does nothing to address this problem. Another problem with the proposal is the idea of using the diffuse alpha channel to control either opacity or emission/glow. Again, emission/glow is low-frequency content and should be controlled by the alpha channel of a lightmap. It should be possible to use transparency and emission/glow in the same material. The current proposal rules that out.
  2. Well, as I said earlier, it may be due to historical reasons. Today you can program your own shaders, so you can decide to compute the third vector component on the fly and use the blue channel for something else instead. I doubt that this was possible back in 2003.
  3. Kwakkelde Kwak wrote: Still wondering why "they" decided to use the B value for normals aswell. I guess this goes all the way back to the fixed function pipeline, before shaders became programmable. Object and world space normal maps actually need the third component. Today you can mix & match data in those RGBA channels in any way you like. Here's one example: http://www.polycount.com/forum/showpost.php?p=1506050&postcount=212
  4. Drongle McMahon wrote: Signed byte : 00000000 = -128, 10000000 = 0, 11111111 = 127 Also known as 8-bit excess-128. However, I'd rather think of percentages here, since RGB components are not necessarily bytes. Nor integers. By the way, since the normal vector in a tangent space normal map always has length 1 and points away from the surface, the Z component can be recomputed from R and G, which makes the B channel redundant and frees it for something else. For example, it's perfectly possible to implement a pixel shader that uses R and G for the surface normal and B to control specular reflection.
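The reconstruction described above can be sketched in plain Python. This is an illustration, not any particular engine's shader code; the function name is made up, and the excess-128 decoding follows the convention in Drongle's example:

```python
import math

def decode_normal_rg(r, g):
    """Reconstruct a unit tangent-space normal from two 8-bit channels.

    R and G store X and Y in excess-128 form (0 -> -1.0, 128 -> 0.0).
    Because the normal has unit length and points away from the surface
    (Z >= 0), Z can be recomputed instead of stored, freeing the B channel.
    """
    x = (r - 128) / 127.0
    y = (g - 128) / 127.0
    # Clamp guards against rounding pushing x*x + y*y slightly above 1.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

For a flat surface, `decode_normal_rg(128, 128)` gives `(0.0, 0.0, 1.0)`, the straight-up normal.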
  5. Indigo Mertel wrote: The texture has moldings, so it must match the height of the wall and must repeat for its length. I'm surprised no one has suggested cube projection yet. Select the walls, press U, choose "cube projection", adjust the "cube size" parameter until the texture starts repeating at the top edge of the wall. This is best done with a UV test grid and with the 3D view in textured mode, so that you have instant feedback. Holding shift while dragging the widget will increase/decrease the value in smaller steps. When done, move the entire UV layout 50% down. Now the top and bottom edges of the texture should be aligned with the top and bottom edges of the wall, while repeating horizontally and maintaining the original aspect ratio.
  6. Drongle McMahon wrote: I would have given light maps the highest priority. I am genuinely surprised by their absence from the project's "user stories" on the wiki page. It claims there has been a review process involving the "Content Creation Improvement Informal User Group". And development has been outsourced to the Exodus viewer team. Yet no one has considered light maps? They've got to be kidding. All I can say is, they better get this material system right the first time, because there may be no second time to fix it.
  7. Qie Niangao wrote: I'm now wondering how lightmaps would interact (if at all) with dynamic shadow maps, when they're enabled in the viewer. Naively, I can think of three possibilities: (1) Dynamic shadows disable lightmap rendering altogether. (2) Lightmaps for a surface are rendered along with dynamic shadows, as if they were an additional "light source". (3) Dynamic shadows are not rendered at all on surfaces with lightmaps. Assuming that 3 is what you get only when fullbright is switched on, I think I can imagine advantages to either 1 or 2, but probably one or the other is "standard". (2 seems most what one might expect, but 1 would be kind of like being able to lose shadow prims and baked lighting that look so awful when the rest of the scene is lit dynamically.) The video I linked to in post #24 shows dynamic shadows and static lightmaps in a single scene. They obviously work very well together. You can see that the sunlight is bright enough to eliminate any baked shadows, while the lightmaps are very effective in those spaces where the sun doesn't shine. How exactly this is implemented in the pixel shader is up to the developer. If you look at Blender's texture influence panel, you can see that a texture can affect multiple shader properties at the same time. The panel is basically a graphical UI for GLSL shader programs -- the same type that is used by the SL viewer. It would be awesome if we had the same flexibility in SL, instead of a few fixed implementations. My example above uses the lightmap to attenuate the intensity of diffuse and specular reflection. It does not touch any other parameters, so the floor is not emissive. It still requires an external light source. Fullbright is not an option at all because it eliminates all shading effects and renders the normal and specular maps useless.
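The attenuation described in that post can be sketched as scalar arithmetic. This is a simplified model, not the viewer's actual shader; all names and the exact formula are illustrative:

```python
def shade(diffuse, specular, n_dot_l, lightmap):
    """Per-pixel shading where a baked lightmap attenuates both the
    diffuse and the specular term, leaving emission untouched (zero here).

    All inputs are scalars in [0, 1] for simplicity; a real shader
    operates on RGB vectors sampled from textures.
    """
    direct = diffuse * max(0.0, n_dot_l) + specular
    return lightmap * direct
```

With `lightmap` at zero the surface goes dark regardless of the light direction, which is why an external light source is still required: the lightmap only scales lighting, it does not emit.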
  8. Drongle McMahon wrote: Yes. If we are correct in interpreting the wiki as saying that each of the maps is already going to have independent scaling and offset parameters, then I would guess it should not be difficult to add this, even if it's not there at the outset. People are going to go on using baked lighting, and the potential for saving total texture downloads is HUGE. We need these features now, not at some point in the future. The materials system is already late to the party. The longer we wait for normal mapping, the more meshes with excessive geometry get uploaded because in 2012 people will not settle for a vertex-shaded low-poly look any more. The longer we wait for light mapping, the more oversized diffuse maps with baked shadows end up inworld because people will not settle for a world without depth. Excessive geometry and excessive texture sizes are the two primary sources of lag in SL. You can educate creators to use resources efficiently, but you can't persuade consumers to spend money on ugly stuff.
  9. Qie Niangao wrote: I'm sure that for a long time there will be folks using rigs for which dynamic lighting and shadows yield unacceptable performance. But similar to an earlier question: is deferred rendering required for these material effects to work, and if so, does that imply dynamic lighting and shadows will always be present anyway? Normal and specular mapping require per-pixel lighting but not deferred rendering. The water plane in SL viewer 1 was normal-mapped, even before WindLight arrived. WindLight added a dynamic environment map to the water which requires an offscreen render pass of the entire scene. Deferred rendering added dynamic shadow maps which require yet another offscreen render pass. That's what made these features so taxing. Normal and specular mapping are cheap in comparison because they are static. So is light mapping.
  10. Darien Caldwell wrote: I don't see how this is any better than baked lighting. From the looks of it, it's still static; if you moved the light to the other side of the rail, then what? You're going to have to explain why this is better than a baked shadow, when both don't move with changes to the light source. To achieve the same look without lightmapping, you have to bake the shadows into the diffuse map and into the specular map. So instead of four 256x256 maps, you'll need an 8192x8192 diffuse map, an 8192x8192 specular map, and the 256x256 normal map. You'll waste 512 times more texture space than necessary. SL doesn't even support textures that large, so you'll either have to use multiple materials with 1024x1024 textures mapped to the floor, or an alpha plane hovering above it, or you'll have to settle for a blurry 1024x1024 floor texture (and still waste 8 times more space than necessary).
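The arithmetic behind that "512 times" figure can be checked directly, using the texture sizes from the example above:

```python
def texels(side):
    """Texel count of a square texture with the given edge length."""
    return side * side

# With a lightmap: tiled diffuse, specular and normal maps plus one lightmap.
with_lightmap = 4 * texels(256)

# Without: shadows baked into full-size diffuse and specular maps,
# plus the original 256x256 normal map.
without_lightmap = 2 * texels(8192) + texels(256)

print(without_lightmap // with_lightmap)  # prints 512
```

Two full 8192x8192 bakes dominate the total, so dropping the small normal map from the comparison barely changes the ratio.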
  11. Oz Linden wrote: We now have a wiki page on how Material Data is interpreted and used. Note that nothing there is final until it's all working... I could write all day about why lightmaps are a must-have feature, but maybe a few pictures will help get the point across. This is where we are now: tinted diffuse mapping: This is where we are going: tinted diffuse + specular + normal mapping: And this is where we should be: tinted diffuse + specular + normal + lightmapping: The diffuse map: The specular map: The normal map: The lightmap: Total texture size: 512x512. Effective texture size visible inworld for this example: 8192x8192. No blur, no pixelation, no alpha sorting issues, and the floor tiles can be reused elsewhere because the shadows are not baked in but kept separate. Furthermore, static lightmapping is much less taxing to low-end hardware than dynamic shadows. It's ideal for indoor scenes with multiple lights and soft shadows. Static lightmapping was a feature of the Quake game engine released in 1996. Please, for the love of Philip, give us access to this ancient technology, so that SL can finally look decent.
  12. In recent versions of Blender, empties are the preferred solution because they allow easy rotation without messing with UV maps. Background pictures are merely for backwards compatibility now.
  13. Kwakkelde Kwak wrote: I'm by no means an expert of what goes on inside a server or graphics card or anything, but I have the vague idea a texture has to be baked onto a surface before it can be rendered onto your screen. This would mean when you add a layer of occlusion to a diffuse map (with a different UV), all the textured surfaces either become unique or turn into one huge surface instantly. Even 3ds max won't allow you to show two "effects" (diffuse, ambient, normal, bump, specular etc) at the same time in your viewport. Maybe someone with some more technical knowledge could shed their light on this? After all it is possible to have shadows and occlusion through the renderer realtime. It's all a matter of shader programming. A shader can be implemented so that it reads multiple textures using different UV maps and mixes them in realtime for the final rendering. A static lightmap has to be pre-baked of course, but it remains separate from the diffuse map, so you can use various diffuse maps on different parts of the model and apply a single global lightmap to all of them. This can considerably reduce texture memory usage because lightmaps with soft shadows look good even at low resolutions, while diffuse/normal/specular maps need to be hi-res but can be tiled or otherwise re-used through clever UV mapping. Blender's viewport has supported programmable shaders since 2008 (version 2.48), so it can preview all these realtime effects just like they would appear in a game. The only requirement is that the graphics card support GLSL (OpenGL Shading Language). See here for more info: http://www.blender.org/development/release-logs/blender-248/realtime-glsl-materials/ For an example of how programmable shaders can save texture memory, check this out: http://vimeo.com/35470093 The scene in that video uses only two 256x512 textures for diffuse, normal and specular mapping. The rest is baked light and environment mapping.
Support for multiple UV maps is a key feature here, because it allows clever reuse of texture details in multiple places while keeping the lightmap entirely separate. If the developers are serious about the upcoming materials system, flexible UV mapping should be near the top of their to-do list.
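What a shader with two UV sets does can be mimicked on the CPU. A hedged sketch, with textures as nested lists and nearest-neighbour sampling standing in for hardware texture units (all names are made up):

```python
def sample(texture, u, v):
    """Nearest-neighbour lookup into a texture stored as rows of texels."""
    h = len(texture)
    w = len(texture[0])
    x = min(w - 1, int(u * w))
    y = min(h - 1, int(v * h))
    return texture[y][x]

def shade_pixel(diffuse_tex, light_tex, uv_tiled, uv_unique, repeats=8):
    """Mix a tiled diffuse map (UV set 1, wrapped so it repeats) with a
    low-res, non-repeating lightmap (UV set 2) -- the multi-UV setup
    described above. Scalar channels for simplicity."""
    u, v = uv_tiled
    du, dv = (u * repeats) % 1.0, (v * repeats) % 1.0  # wrap: tile 8 times
    lu, lv = uv_unique
    return sample(diffuse_tex, du, dv) * sample(light_tex, lu, lv)
```

Because the wrap happens per fragment, a tiny diffuse texture covers the whole floor, while the single lightmap sample darkens or brightens each region independently of the tiling.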
  14. Automatic weighting uses the so-called "bone heat" algorithm, which means that the bones emit "heat" that gets absorbed by nearby meshes and increases their vertex weights. There are two things to keep in mind: Only the backsides of faces can absorb heat. Heat travels from the bone until it gets absorbed by geometry. Once absorbed, it cannot travel any further. If automatic weighting does not work correctly, it's either a case of flipped normals (i.e. the bones are not "inside" the mesh) or geometry with an onion layer characteristic (i.e. the outer shells are shielded by the inner ones). In the latter case, you have to separate the shells before weighting them.
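The idea behind bone heat can be reduced to a toy formula. This is emphatically not Blender's actual implementation; the `blocked` flag stands in for the real visibility test against intervening geometry:

```python
def bone_heat_weight(bone_point, vertex, blocked, eps=1e-6):
    """Toy bone-heat illustration: a vertex receives weight that falls off
    with squared distance from the bone, and receives nothing at all if
    geometry blocks the path -- the 'onion layer' failure described above."""
    if blocked:
        return 0.0
    d2 = sum((a - b) ** 2 for a, b in zip(bone_point, vertex))
    return 1.0 / (d2 + eps)
```

An inner shell sitting between the bone and an outer shell makes `blocked` true for the outer vertices, which is exactly why separating the shells before weighting fixes the problem.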
  15. Spinell wrote: Oh, I just like to use solidify in most of my builds, but I guess it didn't work very well with this fabric. It works well with fabric if you disable "fill rim". Anyways, here are some pics about the strain map. Take a look: it does look similar to the weight maps in blender, right? Not sure if they're the same. A strain map in MD is the same thing as a stress map in Blender. It can be used to control material features during rendering or baking. Vertex weights can also be used to control cloth properties such as pinning, structural stiffness and bending stiffness, which are essential to produce realistic clothing items. However, none of these are related to rigging. I exported this skirt as an OBJ and checked if it had any weights in weight paint mode, but alas, it didn't. MD's feature set is limited in many ways. That's why it's so easy to use after all (and so horribly inefficient for game asset creation).
  16. Careful when removing doubles from meshes with lots of intersecting geometry, because this is what can make such a mesh non-manifold in the first place. You don't need addons to fix this. Here's how you can make the mesh manifold again: Select the non-manifold object and switch to edit mode. Unselect all, then switch to edge select mode and choose Select --> Non Manifold from the menu. Choose Mesh --> Edges --> Mark Sharp. Switch to object mode. Above the Decimate modifier, insert an Edge Split modifier with the Edge Angle option disabled and the Sharp Edges option enabled. Apply the Edge Split modifier.
  17. Compare the render previews between the material node and the output node. If they look different, your node setup is wrong. Make sure that the color mix node is set to "Add" and factor 1. The default "Mix" setting will yield the weighted mean of the two input colors, so the result will be darker than the original. If you use Blender 2.49, you have to connect "Spec" with "Color1". In current versions of Blender the order doesn't matter. Specular highlights don't use the mirror settings at all. If you actually want to bake reflections of the environment, you have to use the material's "Color" output instead of "Diffuse". If none of the above fixes your problem, I want to see the node setup.
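The difference between the two blend modes can be shown with scalars. A simplified model of the node's behaviour, not Blender's exact code:

```python
def mix_node(color1, color2, fac, blend="MIX"):
    """Blender-style color mix for one scalar channel (illustrative).

    "MIX" blends toward color2 by `fac`: at fac 0.5 the result is the
    weighted mean, which darkens a sum of specular and diffuse inputs;
    at fac 1.0 color1 is discarded entirely.
    "ADD" adds `fac * color2` on top of color1, preserving brightness.
    """
    if blend == "ADD":
        return color1 + fac * color2
    return (1.0 - fac) * color1 + fac * color2
```

For a diffuse value of 0.8 and a specular value of 0.6, "MIX" at factor 0.5 gives 0.7 (darker than the combined light), while "ADD" at factor 1 gives 1.4, the full sum of both contributions.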
  18. Dilbert Dilweg wrote: That can happen if you dont apply your modifiers before rigging... Any subsurf modifiers or any modeling modifiers should be applied before rigging... Subdivision modifiers are usually not a problem because they interpolate vertex weights automatically. In fact you can save a lot of weight painting time by rigging a low-poly cage and subdividing it later.
  19. Alexandra Barcelos wrote: Hello i use avastar to rig my meshes in blender but the shoulders and sometimes the back come out deformed once added to the avatar in second life ... i tried changing the pose of the avastar and rigging i also tried modifying the mesh . Are you saying that the deformation in SL is different so that you can't preview it in Blender? In that case your armature modifier may be misconfigured. The options "Bone Envelopes" and "Preserve Volume" should be disabled. I see developers in the marketplace that also use garment maker to make their clothing meshes and the shoulders are perfect ... Garment maker is useful to simulate the physics of cloth, but it produces meshes not suitable for realtime rendering. Of course that doesn't stop many merchants from importing garment meshes to SL anyway, since we can always blame LL for the lag they cause. Is there a trick or a fix for this ? It depends. Volume loss due to bending or twisting can be reduced by inserting additional edge loops around the joints. But without any screenshots, we can only guess what's going on.
  20. Switch to edit mode. Select all the vertices that you want to assign exclusively to the neck bone. In the object data panel, select the neck bone vertex group, set the weight slider to 1.0 and click the assign button. Switch to weight paint mode. In the weight tools panel, click the "normalize all" button. In the tool properties panel, make sure that "lock active" is enabled.
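The steps above can be sketched for a single vertex in plain Python (not the bpy API; group names and the function are illustrative). The point of "lock active" is that the neck weight stays fixed while every other group is rescaled to fill the remaining budget:

```python
def normalize_all(weights, lock="neck"):
    """Rescale every vertex group except the locked one so a vertex's
    weights sum to 1.0 -- mirroring "normalize all" with "lock active".

    `weights` maps group name -> weight for a single vertex.
    """
    locked = weights.get(lock, 0.0)
    rest = {g: w for g, w in weights.items() if g != lock}
    total = sum(rest.values())
    if total == 0.0:
        return dict(weights)
    budget = max(0.0, 1.0 - locked)          # what the other groups may share
    scaled = {g: w * budget / total for g, w in rest.items()}
    scaled[lock] = locked
    return scaled
```

A vertex assigned weight 1.0 in the locked neck group ends up with zero in all other groups, which is exactly the exclusive assignment the steps aim for.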
  21. How about migrating to a grid where mesh uploads are free, such as OSGrid?
  22. If at the start of the simulation the garment is already within the minimum distance of the collision mesh, it will get pushed away immediately (and up, if necessary). To fix this, either reduce the distance value or scale up the meshes before starting the simulation.
  23. So basically the main part material doesn't show up. Could it be transparent? Can you see the missing part in the preview window before uploading? Can you see it after you enable "highlight transparent"?
  24. Fairy Fanshaw wrote: The head did not match the body, so we thought we make the head unrigged Did you upload the rigged and unrigged parts together or separately?
  25. How many materials did you use?