
Drongle McMahon

Advisor
  • Posts

    3,539
Everything posted by Drongle McMahon

  1. In Blender, you can map all the materials to the same UV map in such a way that each one on its own uses the whole area of the UV space. If you select them all together, the UV islands will overlap, but that doesn't matter. Then you can assign a different image to each material in the UV editor and bake them all at the same time. The baked texture for each will appear only on the assigned image, and they won't interfere with each other. Unless you really need to use cages for baking, which Blender doesn't have yet, this is the easiest solution to your problem. Here is the workflow (roughly)... 1. Assign the materials to the low poly mesh faces. 2. With each material in turn selected (button under material list)... a. Create a new image (preferably with the name of the material). The selected faces will be assigned to this for baking. b. UV unwrap (method of choice, plus editing). 3. Set up the high poly for baking (no UV mapping required). 4. Select the high poly, then shift-select the low poly (in the outliner if it's hidden). 5. Bake. 6. Save each of the images. The result of baking each set of material faces will be output only onto the assigned image. There will be no interference between them as long as the faces assigned to a material and to the corresponding image are identical. The same effect can be achieved with a different sequence of actions, as long as that condition applies at the point of baking.
  2. To rotate just the preview, place the mouse cursor over the preview and hold the control key while moving the mouse. This will only rotate the view, not the model after it's uploaded. For that you do need to rotate in the authoring program.
  3. You can solidify only the selected faces by using the solidify operator instead of the modifier, which affects the whole mesh (Ctrl+F, S or Mesh->Faces->Solidify). However, this gives you real thickness (+/- an edge), which you don't need. The simple duplicate-and-flip-normals approach is easier. Both duplicate the UV map of the duplicated faces. ETA: you don't really need to scale the duplicated/flipped faces, as the inside and the outside at the same point can never be seen at the same time, just as with a piece of cloth.
  4. "Too bad we can't animate normal maps and specular maps like we can diffuse maps" Not sure why you say that. The normal and specular maps get animated along with the diffuse map. This is especially useful for rippling water. They can't, however, be animated independently, which is a pity. So you can't have the ripples moving while the diffuse texture stays put. Maybe that's what you meant? It should work with a solid color texture, but not with one where there's a pattern that needs to stay fixed.
  5. Incidentally, here are some glasses. These are lit to emphasize problems. On the left is the best I can do with a tube prim. Notice the faulty banded shading of the inside of the hollow. It also has a small dimple in the middle of the base. Next is a mesh with just one surface each for inside and outside, each a separate material. Then another mesh, but now with each of the inside and outside faces duplicated with normals flipped. This is necessary to get the expected reflections from the base. Compare it with the simpler mesh. It uses four materials. It has proper LOD meshes, but the extra geometry raises it from 1 to 2 LI. Adding a fifth material, you can put a thin label on it. Here the UV mapping makes the label text visible mirrored on the inside. If it needs to be opaque, that would need yet another material. I didn't do the LODs for the labelled ones, so I don't have LI values, but I am sure it would still be 2. (Size of glasses is 0.1 x 0.1 x 0.3 m.)
  6. If the glass is a prim cylinder, a mesh label will work best if its LOD meshes have exactly the same geometry as the prim. For perfection, the LOD switches should also happen at the same camera distance. This means the label has to be extended to the same dimensions as the glass. All things considered, if you go the mesh route, you are probably better off using a single mesh for the whole thing, glass+label. (For a mesh glass vessel, you also need to use separate materials (SL faces) on the inside and outside, even if they have the same texture, to avoid shading artefacts.)
  7. These sound like the names of unwrapping options presented by the software you are using. While there may be similar sets of functions in different programs, the names differ, and the exact functions will be specific to that program. So, to get the best advice, it would be useful to say which software you are using. Then someone familiar with it can give the best answer. Then, of course, the best choice depends very much on the mesh you are trying to unwrap, whether you are doing it all at once, etc. That's why the different functions are there. So some information about the mesh would probably also help with getting the most useful answer. In the end, experimenting while using the maps to apply a test texture is the way to learn for yourself which is most appropriate for your model. Sometimes the UV mapping can be even more difficult and complicated than making the mesh, and it is crucial for obtaining satisfactory texturing.
  8. Sounds like too many materials. If you have more than 8, the uploader simply discards all polys with the 9th and higher materials, without giving any error message. Look at the materials list in Blender (see picture) and count them, not forgetting to scroll. If there are more than 8, you need to select groups of mesh faces and then click the Assign button (edit mode only) until there are only 8 assigned. Then you can remove the rest (blue ring), although that shouldn't matter. If reducing the number of materials means you can't texture as many SL faces independently as you need to, then you will have to join into more than one object, as you can only have 8 materials per object.
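To make the discard behaviour concrete, here is a small, purely hypothetical Python sketch (not part of any SL tool) of what the uploader appears to do with faces whose material slot index is beyond the 8-material limit:

```python
# Hypothetical helper mimicking the uploader's behaviour described above:
# polygons assigned to the 9th or higher material slot are silently dropped.
MAX_SL_MATERIALS = 8

def check_materials(face_material_indices):
    """face_material_indices: one 0-based material slot index per face.
    Returns the surviving faces' slot indices and the slots that get dropped."""
    kept = [i for i in face_material_indices if i < MAX_SL_MATERIALS]
    dropped = sorted({i for i in face_material_indices if i >= MAX_SL_MATERIALS})
    return kept, dropped

# Faces in slots 8 and 9 (the 9th and 10th materials) would vanish on upload.
kept, dropped = check_materials([0, 3, 7, 8, 9, 8])
```

Running a check like this on your exported face assignments before upload would catch the problem, since the uploader itself gives no error message.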
  9. "1000% no alpha" If there is an alpha channel, then you may get the alpha glitch even if all the pixels are completely opaque. If that's the case, open it in Gimp, remove the alpha channel (Layers menu), and re-export. Alternatively, save in a non-alpha format, such as bmp. See if that makes the difference. I am guessing, but perhaps the other grid removes the alpha channel if it's completely opaque?
  10. Hmm. I too can't log in with main account, but can with an Alt. I'll have to do the IP quiz yet again!! :smileyfrustrated: So it's not everyone, it's account-specific. I will try to get to the Server beta user group tomorrow to ask what's going on. If I don't make it, I hope someone else can ask.
  11. "I was thinking that you couldn't use the shape type change with sculpted prims without substantially increasing the LI for the entire object.." The download weight of a sculpty is now capped at 2. When they are set to physics shape type "None", their physics weight will be zero. So no matter how big they are, the LI can't go above 1 + 2 x the number of sculpties. It would still be better as mesh because the LOD behavior is more controllable, the texturing better, and the rendering more efficient, but that's another story.
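As a back-of-envelope sketch of that bound (the exact land-impact rounding rules are an assumption here, so treat this as illustrative only):

```python
# Assumed simplification of the LI bound described above: each sculpty's
# download weight is capped at 2, and with physics shape type "None" its
# physics weight is 0, so a linkset's LI cannot exceed roughly
# 1 + 2 * (number of sculpties).
def sculpty_linkset_li_bound(n_sculpties, root_weight=1.0):
    download = root_weight + 2.0 * n_sculpties  # cap of 2 per sculpty
    physics = root_weight                       # sculpties set to "None" add 0
    # LI is taken (roughly) as the rounded maximum of the weight columns.
    return round(max(download, physics))
```

So three sculpties linked to a simple root can never push the linkset above 7 LI, however large they are stretched.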
  12. If my recollection is right, the part of Havok used by the mesh uploader is the decomposition into convex hulls that happens when you click "Analyze" on the physics tab to generate a physics shape that is a collection of convex hulls. If the viewer in question has that function, I suppose it must be using an alternative, probably open source, library. So for a particular mesh, the result may be different from that obtained with the Havok libraries. However, the most reliable way of controlling the generation of hull-based shapes is to provide a custom physics mesh that already consists of non-overlapping hulls. In that case, the results would probably be the same whatever the library used, as it should just be the set of hulls in the provided mesh. So if you do your physics shapes that way, you should not care at all. If you don't, then you might care, but just a very little bit.
  13. "I really don't understand why they don't pursue it" I have always assumed it is because pursuing an active copyright filtering process would imply their acceptance of responsibility for all infringing content, and thus take them outside the safe harbor provision of the DMCA. That's not my own idea, but I can't remember where I first saw it. Maybe someone who, unlike myself, has the necessary legal knowledge can tell us?
  14. "As far as I know, a blank map would be <127,127,255>" Ah. That seems to be right. I was assuming that my 128,128,255 was correct because it gives the identical result to LL's Blank normal map, but from the picture below, I think LL and I got it wrong. Interestingly, it appears that Blender uses <127,127,254>. At least that's what I get when baking a flat surface onto itself. So I added that as well. The 127,127 ones are obviously much better, although you can still see a seam if you turn environmental reflection right up. (I think I'll have to delete my comment from the jira.)
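The reason the "blank" colour is ambiguous at all comes out of the encoding arithmetic. Under the usual channel = (n + 1) / 2 convention (an assumption; decoders vary in detail), a flat normal lands exactly between two 8-bit values:

```python
# A flat normal (0, 0, 1) encodes to (n + 1) / 2 * 255 per channel, which is
# exactly 127.5 for x and y -- there is no exact 8-bit value, so different
# bakers round it differently (127 vs 128).
def encode_normal(n):
    return tuple((c + 1.0) / 2.0 * 255.0 for c in n)

def decode_channel(c):
    return c / 255.0 * 2.0 - 1.0

flat = encode_normal((0.0, 0.0, 1.0))  # x and y come out as 127.5
# Under this decode, 127 and 128 give equal-sized tilts of opposite sign,
# which is why two "blank" maps can shade a seam differently.
low, high = decode_channel(127), decode_channel(128)
```

Which of the two candidates renders flatter then depends on the decode the renderer actually uses, which this sketch can only guess at.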
  15. "It's also a result of baking to the wrong kind of geometry..." I disagree. I deliberately chose this extreme case because the purpose was to obtain the maximum effect from the tangent basis mismatch, and thus the maximum improvement from changing it, because that is what I was discussing. In real cases, the effects are much more subtle. It is true that a nicer result can often be obtained by using hard edges in the low poly (albeit at the cost of extra vertices, because they get split). That is really beside the point here though.
  16. Comment added, but it seems that I can't add attachments, which makes it practically useless. If you can add the two pictures from my message above (46), that will help. I could add the uv baked from high poly here (although it is 1024x1024 !), but I don't know how to get you the dae file, which is the crucial thing :matte-motes-confused:
  17. Using your example, I can now reproduce the seam effect in a simple pipe bend with a suitably strained UV map. The effect is very obvious for the map baked from high poly with sculpted effects (left). However, it's more interesting comparing no normal map (middle) with the blank normal map (right, mine or the inbuilt blank). With no map, there is no seam. With the blank map, there is a seam. So it's something happening with any UV map. The effect of the blank map can't have anything to do with tangent basis etc. Since there is no seam with no normal map, it's not to do with the basic rendering in ALM. So it must be happening in the code that applies the normal map. My blank is <128,128,255>. It might be interesting to try slightly different colours. The pictures use a blank spec map with the default shiny settings, g=51, e=0. ETA let us know if/when you submit a jira, and I will add this example (I think we can now before triage, no?). ETA2 Here's the model and uv map - affected seam highlighted.
  18. I haven't been able to make a simple reproduction of this. So this reply is not directly about the seam problem. However, it is about artefacts based on mismatched tangent basis calculation in the programs generating and rendering a normal map. There is a freeware thingy called Handplane that can take an object-space normal map and a low-poly obj file and generate output tangent-space normal maps for different rendering engines. SL isn't among the target rendering engines, but, for me so far, using the Unity engine output seems to give maps with much less distortion than the maps produced in Blender. It's just possible that this might also mitigate the seam problems described here. Here is a picture of the same scene with a model using the Blender tangent-space map (top) and the one generated by Handplane (below). The workflow was... Low poly is a simple cube, triangulated before doing anything, all smooth shaded; Make traditional UV unwrap after specifying seams; Export collada and obj (latter including normals - I don't know if that matters); Bake normals from high poly to tangent space and save as the Blender-generated normal map; Bake normals from high poly to object space and save for input to Handplane; Use Handplane with the low poly being the exported obj file, the object space normal map from Blender, using xNormal as input map generator (Blender and xNormal use the same tangent basis calculation, so, maybe same for object space too?), using Unity as output renderer, other settings (channel flips) left at default; use the output as the Handplane-generated map. Upload the lo-poly collada and both normal maps to Aditi; set up a scene with two objects to maximize shading artefacts with the Blender normal map; screencapture; replace the Blender map with the Handplane map; screencapture again. Here are the two maps (reduced), with the differences (value x 5) in between.
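The core step a tool like Handplane performs can be sketched in a few lines of plain Python: re-expressing an object-space normal in a per-vertex tangent frame. This is a minimal sketch that assumes an orthonormal (tangent, bitangent, normal) basis; real bakers differ in how they compute and average that basis, which is exactly the mismatch discussed above.

```python
# Minimal sketch (assumed orthonormal basis): convert an object-space normal
# into tangent space by projecting it onto the tangent, bitangent, and
# normal vectors of the surface frame.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(n_obj, tangent, bitangent, normal):
    # For an orthonormal TBN matrix, the transpose is the inverse, so
    # projecting onto each basis vector performs the conversion.
    return (dot(n_obj, tangent), dot(n_obj, bitangent), dot(n_obj, normal))
```

The artefacts arise because the baker's and the renderer's versions of tangent and bitangent need not agree, so the same map gets decoded against a slightly different frame.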
  19. 1. Object mode: Add->Mesh->Cube 2. Edit mode: Select all, U->Unwrap (default unwrap, each face fills whole UV space) (note a). 3. Select material tab (8th button with sphere in Properties bar). 4. Click "+" on right of top box six times (six material slots appear). 5. Select each material slot in turn and ... 5a. Click "+New" button underneath (new material name appears in slot) 5b. Click the colour picker box under ">Diffuse" and choose a distinct colour. (note b). (all the faces will appear to be the first added colour at this point, note c) 6. Select each face of the cube in turn and... 6a. Select a different material slot and click "Assign" button beneath. (You should have a cube with different colours on each face.) 7. Export collada file. This is essentially what Aquila described. It will give you the cube with six materials. In SL, each face can be selected and a different texture can be applied, just as with a prim cube. Some additional info... Inworld "Faces" are independently textured groups of mesh polygons. In Blender, these groups are defined by the assignment of different materials to groups of faces. You have to assign different materials to get different faces. Applying textures to mesh faces in Blender by selecting the faces and loading an image in the UV editor will cause them to be used in the 3D view when Textured solid or textured display mode is selected. This is a convenience, but these textures will not appear in renders, and they will not cause the assignment of materials (since 2.49) in the exported Collada file. So they will not make different faces inworld. This is true even if you select the "Include UV textures" exporter option. Once you have assigned materials, they will be included in the Collada export whether you choose to include textures or not. So getting distinct faces does not require including textures in the export.
Although you can do so if it helps, there is no need to separate the mesh to make the UV maps of the different materials. Simply select the material (under the material slot list in the Properties panel) and proceed to map however you choose. This will use the full UV space for each material, so that the UV maps will be overlapping. This does not matter because they will be textured completely independently, and the textures don't interfere between materials/faces. If the materials/faces have different areas, requiring different texture resolutions, you can simply use different sized textures. It doesn't matter for the mapping step. Of course you can still map the whole thing in a single go, before or after material assignment, so that none of the UVs overlap, but that means that a large proportion of the texture for each material will be wasted. Notes... a. This is exceptionally simple for the cube. For anything more complex, it's better to do the UV mapping separately for the faces comprising each material. This can be done before creating and assigning the material, but it's easier to get it right if you do it afterwards, using the material to select. b. The different material colours aren't necessary, but make it easier to see what you are doing. They also show up when you first rez the mesh, so you can easily select faces for texturing. c. The first material you create will automatically be assigned to the whole mesh. d. If you already have materials, for example on other objects, then you can simply assign those to the material slots, using the selector to the left of the "New" button, instead of creating new ones. PS. I recommend against including textures with mesh upload because (1) you have to pay each time even if the textures are unchanged, (2) you will accumulate multiple copies of the textures in folders that are not obvious. Point (1) is overwhelming if you are using general-purpose textures that will be used on many objects.
All this is accurate to the best of my knowledge, but that is far from perfect and I welcome any corrections.
  20. Oh yes, but an additional explicit "x+y+z+" would make it easier to relate that with the notation generally used elsewhere, I think. That's what I was looking for.
  21. Certainly no more stupid than me. I wasn't at all sure till I retested it. Here's the (simplest) experiment. Bottom left is the Blender 3D view with the "high poly" mesh in edit mode, and the low poly (one quad) beneath it, looking from above in the Y direction. Above that is the normal map baked from the high to low poly (UV map projected from view (bounds) from directly above). Top right is the view of the two meshes rendered in Blender, from a similar view, with a single light above left of the camera. Bottom right is the low poly mesh with that map applied in SL, midnight, single local light in the visible sphere. This is all with no channel inversions or rotating anything. So I think that confirms SL and Blender are the same. All the googled references I found said that Blender is (now) x+y+z+. So if that's right, I think that means SL is x+y+z+ too. I think I saw something saying that Blender object-space maps were x+y-z+, and there were suggestions that it should be changed to the same as the tangent space maps. So maybe that difference may be the source of some confusion. I don't find all this easy, so I just try to remember that it has to look like red light from the right and green light from the top. I assume that is x+y+, but I am ready to be corrected if it's wrong. ETA: This is something that seriously needs to be put in the wiki if and when we are certain.
  22. "Second Life uses +x +y tangent space normal maps." Yes. I second that. I just did a whole lot of experiments to re-convince myself. It's the same as Blender (2.6+), so no channel inversion should be necessary. On the other hand, I have to invert the green in normal maps made with the GIMP normalmap plugin; so I suppose that uses +x-y by default. I also noticed that rotating the normal map inworld messes the normals up completely, but rotating the UV map in the model, after baking, doesn't seem to. I have to try to get my head round that now.
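The green-channel inversion mentioned above is a one-liner per pixel. A minimal sketch, assuming the usual 8-bit channel range and that the source map really is +x -y as suggested:

```python
# Convert an assumed +x -y normal map pixel (e.g. from the GIMP normalmap
# plugin, as noted above) to the +x +y convention by inverting green.
def flip_green(pixel):
    r, g, b = pixel
    return (r, 255 - g, b)
```

Applying it twice returns the original pixel, so running it on a map that was already +x +y would simply re-introduce the problem rather than fix it.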
  23. I also see in your pictures a sort of banding effect that looks like a problem with the UV map. When two edge loops get placed on top of each other in the UV map, then the surface between them gets textured with a single pixel at each point that is stretched to a horizontal/vertical line. Here's an example of a cylinder, with the left hand one having the usual UV map (top) and the right hand one having two sets of edge loops superimposed with a small gap between, as shown in the lower UV map. The dotted lines show how the loops were moved. I used just a cloud texture to illustrate the effect.
  24. "Something that got fixed..." I think the hack was a subversion of the local avatar texture baking in the viewer, from where the baked texture was uploaded to the server and on to other viewers. Since the move to server-side baking, the code used for the hack is no longer available (or at least, no longer functional).