
Drongle McMahon

Everything posted by Drongle McMahon

  1. I think you should uncheck "Generate Normals". If you find it checked when you try again, click the Reset & Clear button.
  2. Brief introductory guide... You need a physics shape other than the default convex hull. You can choose either of two types, triangle-based (3) or hull-based (4).... 1. Your simple opened cube (backface culling on, to see the invisibility of backfaces). 2. Select all and use Mesh->Faces->Solidify to make inner surfaces. 3. Remove narrow edges, for a triangle-based physics mesh. DON'T "Analyze". 4. Matching mesh made with simple non-overlapping pieces, for a hull-based physics mesh. DO "Analyze". 5. Use 2 for the visible mesh and 3 or 4 for the physics mesh. Set the physics shape type to "Prim" inworld. Notes - The physics mesh is added on the Physics tab of the uploader - the "Analyze" button is there too - the physics shape type is set on the Features tab of the inworld edit dialog - Use Mesh->Vertices->Separate->Selection to make the cubes separate objects - select and export each using the SL preset, with "Selection Only" checked (a scripted version of that last step is sketched below). ETA - there are a million threads in this forum about physics shapes, with whatever level of technical detail you like.
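A minimal sketch of that separate-and-export step, assuming the Blender 2.7x Python API and a made-up output folder (the GUI route with the SL preset and "Selection Only" does the same thing):

    import os
    import bpy

    export_dir = "/tmp/sl_export"  # hypothetical output folder

    # Export every selected mesh object to its own .dae file, one at a time,
    # which is equivalent to exporting each with "Selection Only" checked.
    for obj in list(bpy.context.selected_objects):
        bpy.ops.object.select_all(action='DESELECT')
        obj.select = True                       # 2.7x API; 2.8+ uses obj.select_set(True)
        bpy.context.scene.objects.active = obj  # 2.7x API
        bpy.ops.wm.collada_export(
            filepath=os.path.join(export_dir, obj.name + ".dae"),
            selected=True,       # "Selection Only"
            sort_by_name=True)   # "Sort by Object Name"; check these parameter
                                 # names against your Blender version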
  3. Nice. Music too! :matte-motes-grin: I guess changing the label on the option would be better than changing its action, because that way it would not break any existing workflows. Much easier too!
  4. Here is how it seems to work... If you join objects, the combined object will have a set of UV maps consisting of one map for each distinct map name in the objects before they were joined. In each map, if an object had a map with that name before joining, that map will be incorporated, with its coordinates unchanged, into the map of the same name in the combined object; if it did not, then all the vertices of that object will be set to 0,0 in that map of the joined object. In other words, the UV coordinates of each object remain the same in any map for which it already had a map of that name, but in any map it did not already have, its vertices are coalesced at 0,0. It is the coalesced maps that give the 1-pixel texturing.

Now the dae file - If the Active UV Map Only option is unchecked, the exporter will export all the UV maps of the joined object, but the SL uploader will only include one. In the experiments I did some time ago, that was always the last one in the file (this would happen if it read them sequentially and overwrote the previous one each time, although it's not really that simple). If the option is set, then, presumably, the dae file should only get the active UV map. However, in my Blender (2.71) it exports the Selected one, not the Active one*.

You can select the active UV map in the UV Maps section of the Object Data tab of the Properties panel. This is shown in the picture. The maps are listed, and the small camera icon is the button for selecting the active map. It is shaded for the inactive maps, but clear for the active map. When a map is active, it will be used in rendering (and baking?). It will also be displayed in the UV map edit window, where you can see whether it has proper or coalesced UV coordinates for the selected vertices. Here, the map with the green arrow is active, while the other two are inactive. Note that the map with the blue arrow is selected, but NOT active. You can rename the maps here (eg, before joining) by double clicking on their names.

So if you have a set of UV mapped objects you want to join, the best thing is to make sure the UV map you want to use for each has the same name (this is automatic if you have just the default map name for each object). You can ensure this by renaming them if required (a scripted version of that is sketched below). Then the joined object will get a map with that name that has all the desired mappings. You can then delete any other UV maps in the joined object. Alternatively you can make sure you make the desired map active by clicking its camera AND selecting it, before exporting with the Active UV Map Only option (this is automatically selected if you choose an SL export preset, but that doesn't mean it makes the right map active!).

*I assume this is a bug. I don't know if it has been corrected in later Blender versions. Meanwhile, some of the confusion people have experienced may be because of this. If you always leave the correct UV map both active and selected, you should be immune from this bug or its correction. ETA - checked and discovered that it is the selected, not the active, map that gets exported in Blender 2.71 !! ETA - see message from Gaia below, that the ambiguity will soon be removed by changing the option from "Active" to "Selected".
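A minimal sketch of the renaming step, assuming the Blender 2.7x Python API; "UVMap" is just an assumed common name:

    import bpy

    common_name = "UVMap"  # assumed target name for the map you want to keep

    for obj in bpy.context.selected_objects:
        if obj.type != 'MESH' or not obj.data.uv_layers:
            continue
        # Rename the active UV map of each object to the common name, so that
        # Join merges them into one map instead of coalescing extra maps at 0,0.
        obj.data.uv_layers.active.name = common_name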
  5. Did you join the seven objects after doing the UV mapping? If so, you need to make sure the UV maps all had the same name, I think. Here is a thread about it. Note that the SL uploader will only upload one UV map. So if your joined mesh has more than one UV map, you also need to make sure it's the right one.
  6. Increasing LI when you shrink a mesh can only happen if the physics shape is triangle-based (mesh specified, but "Analyze" button not clicked in the physics tab), you have selected physics shape type "Prim" inworld, and the physics weight is higher than the download weight so that it becomes the LI. As Rolig says, this is because the physics engine collision detection hates small/narrow triangles. What kind of physics shape is best depends on what your model is. Removing narrow triangles along edges will work to reduce the physics weight for walls etc. Otherwise you can use a hull-based shape (click "Analyze" - this works best if your physics mesh is made of non-overlapping simple convex pieces). Which kind of shape gives the lower physics weight depends on the details, including the size. Either way, you can reduce the physics weight by simplifying the physics mesh as far as possible while keeping just enough geometry to get the desired collision behaviour. Using the same mesh(es) as for the visible meshes only produces a good physics shape in rare cases.
  7. It's not completely clear what your question is. If you upload a mesh object that has up to eight materials, then those will be different "faces" of the object in SL and can be textured separately using the same methods as used for texturing separate faces of a cube etc. There are no special steps needed. If the material assignments are there in the dae file, they will be uploaded without any further effort. So you seem to be asking how to construct a mesh with multiple materials and export it to a dae file with the material assignments included. That depends on what 3D modelling software you are using. So I think you will have to tell us that before we can offer the help you are looking for.
  8. Nearly right. In the internal upload format there is a list of vertices in which each item has a geometric position (x,y,z), a normal (x,y,z) and UV coordinates (u,v). Several triangles usually share each geometric position, where they meet, but if the shading is flat and the triangles have different normals, that causes a new vertex item to be entered for each different normal. The same thing happens if the same geometric position appears in the UV map with different coordinates because the triangles are on either side of a UV seam. Finally, each material has completely separate triangle and vertex lists. So vertices with the same geometric position along a material seam are duplicated for each material. In Blender the vertex count is the number of different geometric positions, but in the uploader it is the number after all that splitting. There are several implications. Smooth shading, where all triangles at a vertex share a normal, is nearly always going to give fewer vertices. Smooth shading is a win-win because it can also make a smooth appearance with far fewer vertices than subdivision. Minimising the UV seams/islands can also reduce the vertex count. Once a vertex is split for normals or UV coordinates, it doesn't need to be split again for the other. So keeping UV seams along any available sharp edges can also reduce vertex count. So in conclusion, vertices are frequently, but not always, split, and by careful design you can minimise the splitting and thus minimise the download weight.
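A toy illustration of that counting, in plain Python with made-up numbers (not the uploader's actual code):

    # One cube corner shared by three faces.
    corner = (1.0, 1.0, 1.0)

    # Flat shading: three different face normals at the same position, plus a UV
    # seam putting the faces in different places on the map...
    flat_verts = [
        (corner, (1, 0, 0), (0.0, 0.0)),
        (corner, (0, 1, 0), (0.5, 0.0)),
        (corner, (0, 0, 1), (0.0, 0.5)),
    ]
    # ...so one Blender vertex becomes three uploaded vertices.
    print(len(set(flat_verts)))    # 3

    # Smooth shading, one UV island: one shared normal and one UV coordinate...
    smooth_verts = [(corner, (0.577, 0.577, 0.577), (0.25, 0.25))] * 3
    # ...so the same corner stays a single uploaded vertex.
    print(len(set(smooth_verts)))  # 1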
  9. The magic number where the uploader starts a new material is 21844 triangles. When all eight material slots get filled up that way, it goes on making more, but only eight will be included in the upload. So with more than 174752 triangles the remaining triangles are simply eliminated from the uploaded mesh.
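The arithmetic behind that second number, as a plain Python check (the 21844 figure is from the post; the 16-bit index limit is my assumption about the reason for it):

    tris_per_material = 21844        # presumably because 3 * 21844 = 65532 vertex
                                     # indices, just under the 16-bit limit of 65536
    print(8 * tris_per_material)     # 174752 - triangles beyond this are dropped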
  10. Sorry, but I don't know Hexagon. I was hoping that identifying the problem would prompt a Hexagon user to help with the specific methods.
  11. Uh! I can't believe they didn't fix that one in more than three years! Unfortunately, I am blocked from seeing the jira even though it's my report - a leftover from the closed jira era - so I can't see if anything was done. Maybe if someone with full jira privileges sees this they can enlighten us. (It was VWR-27992, I think.)
  12. Your jagged shapes come because you are subdividing (smoothing) polygons with more than four edges (n-gons). You can avoid that by making sure you have only quads (+ maybe a few triangles where you can't avoid them). In the meshes you show, you could achieve that by making new edges between appropriate pairs of vertices of the n-gons.
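A quick way to find those n-gons, as a minimal sketch assuming the Blender 2.7x Python API and the problem mesh as the active object:

    import bpy

    # In Edit Mode, select every face with more than four sides, so you can see
    # exactly where new edges are needed before subdividing/smoothing.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='DESELECT')
    bpy.ops.mesh.select_face_by_sides(number=4, type='GREATER')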
  13. The uploader only shows the physics weight of the default convex hull. That shape is always made, whether you make another shape or not. It is used when the physics shape type is set to Convex Hull, which it is when you first rez the mesh. Showing the Prim-type weight in the uploader has been requested in the jira for years. I don't know why they don't implement it; the data has to be temporarily uploaded anyway to get the other weights. Unfortunately this means you can't see the Prim-type weight until you upload and rez, and it can often be a shock. This is one of the good reasons for always testing your mesh on the beta grid before spending L$ uploading on the main grid. As far as the actual physics weight is concerned, there are a lot of threads discussing this subject. We would need to know some details about how you made the shape, what the physics upload parameters were (especially whether analyzed or not), how big the mesh is and what you need the physics shape to do, before being able to offer specific advice on optimising it. ETA: Ha ha - snap! Arton.
  14. Use Backface Culling, in the Shading section of the Properties panel, to make sure the normals are the right way round. Make sure you don't have an accidentally duplicated object or geometry on top of itself. Make sure all the faces are there in the active UV map. Wait until the bake finishes.
  15. Have you set the color/tint on the texture tab inworld to plain white?
  16. Can't really tell, but it looks like what happens if you have two copies of the mesh on top of each other.
  17. Your file - exported using SL preset static - uploaded with Second Life 3.7.24 (297623) Dec 19 2014 06:55:47 (Second Life Release). OK for me too. Does Firestorm use the same Havok library for decomposition? It used to use something different, but I thought they had licensed Havok now.
  18. I made an approximate copy of your mesh and didn't have any problem with it, using the release LL viewer. So I guess there's no fundamental problem. Are you perhaps using the project viewer that is supposed to have more detailed mesh upload error detection? That could have bugs in the new code. I ask because I don't recall seeing the error message you described. This was made entirely by extrusion, then mirroring twice (not modifier), not forgetting to flip normals each time. No UV. At this size, download weight was about 12, Analyzed (decomposed) physics weight about 30, triangle-based physics weight about 50. If you are going to use this, you should probably make a simplified physics mesh to get the weight down. Also, the auto-generated LOD meshes were HORRIBLE! Dare I ask what it is? :matte-motes-smile:
  19. I think you are saying that you have the same materials on (parts of) the model at all LODs, but that when you replaced a texture on one of the SL faces at the high LOD, you did not see the new texture on the SL face at the lower LOD, despite the fact that it had the same material assignment. I have certainly never seen that behaviour. I would have to agree with Arton that the most likely reason is that you didn't get the material assignments as you thought you had, so that the unchanged low LOD faces are actually a different material. You need to check that out (a quick script for listing the names is sketched below). The uploader now uses the material names to identify distinct materials. So it must be the exact same name, not, for example, "nice_material" and "nice_material.001". That's what you would get in Blender if you duplicated the material on the second LOD instead of just assigning it.
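A minimal sketch for that check, assuming the Blender 2.7x Python API:

    import bpy

    # Print the material names on each selected object, to catch accidental
    # duplicates like "nice_material.001" on the lower LOD meshes.
    for obj in bpy.context.selected_objects:
        if obj.type != 'MESH':
            continue
        for slot in obj.material_slots:
            name = slot.material.name if slot.material else "(empty slot)"
            print(obj.name, "->", name)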
  20. Aquila's method is more direct, but here's another example. A few notes - At 1, it actually seems not to matter whether the underneath layer has an alpha channel or not. I didn't see any alpha glitch with the test example either way, maybe because the layers are so close. - At 7, this caught me out! If you have the masked layer in Multiply mode, the AO shows up in Gimp, but not in the exported file (a bug?). You have to put the mode of the layer to Normal to export it! Here are these being used inworld. The top layer is a flat box prim, all set to the default transparent texture, except for the AO layer on the top. Another box as close as possible underneath has the plank texture. On the right, the latter is tiled 4x4. At the bottom is a closeup of that. (I know it isn't tiled seamlessly - that's because I only used a part of the plank texture!) Of course, with mesh the layers would most likely be two materials on the same mesh, but the principle is the same. There can be a problem accessing the underneath layer to put the right texture on it. Some thought in designing the mesh might be required (eg make it accessible at lowest LOD and use RenderVolumeLODFactor to see it), or you can always access the materials by face number in a script. ETA - you can weaken the strength of the shadowing by dialing up the transparency of the AO layer.
  21. Here's another attempt to explain, as this question does come up quite often... On the top left here is a small (256x256) piece of a 1024x1024 floor texture. Let's suppose the floor is using a 4x4 tiling of this texture, and looking at it from a certain place in SL I see one pixel of the texture per pixel of the screen. Then it looks exactly as it does at the top left.

Now I want to combine this floor texturing with an AO/lighting map. So I make a 4096x4096 texture by tiling it 4x4. I combine that with the AO map expanded to 4096x4096 with appropriate smoothing. Now my texture has the same number of pixels per plank as before. However, I have to shrink it to 1024x1024 to upload it to SL. (If I don't, the uploader will shrink it for me.) So I do that in my image editor. Now the same number of pixels looks like the top right image. If I zoom in 4x, so that I am looking at the same area of floor as before, I see the image at the lower left. Each of the pixels in the data is now 4x4 pixels on screen, and I can see their sharp edges. In SL, I now see 4x4 times fewer pixels of the texture when I look at the floor from the same place as before. I don't see the sharp edges because the rendering tries to smooth them out. This gives something like the image at the bottom right. It looks blurry because it is based on too little texture data. (The arithmetic is sketched below.) In fact, it is more complicated than this because the UV mapping means that the floor actually has less than the whole texture on it, and that contributes to the lower pixel density as well, making the situation even worse, but the principle is similar.

There is (presently) no way around this in SL as long as externally combined textures are used. That is frustrating because we would like to use a small tiled texture with a small untiled AO map superimposed. The AO map can be small because blurring it (a bit) doesn't matter. The two small textures would be much more efficient than one large combined one, as well as overcoming this resolution limitation. The only solution, as Arton pointed out, is to overlay the AO on an extra surface with different tiling. However, that means you have to use alpha blending, which is very gpu resource hungry, as well as extra geometry. It's not really very satisfactory.
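The numbers from the example above, as a plain Python check:

    tile_px = 1024                      # the original floor texture
    tiling = 4                          # tiled 4x4 across the floor
    baked_px = tile_px * tiling         # 4096 - the combined texture-plus-AO bake
    uploaded_px = 1024                  # SL's upload limit, so shrink 4x each way

    texels_on_floor_tiled = baked_px ** 2      # texel samples across the floor when tiled
    texels_on_floor_baked = uploaded_px ** 2   # texel samples after baking and shrinking
    print(texels_on_floor_tiled // texels_on_floor_baked)   # 16 - only 1/16th of the detail remains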
  22. Physics properties defined in Blender cannot be used by the SL uploader. Instead, the uploader generates a physics/collision shape from ordinary mesh(es). There is a Physics tab in the upload dialog that controls this process. The first thing to know is that every mesh gets a default physics shape which is used when the object is first rezzed, and whenever the Physics Shape Type is set to "Convex Hull" on the Features tab of the edit dialog. If you do nothing with the physics tab, then this default shape is made from the convex hull (a shrink-wrapped shape with no indentations) of the Low (3rd) LOD visible mesh. You can tell the uploader to use a specific physics mesh, either by choosing one of the visible LOD meshes or by specifying a different file on the physics tab. If you do that, the default physics shape will be the convex hull of the specified mesh. This will also generate another physics shape that can be used inworld if you switch the physics shape type to "Prim". Unlike the default shape, the Prim-type shape can have concavities. So that is what is required if you want to be able to go inside the mesh.

There are two kinds of Prim-type physics shapes. If you specify a mesh but do not click "Analyze", you will get a triangle-based physics shape, consisting of the triangles of the specified mesh. The strange thing about triangle-based shapes is that their physics weight gets larger as the triangles get smaller. If the physics weight is higher than the download weight (of the visible mesh), it becomes the LI. So small triangles (including long narrow triangles) are the enemy of low LI. For walls, you have to delete the thin edges. There are also some strange effects with some triangle-based shapes that make them easier to penetrate by accident, and you can get stuck between the two layers of walls that have two separate triangle-based sides. Finally, if any dimension of a triangle-based shape goes below 0.5m it will be switched to type Convex Hull, so that you can't go through holes any more. That often catches people out with doors they can't get through. So although triangle-based shapes can produce the lowest physics weights for many buildings, there are a few pitfalls to deal with.

If you do click "Analyze", then the uploader will generate a physics shape made of a collection of convex hulls. There are several parameters that control how the decomposition is done, but the best advice, especially for buildings, is to start with a mesh that consists only of non-overlapping simple shapes that are already convex hulls, with as few curve segments as possible. So each wall should be a single stretched cube, three if it has a door and four if it has a window, etc. You can see in the preview whether the shape still has openings where you need them, by using the exploded view slider. The weight will depend on the number of hulls and vertices, which are indicated after you do the "Analyze".

If your model has multiple separate mesh objects, then you need to have a separate physics mesh object for each. Also, they need to be in the corresponding order in the collada files so that the uploader knows to use the right physics mesh with each visible object. In Blender, you can do that by naming them in the same alphabetical order and then using the "Sort by Object Name" option on export. For each mesh object, the physics mesh will be stretched and/or squeezed to fit the bounding box of the corresponding visible mesh object. So it's easiest if you make sure they both fit the same bounding box before uploading. The bounding box is the smallest box, aligned with all three (x, y, z) axes, that will fit round the mesh geometry (a quick script for checking the bounding boxes is sketched below). So my advice here is to first make a physics shape mesh from non-overlapping cubes that matches your visible mesh, and use that on the physics tab with "Analyze". Then you can experiment with parameters and triangles.
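A minimal sketch of that bounding-box check, assuming the Blender 2.7x Python API and a "_phys" naming convention of my own invention:

    import bpy

    for obj in bpy.data.objects:
        if obj.type != 'MESH' or obj.name.endswith("_phys"):
            continue
        phys = bpy.data.objects.get(obj.name + "_phys")   # assumed naming scheme
        if phys is None:
            print("no physics mesh found for", obj.name)
            continue
        # obj.dimensions is the size of the object's bounding box along each axis.
        diff = [abs(a - b) for a, b in zip(obj.dimensions, phys.dimensions)]
        if max(diff) > 0.001:
            print("bounding boxes differ for", obj.name, ":",
                  tuple(obj.dimensions), "vs", tuple(phys.dimensions))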
  23. You will need to make a proper physics shape. I had a look at your bridge, as you had sensibly left it on the beta grid. Here is a picture of it with the overlaid physics shape display (Develop->Render Metadata->Physics Shapes). It looks like you have used the default LOD and physics meshes. What happens then is that the uploader makes the default physics shape (the one you get with physics shape type set to Convex Hull) from the convex hull of the Low LOD mesh. Here is your Low LOD (seen by setting a very low RenderVolumeLODFactor), with the relevant mesh highlighted. You can see how shrink wrapping that gives you the blue physics shape above.

What you need is a specifically made physics mesh. For the bridge, I would stick to shapes made of non-overlapping simple stretched cubes, something like this for that piece of the mesh. Upload these in the physics tab and click the "Analyze" button. Then inworld, set the physics shape type to "Prim" on the Features tab of the edit dialog. You need to make a separate physics mesh object for each of the separate objects in your visible mesh file, and they have to be in the same order in the file as their corresponding visible meshes. In Blender, you can ensure this by naming them in the same alphabetical order and checking the Sort by Object Name box (automatic if you choose the SL presets).

Another thing about your bridge - it might be better with fewer separate meshes. For example, the curved arches are separate from the main bridge. This means they have to have their own separate physics mesh object. If they are combined, then the combined physics mesh can be much simpler. Combining affects the LOD switch distance if it affects the size of the bounding box, in this case just that of the curved arches. That would have a large effect on LI if it were done for the whole bridge, but not for combining just the arches. For this kind of mesh, you can also improve the LOD behaviour by making your own LOD meshes, but that is more work, and you have to decide for yourself whether you want to do that.
  24. The easiest way to meet the requirement would be to add a single triangle and put the missing material on it (sketched below). You might find it more satisfactory to do the decimation yourself than to use the modifier. Delete>Dissolve Edges (with the Dissolve Vertices box checked) is your friend there, because it keeps the UV map intact. That way you can avoid deleting UV island seams, which can mess things up, and make sure you don't remove the last poly of any material. It all depends what the mesh is like. The Decimate modifier has been improved since I last used it much, so it probably works well for some meshes.
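A minimal sketch of adding such a triangle, assuming the Blender 2.7x Python API; the material name and coordinates are just placeholders:

    import bpy

    # Build a tiny single-triangle mesh to carry the otherwise missing material.
    mesh = bpy.data.meshes.new("filler_tri")
    mesh.from_pydata([(0, 0, 0), (0.01, 0, 0), (0, 0.01, 0)], [], [(0, 1, 2)])
    mesh.update()

    # Reuse the material if it exists, otherwise create a placeholder.
    mat = bpy.data.materials.get("nice_material") or bpy.data.materials.new("nice_material")
    mesh.materials.append(mat)               # the triangle's one face uses slot 0

    obj = bpy.data.objects.new("filler_tri", mesh)
    bpy.context.scene.objects.link(obj)      # 2.7x API; 2.8+ links via a collection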