
Drongle McMahon

Advisor
  • Posts

    3,539
Everything posted by Drongle McMahon

  1. I'm sure I am not following this entirely, but I am guessing (some of) the problem is that you may have missed one important point that also took me a while to realise: the target image for baking is the one referenced in the selected Image Texture node in the node setup for each material. There can be other, unselected, Image Texture nodes that refer to other, source, images. So you have to make sure (a) that each material's node setup has an Image Texture node selected, and (b) that that node references the intended output image.
Then, to bake multiple materials to one image, use a UV map where the materials' islands don't overlap*; select the target image in each of the materials' output Image Texture nodes, make sure those nodes are selected, and bake. It is very easy to forget, but vitally important to remember, to reselect the output Image Texture node when you have been adjusting other nodes. If you do this, I don't think there should be a problem.
*I think the currently selected UV map is used by default. If you have different UV maps for applying and baking, you can add UV Map input nodes, select the appropriate maps, and connect them to the vector inputs of the source and/or target Image Texture nodes, to make sure the right map gets used for each purpose.
Here's an example. The planter has three materials: base, wood, and compost. There are two UV maps: UVsources, with overlapping islands for applying the textures, and UVtarget for baking, where the materials don't overlap. UVsources is shown in the blue image surrounded with green. It is applied to the vector input of all the input Image Texture nodes. UVtarget is applied to the vector input of the output Image Texture node of all three materials. The image selected in all three of these is the same, as indicated by the yellow lines. The images selected in the input Image Texture nodes are different, and are shown by the green connecting lines.
The base texturing is a straightforward application of the input diffuse texture. The compost is complicated only by the addition of a normal map. The wood is more complicated because the input texture has to be repeated and rotated by the Mapping node. It also uses the diffuse texture as a displacement map, and uses a mixture of diffuse and glossy shading. Notice that the output Image Texture nodes are selected (orange outline) for each material. In this state, the "combined" bake produces the image shown (given suitable lighting). The rendered image is shown in the second picture (i.e. I forgot to include it!).
  2. If you upload a mesh without a UV map, the result is unpredictable, because the uploader appears to use uninitialised data instead. Sometimes it's all zeros, so the surface is all textured with one pixel; sometimes it's random, which gives the sort of effect I think you are describing. The latter also usually means the map is completely fragmented, which gives the highest possible download weight - each vertex gets repeated for every triangle (usually six) that shares it. So the LI you are getting may be much higher than you would get in a UV-mapped version.
The minimum LI for any mesh object is 0.5 (server weight). So if you have separate objects for cap and jar, you will always end up with at least 1 LI. Instead, you can make them two materials on the same mesh. Then you can aim at 0.5 for the combination.
You seem to have four segments of bevel on your jar top. It should be possible with fewer. Try experimenting with bevels with two segments and varying the Profile (I am assuming Blender here!). A profile of 1.0 will keep the bevelled edges in the plane of the adjacent faces (as if you had used loop cut & slide instead). This has the effect of leaving the adjacent triangles shaded as they were before bevelling, but with rounded shading along the edge. Profile 0.5 is the default, which leaks the rounding effect into the adjacent faces. In all cases, you need to be using smooth shading. Use Matcap shading to see the effects in Blender.
Instead of bevelling, you can use an all-smooth-shaded object and a normal map to "repair" the horrible shading that gives you. In the picture, the left two jars both have download weights of less than 0.3 (size 0.1 x 0.1 x 0.1), because they share the same horribly decimated lowest LOD (LODs top=high to bottom=lowest). The left mesh uses bevels to get the right shading. That's a lot of extra geometry, which means it has to use medium and low LOD meshes to get that low weight.
The middle jar is the same geometry as that on the right: no bevels and smooth shading. On the right it has no normal map. In the middle it has a normal map baked from the high-LOD mesh on the left. The normal map is only 128x128, so the texture load is small. As it is much less geometry, this jar can use the same mesh for the three higher LODs. It isn't quite as nice at the highest LOD as the jar on the left, but it's better at the intermediate LODs. However, the normal map will only work for viewers using advanced lighting. Without the normal map, it will look like the jar on the right. So the choice may depend on who is going to look at it.
  3. The first thing you need to do is to look at the download and physics weights separately. You don't tell us how you specified the physics, or if it was default, but the physics weight can be responsible for high LI. The uploader tells you the default "convex hull" physics weight. If you are using "prim" type physics, you need to look inworld using the "More Info" link on the edit window to see these weights.
Next, what about size? The download weight part of LI is very dependent on size. Are you looking at the LI at the final intended size?
Then, for small items the download weight is dominated by the lowest LOD version of your object. How are you making the lower LOD meshes? The default automatic LODs may be inappropriate.
Lastly, the thing that correlates most closely with the download weight is not the number of triangles (12 bytes of data each), but the count of vertices you see in the uploader (16 bytes each). This is not generally the same as the vertex count in your modelling software, because vertices get duplicated if the same geometric point appears in adjacent triangles with different normals and/or different UV coordinates. So flat shading (sharp edges) and fragmented UV maps can lead to very large increases in vertex count, download weight and LI.
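A back-of-the-envelope sketch of that vertex duplication (the 12-byte and 16-byte figures are from the post above; the splitting model is a simplification, so treat the numbers as illustrative only):

```python
def uploader_vertex_count(splits_per_vertex):
    """Each geometric vertex appears once per distinct normal/UV
    combination it is used with, so the uploader's count is the sum
    of those splits."""
    return sum(splits_per_vertex)

# A cube has 8 geometric vertices and 12 triangles.
smooth_cube = uploader_vertex_count([1] * 8)  # one shared normal per vertex
flat_cube = uploader_vertex_count([3] * 8)    # 3 face normals meet at each corner

triangle_bytes = 12 * 12                # 12 bytes per triangle
smooth_vertex_bytes = 16 * smooth_cube  # 16 bytes per uploader vertex
flat_vertex_bytes = 16 * flat_cube

print(smooth_cube, flat_cube)                  # 8 24
print(smooth_vertex_bytes, flat_vertex_bytes)  # 128 384
```

So the same cube carries three times the vertex data when flat-shaded, which is the kind of inflation the post describes.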
  4. I was sure there must be a way of doing this without curves. Here is one ... 1. Start as before, but do two turns of the screw. 2. With everything selected, do Inset Faces (shortcut I, then drag). 3. Alt-right-click to select the middle faces. 4. Delete faces. 5. Alt-right-click to select the outer spiral faces and delete them. 6. Select the two distorted faces and delete them too. 7. Select everything. 8. Extrude (shortcut E, then drag). 9. Finish off the rounded end manually. Set smooth shading. Add Bevel with profile=1, segments=2.
  5. Here's a way to make wrought iron spirals. 1. Start with a single edge of a plane, just two verts and an edge, centered at 0,0 and with the cursor snapped to it. Then apply the screw tool with the parameters shown. 2. Now select one radial edge, use select-similar-length to select them all, and delete these edges, leaving just the spiral. 3. Convert it to a curve. 4. Now if you select one point and do Ctrl+L, you see it's actually two curves, one from each end of the original edge*. 5. So select one of the overlapping points and delete it, leaving a gap. 6. Then select the two free ends and connect them with the F shortcut. (See the arrows now all point the same way). 7. Now that you have one curve, you can put a bevel shape curve (red, just a circle with resolution=1, rotated 45deg and squeezed), and a taper curve (orange, just to taper one end). Convert the result to mesh and carry on as usual. Some excess edge loop removal will be necessary to get reasonable LI. *ETA: If you do Mesh-Vertices-Remove Doubles before converting to curve, then you get only one curve and don't need steps 4-6!
  6. Thank you. A couple of things I learned, that might be useful for anyone wanting to do similar stuff...
1. When you do the curve twisting with rotate, set the "Z-Up" twist method in the Curve "Shape" properties section. Otherwise you get unevenly twisted parts. (ETA: I was twisting around the X axis. The effect may be axis-dependent.)
2. Don't use sharp edges. Use a narrow Bevel with Segments set to 2, Profile set to 1.0, and Limit Method set to angle 60deg, so that the shader produces the highlights along the edges. Without these it just doesn't look right. It's 3x the number of triangles, but only 1.5x the vertices, compared with sharp edges. (I put the bevels in with loop cut & slide, which, in retrospect, was a waste of time. I had forgotten you can use those bevel settings to keep the adjacent faces completely flat.) The limit angle stops it bevelling between segments. The UV map survives the bevel with these parameters.
3. I use a nearly black colour with a blank texture, a normal map I made for sand (with no alpha) and a blank specular map, glossiness 25, environment 12.
  7. I discovered this blacksmithing is hard work. This started as a curve with bevel and taper, duplicated four times with 90 degree rotations, and then proportional edit rotation on each end of the combined curve. Then I changed it into a mesh. Still a lot of work needed, including a UV map so I could use a normal map. Just about worth the effort, although I'm not sure I will do any more. This is inworld with 3pm daylight and a single additional light source.
  8. Yes, those are good too. :matte-motes-smile: Now I'm looking for a way to thicken in one direction at a time, or to have two taper objects, one for each direction. Is there any way to do that? I suppose the bevel extrude, but that seems only to work on the whole thing if you have no bevel object? Has anyone done a wrought-iron addon?
  9. Fixed. It works :matte-motes-smile: before on left, after on right.
  10. If you learn to use Bevel and Taper objects instead of simple bevel, you can be a much more accomplished blacksmith...
  11. Sadly, for me that nightly build of Blender (64bit) is crashing as soon as I select Custom Normals for data transfer, either using the tool or the modifier. So I can't try anything out. :matte-motes-crying: Is there a proper place to report that for these builds?
  12. "One reason for "missing required level of detail" is due to using ngons (polygons with 5 or more faces) in your mesh." That's interesting, especially as correcting it did solve the problem. But what circumstances make it a problem? I just tried uploading a dodecahedron with all twelve faces as pentagons (triangulate off in exporter - checked collada and they were all 5-sided in the <polylist>). There was no problem. Same for a cylinder with 32gon caps. The uploader triangulated them as it is supposed to. So there must be a some other factor that makes this a problem. Do you know what that might be? Is it only for rigged mesh?
  13. triangle count is: 9822 OK. It's not the 21844 problem then, which is what I was thinking of. Next thing is to check all the normals.
  14. Please tell us whether this happens if you rez on the ground instead of wearing it, and if it does, how many triangles it has. You can see the triangle count in the uploader, or you can turn on Show Info->Show Render Info in the Develop menu - then if you select something the third, indented line from the bottom of the overlay text shows the triangle and vertex counts of the selected object. Finally, your 3D software should give you a triangle count, which might be different.
  15. "Maybe you can comment on this and explain..." TLDR: It gives uploaded objects the right names. Oh dear .... how long have you got? :matte-motes-smile: The Project-Importer viewer is the branch of development where there are supposedly going to be several improvements to the mesh uploader code. One of these is to change how objects in multi-object LOD files are associated so that the right objects get associated at each LOD (and physics). In the release viewer, this is done by the order the objects appear in the file. This has often led to problems we have had to deal with in this forum, because it was difficult to control the order of objects in the file (specifically, it was the order of <geometry> sections that mattered). Gaia solved this for Blender users by adding the sort-by-object-name export option, so that we could control them by a flexible naming convention. The new scheme in the Project-Importer viewer is in development, and it can't be certain how much of it will get through to release. At the moment it associates files at different LODs (and physics) using a new and inflexible naming convention (not so far documented except by reading the code). This specifies precise names for objects in the lower LODs. They have to be the high LOD name with postfixes "_LOD2", "_LOD1" and "LOD0" (and _PHYS"). The information available suggests that if you don't use this scheme, the uploader will "fall back" to use the old index-based scheme, but so far that isn't working (jiras done). There are all sorts of problems with this, but hopefully they can be worked out. Now, to get the names, there are several places the uploader can look. For the names eventually attached to the objects, it is currently using the <instance_geometry> tag of the <visual_scene> section of the collada file. This is the part that says "this scene contains an instance of this geometry", where the geometry is referenced by its ID attribute. 
The <instance_geometry> is allowed to have an optional "name" attribute and an optional "ID" attribute (the latter has to be unique in the file), and that is where the uploader looks first for a name. If the uploader doesn't find one of these, then it will look in the parent section instead, which is a <node>. However, because a <node> may have multiple <instance geometry>s*, it needs to make a unique name for each. It does that by inserting a sibling index at the end of the <node> name. So if there is no name in <instance_geometry>, then the first object will be called "nodename_1", then "nodename_2" etc. For the lower LODs, it actually (tries to) insert the index before the LOD postfix, making names like "nodename_1_LOD2" etc. Whether it has to add the sibling index or gets the name directly from <instance_geometry>, that name is used as the object name inworld, the name you see in your inventory when you upload the object(s). The Blender exporter uses the Blender object name to make both the name and ID attributes of the <node> by which each object is made part of the <visual_scene>, but it didn't put any name or ID attribute in the <instance_geometry> inside the <node>. So the uploader was adding the sibling index. Obviously we would prefer to have the object appear inworld with the name we gave it in Blender, without that index, and that should now be accomplished by putting the object name in the name attribute of the <instance_geometry>, as Gaia has now kindly done. However nice, that convenience alone would not have been enough for me to request this change. In fact, there was something wrong with the code that added the sibling index, putting it at the end in the LOD names, when it was supposed to be in the middle. The result was that the rest of the code could never find the expected names for the LOD meshes, and it was impossible to upload any LOD files at all, even if they used the correct naming scheme. 
Putting the name in the <instance_geometry> was a way around that, as it would avoid the sibling indices altogether. That was why I requested it - to make it possible to upload LOD meshes in the Project-Importer viewer. As it turns out, they have fixed the index problem in the meantime, and it is now possible to upload LOD files as long as they adhere to the very specific naming convention. The fallback to the old indexing system is still not working. So the change in the Blender exporter was no longer absolutely necessary for uploading, but it is still very welcome because it should give the the intended name for the uploaded object, without an unnecessary "_1" attached. The other big change in the Project-Importer viewer is that it allows the upload of models, single or multiple object, with more than eight materials per object. It does this by splitting the object into multiple objects. Unfortunately there are some rather unpleasant complications when this gets combined with the old >21844 triangle problem. I would like to hope that they will take this opportunity to make the 21844 triangles per material an official and enforced limit, so that all the old and new problems could be dismissed at one go. After all, if you have as many materials as you like, you can just use multiple identical ones of you really do need to have them with so may triangles. However, experience tells me not to be optimistic. *I don't think the Blender exporter does this but other exporters may. PS For anyone whom found this too simplified, try jiras BUG-83734 and BUG-8996 for the naming stuff and BUG-9015 for the >21844 triangle stuff.
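To summarise the naming convention described above, here is a small helper (my own sketch of the convention as described, not viewer code) that generates the expected object names from a high-LOD name:

```python
def project_importer_names(high_lod_name):
    """Object names the Project-Importer viewer reportedly expects:
    the high-LOD name plus fixed postfixes for the lower LODs and
    the physics mesh."""
    return {
        "high": high_lod_name,
        "medium": high_lod_name + "_LOD2",
        "low": high_lod_name + "_LOD1",
        "lowest": high_lod_name + "_LOD0",
        "physics": high_lod_name + "_PHYS",
    }

print(project_importer_names("Planter"))
```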
  16. "making the faces of the triangles invisible in SL" Just to be annoyingly pedantic - :matte-motes-evil: - In fact the SL uploader does notice the collapsed triangle(s) used in this sort of technique, and it discards them from the uploaded mesh*, However, the technique works because it doesn't do that before using the positions of the verttices to update the bounding box. So you are left with the bounding box you want, but not the triangles. The best of both worlds.They are not just invisible, they are non-existent! *You can see this is the case if you turn on Show Info->Render Info on the debug menu. Then you can see the selection triangle count near the bottom of the text that gets overlaid on the screen. It doesn't include the degenerate triangle(s).
  17. Probably not going to be much help, but... The uploader tries to work out the bounding box from the most extreme coordinates it finds as it loads in the triangles (or polys) from the collada file. However, at least for now, it does this by starting with the first position in the <geometry> section to set the minimum and maximum for each dimension. Then, as it reads in the triangles, it adjusts them when it finds a position smaller than the minimum or larger than the maximum. That works OK as long as the first position in the <geometry> is referenced in a triangle. If it isn't, and it is outside the geometry, then the bounding box will be left including it, and consequently will extend beyond the geometry. I don't have Maya, so I can't check, but that is one possibility: that your operations are leaving an unreferenced vertex that gets included first in the collada file. You can look for this if you know how to peruse a collada file.
Geometry can also be "lost" if you have too many triangles. I haven't tested this except with the Project Import development viewer, but it is possible that could have the same sort of effect. It takes a lot of triangles though. If you have only one material, it would have to be more than 174752. You certainly shouldn't have that many in a small thing like that.
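A minimal sketch of that seeding behaviour (my reconstruction from the description above, not actual viewer code): because the box is seeded from the first listed position, an unreferenced stray vertex listed first inflates it.

```python
def bounding_box(positions, triangles):
    """Seed min/max from the FIRST listed position, then widen using
    only positions referenced by triangles (mimicking the described
    uploader behaviour)."""
    mins = list(positions[0])
    maxs = list(positions[0])
    for tri in triangles:
        for idx in tri:
            for axis, value in enumerate(positions[idx]):
                mins[axis] = min(mins[axis], value)
                maxs[axis] = max(maxs[axis], value)
    return mins, maxs

# A unit triangle, plus a stray unreferenced vertex listed first.
positions = [(10.0, 10.0, 10.0),  # unreferenced stray vertex
             (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
triangles = [(1, 2, 3)]
print(bounding_box(positions, triangles))
# ([0.0, 0.0, 0.0], [10.0, 10.0, 10.0])  <- box inflated by the stray vertex
```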
  18. Hmm. Things are never as easy as they should be. :matte-motes-dead:
  19. Brilliant. :smileyvery-happy: Are we getting the custom normal export too? Or will that take longer? What about object names in the <instance_geometry> name attribute?
  20. Also, Sansara is the name of the first continent in SL.
  21. "no difference between a)rotating by 180 deg and b)changing the sign of the y-offset" I think there should be a difference, but you have to look at the right lighting conditions. I find using a prim light source and moving it around is best, with a bit of shininess - blank specular map with default spinners is ok.Move the light back and forth along either axis. If you have just the y flipped (a common problem where different software uses different swizzles), you get it behaving like a bump in one direction but like a cavity in the other. That can be a bit disturbing, as it is physically inconsistent. Your brain tries to suppress it. Same thing with x axis flipped on its own. Rotating 180 is the same as flipping both axes, I think, which completes the job, making it all bump or all cavity.
  22. Well, I thought I had better try this stuff out. Turns out I got one thing wrong for the 90 degrees. After you have put the red channel map into the green channel, you have to invert it. Otherwise, you get something flipped and inside-out (inside-out can be corrected inworld) as well as rotated. I did the maths that showed me the error. In the end I did it by: decompose; invert the red layer; compose, putting the red layer into the green channel and the green layer into the red channel.
The 45 degree rotation is MUCH harder. The combinations of the two channels are complicated, and you have to make sure you don't get intermediate pixel values outside the range 0:255. Otherwise they get truncated. You also have to keep the whole thing scaled down, because the rotated thing has to fit in the bounding box. I tried but didn't quite succeed; you can see there is a truncated bit. Still, that's enough to convince me it can be done. The pic shows the original on the left, the 90 degree rotated one in the middle and the 45 degree one on the right. All unrotated inworld. The 45 degree one is stretched to the size of the others because of the shrinking done for the maths. Someone more numerate than me could probably write a script for generalised rotations using matrices.
My attempt was: decompose; make three copies each of the red and green layers; scale one of each to 127/255 with the colour level editor and merge them with Addition mode; scale another one of each to 22/255 and then subtract each of those from the previously added layer; that's the new red channel. Now scale the last two copies by 149/255; make a new white layer, scale it to 106/255 and add it to the scaled green layer; then subtract the scaled red layer from the result; that's the new green channel. Now just compose using these new channels and the old blue channel.
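The 90-degree recipe above works per pixel like this sketch (my paraphrase of the GIMP steps: new red = old green, new green = inverted old red; the image raster itself still has to be rotated 90 degrees separately, and rotating the other way would mean inverting the green copy instead):

```python
def rotate_normal_vectors_90(pixels):
    """Per-pixel channel step of the 90-degree normal-map rotation:
    old green -> new red, inverted old red -> new green, blue
    unchanged. Around the byte centre 128 this maps (x, y) to
    approximately (y, -x)."""
    return [(g, 255 - r, b) for (r, g, b) in pixels]

# A normal tilted towards +x becomes one tilted towards -y.
print(rotate_normal_vectors_90([(200, 128, 255)]))  # [(128, 55, 255)]
```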
  23. Lots of people use Blender, but very few still use it for sculpties. I have already forgotten how to make them. There is the Primstar addon (google it), but that's not free any more.
If the change you need is a (combination of) 90 degree rotation(s), you can achieve this without Blender by exchanging the channels. Red=x, green=y, blue=z. So you can just use Gimp (it's free) Color>Components>Channel Mixer to swap the channel data, and thus switch the axes. For example, to exchange red and green, set the red channel inputs to 0,100,0 and green to 100,0,0. To flip an axis, set it to -100. Or you can use Color>Components>Decompose. Then you can manipulate each channel to your heart's content, then use Color>Components>Compose to put them back in another order.
If you need rotations other than 90 degrees, you are going to have to get into some complex (pun intended) mathematics. I suspect you could achieve that with the decomposed image and a lot of manipulation, but I'm not going to try it myself just now. I can't remember where my sculptmaps are, or what they are supposed to be.
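Those channel operations amount to simple per-pixel swaps. A sketch, assuming 8-bit (r, g, b) sculpt-map pixels with red=x, green=y, blue=z, and implementing the axis flip as the mirrored byte value 255 - v (the intended result of the -100 mixer setting):

```python
def swap_xy(pixels):
    """Exchange red and green (the 0,100,0 / 100,0,0 mixer settings):
    swaps the sculpt's x and y axes."""
    return [(g, r, b) for (r, g, b) in pixels]

def flip_x(pixels):
    """Mirror the sculpt along x by inverting the red channel."""
    return [(255 - r, g, b) for (r, g, b) in pixels]

print(swap_xy([(10, 200, 90)]))  # [(200, 10, 90)]
print(flip_x([(10, 200, 90)]))   # [(245, 200, 90)]
```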
  24. Out of interest, I thought I would see whether I could make a normal map "deeper" in Gimp. The problem with using the usual colour manipulations is that the colours you are left with are not normalised. As Arton said, the rgb value of each pixel is a vector controlling the angle between the geometric normal and the normal you want. They are encoded as signed bytes, so that byte value 0 is -128, byte value 128 is 0 and byte value 255 is 127. Since you don't want to have normals pointing into the surface, blue is always >= 128, while red and green can vary over the whole range. However, the values are supposed to be normalised. That is to say, the vectors should always have the same length, 127. To satisfy that, you need sqrt(r²+g²+b²) = 127 for the decoded signed values. Although rendering software will generally do its best to deal with un-normalised values, the normalmap plugin in Gimp has an option to just normalise the pixels in a normal map.
I started with a normal map baked in Blender (top left) and used curves to stretch the red and green channels as shown at the top right. Then I applied the normalise function from normalmap. That gave the map at the bottom left. To its right is a map baked in Blender from the same geometry after stretching it 2x along the baking ray direction (i.e. z for the horizontal-plane baking target). You can see it's fairly similar to the one stretched in Gimp. At the bottom are the three maps applied to a prim cube inworld. It seems to have worked reasonably well. I would still prefer to do the bake with the scaled geometry, but where you don't have the geometry (e.g. in normal maps made from images), it should work OK. It's important to keep the curves symmetrical so that you don't tilt the whole face one way or the other.
By the way, these maps are the right way round. They should look as if there is a red light on the right and a green light at the top. Otherwise, the "swizzle" is wrong and you may get conflicting highlighting in one direction or the other. If both are wrong, then it looks as if the whole thing is the wrong way round: bumps turn into hollows and vice-versa. What matters here is the orientation of the map with respect to the U and V axes of your UV map. So that is why rotating the map before applying it has the same effect. By default, the normalmap plugin for Gimp uses a different "swizzle" from the Blender baker. You have to check the Invert Y checkbox to get the right output for SL. Alternatively, if you do have a map with the wrong swizzle, you can just flip the vertical repeat inworld (by setting it to a negative number). You would have to flip the diffuse texture too if it's matched with the normal map.
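The per-pixel normalisation can be sketched like this, assuming 8-bit channels decoded around centre 128 (a simplification of what the plugin's normalise option does):

```python
import math

def normalise_pixel(r, g, b):
    """Rescale a normal-map pixel so its decoded vector has length
    127, keeping byte 128 as the zero point."""
    x, y, z = r - 128, g - 128, b - 128
    length = math.sqrt(x * x + y * y + z * z)
    if length == 0:
        return (128, 128, 255)  # degenerate pixel: point straight out
    scale = 127.0 / length
    return (128 + int(round(x * scale)),
            128 + int(round(y * scale)),
            128 + int(round(z * scale)))

print(normalise_pixel(128, 128, 200))  # (128, 128, 255): too short, stretched out
print(normalise_pixel(128, 128, 255))  # (128, 128, 255): already normalised
```

A pixel left too short (or too long) by channel curves gets rescaled back onto the length-127 sphere, which is exactly what stops stretched maps from dimming or over-brightening.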