Everything posted by Drongle McMahon

  1. Looks to me like a UV mapping issue. The texture on the "top" is too stretched out. Increasing repeats, in one direction or both, might help. The sculpty UV map is fixed, which does place severe constraints on what you can do with texturing. With mesh, however, you can do anything you like. So proper UV mapping for good texturing of mesh is entirely dependent on the skill and effort of the creator.
  2. http://wiki.secondlife.com/wiki/Linden_Lab_Official:Intellectual_Property
  3. Argghh! I couldn't get it to work - then I realised that the 64-bit version I installed is a day behind the 32-bit version :matte-motes-confused:
  4. Oh dear. It seems to be getting more and more frequent. Soon it will be impossible to stay logged in long enough to post a reply. Then we will have perfect peace, I suppose. And it's spread to the jira, which had mercifully escaped up till now, at least for me.
  5. I think lots of people will be as pleased with that as I will be.
  6. "Drongle, I learnt a new word from a french guy yesterday on the beta. Merlon." A good one, but where would the merlons be without their crenels?
  7. Thanks, especially for introducing me to the word "greeeble". :matte-motes-grin:
  8. It should be as easy as "select the 2 objects, press the weld button, done" :matte-motes-grin: Is that just in Avastar or Blender? If it's Blender, can we have an intra-object version too? "select two edge loops > average edge normals" would give more explicit control. The object could be separated after that, if that is the intention. Maybe it could use existing code from Bridge Edge Loops to work out vertex correspondences and reject invalid cases. PS. While you are here, can we have the object name put into the <instance_geometry> as a name attribute to cope with innovations in Project-Importer (see your private messages:-)
  9. Ah. The problem with that is that he selects all the edges and makes them into seams. That makes each face a separate UV 'island', so the texture doesn't spread smoothly from one face to the next. It's usually better to use the minimum number of seams, though using too few will leave you with distorted textures.
     In the picture you can see, on the left, a sword with seams highlighted. Doing the simple unwrap with these seams gives the islands shown in the upper UV map. Now, on the right, the texture spreads nicely, with discontinuities in acceptable places. Note that here the three different parts have non-overlapping UV maps. That means a single image can be used to put different textures on each part. Then just that one texture can be applied in SL (or uploaded with the mesh).
     Also on the left, the three different colours show three different materials. In this case, the UV map of each part can fill up the entire UV area. Each material in Blender becomes a different face in SL, to which you can apply a different texture. That means there are more pixels per area on the object in SL, which sometimes works better. It also means you can use general-purpose textures, say steel, leather and brass, without having to make a custom texture for each sword. The overlapping of the UV maps doesn't matter because the faces are textured independently. In fact it's an advantage, because you get more pixels per unit area from the same texture.
     Watch the tutorials Aquila linked to. They are excellent.
     ETA - the unwrapping was the default you get by pressing the U key twice. Just make sure everything is selected, or all of one material if you have materials, before doing that. If the map is horribly distorted, it's most likely that you need more seams somewhere. The worst is where you have a cylinder without an edge seam. I edited these a bit for neatness' sake, but avoided changing them too much. Keep looking at the test texture to see where there is distortion. I usually spend a lot more time getting the UV mapping right than making the mesh!
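     For anyone who prefers to script it, here is a minimal bpy sketch of that seam-and-unwrap step, assuming the mesh is in Edit Mode with the intended seam edges already selected (operator names are from the 2.7x Python API):

        # Minimal bpy sketch (2.7x API assumed): run in Edit Mode with the
        # intended seam edges selected. Marks them as seams, selects everything,
        # then does the same angle-based unwrap you get from U > Unwrap.
        import bpy

        bpy.ops.mesh.mark_seam(clear=False)                    # selected edges become seams
        bpy.ops.mesh.select_all(action='SELECT')               # unwrap needs all faces selected
        bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)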
  10. Those both look like Blender ... can see the grid .... from opposite sides. One thing is clear: your UV mapping is completely fragmented, so the texture doesn't flow across edges. I'm not sure how that might have happened. Maybe you marked all edges as seams before unwrapping? Or was this deliberate? I need to see it inworld to see whether it's the same or a different UV map.
     You need to distinguish between images and UV maps. This is something many people have difficulty with at first. A UV map is just a list of numbers. They tell the renderer which positions in the texture image will be made to lie at each vertex when it is stretched over the mesh. When you look at the lines and vertices in the UV editor, that is just a way of visualising and editing those positions. It doesn't have any real existence as an image. When you put an image underneath it, Blender will do the stretching of the texture over the selected faces, according to the map, and you see it on the mesh (if you have texture mode in the 3D view). The image is not a UV map. It doesn't contain the information about what goes where. Using the same image with a different UV map will give a completely different result. You can get this idea fixed by dragging some of the vertices around while looking at the results on the mesh.
     I'm not sure what you mean by "parts" here. Is it even more than three objects (which could explain the fragmented UV mapping)? Are they separate objects? For a model like this, I would make it all in one object. Then you have two choices with the texturing. If you leave it as just one material, that means you can only apply one texture to it in SL. So then the UV maps of the different parts must not overlap, and the texture must have the parts all laid out in the same image. If you make three separate materials, then you can apply a different texture to each. In that case, the UV maps that are going to be used for different textures can overlap. I'll make a couple of pictures to illustrate (or if you are lucky, Aquila will beat me to it!)
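     If it helps to see that a UV map really is just a list of numbers, this rough bpy sketch (run in Object Mode, mesh assumed to be the active object) prints the (u, v) pair stored for each corner of the first two faces:

        # Rough bpy sketch: print the stored (u, v) numbers for the corners of
        # the first two faces of the active mesh. Run in Object Mode.
        import bpy

        mesh = bpy.context.object.data
        uvs = mesh.uv_layers.active.data                  # one entry per face corner (loop)
        for poly in mesh.polygons[:2]:
            for li in poly.loop_indices:
                vert = mesh.loops[li].vertex_index
                print("face", poly.index, "vertex", vert, "uv", tuple(uvs[li].uv))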
  11. Can you put this texture on? In Blender, load it in the UV edit window while all faces are selected, then set texture display type and shadeless shading; inworld, just drop it on each face. With pictures of those it will be easier to see what's going on. Meanwhile, do you have more than one object? And do you have more than one UV map per object? You can tell by looking at the properties panel on the right after clicking the little triangle mesh tab icon. There is a list there. The exporter will export multiple UV maps, but the uploader will only use one of them. Getting the wrong one often causes people problems. You can select the right one in the list and make sure the Only active (or selected) UV option of the exporter is enabled.
  12. "Does increasing the number of bounces under the Light Paths do much for me?" Here's an example. This is lit with a set of strip lights (emitting planes) above left, plus a bit of ambient occlusion. There's a red cylinder off to the left. The top row is rendered view, middle is (part of) a combined bake, and bottom is the baked texture (set shadeless in Shading, texture display). On the left is zero bounces, in the middle one, and at the right two. You can see the effect of adding bounces inside the holes, and in the appearance of the indirect light from the red cylinder. At the bottom, you can see that there is a similar effect on the baked texture, but also that the baked 'highlights' are nothing like the rendered ones. The indirect red light is also different. That's the effect of the strange camera used by the baking, always looking along the surface's normal. It's as if, for each pixel, you moved the camera to a point on the normal coming out of the surface where that pixel is.
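     For anyone setting this up from a script rather than the Render panel, a rough bpy sketch (Blender 2.7x Cycles assumed; an image must already be assigned to the mesh for baking) of comparing bounce counts:

        # Rough bpy sketch (2.7x, Cycles): bake the combined pass with 0, 1 and 2
        # bounces, as in the comparison above. Assumes the object is selected and
        # an image is already assigned for baking; save or rename the image
        # between runs if you want to keep all three results.
        import bpy

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'

        for bounces in (0, 1, 2):
            scene.cycles.max_bounces = bounces
            scene.cycles.diffuse_bounces = bounces
            scene.cycles.glossy_bounces = bounces
            bpy.ops.object.bake(type='COMBINED')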
  13. That makes sense. I guess it must have been picking the wrong one of the normals at that vertex, so removing them forced it to pick the right one. (Maybe that could be fixed in Avastar by picking the most similar normal when the relevant vertex isn't smooth.) Good - now we have ways to do it with or without Avastar. I guess both will be redundant if Gaia gets the custom normal export into the Blender exporter, although the Avastar one may still be simpler.
  14. "Blender bakes the specular reflection for each pixel as if it was looking straight at it." Yes, along the normal, which means it doesn't even represent the real highlights for any real camera position. I once spent a week trying to make a complicated Cycles node setup, by manipulating the normals, that would give the baked highlights at least for one camera angle and lighting setup. I got fairly close, but in the end there were always some horrible artefacts left. Anyway, as you say, that wouldn't be much use in SL because it would only work with static lighting and camera. I also tried desperately to make a Cycles setup that would mimic SL shading +/- ALM, for testing baked textures. Failed there too. If anyone with better knowledge of Cycles etc. can do better with these attempts, it would be valuable.
  15. "As I said, I didn't read the entire thread .... No idea if there is a solution or workaround for that too." Naughty Kwak. That's not like you. I already gave two work-arounds for the bb problem. :matte-motes-wink:
  16. "I would bet then that a 1024 texture using png creates much less lag in sl then a 1024 in tga." Sadly, I'm afraid you would lose that bet. Both are lossless storage methods. Png is compressed, while tga isn't. So the data read into the viewer (and decompressed for png) when you upload is exactly the same. The viewer then converts the image to JPEG2000 (a lossy compression method) before uploading it to the asset server, and it's that JPEG2000 that gets downloaded again to viewers looking at your texture. So with identical input data, the uploaded and downloaded data size will be identical, irrespective of which of these two on-disc formats you started with. You will save space on your disc though with png. More important, by making sure you have no alpha channel, you can reduce all of them, including the JPEG2000.
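     If you want to convince yourself of that, a quick check with the Python Pillow library (file names here are just placeholders) shows that a PNG and a TGA saved from the same image decode to exactly the same pixels, so the uploader starts from identical data either way:

        # Quick check with Pillow (file names are placeholders): a PNG and a TGA
        # saved from the same image decode to identical pixel data.
        from PIL import Image

        png = Image.open("texture.png").convert("RGBA")
        tga = Image.open("texture.tga").convert("RGBA")
        print(list(png.getdata()) == list(tga.getdata()))   # True if the pixels match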
  17. In principle, yes you could. You would have to know two things: which normals to edit, and what they need to be changed to. For the first, you have to negotiate the indexing from the <polylist>, finding which normal goes with which position, and using the position to tell whether that's one of the ones you need to edit and which normal you need to put there. Then you have to work out what the new normal needs to be at each of the relevant vertices, which means interpolating between the face normals of the actual face and the imaginary one you will be joining it to.
     The curved road section is just about as easy as it gets for that. I think there would be only twelve normals needing editing. More helpfully, because the joined edges are in planes perpendicular to one axis, the new normals all happen to be pointing along another axis. So no calculation is needed, just making sure you get it right twelve times. However, even in this case I think that is enormously more demanding and error-prone than simply deleting a nicely delimited chunk of text. In the more general case, with many more vertices and with joints at all sorts of angles, it would be extremely laborious and difficult, while still only one chunk of text has to be deleted in the alternative.
     Of course if it were done in software, the work and potential error would be removed, but I guess that is what Gaia already did in Avastar. If it were required to work on an already exported collada file, I think the hardest part would be having sufficiently reliable and flexible ways to identify which edges of different objects are going to get joined and where. I don't immediately see how it could be done at all without including in the file both/all objects that need to be joined, or at least surrogates like the extra geometry that I delete by hand. That is then just about as much work in both methods.
     Someone could write a simple script to delete the "deleteme" polylist though. That would help people who don't feel comfortable editing dae files. It could easily check for and correct the problem that was extending the bounding box, which is a bit harder than the simple deletion. In fact, it could quite easily remove all the unreferenced data from the float arrays (position, normal, uv) as well, updating the indices in the polylist appropriately. Hmm. If I wrote it in R, I guess nobody would use it. :matte-motes-frown:
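     In case it's useful, here is a rough Python sketch of the simple version of that script. It assumes the extension geometry was given a material whose name contains "deleteme", and it only removes the <polylist>; it does not do the bounding-box fix or the unused-data cleanup mentioned above:

        # Rough sketch: remove every <polylist> whose material name contains
        # "deleteme" from a Collada file. Assumes the joining geometry uses a
        # material named that way; does NOT fix the bounding-box problem.
        import xml.etree.ElementTree as ET

        COLLADA_NS = "http://www.collada.org/2005/11/COLLADASchema"
        NS = "{" + COLLADA_NS + "}"

        def strip_deleteme(infile, outfile):
            ET.register_namespace("", COLLADA_NS)      # keep the default namespace on output
            tree = ET.parse(infile)
            for mesh in tree.iter(NS + "mesh"):
                for plist in list(mesh.findall(NS + "polylist")):
                    if "deleteme" in plist.get("material", ""):
                        mesh.remove(plist)
            tree.write(outfile, xml_declaration=True, encoding="utf-8")

        strip_deleteme("model.dae", "model_clean.dae")  # placeholder file names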
  18. "It's possible you can find a way of changing that within Blender" In fact it turns out to be very easy! In my file, at least, I just selected everything and did Menu > Mesh > Sort Elements > Reverse, before exporting and deleting the "deleteme" polylist. That worked. Of course it won't work if the last element is also from an extension, but I don't know that that will ever be the case. If it is, then some of the other sort options might work.
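     The same thing from a script, for what it's worth (2.7x operator names assumed):

        # bpy sketch (2.7x): reverse the element order before exporting, the same
        # as Mesh > Sort Elements > Reverse with everything selected in Edit Mode.
        import bpy

        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.sort_elements(type='REVERSE', elements={'VERT', 'EDGE', 'FACE'})
        bpy.ops.object.mode_set(mode='OBJECT')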
  19. "However i will now go and look into support of custom normals" :matte-motes-grin: Excellent. One day I'll have to look into building Blender so I can help. For now I find it too daunting, not least because I haven't found any clear specification of Blender's internal data structures. So meanwhile please accept lots of gratitude on behalf of all of us for your work on this.
  20. There must be at least a million very different ways to do this. Of those I tried out, here is the one that seemed to offer the best compromise between easiness and quality. Others will prefer different approaches. Experiment is the key. The LI could be a lot lower with more removal of redundant triangles, but that is left to the reader.
     The basic idea with this kind of height map is to concentrate vertices in the border where the height variation is, while reducing the redundant triangle count elsewhere. Also, the edge loops are made to approximately follow the contours of the slopes to avoid nasty triangulation artefacts that happen otherwise. These artefacts lead to over-subdivision to compensate, and resulting high LI. The high LOD mesh is also used for triangle-based physics to ensure accurate walkability. To avoid excessive physics weight, it is therefore essential to remove small triangles, as done at several steps here.
     1. Delete all objects; Object mode, top view {NumPad7}.
     2. Set background image to heightmap; Size -> 32.
     3. Click 0,0; Object > Snap > Cursor to Grid.
     4. Add > Mesh > Plane; Edit mode; Select all; {S32}.
     5. Click [subdivide] 4x. Set Wireframe view {Z} [Pic 1]
     6. Select inside verts; Delete; Select all [Pic 2]
     7. Move vertices {G, drag each} to surround the raised area [Pic 3]
     8. Mesh > Faces > Inset; drag to inside the flat area [Pic 4]
     9. Select {AB} and merge at center {AltMA} to merge close vertices [Pic 5]
     10. Add verts with LoopCut&Slide where needed, and nudge into place [Pic 6]
     11. Select the bridging edges and use [subdivide] with cuts=3 [Pic 7]
     12. Divide central ngon into quads; here using lots of alternating J and [subdivide] [Pic 8]
     13. Manually decimate interior; here using alternating edge slides and remove doubles [Pic 9]
     14. Remove some redundant edge loops [Pic 10]
     15. UV map with Project from View (Bounds)
     16. Add Displace modifier with height map via UV mapping, Strength to taste [Pic 11a] (see the bpy sketch after this post)
     17. Optional for flat base: select outer edge loop, {SZ0}, move down a bit [Pics 11b-d]
     18. UV map using plain Unwrap and adjust to fill the UV area [Pic 12]
     Notes: The last step is necessary for even texturing of slopes. It would be easy to reduce LI further with more stringent manual decimation. Imported with autoLOD, high LOD as physics, NOT Analyzed. LI=24, dlwt=23.9, phwt=18.3
     ETA: just noticed, the wrong edges are highlighted in pic 6. The added ones are one step anticlockwise from these :matte-motes-dont-cry:
     ETA: Here is a MUCH faster and nicer way to make the decimated middle: just do a couple of rounds of Mesh > Faces > Inset Faces on the big central ngon, with a bunch of vertex merging after each, then some more redundant edge removal. Now LI=20, dlwt=20, phwt=13.4. Also, proper normal and spec maps make it better (if you use advanced lighting).
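     For step 16, a rough bpy sketch of adding the Displace modifier from a script (the height map file name and the strength are just placeholders):

        # Rough bpy sketch for step 16: Displace modifier driven by the height
        # map through the UV map made in step 15. File name and strength are
        # placeholders; set Strength to taste.
        import bpy

        obj = bpy.context.object
        img = bpy.data.images.load("//heightmap.png")     # placeholder height map

        tex = bpy.data.textures.new("HeightMap", type='IMAGE')
        tex.image = img

        mod = obj.modifiers.new("Displace", type='DISPLACE')
        mod.texture = tex
        mod.texture_coords = 'UV'
        mod.strength = 2.0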
  21. Previously, and by default, Blender has constructed vertex normals on the fly by using the face normals, for flat shading, or by interpolating adjacent face normals, for smooth shading. The latest version of Blender (2.74) has added methods to use and modify explicit vertex normals per (face+vertex). These are called custom normals. One of the tools provided allows the copying of normals from one mesh object to another. This can be used to make the normals at the edges of a separated model the same as those on the joined model, which is more or less what we have been trying to do here. However, so far the Blender Collada exporter only exports the old normals, not the custom normals. So this facility can't be used in Collada exported directly from Blender. Gaia, who is one of the managers of Collada exporter development in Blender, has confirmed that future versions of Blender are planned to have the ability to export the custom normals. That will make solution of the normal-matching problem, the subject of this thread, much easier.
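     One way to do that kind of copying in 2.74 is the Data Transfer modifier; here is a rough bpy sketch (object names are placeholders, and this may not be exactly the tool referred to above):

        # Rough bpy sketch (2.74+): copy custom split normals from the joined
        # reference mesh onto a separated piece using a Data Transfer modifier.
        # Object names are placeholders.
        import bpy

        piece = bpy.data.objects["RoadSection"]        # separated piece to fix
        joined = bpy.data.objects["JoinedReference"]   # mesh with the desired normals

        piece.data.use_auto_smooth = True              # custom normals need auto smooth

        mod = piece.modifiers.new("MatchNormals", type='DATA_TRANSFER')
        mod.object = joined
        mod.use_loop_data = True
        mod.data_types_loops = {'CUSTOM_NORMAL'}
        mod.loop_mapping = 'NEAREST_POLYNOR'           # match corners by nearest face normal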
  22. "I will have to look into this more closely." Ah. I found the problem. The way the uploader works out the extents of the geometry is to initialise minimum and maximum values for each coordinate using the first point in the list of vertex positions in the collada file. Then, as it adds each vertex to the actual geometry, it compares its coordinates with those in the minimum and maximum, and if the new point is outside that range, it replaces the maximum or minimum as appropriate. If all the vertex positions are used in the geometry, that ends up with the extremes of the coordinates in each dimension. However, if the first position in the vertex position list, used to initialise the values, is from the deleted extension, it may lie outside the bounds of the remaining geometry. In that case, that extreme will never be replaced by a point from the actual geometry.
     That's what happens with my file. The first point is from the extension that sticks out in the +x direction. So the maximum x extent is left at that point, outside the geometry that gets used in the polylist. If I just replace the x value of that point with a value inside the real geometry, the bounding box is now correct.
     Whether this problem arises depends on the order of the polys and vertices in the collada file. It's possible you can find a way of changing that within Blender, but meanwhile, there is a workaround, although a bit tricky. If your mesh has this problem, the first position in the geometry must be from an unused extension. You can't simply delete it, because these values are referred to by index in the <polylist>. So instead, you need to replace it with a value that is within the bounds of the used mesh. It can be found at the beginning of a section like this ...
     <library_geometries> <geometry id="Cylinder-mesh" name="Cylinder"> <mesh> <source id="Cylinder-mesh-positions"> <float_array id="Cylinder-mesh-positions-array" count="561">2.193989 11.02993 0.6526969
     These are triples of x, y and z values. The first number, 2.193989, is the offending x coordinate, which I changed to 0 to correct my file. Zero won't always work, as the whole mesh could be to one side of zero. You have to look at other points, or check positions in Blender, to get an appropriate value. If the bounding box extends in the y direction, you need to alter the second number, and if in the z direction, the third.
     As far as I am aware, there's nothing in the collada specification that says all points in the vertex position list have to be used in the geometry, so the edited file that has the problem is perfectly legal. So I will report this as a bug. It should be pretty simple to rewrite the initialisation of the extents with the first point referenced from the polylist rather than the first in the vertex position list. So it would be easy to correct. But I won't hold my breath.
     ETA: If you want to play with this test case, the files are now in the jira, which is BUG-8987.
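     To make the suggested fix concrete, here is a rough sketch (in Python, just illustrative; not the uploader's actual code) of initialising the extents from the first vertex actually referenced by the <polylist> instead of the first entry in the position array:

        # Illustrative sketch of the corrected extents logic: initialise min/max
        # from the first vertex referenced by the polylist, then expand over the
        # referenced vertices only, so unused positions can't inflate the bounds.
        def extents(positions, referenced_indices):
            # positions: list of (x, y, z); referenced_indices: vertex indices used by the polylist
            first = positions[referenced_indices[0]]
            lo, hi = list(first), list(first)
            for i in referenced_indices:
                for axis in range(3):
                    lo[axis] = min(lo[axis], positions[i][axis])
                    hi[axis] = max(hi[axis], positions[i][axis])
            return lo, hi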
  23. "One more question. For the other LODs I will have to do the same thing, but how about the physics model? If it fits the BB of the LOD0, the original physics model with no extensions should work, right?" It doesn't. It gets stretched to the extended bounding box. So it doesn't fit and doesn't work at all. If you use the model with the deleted geometry as the physics shape, however, it does fit and does work. Hmm. That's all using triangle-based shapes.
  24. "The deleted geometry still counts when calculating LI" Well, the LI (download weight) depends on the "radius" (half diagonal) of the bounding box. So it might be that effect you are seeing, rather than the extra geometry itself.
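     For reference, that "radius" is just half the length of the bounding box diagonal, e.g.:

        # The "radius" used in the download weight is half the bounding-box diagonal.
        from math import sqrt

        def bb_radius(size_x, size_y, size_z):
            return sqrt(size_x**2 + size_y**2 + size_z**2) / 2

        print(bb_radius(2.0, 2.0, 2.0))   # about 1.732 for a 2 m cube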