Everything posted by Drongle McMahon

  1. One thing that helps some people is understanding that the UV map and the texture are completely different things. The UV map is a list of numbers that tells the renderer which parts of the texture go on which parts of the mesh. It is easy to look at it, understand it, and edit it, when it is presented as an image with the vertices and edges on it. That is the representation we see in editing programs. It makes it easy to see what parts of the texture go where, and it works as a guide to painting a texture. But the image is not the UV map itself, just a representation of it. I hope that helps. If not, just erase it from your memory.
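Since the map really is just numbers, a toy sketch may make the point concrete. This is plain illustrative Python, nothing SL-specific, and the function name is made up:

```python
# A UV map is per-vertex (u, v) numbers pairing mesh vertices with
# positions on the texture image - the editor just draws those numbers.

def uv_to_pixel(uv, tex_width, tex_height):
    """Convert a (u, v) pair in [0, 1] to texture pixel coordinates.
    v is flipped because image rows usually count down from the top."""
    u, v = uv
    x = round(u * (tex_width - 1))
    y = round((1.0 - v) * (tex_height - 1))
    return x, y

# A quad's four vertices mapped to the corners of a 512x512 texture:
uv_map = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
pixels = [uv_to_pixel(uv, 512, 512) for uv in uv_map]
print(pixels)  # [(0, 511), (511, 511), (511, 0), (0, 0)]
```

Editing the UV map means changing those numbers; the texture image itself is untouched.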
  2. How many triangles do you have, according to the uploader, before and after you apply the modifier?
  3. OK. I had a look at the Wings3D sculpty exporter. At least the one I could find is very old-fashioned. It actually produces a 128x128 bitmap, but that's really a 32x32 bitmap with each pixel expanded to 4x4 identical pixels. That was the sort of thing you had to do before lossless upload of small images was added to SL. The extra pixels compensated for loss of accuracy when all uploads used lossy compression. Since the lossless compression was added, this is no longer necessary. Still, the underlying bitmap is still 32x32, which, of course, cannot have the necessary 33 different vertical positions needed for a 32x32-face sphere. With the 128x128 sculpt map, the code that turns the map into geometry just samples pixels twice as far apart as it would with the standard 64x64 map. So instead of [0,2,4,...,60,62,63] it uses pixels [0,4,8,...,120,124,126]. In the Wings3D map, though, pixels 124 and 126 are the same, because they come from the same pixel of the underlying 32x32 map, which is the pole of the sphere. This means that the last-but-one row of vertices is indeed collapsed onto the pole as I described. What this means is that the top, 32nd, ring of faces disappears from view on the rendered sculpty. However, the UV map used by the sculpty code still thinks it is there. So the UV map still has to have 32x32 faces. The top row of these will be hidden because of the collapsed vertices. Did you try applying the texture I provided? If you do, you should see that it fits the faces of your sculpty perfectly, with offsets 0 and scaling=1, but that the top row (might be bottom row, depending on the details) is missing on the sculpty because it is hidden. If you draw your texture on the bottom 31x32 part to fit your 31x32 faces, but leave the top row in the texture, it should fit as you expect.
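The sampling pattern described above can be sketched as follows. This is illustrative Python only, assuming the pattern generalises between the 64- and 128-pixel maps exactly as described:

```python
def sculpt_sample_indices(map_size):
    """Pixel indices sampled along one axis of a square sculpt map.
    33 samples give 32 rows of faces; the last sample is pixel 63
    of the standard 64-pixel map, scaled up for larger maps."""
    scale = map_size // 64          # 1 for 64x64, 2 for 128x128
    return [i * 2 * scale for i in range(32)] + [63 * scale]

print(sculpt_sample_indices(64))   # [0, 2, 4, ..., 60, 62, 63]
print(sculpt_sample_indices(128))  # [0, 4, 8, ..., 120, 124, 126]
```

In the Wings3D case, pixels 124 and 126 both come from the same pixel of the underlying 32x32 map, which is why the last two samples coincide and the final ring of faces collapses.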
  4. I assume you refer to the tab in the SL import window? Yes. If you did nothing with it, then your three objects should each have the default physics shape. That is, for each separately, the "convex hull" of the low-LOD version of the visual mesh. I don't think this would extend below the visible mesh. In that case, I can't right now see how it could account for the whole thing floating. However, as long as it is a linkset, then it could be that only one of the three objects rests on the ground. Metadata Physics Shapes ... What is this menu about? (In LL viewer) It's supposed to show you the physics shapes. They are usually light blue, but go through beige, orange and red as the physics weights become excessive. For triangle-based shapes you can see all the internal edges. For hull-based shapes, you just see solid blue (etc). Sometimes you will see objects with just an outline of the bounding box, no blue. I'm not sure what that is meant to be, but it happens at least with some invalid physics shapes, or sometimes when the server decides for some unknown reason. Meshes with the None physics shape type are also just barely visible ghosts.
  5. I think we need more detail to give a useful answer, maybe some pictures. There is one possibility that comes to mind, but maybe only because I am not certain about what you did. It sounds as if you may have joined the objects for the physics shape, while you still have separate objects in the visible mesh. The uploader needs to have a separate physics object for each visible object. If there's only one physics shape object, that will get used for just the first object, and it will be squeezed (and/or stretched) to fit the bounding box of that object. Any other objects in the visible model will each get the default physics shape, which is the convex hull of the low-LOD visible mesh. If this is the problem, then there is another requirement. The objects in the physics model must be in the same order in the collada file as their corresponding objects in the visible model. In Blender, you can assure this by naming them in the same alphabetical order and checking the Sort by Object Name option in the exporter.
  6. Do you mean you exported three Blender objects in a single collada file, without joining them, getting a linkset of three objects in SL? Or did you make/join them into one Blender object? In either case, how did you make the physics shape(s)? Did you look at the physics shape (Develop->Render Metadata->Physics Shapes)? Can you show us a picture? One thing to note - the origin in Blender is not relevant. A new origin is effectively placed at the geometric center of the xyz-aligned bounding box of any object's geometry. This is part of the method used to maximise geometric precision within the limitations of the 16-bit internal vertex data format. After that is done for the high LOD, all the other LODs and the physics shape are stretched or squeezed to fit the same bounding box, one object at a time.
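The recentring idea can be sketched like this. Illustrative Python only; the exact 16-bit encoding SL uses internally is an assumption here, not documented in the post:

```python
def recenter_and_quantize(vertices):
    """Sketch of the idea: place a new origin at the geometric center
    of the axis-aligned bounding box, then store each coordinate as a
    16-bit integer relative to that box (the precise SL encoding is
    assumed, not known)."""
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    center = [(mn + mx) / 2 for mn, mx in zip(mins, maxs)]
    size = [max(mx - mn, 1e-9) for mn, mx in zip(mins, maxs)]
    quantized = []
    for v in vertices:
        # Normalise each axis to [-0.5, 0.5], then map to 0..65534.
        q = tuple(round((v[i] - center[i]) / size[i] * 65534) + 32767
                  for i in range(3))
        quantized.append(q)
    return center, quantized
```

Because every axis is normalised to the bounding box, the precision available is spent entirely within the object's own extent, whatever the Blender origin was; the lower LODs and physics shape are then fitted to the same box.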
  7. why are we still told to rummage through the debug settings Because so many people want you to set it to 4 to suppress their bad LODs?
  8. but where is the squares-row, that is not shown on the objekt? All 32 are shown on the object. It is possible the sculpt map you have has the last set of vertices (y-pixel 63) collapsed, so they are on top of the last-but-one (y-pixel 62). In that case, the last face next to the pole is zero area, so you don't see it. This is not uncommon when the person or software making the sculpt map doesn't know there is a 33rd vertex in pixel 63. it's still not placed exactly Difficult to say why without having the sculpt map and texture to examine in detail. Meanwhile, here is a test texture you can use, which will show you exactly how the texture fits on your sculpty. It's 512x512 to allow the numbers to be clear. It has 32x32 equal squares. They will fit exactly to the faces of any sculpty with a 64x64 map. Below that is this texture applied to the default sculpt map in SL, with no offset or scaling, with the wireframe superimposed*. You can see the precise fit. Finally, a picture of a sphere in Blender showing why you have 33 vertices to get 32 rows of faces. * in case you wonder how: screen captures, one normal, one in wireframe view (Develop->Render->Wireframe), with sky, water and surface patch rendering off (Advanced->Render Types), texture color black; with suitable superimposition and colour adjustments in Gimp. PS: What were your exact adjustments that gave your corrected texturing? That might give a clue.
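If you want to make a plain grid texture like this yourself, here is a minimal sketch. It is pure Python, produces alternating grey squares only (no numbers, unlike the texture described above), and uses the binary PPM format just because it needs no image library:

```python
def make_grid_texture(size=512, squares=32):
    """Return a binary PPM (P6) image of squares x squares
    alternating-grey cells - a crude stand-in for a UV test grid."""
    cell = size // squares
    colours = [(230, 230, 230), (60, 60, 60)]
    body = bytearray()
    for y in range(size):
        for x in range(size):
            # Checkerboard: parity of the cell indices picks the colour.
            body.extend(colours[((x // cell) + (y // cell)) % 2])
    header = b"P6 %d %d 255\n" % (size, size)
    return header + bytes(body)

# Usage: open("grid.ppm", "wb").write(make_grid_texture())
```

Applied with offsets 0 and scaling 1, each cell of such a grid should land on exactly one face of a 64x64-map sculpty.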
  9. With default texture mapping in SL... I should perhaps be more explicit. It depends on what the collada exporter puts in the collada file. So that depends on the 3D program and its exporter. In the case of Blender, which OP referred to, if you don't unwrap at all, you get no UV map data in the collada file. That's when the exporter appears to use uninitialised data. It's a long time since I did all this, so I checked - the exporter still gives no UV data. I uploaded a cube like that with no UV data three times and got the single-pixel effect (all black with my grid test texture). That used to be the most common outcome (most uninitialised memory is all zeroes), but also it may be that they fixed the uploader to always give this outcome. I would have to do lots more uploads to see. Anyway, certainly not one texture per mesh face. Here is a picture of some examples I did of multiple uploads of the same mesh, ages ago. Top two with no unwrap (no UV data), bottom unwrapped. You can see the random data effect. But... Now that's all with Default mapping, because I never use planar mapping. So I just tried a mesh with no UV map, planar mapping applied in SL. That's the two on the left. Parts of it look a bit like one texture per mesh face, but other parts don't look at all like that. It may depend on triangulation too, as these two were triangulated in Blender (left) and by the uploader (middle). The same mesh with default mapping in SL is at the right - so I guess they didn't fix the uninitialised data thing after all. You are right that the planar mapping is much better, but I don't think it's regular enough for serious use, except maybe for some simple anisotropic textures, like sand.
  10. I would recommend that you always unwrap to make a UV map. If you upload a model with no UV map, the uploader uses uninitialised data instead, and the effects are undefined. You may get different maps each time you upload the same file. You may get the whole surface mapped to one pixel of the texture. It all depends what the uninitialised data happens to contain. If it's not all one pixel, it will usually be highly fragmented, which increases LI (via download weight). It will never be any use for applying textures unless you use a completely monochrome texture (or blank texture + color). If you want to do good texturing, you will have to understand and master UV unwrapping. UV mapping is usually more work than making the geometry, at least for the sort of things I make. Then making good textures can be even more work. You can avoid that if you use general-purpose tileable textures, but you have to do the UV mapping in a special way for those to work well.
  11. First dealing with sculpties with square maps of 64x64 pixels... UV maps... These all have the same UV map, which is 32x32 equal squares. That does not depend on the stitching type. So your template texture for 8x8 pixel squares should be 256x256. Geometry... The plane stitching topology (i.e. unstitched) uses 33 pixels in each direction. For a 64x64 unstitched sculpt map, the XYZ positions of these vertices are defined by the RGB from pixels 0, 2, 4, 6, ..., 58, 60, 62, 63. For cylinder stitching, pixel 63 is effectively substituted by pixel 0 in the X direction, which produces the stitching into cylindrical topology. In the torus stitching, this happens in both X and Y directions. Thus pixel 63 is ignored for these topologies. In the sphere, the X direction is the same as the cylinder. In the Y direction, the top (x,63) and bottom (x,0) rows are sampled only at the middle (32,63 and 32,0) to get the vertices of the poles. This connects all 32 vertical edges at the poles. In all these cases, the stitching is ignored in the UV map, so that there are always 33 vertices (32 faces) in each direction in the UV map. For (oblong and small) sculpt maps with different dimensions, the same principles apply. So if the map is 2a x 2b pixels (a and b always being powers of two) then there will be a x b squares in the UV map and (a+1) x (b+1) vertices, with the stitched ones being redundant.
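As a sketch of the sampling rules above, for the 64x64 case only: this is illustrative Python, the function names are made up, and it is only my reading of the rules as described:

```python
def sample_pixel(i):
    """Map vertex index 0..32 to its source pixel on a 64-pixel axis."""
    return 63 if i == 32 else 2 * i

def sculpt_vertex_pixels(stitching):
    """Which pixel of a 64x64 sculpt map supplies each of the 33x33
    vertices, per the stitching substitutions described above."""
    grid = {}
    for vy in range(33):
        for vx in range(33):
            px, py = sample_pixel(vx), sample_pixel(vy)
            if stitching in ("cylinder", "torus", "sphere") and vx == 32:
                px = 0        # pixel 63 replaced by pixel 0 in X
            if stitching == "torus" and vy == 32:
                py = 0        # and in Y as well for the torus
            if stitching == "sphere" and vy in (0, 32):
                px = 32       # pole rows sample only the middle column
            grid[(vx, vy)] = (px, py)
    return grid
```

The UV map never sees these substitutions: it always has the full 33x33 vertex grid, with the stitched vertices simply landing on the same positions.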
  12. A clean install ist not so often a solution to any occuring bug in sl. ??? Did I suggest it was ??? I found your jira and used the dae files you put there. I did the following with the current SL viewer and on two servers: beta grid, 296624 (older than yours); main grid, 297277 (newer than yours, which was 296988 in the jira). Results were the same on both servers. All three: Upload with default auto-generated LODs. Rez at uploaded scale (at least one dimension < 0.5m). Set physics shape type to Prim. Scale up to all dimensions just greater than 0.5m. Note physics weight (More Info from edit dialog). Scale up to about 10m X dimension. Note physics weight. Three versions had different physics: A=your physics mesh, not Analyzed (triangle-based); B=your physics mesh Analyzed with all default settings (hull-based); C=removed all narrow triangles from edges of your physics mesh (84 of 136 triangles left; still some redundant triangles, could be fewer), not Analyzed (triangle-based). The Analyze in B produced 72 hulls with 272 vertices. These numbers are large because the uploader does a bad job decomposing a mesh like this into hulls. The physics weight is roughly 0.04 x (number of hulls + number of vertices). For these numbers, it is expected to be about 14. Inworld, the Prim type physics weight for this is 15, independent of size, which is pretty much as expected. Using your physics mesh without Analyze, to get the triangle-based shape, the physics weight depends on size, as expected. As long as at least one dimension is less than 0.5m (which it is as uploaded without a scale factor), the server secretly treats the type as Convex Hull instead of Prim, and the physics weight is correspondingly low. As soon as one dimension exceeds 0.5m, the triangle-based shape gets used, and the physics weight at that size is 28. As expected for triangle-based shapes, it decreases as the house is stretched to a larger size, until at about 10m it is 1.7.
Using the alternative physics mesh, with the narrow triangles removed, the physics weight at just over 0.5m is 13, decreasing to 0.7 at about 10m. All that seems to me to be completely normal behaviour for SL. You didn't say whether you used Analyze or not. If you did, the high weight is because the decomposer that turns the mesh into a set of hulls does a very bad job with the kind of mesh you have given it. Looking at it by hand in Blender, it should be at most 14 hulls with 72 vertices. That would give a Prim type weight of 3. Usually, the only way to get the uploader to do this is by simplifying the physics mesh so that it already consists of the hulls you want, with none of them connected or overlapping. In the case of this model, assuming it's scaled to about 10m, you are better off anyway with the triangle-based shape. Even with the narrow triangles left in, that gives you a LI of 2. With them taken out, it drops to 1. Removing the narrow triangles is a good idea because it reduces the work for the physics engine while making no practical difference to the collision behaviour, as well as reducing the physics weight. As far as the question whether anyone else has seen this: looking at your model, I haven't seen anything different with server versions numbered on either side of the one used for your jira report. That doesn't necessarily mean your server version was not different, though. Changes can appear and then be reverted. In the versions I tested, I did not see any unusual behaviour. It is also possible that a different viewer version may use a different version of the decomposition library, giving different results for the hull-based shape. I was using Second Life 3.7.22 (297128) Dec 1 2014. Yours was later (Second Life 3.7.23 (297272) Dec 5 2014). I don't know why mine didn't auto-update yet. Do you get the same hull/vertex counts if you Analyze?
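The rule of thumb quoted above is easy to check. Illustrative Python; the 0.04 factor is the post's own approximation, not an official formula:

```python
def hull_physics_weight(num_hulls, num_vertices):
    """Rough estimate from the post: hull-based physics weight
    is about 0.04 x (hulls + vertices)."""
    return 0.04 * (num_hulls + num_vertices)

# The uploader's poor decomposition: 72 hulls, 272 vertices.
print(round(hull_physics_weight(72, 272)))  # about 14, close to the 15 seen inworld
# The hand-estimated optimum: 14 hulls, 72 vertices.
print(round(hull_physics_weight(14, 72)))   # about 3
```

This is why simplifying the physics mesh so that it already consists of the hulls you want can cut the hull-based weight so dramatically.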
  13. First thing is the number of samples. Camera tab; Sampling section; Set Render to at least 32, then check the "Square Samples" box. That box isn't about shape, it squares the number in the slots below. So 24 becomes 24 x 24 = 576. I use 32 squared (1024) for a 512x512 image at 100% resolution. It's SLOW. That's the price of using Cycles. There are a lot of other settings that might have effects, especially Filter Glossy in the Light Paths section.
  14. Are your physics shapes hull-based ("Analyze" clicked) or triangle-based (no "Analyze")? Simple things first. The physics weight you see in the uploader is the weight for the default physics you get when you set the type to Convex Hull. That is a single convex hull for the whole object (Low LOD by default). You can't see the Prim type weights until you upload and rez the mesh. Jiras requesting that the uploader display the Prim type weight were made a very long time ago, apparently to no effect. If the physics is triangle-based, it is dependent on the size of the object, and will change if it is stretched or squeezed. Now for more complicated issues... For triangle-based shapes, I did an extensive analysis (links below) of the physics weight. The outcome was to show that the physics weight was hugely dependent on the order of the triangles in the uploaded collada file, and on the order of vertices within the triangle(s). I could not find a universal method of optimising that order. This was all done by manual editing of the collada file. Authoring programs, and their exporters, do not generally give you an explicit way of setting these orders. So the physics weight becomes highly variable and unpredictable for identical geometry when it is constructed and/or manipulated differently. In general, the "same mesh" could give very different weights unless you used exactly the same collada file. Now, the uploader doesn't upload collada to the servers. It converts the data into an efficient internal format first. Changes to that process could certainly affect the order of triangles in the uploaded data (to be more precise: could change the effect of the collada order on the structure of the uploaded data). So changes to the resulting physics weight could happen as the result of changes in the viewer code, even if you use the same collada file. I am not aware of any such changes, but I haven't been looking for them.
Since the physics weights are calculated by the server, it is also possible for changes in the server to change the weights. Indeed, it's even possible that the server could have been changed to eliminate the inconsistencies I documented (although I doubt that, because the jiras I submitted were dismissed). However, in that case, as triangle-based weights are recalculated whenever an object changes size, the weights of previously uploaded objects would change too. LL are usually very reluctant to make changes that affect existing content. The situation for hull-based weights is much simpler. They don't (generally) change with size. The calculation specified in the wiki is incorrect, but an alternative is used consistently. However, the "Analyzed" hulls are made by a third-party library used by the viewer. So if that changed, the physics weight could change with a new viewer, as an identical collada file could produce different hulls. Also be aware that the results may be different unless all the settings on the physics tab are identical and any changes to their defaults were made in the same order. I guess the calculation on the server could have been changed too, although my jira for this was rejected, and I would say it's the wiki documentation that's wrong rather than the calculation. ---------------------- Here is the triangle weight story, four episodes... Episode 1 Episode 2 Episode 3 Episode 4
  15. I need a five-sided one "Thou shall not threaten to teleport thy neighbour from his own land every five seconds".
  16. go look at the rocks and trees Oh yes, I recognise that. Took me hours to stop looking for LOD effects in Skyrim, so that I could play it properly. That was an unexpected and not very welcome side effect of learning SL mesh.
  17. I think that would work, but you would quickly run out of materials, and you are just increasing the number of textures, which is not good. As long as you are aware of the potential problem and keep an eye on the effects, it should be possible to come up with an acceptable compromise between geometry and stability without using so many textures. After all, the LODs aren't supposed to be perfect.
  18. why do users tend to begin with the most complex projects ? Because they are optimists? I am a pessimist, which is why I don't do rigging at all. :matte-motes-smile:
  19. Normal and spec maps have their own LOD known as mipmaps that would hide that detail. I don't think that can be generally true. The switches are not coordinated, and simply interpolating the higher-res normal map doesn't necessarily make the right kind of adjustment. Also, it depends on the nature of the LOD meshes. Here's a very simple example, showing minor and extreme effects. The high LOD is a cube with extra edge loops near the edges to give the rounded shading. The lower LOD is the plain cube, flat shaded on the left, and smooth shaded on the right (as often used to minimise the split vertex count, and thus LI, and like the high LOD). The normal map is baked from a high-poly mesh onto the high LOD mesh. LOD switching only by zooming, with RenderVolumeLODFactor at the constant LL viewer default 1.125 (hysteresis on zoom in/out allows identical image sizes). High LOD above, medium LOD below. The one on the right is useless. The one on the left might be acceptable, but note the artefact at the cube edges because of the sharp edges that aren't there in the high LOD. I don't suppose the MIP level changed at all here. Certainly there is no rescue of the edge detail. Nobody is likely to use the situation on the right, as it messes up diffuse shading anyway, irrespective of the normal map. The one on the left is the sort of thing that is more likely. So there is a need to see whether this sort of thing is acceptable in any particular model (not that I have a way of correcting it in this case without extra geometry). ... that's not how it would be viewed "in the wild" Indeed, the final test always has to be with normal viewing, preferably with the default low graphics setting (RenderVolumeLODFactor=1.125 in LL viewer), which will be most sensitive to undesirable effects.
  20. How about SPEC and NORM maps, I assume they also work for the different LODs? Yes. They stay on the same faces, just like the diffuse texture. That can be a problem with the normal map, because the map specifies the difference between the geometric normal and the normal to be used by the shader. If your lower LOD removes detail, or removes edge loops used to sharpen corners, then the normal map for the high LOD won't be correct for the lower LOD. That's just an extra thing you need to check if you are using materials. The best way to see exactly what's going on is to dial in lower/higher RenderVolumeLODFactor in the Show Debug Settings dialog, rather than zooming. If it's hard to see the switches, you can use wireframe view to check (Develop->Render->Wireframe). Then, to see what the LODs actually look like with the LOD switches at a particular factor, you can either zoom or stretch/squeeze the object (that's how I did the bed ones). As far as appearance is concerned, it's the same thing.
  21. ...do the LOD models always use the textures of the high poly model...? Each material (SL face) of the different LOD models always has the same textures at all LODs. So if you want to use a "billboard" technique, where you use a picture of a higher LOD model to texture a very simple low LOD model, then this texture has to be present in all the LODs. Also, all the high LOD materials/textures have to be present on the low LOD model. This generally means that you have to use a hidden triangle in the high LOD for the low LOD texture, and hidden triangles for the high LOD textures in the low LOD. Of course, this means more triangles in the lowest LOD, just to carry the textures. Nevertheless, this is one of the most effective ways of reducing LI while keeping good LOD appearance. It is used for the headboard rails in the three lower LODs in my 1-prim brass bed. However, these are alpha textures with the rails on, and alpha textures are one of the worst things for performance. So although this reduces LI effectively, it's not necessarily a good idea. A bit of a cheat perhaps, that would not work if the LI system penalised alpha textures appropriately! By the way, you seem to indicate that you are uploading textures with the models. There are good reasons for not doing that, which I have listed elsewhere. I recommend uploading all textures separately and applying them inworld. ETA. ChinRey said "not a very useful technique": not sure I can agree with that. It can be very effective in a lot of models, not just beds, but other furniture, fences, window frames, etc., etc.... However, as I said, with alpha textures it may be useful, but counter-productive performance-wise. Also, it is a lot of work.
  22. I hope it's OK to repost it here OK, but I should add that these are the basic distances. There are various adjustments that the viewer makes that mean these are quite approximate. Also, the switches can get delayed, and they tend to be somewhat different depending on whether you (i.e. the camera) are approaching or receding from the object. This is in addition to the lag-like effects you have described.
  23. That all depends on the size of the object(s). For small objects, the lowest LOD has by far the greatest effect on LI (the download weight part; if physics weight is higher, that's a different matter). As the size increases, the higher LODs become more influential. The lower LODs each become irrelevant at a threshold size. For wall/house-sized objects, you are probably well into the range where low and/or medium will have a large effect. So keeping them the same as the high LOD is probably not a good idea (unless your house is made of lots of small pieces). It is a bit complicated, but the details, including the meaning of "size" and the threshold sizes, etc., are available in this old thread.
  24. Blender's decimator preserves the UV map now. Oh goody. I'll have to go and play with it again. Does it respect UV seams etc.....never mind, I'll soon find out. ETA: Ah. On first glance: unsubdivide doesn't conserve the UV. The other modes do up to a point, but I think I can get better results manually, especially for drastic reduction. Certainly better than it used to be. Might be some use for it, but ....