Drongle McMahon

Everything posted by Drongle McMahon

  1. Might I not just as well say that's simply an alternative kludge to the balance triangle, either being used to overcome the omission of origin offset from mesh uploads, and both wasting resources? Origin offset was tested in beta but then dropped. Did anyone discover the reason for that?
  2. I just want to add a point about the option(s) for including textures in the mesh upload (as opposed to uploading them separately and applying them inworld). I have used a lot of re-usable (usually seamless) textures on many different items. This has the advantage of minimizing texture-download lag where the texture is used on multiple items in a location. Each time you upload the exact same texture with a different (or the same) mesh, however, the system treats it as a completely separate texture. Thus it receives a new ID, occupies a new slot in the asset database, and has to be downloaded repeatedly by all viewers seeing it on multiple items, even when the same texture with a different ID is already in the cache. The duplicated textures are also added to your inventory, in places you may have difficulty finding. These effects happen whether you are re-using a general purpose texture or simply uploading a mesh after some non-texture alterations. So, unless you never re-use a texture, uploading textures with the mesh always causes unnecessary duplication and consequent waste of resources. That's why I have always advised against it. There are also some minor problems often encountered by beginners in the course of uploading textures with the mesh.
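A pre-upload check can catch this kind of duplication before it costs anything. Here is a minimal Python sketch (the function names, and the idea of hashing the raw file bytes, are my own illustration, not anything the uploader does) that groups textures with byte-identical content, so you can upload one copy and re-use it inworld:

```python
import hashlib

def texture_digest(data: bytes) -> str:
    """SHA-256 of the raw file bytes; identical files hash identically."""
    return hashlib.sha256(data).hexdigest()

def find_duplicates(textures: dict) -> dict:
    """Given {name: file_bytes}, map each digest to the names that share
    byte-identical content (only groups with more than one member)."""
    groups = {}
    for name, data in textures.items():
        groups.setdefault(texture_digest(data), []).append(name)
    return {d: names for d, names in groups.items() if len(names) > 1}
```

Note this only catches byte-identical files; re-exporting the "same" texture usually changes the bytes, so run it on the original files you feed to the exporter.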
  3. I've always wondered why this can work, because the viewer has code that is supposed to eliminate degenerate triangles before uploading them. It even has a minimum tolerance built in, so that vertices don't have to be exactly identical to make a triangle degenerate. It may be similar to the unused first vertex trick; that is, if the degenerate triangle culling only happens after the update of the mesh extents. I'll have to see if I can get motivated to look into the source code again!
  4. Also note that if any of the maximum eight materials has more than 21844 triangles in a <polylist>, a new material will be silently generated. Then there will be more than eight and the model will be split. This process can be accompanied by several other horrible effects.
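If you want to check for this before uploading, the `count` attribute on each <polylist> (or <triangles>) element in the .dae already holds the primitive count. A minimal Python sketch, using only the standard library (the function name and constant are mine; the 21844 figure is the limit mentioned above, which fits 21844 × 3 = 65532 indices under 65536):

```python
import xml.etree.ElementTree as ET

COLLADA_NS = "http://www.collada.org/2005/11/COLLADASchema"
TRI_LIMIT = 21844  # per-material triangle limit noted above

def material_tri_counts(dae_xml):
    """Return [(material, count, over_limit)] for every <polylist> and
    <triangles> element in a collada document given as an XML string."""
    root = ET.fromstring(dae_xml)
    results = []
    for tag in ("polylist", "triangles"):
        for el in root.iter("{%s}%s" % (COLLADA_NS, tag)):
            n = int(el.get("count", "0"))
            results.append((el.get("material"), n, n > TRI_LIMIT))
    return results
```

Anything flagged True is a candidate for splitting into two materials yourself, before the uploader does it silently.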
  5. Are those black parts of the preview part of the model that you removed in the physics mesh? If so, you need to be aware that the physics model (of each mesh object if there are more than one) gets stretched and/or squashed to fit the bounding box of the high LOD (reference) mesh. To stop that distorting the physics, you need to make sure the physics model has the same bounding box as the main model. In this case, you may have to add back some geometry to bring its height to the same as that of the visual model. Otherwise, if those are parts of the physics, so that the physics is taller than the visual model, there may be some extra geometry up there in the visual model that isn't meant to be there. In some circumstances, this can be invisible in the rendered model (even just a single floating vertex if it's the first in the file). So then you need to check that there is nothing that got left up there by mistake.
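Comparing the two bounding boxes is easy to do outside SL before uploading. A minimal Python sketch (hypothetical helper names; vertices as (x, y, z) tuples) that flags a physics mesh whose bounding box doesn't match the visual mesh and so would be stretched or squashed:

```python
def bbox(verts):
    """Axis-aligned bounding box of a list of (x, y, z) vertex tuples."""
    xs, ys, zs = zip(*verts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def same_bbox(mesh_a, mesh_b, tol=1e-6):
    """True if both vertex lists span the same bounding box (within tol),
    i.e. the physics mesh will not be rescaled to fit the visual mesh."""
    (amin, amax), (bmin, bmax) = bbox(mesh_a), bbox(mesh_b)
    return all(abs(p - q) <= tol for p, q in zip(amin + amax, bmin + bmax))
```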
  6. Hmm. In the mesh forum we have often had long discussions, lasting several days, that have resulted in notes being added to the earliest replies correcting or extending the advice there. With the time limit, this will no longer be possible. That will mean users who share the problem having to read through the whole long thread to discover the corrections. That is a pity. I suppose I will have to assume the perceived benefits of the time limit, whatever they may be, are worth this negative effect.
  7. "You need to look more closely Drongle" Oh well, there's hope then. Still, I'm not going to be happy until we can upload images to the forum's own archive. I don't want to have to start using third party sites. The accumulated images we have posted, especially you, Aquila, comprise a valuable store of guidance that needs to be kept together with the accompanying texts. I also share your wish to be able to peruse my own images - that's by far the most effective method I have found for locating and linking to old posts. I haven't used a quote here because it's far too large and intrusive. Also, I couldn't delete part of it. I miss the edit-as-html too.
  8. From my first glance at my own posts in the Mesh Forum, it appears that all pictures have disappeared. That renders most of the more useful posts in that forum completely useless. Anyone know if this is a permanent situation? I see there's an insert "image from url" at the bottom, but apparently no direct upload option, and nothing exists in the "existing attachments".
  9. When you link prims to mesh, they switch from legacy prim LI accounting to mesh-type LI accounting. In many cases this will increase the LI. The more you distort the prims, the bigger the increase is likely to be. Toruses are particularly bad. If you restrict your physics prims, as far as possible, to cubes with no distortion other than stretching, cylinders stretched only along the long axis (i.e. with a perfectly circular section), and undistorted spheres, these will use physics engine primitives, which have the lowest possible physics weight. As soon as you distort them further, the physics weights will jump. One way of testing, to see which of your prims is making the worst LI contribution, is to unlink them, then add a blank normal or specular map. That also switches them to the new accounting. Make sure they are set to physics shape type "Prim", and click the "More Info" link on the edit dialog to see the separate weights.
  10. Degenerate triangles are triangles with (nearly) zero area. This can be either because two vertices are identical or because all three lie on a straight line. Doesn't have to be exactly zero because the code uses a very small number that it treats as effectively zero. So if you have very small triangles they might be treated as degenerate although they aren't exactly zero. In Blender you can do Mesh->Clean up->Degenerate dissolve, which will eliminate them. An option will appear at the bottom of the toolbox (on the left) where you can set how close vertices have to be to be merged. As long as the problem isn't just that your mesh has far too many triangles anyway, that should help you. PS. You are probably going to get a better response asking this sort of question in the Mesh forum.
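The test is the one described above: a triangle is degenerate when its area is (nearly) zero, which covers both the identical-vertices case and the collinear case. A minimal Python sketch (the helper names are mine, and the epsilon is an arbitrary placeholder, not the viewer's or Blender's actual tolerance):

```python
def triangle_area(a, b, c):
    """Area of the 3D triangle a-b-c: half the magnitude of the cross
    product of two edge vectors."""
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def is_degenerate(a, b, c, eps=1e-7):
    """Nearly zero area: duplicated vertices and collinear vertices both
    fall below the threshold."""
    return triangle_area(a, b, c) < eps
```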
  11. True - you can avoid the alpha problem with the (new) alpha mode settings. But - False - png format can be 24 or 32 bit (i.e. with or without an alpha channel). Not sure how you tell Photoshop, but in Gimp I just merge the image to one layer and remove the alpha channel. Then it automatically exports the png as 24 bit without an alpha channel. You can see that both in the export dialog and after import, where there are no alpha mode options.
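You can confirm which variant a PNG file is without opening an editor, because the color type is stored in the IHDR chunk right at the start of the file (color type 2 = truecolor, i.e. 24 bit; type 6 = truecolor with alpha, i.e. 32 bit). A minimal Python sketch using only the standard library (function name is mine):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_color_info(data):
    """Read width, height, bit depth and color type from a PNG's IHDR chunk
    (the first chunk after the 8-byte signature)."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    if data[12:16] != b"IHDR":
        raise ValueError("IHDR chunk not where expected")
    width, height = struct.unpack(">II", data[16:24])
    bit_depth, color_type = data[24], data[25]
    has_alpha = color_type in (4, 6)  # greyscale+alpha or RGBA
    return width, height, bit_depth, color_type, has_alpha
```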
  12. In addition to the per-file overhead, the multiple texture option also involves more transactions with the database and server. It will also allow finer distribution of available texture area between faces according to their sizes (unless you need only fixed size sub-textures). However, there are a couple of considerations that might swing the comparison the other way. You should avoid mixing alpha and non-alpha areas in the same big texture, as doing so will use more data for the non-alpha areas (32 bits/pixel vs 24) and will probably cause alpha sorting artefacts with the textures that don't need alpha channels. Otherwise, it may depend on how your use of the textures is likely to be distributed among multiple objects inworld. You could be downloading multiple redundant texture components in the 1024 if they aren't always going to be used all together. There might also be complications if you want to allow users to change textures (either for you or for the user). Finally, your composite 1024 texture won't be tileable, and tiling small textures can be one way of improving efficiency of resource consumption. Again, that depends on the exact application.
  13. Just to add to Aquila's post - one of the 9 materials here is actually only on the back and bottom of the steps. Both these surfaces are hidden and will never be rendered (unless you turn the whole thing upside-down). So you can safely delete these faces, and then you have only the permitted eight. (SL will now import objects with more than eight materials, but it does so by splitting them into multiple objects). More generally, we have known for a long time that Sketchup works in ways that are very problematic for mesh imported to SL. One of the major problems is that it splits things into many small objects, as you see here. While it is just about useable for a simple mesh like this, the problems become very serious for more complicated objects. In particular, the splitting up leads to inefficiencies that are punished by high Land Impact. It will also lead to problematic LOD behaviour (changes in appearance as you zoom out). So I would very strongly second Aquila's suggestion that you would do well to invest some time learning Blender instead.
  14. "Is there an easy way to sort the vertice list in a dae file (or in Blender) in an exact specific order?" In Blender, there is a Mesh->Sort Elements menu that offers a few ways of sorting. It does sort the order the vertices come out in the collada file, as used in my recent offset origin kludge (using sort by distance from cursor). Whether that could produce the ordering you need, I don't know. The coordinate sorts are relative to the 3D view. Together with the cursor-relative one, you could achieve some complicated sorts by sequentially applying these, I suppose, but only a few simple ones. Beyond that, I think you have to resort to scripts. I have done some of that for experimental purposes by using R, which is particularly good for large arrays, and has a function library for reading and writing XML. I was just doing the rounding we have discussed, but it gets more complicated for sorting because changing the order of the vertices means you have to change all the indices referencing them in the triangle or polygon lists. If the desired effect is sufficiently general that you could reuse the same script for all dae files, then once the script is written it is simple to apply. Otherwise it's a bit of a nightmare.
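The index-remapping part is the only fiddly bit, and it is the same whatever sort key you use. A minimal Python sketch (hypothetical helper; vertices as tuples, triangles as index triples) showing a reorder that keeps the triangle list consistent with the new vertex order:

```python
def sort_vertices(verts, tris, key):
    """Reorder verts by key(vertex) and remap every triangle index so the
    triangles still reference the same geometric vertices."""
    order = sorted(range(len(verts)), key=lambda i: key(verts[i]))
    remap = {old: new for new, old in enumerate(order)}
    new_verts = [verts[i] for i in order]
    new_tris = [tuple(remap[i] for i in t) for t in tris]
    return new_verts, new_tris
```

For example, sorting by x coordinate is `sort_vertices(verts, tris, key=lambda v: v[0])`; a distance-from-point sort (like Blender's cursor sort) just needs a different key function.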
  15. In my RL house, all the doors have exactly the same woodgrain pattern under the paint. That's because they are fake embossed hardboard underneath. It makes them look cheap and nasty to me (which, of course, they are). I am trying to decide whether I can afford to replace them with real wooden doors, because they annoy me too much. Nobody else notices. I think the same applies to objects and surfaces in SL. Avoiding repetition, even beyond simple tiling artefacts, can yield great improvement to the critical eye. But most eyes are not critical and you really have to weigh up the cost/benefit carefully.
  16. You mean the image size you construct the UV map on in Blender etc., as opposed to the actual size of the texture applied? I don't think the size of the former makes any difference. If I remember correctly, the UV coordinates are normalised for upload anyway. There could be a very small effect because exactly matching values after rounding might be more frequent for smaller numbers, allowing more efficient compression, but I doubt that ever has even an observable effect.
  17. Oh, but all the moss and algae and cobwebs and snail tracks ... :matte-motes-smile:
  18. Also, when you have multiple materials, the UV mapping of each can overlap. They don't interfere because each becomes a "face" in SL, with a different texture applied. So the UV map of each material can fill the whole UV unit square. That means you can have more texture pixels per unit of surface area, improving the texture resolution over what you can achieve if you cram all the surface into one UV map.
  19. "...so have no idea what Drongle's message means LOL). I pretend the camera isn't even there." Exactly. The camera isn't anywhere. Or rather, it's actually at a different place for each pixel in the bake. That's why the all-round lighting you describe works well to give you lots of shininess - there's usually a light source in the right sort of place for most of the pixels. But it will never give you the same distribution of shininess that you get with a render from a particular camera angle.
  20. "I need to have separate meshes for logical (texture) groups of the building." Do you need more than eight? You can use up to eight different materials in each mesh object, which can be independently textured. The materials don't have to be continuous. They can be spread over many discontinuous patches of the mesh surface. In fact you can now upload objects with more than eight materials, but then the uploader arbitrarily splits them into multiple objects, each with eight or fewer materials. I don't recommend this, as there are some nasty side effects. Better to break the model up yourself so that the results are predictable.
  21. Without "Analyze", you get a triangle-based physics shape*. Inworld, anything with a triangle-based shape which has any dimension less than 0.5m uses the default (un-analyzed) single convex hull for physics instead of the specified shape. That is the default convex hull you get when you set the shape type to "Convex Hull" inworld, but when the mesh is that thin, it is still used when you set it to "Prim". The triangle-based shape will still be there in the asset, but it isn't used. If you stretch it to 0.5m thick, then it will switch to the expected shape. The other way of making it have that shape is to add some geometry that makes it at least 0.5m thick while leaving most of it thin. By the way, if you are using a triangle-based (un-analyzed) shape for something thin like this, you should delete the narrow edges from the mesh in the physics shape file, because narrow triangles are heavily penalized in the physics weight and have little effect on the collision behaviour. The viewer physics shape display doesn't reflect this "secret" switch to convex hull. It's done by the server, and the viewer doesn't seem to know about it. To see the actual physics shape on the server, you can use the Build-Pathfinding menu: -Linksets to set the object to be a Static Obstacle (remember to Apply Changes); then -Rebake Region, to rebake the navmesh; then -View/Test and check static objects, to see the shape. Physics shapes of static objects will be red. You might have to move the object out of the way, or make it transparent, to see it. (Note that the collision shape of a static object will stay where it is if you move the object, until the region is rebaked again.) ETA: Tested the alternatives for your octahedral ring - If we make it 0.5m thick, so that the triangle-based shape can actually be used, the (un-analyzed) triangle-based physics weight with the mesh with removed edges is 0.5.
If the edges are left there, then the triangle-based weight is 7.2 and the analyzed weight is 2.9. So, if the thicker version is acceptable, there's a big saving using the un-analyzed shape. If it has to be thin, then analyzed is the only option. I also thought I would be clever - tilt both rings 1 degree so that the bounding box was 0.58m thick although the ring was still only 0.1m thick, then tilt it back inworld so that it was flat. Then we would have a shape with a hole in a thin ring. That was true, BUT it runs into an old problem/bug with the server physics weight calculation - the physics weight was more than 38000 !!! It got returned.
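The decision logic described above can be summarised in a few lines. This is a Python sketch of my reading of the server behaviour, not server code; the 0.5m threshold is the figure given above, and the function and argument names are mine:

```python
MIN_TRIANGLE_DIM = 0.5  # metres; below this the server substitutes the hull

def effective_physics_shape(dims, shape_type="Prim", analyzed=False):
    """Guess which collision shape the server actually uses for a mesh.
    dims = (x, y, z) dimensions of the object inworld."""
    if shape_type == "Convex Hull":
        return "default convex hull"
    if analyzed:
        return "analyzed hulls"
    # Un-analyzed triangle shape: silently replaced when the object is thin.
    if min(dims) < MIN_TRIANGLE_DIM:
        return "default convex hull"
    return "triangle mesh"
```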
  22. I don't know whether this will help. I am assuming your complaint is that the intense highlights of the rendered image do not appear in the baked texture. Depending on the details of the lighting, this may be predominantly a result of something that is often overlooked: baking does not use a fixed camera position. In a rendered image, specular highlights are very dependent on the camera angle, as well as the incident lighting. When an image is baked, there is no camera, and so there is no fixed camera angle. Instead, the lighting at each pixel is effectively rendered as if the camera was pointing along the normal at the corresponding point on the mesh surface. Except in the rare case that that happens to coincide with the camera angle in the rendered view, this means that specular reflections will not match what you see in the rendered image. So setting up the lighting and camera to give desired specular effects in the rendered view will not, in general, yield the desired highlighting in the baked image. I'm not sure there is a general solution to this problem. I guess you have to arrange the lighting with this constraint in mind, in some way that I don't know.
  23. The "correct" way to make sure the physics objects are associated with the right visual objects is by means of the object naming scheme described here (NOTE) in the "Uploading your own LOD files" section. Basically, each object in the physics file should have the name* of the corresponding object in the high-LOD mesh file, with the addition of the suffix "_PHYS". It used to be done differently. Originally it was done by the order of the objects in the two files. Then some code was added that sorts the objects by name. I think this is supposed to happen before the associations are set up. So it should be done on the basis of alphabetical ordering of the names, even if they aren't in the right order. The uploader is supposed to fall back to the old method if the naming convention isn't used, but the last time I tried it (a long time ago) this fall-back was not reliable. So you are definitely advised to use the naming convention. (See also the ImporterLegacyMatching note on the cited KB page). *Avoid spaces in object (and material) names - there is now some code that should deal with names with spaces, but I haven't tested it (the Blender exporter replaces spaces with underscores). NOTE: The KB page still talks about file names instead of object names. That is completely wrong. The file names are irrelevant. It is the object names that have to use the naming convention. (Does anyone know how to persuade them to correct this? I pointed it out in the comments over a year ago!)
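The naming rule is easy to dry-run on your own object lists before uploading. A minimal Python sketch (hypothetical helper reflecting my understanding of the convention, including the sort by name) that pairs physics objects with visual objects and reports any leftovers:

```python
def match_physics(visual_names, physics_names, suffix="_PHYS"):
    """Pair each physics object with its visual object by the _PHYS naming
    convention. Returns (pairs, unmatched_physics_names)."""
    visual = set(visual_names)
    pairs, unmatched = {}, []
    for name in sorted(physics_names):  # the uploader also sorts by name
        base = name[:-len(suffix)] if name.endswith(suffix) else None
        if base in visual:
            pairs[base] = name
        else:
            unmatched.append(name)
    return pairs, unmatched
```

Anything in the unmatched list would fall back to the unreliable order-based matching, so it is worth renaming before upload.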
  24. Do you have exactly one mesh object in the physics file for each mesh object in the visual mesh file? It's not clear from your description whether you are using, for physics, a set of cubes stretched/squeezed to fit each wall etc., or whether you are using a simple cube for each of multiple objects comprising your house (or whatever else). If, for example, your visual house model is all one object, but the cubes you made for physics are separate objects, then the uploaded model will just use the first one for the physics of the whole house, after stretching it to fill the bounding box. So that will leave you with an impenetrable block. Unfortunately, the mismatch in the number of objects is not recognised by the uploader. So it will give you the same hull and vertex counts (after Analyze) whether the cubes are separate objects or all combined into one object. If this is the problem, then you just need to combine the set of cube-derived objects into one mesh object (for each object comprising the house). Alternatively, if your house is multiple objects, and you are using an unedited cube as the physics of each, then for each object, the physics will fill the whole bounding box of that object. Thus if any of the objects is concave, so that the bounding box includes (part of) the interior of the house, then the physics will fill that space.
  25. What are the dimensions of your sculpt maps? If they are square, then there is nothing to be gained by having them larger than 64x64 pixels. If they are oblong, then they still don't need more than 4096 pixels, so 128x32, 256x16 etc. Both dimensions should be exact powers of two, otherwise conversion to the nearest powers of two will cause blurring by interpolation. Lossless compression (check box on the upload dialog), necessary to avoid compression blurring, is only available for images below a certain size (I think the max is 128x128).