Drongle McMahon

Everything posted by Drongle McMahon

  1. If the physics is triangle-based (not "Analyzed") and your editing reduces any dimension to less than 0.5m, the server secretly changes the physics shape type to Convex Hull, so that you can no longer go through the door. (You cannot see the change in the physics shape display setting. You can see it if you make the object a static pathfinding object and rebake the navmesh.) The solution is either to increase the small dimension, by stretching or adding geometry, or to "Analyze" the physics shape. (A quick way to check for sub-0.5m dimensions in Blender is sketched below.)
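Since the 0.5m limit is easy to overlook while editing, here is a minimal Blender sketch that flags the problem before export. It only assumes that the physics objects are currently selected; the threshold is the 0.5m figure mentioned above.

    import bpy

    # Warn about any selected mesh object with a bounding-box dimension
    # under 0.5m, which would trigger the forced switch to Convex Hull.
    for obj in bpy.context.selected_objects:
        if obj.type == 'MESH' and min(obj.dimensions) < 0.5:
            print(obj.name, "has a dimension under 0.5m:", tuple(obj.dimensions))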
  2. "so how can I use that ONE option to upload 8 different shapes?" Moving this to the top, as it's probably all you need... In other words: You must upload one physics shape Object for each mesh Object in the main File. Different Materials within a single Object do not have separate physics Objects. However, since the mesh of a single Object can have disconnected "pieces", you can have multiple "pieces" for each Object, which may or may not correspond to different Materials in your main Object. Indeed, if you are going to use "Analyze" in the physics tab of the uploader, it is recommended that your physics Objects) each consist of non-overlapping convex "pieces". tldr version... Let me try to be more clear.... I think confusion here usually stems from ambiguity in the terminology used to describe the "parts" of mesh models. So I will try to define that better first. In Blender, there is a hierarchy of entities. Let us call the whole thing you are making (eg your house) a Model. In Blender (etc.), a mesh Model can be built from one or more mesh Objects. In each of those Objects, polygonal faces will be allocated to one or more Materials,each of which is a collection of polygons that will receive the same texture, colour etc. It is important to note that the polygons in (allocated to) a Material do NOT have to be contiguous. They can be discontinuously dispersed over the surface of the Object. Similarly, the polygons in an Object don't have to be all joined up. So one Object can consist of several unconnected "pieces". Neither do the Objects that comprise a Model have to be connected. This is where the misunderstanding comes from when we use terms like "pieces", as it is not clear exactly which level of the hierarchy we are referring to. When we want to upload the Model to SL, we have to present it to the uploader in one or more Files. The primary File, the one we open the uploader with, becomes the highest LOD (level of detail)of the upload and appears inworld as a Linkset. It can contain multiple Objects, which become the multiple linked Prims. Each of the Prims can have multiple Materials, which become multiple independently textured Faces inworld. We don't have to upload our entire Model in on File. Instead, we can upload multiple Files, each with a single Object. The we can assemble the Linkset inworld from the separately uploaded Prims. The same is NOT true for the Materials of a single Object. They cannot be separated into different Files. If we want to do that, then they must be "promoted" to being separate Objects. In the uploader, there are slots where we can specify additional Files that are used together with the main File to specify non-default aspects of the uploaded Model. Three are for different LOD (level of detail) versions, and the fourth is for the physical and collision behaviour. There are important requirements for these Files (to work properly), whether we uploade our Model's Objects all together in one File, or separately in different Files. For each uploader invocation, each of these subsidiary Files MUST contain the same number of objects as the main File, so that the uploader can associate one Object from each with the corresponding Object from the main File. To ensure the correct associations, we should use the naming convention whereby all of the Objects in each subsidiary File are names with the same name as the corresponding Object in the main File, with the specified suffix (e.g. "_PHYS" for the physics File). 
So all Objects in any subsidiary file should have the same suffix.
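As a concrete illustration of that naming convention, here is a minimal Blender sketch. It assumes the physics copies are the currently selected objects and that "_PHYS" is the suffix you want; both are placeholders for your own setup.

    import bpy

    SUFFIX = "_PHYS"  # suffix the uploader's naming convention expects

    # Append the suffix to every selected object that doesn't already have it,
    # so "Door" becomes "Door_PHYS", "Wall" becomes "Wall_PHYS", and so on.
    for obj in bpy.context.selected_objects:
        if not obj.name.endswith(SUFFIX):
            obj.name += SUFFIX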
  3. This is (almost certainly) an instance of the long-standing problem with triangle-based physics weight calculations. For example, see post 19 onwards in this thread. This was reported as a bug (BUG-965), but left unfixed because of reluctance to change the physics weight calculation. You might be able to fix it by editing the collada file to make the offending coordinates exactly the same, as described in the thread (a rough way to script that is sketched below). ETA: Aquila beat me!
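If you don't want to edit the coordinates by hand, something along these lines could snap nearly-identical values together by rounding every number in the file's <float_array> elements. The file names and the 4-decimal precision are assumptions, and you should keep a backup of the original .dae.

    import xml.etree.ElementTree as ET

    COLLADA_NS = "http://www.collada.org/2005/11/COLLADASchema"
    ET.register_namespace('', COLLADA_NS)

    tree = ET.parse("physics.dae")          # hypothetical input file
    for arr in tree.iter("{%s}float_array" % COLLADA_NS):
        # Round every coordinate so values that differ only by tiny
        # amounts become exactly equal.
        values = [round(float(v), 4) for v in (arr.text or "").split()]
        arr.text = " ".join(repr(v) for v in values)
    tree.write("physics_rounded.dae")       # hypothetical output file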
  4. "What are the mesh items? Where can we see them?" Yes. That would be necessary as a minimum for any really useful comment. Even better would be to see the collada file(s) and know exactly what options and parameters were used in the uploader. ETA - actually, Theresa's comment below is really useful without that info. So my statement is not accurate. However, being able to look at it would quickly reveal whether it is indeed the effect of excessive triangle counts.
  5. Just uploaded a model with physics "From file", analyzed and non-analyzed, on Mesh Sandbox 3. Prim type could be set as expected in either case and behaved as expected.
  6. Hope that's ok. Have the output image open in the UV/Image editor window when you do the bake, then do Image->Save As from its menu after baking. (Same for getting the AO image out after its bake, after which open it in the lower left Image Texture node.)
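If you prefer to do that save step from Blender's Python console instead of the Image menu, a minimal sketch might look like this (the image name "AO_bake" and the output path are assumptions):

    import bpy

    img = bpy.data.images["AO_bake"]    # the image that received the bake
    img.filepath_raw = "//AO_bake.png"  # "//" is relative to the .blend file
    img.file_format = 'PNG'
    img.save()                          # write the baked result to disk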
  7. If you are using the Cycles renderer, and know how to use nodes, it is very easy to combine textures inside Blender. Here is an illustration for a simple case, where the two input textures and the output texture use the same UV map. The first image node is the granite texture. The second is the (baked) AO map. They are combined using a Color->MixRGB node, set to Multiply. (This node also provides many other methods of combining textures.) The disconnected image node on the right is the one that receives the baked images. It has to be selected, as shown, when the bake is done.

In the middle are three images baked out with this setup (parameters below). First the AO map - this is then saved and opened in the second input image node. Then the diffuse color map before connecting the AO map input (or after setting the multiply node factor to 0). Then the diffuse color map after connecting the AO map and setting the multiply factor to 1.00. The last is the output you want.

There are lots of other ways of getting similar output, especially using the Combined bake method. However, I couldn't find one that was exactly the combination shown here. You can also achieve the same effect using two images as inputs for a material in the Blender renderer. There are many ways this can be extended, using different Mix node methods, adding nodes to modify either map, or using different UV maps for different input and output textures. There have been a few threads here describing the use of different UV maps, and the precautions needed to make sure you get the right one in the exported collada file. You might find them by searching for "Cycles". (A scripted version of this node setup is sketched below.) ETA ... since the magnifier doesn't work in Firefox ...
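For anyone who would rather build that node setup from a script than by hand, here is a minimal sketch under the same assumptions: a Cycles material from the Blender 2.7x era (so the default "Diffuse BSDF" node exists), and three images named "granite", "AO_bake" and "baked_out". All of those names are placeholders.

    import bpy

    mat = bpy.context.object.active_material
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links

    # Two input image nodes: the diffuse texture and the baked AO map.
    granite = nodes.new('ShaderNodeTexImage')
    granite.image = bpy.data.images["granite"]
    ao = nodes.new('ShaderNodeTexImage')
    ao.image = bpy.data.images["AO_bake"]

    # Multiply them together with a MixRGB node and feed the diffuse shader.
    mix = nodes.new('ShaderNodeMixRGB')
    mix.blend_type = 'MULTIPLY'
    mix.inputs['Fac'].default_value = 1.0
    links.new(granite.outputs['Color'], mix.inputs['Color1'])
    links.new(ao.outputs['Color'], mix.inputs['Color2'])
    links.new(mix.outputs['Color'], nodes['Diffuse BSDF'].inputs['Color'])

    # A disconnected image node to receive the bake; it must be the
    # active node when the bake is run.
    target = nodes.new('ShaderNodeTexImage')
    target.image = bpy.data.images["baked_out"]
    nodes.active = target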
  8. Yes, and to be even more clear, perhaps - You can't put different LOD (including physics) models in one file. The LOD step (or physics) for all the objects in one file is always the same. Then you specify the file for each LOD/physics in the appropriate slot in the upload dialog. If you want to use the naming convention to ensure the correct associations between different LODs/physics, for multiple objects, then all the files should contain the same number of objects, and all the objects in each file will have the same suffix (bit added after the name). For the file with the physics objects, this will be "_PHYS", so that the names are "name1_PHYS", "name2_PHYS", "name3_PHYS", ...., "name1234_PHYS", ..... (It is not a very good idea to have 1234 objects in one file though!)
  9. The official way to do this is by using the new naming scheme for your models. See this KB article; the relevant section is titled "Uploading your own LOD files". That's mainly about LOD models, but in this limited context, the physics shape file is counted as another LOD. Each object in the physics mesh file needs to have the same name as the corresponding object in the high LOD file, with the suffix "_PHYS" added. If you adhere to this convention, the correct associations will be preserved. Otherwise, there are fallbacks to the old method, which don't work infallibly.
  10. I'm not sure, but I don't think the navmesh will show you dynamic physics changes like these that are affecting your buildings. It's more accurate than physics shapes view* for objects that you have made into static obstacles, via the pathfinding menu, but I think that's all. You have to force a rebake and download the navmesh even for that. So it probably wouldn't have helped here.
  11. "If your door script uses a llTargetOmega" Very likely the case, as it did produce a nice slow, smooth rotation.
  12. Not really explained though. It looks like something the particular no-mod (i.e. can't look at it) door script is doing to the physics of the linked mesh, but it had all sorts of different effects with the different meshes that Aquila made. Sometimes it looked like a switch to convex hull, sometimes phantom or None. Sometimes transient, sometimes lasting. I think it might have something to do with making the door physical. That needs a vehicle expert to offer an explanation, which I am not. Other door scripts didn't have the same effect in the same door.
  13. "Blender is not complicated..." I think you just demonstrated the converse. Thepreferences choices are just too many and varied for me to risk getting into, especially as I would certainly forget what changes I had made. My brain is already well past overflowing with the unchanged defaults. :matte-motes-sour: Still, that does show again how powerful it is for those who can make more effort than me.
  14. Like most (all?) tool options, it's sticky during one session - so once you've turned it off, it stays off until you turn it back on - but I didn't see a way to keep it off between sessions. I usually have to toggle it on and off for different uses. So I don't really mind that it has a default. Mostly, if it's off, I end up with lots of extra vertices in the middle of what should be clean edges. So I prefer the on default.
  15. Tried, assuming this is a plane. You get the result you show if you leave the "Dissolve vertices" option checked in the tool properties. (It seems to be checked by default). If you uncheck it, the vertices remain, and you have a concave six-sided n-gon. I saw no differences between 2.77a and 2.78 here. ETA ...as Gaia said :matte-motes-confused:
  16. "you are stuck with 1024 vertices, nothing more and nothing less" Not quite true. Although of somewhat limited use, smaller sculpt maps yield fewer vertices. The oblong maps with 1024 verts are really very useful though. Details of all the numbers are in this very old post (post 40 in thread).
  17. Don't know Maya. So you'll have to hope for an answer from a Maya user. It certainly looks like that to my ignorant eye. This kind of thing arises in Blender when you use one UV mapping to apply textures to various parts of the mesh, then another to bake the whole lot to one image. That's when both can get exported to the collada file and the wrong one may get used by SL. I would not be surprised if the same sort of thing can happen with Maya.
  18. I can think of only three ways you could get the whole of the texture shown to show up on just the top surface of the stool. [1] You have a single UV map in the dae file, and it's the one illustrated in your post, but you have adjusted the repeats and offsets of the texture in SL so that the whole of it appears instead of just the part that's supposed to be there. [2] You have used Planar mapping instead of default mapping on the texture tab in SL. [3] The UV map being used is not the one shown in your post, but one in which the whole UV area is occupied by the top of the stool. That is most likely to happen if the dae file carries multiple UV maps and the wrong one is chosen. It might also be that there is only one, but the wrong one.

You can find out whether your dae file has multiple maps by looking at it in a text editor and searching for all occurrences of "TEXCOORD". Each of these should be in a line something like this...

<input semantic="TEXCOORD" source="#Cube-mesh-map-0" offset="2" set="0"/>

The quoted text after "source=" (here "#Cube-mesh-map-0") is a reference to the UV map. If there are instances with differences in this text, then you have more than one map. If you search for these references without the "#", you should find for each a different <source> section, which contains the actual UV map coordinate data. In the case shown, this is ...

<source id="Cube-mesh-map-0">
  ....data here
  <technique_common>
    <accessor source="#Cube-mesh-map-0-array" count="56" stride="2">
      <param name="S" type="float"/>
      <param name="T" type="float"/>
    </accessor>
  </technique_common>
</source>

Note that the "#" is missing, which is why you have to omit it. It's just collada's way of indicating that this is a reference to somewhere else in the file. (A small script for listing the UV maps in a dae file is sketched below.)

ETA: Some readers may be confused because it is often thought that a UV map is an image. It isn't. It is simply a list of the coordinates of points in texture space that correspond to vertices and polygons* of the mesh geometry. These numbers guide the stretching of the texture over the mesh surface. They are often illustrated as an image of a distorted 2D version of the mesh in the texture space, and that is often superimposed on an image of the texture to be applied, as shown in the OP. Sometimes the texture itself is referred to as the UV map, but this can be misleading when technical issues need to be discussed. *ETA added "and polygons"
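If searching through the file by eye gets tedious, a small sketch along these lines could list the distinct UV map sources referenced by TEXCOORD inputs (the file name "stool.dae" is an assumption):

    import xml.etree.ElementTree as ET

    COLLADA_NS = "{http://www.collada.org/2005/11/COLLADASchema}"
    tree = ET.parse("stool.dae")   # hypothetical file name

    # Collect the "source" attribute of every TEXCOORD input.
    sources = {inp.get("source")
               for inp in tree.iter(COLLADA_NS + "input")
               if inp.get("semantic") == "TEXCOORD"}

    print(len(sources), "distinct UV map reference(s):", sorted(sources))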
  19. Just a guess, as I don't use Maya. Have you used more than one UV map? Maybe one for applying the texture and the one shown for baking. If you export more than one map into the dae file, SL will use only one of them. So you have to remove the unwanted UV map(s) first. Do it after saving, so that you don't lose your other maps.
  20. Quite reasonably, there is never enough time to implement all possible nice features, and bugs have to have much higher priority. Then, once the system is released, there has to be overwhelming benefit for any changes that will affect existing content. It's just the necessary and inevitable winnowing process, I guess. I never moved it into the new jira, which means it's not possible to consider now. I suppose someone else could do a new equivalent one. I wouldn't be optimistic though. In some ways, it would just make things even more complicated!
  21. "The LoD factor skews this estimate so what you essentially do is bypass the calculations and induce higher LI and render cost than the nominal ones." If you are implying that the download weight (and thus usually the LI) calculation depends of the setting of RenderVolumeLODFactor, I don't think that is the case. The calculations that remain in the viewer code essentially assume a fixed value of 1 (i.e. it is ignored). While the calculations are now done by server code we can't see, I am not aware of any differences being introduced since it was moved. It would indeed be interesting to see the effect of introducing the factor into LI. That would penalise higher settings, even causing the return of items as it was raised. Instead, I am pretty sure you mean it introduces the resource consumption effects that would have caused increased LI and render cost if the calculation didn't ignore it. Just trying to resolve this slight ambiguity. The suggestion of customisable LODFactors, or the equivalent customisable switch distances, as a property of the mesh asset design, was raised along time ago. I think it's in an old jira*, which I will look for. If not, I am sure it was raised at a content creation meeting, perhaps as long ago as the closed beta. I think it's a pity it wasn't adopted. It would have been a valuable tool for mesh creators. It would have been very easy to include it in the download weight calculation. *ETA Yes CTS-631, but that's an old mesh beta one, in discontinued jira section, June 2011. I don't suppose many will be able to see it, and I can't see a way to change it's visibility.
  22. If you are interested in the LOD techniques used, or if you want to examine the LODs before buying, you can see what's going on much better by manipulating RenderVolumeLODFactor. Setting it to 0 will guarantee you see the lowest LOD model. Then, as you raise it towards normal levels, you should see three switches to higher LODs (low, medium, high). (Some switches may be missing if the creator has used "Use LOD above" while uploading.) A bit of zooming in and out after each adjustment can help with missing switches because of the "hysteresis" deliberately used by the viewer. This process can be quite revealing of the effort and skill that have been used in making the LOD models.
  23. Ah. That explains a lot. Sketchup works really badly for SL.
  24. Looking at the UV mapping, maybe built inworld from Prims, then exported to dae, then subject to partial simplification in Blender without preserving or remapping UV maps.
  25. Thanks Aquila. I was just beginning to do a similar optimisation and physics mesh from the dae file. Now I don't have to. Phew. I will say a few things about the file though. It has some disconnected edges that appear in the collada as <lines>. I think these are probably ignored by the uploader, but they should be removed (a quick way to strip them is sketched below). It has tons of completely hidden faces and a huge amount of other redundant geometry. If it's used as the physics shape unanalyzed, it produces a "degenerate triangles" error, which means triangles with zero area. I'm not sure where these come from. I guess all these problems could be from conversion from prims. So all that needs correcting as well as the excessive materials. ETA: In fact, I think it would be much easier to make a completely new model, using the imported mesh as a plan, than to do all the necessary decimation. I expect it's the UV mapping that would give the most trouble for someone not familiar with it.
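Rather than hunting for those stray edges in Blender, a rough sketch like this could strip the <lines> elements from the collada before upload (the file names are assumptions; keep the original file):

    import xml.etree.ElementTree as ET

    COLLADA_NS = "http://www.collada.org/2005/11/COLLADASchema"
    ET.register_namespace('', COLLADA_NS)

    tree = ET.parse("building.dae")          # hypothetical input file
    for mesh in tree.iter("{%s}mesh" % COLLADA_NS):
        for lines in mesh.findall("{%s}lines" % COLLADA_NS):
            mesh.remove(lines)               # drop the disconnected edges
    tree.write("building_nolines.dae")       # hypothetical output file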