Drongle McMahon

Everything posted by Drongle McMahon

  1. Here's one more picture. Some outside/inside combinations. Highlighted yellow cut lines are those that coincide with the cuts when both profiles are the same shape. All lines are at steps of 1/40 of the perimeter lengths, corresponding to spinner clicks. I also checked in the source code that it does indeed linearly interpolate along the perimeter.
  2. Here's another way of looking at the cube with the triangular hollow. The spinners in the edit dialog inworld increment the cut by 0.025 (1/40) per click. The lines on the sections in the first picture connect the points at each 0.025 step on the perimeter of the cube and the triangle, starting from the highlighted zero position. These are where the cuts appear at each step. The blue parts show the remaining solid with the beginning and ending cut parameters shown. These are the four that would give nice square-cut quarter cubes if the hollow were square. (Actually, the bottom right one has to be two pieces, as the part remaining after the cut can't pass across zero.) The top left is the one from the original example in the thread. The four cubes with these cut parameters are shown inworld in the next picture (the bottom right one is two prims). It is clear that they exactly match the sections above (which were constructed in Blender). None of them has nice cuts along radii from the centre.
  3. The cut parameter is a proportion of the perimeter of the profile, for both the outside and inside. This is not the same as that proportion of the 360° angle around the centre, except in the case of the circle. Presumably the reason for choosing perimeter rather than angle is that it avoids uneven stretching of textures (although that could theoretically have been corrected with some complicated non-linear UV map calculations*). The result of that choice is that the angles are not generally the same for a given proportion of the perimeter when the outside and inside profiles are different shapes. Consequently the faces bridging the cuts are not generally aligned along radii from the centre. In the picture below, there are a circle, a square and an equilateral triangle with the same centre and with the point at 1,0 (right middle) coinciding. That is the zero point for the cut. The perimeters are subdivided into the same number (24 here) of equal lengths, and the dividing points are connected to the centre. You can see that in most cases the equivalent subdividing radii do not coincide between different shapes, so the cut between these positions will be angled. There are exceptions, including the zero point. In particular, the radii to the corners and the middles of the sides of the square all coincide with equivalent radii of the circle, so circle-square combinations always have nice "square" cut edges at these angles. In contrast, the triangle radii coincide with those of the circle only at the three corners and directly opposite the zero point, and with those of the square only at zero and exactly opposite. So there are rather few nice "square" cuts if either profile is a triangle. I suppose the "reason" for the orientation of the triangle is simply the result of starting both profiles off at the same zero point. (The little calculation after this post makes the perimeter-vs-angle difference concrete.) *ETA: On reflection, I think this would only work approximately, as the correction has to be discontinuous. So it probably really is necessary for acceptable texturing of the inside of the hollow. ETA2: At first sight, it looks as if there is another triangle-square coincidence at about +/- 100 degrees, but if you count the radii going round from the zero point, this is the sixth for the triangle and the seventh for the square. So in fact, this is nearly the most extremely non-squared-off cut. ETA3: Oops, a minor correction... there are more tri/circle coincidences than tri/square.
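To make the perimeter-vs-angle point concrete, here is a small plain-Python check (my own sketch, not the viewer's code) comparing the angle reached at each 1/40 spinner step on a circle with the angle reached on a square sharing the same centre. The square parametrisation starting at (1, 0) and going counter-clockwise is my assumption, chosen to match the zero point described above.

```python
import math

def square_point(t):
    """Point at fraction t of the perimeter of a square of half-width 1
    (side 2, centred at the origin), starting at (1, 0), going CCW."""
    s = (t * 8.0) % 8.0                       # total perimeter is 8
    if s < 1:  return (1.0, s)                # right side, upper half
    s -= 1
    if s < 2:  return (1.0 - s, 1.0)          # top side
    s -= 2
    if s < 2:  return (-1.0, 1.0 - s)         # left side
    s -= 2
    if s < 2:  return (s - 1.0, -1.0)         # bottom side
    s -= 2
    return (1.0, s - 1.0)                     # right side, lower half

# Compare the angle reached at each 1/40 cut step on a circle vs a square.
for step in range(0, 11):
    t = step / 40.0
    circle_angle = 360.0 * t                  # circle: angle is linear in t
    x, y = square_point(t)
    square_angle = math.degrees(math.atan2(y, x)) % 360.0
    print(f"t={t:.3f}  circle {circle_angle:6.1f} deg  square {square_angle:6.1f} deg")
```

Running it shows the two angles coinciding at t = 0 and at the square's corners (e.g. t = 0.125 gives 45° for both), and diverging everywhere in between, which is exactly where the angled bridging faces come from.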
  4. In the picture you show, it looks as if you have made a physics mesh (as you have selected From file). It isn't clear whether you are going to upload this without clicking "Analyze" or after clicking it. If you don't "Analyze", you will have a triangle-based physics shape. This should work, but with the small triangles for the trunk in your physics mesh, it might have a high physics weight, which might lead to high LI. If you do "Analyze", you will get a hull-based shape - the uploader will make a set of convex hulls approximating your shape. Looking at your mesh, this might be expensive too. If you are going to use a hull-based shape, the most effective way to get a good shape is to use a model that is already a collection of non-overlapping convex hulls. That gives you the closest control. If you have to start using the Analyze parameters instead, it becomes difficult to avoid the filling in of parts where you don't want it. The picture of the physics shape display is not very clear because of the thing in the background, but I think it may be revealing why your collision shape isn't what you expected. Whatever you do on the physics tab in the uploader (including doing nothing), the uploader always makes a default convex hull for the whole (of each) mesh object. This is what you get if you leave the physics shape type set to "Convex Hull". It is NOT the same as the shape generated by "Analyze". It has all the concave spaces filled in. It's not what you want here. To use either triangle-based or Analyzed, hull-based shapes, you have to set the type to "Prim". Whether it's triangle- or hull-based then depends on what you did in the uploader. You can't switch between those after upload. The shape you show for the trunk looks like the default convex hull. So my guess is that it is set to type "Convex Hull" when it needs to be "Prim". Over-use of the Analyze simplification parameters could have a similar effect. I can tell it's not a triangle-based shape because those show the edges of the triangles in physics shape view. So if you didn't Analyze, then it's certain that you have the type set to Convex Hull. For your tree trunk, my physics shape would be something like this, using Analyze to make a hull-based shape. This is four simple non-overlapping (gaps exaggerated) boxes, each a convex hull (one way to set them up is sketched below). After Analyze, this should give 4 hulls and 32 vertices, and the uploader will report that at the bottom. That should give a physics weight of 1.44, which won't increase the LI above 1. ETA: I see I was too slow, as you found the answer already. :matte-motes-frown:
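For anyone who wants to build that four-box physics model in Blender by script rather than by hand, here is a minimal bpy sketch; the box sizes, the gap, and the names are illustrative assumptions, not taken from the model in the thread.

```python
import bpy

GAP = 0.05          # exaggerated gap so the hulls never overlap
BOX_HEIGHT = 1.0    # illustrative trunk-segment height

# Four simple, slightly separated boxes stacked up the trunk; Analyze
# should turn each into one convex hull (4 hulls, 32 vertices in total).
for i in range(4):
    bpy.ops.mesh.primitive_cube_add(
        location=(0.0, 0.0, i * (BOX_HEIGHT + GAP) + BOX_HEIGHT / 2.0))
    box = bpy.context.object
    box.scale = (0.3, 0.3, BOX_HEIGHT / 2.0)   # thin, trunk-sized boxes
    box.name = f"trunk_phys_{i}"
```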
  5. "How to set up my blender mesh & UV so that..." Material in Blender = Face in SL. Foe each texture you want to apply, select all the faces, make a new material and click the assign button. Set each material colour different to see what you are doing. This is MUCH easier using the Blender rendering instead of the default Cycles rendering (button at top to select). The UV maps for each material can overlap, or not, depending on how you want to use the, If you want them non-overlapping, select all faces and unwrap. If you want overlapping maps, to maximise detail for each, the select and unwrap each material on its own. There are lots of different ways to arrange the UVmapping, and care will be needed to align and scale UV islands if you want people to be able to use general-purpose (tileable) textures. In SL, you can apply textures by dragging, with select face, or with scripts, exactly the same as for the faces of a legacy box prim. There are dozens of youtube tutorials for UV mapping in Blender. Use search tools there.
  6. "updates he didnt even know had been downloaded to him" Neither did he know where they came from. It's the p2p updates that would concern me most. Does precedent really allow us to believe MS security it so tight that nobody will find a way to inject malware as it passes through their machine? - Malware that thereby gets inserted into the heart of the OS? Oh yes, I was forgetting - with the default data harvesting, Windows 10 is malware already. So there's nothing left to worry about. :matte-motes-dont-cry: The only solution I can see is to fork out the fee for W10 Pro, which claims to be free of these user-exploitation hacks, but can we even trust that claim? :matte-motes-impatient: ETA - does W10 have a malware removal tool? If so, does it remove itself?
  7. Here are the promised pictures. There is more to learn, but that can wait. First, with separate objects for the trunk and foliage. The picture is made from superimposed pictures of solid, wireframe and bounding box views in Blender. It shows one way of making the physics shape. The trunk is a proper shape to give the collision behaviour you want. The single triangle is the shape for the foliage. Its only purpose is to fill the bounding box. In fact, it really doesn't matter what this shape is, because once it's inworld, you are going to make sure the trunk is the root of the linkset and then set the foliage to physics shape type "None" so it is ignored by the physics engine. Keeping it simple though will help to minimize download weight, and thence LI. You do need to make sure the objects in the physics model file get associated with the right objects in the visible mesh file. At the moment this is done simply by the ordering in the collada file. In Blender this can be controlled by using Sort by Object name in the exporter (after giving the objects appropriate names). I don't know how to do that in other software. This is different in the Project Import development viewer, where (last time I looked) special naming conventions are used to match objects in LOD and physics files. Second, the two objects have now been combined into a single object, with two materials for trunk and foliage. Now that there is only one object instead of a linkset, the single prim is the root prim, and it can't be set to physics shape type "None". So the big triangle would be a problem. It is replaced by two tiny triangles that are just enough to make sure the physics shape fills the bounding box, so that the important trunk part doesn't get distorted by stretching to fit. (There is a way of making the two little triangles invisible to the physics engine, by making sure their normals point away from the rest and using the "Solid" option of Analyze, but they won't be a serious problem here anyway.) You will need to "Analyze" a shape like this. A triangle-based shape, which you get by not Analyzing, hates small triangles, which will therefore cause high physics weights and LI. ETA: added sentence about Project Import viewer.
  8. "Also, why when I attach all together in one element the things get more primmy ... like 12 prims impact, while if I leave separate 5 prims impact?" The LOD switch distances depend on the size of the object, and the download weight then depends on those switch distances. The download weight (usually ending up as LI) calculation takes that into account. In effect, the high LOD version is seen from larger distances when combined into one larger object than when it's in several smaller objects that switch much closer. So the combined object gets downloaded and rendered more often (a rough numerical illustration follows below). The LI is taking the increased resource use into account. You are getting improved appearance at longer distances and avoiding uncoordinated LOD switching as the reward.
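A rough illustration of the size effect, under the simplifying assumption (mine, not LL's published formula) that each LOD switch happens at a distance proportional to the object's bounding radius; the constant and the radii below are arbitrary.

```python
# Toy model: assume the high->mid LOD switch happens at K * bounding_radius.
# K and the radii are illustrative assumptions, not SL's real values.
K = 4.0

def switch_distance(radius):
    return K * radius

separate_radii = [0.5] * 5   # five small linked parts
combined_radius = 1.5        # one combined object with a bigger bounding box

print("separate parts drop to mid LOD at ",
      [switch_distance(r) for r in separate_radii], "m")
print("combined object drops to mid LOD at", switch_distance(combined_radius), "m")
# The combined object stays at high LOD out to 6 m instead of 2 m, so its
# full-detail mesh is downloaded and rendered from much further away.
```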
  9. "Hello I am using 3dsmax and the latest test mesh viewer from LL ..." Again, I'm not sure which viewer that is. Can you paste the top line from the box that appears when you do Help > About Second Life in the viewer before you log on? I ask because there is a development viewer, "Project Importer", that has some big differences in mesh upload, especially for materials and naming of LOD and physics models. If that is what you mean by "latest test mesh viewer", then there are different issues to consider. I will stick to the current release viewer for now. "as for the collision so far I tried to upload a mesh that I exported as 6 objects, (or 4 recently) that use one multimaterial with 4 submaterials. this because I wanted to fine tune the 3 objects that share the same material with different coloring, but then I opted to make them all one object and so I exported as 4 in total." I am unfamiliar with the concept "multimaterial". In the exported collada, there are just materials. Each mesh object is allowed up to eight different materials. The parts of the mesh assigned to each material will end up as an independently textured face in SL, which can have its own texture, colour and animation. So it sounds as if you can combine the whole model into a single mesh. "since my model is a tree I only need the collision for the main trunk, how am I supposed to make the collision for the other stuff? that should be passthrough?" As I said elsewhere, you need a collision model for each mesh object, or the uploader will make one for you. If you keep the other parts as separate meshes, and make the trunk the root of the linkset, then you can set them to physics type "None" inworld. You can't set the physics type of the root prim to "None", and you can't set different physics shape types for different materials. "Should I make just a triangle? And how do I tell the SL engine which collision corresponds to which object? will I have just to use the same materials on each or name them in a certain fashion? Also how do I make only some of those collideable (actually just the trunk) and the rest no collision at all?" Remember that the physics shape of each mesh object is stretched to the size of the visible object. If you combine all into one mesh, you can use a solid box or cylinder for the trunk physics, but you will need to add at least two triangles in the top opposite corners of the bounding box to make the shape fit the whole tree, so that it isn't stretched. (pic coming). If you keep separate objects, then for the ones destined to be physics type "None", you can use a single triangle, ignoring how it's stretched because you aren't going to use it anyway. In fact, you can probably let the uploader generate them, for the same reason. (If they are complex, though, that may add to the download weight.)
  10. "my mesh is made of 6 faces or parts , 4 materials , 4 lods" Not sure exactly what you mean here. Don't know what software you are using. ??? A mesh model in the 3D program should be model < objects(meshes) < materials* < faces(polygons)>>> In SL, these become Linkset < prims < faces < triangles >>> The differences in terminology can be confusing. In the LOD and physics models, there has to be one object for each object in the high LOD (reference) model. If there aren't, the uploader will use the default (low LOD) mesh for each object where it doesn't find an object in the physics model. So if you want to control the physics shape completely, you need to have a separate mesh object for each of your high LOD objects. You can't just use one box for the whole collection. Note also that the physics mesh for each object will be stretched/squeezed to fit the bounding box of the corresponding high LOD object, not the whole model. If you make the physics model just a collection of identical cubes, one for each high LOD object, then each will end up with a physics shape fitted to its bounding box (which sounds like good enough for what you want?). *the same material properties may be re-used on different objects, but this doesn't affect the data structures uploaded to SL, where that common reference is lost.
  11. The faces receiving one (animated) texture in SL are collections of polygons assigned to different materials in the 3D authoring software. If there is only one material, the default, then the mesh has only one face, face 0. To be able to apply textures and/or animations to different parts of the mesh, you have to create (a maximum of eight) different materials in your 3D authoring program, and assign the appropriate mesh polygons to them. These then appear as numbered faces in SL. That applies to a single mesh object. If the leaves and the rest of the tree are separate (linked) objects, then of course you can texture/animate them independently, but each will be only face 0 unless it has multiple materials.
  12. If you log in on the jira page retrieved by your second link, you should see "+ Create Issue" at the right of the tab bar near the top, just before the search block. Before you use that though, are you sure you didn't have any relative external references in the collada file, such as those to any textures included in the upload? Those could become invalid whenever you move the file, irrespective of the path nesting level.
  13. First: From your description of the things you tried, I would guess your original detail model had more than 21844 triangles in one material. In that case, the uploader starts a new material without telling you. If this happens with your high LOD but not your lower LOD, because the latter has fewer triangles, then the two LODs no longer have the same number of materials, and you get the error message. There are other, more subtle, problems that also happen as a result of the hidden material additions. The solution is to never use more than 21844 triangles in any material of a model (a quick pre-export check is sketched below). This is presumably what happened when you reduced the poly count. Second: All lower LOD meshes, and the physics mesh if supplied, are always stretched and/or squeezed along the x, y and z axes until they fit into the same bounding box as the corresponding "reference" high LOD mesh. That explains why your hidden triangle expanded. To avoid this you have to include geometry that extends to the edges of the bounding box. (There are devious ways around this, including by manual editing of the collada file to include unreferenced geometry, but I won't elaborate, as these exploit bugs that may get corrected). ETA: if you want the grisly details, you can look in the jira at BUG-9015 and BUG-8987.
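For Blender users, here is a minimal bpy sketch of that pre-export check, counting triangles per material on the active object; the 21844 limit is the one described above, everything else is illustrative.

```python
import bpy
from collections import defaultdict

MAX_TRIS_PER_MATERIAL = 21844   # the limit described in the post above

obj = bpy.context.object
tris = defaultdict(int)
for poly in obj.data.polygons:
    # An n-gon triangulates to n - 2 triangles.
    tris[poly.material_index] += len(poly.vertices) - 2

for slot, count in sorted(tris.items()):
    name = obj.material_slots[slot].name if obj.material_slots else "(default)"
    flag = "  <-- will be silently split by the uploader!" \
           if count > MAX_TRIS_PER_MATERIAL else ""
    print(f"material {slot} ({name}): {count} triangles{flag}")
```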
  14. You haven't mentioned whether the different LI effects are due to different download weights or different physics weights. You can determine this after uploading the whole set by unlinking and using the More Info button. However, one possible explanation could affect either of these weights. The default LOD generator is non-deterministic. That is to say, it can produce different results each time it is run on the same input data. This means you can get different low LOD meshes and thus different download weights. Now the default, convex-hull-type physics shape is made by default from the convex hull of the low LOD mesh. If you specify a physics mesh, either by choosing one of the LOD meshes or by a new file, then it will be the convex hull of that mesh. It looks like you have specified the lowest LOD mesh. Then the default convex-hull-type shape may be different when the LOD generator generates different results each time for that lowest LOD mesh. The same may be true for the prim-type shape generated after you click the Analyze button. You might be able to see differences in physics shapes by using the Develop > Show Metadata > Physics Shapes menu. The solution to this problem (and some others) is to make your own meshes for the lower LODs and physics. The optimal meshes for LOD and physics are rarely the same. At the moment, the important thing for LOD and physics files for multi-object models is that the corresponding objects in each file are in the same order, as the uploader uses the order to decide which mesh goes with which in the LOD/physics files. In Blender, you can assure this by using a naming scheme that keeps the right alphabetical order, and checking the Sort-by-Object-Name option in the Collada exporter (as in the export call sketched below). (In the Project-Importer development viewer, there is a more demanding naming convention, which may or may not find its way into the regular viewer).
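For reference, a minimal bpy sketch of that export (Blender 2.7x-era Collada exporter, so the exact operator options should be checked against your version; the filepath and naming scheme are illustrative assumptions):

```python
import bpy

# e.g. foliage_PHYS / trunk_PHYS in this file, foliage_HIGH / trunk_HIGH in
# the visual file -- what matters is that alphabetical order matches across files.
bpy.ops.wm.collada_export(
    filepath="/tmp/tree_physics.dae",
    selected=True,        # export only the selected physics objects
    sort_by_name=True)    # keep object order deterministic across files
```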
  15. I guess the phong shading break is the equivalent of Blender's edge split. So that makes sense. It worked in the case in the earlier thread too. The different-materials method has the advantage that it doesn't produce sharp edges (doesn't interrupt the phong shading). If you set a crease angle, it will override the sharp/split edges in the model. I would recommend not using it at all. You have much better control using the facilities of your authoring software to control sharp edges. The creasing is off by default, but gets turned on as soon as you touch the crease angle spinner. Then you have to start over (reset or delete the slm file) to turn it off again. I think it's a complete pain, and I never use it.
  16. Not sure about the normal maps, but the general effect looks like what we discussed in this thread. Try the solution I came up with there, using different materials. I don't think crease angle is a good way to do this. If you need sharp edges (which can solve the problem too), use the edge split modifier. (Auto-smooth doesn't get exported). Looking at the top edge in the next-to-last of your pictures, it seems to me the effect is there, just not obvious with the particular angles. In that case the normal map might just be showing it up by making lots of different angles?
  17. Yes, that's true. Baking at a higher resolution and then scaling the image is a bit like anti-aliasing, and can reduce some imperfections. I would rather do the scaling before uploading, to see what it looks like, rather than letting the uploader do it, but that probably doesn't make much difference. The uploader is always going to introduce some distortion with the jpeg2000 compression anyway.
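If you do the downscale yourself before upload, any image tool will do; for example, a two-line Pillow sketch (the filenames are illustrative):

```python
from PIL import Image

baked = Image.open("bake_2048.png")   # texture baked at 2048x2048
# LANCZOS resampling gives a good anti-aliased downscale to SL's 1024 limit.
baked.resize((1024, 1024), Image.LANCZOS).save("bake_1024.png")
```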
  18. It is a bit unclear what aspect of texturing you are asking about. Do you mean painting? generated textures? texture baking? lighting? or....? If you can identify in which aspect of texturing you think the differences in quality apply, it might be easier to offer an answer. What is the nature of the difference in quality you think is there? Note that as far as SL is concerned, it just gets a mesh with a UV map and an image to apply. So the appearance when rendered by Maya or Blender is irrelevant. It's the quality of the (baked?) image that matters, and that is limited by SL to 1024x1024. Differences in UV mapping capabilities might also be relevant.
  19. To add to that, here is an illustration that might help to explain why people get disappointed by the effect of specular highlights. There are three versions of the 2nd-order Blender Menger cube. At the left is a simple flat-shaded version. In the middle, this has had rounded edges added by using a single-segment bevel and vertex normals transferred from the flat-shaded model. The bevels mean this has much less geometry. On the right is a smooth-shaded version with a normal map baked onto it from a high-poly version with three levels of Catmull-Clark subdivision. These were given a blank diffuse texture, blue colour and blank specular map. Environmental reflection was left at zero, and Glossiness was set to the figures at the top left of each panel. Glossiness is the specular exponent, which controls the tightness of the cone of specular reflection from a point (illustrated numerically below). The left-hand panels are in 3pm sunlight, with the sun behind the camera. On the right, there is just 12am moonlight, and most of the light is from a single local light source above and in front of each cube. Looking at the pictures, we can see where the problems are. 1. The flat-shaded model doesn't ever show any highlights at all. 2. So you have to add geometry and/or normal maps to get highlights. 3. As the glossiness is increased, the highlights get smaller and smaller in both the other models. 4. There are many more highlights with the local lighting, but at the lower glossiness settings needed to see highlights in sunlight alone, there is too much light from the rest of the surface. So there is always a compromise required for acceptable appearance under both sunlight and local lighting. The reason the highlights are so sparse compared with RL is that RL has a huge number of sources of indirect lighting, so that highlights, of all sorts of intensity and colours, can be seen at many different angles of view. In SL there is no indirect lighting, so highlights from reflected sunlight are visible over a very restricted range of angles. There's a bit more with added light sources, but nothing like the variety in RL. Environmental reflection in SL is an attempt to mimic indirect lighting, but it only uses the sky and the sun, and has too little detail to be very realistic.
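To put numbers on point 3, here is a toy Phong-style calculation (my own sketch; SL's exact shading math is not confirmed here) showing how fast the angular width of a highlight shrinks as the specular exponent rises:

```python
import math

def highlight_half_width(exponent, cutoff=0.5):
    """Angle (degrees) off the mirror direction where a cos(a)**exponent
    specular term drops to `cutoff` of its peak value."""
    return math.degrees(math.acos(cutoff ** (1.0 / exponent)))

for n in (2, 10, 50, 200):
    print(f"exponent {n:4d}: highlight half-width ~ {highlight_half_width(n):5.1f} deg")
# exponent 2 gives a ~45 degree half-width; exponent 200 is under 5 degrees,
# which is why high glossiness highlights are so easy to miss in sunlight.
```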
  20. "The other possibility is that Sansar as a whole evolves into this bigger virtual world." I think this has to be the key. LL wants to provide the substrate on which that evolution can happen. If they manage to provide something versatile enough and customisable enough, then the "experience creators" will come in sifficient number and variety for the evolutionary process to start. Prokofy is right to pont out how great are the hurdles to be overcome, but that is the challenge I assume LL is addressing; produce a platform where a viable VR can evolve*. Neither we nor LL can know, or need to know, in advance what the size or nature of the experiences will be that may win the evolutionary struggle. Nor do we know that any will survive. That is the nature of the evolutionary process, and also its power to achieve unimagined results. *this is all, of course, only speculation based on no more than my own unconstrained interpretation of the strange but consistent use of "experience creator". I'm not aware of any statement from Ebbe that should restrain that interpretation. Indeed, if it's meaning cannot be defined because it is to be the unpredictable product of evolution, the absence of definition may be deliberate and appropriate. Perhaps we should simply ask the questions "What is are experience creators, and what will they create?".
  21. Oh yes. As you said, the material output node, not the output image node. I must read more carefully.
  22. Thanks for distilling out those questions. The second is so important that I want to repeat it. Lack of clear documentation, and the variable quality of what is available, especially concerning efficiency of resource consumption, has been a significant disadvantage to SL. Critical documentation can be an effective way to uncover hidden points of weakness before they become entrenched. Can we be assured that LL will focus sufficient resources on documentation for Sansar, preferably from the beginning?
  23. Aquila, what do you mean by "disconnecting"? In my node setups, they are not connected. If I deselect them, they still get baked (consistent with the manual which says baking is to the last-selected image node). If I remove them, then I get an error and no baking happens.
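For what it's worth, the "last-selected" rule from the manual can be made explicit by script: Cycles bakes into the active image texture node of each material, so you can set that node before baking rather than relying on selection. A minimal bpy sketch, with "BakeTarget" as an illustrative node name:

```python
import bpy

obj = bpy.context.object
for slot in obj.material_slots:
    mat = slot.material
    if mat is None or not mat.use_nodes:
        continue
    target = mat.node_tree.nodes.get("BakeTarget")   # your image texture node
    if target is not None:
        mat.node_tree.nodes.active = target          # baking writes to the active node
```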
  24. "... high end gaming computer crowd or .... "10s and 100s of millions of people."? ...the two are mutually exclusive and any attempt to do both at the same time is doomed." I'd like to suggest an alternative view. My impression is based on a perceived distinction between the "experience creators" invariably discussed in the context of Sansar and the familiar "content creators" we are used to in SL. It must be a major, probably overriding, aim of Sansar to achieve end-user numbers higher than SL's by orders of magnitude. A prerequisite for this is escape from the constraints of the direct LL-to-user relationship, most explicit in mainland and Linden homes. That is where the experience creators come in. They are the necessary middleman between LL and the end user. Their role will have to be be higher level, more complex and more integrated than even the most sophisticated content creators in SL. The wider set of skills required in that role will mean they are more likely to be organised groups than individuals, ranging from informal groups to formal companies. To create experiences with the required level of autonomy, the experience creators will need to be provided with controls much more fine-grained and much wider in scope than those available in SL. Among those could be controls over acceptable rendering, physics and script processing demands of content. In that case, the opportunity would be there for the creation of different experiences with very different demands on client hardware. The success or failure of those experiences would then depend on, and respond to, the spectrum of hardware available to the potential clients of those experiences, as it varies geographically, demographically and over time - evolution by natural selection. It is easy for those of us who are comfortable in SL, to lament the potential demise of our favourite features envisaged here, be that the anarchic creativity of mainland, the role of the individualist content creator, or whatever. However, we cannot be the target audience for Sansar. If it were to appeal only to the SL diehards, it would be a failure. If Sansar does succeed in providing sufficient scope and flexibility for experience creators, there is no reason why they should not produce an analogue of mainland, that would live or die according to its success in attracting users, or experiences that gave users the same sort of content-creation opportunities presented by SL, or .... The success of any such endeavours, as well as that of other experiences, will also depend on one important set of answers to Danger's question about mainland, the perception of responsibility, trust and reliability of the experience providers. For me this is the crucial attraction, among many, of mainland. It avoids the middleman. If it is true that Sansar will be dominated by experiences provided by middlemen, then establishment of these virtues among them will be crucial for success. That may be the hardest problem of all.
  25. Good question. I don't know of a "proper" way to do this, as I can't find a way of toggling the rendering of individual materials or vertex groups. Hiding doesn't seem to work. But... the bake time does depend on the destination image size. So my suggestion is to add a new 1x1-pixel image, then select that in the output image node of all the other materials you don't want to bake (a scripted version follows below). Now do the bake. Hopefully it will be much faster, and the previously baked images for all those textures baked to the single pixel will still be there. Using the single pixel does work with a cube with three materials, but I can't tell whether it gives the desired speed-up with a huge object.
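A bpy sketch of that 1x1-pixel trick: point the image nodes of every material you don't want to bake at a tiny dummy image. The material name "KEEP" and the loop over all image texture nodes are illustrative assumptions; adapt to your own node setup.

```python
import bpy

# A 1x1 dummy image to soak up the bakes you don't care about.
dummy = bpy.data.images.new("bake_dummy", width=1, height=1)

obj = bpy.context.object
for slot in obj.material_slots:
    mat = slot.material
    if mat is None or not mat.use_nodes:
        continue
    if mat.name == "KEEP":                  # the one material you want to bake
        continue
    for node in mat.node_tree.nodes:
        if node.type == 'TEX_IMAGE':
            node.image = dummy              # redirect its bake target
            mat.node_tree.nodes.active = node
```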