Everything posted by Drongle McMahon

  1. The data used for convex hulls is just a list of vertices. The Havok engine doesn't need to know what the triangles are. When you upload a physics shape without doing "Analyze", the engine gets the triangles and has to work out whether you are colliding with each triangle. That's expensive. When you do "Analyze", the shape is changed into a set of subshapes, each of which is a convex hull. These have no inward-going surfaces, so that every surface can lie on a flat table. The engine doesn't have to know about the triangles for these; it just gets a list of the vertices. From this, knowing it's convex, it can calculate collisions more efficiently than with the same shape made of triangles. That's why triangles are more expensive if you use all those unnecessary surfaces, but you don't - you just use the two planes for a wall. Then the triangles can compete. The engine can also tell whether you are inside a convex hull and push you out. The breaking down of a concave shape into convex components is complicated. That's why it works best if you give it a set of shapes that are already convex - simple non-overlapping shapes without any indentations or L-shapes. Then it will just use these.
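To see why containment tests against a convex shape are cheap, here is a minimal sketch (not Havok's actual code): if a convex shape is stored as an intersection of half-spaces, the inside/outside test is one dot product per face, with no triangle data involved. The function and plane representation are illustrative assumptions.

```python
# Sketch: a convex shape as an intersection of half-spaces.
# Each plane is (outward_normal, offset); a point is inside when
# normal . point <= offset for every plane -- one dot product per face.

def inside_convex(point, planes):
    """Return True if point lies inside all half-spaces."""
    x, y, z = point
    return all(nx*x + ny*y + nz*z <= d for (nx, ny, nz), d in planes)

# A unit cube centred at the origin, expressed as six half-spaces:
unit_cube = [((1, 0, 0), 0.5), ((-1, 0, 0), 0.5),
             ((0, 1, 0), 0.5), ((0, -1, 0), 0.5),
             ((0, 0, 1), 0.5), ((0, 0, -1), 0.5)]

print(inside_convex((0.2, 0.0, -0.4), unit_cube))  # True
print(inside_convex((0.2, 0.0, -0.6), unit_cube))  # False
```

A triangle-based test, by contrast, has to consider every triangle separately, which is why the post recommends feeding "Analyze" pieces that are already convex.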
  2. It depends. Kwak is talking about triangle-based shapes. They are usually the lowest-weight option. For these you should avoid including any edges, top, bottom or sides, unless you really need them to collide with something. They will make narrow triangles that push up the weight. If you use "Analyze", that's completely different. The result is a collection of convex hulls, whose weight doesn't change at all with size. So for these you get the best results by using solid blocks, preferably not overlapping or connected. With either type of shape, you usually aim to get the physics cost lower than the download weight. It's rare to need a physics shape that's higher. Unfortunately, you can't see the physics weight until you upload the mesh and set it to physics shape type "Prim", because the uploader only tells you the default convex hull weight - the one that's used when you set the physics type to "Convex hull". That is the convex hull of the physics mesh if you provide it, or of the low LOD mesh if you don't. It stops you going inside.
  3. - Why this limit? A bug yes, but where? I would point you to the jira, but now you can't see that, so... It's in the source code file llmodel.cpp, around the line "if (indices.size()%3 == 0 && verts.size() >= 65532)". The size being tested is the length of an array of all the indices from the triangle list into the vertex list. Every triangle has to have three vertex pointers. So this limit is always reached before the documented limit of 65536 vertices, even if there is no redundant use of vertex list entries. The code following that line starts a new material, which has its own new triangle and vertex lists. The odd thing is that it has the same material name, which may explain some strangeness in the effects. If you upload a mesh with more than 21844 (65532/3) triangles in one material, then check Select face, click on it and change the colour, you will see that it isn't all one face! As for why, I don't know. I am not aware of anything that should limit the size of the triangle list, as I don't know where a pointer into it has to be only 16 bits. - Why does the problem exist, why "instability" from the same collada file? And I'll say again that the problem exists on non-rigged meshes too... and with both SL and Firestorm viewers... My guess would be that this depends on the internal state of the collada dom library, the part of the viewer that reads the collada and turns it into data structures. It may be that this will present the triangles to the viewer code in a different order under different initial conditions. - How does this automatic material change affect the SL material count limit? (if it does) If you have just one material in the source, then when it gets beyond 8 x 21844 triangles, it looks like it goes on reading in triangles, but after it's finished, some later code simply discards anything belonging to the 9th and subsequent materials. The result is that those triangles are missing from the upload.
If you try to upload more than 8 x 21844 = 174752 triangles, the upload counter will always say 174752, and there will be holes in the mesh where the triangles are missing. I didn't test how it works with more materials, but I suspect it would have holes with many fewer triangles, because some of the materials would be used up with fewer than 21844 each. I should emphasize that I haven't retested this for a while. There could be a fix on the way, or even already done. I can't tell, because bugs now get spirited away into the internal jira, which even the submitter can't see.
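The arithmetic behind the limits described above can be checked directly; the constant 65532 comes from the llmodel.cpp line quoted in the post, and the cap of 8 is SL's per-mesh material limit.

```python
# Back-of-envelope check of the limits described in the post.
INDEX_LIMIT = 65532                     # index-list length that triggers a new material
TRIS_PER_MATERIAL = INDEX_LIMIT // 3    # three vertex indices per triangle
MAX_MATERIALS = 8                       # SL's per-mesh material (face) limit

print(TRIS_PER_MATERIAL)                   # 21844
print(MAX_MATERIALS * TRIS_PER_MATERIAL)   # 174752 -- triangles beyond this are dropped
```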
  4. "Instead of using a separate physical object, you can "fake" or "make" a bounding box before you actually build the physical shape for your meshes. You can use two single triangles for that, one in the lower corner, one in the opposite upper corner. That way the bounding box is the same; anything in between the two triangles won't change the overall dimensions." Furthermore, if you have their normals pointing inwards and use "Solid" with "Analyze", they will still have the desired effect on the bounding box, but should disappear from the final physics shape, thus avoiding unwanted collision effects.
  5. Two things... The internal mesh data format uses 16-bit values to define geometric positions. If you have triangles that are very small, the rounding of the coordinates to 16 bits may sometimes make two vertices the same, producing a redundant triangle with no area, which will be culled from the triangle list. This could explain the small loss of triangle count that you describe. I have described elsewhere a bug that limits the number of triangles for one material to 21844. The uploader secretly starts a new material when the count reaches this limit. I have only studied in detail the effects of this on texturing/colouring, but it is quite possible that it affects rigging as well. Which triangles go into which material depends on the order they are seen by the importer. I would expect that to depend only on the order in the collada file, so that the effects should not vary unless the file is changed. However, there may well be some other things affecting that order. It might depend on the internal state of the collada library in the viewer. Anyway, I suggest you try reducing the triangle count below 21844 (a good thing anyway) and see if that solves the problem.
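The quantisation effect described above can be sketched as follows. This is an assumed model of the behaviour (snapping positions to a 16-bit grid over the bounding box), not the actual uploader code, but it shows how two nearby vertices can collapse into one.

```python
# Sketch (assumed behaviour): positions are snapped to a 16-bit grid
# across the mesh bounding box, so vertices closer together than one
# grid step become identical, leaving a zero-area triangle to be culled.

def quantise(value, lo, hi, bits=16):
    """Snap a coordinate to a (2**bits - 1)-step grid over [lo, hi]."""
    steps = (1 << bits) - 1
    return round((value - lo) / (hi - lo) * steps)

# In a 64 m bounding box the grid step is about 1 mm, so two vertices
# 0.1 mm apart land in the same cell:
a = quantise(10.0000, 0.0, 64.0)
b = quantise(10.0001, 0.0, 64.0)
print(a == b)  # True -> a triangle using both vertices becomes degenerate
```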
  6. "I don't understand how I would make a physics shape for the entire house when the walls etc are different meshes." It's just like the LOD meshes. You have to have a file with a physics shape (object in Blender) for each of the high LOD meshes (objects). There can be problems making sure the right physics mesh gets attached to the right high-LOD mesh. It's done by the order in the two files. So that has to be the same. Gaia and Co. got the "Sort by Object Name" option into the Blender exporter so that this could be done by using an appropriate naming convention (like mesha_hi, meshb_hi ....; mesha_phys, meshb_phys, ....). If you want to have a single shape (object) that covers several visual objects, then you are stuck with uploading that separately and linking it. That would still work, but with the risk that it can become unlinked or separately edited.
  7. If you use one plane, the collision isn't very accurate. Triangle-based shapes (no "Analyze") can be the cheapest for walls, as long as you avoid small/narrow triangles, but they do have disadvantages. Fast-moving avatars can penetrate the surface before the collision is detected. For a single plane, that means occasionally passing through it. If it's two planes, it's possible to collide with the second one and get stuck inside the wall. You can collide with both sides of a single plane. However, I have the strong impression that it's easier to penetrate it from the back (that is, travelling in the direction of the normal) than from the front. That doesn't make much sense to me, because physics shouldn't care about the normal. With the convex hulls you get after "Analyze", you will always get pushed out even if you do penetrate to the inside. Don't confuse this with the default convex hull you get for the whole thing when you set the type to Convex Hull. You still have to set the type to Prim to use the "Analyze"d shape.
  8. Yes. Prim cubes with invisible texture linked to the mesh. One of those has to be the root, so that you can set the physics type of the mesh parts to "None". Then, for physics, those meshes cost nothing and each box costs 0.1. Their LI will still be 0.5 each, from the server weight, but whether that matters depends on what the other weights are. This only works if the only changes to the boxes are stretching. If you alter any other parameters - twist, taper, hollow etc. - you will likely get a very large increase in physics weight. If you are selling the house no-mod, then there should be no problem with editing, but if it is mod, then an unwise purchaser could edit the hidden prims with something other than stretch and get an increase in the physics weight. In the worst case, this could lead to return of the whole house. He could also accidentally move the linked prims and find phantom walls and blocked interiors. I prefer physics shapes that are part of the mesh, so that this sort of thing can't happen, and also because I just find it more intellectually satisfying (don't ask me to explain that!).
  9. No need for me to comment on the LIs, because Arton's explanation is perfect. Using linked prim boxes, undistorted, as physics is very efficient for the engine, because it uses Havok primitives. That's why they only cost 0.1 weight each. However, it does mean there can be a risk of unlinking or distortions leading to a sudden increase in cost if they are editable. If that's not a concern, they are fine. I prefer the unbreakable shape you get with an uploaded shape. You can upload a more complex physics shape and link it, but then you still have the risks associated with it being unlinked. The bounding box requirement of a physics shape mesh is the same as for LOD meshes - they get stretched/squashed to fit the high LOD. There is no requirement for matching materials in the physics mesh (it was there as a bug in some viewers, but that was corrected a long time ago).
  10. Are you talking about a free-standing window? What was your renderVolumeLODFactor? Or is it part of a larger mesh? If it's free-standing, then it must be quite large to stay at medium LOD at 256. That implies at least one dimension of about 20m for a thin object (at the default rvlf=1.125; at rvlf=4, it would need to be only 5-6m). On the other hand, if the window is part of a big mesh (same mesh, not linkset), then its LOD behaviour would be determined by the whole building size. That might explain what you are saying, that even the medium LOD doesn't affect LI. When the "radius" (half the bb diagonal) gets near 40m, even the medium LOD has little or no effect. In that case, you are missing an opportunity for saving LI that you could get if you made the window(s) a separate mesh. That way they would switch LOD earlier, and consequently the lower LODs would then contribute to lowering the overall LI.
  11. I agree. As I said, it all depends on what you are trying to achieve. Everyone will balance things differently. For example, combining windows increases their LOD distance (and thus reduces the disappearing effect), as you point out, but it reduces the versatility of the uploaded mesh. So if you want to make windows that can be used by anyone, anywhere, that's completely different from making windows that will only be part of one specific building. Same with stuff inside that could be used in quite different environments. Another factor is that different people have very different sensitivities to visual accuracy. I guess I am pretty intolerant in that regard. The disappearing window in the picture I showed is a major irritant to me. Others will not even notice it, as you say.
  12. The things that disappear! They are within the draw distance, and it has nothing to do with LODs. The meshes have the same mesh at low and lowest LOD, and you can see that the 4m to 8m ones have already switched (with the lighting effect I mentioned). The lowest LOD of the blue prims is still a box, and they shouldn't disappear until they go beyond the draw distance. This was with rVLF=2. It was exactly the same with it at 0.2 or 0. Things disappearing when they shouldn't is, in my view, a disaster. Note that the bigger object to the left of the pillars also disappeared. This is a linkset of the smaller hexagonal mesh. So a whole linkset made of small pieces can disappear before it is supposed to. I didn't finish setting up the window part of it, because this effect pre-empted that. I could do it with a very low rVLF, but that wouldn't be a realistic situation.
  13. Oh dear. I was setting up a demo to show you the effect of disappearing your windows for people with default settings (it's not about looking through them or not; it's about sudden changes of appearance), when I found an effect that ruins the appearance anyway, rendering the question moot. It seems that there is an effect where things get culled at a certain distance unrelated to renderVolumeLODFactor. Look at the picture here. The upper shot is with the camera 149m from the stepped-height boxes (mesh girder, prim box). The lower is from 151m. The stepped boxes are from 4-9m tall in 1m steps, 1x1 section. You can see the window disappear too. Draw distance is set to 512. What's going on here? Seems a disaster to me. Is this new, or have I been missing it for ages? ETA: by the way, my point is that even a very large window has a download weight much lower than 1 if you use two planes - so what is the point of reducing it further, especially as only one will get rendered, depending on which side you are on?
  14. "Why would I need a back side for the LOD, if no one inside the house will be far away enough from the window to see any LOD degradation?" (a) Because someone outside might be able to see through one window into the back of another, (b) if the building is large enough. It all depends on the size of the window and the size and geometry of the building (and on people's settings for renderVolumeLODFactor). It's only two more triangles to have both sides, and then it will do for all purposes. Mesh HQ 3 is on Aditi. You have to teleport in at height to avoid getting trapped under landscape meshes. "What I want to do is use the picture for the lowest two LODs, and a stripped down mesh for the 2nd one." Sounds good for most purposes. Again, it all depends on exactly what you are making, how big it is, and what you want to achieve.
  15. Sounds like you have the main points alright. Windows are the things that work best of all with this technique. The lowest LOD is just two planes, one for each direction, unless you are certain it will never be seen from the inside at the low LODs (even through another window?). Don't use a cube - the edges are wasted triangles. I use a picture at lowest LOD, then add a solid outer frame at the next step, and add the remainder of the frame at the next step. Which step is which depends on the overall size. In my gallery, I also had several windows on the same mesh, to get the LODs switching where I wanted them to. You still have to have the same number of materials, with the same names, at each LOD, as they never fixed the bug with subsets. So you have to have triangles at the low LODs to hold the high LOD textures, as well as the other way round. To try to save materials, and therefore the triangles to hold them at the lowest LOD, I have tried using the same alpha texture with the frame at all the LODs, making sure that solid frames completely covered the right parts of the picture, so that at the high LOD, only the glass parts are visible. That was quite hard to get right. Examples are the skylights on the roof of the gallery in Mesh HQ 3. (You can see how they work by dialling down renderVolumeLODFactor while standing next to them. They aren't completely optimised for LI, as they were made before the weighting was finalised.) The least satisfactory part of the method is that the lighting effects can't be reproduced on the low LOD picture. In particular, if you use shiny on the solid frame, you can't use it on the alpha texture with the low LOD picture. That can make very abrupt changes to perceived colour under some lighting conditions. I think the materials project will help to alleviate that problem.
  16. Join the whole thing into one Blender object. Then you won't have a thin mesh any more. You can use (up to 8) different materials to put different textures on different parts. I am assuming it's not rigged. If it is, you can't resize it after it's uploaded and attached. By the way, it looks as if it has far too many polygons.
  17. As Arton says. There is no need to upload the physics separately. Much better to have it as an integral part of the mesh, which is what you get if you specify it in the physics tab. You are right about the limitations of the decomposer. Here is a picture of the sort of mesh you appear to have used, on the left, and, on the right, the sort of thing you need to get the best results from the decomposer. The secret is that it is made up of unconnected pieces (it's still one mesh though), each of which is already a convex hull, and that there is no touching or overlapping. Using "Analyze" on this, with either the surface or solid option, should give you what you need without having to use "Simplify", which is where the worst problems come from. The alternative, using a triangle-based shape (no "Analyze"), is shown here for the same mesh. The secret with these is to use the minimum number of triangles, and especially to avoid small and narrow triangles. That's why all the unnecessary edge faces have been removed here. This kind of shape will generally give lower physics weight than the decomposed version, as long as the walls etc. are large enough*. Counterintuitively, the weight increases as the sizes of the triangles decrease. You can halve the weight by using only a single plane with no thickness, but that makes the collision less accurate. The main problem with triangle-based shapes is that they are a bit leaky, and an avatar (or a moving prim) can occasionally go through a single wall, or can get trapped between the two planes of a double wall, especially if it is moving fast when it collides. Actually, convex hulls, default or parts of a decomposition, are leaky too, but the physics engine pushes you out again if you penetrate into the inside of one. So it's less of a problem. Whichever you use, you have to remember to make the physics mesh fit exactly the same bounding box as the high LOD mesh. Otherwise the uploader will stretch it out of shape.
Then you have to remember to switch the mesh to physics shape type "Prim". *Unfortunately, there are serious peculiarities in the triangle-based physics weight calculations that make them very unpredictable, but these principles still generally apply. You can read about these problems in the other thread (episodes 3 and 4).
  18. It all depends on the size of your wall. Its radius is defined as half the square root of the sum of the squares of its three dimensions (sqrt(x*x + y*y + z*z)/2). If this is more than 5.43m, the lowest LOD will not have any effect on the LI; so you can use the low LOD again in that slot. If it's bigger than 10.86m, then the low LOD will have no effect either; so you can use the medium LOD for the three lowest slots. These will only be seen by people using higher than standard draw distances. If you need to simplify the medium or low LODs to get a reasonable LI, you can use a simple box with pictures of the high LOD model on alpha textures on the sides, so that it still looks as if the detail was there. You will have to use a hidden triangle to hold the unused material(s) at each LOD. This will work much better when normal maps are available to mimic lighting effects.
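The radius formula in the post is easy to apply; here is a small sketch using the 5.43m and 10.86m thresholds stated above (taken from the post, not derived here), with a hypothetical wall size for illustration.

```python
import math

# Radius as defined in the post: half the bounding-box diagonal.
def radius(x, y, z):
    return math.sqrt(x*x + y*y + z*z) / 2

# Example: a 10m x 6m x 0.2m wall.
r = radius(10, 6, 0.2)
print(round(r, 2))   # 5.83
print(r > 5.43)      # True  -> lowest-LOD slot has no effect on LI
print(r > 10.86)     # False -> the low LOD still matters
```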
  19. Two important parameters: In the Gather section of the World properties panel, check Falloff and set it quite high (see picture). Higher values make the darkness spread less far. In the Bake section of the Camera panel, check Normalize before you bake. That will make it use the whole range from black to white, avoiding a general greying out. You can also adjust this quite easily in any image editor (for instance using the Colors/Curves tool in Gimp). The picture shows four textures in three lighting conditions. Left-to-right: blank, AO with falloff=10, AO with falloff=1, AO with falloff=0. Top-to-bottom: no lighting and shadows, lighting and shadows but no AO, lighting and shadows with AO. Crudely done, so there are visible UV seams. Current release viewer. The three baked textures are at the right, falloff 10, 1, 0 from top to bottom. ETA - changed "lower" to "higher".
  20. I would also usually aim for acceptable appearance at all distances with rvlf = 1.125 and dd = 128. Of course, what that requires is totally dependent on the object size. Here are some pictures illustrating the situation for ultra settings (top 6) and high settings (bottom 6). The six panels are for a cube-shaped object with dimensions increasing by powers of two from 1m (1,2,4,8,16,32). It's at the corner of four regions, shown by the four squares. The dotted circle is the draw distance. If your camera is outside this (white area), you don't see the object at all. Otherwise, the coloured circles show which LOD you see when the camera is in them: red=high LOD, orange=medium LOD, yellow=low LOD and cyan=lowest LOD. The important thing to notice is that for larger objects the lower LODs may never be seen at all, but for small objects they are often seen at both these settings. This effect is what underlies the decreasing effect of the lowest LOD (etc.) on LI as the size increases. The download weight calculation implicitly assumes renderVolumeLODFactor = 1.0 and draw distance = 181m (that's the radius of a circle that encircles a whole region). PS. I made a program to generate these for any parameters ... ask if you want to see others. ETA: altered pictures so that the area of object invisibility is always white, and the text accordingly.
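The LOD rings in the pictures described above can be sketched numerically. This sketch assumes the switch distances from the mesh streaming-cost formula (medium/low/lowest switching at radius/0.24, radius/0.06 and radius/0.03, scaled by renderVolumeLODFactor); those ratios are consistent with the 5.43m threshold quoted in an earlier post (5.43/0.03 ≈ 181m), but they are an assumption, not taken from this thread.

```python
import math

# Sketch of LOD switch radii (assumed ratios 0.24 / 0.06 / 0.03,
# scaled by renderVolumeLODFactor), for a box of given dimensions.
def lod_rings(x, y, z, rvlf=1.125):
    r = math.sqrt(x*x + y*y + z*z) / 2   # half the bounding-box diagonal
    return {"medium": r / 0.24 * rvlf,
            "low":    r / 0.06 * rvlf,
            "lowest": r / 0.03 * rvlf}

# A 1m cube: every LOD ring sits well inside a 128m draw distance,
# so all four LODs will actually be seen.
print({k: round(v, 1) for k, v in lod_rings(1, 1, 1).items()})
```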
  21. "they look just like normal flat textured prims though" That's because you are looking at a static picture. Inworld, when you walk around the one with materials, or make it rotate, you see the highlights and lighting change with your angle of view. If the one on the left was a normal texture, it would look all wrong if it was rotated 180 degrees. With the material maps, it doesn't. All the shininess and other variation behaves properly with respect to the lighting inworld. Whoops - did this before seeing everyone else had already made the point. Sorry.
  22. Nice. Here is another (not very accurately aligned) example. Just to show how careful people will have to be about compatibility, I took a picture of the same thing with Advanced Lighting Model on and off.
  23. "As of 2011 it was mentioned that SL respects instance flags in uploaded collada files, but the resulting instances would be able to be scaled / modified / limited independently from each other. Using the same model multiple times in a build can thereby save downloading / loading times." I don't think I have understood what you are saying here. What do you mean by "instance flags"? If you upload collada files with (1) the same <geometry> ID in two <instance_geometry> tags in one <node>, (2) two different <geometry> IDs in the same <node>, or (3) the same <geometry> ID in two different <node>s, the results are always the same; that is, you get a linkset with two prims. Now, since the characteristic of a <node> is the same as that of a prim - independent transformations (translate, rotate, scale) - this means that the <instance_geometry> tag is used to decide what is in a prim, without reference to the loss of independent transformability in the <node>. Did you mean something else? When I constructed a "fractal" tree, whose trunk and branches were all instances of the same <geometry>, with each in its own <node> for different transformations, the upload weight was the same as when the tree was constructed with multiple copies of the same single trunk mesh. I suggested an option toggle to use this sort of instancing on rezzing after download, sacrificing the separation into a linkset, to save download volume (at that time that was seen as the major concern, rather than rendering overhead). This would have made a huge download time saving for all meshes with extensive internal repetitions. However, it was considered too complicated to implement, and anyway, it would have meant more cpu time and no saving in rendering resource. So we are left with instancing where the same model is used in many replicates in a region.
That clearly is used, as you can see from the simultaneous reappearance of multiple replicates after a cache clear, but that isn't anything to do with the collada file. Meanwhile, concerning vertex splitting, as discussed in one of the links you gave - in SL, this all takes place before the model gets uploaded. That's why people often ask about the unexpectedly high vertex counts in the uploader. The effects are thus already reflected in the download weight, which should incentivise people to make exactly the kind of optimisations suggested in the linked page. That is one beneficial result of the internal upload format. It makes for a good correlation between download weight and render resource use.
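The vertex splitting mentioned above is a general property of GPU-style vertex lists, not SL-specific code; this sketch shows why the uploader reports more vertices than the modelling tool: a corner shared by faces with different normals (or UVs) must be duplicated. The data layout here is an illustrative assumption.

```python
# Sketch of vertex splitting: each distinct (position, normal) pair
# becomes one uploaded vertex, so shared positions with differing
# normals are counted more than once.

def uploaded_vertex_count(faces):
    """faces: list of faces, each a list of (position, normal) corners."""
    return len({corner for face in faces for corner in face})

# Two flat-shaded triangles sharing an edge, but with different normals:
tri_a = [((0, 0, 0), (0, 0, 1)), ((1, 0, 0), (0, 0, 1)), ((0, 1, 0), (0, 0, 1))]
tri_b = [((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (0, 1, 0)), ((1, 1, 0), (0, 1, 0))]

print(uploaded_vertex_count([tri_a, tri_b]))  # 6 uploaded vertices from only 4 positions
```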
  24. Chosen, CS5 was apparently released about three years ago. So you must have had CS4 for at least three years, I suppose. At that rate, the full new-user Photoshop cost is $20 x 36 = $720 over three years (assuming no increase in rent!), and at the end, you have nothing. I looked up a whole lot of UK cloud prices vs purchase prices, and that seems to be about the same for everything; old-fashioned license = 3 years cloud rent. For me, I am unlikely to want to upgrade in less than three years, and I want to have something I can use indefinitely without incurring new or continuing expense. So I don't like the Creative Cloud at all. For others, the balance may be different, especially if immediate funds are a problem. For Adobe, it's clear enough that they will benefit from steady and predictable income rather than uncertain surges at each release. I would have thought that might have been worth an even lower rental price. Now, if it was a mortgage instead of a rental....