
Drongle McMahon

Advisor
  • Posts

    3,539
  • Joined

  • Last visited

Everything posted by Drongle McMahon

  1. SL cylinders have 24 segments at high LOD, not 32. I guess you were mixing it up with sculpty cylinders with square maps, which have 32 segments. You can see the structure of prims most clearly if you view in wireframe mode (Develop->Rendering->Wireframe). You can turn off sky, surface patch and water (Advanced->Render Types) if you need to, to make it easier to see the object wireframes.
  2. It all depends exactly how you made the physics shape. It might be easier to say if we had a picture of the physics shape (Develop->Render Metadata->Physics Shapes). Did you check that it had a hole when you made the physics shape in the uploader? Did you use a LOD mesh or a dedicated physics mesh? Was the hole big enough?
  3. Maybe dialling RenderVolumeLODFactor up to an absurdly high value would do it too.
  4. See if this old thread gives you a clue. Unless things have changed, the avatar always floats a little way above the surface of the physics shape. This happens even with legacy prims. You can solve it by making the physics shape's surface lower than the top of the visible shape, as described there. Of course, you can also always use a linked (invisible) prim as the physics shape, with the visible floor set to type "None". Perfectly flat planes are a bit odd because the uploader has to pretend they have thickness, to avoid a divide-by-zero error, I suppose. If you edit them and click "Stretch", you will see a bounding box far away from the plane. I don't know why it's so big. They could have used 0.01m, which you can squeeze it down to. The thick bounding box can also lead to unexpected effects if you make a plane physical! ETA: There's also the strange way that they made the default convex hull (the one you get if you don't provide a physics shape) smaller than the bounding box. This means you can actually sink into the visible surface if the object is thick enough.
  5. It would be good. LI, at least the download weight part of it, was originally designed and calculated to account for the use of bandwidth, downloading the mesh to everyone who needed to see it. Hence the name. However, when it was realised that rendering resource was of similar importance, there was thought to be sufficient correlation between that and the download weight that it could be used to control that too. Because of that, it must be unlikely that instancing will ever be rewarded with lower LI, since it doesn't help with the rendering load. It's still a good idea to use it as far as possible to minimise lag. Same with textures: the more they can be re-used the better. Unfortunately, the ever-increasing use of object-specific baked textures and normal/spec maps works against this.
  6. As Arton says, during the beta, the mesh data was a separate asset, so that using it repeatedly, even with different scaling and textures, required only one download. So there was instancing (at the whole mesh level, not, unfortunately, for repeated structure within one mesh). It was like the sculpty map for sculpties. I would be surprised if the system doesn't still work that way, with the mesh asset still there but not accessible by the user, especially not in LSL.
  7. "to turn my model in blender to be converted into mesh made out of triangles" Select the faces, then Ctrl+T (or Mesh->Faces->Triangulate Faces).
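For anyone curious what triangulation actually does to the geometry, here is a minimal sketch in plain Python (an illustration only, not Blender's own code; Blender's Ctrl+T offers several methods) of the simplest approach, fan triangulation, which splits a convex n-gon into n-2 triangles:

```python
# Fan triangulation: split a convex n-gon into triangles that all share
# the first vertex. This is only the conceptual idea behind
# "Triangulate Faces"; Blender also offers smarter methods ("beauty").

def fan_triangulate(face):
    """Split a convex n-gon (list of vertex indices) into n-2 triangles."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

quad = [0, 1, 2, 3]
print(fan_triangulate(quad))  # [(0, 1, 2), (0, 2, 3)]
```

So a quad becomes two triangles and a pentagon three, which is why triangle counts roughly double for quad-modelled meshes after triangulation.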
  8. Too many triangles. See this jira. Basically, if you have more than 21844 triangles in any material, the uploader secretly starts a new material for subsequent triangles. When all eight permitted materials are used up (174752 triangles), further triangles are simply omitted. This is far and away too many triangles to use for anything in a real-time game like SL. You will have to do some form of geometry reduction. (Note that all polygons are triangulated by the uploader, if not before.)
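The arithmetic behind those figures is easy to check. A quick sketch (the per-material cap is the one from the jira; my suggestion that it relates to 16-bit index buffers is only an assumption, since 21844 * 3 = 65532 fits under 2**16):

```python
# Limits quoted in the post above (from the jira mentioned there).
TRIS_PER_MATERIAL = 21844   # uploader silently splits beyond this
MAX_MATERIALS = 8           # SL's per-object material face limit

def materials_needed(tri_count):
    """How many material faces the uploader would silently split a
    single-material mesh into (ceiling division)."""
    return -(-tri_count // TRIS_PER_MATERIAL)

print(TRIS_PER_MATERIAL * MAX_MATERIALS)  # 174752: beyond this, triangles are dropped
print(materials_needed(30000))            # 2
```

So a 30000-triangle single-material mesh quietly becomes two materials on upload, which also explains mysteriously "extra" texture faces people sometimes see.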
  9. Degenerate triangles are triangles that have zero area. The test for this actually checks for very small area rather than exactly zero. So it looks like the two vertices mentioned were considered too close for the triangle using them to have an acceptable area. You probably need to find and eliminate small triangles from your mesh. You may also need to eliminate any free edges that aren't part of triangles. I suppose they just might be arising in the LOD generator, if you are using that, but I never found that to happen. Not sure what you do then. Probably you would have to provide the LOD meshes yourself.
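As an illustration of the kind of test involved, here is a sketch that flags triangles whose area falls below a small threshold rather than exactly zero; the epsilon here is arbitrary, not the uploader's actual tolerance:

```python
import math

# Flag "degenerate" triangles: area below a small epsilon, computed
# from the cross product of two edge vectors. The epsilon is made up
# for illustration; the uploader's real tolerance is not documented.

def triangle_area(a, b, c):
    """Area of a 3D triangle = half the length of the edge cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def is_degenerate(a, b, c, eps=1e-7):
    return triangle_area(a, b, c) < eps

print(is_degenerate((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # False: proper triangle
print(is_degenerate((0, 0, 0), (1, 0, 0), (2, 0, 0)))  # True: collinear vertices
```

Note that collinear vertices, as well as coincident ones, give zero area, so "two vertices too close" is only one way to trip this kind of check.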
  10. It took me a while to digest the nodes too. Meanwhile, have a look at this Blender wiki page. It says what the different bake options do. Now I am also just beginning with this, and I can't say yet exactly what goes where in the SL maps, but I think at least you need to use the "Diffuse color" bake for the diffuse map in SL, and "Glossy color" for the specular map in SL, if you want to see only the lighting effects of the ALM shader. The direct (first light bounce) and indirect (multiply bounced light) maps are the results of combining these inputs with the geometry and lighting in your scene*. They can presumably be used for baked-in lighting, but I think you have to composite them with the colour maps in PS or GIMP to get what you need for baked lighting that will work without ALM.

It seems very common that people confuse the specular map with a map of baked highlights under a specific lighting and camera angle. I've made a picture and will do another post explaining the distinction. In summary, if you use baked highlights as a specular map, you will only see them in ALM when lighting and camera angles coincide with those used to do the bake. To get proper effects under ALM, the specular map must define the reflective properties everywhere on the surface, not just show the places where you happen to have highlights under one set of conditions. If the surface is homogeneous, you can just use the "Blank" specular map with appropriate slider adjustments. The "Glossy color" map is only the equivalent of the RGB channels of the SL specular map. It seems you have to rely on playing with the Glossiness and Environmental sliders to get effects equivalent to the "Roughness" of the BSDF nodes and the differences between Glossy and Diffuse. If you need the roughness to vary across the texture, I can't help you yet. That needs a map of the specular exponent to go in the alpha channel of the normal map (and maybe of the specular map too)! Possibly you could get that by setting up to bake the roughness input to the BSDF nodes as the diffuse texture. As you can tell, I don't have well worked out solutions, particularly for the last point. I hope someone else has better ones and will add them here. Meanwhile, I hope this helps a bit.

*ETA: I think it would be more accurate to say that these (direct and indirect) maps comprise the modulation of the incident light by the geometry, lighting and camera angles. This is then filtered by the diffuse or specular colour to produce the combined output. So with ALM, this modulation is done within SL, according to the light and camera angles there, which are generally different from the baking conditions. So you only need to use the colour map in the diffuse and specular slots.
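To make the footnote concrete, here is a tiny per-pixel sketch of the compositing idea: the direct and indirect bakes act as the incident-light modulation, which the colour map then filters. This is my own illustration of the combine, not an official formula, and a real composite in GIMP/PS would do it per channel across whole images:

```python
# Per-channel, per-pixel combine: colour filters the summed direct and
# indirect light, clamped to the displayable 0..1 range. Values and the
# exact combine are illustrative assumptions, not SL's renderer.

def baked_pixel(diffuse_color, direct, indirect):
    """Combine one channel of one pixel of a baked-lighting texture."""
    return min(1.0, diffuse_color * (direct + indirect))

print(baked_pixel(0.8, 0.6, 0.3))  # ~0.72: partly lit, darker than the raw colour
print(baked_pixel(0.8, 1.0, 0.5))  # 1.0: bright light clamps at white
```

This is also why a texture composited like this "freezes" the lighting: the light terms were fixed at bake time, whereas under ALM the modulation is recomputed in-world.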
  11. Can you show us your material nodes and tell us which "Bake Type" setting(s) you are using?
  12. Good philosophy. I hope they see it that way too. :matte-motes-smile:
  13. Yes. My worry would be what might happen if Autodesk decided it didn't want people using Blender instead of its products to develop for the game engines. What little I have found on the legalities of reverse engineering formats is all uncertain, but if not legal action, control of the format gives them other blocking options. I hope and expect the Blender guys have that all under control. The other thing that makes me feel bad is the thought of having to learn to read fbx files (maybe from binary!). It's too horrible. I think I would have to give up detective work.
  14. "It can really confuse people who are come to the mesh forum trying to learn at a beginner level when the most basic terminology is not used correctly." In that case, perhaps we should make it clear that ambient occlusion is essentially a means of calculating, at each point of a surface, the directly accessible surrounding unoccupied space, with distance-dependent falloff and threshold. The presence nearby of other surfaces, or other parts of the same surface, reduces the accessible space. This can be used in simulating the effect of occlusion by nearby geometry on the intensity of "ambient" light falling on a surface, where "ambient" light means the sum of all indirect lighting sufficiently dispersed that it can be modelled by a completely non-directional source uniformly distributed throughout space. This is what most people mean by an ambient occlusion map. The ambient occlusion effect does not depend on accumulation of dirt or photobleaching of more exposed surfaces. However, since these other effects are somewhat similarly distributed (although with different falloff and threshold), ambient occlusion maps can be used to simulate them too. When used in this way, the map might also be referred to as a dirt map or a crevice map.
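That definition can be made concrete with a little Monte Carlo sketch: sample directions in the hemisphere above a surface point and count how many are blocked by nearby geometry within the falloff distance. The one-sphere scene and all the parameters here are invented purely for illustration:

```python
import math
import random

# Monte Carlo ambient occlusion at a point on a ground plane, with a
# single sphere as the occluding geometry. Scene and constants are
# made-up illustration values, not any renderer's defaults.

SPHERE_C = (0.0, 0.0, 1.0)   # occluder centre, hovering above the plane
SPHERE_R = 0.8
MAX_DIST = 3.0               # distance threshold: geometry beyond this is ignored

def ray_hits_sphere(origin, direction, centre, radius, max_dist):
    """True if the ray hits the sphere within max_dist (direction is unit)."""
    oc = [origin[i] - centre[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0  # nearer intersection
    return 0.0 < t < max_dist

def hemisphere_dir(rng):
    """Uniform random unit direction with a non-negative z component."""
    while True:
        d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in d))
        if 1e-6 < n <= 1.0:          # rejection-sample the unit ball
            d = [x / n for x in d]
            d[2] = abs(d[2])         # fold into the upper hemisphere
            return d

def occlusion(point, samples=2000, seed=1):
    """Fraction of sampled directions blocked by the sphere: the AO value."""
    rng = random.Random(seed)
    hits = sum(
        ray_hits_sphere(point, hemisphere_dir(rng), SPHERE_C, SPHERE_R, MAX_DIST)
        for _ in range(samples)
    )
    return hits / samples

print(occlusion((0.0, 0.0, 0.0)))   # directly under the sphere: heavily occluded
print(occlusion((10.0, 0.0, 0.0)))  # far away: essentially zero occlusion
```

Note that nothing in the calculation involves dirt or bleaching; those readings come purely from how the occlusion values are later mapped onto darkening in a texture.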
  15. "here is why the Blender developers have decided to put effort into FBX:" Quite understandable and reasonable, but given the lack of a definitive specification, that way they surely put themselves at the mercy of Autodesk if and when they are seen as significant competition. I do hope they have a good understanding with Autodesk about that. Meanwhile, we could do with some tools to make the (especially binary) fbx data human-readable.
  16. Yes. I can see the usefulness of both those, especially the UUID one.
  17. Are you sure it's not transparency limiting reflections? No effect of animation for me, but... ETA: Seems to be the total number of lights reflected from all alpha surfaces, not per surface.
  18. Oh yes. I was forgetting that it's only the shadow casters that add all those extra triangles to the rendering queue. I corrected my post. Thanks for reminding me.
  19. Yes, you are right. The sculpty's physics shape doesn't change, while the mesh's does (generally). So the rationale is even stronger for mesh than for sculpties. I suppose you could restrict swapping to meshes with "None" physics to get around that.
  20. Let me explain to those who might not know that there was an early setup where a rezzed mesh object could be changed by dropping in a different mesh asset, just as you can change the geometry of a sculpty by dropping in a different sculpt map. That capability was removed by making the object and the mesh data inseparable. Sculpt map switching was used to animate sculpties, and I think the resulting resource consumption was considered unacceptable. As well as using a large amount of download and/or cache resource, this could also lead to unsatisfactory stuttering and lag because of download and/or disc access for caching. With sculpties, it also meant recalculating the mesh for each frame (which would not be needed for mesh). I am pretty sure it was to prevent this use that the facility to do this with mesh was removed. Why a throttling alternative, which might have left Qie's uses possible, was rejected, I don't know. On the other hand, I can't immediately think what hourly or fortnightly changes would be so much better met by an assignable mesh asset than rezzing a new instance. Can you enlighten me with some examples?
  21. "But what is your reason why you think it is a bad thing..." Well, no doubt partly because I am biased by the amount of effort I put into understanding collada, but also... Mainly because, as far as I can see, there is no publicly available definition of the format, and it can therefore be varied at will by Autodesk without telling anyone. So there is no authoritative resource to tell us what to expect in the way LL interprets it. I'm afraid all that means Blender users like you and me may be at a severe disadvantage compared to those who can afford the Autodesk software. I guess that may be the intention, but I hope not. Rather, it appears that LL will use an established engine, and the import formats are then fixed by that choice. Let's hope LL will at least come out with a precise definition of their interpretation of the format and exactly what they will and will not read from it (like they never did for collada). The Blender guys are clearly doing a good job of reverse-engineering the specification, but it's beyond their control to know whether the result is going to be reliable (is there a risk that Autodesk will stop them anyway?). I guess we don't know whether the accepted input will be ascii or binary. If it's only binary, that makes it inaccessible for trying to work out problems (like the material naming issues, or the effect of vertex/triangle ordering on physics weights, for example). Even the ascii version seems to be a lot harder to read and manipulate than collada, and you can't (yet) re-import ascii into Blender to check things out.
  22. Does it? I retested with current LL viewer before answering, and had 12 lights on at the same time. What viewer are you using? They were 12 ordinary local lights. I'm pretty sure you can have more projected lights too.
  23. Unless Advanced Lighting is turned on, you are limited to six simultaneous local lights. This is because they consume a lot of rendering resource from the gpu (effectively, each source adds a whole new set of all the triangles in the scene to be drawn from the light's point of view). So even if you want to enable ALM, you should probably try to avoid using too many. ETA - Mistake there - it's only the shadow-casting lights that add that many extra effective triangles!!!
  24. "If the new world uses FBX with morphs included..." Sadly, it seems it will use FBX, but that is not a condition for morphs, as Collada includes them too.
  25. It's not the last thing you link, it's the last one you selected before linking. The last selected, not the last linked.