
Drongle McMahon

Advisor
  • Posts

    3,539

Everything posted by Drongle McMahon

  1. I'm so glad I don't make clothes :matte-motes-big-grin-evil:
  2. "when you upload it, you are claiming that it has no such requirements" That's the problem. What it actually says is that you are claimimg you have the rights to grant the rights needed for the uses contemplated in the ToS, but there is nothing I can see in the ToS where any contemplated use excludes attribution to third parties. In that case, your claim does not need to include the right to waive the requirements for attribution to third parties. Can you point to a contemplated use that requires this waiver? Do you think "uncondional" implies that? Only theuploader's attribution rights are explicitly excluded. As I said, I am sure that isn't what they intended. The intention is clearly to enable them to ignore absolutely all rights of anybody and everybody in all circumstances. For that reason, I have treated it as having the meaning you indicate, and avoided uploading anything with any license except CC0, even though I'm not sure that's what it says. For most people, it may be the retrospective aspect that is the problem. As far as I can see, nobody could have agreed to the ToS if they had ever uploaded anything with a CC BY (or any nore restrictive) license. I have little doubt that this has been widely overlooked, at least by people who didn't read the ToS, or who read it but didn't didn't appreciate its retrospective action. It means that there are likely very many residents who have technically made fraudulent claims, effectively violating bith the ToS and the licence. From my point of view, that is the most serious aspect of the ToS.
  3. I think the situation with CC BY licenses is unclear. The TOS requires you to waive YOUR attribution rights when you upload, but it doesn't say anything explicitly* about third party attribution rights. So I can't see anything in the letter of the TOS that stops you uploading a CC BY asset, as long as you attach the required attribution. If that is so, then when they proceed to exercise the other rights they claim, they are presumably still bound by the CC BY attribution requirement. However, that's a very odd situation. They would be free of attribution requirements on content that was entirely yours, but bound by third party attribution requirements for content you got under the CC BY license. I suspect this is a drafting error, as it appears to be inconsistent with the evident intent, which is to be free of anyone else's moral rights for all content. I am not a lawyer, so none of this constitutes legal advice; it is merely puzzled reflection. *or implicitly, as far as I can see (which isn't very far, of course).
  4. "On curved surfaces this won't work without vertex normal matching." Thr crate map was meant for a prim cube, where you can't change the UV map, but for a mesh cube, I think your material split idea might be a good compromise - just a little more geometry to get the different normal map repeat on different material. It might depend on the regular feature spacing though. I don't follow the quoted bit though. Aren't vertex normals preserved across material boundaries? I'll have to try it.
  5. I guess it's a matter of perspective. Others, coming from high poly modelling for static rendering and used to high poly tools and methods, will see it the other way round. There are clearly tools and tutorials out there that use polygons with complete abandon, because it doesn't matter for their purposes. Unfortunately, some continue to do that in SL. Mostly this happens with clothing and other attachments, because those escape the LI penalty which is LL's way of encouraging polygonal thrift. Now, if we could add a texture component to LI, that would give a basis for balancing the two. No real hope of that though, because they are too reluctant to break old content. Even those who aren't extravagant with geometry sometimes have older content made before the materials stuff came out, where they had to use geometry. So they could now literally replace that geometry with normal maps to improve the efficiency of their stuff. I have a few things that I really should do that with, including a machine with 300 hexagonal bolt heads on it made of geometry :smileysurprised: (OK - it was an experiment to see what it would cost!).
  6. "Also while the detail may be non-uniform on the mesh it doesn't have to be on the UV map." Yes. In fact that's the thing that really renders (sorry!) my questions a bit moot. In reality, I suppose, you simply don't do only the same the same thing with a normal map that you would otherwise do with geometry. In the actual crate, the normal map had woodgrain and pitted rust as well as the bolt heads. Those would have been beyond impractical with geometry. It would be nice to have numbers to back up the argument for normal maps, and to make some decisions, but I guess I'll have to do without. Right or wrong, I have a fairly developed sense of "that's too much geometry", but no such clear "That's too much normal map", and texture download delay is the major annoyance of SL, at least on my slow connection.
  7. I'm sure we all agree that we want to replace geometry with normal maps. The questions here are what are the limits of effectiveness in that effort, if any, and what are the quantitative bases that allow that to be estimated in any particular case. In particular, that requires deciding what resolution of normal map is needed to provide acceptable appearance. Generalisations without quantitative basis don't really help answer those questions (although they may well be all that is available). As far as the composition of maps from separately baked parts is concerned, I don't know that that will help. The undesirable effects of the lower resolution maps, the blockiness and the glazed look at edges, appear to be the result of interpolation between pixels in the UV map. That will only depend on the pixel resolution of the map. Surely antialiasing or blurring will only exacerbate that, or can they let the gpu interpolation work better? I will certainly grant that compositing is easier than making the geometry, but it sacrifices the possibility of direct comparison of geometry and normal map baked from it, which was my objective here. Yes, the effect of lacking parallax when viewing the normal map at anything other than perpendicular is fairly obvious when you think about it. I made myself a picture to see just how bad it was, looking at 45 degrees at geometry and normal map. The squishing of the steep bit is quite nasty. Then the white line shows a bit that shouldn't even be visible. Oh well, you never get something for nothing. A rough calculation of the size of that error is sketched below.
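     To put a number on the missing parallax (my own back-of-envelope arithmetic, with an illustrative bump height, not anything measured in SL): a real bump of height h viewed at an angle t from the perpendicular shifts its silhouette by h*tan(t), while a normal map shifts it by nothing, so at 45 degrees the error is the whole bump height.

         import math

         def parallax_error(bump_height_m, view_angle_deg):
             """Silhouette offset real geometry shows but a normal map cannot."""
             return bump_height_m * math.tan(math.radians(view_angle_deg))

         # A 1 cm bolt head viewed at 45 degrees sits a full 1 cm away
         # from where flat-mapped shading pretends it is.
         print(parallax_error(0.01, 45))  # 0.01 (metres)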
  8. Yes. I should try that. It takes a lot of extra geometry to get an acceptably sharp edge without actually using one, which would mean much more geometry than one would ever use for the no-normal-map version. So I wouldn't be able to do the comparison. I'll try it though, and put the results here. Here we are. New geometry with five-segment bevels instead of sharp edges in the high poly. Not much different really, but note the slight artefacts due to bleeding of the normals into the square around the bolt. It was to control that bleeding that I used sharp edges before. Could use more subdivision instead, maybe. The high poly is already far more verts than I would ever use though. (A few bake settings that bear on the bleeding are sketched below.)
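     For anyone fighting the same bleed in current Blender (this assumes the 2.8x+ Python API and Cycles baking, which is not what I used above; treat it as a sketch, not a recipe), the margin and cage settings are the ones to play with:

         import bpy

         scene = bpy.context.scene
         scene.render.engine = 'CYCLES'       # object.bake needs Cycles
         bake = scene.render.bake
         bake.use_selected_to_active = True   # bake high poly detail onto the low poly
         bake.cage_extrusion = 0.02           # push the cage out past the bevels
         bake.margin = 4                      # a small margin limits bleed across UV islands

         # With the high poly selected and the low poly active:
         bpy.ops.object.bake(type='NORMAL')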
  9. Yes. I went into the house to look for the middle picture, but it quite soon told me I shouldn't be there and I had 10 secs to get out. That's not very long, but I just made it. I hate to think what would have happened otherwise. It didn't say.
  10. The mesh developers have always recognised that generalized mesh animation is a desirable feature, and that if it were to be implemented it would be done by arbitrary skeletons, but, as far as I know, there has never been any indication that the necessary resources would be allocated to that development any time soon.
  11. "That means your normal map is too big." I don't think I can entirely agree with that assertion, so far. The density of geometry can be highly variable. In my example, it's mostly in rounded bolt heads that occupy less that 2% of the surface area. The remainder doesn't use much geometry, but the normal map has to be uniform and of sufficient resolution to give the required detail for the bolt heads. In other words, the normal map resolution is determined by the finest detail to be represented. If the distribution of detail is non-uniform, that means it has to carry a great deal of redundant information. In contrast, the geometry only contains detail where it is required. There's a worse problem if you want your normal map to include sharp edges, as around the edges of the bolts. With geometry, these remain sharp no matter how closely you view the object, but with a normal map, even with very high resolution, the sharpness breaks down as you approach. Of course, what I actually did was to use the redundancy in the normal map to add textural detail on top of the geometry detail. So in a practical case, the normal map gets used to carry more information that the geometry did, and we end up not comparing like with like. Here's a closeup of the crate, normal maps on the left, geometry it was baked from on the right (I delibarately left it looking thicker - that's how it is - why?), using different normal map resolutions. Bevels are left sharp to emphasise the effect on sharp edges. I would say even the 512x512 is unsatisfactory around the edges of the bolthead at this view distance. However, the straight sharp edges actually look nicer at lower resolutions, benefitting from the interpolation. So I guess the balance depends heavily on the eactly what the geometry is. Notes: Bakes done in Blender. Maybe Normalmap etc would do better. Blank diffuse texture and spec map. Default shininess settings (51,0). AO turned off because it affects geometry but not normalmap effects. Looking down towards sun at 3pm with dfefault settings. I underestimated the geometry - forgot vertex duplications at sharp edges etc. It's abour 3x what I said, giving the normalmap a 2-fold advantage in the 40b/vert calculation. The render info says 50.1/73.1 KB with the object selected. I never know exactly what that is supposed to mean. If it's the memory used by the geometry, and it's the sum of both numbers, that would be about 27 bytes/vert. ETA: Whoops - forgot numbers on pic! For what it may be worth, here is the bolt geometry. I'm pretty sure it would look fine with a lot less.
  12. That sort of fits for a recent example of a crate I made. High poly was 17520 verts -> about 700k at that rate. Baked normal map was 512x512x4 bytes (alpha used for spec exponent) -> about 1000k. However, I used a spec map too, and I would have used a lot fewer vertices to do it with geometry, at most 1/4. So in that case, the memory used for materials was up to 8 times what the geometry alternative would be (the arithmetic is sketched below). Different cases will be very different, of course. Anyway, the real test involves much more than just these numbers, as you say. The way to tell would be to make equivalent scenes both ways and measure fps, I suppose. I'm not going to do that!
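     The arithmetic, for anyone who wants to vary the assumptions. The 40 bytes/vert rate and the "at most 1/4" figure are the ones quoted above; the separate spec map is what pushes the ratio from about 6x towards the 8x mentioned.

         BYTES_PER_VERT = 40                    # the per-vertex rate quoted above

         high_poly_bytes  = 17520 * BYTES_PER_VERT         # bake source: ~700 KB
         normal_map_bytes = 512 * 512 * 4                  # RGBA; alpha holds spec exponent
         geometry_alt     = (17520 // 4) * BYTES_PER_VERT  # "at most 1/4" of the verts

         print(high_poly_bytes)                  # 700800
         print(normal_map_bytes)                 # 1048576
         print(normal_map_bytes / geometry_alt)  # ~6.0, before adding the spec map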
  13. :smileyindifferent: Well, I could, I suppose, but I decided I would respect the intent behind the request to leave.
  14. I found your place in your profile and visited to see if I could detect anything. From the outside views in your pics, I was getting steady > 100fps (GTX 670, ultra+ALM+shadows, dd 270m rvlf 4.0). I guess it must be something else or a transient problem. Or did you change it back? Couldn't check inside because it told me to go away.
  15. You are almost certainly right. I don't really know anything about vehicles. I am considering the difference between freely rolling a sphere or cylinder with a physics shape made on upload as opposed to one with a linked prim physics shape. Apart from being far cheaper in physics weight, the latter rolls much better. The faceted shape comes to a premature stop, rocking on its facets before finally stopping. That is, of course, the correct behavior for a faceted shape. Of course the effect gets less noticeable as the number of facets increases, but then the physics weight and work for the engine go up rapidly. The visible prim is, of course, faceted, but as long as it's not distorted, it is treated by Havok as a perfect sphere or cylinder and rolls perfectly as a result. Now, on a vehicle, unless you are doing something very complicated with real mechanics (which is a very bad idea), the wheels are not freely rotating. So I guess it doesn't much matter what shape they are. However, the collisions must be more realistic with cylindrical wheel physics, and the weight will be the same as for box wheels. I seem to recall that the collision detection may be faster with boxes than with cylinders, but not by a great deal. So the boxes might be more efficient despite having the same weights. Of course a single box for the whole vehicle will be the least demanding of the physics engine, but also perhaps the least realistic for detecting collisions. In any case, because either boxes or cylinders, as linked prims, use the Havok primitives, either will always be more efficient than any uploaded physics shape, because even a single convex hull is much harder for the engine than any primitive, as reflected in the higher physics weight (0.36 for an uploaded cube convex hull; a quick totting-up is sketched below).
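     Using the two weights that come up in these posts (0.1 for a linked prim cylinder, 0.36 for an uploaded convex hull cube), the bookkeeping for a four-wheel setup comes out like this; the totals are only as good as those two figures.

         PRIM_CYLINDER_WEIGHT = 0.10   # linked prim, treated by Havok as a primitive
         UPLOADED_HULL_WEIGHT = 0.36   # single convex hull from the mesh uploader

         wheels = 4
         print(wheels * PRIM_CYLINDER_WEIGHT)  # 0.4  - four linked cylinder wheels
         print(wheels * UPLOADED_HULL_WEIGHT)  # 1.44 - four uploaded hulls, 3.6x the weight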
  16. Geometry/normal map trade-off - Yes, I understand that the rendering itself is much simpler, but I was more interested in the overall performance, including gpu memory consumption, texture cache thrashing etc. I guess it's probably something that would have to be measured experimentally, and would vary a lot with the exact situation, including with different graphics cards. Not sure I have the patience to investigate any of that.
  17. When Chic says test for LOD 2, I think she means a setting of 2 for the debug parameter RenderVolumeLODFactor (rvlf). In the LL viewer, the default is 1.125 for all except "ultra" graphics settings, where it is 2 (see xxx_graphics.xml in the app_settings folder). You can increase it as far as 2 by maximising the Mesh Detail: Objects slider. To go higher, you need to use the Advanced->Show Debug Settings menu. You may be able to get "insanely" high detail for the high LOD, but that doesn't mean it's a good idea. Remember that the rider of the bike will always be very close to it, and his/her viewer will always have to render the high LOD. That sort of makes a nonsense of the download weight as a measure of gpu load*, which is calculated on the assumption of random relative camera locations and the effect of that on which LODs are displayed. Use as little geometry as you can for acceptable detail. Normal maps can replace geometry, but they are not without their own cost. Does anyone know the gpu cost ratio of triangles vs pixels of normal map? (A sketch of how rvlf moves the LOD switch distances follows below.) For wheel physics, if you need them to roll, I would suggest using linked invisible cylinders (and visible mesh set to physics shape type "None"). This is because the cylinder, as long as it isn't squashed, is recognised by the Havok physics engine as a physics primitive cylinder. This not only rolls perfectly, but is very efficient for collision detection. In contrast, any uploaded physics will always be much less efficient and will behave as faceted, which it is. Unfortunately, the uploader isn't able to recognise perfect cylinders and use the Havok primitive. The physics weight (0.1) of the linked cylinder reflects its greater efficiency. *of course, it's really calculating the expected download resource used, but we are assured that is highly correlated with actual gpu burden. Both depend in the same way on LOD and distance.
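     For what it may be worth, here is how I understand rvlf to move the switch distances. The 0.24/0.06/0.03 radius ratios are the ones in the streaming-cost formula; whether the display switches use exactly the same constants is an assumption on my part.

         def lod_switch_distances(radius_m, rvlf):
             """Approximate camera distances at which each LOD drop happens.
             Constants are from the streaming-cost formula; their use for the
             actual display switches is assumed here, not verified."""
             return {"high->mid":   rvlf * radius_m / 0.24,
                     "mid->low":    rvlf * radius_m / 0.06,
                     "low->lowest": rvlf * radius_m / 0.03}

         # A bike with a ~1 m bounding radius, default rvlf vs the "ultra" value:
         print(lod_switch_distances(1.0, 1.125))  # ~4.7 m, ~18.8 m, ~37.5 m
         print(lod_switch_distances(1.0, 2.0))    # ~8.3 m, ~33.3 m, ~66.7 m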
  18. And of course, it could always be a problem with the metering rather than the truth! A question for Nyx at the cc meeting perhaps.
  19. Thanks for that explanation. Here's some interesting info though. Again I can't explain... I am standing on an Aditi mesh sandbox looking down at our favourite 12288 triangle cube. I turn off everything in Advanced Rendering Types. Sure enough, I can't see anything and the statistics bar triangle count is 0, whether shadows are enabled or not. Now I turn back on just Volume. I still can't see anything, but the triangle count says 50 with shadows enabled and 0 with them disabled. What are those 50,000 triangles I can't see? I can't see anything by looking around, although the blackness changes to dark blue if I look upwards. Now, looking downwards again, I turn Volume off and Simple on. Nothing visible, triangle count zero whether shadows or not. Turn Volume on, so I now have just Simple+Volume. Now the cube appears, and the triangle count is 12 without shadows and 62 with shadows. So that's just the expected 12,000 extra in either case. Now I add a single projected light shining at the cube. Count is 12 with no shadows, 62 with just sun and moon, 74 with sun, moon and projector shadows. So that fits - one extra set of triangles for the projected light. Duplicate the projected light and the count goes up to 88 - not too far off the expected 86, only if projector shadows are on. Duplicating more has no effect (it looks like there can only be two projectors more than whatever is already there - I can only get two shadows if I turn the surface patches back on). Now I delete the cube. The triangle count goes back to zero. So the unexplained 50 does belong to the cube. It would be just right for four sets of triangles. Does sun+moon take up two sets of triangles each, even when they aren't in the sky? (this was all at noon). I suppose there may be two lights somewhere I don't know about, but then I can add two, which would make six altogether. Need to sort out what's going on here to know how to use the triangle count for estimating the burden of triangles. Meanwhile, I guess the moral is to only do the measurements with shadows off. (A pass-counting sketch that fits all these numbers is below.) You can also get the triangle count of an object you can select by using Develop->Show Info->Render Info. Near the bottom it gives the KTris for the scene, but if you select something it changes to the KTris of the selection. It changes with LOD too (done by changing RVLF or moving away). The advantage of this is that you don't have to subtract the count in the scene without the object.
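     All of those numbers fall out if you assume the renderer draws the scene once for the camera, once for each of four sun/moon shadow splits, and once for each of at most two projector shadow maps - which would also explain why only two projectors ever cast shadows, and why Volume alone with shadows shows about 50 (four shadow passes with no camera pass: 4 x 12.288 = 49.2). Treating those pass counts as an assumption:

         def expected_ktris(scene_ktris, shadows=False, projectors=0):
             """Statistics-bar KTris, assuming 1 camera pass, 4 sun/moon
             shadow-split passes, and up to 2 projector shadow passes."""
             passes = 1 + ((4 + min(projectors, 2)) if shadows else 0)
             return scene_ktris * passes

         cube = 12.288  # the 12288-triangle cube
         print(expected_ktris(cube))                      # 12.3 (observed 12)
         print(expected_ktris(cube, shadows=True))        # 61.4 (observed 62)
         print(expected_ktris(cube, True, projectors=1))  # 73.7 (observed 74)
         print(expected_ktris(cube, True, projectors=2))  # 86.0 (observed 88)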
  20. I'm pretty sure it doesn't render (most) invisible triangles. So I think the figure shown in the statistics bar must just be the number of triangles before invisible face culling. However, looking again, it is now changing with the LOD, which makes it much more useful. I can't imagine why it wasn't doing that last time I looked though. Maybe an updating problem.
  21. Hmm. Same in LL viewer - statistics bar. I'm not sure how good an indication it is because it doesn't seem to take account of occlusion or of LOD (at least in the LL viewer). It stays at 12K for me even with RVLF = 0, when you can see in wireframe that it's the lowest LOD that's being rendered (see ETA note though). I see the same huge increase in triangles (12K->61K)* when turning on shadows. It seems to be a multiplier of about 5x, because a mesh with fewer triangles gives a roughly proportional shadow increase. No idea why that should be, but it would certainly explain the effect of shadows on fps if it's actually rendering that many extra triangles! Does anyone know how the shadows are made? *That's with surface patch etc off - so there isn't anything to put the shadow on. ETA. Ah! Now it IS changing with LOD switches. There must have been an updating problem when I first looked.
  22. I need to know where you are getting the numbers of (K)Tris/frame that you are telling us about. Otherwise I can't really work out what we are talking about. How do you get that number? (Yes K=kilo=1000).
  23. Why bother suing you? They could just buy you instead.
  24. "i can't get the computer to draw more than 28 Tris per frame" I'm baffled. How do you obtain that figure? Here I am looking at your 12288 triangle cube (two LODs full detail, then auto), on Aditi, with surface patch, sky, water an avatars turned off (Advanced->Render Types)*, and with draw distance 64m (won't go lower). Using the Develop->Consoles->Scene Statistics console to see how many visible triangles there are. It says 12760. I have no idea where the extra 480 come from. * Actually, that's not necessary - they don't get counted anyway.
  25. NEVERMIND

     "you could set the maximum number of vertices a single mesh could have" I'm not sure an absolute number would work, because it wouldn't be appropriate for small and large meshes at the same time, say a teaspoon and an antique dining table. Somewhere deep in the jira there is a feature request for a per-object parameter that would scale renderVolumeLODFactor for each object (by 0..1), as sketched below. That would allow object owners to tune the LOD display distances of different objects according to the circumstances of their use and amount of detail, provided the LODs are well designed. That might have provided a useful means of lag control. "the problem I am seeing with normal maps is because ... it is being compressed blurring the image" I have wondered how much of the glazed look in close view is due to that. For maps with specific detail, small maps would never be adequate, but for general roughening, perhaps high repetitions of textures small enough to get lossless upload could be used. I wonder if anyone has tested that?