Land Impact for detailed bike parts.


Lycanpoet


Hey there.

I've been 3D modelling for a while now, but have only recently started doing it in SL. I've been stumped on how to lower the land impact of my parts. I've tried making less detailed physics shapes, but the Download Weight gives a high LI even for simple meshes like frames, seats, etc.

I have seen meshes with amazing detail that still have an LI of 1.

Any guidance would be appreciated.

Thank you in advance.


My first advice would be to try uploading in SECTIONS. While joining items will sometimes give you a lower Land Impact, in your case all the little picky parts (the speedometer comes to mind, and maybe handlebar covers?) don't need to be seen from half a sim away. So making sure the general shape can be seen at a fair distance is important, but not the tiny pieces you might have.

 

TEST on the beta grid (you might already be doing that but who knows) to see what works and what falls apart.

You may need to make custom LODs for 2 and 3 (you probably don't need 4, but I haven't made a bike so I don't actually KNOW that).


Be sure to test your build with the LOD 2 setting (assuming you are using 4) as that is the default. So if it looks really bad at 2 -- that's not good :D

 

ALSO, if you are used to high poly mesh and just LOVE LOVE LOVE that Subsurf modifier --- BAD NEWS. You may be starting out with a very heavy mesh. You need to think simple if that is the issue. Subsurf 1 (or 2 if you REALLY need it) is good for SL. Beyond that you will have issues. Better yet, do without LOL.

That's my input.

Good luck and have fun.

 

PS. where the heck did spellcheck go? I haven't had it as an option in a couple of days now. Pardon any errors pleaeaease.


LI is a frustratingly complex beast. I'm not the best person to explain it, but here we go:

If you're talking about a bike, you should be able to get insanely detailed at the highest LOD, but you'll have to compromise with the lower LODs. Plenty of creators are stuck in "sculpty" mode: do one high LOD, cannibalize the low LODs. The result? Looks good close up, looks like crap at a distance. You can honestly keep the LI at 1 for _most_ small objects, even if they have a lot of detail. Just make sure you have nice low-tri models for the lower LODs. At the lowest, you might just want a sort of "impostor", i.e. flat planes with a very, VERY rough approximation of the actual model.

Another important thing is of course physics. You're practically always better off creating a custom physics shape. For a bike, I'd say it's more or less something like a hexagon for the wheels and a box for the body.

Last but not least: Use materials to display fine details.

Oh, and welcome to the world of game-ready modeling... it's by necessity low poly, unlike render-ready models.


When Chic says test for LOD 2, I think she means a setting of 2 for the debug parameter RenderVolumeLODFactor (rvlf). In the LL viewer, the default is 1.125 for all except "Ultra" graphics settings, where it is 2 (see xxx_graphics.xml in the app_settings folder). You can increase it as far as 2 by maximising the Mesh Detail: Objects slider. To go higher, you need to use the Advanced -> Show Debug Settings menu.

You may be able to get "insanely" high detail for the high LOD, but that doesn't mean it's a good idea. Remember that the rider of the bike will always be very close to it, and his/her viewer will always have to render the high LOD. That rather makes a nonsense of the download weight as a measure of GPU load*, which is calculated on the assumption of random relative camera locations and the effect that has on which LODs are displayed. Use as little geometry as you can for acceptable detail. Normal maps can replace geometry, but they are not without their own cost.

Does anyone know the GPU cost ratio of triangles vs. pixels of normal map?

For wheel physics, if you need them to roll, I would suggest using linked invisible cylinders (and the visible mesh set to physics shape type "None"). This is because the cylinder, as long as it isn't squashed, is recognised by the Havok physics engine as a physics primitive cylinder. This not only rolls perfectly, but is also very efficient for collision detection. In contrast, any uploaded physics shape will always be much less efficient and will behave as faceted, which it is. Unfortunately, the uploader isn't able to recognise perfect cylinders and use the Havok primitive. The physics weight (0.1) of the linked cylinder reflects its greater efficiency.

*Of course, it's really calculating the expected download resource used, but we are assured that is highly correlated with actual GPU burden. Both depend in the same way on LOD and distance.


Thanks for the excellent (as always) clarification Drongle :)

I can shed some light on the math behind normal maps vs. "real" geometry. It's been a while though; things have likely progressed since the late '90s :)

For geometry, you'll always need per-pixel Z-order calculations (ignoring transparent surfaces for a moment). Depending on optimizations, this at least needs to figure out which triangles are close enough to be either in front of or behind the surface in question. That calculation at least used to be o^n, i.e. pretty bad. In addition, since the Z-order changes as the camera angle changes, you get into some very fun performance territory. If you notice FPS drops when turning, that's why.

Normal maps, on the other hand, are merely a simple vector calculation performed on each _visible_ pixel. For each pixel you use the normal map's offset from the triangle normal, then calculate the angle between the viewer and nearby light sources and their distance/falloff, and you're done (ignoring any additional shader operations). IIRC normal maps are merely a shader operation on the GPU.
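To make that per-pixel vector calculation concrete, here is a minimal Python sketch of just the diffuse (N dot L) part for a single pixel. The function name, falloff model and values are made up for illustration; this is not the SL viewer's actual shading code.

```python
# Illustrative sketch: the per-pixel work a normal map adds is roughly a dot
# product between the perturbed normal and the light direction, plus a falloff.
import numpy as np

def shade_pixel(map_rgb, light_dir, light_dist, light_intensity=1.0):
    """Lambertian shading of one visible pixel from a tangent-space normal map.

    map_rgb    -- the (r, g, b) sample from the normal map, each 0..255
    light_dir  -- unit vector from the surface point toward the light
    light_dist -- distance to the light, used for a toy falloff
    """
    n = np.array(map_rgb, dtype=float) / 127.5 - 1.0   # unpack 0..255 into -1..1
    n /= np.linalg.norm(n)                              # re-normalise after quantisation
    ndotl = max(np.dot(n, light_dir), 0.0)              # basic Lambert term
    falloff = 1.0 / (1.0 + light_dist ** 2)             # toy falloff, not SL's model
    return light_intensity * ndotl * falloff

# A flat-facing normal (128, 128, 255) lit from 45 degrees above, 2 m away.
print(shade_pixel((128, 128, 255), np.array([0.0, 0.7071, 0.7071]), 2.0))
```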

If you're curious, displacement maps are more or less treated like actual, existing geometry as far as I know, i.e. the same drawbacks as "real" geometry apply.


Hi :)


Drongle McMahon wrote:


For wheel physics, if you need them to roll, I would suggest using linked invisible cylinders (and the visible mesh set to physics shape type "None"). This is because the cylinder, as long as it isn't squashed, is recognised by the Havok physics engine as a physics primitive cylinder. This not only rolls perfectly, but is also very efficient for collision detection. In contrast, any uploaded physics shape will always be much less efficient and will behave as faceted, which it is.



When I read Lycanpoet's post, my advice would have been: make the physics for all the parts as simple as possible, i.e. for each mesh object that makes up the motorbike, upload a mesh cube for the physics/collision mesh and Analyze it in step 2 of the Physics tab of the uploader.

One, because it's more important for a moving vehicle to have very simple physics than for a static mesh; less work for the physics engine means a vehicle that is less likely to "snag" or get stuck on barriers when colliding with them at high speed.

Two, for the same reason, a lot of the parts should be set to Physics Shape Type "None" anyway, so why bother giving them anything more than a simple mesh cube for physics? The only parts that need physics, apart from the root, are those that will be in contact with the ground and, if you are using one, a specially shaped invisible collision mesh so you slide off barriers more easily at high speed.

When I say give all the mesh parts mesh cubes for physics, I include the wheels: a box physics shape for round wheels.

I have made a few vehicles in the last couple of years, and since the third they have all had square physics shapes for the wheels, most often with Physics Shape Type: Prim. All use rotating-mesh scripts for the wheels, not rotating textures. And they all "roll" very well.

Maybe I have misunderstood what you mean by "roll". I understood it to mean that the physics/collision shape of the wheels is actually rotating. As I have said, the wheels have rotation scripts, so why aren't the square wheels going bump bump bump as I drive along?

So just now I tried 3 vehicles, all with rotating/spinning wheels set to Physics Shape Type: Prim:

     Mesh wheels with round physics mesh.

     Mesh wheels with mesh cubes as physics.

     SL cylinders for wheels.

I enabled Show Physics Shapes in the viewer and, as I expected, in the 2 vehicles with mesh wheels I didn't see the physics rotating at all.

In the vehicle with the prim cylinders it wasn't possible to see whether the physics shape was rotating.

So my question is: if the physics shape isn't actually rotating, why bother with cylinders when cubes will do?

 

 



Aquila Kytori wrote:

 

So my question is: if the physics shape isn't actually rotating, why bother with cylinders when cubes will do?

 

I haven't tested with square wheels, but I suspect you can run into some problems when the surface you "drive" on isn't smooth. The vertical front face of a box can bump into small things rather than deflect off them.

BTW, the reason the physics shape isn't rotating is that the wheels move with a local rotation (local as in on your computer rather than on the server). With an actual rotation they wouldn't work.


Geometry/normal map trade-off - Yes, I understand that the rendering itself is much simpler, but I was more interested in the overall performance, including GPU memory consumption, texture cache thrashing, etc. I guess it's probably something that would have to be measured experimentally, and it would vary a lot with the exact situation, including with different graphics cards. Not sure I have the patience to investigate any of that.


You are almost certainly right. I don't really know anything about vehicles. I am considering the difference between freely rolling a sphere or cylinder with a physics shape made on upload, as opposed to one with a linked-prim physics shape. Apart from being far cheaper in physics weight, the latter rolls much better. The faceted shape comes to a premature stop, rocking on its facets before finally stopping. That is, of course, the correct behavior for a faceted shape. The effect gets less noticeable as the number of facets increases, but then the physics weight and the work for the engine go up rapidly.

The visible prim is, of course, faceted, but as long as it's not distorted, it is treated by Havok as a perfect sphere or cylinder and rolls perfectly as a result. Now, on a vehicle, unless you are doing something very complicated with real mechanics (which is a very bad idea), the wheels are not freely rotating. So I guess it doesn't much matter what shape they are. However, the collisions must be more realistic with cylindrical wheel physics, and the weight will be the same as for cube wheels. I seem to recall that collision detection may be faster with boxes than with cylinders, but not by a great deal. So the boxes might be more efficient despite having the same weights. Of course, a single box for the whole vehicle will be the least demanding of the physics engine, but also perhaps the least realistic for detecting collisions.

In any case, because both boxes and cylinders, as linked prims, use the Havok primitives, either will always be more efficient than any uploaded physics shape, because even a single convex hull is much harder for the engine than any primitive, as reflected in the higher physics weight (0.36 for an uploaded cube convex hull).
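As a back-of-the-envelope illustration using only the weights quoted in this thread (0.1 for a linked prim, 0.36 for an uploaded cube convex hull), and assuming the usual summing of per-link physics weights, here is a small sketch; the function and vehicle layout are made up for the example.

```python
# Rough physics-weight comparison for a two-wheeled vehicle, using only the
# per-shape weights quoted above. Purely illustrative arithmetic.
PRIM_WEIGHT = 0.1            # linked prim cylinder or box (Havok primitive)
UPLOADED_HULL_WEIGHT = 0.36  # simplest uploaded physics shape (cube convex hull)

def vehicle_physics_weight(wheel_count, per_wheel_weight, body_weight):
    # Total is the sum of the per-link weights.
    return wheel_count * per_wheel_weight + body_weight

print("prim wheels + prim body:",
      vehicle_physics_weight(2, PRIM_WEIGHT, PRIM_WEIGHT))                    # 0.3
print("uploaded hulls throughout:",
      vehicle_physics_weight(2, UPLOADED_HULL_WEIGHT, UPLOADED_HULL_WEIGHT))  # ~1.08
```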

 



Kwakkelde Kwak wrote:


Jenni Darkwatch wrote:

IIRC normal maps are merely a shader operation on the GPU.

As far as calculations go, but they do use more VRAM than geometry of course. Then again, with a good normal map you can probably get away with a smaller diffuse map.

Vertices use significantly more memory than normal maps. Vertices use 40 bytes for position, normal and color, plus 8 bytes for texcoords, and however many more for skinning weights (note that color and texcoords use floats). Normal maps are just 3 bytes per pixel. A 512x512 normal map has 262k pixels but takes up the same space as 19.6k vertices at 40 bytes each. Although be warned there isn't a 1:1 ratio of normal map pixels to pixels on your screen; there's a whole lot of filtering and interpolating going on before you see the final result.
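Written out as arithmetic, a tiny sketch using only the byte counts quoted above:

```python
# Worked version of the numbers above: a 512x512, 3-bytes-per-pixel normal map
# versus 40-byte vertices. Only uses figures quoted in this post.
BYTES_PER_VERTEX = 40          # position + normal + color, as quoted above
BYTES_PER_NORMAL_PIXEL = 3     # uncompressed RGB normal map

map_bytes = 512 * 512 * BYTES_PER_NORMAL_PIXEL        # 786,432 bytes
equivalent_vertices = map_bytes // BYTES_PER_VERTEX   # ~19,660 vertices

print(f"512x512 normal map: {map_bytes:,} bytes")
print(f"same memory budget in vertices: {equivalent_vertices:,}")
```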


That sort of fits a recent example of a crate I made. The high poly was 17,520 verts -> about 700k at that rate. The baked normal map was 512x512x4 bytes (alpha used for the spec exponent) -> about 1000k. However, I used a spec map too, and I would have used a lot fewer vertices to do it with geometry, at most 1/4. So in that case, the memory used for materials was up to 8 times what the geometry alternative would be. Different cases will be very different, of course. Anyway, the real test involves much more than just these numbers, as you say. The way to tell would be to make equivalent scenes both ways and measure fps, I suppose. I'm not going to do that!


I meant that vertices use more memory for the same amount of detail. Given two objects with the same visible quality, the one made with pure geometry will use significantly more memory than the one that uses a low poly base mesh with a normal map. That is to say, normal maps are more efficient at storing high detail than vertex meshes are. That's what they were invented for and why people use them.

The key point is to use the right size of normal map. In your example you say you'd use fewer vertices to make that detail than the normal map has pixels. That means your normal map is too big.

Granted, I did gloss over a few things, such as the index buffers that define how the vertices combine to form triangles (6 bytes per tri), and the fact that normal maps are not the same as the normals buffer. LL also missed the boat on some optimizations: with tangent-space normal maps you only need the X and Y channels, since the Z channel can be reconstructed in the shader, and they could have used compression as well, which in total would cut the per-pixel cost down to 1 byte.
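The Z reconstruction mentioned there is just the unit-length constraint on the normal. Here is a small Python sketch of the idea; in a real viewer this would be per-pixel shader code, and the helper name is invented for illustration.

```python
# Reconstructing the Z channel of a tangent-space normal from X and Y only.
# Inputs are the 0..255 bytes stored in the two map channels.
import math

def reconstruct_normal(x_byte, y_byte):
    x = x_byte / 127.5 - 1.0                          # back to the -1..1 range
    y = y_byte / 127.5 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))      # unit length => z is implied
    return (x, y, z)

print(reconstruct_normal(128, 128))   # roughly (0, 0, 1): a flat-facing normal
print(reconstruct_normal(200, 128))   # tilted toward +X
```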



leliel Mirihi wrote:


Kwakkelde Kwak wrote:


Jenni Darkwatch wrote:

IIRC normal maps are merely a shader operation on the GPU.

As far as calculations go, but they do use more VRAM than geometry of course. Then again, with a good normal map you can probably get away with a smaller diffuse map.

Vertices use significantly more memory than normal maps. Vertices use 40 bytes for position, normal and color, plus 8 bytes for texcoords, and however many more for skinning weights (note that color and texcoords use floats). Normal maps are just 3 bytes per pixel. A 512x512 normal map has 262k pixels but takes up the same space as 19.6k vertices at 40 bytes each. Although be warned there isn't a 1:1 ratio of normal map pixels to pixels on your screen; there's a whole lot of filtering and interpolating going on before you see the final result.

Of course, when you compare a high poly model to a low poly model with normal maps, the low poly one will be better memory-wise. Normal maps are textures though, and Second Life has a very limited amount of texture memory. In other words, if you have a lot of VRAM, SL will run out of texture memory before your graphics card does.

Then there's this:

[Image: NormalVsGeom.PNG]

The left object is rather heavy, with 4410 tris and 8820 verts (if it were uploaded to SL). The right object has 18 and 28. Baked onto a normal map, there wouldn't be any difference though (apart from the fact that you can of course tile the map of the left object, but this is just an example). I'm pretty sure using a normal map for the object on the right, rather than geometry, wouldn't do you any good memory-wise.

BTW, doesn't a normal map always use the fourth channel's memory, even if it isn't used, taking up 4 bytes per pixel?



Kwakkelde Kwak wrote:

Of course, when you compare a high poly model to a low poly model with normal maps, the low poly one will be better memory-wise. Normal maps are textures though, and Second Life has a very limited amount of texture memory. In other words, if you have a lot of VRAM, SL will run out of texture memory before your graphics card does.

Vertex data takes memory too. Use Develop -> Show Info -> Show Render Info to see how much. I'm sure everyone has heard about the GDC presentation "Approaching Zero Driver Overhead" by now. It's worth pointing out that many of their examples were bandwidth-limited by the PCIe bus transferring vertex data.


The left object is rather heavy, with 4410 tris and 8820 verts (if it were uploaded to SL). The right object has 18 and 28. Baked onto a normal map, there wouldn't be any difference though (apart from the fact that you can of course tile the map of the left object, but this is just an example). I'm pretty sure using a normal map for the object on the right, rather than geometry, wouldn't do you any good memory-wise.

Obviously there are always exceptions to everything; that goes without saying. Though I don't think a shape as simple as a pyramid is a good example of that.

 

 


BTW, doesn't a normal map always use the fourth channel's memory, even if it isn't used, taking up 4 bytes per pixel?

No. Normal map != normal gbuffer. The viewer will use 4 bytes for normals and spec exponent even if you don't have a normal or spec map, because the normal gbuffer covers the whole screen, the same way the "framebuffer" (diffuse/albedo gbuffer) always has an alpha channel even if there isn't a single alpha texture visible.

 

 



leliel Mirihi wrote:


Kwakkelde Kwak wrote:

Of course, when you compare a high poly model to a low poly model with normal maps, the low poly one will be better memory-wise. Normal maps are textures though, and Second Life has a very limited amount of texture memory. In other words, if you have a lot of VRAM, SL will run out of texture memory before your graphics card does.

Vertex data takes memory too. Use Develop -> Show Info -> Show Render Info to see how much. I'm sure everyone has heard about the GDC presentation "Approaching Zero Driver Overhead" by now. It's worth pointing out that many of their examples were bandwidth-limited by the PCIe bus transferring vertex data.

I don't think one percent of one percent of one percent of the users has ever heard of the GDC presentation "Approaching Zero Driver Overhead". Anyway, I am talking about the maximum of 512 MB reserved for textures. No matter how much memory a vertex uses, it doesn't qualify as, or get used as, texture memory. If it does, the term is chosen poorly at best.


Obviously there are always exceptions to everything; that goes without saying. Though I don't think a shape as simple as a pyramid is a good example of that.

I think you have false expectations of the SL community then. Most builders don't have a clue about 3D modeling. If they are told normal maps are good and polygons are bad, they might use a normal map for just about anything: a single rivet on a big plane, or, in this case, simply because that was faster to model, a single tapered box on a big plane.

 

 

Going by what I've seen over the years, they'll probably use a normal map on top of a high poly model, making it not just detailed but superextramega detailed.


No. Normal map != normal gbuffer. The viewer will use 4 bytes for normals and spec exponent even if you don't have a normal or spec map, because the normal gbuffer covers the whole screen, the same way the "framebuffer" (diffuse/albedo gbuffer) always has an alpha channel even if there isn't a single alpha texture visible.


I thought I read somewhere (it's far, far away in the back of my mind) that it's best to use the alpha channel of your normal maps for something, because it's used anyway. Maybe the person writing it was mistaken, maybe I misread. I don't know.

 

Anyway, the entire point is: sometimes it's better to use geometry, sometimes it's better to use normal maps. It's always good to use as little as possible.


"That means your normal map is too big."

I don't think I can entirely agree with that assertion, so far. The density of geometry can be highly variable. In my example, it's mostly in rounded bolt heads that occupy less than 2% of the surface area. The remainder doesn't use much geometry, but the normal map has to be uniform and of sufficient resolution to give the required detail for the bolt heads. In other words, the normal map resolution is determined by the finest detail to be represented. If the distribution of detail is non-uniform, that means it has to carry a great deal of redundant information. In contrast, the geometry only contains detail where it is required.
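To put rough numbers on that redundancy, here is an illustrative sketch: the 2% coverage figure is the one quoted above, while the map size and byte counts are just example values.

```python
# If fine detail covers only 2% of the surface, a uniform normal map still pays
# for the other 98% at full resolution. Example values only.
DETAIL_COVERAGE = 0.02   # fraction of surface that actually needs fine detail
MAP_RES = 512            # uniform normal map resolution
BYTES_PER_PIXEL = 3

total_pixels = MAP_RES * MAP_RES
useful_pixels = int(total_pixels * DETAIL_COVERAGE)
print(f"pixels spent on fine detail: {useful_pixels:,} of {total_pixels:,}")
print(f"bytes carrying mostly redundant data: "
      f"{(total_pixels - useful_pixels) * BYTES_PER_PIXEL:,}")
```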

There's a worse problem if you want your normal map to include sharp edges, as around the edges of the bolts. With geometry, these remain sharp no matter how closely you view the object, but with a normal map, even at very high resolution, the sharpness breaks down as you approach.

Of course, what I actually did was to use the redundancy in the normal map to add textural detail on top of the geometric detail. So in a practical case, the normal map gets used to carry more information than the geometry did, and we end up not comparing like with like.

Here's a close-up of the crate, normal maps on the left, the geometry they were baked from on the right (I deliberately left it looking thicker - that's how it is - why?), using different normal map resolutions. Bevels are left sharp to emphasise the effect on sharp edges. I would say even the 512x512 is unsatisfactory around the edges of the bolt head at this view distance. However, the straight sharp edges actually look nicer at lower resolutions, benefitting from the interpolation. So I guess the balance depends heavily on exactly what the geometry is.

[Image: nmapres.jpg]

Notes: Bakes done in Blender; maybe Normalmap etc. would do better. Blank diffuse texture and spec map. Default shininess settings (51, 0). AO turned off because it affects geometry but not normal map effects. Looking down towards the sun at 3pm with default settings. I underestimated the geometry - I forgot vertex duplications at sharp edges etc. It's about 3x what I said, giving the normal map a 2-fold advantage in the 40 bytes/vert calculation. The render info says 50.1/73.1 KB with the object selected; I never know exactly what that is supposed to mean. If it's the memory used by the geometry, and it's the sum of both numbers, that would be about 27 bytes/vert.

ETA: Whoops - forgot numbers on pic!

For what it may be worth, here is the bolt geometry. I'm pretty sure it would look fine with a lot less.

[Image: abolt.jpg]


Yes, I should try that. It takes a lot of extra geometry to get an acceptably sharp edge without actually using one, which would mean much more geometry than one would ever use for the no-normal-map version. So I wouldn't be able to do the comparison. I'll try it though, and put the results here.

Here we are. New geometry with five-segment bevels instead of sharp edges in the high poly. Not much different really, but note the slight artefacts due to bleeding of the normals into the square around the bolt. It's to control that that I used sharp edges before. I could use more subdivision instead, maybe. The high poly is already far more verts than I would ever use, though.

[Image: nmapres_new.jpg]


Only in Second Life...

I don't want to sound disparaging of the many talented and self taught builders out there, so I won't rant like I was planning to.


Use fewer polys, not more. Normal maps make stuff look better, so use them. Treat your triangles and your pixels like non-renewable resources, and really try to understand the mesh uploader and how the different LODs contribute to the overall land impact.

Regarding Drongle's bakes and renders (not entirely addressed to you, Drongle - please don't think I'm trying to lecture you...):

The geometry of the bolts appears more raised than the normal map version because normal maps don't bake (and therefore lose data from) angles close to 90 degrees, so it's pretty much ignoring the bottom, steepest part of the bolt mesh. This is one of the reasons to use ambient occlusion bakes with your diffuse, because they can help to add much-needed depth to a normal map.

Even if you intend for your normal map to be 512 or less, work and bake at a much higher res. Photoshop is better at pixel interpolation, and a lot of jagged edges can be mitigated by downscaling your image.
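If you would rather script the downscale than do it in Photoshop, here is a small sketch using Pillow and NumPy; the file names are placeholders, and the extra step re-normalises the averaged normals back to unit length after filtering.

```python
# Downscale a baked normal map and re-normalise the vectors afterwards.
# Filtering averages neighbouring normals, so the results are no longer unit
# length; renormalising restores that. File names are just placeholders.
import numpy as np
from PIL import Image

src = Image.open("bake_2048.png").convert("RGB")
small = src.resize((512, 512), Image.LANCZOS)

arr = np.asarray(small, dtype=np.float64) / 127.5 - 1.0    # bytes -> -1..1 vectors
lengths = np.linalg.norm(arr, axis=2, keepdims=True)
arr = arr / np.maximum(lengths, 1e-6)                      # renormalise
out = ((arr + 1.0) * 127.5).clip(0, 255).astype(np.uint8)  # back to bytes

Image.fromarray(out).save("bake_512.png")
```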

Finally, you absolutely can use hard edges in your normal bakes. Soft/bevelled edges will always be better, but sometimes they're not possible. In the case of your rivets, I'd keep the hard edges, bake them separately from the underlying mesh to avoid any baking artifacts, then combine them together. This gives you a mask around your bolts, so you can have the edges as soft or hard as you like by blurring the edge. (Another thing to do in this case is to give a slight indentation under the bolt to make it look like it has deformed the surface underneath it. Something like this can be a pain to do at the baking stage, but is simple in PS when you have separate layers.)

 

