
“Fuzzy” layer on mesh?


Kawaii Spicy

You are about to reply to a thread that has been inactive for 1572 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

Materials (spec and normal maps) can help with the illusion of fluffiness, too.

So the OP doesn't think these answers are just being dismissive: the reason the fuzzy-layer way of doing it isn't ideal is twofold:

The extra geometry adds that much more that needs to be rendered, becoming a drag on rendering performance. It may not seem like all that much on its own, but when everyone is doing it (as well as other bad creation habits) it adds up fast.

The second issue is that blended alpha is very difficult to render.  It has its uses, but it should be used sparingly. For that matter, hair, clothing, makeup and tattoo layers on mesh bodies, all become a major impact on rendering performance whenever they're on screen.

The more we rein in these habits among content creators, the better SL will run for everyone.


11 hours ago, Penny Patton said:

The second issue is that blended alpha is very difficult to render.  It has its uses, but it should be used sparingly. For that matter, hair, clothing, makeup and tattoo layers on mesh bodies, all become a major impact on rendering performance whenever they're on screen.

To help clarify this bit for the OP: Second Life runs on OpenGL. In that environment, alpha blending goes through four alpha depth calculation passes to render the smooth transition from opaque to transparent, whereas alpha masking runs through only one pass. For this reason, alpha blending causes a lot of depth-allocation issues, producing the infamous "alpha glitch", where alpha textures render in the wrong depth order. This has been, and still is, a common problem in games, where the expertise of developers has helped reduce the effect by using alpha blending sparingly and wisely.
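As a toy illustration (a sketch, not Second Life's actual rendering code), the order dependence can be shown per pixel in a few lines of Python: blended fragments must be composited back-to-front, while masked fragments simply win or lose the depth test, so draw order stops mattering.

```python
# Toy per-pixel model: each fragment is (color, alpha, depth), colors as floats.
# Illustrative sketch only, not Second Life's renderer.

def render_masked(fragments, bg=0.0, cutoff=0.5):
    """Alpha masking + depth test: the nearest fragment passing the cutoff
    wins, so the result is the same no matter what order fragments arrive."""
    color, depth = bg, float("inf")
    for frag_color, frag_alpha, frag_depth in fragments:
        if frag_alpha >= cutoff and frag_depth < depth:
            color, depth = frag_color, frag_depth
    return color

def render_blended(fragments, bg=0.0):
    """Alpha blending ('over' operator) in submission order: the result
    depends on that order, which is why unsorted alphas glitch."""
    color = bg
    for frag_color, frag_alpha, _ in fragments:
        color = frag_color * frag_alpha + color * (1.0 - frag_alpha)
    return color

near = (1.0, 0.5, 1.0)   # bright semi-transparent surface close to the camera
far = (0.5, 0.5, 2.0)    # darker semi-transparent surface behind it

print(render_blended([far, near]))  # 0.625 -- correct back-to-front order
print(render_blended([near, far]))  # 0.5   -- wrong order: the "alpha glitch"
print(render_masked([far, near]) == render_masked([near, far]))  # True
```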

Edited by OptimoMaximo

18 minutes ago, ChinRey said:

Do other graphics APIs handle this better?

Those that can handle it are too costly in terms of resources to implement in a realtime environment. The same goes for the different algorithms implemented over the years: the Maya viewport, for example, runs on OpenGL but offers an alpha-sorting option called depth peeling. Alpha Z-depth fighting vanishes, but it only runs well within the scene itself and, based on the on-screen size of transparent objects, it falls back to masking when the camera is farther away. You wouldn't tell that happens from the looks of things, but again, for a game environment that's not a feasible implementation (or at least not for the moment).
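Depth peeling can be sketched per pixel: each pass keeps only the nearest fragment strictly behind the previously peeled layer and composites front-to-back with the "under" operator, so unsorted transparent fragments still resolve in correct depth order. A minimal Python illustration (not Maya's actual implementation):

```python
# Per-pixel depth peeling sketch: fragments are (color, alpha, depth) tuples
# in arbitrary submission order; each "pass" peels the nearest unpeeled layer.

def depth_peel(fragments, bg=0.0):
    color, alpha = 0.0, 0.0
    last_depth = float("-inf")
    while True:
        # On a GPU this selection uses two depth tests per pass; here we
        # just pick the nearest fragment strictly behind the last peel.
        behind = [f for f in fragments if f[2] > last_depth]
        if not behind:
            break
        c, a, last_depth = min(behind, key=lambda f: f[2])
        # Front-to-back "under" compositing.
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
    return color + (1.0 - alpha) * bg

near = (1.0, 0.5, 1.0)
far = (0.5, 0.5, 2.0)
# Same answer regardless of submission order: no alpha glitch.
print(depth_peel([near, far]), depth_peel([far, near]))  # 0.625 0.625
```

The cost is the loop: one full geometry pass per transparency layer, which is why it is fine in a modeling viewport but too expensive for a game scene.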


To get this effect, once the product is finalized you scale the duplicate up ever so slightly (a fattening/inflate scale) and set the transparency of the second copy to something like 50 or 60, or whatever looks good. You can do this in-world. With that in mind, since you are doubling the mesh, keep the base mesh's triangle count reasonable. Some designers use FiberMesh from ZBrush, but the polycounts get way out of hand really fast; the 600k-poly sweaters people have been wearing lately are one example.
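The in-world scale trick works best on roughly convex shapes; doing the same thing in a mesh tool usually means pushing the duplicate's vertices out along their normals instead. A hypothetical sketch of that step (the function name and data layout are made up for illustration):

```python
# Hypothetical sketch of building the "fuzz shell" in a mesh tool: duplicate
# the mesh, then offset each vertex a few millimeters along its unit normal.

def inflate(vertices, normals, distance=0.005):
    """Return a copy of `vertices` pushed out along the matching unit
    normals by `distance` (in the mesh's units, e.g. meters)."""
    return [
        (vx + nx * distance, vy + ny * distance, vz + nz * distance)
        for (vx, vy, vz), (nx, ny, nz) in zip(vertices, normals)
    ]

# One vertex on the +X face of a unit cube, pushed 5 mm outward:
shell = inflate([(1.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)])
print(shell)  # [(1.005, 0.0, 0.0)]
```

Offsetting along normals keeps the shell a uniform distance from the surface everywhere, which a plain scale cannot do on concave shapes.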

Edited by imacrabpinch

A fair reminder: animesh has a hard limit of 100k triangles for the entire animesh avatar. Now that this feature is out, it's a fair expectation that people are going to try to use your products to compose animesh characters.

Edited by Kyrah Abattoir

