RohanaRaven Zerbino

Animation facts everyone should know


... from my point of view 🤔

Subject of Animation ...

... are bones and bones only!

The system of bones we use for animating is called a "rig".

The rig in Second Life is invisible unless you make it visible via Advanced/Character/Show Avatar Joints (the menu path may vary depending on the viewer).

If you are curious, go and check it - it will show all the bones of the SL avatar rig.

[Image: Texture_Rig.jpg]

 

Animation & Body Shape


There is no animation in Second Life that will fit just every shape perfectly.

When we are talking about a Second Life body shape, we are talking about body proportions more than just height. So it is quite possible that an animation will work well on petite as well as giant avatars, if those shapes have body proportions similar to the shape used as a model for making the animation.

Along with arm length, shoulder width has a great impact on arm position, and that is often overlooked.

In the picture on the right, the Shoulders value is 100 (not realistic for a female avatar); pay attention to the arm position:

[Image: Texture_ShoulderImpact.jpg]

 

The same goes for the relation between Hips and Legs:

[Image: Texture_HipImpact.jpg]

 

In Second Life there is no script that can temporarily adjust the avatar's body shape along with an animation, nor (as far as I know) any scripting options that would allow such a script to be made.

The only solution for a perfect animation setup is to wear the model shape the animator used for the animation, or to put in some effort and adjust your personal shape to match the animator's. As there is an option to Replace Outfit very quickly from your inventory, some situations are worth that effort.

But if the creator of the furniture has chosen to use animations from different animators, then there is no other solution but to accept some animation imperfections caused by body shape/proportion mismatch.

Also, a human animation will not work on a four-legged animal shape - for those you will have to look for specialized animations.

 

Animation & Hover Height


Hover Height is a quick trick to adjust your avatar's distance from the ground or some other object. Ergo, it will also impact the animation position in your furniture or HUD, and you might find yourself very much displaced from what should be your normal position. Use it wisely, and try to remember to check it before you call the creator.

Picture on the left: both avatars are at 0 hover height; picture on the right: the female avatar is at -0.15:

[Image: Texture_Hoverhtimpact.jpg]

 

Animations and Bento Heads

A Bento head rig greatly depends on the rigger's skills and her/his awareness of the rig's impact on facial animations.

I've seen head rigs so terribly displaced that animations can be made for that one head only and will not work properly on any other head.

Or, if you prefer: an animation made for some other head might make you look like a clown from your worst nightmare.

That is the reason why animators test on different heads, and most of the time the product description will state which head brands the animation works properly on.

I cannot emphasize enough the need for testing when it comes to head animations. It is pointless to write a review like "it doesn't work on such-and-such head" and leave a low rating if your head model is not on the list of tested heads at all!

Even if it is stated that the animation has been tested with your head model and works properly, if you have changed the shape of the head yourself, you should test to see if it really fits you.

Another thing is when your tongue from some "tongue out" animation or pose gets stuck out of your mouth in every animation you play after it. It looks like some heads are made to reset to the default position, bringing the tongue back inside the mouth, while some are not. How this is done is not clear to me, but again: try before you buy, as it might be a head problem rather than an animation problem.

 

Animation & Shoes

A shoe base does impact the body's overall height, causing displacement of the avatar center. For individual animations this is a negligible change, but for interactive animations (couples, for example) it will cause displacement in the avatars' mutual position.

So, if you love to wear high heels all the time, ask your partner to adjust his Avatar Height accordingly.

Difference between flats (on the left) and high heels (on the right):

[Image: Texture_Highheelsbaseimpact.jpg]

 

Animation & Broken Ankle

A broken ankle is caused by rotation of the feet in an animation. As a high heel base adds more rotation, the ankle joint gets the "broken" appearance.

Feet are joints that have to be animated, and they cannot be excluded from the animation process.

There are different approaches to this problem, depending on the animator's choice:

- the animator may choose to work with minimal rotation of the foot, with the presumption that high heels will be worn. Used with bare feet, those animations will often look weird when it comes to foot position.

- the animator may choose to rotate the feet according to the animation (my personal choice) and leave the decision to the end user whether to wear shoes or not. But without a Broken Ankle Fix, those animations will look unnatural when used with high heels.

A Broken Ankle Fix will fix the feet and block all feet animations, often causing less realism in some animations. As it is in fact an animation with an added script to play it on the avatar, it has to have a higher priority than the main animation.
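Since the fix is itself an animation competing with the main one, the priority rule can be sketched like this (a toy model in Python, not SL code; the animation names and priority numbers are invented for illustration):

```python
# Toy model of SL animation priority as described above: per joint, the
# playing animation with the highest priority wins. A "Broken Ankle Fix"
# is itself an animation, so it must outrank the main animation on the
# ankle joints in order to hold them still.
def winning_anim(anims, joint):
    """anims: list of (name, priority, affected_joints) for playing anims."""
    candidates = [(prio, name) for name, prio, joints in anims if joint in joints]
    return max(candidates)[1] if candidates else None

# Hypothetical set of currently playing animations
playing = [
    ("dance",            3, {"mHipLeft", "mKneeLeft", "mAnkleLeft"}),
    ("broken_ankle_fix", 4, {"mAnkleLeft", "mAnkleRight"}),
]

print(winning_anim(playing, "mAnkleLeft"))  # broken_ankle_fix
print(winning_anim(playing, "mKneeLeft"))   # dance
```

The fix only wins on the joints it animates; everywhere else the main animation plays untouched, which is why the rest of the pose still looks right.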

[Image: Texture_BrokenAnkleFix.jpg]

 

Animations & Bento

Regardless of whether you use Bento body parts or not, Bento animations will not play on a non-Bento viewer. If you haven't already, be sure to update your favorite viewer.

 

Animations & Spread Fingers on System hands

If you still have system hands, you may expect some Bento animations to play on you with the fingers in a spread position.

There is a trick that every animator should apply during animation export to avoid the spread position of fingers on system avatars.

If you see such an animation, contact the animation creator - we are just humans, and that step is easily overlooked.

 


You forgot the important reminder for your fellow animators... transposing vs rotating animations...

I began previewing animations with a bladencat avatar... if my arms or legs elongate, I consider the animations crap... and you would not believe how many of your competitors lack that simple knowledge - it also might help customers understand which products to avoid.

Edited by Fionalein
On 1/11/2019 at 7:40 PM, Fionalein said:

transposing vs rotating animations...

Actually, no, there is no "vs" ...

But, before I go any further and in order to define my field of experience, it is crucial to specify my tools: I am using Blender with Avastar add-on, exporting in .anim format.

Back to "vs".

There are 1+8+2 body bones that have to be keyframed with location data:

- COG, which defines the position of the avatar body in space
- 8 IK controllers: ikWristLeft, ikElbowTargetLeft, ikWristRight, ikElbowTargetRight, ikHeelLeft,  ikKneeTargetLeft, ikHeelRight, ikKneeTargetRight
- ikFootPivotLeft and ikFootPivotRight, if used in a location-changing manner

But even though they are keyframed with location data, that doesn't mean they should be exported with location data!
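As a quick reference, the 1+8+2 list above can be written out as data (a plain Python sketch, not Avastar code; the bone names follow the Avastar control-rig naming used in this post):

```python
# The 1+8+2 control bones that get location keyframes in Blender,
# per the list above. All other body bones are rotation-only.
COG = ["COG"]  # defines the position of the avatar body in space

IK_CONTROLLERS = [
    "ikWristLeft", "ikElbowTargetLeft",
    "ikWristRight", "ikElbowTargetRight",
    "ikHeelLeft", "ikKneeTargetLeft",
    "ikHeelRight", "ikKneeTargetRight",
]

FOOT_PIVOTS = ["ikFootPivotLeft", "ikFootPivotRight"]  # only if moved

LOCATION_KEYFRAMED = COG + IK_CONTROLLERS + FOOT_PIVOTS

def needs_location_keyframes(bone_name):
    """True if this control bone is keyframed with location data."""
    return bone_name in LOCATION_KEYFRAMED

print(len(LOCATION_KEYFRAMED))  # 1 + 8 + 2 = 11
```

Keyframing location on these eleven controllers is a fact of the animation workflow; whether location data ends up in the exported file is a separate decision, as explained next.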

The beauty of Avastar is that its exporter converts all those values to be SL-compatible, unless you check this field:

 

[Image: Use Translations checkbox]

 

Before LL opened the grid for translations, the mistake would be clear: a mesh body would turn into a bunch of spikes (not the system avatar - a painful personal experience) and you would know you screwed something up. It is not like that nowadays, but I reckon using translations would be visible on custom-made rigs. I don't see any deformations on the Maitreya body I am using, but I wouldn't bet some other body wouldn't look weird.

So, yes, ticking the "Use Translations" box when exporting body animations is not a very good idea, nor is it needed in any respect.

BUT!

Now we come to Bento heads!

I am not done with all my experiments, but so far, from what I've seen, there is only one bone in the head that requires location data to be keyframed AND exported as such in order to work, and that is the tongue. But only in "tongue out" animation variations.

Bento heads are the reason why LL opened the grid for using translations, but they have to be used very carefully and wisely. If applied to the body, they can make a mess. When applied to the head, they can probably add to the realism of the movements. It is just like "free will" - it can be both used and abused.

Edited by RohanaRaven Zerbino

2 hours ago, RohanaRaven Zerbino said:

Before LL opened grid for translations, the mistake would be clear: mesh body would turn into bunch of spikes (not system avatar - painfull personal experience) and you would know you screwed up something. It is not like that now days, but I reckon using translations would be visible in custom made rigs. I don't see any deformations on Maitreya body I am using, but I wouldn't bet some other body wouldn't look weird.

To me it even shows on the Maitreya. For example, as a change in body height, because my avatar was twisted to the shape dimensions of the avatar the animation was designed on - because someone didn't take as much care about that detail. So far I have encountered it in animated Bento body and head gestures, a horse avatar riding system, a horse avatar's walking animations, a vehicle's sit animation, and dances. And not only by some people just dabbling in animations, but sadly by some big names as well. It is actually quite common. That is why I started testing stuff with a small avatar. With a big avatar the changes can be so subtle you miss them... (for example, I did with the horse riding system until my friend tried using it with a kid-sized avatar).

6 hours ago, RohanaRaven Zerbino said:

Actually, no, there is no "vs" ...

But, before I go any further and in order to define my field of experience, it is crucial to specify my tools: I am using Blender with Avastar add-on, exporting in .anim format.

Back to "vs".

There are 1+8+2 body bones that have to be key framed with location data:

- COG that defines position of avatar body in space
- 8 IK controllers: ikWristLeft, ikElbowTargetLeft, ikWristRight, ikElbowTargetRight, ikHeelLeft,  ikKneeTargetLeft, ikHeelRight, ikKneeTargetRight
- ikFootPivotLeft and ikFootPivotRight if used in location changing manner

But even they are keyframed with location data, that doesn't mean they should be exported with location data!

A little imprecise. Animations actually DO have both translation and rotation export modes, depending on their use.

Your point here assumes that the control rig's bones are being exported, while in reality the SL skeleton bones controlled by the animation rig are sampled upon export. Especially those used for IK: those controllers are effectors for an IK solver, which solves the rotation of the controlled bones to fit the IK controllers' transforms. Therefore the keyframed transforms on those effectors do not reflect the actual bones' channel data that the export collects from the solver, even when translation is enabled (because there is no translation channel).

4 hours ago, Fionalein said:

To me it even shows on the Maitreya

For sure it does. That comes from the buggy method the BVH reader/translator uses to handle the file upon import to SL when location data is enabled during export, writing down the joint positions of the whole hierarchy (unavoidable, as that is key in the BVH file syntax). .anim files are more effective, since you actually export only the animated bones (but the problem still occurs on those that are exported).
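The difference can be sketched roughly like this (a toy Python model, not the actual SL importer; the skeleton list is abbreviated for illustration):

```python
# Rough illustration of the point above: a BVH file's header must list
# rest data for the WHOLE joint hierarchy, while an .anim file carries
# data only for the joints that were actually animated. Joint names are
# a small subset of the SL skeleton, for brevity.
SKELETON = ["mPelvis", "mTorso", "mChest", "mNeck", "mHead",
            "mShoulderLeft", "mElbowLeft", "mWristLeft"]

def joints_written(fmt, animated):
    """Joints whose data ends up in the exported file for each format."""
    if fmt == "bvh":
        return list(SKELETON)                          # full hierarchy
    if fmt == "anim":
        return [j for j in SKELETON if j in animated]  # animated only
    raise ValueError("unknown format: " + fmt)

animated = {"mShoulderLeft", "mElbowLeft"}
print(joints_written("bvh", animated))   # all 8 joints
print(joints_written("anim", animated))  # only the 2 animated joints
```

So when translation import goes wrong, a BVH upload can disturb every joint in the hierarchy, while an .anim upload limits the damage to the joints it actually contains.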

10 hours ago, Fionalein said:

To me it even shows on the Maitreya.

Maitreya is Maitreya; on you or me, it will always behave the same regardless of the shape difference. So it can be taken as a "constant", in the sense that if an animation is deforming your body, it would deform mine too, and vice versa.

That leaves us with 3 variables:

- The program used for animations - it can be anything: QAvimator, Poser, Daz, Maya, Blender... who knows what else. Blender has Avastar with the SL rig included, Maya should have the same with Mayastar, I never opened the new Bento QAvimator, and who knows what people using Poser and Daz are using as an SL rig

- The exporter and export settings used to bring animations into SL - again, the Avastar exporter has been made specifically for SL requirements, Mayastar probably the same, the rest is beyond me

- The animation format

I know nothing about animal avatar animations, except that they require a custom-made rig, and, in the old days, they would snap the animal shape back to human when played with human animations.

All I can suggest is to open a thread and summon the two gods of multiple fields (programs/rigging/animation/technical/mocap/mesh/texturing) in Second Life: Medhue Simoni and OptimoMaximo. If they cannot answer your questions, no one can.

5 hours ago, OptimoMaximo said:

Your point here is assuming that the control rig's bones are being exported

With all the risk of sounding like a 5-year-old kindergarten kid in comparison to you 🤣, I have to explain:

No, that was not my point at all. Here we are talking about the animation process itself, and about what I consider to be a misunderstood concept of translation keyframing that could be summed up as "don't use translations at all in the animation process, use rotations only". The list of bones I gave is simply an example of bones that have to be location-keyframed or they would be useless. I was simply pointing out the need for location keyframing.

Of course what we import into SL is not raw data simply copied from the animation file, and that is why I appreciate the Avastar exporter: I don't have to worry about anything except making sure I didn't tick the "Use Translations" or "Apply Armature Scale" checkbox. That way I can concentrate on the animation itself and not on the technical details of the export. The developers of Avastar (you included) have made us lazy!

The fact is that explicitly telling the exporter to use translations changes the way it converts animation data for use in Second Life, and I am afraid animators use it more often than necessary simply because they keyframed location data in the file. It might sound logical: if one has already keyframed location, it should be exported as such, right? Well... very much wrong, and you know how and why better than me.

As I said before, the only bone I've discovered so far that does not work without "Use Translations" is the tongue bone, and only in "tongue out" animation variations. Therefore, only face animations with "tongue out" variations should be exported that way, with great attention to what data the other bones hold. But at this point I cannot claim anything, as I am still exploring the head bones and how they behave in-world.

But what I am sure of is: if head animations do require translation export as such, then head and body animations should be, even more, must be exported separately.

You are maybe the only person who can explain the technical details of both export processes (with and without the "Use Translations" checkbox) in a way we can understand, and I would like to kindly ask you to do so. Also, please name both of them somehow; I'm sick and tired of waving my hands like Tarzan in front of the screen while trying to explain the difference. 🤣

 

And a totally personal note to you from me: thank you for every word you write on this forum and the knowledge you share ❤️

14 hours ago, RohanaRaven Zerbino said:

But what I am sure of is: if head animations do require translation export as such, then head and body animations should be, even more, must be exported separately.

Sorry for getting you wrong previously. Indeed, the two things must be kept separate. The ways the data is sampled and calculated simply exclude each other in the process, if we want to keep things functional.

First of all, let's clarify what translation animation is supposed to handle. Joint repositioning during an animation was initially intended only as a means for "shape shifting", literally. That's why originally all animations could not include position data except for the hip joint (obviously). That's also why mesh import gives us an option for Joint Position: you want to keep the custom positioning this avatar was built on, to animate it with rotation data only, since the joint positions are assumed to be embedded in the mesh. With the advent of the .anim file format, more bones could be exported with translation data that had a meaningful use with no avatar disruption/destruction, for instance the attachment points or some collision volume bones to emulate muscle bulging.

I intentionally did not use the term "shape", in favor of the more generic "custom positioning", as this may be a little confusing: for an SL character, a shape is a set of scale transforms applied to specific sets of joints to give us the in-world shaping capabilities we all know. The default male and female have, for example, some scale applied on the local Y axis of their arms. And that's supposed to be the default. In Blender+Avastar, you can see that on the female avatar when you switch from the default shape to neutral (the male is another handful of issues, with some collision volume bones positioned differently from the female). Neutral is the original shape with all scale transforms set to 1, and the character is way smaller and squashed. However, Blender doesn't support bind poses, and meshes can be rigged only to neutral-transform bones. All the compatibility adjustments for SL are handled via script when exporting, so you can't directly see any scaling on the bones within the interface.

Because of this, scale animation cannot be imported or set (except for the hip bone, but that's another story...), so here is where the pain begins with translation-based animations.

Let's leave out the custom joint positions some meshes might have. Just figure that you want a specific shape to animate with (and the default avatar IS a shape; male or female doesn't matter). All of the joint locations, visually, are somewhere in space as a result of their scale, and all the children bones inherit that shifting in a cascading fashion, because many have their own scale value. However, each transform actually has its own pivot point, and since what is being sampled is only position and rotation, you may agree that rotation is not an issue (how many degrees has a joint rotated?), but when it comes to position, what are we going to sample? Each transform has its own pivot point, and we said that the scale factor of each joint determines where in space this joint LOOKS to be, but is that its true location? When you then export a BVH file, the joints' T-pose raw data is written in the file's header without accounting for the scale (which is not handled at all by the importer, even if you include it), and only then do you get the array of values over time that makes up the animation. Somehow the conversion from BVH to the internal .anim establishes the position for all joints, including those that aren't animated, and as a result of animation playback you get all of them snapping to the Neutral shape's absolute location.

With the .anim format you get the same or a similar result, only applied to the joints that were actually animated and not to all of them. In my plug-in for Maya I partially circumvented this issue by freezing the export skeleton's transformations in place where the shape had the joints moved to; however, this doesn't make it exempt from undesirable shape shifting. It is just a more subtle visual effect, assuming that the final user is starting from the same shape I used (or using this method on custom joint positions; when using a default human avatar skeleton, shifting occurs anyway).

But then Bento came into play and things got even worse, especially when heads and fingers are added to the equation. LL has done some automated fixing in-world, so there's not really much to worry about, BUT...

The way a joint position is calculated is not as the distance from the parent joint (yay! 🤦‍♂️); rather, it is the joint's distance from the center of the scene (so as to keep each joint's data independent in case one or more of its parents are missing in the file, and to avoid cascades of data only used for reference). The automated calculations are then based off the neutral shape (remember? no scaling on any joint, so no scale-induced repositioning in space), then the shape values are applied, and finally the animation data is transposed by the accumulated shifting that occurs across all the involved bones. Therefore the greatest precision in animation can be achieved by animating on a head (or fingers) that was made and rigged on and for the neutral shape, when location data needs to be included.
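The accumulated shifting can be illustrated with a toy calculation (invented offsets and scale values, one axis only; a sketch of the principle, not SL's actual math):

```python
# Toy sketch of "accumulated shifting": a joint's visual location is the
# sum of the local offsets along its parent chain, each multiplied by
# that parent's scale, so shape scaling moves every joint downstream.
def world_position(offsets, scales):
    """Accumulate local offsets, each scaled by its joint's scale factor."""
    pos = 0.0
    for off, s in zip(offsets, scales):
        pos += off * s
    return pos

# Hypothetical chain shoulder -> elbow -> wrist, offsets along one axis
offsets = [0.2, 0.25, 0.22]

neutral = world_position(offsets, [1.0, 1.0, 1.0])  # neutral shape: scale 1
shaped  = world_position(offsets, [1.1, 1.2, 1.0])  # shape sliders applied

print(round(neutral, 2))  # 0.67
print(round(shaped, 2))   # 0.74 -> the wrist sits somewhere else in space
```

On the neutral shape all scales are 1, so the keyframed positions and the visual positions agree; once shape values are applied, every downstream joint has shifted, which is exactly the gap the in-world transposition has to compensate for.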

With all this said, it should now be clear why location animation should be kept separate from the general (rotation-only) animation. Of course it is easier to make the whole animation with face and fingers together, and it may also work well on a custom (joint position) avatar. However, since the majority of users rely on default-compliant avatars with shape capabilities, this becomes a mess to handle and results in what @Fionalein (rightfully) complains about. Being able to export everything (location and rotation) in one file doesn't necessarily mean that one should, for many reasons. From a technical standpoint, it can result in unwanted behaviors (Fiona's case). From an artistic standpoint, it "freezes" a body animation to a specific set of face expressions. For hands and fingers it is not a big problem, as fingers usually do not dislocate or stretch, and location data isn't even necessary since the joint positions are defined in the mesh hands. From a marketing standpoint, you can offer a wider range of customization by diversifying the animations that can play together (if you keep them separated). All the issues arise from the face rig and the need to use location data both in the mesh (head and facial features require it) and in the animation (because of the single-joint system in use, which requires moving a joint to get a certain mesh surface deformation). And indeed, moving the joints with animation gives "shape shifting" as a result, from every standpoint. So, if you wish to get a "naming convention" to define the two things, we can pretty much use this: joint translation animation can be "shape shifting animation" (when it occurs on joints with weights on the avatar mesh), as opposed to "character animation" when it is intended to animate a shape-defined character.


Finally! Thank you!! 🤩 Information, definitions, explanations and terminology - what we need like the air we breathe in the animation field!

Animation is the most mysterious everyday aspect of Second Life, and the fact is that the information you have just shared with us cannot be found, or is extremely hard to find. I do believe that might be one of the reasons why people just dive into it without enough info or experimenting. It is reaching the point where some people think it is enough to have a Kinect to be able to produce SL animations. 🤨

Even though your post greatly extends beyond the title of this thread, I cannot express how grateful I am that you wrote it, as this is something that every animator should know.

 

2 hours ago, RohanaRaven Zerbino said:

Finally! Thank you!!

You're most welcome :) hopefully things are a bit clearer now.

2 hours ago, RohanaRaven Zerbino said:

terminology

Something to expand on a little bit more, I think. Reading your initial post:

On 1/11/2019 at 5:33 PM, RohanaRaven Zerbino said:

Subject of Animation ...

... are bones and bones only!

While this seems to be the case for what we can see displayed on screen, it's quite a bit misleading (not what you say, the concept itself).

What makes a human being articulated? Not the bones. If we had one single solid bone holding up our shape, we couldn't move. The pivotal point (in all of its possible meanings) is the joint structure. The SL preview that you rightly show in the OP is also based off joints. What we perceive as a "bone" is, actually, a visual representation of the non-unit vector magnitude that leads to the next joint in the hierarchy. Every joint has its own local position relative to its parent (a vector), and the bone is nothing more than a line (the vector's magnitude) representing that value. This is why we have a few problems in SL when it comes to animation. There is too much entanglement among the three matrix types that govern a skeleton (location, rotation and scale matrices), something that is never done in game development.
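The idea that a "bone" is just the vector between a joint and its child can be shown in a few lines (a generic sketch with made-up joint positions, not SL data):

```python
# What we draw as a "bone" is the vector from a joint to its child;
# the bone's length is simply that vector's magnitude.
import math

def bone_vector(parent, child):
    """Vector from a parent joint's position to its child's position."""
    return tuple(c - p for p, c in zip(parent, child))

def bone_length(parent, child):
    """Magnitude of the parent-to-child vector (the 'bone' we see)."""
    return math.sqrt(sum(d * d for d in bone_vector(parent, child)))

shoulder = (0.0, 0.0, 1.4)  # made-up joint positions in meters
elbow    = (0.0, 0.3, 1.4)

print(bone_vector(shoulder, elbow))  # (0.0, 0.3, 0.0)
print(bone_length(shoulder, elbow))  # ~0.3
```

Nothing about the "bone" exists on its own: move either joint and the vector (and thus the drawn bone) changes, which is why animation ultimately operates on joints, not bones.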

Mixing rotation and scale matrices on the same bone is a bad idea per se, but in animation we also have an interaction with the location matrix at the same time. Indeed, that's why we have collision volume bones for shaping the volume of our mesh avatars while location is handled via the main joints' scale: keep the two matrices separated as much as possible by working on different joints. The system avatar uses a different method, since its data is shared across users locally, using blend shapes (Blender's shape keys) for the volumes through the viewer, and distance between joints through the skeleton. The volume shapes were defined using the collision volume bones and turned into vertex-animation-based morphs that were/are streamed to the viewers as local data; the collision volume bones were disabled for that feature but kept in the skeleton for realtime IK in-world (theoretically, since this feature is broken except for the default animations).

With fitted mesh, LL needed to re-enable those collision volume bones in order to give rigged meshes those fitting capabilities. So you can see that collision volumes act within the scale matrix, while the regular joints mostly work within the rotation matrix, with one axis's scale used to alter the location of its child joint(s). That's the reason for the slight imperfections we see on tight fitted clothing that fits almost everyone's shape (alpha masking ftw!). Add a location matrix, and we get a matrix math massacre party, where one matrix (usually the location) disagrees and fights with another (usually the scale, because it tries to emulate the positioning of joints via scale).
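A minimal 2D sketch of why mixing these transforms is touchy (generic vector math, not SL's actual pipeline): with a non-uniform scale, scale-then-rotate and rotate-then-scale send the same point to different places, so the order in which the matrices combine genuinely matters.

```python
# With non-uniform scale, transform order changes the result:
# rotating a scaled point is not the same as scaling a rotated one.
import math

def rotate(p, deg):
    """Rotate a 2D point counterclockwise by `deg` degrees."""
    a = math.radians(deg)
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def scale(p, sx, sy):
    """Scale a 2D point non-uniformly."""
    return (p[0] * sx, p[1] * sy)

p = (1.0, 0.0)

a = rotate(scale(p, 2.0, 1.0), 90)  # scale first, then rotate
b = scale(rotate(p, 90), 2.0, 1.0)  # rotate first, then scale

print([round(v, 6) for v in a])  # [0.0, 2.0]
print([round(v, 6) for v in b])  # [0.0, 1.0]
```

The two orders disagree by a factor of 2 here; in a skeleton where one bone's scale repositions its children while an animation feeds in location data of its own, the same kind of disagreement is what produces the distortions discussed above.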


Eh... removing my post because this has turned into technical stuff that, though I understand it all, doesn't interest me in SL space. :)

Edited by Alyona Su
