Linden Lab

Project Bento Feedback Thread


I've been in SL for eight years and sell a large amount of clothing and automatons.  But the business depends on new people joining SL and this simply isn't happening nowadays.  Indeed the historical roleplay communities I tend to focus on have almost died out.



Oz Linden wrote:

 

I've seen a number of posts here that include some variation on "we have always had to do XYZ this way because of the SOMETHING bug, and so we can't do SO-AND-SO" (for example, joint offsets not loading correctly). If there are existing issues that are directly related to Bento (like joint offsets not loading correctly), we'd like to get them fixed so that we can get some of these obstacles out of the way. So, if you've got one, please describe it (see previous paragraph - concrete examples we can experiment with) by filing a BUG in Jira (put [bento] in the Summary). References to long-standing issues are ok; we're not only trying to do new things, we're trying fix at least some old ones too.

Oh really Oz! Let's test you on this notion.

https://jira.secondlife.com/browse/SH-2550

This bug is over 8 years old now. It affects EVERY SINGLE looping animation in SL. On human avatars standing upright you won't notice it, but with anything outside of that, the bug is plainly evident. It makes it impossible to blend between two looping animations, as the bug happens every time the animation initially starts. Actually, it happens every time the animation loops, but we can route around the first frame when we set the loop settings. This bug is especially problematic on four-legged avatars, and the only way to quiet the problem is to keep another animation playing over top of everything else. Even that, though, doesn't stop the problem completely.

Yeah, Bento is nice, but it doesn't make the quality of animation in SL better. LL is simply building on top of their crippling bugs. Before this bug, by blending animations together, I was able to create a pretty realistic cat AO, and that was 9 years ago. Today, that is really not possible. I just created a set of wolves for the Unity platform, and I created most of their movements from real wolf videos. I could easily convert those animations over to the SL rig, but what is the point if the avatar is going to jump in between the animations and I have to use tricks to even get it to work?

Allow me to point out again, as I have done many times now, that SL has probably the most amazing animation system out there. Even Unity's Mecanim system can't really hold up to what animators can do in SL. The bugs, though, make it junk. If you want quality, Oz, then you need to address the bugs when they happen, not wait 8 years, if you even fix this one, which I have my doubts about. This bug has cost LL major income, and it frustrates me that LL doesn't understand that.


 

So sorry if I missed this earlier in the thread, but since there's a meeting coming up, maybe it would be a good idea to reiterate this question so the meeting time can be used effectively:

What are the motivations from the company's perspective to not implement bone translation, if any?


I created a JIRA post~  "Bento~ The Yawning Problem"

I set the post to Private, as I don't particularly want to deal with DMCA silliness.  But it's been posted.

 

As a summary for the people here: the file I posted is the one used to generate this animation, The Yawning Problem.



The problem itself deals with floating the corners of the mouth to solve for two cases: a gaping yawn, and the rest of the regular expressions a face will generate.

The problem is~ I'm struggling to find a joint location for mFaceCornerRight and mFaceCornerLeft that will animate both a wide yawn~ and a smile.   This is a very specific example that highlights the problems with a fixed-point, rotation-based rig.  Not only is it tedious and difficult to use~ but the designer is forced to choose between making the character able to yawn in a convincing manner ~ or making it able to smile in a non-creepy fashion.  It's a blatant either-or design choice that shouldn't have to be made to begin with.


The discussion on animating joint positions produced some good examples of use cases where this type of animation is helpful or even required. Based on the feedback to date, we will be re-enabling animation of joint positions on aditi. We are looking forward to seeing what you all can do with the additional capabilities. As before, nothing is final until it goes to the main grid, and as always, please test things, break things, and tell us about any problems you find. Thanks to everyone who participated in the discussion.

So that’s the short answer: if you have been waiting to try to upload animations including joint positions, you can do that now. However, the issues that influenced us to disable such animations in the first place are still present, and in the remainder of the post I will discuss those issues in a bit more detail, along with what implications this may have for development of Bento during this test period. What are the problems associated with animating joint positions? The main ones are:

1. Avatar distortions. Animation of positions (except for the special case of the pelvis) was never designed into the product and is not supported as well as we would like for a supported feature. In particular, it is easy to get the avatar into a distorted shape by running such animations, which can require relogging to fix. This is not a good user experience, and one thing we will be investigating during the test period is whether we can improve this behavior, providing ways to re-initialize joint positions in a predictable way. This will likely require delving into some complex corners of our code, and could affect our schedule if it turns out to be feasible at all.

2. Bypassing the skeleton hierarchy. Given the ability to move joints to arbitrary position, it is possible to use animations to put a mesh into basically any desired shape. This has generated some impressive and creative work, but since it bypasses the intended bone hierarchy, it requires much more complex animations. For example, consider the extreme case where there is no defined hierarchy at all, just a collection of unrelated joints that can be animated independently. To do a “touch your toes” animation, you would have to move every joint of the upper body at every keyframe. With a conventional skeleton, you would have to bend at the waist and shoulders, and the rest of the bones could just follow along based on the skeletal hierarchy. So this means more complex animations, which require more bandwidth to send, and more work to run in the viewer. Result: more lag for many users. Bypassing the hierarchy is especially of concern during this test period, because we are trying to come up with the best set of bones for the extended skeleton. If a large percentage of testers are working with arbitrarily free-floating bones, then they are not actually testing the skeleton as designed, and are not giving us useful feedback. The implication here is that if you are working on a project that seems to require such hierarchy-ignoring free-floating bones, we ask that you (a) do this with as few bones as possible, for efficiency reasons, and (b) let us know about the issues you are encountering so that we can potentially update the skeleton to support those use cases.

3. Scaling issues. One advantage of rotation-only transformations is that they work independently of the size of the avatar. For example, you could make a rotation-based dance animation that worked equally well for a tiny, a normal-sized human, and a giant. But what if you wanted to make a smile animation that included translations, and use it for all those avatars? The magnitude of the translations would be appropriate for only one size of avatar, and would generate effects far too large or too small for other sizes. (This is actually a somewhat more complex issue, since it depends on whether we’re talking about changing the avatar size via sliders that affect the scale of some bones, or via bone position overrides in the mesh - these two mechanisms do not combine with animated positions in the same way). During the test period, we will do some investigation into whether we can improve on this behavior, but at this point it is by no means guaranteed that we will have a fix before Bento goes live, or ever. This means that animations that include position changes may always be tied to particular avatar shapes and sizes, in a way that rotation-based animations are not. It will be important for content creators to be aware of this.


Overall, while we would like to address all the outlined issues and go live with animating bone positions enabled and all the bugs fixed, it may prove impossible to do in the time we have. We look forward to a productive investigation period on this issue. To keep things actionable and efficient, please make sure comments on the subject of bone position animations include reference to specific examples on aditi.  And of course we are still looking forward to your feedback on other Bento-specific topics as well!
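Vir's "touch your toes" point can be illustrated with a toy forward-kinematics sketch. This is entirely hypothetical code of my own (simplified to 2D, with made-up joint names), not anything from the viewer: in a hierarchy, rotating one parent joint carries every descendant along, so only the bending joints need keyframes.

```python
import math

class Joint:
    def __init__(self, name, length, children=()):
        self.name = name
        self.length = length          # distance to the joint's end, in metres
        self.rotation = 0.0           # local rotation in radians (2D for brevity)
        self.children = list(children)

    def world_positions(self, origin=(0.0, 0.0), parent_angle=0.0):
        """Accumulate parent rotations down the chain (forward kinematics)."""
        angle = parent_angle + self.rotation
        end = (origin[0] + self.length * math.cos(angle),
               origin[1] + self.length * math.sin(angle))
        positions = {self.name: end}
        for child in self.children:
            positions.update(child.world_positions(end, angle))
        return positions

# A toy spine: pelvis -> torso -> chest -> head.
head = Joint("head", 0.2)
chest = Joint("chest", 0.3, [head])
torso = Joint("torso", 0.3, [chest])
pelvis = Joint("pelvis", 0.0, [torso])

# Keyframe only the torso joint; chest and head follow automatically.
torso.rotation = math.radians(-90)
pose = pelvis.world_positions()
print(round(pose["head"][0], 6), round(pose["head"][1], 6))  # 0.0 -0.8
```

With free-floating joints there is no accumulated parent angle, so the same pose would need explicit position keyframes on the chest and head as well.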



Vir Linden wrote:

 To do a “touch your toes” animation, you would have to move every joint of the upper body at every keyframe. With a conventional skeleton, you would have to bend at the waist and shoulders, and the rest of the bones could just follow along based on the skeletal hierarchy.


I believe this is a false assumption. You can never animate a "touch your toes" pose while leaving any joints untouched/unanimated with the conventional skeleton. It leaves those joints "free", and they will still be animated, but by the default SL animations. This is a very common beginner mistake when people make poses: "I made a sit, but my foot/head/arm is twitching/moving". Every joint has to be moved to become locked in its position, unless it's planned to be animated by another animation; either way, no joints are ever left unanimated, at least on the current body.

I believe the exceptions are very few, like leaving the neck or head free to move so the avatar's head follows the camera. Most of the time, though, it's undesired.


Thanks Vir! I appreciate your going over the issues with us. It makes it a lot easier to understand the parameters and pros and cons for our suggestions.

I also really appreciate that you folks are willing to give the bone translations a shot. That will make a huge difference and I hope it can be maintained in the end.



Lexbot Sinister wrote:


Vir Linden wrote:

 To do a “touch your toes” animation, you would have to move every joint of the upper body at every keyframe. With a conventional skeleton, you would have to bend at the waist and shoulders, and the rest of the bones could just follow along based on the skeletal hierarchy.


I believe this is a false assumption. You can never animate a "touch your toes" pose while leaving any joints untouched/unanimated with the conventional skeleton. It leaves those joints "free", and they will still be animated, but by the default SL animations. This is a very common beginner mistake when people make poses: "I made a sit, but my foot/head/arm is twitching/moving". Every joint has to be moved to become locked in its position, unless it's planned to be animated by another animation; either way, no joints are ever left unanimated, at least on the current body.

I believe the exceptions are very few, like leaving the neck or head free to move so the avatar's head follows the camera. Most of the time, though, it's undesired.

As Lexbot said, the only situation in which you can animate just a small number of bones is when another animation is already playing beneath it that you want to continue playing. In this manner, every bone is always animating. For a "touch your toes" animation that bends only the spine, you would first need a background animation that holds all of the avatar's limbs in a fixed position so that they're not flailing around. For something like a "take a sip of coffee" animation, you could animate only the avatar's arm, head, and neck while expecting the rest of the body to continue animating with the user's animation overrider.

 

Also, unless I've vastly misunderstood the way animations work on the back end once you've exported them: if you have an animation that moves 3 bones (like an arm) over 100 frames, with keyframes at 1, 50, and 100, you are not exporting an animation that has a total of 9 keyframes. You're exporting an animation that has 300 keyframes, in which each bone's position is recorded on each of the 100 frames. So by that logic, even if a bone is left free-floating and must be animated along with its parent bone, not much is lost from a performance standpoint. Please do correct me if I'm wrong, though, as my expertise lies in making things look pretty, not in writing the code that makes them function.

 

I do understand that the root of your argument is that it would be BETTER if all the bones were not free-floating, and obviously I agree with that. But let's make sure we're not painting free-floating bones as the root of all evil, as they can actually serve a great purpose in expanding the limited capabilities of the SL armature when used well. :)
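The arithmetic in the post above can be made concrete with a quick sketch. Whether SL's .anim format really bakes a sample for every bone on every frame is exactly what is being questioned in this thread, so treat the "baked" figure as the post's assumption rather than established fact; the numbers below are illustrative, not the actual file layout.

```python
# A 100-frame clip animating 3 bones, with authored keyframes at
# frames 1, 50, and 100 for each bone.
bones = 3
frames = 100
authored_keyframes = [1, 50, 100]

# If every frame is baked: one sample per bone per frame.
baked_samples = bones * frames

# If only authored keyframes are stored (as sparse formats like DAE allow):
sparse_samples = bones * len(authored_keyframes)

print(baked_samples, sparse_samples)  # 300 9
```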



Vir Linden wrote:

Based on the feedback to date, we will be re-enabling animation of joint positions on aditi. We are looking forward to seeing what you all can do with the additional capabilities.

Just thought I'd highlight the important bit, so nobody misses it.


"Re-enabling" kind of means that it will work the old way, with just .anim files? Because if that's the case, we can hardly test anything, considering that those animations can only be made with Avastar and Blender. Any chance of importing animations through DAE? I would like to see FBX support, but we already have DAE support, which can contain the animated skeleton and which SL could parse into SL animations. Otherwise, animations will be very limited for users, considering that Blender is far from a good animation tool. And while BVH support isn't bad, let's not forget that it is a format designed mainly for motion capture. Having DAE file support for animations would be really great; it would be very easy to import any type of animation file and export it as DAE in all kinds of software. Also, something to keep in mind: DAE can save individual keyframes, meaning that only the keyframes that have actually been animated would contain info, saving a lot of data (i.e. you would just get info for the keyframes that have been moved/rotated, and the rest would be left blank, instead of saving a keyframe on every single frame of the animation). We could get better-quality animations with a smaller footprint.


I wonder if this solution for scaling would be possible.

I'm not a programmer, so all I can do is write out the basic math that should do the trick. It would be nice if someone more knowledgeable could look at this solution and say whether it's OK or not.

Can't the translation be calculated from the initial parent bone offset?

Let's say that from mTorso to mPelvis there is a distance of 0.084m on the Z axis; this would be a value of 1.0.
If an animation moves mTorso forward two meters, the translation would be multiplied by the new offset divided by the default offset (the default offset is calculated from the joint offsets only once, not after being moved by an animation).

I guess it could look something like this:

Final Translation = AnimationOffset * (NewParentOffset / DefaultParentOffset)

 

For example, if you animate the mTorso bone forward 2 meters:

On the default avatar it would be calculated like this:

2m * (0.084m / 0.084m) = 2m * 1.0 = 2m

The bone would move 2 meters because the bone offset is the same as the default. But if instead we do the same on an avatar that is half size, with a parent bone offset of 0.042m instead of the default 0.084m, it should move half as far:

2m * (0.042m / 0.084m) = 2m * 0.5 = 1m

The bone would move what it should. This would apply per axis, of course, but right now I can see it working. But we must keep something very important in mind: animations would have to be uploaded as relative or absolute via some kind of checkbox, because if you make a custom avatar that needs custom animations, those don't need, and shouldn't use, any kind of calculation. The problem here is that we need a default offset to compare against, and we can only get that from the default avatar, unless there is some way to specify those values directly through the animation's first frame or such. Anyway, I wouldn't see this as an issue, because if you upload an animation as relative, you should know that you have to use the default avatar size. This would primarily fix the issue of making AOs for humanoid avatars that require bone translations, while also being compatible with different avatar sizes. It wouldn't work on custom avatars that require custom animations, but it shouldn't be necessary there at all, because, say, a quadruped wouldn't need to share animations with other quadrupeds. Sharing would still be possible as long as only rotations are used.

My apologies if this makes no sense; I'm not a programmer and I just wrote this quickly as a base to convey what I meant. It is in no way intended to be a perfect mathematical form or anything.
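The proposed formula is easy to transcribe into code so the worked examples can be checked. This is just the post's math restated (with the bone offsets taken from its example), not how the SL viewer actually computes anything:

```python
def scaled_translation(animation_offset, new_parent_offset, default_parent_offset):
    """Final Translation = AnimationOffset * (NewParentOffset / DefaultParentOffset)."""
    return animation_offset * (new_parent_offset / default_parent_offset)

# Default avatar: mTorso sits 0.084m from mPelvis, so the ratio is 1.0
# and a 2m keyframe translation passes through unchanged.
print(round(scaled_translation(2.0, 0.084, 0.084), 6))  # 2.0

# Half-size avatar with a 0.042m parent offset: the same 2m keyframe
# is scaled down to 1m, matching the post's worked example.
print(round(scaled_translation(2.0, 0.042, 0.084), 6))  # 1.0
```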


Okay ~ I want to preface this by stating ~ I have no idea what I'm actually saying here~  this is a guess. 

But when you load an animation on an avatar in SL ~  isn't it calculating how to move and rotate those joints based on a pre-existing keyframed dataset?  There is no real-time adjustment of the actual keyframed data based on the size of the joints in the avatar itself; that's done during the animation pre-load ~ not while it's being played in real time.  So wouldn't scaling bones mid-animation require an entirely new method of playing animations?

 

Please, someone who actually knows what they're talking about correct me!


I am very happy to hear that. It is the solution to really unleash creativity!

Regarding issue 3, the scaling problems: would it be possible to record bone translations as percentages (in the .anim format)?
Translations would then stay proportional, and this would solve the scaling issues...



Kitsune Shan wrote:

Also, something to keep in mind: DAE can save individual keyframes, meaning that only the keyframes that have actually been animated would contain info, saving a lot of data (i.e. you would just get info for the keyframes that have been moved/rotated, and the rest would be left blank, instead of saving a keyframe on every single frame of the animation). We could get better-quality animations with a smaller footprint.

I have a question about that. Can the internal SL animation files store keyframes? If not, this would only be helpful on upload, saving some microseconds once. If they can, this sounds like a good idea.


I ~ uhm.  Err.. What?

The internal animation file for Second Life is a LLKeyframeMotion File ~ also known as a .anim file.  It's a proprietary format.


I thought the question was quite clear.

If you upload an animation file with keyframes, is it stored on the SL servers with keyframes, or is every change stored per frame?

Going by your answer, it would be the former. Just asking because all the .bvh files I uploaded over the years don't have keyframes.


Well, I have no idea how an internal animation really works, but seeing that SL plays them without regard to the fps SL is running at the moment, it means that it interpolates every frame. So, in theory, you shouldn't need every frame to be stored, because if you played a 30fps animation while running at 60fps, it would be noticeable, right? That's something only LL can tell us. Anyway, we are changing things, so this is a good opportunity to optimize some aspects of that. As long as it supports DAE animations, I don't really mind how SL stores those animations, whether optimized or by brute force with every single keyframe.


I feel like you're somewhat misunderstanding your own question ...

 

Animation files are entirely keyframes. If you have no keyframes, you have no animation.

 

But yes, the animation file on the server contains only the keyframes you have made: points of data that mark where on the timeline a bone changed its rate of rotation. Everything else is tweened. This is so the animation can be played at any speed and any framerate. (Animations will be as smooth as the refresh rate of your monitor; the timeline is floating-point, not frame numbers.)
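A minimal sketch of the tweening described above (my own illustration, not LL's LLKeyframeMotion code): the pose is sampled at a floating-point time, interpolating between the two neighbouring keyframes, which is why playback is independent of the viewer's framerate.

```python
def sample(keyframes, t):
    """keyframes: sorted list of (time_seconds, value). Linear tween between neighbours."""
    # Clamp to the first/last keyframe outside the animated range.
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the surrounding pair and interpolate.
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# One bone's rotation channel: 0 degrees at t=0, 90 degrees at t=1.
kf = [(0.0, 0.0), (1.0, 90.0)]
print(sample(kf, 0.5))   # 45.0 - halfway, regardless of display framerate
print(sample(kf, 0.25))  # 22.5
```

Real rotation channels would use quaternion slerp rather than plain linear interpolation, but the framerate-independence argument is the same.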


I know perfectly well what a key frame is :)

If, let's say, my animation has 20 frames and I only use the first and last of those to specify a bone transformation, I have only two keyframes; the remaining 18 are in-betweens. My question was whether those in-betweens are stored as keyframes in SL. They are in my .bvh files (which I have always used to upload my animations), but not in Poser or 3ds Max, for example.

EDIT: To clarify the question... I mean the transformation between the file I upload and the file that is stored on the servers, not the transformation between the file on the server and the motion we see on screen.


Which do you use, Kitsune? Blender, Maya, or 3ds Max? You have to use one of those to make an avatar. OK, yes, Blender has Avastar, which allows you to export .anim files. At the same time, though, Maya has Mayastar, and I'm sure Cathy could get that .anim file formatting from the Machinimatrix team. So, with that, both Blender and Maya are covered.

I just don't think you are going to get LL to rewrite a whole new uploader. That said, I did bring up the problems with the current BVH uploader, which needs to be rewritten, and if they actually do that, then you might get what you want with extended BVH files.



Kitsune Shan wrote:

Otherwise, animations will be very limited for users, considering that Blender is far from a good animation tool.

What?

Sorry to break this to everyone, but Blender's animation system is FAR superior to any other I've ever touched, and I've pretty much touched them all. Really, there is no other animation system that even comes close. When I use Blender, I feel like I have complete control over everything animation-related: from mocap, to IK systems, to having half a dozen ways just to rotate a bone. I really would not want to animate in any other program. Heck, I should do a video on Blender animation tricks, as they would blow away Maya users. Not Blender users, though, because those tricks are second nature to us.

Now, of course, some of this is opinion, but to say Blender isn't good for animation is just flat-out silly, IMHO.

 

How many programs can do this, without using any other program? If you ask me, Blender is the king, at least to us peasants.



Kitsune Shan wrote:

"Re-enabling" kind of means that it will work the old way, with just .anim files? Because if that's the case, we can hardly test anything, considering that those animations can only be made with Avastar and Blender.

Would it help if we gave Avastar-2-beta away to testers for free?


Kitsune Shan wrote:

Any chance of importing animations through DAE? I would like to see FBX support, but we already have DAE support, which can contain the animated skeleton and which SL could parse into SL animations.

If (if ever!) Linden Lab provides an importer for animations based on Collada or FBX, we are ready to adjust Blender's exporters (if necessary).


Kitsune Shan wrote:

Otherwise, animations will be very limited for users, considering that Blender is far from a good animation tool.

Can you give a few examples of where Blender's animation system is bad compared to other tools? I am asking out of curiosity. Maybe we can improve things when we know what non-Blender users think is bad. (No offense intended; I just have no experience with tools other than QAvimator and Blender, so I do not know what cool things other tools offer.)


Kitsune Shan wrote:

Also, something to keep in mind: DAE can save individual keyframes, meaning that only the keyframes that have actually been animated would contain info, saving a lot of data (i.e. you would just get info for the keyframes that have been moved/rotated, and the rest would be left blank, instead of saving a keyframe on every single frame of the animation). We could get better-quality animations with a smaller footprint.

When you transport only keyframes to the target system, you let the target system decide how to interpolate. This is not bad at all, and I do not criticise it.

But doesn't interpolation possibly result in slightly different animations depending on how the interpolation is done? And in what other way is a keyframe-based import lower quality than a frame-based import?



Gaia Clary wrote:


But doesn't interpolation possibly result in slightly different animations depending on how the interpolation is made? So how else is a keyframe based import less quality compared to a frame based import?

I'm not sure I can answer that, but I can talk about SL's importer and the problems people experience with animations not turning out how they made them. Yes, it is about how LL interpolates the keyframes, and the rules it uses. When two keyframes are right after each other and no movement, or very little movement, occurs between them, the importer seemingly ignores the keyframe. In my experience, this happens mostly because users are using an extremely high frame rate. When one thinks about animation optimization, one has to think about frame rates. Not all animations need 60 fps, or even 30 fps; most could get away with less than 10 fps. It all depends on how fast the movement is. When SL animators try to import an animation with very slight movements at 30 fps, they are going to see issues for sure. If they just lower their fps, though, the movement between keyframes will be greater, and the importer will upload the animation correctly.
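The frame-rate effect described here can be sketched numerically. The 0.5-degree threshold below is made up purely for illustration; the real uploader's dropping rule isn't documented in this thread:

```python
def surviving_keyframes(total_rotation_deg, duration_s, fps, threshold_deg=0.5):
    """Count per-frame keyframes whose rotation delta from the previous
    frame exceeds an (assumed) importer change threshold."""
    frames = int(duration_s * fps)
    per_frame_delta = total_rotation_deg / frames
    return sum(1 for _ in range(frames) if per_frame_delta > threshold_deg)

# The same slow 10-degree movement over 2 seconds, authored at two rates:
print(surviving_keyframes(10, 2, 60))  # 0 - 0.083 deg/frame, all under threshold
print(surviving_keyframes(10, 2, 4))   # 8 - 1.25 deg/frame, every keyframe kept
```

At the high frame rate the motion is spread so thin that no single frame clears the threshold, which matches the advice above to lower the authored fps for slow, subtle movements.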

