Linden Lab

Project Bento Feedback Thread



Whirly Fizzle wrote:

Discussion about Project Bento from today's third party viewer meeting - including

 

 

Ha! You beat me to it! The bone translation discussion starts at 26:45

 

At one point, Vir talks about how these translated expressions wouldn't scale, or something like that. I really don't think anyone expects the same animations to work on every avatar. Yes, if the avatars are human and the facial bones aren't moved, then any facial expression made from those default positions should work. A custom avatar like a wolf, though, would need his own expressions and facial bone positions. Maybe I just don't understand what Vir is saying.


Whirly Fizzle quoted:

Some older graphics cards and drivers may encounter difficulty rendering the increased number of joints, and you may experience a change in framerate as a result.  If possible, upgrade your OS or driver to the latest version...

 I missed that part but it goes without saying really.

But a very important question here is exactly how old a driver will have to be before we run into trouble. 14.4 is still, and will probably always be, the last AMD driver to work reliably with Second Life. And of course, Intel graphics cards never worked well here. Does this mean Second Life will only work with Nvidia cards from now on? That would be a disaster for SL.

Then again, as I wrote, in terms of lag Bento is probably the lesser of two evils. We have all kinds of creative uses of fitted mesh, flexis, and scripted attachments instead right now, and that was just beginning to take off completely. Those things aren't exactly low lag either, and they're not nearly as potentially useful as these new bones are.


Thanks, Lindens, for the new tools. They are great additions and will make SL much better and much more immersive. Would it be possible to make animation imports simpler, with more widely used formats (such as FBX)? It looks like the .anim format Maya exports is not the same .anim format SL expects (or are they the same?).


Cheers, joint positions work for me now, for some reason. :3



Regarding potential lag from extra bones: it couldn't be any worse than flipping through a bunch of different meshes/faces to animate a tail.

I don't know enough about the .anim format to have an opinion. All I know is that it appears to work, and the bvh doesn't, but I'm sure I'm not utilizing it correctly.


A friend of mine just asked me a question and I have no idea about the answer so I pass it on.

How modifiable will a Bento avatar be inworld?



ChinRey wrote:

A friend of mine just asked me a question and I have no idea about the answer so I pass it on.

How modifiable will a Bento avatar be inworld?

Well, outside of the faces, as modifiable as the default, with even more options thanks to all those bones. If you look at the bug fixes for Bento, one fix removed a bug that stopped creators from using fitted mesh with different joint positions. So, now that the bug is fixed, I could make a wolf that could be fat or skinny, with a big head, huge paws and whatnot, just like the default, again outside of facial changes. These facial changes could also be done with the new facial bones, but then you would likely lose your expressions.

Another point to add: Vir, responding to me in this thread, said that LL is planning on adding more collision bones to give us more sliders to use. He didn't expand on this, but I imagine LL will only add collision bones for facial sliders, not the body, since changing the body bones would break old fitted mesh clothing. Because of that, I'm assuming Vir just means facial bones. With this addition, though, our full mesh avatars will be just as modifiable as the default, and likely much more so with all the options we have now.

Of course, this is all just my observations, and please don't take them as fact.


The original work-around involved wearing a small invisible cube or something that was brought in with joint positions checked, and then wearing the multiple other fitted mesh pieces that didn't come in with joint positions. I didn't know that was fixed. Handy! And yes, ideally, if you move your fitted mesh bones correctly with the proper offset, which is a tedious task (though avastar was working on those bones getting snapped automatically), you can get non-human sliders to work. That is my eventual dream. X)



Tornleaf wrote:

The original work-around involved wearing a small invisible cube or something that was brought in with joint positions checked, and then wearing the multiple other fitted mesh pieces that didn't come in with joint positions. I didn't know that was fixed. Handy! And yes, ideally, if you move your fitted mesh bones correctly with the proper offset, which is a tedious task (though avastar was working on those bones getting snapped automatically), you can get non-human sliders to work. That is my eventual dream. X)

I played with this for a bit, but it just got far too messy with the compounding bugs. I got a good understanding of it, but didn't think the final result was worth the headaches. Now it seems very doable, but it will be very tedious and time-consuming to get just right. I really can't wait to put full expressions on my wolf and Lycan avatars, though. It will really make them come alive.

 

Oh, and to avatar creators out there: you can also create speech gestures for your custom avatars, and with these new bones, they should be much more interesting. Speech gestures are triggered when using voice, and the different pitches of your voice can trigger different animations. It really does make your customers' avatars feel more real when they move with the voice.


Since potentially everyone's going to have their avatar_lad.xml files updated with the coming Project Bento, now might be the perfect time to fix some of the slightly different origin points for right-side-of-body attachments. Previously the problem with fixing it was that not everyone would see the same thing until everyone was updated.

 

Fixing it would make having perfectly mirrored attachments much easier since they'd be on the same coordinate system. I believe it was shown to be far too subtle to break existing content.


The Lindens have stated that they want concrete examples of why bone translations are important - examples of what can be done with translations, and what work-arounds and compromises must be made to accomplish similar results with rotations only. That's a very good point, and yes, we really need some empirical evidence here.

 

But one problem I feel Linden Lab really should keep in mind is that they didn't actually block bone translations on Aditi. Many people exercising the Bento skeleton right now have no idea bone translations were even intended to be blocked. You can rig and animate the face just fine right now, and many people are simply declaring it "ALL CLEAR!"


Just so folk here are 100% clear, and I'll risk being annoying about it if they already are: animated bone translations are not joint positions. Joint positions are the initial altered locations of joints in a skeleton, enabled on a rigged mesh at upload. Animated bone translation means moving these joints via animation, even if it's just one frame, to act similarly to joint positions. Stopping the animation, however, doesn't return the bones to the previous shape/location the way joint positions do, until relogging. Though, from reading here, apparently popping off a joint-positions mesh in some cases with some viewers doesn't always return the shape?

I could be wrong about this, but I was told a little while ago that .bvh could be imported with animated bone translations by altering something in the export file itself, and apparently had to have 1 bvh file per bone translation. I think some MLP animations make use of this? Again, I could be misremembering. Just a sidenote.
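As a sketch of what that BVH tweak amounts to: in standard BVH only the root joint normally declares Xposition/Yposition/Zposition channels, and the trick described above is essentially adding those translation channels to other joints' CHANNELS lines. The minimal check below is hypothetical Python (a real BVH reader needs a full parser), just to illustrate where the difference lives in the file:

```python
# Detect which joints in a BVH file declare translation channels.
# In standard BVH only the root usually has Xposition/Yposition/Zposition;
# editing other joints' CHANNELS lines is what enables translation for them.

TRANSLATION_CHANNELS = {"Xposition", "Yposition", "Zposition"}

def joints_with_translation(bvh_text):
    """Return names of joints whose CHANNELS line includes translation channels."""
    joints = []
    current = None
    for line in bvh_text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] in ("ROOT", "JOINT"):
            current = tokens[1]          # remember the joint being declared
        elif tokens[0] == "CHANNELS" and current is not None:
            if TRANSLATION_CHANNELS & set(tokens[2:]):
                joints.append(current)   # this joint can be translated
    return joints
```

For a typical SL upload, running this over an exported BVH would list only mPelvis; a file modified per the work-around above would list the extra joints too.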


That's correct; bone translation via BVH only worked by fluke. Bone translation via SL's internal animation format, .anim, was supported natively, but not on purpose.

 

Also: Many avatars get around the need to relog by playing the reverse animation when a deformed attachment is removed. So long as one is removed before a new one is added, they clean up after themselves and don't trash the avatar.

 

This of course would not be an acceptable solution for any formal LL feature, and there'd need to be a way to clean up the avatar whenever an animation stops.
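A toy model of that reverse-animation bookkeeping, in Python (the joint names and offsets are made up, and real deformers are .anim files rather than dictionaries; this only illustrates why removing deformers in reverse order restores the avatar):

```python
# Toy model of the reverse-animation cleanup trick: a one-frame "deformer"
# animation adds per-joint translation offsets, and its reverse subtracts
# them, returning the skeleton to the bind pose without a relog.

from copy import deepcopy

# Hypothetical bind-pose joint positions (dyadic floats, so arithmetic is exact)
BIND_POSE = {"mChest": (0.0, 0.0, 0.0), "mNeck": (0.0, 0.0, 0.25)}

def apply_offsets(pose, offsets, sign=1):
    """Add (or, with sign=-1, subtract) per-joint translation offsets."""
    out = deepcopy(pose)
    for joint, (dx, dy, dz) in offsets.items():
        x, y, z = out[joint]
        out[joint] = (x + sign * dx, y + sign * dy, z + sign * dz)
    return out

deform = {"mNeck": (0.0, 0.125, 0.5)}        # the one-frame deformer animation
pose = apply_offsets(BIND_POSE, deform)      # attachment worn: avatar deformed
pose = apply_offsets(pose, deform, sign=-1)  # reverse animation on detach
assert pose == BIND_POSE                     # avatar restored
```

The fragility the post describes falls out of this model: if a second deformer is added before the first one's reverse plays, the subtractions no longer match the additions and the avatar is left trashed.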

 

But first let's focus on why we need it: rally creators to make some example rigs and animations showing what is possible with bone translations, and what workaround compromises must be made to achieve similar results with rotations only.



Adeon Writer wrote:

The Lindens have stated that they want concrete examples of why bone translations are important - examples of what can be done with translations, and what work-arounds and compromises must be made to accomplish similar results with rotations only. That's a very good point, and yes, we really need some empirical evidence here.

 

But one problem I feel Linden Lab really should keep in mind is that they didn't actually block bone translations on Aditi. Many people exercising the Bento skeleton right now have no idea bone translations were even intended to be blocked. You can rig and animate the face just fine right now, and many people are simply declaring it "ALL CLEAR!"

See, the problem with LL's request is that it is not based on objective analysis, and it can't be. The difference between rotating bones and translating bones in facial animations is subjective. Now, I'd be willing to bet that EVERY SINGLE animator in SL will say that translating facial bones is better, but better is not the standard that LL seems to be seeking.

I did my best to give LL exactly what they asked for with my video showing the differences between the two techniques. That is really all we can do: show how and why rotation is never really used for facial bones.

Personally, I don't really care what is or is not doable right now on Aditi, only what LL says they are going to do.


I'm not sure why we have to provide so many examples of bone translation and its uses.
Bone translation for facial expressions isn't just an option, it's a must-have. Anyone in the industry can tell you that you simply CAN'T animate a face with rotations alone. How are you supposed to move, for example, the lips with rotations? The only bone where rotation is necessary is the jaw. Even an eye blink isn't made with rotations, even if it could technically be possible. LL has to realize that every time you rotate a bone, some vertices rotate in place (those near the joint center) while other vertices move in the opposite direction.

Aside from that, there is one hugely important reason why we need bone translation that I hope to explain well enough in my comment.

Bone translation is a different way of using joint offsets.
Take for example a mesh head. The ideal mesh head is different enough from the SL standard avatar; otherwise the mesh head wouldn't make much sense, right? OK, so we have made a new mesh head and it looks amazing. Obviously, this mesh head doesn't match the default one, meaning that certain bones may need joint offset adjustments to fit this new head. If we force joint positions, we aren't just forcing them on the mesh head but on the whole avatar. This could distort avatars that shouldn't require any modification of joint offsets. Basically, we would end up with content that breaks other content, just for the sake of making it work without bone translation animations.

If we had bone translations, this could be avoided. Animations have always been priority-based, and bones that aren't animated aren't affected by an animation at all. I would assume we could give this kind of priority to bone translations in BVH as well.

So, with the example I used before, our mesh head could be uploaded without joint positions. This, of course, would lead to a distorted mesh once worn. But we would then have animations to move the bones into place and fix the issue without affecting the rest of our avatar's shape or other attachments.

If these animations were implemented properly, stopping them would return your avatar to normal, fixing another big issue that has been there since the introduction of mesh: people getting distorted depending on what they wear.

Certain parts with new bones will surely also need bone translation. Wings, for instance. Wing-opening animations aren't done with rotations only; when wings spread, you may need to move some of the bones too. This example may not be that clear, but once you get into animation and try to animate some wings, you will see why it can be very useful.

But leaving aside all those examples, facial expressions alone give enough weight to make this feature a must-have. As many people have said already, bone translation isn't a mere improvement for facial animation but a 100% necessary feature. I would guess that from a programming point of view it may be hard to visualize mentally why rotations don't work. I would suggest importing the angel file into any animation software and trying to animate the face. You will realize pretty fast that it simply doesn't work.

There is also another issue that people are missing pretty badly, probably because they don't know about it or simply never got that deep into 3D beyond using it for SL: the per-vertex influence limit. I have said before that the already too-short limit of 4 bones per vertex has been really problematic since the introduction of fitted mesh. Now it will become even worse. If you want smooth facial weights, you can't limit yourself to only 4 influences when there are more than 30 bones involved. Another example could be wings or the back: having to rig something that needs UPPER_BACK, CHEST, mChest, mPelvis, PELVIS, mTorso, mCollar, etc. is already impossible. This limit was raised long ago in most game engines. We really need to get this fixed in SL as well. Those who rig for fitted mesh surely understand what this means.
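To make the vertex-limit point concrete, here is a small Python sketch of what clamping to 4 influences per vertex does to a smoothly weighted vertex. The bone names and weights below are hypothetical; the point is that the smallest contributions are simply discarded and the rest renormalized, which is exactly what turns a smooth blend into a stepped one:

```python
# Sketch of the 4-influences-per-vertex clamp applied to skin weights.
# A vertex smoothly blended across six bones loses its two weakest
# influences entirely; the surviving weights are renormalized to sum to 1.

def limit_influences(weights, max_influences=4):
    """Keep the strongest influences and renormalize so they sum to 1."""
    strongest = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(strongest[:max_influences])
    total = sum(kept.values())
    return {bone: w / total for bone, w in kept.items()}

# Hypothetical face vertex blended across six Bento-style bones:
smooth = {"mHead": 0.30, "mFaceJaw": 0.25, "mFaceLipUpperLeft": 0.20,
          "mFaceLipUpperRight": 0.15, "mFaceNoseBase": 0.06,
          "mFaceCheekLeft": 0.04}
clamped = limit_influences(smooth)
assert len(clamped) == 4          # the two smallest influences were dropped
```

The 10% of weight that lived on the nose and cheek bones is silently redistributed to the other four, so those bones no longer move this vertex at all.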


Not having bone translations would severely hinder/limit facial animation. First, an animated example (best I could do 5 days before Christmas, sorry x.x):

https://gyazo.com/91b3b2bce5d7e4ff457ef2662d8bc53b

No Translation:

Notice that the range of motion is severely limited. We can only move the skinned vertices on a spherical surface around the joint, causing unrealistic eyebrow motion. Even if we moved the base of the bone further away to make the motion more natural, it would only be useful for that one particular motion. You would need a new bone origin for every eyebrow expression.

 

 

 

 

With Translation:

By keeping translation, we are able to make subtle motions of the brow, with no trouble at all. A wide range of positions are available because we are not constrained to a spherical surface.

https://gyazo.com/8183ac5c2ea0dad33b9b2dc686fbc621

  

 

 

 

Some history as to why modern game engines use bone translations:

IIRC (feel free to correct me), back in ye olden days, and maybe even some ye new days, skeletal animation in engines was rotation only, i.e. 3x3 matrices. Modern engines handle 4x4 matrices, allowing for translation.

Do you want to see what happens when all you have are 3x3 matrices, i.e. no translations?

"

We're looking at a bone count of about 50 bones. Why? Because, per the source at the bottom, without these extra bones "the bones would likely not deform in an anatomically correct fashion".

https://udn.epicgames.com/Two/SkeletalSetup.html

Without these bone chains, each bone would be able to move only in spherical coordinates. Can you imagine your eyebrows moving on a sphere and only a sphere? I can't. So these extra bones were inserted between the root and the end point to give them access to the whole 3D volume, rather than a spherical segment of it.
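The spherical-constraint argument above can be checked numerically. In this Python sketch (the vertex coordinates are hypothetical), a pure rotation, all a 3x3 bone matrix allows, never changes a vertex's distance from the joint pivot, while adding a translation, as a 4x4 bone matrix permits, does:

```python
# A rotation-only (3x3) bone transform keeps every skinned vertex on a
# sphere around the joint pivot; adding translation (4x4) escapes it.

import math

def rotate_z(p, angle):
    """Rotate point p = (x, y, z) about the joint's Z axis (rotation only)."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def transform(p, angle, offset):
    """Rotation followed by translation - what a 4x4 bone matrix allows."""
    x, y, z = rotate_z(p, angle)
    dx, dy, dz = offset
    return (x + dx, y + dy, z + dz)

def dist(p):
    """Distance of a point from the joint pivot at the origin."""
    return math.sqrt(sum(c * c for c in p))

brow = (0.05, 0.0, 0.02)    # hypothetical vertex near a brow joint
r = dist(brow)
for angle in (0.1, 0.5, 1.0):
    # Any pure rotation leaves the vertex on the same sphere of radius r.
    assert abs(dist(rotate_z(brow, angle)) - r) < 1e-12
# A small upward translation lifts the vertex off that sphere entirely.
lifted = transform(brow, 0.0, (0.0, 0.0, 0.01))
assert abs(dist(lifted) - r) > 1e-6
```

That last line is the "subtle brow raise" case: no choice of rotation angle can produce it, which is why rotation-only rigs resort to extra intermediate bones.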

If it weren't 5 days until Christmas, with New Year's soon after that, I'm sure I could make more custom-tailored examples, but hopefully this establishes the need for translations for now.

 

 


@ Vir Linden:

 

If your main concern with bone translations is that facial animations won't translate between one face and another... could you not just explain that in Bento's release documentation, so that animators who intend to make their facial animations widely compatible among multiple human faces can knowingly release their animations with rotation only? Why force everyone to adhere to a rule that doesn't need to affect everyone?

Not all heads need to share the same animations. Not even MOST heads need to share the same animations - different facial shapes, features, and rigging will make unique facial expressions per avatar almost a necessity. I imagine many avatar creators, like myself, will WANT to provide unique, custom animations that fit their own avatar head flawlessly, and ought to be allowed to do so to the highest quality they are able. Facial bones that are rotated simply don't make for the same high quality, realistic look that facial bones that are translated do.

 

But moreover... not everyone is human!

 

----------------------------------------------------------------

 

I'm going to take some time to elaborate on another argument in favor of bone translation in animations that I don't feel anyone else has adequately covered. 

 

Restricting bone translation is extremely debilitating to those of us who are not creating human avatars. Before Bento released, I was working on a new horse avatar that had a two-bone neck, to simulate more realistic movement for an animal with many vertebrae in its neck. The avatar also had rigged upper and lower eyelids, a rigged jaw, rigged upper and lower lips, rigged left and right ears, and a rigged tongue. I achieved this by moving the position of the shoulders relative to the mTorso bone instead of the mChest bone in each of my avatar's unique stand and action animations, so that the mChest bone could be used as the base of the animal's neck. This was a thing I could do as the creator of a uniquely rigged and uniquely animated avatar. It was a simple and efficient solution that made a big difference.

What I had before - https://gyazo.com/18fc9ddf561ceb5cb0f2cdb58c007fda

See how beautiful that motion is? Just lovely.

 

Now that I am expecting to no longer be able to move the location of the shoulders, I am left with few options to achieve what I had planned to achieve. Let's review the options:

 

Option 0: Keep the rig I have now and provide no compatibility for new animations in the future. Out of the question. I will not do this as this is unfair to my customers and means no longevity for my avatar.

 

Option 1: If I use mChest as part of the neck as I was doing before, the horse's front legs will follow the neck, not the body. Not desired action.

Example - https://gyazo.com/09abe61a07c41c0762f40fe757593635

 

Option 2: If I use only the mNeck bone in the horse's neck, I lose the realistic motion that's had my customers raving - the main selling point of my new avatar. I am once again stuck with the same stiff, rigid, unrealistic neck that comparable avatars have today. This is not attractive and wouldn't be acceptable in ANY 3D platform outside of Second Life, so why settle for stiff motion here? I strive to achieve better.

Example - https://gyazo.com/cc48424ef640036a108b7eb687f5ceee

 

Option 3: If I use the mNeck and mHead bones in the horse's neck, use FaceJaw for the head, and FaceTongue Base for the jaw, I can achieve a neck and head that bend properly. However, I'm left with only 3 additional bones with which to rig the head - one of which must go to either the lower lip or the tongue, as it is parented to the bone being used for the jaw. However, since it can only rotate, not extend out of the mouth, I can safely eliminate the tongue and thus assume it must be used to rig the lower lip. With the remaining two bones, I must choose between rigging the eyelids - likely the upper, not lower, eyelids - or the ears. Either way, I lose a great deal of motion.

Example - https://gyazo.com/7d21fcf0a5b1504c5041a0ff906e73c1

 

Option 4: Here's where I hope I really make my point. In order to best achieve the same range of motion I would have had with bone translations, my best option appears to be to rig the head and neck to one of the avatar's arms. Yes, really. If I am to rig the neck to the three arm bones and the head to mWrist, I am left with 5 strings of finger bones to use on the face. One set must obviously go to the jaw, with spare bones to be used on either the lower lip or the tongue. I am then left with 4 chains to sort out between the left ear, right ear, left eye, right eye, and upper lip. This option provides the widest range of motion and comes the closest to achieving what I would have achieved with bone translations. The obvious drawback of this solution is that, when rigging to the avatar's arm, shape sliders will be awfully skewed and will not scale symmetrically from left to right.

Example - https://gyazo.com/51e8291af31925b0d01bc67e121e9d46

 

Think this last idea is stupid? I do too! And yet, it speaks volumes that option 4 is my best available option. It seems clear to me - and I hope it's clear to you too, from viewing these gifs - that this fourth rig example provides the best range of motion, and that is just not good design. As a creator who models, rigs, and animates my own avatars, I should have the freedom to take enough liberties with my own rigging and animation to achieve a range of motion similar to what I would be able to achieve on any other platform. 

LL should absolutely ENCOURAGE compatibility between avatars - especially human avatars, which number in the dozens. But should they FORCE it? I don't feel they should. Second Life was founded on the basis of enabling creativity. By forcing this restriction, you place the conformity of the masses above the creativity of the few.


I linked the video before, but I realized after the fact that no one is really going to understand what that video is.

The voice in this video is Paul Neale, a person whose knowledge and expertise I've grown to rely upon for a substantial portion of my training. His list of credentials as a Technical Director completely eclipses anyone else's on these forums. He's worked for Walt Disney Animation, 2K Games, Rockstar Games, Elliot Interactive. You can read the full list here:

http://penproductions.ca/clients.htm#Games

 

My point is this: This man knows just about everything about setting up a character rig that there is to know.

 


Paul Neale:

The number one thing I find new Technical Directors want to do is try to limit controls for animators - making sure, for instance, that the elbow doesn't rotate too far, or doesn't rotate backwards. Animators will hate you for that without question. They want to just unlock everything. On the current rig I just did, they wanted to be able to move and rotate every joint in the body, even if that joint isn't supposedly able to move - and rotate and scale, by the way - so everything needed to be able to move, rotate and scale for them to get the motion they wanted. They didn't want to run into the position where they suddenly couldn't do something. And that's especially true for cartoony characters.


I will do my best to post some more specific examples from my own work, but I'm in the middle of another project right now and it's a bit of effort to get this rig into a head in a functional manner (as I don't use the foodprocessor as my main piece of software).

I will post more later!!


Haha, that is certainly true for me. I disable and unlock all limits and channels before animating. I'll not be restricted by reality! *twists skull off head*


Little update on this for those who may be struggling with the same problem.

 

Once I included bones in the FBX export, the problem with all the bones collapsing to the origin went away.

 

Still figuring out why the actual joint positions aren't recording.

 

Fixed!! Removed the excess bones from the file (mPelvis is the root of the hierarchy). Exporting now works properly!



 

Also worth mentioning that 2013.3 FBX Converter works fine.

Straight export out of 3ds Max 2015 and 2016 to DAE also works, though it can occasionally throw warnings/errors during the export process; the result still remains functional in-world.


Using the proper version of FBX is not only about joint offsets but also about properly exported multi-material sub-objects. I'm not sure whether they have fixed all of the issues I found in the uploader already, but I still find it safe to use that version, as you won't get any benefit from using another one anyway. While the DAE export option from Max works, it seems to have been bugged since 2015, taking too much time to export.

Anyway, I'm glad that my workflow worked to fix your issues with joint offsets :)

