
Mel Vanbeeck


Posts posted by Mel Vanbeeck

  1. 6 hours ago, Vir Linden said:

    Maybe at some point, but we won't be doing any kind of bake API for the initial release.

    This is the worst news I've seen in a while. It means the applier marketplace will be further fragmented, inventories will fill with unusable items, and people will have a lot of extra work figuring out which content works with their chosen mesh body parts. Given that the main consideration behind all of LL's feature updates has been to avoid breaking any content, this effectively breaks the entire market it touches, since it's not really possible to use the technology as intended (eliminating onion skins) while maintaining compatibility with any existing applier content.

    I thought this point had been made to the extent that it didn't really need to be discussed further. Could you explain the thought process behind this decision? The API is too difficult, or just doesn't seem important? Something else?

    • Like 1
  2. 17 hours ago, Theresa Tennyson said:

    Scott Adams of the "Dilbert" cartoon wrote a book called "Dogbert's Top Secret Management Handbook." One of the rules for a (bad) manager to keep in mind was, "Anything you don't understand is simple and easy to do."

    When cars were introduced it would have made livery stable owners a lot happier if they weren't introduced until they could run on oats. The engineers who had to figure out a way of making this work wouldn't have been happier, nor would the prospective consumers who'd have to wait until this was worked out. And unlike this analogy, right now in the avatar business there are "gas stations" all over already.

    The question of how difficult this is to do, or what the simplest/best way to handle it would be, has yet to be answered as far as I am aware. I'm not as ignorant on the subject as you imply, but it's not my role to answer those questions. In this context, my role is just to say what I think is necessary, and why.

  3. 5 hours ago, Theresa Tennyson said:

    Because it won't be the present, it will have to be the future, and you don't even know what's involved to do what you want. Meanwhile, things will work for me right now, and releasing it won't do anything to interfere with what the appliers and bodies that use them do. I heard someone in a chat say that they were using Lola Tangos on a Maitreya mesh body. Just because something new comes out doesn't mean that the old things vanish.

    I do my work, Lindens do theirs. You say this as if a feature request is invalid unless you can program the feature yourself and are intimately familiar with the entire system. I would be happy if the Lindens were so in tune with the SL market that they didn't need to ask for feedback on feature development, but lacking that, it's good that they do.

    All mesh body systems will wind up updating for bake on mesh one way or another, and customers as well as skin/clothing designers will have to grapple with whatever that winds up being. Customers could wind up being cornered or just confused into retreat as features are added and removed from their chosen body parts. Customers will have to weigh whether they want to update or not. Once this is live, people will not be able to simply use their old bodies and ignore the bake on mesh release without an accompanying mess of inconveniences like possibly missing out on other features being updated on their chosen body parts.

    Unless of course the features are fully developed, in which case there are no problems to worry about for pretty much anyone involved.

    The whole point of this is to make old things vanish. The onion skins are problematic on many levels, and the solution is to make them unnecessary so people stop wearing them entirely. If they don't actually become unnecessary, people continue wearing them and for all this effort the problem is not solved. All that would be accomplished is throwing the skin/makeup/body market into disarray without actually improving performance.

    • Like 2
    • Thanks 1
  4. Why not instead just release all the relevant features together, then people's stuff all continues working normally, and everyone can decide what to do for future product with all the options on the table? Does that just sound too orderly? You understand that there will be a bit of a grid-wide panic among designers and customers alike if appliers are rendered obsolete, right? This is no way to do things. Designers having to scramble to get usable products on the shelves, customers trying to figure out how to navigate body part updates, etc.

    How many appliers people have and use regularly varies from person to person. There are plenty of people out there with hundreds of USD invested in appliers.

    The difference between this and when rigged mesh was originally released is that this is the present, and that is the past. Unlike the past which we have no control over, we can do things better in the present, if we choose to. Why would you be against that?

    I don't really care whether it involves transitory inventory items, a hidden list, a new panel, whatever. That's the LL folks' job to figure that junk out. Just because we don't know what the simplest implementation is for them to build doesn't mean the feature is any less necessary. Pick something and make it work, is my guidance. If they want to discuss the options with us, then I'm all for it.

    Rest assured that if any script functionality is implemented, there will be a way to make all existing appliers continue to work, with the possible exception of materials appliers if they decide not to support materials.

  5. I came a bit late to today's meeting, catching what seemed to be the second half of a debate over whether it would be worth the energy for the Lindens to create the script functions to support baked mesh from appliers.

    The people arguing against script support seemed to be saying that it's normal for the entire grid's inventory to become obsolete when a new technology arrives, and that this effort is simply to be expected of SL businesses. The appliers that people have already purchased did not seem to enter into the consideration at all.

    The issue here is whether the Lindens will invest the energy into releasing baked mesh in a way that will allow people to continue to make use of the many appliers they already own. It takes a finite amount of effort to do this, and the difference between doing it and not doing it is a substantial share of SL's products either working or not working with future mesh products.

    The desired outcome of this project is that mesh body parts will no longer have onion skins on them (each onion skin being a complete duplicate of the base mesh). Both designers and customers will determine whether they need those onion skin layers. If the script functions to inject into the bake are not created, there will be many very good reasons to continue supporting appliers via onion skins. Designers will create what they will create (perhaps supporting both the bake and onion skins), but users who want to keep using their possibly hundreds of dollars' worth of appliers will use a version of their body or head which supports the onion skins. If the script functions are created properly, the only remaining reason to continue wearing onion skins would be proper materials support. If the bake system is modified so that scripted appliers can supply materials, then there will be zero reasons for anyone to continue wearing onion-skin avatars.

    It's not a question of whether this is valuable. It absolutely is. It's just a question of whether LL wants to take the time to do this for their users.

    One of the major struggles of SL is the new user experience. Absorbing all of the details of system compatibility, proprietary technologies, and just the basics of how things are built and how things work is a fight that often leads people to give up rather than stick around. Without script support for baked mesh, there is one more invalid combination that needs to be understood both by new users and by old users whose current inventory of skins and makeup comes entirely from appliers.

    In my opinion, there is absolutely no excuse for releasing baked mesh without script support. Doing so would only demonstrate a total disregard for the designers and customers of SL and the value of their inventories - and a major lack of understanding of the current marketplace.

    • Like 3
    • Thanks 5
  6. Klytyna's analysis (while *****ly) is absolutely correct on this. This project will not really be finished until materials are supported. If it gets released without materials support, it will certainly get implemented and used by most (if not all), but there will still be a need for legacy onion-skinned layers on any mesh body that intends to put clothing over skin, and those bodies will still be stuck with the mess of sorting out alpha cuts, since the onion layers won't be able to inherit the body alpha from the baked skin layer.

    I will say that in the current generation of mesh bodies, materials are very important to the overall quality of avatars.

    Yes, you can still put materials on mesh using scripts the same way it's done currently, but you'd have just one specular map for your whole body, unaware of which parts of the skin are covered by baked clothing. Your skin's specular and bump would always show through whatever clothing you apply, so if you bake anything other than skin/makeup/tattoo, it will look ridiculous whenever a specular highlight or normal texture is visible.

    I mentioned that it is possible to bake materials together so long as they are properly associated with diffuse textures in their clothing objects, and this could work, but the logic breaks down when you try to apply it to transparent clothing like nylons or latex. For example, if you want to bake underwear onto your skin, the underwear object would carry its own diffuse, specular, and normal maps. The baking service would then take the alpha channel from the underwear's diffuse texture and apply it as a mask to the specular and normal maps before combining them with the underlying skin's specular and normal maps. The resulting baked specular and normal maps would be pretty good, but obviously not correct for clothing that has transparent features.

    For this to work for transparent clothing, the clothing object would need a new texture explicitly containing a mask for the materials. Let's use completely transparent latex as an example. The normal map would need to be overwritten for all of the skin covered by latex, because the outer surface is no longer rough but perfectly smooth there. The specular of the latex should also be dominant over those surfaces even though it is completely transparent in the diffuse map. I would say this extra alpha map would be a vital part of a fully functional materials baking service, but it could be an optional component which is only applied when present, so anyone making clothing with no transparent components could just omit the extra alpha map and the baking service could fall back to the diffuse texture's alpha as described above.
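    To make the compositing step in the last two paragraphs concrete, here is a minimal Python/numpy sketch of how I picture it. This is only an illustration of the logic, not anything LL has described for the baking service, and the texture names and the optional materials mask are hypothetical.

        import numpy as np

        def composite_material_layer(base_map, layer_map, diffuse_alpha, materials_mask=None):
            """Blend a clothing layer's specular or normal map over the skin's.

            The blend is masked by the explicit materials mask when one is
            supplied (e.g. for transparent latex), otherwise by the alpha
            channel of the clothing's diffuse texture, as described above.
            All inputs are same-sized uint8 arrays: maps are (H, W, 3),
            masks are (H, W)."""
            mask = materials_mask if materials_mask is not None else diffuse_alpha
            mask = mask[..., np.newaxis].astype(np.float32) / 255.0
            blended = layer_map * mask + base_map * (1.0 - mask)
            return blended.astype(base_map.dtype)

        # Hypothetical usage: bake the underwear's spec/normal maps onto the skin's maps.
        # baked_spec = composite_material_layer(skin_spec, underwear_spec, underwear_diffuse_alpha)
        # baked_norm = composite_material_layer(skin_norm, underwear_norm, underwear_diffuse_alpha)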

    What I would strongly urge is that if it's really too difficult to design the baking service to handle materials before public release, then fine, people can cope and will release diffuse-only clothing. But do not let the issue of including materials in the bake service go on the back burner! The longer this feature sits without support for materials, the longer people will be building products which will become obsolete the moment materials are properly supported, so plan on continuing the project until it's done! Until that time, mesh body part makers are still going to need to include the onion skin layers for legacy support of the more fully featured applier-based clothing.

    • Thanks 1
  7. I have not done any testing other than uploading a pair of eyes (on unscaled bones) with one of the eye meshes scaled up by 7.777%. I was expecting to see the scale of the eyes change at a different rate compared to the default eyes, but throughout the slider range the small eye stayed small, and the big eye always matched the default mesh as far as scale, though it seemed to be set back about 1mm or so. I used the mesh eyes that are part of the MayaStar kit; I don't know where this slight difference in position is coming from.

    https://i.gyazo.com/cd961628ec43d0ede68511ed14aca41d.gif

    All I could really determine was that the scale in the blend shapes appears to be linear like your lad definition. I doubt my calculations are accurate to the third decimal place considering I was mainly just working off of your numbers which probably had some rounding too. It sounds much more plausible that the person making blend shapes would have used .25 as the scale value.


  8. Matrice Laville wrote:

    The scaling of the system eyes is controlled by morphs and the slider definition uses multiple overlapping ranges which result in a not exactly linear dependency.

    For the Alt Eyes we decided to use the slider setting for 0 and 100 as reference markers and then make a linear interpolation. This results in some deviation between system eyes and alt eyes in the slider mid range. The differing scale value
    value_max="0.56"

    was necessary to map the eye scaling of the alt eyes to the scaling of the system eyes.

    We could use the slider midpoint (50) as third reference marker and split the lad definition into 2 overlapping ranges. This probably gives more precise matches. In which cases is this relevant?

    Mel Vanbeeck wrote:

    This lack of a neutral position for all involved bones results in a discrepancy between what you see in modeling tools and what you see after importing.

    Our tools are prepared to display bones for arbitrary slider settings (also for the alt eyes) exactly in the same way as they are displayed in Second Life.

    I'd say this is relevant in any case where one is attempting to use the new scaling feature of the eye and alt eye bones, since they're responsible for scaling something next to the very thin eyelids, which have very low tolerances for error. Right now, without any correction from a slider-sensitive tool in your 3d program, mesh eyes wind up ~7% smaller than they're supposed to be, which can easily cause the eyelids to wind up 2-3x as "thick" as they're intended to be when using rigged mesh eyes. This could just be documented in a wiki somewhere, but that sounds sub-optimal, and I am not sure it's needed yet.

    I did some testing with some eyes that I scaled up by 7.7% before binding them, and it appears that the linear scaling from 0 to 100 defined in the avatar_lad currently matches the default eyes closely enough that I can't detect any difference with my naked eye. I'm a bit confused by this, though. I guess this means that the blend shape actually scales the bone up by 25.6% from 50 to 100, and down by 25.6% from 50 to 0? Couldn't the avatar_lad be set up like this, then?

    <param
        id="30689"
        group="1"
        name="EyeBone_Big_Eyes"
        value_min="-1"
        value_max="1">
      <param_skeleton>
        <bone name="mEyeLeft" scale="0.2494 0.256 0.256" offset="0 0 0" />
        <bone name="mEyeRight" scale="0.2494 0.256 0.256" offset="0 0 0" />
        <bone name="mFaceEyeAltLeft" scale="0.2494 0.256 0.256" offset="0 0 0" />
        <bone name="mFaceEyeAltRight" scale="0.2494 0.256 0.256" offset="0 0 0" />
      </param_skeleton>
    </param>

    That would be nice, since we could leave the wiki model with normal 1,1,1 scaled eye bones if this worked, but either the default eyes are not actually scaling on the blend shapes in the Bento viewer, or the blend shape exactly matches linear scaling from 0 to 100. If that's the case, though, how did you arrive at the numbers you chose for your eye sliders with the center point at 64? I was assuming you were going by the actual blend shape models (which I haven't looked at). If you were looking at the blend shape models, that would mean you discovered that the smaller blend shape was 67.2% of the base, and the larger was 118.4% of the base shape. This would mean that with the bone being scaled to 93% when the slider is set to 50, its size increases by +27.5% when it's increased to 100, and -27.5% when reduced to 0 (weird that I'm getting slightly different numbers than the previous calculation (1.56/2*0.328)).

    edit: I noticed afterwards that 25.6% is about 93% of 27.5%, so I forgot to divide this out somewhere

    Apologies if I'm misunderstanding something here. I've been punching numbers into a calculator for quite a while now and I've had a number of contradictory results while working from different directions, so I can't say my level of certainty is all that high.

    If the eyes are in fact still being manipulated by the blend shapes and not the bones (and I suspect they are), then changing the avatar_lad to anything other than linear scaling would result in the eyes not scaling at the same rate as the blend shapes, so I guess setting additional targets in the lad wouldn't be the correct answer. If you were leaving the lad alone, the wiki skeleton eye bones would need to be set to 92.784%, which is the true scale at the 50 point in the slider right now, but since the scaling appears to be linear, I think the numbers could just be changed so that 50 is scaled at <1,1,1>.
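    Here is the arithmetic I'm working from, written out as a small Python sketch so the assumptions are explicit. The linear mapping of the slider onto [value_min, value_max] and the idea that the scale attribute acts as a simple per-unit delta on the bone scale are my assumptions, not confirmed viewer behavior; the 0.328 delta is just the value that reproduces the 67.2% / 92.784% / 118.4% figures above, while 0.256 is what the lad actually lists.

        value_min, value_max = -1.0, 0.56   # from the current EyeBone_Big_Eyes param

        def slider_to_value(s):
            # slider 0-100 mapped linearly onto [value_min, value_max]
            return value_min + (s / 100.0) * (value_max - value_min)

        def bone_scale(s, delta):
            # assumed: final bone scale = 1 + parameter value * per-axis delta
            return 1.0 + slider_to_value(s) * delta

        # neutral point (parameter value 0) lands at slider ~64, matching the in-world test
        neutral_slider = 100.0 * (0.0 - value_min) / (value_max - value_min)
        print(round(neutral_slider, 1))   # 64.1

        for delta in (0.256, 0.328):
            print(delta, [round(bone_scale(s, delta), 4) for s in (0, 50, 100)])
        # 0.256 -> [0.744, 0.9437, 1.1434]
        # 0.328 -> [0.672, 0.9278, 1.1837]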

  9. There appears to be a problem concerning the neutral position for the eye and alt eye bones. When the eye size slider is set at 50, the eye bones are scaled to approximately 93%, but the eyelid bones are all scaled to about 100%. If the eye size slider is set to 64, the eye bones are scaled to 100%, but the eyelids are scaled to <"1.002000 1.086000 1.198000">. This lack of a neutral position for all involved bones results in a discrepancy between what you see in modeling tools and what you see after importing. Perhaps I could scale the eye bones to ~93% in Maya to match what's in-world?

    The problem seems to derive from these two definitions in the avatar_lad, one of which has asymmetrical min/max values while the other has a symmetrical pair.

        <param id="30689" group="1" name="EyeBone_Big_Eyes" value_min="-1" value_max="0.56">
        <param id="689" group="1" wearable="shape" name="EyeBone_Big_Eyes" edit_group="shape_eyes" label_min="Eyes Back" label_max="Eyes Forward" value_min="-1" value_max="1">

    What is the thought process behind the min/max values in the first code snippet? Is this just a place where 93% seemed close enough to the middle and the emphasis was set on matching the blend shape size at the 0 and 100 ends of the slider, or am I misunderstanding something here?
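    For reference, if the slider maps linearly onto [value_min, value_max], the neutral point (parameter value 0) of the first definition falls at

        s_neutral = 100 * (0 - value_min) / (value_max - value_min) = 100 * 1 / 1.56 ≈ 64.1

    which matches the eye bones only reaching 100% scale with the slider at 64 rather than 50. (This assumes a linear slider-to-value mapping; I haven't confirmed that in the viewer code.)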

  10. We've been working on our heads a bit and we're now seeing some strangeness in how the bones are positioned in the in-world skeleton vs. what is defined in the avatar_skeleton.xml file and the 7/18 .dae export.

    

    This is using the default shape, no animations, no custom joint positions.

    The left and right center eyebrow bones are clearly positioned higher than they should be. I went and verified that the avatar_skeleton.xml values match the values we have in Maya. I don't know what's causing these bones to be out of place, or whether they're the only ones out of place (position errors in other bones may just be less noticeable).

    I was going to put this into Teager's JIRA, but I was not able to edit the JIRA.

    Edit: JadenArt was doing some further testing and discovered some more details.

    She built a shape that was based on the SL Restpose shape indicated in Avastar, then exported her avatar .xml and compared the values to the ones in the avatar_skeleton.xml. 

    avatar_skeleton.xml (edited for relevant values):

        name="mFaceEyebrowCenterLeft" pos="0.070 0.043 0.056" scale="1.00 1.00 1.00"
        name="mFaceEyebrowInnerLeft" pos="0.075 0.022 0.051" scale="1.00 1.00 1.00"

    vs.

    jadenart.xml:

        name="mFaceEyebrowCenterLeft" position="0.070000 0.042869 0.063257" scale="1.002000 1.002000 1.257000"
        name="mFaceEyebrowInnerLeft" position="0.075000 0.021325 0.049980" scale="1.002000 1.002000 1.257000"

    Obviously the export values are suspiciously off, and I'm not sure why the Z value for scale is 1.25, but the main thing to notice is that the Z value for position is 0.007 high on the CenterLeft bone, but 0.001 low on the InnerLeft bone, while being approximately correct for the X and Y values.

    • Like 1
  11. It's beyond my level of expertise to opine strongly on the question of .fbx vs. .dae, honestly. I did find a discussion of the topic at http://forum.unity3d.com/threads/pros-and-cons-of-different-3d-model-formats-fbx-dae-ect.344009/ (the top Google result for "fbx vs collada") which seems to support my general impression of the issue.

    My original idea was that the .fbx exporter in Blender worked pretty well for crossing platforms, but if you're already neck deep in doing custom .dae exports perhaps effort is better invested there. I really couldn't say.

  12. The bones shown in this list follow the joints in the outliner in reverse order, so the top layer contains mHindLimb4right, and you can see the second selected, and so on. 

    The layer ending in 03228 (with no _ncl1_x extension) corresponds to the mGroin bone, keeping with the pattern.

    As far as these layers go, a bone can only be part of one layer.

    I suspect that in the long run, the .dae file in the wiki will not be used for anything other than importing the skeleton to various programs to bind custom meshes to, and possibly copy weights from. Other than just a basic test, I don't see why anyone would be importing this .dae directly to SL without doing some sort of editing on it first. From that angle, perhaps a .fbx rig would be the more useful file to have in the wiki for the purpose of getting started in whatever 3d program.

  13. Hi Gaia, thanks for taking a look at these Maya import problems. The 7/18 file is much better. I still receive the errors regarding the bind pose and transforms, but the errors concerning the blender profile and the one layer per bone are gone.

    For curiosity's sake, the 7/15 file created bone layers that looked like this:

    

    To reiterate, though, this is not a problem with your 7/18 file. From what I can tell, the errors during the import of the 7/18 file are inconsequential.

     


  14. Matrice Laville wrote:

    Mel Vanbeeck wrote:

    That file is clearly very messed up.

    Can you give more details about where the file is clearly messed up?

    The Jaw Bone is missing in the files that we created on 12 july because the collada exporter dropped all bones which are not used for weighting. This is an intentional optimization to only provide the used bones. This optimization was discussed in the bento meetings a couple of months ago (allow partial rigs). I do not see where the collada files are messed up.

    The files from 15 july have been exported with the "include all animation bones" option and the jaw bone is again available in the dae (although it is not used for weighting the models)

    The incomplete skeleton is the main issue I was referring to. If I try to import that into Maya it's completely useless since the missing jaw bone causes every child to become parented to mFaceRoot instead, so all the correct bone positions become a mystery. It was of course also missing all the tail, wing, and hind leg bones.

    There is another bit of fun when importing to Maya which is also true of the 15 July export:

    The following parent and/or ancestor node(s) is/are not part of the BindPose definition.
        mHead
        mNeck
        mChest
        mTorso
        mPelvis

    as well as 

    While reading or writing a file the following notifications have been raised.
        Warning: The transform of node "mPelvis" is not compatible with FBX, so it is baked into TRS.
        Warning: The unsupported technique element with profile "blender" in node element "mPelvis"
        Warning: The transform of node "mTorso" is not compatible with FBX, so it is baked into TRS.
        Warning: The unsupported technique element with profile "blender" in node element "mTorso"
        Warning: The transform of node "mChest" is not compatible with FBX, so it is baked into TRS.
        Warning: The unsupported technique element with profile "blender" in node element "mChest"
        Warning: The transform of node "mNeck" is not compatible with FBX, so it is baked into TRS.

    etc. going on for every bone in the skeleton. This also puts every bone into its own layer, which takes a while to clean up.

  15. Unfortunately it seems that in many cases the priorities I would put on the changes are somewhat inversely proportional to the amount of work you estimate is needed to resolve them.

    Showstoppers - likely to cause major customer backlash and customer service costs

    1. Jaw Angle: moving jaw bone destroys mouth animations, and customers are extremely likely to want to play with this slider if a head is weight painted in such a way that it actually changes the jaw angle. If a head is weight painted so this slider has no effect, customers may play with this slider then leave it in a bad position without realizing they've messed their animations up. Customers will likely have no idea why their mouth suddenly looks derpy when animated unless they read notecards (less than 4.33333repeating% of residents have ever read more than the first 7 words of a notecard).
    2. Reparent tongue to lower teeth (important to allow the tongue to be animated with translations)

    Major problems - significantly harms rotation animation quality

    1. Lip bone positions: the current positions cause the lips to pull away from the teeth, or push straight into them when rotated side-to-side. This could be downgraded to a minor problem if BUG-20049 could be finished by launch, allowing designers to fix this on their own without disabling all related sliders, although it would be good to know in advance how tweaks to these bone positions would interact with lip and mouth shape sliders.
    2. Upper Eyelid Fold: Vertical change to the upper eyelid pivot is likely to cause the eyelid to clip through the eyeball when the eyelid is raised, or pull away from it when blinking. (This is somewhat mitigated by the fact that users are relatively unlikely to play with this slider)

    Minor problems - large changes to these sliders likely to cause animations to look bad/unreadable

    1. Outer eye corner (head builders are likely to weight their mesh so that the bone properly affects the outer edge of the eyebrow despite the fact that the demo mesh weights this to affect the eyelids as well, but the existence of this slider will probably waste many hours of various people's lives as they figure out what happened here.)
    2. lip fullness
    3. lip thickness
    4. lip ratio (This one actually didn't seem as disruptive as the prior two, but when combined with the above could present similar results. It's another slider that customers are unlikely to move far.)
    5. mouth corners

    I would not consider the use case for sliders as "static poses" to be significant. All head designers that take the time required to build a Bento head will definitely be building out some pretty comprehensive animation sets to handle facial expressions, including plenty of static poses.

    I would also add that there are a number of sliders which are just about entirely useless and not worth more than a few minutes of effort to preserve, since they are essentially sliders for making some sort of unsightly deformation. Face Shear, Shift Mouth, and Crooked Nose can all be accomplished with an animation built to make you ugly, and while you can't execute other animations on top of that and keep the static ugly deformation, very few (almost zero) users are likely to actually want to wear this type of face around for daily use. Eye Pop is a special case since it can't really be replicated with animations, but I'd venture a guess that fewer than 1 in 1000 residents ever move the eye pop slider.

    I would also say that BUG-20027, which seems to be getting very little attention, should be prioritized fairly highly, as it would take the highly appealing translation animations out of the realm of content-breaking features that have to be disabled and make them highly useful and broadly usable.

  16. Oh, and the outer eye corner is affecting an unrelated bone (mFaceEyebrowOuterX). If there isn't a pair of bones for this slider, the slider just needs to be disabled. I understand that from a certain perspective it is useful to be able to affect your eyebrow shape, but that's not the label on the slider, and the confusion that is likely to result among both designers and customers is not worth the feature's benefit.

  17. I mentioned Lip Fullness and Lip Thickness are having some problems due to the varying distance between the pivot and its vertices.

    

    I have the upper lip rotated up slightly to show the teeth, and these sliders cause that rotation to get amplified beyond the original setting. Some amplification is okay, but these sliders go well past that (and well beyond what can be corrected through different weighting).

    I would suggest trimming the effective range of lip fullness to max out about where it sits when the slider is set to 75-80. Lip thickness turns into a problem by the time it hits 70. The values below 50 do not appear to be problematic.

    edit: also worth noting is that the longer these bones are to begin with, the less this problem will manifest, so there is some balancing to be done with regard to how far back in the mouth the bones are placed. The optimal position is probably a touch further back than the center point of the arc of the front teeth. This would also diminish the pivot errors caused by the inevitable mouth-width translations of these bones. Upon further consideration, this logic does not hold up: since the root of the problem is the scale transformation increasing the distance between the pivot and the vertices, any change to the initial bone distance would just get scaled proportionally as well, making for pretty much the exact same result. In light of this, the only way to mitigate it is limiting the slider range, as I originally suggested.
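    A quick sanity check of that last point with made-up numbers: take a lip vertex at offset (d, h) from the pivot, where d is the distance back to the bone and h is the lift produced by my upper-lip rotation. A uniform scale S about the pivot sends that offset to (S*d, S*h), so the lift becomes S*h no matter what d is. Whether the pivot sits 1 cm or 3 cm behind the lip, a 1.3x scale still turns a 2 mm lift into 2.6 mm, which is why limiting the slider range is the only mitigation.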

    I'll post this now so you have this info but I'll keep looking to see if there's anything else I missed.

  18. Another adjustment that occurred to me, if the lower teeth bones can't be removed, is that the tongue should probably be parented to the lower teeth rather than the jaw bone, since the lower teeth are the primary structure the tongue has to interact with. If the lower teeth move relative to the jaw to correct mouth position for certain sliders, the tongue has a moving target to deal with as it is animated, and since the tongue will likely use translation animations even in heads built to be compatible with sliders, it could wind up in the wrong place. Parented to the teeth, though, it should always have the right starting point for its translations.

  19. As a Maya user, it was fairly difficult to get going in this project. When I first got involved, things weren't uploading, etc., and the Maya file posted was not up to date. The June 2nd wiki update was the first time I was able to actually upload anything and get a good look at the status of the project. Yes, I could have bought Avastar and started picking up Blender much earlier, but not being part of the original project, it took some time to understand what the cost of not getting involved would be.

    I'll go ahead and eat my words a bit. Here's an example where they're using a facial rig very similar to the Snappers rig. They had 98 joints in the face. They also use wrinkle maps and 14 corrective blend shapes along with a number of other techniques. So yes, 10x the bone budget was an exaggeration. The rest is on-target. As I said, though, I'd love it if SL could handle this sort of rig, but I understand why it's outside the scope of Bento.

    By the way, when I pointed out that the people who built this rig were not rigging experts, I was not being condescending, nor was I claiming to be a rigging expert myself, since in my opinion a rigging expert is someone who can name every muscle and bone in the human body, where they attach, and what they do, and who has spent years building advanced character rigs from the ground up. I was merely pointing out that we weren't looking at something beyond critique, since there were many highly questionable choices for bone placement.

  20. Head Size - Essential
    Head Stretch - Wanted
    Head Shape - Wanted
    Head Length - Wanted
    Face Shear - Not useful; I don't think I've ever seen someone use this other than as a brief joke
    Crooked Nose - Wanted
    Lip width - Wanted
    Lip Fullness - Wanted
    Lip Ratio - Nice to have
    Mouth Position - Wanted
    Mouth Corner - Wanted
    Lip Cleft Depth - Nice to have
    Shift Mouth - Would not sacrifice anything for it (if someone wants to look like this they can do it with an animation instead)
    Jaw Angle - Wanted
    Jaw Jut - Nice to have, but again this is such a rare one that a looped animation would probably suffice.

    With the facts presented as they are, I see that the purpose of the teeth bones is not trivial. In general I'd sacrifice the jaw angle slider before I would sacrifice the long list of sliders which affect mouth position.

    (Hey Vir, why not just throw us a bone? Most of this trouble stems from not having a bone for the jaw angle slider.)

  21. Actually, SL's rig can't work just like the Snappers rig, because that rig has about 10x the budget for bones, not to mention the dynamic wrinkle maps.

    Snappers Facial Rig is the Facial rigging tool for Autodesk Maya and also available for 3ds Max. It contains CG skin shader with multiple wrinkles maps and Rig Manager to handle selection and to create/save poses. No more information for now but you can take a look at demo video below.

    In fact it's unlikely that you'll see anything that detailed in any game today. Don't get so carried away with talking down to me that you stop making sense. With a limited bone budget, yes, you can still move the bones to the surface and use translations to your heart's content, but obviously I'm already aware of this, so your condescending tone is off-target.

    The fact that your coyote avatar was susceptible to bone scaling really tells me nothing of how those sliders will work on a human face. In many cases the translations are used to prevent scaling a bone from turning into an unsightly bulge in the affected area, like in the case of the eye size slider, where the eyeballs are moved backwards as they scale up so they don't bulge out of the face. These corrective translations are lost when you've overridden the bone position, and maybe that's fine on something like a coyote when it becomes cartoonish with enlarged features sticking out, but in most cases it won't be fine on a realistic human.
