OptimoMaximo

Resident
  • Posts

    1,809
  • Joined

  • Last visited

  • Days Won

    3

Posts posted by OptimoMaximo

  1. I'll try to clarify a few points if possible

    Perhaps "interpolation" here was taken to mean a "bezier curve with tangents" as shown in an animation graph editor. Let's scrap the word interpolation altogether, then...

    Also I made an animation tool, for Maya, that exports anim files. It also provides animation custom setup tools and all that, but let's focus on the export side.

    After the header data, which doesn't interest us much here, comes the joint data. Each entry is a joint.

    The entry is then followed by the priority, the number of keys and finally the actual data: rotation and position. Let's pretend we don't have position data. So for each SAMPLE it takes, the exporter writes the rotation followed by the timestamp it refers to, measured from the timeline start, and encodes it.

    Now the keyword here is SAMPLE. In my plug-in, I chose to sample every frame, but I optionally allow a resampling procedure that takes a sample every N frames. This reduces the file size, because the sampled frames are fewer than the raw animation's frame count, and, obviously, it reduces the "resolution" of the motion. It still works because from sample X to the next we have a time specification, so in our rotations-only example the joint will rotate from one orientation value to the next over the delta time given by the difference of the two timestamps. This job is done viewer-side, and it is linear, meaning constant speed from the start to the end of the motion.

    The sampling is what gives Avastar its average file size: it performs a sort of downsampling in order to get a somewhat more size-optimized file, but you may also notice a substantial linearization of otherwise really smoothly designed motion, exactly because it doesn't sample every single frame unless the user sets a marker on each frame they don't want skipped. In my plug-in I did the reverse: output raw data and let the user decide whether they want to resample.

    So that is how it works. The animation defines how long it takes to get from one sample to the next. Call it interpolation or not, that's what happens.
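    The sampling-and-timestamps scheme above can be sketched in a few lines. This is an illustrative toy, not the real .anim layout or my plug-in's code; the names and data structures are mine. It takes a rotation sample every N frames with its timestamp, then plays back linearly between consecutive samples, as described for the viewer.

```python
# Hypothetical sketch of the sampling scheme: store (timestamp, rotation)
# pairs every N frames, then play back linearly between samples.

def sample_rotations(rotations, fps, step=1):
    """rotations: one rotation value per raw animation frame.
    step: sample every N frames (step > 1 downsamples, shrinking the
    output but linearizing the motion between samples)."""
    samples = []
    for frame in range(0, len(rotations), step):
        timestamp = frame / fps          # seconds from timeline start
        samples.append((timestamp, rotations[frame]))
    return samples

def playback_value(samples, t):
    """Viewer-side behaviour: move at constant speed from one sample to
    the next over the delta time between their timestamps."""
    for (t0, r0), (t1, r1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return r0 + w * (r1 - r0)    # linear blend (scalar stand-in for a rotation)
    return samples[-1][1]
```

    Sampling every 4th frame of a 120 fps clip, for example, keeps roughly a quarter of the keys while the playback still covers the same time span.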

  2. 1 hour ago, Chic Aeon said:

    Keep in mind that the majority of folks are running at pretty low fps.  

    I just checked in my version of Blender with the Avastar add-on: the default is 24. I don't really like making animations; rather I prefer poses, but in the past I have uploaded lengthy ones with no issues, so I am going to "side" with @OptimoMaximo on the fps change being the issue.

    Most of the software packages that connect to a specific mocap suit output raw data at 120 fps. That is for a specific reason:

    Give the animator enough frames for cleanup (shake removal, flip fixes, smoothing transitions, etc)

    The high fps is not for "quality".

    In the final application, be it film VFX or game animation, the fps gets reduced to the film fps or to an average 30 fps, even though the playable frame rate might be higher: in the first case because the film runs at that fps, in the latter because a well-done animation job makes any fps higher than 30 useless, and no difference would ever be noticed. And guess why... to save disk space. (And note, I'm talking about the animation fps, not the runtime fps. There is a translation step in between, so the animation gets "super-sampled" on screen.)

    Also, for whoever might bring up practical examples of high-fps animations in games: I know it can be done, but

    1. Usually those are very short clips, most likely used when something mechanical is involved (like a reload animation) during gameplay (short = minimal data anyway, so just a little trade-off)

    2. Long clips at higher fps are used for "lobby" environments to show off a character (not used in gameplay, and not many in the whole game)

    • Like 1
    • Thanks 1
  3. On 7/15/2022 at 10:37 AM, Lucia Nightfire said:

    Cut down on keyframes overall.

    Cut down on keyframes for individual bones, particularly ones that do the least amount of movement in any given time period.

    Unfortunately, it's not a matter of the user's keyframes, but rather of how many frames the animation has... and that depends on the fps.

    So the best move to try is to reduce the fps from the most likely 120 fps "quality" setting to a nice 30 fps for optimization, and scale down the animation length by a factor of 0.25.

    Then it would be a good idea to split the hand animations from the body and export them to separate files.
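    The 120 → 30 fps suggestion above is simple retiming math. A minimal illustration (function names are mine): dropping from 120 to 30 fps keeps every 4th frame, so the frame count, and with it the uploaded data, scales by 30/120 = 0.25 while the clip's duration in seconds stays the same.

```python
# Illustrative retiming: keep every (src_fps / dst_fps)-th frame.

def retime(n_frames, src_fps, dst_fps):
    factor = dst_fps / src_fps                       # 30 / 120 -> 0.25
    stride = round(src_fps / dst_fps)                # keep every 4th frame
    kept = [f for f in range(n_frames) if f % stride == 0]
    return factor, kept

factor, kept = retime(480, 120, 30)
# A 480-frame (4 second) clip keeps 120 frames; it still lasts 4 seconds.
```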

    • Like 1
  4. 7 hours ago, Da5id Weatherwax said:

    I totally get the wish for it to be consistent, but...

    As a programmer who uses multiple languages I adapt to the calling sequences, the naming conventions, whether the language has "shortcut logic" and a host of other things depending on the language I'm coding in. Hell, sometimes I "switch mental contexts" based on whether I'm going to compile for a big-endian or little-endian OS! I use them all from the same developer platform (eclipse, if you must know) - How is this any different to that for a 3d modeller? You use the conventions and styles best suited to the target platform.

    I can answer that. As a studio pipeline TD: the modeler, the rigger, etc. want tools that automate processes depending on the context. As soon as something errors out, do you know what the average support ticket I get looks like? "I got an error", period. No matter how many times we point them to the report template to fill in, they don't even want to copy-paste the error message to give us a trail to reproduce the error and understand what's going on. They push the keyboard away with a "can't continue, not my problem" kind of attitude. So no, they can't do the same as you do.

  5. 1 hour ago, Coffee Pancake said:

    Says the guy selling an affordable L$ 8000 Maya plugin 

    Sure, you don't need Avastar for Blender, you can do it all with JUST Blender. But Avastar cuts the work down by an order of magnitude, and no one had to pay anyone $350 just to come to the party.

    Once upon a time 3D Studio was the de facto way of doing things. Those days are gone.

     

    Well well well... See what argumentation we bring forward... The difference is not that subtle between a plug-in needed to make things work for SL at all (otherwise Blender would NOT do what SL expects for fitmesh items) and something like my plug-in, which does something that NO software can do natively.

    So get vanilla Blender and export a working fitmesh without the use of Avastar-derived starting points, like the avatar workbench. Oh! Instant fail? Hmm, guess why...

    Now get a Maya trial version and export a rigged mesh from the provided skeletons... Oh! Instant success? Hmm, guess why...

    So there is a difference between add-ons/plug-ins, aside from the fact that there is technical knowledge behind mine and my time has to be paid back somehow. And about "affordable": yes, it is. It is a one-off payment against an 11k yearly subscription because... uuuhhh... I've got a job in RL in the RL industry as a pipeline and character TD... and I don't NEED survival income from my plug-in, but I won't just throw it out for free either.

    Once upon a time, Notepad was the de facto way to write anything. That's not the case anymore. This comparison fits best, considering the relative level between Blender, the Notepad with all its shortcomings, and Maya.

    • Haha 2
  6. On 6/27/2022 at 2:53 PM, Drayke Newall said:

    Replacing the system body mesh should be easy as pie for LL to do as it is literally a replace with the new look mesh if they are keeping all the same sliders/bones etc.

    Additionally, why would Linden Lab who actually can replace the system avatar mesh even consider using the mesh BoM system that was only introduced because content creators cannot replace the system mesh themselves? It makes no sense for LL to even consider not replacing the default mesh.

    I agree with this, it SHOULD be that easy. But it's not. The body is shipped in each viewer as a proprietary binary data dump file, one for each section (head, upper and lower body), which also contains the blendshape data for the sliders to work (shape keys as they're called in Blender, or morph targets, etc.), because the system used on mesh bodies, with collision volume bones, does not affect the avatar mesh. (From what I could observe, those joints were used to generate the blendshapes back in the day.) Then these mesh files are assembled inworld at runtime, removing the obvious seams, probably by merging a list of vertex pairs and unifying their vertex normals. So it is something very specific to the body itself, which doesn't make the replacement process as simple as drag-drop-overwrite-existing-file-click-yes. Plus all the XML configuration files related to the sliders. This is definitely something a Linden would steer clear of by at least 50 meters.
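    The seam-removal step speculated above (merging vertex pairs and unifying their normals) looks roughly like the toy code below. To be clear, the pairing list and data layout are my assumptions for illustration, not the viewer's actual code.

```python
# Toy seam weld: given pairs of duplicate vertex indices along a seam,
# snap both vertices to their midpoint and average their normals so
# lighting is continuous across the section boundary.

def weld_seam(vertices, normals, pairs):
    """pairs: list of (index_a, index_b) duplicate vertices along a seam."""
    for a, b in pairs:
        mid = [(va + vb) / 2 for va, vb in zip(vertices[a], vertices[b])]
        vertices[a] = vertices[b] = mid
        # unify normals: sum, then renormalize to unit length
        n = [na + nb for na, nb in zip(normals[a], normals[b])]
        length = sum(c * c for c in n) ** 0.5 or 1.0
        normals[a] = normals[b] = [c / length for c in n]
    return vertices, normals
```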

    • Like 2
  7. On 6/28/2022 at 1:28 AM, Bree Giffen said:

    Where’s the equality of creation here? Isn’t that  the new mission stated by Linden Lab? If you can work hard enough….and own prohibitively expensive 3d software like Maya that costs $1,785 a year you can make money in SL.

     

    On 6/28/2022 at 1:47 AM, Coffee Pancake said:

    This exactly.

    Blender is the great leveler in terms of making content, and so what if you optionally need to buy some extensions here and there, they are for the most part very affordable and entirely community developed. 

    Well, this isn't a matter of equality.

    First, Maya for indie developers with an indie license is around 350 dollars per year: easily the sum one would spend on "affordable plugins". So let's get that first thing straight.

    Second, for SL, buying Blender extensions is not optional. They are needed to pull out of the software something that also complies with the expected standard. As for Blender being the leveler, that's debatable in regard to SL content creation. For static items, it actually is. When it comes to rigged stuff, what comes out of it is the result of scripted operations needed to make basic architecture features work (bind poses in the first place, and all that derives from them... which is WAY more than you may think) that leave behind a modified mesh, not the exact original, with an absolute mess of orientation matrices. If you're interested, I can list all the hacky operations that at least one of these add-ons performs to make it work, and what the result is when imported into Maya. And before anyone comes up with "ah, but that is a Maya problem; if it works in SL it's OK, so fix your Maya": the issue is the other way around. The avatar was made in Maya, following its architecture; a well-formed Collada or FBX should import just fine right out of the box. Instead, in the best of hypotheses, it comes in with 2 transformation offsets, if not sometimes 3, a wrong bind pose pointing to a transformation matrix that is manipulated at least 2 times to compensate for the orientation faults, and the shape keys baked in if any slider value was applied from the neutral pose.
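    For context on why the "wrong bind pose" above matters: in a standard skinning pipeline a vertex is deformed as v' = W · B⁻¹ · v, where B is the joint's bind-pose matrix and W its current world transform. If an exporter bakes extra offsets into B, every downstream tool expecting a clean bind pose sees a displaced mesh. A pure-Python toy (not any tool's real code) with translation-only matrices:

```python
# Minimal 4x4 matrix helpers illustrating the bind-pose skinning formula.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def apply(m, v):
    x, y, z = v
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]
                 for i in range(3))

# Joint bound at (0, 1, 0); the inverse bind matrix moves vertices into
# joint space, the joint's current world transform moves them back out.
inv_bind = translation(0, -1, 0)
world = translation(0, 1, 0.5)        # joint has moved 0.5 along Z
skin = mat_mul(world, inv_bind)
# A vertex at the joint's bind position follows the joint exactly:
# apply(skin, (0, 1, 0)) -> (0, 1, 0.5)
```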

    So a devkit that comes as a Blender scene is definitely a big fat giant NO, at least until these issues are actually LEVELED, to use the same term.

    • Like 1
    • Haha 2
  8. 21 hours ago, Love Zhaoying said:

    Do detractors refer to it as "blunder"?

    Well: people detract from Maya, which is, first, the industry standard and, second, what was used to make the original avatars (and, consequently, the architecture type the avatar system relies on). Since the problem is to make everyone happy with a file format that doesn't pose different issues to different use cases, I'd argue that a Blender devkit makes for the worst of all use cases: its internal architecture defies the commonly established standards by using its own (rather illogical, ill-formed and incomplete) one, which makes the purchase of an add-on a necessary requirement to get something resembling a compatible asset, with all the issues that said add-ons entail under the hood.

    So yeah, you know what? I advocate for a real devkit: a nice Maya scene. If some time is left over, an FBX from that same Maya scene, so that the original working data is preserved as it should be. Even if a USD archive were provided, it would leave the SL-constraining data out of what vanilla Blender can munch on anyway 😂

    • Thanks 1
    • Haha 1
  9. On 6/19/2022 at 4:03 PM, Jenna Huntsman said:

    That's okay, as the individual channels are uploaded as textures, so the alpha channel is preserved when uploaded but unused (/ discarded by the shader) for PBR materials - so it's information the bakes service can use in order to be able to layer material textures on top of each other. (hypothetically)

    This also means that, for a 4-channel PBR material, the upload cost would be 40 L$ (under current pricing)

    That leaves out the detail of how the bake service will treat those channels discarded by the shader, because in the end those layers will be applied to the body and therefore to the shader.

    I would make the feature request to LL clear and detailed, so that this detail doesn't get lost in their WIP as a secondary one that, considering how they work, would inevitably fall into the "oh, we forgot about that, but now we can't change things to make it work" category.

  10. 15 hours ago, Jenna Huntsman said:

    Another thing, leaving those alpha channels unused is actually a good thing -- It makes implementing BoM for Materials muuuuch easier as now those materials textures can contain transparency information, which means they can be layered. (Note that this reasoning is from my headcanon, not anything official, but I'm advocating for continuing on this route for this reason)

    From the table, it doesn't seem that the alpha component is actually taken into account. Seeing the base color marked as RGBA and the others just as RGB, apparently the alpha channel would be discarded altogether...

  11. 14 hours ago, Quarrel Kukulcan said:

    That's not an option if I want to produce objects worn at the same time as Avastar-exported rigged mesh from the same dev kit. :(

    Well, at this point your option is to use the devkit while setting Avastar to use the neutral pose, without being aware of the corresponding inworld slider values. Most of the time that's enough to produce meshes that work for the majority of use cases.

  12. 5 hours ago, ChinRey said:

    Am I the only one to think there must be a better way to do this? :P

    There are better ways to accomplish this type of character feature, but what is described above is the current working system, and no other way is supported.

    If you're referring to how Avastar handles things to achieve compatibility with how Maya works in this regard, well, then you should go complain to the Blender Foundation 😉

  13. On 6/16/2022 at 10:24 PM, Quarrel Kukulcan said:

    I browsed through that once. Time to look again. It got...messy.

    Are you saying I need to reposition the c bones to the offsets they'd have if their parent m bones were transformed?

    Yep. Not only do the parent joint and the collision joints scale according to the parameters defined by the slider, they also move into different positions, and there are different start/end positions between the male and female avatars to begin with.

    The default shape is not neutral, and that is the limitation of the slider system. Since Blender is unable to use bind poses, the currently implemented shape translation from joint scales on the skeleton makes it impossible to determine the real neutral, which assumes all scales for the m joints to be 1,1,1.

    If you don't mind doing some math, the mapping data is available in the avatar_lad.xml file, if I remember correctly. From that data, you can retrieve which slider value corresponds to scale 1,1,1 on the m joints, and subsequently get the c joint positions and scales right for a really neutral shape.

    On a side note, Avastar provides a toggle between the default and the neutral shape, but from what I remember the slider values do not update to reflect that and let you know what number to apply on each slider.
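    The avatar_lad.xml lookup suggested above amounts to inverting a linear map. Assuming (as the file's min/max attributes suggest) that each slider maps its value range linearly onto a parameter range, the "neutral" slider value is wherever the mapped joint scale equals 1.0. The numbers below are made up for illustration:

```python
# Hypothetical slider -> scale mapping and its inverse.

def slider_to_scale(value, value_min, value_max, scale_min, scale_max):
    t = (value - value_min) / (value_max - value_min)
    return scale_min + t * (scale_max - scale_min)

def neutral_slider_value(value_min, value_max, scale_min, scale_max):
    """Invert the linear map to find the slider value giving scale 1.0."""
    t = (1.0 - scale_min) / (scale_max - scale_min)
    return value_min + t * (value_max - value_min)

v = neutral_slider_value(0, 100, 0.8, 1.2)   # midpoint for this made-up range
```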

    • Thanks 1
  15. On 5/30/2022 at 5:42 PM, Jenna Huntsman said:

    My theory is that the uploader has 2 different behaviours depending on the skeleton used - if using the Bento skeleton, then it seems to use a much more standard and logical bone representation wherein the head and tails of the bones are located at each end of the bone. On the other hand, the pre-Bento skeleton places the head of the bone at the midpoint along the bone, with the tail facing +Y. See below screenshot, which is the .dae provided from the Github.

    The "standard and logical representation" is a joint, not a bone. The head and tail concept is proprietary to Blender only.

    So, the difference between this representation

    On 5/30/2022 at 5:42 PM, Jenna Huntsman said:

    [Image: Ruth2weirdSkeleton.png]

    And this

    On 5/30/2022 at 4:20 PM, Paulsian said:

    [Image: Ruth2v4Dev Bones.jpg]

    Is that the latter has been readjusted by Blender.

    The first image represents the joint orientation, which for SL needs to be X-axis forward. If you enable the bone roll axis display, you can see that. On the other hand, Blender's bone representation is fixed: the joint orient is established to always be the Y axis along the bone length, and therefore the other 2 axes get adjusted by the so-called bone roll.

    The issue is that Blender has no internal representation for custom joint orientations, nor for a bind pose, and most of the work that Avastar does (as well as other add-ons that address export or import for SL) is translation work to comply with the expected standards.
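    The convention difference described above can be made concrete. A toy construction (my own, purely illustrative) of an SL-style joint frame with the X axis pointing down the bone, which is what Blender's fixed Y-along-bone representation plus bone roll has to be translated into:

```python
# Build an orthonormal joint frame with X along the bone direction.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def joint_orient_x_forward(bone_dir, up=(0.0, 0.0, 1.0)):
    """X along the bone, Z as close to world-up as possible,
    Y completing the right-handed frame."""
    x = normalize(bone_dir)
    y = normalize(cross(up, x))
    z = cross(x, y)
    return x, y, z

x, y, z = joint_orient_x_forward((1.0, 0.0, 0.0))
# For a bone pointing down +X: x=(1,0,0), y=(0,1,0), z=(0,0,1)
```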

  16. 2 hours ago, Jenna Huntsman said:

    This behaviour is likely caused by the exported file having the wrong axis settings.

    For reference, SL expects the exported file to have the following axis settings:

    Y Forward

    Z Up

    That's the animation setup for BVH export, along with the linear units set to inches (and negative Y forward).

    For rigging, the orientation should be

    Z up

    Positive X forward

    And the joint position vectors from the viewer's skeleton definition set to use your preferred linear unit. If the numbers match, the unit you use doesn't matter.
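    As a rough illustration of the two setups above: a joint position authored in the rigging setup (meters, +X forward, Z up) can be rewritten for the BVH convention (inches, negative Y forward, Z up) with a unit scale and an axis remap. The exact remapping below is my assumption for illustration only; verify it against your exporter before relying on it.

```python
# Hypothetical axis/unit remap from the rigging setup to the BVH setup.

METERS_TO_INCHES = 39.3701

def to_bvh(pos_m):
    """pos_m: (x, y, z) in meters, +X forward / Z up (rigging setup).
    Returns inches with forward flipped from +X to -Y; Z stays up."""
    x, y, z = pos_m
    return (y * METERS_TO_INCHES,
            -x * METERS_TO_INCHES,
            z * METERS_TO_INCHES)
```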

    • Thanks 1
  17. 15 hours ago, Beq Janus said:

    In the same sense that LL's choice is their own, Unity and UE's proprietary choices are not some kind of standard either, just how those work

    Well, that's not entirely accurate. You see, what you're showing as an example comes from another proprietary software system, not from another game engine. The numbering system where 0 is assigned to the highest level of detail has been in place since the very inception of game engines, to ease the selection of a model based on camera distance and, later, on the number of pixels an asset occupies on screen. It's been like this since the very first game engine that wasn't simply a raycast-based texture projector, like Doom's. The game industry has always had THIS standard. This is the reason why all the game engines I've worked with in my journey of 3D content creation, from the ancient Chrome Engine, Bethesda's Creation Engine, Godot and Unreal (from the time of the very first SDK) to Unity, unanimously use this convention. Then yeah, everyone is free to set their own standards if they so wish. It is just not compliant with the well-established ones.
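    The convention described above is easy to sketch: LOD 0 is the full-detail model, and the index grows as the camera moves away (or as the asset covers fewer pixels on screen). The thresholds below are made up for illustration:

```python
# Toy LOD selection: index 0 = highest detail, higher index = lower detail.

def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return 0 (full detail) near the camera, higher indices further away."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)       # lowest detail beyond the last threshold

# select_lod(5.0) -> 0, select_lod(50.0) -> 2, select_lod(200.0) -> 3
```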

    • Like 1
    • Thanks 2
    Yes, I totally agree. But I also remember, at the time of the mesh implementation, when I tried to make this point, the LL staff's halfway denigratory/condescending attitude, saying "we hear that, but our method is more logical", under the assumption that higher number = higher number of vertices 🤦‍♂️

    I stopped trying to talk to LL long ago. They don't listen; they go ahead and leave things half-implemented. See BoM without materials. See Animesh without attachment points and shape support.

    • Like 1
    • Thanks 1
  19. 5 hours ago, Jenna Huntsman said:

    I've had a little play around with this, but I can't get it to successfully convert any of the animations I've got - I'll upload one of them and DM you a link, but this is the output that I see:

    ./bvh2anim bentoReferenceWtransforms.bvh out.anim
    Animation priority: 4
    Does the animation loop? (y or n) n
    ease_in amount: 0.3
    ease_out amount: 0.3
    nFrames: 1
    Frame time: 0.041667
    Segmentation fault (core dumped)
    

     

    Most likely the error is caused by the 1-frame specification. In an SL-compatible BVH, the first frame is reserved for the T-pose, then the animation begins. So in this case I suspect that specifying 1 frame causes the error.
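    A defensive check matching that diagnosis: since frame 0 is the reserved T-pose, a BVH reporting only 1 frame contains no actual animation, and a converter indexing past frame 0 can crash exactly as shown in the output above. A hypothetical guard such a tool could apply (not the quoted tool's actual code):

```python
# Guard against BVH files with no animation data after the T-pose frame.

def check_bvh_frames(n_frames):
    if n_frames < 2:
        raise ValueError(
            "BVH needs at least 2 frames: frame 0 is the reserved T-pose, "
            "animation data starts at frame 1")
    return n_frames - 1          # number of real animation frames
```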

  20. 5 hours ago, FridayAfternoon said:

    Ability to set individual joint priorities is of high interest to me. Is it fair to say the only tool that will do that is animhacker, or are there others?

    big question: is the joint priority set per joint for the entire animation, or can it change from one key frame to another? I have a case where I’d like certain bones in the first half of the animation to be one priority, and a different priority in the second half. 
     

     

    My exporter for Maya supports per-joint priority, as does Avastar. The way to make it work is just not in plain sight within Avastar, though, as opposed to my plug-in, which has a convenience interface for that.

    Anyway, given how the animation is serialized, a per-frame, per-joint animation priority is simply not possible.
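    A simplified view of why: in the .anim layout each joint entry carries a single priority field, written once before that joint's keyframes, so priority cannot vary over time. The field names, types and ordering below are a deliberate simplification for illustration, not an exact spec of the format:

```python
# Simplified joint-entry serialization: one priority per joint, then keys.

import struct

def write_joint_entry(name, priority, rot_keys):
    """rot_keys: list of (time, x, y, z) rotation keys for this joint."""
    data = name.encode() + b"\x00"             # null-terminated joint name
    data += struct.pack("<i", priority)        # single priority per joint
    data += struct.pack("<i", len(rot_keys))   # rotation key count
    for t, x, y, z in rot_keys:                # keys carry no priority field
        data += struct.pack("<4f", t, x, y, z)
    return data
```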

    • Thanks 1
  21. 13 hours ago, Jenna Huntsman said:

    In my head, based on what I've heard (that changing priorities would be an LSL-specific feature), I'd imagine that it'd be a minor modification to the existing system wherein the viewer is told to ignore all joint priorities within the file, and instead play all joints at the given priority level. Essentially that would break individual joint priorities, but as you say, the usage of individual joint priorities is fairly limited, so it would most likely be seen as an okay compromise.

    An alternate way to approach that might be the ability to specify the priorities through a parameter value, for example, using a constant called "JOINT_ALL" would target all bones, but you could also specify JOINT_ALL and another joint, for example, JOINT_MNECK and the animation would play all the joints (except the neck joint) at priority A, and play the neck joint at priority B. An example of that might be as follows:

    llStartAnimationSetParams("myAnimName",[JOINT_PRIORITY,JOINT_ALL,3,JOINT_PRIORITY,JOINT_MNECK,1]);

    The above example would function as mentioned above, animation "myAnimName" would play all joints (except the neck joint) at Pri 3, but play the neck joint at Pri 1.

    Another benefit to doing an approach like that might be the ability to specify other parameters about an animation, e.g. Ease-in time, Ease-out time, Loop

    A very impractical method, in my opinion.

    First, it puts it out of the grasp of the majority of end users, who usually aren't very script-inclined.

    Second, script communication with the viewer currently isn't supported for this kind of operation (scripts changing viewer behavior).

     
