Everything posted by OptimoMaximo

  1. I'll try to clarify a few points if possible. Perhaps "interpolation" here was taken to mean a Bezier curve with tangents, like the ones shown in an animation graph editor. Let's scrap the word interpolation altogether, then.

     I also made an animation tool for Maya that exports anim files. It provides custom animation setup tools and all that, but let's focus on the export side. After the header data, which we aren't much interested in here, comes the joint data. Each entry is a joint, followed by its priority, its number of keys, and finally the actual data: rotation and position. Let's pretend we don't have position data. For each SAMPLE it takes, the exporter writes the rotation followed by the timestamp it refers to within the timeline's start-end range, encoding both.

     Now, the keyword here is SAMPLE. In my plug-in I chose to sample every frame, but I optionally allow a resampling procedure that takes a sample every N frames. This reduces the file size, since the sampled frames are fewer than the raw animation's frame count, and, obviously, it reduces the "resolution" of the motion. It still works because each sample carries a time specification: in our rotations-only example, the joint rotates from one orientation value to the next over the delta time given by the difference between the two timestamps. This job is done viewer-side, and it is linear, meaning constant speed from the start to the end of the motion.

     The sampling is what makes Avastar output an average file size: it performs a sort of downsampling to produce a somewhat more size-optimized file, but you may also notice a substantial linearization of otherwise smoothly designed motion, exactly because it doesn't sample every single frame unless the user sets a marker on each frame they don't want skipped. In my plug-in I did the reverse: output the raw data and let the user decide whether to resample. So that is how it works: the animation defines how long it takes to get from one sample to the next. Call it interpolation or not, that's what happens.
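     To make the sampling idea concrete, here is a minimal Python sketch of the export loop described above. It is a sketch under stated assumptions, not my plug-in's actual code: the real .anim format quantizes times and rotations to 16-bit integers, and the `joints`, `sample_rotation` and `out` objects stand in for whatever the host application provides.

         import struct

         def export_joint_tracks(joints, start, end, step, fps, sample_rotation, out):
             # Simplified layout: per joint, write priority, key count, then
             # (time, rotation) keys; plain floats here instead of the format's
             # quantized 16-bit values, to keep the sampling idea visible.
             frames = list(range(start, end + 1, step))  # sample every `step` frames
             for joint in joints:
                 out.write(struct.pack("<i", joint.priority))  # per-joint priority
                 out.write(struct.pack("<i", len(frames)))     # number of rotation keys
                 for frame in frames:
                     t = (frame - start) / fps                 # timestamp from clip start
                     rx, ry, rz = sample_rotation(joint, frame)  # host app supplies this
                     out.write(struct.pack("<4f", t, rx, ry, rz))

     With step=1 this is the raw per-frame export; step=N is the resampling path, trading motion resolution for file size.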
  2. Most of the software packages that connect to a specific mocap suit output raw data at 120 fps, and for a specific reason: to give the animator enough frames for cleanup (shake removal, flip fixes, smoothing transitions, etc.). The high fps is not for "quality". In the final application, be it movie VFX or game animation, the fps gets reduced to the film's frame rate or to an average 30 fps, even though the playable frame rate might be higher: in the first case because the film runs at that fps, in the latter because a well-done animation job makes any fps higher than 30 useless, and no difference would ever be noticed. And guess why... to save on disk space. (Note, I'm talking about the animation fps, not the runtime fps. There is a translation step in between, so the animation gets "super-sampled" on screen.) Also, for whoever might bring up practical examples of high-fps animations in games: I know it's possible, but 1. usually those are very short clips, most likely used when something mechanical is involved (like a reload animation) during gameplay (short = minimal data anyway, so just a small trade-off); 2. long clips at higher fps are used for "lobby" environments to show off a character (not used in gameplay, and not many in the whole game).
  3. For optimization. Otherwise an animation file becomes the usual dump of excessive data. Animations already delay the start of playback when first downloaded; imagine what happens with even bigger file sizes.
  4. Unfortunately, it's not a matter of the user's keyframes, but of how many frames the animation has... and that depends on the fps. So the best move to try is to reduce the fps from the most likely 120 fps ("for quality") to a nice 30 fps for optimization, and scale the animation length down by a factor of 0.25 so the clip keeps the same duration in seconds (see the sketch below). Then it would be a good idea to split the hand animations from the body and export them to separate files.
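     A quick sketch of the retiming arithmetic, in Python; the keyframe list is hypothetical, just frame numbers:

         def retime(key_frames, src_fps=120, dst_fps=30):
             # Scaling frame numbers by dst_fps/src_fps (0.25 here) quarters
             # the frame count while keeping the same length in seconds.
             factor = dst_fps / src_fps
             return [round(f * factor) for f in key_frames]

         # 4 seconds of motion: frame 480 at 120 fps lands on frame 120 at 30 fps
         print(retime([0, 120, 480]))  # -> [0, 30, 120]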
  5. I can answer that. As a studio pipeline TD: the modeler, the rigger, etc. all want tools that automate processes depending on the context. As soon as something errors out, do you know what the average support ticket I get says? "I got an error", period. No matter how many times we point to the report template to fill in, they won't even copy-paste the error message to give us a trail for reproducing the error and understanding what's going on. They push the keyboard forward with a "can't continue, not my problem" kind of attitude. So no, they can't do the same as you do.
  6. Well well well... see what argumentation we bring forward... The difference is not that subtle between a plug-in needed to make things work for SL at all (without it, Blender would NOT produce what SL expects for fitmesh items) and something like my plug-in, which does something that NO software can do natively. So get vanilla Blender and export a working fitmesh without any Avastar-derived starting point, like the avatar workbench. Oh! Instant fail? Hmm, guess why... Now get a Maya trial version and export a rigged mesh from the provided skeletons... Oh! Instant success? Hmm, guess why... So there is a difference between add-ons/plug-ins, aside from the fact that there is technical knowledge behind mine and my time has to be paid back somehow. And about "affordable": yes, it is. It is a one-off payment set against an 11k yearly subscription because... uuhhh... I've got an RL job in the industry as pipeline and character TD, and I don't NEED survival income from my plug-in; but I won't throw it out for free either. Once upon a time, Notepad was the de facto way to write anything. That's not the case anymore. The comparison fits best, considering the relative level between Blender, the Notepad with all its shortcomings, and Maya.
  7. I agree with this, it SHOULD be that easy. But it's not. The body is shipped in each viewer as a proprietary binary data dump, one file for each section (head, upper and lower body), which also contains the blendshape data for the sliders to work (shape keys, as they're called in Blender, or morph targets, etc.), because the collision-volume-bone system used on mesh bodies does not affect the system avatar mesh. (From what I could observe, those joints were used to generate the blendshapes back in the day.) These mesh files are then assembled in-world at runtime, removing the obvious seams, probably by merging a list of vertex pairs and unifying their vertex normals; so it is something very specific to the body itself, which doesn't make the replacement process as simple as drag, drop, overwrite the existing file, click yes. Plus all the XML configuration files related to the sliders. This is definitely something a Linden would stay at least 50 meters clear of.
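     I don't know the viewer's actual code, but the seam-removal step could look roughly like this sketch: given a list of vertex pairs along a section boundary, snap the positions together and average the normals so the shading is continuous across the seam. Array shapes and names are assumptions for illustration.

         import numpy as np

         def weld_seam(positions, normals, pairs):
             # positions, normals: (N, 3) float arrays; pairs: (i, j) index
             # tuples matching a vertex on one body section with its twin
             # on the neighboring section.
             for i, j in pairs:
                 mid = (positions[i] + positions[j]) * 0.5
                 positions[i] = positions[j] = mid        # close the visible gap
                 n = normals[i] + normals[j]
                 normals[i] = normals[j] = n / np.linalg.norm(n)  # continuous shading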
  8. Well, this isn't a matter of equality. First, Maya for indie developers with an indie license is around 350 dollars per year, easily the sum one would spend in "affordable plugins". So let's get that straight first. Second, for SL, buying Blender extensions is not optional: they are needed to pull something out of the software that also complies with the expected standard. As for Blender being the leveler, that's debatable where SL content creation is concerned. For static items, it actually is. When it comes to rigged stuff, what comes out of it is the result of scripted operations needed to make basic architectural features work (bind poses in the first place, and everything that follows from them, which is WAY more than you may think) that leave behind a modified mesh, not the exact original, with an absolute mess of orientation matrices. If you're interested, I can list all the hacky operations that at least one of these add-ons performs to make it work, and what the result is when imported into Maya. And before anyone comes up with "ah, but that is a Maya problem; if it works in SL it's OK, so fix your Maya": the issue is the other way around. The avatar was made in Maya, following its architecture; a well-formed Collada or FBX should import just fine right out of the box. Instead, in the best of cases, it comes in with 2 transformation offsets, sometimes 3, a wrong bind pose pointing to a transformation matrix that is manipulated at least twice to compensate for the orientation faults, and the shape keys baked in if any slider value was applied away from the neutral pose. So a devkit that comes as a Blender scene is a big fat giant NO, at least until these issues are actually LEVELED, to use the same term.
  9. Well: people detract from Maya, which is, first, the industry standard and, second, what was used to make the original avatars (and, consequently, the architecture type the avatar system relies on); and the problem is to settle on a file format that doesn't pose different issues for different use cases. So I'd argue that a Blender devkit makes for the worst of all use cases, given an internal architecture that defies the commonly established standards by using its own (rather illogical, ill-formed and incomplete) one, which makes the purchase of an add-on a necessary requirement to get something resembling a compatible asset, with all the issues those add-ons entail under the hood. So yeah, you know what? I advocate for a real devkit: a nice Maya scene. If some time is left, an FBX from that same Maya scene, so that the original working data is preserved as it should be. Even a USD archive, if provided, would leave the SL-constraining data out of what vanilla Blender can munch on anyway 😂
  10. I await the devkits in a REAL EXCHANGE FILE FORMAT: glTF, USD, or at the very least a well-formed FBX. NOT BLENDER.
  11. That leaves out the detail of how the bake service will treat the channels discarded by the shader, because in the end those layers will be applied to the body and therefore run through the shader. I would make the feature request to LL clear and detailed, so that this detail doesn't get lost in their WIP as a secondary item which, considering how they work, would inevitably fall into the "oh, we forgot about that, but now we can't change things to make it work" category.
  12. From the table, it doesn't seem that the alpha component is actually taken into account. Seeing the base color marked as RGBA and the others just as RGB, apparently the alpha channel would be discarded altogether...
  13. Well, at this point your option is to use the devkit with Avastar set to the neutral pose, without knowing the corresponding in-world slider values. Most of the time that's enough to produce meshes that work for the majority of use cases.
  14. There are better ways to accomplish this type of character feature, but what is described above is the current working system, and no other way is supported. If you're referring to how Avastar handles things to stay compatible with how Maya works in this regard, well, then you should go complain to the Blender Foundation 😉
  15. Yep. Not only do the parent joint and the collision volume joints scale according to the parameters defined by the slider, they also move to different positions, and there are different start/end positions between the male and female avatars to begin with.
  16. The default shape is not neutral, and that is the limitation of the slider system. Since Blender is unable to use bind poses, the currently implemented shape translation from joint scales on the skeleton makes it impossible to determine the real neutral, which assumes all scales on the m joints to be 1,1,1. If you don't mind doing some math, the mapping data is available in the avatar_lad.xml file, if I remember correctly. From that data you can retrieve which slider value corresponds to scale 1,1,1 on the m joints, and subsequently get the correct c joint positions and scales for a truly neutral shape (a sketch of the lookup follows below). On a side note, Avastar provides a toggle between default and neutral shape but, from what I remember, the slider values don't update to reflect it and tell you what number to apply to each slider.
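     A rough sketch of that lookup, assuming the avatar_lad.xml layout I recall, where each slider <param> carries <bone> children with scale deltas; attribute names may differ slightly in the actual file:

         import xml.etree.ElementTree as ET

         def sliders_scaling_bone(bone_name, lad_path="avatar_lad.xml"):
             # Collect every slider param that applies a scale delta to
             # `bone_name`, so you can solve for the slider value that
             # nets a 1,1,1 scale on that m joint.
             root = ET.parse(lad_path).getroot()
             hits = []
             for param in root.iter("param"):
                 for bone in param.iter("bone"):
                     if bone.get("name") == bone_name and bone.get("scale"):
                         hits.append({
                             "slider": param.get("name"),
                             "min": float(param.get("value_min", "0")),
                             "max": float(param.get("value_max", "0")),
                             "scale_delta": [float(v) for v in bone.get("scale").split()],
                         })
             return hits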
  17. The "standard and logical representation" is a joint, not a bone. The head and tail concept are proprietary to Blender only. So, to explain the difference between the representations between And this Is that the latter has been readjusted by Blender. The first image represents the joint orientation, that for SL needs to be x axis forward. If you enable bone roll axis display, you can see that. On the other hand, though, Blender bone rapresentation is fixed and the joint orient is established to always be Y axis along the bone length, and therefore the other 2 axis get adjusted by the so called bone roll. The issue is that Blender does not have an internal representation for custom joint orientations as well as for a bind pose, and most of the work that avastar does (as well as other add-ons that address export or import for SL) is translation work to comply with the expected standards.
  18. That's the animation setup for BVH export, along with the linear units set to inches (and negative Y forward). For rigging, the orientation should be Z up, positive X forward, with the joint position vectors from the viewer's skeleton definition expressed in your preferred linear unit; if the numbers match, the unit you use doesn't matter.
  19. Oh, and I forgot to mention the engine I worked with that I hated the most: CryEngine. I guess that in my case its name comes directly from how I felt about it... I wanted to cry every time I was assigned a task involving direct in-engine manipulation.
  20. That's right, but I'm on my phone with 3 language packs installed... so that slipped through 😁
  21. Well, that's not entirely accurate. You see, what you're showing as an example comes from another proprietary software system, not from another game engine. The numbering system in which 0 is assigned to the highest level of detail has been in place since game engines were first established, to ease the selection of a model based on camera distance and, later, on the number of pixels an asset occupies on screen. It has been like this since the inception of the first game engines that weren't simply raycast-based texture projectors, like Doom. The game industry has always had THIS standard, which is why all the game engines I've worked with in my journey through 3D content creation, from the ancient Chrome Engine, Bethesda's Creation Engine, Godot, Unreal (from the time of the very first SDK) to Unity, unanimously use this convention (a sketch of why it's convenient follows below). Then yeah, everyone is free to set their own standards if they so wish; it just isn't compliant with the well-established ones.
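     The convention exists precisely so the selection code can stay trivial, something along these lines (the coverage thresholds are made up for illustration):

         def pick_lod(screen_coverage, thresholds=(0.5, 0.25, 0.1)):
             # LOD0 = full detail, higher index = cheaper model; a descending
             # coverage table maps straight onto ascending LOD numbers.
             for lod, t in enumerate(thresholds):
                 if screen_coverage >= t:
                     return lod
             return len(thresholds)  # smallest on-screen footprint -> lowest LOD

         print(pick_lod(0.6))   # -> 0, close to the camera
         print(pick_lod(0.05))  # -> 3, far away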
  22. Yes, I totally agree. But I also remember, at the time of the mesh implementation, when I tried to make this point, the LL staff's half-denigratory, half-condescending attitude, saying "we hear that, but our method is more logical", under the assumption that a higher number = a higher number of vertices 🤦‍♂️ I stopped trying to talk to LL long ago. They don't listen, they go ahead and leave things half-implemented. See BOM without materials. See Animesh without attachment points and shape support.
  23. Most likely the error is caused by the 1-frame specification. In an SL-compatible BVH, the first frame is reserved for the T-pose, and then the animation begins. So in this case I suspect that specifying a single frame causes the error (see the sketch below).
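     A sketch of the fix, assuming you can edit the file's MOTION block as text: prepend a reference frame so the file carries at least two frames, with frame 1 reserved for the T-pose. The all-zero channels are a stand-in; the real reference values depend on the rig.

         def prepend_tpose(motion_lines):
             # motion_lines: the BVH MOTION block as a list of lines, e.g.
             #   ["Frames: 1", "Frame Time: 0.033333", "<ch0> <ch1> ..."]
             header, frame_time, frames = motion_lines[0], motion_lines[1], motion_lines[2:]
             count = int(header.split(":")[1]) + 1
             tpose = " ".join("0.0" for _ in frames[0].split())  # reference frame
             return ["Frames: %d" % count, frame_time, tpose] + frames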
  24. My exporter for Maya does support per-joint priority, as does Avastar; the way to enable it just isn't in plain sight in Avastar, as opposed to my plug-in, which has a convenience interface for it. Anyway, given how the animation is serialized, with the priority written once per joint ahead of its keys, a per-frame, per-joint animation priority is simply not possible.
  25. A very impractical method, in my opinion. First, it puts the feature out of reach for the majority of end users, who usually aren't very script-inclined. Second, script communication with the viewer currently isn't supported for this kind of operation (scripts changing viewer behavior).