OptimoMaximo (Resident) · Posts: 1,809 · Days Won: 3

Everything posted by OptimoMaximo

  1. I'm in group 4, so I haven't been called for the vaccine yet, but my doctor recommended against taking it anyway: one of my kidneys is extroflexed and thus at risk of collapsing while trying to filter a couple of byproduct substances used during the purification process. Normal kidneys should have no problem processing them, but how my kidney would react remains unknown, and it could collapse badly and stop working 🙁
  2. I guess you're referring to the need for a root object to move the dragon around while it animates on the spot, as opposed to animating everything and having the movement come from the animation itself. It's a limitation inherited from how games handled movement and animation back when SL was created. An invisible capsule moves under the user's controls; parented to it, the avatar plays the animations associated with what is called a "character state": walking, running, turning, etc. So the movement through the scene basically needs to be subtracted from the animation, and as a result the animations are "on the spot".

Compared to SL's age, the "root motion" system, which transfers the animation's spatial motion to the controller, is relatively new, and it can't even be implemented because of how the animation system architecture relates to the physics controller. We could call it a "joint-naive system": the physics that runs the avatar controller on the simulator has no remote knowledge of what a joint is, even though a joint technically is a transform node and could, in principle, be seen and sampled as if it were an object. An update of that kind would crack Pandora's box open and require a near-infinite chain of dependency updates that could disrupt the whole platform's working order, if not break it outright or, in the best scenario, break older content, which LL regards with horror and absolute fear (it might mean a colossal financial hit the Lab could not survive).

Back in those days neither animesh nor keyframed motion was available. They basically had one model for each single pose of the horse and its parts, linked together, and used a script to sequentially make only one pose visible at any given time.

Oh, and I did go to the content creator meetings up to the bento project.
I contributed to solving issues that, a week later, someone else claimed as their own solutions, while I was constantly shunned aside as the incompetent moron. But guess what: the bento project had a long release delay because the issues I foresaw with the face joint setup had to be taken into account for a fix within the viewer when shape sliders were changed. Ever wondered why there's a useless joint called face root? Because of the offset fixes that Vir had to implement to make the face work with the existing shape sliders. Ever wondered why faces don't show elasticity in shapes or animation? Because this incompetent moron (myself) claimed from the get-go that using animation joints to drive shapes was a bad idea, not only because of the issues I foresaw, mentioned earlier, but also because the shaping results would not be really satisfactory. The Blender "pros" knew better, so I just shut up and left it to its destiny; I don't care. So I quit going to the content creator meetings. What's the point? I am a rigger and pipeline developer in RL studios; I'm more than capable of taking what I'm given and making it work anyway.
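The "subtract the movement from the animation" step described above can be sketched like this; a minimal, hypothetical illustration in plain Python (the frame layout and joint names are invented for the example, not SL's actual animation format):

```python
# Bake a root-animated clip into an "on the spot" clip by subtracting
# the root's horizontal travel from every frame, the way in-place game
# animations are authored. Frames are {joint: (x, y, z)} dicts.

def make_in_place(frames, root="hip"):
    """Return (in_place_frames, per_frame_offsets_for_the_controller)."""
    origin = frames[0][root]
    in_place = []
    offsets = []
    for frame in frames:
        dx = frame[root][0] - origin[0]
        dy = frame[root][1] - origin[1]
        offsets.append((dx, dy, 0.0))  # travel handed to the capsule/controller
        baked = {}
        for joint, (x, y, z) in frame.items():
            # Subtract horizontal travel; keep vertical motion (bobbing) intact
            baked[joint] = (x - dx, y - dy, z)
        in_place.append(baked)
    return in_place, offsets

walk = [
    {"hip": (0.0, 0.0, 1.0), "foot": (0.2, 0.0, 0.0)},
    {"hip": (0.5, 0.0, 1.1), "foot": (0.7, 0.0, 0.0)},
]
baked, offsets = make_in_place(walk)
print(baked[1]["hip"])  # (0.0, 0.0, 1.1): hip stays put, height bob kept
print(offsets[1])       # (0.5, 0.0, 0.0): the travel goes to the controller
```

Root-motion systems do the reverse: they keep the travel in the clip and feed it to the controller at playback time, which is exactly what SL's architecture cannot do.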
  3. About this part here, I think you meant to say that the bento skeleton was developed in Blender and not in Maya. First off, the original skeleton was made in Maya, so the bento skeleton appendages had to comply with the already laid-out standards. Secondly, yes, the main input came from the Avastar team, but they were well aware of what the standards should be, as their addon does just that: it runs a series of procedures to align Blender models and skeletons to the Maya-based requirements, such as joint orient, joint scale, bind matrix, vector lengths, scene orientation, scene scale, object scale and linear units, so they are written correctly to collada files and comply with a Maya-exported item. See, many years ago I collaborated with the Avastar team on the animation setup and workflows, so I'm well aware of these things. Then I moved on, made my own animation plug-in for Maya, and now also collaborate with @Cathy Foil on Mayastar. So again, I'm well aware of how these systems work.
  4. Did you try to set the sitting balls as animesh objects before linking them back to the main model? It might be that the balls haven't been turned into animesh and are seen as static links.

As for the AKK system: the first AKK horse I saw was made of sculpts, and when mesh came out they updated the model, but the system stayed the same. For every pose in an animation there was a different model, which was turned visible while all the others were turned transparent, creating the illusion of motion. This technique is called alpha flipping, and it's very wasteful of resources. Also, notice that those horses were attachments for the avatar to wear, and that the avatar played an animation to lift it up into the saddle, BUT the actual avatar did not really move, as shown by the name tag, which did not move. The name tag follows the position of the avatar controller, the thing that moves the avatar around the world, not the avatar mesh, which is why, when riding those horses, the name tag was always sunk into the avatar's abdomen or crotch area.

Later on, when bento was introduced, horses were skinned to the avatar's extra joints. All animations became avatar animations, like an AO, which still required the avatar to be lifted up to saddle position while skeletal animation was also applied to the horse itself. Less wasted resources and much better-looking animations.

As Quistessa noted above, not all features transfer from 3d apps to engines. You'd think that the two top-notch game engines on the market, Unreal and Unity, could do that, but it's not the case. While Unity, at least as of the last time I worked with it in 2019, actually has a skeleton in the scene to parent things to, "sitting" a character on it still requires a ton of synchronized animations, even though not to the extent that SL does, as noted earlier.
Unreal Engine, on the other hand, requires an even more complicated setup within the actor blueprint to achieve such a result, and still needs a ton of synchronization code and synchronized animations. The bottom line of all this is, basically, that a 3d app has all the means to take all the time needed to calculate and render any kind of setup the user wants, even 2 weeks per frame if necessary. A game engine can't: a common minimum performance standard is one frame every 33 milliseconds at the very least. Therefore it can't afford the millions of relationships involved in calculating the transformations that a 3d app makes you pay for with longer execution time.
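The alpha-flipping trick described above boils down to a visibility carousel; here's a minimal sketch in plain Python (the class, model names and per-tick stepping are illustrative, not the AKK product's actual script, which would have been LSL driving per-link transparency):

```python
# Alpha flipping: one full mesh per pose; each tick, exactly one pose
# is shown and all the others are hidden, faking motion at the cost
# of carrying N complete models in the linkset.

class AlphaFlipper:
    def __init__(self, pose_models):
        self.poses = pose_models  # e.g. link names, one model per pose
        self.current = 0

    def tick(self):
        """Advance one pose; return {model: visible} for this frame."""
        state = {m: (i == self.current) for i, m in enumerate(self.poses)}
        self.current = (self.current + 1) % len(self.poses)
        return state

horse = AlphaFlipper(["gallop_1", "gallop_2", "gallop_3"])
print(horse.tick())  # only gallop_1 visible
print(horse.tick())  # only gallop_2 visible
```

Every pose is a full duplicate mesh that must be downloaded and kept in memory, which is why the post calls the technique wasteful compared to a single skinned mesh.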
  5. No, you don't understand. First, Blender has the wrong orientation, not Maya. SL was built the way it is to conform to the general-purpose concept of a forward vector which, in SL, is the X axis forward for everything. Also, in both 3d packages, when things are skinned to joints, the object stays put while the mesh shape follows the animation, a concept that apparently isn't quite clear to you yet. Want fire particles emitted from the fireball? Blender allows emission from the surface of the mesh; SL only from the center of the object. And NO, these two aren't the same thing, sorry.

Another concept that blatantly isn't clear to you yet is that joints have no physical presence anywhere in either the server-side or viewer-side code. The server doesn't know that joints exist at all, while the viewer only has the notion of transform points that transfer rotations over to the skin cluster for deformation. At no point is there any object anywhere within, or in between, the two components, server and viewer, capable of setting or inheriting transformation values inworld. Animations are just cosmetic things that serve a specific purpose, and the whole concept of something being "universal" within the scope of SL animation is pretty much ridiculous, let alone unachievable. So yes, that is the only method, as of the time of this writing, to obtain the result you're after. The only way to make that happen would be for LL to add attachment points to animesh, which is exactly what I was saying above: transforms that would be capable of setting and inheriting transform values inworld from the skeletal animation data. This is different from skinning the mesh to a joint, as it is a parent-child type of hierarchical relationship and therefore ties two transform nodes together directly.
  6. There's a separation between the transform node and the mesh shape, and it exists in Blender too. When you animate your things, their origin points stay put where they were at the beginning, and only the mesh moves. That's the separation. Anyway, to get your desired result, you need to make the avatar animations by attaching an avatar to the seat via a constraint and export that specific animation for your avatar to play back in sync with the dragon's movement.
  7. Here comes the neck line issue, AGAIN. Let's summarize it once more then...

The avatars in SL are made out of separate parts. Those parts are made of vertices, which have some data attached to them called vertex normals. These normals basically tell the renderer in which direction the surface points, i.e. its normal, aka perpendicular, direction, for the light bounces to be calculated. Textures, such as normal maps, BEND and MODIFY the hard-coded normals, but if the base normals are discrepant, the result will still be discrepant, so textures really do nothing except mitigate the visibility of the mismatch under favorable lighting conditions.

Creators have their own working files. Although vertex normals can be copied and pasted from one set of vertices to another, this requires the modeler to have the other body part in question within their working files. Do body creators have ALL the head models at their disposal to do this operation, which would result in one body version for each single head on the market? NO. Do head creators have ALL the bodies to perform the above-mentioned operation, which would require one head version for each single body on the market? NO. So there really is no fix for it unless the head comes from the same brand as the body, or a covering mesh is applied over that area to mitigate the problem.
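To make the "discrepant base normals" point concrete, here is a toy sketch in plain Python (the triangle data is invented): two meshes share a border vertex at the same position, but since each file only averages its own adjacent faces, the stored normals differ, so the renderer lights each side of the seam differently.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def face_normal(a, b, c):
    """Cross product of two edge vectors gives the face's perpendicular."""
    u = [b[i] - a[i] for i in range(3)]
    w = [c[i] - a[i] for i in range(3)]
    return normalize((u[1] * w[2] - u[2] * w[1],
                      u[2] * w[0] - u[0] * w[2],
                      u[0] * w[1] - u[1] * w[0]))

def vertex_normal(adjacent_faces):
    """Average the normals of the faces touching a vertex."""
    n = [0.0, 0.0, 0.0]
    for a, b, c in adjacent_faces:
        fn = face_normal(a, b, c)
        n = [n[i] + fn[i] for i in range(3)]
    return normalize(tuple(n))

# The same neck-border vertex at (0,0,0), seen from a head file and a
# body file: each file only knows its own faces, so the averages differ.
head_faces = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
body_faces = [((0, 0, 0), (0, 1, 0), (0, 0, 1))]
print(vertex_normal(head_faces))  # (0.0, 0.0, 1.0)
print(vertex_normal(body_faces))  # (1.0, 0.0, 0.0)
```

Same position, different normals: the lighting calculation receives different perpendiculars on either side of the seam, and no texture can fully hide that.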
  8. I understand what you mean, but the basic problem persists. You see, what you describe is a fix, applied later down the line when you figure out that something is wrong with an animation and no longer have the original work file to export it from again. Which is good, as more tools allow better maintenance and a repair workflow. However, the baseline issue is that animators select all, keyframe all attributes and export, and certain tools just plainly export whatever has keyframes on it, without checking whether any actual animation is occurring, calling it a day on both sides, animator and export program. So yeah, while a tool to fix these kinds of issues is great to have, because everyone can make a mistake, it is more like giving a man a fish rather than teaching him how to fish.
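The exporter-side check hinted at above, skipping channels that are keyed but never actually change, can be sketched in plain Python (channel names and the tolerance are illustrative):

```python
# Drop keyed channels whose values never change: they carry no real
# animation, only the "select all, keyframe everything" habit.

def filter_static_channels(channels, tol=1e-6):
    """channels: {name: [keyframe values]} -> only the animated ones."""
    animated = {}
    for name, keys in channels.items():
        first = keys[0]
        if any(abs(k - first) > tol for k in keys[1:]):
            animated[name] = keys
    return animated

clip = {
    "mShoulderLeft.rotateZ": [0.0, 12.5, 30.0],   # real motion: keep
    "mShoulderLeft.translateX": [0.1, 0.1, 0.1],  # keyed but static: drop
}
print(sorted(filter_static_channels(clip)))  # ['mShoulderLeft.rotateZ']
```

A deliberately held static pose fails this test too, which is why the post mentions needing a second mechanism to force export in that case.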
  9. Another way is to export your newly made shape to XML on your computer and re-upload it under your main account.
  10. I don't think so. 3d modeling packages in general assume a basic understanding of coordinate systems at the very least. When it comes to SL, that's a vital part, since bone orientations and transform attributes are very strict. What you see sucked super thin is an inherent issue with Blender, which doesn't support bind poses: the collision volume bones you see in the scene therefore carry the wrong scale attribute values, which in SL are predefined to be minute. To fix this, you would have to go through a long series of steps and fixes to make that skeleton compliant with the SL standards, OR buy the Avastar add-on.
  11. Yes, this is true when it comes to the control rig and the various controllers. The deforming skeleton, instead, is another matter. The real issue arises when you grab a shoulder bone, for example, keyframe all channels, AND also export the translation. Whatever its position was, it will be applied to the avatar regardless of its shape. So if you wear a shape that has very broad shoulders and long arms and play such an exported animation made from a default avatar, that shoulder will be crushed inwards to reach the animated position, disregarding the avatar's inworld base shape entirely. And that's what I think the OP is referring to. This is an issue that was noted a looong time ago and that the Avastar team never addressed, while in Maya I made sure to filter translations out unless intentional animation is in place. Sure, it makes simple static poses trickier to get at first glance (I put another system in place to ensure export in such cases), but at least my exporter keeps unintentional garbage from being exported.
  12. The problem with this statement is that, as this thread shows, not many understand that what you describe is done, or should be done, by the mesh itself carrying the joint positions, while the animations should only carry rotation values. That way, not only can another tail animation be used on your other tails, but it also avoids the occasional, unintentional inheritance of joint translations across other body parts.
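That division of labor can be sketched in plain Python (the channel layout is hypothetical; only the joint names and the rotations-only rule come from the post): joint positions travel with the rigged mesh, so export keeps rotation channels and drops translation keys on everything but the root.

```python
# Split a clip per the rule above: rotations stay in the animation;
# translations are dropped for every joint except the root, because
# joint positions belong to the rigged mesh itself.

def rotations_only(clip, root="mPelvis"):
    """clip: {joint: {"rotation": [...], "translation": [...]}}"""
    out = {}
    for joint, channels in clip.items():
        kept = {"rotation": channels["rotation"]}
        if joint == root:  # root translation is actual locomotion: keep it
            kept["translation"] = channels["translation"]
        out[joint] = kept
    return out

clip = {
    "mPelvis": {"rotation": [(0, 0, 0)], "translation": [(0, 0, 1)]},
    "mTail1":  {"rotation": [(0, 30, 0)], "translation": [(0.3, 0, 0)]},
}
trimmed = rotations_only(clip)
print("translation" in trimmed["mTail1"])  # False: tail offset comes from the mesh
```

Because the tail's offset is not baked into the animation, the same clip plays correctly on any tail mesh whose own joint positions differ.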
  13. This workaround only works for bvh animation exports, since it basically applies the orientation that format assumes. To export rigged content, instead, the correct orientation is Z up and X forward, so basically rotate the skeleton 90 degrees around the Z axis until the character looks in the positive X axis direction, then apply rotations. Notice that things can still go wrong if the collision volume joints don't have a custom property setting their scale to what SL wants and you export the vanilla Blender-scaled collisions, which in the end results in the model being squished along the limbs.
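The 90-degree reorientation above is just a rotation about the Z (up) axis applied to every joint and point; a minimal sketch in plain Python (a character authored looking down +Y, turned to look down +X; the vectors are invented):

```python
import math

def rotate_z(point, degrees):
    """Standard counterclockwise rotation of a 3d point about the Z axis."""
    r = math.radians(degrees)
    x, y, z = point
    # round() only to suppress floating-point noise in the printout
    return (round(x * math.cos(r) - y * math.sin(r), 6),
            round(x * math.sin(r) + y * math.cos(r), 6),
            z)

# Character authored facing +Y; SL rigged export wants it facing +X,
# so spin the whole skeleton -90 degrees (clockwise seen from above).
forward = (0.0, 1.0, 0.0)
print(rotate_z(forward, -90))  # (1.0, 0.0, 0.0)
```

After rotating, the "then apply rotations" step from the post freezes this new orientation into the rest pose so the exporter writes clean transforms.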
  14. That's what the Maya base-scale avatar has to be scaled up to in order to comply with the inches scale that bvh for SL expects. In Maya the avatar is almost 2 units tall, meaning almost 2 centimeters, and to get it to animation standards for bvh export it must be scaled by 39.37, because that is the meters-to-inches scale factor. In Blender, based on meters, you should scale things so that the actual numeric values come out in inches, to comply with the SL bvh standards.
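The arithmetic above, sketched in plain Python (the 1.88 height is an illustrative figure, not the exact default avatar height):

```python
# BVH for SL expects values that read as inches; 3d apps author in
# meters or centimeters, so a unit-reinterpretation factor is needed.

METERS_TO_INCHES = 39.37  # inches per meter, the factor quoted above

# Maya case: the avatar reads as ~2 units (internally centimeters), and
# the numbers must become inches for a character meant to be ~2 m tall:
maya_height_units = 1.88
bvh_height = maya_height_units * METERS_TO_INCHES
print(round(bvh_height, 2))  # 74.02 -> a ~1.88 m avatar is ~74 inches
```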
  15. What I do is have the income taxed as a technical or artistry consulting job, so no sales-based VAT can be applied. Basically I'm a freelance professional who, occasionally or on a steady basis, works (and gets paid) for services rendered to the overseas company. This way I saved myself a lot of headaches. This works in Italy, though; you should check whether it can be applied in Germany too.
  16. The bvh importer should accept a few aliases for each single bone, I guess to allow scenes with multiple rigs in 3d applications without name clashing; for the root those would be mPelvis, hip, avatar_hip... But! To my knowledge only hip works, and that's the naming convention Cathy Foil and I have set up for the animation export in our exporters. The anim format, instead, doesn't allow aliases, so for the anim exporter I resorted to some name-string manipulation to get the desired output. The 39.37 scale refers to the meters-to-inches conversion rate when the avatar is based on a centimeters scale, where the linear units are centimeters but the values themselves have to be interpreted as inches, so I don't know whether that still applies in Blender, which is based on meters as its linear unit. I guess it would need to be converted to inches, requiring the inverse ratio to be applied. Anyway, importing a bvh animation without scale compensation results in a skeleton that is smaller than its counterpart in meters scale. Quite a fuss and a pretty dumb design.
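The name-string manipulation mentioned above amounts to collapsing aliases onto the one name each format accepts; a minimal sketch in plain Python (the alias table and the namespace-prefix convention are illustrative; only hip/mPelvis/avatar_hip come from the post):

```python
# Map whatever the 3d scene calls the root bone onto the single name
# the target format wants: "hip" for bvh, "mPelvis" for .anim.

ROOT_ALIASES = {"hip", "avatar_hip", "mPelvis"}

def export_name(scene_bone, fmt):
    # Scenes often namespace rigs to avoid clashes, e.g. "rig01:mPelvis"
    base = scene_bone.split(":")[-1]
    if base in ROOT_ALIASES:
        return "hip" if fmt == "bvh" else "mPelvis"
    return base

print(export_name("rig01:avatar_hip", "bvh"))   # hip
print(export_name("rig01:avatar_hip", "anim"))  # mPelvis
print(export_name("rig01:mChest", "bvh"))       # mChest
```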
  17. I knew Squinternet personally. At the time I was teaching Blender and she wanted to learn, but her severe illness slowed her down, and she passed away before being able to release anything mesh made entirely by herself, without the help of any collaborator. She was a lovely lady, very intelligent and humble. Very humorous too. Fun fact about her SL name: being Italian, her name was the result of merging "squinternata" (mess-minded/unhinged) with "internet".
  18. The Apply checkbox is not checked, so the transforms aren't being applied upon export, and the model would most likely come back into the 3d app with rotations.
  19. Yes. Edit the attachment inworld by right-clicking it while it's worn and selecting Edit. At that point, rotate and position it until it looks good, and you're done. It will remember your edits every time you attach it again.
  20. What export? Is that a feature option in the Blender exporter when a UDIM setup is detected? If it's not a specific option you're talking about, then no exporter does that automatically regardless, or the whole point of the UDIM setup would go to waste.
  21. Adding on top of that, remember that UDIM is good for ease of work, but SL, like any other game engine, does not support that feature, and you must assign materials prior to upload to reflect the texture separation and be able to assign the textures to the correct sets of faces.
  22. Did you try something like the Maya command "Go to Bind Pose" after import anyway? When you import a collada file from Blender into Maya, the mesh always comes in exploded, for example, and going back to bind pose usually fixes those issues.
  23. There is a new license type, called Indie License, at 350 USD per year. https://makeanything.autodesk.com/maya-indie
  24. It's not a quirk in the files, and playing with the settings to make it look right in your 3d app may make things even worse rather than improving them. The original avatar was created in Maya at centimeter scale so the software would export in native units, making the conversion to the SL internal file format scalable. If the skeleton refers to single units, the nomenclature has no effect when said units are converted to another measurement. It just happens that Maya uses centimeters internally, and that's why it is set up the way it is. When you parent something to another transform and scale the latter up, the local space within the child transform won't undergo any matrix changes; only the visuals will. So the individual joint vectors won't change and everything stays compatible, while the visuals can be adapted to a meters-scale bounding box. Once the skin is bound, the components' absolute size won't matter, as their positions become relative to the joints they are influenced by within the deformer.
  25. Nothing's broken; it's the actual scale of an avatar: 1 unit = 1 centimeter. If you want a meter-scale-sized avatar, you should group the skeleton (or parent it to a transform node that has no shape node) and scale the group by 100 on all axes, then rig your item onto it by fitting the size and placement, and only then bind it to the skeleton. You'll have to work out the translation of the explanation above into c4d terms yourself, though.
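The "scale the parent, the local matrices don't change" point from the two posts above can be demonstrated with a toy transform hierarchy in plain Python (the two-node chain and the joint offset value are invented):

```python
# A child's local translation is stored relative to its parent; scaling
# the parent changes where the child lands in the world, but the stored
# local values, what actually gets written for the skeleton, stay put.

class Transform:
    def __init__(self, local_pos, parent=None, scale=1.0):
        self.local_pos = local_pos  # authored joint vector, never touched
        self.parent = parent
        self.scale = scale          # uniform scale applied to children

    def world_pos(self):
        if self.parent is None:
            return self.local_pos
        px, py, pz = self.parent.world_pos()
        s = self.parent.scale
        x, y, z = self.local_pos
        return (px + x * s, py + y * s, pz + z * s)

group = Transform((0.0, 0.0, 0.0), scale=100.0)   # the x100 group node
pelvis = Transform((0.0, 0.0, 1.25), parent=group)  # cm-authored offset
print(pelvis.local_pos)    # (0.0, 0.0, 1.25) -> unchanged, stays compatible
print(pelvis.world_pos())  # (0.0, 0.0, 125.0) -> visually meter-scale
```

That's why the grouping trick is safe: the scaled group only changes what you see and fit against, not the joint data the skin is bound to.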