
OptimoMaximo

Resident
  • Content Count

    773
  • Joined

  • Last visited

  • Days Won

    3

OptimoMaximo last won the day on February 15

OptimoMaximo had the most liked content!

Community Reputation

729 Excellent

8 Followers

About OptimoMaximo

  • Rank
    Maya MahaDeva

Recent Profile Visitors

1,102 profile views
  1. Basically, it's either animated or a pose. Both are animation assets; the difference is that a pose is only one frame, while an animation is composed of several frames. As for the AO part, I've personally never heard of one, and it's unlikely to work: an AO plays animations on freely moving avatars based on their animation state (idle stand, walk, etc.), so there would be issues with the inworld positions of the two avatars involved. What you're looking for is more likely to be built as standalone animations placed in an object that lets two avatars sit and plays the couple animation on them, with no guarantee of matching the intended behavior due to the sheer shape diversity among users. Animations are made starting from an avatar shape; the more that shape differs from your own, the more accumulated error you get (the animation design wants a hand to touch the avatar's face, but your shape has different settings, so the hand will never touch the face as intended, either going through the head or never reaching it). However, there are a few animation engines, like AVSitter or nPose, that let you build something that works and can be (approximately) adjusted to fit the widest range of users.
  2. I'm not sure I understand this correctly, but I think the OP meant an AO that already contains tail and ear animations, as opposed to your comment, where I understand you have a separate tail AO built into the tail itself. So yes, that is basically what needs to happen, because a generic human AO can't handle the wide variety of tail shapes out there: a tail skeleton might have different proportions and distances between joints, making it unsuitable for a generic animation made on who knows what tail shape and position.
  3. You're forgetting a keyword... "High Quality"... and I'd add "Life-like" 😁
  4. Read my post more intently, as I was referring to the default avatar STANDARD. So what if I arranged my avatar differently? Auxiliary channels can now do the trick, but this feature is nowhere near as advanced or useful as my proposal of dumping BoM for a more robust and versatile material layering system, which would apply to ANY item, instead of relegating the feature to a baking-service-generated texture for worn avatars only.
  5. The fitting is not in question; the two objects definitely fit together. The issue at hand is that the two objects' borders do not share the same vertex normals, so light hits them and bounces off them differently. Other people might have a different combination where the seam shows less or not at all, or maybe they have a neck-fix attachment that works well because their chosen head is closer to the default head's neck shape. Good luck finding a solution; I hope to see it posted here soon.
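A quick numerical sketch of why mismatched vertex normals show up as a seam (plain Python, made-up normals): two vertices can sit at exactly the same position, but if each mesh stores a different normal there, the same light produces different Lambert shading on each side of the border.

```python
import math

def lambert(normal, light_dir):
    """Lambertian intensity: clamped dot product of unit normal and light direction."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

light = (0.0, 0.0, 1.0)  # light shining straight down the Z axis

# Same seam position, but the body mesh and head mesh store different normals
body_normal = (0.0, 0.0, 1.0)  # facing the light directly
head_normal = (0.0, math.sin(math.radians(30)), math.cos(math.radians(30)))  # tilted 30 degrees

print(lambert(body_normal, light))  # 1.0
print(lambert(head_normal, light))  # ~0.866 -> visibly darker, so the seam shows
```

The geometry matches perfectly; only the shading differs, which is why "the fit" is not the problem.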
  6. The scale issue depends on the software you're using. Blender and 3ds Max default to meters and should stay that way; Maya instead requires centimeter scene units. For Maya there is the MayaStar plug-in, which comes with all the necessary files; for 3ds Max there is Marionette by polysail. MayaStar can be found on the Marketplace, while Marionette can be purchased by asking the creator.
  7. Those proportions are achieved using bone scale, which is why that joint attribute is disabled in SL animations. If you do scale the bones on your side, you won't get the exact same result as in your application, because the world-space location of each bone's head and tail would differ from the inworld avatar's. Their position is a different matter: that feature is intended to allow shapes other than human, and both the joint roll and joint position play a huge role in how the animation is output and, finally, how it plays back inworld. If the bone roll is compliant with SL's default avatar skeleton, animations work just fine, BUT there will be some (if not a lot of) discrepancy between your animation design and what you see inworld. As a general rule, when joint positions are involved in the creation of an avatar, the mesh is marked "use joint position" during upload, and animations should only run on rotations. From your picture, I think you need a better repositioning of your leg bone, which you can do by deleting the armature modifier, performing your fixes, setting that pose as the new rest pose, and attaching a new armature modifier.
  8. This is something SL users should get better acquainted with. Perfection and "life-like" are achievable only within limits defined by the nature of 3D content creation itself. There is a reason games restrict camera movement: if an average SL user inspected an AAA game asset the way they're used to doing in SL, they would find so many imperfections that never show up during normal gameplay. There are many reasons for this, and laziness isn't one of them: it may be for optimization, since computer resources are finite, or simply because spending time on fine details the camera will never frame or get close to slows down production for no real benefit. A small detail like that shouldn't disrupt the enjoyment of your two products, because there are many ways to cover it up, from outfits to "neck fix" attachments, or, if you're into photography, you could just edit it out.
  9. It is a problem with the two pieces, body and head, not sharing the same vertex normals because they were not exported together. That's why the Belleza body complies with system heads: the creator exported a system head along with the body, so the boundaries could share their vertex normals and no seam would show. Solution? None, unless head creators share their original mesh heads with body creators, who in turn would have to export a body version for every single mesh head brand (or the other way around)... and I don't see that happening.
  10. I'm not saying it can't be fixed; I'm saying LL won't fix it regardless. That is your main issue, animats: you don't read intently, and you start wandering off from what the intervention was about.
  11. It's been discussed before, but apparently the basic premise of such a thing can't stick in your head. Let's summarize it once again:
- The library/code/program implementation needs to be free: Linden Lab won't use anything that costs a single penny. Proof of this was the choice of Collada over FBX for mesh import, and the current LOD generator.
- The content can't be shared with yet another service provider, which is exactly what the free version of Simplygon does.
- The resulting content has to fall within the current mesh asset architecture, and what you're describing from Simplygon just doesn't fit. It not only creates more texture assets, which leads to more materials, but it is also prone to creating new meshes rather than subsets of the high-level mesh; either way, that breaks the current asset architecture's rules.
Say for a moment that the Lab were willing to create a new asset type for this feature (just that makes me LOL): every mesh created before such an implementation would have to be grandfathered, resulting in even more spaghetti code than there is already, which brings us back to points a) and c) below.
Now, I agree that a different generator, like the decimator that pretty much every 3D package has, would already be a great improvement, and programming it so as not to break UVs and material boundary edges isn't impossible (I can speak for Maya, and it does that pretty well). What I find arguable is the degree of change you claim every time this topic comes up. LL simply won't do it, because:
a) it's too much work
b) they might have to pay for a license
c) there's too high a chance of breaking something
We all know how much work power (and consequently, how much $$$) LL is inclined to spend overall; we know that from SL's history, from their habit of delegating the work they should do to residents more committed to SL than they are (because it's free labor), and from Sansar, which is far behind where a comparable project at any other company the size of LL would be.
Their revenue is in the hundreds of millions per year, yet they refuse to hire full-time professionals for each department. This isn't my guess: the developers we talk to at the content creator meetings are hired part-time, at least as of when I last participated, and their number is ridiculously small. That sentence I personally agree with, but a reality check about who you're talking about (LL), and the degree of commitment required to keep the platform running under the constraints above, should make you think twice before posting these claims again.
  12. Wrong: you can get the two avatars to overlap and texture bake from one to the other; your 3D app will take care of the remapping. It's really a couple of clicks to set up (once you have made a template with the avatars already overlapped). The same goes for mesh clothing: you can fit the clothing on the target avatar and have its textures baked onto the avatar skin, with the same procedure.
  13. If you still have the mirrored vertex group for the corresponding bone on the other side, you can copy it in the vertex group panel and rename the copy after the corrupted vertex group's name, making sure to delete the original corrupted group before renaming.
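A minimal sketch of the bookkeeping involved, outside Blender (plain Python dictionaries standing in for the vertex group panel; the group names and weights are made up): the order matters, since the corrupted group must be deleted first so its name is free for the renamed copy.

```python
# Vertex groups as name -> {vertex_index: weight}; names/weights are hypothetical
vertex_groups = {
    "UpperArm.L": {0: 1.0, 1: 0.75},  # intact mirrored group
    "UpperArm.R": {},                 # corrupted group for the other side
}

# 1) delete the corrupted group first, so its name becomes available
del vertex_groups["UpperArm.R"]

# 2) copy the mirrored group, then "rename" the copy after the old name
vertex_groups["UpperArm.R"] = dict(vertex_groups["UpperArm.L"])

print(vertex_groups["UpperArm.R"])  # {0: 1.0, 1: 0.75}
```

In Blender itself you would still mirror the copied weights across the X axis afterwards; this only illustrates the delete-copy-rename order described above.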
  14. You are correct on most things there, except for a little detail worth mentioning: squash and stretch can be achieved only to some extent, as it requires scale data to go along with joint repositioning. Scale data is not supported by the internal animation format except on the mPelvis joint, which makes any animated scaling be inherited by all the child joints you might be using, basically leaving the squash-and-stretch animation visually crippled. For all the rest, you're totally right.
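The inheritance problem can be sketched numerically (plain Python, hypothetical joint chain): a joint's world scale is the product of its ancestors' local scales, so a squash keyed on the root propagates to every descendant instead of staying local.

```python
# Hypothetical joint chain from root down to the ankle; each entry is a local scale
chain = ["mPelvis", "mHipLeft", "mKneeLeft", "mAnkleLeft"]
local_scale = {joint: 1.0 for joint in chain}

# Key a 0.5 "squash" on the root only -- the one joint the format allows scaling on
local_scale["mPelvis"] = 0.5

# World scale of each joint = product of all local scales from the root down
world = 1.0
world_scale = {}
for joint in chain:
    world *= local_scale[joint]
    world_scale[joint] = world

print(world_scale)  # every child inherits the 0.5, e.g. mAnkleLeft -> 0.5
```

A true squash-and-stretch would need per-joint scale channels so the 0.5 could stay on the pelvis alone, which is exactly what the format doesn't support.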
  15. Sorry Chinrey, but that's incorrect. What you are referring to is the difference between two formats: OpenGL (+Y, used natively in Unity, Blender, Maya and 3ds Max) and DirectX (-Y, used in Unreal Engine 4 and the Substance tools). Differences also arise from the tangent basis in use: Blender, Maya and Unreal Engine 4, for example, use the Mikktspace tangent basis by default, while other packages MIGHT need that setting sorted out, as it isn't their default. Note also that smoothing groups play a HUGE role in how a normal map is baked (high-to-low-poly workflow), changing the resulting map by a VERY big deal depending on the setting. So, to summarize, there are a few factors in normal map creation that can give very noticeable artifacts in SL, but those same factors are at play outside of SL too.
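The +Y/-Y difference between the two conventions is only the sign of the green (Y) channel, so for an 8-bit map the conversion is just an inversion of G. A minimal sketch in plain Python (a three-pixel "image" of made-up values):

```python
def flip_green(pixels):
    """Convert between OpenGL (+Y) and DirectX (-Y) normal maps by inverting G."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]

opengl_pixels = [
    (128, 128, 255),  # flat surface: normal points straight out
    (128, 200, 230),  # slope tilting "up" in the OpenGL convention
    (100, 60, 240),   # slope tilting "down"
]

directx_pixels = flip_green(opengl_pixels)
print(directx_pixels)  # [(128, 127, 255), (128, 55, 230), (100, 195, 240)]
```

The same flip converts in either direction, which is why a map baked for the wrong convention shows inverted lighting only along the vertical (Y) detail.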