Everything posted by OptimoMaximo

  1. You should actually invert the UVs' normals to get that behavior. If you mirror the UVs so that they're flipped, then a negatively scaled normal map would offset the placement of the red channel back to its original shaded look.
  2. Yeah, like me when I read "mesher".
  3. Selection order, I guess. Try the upload twice for the same object and you should see the same root. Another way to ensure a certain object always ends up as the root is to actually make it the parent of the other objects. The parent-child relationships are kept as a linking order (not the real parenting, but the same objects were always assigned the same link number when exported as a hierarchy of objects). Note that this last annotation is based off some tests I did years ago. The uploader code might have been changed in such a way that this behavior is no longer guaranteed, but it's worth a try.
  4. Well, it is a weighting problem. The importer won't recognize the rig when the number of joints detected exceeds the maximum allowed (110), and it breaks the deformation when the influences per vertex exceed the maximum (4). So, first you must make sure that the list of joints in your skinning is <= 110, then you should run a weights cleanup, setting the max number of influences to 4. The latter may break the deformations in Blender too, which means you have to paint-adjust it. Once it's compliant with those limits, your mesh will upload just fine (a quick script sketch for the cleanup step follows below).
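     A minimal Blender Python sketch of that cleanup pass, assuming the skinned mesh is the active object; the operators are standard bpy ones, but always check the result visually and paint-adjust where the deformation breaks:

```python
import bpy

obj = bpy.context.object  # the skinned mesh, assumed active

# Report how many deform groups (joints) the mesh actually uses;
# this count must stay at or below 110 for the SL importer.
print("vertex groups:", len(obj.vertex_groups))

bpy.ops.object.mode_set(mode='WEIGHT_PAINT')

# Remove near-zero weights so unused joints can be dropped from the export.
bpy.ops.object.vertex_group_clean(group_select_mode='ALL', limit=0.001)

# Cap every vertex at 4 influences, the SL maximum.
bpy.ops.object.vertex_group_limit_total(group_select_mode='ALL', limit=4)

# Re-normalize so the remaining weights still sum to 1 per vertex.
bpy.ops.object.vertex_group_normalize_all(group_select_mode='ALL')

bpy.ops.object.mode_set(mode='OBJECT')
```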
  5. I have a suggestion in that regard: toilet paper 😂
  6. No, it's not negligible at all. Adding hands means 30 more joints x number of frames, and that makes a huge difference. See, it's not just the keyframes that get exported: the anim file gets ALL the frames your software is instructed to sample for export. There is no smooth interpolation; from one frame to the next there is just a linear change. Now, Avastar has a sort of automated process where it compresses the animation as much as it can, and that's why it isn't as smooth as you'd like. The number of frames that get exported is less than what you set up at 20 fps, and between one frame and the next there is no smoothing, just a rotation change at constant speed from angle A to angle B. That's why higher fps means smoother, in SL: there simply are more frames to sample a smooth animation curve more accurately. There's a hidden trick in Avastar to instruct it not to resample the animation, and that's to add a marker in the action editor called "fix": the export scripts skip their optimization for the keyframe at that location and keep it as-is. So if you would like to retain the entirety of the animation data, add a "fix" marker to every frame along the length of your animation (you should do that with a script to avoid tearing your hair out; a sketch follows below). This will obviously result in a bigger file size. To decrease output file size, you should consider splitting the hands animation from the rest of the body by performing 2 separate exports, muting the channels of those joints that you're not exporting for that file. Then in world you should set up a device that starts the animations together, body and the associated hands animations.
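     A minimal Blender Python sketch of that marker script, assuming Avastar reads timeline markers named "fix" over the playback range (if your Avastar version expects pose markers on the action instead, the same loop applies to action.pose_markers):

```python
import bpy

scene = bpy.context.scene

# Add a "fix" marker on every frame of the playback range so the
# exporter keeps each sampled frame instead of compressing it away.
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.timeline_markers.new("fix", frame=f)
```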
  7. There simply aren't any that specifically target SL asset creation. My suggestion is to indeed get a book and learn from that; I personally prefer that approach as well. The benefit and understanding I got from books outweigh videos a thousandfold, in my opinion. With that said, at the time I was first learning, the Blender Foundation website had a shop section, and from there I got the book that really enlightened me and opened my mind to a whole new level. It was "Blender For Dummies", based on Blender 2.3 😂 jeez I'm old... Anyway, I think that's no longer published, but one thing is sure: learn the basics of the software, even if they aren't relevant to the niche you're aiming for. You'll adapt the concepts later on.
  8. PBR systems do not use specularity at a high level, meaning it's not exposed to user input. The point is, being Physically Based, it's handled internally from physical properties*, and that's the whole point of having a mask to mark the metallic areas as opposed to dielectric areas. What makes a visual difference, and what the user has to develop for the use case at hand, is how rough the surface is. That said, in some shaders, like the one used in Unreal Engine, someone thought to add a greyscale "specular level" input to further modulate the energy conservation property of a material (which basically means how much of the original light bounces off the surface), which modifies the principle of absorption. Other rendering engines add more options, but those are basically just "layers" of additional specular materials, like the so-called clearcoat, used to simulate the non-metallic specularity of a polished car on top of the metal paint. *The physical properties are hardcoded, because reflectance at normal incidence and energy absorption are known beforehand, and they fall within two general types, metallic and non-metallic (dielectric).
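     To illustrate the hardcoded part, here is a toy Python sketch of how a typical metal/roughness shader derives its base reflectance internally; the 0.04 dielectric value and the blend are the common convention of that workflow in general, not anything SL-specific:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def base_reflectance(base_color, metallic):
    """F0, the reflectance at normal incidence, per RGB channel.

    Dielectrics get a fixed ~4% reflectance regardless of color;
    metals reflect their own base color. The metallic mask just
    blends between these two hardcoded behaviors.
    """
    dielectric_f0 = 0.04
    return tuple(lerp(dielectric_f0, c, metallic) for c in base_color)

print(base_reflectance((0.8, 0.2, 0.2), metallic=0.0))  # plastic-like: (0.04, 0.04, 0.04)
print(base_reflectance((1.0, 0.77, 0.34), metallic=1.0))  # gold-like: tinted reflectance
```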
  9. In a PBR environment, the AO map mainly influences the strength of the specular reflection coming off the surface in the darker areas, because those areas are basically receiving less light to begin with. It also darkens the diffuse color.
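     As a toy illustration (plain Python, a deliberate simplification rather than any engine's actual shading code), occlusion in a metal/rough pipeline is commonly applied as a multiplier on the ambient terms, dimming both the specular response and the diffuse color in the occluded areas:

```python
def apply_ao(diffuse, specular, ao):
    """Dim indirect lighting by the occlusion factor (0 = fully occluded, 1 = open).

    A crevice with ao = 0.3 keeps only 30% of its ambient diffuse and
    ambient specular contribution, which is why occluded areas look
    both darker and less reflective.
    """
    return diffuse * ao, specular * ao

print(apply_ao(diffuse=0.8, specular=0.5, ao=0.3))  # roughly (0.24, 0.15)
```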
  10. That is a case for a custom joint position MESH to wear in order to achieve a persistent deformation, which would respect the slider influences and would be visible to anyone in range, newcomers included. And it stops and resets automatically on detach from the avatar.
  11. That's how undeformers work, actually. You just exported it as a looping animation, while you might have got the same result by having that same animation, not looping, with ease in and ease out values of 0. Undeformers are either full body, to avoid mistakes, or aimed at your own issue and run when detaching the attachment that started a problematic animation. OK, let's make this bit of information more clearly known and stated: on the servers, there is no skeleton, pose or animation information. The skeleton definitions used when rezzing, changing shapes or running animations all reside within the viewer's installation folders. All updates are sent from client to server, and the server then distributes those updates to all connected viewers in view range. Those updates are then interpreted through those definition files and finally shown on screen. Now, the problem with animated positions is that these are absolute positions from the avatar root. So if the user has a shape that has the jaw joint, say, 10 cm higher and 3 cm further back than the default, imposing the default position via animation would not account for the shape, placing the jaw 10 cm lower and 3 cm further forward compared to the user's shape.
  12. COG is fine taking the animation from the Hips joint. Now, if you look at the picture from the previous post, you had nothing driving the Torso from the mocap animation. You need to put something in the Torso and Chest slots, and you should see for yourself which joints, and what combination of joints, from the mocap work best, since you have multiple spine joints as input.
  13. Ouch! What you're describing is bad practice, because it basically means that you're exporting ALL joint positions when doing your little finger extension, including what you did not animate! So why is it bad? Because you're dumping unnecessary data into a file that would otherwise be way fewer bytes in size. Animations have a limit of either 60 seconds in length or 250 KB, whichever is hit first. You're wasting resources, and slowing down both viewer loading and server streaming, because of the bigger sizes. Not specifically the case with these really short animations, but still... everything adds up to the pile when you think of a crowded place.
  14. Spine1, spine2, spine3 and spine4 are extra joints from the Bento additions; they're folded over themselves and not intended to be used on default avatars. Remove those and you'll see that your avatar won't bend backwards anymore when the animation ends.
  15. It is a mess, because this time you were lucky enough that Catwa uses one of the two default sets, neutral or "default". Most other head creators have completely different joint positions for a big number of face joints, if not all of them. It gets even more complicated than that. Those presets you see in Avastar are the two basic shapes: nothing applied, and the Ruth shape everyone gets when logging in for the first time or creating a new shape. Also, the system works by manipulating the scale of some joints to achieve the new position, and consequently their child joints. So everything should really be rigged to the neutral shape, because all joint scales are at 1,1,1 (except collision volume bones, but that's another layer of complexity I'll skip here)... However... The male skeleton doesn't have a neutral shape; it's the result of shaping the base skeleton even further through scale. Unfortunately, Blender users can't see this, because Blender doesn't support bind poses. And therefore Avastar does all the heavy lifting to translate the SL system into something Blender can use and understand.
  16. Evidently the head you're wearing has a different set of eyebrow joint positions than the default avatar provided with Avastar. Therefore the animated positions overrode the head's custom positions, and those persist because, well, they're different to begin with and their default zero positions differ.
  17. Your animation is most likely including the extra spine joints, which do not reset nor have a base animation to go back to. In your retargeting destination rig, make sure that the spine joints aren't included to begin with (a quick script sketch for stripping those channels follows below).
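      A minimal Blender Python sketch to strip those channels from the action you're about to export, assuming the extra joints use the standard mSpine1-mSpine4 names (adjust the set to whatever your retargeter actually produced):

```python
import bpy

action = bpy.context.object.animation_data.action  # action about to be exported
extra_spines = {"mSpine1", "mSpine2", "mSpine3", "mSpine4"}  # assumed bone names

# Remove every f-curve that drives one of the extra spine joints,
# so the exporter has nothing to write for them.
for fc in list(action.fcurves):
    if any('"%s"' % name in fc.data_path for name in extra_spines):
        action.fcurves.remove(fc)
```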
  18. When the jaw juts forward, it's not only the chin that moves; the bones do not stretch. So if you have to move something forward, it is again the jaw bone. Let me suggest this approach then: keyframe the jaw before the motion start time, then copy-paste this same keyframe to a later time on the timeline. Perform the movement in between. When exporting, include about 12 MORE frames at the end of the animation, and set the ease out value to 0.5 (12 frames if you're working at 24 fps, 15 frames at 30 fps, and so on). This should ensure that the animation dislocates the joint as you want it, but also places it back at the initial position. The extra frames are there to ensure that the ease out time can accommodate the whole movement and not leave the joint at an intermediate location. (A small script sketch of this setup follows below.)
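      A minimal Blender Python sketch of that keyframing setup, assuming the SL jaw bone is named mFaceJaw and the motion happens between two frames you pick yourself; the frame numbers and the 12-frame padding are just the example figures from the post:

```python
import bpy

arm = bpy.context.object              # the armature, assumed active in Pose mode
jaw = arm.pose.bones["mFaceJaw"]      # bone name assumed; check your rig

motion_start, motion_end = 10, 40     # frames where the jutting movement happens

# Key the rest rotation just before the motion and again right after it,
# so the jaw ends up back where it started.
for f in (motion_start - 1, motion_end + 1):
    jaw.keyframe_insert(data_path="rotation_quaternion", frame=f)

# Animate the actual jut between motion_start and motion_end as usual,
# then leave ~12 extra frames (at 24 fps) after the last key so the
# 0.5 ease-out set at upload has room to finish the return movement.
bpy.context.scene.frame_end = motion_end + 1 + 12
```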
  19. So you're moving the position of the jaw end joint, and not actually using the jaw joint. To open the mouth, you don't need to unlock the end of the joint to reposition it; what you need is to ROTATE the jaw bone. This helps avoid possible deformations due to stretching the length of the main jaw joint, and/or not returning the jaw end joint to its correct rest location. Moreover, rotating the main jaw joint ensures a circular arc for the motion, achieving the same effect as moving the jaw end, if that is what the mesh is skinned to.
  20. LL needs to be proved wrong before they admit, with their actions, that what they claimed to be not possible is indeed possible by just putting in the necessary work (and the will to work...).
  21. Not a bug, it's intended and known behavior: all body joints have a base animation they always return to, after a custom animation is stopped. Except for the bento joints, which do not have any base animation layer, and therefore they stay at their last rotation or position value. You need to implement a base animation layer, in which you loop the pose of your intended "default" state, in this case the jaw staying at rotation 0,0,0. This way, when your animation stops, the jaw will go back to the base animation layer in the amount of time defined by the ease out value you set when you uploaded your animation
  22. I guess that passing any number of triangles to the GPU would work with its backface feature. This means that whatever gets culled is not being sent, and the shader on the GPU renders the backfaces of the triangles that aren't culled. If the system is not organized like this, that's something that needs to be implemented, because it can be done and it's nothing new. I don't see the problem with keeping the culling mechanism and sending the result to the GPU to perform its operations. Unless, as usual, the LL code has some subpar limitation in this regard that I can't be aware of.
  23. Imposters!

    Well, it's not because of the loose vertex. Having the bounding box extended to a cube using a triangle forces the mesh to be treated as a cube-proportioned mesh, and during the conversion within the uploader the vertex normals do not get squashed, which is what would need the recalculations I mentioned. Then, about the swap distance: do you realize that the first part of the formula is actually the vector magnitude? Of course having one of the components bigger than the other two affects the bounding box significantly; still, you can get the same magnitude with smaller vector components taken collectively, and that reduces the single-axis offset (quick numbers below).
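    Quick numbers on that magnitude point (plain Python, with example dimensions of my own choosing rather than anyone's actual mesh):

```python
from math import sqrt

def magnitude(v):
    # Euclidean length of the bounding box vector, the "first part of the formula".
    return sqrt(sum(c * c for c in v))

lopsided = (2.0, 0.1, 0.1)   # one long axis, the stray-vertex style of extension
balanced = (1.2, 1.2, 1.2)   # similar reach spread over all three axes

print(magnitude(lopsided))   # ~2.005
print(magnitude(balanced))   # ~2.078
```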
  24. Imposters!

    Sorry to intervene on this, but I dissent about these statements. This method might be cooler, but coolness is a subjective feeling. However, it's technically not elegant, and it may introduce exceptions in the exported files and their handling. First, think of the mesh as a polylist: with a stray vertex, you're breaking the standard and introducing an exception, where a vertex that doesn't pertain to any face has to be listed. Indeed you need to make sure it is vertex 0 or it gets pruned by the uploader, so you basically brute force it into the definition. There is no context where brute forcing is cool or elegant. Secondly, with this method you force an extension of the bounding box in just one or at most 2 axes, and in the latter case you also offset the pivot point sideways.

    Instead, using a stray triangle gives benefits only. First benefit: you keep the definition as a polylist. Second benefit: if you make it into a degenerate triangle (a zero-area polygon, for whoever is wondering what that is... basically a triangle in which two vertices overlap) it doesn't render, but it allows you to place it across a cube diagonal, extending the bounding box in all directions while also keeping the pivot point location untouched, if done carefully. From the second benefit we also get a third one: vertex normals do not have to be recalculated during upload, so the problem we had until some time ago with mismatched normals would not appear to begin with. I know, a fix was introduced by Beq Janus, and I believe it's been put in the official LL viewer too. However, if one can avoid this recalculation to begin with, the better, because this fix works only when hardware shaders are on. With hardware shaders off, the mesh surfaces show the vertex normal inconsistencies. The fourth benefit of doing a cubic extension is that the LOD switching happens the same way, just with a better volume distribution around the object and a minimized offset from the surface in all directions; though this benefit might be subjective and dependent on the specific use case, so an exception where a side-offset bounding box is preferable over the evenly distributed version is perfectly admissible... Still, a triangle is better than the stray vertex. The argument of saving one triangle and 2 vertices, instead, is ridiculous, no offense meant. I mean, excessive load would never come from that +1 triangle, come on. (A small script sketch of the degenerate-triangle trick follows below.)
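    A minimal Blender Python sketch of that degenerate-triangle trick, run in Object Mode with the target mesh active; the cube size here is an arbitrary example, so set the corners to whatever bounding box you actually want:

```python
import bpy
import bmesh

obj = bpy.context.object          # the mesh to pad, assumed active, in Object Mode
half = 1.0                        # half-size of the desired cubic bounding box (example value)

bm = bmesh.new()
bm.from_mesh(obj.data)

# Two opposite corners of the cube, placed symmetrically around the
# object origin so the pivot point stays where it is.
a = bm.verts.new((-half, -half, -half))
b = bm.verts.new(( half,  half,  half))
c = bm.verts.new(( half,  half,  half))   # duplicate position: zero-area triangle

# The degenerate face keeps the mesh a pure polylist, never renders,
# and stretches the bounding box along the full cube diagonal.
bm.faces.new((a, b, c))

bm.to_mesh(obj.data)
bm.free()
obj.data.update()
```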