Everything posted by OptimoMaximo

  1. I do agree with this. It applies to everything. I don't know about that specific case, but balanced criticism might be seen as unbalanced or as only superficially balanced, just as a merchant might be upset at any criticism whatsoever, as you say. It's a minefield, really, and I think it's the ground for the many feature requests regarding the ability to ban specific residents from MP stores. At least that way you prevent someone from buying any more of your products; disabling an already purchased item isn't fair at all.
  2. Because I don't want to deal with the Marketplace (packing, vendor, listing, description, etc.) and customers.
  3. Oh, and for the record: I make stuff for a brand. If the owner ever asked me to make a 4096 texture for a product and turn it into the equivalent sixteen 1024 textures, I would do it. I get my payments IF I abide by the request.
  4. Who said I actually did it that way in the final product? I'm showing my method and did as many of them as could fit in a screen capture to prove the point at hand. As long as you can paint on the model, yes. Mudbox, for one, manages UV tiles just fine. Doing it planarly in Photoshop or similar can't give good results with my method, since the final model also gets squared-out UV shells (not the ones you see there) for even better texture space usage, which is why I do a transfer bake from one UV set to another. Manual painting can't sort out the UV distortion, while baking does.
  5. I don't condone this practice, but I see the reasoning behind it. None of the following comes from me; it's pure analysis of the thought process behind this behavior. These merchants get an unfavorable review, which means the customer doesn't like the product: "If you don't like it, then we 'switch it off' for you; what's the problem? You first harshly criticize the product, then you keep using it?"
  6. Then I am out of ideas regarding this issue. Paging @Gaia Clary, who might have more insights to share.
  7. The two T-pose icons on the left: hover your mouse over each one and you should see "Neutral" on one of them.
  8. Then it's most likely the shape issue, at least for the dislocation of the forearm. I'm pretty positive that you've been using the Default female, rather than the Neutral (check in the shape presets). When doing joint positioning, you should start from the Neutral shape.
  9. If someone told you there is a vector magnitude to calculate, would you be able to follow? Because that's the type of math involved, plus quite a few more variables, such as the number of triangles in each LoD, all entangled in the final result.
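For illustration only, here is a minimal sketch of that kind of calculation; the values and the use described in the comments are assumptions, not the actual land impact formula, which also weighs per-LoD triangle counts and more:

```python
import math

def magnitude(v):
    """Euclidean length of a 3D vector (x, y, z)."""
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

# Hypothetical half-dimensions of an object's bounding box, in meters.
half_extents = (1.2, 0.4, 0.7)
print(magnitude(half_extents))  # ~1.445: the kind of radius value the LoD/weight math builds on
```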
  10. Duh! I didn't see this... I opened it with Blender, and it says that the bones were not saved: http://prntscr.com/m7vrz8 http://prntscr.com/m7vs9n
  11. This is most likely due to the starting shape you rigged from, in addition to the other item you're wearing on top. You should always begin the joint repositioning from the Neutral shape, not the default shape. This is an issue with joint orientations. It's not visible in Blender right away, but when you go into Edit Mode on your skeleton, in the side panel (toggled with N) you can find a "Bone Roll" value. Always make sure that, after the bone position is correct, this value is set to zero.
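If you'd rather do that in one go, here is a minimal sketch for Blender's Python console, assuming the armature is the active object:

```python
import bpy

# Assumes the armature object is active; run from Object Mode.
arm = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')

for eb in arm.data.edit_bones:
    eb.roll = 0.0  # zero the roll once the bone's head/tail positions are final

bpy.ops.object.mode_set(mode='OBJECT')
```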
  12. You're most welcome, hopefully things are a bit clearer now. Something to expand on a little more, I think, reading your initial post: while this seems to be the case for what we can see displayed on screen, it's quite a bit misleading (not what you say, the concept itself). What makes a human being articulated? Not the bones. If we had one single solid bone holding up our shape, we couldn't move. The pivotal point (in all of its possible meanings) is the joint structure. The SL preview that you rightly show in the OP is also based off joints. What we perceive as a "bone" is actually a visual representation of the non-unit vector that leads to the next joint in the hierarchy. Every joint has its own local position relative to its parent (a vector), and the bone is nothing more than a line (the vector's magnitude) representing that value.
This is why we have a few problems in SL when it comes to animation. There is too much entanglement among the three matrix types that govern a skeleton (location, rotation and scale matrices), something that is never done in game development. Mixing rotation and scale matrices on the same bone is a bad idea per se, but in animation we also have an interaction with the location matrix at the same time. Indeed, that's why we have collision volume bones for shaping the volume of our mesh avatars while location is handled via the main joints' scale: keep the two matrices separated as much as possible, working on different joints.
The system avatar uses a different method, since its data is shared across users locally, using blendshapes (Blender's shape keys) for the volumes through the viewer, and distances between joints through the skeleton. The volume shapes were defined using the collision volume bones and turned into vertex-animation-based morphs; they were/are streamed to the viewers as local data, and the collision volume bones were disabled for that feature but kept in the skeleton for realtime IK inworld (theoretically, since this feature is broken except for the default animations). With fitted mesh, LL needed to re-enable those collision volume bones in order to give rigged meshes those fitting capabilities.
So you can see that collision volumes act within the scale matrix, while the regular joints mostly work within the rotation matrix, with one axis's scale used to alter the location of its child joint(s). That's the reason for the slight imperfections we see on tight fitted clothing that fits almost everyone's shape (alpha masking ftw!). Add a location matrix, and we get a matrix math massacre party, where one matrix (usually the location) disagrees and fights with another (usually the scale, because that tries to emulate the positioning of joints via scale).
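To put a number on the "scale used to alter the location of its child joint" part, here is a tiny sketch with made-up values (not real SL skeleton data):

```python
# How scaling a parent joint shifts where its child joint appears in space.
child_local = (0.0, 0.26, 0.0)   # child joint's offset from its parent, meters (assumed)
parent_scale = (1.0, 1.15, 1.0)  # shape-driven scale on the parent joint (assumed)

# In the parent's space the child ends up at parent_scale * child_local, component-wise:
apparent = tuple(s * c for s, c in zip(parent_scale, child_local))
print(apparent)  # ~(0.0, 0.299, 0.0): the joint LOOKS ~3.9 cm further out with no translation at all
```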
  13. That's another issue that has to do with rigging, actually.
  14. Sorry for getting you wrong previously. Indeed, the two things must be kept separate. The way the data is sampled and calculated means they simply exclude each other in the process, if we want to keep things functional.
First of all, let's clarify what translation animation is supposed to handle. Joint repositioning during an animation was initially intended only as a means for "shape shifting", literally. That's why, originally, animations could not include position data for anything except the hip joint (obviously). That's also why mesh import gives us an option for joint positions: you want to keep the custom positioning this avatar was built on and animate it with rotation data only, since the joint positions are assumed to be embedded in the mesh. With the advent of the .anim file format, more bones could be exported with translation data that had a meaningful use with no avatar disruption/destruction, for instance the attachment points or some collision volume bones to emulate muscle bulging.
I intentionally did not use the term "shape", in favor of the more generic "custom positioning", as this may be a little confusing: for a SL character, a shape is a set of scale transforms applied to specific sets of joints to give us the inworld shaping capabilities we all know. The default male and female have, for example, some scale applied on the local Y axis of their arms, and that's supposed to be the default. In Blender+Avastar you can see that on the female avatar when you switch from the default shape to Neutral (the male is another handful of issues, with some collision volume bones positioned differently from the female). Neutral is the original shape with all scale transforms set to 1, and the character is way smaller and squashed. However, Blender doesn't support bind poses, and meshes can be rigged only to neutral-transform bones. All the compatibility work for SL is handled via script when exporting, so you can't directly see any scaling on the bones within the interface. Because of this, scale animation cannot be imported or set (except for the hip bone, but that's another story...), so here is where the pain begins with translation-based animations.
Let's leave out the custom joint positions some meshes might have. Just figure that you want a specific shape to animate with (and the default avatar IS a shape, male or female doesn't matter). All of the joint locations, visually, are somewhere in space as a result of their scale, and all the child bones inherit that shifting in a cascading fashion, because many have their own scale value. However, each transform actually has its own pivot point, and since what is sampled is only position and rotation, you may agree that rotation is not an issue (how many degrees has a joint rotated?), but when it comes to position, what are we going to sample? Each transform has its own pivot point, and we said that the scale factor of each joint determines where in space this joint LOOKS to be, but is that its true location? When you then export a BVH file, the joints' T-pose raw data is written in the file's header without accounting for the scale (which is not handled at all by the importer, even if you include it), and only then do you get the array of values over time that make up the animation. The conversion from BVH to the internal .anim establishes the position for all joints, including those that aren't animated, and as a result of animation playback you get all of them snapping to the Neutral shape's absolute locations.
With the .anim format you get the same or a similar result, only applied to the joints that were actually animated and not to all of them. In my plug-in for Maya I partially circumvented this issue by freezing the export skeleton's transformations in place where the shape had moved the joints to; however, this doesn't make it exempt from undesirable shape shifting. It is just a more subtle visual effect, assuming that the final user is starting from the same shape I used (or that this method is used on custom joint positions; when using a default human avatar skeleton, shifting occurs anyway).
But then Bento came into play and things got even worse, especially when heads and fingers are added to the equation. LL has done some automated fixing inworld so there's not really much to worry about, BUT... the way joint position is calculated is not the distance from the parent joint (yay! 🤦‍♂️); it is rather the joint's distance from the center of the scene (so as to keep the individual joint data independent, in case one or more of its parents are missing in the file, and to avoid cascades of data only used for reference). The automated calculations are then based off the Neutral shape (remember? no scaling on any joint, so no scale-induced repositioning in space), then the shape values are applied, and finally the animation data is transposed by the accumulated shifting that occurs across all the involved bones. Therefore, when location data needs to be included, the greatest precision in animation is achieved by animating on a head (or fingers) that was made and rigged on and for the Neutral shape.
With all this being said, it should now be clear why location animation should be kept separate from the general animation (rotation only). Of course it is easier to make the whole animation with face and fingers together, and it may also work well on a custom (joint position) avatar. However, since the majority of users rely on default-compliant avatars with shape capabilities, this becomes a mess to handle and results in what @Fionalein complains about (rightfully). Being able to export everything (location and rotation) in one file doesn't necessarily mean that one should, for many reasons. From a technical standpoint, this can result in unwanted behaviors (Fiona's case). From an artistic standpoint, it "freezes" a body animation to a specific set of face expressions. For hands and fingers it is not a big problem, as fingers usually do not dislocate or stretch, and location data isn't even necessary since the joint positions are defined in the mesh hands. From a marketing standpoint, you can offer a wider range of customization by diversifying the animations that can play together (if you keep them separated). All the issues arise from the face rig and the need to use location data both in the mesh (head and facial features require that) and in animation (because of the single-joint system in use, which requires the movement of a joint to get a certain mesh surface deformation). And indeed, moving the joints with animation gives "shape shifting" as a result, from every standpoint.
So, if you wish to get a "naming convention" to define the two things, we can pretty much use this: joint translation animation can be "shape shifting animation" (when it occurs on joints with weights on the avatar mesh), as opposed to "character animation" when it is intended to animate a shape-defined character.
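As a side note on "distance from the center of the scene": a minimal sketch of the idea, with made-up offsets rather than real skeleton data, is just an accumulation of parent offsets up the chain:

```python
# Accumulating each joint's offset from its parent yields the position measured
# from the scene origin, which is the value the importer ends up working with.
chain_local_offsets = [
    (0.0, 0.0, 1.00),  # hip above the origin (assumed values)
    (0.0, 0.0, 0.25),  # chest relative to hip
    (0.0, 0.0, 0.25),  # neck relative to chest
    (0.0, 0.0, 0.08),  # head relative to neck
]

world = (0.0, 0.0, 0.0)
for offset in chain_local_offsets:
    world = tuple(w + o for w, o in zip(world, offset))

print(world)  # ~(0.0, 0.0, 1.58): head position relative to the scene center, not to its parent
```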
  15. It's most likely a vertex normal issue. To fix it, you should make sure that those lines of overlapping vertices share the same vertex normal orientations, or the lighting will make that seam visible (which is why it shows only from certain angles). If you're exporting the pieces to one single file, there should be an option to weld the vertex normals of adjacent objects. If, instead, you want to export the various pieces separately, you can try to edit the vertex normals, but I don't know whether Blender has this tool. If it doesn't, join the pieces together, merge the overlapping vertices so that their vertex normals get unified, and split the parts back, so that the newly recreated rows of vertices share the same vertex normal orientation. Hopefully this helps 🙂
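For the "join and merge" route, here is a minimal Blender sketch, assuming the pieces are already joined into one active mesh object (in the Blender versions I've used, Merge by Distance is exposed to Python as remove_doubles):

```python
import bpy

# Merge coincident vertices so the overlapping rows end up sharing one normal each.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0001)
bpy.ops.object.mode_set(mode='OBJECT')
```
After that you can separate the parts again, so the recreated border vertices keep the unified normals.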
  16. A little imprecise. Animations actually DO have both translation and rotation means of export, depending on their use. Your point here assumes that the control rig's bones are being exported, while in reality it's the SL skeleton bones controlled by the animation rig that are sampled upon export. Especially those used for IK: those are effectors for an IK solver, which actually solves the rotation of the controlled bones in order to fit the IK controllers' transforms. Therefore the keyframed transforms on those effectors do not reflect the actual bones' channel data that the export bones collect from the solver, even when translation is enabled (because there is no translation channel). For sure it does. That comes from the buggy method that the BVH file reader/translator uses to handle the file write upon import to SL when location data is enabled during export, writing down the joint positions of the whole hierarchy (unavoidable, as it is key in the BVH file syntax). .anim files are more effective, since you actually export only the animated bones (but the problem still occurs on those that are exported).
  17. This doesn't mean it's really OK to do the same. A full body is a more complex shape, and a bigger one on screen, than a ring; shoes too are bigger and more complex shapes. The issue is actually in where those meshes come from: they were made for rendering, not for realtime environments like SL or any other game. Nope, it's your object. The "Operating System" has nothing to do with how an object is constructed; it uploads and displays the content of a file it is fed by the user. You may want to go back to your 3D app and check whether there is a smoothing modifier attached to it. Blender calls it "Subsurf" or "Subdivision Surface", 3ds Max "TurboSmooth" and Maya just "Smooth". Depending on the settings, the export procedure may apply such smoothing to the exported model while not showing it in your software's viewport.
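If you're in Blender, a quick way to check is a small sketch like this in the Python console; it just lists subdivision modifiers on the active object, nothing specific to any SL tool:

```python
import bpy

# List any subdivision-type modifiers on the active object, since exporters
# can apply them at export time depending on their settings.
obj = bpy.context.active_object
for mod in obj.modifiers:
    if mod.type == 'SUBSURF':
        print(mod.name, "viewport levels:", mod.levels, "render levels:", mod.render_levels)
```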
  18. If one of the axis sizes is less than 1 cm, that axis size will be clamped up to 1 cm, which is the smallest dimension an object can have in any direction. Basically, your ring's thickness is not thick enough. On a side note: don't you think that 68460 triangles just for a ring is an overly excessive polycount?
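In other words, a trivial sketch of that size floor, with a made-up ring size:

```python
MIN_DIM = 0.01  # meters: the smallest size allowed on any axis

def clamp_dimensions(size_xyz):
    return tuple(max(axis, MIN_DIM) for axis in size_xyz)

# A ring 4 cm wide but only 3 mm thick gets its thin axis pushed up to 1 cm.
print(clamp_dimensions((0.04, 0.04, 0.003)))  # (0.04, 0.04, 0.01)
```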
  19. The comment about not being able to export "as is" was from Kyrah, not me. What she meant is that you can't export the modified structure itself, as the SL internal simulator has no clue about it and doesn't accept modifications to the established skeleton hierarchy. The resulting animation would be applied to the SL avatar skeleton and, as such, will run with no problem (provided that you don't add bones or modify the hierarchy structure). You can reposition and change the shape of the bones as you like, as long as you don't change the parent-child relationships within the SL avatar's skeleton; the control rig can have whatever hierarchy you want and will constrain the original bones to follow, which is why an animation control rig comes in handy when you need to simulate a different structure. An IK setup, whether with a spline or not, falls within that definition: it stays in your 3D software to let you animate as you wish, controlling an established and immutable underlying structure (the SL avatar skeleton and its hierarchy).
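As a minimal sketch of the "control rig drives the SL bones" idea in Blender (object and bone names below are placeholders, not from any specific devkit):

```python
import bpy

# Make an SL skeleton bone follow a control-rig bone via a constraint, so the
# exported animation samples the SL bone while you animate the control.
arm = bpy.data.objects["Armature"]
sl_bone = arm.pose.bones["mShoulderLeft"]      # SL skeleton bone
con = sl_bone.constraints.new('COPY_ROTATION')
con.target = arm
con.subtarget = "CTRL_shoulder_L"              # hypothetical control bone
```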
  20. Translation: "So I would need references for a normal body, in the sizes that the current magazines use: XXS, XS, S, M, L, XL, XXL." You have to submit an application to the brands' owners, requesting what is called a "DevKit" and specifying the 3D software you use. Most of them supply a working model for Blender based on a plug-in called Avastar, for sale on the Machinimatrix website or inworld at the Jass sim (at least it was the last time I checked; anyone with more up-to-date info in this regard, please correct me). Before the "but I don't want to pay for a plug-in, can I just use plain Blender?" question comes up, please consider reading this thread. If you have the necessary expertise to overcome the underlying incompatibility issues between plain Blender and SL, which Avastar handles for you automatically, then you can use plain Blender with no plug-in.
  21. The IK spline itself stays within Blender. You can export the resulting animations to use them in SL, so the motion you see in Blender as a result of the IK spline setup can be played back in SL.
  22. An IK spline is an IK system that works in tandem with a spline or curve, so that the IK chain depends on the curve rather than on a target. Curves can then be affected by physics by making them dynamic, or be manipulated at the curve points using cluster handles (or what Blender calls Hooks). However, if what is shown in this video serves your purpose well, go ahead with it; it's simpler to set up.
  23. I somehow missed this part. This is something that shouldn't happen, and it doesn't when using a dedicated alpha channel. Moreover, using a dedicated alpha also allows you to make partial transparencies that work with alpha masking, at a minimal rendering cost and with no Z-buffer / depth fighting with other alpha textures. EDIT to add an example: glass in a window. I know that the alpha mask blending mode isn't optimal for finely cut contours.