Everything posted by OptimoMaximo

  1. I am one of those who made an animation exporter for Maya, and the per-joint priority feature is quite clear in my implementation. However, users who actually need or care to differentiate joint priorities aren't common at all. As I was saying in the previous post, the per-joint priority values are the only ones that actually matter. The "global" value is just a shortcut to set all joints to the same value, while providing a convenient interface to set the header info. Many years ago I collaborated on the making of Avastar for the animation and rigging features (the horse used to this day on the Avalab website is the one I made for the quadrupedal rigging tutorial), with the original coder Magus Freston, and that is the most sound and logical course of action. It is the same route used by the BVH uploader in the viewer, by the way. The Lab was just too lazy to implement an interface to allow per-joint priorities in their BVH uploader, even though it could have been done from the very start of SL.
  2. Actually, the specific joint priority is the only influential value in all the serialized data; the "global" priority has no effect on animation execution. It is the header, and as with most file formats' headers, its function is just to provide general information (like the number of animated joints, for example), not a list of joint names, as it currently stands. Now, if we had the chance to change priorities as you said, how would that be implemented? Reading the header is quick, but getting a list of the involved joints means the entire animation has to be deserialized first, then the user needs a UI to pick a joint from and a value field to change that joint's priority. Then what? Save it back to the asset? How would that behave at that point? If those changes went into an asset you only got a LICENSE TO USE, would every single derived copy be affected? Or would that save a new asset, which you would be marked as creator of if we follow the current creation pipeline, when clearly you aren't? Then you pay the fee for the new asset creation, but the original creator is still the creator? And the original permissions? If those are inherited from the original, you pay for an asset that you do not own anyway? And if the changes aren't saved, they would be valid only on a per-session basis. Not really practical. Much food for thought.
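To make the point above concrete, here is a minimal Python sketch of reading that header, assuming the field layout of the publicly documented internal .anim format (little-endian). Note that the header ends at the joint count: the joint names, and their per-joint priorities, only exist inside the joint blocks that follow.

```python
import struct

def read_anim_header(data):
    """Parse the fixed .anim header. Field order follows the
    publicly documented internal animation format; treat this
    as a sketch, not a validated parser."""
    version, sub_version, base_priority, duration = struct.unpack_from('<HHif', data, 0)
    offset = 12
    end = data.index(b'\x00', offset)              # emote name: null-terminated ASCII
    emote_name = data[offset:end].decode('ascii')
    offset = end + 1
    (loop_in, loop_out, loop, ease_in, ease_out,
     hand_pose, num_joints) = struct.unpack_from('<ffiffII', data, offset)
    # The header says HOW MANY joints follow, not WHICH ones: to list
    # joint names (and their priorities) you must deserialize the whole
    # chain of joint blocks after this point.
    return {'version': version, 'sub_version': sub_version,
            'base_priority': base_priority, 'duration': duration,
            'emote_name': emote_name, 'num_joints': num_joints}
```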
  3. Priority is serialized in the binary-encoded anim file, which has to match a specific layout; data blocks can't be omitted, otherwise the file can't be deserialized.
  4. Constraints in SL animations are made to target one collision volume bone to another collision volume bone on the same avatar, and the constraint should be animated to get the desired result. So the offset between the constraining object and the constrained one should get a value every frame, in accordance with the animation itself. A detail that, in this specific case, doesn't help the OP. You'd say "ah, but the contact between feet and floor doesn't include a collision bone, the floor doesn't have one", which is correct in rigging terms, but the skeleton definition actually has an otherwise unavailable joint at coordinates 0,0,0 of the avatar, called Root, which works as a landmark for the ground plane height. In my animation plug-in, which exports anim format files, I've tried the impossible to make it work as intended (animated offset and all that), to no avail. The only result I got was a limb that sticks to some other point, with no offset, so the mesh always intersects with some other mesh. Basically useless, so I dropped the idea.
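For the curious, a hedged Python sketch of how one constraint block serializes, going by my reading of the documented layout (field names and order are my interpretation). The thing to notice is that each constraint carries a single static offset vector, with no slot for per-frame offset keys, which is exactly why animating the offset goes nowhere:

```python
import struct

def write_constraint(chain_length, constraint_type,
                     source_volume, source_offset,
                     target_volume, target_offset, target_dir,
                     ease_in_start, ease_in_stop,
                     ease_out_start, ease_out_stop):
    """One .anim constraint block: volume names are fixed 16-byte
    fields and each offset is one static vector -- no keyframes."""
    def vol(name):
        # pad/truncate a collision volume name to its 16-byte field
        return name.encode('ascii')[:16].ljust(16, b'\x00')
    return (struct.pack('<BB', chain_length, constraint_type)
            + vol(source_volume) + struct.pack('<3f', *source_offset)
            + vol(target_volume) + struct.pack('<3f', *target_offset)
            + struct.pack('<3f', *target_dir)
            + struct.pack('<4f', ease_in_start, ease_in_stop,
                          ease_out_start, ease_out_stop))
```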
  5. @BinBash @fluffy sharkfin This is a debate that I've been having for a while, and I'm quite tired of it, to be honest. The problem with redefining something that has been solidly established for years is not only that it's a source of unnecessary confusion; it mostly lies in the premise that SL is somehow something special in regard to assets and workflows, enough to allow the creation of a new word. Under this view, someone who creates things using prims should be a primmer, someone who does the same with sculpt maps should be a sculptprimmer or a primsculpter (?), someone who works on textures should be a texturer, and so on. The term "rigger" is already an overstatement in SL, since the process of rigging in reality doesn't end at weight painting and includes many more things, but it's still somewhat acceptable because the original term includes the skin weighting operation. So forgive me if I don't agree with your view. I won't boast here about how many years of experience I have in games and films to back what I say; just reading anything that isn't related to SL, in all the fields regarding 3D productions of every type, should clarify what I mean in the previous paragraph of this post.
  6. PBR implementation would also require a texture type filter, for instance flagging normal maps as such to get them uploaded and converted to JPEG2000 as 16-bit, which is required for normal maps that actually work as intended. And perhaps also scalar-value texture packing as used in Unreal Engine, with the greyscale-based images packed together into the channels of a single color texture to get ambient occlusion, roughness and metalness in one map. The good thing is that these textures don't even need to be 16-bit. Optionally, a height map could also be placed in the alpha channel to be used for parallax displacement, as seen in the Unity engine.
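As a quick illustration of that channel packing, a small Python sketch using Pillow; the R=AO, G=roughness, B=metalness channel order and the file names are just examples, not a requirement:

```python
from PIL import Image

def pack_orm(ao_path, rough_path, metal_path, out_path, size=(1024, 1024)):
    """Pack three greyscale maps into one RGB texture, Unreal-style:
    R = ambient occlusion, G = roughness, B = metalness. A height map
    could go into an alpha channel the same way for parallax."""
    ao    = Image.open(ao_path).convert('L').resize(size)
    rough = Image.open(rough_path).convert('L').resize(size)
    metal = Image.open(metal_path).convert('L').resize(size)
    Image.merge('RGB', (ao, rough, metal)).save(out_path)

# e.g. pack_orm('ao.png', 'rough.png', 'metal.png', 'orm.png')
```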
  7. What would you expect from someone that looks for a MODELER but calls it MESHER? 🤣
  8. It would help if you provided a little more context for your request. Aside from the fact that .obj files are a static mesh format, without skeletons or deformation definitions, so they would basically be useless for the stated purpose... what application do you use? Source material might be available in one form or another depending on that; for instance, Daz 3D has its own set of resources, which are specific to it compared to what you would be looking for if, for example, you were working with 3dsMax, Maya or Blender. Also, the topic of animation export from each of these applications is not as straightforward as one would think, and there are specific plug-ins needed for that, so you would get that information without pulling all the hair off your head trying to figure out why things don't work properly, if at all. Help us help you 😁
  9. And you keep pushing comparisons with a type of software that was DESIGNED to do exactly all this. SL has no built-in design for these features right now. It's not a matter of how complicated it is to calculate those points via a matrix or vector map; the problem is the underlying structure, which currently needs an agent to be able to stream these changes over to another agent. That, along with the lack of a server-side code base that could process animation data (which never reaches the server), makes what you are requesting impossible to achieve. Remember that the server only knows that you're playing an animation and sends the same file name to others, but its contents are unknown to it; the animation is read and played back on the avatar in the viewer. Ever noticed how two different viewers may see the same avatar playing the same animation, yet the two might not be synchronized between the two viewers? I really don't know how else I could try to explain this to you by now.
  10. I won't get into details with elaborate answers as to why this or that, but I will point out a couple of facts: Can you sit on an avatar attachment, even though its intended use is to be a vehicle? Can two avatars walk hand in hand with realistic arm interactions, such as connected IKs, without making animations specific to the involved avatars? Systems have limitations. Your, or anyone's, niche need for any "realism" stands within those limitations, which CAN be changed, but usually at the cost of breaking things. Standalone game engines, as much as Blender or any other 3D app, run standalone, and backward compatibility is never guaranteed. Meaning that if you upgrade your Unreal Engine, Unity, Blender or Maya from one version to another, there is no guarantee that you won't be forced to redo some, if not all, of the work on features that were involved, directly or indirectly, in the implementation of the new "shinies". Is SL the type of platform that can afford such an eventuality?
  11. An agent is basically the structure, with the server-side code, that makes a viewer work: it's how you get an inventory, a shape editor window, an avatar, its collision box, the name tag on top of the avatar, the voice dot and the related spatial positioning for streaming voice, etc... including the attachment points and the collision volume bones with their connections to the shape sliders. Which isn't currently possible for animesh for many reasons, the same reasons for the lack of support for shape sliders and attachment points. These are defined in a separate file that gets applied at login, while the base joints can be found in another. The agent assembles the two definitions at runtime. The second file defines the connections with the sliders, their ranges, which attributes of a joint are affected, and the hierarchy placement of such "add-on" joints. All contained in your viewer, the thing that creates the agent when you log in. So, the answer to your question would be yes, if each animesh got a scripted viewer to control it. But that's a bot, and this would defeat the whole purpose of animesh. There needs to be another type of system, but apparently that isn't on the current LL interests list.
  12. Attachments are just joints, so you can add rigid clothing components to animesh objects in the form of rigged items, which can be animated. Still, the traditional concept of attachment points that translate objects is not supported because, again, it is managed viewer side. It's the only exception that gets streamed over to other viewers and, again, the server is not aware of that stuff. It's a sort of built-in, rudimentary skinning to a joint, but it's not the type of feature you're thinking it is. Animesh objects do not have an associated agent to arrange the necessary structure required, at the time of this writing.
  13. Both contexts, from Friday and from Virtual, are things that are played in the viewer, while the server is completely unaware of them. That is where the problem sits. The feature you're talking about would require such visual, viewer-side updates to be sent over to the server and streamed to every viewer in range. To put this in context: when griefers used to throw particle bombs to lag everyone down, it was sufficient to kill the max particle count in the viewer and everything went back to running smoothly, with the server totally unaware of, and thus unaffected by, such lag bombs, so that the griefers could be kicked and the bomb returned. This is just to explain the lack of any communication between viewer and server when it comes to visual effects such as particles and animations, which were designed to run within the viewer. Now imagine having such a potentially heavy calculation feature being streamed and updated continuously between viewers and server, for as many agents as are in range.
  14. I think you're confusing the concept of a painted matte with a background impostor. The first one, a painted matte, is what you can bake into the environment: scenery so far away that the map you're playing in will never let you reach it by walking or driving your character there. The second, a background impostor, is part of a different system of LOD groups, where an actual hierarchy of objects is needed to make full use of the feature. A connection to the player camera then changes the opacity of the objects for each angle of view as the billboard rotation gets close to the next angle's impostor. I have seen up to 64 impostor groups for buildings that tower in a background scenery, and the LOD group whose object reaches zero opacity gets turned off, so at any time there are just 2 to 4 visible planes in view, and a distance fog in the middle does the rest of the work.
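To make that crossfade concrete, a toy Python sketch of how opacity could be split between the two impostor planes bracketing the camera angle; the angle count and naming are illustrative, not taken from any specific engine:

```python
import math

def impostor_opacities(camera_yaw, num_angles=8):
    """Blend weights for angle-indexed impostor planes: the two
    planes bracketing the camera yaw crossfade while all others
    stay fully transparent (and can be turned off)."""
    step = 2 * math.pi / num_angles
    t = (camera_yaw % (2 * math.pi)) / step
    i = int(t)                      # impostor just below the camera angle
    frac = t - i                    # progress toward the next impostor
    weights = [0.0] * num_angles
    weights[i] = 1.0 - frac
    weights[(i + 1) % num_angles] = frac
    return weights

# e.g. impostor_opacities(math.radians(100)) -> planes 2 and 3 share opacity
```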
  15. I'm afraid that, because of the current state of planar mapping, really no complex mesh object has been made with planar mapping in a decent visual state, and it's therefore avoided like the plague in favor of reliable UV mapping... On a side note, you may want to take a look at triplanar mapping in external tools, and how those solve the problem of the transitions between differing normal directions. At least Maya and 3dsMax have it; I can't remember about Blender, but it's such a basic mapping feature that I'm sure it's got it as well.
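The core of that transition handling is just blend weights derived from the surface normal; a minimal numpy sketch of the usual approach (the sharpness exponent is an arbitrary choice that controls how tight the blend band is):

```python
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Per-axis blend weights for triplanar projection: raise the
    absolute normal components to a power to tighten the transition
    between the three planar projections, then normalize so the
    weights sum to one."""
    w = np.abs(np.asarray(normal, dtype=float)) ** sharpness
    return w / w.sum()

# e.g. triplanar_weights((0.7, 0.1, 0.7)) -> mostly X and Z projections
```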
  16. If the assumption of a "needed feature" starts from "because you can do it in Blender", then why not get particles to collide with objects, or have meshes get displacement mapping and multiple UV sets, or allow custom skeletons and animations with infinite priorities, object animation and so on?
  17. Actually, any studio that can afford Nuke can do that from a video. Some time ago I participated in a production that used it to generate a mesh from the point cloud of a video of an object taken from all possible angles, and exported the mesh with its texture. They ended up using the model just for a shadow catcher material in a series of shots where the CG creature needed to cast its shadow onto that object. Oh, and I forgot to mention that the mesh was also processed with a mesh reduction node, so it was feasible to use, and the UVs were not a mess. It needed work, but it was usable regardless.
  18. Stupid question maybe, but when resetting the script, is the call to llClearCameraParams placed before or after the call to llResetScript? Because if it's placed after the reset, the script resets and the clearing command never runs, so it should be placed right before the reset call.
  19. I did the serialization in Python for Maya. Strings are just the joint name encoded to bytes plus a null terminator, e.g. bytes(joint_name, 'ascii') + b'\x00', and you're done. In the actual file you can still read the letters.
  20. Also, if position data isn't needed, you can just write it off with a single 0 for the key count. The only joint that really needs position data is mPelvis, for obvious reasons, but even that may have no position data if it's really not needed. As you say, constraint data is optional and can be left out entirely. Worth noting is the fact that each joint has its own priority specified in its sequence.
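Putting the last two posts together, a hedged Python sketch of one joint block as I'd serialize it: a null-terminated name (hence the readable letters in the file), the per-joint priority, then the key counts, where an unused position track is written as a single 0. The per-key encoding is elided here:

```python
import struct

def write_joint_block(name, priority, rot_keys, pos_keys=()):
    """Serialize one joint sequence of a .anim file. Counts and
    priority are little-endian 32-bit ints; key payloads are
    left out of this sketch."""
    out = name.encode('ascii') + b'\x00'     # null-terminated, human-readable
    out += struct.pack('<ii', priority, len(rot_keys))
    # ...rotation key data would be packed here, one entry per key...
    out += struct.pack('<i', len(pos_keys))  # a single 0 when position is unused
    # ...position key data would be packed here, one entry per key...
    return out
```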
  21. Since the introduction of the Bento extended skeleton, skinning to attachment points has been disabled server-side. It might occasionally work in some instances, but that's not the standard behavior. See the wiki pages about the Bento project: http://wiki.secondlife.com/wiki/Project_Bento_Resources_and_Information Quoting the above article: "Prior to Project Bento, rigging to attachment points was never formally supported, and was strongly discouraged, as once an animation stops playing it often leaves the attachment points in a deformed location that is very difficult for a resident to understand. With the addition of Project Bento, meshes rigged to Attachment points may be rejected by the server since one of the primary reasons for the addition of these new bones was to discourage this process."
  22. Most likely, when you turned animesh on, they moved into the avatar hand position from the location where you originally rezzed them. If that's the case, it means that your object is not actually where you see it; rather, you should try to select the now-empty space where the object was originally displayed. Try a marquee selection over that area and deselect anything else you might have selected in the environment.
  23. Good to hear there's a new fellow Maya user! Although basic Maya is just fine for rigging for SL, you might want to add the Mayastar plug-in to your workflow in order to simplify the job of testing shape sliders, and to have a collection of all the needed tools in one place as well.