
OptimoMaximo


Posts posted by OptimoMaximo

  1. 9 hours ago, Extrude Ragu said:

    Tools people typically use to animate, such as Avastar, if you look at a .anim animation file produced by Avastar with AnimHacker you'll see that not only does Avastar set the base priority to what you exported, but they also set the joint priority to the same value. That means that most animations out there now have non-zero joint priorities.

    On this chunk:

I am one of those who made an animation exporter, for Maya, and the per-joint priority feature is quite clear in my implementation. However, users who actually need or care to differentiate joint priorities aren't common at all.

As I was saying in the previous post, the per-joint priority values are the only ones that actually matter. The "global" value is just a shortcut that sets all joints to the same value, while providing a convenient interface to set the header info. Many years ago I collaborated on Avastar's animation and rigging features (the horse used to this day on the Avalab website is the one I made for the quadrupedal rigging tutorial) with the original coder Magus Freston, and that is the most sound and logical route. It is the same route used by the BVH uploader in the viewer, by the way. The Lab was just too lazy to implement an interface allowing per-joint priorities in their BVH uploader, even though it could have been done from the very beginning of SL.

     

  2. 9 hours ago, Extrude Ragu said:

    What most people are not aware is that inside the animation file, besides the base priority, each individual joint in the animation also gets a priority.

Actually, the per-joint priority is the only influential value in all the serialized data; the "global" priority has no effect on animation execution. It lives in the header, and as with most file formats, the header's function is just to provide general information (like the number of animated joints, for example), not a list of joint names, as it currently stands.

Now, if we had the chance to change priorities as you said, how would that be implemented? Reading the header is quick, but getting a list of the involved joints means the entire animation has to be deserialized first, then the user needs a UI to pick a joint and a value field to change that joint's priority. Then what? Save it back to the asset? How would that behave at that point? If those changes went into the asset you got a LICENSE TO USE, would every single derived copy be affected? Or would that save a new asset, which you would be marked as creator of if we follow the current creation pipeline, when clearly you aren't? Then you pay the fee for the new asset creation, but the original creator is still the creator? And the original permissions? If those inherit from the original, you pay for an asset that you do not own anyway?

And if not saved, your changes would only apply on a per-session basis. Not really practical.

    Much food for thought.
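To make the header-vs-joint distinction concrete, here is a minimal Python sketch of pulling the base priority and the per-joint priorities out of a .anim blob. This is an assumption-laden sketch, not an official API: it assumes the commonly documented version 1.0 internal layout (little-endian fields, u16-quantized keys) and skips constraint data entirely.

```python
import struct

def read_cstring(buf, off):
    """Read a null-terminated ASCII string starting at off; return (str, new_off)."""
    end = buf.index(b"\0", off)
    return buf[off:end].decode("ascii"), end + 1

def read_joint_priorities(blob):
    """Return (base_priority, {joint_name: joint_priority}) from a .anim blob.

    Sketch only: assumes the version 1.0 layout and ignores constraints.
    """
    off = 0
    version, sub_version, base_priority, duration = struct.unpack_from("<HHif", blob, off)
    off += 12
    emote, off = read_cstring(blob, off)
    loop_in, loop_out, loop, ease_in, ease_out, hand_pose, num_joints = \
        struct.unpack_from("<ffiffII", blob, off)
    off += 28
    priorities = {}
    for _ in range(num_joints):
        name, off = read_cstring(blob, off)
        joint_priority, num_rot = struct.unpack_from("<ii", blob, off)
        off += 8
        off += num_rot * 8          # each rot key: u16 time + 3 x u16 rotation
        (num_pos,) = struct.unpack_from("<i", blob, off)
        off += 4
        off += num_pos * 8          # each pos key: u16 time + 3 x u16 position
        priorities[name] = joint_priority
    return base_priority, priorities

# Build a tiny synthetic two-joint blob to exercise the parser.
blob = struct.pack("<HHif", 1, 0, 4, 2.0) + b"\0"              # header + empty emote
blob += struct.pack("<ffiffII", 0.0, 2.0, 1, 0.3, 0.3, 1, 2)   # loop/ease/hand/joint count
for name, prio in ((b"mPelvis", 4), (b"mHead", 5)):
    blob += name + b"\0" + struct.pack("<ii", prio, 0) + struct.pack("<i", 0)
blob += struct.pack("<i", 0)        # constraint count: zero

base, prios = read_joint_priorities(blob)
print(base, prios)   # → 4 {'mPelvis': 4, 'mHead': 5}
```

Note how the base priority sits in the header, while each joint carries its own value: a hypothetical priority editor would indeed have to walk the whole joint list, as described above.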

  3. On 4/23/2022 at 4:46 AM, Jabadahut50 said:

    I know this is a bit of a necro and from someone who doesn't really have a history here on the forums but you can solve the puppeteering impracticality problem with animation constraints.

    Here is a post from Extrude Ragu on what they are:

    and here's a link to the program they created for you to add them to your own animations:

     

The constraints in SL animations are made to target a collision volume bone to another collision volume bone on the same avatar, and they should be animated to get the desired result. So the offset between the constraining object and the constrained one should get a value every frame, in accordance with the animation itself. A detail that, in this specific case, doesn't help the OP.

You'd say "ah, but the contact between feet and floor doesn't include a collision bone; the floor doesn't have one", which is correct in rigging terms. But the skeleton definition actually has an otherwise unavailable joint at coordinates 0,0,0 of the avatar, called Root, which works as a landmark for the ground plane height.

In my animation plug-in, which exports .anim format files, I tried everything to make it work as intended (animated offset and all that), to no avail. The only result I got was a limb that sticks to some other point, with no offset, so the mesh always intersects with some other mesh. Basically useless, so I dropped the idea.

  4. @BinBash

    @fluffy sharkfin

This is a debate that I've been having for a while, and I'm quite tired of it, to be honest. The problem with redefining something that has been solidly established for years is not only that it's a source of unnecessary confusion; it mostly lies in the premise that SL is somehow special with regard to assets and workflows, enough to justify coining new words. Under this view, someone who creates things using prims should be a primmer, someone who does the same with sculpt maps should be a sculptprimmer or a primsculpter (?), whoever works on textures should then be a texturer, and so on. Already the term "rigger" is an overstatement in SL, since the process of rigging in reality doesn't end at weight painting and includes many more things, but it's still somewhat acceptable because the original term includes the skin weighting operation.

So forgive me if I don't agree with your view. I won't boast here about how many years of experience I have in games and films to back up what I say; just reading anything that isn't related to SL, in any of the fields regarding 3D productions of every type, should clarify what I mean in the previous paragraph.

     

A PBR implementation would also require a texture type filter, for instance flagging normal maps as such so they get uploaded and converted to JPEG2000 as 16-bit, which is required for normal maps to actually work as intended. And perhaps also scalar-value texture packing as used in Unreal Engine, with the grayscale images packed together, one per texture channel, to get ambient occlusion, roughness and metalness in one single color texture. The good thing is that these textures don't even need to be 16-bit. Optionally, a height map could also be placed in the alpha channel to be used for parallax displacement, as seen in the Unity engine.
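As an illustration of the channel-packing idea, here is a minimal Python sketch that interleaves three grayscale maps into RGB triples, one map per channel. It operates on raw 0-255 pixel lists purely to show the packing logic; a real pipeline would of course read and write image files, and the function names are mine.

```python
def pack_orm(ao, roughness, metallic):
    """Interleave three equal-length grayscale pixel lists into RGB triples:
    ambient occlusion -> R, roughness -> G, metallic -> B."""
    assert len(ao) == len(roughness) == len(metallic)
    return [(a, r, m) for a, r, m in zip(ao, roughness, metallic)]

def unpack_orm(rgb):
    """Split packed RGB triples back into the three grayscale maps."""
    ao, roughness, metallic = zip(*rgb)
    return list(ao), list(roughness), list(metallic)

# Four sample pixels per map, as 8-bit grayscale values.
ao        = [255, 200, 128, 0]
roughness = [ 10,  80, 160, 240]
metallic  = [  0,   0, 255, 255]

packed = pack_orm(ao, roughness, metallic)
print(packed[2])   # → (128, 160, 255)
```

Since each channel is an independent 8-bit scalar, the packed texture costs one texture fetch instead of three, which is the whole point of the Unreal-style workflow mentioned above.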

  6. It would be of help if you provided a little bit more context to your request.

Aside from the fact that .obj files are a static mesh format, without skeletons or deformation definitions, so those would basically be useless for the stated purpose... What application do you use? Source material might be available in one form or another depending on that; for instance, Daz 3D has its own set of resources, which are specific to it compared to what you would be looking for if, for example, you were working with 3dsMax, Maya or Blender. Also, the topic of animation export from each of these applications is not as straightforward as one would think, and specific plug-ins are needed for that, so with that information you would get an answer without pulling all your hair out trying to figure out why things don't work properly, if at all.

Help us help you 😁

  7. 7 hours ago, VirtualKitten said:

    You seem to  talk about cost of implementation in monetary units and in server time  but blender does all this free too

And you keep pushing comparisons with a type of software that was DESIGNED to do exactly all of this. SL has no built-in design for these features right now.

It's not a matter of how complicated it is to calculate those points via a matrix or vector map; the problem is the underlying structure, which currently needs an agent to be able to stream these changes over to another agent. That, along with the lack of a server-side code base to supply the data that an animation doesn't carry, makes what you are requesting impossible to achieve. Remember that the server only knows that you're playing an animation and sends the same file name to others, but its contents are unknown to it; the animation is read and played back on the avatar in the viewer. Ever noticed how two different viewers may show the same avatar playing the same animation, yet the two might not be synchronized? I really don't know how else I could explain this to you by now.

  8. On 3/31/2022 at 7:29 AM, VirtualKitten said:

    I think this is all becoming more clear Secondlife why doesn't see its self as roleplay?.

    It may not have this as a current concern  but it wrote new scripted bots to deal with automatic messaging group invite and shop attendance for commerce.  it wrote new light model for everyone . So when it brought in Animesh why was this heralded the greatest thing in Secondlife and then it was cut down short what actually happened . You alluded to griefers and the possibility of misuse, Surly the pupose of Animesh which was released Wednesday, November 14th,2017 Alexa Linden Lab announced the official release of Animesh, with the promotion of the Animesh viewer as the de facto release viewer.:

    "Welcome to Animesh! Animesh is a new Second Life feature to allow independent objects to use rigged mesh and animations, just as you can today with mesh avatars. This means that you can now have wild animals, pets, vehicles, scenery features and other objects that play animations" https://wiki.secondlife.com/wiki/Animesh_User_Guide#:~:text=Welcome to Animesh!,other objects that play animations.

    Optimo,  How is this statement and opportunities been  met if the connection of the animation timings cannot be linked to Avatar attachments movements without agent characteristics giving any realism to them . Surely this cant be right ? 

    Furthermore it can be read.

    Animesh has been in development for about a year, and like Bento, has been a collaborative effort between Linden Lab and Second Life content creators.  Essentially, it allows the avatar skeleton to be applied to any suitable rigged mesh object, and then used to animate the object, much as we see today with mesh avatars. This opens up a whole range of opportunities for content creators and animators to provide things like independently moveable pets / creatures, and animated scenery features. Press release https://modemworld.me/2018/11/14/animesh-officially-released-for-second-life/

    Hugs D

I won't get into detailed answers as to why this or that, but I will point out a couple of facts:

    Can you sit on an avatar attachment, even though its intended use is to be a vehicle?

Can two avatars walk hand in hand with realistic arm interactions, such as connected IKs, without making animations specific to the involved avatars?

Systems have limitations. Your, or anyone's, niche need for any "realism" stands within those limitations, which CAN be changed, but usually at the cost of breaking things. Standalone game engines, as much as Blender or any other 3D app, run standalone, and backward compatibility is never guaranteed. Meaning that if you upgrade your Unreal Engine, Unity, Blender or Maya from one version to another, there is no guarantee that you won't be forced to re-do some if not all of the work on features that were involved, directly or indirectly, in the implementation of the new "shinies". Is SL the type of platform that can afford such an eventuality?

  9. On 3/28/2022 at 8:43 AM, VirtualKitten said:

    In regard to the  agent solution you admitted to Optimo would this solve this if all Animesh  was an 'agent'  or you could have 'agent' checkbox  on the features page  in viewer and had the required structures to hold the attachments in place the same as agent ?

An agent is basically the structure, with its server-side code, that makes a viewer work: it gives you an inventory, a shape editor window, an avatar, its collision box, the name tag on top of the avatar, the voice dot and the related spatial positioning for streaming voice, etc., including the attachment points and the collision volume bones with their connections to the shape sliders. This isn't currently possible for animesh for many reasons, the same reasons behind its lack of support for shape sliders and attachment points. These are defined in a separate file that gets applied at login, while the base joints can be found in another; the agent assembles the two definitions at runtime. The second file defines the connections with the sliders, their ranges, which attributes of a joint are affected, and the hierarchy placement of such "add-on" joints. All contained in your viewer, the thing that creates the agent when you log in.

So, the answer to your question would be yes, if each animesh got a scripted viewer to control it. But that's a bot, and it would defeat the whole purpose of animesh. There needs to be another type of system, but apparently that isn't on the current LL interest list.

  10. On 3/24/2022 at 7:08 PM, FridayAfternoon said:

    Still, allowing attachments to animesh objects would provide the desired functionality. You just add a particle emitter to the mouth of the animesh dragon and it moves (in everyone’s viewer) along with the animation.

    they added benefit is you could dress up your animesh character with clothes and so forth (assuming of course the mesh is the same). That would make it way easier to create NPC’s for RP sims and so forth.

     

     

Attachments are just joints, so you can add clothing components to animesh objects in the form of rigged items, which can be animated. But still, the traditional concept of attachment points that translate objects is not supported because, again, that is managed viewer-side. It's the only exception that gets streamed over to other viewers and, again, the server is not aware of that stuff. It's a sort of built-in rudimentary skinning to a joint, but it's not the type of feature you're thinking it is. Animeshes do not have an associated agent to arrange the necessary structure, at the time of this writing.

  11. On 3/15/2022 at 2:51 PM, FridayAfternoon said:

    I don’t think the issue is related to the way SL plays animations on the client vs the server, after all particle emitting glow sticks which move with an avatars hands during an animation are a thing and that sounds very much like what is being asked.

    Not sure if I understand the context here but it sounds like the animals (I.e. the dragon) must be animesh, because if it was a mesh avatar you could just attach the particle emitter to the mouth. 

    So maybe the real request is to allow attachments on animesh objects?
     

     

    10 hours ago, VirtualKitten said:

    FridayAfternoon yes you are spot on correct it is  related to animesh the math may be hard as a translation matrix but it must already exist in Secondlife to move other items I cant see how this is is impossible if its already being done?

Both contexts, Friday's and Virtual's, involve things that are played in the viewer, while the server is completely unaware of them. That is where the problem sits. The feature you're talking about would require such visual, viewer-side updates to be sent over to the server and streamed to every viewer in range.

To put this in context: when griefers used to throw particle bombs to lag everyone down, it was sufficient to cap the max particle count in the viewer and everything went back to running smoothly, with the server totally unaware of, and thus unaffected by, such lag bombs, so the griefers could be kicked and the bomb returned. This is just to explain the lack of any communication between viewer and server when it comes to visual effects such as particles and animations, which were designed to run within the viewer. Now imagine such a potentially heavy-to-calculate feature being streamed and updated continuously between viewers and the server, for as many agents as are in range.

I think you're confusing the concept of a painted matte with that of a background impostor.

The first one, a painted matte, is what you can bake into the environment; it is scenery so far away that the map you're playing in is never going to let you reach it by walking or driving your character there.

The second, a background impostor, is part of a different system of LOD groups, where an actual hierarchy of objects is needed to make full use of the feature. A connection to the player camera then changes the opacity of each angle-of-view-related object as the billboard rotation gets close to the next angle's impostor. I have seen up to 64 impostor groups for buildings that tower in a background scenery; the object LOD group that reaches zero opacity gets turned off, so at any time there are only 2 to 4 visible planes in view, and a distance fog in the middle does the rest of the work.
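The crossfade described above can be sketched like this: a hypothetical helper (mine, not from any engine) that, given a camera orbit angle and the number of impostor groups, returns the two nearest groups and their blend opacities.

```python
import math

def impostor_blend(camera_angle_deg, num_groups):
    """Return {group_index: opacity} for the two impostor groups nearest
    the camera angle. Groups are assumed evenly spaced around 360 degrees,
    with opacity crossfading linearly from one impostor to the next."""
    step = 360.0 / num_groups
    t = (camera_angle_deg % 360.0) / step
    lower = int(math.floor(t)) % num_groups
    upper = (lower + 1) % num_groups
    frac = t - math.floor(t)
    return {lower: 1.0 - frac, upper: frac}

print(impostor_blend(0.0, 64))     # camera aligned with group 0 → {0: 1.0, 1: 0.0}
print(impostor_blend(2.8125, 64))  # halfway to group 1 → {0: 0.5, 1: 0.5}
```

Any group whose opacity reaches zero can simply be switched off, which is exactly how only 2 to 4 planes stay visible at any time.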

  13. 8 hours ago, animats said:

    I'm working on some rendering stuff, and for test purposes, I need some useful mesh objects with planar texture mapping to test against. Not just cubes, cylinders, etc -  I have the basics.  If you have any you can give me, I'd appreciate copies. Thanks.

I'm afraid that, because of the current state of planar mapping, no complex mesh object has really been made with planar mapping in a decent visual state; it is therefore avoided like the plague in favor of reliable UV mapping...

On a side note, you may want to take a look at external tools' triplanar mapping, and how those solve the problem of transitions between differing normal directions.

At least Maya and 3dsMax have it; I can't remember about Blender, but it's such a basic mapping feature that I'm sure it has it as well.

If the assumption of a "needed feature" starts from "because you can do it in Blender", then why not get particles to collide with objects, or have meshes get displacement mapping and multiple UV sets, or allow custom skeletons and animations with infinite priorities, object animation, and so on?

  15. 2 hours ago, Fluffy Sharkfin said:

     

    they use handheld scanners for smaller objects and drones for anything "larger than a bus".  The fact that they're using specially designed hardware to capture the source images is probably why their results are far superior to those using a regular camera and some opensource software.

Actually, any studio that can afford Nuke can do that from a video. Some time ago I participated in a production that used it to generate a mesh from the point cloud of a video of an object taken from all possible angles, and exported the mesh with a texture. They ended up using the model just for a shadow catcher material in a series of shots where the CG creature needed to cast its shadow onto that object.

Oh, and I forgot to mention that the mesh was also processed with a mesh reduction node, so it was feasible to use, and the UVs were not a mess. It needed work, but it was usable regardless.

  16. 2 hours ago, EnCore Mayne said:

    i'm working on an animation script that sets the camera parameters from a dialog. aside from all the usual insanity of trying to script anything, i've managed to get close to what i intended. of course, close isn't what i wanted. i actually wanted to clear the camera parameters on resetting the scripts. i've got llClearCameraParams() at a number of places, unsitting, state_entry, listen, changed, and run_time_permissions. if there were other events, i'd put llClear.. in that too.

    unfortunately, on resetting the script, the earlier setting remains on first sit then it's cleared on the second sit. any hints as to why that might be?

    i don't have a workaround. is there something funky about llClearCameraParameters() i'm missing? can i clear it so it's cleared on a reset of the scripts?

Stupid question maybe, but when resetting the script, is the call to llClearCameraParams placed before or after the call to reset? Because if it's placed after the reset, the script resets and the clearing command never runs, so it should be placed right before the call to reset.

  17. On 2/25/2022 at 8:34 AM, Quistess Alpha said:

    (edit* found a few obvious errors, like number of constraints being 0, but there's still something about the string format I'm not understanding, or doing correctly

I did the serialization in Python for Maya. Strings are just the joint name encoded to bytes (e.g. joint_name.encode("ascii") plus a terminating null byte) and you're done. In the actual file you can still read the letters.

  18. 11 hours ago, Wulfie Reanimator said:

    You need: Joint name, priority, N rotation keys, {time, axis} N times, N position keys, {time, position} N times, and then your N constraints (which can be zero) and the rest you have there. Though, if you're not going to use any constraints, don't write any of that left-over data either.

    Also, if position data isn't needed, you can just write it off with a single 0. The only joint that really needs position data is the mPelvis for obvious reasons, but that might also not have position data if really not needed. As you say, constraint data is optional and can be left out entirely.

Worth noting is the fact that each joint has its own priority specified in its sequence.
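Following the layout quoted above, a minimal Python sketch of serializing one joint block might look like this. It's a sketch under assumptions, not an official tool: the function name is mine, and key values are assumed to be already quantized to u16 (time, x, y, z) as in the version 1.0 .anim layout.

```python
import struct

def serialize_joint(name, priority, rot_keys, pos_keys):
    """Serialize one joint block per the quoted layout: null-terminated
    name, s32 priority, rotation key count followed by its keys, then
    position key count followed by its keys. Each key is four u16 values
    (time, x, y, z), already quantized by the caller."""
    out = name.encode("ascii") + b"\0"
    out += struct.pack("<i", priority)
    out += struct.pack("<i", len(rot_keys))
    for key in rot_keys:
        out += struct.pack("<4H", *key)
    out += struct.pack("<i", len(pos_keys))
    for key in pos_keys:
        out += struct.pack("<4H", *key)
    return out

# A head joint with one rotation key; the empty position list produces
# the single 0 that stands for "no position data".
block = serialize_joint("mHead", 5, [(0, 32768, 32768, 32768)], [])
print(len(block))   # → 26 bytes: 6 name + 4 priority + 4 + 8 rot + 4 pos count
```

Writing the priority right after the name, per joint, is exactly why each joint ends up with its own value in the file, independent of the base priority in the header.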

  19. 58 minutes ago, Quistess Alpha said:

    I've also heard it relies on using a specific version of blender: Several months ago someone asked me for a throwing/catching a (tennis) ball script, I directed them to someone who explained the checkbox trick, it didn't work for them due to their specific version of blender (or avastar?) and they commissioned the llSLPPF type script from me. Just my anecdotal 2L$.

Since the introduction of the Bento extended skeleton, attachment point skinning has been disabled server-side. It might occasionally work in some instances, but that's not the standard behavior. See the wiki pages about the Bento project:

    http://wiki.secondlife.com/wiki/Project_Bento_Resources_and_Information

    Quoting the above article:

    Prior to Project Bento, rigging to attachment points was never formally supported, and was strongly discouraged, as once an animation stops playing it often leaves the attachment points in a deformed location that is very difficult for a resident to understand. With the addition of Project Bento, meshes rigged to Attachment points may be rejected by the server since one of the primary reasons for the addition of these new bones was to discourage this process. 

     

  20. 5 hours ago, Julia Taifun said:

    Hi, I rezzed these full perm nails and read that in order to align them they needed to be changed to animesh which I did... however now I can't select them at all. Any suggestions?

    mesh nails.png

Most likely, when you turned animesh on, they moved to the avatar hand position from the location where you originally rezzed them. If that's the case, your object is not actually where you see it; rather, you should try to select the now-empty space where the object was originally displayed. Try a marquee selection over that area, and deselect anything else you might have selected in the environment.

  21. On 11/23/2021 at 5:08 PM, JulieeJune said:

    Hello,

    I want to attempt to start working with making clothes in Second Life but I am unaware as to what I need. I have access to Adobe Applications, Maya & Mudbox. But not sure what else I need.

    I am also unaware of how to import a mesh body into the program. (The one I would want to import is Maitreya Lara as it's the one I own)

    I read that this is probably among the hardest things to do in SL but I would like to give it a shot anyways, I am somewhat aware of how to model, texture, rig with skeletons and such, but still not fully.

     

    Any information is appreciated, thank you ^-^

Also.. not sure if this is the correct forum for it...

Good to hear there's a new fellow Maya user! Although basic Maya is just fine for rigging for SL, you might want to add the Mayastar plug-in to your workflow in order to simplify the job of testing shape sliders, and to have a collection of all the needed tools in one place as well.
