Everything posted by OptimoMaximo

  1. Its implementation certainly is, though. The implied functionality is (as always with LL) only halfway to being actually useful, since it resets the skeleton locally only. How hard would it be to build a message to the server telling other viewers to trigger a reset on their side as well? That would ultimately do what this feature is for: un-deforming an avatar, for good.
  2. This. First, because it's Unity-based: its ecosystem of partially co-working yet mutually exclusive features makes it hard to release anything other than the base (legacy) rendering pipeline. Second, because of the recent Unity Technologies screwup with its install fee, which still applies to all products released from 2024 on, now "adjusted" to either a per-install fee or a 2.5% royalty on gross revenue, whichever is cheaper. So good luck with LL releasing the mobile viewer to begin with 🤣 I guess it will slowly and silently die off, like other projects (Project Muscadine, for example).
  3. Decimation, or rather simplification algorithms applied to animation curves, can benefit animation in a few scenarios, like the following: 1) reducing file size on mocap data when exporting to FBX or ATOM files, certainly not BVH or anim (more on this below...) 2) fixing interpolation screw-ups, to more easily and perhaps more effectively apply an Euler filter, but once again this mostly concerns mocap animation and the effects of retargeting processes. So why doesn't this apply to SL animation exports? That has to do with the nature of both formats, BVH and anim. Both are data dumps, meaning they contain only the value of a joint at a given time. No other information is provided, such as the in and out tangent angles of the curve keys and their weights, which define the curve shape. This means both animation file types result in linear interpolation between one sampled value and the next. With BVH this is fixed behavior, one sample every frame; the uploader then tries its best to "clean up" the excessive data. Anim files, instead, are meant to be more optimized: you can get them to store only the relevant animation key data at given times, on a per-joint basis. But still, there is no data about the curves' key frames, so it's just a series of keys with an associated timestamp, nothing more. Therefore, neither format can rely (and neither does rely) on the key frame data; they just sample the curves along the specified frame range. What anim does is give you the opportunity to decide how to sample on a per-joint basis, and therefore the opportunity to implement some kind of algorithm to select relevant frames and export only those, per joint (a sketch of such a selection follows below).
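To make that last point concrete, here's a minimal Python sketch of one way such per-joint key selection could work. The function names, data layout and tolerance are illustrative assumptions, not how the SL uploader or any exporter actually implements it:

```python
# Greedy per-joint key reduction: drop every sample that plain linear
# interpolation (the only interpolation BVH/anim playback gives you)
# can already reproduce from its neighbours within a tolerance.

def lerp(a, b, t):
    return a + (b - a) * t

def reduce_keys(samples, tolerance=0.001):
    """samples: list of (time, value) pairs, one per frame, for one
    joint channel. Returns the keys worth exporting for that joint."""
    if len(samples) <= 2:
        return samples[:]
    kept = [samples[0]]
    for i in range(1, len(samples) - 1):
        t0, v0 = kept[-1]
        t1, v1 = samples[i]
        t2, v2 = samples[i + 1]
        predicted = lerp(v0, v2, (t1 - t0) / (t2 - t0))
        if abs(v1 - predicted) > tolerance:
            kept.append(samples[i])
    kept.append(samples[-1])
    return kept

# A joint that never moves collapses to two keys, which is exactly the
# kind of per-joint saving the anim format makes possible and BVH doesn't.
flat = [(f / 30.0, 0.0) for f in range(31)]
assert len(reduce_keys(flat)) == 2
```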
  4. The method I explained in my previous post gives that result. I made a tool for Mayastar exactly to achieve this vertex normal transfer easily and quickly, which I know is being used by at least 3 major mesh head creators that I won't name here, besides being used on the Mayastar body at every mesh junction (hands and feet). I used it on my own monster avatar as well, which is split into several pieces, none of which shows any seam, regardless of the upload method (indeed, that avatar's upload dates back a few years by now, so no glTF magic involved). For Blender, I don't know whether there is a tool that automates that, but as far as I know you can do it manually.
  5. While everything Arton says is also true and correct, if we're talking about the regular upload and the unit cube upload methods, we can still get the seams to go away with a simple "trick": copy the vertex normals from one chunk of mesh to the corresponding vertices on the other chunk, and only then bake the normal maps (a sketch of this transfer follows below). It isn't going to be a perfectly absolute match, but the seams will literally be imperceptible at sane view distances, roughly 30 cm away from the surface and beyond. Now, about the point of tangents being geometry and UV dependent: it doesn't really matter much in terms of the texture bake being precise and true to the original surface; that's what the normal map is supposed to take into account. It's more about the three vectors that a normal map samples, most importantly the normal of course, with its tangents being calculated perpendicular to it. If the normals match across the border of the involved pieces, the tangents will also be calculated accordingly, providing data that conforms to the high poly surface in a consistent manner, regardless of the calculation method, whether it be MikkTSpace, right handed or left handed, clockwise or counterclockwise. It's a matter of vector consistency.
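For the record, a bare-bones Python sketch of that trick, assuming each chunk is just a list of vertices with a position and a normal; this is illustrative pseudo-tooling, not Mayastar's or any Blender add-on's actual API:

```python
# Copy vertex normals from one mesh chunk onto the matching (same
# position) vertices of the adjacent chunk, so both sides of the seam
# share identical normals before the normal maps get baked.

def transfer_seam_normals(source, target, epsilon=1e-5):
    """source/target: lists of dicts like
    {"pos": (x, y, z), "normal": (nx, ny, nz)}."""
    for tv in target:
        for sv in source:
            if all(abs(a - b) < epsilon for a, b in zip(tv["pos"], sv["pos"])):
                tv["normal"] = sv["normal"]  # seam vertex: force a match
                break
```

Only then do you bake: with matching normals on both sides of the junction, the tangents derived from them line up too, and the baked maps stay consistent across the border.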
  6. This happens because the two areas you're baking differ in their tangent space; therefore the normals *look* like they create seams, and visually they do, but for the purpose of modifying the normals at the pixel level during rendering, they're correct. Why do they have different tangents? Well, because those derive from a few factors, namely edge orientation in 3D space, edge orientation in the 2D UV space, the areas of the connected faces, and 3D world orientation. All of these factor into the calculations that translate the surface detail from the high poly into pixel-encoded normal data (see the sketch below).
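To show where those dependencies come from, here's the textbook per-triangle tangent calculation in Python (plain tuples, no specific engine or baker implied):

```python
# Tangent = the 3D direction in which the U coordinate grows across the
# triangle. It mixes 3D edge vectors with UV edge deltas, which is why
# changing either the geometry or the UV layout changes the tangents.

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1 = tuple(b - a for a, b in zip(p0, p1))      # 3D edge p0->p1
    e2 = tuple(b - a for a, b in zip(p0, p2))      # 3D edge p0->p2
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]    # UV edge p0->p1
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]    # UV edge p0->p2
    r = 1.0 / (du1 * dv2 - du2 * dv1)              # inverse UV area factor
    return tuple(r * (e1[i] * dv2 - e2[i] * dv1) for i in range(3))
```

Two chunks unwrapped separately feed different UV deltas (and often different edge vectors) into this, so they end up with different tangent bases, and the bakes *look* different at the seam even when both are correct.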
  7. Updated method? What would that be? Along with the packer software, you can find a Substance Painter preset with the packing options ready for use, and a Substance Designer file that does the packing as well in that software.
  8. The avatar skeleton is composed of three main sets of joints: the main animation joints, the so-called collision volumes, and the attachment joints. The main animation joints were expanded under Project Bento with additional joints for things like tail, wings, fingers and hind limbs. Any joint can be repurposed, placed in a different location and made to act in custom manners, both for rigging and for animation work. Attachment joints are the only joints which can accept static objects and show them in world following the animation of our avatars: these are the child joints under the main skeleton joints, and they're not intended to be used for rigging, although they can be used for that purpose. Rigging to attachment joints is not officially supported, and their functionality for rigging is not guaranteed to be maintained. They need extra care because they have specific offsets, and not taking care of a correct reset to the default position will break the location of attached items until the next relog. So, as I was telling you previously, your best bet is to get either Avastar or Bento Buddy: these add-ons for Blender will ensure you get a correctly set up skeleton with everything you need to perform SL avatar compliant rigging and animation. Figuring out all the avatar intricacies from the XML files in the viewer's installation folders is definitely something you can do by yourself too, given your background as a software engineer, but it's a lot to go through, things aren't really clear right out of the box, and it would take you quite a bit of time and effort to sort things out, especially if the type of software you usually develop is not related to 3D content.
  9. The product you're aiming to emulate clearly states it has both an animesh version and bento skeleton animation. So it is avatar animation related, nothing more. What you need is an SL skeleton, to bind the objects to the relevant joints, and to export animations, which you then play back through scripts. To make the process easier, you can buy either Avastar or Bento Buddy for this task, so that you won't be pulling your hair out trying to reverse engineer the rotation issues tied to working within Blender; those plug-ins sort that out for you upon export of both meshes and animations. Just don't ever change the skeleton hierarchy.
  10. Skipping all the other BS before this part because, again, I'm not gonna waste any more time. The "collision" volume bones aren't made for collision, and they already follow physics, at least some of them. How do you think the physics assets work? As for the rest, they don't have a volume. They're used to SET the volume of the mesh, to create shapes like fat, muscles etc. The other use they have WOULD be animation, but the system is pretty rudimentary: you can set a sort of constraint between two of these joints and then write down the value frame by frame, but you can only have a set of pre-made behaviors, like point and plane, meaning they would just follow a point in space or be orient-constrained to a plane. You can see that in action in the default animations, the plane type on the feet and the point type on the hands, when playing one of the stand animations. Now you can keep playing the role of the expert, I'm out of here.
  11. The two systems, SL and the Bullet physics engine, work in completely different ways. Animations in SL run client side; the server isn't aware of them except for the file name to stream over to the attendees' viewers. Also, mesh colliders can't be attached to joints in SL for the aforementioned reasons. Or, better phrased, they can be attached to attachment joints, but those colliders won't follow them, for the aforementioned reasons. Your points are simply moot. With this said, I'm not gonna waste any more time with you.
  12. *sighs* See how to read a changelog, screenshot attached. See how they write down transforms to fix physics APPEARING to reset. Assets modified? See the requirements section. Yes, I'm outdated about Skyrim mods, but I know how those engines worked and how modding works there: the engine sources aren't open and can't be recompiled, so these kinds of behaviors can only be implemented on top of existing features.
  13. I used to mod for Skyrim, back in 2012. Then I moved on. Actually, there are a few posts of mine here where I helped solve the gap between rigged mesh pieces using my knowledge from the Skyrim setup. By the way, those claims of mine are very old; I've since moved on to being a rigger, pipeline developer and now tech artist. I'll share my RL LinkedIn profile, where you can message me to get confirmation as well. Just DM me if you don't believe my background. ETA: here is the topic I was talking about
  14. Just read it, here's what you do not understand (from the changelog): BSDynamicTriShape = Bullet physics engine triangle mesh COLLIDER. Skinning redone and bugs fixed for heads with > 8 joints. From the requirements section: a boatload of basic assets redone to implement more joints to drive animation-based deformation. Configuration files added to write down the status of the extended set of joints used by this system on any pause event, to prevent the hard reset "glitch" (a sign of non-runtime playback: on simulation pause, the values are stored in the engine itself for all the compiled animation assets and physics, sorry). Configuration files to set rotation clamping; guess why? To avoid intersections. Physics behavior is usually called either stiffness or damping. And the list goes on. So I'll THROW your point back to you again.
  15. Yeah @Zalificent Corvinus, laugh all you want. DM me and I will share some information for you to understand what I'm talking about.
  16. Oh yeah? Reading before commenting would be beneficial, though... Those physics you describe are precalculated over existing animations, so even though they can perform at runtime, it's still precalculation in the form of ranges, constraints and skin weights. The push-the-boobs-up animation can't detect mesh intersections, and even if it could, it would be too expensive to perform. Collider-based detection, although less expensive, would cause too much strain and unpredictability in such use cases, although bigger capsules like legs and torso can be supported, being fewer and big enough to allow some simplified volume detection. However, this can't give the no-clipping result you claim. Again, to have such results with that tech and still play at consistent fps, those interactions have to be pre-processed. Such effects are now available in the latest Unreal Engine 5 preview, 5.3, and they're still experimental, covering vertex animation as well as skeletal animation. Experimental means that it may or may not reach a stable plugin release, may be scrapped altogether, or may remain beta/experimental forever, like vertex animation texture baking within the engine itself, experimental since UE3.
  17. Bad, super low quality content is what goes around in SL.
  18. That is most likely not to happen, at least in the short/mid term. There is a series of implications in how those features work at their base level that renders the Nanite conceptual process a destructive hell. I mean, they had introduced in-editor control over skinned mesh influence reduction, but still, like a decade later, that reduction results in the mesh collapsing in 80% of cases... We'll see, but I won't hold my breath waiting for that to happen.
  19. Yeah, I should have prefaced my comment with "at the time of this writing". They intend to extend Nanite to many levels of the DAG, but you really need to try it out for yourself. Nanite has big practical limitations at the moment, limiting its range of use and its reliability in holding shapes unaltered.
  20. Have you actually used UE5 and Nanite? Because what you're describing is not what it is. The geometry sharing you're talking about is called instancing: the poly list data gets loaded only once, then transformed by each instanced object's transform data (location, rotation and scale). It's a quite old technique which saves memory when loading, but not rendering, since at rendering level the light has to hit and be calculated on a per surface component basis. Nanite, on the other hand, is a per-pixel analysis, and it's not good for everything. Basically, what it does is check whether a triangle is smaller than one pixel on screen and, if it is, collapse it (see the sketch below). Sounds good, doesn't it? Too bad that when it comes to geometry density like the one SL buildings would use, it is completely useless. Edge case: very thin and long triangles whose area doesn't reach a pixel get crushed, often ruining the surface continuity. Other edge case: the model is not geometrically dense enough to make use of Nanite; no auto-LOD ever gets generated, and the same model is always displayed at any distance, until at some point the entire object gets so far away that the single faces start being collapsed and suddenly the entire object becomes one single pixel. Nanite is only good for environmental nature objects like rocks and cliffs, and anything that requires a high polygon density and a sufficiently big size to get a realistic look and feel when the player camera gets close enough. Also, Nanite can't be used on any deforming object, rigged content or shader-animated meshes, like characters or foliage (trees, grass, bushes...). Please read the docs and try those tools on something you would actually use them for, before being taken in by the hype of "the era of LODs is over"... Tech demos are made to sell the concept and showcase what improvements a feature brings to the table; looking at that content alone, without real critical thinking about the data, just creates confusion.
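For illustration only, here's a toy version of that sub-pixel test in Python, with made-up camera numbers and a pinhole projection. It's a gross simplification of what Nanite really does, but it shows why long thin slivers fail it:

```python
# Flag a triangle for collapse when its projected area covers less than
# one pixel, even though its longest edge may still span many pixels.

def projected_area_px(world_area, distance, focal_px=1000.0):
    scale = focal_px / distance        # world units -> screen pixels
    return world_area * scale * scale

def should_collapse(world_area, distance):
    return projected_area_px(world_area, distance) < 1.0

# A sliver triangle 2 m long and 1 mm wide: area = 0.5 * 2.0 * 0.001 m^2.
sliver_area = 0.5 * 2.0 * 0.001
print(should_collapse(sliver_area, distance=50.0))  # True: under 1 px of area
print(2.0 * (1000.0 / 50.0))                        # ...yet ~40 px long on screen
```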
  21. Let's wait for the authority in the field who will promote the obviously best solution 🤣
  22. What would the connection be between what I said and your comment?
  23. Yep, what I wanted to say is that if I set the properties, like tiling, before I switched to the PBR material, those transforms were retained and the PBR material was tiled, but no longer editable. I can't see why those controls can't be left editable and used to tile the material itself; in the end, all they do is apply a transform to the primitive's UVs. ETA: if I recall correctly, I had to set the PBR material on the object first, switch back to the old textures, set the tiling, and only then switch back to the PBR material to have the effect persist. Setting tiling and the like before assigning a PBR material just got reset when the PBR material was assigned.
  24. It's odd; last time I tried, the scaling, rotation and offset were available in the texture edit window only. Before applying the PBR material, you set the parameters, then switch to the PBR material and the set parameters apply to it. However, as soon as you switch to PBR, the placement parameters grey out and aren't available for editing any longer. Disclaimer, before someone comes in accusing me of misinformation: things may have changed in the meantime; I did my tests on the very first PBR-enabled viewer released in beta.