Everything posted by OptimoMaximo

  1. Rhino

    First of all, I said that I used it in the past, during my master's in 2010. At the time, only NURBS were fully supported. Then again, re-read more intently: I did NOT say that Rhino generates a high poly mesh; I said that a NURBS surface, once imported into another software, translates into a high poly mesh, and that was most likely the cause of that issue. CreateNURBSCube >>>> box, only because you say "create" and can specify the type and then the shape right out of a command. That is LITERALLY telling a software what to do, not to mention that "box" is no geometry name; Cube is.
  2. Furthermore, the two methods should be mutually exclusive. Using the second method, start from the base female in Neutral shape and do NOT touch the sliders, as you'd be moving the bones around to make them fit your character.
  3. Please re-read the procedure: you do the adjustments BEFORE weighting. If you do the reverse, what you're reporting is indeed what to expect. Do it as you find more comfortable in your workflow; the issue doesn't come from this part. If your avatar is that different from the human you get from the default avatar, then the second method should work better. Try not to angle the bones too much, in order to keep compatibility with existing animations as much as possible.
  4. Rhino

    CreatePolygonCube is LITERALLY the only command; a creation window then appears for the parameters. Get them right and click the Create button. The script editor then "echoes" the command in the MEL scripting language ("echoes" in quotes because it's the Maya term for "show the corresponding script line"). If I wanted a NURBS cube, CreateNURBSCube would be the command, parameters window again, etc. etc... But there are buttons for these things already, so it's not that necessary and a very much neglected feature in Maya. I used Rhino in the past and have seen good users at it. If you read my previous posts intently, you won't find anything like what you say I did, telling you how it works. I never did. The only thing I said is that it's a NURBS based modeling package; I also answered this post here, and I just explained what most likely happens when you import a NURBS object and automated polygon conversion is on (as happens in Maya and other Autodesk software such as 3ds Max), why the SL import failed (the vertex limit), and added a few words on triangulation and edge flipping for Chic, who asked about it, plus a few other polygon related things. So where exactly have you read me telling anyone HOW Rhino works? Now that we've found a Rhino user, we can yield the floor and have a better explanation about Rhino in a Rhino thread.
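    To illustrate, a hedged sketch using Maya's Python equivalents of those commands (maya.cmds only runs inside Maya; the parameter values are just examples):

        import maya.cmds as cmds

        # Polygon cube: the same parameters the creation window exposes
        poly_transform, poly_node = cmds.polyCube(width=1, height=1, depth=1,
                                                  name="myPolyCube")

        # NURBS counterpart, as described above
        nurbs_nodes = cmds.nurbsCube(width=1, name="myNurbsCube")

        print(poly_transform, nurbs_nodes[0])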
  5. Rhino

    Exactly what happens in Maya with the command line and script editor.
  6. Hi! It's not really difficult with Avastar. You have two ways to approach this. EDIT: Always begin from the Female avatar in Neutral shape!

     First method: scale down your creature to match the Avastar character in height. Adjust the shape sliders to try and match the head and limbs as much as possible. If necessary, use Proportional Editing on your character to adjust the slight differences between the armature's joint positions and your character's joints; the SL avatar is not that well proportioned in its limb lengths. When done, bind the character to the armature and do your weighting. After the weighting, take the Avastar in Object mode and scale it up to the size you initially wanted your character to be. Somewhere in the interface provided by Avastar there should be a button labeled "Apply Armature Scale" (not the one in the animation export, though). Alternatively, you may want to go to Pose mode, select all bones and hit CTRL+A, Apply Pose as Rest Pose. This method makes your character fully compatible with existing animations.

     Second method: go to Edit mode on the Avastar armature. Reposition the joints according to your avatar. Make sure that the Bone Roll value on each bone is set to zero. When done, in the Avastar tool set you can find a button labeled "snap deform rig to animation rig" or something similar; I can't remember the exact label. When done, in Pose mode, hit CTRL+A, Apply Pose as Rest Pose. This method may give some trouble with existing animations and works best if you make your own (a minimal bpy sketch of these generic steps follows below).

     The Machinimatrix website also has a tutorial I wrote a few years ago on non-human character rigging. It's a paid tutorial, but it should work. Hope this helps
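     A minimal Blender Python sketch of the generic steps from the second method (zero the bone rolls, then apply the pose as rest pose); the Avastar-specific buttons belong to the add-on and aren't reproduced here, and the armature object name is hypothetical:

         import bpy

         arm = bpy.data.objects["Avastar"]  # hypothetical armature name
         bpy.context.view_layer.objects.active = arm

         # Zero the Bone Roll value on every bone (the Edit mode step above)
         bpy.ops.object.mode_set(mode='EDIT')
         for bone in arm.data.edit_bones:
             bone.roll = 0.0

         # Select all bones and apply the pose as rest pose (the CTRL+A step)
         bpy.ops.object.mode_set(mode='POSE')
         bpy.ops.pose.select_all(action='SELECT')
         bpy.ops.pose.armature_apply()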
  7. Rhino

    The same goes for Maya. Once you get the hang of it, the Options window of each tool is exactly like talking to Maya and saying what you want.
  8. Yes, the .anim format allows Rotation, Translation and Scale (on the mPelvis only) data animation at the same time. That was the whole point of using BVH in the first place. Joint position was initially thought of and seen as a "shape shifting" means, and only rotation should ever be accounted for during skeletal animation. The internal .anim format export/import is a workaround to avoid the BVH upload window in SL: that window translates the BVH file into a .anim format, but the compression method it uses is so lossy that at some point anyone would want to circumvent it. Great power comes with greater responsibility, both of which require greater knowledge.

     The shape sliders on the SL avatar rely on joint scale to reposition the joints; it's not an actual location change, which is why scale cannot (and should not) be animated. The upside of .anim is that you can get lossless animation import, without all the jittering everyone experiences from BVH imported animations. The downside is that you must be more careful: working on a specific shape other than the default Male/Female avatar in their neutral state brings a floating point accumulation error that the LL shape adjustment fix can't really manage properly, unless you're wearing that same exact shape inworld too, or you're doing a non-human character set of animations with joint positions embedded in the mesh. Not to mention that the .anim format can also animate Collision Volume bones and attachment points.

     Always remember that owning a device to record MoCap data and being able to transfer such data to the SL skeleton doesn't automatically make you a professional animator. These things are easily done following a series of steps in a procedure. An animator is one who knows how to avoid what is called "skeleton collapse" at runtime, which is what you're ranting about.
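     As a hedged side note on that lossiness: animation pipelines commonly quantize each key value to 16 bits over a fixed range. A minimal round-trip sketch (the [-1, 1] range for rotation components is an assumption):

         def f32_to_u16(value, lo=-1.0, hi=1.0):
             value = min(max(value, lo), hi)            # clamp into range
             return round((value - lo) / (hi - lo) * 65535)

         def u16_to_f32(packed, lo=-1.0, hi=1.0):
             return lo + (packed / 65535) * (hi - lo)

         original = 0.123456789
         roundtrip = u16_to_f32(f32_to_u16(original))
         print(original, roundtrip)  # tiny per-key error; the BVH upload
                                     # window's compression loses far more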
  9. Rhino

    That's what someone was saying about poly reduction, so I just followed up. On the other hand, I don't understand why one should use Rhino instead of learning Maya, 3ds Max or Blender.
  10. The problem is that animators still use joint positions when exporting animations. This is mainly due to the fact that they animate the face along with the body (or other joint positions), which was the reason LL was reluctant to add joint position animation on the face section of the rig. In the case of MoCap animations, this may become a severe complication, because the pro tools, AKA systems like Maya's/MotionBuilder's HumanIK, are node setups that confuse the Translate evaluator, exporting values relative to the normalized W term in the joints' rotation quaternion (see the 3D math definition of a quaternion to understand why; a short sketch follows below). In a tool like Maya's HIK, the process required to get clean exports is to split the body from the face animation using animation layers and export two different animations to be played back in parallel. Time consuming and cost ineffective. This latter point could even be neglected, but the time required (and the knowledge they show NOT to have) is an impediment to flooding the market with new releases to milk the users like cattle.

     Well, a pro would have known this issue and wouldn't have been so adamant in requesting joint position animation on face rigs at all costs to begin with. The same result could have been achieved via rotation, just having the joints offset off the mesh surface, but when I pointed this out in the last meeting I took part in for the Bento Project, nobody listened. Nobody considered that FaceRig (face sliders + animation) = trouble, which is the reason for Collision Volume bones at the time the SL skeleton was created. Never ever use joints for both the Rotation Matrix and the Scale Matrix. But I was the unaware one and "the ignorant fool". I never took part in those meetings again; it's pointless.

     EDIT: oh, and before someone comes out with the usual "but you could have contributed a working example to show this", my answer is always the same: I don't do work for LL for free. They should have hired a real professional rigger/animator and PAID for it. I'll just swallow whatever the community's "pros" come up with and work (around) within the given limits.
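     On that W term: in a unit quaternion the components satisfy x^2 + y^2 + z^2 + w^2 = 1, so W is implied by the other three and can be reconstructed, which is what rotation formats that store only X, Y and Z rely on. A small sketch:

         import math

         def reconstruct_w(x, y, z):
             # Unit quaternion: w is implied by x, y, z (sign kept positive
             # by convention); the clamp guards against float error.
             return math.sqrt(max(1.0 - (x*x + y*y + z*z), 0.0))

         # A 90-degree rotation about Y stores only (0, sin(45 deg), 0)
         y = math.sin(math.radians(45.0))
         print(reconstruct_w(0.0, y, 0.0))  # ~0.7071 == cos(45 deg)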
  11. Rhino

    Got some time for it, and changed the triangle direction:
  12. Rhino

    It's also available in the Mesh menu, if I recall correctly, not in the form of a modifier, along with a Quadrangulate option.
  13. Rhino

    Triangulation is something that software does under the hood even when you work with quads. You can notice it when you skew a quad too much: it gets bent and folded into two triangles, even if the triangles don't show an actual edge running diagonally through the quad itself. When rigging, there are a few areas that are more densely populated with polygons to allow a smoother deformation, like eye and mouth corners, to name a couple and give you an idea. When you get to test the rig weighting, you may notice that such areas show some bending and deformation that affects the shading, following a quad's triangle which obviously doesn't bend the right way around. That is the moment for using tools like the one Arton noted, Maya's Flip Triangle Edge; Blender's got its own too, although I don't remember the exact name it has in there. When all has been fixed and looks good, proceed with the actual triangulation in the software, and don't let it happen automatically upon export.
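    A sketch of that hidden diagonal (the shorter-diagonal rule below is just a common heuristic, not necessarily what any specific software uses):

        # The same four vertices fold into two different triangle pairs
        # depending on which diagonal the software picks.
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

        def split_quad(v0, v1, v2, v3):
            """Split quad v0-v1-v2-v3 (loop order) along the shorter diagonal."""
            if dist2(v0, v2) <= dist2(v1, v3):        # diagonal v0-v2
                return (v0, v1, v2), (v0, v2, v3)
            return (v0, v1, v3), (v1, v2, v3)         # flipped: diagonal v1-v3

        # A skewed, non-planar quad: the diagonal choice decides how it folds
        quad = [(0, 0, 0), (1, 0, 0.3), (1.5, 1, 0), (0, 1, 0.3)]
        print(split_quad(*quad))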
  14. Rhino

    That is why I was pointing out that the automatic FBX export triangulation may cause issues.
  15. Rhino

    This option may cause trouble with triangles near strongly bent surfaces (mouth corners, for example), rendering the affected triangles as holes. In such cases, it's advisable to triangulate the objects prior to export rather than upon export in the dedicated dialog.
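    In Blender, for instance, that pre-export triangulation can be done with the Triangulate modifier, so the result can be inspected and fixed first (a sketch; the active object is assumed to be the mesh being exported):

        import bpy

        obj = bpy.context.active_object
        mod = obj.modifiers.new(name="Triangulate", type='TRIANGULATE')
        bpy.ops.object.modifier_apply(modifier=mod.name)
        # Fix any badly folded triangles (mouth corners etc.) here, then
        # leave the exporter's own triangulation checkbox off.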
  16. This method requires overlapping UVs to work and map meshes to the right places on the texture. Lightmapping is out of the question, because we can't have a second UV set for the baked lighting, so bye-bye AO, due to the overlapping on the main UV set (UE4 creates a lightmap UV set for you under the hood upon import). It can be a good exercise, but for the majority of content on a commercial scale, that won't work. Customers want AO on the model, since the viewer delivers only a faint occlusion, which is quite taxing per se, and to make it work well it needs shadows turned on all the time.
  17. Your UV is mirrored: it looks like the original, but the model has flipped geometry, which means you'd need to flip the UV too. To solve this issue, keep your original UV map and create another one, where the left side's UV is flipped horizontally. Bind the material's texture mapping to the first UV map and bake onto the second.
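     A minimal Blender Python sketch of that setup; it flips the whole map horizontally (in practice you'd restrict it to the mirrored side's loops), and the new UV map name is hypothetical:

         import bpy

         mesh = bpy.context.active_object.data
         src = mesh.uv_layers.active                  # original UV map
         dst = mesh.uv_layers.new(name="BakeTarget")  # second UV map to bake onto

         for i in range(len(mesh.loops)):
             u, v = src.data[i].uv
             dst.data[i].uv = (1.0 - u, v)            # horizontal flip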
  18. You probably got me wrong. That statement of mine is a WISH I have.
  19. Rhino

    I have to correct myself here, as the max number of vertices is 256*256 = 65536 per mesh object. The other limit is a material issue: if a material is assigned to more than 21844 vertices, it gets silently split into a submaterial, which will be missing in the lower LoDs, therefore causing the dreaded MAV block missing error.
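    A quick sanity-check sketch against those numbers (the mesh data passed in is hypothetical):

        MAX_VERTS_PER_MESH = 256 * 256   # 65536 vertices per mesh object
        MAX_VERTS_PER_MATERIAL = 21844   # silent submaterial split above this

        def check_mesh(total_verts, verts_per_material):
            if total_verts > MAX_VERTS_PER_MESH:
                print(f"FAIL: {total_verts} vertices exceeds the per-mesh limit")
            for mat, count in verts_per_material.items():
                if count > MAX_VERTS_PER_MATERIAL:
                    print(f"WARN: material '{mat}' covers {count} vertices and "
                          "will be silently split (missing in lower LoDs -> MAV)")

        check_mesh(70000, {"skin": 30000, "cloth": 12000})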
  20. Rhino

    Rhinoceros is a NURBS based modeling suite. Therefore the result is going to be converted to a very high polygon model once imported into another 3D software, which is going to be a problem to import into SL. Your best bet is to try and reduce the imported model in 3ds Max until you get a mesh under 65535 vertices, which is the top limit for import into SL.
  21. As far as texture encoding goes, LL took the route of embedding different channels in the same texture when they implemented materials. It can't be stressed enough: the materials in SL aren't just a normal and a specular map with a glossiness slider value to govern the specularity. It's not as efficient as UE4's multi-texture embedding and shaders, but hey, that's UE4... In SL we have:

     • The diffuse color texture can embed a glow map in its alpha channel, in order to render a specific area of a texture as glowing while keeping the remaining pixels looking normal. Basically, a per-pixel glow instead of a face property. ALPHA DOESN'T MEAN ONLY TRANSPARENCY!

     • The normal map's alpha channel embeds a glossiness map, which the build tool helps with by giving us a multiplier (needed to convert a linear greyscale into an SL readable format).

     • The specular map's alpha channel embeds an environment map, which basically turns on/off and modulates the metallic shininess (again with a multiplier to help convert a linear greyscale into SL readable data).

     The current use of materials I see around IS a waste of resources already, because glossiness and environment maps are neglected by most, yet they're still accounted for during rendering: even if those textures don't actually have an alpha channel, they're treated as if they had one! LL has tried it, but the problem sits in the lack of proper documentation that anyone without a solid technical background can understand and make use of. However, LL is lacking on one fundamental aspect: keeping the development going! This is the point I have been advocating against the BakesOnMesh project: we've got a shader that gives us some material input slots; KEEP DEVELOPING THAT and come up with newer features to save more resources on ALL item types that can use materials, instead of reviving old features that, at the current state, address only one aspect of content creation (the mesh avatar bodies) in a crippling manner, since it doesn't address the newer texturing features and procrastinates their implementation to an indefinite time, if that will ever happen.
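     As an illustration of this kind of channel packing, a sketch using Pillow that embeds a greyscale glossiness map into a normal map's alpha channel (file names are placeholders; both images must share the same resolution):

         from PIL import Image

         normal = Image.open("normal.png").convert("RGB")
         gloss = Image.open("glossiness.png").convert("L")  # linear greyscale

         r, g, b = normal.split()
         packed = Image.merge("RGBA", (r, g, b, gloss))     # alpha = glossiness
         packed.save("normal_with_gloss.png")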
  22. Good luck with this. You should know exactly how long the AO animations last and make suitable collision volume bone animations to go with them, with perfect timing between yours and the AO's. As far as I can tell from what I could look up on the web, that's an automation system to synch animations on sit, and it is designed for OpenSim; I guess you're making your own mod version for this task. Although you can keep developing the weighting to include Collision Volume bones on this current avatar, the shape is given by the skeleton definition and it's not dependent on the mesh you were wearing at the moment of export. This shape gives you a good starting point, but it won't include the actual weighting from the Slink Physique body, which is crucial in order to design your animation in this case, since the twist deformation would depend on it. Your best bet would be to apply for a Slink dev kit, bring it into Blender and attach it to Avastar, make the arm twist setup and create your own animations. Synchronization will end up being a big pain and it's prone to error accumulation, which will ruin the effect over time. Anyway, good luck!
  23. I'll try again for everyone's benefit. Sculpts in general are an attempt to use NURBS surfaces in SL. The problem is that a NURBS surface uses one single UV map covering the whole 0-1 UV space, which is what we're used to about textures in general, while the sculpt OBJECT inworld is a "converted" prim. Now, what happens in principle:

     1) Take the prim, made of multiple texture faces. Each of these faces is a mesh whose boundary vertices overlap the adjacent ones, unstitched, each with its own 0-1 UV to get a square texture (save a prim cube with Firestorm's save-to-Collada and bring that into a 3D software to see it).

     2) Rearrange all of these UVs in a specific pattern, governed by the "stitching type" rule, to achieve the basic shape you want, so as to cover a new 0-1 UV space (the sculpt map image), pretending that it is now a solid surface (and then someone exploited this behavior with the multi-texture sculpt trick...).

     2.1) Recalculate the UVs/vertices to comply with the required subdivisions to get 4096 vertices.

     3) Rearrange the object's vertices according to points 2/2.1.

     4) Decode the colors from the sculpt map and move the corresponding vertices into place, including the overlapping ones (a small decoding sketch follows below). Colors should be encoded in RGB 0-255, and therefore only a selected range of colors is recognized as valid vertex locations (read further below).

     5) Precalculate the UVs/vertices for LoD degradation (I don't know whether this step happens earlier, though).

     A symptom of this inefficiency is that such a small texture takes a very short time to download, yet the time we wait for sculpts to be displayed on screen doesn't reflect this "efficiency", because of the prim system hacking the viewer has to perform. Over-sampled sculpt maps make the process from point 4 more unreliable (object-shape-wise) and intense, because the system expects each UV point to sit at the exact center of a pixel, one specific color, but it can't: it may find a pixel border (so what's the color here?) or a gradient transition between two colors, which doesn't belong to the specified 0-255 color range. Hopefully it makes more sense now why I personally think sculpts are a genius idea that shouldn't have been implemented in SL the way they are.
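     The decoding sketch referenced in step 4, under the assumption that each RGB channel maps linearly onto one axis of the object's bounding box:

         def decode_sculpt_pixel(r, g, b):
             # RGB 0-255 -> XYZ in the -0.5..0.5 range of the bounding box
             return (r / 255.0 - 0.5, g / 255.0 - 0.5, b / 255.0 - 0.5)

         # An in-between color produced by over-sampling/filtering decodes to
         # a position off the intended grid, the unreliability described above.
         print(decode_sculpt_pixel(255, 128, 0))  # ~(0.5, 0.0, -0.5)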