Everything posted by OptimoMaximo

  1. That's called IP theft, as those models belong to The Sims franchise.
  2. The Belleza original models were made in Maya, which fully supports bind poses. However, the creator might not know that Blender natively doesn't. The problem the OP is facing is that their pants were modeled with spread legs, so the issue may persist.
  3. Leaving aside the missing surface in SL, which is most likely a problem with too many materials... In Blender you can't have a custom bind pose (I don't know if 2.8 will bring news about this in the future); you must rig in T-pose. If you think about it, what you get in SL is exactly what you rigged: imagine the pants as you modeled them, just rigged to the normal T-pose straight legs... once you import and wear those pants, the offset carries over and you get the result you're showing from SL. To use a custom bind pose you should use the Avastar add-on, but I don't know whether this feature is still there, nor if it works as it should.
  4. Yes; as they are called by the script in two separate, sequential llStartAnimation calls, it is perceived by the user as a single animation. However, animations are client-side: only the asset server is aware of animation files being sent to the client, and the simulator itself doesn't even know that animations exist.
  5. There is an issue with script calls when pairing a Bento animation with a body animation, for which the Bento joints do not start playing. If, for example, you want the "cool stand" animation (non-Bento, body only) to play back and fire "cool hands for stand" (Bento animation, fingers only), it is not going to work in ZHAO II even when using the "first animation, second animation" method. Not entirely true: ZHAO II is set up to take advantage of layered animations, as seen in the default walk, where the upper body is driven by a priority 2 animation that differs between male and female while a priority 3 animation runs on the lower body for both genders, as described here: http://wiki.secondlife.com/wiki/Internal_Animations (walk: UUID 6ed24bd8-91aa-4b12-ccc7-c97c857ab4e0, avatar_walk.bvh, priority 3 & 0, Yes[1], automatically replaced by female_walk if a female shape is worn: priority 3 for pelvis and legs, 0 for everything else). As usual, LL's documentation is not accurate, but if you use the developer tools in the viewer you can see that there are two animations playing back when walking (with no AO attached), and switching gender replaces the animation playing on the upper body. This issue was brought up at the Bento Project meeting and Vir Linden is perfectly aware of it, but, as usual, nothing has been done about it (it is easier to drop the ball in the creators' hands with a "you can script a workaround to make it work").
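The layering behaviour described above can be sketched in a few lines: per joint, the highest-priority animation currently playing wins. This is a toy Python model with invented joint and animation names, not viewer code.

```python
# Toy sketch of per-joint animation layering by priority (names invented).
def resolve_pose(animations):
    """animations: list of (priority, {joint: value}). Highest priority wins per joint;
    later entries win ties."""
    pose, priority_of = {}, {}
    for priority, channels in animations:
        for joint, value in channels.items():
            if joint not in pose or priority >= priority_of[joint]:
                pose[joint] = value
                priority_of[joint] = priority
    return pose

# Default walk as described: priority 3 on pelvis/legs, priority 2 (gendered) upper body.
lower = (3, {"mPelvis": "walk", "mHipLeft": "walk", "mHipRight": "walk"})
upper = (2, {"mTorso": "female_walk", "mShoulderLeft": "female_walk"})
print(resolve_pose([lower, upper]))
```

Because the two animations drive disjoint joints at different priorities, the user perceives a single coherent walk; a Bento hands animation failing to start would be equivalent to its channels never entering the list.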
  6. It all depends on the worn shape. This is something most can't see, as it's not exposed in the user interface, but the shape slider system is based upon joint scaling to achieve a bone length. So, since the difference between a default male and a default female shape sits in their joint scales, the first thing that happens is an accumulated error in the world-space position of the final joint in a chain, due to its parents' relative distances: the hand's location in world space won't be the same between two different shapes, even if the forearm, upper arm and collar bone are rotated by the same amount. There is no such thing as a unisex or "fits everyone" animation/pose.
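The accumulation effect can be shown with a deliberately simplified sketch: bone lengths along one axis, each scaled by its joint's scale. The chain, lengths and scales below are made up; SL's real skeleton data differs.

```python
# Hypothetical illustration of how per-joint scale accumulates along a chain.
def world_end_position(bone_lengths, bone_scales):
    """Sum each bone's length scaled by its joint's scale factor (1-D chain)."""
    pos = 0.0
    for length, scale in zip(bone_lengths, bone_scales):
        pos += length * scale
    return pos

# Same chain (collar -> upper arm -> forearm), identical rotations assumed,
# two shapes with different joint scales:
chain = [0.10, 0.30, 0.25]  # metres, made-up values
shape_a = world_end_position(chain, [1.0, 1.0, 1.0])
shape_b = world_end_position(chain, [0.9, 0.95, 0.9])
print(shape_a, shape_b)  # 0.65 vs 0.6: the hand lands in different places
```

Even a few percent of scale difference per joint moves the chain's end by centimetres, which is why a pose authored on one shape drifts on another.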
  7. I didn't forget that reason, but this isn't a field where one can claim to have the meal served. I understood that too, Bitsy. I can see the value in that as well, but that's a job that deserves compensation, nobody would do it for free. Failing is easy when the counterpart doesn't want to listen and apply the explained knowledge, just keeping their way and grabbing only the bits needed to make it work anyway. You did what you could, Rey. Being receptive to something is up to the listener.
  8. Why not make a list arranged by material?
     Ground: grass, trees/tree stumps, rocks, vegetation, rubble (leaves, wood splinters, twigs), dirt, mud, mushrooms, plants
     Water: river, creek, waterfall, puddles
     Animals: mammals, insects, reptiles (turtles/snakes), amphibians (frogs/toads)
     Environment: weather sounds, animal droppings, burrows/hives, fog/mist, caves
     Man-made: campfires, garbage, dead animals (these could be dead for natural reasons, but the main cause so far is usually humans), furniture, ruins
  9. The forum is full of best practices, but, as I see it, they are neglected for various reasons:
     1. it assumes learning more about the 3D tool at hand
     2. it slows down the process (fewer releases in the same time frame)
     3. whoever wants to listen claims "step by step" instructions on specific topics (like copycats) instead of trying to understand and adapt the principles/methods to their specific case, so they drop out because of points 1 and 2
     4. "I'm a pro already, I don't need your 90's game stuff to dumb down my gorgeous products"
     5. who cares (which looks like the most prominent reason)
  10. Did you check whether MD's import window has a checkbox to import skinned meshes? If there is one and it is unchecked, the skin data would be stripped from the model.
  11. From the looks of things in your screenshot, that is an Avastar rig. I'm quite confident that the bones you can't bend are the blue ones, because they're constrained to (invisible) green ones. The blue bones are the ones you should paint weights for, while the green ones are supposed to handle animation only and shouldn't be in the vertex groups list at all. Moreover, you're using collision volume bones there to get a fitted mesh result (the red bones). Those bones are handled by the Avastar add-on scripts, since Blender can't manage bind poses natively. This is indeed what squeezes your mesh inside your avatar once imported. Solution 1: purchase Avastar and use its tools and exporter to get rid of the collision volume bone problem and keep going with your learning. Solution 2: remove the collision volume and animation bones and rig only to the blue bones. Make sure to orient the character facing the +X direction and apply rotation on both mesh and skeleton before exporting.
  12. I've just upgraded to ZBrush 2019 and I still have to look at the new ZRemesher; by the looks of things it's a significant change, since there's a "Legacy 2018" button next to it. So far I can only speak to what I've tried (2018 and older), and the ZRemesher Guides do their job up to a point. EdgeLoop, masked borders and slice brushes to make polygroup splits work nicely, but they require full loops everywhere, ideally perfectly straight. Even using the three tools in conjunction has, so far, led to a few spirals in my experience. The result is definitely NOT meant to be used as a retopo model; rather, it's a means to convert a DynaMesh into a workable mesh with topology and start the subdivision sculpting for the tiny details (after projecting the subdivided mesh onto the original DynaMesh to catch details that may have been lost in the process). It might be worth using bits and pieces of it in the retopology, though, with due clean-up of the troublesome areas. That's what happens in real game and movie productions too. Don't think that SL is a special case in any form. The platform has a few limitations in place for various reasons, but I see a common misconception that there is a huge gap between making models for SL and for another engine, aside from the tech limitations. For one, the idea that Bento rigging is any different from legacy skeleton rigging, or that fitmesh is any different from regular rigged meshes.
Again, aside from tech specs that may differ because of the platform's implementation of features requiring a different amount of data to handle, there is no difference between the two things (Bento vs legacy skeleton / rigged vs fitted), and model-making techniques do not differ, unless we look at polycounts (where other engines can go way higher than SL if needed). I'll give you here a bit of workflow expertise on how this is supposed to be handled:
     1. Freeform sculpting with DynaMesh, until the general shape is achieved and the low-frequency details are in place. Polish it to remove bumps.
     2. Increase DynaMesh resolution for mid/fine-detail sculpting; when done...
     3. Duplicate the tool and ZRemesh one of the copies, subdivide it and project it onto the DynaMesh left over from the duplication (this catches the details lost in the process).
     4. Finalize your sculpting on the ZRemesh result with all the high-frequency details.
     5. Duplicate the sculpted ZRemesh mesh and bring one copy back to the initial subdivision. This way you keep the low-res displacement from the sculpting, and you can reuse it to "steal" good-topology body parts.
     6. Duplicate the finalized, sculpted ZRemesh and apply all the subdivisions to get a really high-density mesh.
     7. Run Decimation Master on this; the decimated mesh is the one to export as the retopology base (it keeps shape and details and doesn't choke your app when importing it into Maya, Blender, etc.).
     8. Export the unsubdivided ZRemesh result from step 4.
     9. Import both meshes into your 3D app and check the ZRemesher result's topology to identify issues; delete what doesn't work and keep the rest.
     10. Work the empty spaces left over from step 8.
     11. Bake maps from the decimated mesh, OR send the retopoed model back to ZBrush and perform the baking there; remember to flip Y when exporting the texture.
     Decimation Master's result is not intended for sculpting or animation at all; that's why you get wonky results by doing so.
  13. Most folks in SL see this as an unnecessary hassle in the way of the perfect 3D mesh sale, IMO. Some time ago there was someone here on the forums claiming that the SL limits on vertex counts are ridiculous, because he had to spend so much time in ZBrush (!) fine-tuning the ZRemesher to output something that would be accepted by the uploader. Leaving aside the lack of understanding about these ZBrush tools and what they're intended for within the workflow, the average SL user thinks that highpoly = high quality, so every modeled bit of detail they're able to keep on the mesh counts towards a "higher quality product", and retopology is seen as "dumbing down my hard work"... totally neglecting details like ZRemesher's tendency to create spiral edge loops (!!), then wondering why their rigged content deforms weirdly in some areas, or why it takes so long to snap into place before it starts animating along with the avatar animation XD So yes, I do love the retopology process and work, and the satisfaction it gives me when the end result deforms, moves and fits flawlessly.
  14. Assuming that the rendering pipeline is similar to other realtime engines, deformation data is computed by the GPU from the polylist object computed by the CPU. So, if this is also true for the SL viewers, the heavy CPU performance hit you refer to is due to the mesh construction/topology/vertex amount, while the rigging (deformation data) is computed by the GPU in realtime. In this sort of scenario, it's conceivable that more joint data = more calculations, but it's also worth noticing that the per-mesh joint data limit splits these calcs into more "streams", depending on how the fitted mesh was sliced (in the case of avatar bodies) or separated into pieces, making the process lighter. @Beq Janus might shed some light on this part, as I'm not sure how viewers handle this.
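For reference, the per-vertex deformation work a GPU typically does is linear blend skinning: each vertex is transformed by every influencing joint's matrix and the results are blended by the skin weights. This is a minimal NumPy sketch of that computation, not viewer code.

```python
# Minimal linear-blend-skinning sketch (the per-vertex math GPUs usually run).
import numpy as np

def skin_vertices(vertices, joint_matrices, weights):
    """vertices: (N,3); joint_matrices: (J,3,4) affine transforms;
    weights: (N,J), each row summing to 1."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # (N,4) homogeneous
    # Transform every vertex by every joint, then blend by weight:
    per_joint = np.einsum('jab,nb->nja', joint_matrices, homo)  # (N,J,3)
    return np.einsum('nj,nja->na', weights, per_joint)          # (N,3)

# One vertex fully weighted to a joint translated +1 on X:
identity = np.hstack([np.eye(3), np.zeros((3, 1))])
move_x = identity.copy()
move_x[0, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0]])
out = skin_vertices(verts, np.stack([identity, move_x]), np.array([[0.0, 1.0]]))
print(out)  # [[1. 0. 0.]]
```

More joints per mesh means more matrices per vertex in this blend, which is the "more joint data = more calculations" point above; splitting a body into pieces shrinks the per-draw joint set.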
  15. Just to make this clear: Second Life accepts only one UV map, AKA UV set. That is, a data block containing a set of UVs for the model, regardless of whether those are overlapped and rendered on a per-material basis or not. A model, ideally, can have more than one UV set: you can lay out the whole model's UVs (or just a selection of them) in different, parallel and concurrent UV layouts that can be used by different textures, hooking them up appropriately in the 3D software. Again, this is a feature that SL does not support, and therefore we're all constrained to one UV map, which we can split and lay out on a per-material basis. So your question should be: how many materials are recommended?
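As a toy data-structure illustration (set names and values invented), multiple UV sets are just parallel per-vertex layouts keyed by name, and an SL-bound exporter has to pick exactly one:

```python
# Invented example: a mesh carrying two named UV sets; SL reads only one.
mesh = {
    "positions": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    "uv_sets": {
        "map1":     [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],  # e.g. for the diffuse texture
        "lightmap": [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9)],  # a second, parallel layout
    },
}

def uvs_for_upload(mesh):
    """SL supports a single UV set, so an exporter must flatten to one."""
    return next(iter(mesh["uv_sets"].values()))

print(uvs_for_upload(mesh))  # the "map1" layout only
```

Any texture that would have used the second layout in another engine has to be rebaked onto the surviving layout (or moved to its own material) before upload.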
  16. That would be the case IF it were a specular level; however, SL uses what is called a specular color. This comes from the attributes inherited from a Blinn-Phong shader model, in order to give metals a proper reflection: in that shader model, metals have no diffuse component (black), and their resulting diffuse color (what you see once rendered) comes from the specular color. I gave a short explanation of this in this other thread, not long ago. Obviously the link pointed to by @Beq Janus contains the information needed for SL, and she's absolutely correct in doing so. However, I think some extra, "external" info isn't a bad idea to have.
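The "metal color lives in the specular" idea can be shown with a heavily reduced Blinn-Phong-style sum; the material values and lighting terms below are invented for illustration only.

```python
# Reduced Blinn-Phong sketch: final color = diffuse * N.L + specular_color * highlight.
def shade(diffuse, specular, n_dot_l, spec_term):
    return tuple(d * n_dot_l + s * spec_term for d, s in zip(diffuse, specular))

gold_diffuse  = (0.0, 0.0, 0.0)    # metal: no diffuse component (black)
gold_specular = (1.0, 0.77, 0.34)  # the gold tint lives in the specular color
plastic_diffuse  = (0.2, 0.8, 0.2) # dielectric: color in the diffuse
plastic_specular = (0.04, 0.04, 0.04)  # grey, low specular level

print(shade(gold_diffuse, gold_specular, 0.8, 0.5))     # warm tint from specular only
print(shade(plastic_diffuse, plastic_specular, 0.8, 0.5))  # green from diffuse
```

Zero out the specular term and the gold sample renders pure black, while the plastic keeps its green: exactly the behaviour described above.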
  17. The physics shape gets its bounding box shrunk or inflated to match that of the visual mesh, AFAIK
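If that's right, the adjustment amounts to per-axis scale factors derived from the two bounding boxes. A minimal sketch of that idea (function name and numbers invented):

```python
# Hypothetical sketch: scale the physics shape per axis so its bounding box
# matches the visual mesh's bounding box.
def fit_scale(phys_size, vis_size):
    """Per-axis scale factors (x, y, z) applied to the physics shape."""
    return tuple(v / p for p, v in zip(phys_size, vis_size))

# Physics cube 1x1x0.5 stretched to cover a 2x1x1.5 visual mesh:
print(fit_scale((1.0, 1.0, 0.5), (2.0, 1.0, 1.5)))  # (2.0, 1.0, 3.0)
```

Note how a thin physics plane gets a large vertical factor when the visual bounds grow in height, which connects to the observation in the next post.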
  18. The bounding box has increased in height, so the physics plane is adjusting accordingly, in my opinion. Try using a cube instead.
  19. Enable translation in the export; it's a checkbox.
  20. Clearly it's a photoshopped screenshot.
  21. For quite a long time the BVH exporter in Avastar was broken, and it was somehow fixed later on. So something in the evaluation script might be slightly borked. You may want to use the .anim format, which is better in every respect.
  22. This clashes with any definition of a specular/glossiness workflow, and here is a demonstration, rather than a link to documentation obscure to most folks. Substance Designer: here there's a blending of materials; the top is gold, the bottom a green plastic. On the right-hand side are the output textures for a spec/gloss texturing workflow. Notice: on top there's the Diffuse, then Normal, Specular, Glossiness and height map (which we'll neglect). The glossiness map shows no division because the base materials were set to the same value. As you can see, whatever is metallic has NO diffuse component, and its color is driven by the specular color. All dielectric, AKA non-metallic, materials always have a specular map set as a shade of grey, and their color is left in the diffuse. The metals are black in their diffuse component to mark per-pixel metalness, and this can be seen in any renderer that uses Blinn-Phong based shaders. Now, this partially happens in SL. To keep the legacy textures from breaking, LL did a sort of hybrid with the metallic/rough workflow. The regular diffuse color textures were maintained; however, they needed some way to mark per-pixel metalness, which ended up being the environment map (the SL specular map's alpha channel). That way the diffuse color textures (different from just diffuse: a diffuse color is the result of lighting) could be kept and act as a basecolor map (in the metal/rough sense), have a Blinn-Phong compliant specular color map, and let the legacy shininess setting (the metallic look) operate on a per-pixel basis through the environment map provided in the specular map's alpha channel. Therefore, NO, you don't have to fiddle with color in the specular map for a non-metallic material, and its lack of colorization doesn't give it a metallic look. As a further note, you may inspect the specular map's levels.
Metallic materials' levels range slightly above 0.5 (or 127), while non-metals range slightly below this value (the reason a specular level map exists in a few rendering engines). Exactly.
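The threshold above can be expressed as a toy per-pixel classifier (purely illustrative, using the 8-bit value 127 mentioned in the post):

```python
# Toy classifier per the post: metals sit slightly above 127 (8-bit 0.5),
# dielectrics slightly below. Not a real engine rule, just the stated heuristic.
def is_metal(spec_level_8bit):
    """Classify one pixel of a specular-level map as metal (True) or dielectric."""
    return spec_level_8bit > 127

print(is_metal(180), is_metal(60))  # True False
```

Inspecting a specular map's histogram against that midpoint is a quick sanity check on whether a texture was authored with metals in mind.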
  23. They're both text files and, as such, they may be heavy and not suited for network streaming; the point is that if a choice had to be made between the two, FBX would be better, because it generally ends up smaller than Collada. Yes, I know; I was about to write a custom exporter for Maya some time ago, after my animation exporter was done, and I did research SL mesh's internal architecture. I found out it was impossible to achieve, because some required data needs to be written to the file on upload by the uploader itself, in a specific location of the file, so I dropped the idea altogether; it wasn't worth it. The specs say that the final binary-encoded file has a maximum allowed size of 8 megabytes and contains all LoDs, physics and all materials as mesh subsets, but again, some data needs to be filled in post-import (like the Havok physics). It's very streamlined, and it's for sure the best way to stream a mesh object over the net, compared to a generic text file such as FBX or Collada.
  24. And what does this consumer related question have to do with the creation forum?
  25. While it is without doubt the de facto standard, you need to understand that network traffic is better off with a format that is not text based. SL's internal asset is written in binary, which translates into a file with an 8 MB size limit, including LoDs and the max vertex count. This clashes with my personal experience. While this may be true for static objects, let's not forget that both FBX and Collada are full scene exporters, and a true comparison can be done only when animation-related data is also included. I see this pretty much every day, since my workflow includes export to FBX and conversion to Collada using FBX Converter (although I'm told Maya can now export perfectly acceptable .dae files, this habit of mine persists from the early days of mesh, when Maya's Collada had some clashes with SL's Collada expectations): a rigged model in FBX is WAY smaller than the same scene in Collada; where the FBX is just around 700 KB, the Collada counterpart reaches tens of megabytes, and things get definitely worse when blendshapes and keyframes are involved. Let's not forget that FBX is Autodesk's take on Collada, tweaked to suit and support their proprietary software's specs and scene structures: FBX supports geometry grouping, for example, while Collada doesn't. Then, there is also this: you need an SDK and a license to support FBX in your own application, which is why LL chose the rather inexpensive Collada. If the viewer weren't open source, they probably would have used FBX. Many game engines already handle meshes internally in a binary format. Reading the documentation, you can pretty easily find limit numbers set at 65536, which is the most important clue to interpret as "we binary-encode this data; this is the limit". The only example coming to mind right now is UE4's limit on skeletal meshes' number of joints, which is 65536, and Unity's clear statement that their internal asset encoding is binary.
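Two of the claims above are easy to demonstrate with a small sketch using made-up vertex data: binary packing is far more compact than decimal text, and 65536 is exactly what a 16-bit index field allows.

```python
# Made-up data: compare binary vs text encoding of 1000 vertices, and show
# why 65536 (2**16) shows up as a limit when indices are 16-bit.
import random
import struct

random.seed(1)
verts = [(random.random(), random.random(), random.random()) for _ in range(1000)]

binary = b''.join(struct.pack('<3f', *v) for v in verts)            # 12 bytes/vertex
text = '\n'.join(f'{x:.6f} {y:.6f} {z:.6f}' for x, y, z in verts).encode()

print(len(binary), len(text))  # binary is well under half the size here

# A 16-bit unsigned index tops out at 65535, hence limits of 65536 entries:
max_index = struct.unpack('<H', b'\xff\xff')[0]
print(max_index + 1)  # 65536
```

Real formats add headers, compression and shared index buffers, so actual ratios vary, but the direction of the comparison holds.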