
OptimoMaximo

Resident
  • Content Count: 851
  • Joined
  • Last visited
  • Days Won: 3

OptimoMaximo last won the day on February 15

OptimoMaximo had the most liked content!

Community Reputation

860 Excellent

9 Followers

About OptimoMaximo

  • Rank
    Maya MahaDeva

Recent Profile Visitors

1,246 profile views
  1. In my opinion, Maya is a good example of a well-organized interface that makes learning things easier. It's not simple, just well organized. Since "auto-LoDs" aren't a reality we'll see anytime soon (if ever), that's the reason I made my auto-LoD script for Maya. At the very least, for rigged meshes (I was thinking of animesh objects), because those are the type of objects that give the worst results and the highest amount of trouble on upload, so my script automates the (time-consuming and tedious) process required to make things work, as a means to incentivize good LoD models among my user base. I'm hard-testing it against a creature avatar I'm currently making. It all works flawlessly so far, with no skinning issues (fixed by my automation) and quick-and-easy iterations when things need fixing after test import failures (a bad deformation I hadn't noticed earlier, from a joint I had totally overlooked). Degradation inworld is smooth and imperceptible at each single LoD switch. There's only one weird thing happening on the SL side, not pertaining to my script: when rezzed, this creature weighs 685 LI in total; when switched to animesh, its LI LOWERS to 130 LI (and performing the animesh LI calculation manually, it should be 137.022). So, I can agree with @animats about the LoD implementation in SL: it is ridiculous, deceiving, arguable and disappointing, although I don't agree with his vision of automation-and-impostoring-all-the-way. A perfectly industry-standard-aligned set of LoD models (50% the triangle count of the LoD above) should give a consistent result, no matter what final use the model is intended for.
  2. Baking is a very tiny niche demand, therefore image-based lighting aimed at it isn't particularly easy to find, as opposed to HDRI images that instead provide good, contrasty lighting. Substance Designer now supports HDRI image creation, so it wouldn't be too difficult to create baking-friendly environments. Now, that may be a bit hardcore a solution for anyone to approach (SD is node based and assumes some rendering-pipeline knowledge), so my best suggestion is the following: find a good HDRI that lights the front as you wish it to be lit, regardless of what the sides or back look like. In Maya, Arnold, V-Ray and Redshift all use an image-based-lighting HDR applied to a dome or sphere object. This means you can duplicate the IBL dome/sphere and rotate it around so that the same lighting applies to another side (say, the back). Adjusting the exposure values gives you control over the overall lighting from each environment sphere. Moreover, in Maya you can set up render layers and apply a material override (e.g. ambient occlusion; just make sure to plug in normal maps so they're accounted for in the AO pass) and AOVs (basically components of the beauty render, just separated: specular, indirect specular, etc.) and render it all in one go, leaving the compositing of such effects to the post-process stage in Photoshop (or equivalent). Built-in environment spheres are usually the so-called "physical sun and sky" setups, which are a no-go in terms of even lighting distribution (it's basically just a clear sky plus a light that determines the sun direction, instead of a pre-made HDR image's sun).
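The duplicate-and-rotate trick above can be sketched like this. The helper is plain Python and the function name is mine; the commented lines show how it might be applied to a duplicated Arnold skydome in Maya, and those `maya.cmds` calls and node names are untested assumptions about your scene:

```python
def mirror_rotation_y(ry_degrees):
    """Y rotation for a duplicated IBL dome so the same HDR
    lights the opposite side (e.g. the back) of the model."""
    return (ry_degrees + 180.0) % 360.0

# Inside Maya it could look roughly like this (hypothetical node names):
#   import maya.cmds as cmds
#   back_dome = cmds.duplicate("aiSkyDomeLight1")[0]
#   cmds.setAttr(back_dome + ".rotateY", mirror_rotation_y(0.0))
#   cmds.setAttr(back_dome + ".aiExposure", -1.0)  # dim the fill side

print(mirror_rotation_y(30.0))  # 210.0
```

The exposure attribute is where the per-dome balance described above happens: each duplicated sphere gets its own exposure, so the back fill can sit a stop or two under the front key.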
  3. OK, so just to make this clearer: you made the animation with the avatar facing back by rotating the Origin bone on the grid floor? Because that rings a bell for me. It is most likely a Blender bug in how the math is being performed. In the Avastar Python code for anim export, rotations and positions are calculated with some matrix math in order to switch the reference frame (the world) to match that of the SL anim format and retrieve the correct coordinates. The problem should arise (if I remember correctly; I have no time right now to check) because location data is retrieved from the Origin bone as the reference for the actual avatar position inworld (the character controller that moves the avatar around, which is the position reference point), while the rotation values are calculated from the world coordinate system (I think for easier reuse of the matrix calculation code), because quaternion rotations default to the world's 0,0,0 location, and using the Origin would have required a lot more math on top of it, with the resulting risk of things not working properly (or at all). Anyway, as a general guideline, the Origin bone should be used only for positioning the avatar somewhere in the scene (i.e. next to a prop the avatar should interact with), while the actual movement around (including rotations) should always be performed using the COG bone.
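The reference-frame mismatch described above can be illustrated with plain quaternion math (a minimal sketch in pure Python, not Avastar's actual code): expressing a world-space rotation relative to the Origin's frame means multiplying by the inverse of the Origin's rotation, and skipping that step is exactly the kind of bug that only shows up once the Origin bone has been rotated.

```python
def q_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def q_conj(q):
    """Inverse of a unit quaternion."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def world_to_origin(q_world, q_origin):
    """Re-express a world-space rotation in the Origin bone's frame."""
    return q_mul(q_conj(q_origin), q_world)

# If the Origin itself holds the whole rotation (avatar turned to face
# back), the rotation relative to the Origin should be identity:
q = (0.0, 0.0, 0.0, 1.0)  # 180 degrees around Z
print(world_to_origin(q, q))  # (1.0, 0.0, 0.0, 0.0)
```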
  4. I might be completely off, and this would apply only if the avatar you're using is Bento. Not everyone remembers that there are additional spine joints, just overlapping back and forth between mTorso and mChest. Also, Avastar has targetless IK enabled on all joints. So what I'm thinking is that, perhaps, editing the curves is affecting the rotation on mSpine1 or mSpine2 through the targetless IK modifier. I've never seen that happen myself either. There are two possible solutions at this point: try to remove any keyframes on all the mSpine joints if there are any, or disable the targetless IK that runs through the mSpine joints (all of them) in case they don't have any associated keyframes.
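Cleaning keyframes off the mSpine joints could be sketched like this. The filter function is plain Python (and my own naming); the commented `bpy` lines are a hypothetical Blender usage, assuming the standard `pose.bones["…"]` data-path convention for armature F-curves:

```python
def is_mspine_curve(data_path):
    """True for F-curves driving any of the Bento mSpine joints
    (mSpine1..mSpine4, overlapping mTorso and mChest)."""
    return any(
        'pose.bones["mSpine%d"]' % i in data_path for i in range(1, 5)
    )

# Hypothetical Blender usage (run inside Blender on the armature's action):
#   import bpy
#   action = bpy.context.object.animation_data.action
#   for fc in list(action.fcurves):
#       if is_mspine_curve(fc.data_path):
#           action.fcurves.remove(fc)

print(is_mspine_curve('pose.bones["mSpine2"].rotation_quaternion'))  # True
```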
  5. If saying what you don't like to hear counts as being rude... that's news to me; I must be one of the rudest persons on this forum then 😆
  6. I'd say voluntarily ignored as that's what he's all about 🤣 And this
  7. Until baking details like strings, rubber pads and indents into maps is seen as "low quality", "3d noob" practice, there won't possibly be anything suitable to be added to an animesh character's wardrobe. Unfortunately.
  8. It's not a classroom, and users aren't here to explain every single basic detail so you can slowly progress under instruction. It's definitely a place to learn, sure, but you have to have a basic understanding of the processes first, then ask for clarifications/explanations and learn what it takes specifically for SL. Without a clue about what you're doing, it's just a monkey-see-monkey-do approach, which leads to unoptimized, roughly arranged assets thrown at the platform for a quick buck or something. So, again:
  9. Says the one who, by inspecting an object, can't tell whether it's alpha flipping or an animated texture, and isn't able to describe the latter properly for others to understand. Good job! 😂
  10. Never heard of animesh before? Edit: Rolig was faster 😅
  11. So basically your contribution is to be "the idea guy", with literally no work involved on your behalf. Good luck 🤣
  12. Hi! First, it all depends on whether your models are rigged or static; if static, what do they represent? An organic shape like a statue can be reduced in triangle count more loosely than a primitive-based build, like a house that's basically a bunch of cubes. The general rule is to have a significantly lower number of triangles for each LoD than the LoD above. The uploader's generator does a harsh (pretty random) reduction, where the mid LoD is around 1/4 the triangle count of your model, for example, and each following LoD is 1/4 of the previous one. And it doesn't always respect your UV mapping. I don't know 3ds Max, but I assume there's an option for automatic reduction too, where you can set basic parameters like UV preservation. Be careful though, because the lowest LoD is what most heavily influences the outcome as far as land impact goes. That might need some manual work instead of relying on an automatic process, because every edge and triangle matters in terms of silhouette retention and final LI. Follow the advice mentioned in the previous posts as a basis, then try to expand further based on the type of model you actually have at hand.
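To make the difference concrete, here is a small sketch (my own illustration, not the uploader's actual algorithm) comparing the uploader-style 1/4 reduction chain against the gentler 1/2-per-level chain recommended earlier, for a hypothetical triangle count:

```python
def reduction_chain(high_tris, ratio, levels=4):
    """Triangle counts for High, Mid, Low, Lowest LoD slots,
    each level keeping `ratio` of the triangles of the one above."""
    return [max(int(high_tris * ratio ** i), 1) for i in range(levels)]

model = 8000  # hypothetical triangle count
print(reduction_chain(model, 0.25))  # uploader-style: [8000, 2000, 500, 125]
print(reduction_chain(model, 0.5))   # manual chain:   [8000, 4000, 2000, 1000]
```

The 125-triangle lowest LoD from the 1/4 chain is where silhouettes usually collapse, which is why that slot in particular rewards hand-made geometry.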
  13. Yes. Either recording an actor, or manually posing over time. This part might not be OK: is that model licensed to be used on SL or similar? Is it for commercial purposes? Aside from being a task that goes beyond your current skill level, there are legal issues to consider. And a forum thread can't be a learning place, or just somewhere to ask questions about the step-by-step procedures needed to reach your expected result. I, for one, will stop replying to this thread's posts. Please read my comment above yours and ask yourself a few questions.
  14. I can talk about Maya. When importing a new FBX animation, if the skeleton is the same, just make sure to set the FBX import mode to "update" and that will bring in just the animation, updating the current skeleton's keyframes. Then it's just a matter of retargeting the new motion to the SL character.
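In script form, the same choice could be sketched like this. The helper is plain Python; the commented MEL calls reflect Maya's FBX plugin commands (`FBXImportMode` with values `add`/`merge`/`exmerge`, where `exmerge` is the "update animation" behavior) as I recall them, so treat them as an assumption and check your Maya version's docs. The file path is hypothetical:

```python
def fbx_import_mode(same_skeleton):
    """Pick an FBX import mode: update the existing skeleton's
    animation when the rig matches, otherwise bring nodes in as new."""
    return "exmerge" if same_skeleton else "add"

# Hypothetical Maya usage:
#   import maya.mel as mel
#   mel.eval('FBXImportMode -v "%s"' % fbx_import_mode(True))
#   mel.eval('FBXImport -f "C:/anims/new_clip.fbx"')

print(fbx_import_mode(True))  # exmerge
```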