
Quarrel Kukulcan

Resident
Everything posted by Quarrel Kukulcan

  1. The ZHAO only triggers off avatar locomotion modes -- swimming, sitting, turning left, that sort of thing. You need to find or write one that detects system face animations instead.
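     In case it helps, here's a rough LSL sketch of that kind of detection. The UUID list is a placeholder -- you'd fill it with the system expression animation UUIDs from the wiki; none of the values come from the post above:

        // Sketch only: poll the wearer's active animations and report any
        // that match a list of known system facial expressions.
        list FACE_ANIMS = [];   // placeholder: the express_* animation UUIDs from the wiki

        default
        {
            state_entry()
            {
                llSetTimerEvent(0.5);   // poll twice a second
            }

            timer()
            {
                list playing = llGetAnimationList(llGetOwner());
                integer n = llGetListLength(playing);
                integer i;
                for (i = 0; i < n; ++i)
                {
                    key anim = llList2Key(playing, i);
                    if (llListFindList(FACE_ANIMS, [anim]) != -1)
                    {
                        llOwnerSay("Face animation playing: " + (string)anim);
                    }
                }
            }
        }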
  2. Do you mean stationary mannequins or wearable mesh bodies?
  3. SL's maximum image size is 1024x1024, and it runs everything uploaded through JPEG-2000 compression. Images don't have to be square, but width and height both need to be a power of 2 or SL will stretch that dimension up to the next one. If you have to shrink an image, it's often best to use linear interpolation instead of what most image software suggests, because with something like bicubic sharpening, the sharpening interacts poorly with the JPEG-2000 compression and you'll get excess artefacting.
     It's probably best to display art on a cube or maybe plane prim. Those don't collapse into simpler models with distance, and they don't cost Lindens to create.
     If you're crafting architecture or statuary in a 3D program, you'll need to learn what SL expects for the physics collision model and for the multiple simplified Levels of Detail that all uploaded mesh objects have (and which will be auto-generated if you don't create your own). That's bigger than I have time to post about tonight, sadly. Folks in the Mesh forum will be able to help there.
  4. Not one I was aware of, since it's not in the official wiki and there isn't a more-accurate alternative AFAIK. Anyway, the joke's on you: I'm wrong. It's a fencepost error on my part.
     The Loop In/Loop Out frame panels in the Firestorm upload UI don't specify whole frames. They specify frame endpoints in time. "1" doesn't mean "all of frame 1", it means "the end of frame 1", which isn't exactly intuitive.
     Ex: You make a 3-frame animation and want to loop the last two forever. Once you add the reference frame SL expects so it doesn't eat your real first one, you've got a 4-frame animation with your loop on frames 3 and 4. So Loop In should be 3 and Loop Out is 4, right? Well... maybe. You'll get a transition from your frame 3 pose into your frame 4 pose, but it will only last one frame's worth of realtime. "Loop In 3 / Loop Out 4" doesn't result in a loop where you're locked in pose 3 for one full tic, then locked in pose 4 for one full tic, which is what I was expecting. If it's more important to have 2 tics of delay (say, for timing reasons), you need to set Loop In to 2 -- and also align your endpoint keyframes differently to avoid snapback.
     Now, sure, in an animation with hundreds of frames this stuff isn't noticeable, but it's been glitching my gestures and short anims for the longest time and I've just now figured out why.
  5. "The first frame of a BVH file isn't displayed. It's used to determine which joints the animation controls." That's from the wiki, so no surprise there. In what I assume is an attempt to preserve the animation length, SL will replace the first frame with a copy of the second for animation purposes. Except the uploader replaces the first frame with two copies of the second frame and makes the whole animation one frame longer. EDIT: See reply.
  6. Thanks. That answers the "do they store a sparse set of keyframes" question: they do not. It's fully specified data for every joint on every frame.
  7. Can BVH files store a small set of keyframes or do they encode every bone's transforms for every frame? If they can store keyframes, will the resulting animation be smoother in SL at low FPS, and do they support custom easing?
  8. When you want a certain behavior, it's best to work backward from the results you want and craft a simple approximation. You'd be surprised how often simple is better -- more intuitive, more codeable, more tuneable.
     For instance, your complicated approach with repeated tests and an increasing chance after each failure is actually more likely to pass early than late. The behavior will trigger within 15 seconds 55% of the time and have a 95% chance of happening within 25 seconds. Here's a graph of how often the behavior lasts exactly X seconds before switching. I bet you didn't expect that, since it sounds like you wanted longer delays to be more common.
     To weight a result toward the high end, one kind of mathy way is to take the floating-point number at the core of the randomizer -- the one that's in the 0.0 to 1.0 range -- and do something to it that increases the values in the middle but leaves the 0 and 1 values alone. One straightforward option: take the square root. That tweak has to be made before multiplying out to cover the range of values you need. In LSL, where we only have llFrand() and it multiplies by the range for us, that means doing this: llFrand(max) becomes llPow(llFrand(1.0), 0.5) * max. If you want a stronger or weaker skew, tweak the 0.5. There's also a non-mathy way to skew results high: generate two random numbers and use the bigger one.
     Skewing random numbers low means squaring the 0.0-1.0 core value (power 2.0 instead of 0.5) if you're mathing it, or rolling two unskewed numbers and picking the smaller one if you're not.
     If you want to do something with a bell curve -- something more likely to pick a number in the middle of a range than one near the ends -- one simple way is to add N random numbers that each cover 1/Nth of the range. Say you want an eyeblink every 4 to 10 seconds. If you use 4 + llFrand(6) you're just as likely to get short or long blinks as average ones. To skew the blinks toward average delays you could use 4 + llFrand(3) + llFrand(3) instead. 4 + llFrand(2) + llFrand(2) + llFrand(2) would make the bias even stronger.
     P.S. The "select from an array" approach is also very simple, flexible and tuneable.
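     For concreteness, here's how those tricks might look in a script. The function names are mine, not anything standard:

        // Sketch of the skewing tricks described above.
        float rand_skew_high(float range)   // biased toward the high end
        {
            return llPow(llFrand(1.0), 0.5) * range;
        }

        float rand_skew_low(float range)    // biased toward the low end
        {
            return llPow(llFrand(1.0), 2.0) * range;
        }

        float blink_delay()                 // bell-ish curve between 4 and 10 seconds
        {
            return 4.0 + llFrand(3.0) + llFrand(3.0);
        }

        default
        {
            state_entry()
            {
                llOwnerSay("skewed high: " + (string)rand_skew_high(10.0));
                llOwnerSay("skewed low:  " + (string)rand_skew_low(10.0));
                llOwnerSay("next blink in " + (string)blink_delay() + " seconds");
            }
        }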
  9. Max image size in SL is 1024 x 1024 (not 512), so that's not your problem. Every image you upload gets run through JPEG-2000 compression. Small images sometimes give you a checkbox to refuse that compression, but I'm not sure it works. It also looks like your HUD object isn't big enough to show your image at an exact 1:1 pixel scale. Say your HUD panel image is 384 pixels across in your source PNG but you put it on a HUD prim that only covers 369 pixels of screen in your viewer. That will also cause resampling and a loss of quality.
  10. The wiki advises using touch_end() instead of touch_start() to do click-based state changes.
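      A minimal sketch of that advice (my own example, not from the wiki): the state switch happens in touch_end, after the click has fully finished.

        // Toggle between two states on click. Doing the switch in touch_end
        // avoids the in-progress touch being handed to the new state.
        default
        {
            touch_end(integer num_detected)
            {
                llOwnerSay("Turning on");
                state active;
            }
        }

        state active
        {
            touch_end(integer num_detected)
            {
                llOwnerSay("Turning off");
                state default;
            }
        }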
  11. Well, for one, you're creating a variable "name" that redefines one of listen()'s parameters. I don't know if that's safe. Actually, (a) always happens if the message is correct. (b) sometimes also happens.
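      For illustration only (this isn't the poster's script), a listen handler that leaves the event parameters alone and uses a differently named local variable:

        default
        {
            state_entry()
            {
                llListen(5, "", NULL_KEY, "");   // channel 5 is arbitrary for this example
            }

            listen(integer channel, string name, key id, string message)
            {
                // "name" and "message" are already the event parameters,
                // so any extra local variable gets a different identifier.
                string speaker = llKey2Name(id);
                llOwnerSay(speaker + " said: " + message);
            }
        }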
  12. It's not used when uploading. (Okay, it is used when uploading if the image has a dimension larger than 1024 or that isn't a power of 2, but that's a separate issue.) It's used when displaying in-world.
      The points that SL renders on your monitor might need to be pulled from anywhere, mathematically, in the face texture. Imagine that your texture pixels are NOT full squares of color, but instead are infinitely small pinpoints that only define a color at their precise locations. For every spot that doesn't line up perfectly with those, your renderer will have to use some method of figuring out what color to use. SL's renderer doesn't have the option of using the closest one like Blender does. It's locked into "average the closest ones based on linear distance".
      What color does SL need to render the spot inside the purple circle on this cube face if that face has a 2x2 pixel texture? Well, since SL is gonna linearly interpolate, that won't be white. That circle is about 30% of the way from the top pixel, so it will be about 70% of the top pixel's color + 30% of the bottom pixel's. The side effect of huge pixels is that basically every point on the texture is between pixels of different colors.
      If we use a higher resolution texture, we'll get something like this. Now the exact same spot is halfway between a white pixel and another white pixel. SL will take 50% of white and add it to 50% of white to get white.
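      To put numbers on that blend, here's the same calculation as a tiny script; the bottom pixel's color is made up for the example.

        // Linear interpolation between two texture pixels, the way SL's
        // renderer does it. Pixel colors here are hypothetical.
        default
        {
            state_entry()
            {
                vector top    = <1.0, 1.0, 1.0>;   // the white pixel
                vector bottom = <0.2, 0.2, 0.8>;   // a made-up non-white pixel
                float t = 0.3;                     // sample point 30% of the way toward the bottom pixel
                vector blended = top * (1.0 - t) + bottom * t;   // = <0.76, 0.76, 0.94>
                llOwnerSay("Blended color: " + (string)blended);
            }
        }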
  13. It's because SL displays textures using linear interpolation between the closest pixels. In general, you want to blend between pixels rather than displaying every point as exactly the pixel it falls within, since the latter makes even full-res textures look slightly more block-dotted. What you're seeing is the drawback of using that method on unusually low-res texture images. Unfortunately, SL doesn't give you the option of setting different interpolation methods for different materials like Blender does.
  14. Of course you're ready to try. This takes a while to master, and the second best time to start is now. (The best time is pretty much always "years ago" and not an option.) Don't worry too much about not doing the later Blender Guru tutorials yet. Anything dealing with materials or lighting is less relevant in SL because the rendering engine is different.
      A very (very) general overview of what lies ahead of you looks something like:
      A1. Learn how to make static things in a 3D modeling program.
      A2. Learn how to make rigged things in a 3D modeling program.
      B. Learn how to properly make things for one particular realtime 3D game engine.
      That is not a strict order. You'll do a lot of bouncing around. And definitely get yourself a login on the preview grid: http://wiki.secondlife.com/wiki/Aditi
  15. When it comes to the original system AV, some body sliders work by altering basic animation bones while others blend between custom-made, localized deforms built into the system AV mesh. When LL first added rigged mesh uploads in 2010, bodies and clothes were automatically affected by the first kind of slider (because moving a bone moves everything rigged to it) but not by the second (because uploaded meshes don't support per-vertex morphing, and even if they did, SL doesn't communicate slider positions directly, plus morphs don't easily transfer between different arbitrary meshes).
      So a couple of years later, LL added new bones that moved and scaled in ways that imitated the non-bone deforms as closely as possible. (That's Fitted Mesh.) If you rig to these new bones, you pick up the missing body slider influences.
      There are two problems, though. One: it's more and harder work to rig to a combination of animation bones and Fitted Mesh bones. Two: due to how LL implemented this feature internally, you can't rig to these bones in straightforward fashion in some popular 3D modeling tools without paid add-ons.
  16. Something else to be aware of: every time you select one of the built-in avatars to wear, SL creates an entire brand-new set of objects & skins and puts them on you. Those will build up in your inventory. It's good practice to delete the ones you decide against and make an Outfit of any you like. (That'll let you tweak it, too, and not lose the changes.)
  17. Just go through the upload process but don't click the final "Upload" button. The "Calculate weights & fee" button will tell you the LI. Also this. It's a much more thorough test (though that's a separate issue).
  18. Flexi-prims and transparency blending are the two things that drive complexity up the most. A lot of hair uses both, so keep an eye out for ones that don't. Alpha blending tends to be used in a lot of mesh bodies & clothing, actually. You have to watch for those factors too if you're aiming for low Rendering Complexity. It's not just vertex count. The Utilizator avatars are old. They work with length-and-height body sliders but not bulk-and-curviness ones. (Well, for Avatar 2.0 those sliders work on the legs but not the torso & arms.) The torso add-ons bump complexity because you're basically wearing one-and-a-half avatars with some parts alpha-ed out. (See my first point.)
  19. Where did you get your QAvimator? I have the experimental Bento one and I don't see this behavior (though I see the legs snap for 1 frame during preview, only when previewing while standing, no matter how I import). I'm not a regular QAvimator user, though. Can't stand the UI. What do you mean by "zeroing frame one"?
  20. There are some oddities. You're running a color directly into the Material Output's Surface input instead of connecting it through the Principled shader. Also, I have no idea how you're getting any results doing a Combined bake with neither lighting option checked. I get either black output or an error when I try that. In any case, I can't get this problem to happen myself. Since you're using an AO node as part of creating the diffuse color, you should be able to do a Diffuse (not Combined) bake and select only Color. (But fix your final output link first.)
  21. I know folks have recommended QAvimator, but I don't think it can be configured to start with a custom skeleton with bones out of their default SL positions. Blender can, and Avastar makes the animation export process straightforward, so I think you'll find that easier. (The hard part will be getting hold of the official base pose rig. There doesn't seem to be a rigged Hallowpup dev kit.) For SL animations in general, and especially for something with a nonstandard skeleton, you want to restrict your animation to rotations only. You can reposition the root hip bone to lift/drop the whole body pretty safely, and you'll need to reposition the tongue bones to make it stick out, but don't keyframe location changes on any other bones if you can help it. This will minimize your animation messing up customizations and deformers.
  22. And to make matters worse, there is also a popular mesh body right now named Classic (or Legacy Classic) from a vendor named MeshBody (or Meshbody or The Mesh Project or TMP -- it gets called a lot of things) and it's not the same thing as the classic/system avatar built into SL that Maitimo is talking about here. Sometimes vendors put one of these logos on clothes they've made to fit system avatars.
  23. Generally you want to join all your meshes into one object but use multiple materials (in Blender) and assign them to individual polys. These become different faces in SL and each will have its own texture control. You still need to UV unwrap like others are describing.
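      As an aside on what per-face texture control buys you once the mesh is in-world, here's a hedged sketch (the face index is just an example value):

        // Each Blender material becomes an SL face index (0, 1, 2, ...),
        // so a script can retexture one face without touching the others.
        default
        {
            touch_end(integer total_number)
            {
                integer face = 2;   // hypothetical face index for this example
                llSetLinkPrimitiveParamsFast(LINK_THIS,
                    [PRIM_TEXTURE, face, TEXTURE_BLANK, <1.0, 1.0, 0.0>, ZERO_VECTOR, 0.0]);
            }
        }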