Everything posted by Quarrel Kukulcan

  1. Thanks. That answers the "do they store a sparse set of keyframes" question: they do not. It's fully specified data for each joint for every frame.
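     For anyone curious what that looks like on disk: the MOTION block of a BVH file is just a frame count, one fixed frame time, and one row of channel values per frame covering every declared channel, with no keyframe markers or easing curves anywhere. A made-up excerpt for a tiny rig whose channels add up to six values per frame:

         MOTION
         Frames: 3
         Frame Time: 0.033333
         0.00 43.00 0.00 0.0 0.0 0.0
         0.00 43.05 0.00 0.0 1.5 0.0
         0.00 43.10 0.00 0.0 3.0 0.0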
  2. Can BVH files store a small set of keyframes or do they encode every bone's transforms for every frame? If they can store keyframes, will the resulting animation be smoother in SL at low FPS, and do they support custom easing?
  3. When you want a certain behavior, it's best to work backward from the results you want and craft a simple approximation. You'd be surprised how often simple is better -- more intuitive, more codeable, more tuneable.

     For instance, your complicated approach with repeated tests and an increasing chance after each failure is actually more likely to pass early than late. The behavior will trigger within 15 seconds 55% of the time and have a 95% chance of happening within 25 seconds. Here's a graph of how often the behavior lasts exactly X seconds before switching. I bet you didn't expect that, since it sounds like you wanted longer delays to be more common.

     To weight a result toward the high end, one kind of mathy way is to take the floating point number at the core of the randomizer -- the one that's in the 0.0 to 1.0 range -- and do something to it that increases the values in the middle but leaves the 0 and 1 values alone. One straightforward option: take the square root. That tweak has to be made before multiplying out to cover the range of values you need. In LSL, where we only have llFrand() and it multiplies by the range for us, that means doing this:

     llFrand(max) becomes llPow(llFrand(1.0), 0.5) * max

     If you want a stronger or weaker skew, tweak the 0.5. There's also a non-mathy way to skew results high: generate two random numbers and use the bigger one. Skewing random numbers low means squaring the 0.0-1.0 core value (power 2.0 instead of 0.5) if you're mathing it, or rolling two unskewed numbers and picking the smaller one if you're not.

     If you want to do something with a bell curve -- something more likely to pick a number in the middle of a range than one near the ends -- one simple way is to add N random numbers that each cover 1/Nth the range. Say you want an eyeblink every 4 to 10 seconds. If you use 4 + llFrand(6) you're just as likely to get short or long blinks as average ones. To skew the blinks toward average delays you could use 4 + llFrand(3) + llFrand(3) instead. 4 + llFrand(2) + llFrand(2) + llFrand(2) would make the bias even stronger.

     P.S. The "select from an array" approach is also very simple, flexible and tuneable.
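     A rough LSL sketch of two of the ideas above -- the square-root skew and the two-roll bell curve -- using names I made up (frandSkewHigh is not a built-in, and the helper is shown standalone):

         // Skewed random: a float in [0, range) biased toward the high end.
         // Lower the 0.5 exponent for a stronger skew; use 2.0 to bias low instead.
         float frandSkewHigh(float range)
         {
             return llPow(llFrand(1.0), 0.5) * range;
         }

         default
         {
             state_entry()
             {
                 // Bell-curve blink delay: 4 to 10 seconds, most often near 7.
                 llSetTimerEvent(4.0 + llFrand(3.0) + llFrand(3.0));
             }

             timer()
             {
                 // Trigger the blink animation here, then roll the next delay.
                 llSetTimerEvent(4.0 + llFrand(3.0) + llFrand(3.0));
             }
         }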
  4. Max image size in SL is 1024 x 1024 (not 512), so that's not your problem. Every image you upload gets run through JPEG2000 compression. Small images sometimes give you a checkbox to refuse that compression but I'm not sure it works. But it also looks like something else is happening here: your HUD object isn't big enough to show your image at an exact 1:1 pixel scale. Say your HUD panel image is 384 pixels across in your source PNG, but you put it on a HUD prim that only covers 369 pixels of screen in your viewer. That will also cause resampling and loss of quality.
  5. The wiki advises using touch_end() instead of touch_start() to do click-based state changes.
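     A minimal sketch of that pattern, with a hypothetical second state toggled per click; doing the switch in touch_end means it happens only after the click is fully released:

         default
         {
             touch_end(integer num_detected)
             {
                 state active;
             }
         }

         state active
         {
             touch_end(integer num_detected)
             {
                 state default;
             }
         }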
  6. "Nearest neighbor" and "No interpolation" are the same thing.
  7. Well, for one, you're creating a variable "name" that redefines one of listen()'s parameters. I don't know if that's safe. Actually, (a) always happens if the message is correct. (b) sometimes also happens.
  8. It's not used when uploading. (Okay, it is used when uploading if the image has a dimension larger than 1,024 or that isn't a power of 2, but that's a separate issue.) It's used when displaying in-world.

     The points that SL renders on your monitor might need to be pulled from anywhere, mathematically, in the face texture. Imagine that your texture pixels are NOT full squares of color, but instead are infinitely small pinpoints that only define a color at their precise locations. For every spot that doesn't line up perfectly with those, your renderer will have to use some method of figuring out what color to use. SL's renderer doesn't have the option of using the closest one like Blender does. It's locked into "average the closest ones based on linear distance".

     What color does SL need to render the spot inside the purple circle on this cube face if that face has a 2x2 pixel texture? Well, since SL is gonna linearly interpolate, that won't be white. That circle is about 30% of the way from the top pixel, so it will be about 70% of the top pixel's color + 30% of the bottom pixel's. The side effect of huge pixels is that basically every point on the texture is between pixels of different colors.

     If we use a higher resolution texture, we'll get something like this. Now the exact same spot is halfway between a white pixel and another white pixel. SL will take 50% of white and add it to 50% of white to get white.
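     That 70/30 blend is plain linear interpolation. A tiny sketch of the arithmetic in LSL, with placeholder colors since the original picture isn't reproduced here:

         default
         {
             state_entry()
             {
                 vector top = <1.0, 1.0, 1.0>;     // top pixel: white (assumed)
                 vector bottom = <0.4, 0.0, 0.6>;  // bottom pixel: some other color (assumed)
                 float t = 0.3;                    // 30% of the way toward the bottom pixel
                 vector blended = top * (1.0 - t) + bottom * t;  // 70% top + 30% bottom
                 llOwnerSay((string)blended);
             }
         }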
  9. It's because SL displays textures using linear interpolation between the closest pixels. In general, you want to blend between pixels rather than displaying every point as exactly the pixel it falls within, since the latter makes even full-res textures look slightly more block-dotted. What you're seeing is the drawback of using that method on unusually low-res texture images. Unfortunately, SL doesn't give you the option of setting different interpolation methods for different materials like Blender does.
  10. Of course you're ready to try. This takes a while to master, and the second best time to start is now. (The best time is pretty much always "years ago" and not an option.)

      Don't worry too much about not doing the later Blender Guru tutorials yet. Anything dealing with materials or lighting is less relevant in SL because the rendering engine is different.

      A very (very) general overview of what lies ahead of you looks something like:

      A1. Learn how to make static things in a 3D modeling program.
      A2. Learn how to make rigged things in a 3D modeling program.
      B. Learn how to properly make things for one particular realtime 3D game engine.

      That is not a strict order. You'll do a lot of bouncing around. And definitely get yourself a login on the preview grid: http://wiki.secondlife.com/wiki/Aditi
  11. When it comes to the original system AV, some body sliders work by altering basic animation bones while others blend between custom-made, localized deforms built into the system AV mesh.

      When LL first added rigged mesh uploads in 2010, bodies and clothes were automatically affected by the first kind of slider (because moving a bone moves everything rigged to it) but not by the second (because uploaded meshes don't support per-vertex morphing, and even if they did, SL doesn't communicate slider positions directly, plus morphs don't easily transfer between different arbitrary meshes). So a couple of years later, LL added new bones that moved and scaled in ways that imitated the non-bone deforms as closely as possible. (That's Fitted Mesh.) If you rig to these new bones, you pick up the missing body slider influences.

      There are two problems, though. One: it's more and harder work to rig to a combination of animation bones and Fitted Mesh bones. Two: due to how LL implemented this feature internally, you can't rig to these bones in straightforward fashion in some popular 3D modeling tools without paid add-ons.
  12. Something else to be aware of: every time you select one of the built-in avatars to wear, SL creates an entire brand-new set of objects & skins and puts them on you. Those will build up in your inventory. It's good practice to delete the ones you decide against and make an Outfit of any you like. (That'll let you tweak it, too, and not lose the changes.)
  13. Just go through the upload process but don't click the final "Upload" button. The "Calculate weights & fee" button will tell you the LI. Also this. It's a much more thorough test (though that's a separate issue).
  14. Flexi-prims and transparency blending are the two things that drive complexity up the most. A lot of hair uses both, so keep an eye out for ones that don't. Alpha blending tends to be used in a lot of mesh bodies & clothing, actually. You have to watch for those factors too if you're aiming for low Rendering Complexity. It's not just vertex count.

      The Utilizator avatars are old. They work with length-and-height body sliders but not bulk-and-curviness ones. (Well, for Avatar 2.0 those sliders work on the legs but not the torso & arms.) The torso add-ons bump complexity because you're basically wearing one-and-a-half avatars with some parts alpha-ed out. (See my first point.)
  15. Where did you get your QAvimator? I have the experimental Bento one and I don't see this behavior (though I see the legs snap for 1 frame during preview, only when previewing while standing, no matter how I import). I'm not a regular QAvimator user, though. Can't stand the UI. What do you mean by "zeroing frame one"?
  16. There are some oddities. You run a color directly into the Material Output node's Surface input instead of using the Principled shader's output. Also, I have no idea how you're getting any results doing a combined bake with neither lighting option checked. I get either black output or an error when I try that. In any case, I can't get this problem to happen myself. Since you're using an AO node as part of creating the diffuse color, you should be able to do a diffuse (not combined) bake and select only Color. (But fix your final output link first.)
  17. I know folks have recommended QAvimator, but I don't think it can be configured to start with a custom skeleton with bones out of their default SL positions. Blender can, and Avastar makes the animation export process straightforward, so I think you'll find that easier. (The hard part will be getting hold of the official base pose rig. There doesn't seem to be a rigged Hallowpup dev kit.) For SL animations in general, and especially for something with a nonstandard skeleton, you want to restrict your animation to rotations only. You can reposition the root hip bone to lift/drop the whole body pretty safely, and you'll need to reposition the tongue bones to make it stick out, but don't keyframe location changes on any other bones if you can help it. This will minimize your animation messing up customizations and deformers.
  18. And to make matters worse, there is also a popular mesh body right now named Classic (or Legacy Classic) from a vendor named MeshBody (or Meshbody or The Mesh Project or TMP -- it gets called a lot of things) and it's not the same thing as the classic/system avatar built into SL that Maitimo is talking about here. Sometimes vendors put one of these logos on clothes they've made to fit system avatars.
  19. Generally you want to join all your meshes into one object but use multiple materials (in Blender) and assign them to individual polys. These become different faces in SL and each will have its own texture control. You still need to UV unwrap like others are describing.
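      As a rough illustration of the payoff, each of those Blender materials becomes a numbered SL face you can target separately from a script once uploaded. The face numbers below are assumptions; check in-world which material got which index:

          default
          {
              state_entry()
              {
                  llSetTexture(TEXTURE_BLANK, 0);  // retexture only face 0
                  llSetColor(<1.0, 0.0, 0.0>, 1);  // tint only face 1 red
              }
          }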
  20. SL will rescale and reposition your physics model so it has the same bounding box as your highest-level LOD. (It does that to all the lower LODs too, when you make your own.) There is no way to stop this. It looks like your snowman is multiple objects, which will import as a linkset with one parent mesh/prim and the rest all direct children of it. Your physics frame is getting rescaled to the bottom snowball since that's your parent. You probably want to join the whole body into one object and leave the arms separate.
  21. I guess my first question is, why are you baking an animation? Are you merging multiple separate ones? Are you trying to generate an explicit keyframe on every frame instead of relying on computed easing? Did you check the baked animation to make sure it has no keyframing on any bones but the wings? SL's importer should ignore all bones that have no keyframes (or that do but never move at all).
  22. An animesh object is a mesh object that animates itself by running its own animations on its own bones (as opposed to the older, clunkier technique of assembling prims and running a script that constantly rotates and slides parts of itself). It can do this either while attached to you or while rezzed on the ground. The animations it plays on itself don't affect your avatar. Now, it might also animate you, but that's because any object can animate you -- mesh or prims, attached or not.
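      A minimal sketch of the self-animating part, assuming the object was uploaded/flagged as Animated Mesh and contains an animation named "walk_cycle" in its inventory (the name is mine):

          default
          {
              state_entry()
              {
                  // Plays on the object's own skeleton; the wearer's avatar is unaffected.
                  llStartObjectAnimation("walk_cycle");
              }
          }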
  23. Key point: Second Life does not have a master idea of "what pose your bones are really in," and animations are not synchronized between different residents' screens. As far as the server is concerned, animations are phantom and don't actually move anything.

      If I start dancing Tango37, all the server knows is that my avatar is executing Tango37. It doesn't have an authoritative, "true" idea of what frame of that animation I'm on or where any of my bones have moved to. Instead, every single resident who can see me (including me myself) generates the animation's effects individually within their own viewer. Every resident's viewer does this separately, just for them, and none of them are the "true" version. When someone new teleports in, the server tells their viewer I'm dancing Tango37 and that's all. It has no other information to give. So that newcomer's viewer has no choice but to display my avatar starting that animation at frame 1 on their screen, while everyone else sees me as farther along.

      Complicating that even more is the fact that avatars can be impostered (rendered at a lower LOD and/or with sporadic animations) or jellydolled if their rendering complexity is too high. That also ruins the idea of forcibly synchronized animations on everyone's screen. Plus some viewers support making all animations run faster or slower, or overriding them with custom poses.

      Second Life was originally coded this way because it was a much more feasible way of handling data loads on the slow computers and internet of 2003. The more content residents have created over the years, the more disruptive it has become to rewrite everything so the server does have a master idea of my "true" animated state and constantly feeds it to everyone in the region to keep things synchronized. And that's the change that'd have to be made to give you the feature you want. It's a major restructuring and it would cause immense side effects, so while it's possible...