Everything posted by Quarrel Kukulcan

  1. You can't give custom colors to the emission mask. The mask only controls how bright to make the diffuse pixel at that location. Black/0/0% means display at room light, white/255/100% means display as if it were under maximum lighting (same as Full Bright, just pixel-specific), and anything else is proportionally between those. You can get closer to what you want by using glow instead. Glow is also affected by the alpha channel (even if you set the texture's display to No Alpha) and the owner can change the glow color. It'll be a little fuzzier and raise the object's render complexity.
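As a rough sketch of the scripting side (the face number and glow value are my own illustrative choices, not anything from the thread), glow and full bright can be set per face like this:

```lsl
// Sketch: 5% glow plus full bright on face 0 of this prim.
// Glow runs 0.0 - 1.0 and small values go a long way.
default
{
    state_entry()
    {
        llSetLinkPrimitiveParamsFast(LINK_THIS, [
            PRIM_GLOW, 0, 0.05,       // face 0: glow intensity
            PRIM_FULLBRIGHT, 0, TRUE  // face 0: ignore scene lighting
        ]);
    }
}
```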
  2. It's nothing revolutionary. Make a script that does the following 2 to 5 times per second:
       • get a list of all animations playing right this moment
       • if the basic smile expression is in that list, play a Bento smile
       • if the basic cry expression is in there, play a Bento cry
       • if the basic wink expression is in there, play a Bento wink
       • etc.
     The whole list of old expressions is at http://wiki.secondlife.com/wiki/Internal_Animations#Facial_expressions. Just look for "Facial expression" in the last column. (Even though system avatar expressions are done with morphing and not bone animations, SL lets scripts start or detect them the same way. That's the only reason this works.) If you want to change a classic avatar's expression, you have to have a script call a system face animation (in addition to starting yours). We can't create animations that alter classic AV faces, and classic facial expressions don't alter Bento heads. Run both and you'll be fine.
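The loop above could look something like this in LSL. It's only a sketch: the expression UUID is a placeholder you'd copy from the Internal Animations wiki page, and the Bento animation name assumes an animation by that name sits in the attachment's inventory.

```lsl
// Sketch of a "face AO": poll the wearer's active animations a few times
// a second and mirror classic expressions with Bento equivalents.
key EXPRESS_SMILE  = "00000000-0000-0000-0000-000000000000"; // placeholder; real UUID is on the wiki
string BENTO_SMILE = "bento_smile";  // assumed animation in this object's inventory

default
{
    attach(key id)
    {
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
    }
    run_time_permissions(integer perms)
    {
        if (perms & PERMISSION_TRIGGER_ANIMATION)
            llSetTimerEvent(0.3); // roughly 3 checks per second
    }
    timer()
    {
        // llGetAnimationList returns the UUIDs of everything playing right now
        list playing = llGetAnimationList(llGetOwner());
        if (llListFindList(playing, [EXPRESS_SMILE]) != -1)
            llStartAnimation(BENTO_SMILE);
        else
            llStopAnimation(BENTO_SMILE);
        // ...repeat the same test for cry, wink, and the other expressions.
    }
}
```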
  3. Try deleting the finger bones from your armature. Heck, maybe even delete every bone except for the attach point you need plus its parent chain back to the hip.
  4. I'm not sure that's broken so much as showing all the bones defined in the Blender armature, including attachment point bones and IK targets (which I don't think SL even uses). Blender lets you assign bones to different visibility layers and only display some of them.
  5. If you're triggering an expression via a gesture, you can edit the gesture and add an equivalent Bento facial animation. There is no way to apply the game's existing old-school expression morphs to a rigged mesh. First, I don't think we have easy access to the morphs themselves. Second, shape morphs are hard to translate to different meshes because they're so itemized and specific to the original object's vertex count and layout. Third, skeleton-based animations are the only method Residents have of animating mesh objects. So "Make a similar-looking Bento animation from scratch" ends up the only practical course of action. Anything else will be so complex that an automated algorithm will screw it up and fixing it by hand will be at least as much work anyway. P.S. I did a little experimenting and it's also possible to run a script that constantly checks for old-style expressions. This could be hooked up to animation calls... basically an AO for your face. I imagine it's pretty laggy and might take some work to avoid sync problems.
  6. You can turn on a feature called Backface Culling in Blender to make it operate like SL and only draw the front sides of your polygons. It's useful for predicting how things will look in-world. In the current versions (2.8 / 2.9) it's in the viewport shading options.
  7. The compression used by TGA and PNG doesn't degrade image quality. It only affects file size. You won't get better images by turning it off. (Saving as JPEG will degrade your image, no matter how high you set the quality, so use PNG or TGA instead.)
  8. Not one that works with BoM. If you want more pixels in your tattoo, you either need a texture bigger than 1k x 1k (which you can't get in SL right now) or a separate object with a different UV mapping that uses more of the texture space.
  9. Textures get compressed on SL's servers and sent to people's video cards at 8 bits per channel AFAIK. Working in 16 or 32 bits per channel probably won't gain you anything.
  10. Does your animation have a keyframe for every bone in the spine? In your first video, it looks like the avatar is bending at the stomach. The arms are stiff like they should be.
  11. Once you've created a normal map, you won't see its effect on your low-poly model unless you use it as part of the low-poly model's display parameters/material/whatever your software calls it (or, in SL, upload the normal map texture and assign it to your low-poly mesh). (Also, SL won't show it unless you have Advanced Lighting on, which you probably do unless you're on a slow laptop -- I'm just covering all the bases here.) It helps to do something that seems brainlessly simple when trying a new technique. Here, I'd start from an empty file, make a rectangle, sculpt a gouge into it, then turn that into a normal map on a single low-poly quad. If you get that working you know you've got the basics and can handle it for clothes.
  12. It can, depending on the exact nature of the details you're trying to maintain. Especially since SL stores textures using a lossy compression algorithm too, which might not play nice with the slightly-highlighted edges produced by Photoshop's bicubic-sharpen filter.
  13. I guess Illustrator could have an automatic "crop to content during export" feature that needs to be found and turned off. I realize I've been assuming the OP trimmed the texture to just the car on purpose. I apologize.
  14. Okay! There's the misalignment problem. The car was UV unwrapped to the bottom regions of a square texture, but the OP appears to be uploading a rectangular one trimmed to just the car area. Look where the markings on the red-blocked texture end up once the coordinates align. When you make a texture for an object, you have to match the proportions it was originally UV unwrapped onto. (You CAN change the size, but not the aspect ratio.)
  15. PNG files don't really carry DPI information (there's an optional physical-size field, but it doesn't change the pixels and most software ignores it). They don't know inches -- they only store the dots. And 512 dots is 512 dots is 512 dots, regardless of whether those dots are intended to be displayed at 96 DPI on a monitor or printed at 300 DPI on paper.

      Notice something, though: those 512 dots cover 5.33" on the screen but only 1.71" on paper. If you want to print enough dots on paper at 300 DPI to cover the same 5.33" distance they take on your screen, you need 1,600 dots. And that's what a vector-based drawing program like Illustrator or Inkscape (or even Photoshop, if it has non-rasterized text or vector objects) does when you tell it to export at higher DPI: it redraws your image internally, from scratch, at a higher base resolution than your screen shows, then exports that.

      That is EXACTLY what you want to happen if you're creating images to be printed at 300 or 600 or 1,200 DPI. You get all the extra dots you need to cover the same real-world distance on your flyer or backdrop or poster as on your monitor, and you get them by redrawing all your text and other objects at a finer detail level in the first place (which looks good!) instead of by taking the drawn-on-screen image and upscaling it into a blocky, low-res mess (which looks bad).

      But you don't want that for 3D texture work. If you make a 512-dot image because you want a 512-dot image, but you export at 300 DPI, you'll get a 1,600-dot image (which will be a lot crisper because there are so many more dots), which Second Life will shrink back down to 1,024 dots (which will STILL look crisper, just not as much). If you export it at 600 DPI you'll get a 3,200-dot image that's too big to upload.
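The arithmetic above, as a throwaway LSL snippet (the numbers are just the 96-DPI-screen / 300-DPI-print example; nothing here is SL-specific):

```lsl
// Dots vs. inches: the same 512 dots cover different physical distances.
default
{
    state_entry()
    {
        float screen_in = 512.0 / 96.0;       // 512 dots at 96 DPI = ~5.33 inches
        float print_in  = 512.0 / 300.0;      // 512 dots at 300 DPI = ~1.71 inches
        float needed    = 512.0 * 300.0 / 96.0; // dots to span 5.33" at 300 DPI = 1600
        llOwnerSay((string)screen_in + "in on screen, " + (string)print_in
            + "in on paper, " + (string)needed + " dots needed for print");
    }
}
```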
  16. Technically false . . . but practically true. Vanilla Blender can make rigged mesh. (And it's fine to practice with it that way!) But it can't easily make rigged mesh that responds to the full set of body shape sliders, which means your creations won't match a lot of wearers' physiques. And that's often a show-stopper.
  17. 1. Pretty basic:
      2. A little more specific:
      3. A video on how LoDs work from the viewer's end, with a little info on producing them from the creator:
      4. A more detailed dive into creating manual LoDs:

      Every object (prim or mesh or linkset of 2+ combined prims and/or meshes) has to attach to one avatar skeleton bone (or to the HUD). If it's anything but a rigged mesh, the whole object will be totally stiff and move in lockstep with that one single bone and no others. A rigged mesh can have different parts follow different bones, as well as have gradual transition regions -- like, say, the elbow of a shirt being influenced by a blended combination of the upper arm and lower arm so it flexes around the curve. I wouldn't worry about it.
  18. Someone just had a similar thread in the Mesh forum (which is actually a little more appropriate): All mesh objects have 4 visual Levels of Detail and 1 physics collision frame (and they're not the same thing, despite the fact that some quickie tutorials say to use the lowest LOD as the physics frame). It doesn't matter whether the object is rigged for animation and it doesn't matter whether it's attached or free-standing. You can't prevent an object from having all these versions by simply not making them, either, because SL will fill in any missing ones itself -- usually poorly.
  19. Is this also true if you instruct the uploader to make edges smooth/sharp based on edge angle?
  20. How do you get 24? Unless you're not sharing verts & edges across adjacent sides, but you are between the two triangles within each side. I'm guessing you're trying to avoid smoothed inter-face edges?
  21. I don't think I could make a cube with a LI that low.
  22. From what I've seen (reading here and personal experience), a mesh can fail to be recognized as rigged due to a misspelled bone name or too many different bones, overall, influencing vertices. The limit is 110 -- which really shouldn't matter for a necklace, but if your weight transfer gave you a lot of 0s... Things that I've discovered do not cause this failure are having vertices that have no weighting or more than 4 nonzero weights. The mesh won't animate properly, of course, but these weighting errors won't disable the "include skin weights" option.
  23. You can't animate the bump, specular, or diffuse texture individually, true, but I know that for smooth scrolling at least, all three of them scroll together. (They're also locked into a base state of no scaling, offsetting or rotation.)
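For reference, a minimal sketch of that smooth scrolling (the 0.25 rate and ALL_SIDES are arbitrary choices of mine):

```lsl
// Sketch: smooth-scroll the textures on every face of this prim.
// SMOOTH slides the texture instead of flipping frames; the bump and
// specular maps scroll along with the diffuse, as noted above.
default
{
    state_entry()
    {
        // 1x1 frame grid, scroll along U at 0.25 repeats per second
        llSetTextureAnim(ANIM_ON | SMOOTH | LOOP, ALL_SIDES, 1, 1, 0.0, 1.0, 0.25);
    }
}
```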