
Quarrel Kukulcan

Resident
  • Posts

    530
  • Joined

  • Last visited

Everything posted by Quarrel Kukulcan

  1. Are you talking about SL's PBR materials or the classic system? In PBR, the ambient occlusion data is the red channel of the combined AO/Roughness/Metallic image. You can make this into a grayscale shadow map with any image editor that can separate one color from a color image into its own grayscale image. SL has no way to use a standalone shadow map, though -- you'll also have to merge it with the diffuse color texture yourself (in GIMP or Photoshop or whatever), using "multiply" as a layer blending mode instead of the standard way.
     SL doesn't have displacement maps (which move mesh vertices), but I think you mean "bump map". SL doesn't use bump maps but it does use normal maps, and you can convert the first into the second with some editing software or online.
     Big technical note: SL's normal maps use what's called "+X/+Y swizzle" (same as Blender, Maya and Unity; different from 3ds Max, Source and Unreal, which use +X/-Y). The normal map should look like it's lit with aqua light from the top and purple light from the right, and you'll need to use the red and green channel settings in your conversion tool that accomplish this. If you get the polarity wrong, the bumps will cast shadows in unrealistic directions in SL.
     (If you already have a normal map, you're in a bit of a spot. I don't know of a way to combine two normal maps easily.)
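The "multiply" merge is simple per-pixel math. Here's a minimal sketch in plain Python (pixel tuples standing in for a real image library; in practice you'd do this in GIMP/Photoshop or with Pillow, and the function name is just for illustration):

```python
def bake_ao_into_diffuse(orm_pixels, diffuse_pixels):
    """Multiply the AO channel (the red channel of the combined
    AO/Roughness/Metallic map) into the diffuse color -- the same math
    as a 'multiply' layer blend in GIMP or Photoshop."""
    baked = []
    for (ao, _rough, _metal), (r, g, b) in zip(orm_pixels, diffuse_pixels):
        k = ao / 255.0  # AO as a 0..1 shadow factor
        baked.append((round(r * k), round(g * k), round(b * k)))
    return baked

# mid-gray AO (128) darkens a diffuse pixel to about half brightness
print(bake_ao_into_diffuse([(128, 200, 0)], [(255, 100, 50)]))
```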
  2. It could be a lot of things, sadly. Are any vertex groups locked? (Check under the Tool panel in Weight Paint mode.) Under Brush Settings, is your blend mode "Mix"? Does toggling "Front Faces Only" under "Advanced" help? Does picking the square setting under "Falloff" help? Under "Options": is Auto Normalize on? Is Restrict on?
  3. Negative scale implies that, well, you scaled something by a negative amount at some point, to mirror or invert it. First thing I'd check is that you've applied all your object-level scales/rotations/translations (Object mode, select something, Ctrl-A, "All Transforms"). That will make your objects actually have the sizes, angles and positions they look like they have, rather than still carrying their original geometry with unapplied object transforms on top. Unapplied transforms can cause import problems. (They also cause odd interactions with some of Blender's own modifiers, like Bevel.)
  4. If you have modify permissions, you can set the transparency of each differently-textured portion of a prim or mesh object. However, you can't control how many parts it has or where the borders are, so if the mesh object's creator didn't lay parts out the way you need, this approach won't help you and you'll have to fall back on custom-editing the texture and hoping you can make the borders and the alpha settings look good.
  5. The RGB of the diffuse texture is the diffuse color. The A of the diffuse texture can be used as blended opacity, hard opacity (with an arbitrary cutoff), or emission. (Or ignored.)
     The RGB of the normal map is normal vector data (almost, but not quite, identical to the PBR spec; the exact difference is technical, subtle and still beyond me). The A of the normal map is glossiness (i.e. smoothness, or inverted roughness).
     The RGB of the specular map is the specular tint. The A of the specular map is the environment map strength (i.e. how much the fake skybox reflects on it, for a chrome/mirrored look).
     So, yes: roughness should be inverted and added to the normal texture as an alpha channel. Ideally, you may also want to apply a forward or reverse curve correction if one channel is perceptual and the other is linear.
     Metallic doesn't translate directly. For highly metallic parts you'll probably want a combination of some environment reflection plus a specular tint based on the diffuse color, while nonmetallic parts get no environment reflection and a white/gray specular. Expect to do a lot of hand-tuning here.
     If your PBR material has both translucency and emission, you'll have to give up one.
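The roughness-to-gloss step is just a per-pixel inversion. A minimal plain-Python sketch (tuples standing in for real image channels; curve correction, if needed, is omitted):

```python
def roughness_to_gloss(roughness_row):
    """Invert a PBR roughness channel (0..255) into the glossiness SL
    expects: gloss = 255 - roughness, per pixel."""
    return [255 - r for r in roughness_row]

def pack_gloss_into_normal(normal_pixels, roughness_row):
    """Attach inverted roughness as the alpha channel of each
    normal-map pixel, producing the RGBA normal map SL wants."""
    return [(r, g, b, 255 - rough)
            for (r, g, b), rough in zip(normal_pixels, roughness_row)]

# fully rough (255) becomes fully matte (gloss 0), and vice versa
print(roughness_to_gloss([0, 128, 255]))
```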
  6. SL shrinks all images to a maximum of 1024x1024 internally. Your jpeg uploads are getting resampled and interpolated, which is why you're not seeing as much jpeg artifacting. If it looks good, then it looks good, but your results are unusual.
     Pixel resolution absolutely matters, and it's maximized by using larger textures and by intelligently unwrapping your UVs to use as much texture space as possible. But the dpi stat is meaningless in this process and ignored by SL. It's optional hint data meant to indicate how big the image is intended to be when printed on paper. A 1024x1024 texture wrapped around a can in SL will look the same no matter what dpi you say it is (unless your paint program does something odd like add extra, dpi-dependent post-processing steps during export, I guess).
     Smaller files might take less time to upload to SL, but that only happens once. SL shrinks all uploaded images to 1024x1024, max, then compresses them with its own algorithm (unless you pick the "lossless" option, which is only available for smallish images, say, 64x64). That processed image is the only one SL stores, and it's what's sent to other residents when they look at your object in-world and their client renders it. You can't make that stored file smaller by uploading a compressed image initially.
     Unless you mean jpegs are faster for your Photoshop/Blender/etc. programs to load, in which case, sure, but you're still trading quality for a negligible speed increase in a non-speed-critical step.
  7. The short answer is that Fitted Mesh bones (the ones with ALL_CAPS names) don't work in a straightforward fashion with vanilla Blender. There are basically four solutions.
     1. Don't use those bones. The downside is that your mesh will ignore about half of the avatar's body shape sliders -- and that's a dealbreaker for clothing.
     2. Use an SL-specific Blender add-on.
     3. Use the armature in fitted_mesh_270.blend from https://www.avalab.org/fitted_mesh_survival_kit/ (or any other one that has the correct extra information in the Fitted Mesh bones -- most don't), and when you export to .dae, make sure you check the "Keep Bind Info" box in the "Extra" options tab.
     4. Use another modeling program that supports bind poses.
  8. Three things. First: What @OptimoMaximo said. Animations only change where the avatar gets drawn. They don't affect where it's actually standing. Second: If you are exporting .bvh animations from plain Blender and your Blender scene is in meters, you need to specify a scale of 39.37 in Blender's animation export options. That's because SL assumes your translation data is using inches. Your 3m forward jump in Blender will be a 3" forward jump in SL if you don't do this. Third: There is probably a limit on hip translation distance. I believe for Animesh objects, it's 5m. Non-animesh is larger but I don't know for sure what it is.
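The scale factor in the second point is easy to sanity-check. A trivial sketch (plain Python; 39.37 is the "Scale" value to enter in Blender's BVH export options, and the function name is just for illustration):

```python
METERS_TO_INCHES = 39.37  # the scale to enter in Blender's .bvh exporter

def bvh_translation_inches(meters):
    """Convert a hip translation authored in meters into the inch units
    SL assumes .bvh translation data is using."""
    return meters * METERS_TO_INCHES

# a 3 m forward jump must leave Blender as ~118.11 units,
# or SL will read it as a 3-inch jump
print(bvh_translation_inches(3))
```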
  9. Also FYI: it's best to use only rotations in your animations. Using translations interacts poorly with SL's face and body shape slider system. The two prominent bones that translations are OK on are the root bone and the tongue.
  10. What I've picked up from lurking is: When you export to .bvh, you create a file with full, raw translation and/or rotation data for each relevant joint at every frame, with no keyframe info whatsoever. On import, SL analyzes the data and reverse-engineers its own keyframes to come close to that motion, using only linear interpolation and no easing. When you export to .anim, the same thing happens, only it's your exporter doing the reverse-engineering calculations. Depending on your tool, you may have more control over the quality of the process, but the output still won't be your original keyframes and easing curves. Is that a fair description?
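The reverse-engineering step described above can be sketched as a greedy keyframe reduction over dense per-frame samples. This is an illustration of the general technique, not LL's (or any exporter's) actual algorithm:

```python
def reduce_keyframes(samples, tolerance=0.01):
    """Walk dense per-frame samples of one channel and keep a frame as a
    keyframe whenever dropping it would make pure linear interpolation
    from the last kept keyframe drift more than `tolerance`.
    No easing curves survive -- only line segments, as described above."""
    if len(samples) < 3:
        return list(range(len(samples)))
    keys = [0]
    for i in range(1, len(samples) - 1):
        a = keys[-1]
        b = i + 1
        # worst error if every frame between a and b is linearly interpolated
        worst = max(
            abs(samples[j] - (samples[a] +
                (samples[b] - samples[a]) * (j - a) / (b - a)))
            for j in range(a + 1, b)
        )
        if worst > tolerance:
            keys.append(i)
    keys.append(len(samples) - 1)
    return keys

# a perfectly linear ramp needs only its two endpoints...
print(reduce_keyframes([0, 1, 2, 3, 4]))
# ...but a spike forces keyframes around it
print(reduce_keyframes([0, 0, 0, 5, 0, 0]))
```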
  11. You can only have one UV map data structure on an uploaded object. Blender allows multiple UV map data structures on an object, but if you upload that, SL will ignore all but one and mess things up. However, you can have up to 8 materials per object (a.k.a. "faces" in SL), and each can have its own texture, so you can use the full UV area for each material and they won't interfere with each other, even though your unwrapped UVs lie on top of one another. And once the object is in SL, you can not only assign a texture to each face, but you can shrink/slide/rotate that texture so it appears multiple times across the face and in different orientations. For scaling in particular, this lets you repeat a pattern across a large area so you get finer detail. The downside is that the pattern will be obvious if you shrink it too far, and it needs to be seamless so it blends smoothly with itself.
  12. Mis-aligning shouldn't be possible, given that SL rescales and repositions the physics mesh so its bounding box matches the High LOD's.
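That normalization can be sketched in a few lines. This is a hypothetical plain-Python helper illustrating the per-axis bounding-box fit; SL's actual rescaling happens inside the uploader:

```python
def fit_physics_to_visual(points, vis_min, vis_max):
    """Rescale and reposition physics-mesh vertices so their bounding box
    matches the visual (High LOD) bounding box, independently per axis."""
    lo = [min(p[i] for p in points) for i in range(3)]
    hi = [max(p[i] for p in points) for i in range(3)]
    fitted = []
    for p in points:
        q = []
        for i in range(3):
            span = hi[i] - lo[i]
            t = (p[i] - lo[i]) / span if span else 0.5  # degenerate axis: center it
            q.append(vis_min[i] + t * (vis_max[i] - vis_min[i]))
        fitted.append(tuple(q))
    return fitted

# a 2x2x2 physics cube snaps exactly onto a 1x1x1 visual bounding box
print(fit_physics_to_visual([(0, 0, 0), (2, 2, 2)], (0, 0, 0), (1, 1, 1)))
```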
  13. When did .bvh uploads ever allow priority 6?
  14. Something else to consider: all(*) textures uploaded to SL are recompressed internally into a lossy JPEG 2000 format (which is different from JPEG!) regardless of what format you upload them in(**). This compression can introduce blurriness or artifacts in high-detail areas. If you shrink your images using any kind of sharpening algorithm, like Photoshop's "Bicubic Sharper" or GIMP's "LoHalo", you can get a result that looks great in its original form but bad in SL after JPEG 2000 compression is applied. Don't be surprised if the best final SL result comes from shrinking with linear interpolation.
     (*) You might be given the option of uploading without compression if the image is very small.
     (**) "The format you upload in" should always be PNG or TGA, since those use lossless compression.
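To see why sharpening filters interact badly with recompression, compare a plain box filter with an unsharp-mask style kernel on a hard edge. This 1D plain-Python sketch (not any real editor's filter) shows the overshoot halos that JPEG 2000 then mangles:

```python
def box_downsample(signal):
    """Average adjacent pairs: no overshoot, values stay inside the
    original range (the 'linear interpolation' behavior)."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def sharpen(signal, amount=0.5):
    """A simple unsharp-mask style kernel: boosts each sample away from
    its neighbors' average, overshooting at hard edges and creating the
    halos that lossy compression exaggerates."""
    out = []
    for i in range(len(signal)):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, len(signal) - 1)]
        out.append(signal[i] + amount * (2 * signal[i] - left - right))
    return out

edge = [0, 0, 0, 255, 255, 255]
print(box_downsample(edge))  # stays within 0..255
print(sharpen(edge))         # undershoots below 0 and overshoots above 255
```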
  15. You are going to have a rough time getting this to work in SL like it does in your modeling program.
     1. SL animated mesh objects can't have custom armatures. You have to use the standard avatar armature and repurpose existing bones.
     2. You can't import animations as part of the Collada DAE file. You have to export & import them separately, either in .bvh or .anim format. .bvh may require using fussy tricks to get the axis order right. .anim will require a paid third-party add-on because it's a proprietary Linden Lab format that vanilla Blender doesn't support.
     2.a. You'll also have to mark the object as Animesh in-world, install your animation, and write a script to play it. (Animesh objects have a minimum Land Impact of 15, by the way.)
     3. SL animations generally can't scale bones.
     4. SL doesn't support shape keys or shape key animations.
  16. When making your own clothes models, the best approach is to acquire the model or dev kit for the mesh body you're making clothes for. Then you can fit your clothes to it properly, as well as transfer weights from it so your clothes animate identically and respond the same way to shape sliders. If you're making clothes for the default SL avatar, be advised that even rigging to Fitted Mesh bones will not let your clothes conform exactly to body sliders, especially at extreme values. They're a kludge that LL invented to try to wedge support for body shapes into mesh clothes.
  17. Okay, that's my fault. I was under the wrong impression that only Fitted Mesh bones contained bind pose info. Sorry for misleading you.
  18. Turn on the "Include Joint Positions" option when you upload. At least, that made it work for me. You shouldn't have to do this, but there are clearly bones in that armature that aren't where SL expects them.
  19. Yep! That is exactly how it works. Each material in Blender becomes a "face" in SL. SL does not let you select individual arbitrary polygons on a custom mesh. (Caveats: Each object is SL is limited to 8 "faces", so don't use more than 8 materials per object. In fact, try to use as few as you can, since large amounts hurt graphical performance. If you make custom LODs, you may have to ensure each of your materials is used on at least one triangle somewhere in the mesh for all 4 levels, otherwise the upload will fail with most viewers. I think Firestorm is the only viewer without this problem.)
  20. Many people export the textures separately because it is so difficult or impossible to make SL import them from the .DAE. You will pay the same amount of Lindens either way.
  21. BVH exports don't store any priorities or looping data. That's why you have to enter such information on the SL import panel.
  22. Your best bet is to add an alpha layer to your texture that's a grayscale copy of it, then tell SL to use the alpha layer as an emissions mask instead of opacity.
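Making that grayscale alpha is a standard luminance conversion. A minimal sketch in plain Python (pixel tuples in place of a real image editor; Rec. 709 luma weights are one common choice, not the only one):

```python
def add_emission_alpha(pixels):
    """Give each (r, g, b) pixel an alpha equal to its grayscale
    luminance, so bright areas glow when SL is told to treat the
    texture's alpha channel as an emission mask instead of opacity."""
    out = []
    for r, g, b in pixels:
        lum = round(0.2126 * r + 0.7152 * g + 0.0722 * b)  # Rec. 709 weights
        out.append((r, g, b, lum))
    return out

# white pixels get full emission, black pixels get none
print(add_emission_alpha([(255, 255, 255), (0, 0, 0)]))
```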
  23. You can rig in vanilla Blender, but your mesh will deform incorrectly in SL if you rig to Fitted Mesh (ALL_CAPS) bones, unless you use an SL-supporting add-on or you use an armature that has the extra custom properties and export with an extra checkbox checked. There might be dev kits out there that happen to have that special data in their armatures, but I wouldn't put money on them all having it. There is also the option of rigging only to basic and Bento bones, but then your clothes will ignore many body shape sliders and will be bad at matching the wearer's measurements. You can get cleaner weighting that way, but if it's too different from how the body is weighted, your clothes won't flex exactly the same way as the body and parts will clip through when the wearer moves. (This may not be an issue if the body has enough alpha control. YMMV.)