
Quarrel Kukulcan

Everything posted by Quarrel Kukulcan

  1. I'm sorry I confused you. I responded to two different posts in one message. One response was to your post about not seeing a Model upload option. Has that been fixed? The other response was about you wasting 250 Lindens on a broken upload. That's a different problem. To avoid it in the future, I talked about activating a second account on the Aditi preview grid -- a separate Second Life server with only a couple of starter regions that gives you tens of thousands of Lindens for free. You can use those to upload and test the things you've made, to make sure they work right before you spend your own real Lindens uploading them to the main Second Life world. Have you done that?
  2. Furthermore, once you have an object selected, if you check "Select Face", these hotkeys then cycle through the faces within that object.
  3. What computer and operating system are you using? I was just forced to update to the latest Linden viewer 6.5.3.568554 for Windows 10 Home, 64 bit. I have all 5 upload options, including Model (which is mesh). I also can't find any preferences that might turn some of those menu items off. Never upload anything to the main world first. Get your account activated on the ADITI preview grid. You will have to file a support ticket and wait a few days, but you can upload there using a bunch of fake Lindens that they just give you there. You will only have to pay real Lindens once, when you figure out how to make your upload flawless and then repeat the process on the real server. https://wiki.secondlife.com/wiki/Preview_Grid
  4. A UV coordinate at the very edge of a texture does not pull its color info from the pixel at the edge of the texture. That's because SL always blends the nearby pixels based on distance (bilinear filtering) and always treats textures as wrapping around in both directions. (Other rendering engines let you control these behaviors, but SL doesn't.) The left edge of the UV map is NOT the middle of the leftmost pixel, for example -- the absolute left edge lies halfway between the leftmost pixel and the wrapped-around rightmost one. Consider putting this 2x2 texture on a cube face. I've duplicated it in multiple directions so you can see the wraparound joinings, but only the bright 4 squares in the middle are the actual texture. Since the default prims run their UV maps all the way to the edge, you're only going to get pure green from the top left at a UV of about (0.25, 0.75) -- that's the actual center of the square box marking the upper left pixel. When Second Life goes to display that cube in the world, any surface point less than 1/4 of the way in from the face's edge will blend in pixels from the opposite edges, because its UV coordinates are getting closer to the pixel borders. Custom meshes can avoid this by not running their custom UV maps all the way to the edge in the first place, but that's not an option for standard prims. Instead, you need to tweak the texture scale so the middles of the outer pixels creep closer to 0.0 and 1.0 on the U and V axes. (SL scales textures from the center, so you fortunately don't have to tweak offsets too.) Higher-res textures won't be this extreme, of course -- higher resolution only shrinks the width of the problem area along the edge.
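The wraparound blending above can be sketched in a few lines of Python. This is purely illustrative (a hypothetical helper on grayscale values, assuming simple repeat-wrap bilinear filtering like SL uses), not SL's actual sampler:

```python
import math

def sample_wrap_bilinear(tex, u, v):
    """Bilinearly sample a square grayscale texture with repeat-wrap
    addressing on both axes, the way SL samples prim faces."""
    n = len(tex)
    x = u * n - 0.5          # texel centers sit at (i + 0.5) / n
    y = v * n - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def t(ix, iy):
        return tex[iy % n][ix % n]   # wrap around in both directions
    top = t(x0, y0)     * (1 - fx) + t(x0 + 1, y0)     * fx
    bot = t(x0, y0 + 1) * (1 - fx) + t(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

# 2x2 texture with one bright texel in row 0, column 0:
tex = [[1.0, 0.0],
       [0.0, 0.0]]

print(sample_wrap_bilinear(tex, 0.25, 0.25))  # 1.0 -- pure color only at the texel center
print(sample_wrap_bilinear(tex, 0.0, 0.0))    # 0.25 -- the UV corner blends 4 wrapped texels
```

The corner sample shows the effect described above: at UV (0, 0) you get a 4-way average with the wrapped-around pixels, not the corner pixel's own color.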
  5. None of the first 7 matter except possibly gamma. I leave them off.
     • Pixel format is the big one. "Automatic" will -- presumably -- include an alpha channel if your image has one and leave it out if it doesn't. If you don't trust GIMP to get it right, you can force it to "8bpc RGB" or "8bpc RGBA".
     • You almost always want "Save color values from transparent pixels" if you have an alpha channel. If you leave this option off, GIMP will change the RGB values of all completely clear pixels to the same thing (typically 0,0,0). That makes the PNG compress better but will create artifacts in SL at the border between clear and non-clear pixels. It'll also absolutely wreck things if you're using the alpha channel as an emission or glow mask rather than translucency.
     • Compression doesn't matter much. All levels are lossless. Higher numbers produce smaller files but take more processing time -- and the difference in CPU load is tiny on today's computers.
     • Thumbnail is good if you want the icon on your desktop to be identifiable.
     • The rest I leave off. Most of them don't matter anyway, and gamma and color profile, if they do anything, add an extra layer of complexity by trying to correct contrast and color.
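To see why zeroed-out RGB under clear pixels causes border artifacts, here's the blending math sketched in Python (illustration only; a texture filter interpolates the RGBA channels independently, so a clear pixel's hidden color bleeds into the edge):

```python
def lerp_rgba(a, b, t):
    """Blend two straight-alpha RGBA pixels, as a texture filter does
    when sampling between them."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

white_opaque = (1.0, 1.0, 1.0, 1.0)
clear_black  = (0.0, 0.0, 0.0, 0.0)  # clear pixel whose RGB was zeroed on export
clear_white  = (1.0, 1.0, 1.0, 0.0)  # clear pixel that kept its white RGB

print(lerp_rgba(white_opaque, clear_black, 0.5))  # (0.5, 0.5, 0.5, 0.5): gray halo
print(lerp_rgba(white_opaque, clear_white, 0.5))  # (1.0, 1.0, 1.0, 0.5): clean white edge
```

With zeroed RGB, the half-transparent samples along the edge are dragged toward black; with the color values preserved, only the alpha fades.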
  6. Also make sure you're setting the name of the mesh data inside the mesh object in Blender. Apparently you can't just change the name of the object.
  7. Delete every node in the second material first. Then copy all nodes in the first and paste into the second.
  8. You need to use the standard SL armature. Since you're using Blender, try the Bento Female-2017 or Bento Angel-2017 .blend files from https://www.avalab.org/avatar-workbench/. You can also find 2016 .dae files for import from LL at https://wiki.secondlife.com/wiki/Project_Bento_Resources_and_Information but the bones probably won't all visually connect right. You can move the bones but not change their parents or add any. You will need to check the "include joint positions" option when you upload, and possibly "lock scale if joint position defined" too. The second option prevents shape sliders from interacting poorly with your custom armature -- but it does that by disabling some of them. You absolutely want to upload to the Aditi beta grid so you're not spending real Lindens to test for errors! And there are plenty of idiosyncrasies to watch for:
     • No vertex can be weighted to more than 4 bones, and you can't have more than 110 weight groups in one object. Delete (or otherwise get rid of) any bones you aren't going to use. There are more than 110 bones in the full Bento skeleton, so you can't use them all for a single full-body mesh.
     • Think a lot about how you're going to rig the face. There are no system standard facial animations, and conventional market animations for common human Bento heads probably won't be compatible, so you may need to make all your own.
     • Use the ankle bones for the feet and the foot bones for the toes. Don't use the toe bones. mSkull is historically unused too.
     • Use either the main eye bones (if you want the avatar's eyes to behave entirely like system eyes) or the alternate eye bones (if you want the eyes to be under full animation control).
     • Set your physics mesh to a cube or a single triangle. SL doesn't use an avatar's physics mesh, but the upload still requires one, so keep it simple to keep the LI and upload costs low.
     • Probably lots more than I can recall off the top of my head.
  9. It's not because the translation math is hard. (It's easy.) It's because LL would have to rework the entire way SL communicates animations between the central server and every single person logged in. It can be done, but it would be a lot of work -- possibly too much work for the amount of benefit it would provide. The way SL was designed, the server doesn't know what frame any animation is on. It only knows which animations are playing. For example, the server would know that an animesh dragon is playing animation "dragon_hop_2" but it doesn't track the frames. Instead, every single resident who can see that dragon has their own private idea, in their client, of what animation frame it's on, but the clients don't coordinate with each other or with the server, and none of them is "the right one". There is no single, correct, authoritative answer to "what frame of dragon_hop_2 is the dragon playing right this instant?", which means the server can't tell how it should move avatars attached to the dragon's animation bones. LL can't add your feature without redesigning this. There are multiple ways to do it, but they're all hard and could all screw up lots of other things in unforeseen ways.
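The desynchronization described above is easy to sketch in Python (hypothetical numbers; the point is that each client derives the frame purely from its own local start time, so two clients can legitimately disagree):

```python
def current_frame(local_start_time, now, fps=30, n_frames=24):
    """Frame a single client would display, computed only from when
    that client started playing the animation. No server involved."""
    return int((now - local_start_time) * fps) % n_frames

# Two residents whose viewers started "dragon_hop_2" half a second apart:
print(current_frame(0.0, 10.0))  # 12
print(current_frame(0.5, 10.0))  # 21 -- neither answer is "the right one"
```

Since no machine stores an authoritative frame number, there is nothing the server could consult to position an avatar attached to an animation bone.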
  10. llGetAnimation() returns the text name of the one single system animation associated with the avatar's current sitting/walking/running/flying state. It returns no other data and can't tell you about custom animations. llGetAnimationList() returns the UUID keys of all animations currently being played on the avatar.
  11. I just want to stress this. All three file formats have lossless compression. All three can handle an alpha channel (a.k.a. a 32bpp image, or RGBA). There is a persistent myth that you have to save in TGA if your image has an alpha channel because PNG doesn't support it or doesn't save it right. What I think happened is that someone, somewhere, many years ago, used some image software that behaved differently when it exported TGA vs. PNG files, and that morphed into "PNG doesn't work right". Now, it is the case that some editors will zero out the R, G and B values of fully-transparent pixels during export (mostly because it helps with compression). You usually don't want to do this because it creates artifacts around the borders of the non-transparent parts of the texture in SL. This isn't an inherent difference between TGA and PNG, though, and hopefully your editor has an option to turn it off. (I know GIMP does.)
  12. SL textures are 1k x 1k max. Different viewers let you upload up to 2k x 2k (or possibly higher with debug settings), but the image will be downscaled using linear interpolation. It will also be JPEG-2000 compressed on LL's servers no matter what file format you upload.
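A 2x linear downscale just averages each 2x2 block of pixels; here's a toy Python version (illustrative only, not LL's actual resampler) showing how fine detail gets averaged away:

```python
def downscale_half(img):
    """Halve a square grayscale image by averaging each 2x2 block --
    the kind of linear downscaling an oversized upload goes through."""
    n = len(img)
    return [[(img[2*y][2*x] + img[2*y][2*x + 1] +
              img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]) / 4.0
             for x in range(n // 2)]
            for y in range(n // 2)]

img = [[0, 255, 0, 255],
       [255, 0, 255, 0],
       [10, 10, 20, 20],
       [10, 10, 20, 20]]
print(downscale_half(img))  # [[127.5, 127.5], [10.0, 20.0]] -- the checkerboard turns to flat gray
```

This is why uploading at 2k rarely buys you anything: single-pixel detail in the source simply can't survive the trip to 1k.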
  13. How would that work? You weight a ball entirely to "aLeftHand" (or whatever the attachment point bone is named -- I can't seem to find a reference for those) instead of "mLeftHand"?
  14. You need either a higher-priority version of the carrying animation or a lower-priority walk animation (and probably others) in your AO.
  15. For baking in general, this video isn't bad: https://www.youtube.com/watch?v=eYvgFWEiNp8 (Note: It's a little unusual that the presenter uses Blender to turn a bump map into a normal map. I think that's usually done with a direct conversion tool like https://cpetry.github.io/NormalMap-Online/ or a filter in something like GIMP. You also always want to set a normal map's Color Space to Non-Color in its image texture node! Don't leave it on sRGB. Big, big oversight there.) Texture baking is used for lots of things from Blender to SL, sometimes more than one at once:
     • Creating a single material & texture from multiple materials (which is tricky and involves multiple UV maps: https://www.youtube.com/watch?v=9airvjDaVh4)
     • Capturing a snapshot image of a procedurally generated texture.
     • Creating a normal map for your object based on details sculpted into an extra-high-poly version of it (excellent coverage at https://www.youtube.com/watch?v=0r-cGjVKvGw)
     • Burning directionless nook-and-cranny shadows and/or light-source-produced shadows and reflections onto an object's diffuse texture for added photorealism, especially for residents who aren't running SL with full graphics options.
     • Generating those shadow and reflection maps as standalone images for texture artists to use as overlays.
  16. You won't be able to use PRIM_SLICE to trim the top off your mesh object, but you can change what texture appears on that material face (to swap among a set of specific appearances) or, if you've prepared your texture's alpha and the UV map right, to change the texture offset and set the liquid to an arbitrary height.
  17. It doesn't look like there's a way to change only those parameters. You could try using llGetPrimitiveParams() first to read everything you don't want to change, but that will fail to get the texture UUID if the texture isn't in the object's contents and the object's owner doesn't have full permissions.
  18. FWIW here's what I've used to accomplish the second.

      string myAnimation = "ANIMATION_NAME_HERE";

      default
      {
          attach(key id)
          {
              if (llGetAttached() != 0)
                  llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
          }

          run_time_permissions(integer perm)
          {
              if (perm & PERMISSION_TRIGGER_ANIMATION)
                  llStartAnimation(myAnimation);
          }

          changed(integer change)
          {
              if (change & CHANGED_TELEPORT)
                  llStartAnimation(myAnimation); // sometimes necessary
          }
      }
  19. Do you want the object itself to walk behind you, or do you just want to hold it in your arms? Fenix's answer is about the first meaning. The second is a lot simpler.
  20. What viewer did you upload with? LL's viewer doesn't show Bento hand or head animations in the upload preview. You have to do the full upload and then play it in-world (pro tip: use the test server for this!) or use a different viewer that shows correct Bento previews. I just made a test animation that flexed Bento fingers. I used plain Blender with no add-ons and uploaded it as a .bvh file with the Linden Lab viewer. I was still forced to select a hand pose in the upload panel (because you have to pick one). That's the hand pose that showed in the preview panel, and it's the pose my avatar's hands used when I played the uploaded animation without Bento hands on, but when I put Bento hands on and played it, I saw my custom flexing. EDIT: I also tested uploading with Firestorm. Firestorm previews uploaded animations by making your avatar perform them instead of showing them on a thumbnail avatar in the import window. You can see correct previews of Bento animations on your hands and face that way, if you have the correct mesh body parts on.
  21. IIRC custom hand animations only work for residents wearing Bento mesh hands. The system avatar's hands can't be given custom animations because they are not rigged to animation bones. Instead, they are locked into whichever one pre-set pose you specify during upload. (Mesh hands ignore this setting -- if the animation doesn't move the fingers, the hands won't change.) I think the official viewer's animation preview only knows how to show the system hand pose choice and doesn't know how to display the effects of Bento hand animations. On the other hand, Firestorm previews animations by playing them on your avatar, so it will show Bento hand animations if you have Bento hands on (but it won't show you what system hands will do).
  22. Turn on the "detailed logging" option in your import panel's "Log" tab. That will give you more information about what's going wrong. (Firestorm and the official viewer both have this; I'm not sure about others.) It could be a number of things:
     • mesh too dense
     • extremely thin triangles
     • reversed normals
     • stray vertices/edges
     • overlapping duplicate vertices
     • illegal characters in the object name
     Working with quads won't cause it inherently (though a thin rectangle will make thin triangles). This is a separate problem: if you don't start by transferring weights from a reference mesh body, your clothing will bend slightly differently during animations and respond differently to body shape sliders. Wearers will almost certainly have to alpha out big chunks of their bodies (which may or may not be an issue -- I don't know what you're making this for).
  23. You're going to have to paint the weights manually then. Most of the hood will have weight 1.0 to HEAD and nothing else. The neck area will need a smooth transition from that to whatever the exact weights are at the highest part your dev kit reaches.
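One way to get that smooth neck transition is a smoothstep ramp on the HEAD weight. A Python sketch (z_lo and z_hi are hypothetical transition heights you'd pick on your own mesh; the remaining 1 - w would go to the dev kit's weights at that height):

```python
def head_weight(z, z_lo, z_hi):
    """Fraction of a vertex's weight given to the HEAD bone: 0 at z_lo
    (where the dev kit's weights take over), 1 at z_hi (where the hood
    is fully rigid to the head), with a smoothstep ramp in between."""
    t = (z - z_lo) / (z_hi - z_lo)
    t = max(0.0, min(1.0, t))        # clamp to the transition band
    return t * t * (3.0 - 2.0 * t)   # smoothstep: zero slope at both ends

print(head_weight(0.5, 0.0, 1.0))  # 0.5 at the midpoint of the band
print(head_weight(1.2, 0.0, 1.0))  # 1.0 above the band
```

The zero-slope ends are what make the blend look painted rather than creased when the head turns.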