animats

Posts posted by animats

  1. On 5/17/2020 at 12:12 AM, rasterscan said:

    I wish there were more mesh all in one avatars.

    There are. They're usually sold as animesh, and they're usually from games. All the clothing is built in, all textures are pre-baked, there's no customization, and the LI is very low. I use some of those for NPCs. They're not usually used as avatars, although they could be; they have the same bone structure.


    Background character NPCs. Not customizable at all, no facial expressions. I keep them moving around, so you don't get too close a look. The closest one is 22 LI as animesh.


    The next step up. A basic bento animesh character in various outfits.

    Current mesh avatars are a feature added on top of classic avatars. These animesh have the essentials without the legacy baggage. There's a full bento skeleton, and a basic skin layer. Mesh or texture clothing can be added. These are, from left to right, 37 LI, 48 LI, and 33 LI. (The long dress needs some mesh reduction; it's adding more LI than such a simple dress should.)

    Animesh lack the user-convenience features of avatars. Clothing has to be linked to the model or baked onto the skin texture outside of SL. There's no "wear" user interface. The hoodie, dress, and shoes are rigged mesh. The jeans, sweat pants, and T-shirt were baked onto the skin texture in Photoshop. There's no "baking" for animesh yet. Nor do they have the size adjustments of avatars, either the classic ones like height or the fitmesh ones like width. The only layer is the skin of the body; there's no "dress" or "coat" layer.

    If SL were to have simpler avatars, this is where to start. Once you add a mesh avatar, you've hidden all the classic avatar layers, but you're still carrying them around. Animesh is basically the mesh avatar layer without the classic avatar underneath.

     

  2. I took a look at the viewer code that does this, in the Firestorm sources.

    Radius for level-of-detail purposes is calculated in llvovolume.cpp, in the function LLVOVolume::getBinRadius. There's a special case for rigged mesh, with comments indicating the value is 2x too big for some historical reason.

    // Volume in a rigged mesh attached to a regular avatar.
    // Note this isn't really a radius, so distance calcs are off by factor of 2
    //radius = avatar->getBinRadius();
    // SL-937: add dynamic box handling for rigged mesh on regular avatars.

    The calculation for a rigged mesh is to take the bounding box of the entire avatar and get its diagonal. This is twice the radius of an enclosing sphere, and comments indicate that twice is wanted.
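    In rough outline, the effect is something like the sketch below. This is not the actual viewer code; the struct and function names are made up for illustration, but the arithmetic matches what's described above: the full diagonal of the avatar's bounding box, which is the diameter of an enclosing sphere, ends up used as the "radius".

    // Minimal illustrative sketch, not actual viewer code. The real logic is in
    // LLVOVolume::getBinRadius (llvovolume.cpp); this struct and function are
    // invented names for illustration only.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // "Radius" as currently computed for a rigged mesh: the full diagonal of
    // the avatar's axis-aligned bounding box, which is the *diameter* of the
    // enclosing sphere, not its radius.
    float riggedMeshBinRadius(const Vec3& avatarBoxMin, const Vec3& avatarBoxMax)
    {
        float dx = avatarBoxMax.x - avatarBoxMin.x;
        float dy = avatarBoxMax.y - avatarBoxMin.y;
        float dz = avatarBoxMax.z - avatarBoxMin.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);  // ~2x a true radius
    }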

    (Object radius is displayed in the edit dialog in llpanelobject.cpp, in the function LLPanelObject::activateMeshFields. This, interestingly, always displays the size of the avatar for any rigged attachment. That's separate from the size actually used for LOD calculations. This may be code that should have been changed when SL-937 went in, but wasn't. So don't fully trust what you see in the edit box for rigged attachments.)

    So, yes, if it's rigged on an avatar, its "radius" is the diameter of the sphere that encloses the entire avatar. Which seems to be why small attachments don't drop to a lower LOD at distance.

  3. 5 hours ago, ChinRey said:

    That's still an interesting and valuable test since it explains why these bugs weren't noticed earlier.

    Yes, some descriptions of this problem say "rigged mesh", but only fitmesh gets fully resized in all directions.

    If we wanted to compute the actual bounding radius for an attachment, how could that be done? Sphere that encloses all the bones to which the mesh is rigged? Don't consider joint rotations, just add up the relevant bone lengths in a tree fashion. That gets the worst case radius with all the limbs splayed out.


    Second Life bento skeleton. Humanoids use the green bones. Yellow are for wings, red for a tail, blue for a quadruped.

    From the list of bones rigged, and their lengths, it's easy to compute a maximum radius. Shoes get a small radius, boots more because they add the lower leg, and pants still more. Bracelets and necklaces are usually tied to single small bones, so they get a small radius. Which is what we want to happen. How hard would this be to do in the viewer? Can a third-party viewer do this for test purposes?
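    As a rough sketch of that idea, assuming a hypothetical bone-tree structure (the viewer's real skeleton data structures are different), the worst-case reach could be computed like this:

    // Hedged sketch of the idea above. The Bone layout and the rigged-bone set
    // are hypothetical, not the viewer's actual data structures.
    #include <algorithm>
    #include <string>
    #include <unordered_set>
    #include <vector>

    struct Bone {
        std::string name;
        float length = 0.0f;          // joint-to-joint length of this bone
        std::vector<Bone> children;
    };

    // Longest chain of bone lengths from the root out to any bone the mesh is
    // rigged to, ignoring joint rotations (all limbs splayed straight out).
    float worstCaseRadius(const Bone& bone,
                          const std::unordered_set<std::string>& riggedBones,
                          float lengthFromRoot = 0.0f)
    {
        float tip = lengthFromRoot + bone.length;
        float best = riggedBones.count(bone.name) ? tip : 0.0f;
        for (const Bone& child : bone.children) {
            best = std::max(best, worstCaseRadius(child, riggedBones, tip));
        }
        return best;
    }

    A shoe rigged only to the foot bones would get a small result; pants, rigged up through the hips, a larger one, which is the behavior wanted above.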

     

  4. 10 hours ago, ChinRey said:

    I think you're confusing fitted mesh with old style rigged mesh, Animats, although it's possible that you'd get the same effect with fitted mesh if it has a very simple rigging.

    Right. I was testing with an old-style rigged mesh item.

  5. On 5/15/2020 at 4:46 PM, animats said:

    After reading the above, need to try a test case. More later.

    OK, now I get it. Rigged objects do stretch if the skeleton is resized, but only in the direction between the joints. You can demonstrate this (preferably in private) by putting on a close-fitting mesh jacket, going to "edit appearance", and turning the "Fat" slider all the way up. The garment will not enlarge in width. That's what you'd expect from the way rigging works.

    So making tiny garments and expecting them to fit large avatars won't work. Thus, using the un-worn size of mesh objects as their size for LOD purposes doesn't open a way to cheat the LOD calculation.

    It looks like the current value is double the radius of the sphere that encloses the entire avatar, for all rigged objects. That's just wrong.

    I'd like to see a debug switch in a viewer that lets you switch to a reasonable LOD radius for rigged objects, so we could see what it looked like.

    • Haha 1
  6. 27 minutes ago, Kyrah Abattoir said:

    A broken auto lod generation within the SL uploader.

    I looked into this last year. Didn't find an off-the-shelf solution.

    • The really good mesh reducers, like InstaLOD, InstantUV, and Simplygon, are proprietary and expensive. So you can't just put them into LL's viewer or Firestorm. (You can use Simplygon for free if you're willing to give all your content to Microsoft.) Most of them are best for reducing 10 million triangles to 10,000 triangles, anyway. Not for 10,000 to 10. That's really hard.
    • Quadric mesh optimization, which tries to minimize the volume between the original and reduced meshes, is popular, and there are academic implementations; the core error metric is sketched after this list. There's one in Unity, and one in Blender. They're brittle. If the mesh isn't "watertight", they tend to crash. That approach is volumetric; you have to be able to decide whether a point is inside or outside the object. SL meshes often are not that clean. This approach is more useful in an interactive system where you can fix the mesh.
    • Optimizing fabric is tough. A piece of fabric is a very thin box. If it has a wrinkle, the wrinkle is a depression in one face and a bulge in the other. A mesh optimizer will not be able to flatten that wrinkle properly. If it tries to flatten the bulge, the mesh goes through itself, so the mesh optimizer can't do that. If it tries to flatten the depression, it introduces a big volume error as it bloats a thin sheet. If you run the quadric mesh optimizer in Blender on a fabric object, it pulls in the outer edges of the fabric and makes the sheet smaller, because that's the change which causes the least error volume. Optimizing clothing meshes needs to be done at a level that knows it is dealing with thin sheets. Probably Marvelous Designer.
    • Algorithms which have lots of tuning parameters can do a good job. There are thousands of low-wage employees optimizing game assets with such tools. That's semi-interactive - you pin the edges you don't want moved, mesh reduce, and use tools to indicate which areas can take more reduction. Keep face detail, give up foot detail, that kind of thing.
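    For reference, the core of the quadric approach is small. The sketch below shows the error metric itself (Garland and Heckbert style) and leaves out the edge-collapse bookkeeping, boundary handling, and all the robustness work the bullet points above are really about.

    // Rough sketch of the quadric error metric used by these reducers.
    // Real implementations add edge-collapse queues, boundary constraints,
    // and the robustness handling discussed above.
    #include <array>

    using Quadric = std::array<std::array<double, 4>, 4>;  // symmetric 4x4; zero-init with Quadric{}

    // Accumulate the plane a*x + b*y + c*z + d = 0 (unit normal) of an incident
    // face into a vertex's quadric: Q += p * p^T with p = (a, b, c, d).
    void addPlane(Quadric& q, double a, double b, double c, double d)
    {
        const double p[4] = {a, b, c, d};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                q[i][j] += p[i] * p[j];
    }

    // Squared-distance error of placing a vertex at (x, y, z): v^T Q v with
    // v = (x, y, z, 1). Edge collapses are ordered by this cost, cheapest first.
    double quadricError(const Quadric& q, double x, double y, double z)
    {
        const double v[4] = {x, y, z, 1.0};
        double err = 0.0;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                err += v[i] * q[i][j] * v[j];
        return err;
    }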

    Maybe someone more into mesh geometry can do better. I just went looking for open source solutions. Didn't find anything you can just drop in.

    I'd like to see automated impostor generation; that part really can be done without manual tuning. I used to have a little "impostor garden" out back of my workshop in Vallone to demo this as a proof of concept. Many people on here saw it.

  7. Just now, Wulfie Reanimator said:

    If you scale down a rigged mesh to a miniscule size, or a giant 64sqm, it'll always appear its proper size once you attach it. Only your shape and the length of individual bones can change how rigged mesh looks on you.

    Not sure that's right. Thought experiment: you wrap a bracelet around a wrist, and rig it to only the wrist. Will changing the size of the avatar change the size of the bracelet? I don't think so. Something attached to two joints with weights will be stretched to fit, but only in the direction between the joints. I think.

    I've done this for a rigged mesh waitress tray (the only way to attach things to animesh), and it kept the same dimensions regardless of the avatar.

  8. 2 hours ago, Kyrah Abattoir said:

    "Generate" Need to be removed from the viewer entirely, I don't understand why LL clings to it like this.

    Here's something easy to do that might help. The minimum number of triangles for "Generate" should never be less than some number. I'd suggest about 20. The mesh reducer does such an awful job for very small numbers of triangles that it should not be allowed to go there. If you want a smaller value, you have to go make a lowest LOD model in Blender or Maya.
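    A minimal sketch of that floor, assuming a hypothetical hook where the uploader picks the generated LOD's target triangle count (the names here are made up):

    // Sketch only; not actual uploader code. kMinGeneratedTriangles and the
    // function are placeholders for wherever "Generate" picks its target count.
    #include <algorithm>

    constexpr int kMinGeneratedTriangles = 20;  // suggested floor

    int clampGeneratedLodTriangles(int requestedTriangles)
    {
        // Never let the automatic reducer target fewer than ~20 triangles;
        // below that, a hand-made lowest-LOD model should be required instead.
        return std::max(requestedTriangles, kMinGeneratedTriangles);
    }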

  9. 13 minutes ago, JanuarySwan said:

    What's the hidden catch not spoken of here? 

    That it's one click away for umpteen million Fortnite players, probably. That's Epic's idea of the metaverse - easy access.

    Also, no idea what their retention is. Sansar got a few hundred users for a few hours when they had some popular DJ. Fortnite got up to 45 million once that way. But then everybody left. That's the trouble with running a performance venue - the venue itself is not an attraction, and the people running the show demand substantial payment.

    • Like 4
  10. Ah. Only rigged mesh gets this big radius expansion. Regular attachments don't get this error.

    The design thinking seems to be that rigged mesh has no size; it gets its size from the skeleton to which it is rigged. How true is that? Most clothing that I rez in-world seems to be approximately the right size. Will rigged clothing in totally the wrong size properly stretch to fit? Probably not, since much rigged clothing comes as S, M, L objects. So if we use the rezzed "native" size of the object, that might work.

    As Beq pointed out in 2016, there's a second bug; the system calculates the diameter of the enclosing sphere and sets that as the radius. That's why we see rigged items for human-sized avatars with a "radius" of 2-3 meters, instead of being around 1m.

    How hard would it be to put a debug switch into Firestorm to optionally fix this? Then we can go to some clubs and fashion events and see what happens. It's viewer side, so it only affects the person using it. @Beq Janus?
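    Purely as an illustration of what such a switch might gate (the code and the switch are hypothetical, not actual Firestorm or LL viewer code), both fixes together would look something like this: use the attachment's own rezzed bounding box rather than the avatar's, and halve the diagonal so it's actually a radius.

    // Hypothetical sketch of a debug-gated fix; not real viewer code.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    float boxDiagonal(const Vec3& bmin, const Vec3& bmax)
    {
        float dx = bmax.x - bmin.x, dy = bmax.y - bmin.y, dz = bmax.z - bmin.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // With the (hypothetical) debug switch off: current behavior, the avatar
    // box diagonal used directly as the "radius". With it on: half the
    // diagonal of the attachment's own rezzed-size box, a true radius.
    float riggedLodRadius(const Vec3& avatarMin, const Vec3& avatarMax,
                          const Vec3& objectMin, const Vec3& objectMax,
                          bool useTrueObjectRadius)
    {
        if (useTrueObjectRadius)
            return 0.5f * boxDiagonal(objectMin, objectMax);
        return boxDiagonal(avatarMin, avatarMax);
    }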

    • Like 1
  11. Here's something strange that may be causing excessive rendering work.


    Leather jacket, rezzed on ground. Object radius 0.730 m; drops to medium LOD at 6.1 m with LOD factor 2.0. Fine. Full detail only in close-up.


    Same leather jacket, worn. Object radius 2.659 m, bigger than the entire avatar. Drops to medium LOD at 22.2 m, 3.6x the proper distance.

    Is it because it's an attachment? No. Fouette gave me this mesh cube LOD test. It's a cube with a different texture for each LOD, so you see which LOD is live.


    Cube on the ground. The cube is 0.2 m x 0.2 m x 0.2 m. SL object radius is 0.173 m, which is half the cube's diagonal (0.2 × √3 / 2 ≈ 0.173 m), i.e. the radius of a sphere that would just enclose the cube. That's what it should be.


    Wearing the cube. Same radius as when rezzed, as it should be.

    So wearing that jacket in a crowded club impacts about 13x the area it should: the radius is overexpanded by 2.659 / 0.730 ≈ 3.6x, and the affected area goes as the square of that, about 13x.

    So why is this? Is that a bug, or is there some reason for this? What triggers the expansion of the radius? It's not just wearing something.

    I'd previously thought that all attachments to an avatar operated at the same LOD. But they don't. We can see that with this test cube. Note that both the one on the floor and the one worn are at "Medium". (That cube shows its current LOD.) Worn objects do LOD just like regular objects. Wearables only stay at high resolution if their object radius somehow gets bloated.

    • Like 1
  12. 5 hours ago, DeepBlueJoy said:

    It's more "throw them in the deep end and see who swims or sinks."

    Literally. I once saw a pair of new users, one day old, trying to kayak in the canals of Bay City. It was going very badly. They had no paddles, but the kayak was pushing them through a paddle animation. They didn't know how to steer, and were banging back and forth between the walls of the canal because they were holding down the arrow keys. They didn't know how to get out of that situation. One stood, came off the kayak, then sank into the canal, because the default AO doesn't understand swimming. And there they stood, underwater in a channel with no stairs or ladders, stuck. I watched this for a while and IMd "Press PAGE UP to fly". They tried that, and hit the underside of a bridge. Then they were stuck in the bridge. Eventually they got out of that mess. But a nice couple had an awful first day.

    I wonder if they ever came back in world.

    • Like 3
    • Confused 1
  13. 21 minutes ago, Parhelion Palou said:

    One of the Linden Lab developers recently said that they'd looked at engines like Unreal but those will *not* work with SL. SL is too different from the usual games.

    I looked into UE4, and I agree. Too much of what makes UE4 fast is preprocessing in the development tools.

    This new UE5, though... It does LODs at viewing time, just before rendering, on the GPU. When more technical details come out, it deserves a close look.

  14. 2 hours ago, DeepBlueJoy said:

    I am frankly afraid that someone will take the idea from under Linden and turn it into a mass use space, the way Facebook undercut and made redundant its earliest competitors/antecedents.

    It's called "Fortnite Party Royale", and it opened a few weeks ago.

    2 hours ago, DeepBlueJoy said:

    I agree.  The beginner learning curve is not a high hill.  It's a wall... a slick, very intimidating wall.  This goes double if you're a) older b) not hugely technical.  I like the fact that it is a place that anyone can so a lot with... eventually.  I didn't like the first few months where I often felt downright stupid. 

    Yes. The place to start is the clothing system. That needs to be simplified. A "How to look good in SL" area and class would help. (Caledon Oxbridge, surprisingly, doesn't have one.) New users are often sent to the Free Dove or Freebie Galaxy, which are designer bargain bins. If you don't know what you're doing before you get there, you'll get horribly confused, get items that don't work for your avatar, and may end up with some off-brand avatar for which little clothing is available.

    A new user should leave the entry area with one look as good as the ones shown on the Second Life home page.  Anything less leaves new users feeling they've been had. One really good look. From there, they can discover the joys of shopping and dressing.

    (Attention Marketing: LL, please stop sticking new users with "70s Disco Guy".)

    • Like 6
    • Haha 2
  15. No real details on how UE5 does it yet. Discussions on GameDev are speculating.

    They seem to have a dynamic level-of-detail system something like the one described in this 2010 paper. Epic probably got the GPU to do much of the work; that's probably the big advance here over the old GLOD system. UE5, like UE4, will have its source available and documented, so we'll know how it works soon.

    The detailed mesh probably has to reach the client before it can be reduced, even if it's not being displayed. More network bandwidth may be needed. Maybe not; SL uses more bandwidth on textures than on meshes. Or you could download lower LOD meshes reduced in the same way server-side for distant objects, and replace them if you get close. That would be traditional LOD on top of this. Wouldn't have to reduce the mesh as much, though, since it gets another reduction during rendering. Big-world MMOs will probably have to do that; they have the same streaming asset problems as SL.

    The PS5, for which that demo was made, is a pretty good computer: 8 CPU cores at up to 3.5 GHz, 16 GB of RAM, and a GPU of roughly 10 teraflops. That's a lot of engine for US$400. Way beyond typical SL user hardware today. UE5 is expected to run on a wide variety of platforms, like UE4 does, with performance reduced as necessary.

    It's good to see real-time graphics approaching the level Hollywood was at only a few years ago. It will be interesting to see if Linden Lab can keep up.
