
Posts posted by animats

  1. Second Life has several different systems for moving objects. There are ones that use the physics system, there's keyframe animation, there's direct positioning (llSetPos, etc.), and there's pathfinding. They're not interlocked. You can only use one system at a time. Allow a second or two between changing from one system to another. llMoveToTarget and llTargetOmega belong to different systems.

    llLookAt and llRotLookAt belong to the same family as llMoveToTarget, and can be used together.

    llTargetOmega is interesting. It's usually just a visual effect of rotation, done in the viewer. Selecting the object will make it stop rotating for you, but not for other viewers. It's for wheels, windmills, and spinning decorative objects, where you want visible rotation but don't care about the actual rotation angle.

  2. 1 hour ago, LoneWolfiNTj said:

    So an object inside the fenced parcel is literally inside a hollowed-out part of a "Box" object.

    Oh, so maybe the pathfinding system doesn't recognize box hollows. That wouldn't surprise me. It's basically building a very low-rez model of the sim, and giant box hollows are rare. There are other things it doesn't handle, such as walkables no bigger than 10m x 10m x 0.2m.

    Worth a try - make the fence a Static Obstacle, and look at the navmesh to see if the box hollow was recognized properly.

  3. 8 hours ago, LoneWolfiNTj said:

    Ah-ha! I think I see the problem! It was a 1-prim fence, a single object wrapping the property on 3 sides. So a pathfinding object trying to navigate the area would literally find itself "surrounded by" or "inside" an object and hence think that "all points are unreachable".

    Ah! That probably means the fence object has a bogus physics model which is confusing pathfinding.

  4. I've been talking about "silhouette protection", as Unreal Engine does it. It's possible to do that in Blender, by hand. Here's what it looks like.

    Here's a simple but hard test case - a thin sheet with a bump in it.

    [Image: thin sheet with a bump]

    Flat sheet. 480 triangles. Created by stretching a cube to 10m x 10m x 0.1m, then subdividing the big faces x10.

    [Image: bad mesh reduction]

    Bad mesh reduction with Blender's decimate. 14 triangles. If you use that algorithm with extreme reduction, this is what happens. Note that the outer edges of the sheet have been pulled in. This is why extremely low-poly LODs look so awful.

     

    [Image: edges protected by hand]

    Silhouette protection, done by hand. Only the selected vertices and edges can be mesh-reduced. The outer edges, both vertically and horizontally, are exempted. Notice that the inward side of the bump is not protected, because it doesn't extend the silhouette.

    [Image: decimated with edges locked]

    Next, decimate as hard as we can, with the outer edges locked. Not too bad.

    [Image: final reduced mesh]

    Final result, without the edges showing. 52 triangles. That's as far as you can go with Blender's decimate.

    With a bit more hand work, you can get down to 32 with only minor changes to the silhouette. So that's where mesh reduction should stop.

    You can play with this in Blender, using the Decimate command followed by Limited Dissolve to merge coplanar triangles; a scripted version of the workflow is sketched below.
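    Here's a minimal Blender Python sketch of that protected-decimate workflow. The vertex group name "reducible" is a placeholder of mine; in practice you'd assign it to everything except the silhouette vertices you want to keep.

```python
# Decimate only the unprotected vertices, then merge near-coplanar
# triangles with Limited Dissolve. Assumes the active object already
# has a vertex group named "reducible" (hypothetical name) covering
# everything EXCEPT the silhouette edges.
import bpy

obj = bpy.context.active_object

mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.decimate_type = 'COLLAPSE'
mod.ratio = 0.05                  # reduce as hard as we can
mod.vertex_group = "reducible"    # protected vertices are never touched
bpy.ops.object.modifier_apply(modifier=mod.name)

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.dissolve_limited(angle_limit=0.087)  # about 5 degrees
bpy.ops.object.mode_set(mode='OBJECT')
```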

    So that's what I was talking about. Mesh reduction needs to preserve the silhouette of the object, so things don't blank out at distance.

  5. That may be related. Sometimes, in hilly terrain, the navmesh can be as far as 2m from the ground surface. That happens over in Hippotropolis. My NPCs have to correct for this. I wasn't expecting to see that on a parcel that's basically flat, but maybe the road edges are contributing to the problem.

    Definitely file a JIRA.

  6. I went over to his parcel, and I can't figure out what's wrong.

    Pathfinding objects work fine on the adjacent parcels, but not his. I brought over my "Come Here" tester. On his parcel, it makes a short move, then says it can't find the navmesh. Works fine on both adjacent parcels. Even on bare ground, which is always walkable, pathfinding won't work there.

    I looked at the navmesh with the viewer, and it looked fine.

    I looked at the objects with my "Amulet of Pathfinding", which does a ray cast and reports all the objects in the cast direction until it finds a walkable. There's no invisible object on top of the ground interfering with ground contact.

    I even tried one of my NPCs. Those move with keyframe animation, but sense using llGetStaticPath and llCastRay. They work OK, and are able to sense the navmesh. So the navmesh is good, but the part of pathfinding that uses it is being difficult.

    The only thing that was unusual was that the parcel was really close to the prim limit. When you turn on pathfinding in an object, it pushes up the LI to at least 15. With only 9 LI left on the parcel, turning on pathfinding hit the limit. I wonder if hitting that limit disables pathfinding for the entire parcel for a while.

  7. The inventor of VRML has a new article: Metaverses, the Third Wave. He has this to say about Second Life:

    "VRML ultimately fizzled out by the late 1990’s. It was too early in terms of commercial adoption. It was also a matter of too much too soon, as the world was still coming to grips with the basics of the Web. However, our work inspired others, most notably Philip Rosedale, who left Internet video pioneer Real Networks to create Second Life, arguably the first-ever fully working Metaverse system: a 3D-rendered virtual universe on the Internet. Not open; not at billions-scale; but a good start. Most of the tech of this time ultimately crashed and burned, but Second Life survives to this day with a thriving community."

    His summary of the current situation:

    "It’s more than tech herd mentality; we are getting substantive signals that it may finally be Metaverse time."

    • "The suits are circling. In the wake of Facebook’s new positioning as a Metaverse company, execs at a multitude of tech and media outfits have put it front and center of their strategy, or are at least saying they have a Metaverse strategy. ...""
    • "The pundits are pontificating. A new class of self-appointed “experts” are jockeying for position as thought leaders. ..."
    • "The kids are creating. Most significantly, the creator class in an ascendant economy is making all kinds of Metaverse stuff: interactive 3D content in Fortnite, Roblox and VRChat, NFTs on myriad platforms, and open and decentralized worlds in Decentraland, to name a few. This is important, because otherwise the two previous points could just as well suggest a hype bubble."

    I agree with that last point. The hype-to-code ratio is very high in this space. Which is why I keep trying to push SL to move forward, rather than hopping on some other vaporware project. SL has the metaverse working; it's just too slow and glitchy to go mainstream.

     

  8. 8 hours ago, ChinRey said:

    But this isn't really automatic LoD generation, Animats. It's more semi-automatic with a large number of parameters defined by the user/creator.

    Also, the reduction rate they achieve in the video is rather low.

    True, although most people probably use the default settings. The Unity mesh reducer has fewer user tuning parameters but can do a roughly comparable job. This reflects the target market. Unreal is used for big-budget AAA game titles, where people obsess over visual quality. Unity is used for second-tier projects and is easier to use but not quite as good visually.

    So here's the Unity "just push the button to create LODs" system.

    Unity's LOD system. Very easy user interface.

    The SL mesh uploader should work roughly like this.

    How much to reduce is another issue, closely tied to the LI computation, which is highly controversial. The current LI charging rewards extremely low-poly lowest LODs too much.

    Avatars have a different set of problems. A near-term solution is to use the existing avatar impostoring system. The higher the complexity of your avatar, the closer the distance at which it drops to avatar impostor mode. To see what that would look like, set "number of non-impostor avatars" to 3 or so in your viewer. Then go someplace with lots of avatars.

  9. 4 minutes ago, Jennifer Boyle said:

    So, why can't LL (compulsorily) optimize meshes after textures are applied?

    Ah. See, this is one of the quandaries of SL. In game development, this happens long before the game ships. In SL, you can change the texture of any object at any time, either from the menus or from a script. By then, it's too late to change the mesh. Every viewer in range already has a copy of the mesh.

    The compute effort to regenerate optimized LOD meshes is high (several seconds of compute for the best optimizers), and that's a big load to add to the system. However, it's worth thinking about doing that on clothing changes, when the baking servers, separate from the sim servers, are doing the bakes-on-mesh work. But that's just an idea at this point. I think it's worth looking into, because unoptimized clothing is the worst drag on the rendering process. The land impact system, flawed though it is, keeps things from getting totally out of hand for non-avatars. Avatars, though...

    Hard problem, big potential win. Roblox is working on it, as they add a full clothing system. You know it's getting serious when Roblox gets written up in Vogue Business. Roblox avatars used to be a joke, but they're moving forward rapidly.

     

  10. 6 hours ago, ChinRey said:

    Because it's a combination of technique, art, creativity, intuition, experience, strict logic and crazy out-of-the box thinking.

    That's not really true of LODs. For a textured mesh, there's an objective standard of LOD correctness: render the high LOD and the lower LOD so they occupy the same amount of screen space, sized for the lower LOD, and compare the differences in the pixels. If the match is good from multiple angles, it's a good LOD. And if the images match that closely, the switch is invisible to the user. In areas where the color is uniform, you can take out more detail.
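    That comparison is easy to sketch. Here's a minimal Python version, assuming the two LODs have been rendered to same-size PNGs (the filenames are hypothetical):

```python
# RMS per-pixel difference between two same-size renders (0-255 scale).
# A small error from several camera angles = a good, seamless LOD.
import numpy as np
from PIL import Image

def lod_error(high_png: str, low_png: str) -> float:
    hi = np.asarray(Image.open(high_png).convert("RGB"), dtype=np.float32)
    lo = np.asarray(Image.open(low_png).convert("RGB"), dtype=np.float32)
    return float(np.sqrt(np.mean((hi - lo) ** 2)))

for angle in (0, 45, 90, 135):
    err = lod_error(f"high_{angle}.png", f"low_{angle}.png")
    print(f"angle {angle}: RMS pixel error {err:.2f}")
```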

    SL, though, has to optimize meshes without textures. It doesn't know where the texture detail is. That's a problem.

    Here's Unreal Engine's automatic LOD system:

    Notice the settings for "Pixel error".  UE4 docs say: "Because the algorithm knows how much visual difference every edge collapse is adding, it can use this information to determine at what distance that amount of error is acceptable."

    Neither the old nor the new version of the SL uploader does that pixel-by-pixel comparison. So we don't get a good match between LODs, let alone a seamless one.

  11. 5 hours ago, Naiman Broome said:

    What is the best correspondence between PBR Material maps and Second Life maps?

    PBR to SL?

    • Diffuse = Albedo
    • Diffuse alpha = Transparency
    • Specular = 1.0 - avg(Specular)*Roughness

    That's a start. PBR has more channels, but they don't translate well. Beyond the basics, you've kind of reached the limits of SL materials.
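    As a rough sketch, here's that mapping as an offline image conversion in Python. The filenames are placeholders, the maps are assumed to be the same size, and the specular formula is the approximation above, not an official conversion:

```python
import numpy as np
from PIL import Image

# Diffuse = Albedo, diffuse alpha = transparency: the albedo map,
# with its alpha channel if it has one, works as SL diffuse as-is.
Image.open("albedo.png").convert("RGBA").save("sl_diffuse.png")

# Specular = 1.0 - avg(Specular) * Roughness, computed per pixel.
spec = np.asarray(Image.open("specular.png").convert("RGB"), dtype=np.float32) / 255.0
rough = np.asarray(Image.open("roughness.png").convert("L"), dtype=np.float32) / 255.0
sl_spec = 1.0 - spec.mean(axis=2) * rough
Image.fromarray((sl_spec * 255.0).astype(np.uint8), mode="L").save("sl_specular.png")
```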

  12. 6 hours ago, ChinRey said:

    Even so, I think it's fairly obvious it isn't good enough to keep.

    OK, please make sure Vir Linden gets that message.

    What's going wrong here is applying quadric optimization to a thin shell. Note that the vase is hollow. The optimizer is trying to minimize the error volume, which is all wrong for a thin object. Roughly the same mesh optimizer is available in Blender, so you can try this yourself. Here's a good way to see it:

    • Create a cube.
    • Rescale it down to a thin sheet of nonzero thickness. Apply the scaling to the mesh.
    • Subdivide the mesh so that you have about 5x5 squares on each of the big faces. You now have a flat piece of thin "cloth".
    • In edit mode, grab the central square of the big faces, on both faces, and pull it a bit out of the sheet. You've now put a "wrinkle" in the cloth.

    Try Blender's quadric mesh reduction on that. It won't flatten the "wrinkle". It will pull in the edges. Why? Because that's what changes the volume error between the original and the reduced version the least. That algorithm does not understand "thin sheet" at all. (A script for setting up this experiment is sketched below.)
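    Here's a minimal Blender Python sketch of that setup, through the decimate step. Put the wrinkle in by hand in edit mode first, then run the decimate part and watch the outer edges get pulled in:

```python
import bpy

# Thin but nonzero-thickness sheet, with the scaling applied to the mesh.
bpy.ops.mesh.primitive_cube_add(size=1)
sheet = bpy.context.active_object
sheet.scale = (10.0, 10.0, 0.1)
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)

# Subdivide to about 5x5 squares on each big face.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.subdivide(number_cuts=4)
bpy.ops.object.mode_set(mode='OBJECT')

# (Pull the central squares out of the plane by hand here.)

# Quadric (collapse) decimate, as hard as it will go.
mod = sheet.modifiers.new(name="Decimate", type='DECIMATE')
mod.decimate_type = 'COLLAPSE'
mod.ratio = 0.05
bpy.ops.object.modifier_apply(modifier=mod.name)
```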

    This is an indication that any mesh optimization for SL clothing has to be thin sheet aware.

    There are better mesh optimizers, but they are not free.

     

  13. Has anyone tried the new LL "mesh optimization" project viewer yet?

    I'd appreciate comments and samples. I don't upload enough meshes to really test this properly. Is this a good fix to the bad LOD problem?

    Things to check:

    • If you upload something, and ask for maximum reduction, do you get holes? It should stop reducing rather than emitting a broken model.
    • On thin objects, like clothing, does it preserve the outer edges, or does it start trimming back the fabric? Again, it should stop reducing before emitting a broken model.
    • Does it mess up on non-watertight meshes?

    Vir Linden mentioned that it's an open source quadric mesh optimizer. That means it tries to minimize the error volume between the original and reduced meshes. That approach is very effective for well-defined volumes. Non-watertight meshes may be troublesome, because of inside/outside ambiguity. So please try this on some SL content of your own. This is an LL product, so report bugs via JIRA.
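    For reference, here's the standard quadric error metric (Garland & Heckbert) these optimizers are built on. Each vertex $\mathbf{v} = (x, y, z, 1)^{\mathsf{T}}$ accumulates a quadric from the planes of its surrounding faces, and the edge collapses that add the least error go first:

$$\Delta(\mathbf{v}) = \mathbf{v}^{\mathsf{T}} \Big( \sum_{p} \mathbf{p}\,\mathbf{p}^{\mathsf{T}} \Big) \mathbf{v}, \qquad \mathbf{p} = (a, b, c, d)^{\mathsf{T}}, \quad a^2 + b^2 + c^2 = 1,$$

    where each $\mathbf{p}$ is a face plane $ax + by + cz + d = 0$ touching the vertex. It's a sum of squared distances to planes, which works well for closed volumes; on a thin or open sheet there's almost no volume to preserve, so pulling in an edge looks cheap to the metric.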

     

  14. Go look at all those gorgeous pictures on Strawberry Linden's blog. That's what SL content looks like. You just have to stay still in one place for a long time as the viewer slowly catches up before you can take the picture. Moving around normally in SL is looking at a world that's blurry and constantly glitching. That's not good enough any more. Any modern game that looks that bad is laughed off the market.

    LL's Project Interesting was on the right track. But they got to 70% done and gave up.

    If you make that work right, the whole user experience changes. Here's some window shopping in New Babbage.

    [Image: entomology shop]

    Entomology shop. You can go inside and examine all those butterflies in detail.

    [Image: map store]

    Map store. Find the secret map of the tunnels.

    [Image: club]

    Club. Go inside.

    [Image: furniture store window]

    Furniture store window display

    [Image: Graves Investigations window]

    Graves Investigations. Many occult items of interest in glass cases.

    [Image: "Remember Belgium!" signs]

    Remember Belgium! If you look closely, it's an attack by the tripods from Mars.

    [Image: pharmacy]

    Pharmacy. Every bottle is labelled.

    This is another demo from my test viewer (log in and cam only). There's no long waiting for the detail to appear. These pictures were taken as fast as I could get positioned in front of the window. Within 1-2s of going somewhere, you have clear textures nearby.

    The whole Second Life experience changes as the viewer speeds up. You can go and look at anything that looks interesting, without having to decide "is this good enough to spend a minute waiting for loading?" Suddenly SL starts to look like an AAA title.

    So, quit worrying about trying to tame creators. Fix the viewer and server and make this thing show all that beautiful content.

  15. The Verge tries to make some sense of the metaverse hype.

    Didn’t we have a whole metaverse hype cycle around Second Life in the ‘00s? What’s different now?

    It’s true: plenty of new “metaverse” phenomena aren’t really novel. People were becoming digital land barons and selling virtual items in Second Life nearly two decades ago. Schools and businesses have opened satellite campuses in that world and others. Social 3D spaces like CyberTown long predate Second Life. Even before that, early virtual worlds popped up in the 1970s with text-based multiuser dungeons or MUDs. Many older worlds also inspired the kinds of utopian predictions we see around the metaverse today.

    ‘Fortnite’ isn’t the first virtual world to inspire utopian predictions

    One reason we might be experiencing the hype cycle again is that graphics technology and internet connectivity have significantly advanced since, say, Second Life’s 2003 launch. Many video games operate under a “live service” model where the developers constantly update a game to encourage players to return, creating a more convincing illusion of a living, breathing, ever-changing world. Non-metaverse games like League of Legends or Overwatch make significant changes to gameplay years after release, treating the experience more like a virtual space than a static game. From there, a leap to in-game concerts and fashion shows doesn’t seem that far.

    At the same time, virtual and augmented reality have gotten closer to consumer application, even if VR remains niche and AR nascent. One estimate suggests Facebook has sold around 8 million Oculus Quest 2 headsets, and several dozen VR games have made over $1 million in sales. Those are tiny numbers compared to phone and console sales, but huge compared to the practically nonexistent home VR market 10 years ago. Apple is reportedly working on VR / AR headsets, and Chinese company Nreal has successfully shipped full-fledged consumer AR sunglasses at a comparatively low price.

    Pop culture is obsessed with cinematic universes, so why not have a virtual one?

    Another possible reason is that modern pop culture is built on sprawling and highly intertextual media franchises owned by a few companies that promote their huge intellectual property catalogs as shared universes. That enthusiasm has translated into dreams of — as Verge editor Liz Lopatto describes it — “an online haven where superhero IP owned by different companies can finally kiss.” (This is the entire premise of Ready Player One.)
