animats

Resident · 6,160 posts

Posts posted by animats

  1. A few years back, someone blanked out nine regions of Zindra with something that generated random huge blinking geometry, big enough to reach two regions away. It also played Caramelldansen. That gave it away; it was possible to find the sound, and then the object. It was found, AR'd, and deleted by Governance, and the user was never seen again.

    Recently, someone stole my moveable furniture. I have some chairs you can push around and rearrange if you want. When no one is around, they return to their normal position. Someone dragged them to a ban line in a nearby region, so they were auto-returned and I got an email. This was annoying, but not serious. If it gets to be a problem, I'll make the scripting more restrictive.

    My parcels are all open rez with a 20-30 minute timeout. I expect occasional problems, but mostly things clean themselves up with no attention from me. Someone did rez a house once, on top of my stuff. But it was just a new user who saw the "Rez Zone" sign and wanted to try out their purchase. I sent them to one of the big region-sized Linden sandboxes meant for large builds.

    The NPCs I have running around give the impression that someone is watching. They can email me for certain serious problems. It's been years since I got an email for which I logged in and asked someone "Is there a problem here?"

    I've never had to ban anyone. Worst case, I subjected them to a sales pitch for some of my less useful products, such as the driveway hose bell that goes "Ding" when you drive over it, and the beach ball that bounces off ban lines. Or I put them on a demo motorcycle and let them ride off into the sunset.

    • Like 6
    [Image: escalatortest1.png]

    First try of this convex decomposition program. This is the frame from one of my escalators. (No steps; those are a separate part.)

    • On the right, the original.
    • In the middle, the default reduction to convex hulls, which produced 24 hulls. That would be an OK physics model.
    • On the left, forced down to 8 hulls. Except for the bump at the top left, which would get avatars stuck on the escalator, it's usable; so 8 hulls was too few for this model.

    Looks OK so far. Its choices on where to cut are reasonable. Try it yourself. I used the Python 3 version, listed under PyPI here.

    Creators, please try some of your own models and post the results. Thanks.

    • Thanks 1
  3. Second Life has a lot of trouble with bad physics models. The system for generating them automatically during mesh upload is not that great. It's a hard problem.

    There's been recent research work on solving that. Google seems to be funding work in the field, partly because they need this for Google Earth.

    New papers, both from 2022:

    Looks like there's been considerable progress. The big advance is doing approximate convex decomposition, where it's OK if there's a little overlap between the parts. If you go for exact decomposition, as Havok does, the joints get complicated. For collision purposes, a little overlap is not a problem.

    The second paper has some useful pictures.

    [Image: convexdecomp1.jpg]

    The input is a single mesh, and the output is multiple convex (no inward dents) meshes, shown in different colors. Convex meshes are suitable for SL's collision system. (Convex hull intersections are fast to compute. See "GJK algorithm" for the theory. That's what everybody serious, including Havok, uses.)
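    For intuition about why convex pieces matter: an overlap test between two convex shapes can bail out as soon as one separating direction is found. GJK is what the serious engines use; the sketch below is the simpler separating-axis test for 2D convex polygons — not GJK itself, just an illustration of the same principle.

```python
# Separating Axis Theorem (SAT) for 2D convex polygons.
# A simpler cousin of GJK: two convex shapes are disjoint iff some
# edge normal of one of them separates their projections.

def _axes(poly):
    """Edge normals of the polygon (unnormalized; only direction matters)."""
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        yield (y1 - y2, x2 - x1)  # perpendicular to the edge

def _project(poly, axis):
    """Min and max of the polygon's projection onto the axis."""
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in poly]
    return min(dots), max(dots)

def convex_overlap(a, b):
    """True if convex polygons a and b (lists of (x, y)) intersect."""
    for axis in list(_axes(a)) + list(_axes(b)):
        lo1, hi1 = _project(a, axis)
        lo2, hi2 = _project(b, axis)
        if hi1 < lo2 or hi2 < lo1:   # a gap on this axis: separated
            return False
    return True                      # no separating axis found
```

    With concave shapes this early-out logic doesn't work, which is exactly why the decomposition step exists.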

    This approach seems to get the classic hard cases right. The Havok decomposition has a hard time with simple cases such as a house with four walls, let alone one with a door and windows. This system seems to do better.

    [Image: convexdecomp2.jpg]

    Harder cases, with the accuracy turned down. This looks promising. If you turn the accuracy way down, something reasonable still comes out. That's what we need.

    Of course, these are the author's test cases. Anyone want to build the code and run some models through it? The program takes .obj format, so if you have a model in Blender, you can export it and try this program.

    SL viewers that upload content send this info as a "Physics Convex Block". Currently, Havok code in the viewer generates those, but a new approach could be substituted.

    • Like 3
    • Thanks 2
  4. I can sort of see ways to do this. Suppose we had orthographic images with depth from straight down, and angled down from north, south, east, and west. That's something I could generate with a version of Sharpview. Now we have some 2D images with depth, like aerial photographs. Google Earth puts that kind of data together very well. You can look down into narrow canyons between buildings in Manhattan. The canyon has to be really narrow before they are unable to generate a good side image of the building.

    Now the trick is to crunch that down into a low-poly 3D model. There's open source photogrammetry software, usually used for making 3D models from drone images, that can do this. The "low-poly" part may be hard. Most of the software for this generates point clouds. We want the faces of buildings to be a very small number of triangles, with the outer edges correctly aligned.
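    The images-with-depth data is easy to turn into a dense mesh; the research problem is the decimation afterward. A minimal sketch of the easy half (my own illustration, not from any photogrammetry package): a top-down height grid to triangles, two per cell.

```python
def heightfield_to_mesh(heights):
    """Convert a 2D grid of elevations into vertices and triangles.

    heights[row][col] is the elevation sampled at (x=col, y=row).
    Returns (vertices, triangles) with two triangles per grid cell.
    This is the naive dense mesh; a real tool would then decimate it
    down to a few triangles per planar face, which is the hard part.
    """
    rows, cols = len(heights), len(heights[0])
    verts = [(c, r, heights[r][c]) for r in range(rows) for c in range(cols)]
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c                          # this cell's corner
            tris.append((i, i + 1, i + cols))         # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right
    return verts, tris
```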

    • Like 2
    • Thanks 1
  5. What's striking about all this is that SL, with long draw distances, is putting more rendering work into the distant stuff than the near stuff. If we can somehow get past that, the big world thing will work better.

    Here's more of how GTA V does it. (I use GTA V as an example because, although 10 years old, it remains a popular big-world game that looks good. Its tricks are simple enough to be worth considering as improvements to SL.)

    So, an alternative to flat sim surrounds is 3D sim surrounds, like off-sim 3D terrain people add to their islands.

    [Image: vinewoodhillslowpoly.jpg]

    Rolling terrain, the easy case. This is one mesh with one texture. There are tools for SL to make sculpts like this. Someone was showing examples like this at SL20B.

    [Image: littleseoullowpoly.jpg]

    Urban terrain, the hard case. Vertical surfaces are hard. Those buildings, and those orange cranes at top center, will be hard to do automatically. It matters. Distant hard edges against the sky need to look right. That's how people navigate visually.

    Ten years ago, this was almost certainly done by hand. Today, we see Google Earth and Microsoft Flight Simulator doing this automatically from aerial imagery and depth info. Anyone know of a tool for this? If we had a library of low-rez SL region models like this, updated once a week or so, the compute load for looking at the big world would come way down. Beyond draw distance, the viewer would switch over to these low-rez models.

    • Like 1
    • Thanks 1
  6. It's not that it's impossible to have clothing physics. It's that it is not just compute-intensive; it also requires clothing and models designed for it, plus considerable prep work.

    [Image: new-workflow.png]

    Unreal Engine 4/5 clothing designer workflow. That part up at the top, "DCC tool", describes the same process as making SL clothing in Blender. The creator does that. Then there's a phase where that's brought into the Unreal Editor and prepped for game use. That's where LODs are created, meshes are reduced, collision models are created, clothing parameters (how heavy? how stiff?) are set, and the clothing tried out. There are tools for debugging physical clothing.

    This is the big difference between SL content and game content. SL lacks those technical polishing steps and the tools for them. A few things can be set in the uploader, and some parameters can be set in world, but that's it. All this stuff adds work for creators. SL has trouble getting creators to create halfway decent LODs. This is much harder. It also needs some scene-wide coordination. If you get 90 avatars with cloth physics in a club, not all of them can have full physics.

    That said, SL really does need a more modern approach to clothing. Something needs to happen when you get dressed that adjusts all the clothing so that it fits and layers properly. Roblox has that now. This is a hard problem, but others have solved it. The Roblox solution is not run-time cloth physics; it's automatic clothing adjustment that happens when you change outfits. I'd like to see a system that guarantees that, if you try on clothing in pose stance and it looks right, all the layers will stay properly layered, without peek-through, for any avatar position. Didn't Sansar have a "pull and tug" system which could do some of that?

    I've been trying to find a video of game cloth physics showing a long un-slit skirt moving properly. Capes and slit skirts are easier to simulate, because they always have an easy direction they can move.  But a long dress or skirt is tough. There are tech demos of that with one character on screen, but few full games. Most games use capes, ponchos, or slit skirts to avoid the hard problems.
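    For a feel of why this is compute-heavy: even the cheapest scheme, Verlet integration with distance constraints, needs an integration pass plus many constraint-relaxation passes per frame. A toy single strand, one "hem line" of a skirt hanging under gravity (purely illustrative; real cloth is a 2D grid of these plus collision against the avatar body, which is where long un-slit skirts get hard):

```python
def simulate_strand(n=10, rest=1.0, steps=100, dt=0.05, gravity=-9.8):
    """A chain of n points in 2D; point 0 is pinned at the origin."""
    pos = [(i * rest, 0.0) for i in range(n)]    # start horizontal
    prev = [p for p in pos]
    damp = 0.98                                  # crude air resistance
    for _ in range(steps):
        # Verlet integration: velocity is implicit in (pos - prev)
        for i in range(1, n):
            x, y = pos[i]
            px, py = prev[i]
            prev[i] = (x, y)
            pos[i] = (x + damp * (x - px),
                      y + damp * (y - py) + gravity * dt * dt)
        # Constraint relaxation: pull neighbors back to rest distance
        for _ in range(10):
            for i in range(n - 1):
                (x1, y1), (x2, y2) = pos[i], pos[i + 1]
                dx, dy = x2 - x1, y2 - y1
                d = (dx * dx + dy * dy) ** 0.5 or 1e-9
                corr = (d - rest) / d
                if i == 0:                   # pinned end: move only point 1
                    pos[1] = (x2 - dx * corr, y2 - dy * corr)
                else:
                    pos[i] = (x1 + dx * corr * 0.5, y1 + dy * corr * 0.5)
                    pos[i + 1] = (x2 - dx * corr * 0.5, y2 - dy * corr * 0.5)
    return pos
```

    Ten relaxation passes per step, for one strand of ten points. Multiply by a full garment mesh and 90 avatars in a club and the budget problem is obvious.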

  7. Treecy sent me a screenshot:

    [Image: screenshot]

    Treecy says that was from current Firestorm.

    Any idea where that comes from? I searched the Firestorm sources for "Your inventory is", and the only hit was

    Your inventory is currently filtered ...

    Any ideas, anyone?

     

    • Thanks 2
  8. You can do quite a bit with animesh NPCs. But they can't do many avatar things. In particular, they can't "sit". "Sitting", in SL, is a strange kind of linkage that's only implemented for avatars. So NPCs can't interact with furniture intended for avatars.

    However, NPCs and furniture can be designed to work together. You can see this at "Sexytyme", which is what you'd expect.

    There's also SmartBots/SmartMates. These are actual avatars, bots controlled by programs outside SL. Rent a bodyguard to follow you around. They can even follow you through teleports. These are quite good, but they're expensive and sold as a service because they need a computer outside SL to control them.

    Virtual Kennel Club had pets that would follow you around. They shut down recently.

    And, of course, there are my own NPCs. Visit Hippotropolis. Cindi will find you and say hello, then run off and resume jogging around.

    All of these work. All are hard enough to set up that they are not used much in SL. If you sell something like this, it takes a support organization to get customers through the rough spots.

    • Like 2
    • Thanks 1
  9. I don't see the connection between Second Life and Burning Man. SL doesn't try to be a nicer world. Just a virtual one. SL has landlords, evictions, property taxes, neighbor problems, and jerks. Just like real life.

    What makes Second Life work is this:

    • It's a big world, and being a jerk has only about a 100 meter annoyance radius, the range of a shout. When someone manages to annoy over an area larger than that, it's so obvious that Governance can take action. That's rarely needed.
    • The only broadcast medium is group messages. You have to join a group yourself. No one can add others to their group unwillingly. You can leave a group at any time and you're immediately off the list. So spam is a minor problem.
    • On your own land, you have enough power to kick jerks out, keep them out, and keep them from seeing much of what you're doing. Or even just block new users.
    • There's a social consensus that, given all those tools, you should deal with your own problems and not whine about little stuff.
    • The general policy of LL management is rather hands-off.

    This is much of why SL works as a society. It's subtle, and I've seldom if ever seen it described in a published article. It's really quite amazing, when you think about it, that a virtual world anyone can enter for free works so well.

    Now, there are other ways to manage a virtual world, and they're worse. Meta's Horizon and Roblox have a huge number of low-paid "moderators" armed with ban hammers. Roblox is trying AI moderators, so Big Brother is always watching and listening. Users live in fear. LL has managed to avoid that mistake.

    Comments?

     

    • Like 4
    • Thanks 1
  10. 45 minutes ago, William Gide said:

    abandoned land get some sort of (script deployed, perhaps) characteristic landscaping — with trees — so the world doesn't look like a covered-over landfill.

    There's been some indication at Creator User Group that there are plans for better land textures after PBR gets deployed. Skies have been modernized and look good, and so has water, but terrain needs an upgrade. Terrain doesn't even have old materials, let alone new ones. It's just flat base color. So suggest things in that area.

    Maybe upgrade Linden trees and grass, which are also just flat base color. Then use them on abandoned land.

    It would be amusing if there was a slow, automatic process for abandoned land. After abandonment, harsh edges gradually smooth out. Then some Linden weeds appear. Then more weeds and grasses. Then small trees. Gradually, the trees become larger. Eventually, after a year or two, it looks lightly forested.

    • Like 9
    • Thanks 2
  11. 4 hours ago, Casidy Silvercloud said:

    There's an edge by one of the big mainland continents (the peanut one) where 3 different sims meet. Those crossings can be a pain all on their own, but the worst part about that particular crossing is that you MUST cross right in this one particular corner on one of the sims, within this little jagged sort of triangle shape, or all sorts of things go wrong. I don't know if the people who rent or own that area still use a security orb, but there was one there that had a zero-second immediate removal. Aside from the zero-second warning, the main problem with that one is that it didn't send you home. What it did was send you right back to the corner where the three sims meet, but on the inside edge of that triangle instead of the outside edge where it's safe. Then you'd get stuck in a loop. It would usually remove you from your vehicle, possibly remove stuff from you, and once it put me in a perpetual fall I couldn't get out of because the crash was taking forever.

    There's a trouble spot on Robin Loop in Heterocera that's something like that. It's a road that crosses a region corner. One quadrant of the corner happens to be in a region that has no Linden land ownership other than a tiny chunk of road. So there's not enough land impact capacity to drive through there. If you drive through it, you get an "insufficient resources" popup and are stuck.

    Some years back I got the Moles to put up traffic cones around the trouble spot. That warns people to stay clear. There are other spots where roads and jagged-edge parcels result in ban lines atop a Linden road. If you find one, bug LL into putting up guard rails or traffic cones or something by submitting a support ticket. This is a routine road maintenance task, done by the Linden Department of Public Works. There are at least half a dozen places on mainland with such fixes.

  12. 9 minutes ago, Love Zhaoying said:

    Well, SL has "doors that lock" but you can still cam inside, etc. etc.

    You should not be allowed to sit while camming if there is no line of sight between the avatar and the sit target. Then you couldn't cam and sit through a locked door.
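    A sketch of how such a check could work (names and geometry are made up for illustration): approximate walls as axis-aligned boxes and test the sight line with a standard slab test.

```python
def segment_hits_box(p0, p1, box_min, box_max):
    """True if the segment p0->p1 passes through the axis-aligned box.
    Standard slab test, used here as a stand-in for a real occlusion query."""
    tmin, tmax = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:           # segment parallel to this slab
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
            continue
        t1 = (box_min[a] - p0[a]) / d
        t2 = (box_max[a] - p0[a]) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin > tmax:
            return False
    return True

def can_sit(avatar, sit_target, walls):
    """Allow the sit only if no wall box occludes the line of sight.
    walls: list of (box_min, box_max) tuples."""
    return not any(segment_hits_box(avatar, sit_target, lo, hi)
                   for lo, hi in walls)
```

    A real viewer would raycast against the scene's physics shapes instead of a hand-built wall list, but the decision logic is the same.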

    • Thanks 1
  13. 6 hours ago, Katherine Heartsong said:

    And yet, this is what people who play immersive, realistic-looking 3D games want and expect these days. They don't care how it's being done technically, whether it's GTA, The Sims, World of Tanks, Cyberpunk 2077, Life by You (next month's release), or SL. They want the environments in games that are set in a real-world type of setting that humans are familiar with to act and look as real as possible.

    That's the expectation these days.

    Yes. That's the whole point of all this.

    4 hours ago, Henri Beauchamp said:

    One word: Vulkan.

    Sharpview uses Vulkan. It helps rendering speed considerably, which is why Sharpview consistently gets 60FPS on SL content on a reasonably good GPU. It's not magic. It doesn't help with GPU memory space, for example. I could go into more detail, but it gets boring. IM me if you really want to talk Vulkan. The general idea is that you don't want to use most of your graphics resources rendering stuff so far away it can barely be seen. Hence levels of detail. Most of what I'm talking about here involves coping strategies for bad lower LOD models.

    4 hours ago, Henri Beauchamp said:

    LL also started to implement some on-the-fly conversion techniques so to be able to render SL on mobiles: this could also benefit the ”standard” viewer.

    That's the right way to do it. The region impostor images I've been talking about here are one example of that sort of thing. That's the simplest form and easy to do. The next step up is automatically making low-poly models of large areas. Or of avatars, a subject I've discussed elsewhere. That's harder.

    • Like 3
    • Thanks 1
  14. 3 hours ago, Eowyn Southmoor said:

    Vehicles do not just "bounce off" banlines. Undoubtedly some might, but certainly many do not, so I wouldn't be making such a broad sweeping statement.

    "Bounce" is not the default for vehicles. It can be done by scripting in vehicles, but it's not common. My motorcycles do it. I have a cheap full perm "Beach Ball Ban Line Tester" on Marketplace, if you want to know how to script this. Throw it at a ban line and it will bounce off.

    It's hard to do this perfectly for a vehicle with avatars on board. If the avatar root hits the ban line before the vehicle root does, or object entry is allowed but avatar entry is not, the avatar gets ejected. This is a problem if you sideswipe a ban line. In the narrow waterways in north central Sansara, where jaggy ban lines extend into what looks like a public waterway, it's easy to lose a boat against a ban line.

    "Bounce" ought to be the default; then people would not have to understand the previous paragraph. There's an accepted JIRA to improve vehicle vs. ban line behavior, but LL has not acted on it.
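    For anyone curious how a scripted bounce works, the core is just reflecting the velocity across the ban line's normal. A sketch of the math in plain Python rather than LSL (in a script you'd apply the result with a physics call such as llSetVelocity when the parcel boundary is detected):

```python
def bounce(velocity, normal, restitution=0.8):
    """Reflect a velocity vector off a plane with unit normal `normal`.

    v' = v - (1 + e) * (v . n) * n, where e in [0, 1] is how bouncy
    the collision is (1.0 = perfectly elastic, like the beach ball).
    """
    vn = sum(v * n for v, n in zip(velocity, normal))
    if vn >= 0:                     # already moving away; leave it alone
        return list(velocity)
    return [v - (1 + restitution) * vn * n
            for v, n in zip(velocity, normal)]
```

    The hard part isn't this formula; it's detecting the ban line early enough that the vehicle root, not the avatar, is what hits it.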

    • Like 4
  15. Cool VL Viewer is doing very well here. I can't get Firestorm much above 10 FPS with comparable settings, even with shadows off. I should download Cool VL Viewer and try it myself, too.

    Here's the same scene in Sharpview. One region only. This discussion is all about what to do about distant regions, and I haven't implemented that yet.

    [Image: babbagesv0.png]

    New Babbage in Sharpview, single region only. 60 FPS. Shadows on. GPU is 43% busy. About 200m to the region boundary from this point.

    (Note the funny things sticking upwards out of the flying submarine atop the central tower. That's an old sculpt object, and it does something strange with sculpt coordinates that Sharpview doesn't emulate properly. I know of three such objects in world, all from the pre-mesh past of SL.)

    So that's where I am now. Resolution for near objects is fine. This discussion is all about what to draw in the distance that won't slow things down much. The goal is to be able to see distant landmarks so you have a sense of place. Nice for sailing, flying, and to a lesser extent driving. Or just walking around a city.

    It's possible to use more resources drawing the stuff that's barely visible than the close-up stuff. That's no good; you sacrifice local detail and responsiveness for a minor improvement in distant stuff.

    I've commented on GTA V's backgrounds. Turns out they are not flat images. There are custom-built low-poly models of each large area in that game. This is something that would be hard to do for SL, although not impossible. You can generate simplified meshes of entire regions. Someone had some of those on display at SL20B. I've made some myself. Those are just the SL map projected onto an elevation map. It's possible to do much better, but it comes close to copy-botting if it gets too good. So I've been planning on using flat images, which are permitted per the SL terms of service.

    • Like 3
  16. SL has its own special problems, of course. Some comments on those.

    1 hour ago, Qie Niangao said:

    I'm amazed people survive Mainland with such long draw distances. Whenever I crank up my cam to take a deep 360 or something, I have to spend the next five minutes derendering garbage floating in the sky.

    Admittedly, I barely skimmed the thread so I may have missed it: Does this envision a mechanism to "curate" the imposter images—and the 3D scene—to hide stuff like floating debris?

    That's a problem. One reason I use New Babbage and Bellisseria as examples is that both prohibit low-altitude sky junk. I posted a long draw range image a few weeks ago taken from SL's highest mountain, and what ought to be a beautiful vista looks awful.

    So I'm considering a sky junk filter. If it's not attached to the ground, it doesn't appear in impostor images. (Technical definition: "attached to the ground" means the object touches the terrain, or its bounding sphere intersects the bounding sphere of some other object that is attached to the ground. The definition is recursive: if there's a chain of objects down to the ground, it's not sky junk.) Tall towers will still show, but stuff just floating, no.
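    That recursive definition is ordinary graph reachability: flood-fill outward from everything that touches the terrain, following bounding-sphere intersections. A sketch (O(n²) pair tests for clarity; a real viewer would use its spatial index):

```python
from collections import deque

def grounded_objects(spheres, ground_z=0.0):
    """spheres: list of (center_xyz, radius) bounding spheres.

    Returns the set of indices reachable, via chains of intersecting
    bounding spheres, from any sphere that touches the ground plane.
    Everything not in the returned set is sky junk.
    """
    def touches_ground(center, radius):
        return center[2] - radius <= ground_z

    def intersects(a, b):
        (ca, ra), (cb, rb) = a, b
        d2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return d2 <= (ra + rb) ** 2

    grounded = {i for i, (c, r) in enumerate(spheres)
                if touches_ground(c, r)}
    queue = deque(grounded)
    while queue:                     # breadth-first flood fill
        i = queue.popleft()
        for j, s in enumerate(spheres):
            if j not in grounded and intersects(spheres[i], s):
                grounded.add(j)
                queue.append(j)
    return grounded
```

    A two-sphere tower chain stays grounded; a floating skybox far overhead does not.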

    1 hour ago, Henri Beauchamp said:

    (the well known mesh LODs issue in SL).

    Yes. A "crap LOD" detector is needed. A first cut is just to look at the triangle counts. If they look like High=2000, Lowest=2, that's a crap LOD item. I'm tempted to have Sharpview replace those with a single-color cube if distant, or push them up to High LOD if close. It's not a great solution, but it's something.
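    That first cut fits in a few lines. The thresholds below are guesses for illustration, not tuned values:

```python
def is_crap_lod(tri_counts, min_ratio=0.01, min_tris=8):
    """tri_counts: triangle counts per LOD, highest to lowest,
    e.g. [2000, 500, 40, 2].

    Flags models whose lowest LOD is degenerate relative to the
    highest. Small objects with few triangles overall are not
    flagged. Both thresholds are guesses, not tuned values.
    """
    high, lowest = tri_counts[0], tri_counts[-1]
    return lowest < min_tris and lowest < high * min_ratio
```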

    (In Firestorm, you can set "LOD factor" to 0, which shows everything at lowest LOD. Most large objects hold up OK. Some don't. Large buildings and trees are the worst offenders, because they blight a whole area. Try that, at least with your own stuff. Look for trees that turn into a bare trunk, and see-through buildings with loose triangles. If it's yours, please fix it. If you're a landlord, talk to your tenants. Thank you.)

    • Like 2
    • Thanks 3
  17. Let's look at some options for improving this:

    • Throw hardware at the problem. There are better GPUs. If you have to ask how much they cost, you can't afford them. Also, most of that compute power is going into drawing background objects you can barely see. This is not cost-effective.
    • Pre-render impostor images. This works if you can keep the viewpoint from getting too close to a flat image. That's what I've discussed above. We don't have to blur the impostor images. I've been doing that to keep image size down, but it's not essential. "Too close" is an issue. 256m looks pretty good. 128m is pushing it.
    • Identify distant hero objects and give them more graphics resources. This is common in games. The distant castle on the hill that's important to gameplay may be manually assigned more resources. In SL, the viewer has no idea what's important, but it can at least tell what's big. So the viewer might pick 5 to 10 distant but large linksets and do them at a higher LOD than usual.
    • Identify lesser objects and cut their resources down. If something is small and distant, it might just be rendered as a single-color mesh cube, scaled to the original dimensions. This works well for buildings. The color is the "1x1 texture", or what you get when you reduce the texture down to 1x1, which is a single color with an alpha value. The GPU can draw a huge number of little cubes without problems.
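    The "1x1 texture" is just the top mip level, i.e. the average texel. A sketch; whether to weight RGB by alpha is a judgment call, and this is one reasonable choice so that fully transparent texels don't tint the cube's color:

```python
def texture_1x1(pixels):
    """Reduce a texture to its 1x1 mip: the average RGBA color.

    pixels: list of (r, g, b, a) tuples, components 0-255.
    RGB is weighted by alpha; the returned alpha is the plain average.
    """
    total_a = sum(p[3] for p in pixels)
    if total_a == 0:                       # fully transparent texture
        return (0, 0, 0, 0)
    r = sum(p[0] * p[3] for p in pixels) / total_a
    g = sum(p[1] * p[3] for p in pixels) / total_a
    b = sum(p[2] * p[3] for p in pixels) / total_a
    a = total_a / len(pixels)
    return (round(r), round(g), round(b), round(a))
```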

    I often use Grand Theft Auto V as an example, because it's a very successful big-world game and they use all kinds of cheats to cram a big chunk of Los Angeles into a fast-moving game. So here's a close-up from the GTA V scene I posted above.

    [Image: gtablur1.jpg]

    A very close look at a GTA V background. This is from the same image posted above.

    Now, this is definitely blurred. But notice the nice hard edges on the large buildings. Those are "hero objects" that received special handling. They get parallax effects against the background as the viewpoint moves, which distracts from the fact that the background is flat.

    This is all standard game technology.

    • Thanks 3
  18. Good comments.

    I'm writing this from the viewpoint of a third party viewer developer who has to deal with existing SL content.

    First, looking down from that tower in New Babbage is one of the hardest cases in Second Life. There's a whole city down there, with much detail, and you can see most of it from the Albatross's docking tower. At street level, most objects are hidden (occluded) by buildings. So let's look at what current viewers can do.

    LOD factor 3, NVIDIA 3070 with 8 GB, 32 GB RAM, gigabit fiber networking. A good gamer PC, more than most SL users have. About US$2000.

    [Image: babbage128m.jpg]

    Where did everything go? Where's my glorious vista of the city? This is a 128m draw distance. 70 FPS.

    Looks bad from up here on the tower, but very playable. The standard New Babbage environment has been turned off for this test, so we see a rare sunny day in New Babbage.

    Let's try 256m.

    [Image: babbage256m.jpg]

    That's better. But we can't see City Hall, the big tower. 256m. 20 FPS.

    Frame rate has dropped to a barely acceptable level. We can see about two blocks. At ground level, 256m looks pretty good.

     

    [Image: babbage0.jpg]

    Let's see the whole city. 1024m draw distance. 4 FPS. Minutes of loading time.

    This is unplayable, but looks great. The usual problem of staged SL photography.

    So these are the options we have right now. Can we do better?

     

     

    • Like 4
    • Haha 1
  19. I'm toying with some schemes to compensate for bad lower levels of detail. Maybe push the LOD up for a small number of large distant objects, so if there's some big hero object that you really should see in the distance, it will show up. Draw medium-sized distant objects as their bounding cube with their average color, then blur a bit. The goal is to never have the world drop off into nothingness when there's more world out there.
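    A back-of-envelope version of that selection (all thresholds and names made up for illustration): rank objects by apparent size — bounding radius over distance — promote the top few to full LOD, draw a middle band as bounding cubes, and skip the rest.

```python
def classify_distant(objects, camera, hero_count=2, cube_min=0.002):
    """objects: list of (position_xyz, bounding_radius).

    Labels each object 'hero' (render at full LOD), 'cube' (render as
    a single-color bounding cube), or 'skip' (too small to matter).
    Apparent size = radius / distance. Thresholds are made up.
    """
    def apparent(pos, r):
        d = max(sum((p - c) ** 2 for p, c in zip(pos, camera)) ** 0.5,
                1e-6)
        return r / d

    sizes = [apparent(pos, r) for pos, r in objects]
    hero = set(sorted(range(len(objects)),
                      key=lambda i: -sizes[i])[:hero_count])
    return ['hero' if i in hero
            else 'cube' if sizes[i] >= cube_min
            else 'skip'
            for i in range(len(objects))]
```

    So a distant castle-sized linkset gets promoted, a house becomes a colored cube, and lawn ornaments vanish until you get closer.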

    These are just ideas at this point, not code in Sharpview. Comments?

  20. More tests of what this approach might look like. This is just manual editing of SL screenshots from Firestorm.

    [Image: bigby0.jpg]

    Bellisseria, a street of Traditionals. 256m draw distance.

    [Image: bigby2.jpg]

    Same picture, but everything beyond 128m has been blurred. If you didn't have the other picture to compare with, would you notice?

    This is the worst case for those region impostor images I described above. The idea is that the viewer would show the four nearest regions, then show flat images of areas further away. If you have enough GPU, you might get 9 regions, with real stuff drawn out to at least 256m.
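    The region-selection logic itself is simple grid math. A sketch, assuming 256 m regions and a budget of fully rendered regions:

```python
REGION_SIZE = 256  # meters; SL regions are 256 x 256

def plan_regions(camera_xy, regions, full_budget=4):
    """camera_xy: world position (x, y). regions: list of (grid_x, grid_y).

    Returns {region: 'full' | 'impostor'}: the `full_budget` regions
    nearest the camera get real geometry, the rest get flat images.
    """
    def dist(gx, gy):
        cx = gx * REGION_SIZE + REGION_SIZE / 2   # region center
        cy = gy * REGION_SIZE + REGION_SIZE / 2
        return ((cx - camera_xy[0]) ** 2 + (cy - camera_xy[1]) ** 2) ** 0.5

    by_dist = sorted(regions, key=lambda r: dist(*r))
    return {r: ('full' if i < full_budget else 'impostor')
            for i, r in enumerate(by_dist)}
```

    Raise `full_budget` to 9 on a strong GPU and the impostor boundary moves out past 256m, as described above.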

    Next, an extreme case.

    [Image: babbage0.jpg]

    New Babbage, from the clacks tower in Babbage Palisade. Draw distance 1024m, LOD factor 3, two minutes to load and 5 FPS on a powerful computer. We just can't draw that much at full speed.

    Midday lighting so we can see the place. The default environment in New Babbage is smoky, and you can't see this far.

    [Image: babbage2.jpg]

    Same image, but everything further than 128m has been blurred. This is what I'm shooting for as a goal. See entire cities, a bit blurred in the distance.

    Compare with the GTA V pictures above. Same concept.

    • Like 1