Posts posted by animats

  1. 3 hours ago, AnitaLaDodyx said:

    And yes, animats, that's exactly the issue. As far as your solution, I'm not 100% sure how to accomplish that, so if you could expand on it a bit, I'd appreciate it and try it out.

    You need a "car sit animation", which you can find on Marketplace. Once you've tried one of those, and seen where it puts you, it will be clear how you need to adjust the sit target.

  2. Scripts in Second Life don't just run. They get bundled into objects, which can be rezzed, taken back into inventory, copied, and moved from one sim to another. The state of the script survives all that. It's rather clever, and somewhat different from normal program behavior.

  3. If you want to create content for Second Life, you need to create a COLLADA file. This can be done in Blender or Maya. Instructions start here.

    Second Life has its own rendering system. It uses OpenGL. It is not based on Unity or Unreal Engine or any other game engine. It's not even close.

    The source code for the Linden Lab viewer is here. The source code for the Firestorm viewer is here.

  4. 19 hours ago, AnitaLaDodyx said:

    It appears my hitbox is too low, hitting the ground and pushing the vehicle up. 

    That may be the legs of the avatar. A seated avatar still has the same collision outline as a standing one.

    The usual fix for this is to have a seated animation that offsets the root -1m in Z. Then you offset the seat position in the vehicle by +1m in Z to compensate.
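    The arithmetic is worth spelling out. A minimal sketch in Python (the function name and the 1m figure come from the example above; this is not SL API):

```python
# Sketch of the compensation described above: the sit animation shifts
# the avatar root by ANIM_Z_OFFSET, so the vehicle's sit target must be
# shifted by the opposite amount to put the avatar back on the seat.

ANIM_Z_OFFSET = -1.0  # metres; the "car sit" animation drops the root 1 m

def compensated_sit_target(seat_z):
    """Z component of the sit target that cancels the animation offset."""
    return seat_z - ANIM_Z_OFFSET  # -(-1.0) = +1.0 m upward

print(compensated_sit_target(0.5))  # seat 0.5 m up -> sit target at 1.5
```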

  5. 31 minutes ago, Ardy Lay said:

     I simply wanted the PROTOCOLS clearly documented and authentication and authorization and content licensing intent to be implemented in such a way as to allow products of creativity to be moved from system to system where it complies with the content license for that product. 

    Having re-implemented most of SL's protocols in Rust, protocols are not the problem. There's a fair amount of documentation, and you can look at both the viewer code and Open Simulator. (Here's my re-implementation of Linden Lab Structured Data in Rust if anybody cares.) The main problem is that they're all non-standard.

    Asset portability is an interesting concept. I previously described how SL no-copy objects could be tied to non-fungible tokens so that you could move objects between SL and other grids. I personally don't want to do that, because most NFTs are Make Money Fast schemes, and most NFT content is crap. It's technically possible, though.

    To users, it would be a vendor system, like CasperVend. You buy an object from a vendor, and it's delivered to you, no-copy, no-mod. You also get an event recorded on a blockchain that says Object A belongs to user B and is currently on grid C. If you want to move it to another grid, you can take it to a transfer portal in-world and send it to yourself on another grid.

    Objects set up this way would have scripts which check, on rez and once a day or so, if they're authorized for the grid they're currently rezzed on. If not, they send a message to the owner and self-delete.

    For creators, it means keeping a copy of the object in a dropbox on each grid where you allow sales.

    Users active on more than one grid would need a crypto wallet to track their NFTs.

    None of this requires LL involvement.
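    Under those assumptions, the per-rez check could be sketched like this (plain Python; the ledger record format and all names here are invented for illustration):

```python
# Hypothetical sketch of the on-rez authorization check described above.
# The "blockchain" is modelled as a dict of ownership records:
#   object id -> (owner, authorized grid).

LEDGER = {
    "object-A": ("user-B", "grid-C"),
}

def check_authorization(object_id, current_grid):
    """True if the object is authorized for the grid it's rezzed on."""
    record = LEDGER.get(object_id)
    if record is None:
        return False          # no ownership record at all
    _owner, authorized_grid = record
    return authorized_grid == current_grid

def on_rez(object_id, current_grid):
    """What the object's script would do on rez (and on a daily timer)."""
    if check_authorization(object_id, current_grid):
        return "ok"
    return "notify owner and self-delete"
```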

    It's worth thinking about as major brands get into NFTs.

    Own a licensed Gucci item in Roblox for about US $800. Impress your shallow friends.

    We could even have a HUD which allows you to click on other people's stuff and check if it's a properly licensed overpriced copy. It would call out to a blockchain node and check for the record that says A owns B on grid C. There could be orbs in clubs to eject phonies.

    I'm not personally interested in doing this, not being into branded merchandise. If someone wants to do it, go for it.

  6. 5 hours ago, ChinRey said:

    The two most noticeable advantages SL has over Roblox are A rated content and fancier graphics

    Roblox is catching up to SL on fancier graphics.


    Roblox, current generation graphics.

    Here's their user interface for avatar developers.

    Their triangle limits are 10,000 tris per mesh part, and 50,000 tris per avatar. For special events, they may be allowing performers to have more. Note the detailed avatar on stage in the lower right above.

    Roblox just had their developer conference. Their road map to the metaverse is now clearer.

  7. I've been using the GLTF test file set from Khronos for rendering tests. But those are mostly tests of one PBR feature at a time.

    The real problem SL faces is that nobody else tries to mesh-reduce down to the 10 to 50 triangle range. Other systems usually drop to impostors rather than to ultra-low-poly models. So mesh reduction algorithms aren't designed for that. Most examples are taking 50,000 triangles down to 5,000 triangles, not 500 to 50. That's why a mesh reduction algorithm that degenerates to a rough outline impostor is promising. It will look awful in close up, but might look OK at distance.

    If we can just get the lowest LOD to be a rough outline with an approximately correct low-rez texture on it, we'd be ahead of where we are now. You can get the renderer to apply a bit of blur to low-LOD objects. A blurry but otherwise correct background is visually acceptable. Go look at some AAA video game, and you'll see a lot of distant blur. Large holes in distant buildings, though, are just not acceptable.

    So, what I'm saying here is that the mesh reducer in test in a project viewer needs to be brought up to where UE4's mesh reducer is now. Everything I'm talking about here is existing modern video game technology. SL just needs to catch up. LL has some young graphics Lindens who get this stuff.

    Summary for non-technical users: the goal here is to stop distant buildings having holes and missing walls, and distant cars having missing body panels.

  8. On 10/9/2021 at 11:01 PM, animats said:

    I've been talking about "silhouette protection", as Unreal Engine does it.

    I'm trying to find open source code for that. I've found a paper on extracting silhouettes from mesh, out of the CS department at Harvard. The example used is the bunny, a model which shows up in many academic papers. People used to use the Utah teapot, but that became too much of a cliche. Not sure how robust this method is on messy 2D meshes.

  9. 23 minutes ago, LoneWolfiNTj said:

    Interesting. I've never tried running pathfinding indoors. Hmmm. Let me try. Ok, surprisingly, even though my house is now a "movable obstacle", a pathfinding cat can move around in it. Sorta. Except, after the first few minutes, he gravitates to the stairwell and starts head-butting the south stone wall. No error messages. No path_update events at all. Just standing there head-butting the wall. Weird.

    That's about typical. You need timers and recovery to get pathfinding to work at all in difficult situations. It can be done, but not well.

    Standard SL pathfinding with automatic recovery when stuck. Note the messages in the top window as stall recovery keeps kicking in. When a timer indicates no progress with Pursue, it tries short wanders or short forced moves to try to get unstuck. Built-in pathfinding is awfully dumb about dealing with problems. It's another one of those SL features that got to 80% complete, and then work stopped.
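    A minimal sketch of that kind of stall recovery, assuming a timer that fires periodically and a measured distance to the goal (the structure here is mine, not the actual script):

```python
# Sketch of timer-driven stall recovery: each tick compares distance
# to goal; if there's no progress for STALL_TICKS consecutive ticks,
# try the next recovery action (short wander, short forced move, ...).

STALL_TICKS = 3
RECOVERIES = ["short wander", "short forced move"]

class StallRecovery:
    def __init__(self):
        self.best = float("inf")   # best distance-to-goal seen so far
        self.stalled = 0           # consecutive ticks without progress
        self.attempt = 0           # which recovery to try next

    def tick(self, dist_to_goal):
        """Call on each timer event. Returns a recovery action or None."""
        if dist_to_goal < self.best - 0.1:      # made real progress
            self.best = dist_to_goal
            self.stalled = 0
            return None
        self.stalled += 1
        if self.stalled < STALL_TICKS:
            return None
        self.stalled = 0                        # reset and escalate
        action = RECOVERIES[self.attempt % len(RECOVERIES)]
        self.attempt += 1
        return action
```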

    Frustration with this led me to develop my own system, different from what's shown above. It works better, but the cram job to fit it into 14 or so intercommunicating 64K scripts was a huge pain.

  10. This whole episode illustrates why I developed mobile NPCs, but don't sell them.

    Someone with land rights has to set up the parcel. Most parcel owners do not know how to do this. So someone who does has to go to their parcel and talk them through the setup process, or become a member of their land group so the setup person can do it. There are too many special cases like this where things go wrong and there are tough problems to figure out. It's too time-consuming for a product.

    If you want to do this, you need a support organization, like the Virtual Kennel Club's "certified trainers". You need to give classes. You need to check back for the first few days to make sure the NPC is working properly.

    I have a JIRA filed on how to make setup easier. Accepted one year ago this week.

    So I mostly sell escalators, which, once installed, Just Work.

  11. 33 minutes ago, LoneWolfiNTj said:

    I'm not convinced the problem I was having in Rancourt was a navmesh issue. The navmesh seemed to be penetrating the fence and slicing through just a few inches under the topsoil, and yet pathfinding wasn't working. Wandering objects avoid obstacles. So if a wandering object sees "distance to nearest obstacle" as being 0 in every direction, it's going to say "PU_FAILURE_NO_VALID_DESTINATION", even if its connection to the navmesh is perfect.

    But of course, I may be wrong. Would making a "hollow" fence "static obstacle" make pathfinding work? Well, I'm not going to tear down the carefully-aligned fence on my parcel 2 more times to find out. But wait, I can use a sandbox. Just a sec... checking that pathfinding works at Sandbox Island (124, 83, 26)... CHECK. Checking that putting the "hollow" fence around my Wanderer destroys pathfinding... CHECK. Changed fence from "movable obstacle, concave" to "static obstacle" and rebaked navmesh; does that fix the problem? YES!!! Wow, I didn't expect that to work. Interesting!!!

    I'm curious: what was your line of reasoning that led you to suggest "static obstacle"? It worked, but I'm not understanding why.

    If you make something a "static obstacle" it goes into the static navmesh, and you can see it. If the static obstacle system knows about an obstacle, pathfinding can deal with it. Movable obstacles are dealt with dynamically, and that part of the system isn't as good.

    Static obstacles cut holes in the navmesh, as you can see by viewing the navmesh. They have to touch, and sometimes go a bit into, the walkable surface.

    Incidentally, any part of a walkable surface steeper than about 65 degrees becomes a static obstacle. So you never walk up vertical surfaces.
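    The slope test is simple geometry; a sketch in Python (the 65-degree figure is the approximate threshold quoted above, not an exact SL constant):

```python
import math

# Sketch of the slope rule: a surface steeper than ~65 degrees from
# horizontal is treated as a static obstacle rather than walkable.

MAX_WALKABLE_DEG = 65.0

def is_walkable(normal):
    """normal: unit surface normal (x, y, z). Walkable if the surface
    tilts less than MAX_WALKABLE_DEG from horizontal."""
    nx, ny, nz = normal
    # Angle between the normal and straight up equals the surface slope.
    slope = math.degrees(math.acos(max(-1.0, min(1.0, nz))))
    return slope <= MAX_WALKABLE_DEG

print(is_walkable((0.0, 0.0, 1.0)))   # flat ground -> True
print(is_walkable((1.0, 0.0, 0.0)))   # vertical wall -> False
```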

    Pathfinding is composed of two different systems working together. The static navmesh part, including llGetStaticPath, is reasonably good, but the movement control and movable obstacle part is less good. It has to work in real time, and really isn't that smart. The higher the script load on the sim, the worse the movement gets. Basically, pathfinding gives objects a heading and speed, which they follow until the next pathfinding update. Under heavy load, there are fewer updates and they tend to bump into things.

  12. Second Life has several different systems for moving objects. There are ones that use the physics system, there's keyframe animation, there's direct positioning (llSetPos, etc.) and there's pathfinding. They're not interlocked. You can only use one system at a time. Allow a second or two between changing from one system to another. llMoveToTarget and llSetTargetOmega belong to different systems.

    llLookAt and llRotLookAt belong to the same family as llMoveToTarget, and can be used together.

    llTargetOmega is interesting. It's usually just a visual effect of rotation, done in the viewer. Selecting the object will make it stop rotating for you, but not for other viewers. It's for wheels, windmills, and spinning decorative objects, where you want rotation but don't care about actual position.

  13. 1 hour ago, LoneWolfiNTj said:

    So an object inside the fenced parcel is literally inside a hollowed-out part of a "Box" object.

    Oh, so maybe the pathfinding system doesn't recognize box hollows. That wouldn't surprise me. It's basically building a very low rez model of the sim, and giant box hollows are rare. There are other things it doesn't handle, such as walkables no bigger than 10m x 10m x 0.2m.

    Worth a try - make the fence a Static Obstacle, and look at the navmesh to see if the box hollow was recognized properly.

  14. 8 hours ago, LoneWolfiNTj said:

    Ah-ha! I think I see the problem! It was a 1-prim fence, a single object wrapping the property on 3 sides. So a pathfinding object trying to navigate the area would literally find itself "surrounded by" or "inside" an object and hence think that "all points are unreachable".

    Ah! That probably means the fence object has a bogus physics model which is confusing pathfinding.

  15. I've been talking about "silhouette protection", as Unreal Engine does it. It's possible to do that in Blender, by hand. Here's what it looks like.

    Here's a simple but hard test case - a thin sheet with a bump in it.


    Flat sheet. 480 triangles. Created by stretching a cube to 10m x 10m x 0.1m, then subdividing the big faces x10.


    Bad mesh reduction with Blender's decimate. 14 triangles. If you use that algorithm with extreme reduction, this is what happens. Note that the outer edges of the sheet have been pulled in. This is why extremely low-poly LODs look so awful.


    Silhouette protection, done by hand. Only the selected vertices and edges can be mesh-reduced. The outer edges, both vertically and horizontally, are exempted. Notice that the inward side of the bump is not protected, because it doesn't extend the silhouette.


    Next, decimate as hard as we can, with the outer edges locked. Not too bad.


    Final result, without the edges showing. 52 triangles. That's as far as you can go with Blender's decimate.

    With a bit more hand work, you can get down to 32 with only minor changes to the silhouette. So that's where mesh reduction should stop.

    You can play with this in Blender, using the Decimate command, followed by Limited Dissolve to merge planar triangles.

    So that's what I was talking about. Mesh reduction needs to preserve the silhouette of the object, so things don't blank out at distance.
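    The "lock the outer edges" step can be automated for open meshes: boundary edges are exactly the edges used by only one triangle. A rough sketch (my own Python, not Blender's code; a real reducer would also protect sharp silhouette creases, not just open borders):

```python
from collections import Counter

# Find a mesh's boundary edges (edges used by exactly one triangle)
# and lock their vertices so a decimator may not move them.

def locked_vertices(triangles):
    """triangles: list of (i, j, k) vertex-index triples.
    Returns the set of vertex indices on boundary edges."""
    edge_count = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[frozenset((a, b))] += 1
    locked = set()
    for edge, n in edge_count.items():
        if n == 1:               # edge belongs to exactly one triangle
            locked |= set(edge)  # its endpoints must not be collapsed
    return locked

# Two triangles forming a quad: every outer edge is a boundary,
# so all four corners end up locked.
print(sorted(locked_vertices([(0, 1, 2), (0, 2, 3)])))  # [0, 1, 2, 3]
```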

  16. That may be related. Sometimes, in hilly terrain, the navmesh can be as far as 2m from the ground surface. That happens over in Hippotropolis. My NPCs have to correct for this. I wasn't expecting to see that on a parcel that's basically flat, but maybe the road edges are contributing to the problem.

    Definitely file a JIRA.

  17. I went over to his parcel, and I can't figure out what's wrong.

    Pathfinding objects work fine on the adjacent parcels, but not his. I brought over my "Come Here" tester. On his parcel, it makes a short move, then says it can't find the navmesh. Works fine on both adjacent parcels. Even on bare ground, which is always walkable, pathfinding won't work there.

    I looked at the navmesh with the viewer, and it looked fine.

    I looked at the objects with my "Amulet of Pathfinding", which does a ray cast and reports all the objects in the cast direction until it finds a walkable. There's no invisible object on top of the ground interfering with ground contact.

    I even tried one of my NPCs. Those move with keyframe animation, but sense using llGetStaticPath and llCastRay. They work OK, and are able to sense the navmesh. So the navmesh is good, but the part of pathfinding that uses it is being difficult.

    The only thing that was unusual was that the parcel was really close to the prim limit. When you turn on pathfinding in an object, it pushes up the LI to at least 15. With only 9 LI left on the parcel, turning on pathfinding hit the limit. I wonder if hitting that limit disables pathfinding for the entire parcel for a while.

  18. The inventor of VRML has a new article: Metaverses, the Third Wave. He has this to say about Second Life:

    "VRML ultimately fizzled out by the late 1990’s. It was too early in terms of commercial adoption. It was also a matter of too much too soon, as the world was still coming to grips with the basics of the Web. However, our work inspired others, most notably Philip Rosedale, who left Internet video pioneer Real Networks to create Second Life, arguably the first-ever fully working Metaverse system: a 3D-rendered virtual universe on the Internet. Not open; not at billions-scale; but a good start. Most of the tech of this time ultimately crashed and burned, but Second Life survives to this day with a thriving community."

    His summary of the current situation:

    "It’s more than tech herd mentality; we are getting substantive signals that it may finally be Metaverse time."

    • "The suits are circling. In the wake of Facebook’s new positioning as a Metaverse company, execs at a multitude of tech and media outfits have put it front and center of their strategy, or are at least saying they have a Metaverse strategy. ..."
    • "The pundits are pontificating. A new class of self-appointed “experts” are jockeying for position as thought leaders. ..."
    • "The kids are creating. Most significantly, the creator class in an ascendant economy is making all kinds of Metaverse stuff: interactive 3D content in Fortnite, Roblox and VRChat, NFTs on myriad platforms, and open and decentralized worlds in Decentraland, to name a few. This is important, because otherwise the two previous points could just as well suggest a hype bubble."

    I agree with that last. The hype to code ratio is very high in this space. Which is why I keep trying to push SL to move forward, rather than hopping on some other vaporware project. SL has the metaverse working; it's just too slow and glitchy to go mainstream.

  19. 8 hours ago, ChinRey said:

    But this isn't really automatic LoD generation, Animats. It's more semi-automatic with a large number of parameters defined by the user/creator.

    Also, the reduction rate they achieve in the video is rather low.

    True, although most people probably use the default settings. The Unity mesh reducer has fewer user tuning parameters but can do a roughly comparable job. This reflects the target market. Unreal is used for big-budget AAA game titles, where people obsess over visual quality. Unity is used for second-tier projects and is easier to use but not quite as good visually.

    So here's the Unity "just push the button to create LODs" system.

    Unity's LOD system. Very easy user interface.

    The SL mesh uploader should work roughly like this.

    How much to reduce is another issue, closely tied to the LI computation, which is highly controversial. The current LI charging rewards very low lowest LODs too much.

    Avatars have a different set of problems. A near-term solution is to use the existing avatar impostoring system. The higher the complexity of your avatar, the closer the distance at which it drops to avatar impostor mode. To see what that would look like, set "number of non-impostor avatars" to 3 or so in your viewer. Then go someplace with lots of avatars.
