Posts posted by animats

  1. We need a 30 second delay on the LSL calls for ejection.

    Yesterday, I flew from the northern tip of Satori to Heterocera via Bellessaria: twelve threats from security orbs, three of them covenant-violating ones over Belli.

    The worst problem was someone who had a solid 256x256x1 mega-prim surround with pictures of clouds on it. I didn't realize it was an obstacle, hit it, and got stuck.

    Suggested design feature for security orbs: check the horizontal velocity of the avatar. If they will be out of your area within 10-30 seconds, don't bother them.
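
    A minimal sketch of that check (Python, with illustrative names; a real orb would do this in LSL, reading the avatar's velocity via something like llGetObjectDetails):

```python
import math

def time_to_exit(pos, vel, pmin, pmax):
    """Seconds until an avatar at horizontal position pos (x, y), moving
    at horizontal velocity vel, crosses the parcel's axis-aligned bounds
    pmin..pmax. Returns infinity for a stationary or hovering avatar."""
    times = []
    for axis in (0, 1):
        if vel[axis] > 0:
            times.append((pmax[axis] - pos[axis]) / vel[axis])
        elif vel[axis] < 0:
            times.append((pmin[axis] - pos[axis]) / vel[axis])
    return min(times) if times else math.inf

def should_eject(pos, vel, pmin, pmax, grace=30.0):
    """Don't bother avatars who will be gone within the grace period."""
    return time_to_exit(pos, vel, pmin, pmax) > grace
```

    An aircraft crossing a 256m region at a typical 15 m/s exits in well under 30 seconds and would be left alone; someone loitering would still get ejected.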

  2. 2 hours ago, Coffee Pancake said:

    The SL viewer presents itself as a relic, it's very easy to look at how little the UI has advanced and assume that's the viewer.

    If Animats puts the stock Linden UI over the top of his Vulkan render pipeline, it's going to be in the same boat. It might run better, but that alone isn't sufficient for it to be perceived as a major advancement by non-technical end users.

    Agreed.  Nor do I have any major plans in that direction. I'm just addressing the performance issues.

    Mobile is tough. SL on the small screen is just too small. The amount of compute and network bandwidth required is high. You'd need a really high end data plan and a high end phone. SL with cloud rendering is quite feasible now, but you'd be paying maybe $30 a month for server time. And the UI is totally unsuited to mobile.

    VR is tough. SL movement in VR would make people nauseous. This is a generic problem with VR in big worlds. VR works best in a very constrained space, like VRChat or Beat Saber.

  3. Why is some landlord doing this on a large scale? If you have enough of those things to fill a region, you can rent your own isolated region for this sort of thing.

    Thinking back, the ones in the picture I posted appeared when new regions were unavailable because the old Linden Lab data center was full and the transition to AWS hadn't happened yet. Now that new region sales are working again, there's no excuse for this sort of thing.

    Does anyone have a "move build to another location" tool that works?

  4. 5 hours ago, Scylla Rhiadra said:

    Or it would confirm their preconceptions that SL is mostly about sex.

    So what? The stated position should be "It's a virtual world that mimics the real world. Of course it has sex".

    (Roblox won't even let users hold hands. The word "gay" is banned in Roblox.)

    5 hours ago, Nick0678 said:

    Facebook's Meta is the "real news".... Lets be honest realistically speaking they don't give a f... about SL and the 90.000 concurrents that it had back in 2009.

    Facebook is very evasive about the number of users that Facebook Spaces had, or Facebook Horizon has. It may be a very small number.

  5. There's formal training available on game user interface design.

    https://gdconf.com/masterclass/psychology-game-ux

    "Experiencing a game happens in the player's mind. This is why understanding the human brain (especially its limitations) while it is perceiving and interacting with a game is paramount to accomplish faster and more efficiently your developers' goals. This workshop proposes to delve into how the human brain works in terms of perception, attention, and memory (critical elements for usability), and offers insights on human motivation, emotion, and gameflow (critical elements for engaging games). Based on these elements, the workshop proposes a UX framework and UX guidelines during the different game development stages."

    The instructor has a PhD in psychology, was in charge of the user interface for Fortnite, and has published two books on the subject. So this is someone who knows what works.

    Perhaps Linden Lab could send someone to take this online course.

  6. 19 hours ago, Flea Yatsenko said:

    I think a long term goal would be to have viewers share optimized bulk assets instead of just downloading individual assets

    I've thought about viewer cooperation to boost performance, but it has griefing problems. Never trust the client. There are also terms of service problems.

    An impostor server, to create the illusion of very large draw distance, is an option.

    This would store pictures of entire regions. The TOS allows you to take pictures of SL and redistribute them. Each region would have pictures from at least 8 horizontal directions, plus straight down. Beyond draw distance, you'd be shown pictures of adjacent regions, positioned like billboards at the edge of the region. If you're flying, you'd see mostly the straight down images, like flying over a map. Distant images would be blurred a bit, as if there was a bit of haze. This is roughly what Google Earth and GTA-type games do to show you a big world in 3D.
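
    A sketch of how a viewer might choose which of the eight horizontal photos to billboard for a distant region (Python; the function and the image-layout convention are assumptions for illustration, not any real API):

```python
import math

def impostor_image_index(cam_xy, region_center_xy, n_images=8):
    """Pick which of n_images horizontal impostor photos of a distant
    region to billboard, from the bearing between the camera and the
    region center. By this convention, index 0 is the photo seen by a
    camera due west of the region, with indices counter-clockwise."""
    dx = region_center_xy[0] - cam_xy[0]
    dy = region_center_xy[1] - cam_xy[1]
    bearing = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / n_images
    # Round to the nearest sector so the photo taken from the camera's
    # side of the region is the one shown.
    return int((bearing + sector / 2) // sector) % n_images
```

    As the camera moves around a region, the billboard swaps between adjacent photos; blurring the distant images slightly, as described above, hides the transitions.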

    With that, users would have more of a sense of place. If there are mountains to the north, you'd be able to see them, and have a sense of direction. When boating, you'd be able to see distant shorelines. When flying, you'd be able to see distant terrain and airport runways, as if you were flying over the map.

    This takes a few tricks to deal with SL content. The picture-taker would only cover maybe the first 128m above ground, so as not to show sky junk for miles around. The downward pictures would be taken from a height just above the highest object with a ground connection. It would be like "terrain view" in Google Maps, vs. "map view".

    The pictures would be taken by a bot that photographs the world once a week or so. Mainland only, plus some of the larger multi-sim regions. For a single region, it's unnecessary, of course.

    This is just a concept for now. I might take some pictures of New Babbage's 14 sims as a demo. A real version would require LL support, or support from other large Open Simulator grids.

     I used to have a little "impostor garden" in Vallone, where visitors could compare 3D objects and impostors. From 20m away, the impostors looked pretty good. For this, I'm talking about impostors of objects at least 100m away.

    Here's GTA V:


    Use of distant impostors in GTA V. Second Life should look this good.

    This is something LL could do. It's not that difficult. You get a huge improvement in visual quality for a low investment.

    It would show off that SL is a really big world, unlike those dinky Facebook worlds.

  7. 39 minutes ago, Prokofy Neva said:

    Provide Jobs to Newbies

    That may not mean "jobs" as an income source, but just tasks to do. If you spend any time at the new user areas, you find that the two big questions are "What do I do now?" and "How do I fix this &$*@! clothing problem?" One of the helpers at Caledon, who tends to sit on the benches near the entrance, says that many new users think he's a quest giver.

    LL has tried, with things like Linden Realms, but they're not that interesting.

    Part of the problem is that many of the interesting things to do require skills users won't have on day one. For example, sending users on a quest across mainland using the Drivers of SL HUD, which gives turn-by-turn directions. That HUD allows for making stops and doing various tasks at stops, so it's really a full quest system. But there are so many things that can go wrong on a road trip in SL that it would be tough on new users. And all the business of attaching a HUD and rezzing a vehicle is a pain. With a grid-wide experience, though, it could be set up so that a new user just walks through a portal to start the quest.

     

  8. 14 minutes ago, Flea Yatsenko said:

    You could do a lot of optimization if you were allowed to make assumptions that assets in SL wouldn't change and they could be bundled/compiled into something more efficient to handle.

    That's what makes Unreal Engine work. Vast amounts of prep and optimization during level building. If you can never see area A from area B, you never need to draw A when in B. Or even load it. That kind of thing.
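
    A toy illustration of that idea, with a hand-written visibility table standing in for what an Unreal-style level build step would precompute (the area names are made up):

```python
# Precomputed potentially-visible sets: if area B can never be seen from
# area A, the engine neither draws nor loads B while the camera is in A.
# A level build step would fill this table offline; it's hand-written here.
PVS = {
    "lobby":   {"lobby", "hallway"},
    "hallway": {"hallway", "lobby", "vault"},
    "vault":   {"vault", "hallway"},
}

def areas_to_load(current_area):
    """Only the areas potentially visible from here need to be resident."""
    return PVS[current_area]
```

    SL can't do this prep, because content changes at any time and any parcel can usually be seen from its neighbors.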

  9. 54 minutes ago, Coffee Pancake said:

    A replacement, fully open source, documented with blender & Maya dev kits, commercially viable (ie, not junk, something creators will make content for), mesh body and head made by LL and mapped to all the body sliders is desperately needed.

    Roblox has just developed new technology to make clothing Just Work.


    In the new Roblox system, the clothing is automatically adapted to fit the avatar. This is very new, and still in test.

     


    Next generation of Roblox clothing on new-style avatars. It's not just blocks any more.

    Their in-house designers like a somewhat anime style. That's not built into the system. Creators can create and sell third-party clothing.

    (Roblox has an average user age of 13, remember. Clothing has to have more coverage over there.)

    So, it can be done, and has been done.

     

  10. 8 hours ago, Jackson Redstar said:

    in a Nutshell, it seems - Sansar. Each region is downloaded to local before you get there, so no more lag waiting for textures no matter where you cam to

    That's not how this works. That's an illusion. The viewer is in a race with the user to load content before the user gets close enough to see through the illusion. High-resolution textures are constantly being loaded as the camera moves around, and distant textures are being dropped to lower resolution to keep the frame rate up. There's a lot going on behind the scenes to create the illusion that nothing is happening. Pay no attention to the man behind the curtain.

    Here's another example, from an earlier, lower performance version of the system. This shows what it looks like when textures are not in local cache. This is a tour of a shopping event. Booth after booth of high-detail objects. You'll see some small or distant grey objects that haven't loaded yet, and some objects with fuzzy, low detail textures. As the camera gets close, the grey objects turn to textures, and fuzzy textures go to higher resolution. You never see a fuzzy texture in close-up for more than a second or two. The shopping experience improves. You can see the merchandise.
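
    The resolution decision can be sketched like this (Python; the parameter names and the roughly-one-texel-per-screen-pixel threshold are my assumptions, not the viewer's actual code):

```python
import math

def target_resolution(full_res, texel_world_size, distance,
                      fov_y=1.0, screen_h=1080):
    """Pick the texture resolution worth keeping resident for an object
    at the given distance. A texture needs roughly one texel per screen
    pixel; anything finer is wasted memory and load time.
    full_res: native texture size, e.g. 2048
    texel_world_size: world-space size of one texel at full resolution
    fov_y: vertical field of view in radians; screen_h: screen height, px
    """
    # Screen pixels covered by one full-resolution texel at this distance.
    pixels_per_texel = (texel_world_size /
                        (2 * distance * math.tan(fov_y / 2))) * screen_h
    res = full_res
    # Halving the resolution doubles the effective texel size on screen.
    while res > 1 and pixels_per_texel < 0.5:
        res //= 2
        pixels_per_texel *= 2
    return res
```

    The discarded detail is reloaded from cache as the camera closes in, which is the race with the user described above.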

    Next problem is getting rid of grey. What I'd like to do there is have a per-region list of objects, faces, and average colors. Download that and use it as the initial color for each object. Here's a test of what that would look like.


    Mono-color mode. No textures, just colors. The object color is simply the texture reduced to 1x1 size, which is a single color.

    The plan is to show something like this when you first enter a new area for which textures are not yet available. The amount of data needed is small, so this info can be cached for a very large number of regions. Texture loading quickly follows, starting, of course, with the objects occupying the most screen space.
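
    The placeholder color computation itself is trivial; a sketch (in practice a viewer would read the texture's lowest mip rather than averaging raw pixels):

```python
def average_color(pixels):
    """Mono-color mode: an object's placeholder color is its texture
    reduced to 1x1, i.e. the mean of all texels.
    pixels: a non-empty list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))
```

    These per-face averages are what the per-region object list would carry, a few bytes per face instead of megabytes of texture.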

    No more annoying, immersion-breaking grey. You can see where you are. You can see where you're going.

    There is no fundamental obstacle to Second Life / Open Simulator looking like an AAA game title.

  11. This is just the rendering part at this point. Log in and cam. It's a long way from a full viewer.

    I don't want to get end users too excited about this. Many viewer devs, both inside and outside Linden Lab, know what I'm up to. After a year of work, and some good results, I decided to put out a progress report.

    This is partly in response to recent comments about excessively large textures. Those are not a real problem if you don't load all those pixels until you really need them. If you get close enough to a 2048x2048 texture to fill your screen, it will get loaded, but not otherwise.

    Some of what I've done here could be done in the LL or Firestorm viewers. They have a texture loading priority system, left over from Project Interesting, but it was never finished.

    All this applies to non-avatar objects. Avatars have their own rendering problems, and I have not looked hard at those yet.

  12. I've mentioned occasionally that I'm working on a new viewer. Here's some video from an early test version.

    Second Life at full detail.

    This is what Second Life and Open Simulator should look like. No more standing in front of a blurry object and waiting for it to load. Waiting, and waiting. And wondering if it's worth the wait. This changes the whole SL experience, for the better. Now Second Life looks like an AAA game.

    Second Life content does not have too much detail. It just needs a more effective graphics system to display it.

    What's going on here? This is an all-new viewer, with no Linden Lab code. It's written in Rust and uses Vulkan for graphics. It has physically-based rendering. It's multi-threaded. One CPU is just refreshing the screen, at 50 to 60 FPS here. The other CPUs are making changes to the scene as the camera moves. All those high-detail textures are being loaded from cache just before the camera gets close enough to see them. If everything is in cache, this viewer can stay ahead of camera movement, even for very high-detail content like this. If the content has to come from the server, the objects that cover the most screen area are always loaded first. So what's in front of you is never blurry for more than a very brief period.

    All this is very experimental. This is just the rendering part of the viewer. There's no user interface other than moving the camera. All this can do is look.

    I'm working through the hard problems of building a high-detail metaverse here. The underlying technology is cutting-edge: the Rust programming language, Vulkan, WGPU for cross-platform graphics, and Rend3 to make WGPU usable. The lower-level libraries are not yet stable or complete. (For example, WGPU doesn't implement rigged mesh yet, so there are no avatars shown.) I'm doing this to see what's possible medium-term, not to produce a new SL viewer in the near term.

    Linden Lab tried to do something like this once, as part of Project Interesting. But it was a tough retrofit for the old viewer code, and they were not successful.

  13. I've seen My First Metaverse Article by Joe Clueless about a hundred times now.

    Between the NFT scammers, Zuckerberg trying to take over, and Linden Lab fumbling the ball, the metaverse field has become frustrating.

    I want to see good virtual worlds. Not happening.

    (Except Roblox. They have a clue, a plan, users, and money. But their world is mostly 13 year olds.)

  14. Facebook's vision of the Metaverse is everybody wearing Facebook goggles for all their waking hours. This is the vision in "Hyperreality". (Search Google for the "Hyperreality" video). That's a dystopian vision which is all too close to being achievable.

  15. Calculating the curve isn't that hard. Getting the character to follow it is hard. Keyframe animation works by having the server send position updates to the viewer. Update arrival time isn't precise, so if you put too many points in your keyframe animation, you will get jitter.

    Keyframe animation can move and turn at the same time. On top of that, you can run regular animations. Combining this works reasonably well, although not perfectly. If you want to see this, visit Hippotropolis, and use area search to zoom in on Cindi, one of my NPCs, and watch for a while. What you're seeing is keyframe animation plus a sort of "animation overrider" which plays the appropriate animations depending on the motion. That has "walk", "run", "turn left", "turn right", and some stand animations.

    The keyframe animation paths start as a path from llGetStaticPath. llGetStaticPath will generate duplicate points and similar junk, so a cleanup pass is required. Then there's a maze solver to deal with non-static obstacles. Then a list of points is generated. If points are far apart, additional points are added near turns to mark the desired starting and ending points of the turn. Then a turn rate is set so that there's a smooth rotation from before the turn point to after it.
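
    A minimal sketch of such a cleanup pass, assuming 2D waypoints and a small tolerance (illustrative only; the actual scripts are in LSL):

```python
def clean_path(points, eps=0.01):
    """Drop duplicate waypoints and waypoints collinear with their
    neighbors, the kind of junk a static path query can emit.
    points: list of (x, y) tuples; eps: tolerance in meters."""
    out = []
    for p in points:
        # Skip exact-ish duplicates of the previous point.
        if out and abs(p[0] - out[-1][0]) < eps and abs(p[1] - out[-1][1]) < eps:
            continue
        # Remove the previous point if it lies on the segment to p.
        while len(out) >= 2:
            a, b = out[-2], out[-1]
            cross = (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0])
            if abs(cross) < eps:
                out.pop()
            else:
                break
        out.append(p)
    return out
```

    Fewer, cleaner points matter here because, as noted above, every extra keyframe point is another server update that can arrive late and cause jitter.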

    You don't want to start turning too far in advance of the turn or it looks bad. It's still a hard change in rotation rate, not a spline. There's no "ease in/ease out" feature for keyframe animation.

    Anyway, you can get NPCs up to the movement level of typical SL users, but not to AAA title level.

     

  16. Good idea. I have something like that. I use Firestorm macros a lot for such things.

    I have a more complex system I use for my NPCs. Error messages at various error levels are sent as link messages to a logging script. It maintains a large circular buffer of messages. When there's a serious problem, the error and the preceding 50 or so events are packaged up as an email and sent to me.  There's also a listener on DEBUG_CHANNEL, so that if there's a stack/heap collision, that's caught, and the events leading up to it are logged. Plus there are stall timers, to restart everything if something goes wrong. If they trip, the log of recent events gets sent in the resulting email.
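
    In outline, that logging scheme looks like this (a Python sketch of the design; the real version is LSL scripts using link messages and email, and the names here are illustrative):

```python
from collections import deque

ERROR = 2  # illustrative severity threshold

class RingLogger:
    """Keep a bounded ring of recent events; when a serious error
    arrives, package it together with the events leading up to it."""

    def __init__(self, capacity=50):
        self.buf = deque(maxlen=capacity)  # old entries drop off automatically
        self.outbox = []                   # stands in for outgoing emails

    def log(self, level, msg):
        self.buf.append((level, msg))
        if level >= ERROR:
            # Snapshot the error plus the preceding events for delivery.
            self.outbox.append(list(self.buf))
```

    The DEBUG_CHANNEL listener and the stall timers feed the same logger, so a stack/heap collision or a restart ships out with its recent history attached.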
