Everything posted by animats

  1. Considering that the surface area of the human body is around 2 square meters, something has gone horribly wrong if the clothing area exceeds 1000 square meters.
  2. Maybe. Second Life is potentially more mainstream now than it has been in the past. Matthew Ball, the venture capitalist, pointed this out in his series of metaverse essays. Because of the COVID epidemic, some form of remote presence is the new normal. The stigma of using a virtual world is gone.
  3. 9500 of them probably would be minimum-wage censors. Roblox has about 4000 censors, outsourced to some staffing company in India, and a few hundred staff in California making the thing go.
  4. So what? The stated position should be "It's a virtual world that mimics the real world. Of course it has sex". (Roblox won't even let users hold hands. The word "gay" is banned in Roblox.) Facebook is very evasive about the number of users that Facebook Spaces had, or Facebook Horizon has. It may be a very small number.
  5. https://www.bbc.com/news/technology-59180273 They interviewed Anya Kanevsky, VP of product management at Linden Lab.
  6. There's formal training available on game user interface design. https://gdconf.com/masterclass/psychology-game-ux "Experiencing a game happens in the player's mind. This is why understanding the human brain (especially its limitations) while it is perceiving and interacting with a game is paramount to accomplish faster and more efficiently your developers' goals. This workshop proposes to delve into how the human brain works in terms of perception, attention, and memory (critical elements for usability), and offers insights on human motivation, emotion, and gameflow (critical elements for engaging games). Based on these elements, the workshop proposes a UX framework and UX guidelines during the different game development stages." The instructor has a PhD in psychology, was in charge of the user interface for Fortnite, and has published two books on the subject. So this is someone who knows what works. Perhaps Linden Lab could send someone to take this online course.
  7. Microsoft's "metaverse" is out. Legless avatars again. SL should use this as a marketing point. "Our avatars have legs!"
  8. I've thought about viewer cooperation to boost performance, but it has griefing problems. Never trust the client. There are also terms of service problems.

An impostor server, to create the illusion of a very large draw distance, is an option. It would store pictures of entire regions; the TOS allows you to take pictures of SL and redistribute them. Each region would have pictures from at least 8 horizontal directions, plus straight down. Beyond draw distance, you'd be shown pictures of adjacent regions, positioned like billboards at the region's edge. If you're flying, you'd see mostly the straight-down images, like flying over a map. Distant images would be blurred slightly, as if there were a bit of haze. This is roughly what Google Earth and GTA-type games do to show you a big world in 3D.

With that, users would have more of a sense of place. If there are mountains to the north, you'd be able to see them, and have a sense of direction. When boating, you'd be able to see distant shorelines. When flying, you'd be able to see distant terrain and airport runways, as if you were flying over the map.

This takes a few tricks to deal with SL content. The picture-taker would only cover the first 128m or so above ground, so as not to show sky junk for miles around. The downward pictures would be taken from a height just above the highest object with a ground connection. It would be like "terrain view" in Google Maps vs. "map view". The pictures would be taken by a bot that photographs the world once a week or so. Mainland only, plus some of the larger multi-sim regions; for a single region, it's unnecessary, of course.

This is just a concept for now. I might take some pictures of New Babbage's 14 sims as a demo. A real version would require LL support, or support from other large Open Simulator grids. I used to have a little "impostor garden" in Vallone, where visitors could compare 3D objects and impostors. From 20m away, the impostors looked pretty good.

For this, I'm talking about impostors of objects at least 100m away. Here's GTA V: Use of distant impostors in GTA V. Second Life should look this good. This is something LL could do. It's not that difficult. You get a huge improvement in visual quality for a low investment. It would show off that SL is a really big world, unlike those dinky Facebook worlds.
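As a rough sketch of how a viewer might pick which of the 8 horizontal impostor images to show for a distant region (the 45° sector scheme and function names here are my own assumption, not any existing spec):

```python
import math

def pick_impostor(camera_pos, region_center):
    """Pick which of 8 horizontal impostor images (one per 45-degree
    sector around the region) faces the camera. Sector numbering is
    hypothetical: 0 = east face, counting counterclockwise."""
    dx = region_center[0] - camera_pos[0]
    dy = region_center[1] - camera_pos[1]
    # Bearing from the region toward the camera, in degrees [0, 360).
    bearing = math.degrees(math.atan2(-dy, -dx)) % 360.0
    # Snap to the nearest of the 8 pre-rendered views.
    return round(bearing / 45.0) % 8
```

A camera due west of the region gets sector 4 (the west-facing picture); due east gets sector 0. The straight-down image would be a separate case keyed on camera altitude.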
  9. If only that worked for big, ugly, low-altitude skydomes. Hundreds of objects to derender to make these go away. And they block the sun.
  10. Inkscape and Pinta are options. Inkscape is more for doing line art, text, etc. Pinta is more like a paint program. I used to use Pinta but switched to GIMP. There's still Photoshop Elements, which is a $100 one-time purchase.
  11. That may not mean "jobs" as an income source, but just tasks to do. If you spend any time at the new user areas, you find that the two big questions are "What do I do now?" and "How do I fix this &$*@! clothing problem?" One of the helpers at Caledon, who tends to sit on the benches near the entrance, says that many new users think he's a quest giver. LL has tried, with things like Linden Realms, but they're not that interesting. Part of the problem is that many of the interesting things to do require skills users won't have on day one. For example, sending users on a quest across mainland using the Drivers of SL HUD, which gives turn-by-turn directions. That HUD allows for making stops, and doing various tasks at stops. So it's really a full quest system. But there are so many things that can go wrong on a road trip in SL that it would be tough on new users. And all the business of attaching a HUD and rezzing a vehicle is a pain. With a grid-wide experience, though, it could be set up so that a new user just walks through a portal to start the quest.
  12. That's what makes Unreal Engine work. Vast amounts of prep and optimization during level building. If you can never see area A from area B, you never need to draw A when in B. Or even load it. That kind of thing.
  13. Roblox has just developed new technology to make clothing Just Work. In the new Roblox system, the clothing is automatically adapted to fit the avatar. This is very new, and still in test. Next generation of Roblox clothing on new-style avatars. It's not just blocks any more. Their in-house designers like a somewhat anime style. That's not built into the system. Creators can create and sell third-party clothing. (Roblox has an average user age of 13, remember. Clothing has to have more coverage over there.) So, it can be done, and has been done.
  14. That's not how this works. That's an illusion. The viewer is in a race with the user to load content before the user gets close enough to see through the illusion. High-resolution textures are constantly being loaded as the camera moves around, and distant textures are being dropped to lower resolution to keep the frame rate up. There's a lot going on behind the scenes to create the illusion that nothing is happening. Pay no attention to the man behind the curtain.

Here's another example, from an earlier, lower-performance version of the system. This shows what it looks like when textures are not in local cache. This is a tour of a shopping event. Booth after booth of high-detail objects. You'll see some small or distant grey objects that haven't loaded yet, and some objects with fuzzy, low-detail textures. As the camera gets close, the grey objects acquire textures, and fuzzy textures go to higher resolution. You never see a fuzzy texture in close-up for more than a second or two. The shopping experience improves. You can see the merchandise.

The next problem is getting rid of grey. What I'd like to do there is have a per-region list of objects, faces, and average colors. Download that and use it as the initial color for each object. Here's a test of what that would look like. Mono-color mode. No textures, just colors. The object color is simply the texture reduced to 1x1 size, which is a single color. The plan is to show something like this when you first enter a new area for which textures are not yet available. The amount of data needed to do this is small, so this info can be cached for a very large number of regions. Texture loading will quickly follow, starting, of course, with the objects occupying the most screen space. No more annoying, immersion-breaking grey. You can see where you are. You can see where you're going. There is no fundamental obstacle to Second Life / Open Simulator looking like an AAA game title.
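The per-object color described here is exactly the texture reduced to 1x1, which is just the mean of all its pixels. A minimal sketch in pure Python (a pixel list stands in for a decoded texture):

```python
def average_color(pixels):
    """Reduce a texture to its 1x1 version: the per-channel mean of
    all RGB pixels. `pixels` is a list of (r, g, b) tuples, 0-255."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (round(r), round(g), round(b))
```

Three bytes per face is all the server would need to ship, which is why this data can be cached for a very large number of regions.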
  15. This is just the rendering part at this point. Log in and cam. It's a long way from a full viewer. I don't want to get end users too excited about this. Many viewer devs, both inside and outside Linden Lab, know what I'm up to. After a year of work, and some good results, I decided to put out a progress report. This is partly in response to recent comments about excessively large textures. Those are not a real problem if you don't load all those pixels until you really need them. If you get close enough to a 2048x2048 texture to fill your screen, it will get loaded, but not otherwise. Some of what I've done here could be done in the LL or Firestorm viewers. They have a texture loading priority system, left over from Project Interesting, but it was never finished. All this applies to non-avatar objects. Avatars have their own rendering problems, and I have not looked hard at those yet.
  16. I've mentioned occasionally that I'm working on a new viewer. Here's some video from an early test version. Second Life at full detail. This is what Second Life and Open Simulator should look like. No more standing in front of a blurry object and waiting for it to load. Waiting, and waiting. And wondering if it's worth the wait. This changes the whole SL experience, for the better. Now Second Life looks like an AAA game. Second Life content does not have too much detail. It just needs a more effective graphics system to display it.

What's going on here? This is an all-new viewer, with no Linden Lab code. It's written in Rust, and uses Vulkan for graphics. It has physically-based rendering. It's multi-threaded. One CPU is just refreshing the screen, at 50 to 60 FPS here. The other CPUs are making changes to the scene as the camera moves. All those high-detail textures are being loaded from cache just before the camera gets close enough to see them. If everything is in cache, this viewer can stay ahead of camera movement, even for very high-detail content like this. If the content has to come from the server, the objects that cover the most screen area are always loaded first. So what's in front of you is never blurry for more than a very brief period.

All this is very experimental. This is just the rendering part of the viewer. There's no user interface other than moving the camera. All this can do is look. I'm working through the hard problems of building a high-detail metaverse here. The underlying technology is cutting-edge: the Rust programming language, Vulkan, WGPU for cross-platform graphics, and Rend3 to make WGPU usable. The lower-level libraries are not yet stable or complete. (For example, WGPU doesn't implement rigged mesh yet, so there are no avatars shown.) I'm doing this to see what's possible medium-term, not to produce a new SL viewer in the near term. Linden Lab tried to do something like this once, as part of Project Interesting. But it was a tough retrofit for the old viewer code, and they were not successful.
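The "biggest on screen first" rule is, in essence, a priority queue keyed by projected screen area, roughly (object radius / distance)². A sketch under my own assumptions (my viewer is Rust; Python here for brevity, and the names are mine):

```python
import heapq

def screen_area_priority(radius, distance):
    """Approximate fraction of screen covered: solid angle ~ (r/d)^2.
    Larger means more urgent. Clamp distance to avoid division by zero."""
    return (radius / max(distance, 0.001)) ** 2

def load_order(objects):
    """objects: list of (name, radius_m, distance_m) tuples.
    Returns names in load order, largest apparent size first
    (negate the priority because heapq is a min-heap)."""
    heap = [(-screen_area_priority(r, d), name) for name, r, d in objects]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

A nearby 5m booth beats a distant 10m building, which is why what's in front of you is never blurry for long.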
  17. I've seen My First Metaverse Article by Joe Clueless about a hundred times now. Between the NFT scammers, Zuckerberg trying to take over, and Linden Lab fumbling the ball, the metaverse field has become frustrating. I want to see good virtual worlds. Not happening. (Except Roblox. They have a clue, a plan, users, and money. But their world is mostly 13 year olds.)
  18. Facebook's vision of the Metaverse is everybody wearing Facebook goggles for all their waking hours. This is the vision in "Hyperreality". (Search Google for the "Hyperreality" video). That's a dystopian vision which is all too close to being achievable.
  19. Calculating the curve isn't that hard. Getting the character to follow it is hard. Keyframe animation works by having the server send position updates to the viewer. Update arrival time isn't precise, so if you put too many points in your keyframe animation, you will get jitter. Keyframe animation can move and turn at the same time. On top of that, you can run regular animations. Combining this works reasonably well, although not perfectly. If you want to see this, visit Hippotropolis, and use area search to zoom in on Cindi, one of my NPCs, and watch for a while. What you're seeing is keyframe animation plus a sort of "animation overrider" which plays the appropriate animations depending on the motion. That has "walk", "run", "turn left", "turn right", and some stand animations. The keyframe animation paths start as a path from llGetStaticPath. llGetStaticPath will generate duplicate points and similar junk, so a cleanup pass is required. Then there's a maze solver to deal with non-static obstacles. Then a list of points is generated. If points are far apart, additional points are added near turns to mark the desired starting and ending points of the turn. Then a turn rate is set for the turn so that there's a smooth rotation from before the turn point to after the turn. You don't want to start turning too far in advance of the turn or it looks bad. It's still a hard change in rotation rate, not a spline. There's no "ease in/ease out" feature for keyframe animation. Anyway, you can get NPCs up to the movement level of typical SL users, but not to AAA title level.
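The turn-rate setup described above can be sketched like this: given the headings before and after the turn point, and the time the character spends between the pre-turn and post-turn markers, the angular rate is just the wrapped heading change over that time. This is a Python stand-in for what runs in LSL, and the function names are mine:

```python
import math

def wrap_angle(a):
    """Wrap an angle to (-pi, pi] so the turn goes the short way around."""
    return math.atan2(math.sin(a), math.cos(a))

def turn_rate(heading_in, heading_out, seg_dist, speed):
    """Angular rate (rad/s) so the rotation completes exactly between
    the pre-turn and post-turn keyframe points, which are `seg_dist`
    meters apart and traversed at `speed` m/s."""
    dt = seg_dist / speed
    return wrap_angle(heading_out - heading_in) / dt
```

It's still a constant rate over that segment, a hard change in rotation rate rather than a spline, which is the ease in/ease out limitation mentioned above.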
  20. Good idea. I have something like that. I use Firestorm macros a lot for such things. I have a more complex system I use for my NPCs. Error messages at various error levels are sent as link messages to a logging script. It maintains a large circular buffer of messages. When there's a serious problem, the error and the preceding 50 or so events are packaged up as an email and sent to me. There's also a listener on DEBUG_CHANNEL, so that if there's a stack/heap collision, that's caught, and the events leading up to it are logged. Plus there are stall timers, to restart everything if something goes wrong. If they trip, the log of recent events gets sent in the resulting email.
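In Python terms (the original runs in LSL with link messages to a logging script; the class and names here are my own), that logging scheme is a bounded ring buffer that gets flushed when a serious error arrives:

```python
from collections import deque

ERROR, WARN, INFO = 0, 1, 2  # error levels, most severe first

class RingLogger:
    """Keep the last `size` events. On a serious error, package the
    error plus the preceding events for sending (returned here; the
    in-world version emails them)."""
    def __init__(self, size=50):
        self.buf = deque(maxlen=size)  # old events fall off the front

    def log(self, level, msg):
        self.buf.append((level, msg))
        if level == ERROR:
            return self.flush()

    def flush(self):
        report = list(self.buf)
        self.buf.clear()
        return report
```

The DEBUG_CHANNEL listener and the stall timers would feed the same `log` call, so a stack/heap collision still gets its preceding-events report.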
  21. This came up at Server User Group recently. A new default mainland environment is coming. When EEP was rolled out, the default mainland environment was too dim. There were no tools to change all the environments mainland-wide. Those tools are now supposedly working, but the new default environment hasn't been pushed out yet.
  22. If you just want to stop a physical object, set its velocity to ZERO_VECTOR with llSetVelocity(ZERO_VECTOR, FALSE).
  23. Right. Shadow is $30/month. NVIDIA GeForce Now is about $200 a year. (They have a free tier: take a number and wait for a connection, then connect for an hour.) Paperspace is about a dollar an hour. Having one of those options endorsed by LL would be moderately useful, so there's a pathway to SL from more platforms.
  24. This is a surprisingly hard problem. My NPCs will try to get out of the way of a vehicle, but they're more sluggish doing so than I would like because they have to do multiple llCastRay calls to find open space.
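A sketch of that open-space search, with a hypothetical `cast_ray(origin, direction, dist)` callback standing in for llCastRay (returns True when blocked): sample headings around the NPC and step toward the first clear one. Each sample costs a raycast, which is why the dodge is sluggish.

```python
import math

def find_open_direction(cast_ray, pos, step=2.0, samples=8):
    """Try `samples` evenly spaced headings around `pos`; return the
    first unit direction with no obstacle within `step` meters, else
    None. `cast_ray(origin, direction, dist)` -> True if blocked
    (assumed stand-in for llCastRay)."""
    for i in range(samples):
        ang = 2 * math.pi * i / samples
        d = (math.cos(ang), math.sin(ang))
        if not cast_ray(pos, d, step):
            return d
    return None
```

In LSL each of those casts is a separate llCastRay call with throttling, so in the worst case (fully boxed in) the NPC burns eight calls and still has nowhere to go.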
  25. One can always hope. It's been like that for the last month. It's a culture conflict. Zenescope is a comics publisher. Their user base is accustomed to waiting for the next issue.