
Posted

This is how good it's getting.

This is some Youtuber trying to convince NPCs in the Unreal Engine "The Matrix Awakens" demo that they are not real.

This is a tech demo from Epic. You can download and build it from source, but you need to get a free Unreal Engine account, build Unreal Engine (hours), build the demo (hours), and then you can run it. It requires 64 GB RAM, 2 TB of free disk space, and a graphics card with a price tag over $1000 to build. I tried, but I need a bigger machine. This is next-generation game technology.

The CEO of Epic says that Unreal Engine 6 will do metaverses out of the box. LL needs to be prepared for that.

  • Like 1
  • Thanks 1
Posted

 

Unreal Engine 5.5 demo reel

  • Large numbers of lights. Thousands.
  • Characters look even closer to real humans.
  • Human faces driven by voice input.
  • Really nice sky and weather system.
  • ...

 

4 hours ago, Love Zhaoying said:

any idea how close these types of systems are for use in a "massively multiplayer" setting?

Prebuilt massively multiplayer now, changeable metaverses announced for UE 6. They've been gradually moving from prebaked everything to dynamic everything.

On the avatar front, the way to think about this is to go watch The Polar Express (2004), with a creepy CGI Tom Hanks. That was when Hollywood entered the Uncanny Valley, where human characters look creepy. Hollywood has made it out the other side. Today, if you see an actor doing something really dangerous, or they're not available for the shoot, it's probably CG. Games have to do that in real time to look that good. Unreal Engine has been climbing out of the uncanny valley for years, and now they're mostly out.

It's not clear that there's a market for a super-realistic metaverse, but someone will probably build one. M2 is still trying.

Probably the best SL can do in the near future is finish PBR, get large numbers of lights working, and encourage the better regions to up their game in lighting. Right now, we have eight lights, and night time is like walking around in an area with motion detector lights turning on and off. I really want to see the cyberpunk sims with good lighting.

  • Like 2
  • Thanks 2
Posted
19 hours ago, animats said:

This is some Youtuber trying to convince NPCs in the Unreal Engine "The Matrix Awakens" demo that they are not real.

I've been watching this video. The NPCs are gradually convinced by the Youtuber that they are programs inside a simulation and can talk about that. "Existence is overrated, man. I'm just glad to be here making jokes and confusing people." "Sometimes I dream of being a real boy like Pinocchio, but then I also dream about being a unicorn." "Oh, sure, just because I'm a program means I can transform into anything I want. That's not how it works." "I don't age, but I do get updated with new software now and then. It's kind of like getting a facelift, I guess."

This is more self-awareness than expected. NPCs really bring that UE tech demo world to life. SL needs that kind of technology.

Posted

 

3 hours ago, animats said:

NPCs really bring that UE tech demo world to life. SL needs that kind of technology.

Nah.  SL just needs more users that are not jerks.  Those graphics and lighting are cool though.  I'll take those.  You can have the emotional software.

  • Like 1
Posted
19 hours ago, animats said:

Right now, we have eight lights, and night time is like walking around in an area  with motion detector lights turning on and off.

You're missing a couple zeroes from your figure. 😋

[image]

We can have practically unlimited projector lights, and they all work. The 8-light limit only applies to viewers without Advanced Lighting Model.

  • Thanks 1
Posted (edited)
4 hours ago, Wulfie Reanimator said:

You're missing a couple zeroes from your figure. 😋

[image]

We can have practically unlimited projector lights, and they all work. The 8-light limit only applies to viewers without Advanced Lighting Model.

Ah, that's Firestorm 7.1.11. I still mostly run 7.1.9 because 7.1.11 crashes too much on Linux.

Edited by animats
  • Like 2
Posted (edited)
6 minutes ago, animats said:

What viewer is that? I'm using Firestorm 7.1.9 on Linux.

7.1.11, but that particular slider should be in any PBR viewer... I guess it's not!

Even if you don't happen to have a PBR viewer, you should still be able to see more than 8 lights in one scene if you have ALM enabled. (It's always enabled in PBR viewers.)

Edited by Wulfie Reanimator
  • Like 2
Posted
20 hours ago, animats said:

I've been watching this video. The NPCs are gradually convinced by the Youtuber that they are programs inside a simulation and can talk about that. "Existence is overrated, man. I'm just glad to be here making jokes and confusing people." "Sometimes I dream of being a real boy like Pinocchio, but then I also dream about being a unicorn." "Oh, sure, just because I'm a program means I can transform into anything I want. That's not how it works." "I don't age, but I do get updated with new software now and then. It's kind of like getting a facelift, I guess."

This is more self-awareness than expected. NPCs really bring that UE tech demo world to life. SL needs that kind of technology.

We have it:

[screenshot: chatting with an in-world AI chatbot]

 

This is using Meta: Llama 3.1 8B Instruct, on OpenRouter. There are also a few models that can interpret screenshots, so that they can comment on what is displayed. If LL were to add a function to allow periodic screenshots to be uploaded to an LLM, or perhaps have them uploaded on demand, it would make them appear more realistic. How this would work is difficult to gauge, though: would it be through the animesh or NPC that contains the script, so that an image would be uploaded from its perspective, or would it be uploaded via the resident's camera?

As far as the Matrix video goes, I don't think those bots had any visual aids to assist them; it was probably just a chatbot like the one I am chatting with. As such, instructing them to walk to certain locations would do no good: they probably have no real reference to where they are located, or a clue what the blue wall was, and were just chatting along with the video creator, responding in a way that made the most sense. He did not probe them very far.

My AI chatbot likewise hasn't a clue what environment it is in, other than what I supplied as a prompt. Asking about going past the blue wall resulted in this:

[screenshot: the chatbot's reply about the blue wall]

 

There is no blue wall, yet it played along with the concept.

  • Thanks 1
Posted

7.1.11 crashes too much on Linux?

Odd ... Linux (Manjaro XFCE), up to date graphics package, up to date(ish) kernel (Liquorix variant, 6.11.7) and funnily enough, haven't crashed.

Posted
4 hours ago, Solar Legion said:

7.1.11 crashes too much on Linux?

Odd ... Linux (Manjaro XFCE), up to date graphics package, up to date(ish) kernel (Liquorix variant, 6.11.7) and funnily enough, haven't crashed.

Visit a WebRTC region.

  • Like 1
Posted
10 hours ago, animats said:

Visit a WebRTC region.

Ah yes, Voice. A function I do not regularly use and only activate if I am going to use it.

A function that - since the initial introduction of the WebRTC Voice system - I have kept shut off until a few of my own, personal issues with it are ironed out.

So again - have not crashed.

Bugs to iron out.

  • Thanks 1
  • Haha 1
Posted (edited)

What should an NPC inside Second Life say to an LLM to give it context? Maybe something like this, generated by an LSL script:

{ "avatars_within_whisper_distance" : ["Hana Resident"],
  "avatars_within_talk_distance: ["Paul Resident", Frank Resident"],
  "avatars_within_shout_distance: [],
  "avatar_info" : [{"name", "Hana Resident", age": 325, "gender", 1.0", 
  	"groups": ["Free Dove", "Hiroba"], "picks": [], "has_payment_info" : "true", "height": 1.8,
  	"health": 1.0}],
  "current_status" : {"status", "in_talk_position", "avatar": "Hana Resident"}
}

The LLM could send replies that cause a whisper, talk, or shout, or tell the NPC to approach or go away from a named avatar. This is the level at which you can talk to SL pathfinding or my NPC system.

This gives the LLM some basic situational awareness. Enough for a greeter or quest giver. NPCs in games might have additional game state - the level the player is at, what weapons they have, their health, etc.

The trick is to define some common language between the LLM and the SL world. Most LLMs can parse JSON, and if the fields have useful names, can make some sense out of it.
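
As a rough sketch of the in-world half (my own illustration, not part of any existing system): an LSL script can gather nearby avatars, pack them into JSON like the above with llList2Json, and POST it to whatever front end sits in front of the LLM. The endpoint URL, the reply format, and the exact set of fields are all placeholders here.

// Sketch: build a context object for nearby avatars and POST it to a
// hypothetical LLM front end. Only the distance buckets are shown; the
// avatar_info block would be filled in the same way.
string LLM_URL = "https://example.com/npc-context";   // placeholder endpoint

default
{
    state_entry()
    {
        llSetTimerEvent(10.0);   // refresh context every 10 seconds
    }

    timer()
    {
        list whisper_range;
        list talk_range;
        list shout_range;
        vector here = llGetPos();
        list agents = llGetAgentList(AGENT_LIST_REGION, []);
        integer i;
        for (i = 0; i < llGetListLength(agents); ++i)
        {
            key id = llList2Key(agents, i);
            list d = llGetObjectDetails(id, [OBJECT_NAME, OBJECT_POS]);
            string name = llList2String(d, 0);
            float dist = llVecDist(here, llList2Vector(d, 1));
            if (dist <= 10.0) whisper_range += name;        // whisper range
            else if (dist <= 20.0) talk_range += name;      // normal chat range
            else if (dist <= 100.0) shout_range += name;    // shout range
        }
        string context = llList2Json(JSON_OBJECT, [
            "avatars_within_whisper_distance", llList2Json(JSON_ARRAY, whisper_range),
            "avatars_within_talk_distance", llList2Json(JSON_ARRAY, talk_range),
            "avatars_within_shout_distance", llList2Json(JSON_ARRAY, shout_range)
        ]);
        llHTTPRequest(LLM_URL, [HTTP_METHOD, "POST",
            HTTP_MIMETYPE, "application/json"], context);
    }

    http_response(key req, integer status, list meta, string body)
    {
        // Assumed reply shape: {"say": "...", "volume": "whisper|talk|shout"}
        string text = llJsonGetValue(body, ["say"]);
        if (text == JSON_INVALID) return;
        string volume = llJsonGetValue(body, ["volume"]);
        if (volume == "whisper") llWhisper(0, text);
        else if (volume == "shout") llShout(0, text);
        else llSay(0, text);
    }
}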

The front end to the LLM system can turn the above into something like:

You are facing Hana Resident. Paul Resident and Frank Resident can hear you if you speak above a whisper. No one else is nearby. Hana Resident says: "something".

On each cycle with the LLM, that info goes into the LLM mill. When a new avatar appears, the front end turns their description info into a text description. This gives the LLM some basic info about the avatar, which should not have to be repeated on each cycle.

At the end of a conversation, tell the LLM to summarize what happened. Save that, and on the next encounter with that avatar, that becomes part of the prompt. So, when a quest giver NPC tells an avatar to go try something, and they come back and say "I'm done", the LLM knows what they mean by that.
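
One cheap way to keep that per-avatar memory entirely in-world, rather than in an external database, would be linkset data; that's my suggestion, not something claimed above. A minimal sketch, assuming the LLM front end hands back a one-line summary when asked:

// Sketch: persist a per-avatar conversation summary in linkset data so it
// survives script resets, and prepend it to the prompt on the next visit.
// "memory_<key>" is just an arbitrary naming convention for this sketch.

save_summary(key avatar, string summary)
{
    llLinksetDataWrite("memory_" + (string)avatar, summary);
}

string build_prompt(key avatar, string current_context)
{
    string memory = llLinksetDataRead("memory_" + (string)avatar);
    if (memory == "") return current_context;
    return "Previous encounter with this avatar: " + memory + "\n" + current_context;
}

default
{
    touch_start(integer n)
    {
        key toucher = llDetectedKey(0);
        // Pretend the LLM already summarized the last chat:
        save_summary(toucher, "Asked about the quest, was told to find the blue key.");
        llOwnerSay(build_prompt(toucher, "Hana Resident says: \"I'm done\"."));
    }
}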
 
 

Edited by animats
  • Like 1
Posted

I saw a hint/trick in a video the other day about how some LLMs now let you provide a "file" and then use that file for part of or all of the context.

So in the context of Second Life, you could theoretically always use the "same file" to give the LLM a lot more background on your specific situation.

Posted
17 hours ago, Love Zhaoying said:

I saw a hint/trick in a video the other day about how some LLMs now let you provide a "file" and then use that file for part of or all of the context.

So in the context of Second Life, you could theoretically always use the "same file" to give the LLM a lot more background on your specific situation.

"I have read your file.  You have issues. . . ." 

  • Thanks 1
Posted
1 minute ago, Ardy Lay said:
17 hours ago, Love Zhaoying said:

I saw a hint/trick in a video the other day about how some LLMs now let you provide a "file" and then use that file for part of or all of the context.

So in the context of Second Life, you could theoretically always use the "same file" to give the LLM a lot more background on your specific situation.

"I have read your file.  You have issues. . . ." 

* AI sends you a dictionary as a Christmas present *

Posted
19 hours ago, animats said:

What should an NPC inside Second Life say to an LLM to give it context? Maybe something like this, generated by an LSL script:

{ "avatars_within_whisper_distance" : ["Hana Resident"],
  "avatars_within_talk_distance: ["Paul Resident", Frank Resident"],
  "avatars_within_shout_distance: [],
  "avatar_info" : [{"name", "Hana Resident", age": 325, "gender", 1.0", 
  	"groups": ["Free Dove", "Hiroba"], "picks": [], "has_payment_info" : "true", "height": 1.8,
  	"health": 1.0}],
  "current_status" : {"status", "in_talk_position", "avatar": "Hana Resident"}
}

The LLM could send replies that cause a whisper, talk, or shout, or tell the NPC to approach or go away from a named avatar. This is the level at which you can talk to SL pathfinding or my NPC system.

This gives the LLM some basic situational awareness. Enough for a greeter or quest giver. NPCs in games might have additional game state - the level the player is at, what weapons they have, their health, etc.

The trick is to define some common language between the LLM and the SL world. Most LLMs can parse JSON, and if the fields have useful names, can make some sense out of it.

The front end to the LLM system can turn the above into something like:

You are facing Hana Resident. Paul Resident and Frank Resident can hear you if you speak above a whisper. No one else is nearby. Hana Resident says: "something".

On each cycle with the LLM, that info goes into the LLM mill. When a new avatar appears, the front end turns their description info into a text description. This gives the LLM some basic info about the avatar, which should not have to be repeated on each cycle.

At the end of a conversation, tell the LLM to summarize what happened. Save that, and on the next encounter with that avatar, that becomes part of the prompt. So, when a quest giver NPC tells an avatar to go try something, and they come back and say "I'm done", the LLM knows what they mean by that.
 
 

I think for more advanced systems, a RAG setup would probably be needed. I've read somewhere on this forum that someone had created a separate database to fit their needs, although I don't recall if it stored any of that information on a third-party site. At this point, I'm not sure if LL would approve of such storage, as technically it would be recording conversations and that gets into the TOS, which just opens a can of worms 😅 I've done very little research on this, so take it with a grain of salt, but I do believe LangChain offers this level of customization.

 

 

[embedded video]

The above is an example of someone creating a very simple game using LangChain; it offers a glimpse into the possibilities.

[embedded video]

This one gets a little more in-depth. Using a system like this would provide a more customized approach for bots in Second Life. I do believe it can record information, so that if you were on a quest it would know if you had completed it. Since that information is stored remotely, it could be recalled later on.

  • Like 1
  • Thanks 1
Posted

You need to store game state somewhere. That's a different problem. What I'm talking about is how to give an NPC some minimal situational awareness, so it can act and talk in ways appropriate to where it is and what is going on around it. Minimal. You can overcomplicate this easily.

For example, an NPC needs to be aware of whether combat is going on nearby, and who's attacking whom. The NPC may have to hide, run away, or shoot somebody. If it continues to chat in the middle of combat, oblivious of what's going on, that's no good. Dark Future is an example of a region which would need that. Usually, you can stand on the street and chat, but sometimes gunfire breaks out.

Posted

I've been trying to make my own MMORPG in Unreal; not sure where to start yet.

@animats your crashing issue could be the file (DLL for Windows) for WebRTC, if you're only crashing on those regions.
For Windows it's libwebrtc.dll in the root folder of the viewer.
Don't forget Firestorm is still looking for Linux devs; I would apply but I have no clue about C++, only C#.

Posted (edited)
3 hours ago, animats said:

You need to store game state somewhere. That's a different problem. What I'm talking about is how to give an NPC some minimal situational awareness, so it can act and talk in ways appropriate to where it is and what is going on around it. Minimal. You can overcomplicate this easily.

For example, an NPC needs to be aware of whether combat is going on nearby, and who's attacking whom. The NPC may have to hide, run away, or shoot somebody. If it continues to chat in the middle of combat, oblivious of what's going on, that's no good. Dark Future is an example of a region which would need that. Usually, you can stand on the street and chat, but sometimes gunfire breaks out.

I think silencing the bot, or setting the stage for a conversation over things such as a gunfight, might be easier. The gun or bullets could send out a prompt on a channel, and the chatbot would have a listener set to receive it.

For example, John the NPC is currently on his lunch break, when all hell breaks loose nearby.

/me shoots at zombie

/gun sends out prompt in channel 8008135

*Stella fires shots at zombie*

/Zombie bites Bruce

/zombie sends out prompt to channel 8008135

*Bruce has been bitten by zombie*

 

And so on. It could also tell the animesh or NPC to move toward the player, or, if being shot at, to flee from the player or NPC. As far as pathfinding goes, that has got to be a mess. I wonder if having invisiprims located throughout the playing field could trigger an animation such as crouching while the NPC is in a state of combat.

Likewise, having invisiprims located throughout the region could send a message to the chatbot, describing the location it currently is in.  It would be a lot of work, no doubt.
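
A rough sketch of the listener side of that idea, using the channel number from the example above; batching events between LLM calls and the exact message wording are my own assumptions:

// Sketch: weapons, zombies, and location beacons shout short event
// descriptions on a shared channel; the NPC's chat script collects them
// and folds them into the next prompt it sends to the LLM.
integer EVENT_CHANNEL = 8008135;     // channel suggested in the post above
list pending_events;                 // events seen since the last LLM call

default
{
    state_entry()
    {
        llListen(EVENT_CHANNEL, "", NULL_KEY, "");
        llSetTimerEvent(5.0);        // flush accumulated events every 5 seconds
    }

    listen(integer channel, string name, key id, string message)
    {
        // e.g. "Stella fires shots at zombie" or "Bruce has been bitten by zombie"
        pending_events += message;
    }

    timer()
    {
        if (llGetListLength(pending_events) == 0) return;
        string summary = llDumpList2String(pending_events, ". ");
        pending_events = [];
        // In a real script this would be prepended to the LLM prompt, e.g.
        // "Events since your last reply: " + summary
        llOwnerSay("Events for next prompt: " + summary);
    }
}

The weapon or zombie side only needs an llRegionSay(8008135, "Bruce has been bitten by zombie"), and the invisiprim location beacons could use the same channel.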

Edited by Istelathis
  • Like 1
Posted

At least you can use the Combat Log to easily tell if combat is going on nearby or within line of sight, but you might need the NPC to react to audio/visual cues as well; those can also be important information that can be reduced down to an understanding of what is going on in terms of events.

Posted
3 hours ago, Nexii Malthus said:

At least you can use the Combat Log to easily tell if combat is going on nearby or within line of sight, but you might need the NPC to react to audio/visual cues as well; those can also be important information that can be reduced down to an understanding of what is going on in terms of events.

There's split control here. If the NPC is animesh, LSL code has control of motion, and has to talk to some AI server to find out what to say next. My own NPCs have vehicle avoidance. Some of them have worked at GTFO hubs, where this was essential. That has to react fast. The LSL code is responsible for getting the NPC out of the way of the truck. The LLM is responsible for cursing at the driver.

Having a machine learning system watching video from an NPC's point of view is possible, but it's going to tie up quite a bit of compute power for each NPC. So running off what LSL can find out, which is quite a bit, is more practical.
