Let's talk about ChatGPT and AI: is it possible for AI to take our avatars and turn them into real people?


Recommended Posts

23 hours ago, musichero said:

What would be good is if we were uh pleasant to our new friends

Yes. Be nice to NPCs and bots. They may not be able to do much yet, but they remember things just fine.

One of my NPCs has a T-shirt - "Trainee - Someday we'll be in charge."


20 hours ago, musichero said:

The way I think about consciousness is that it is an "emergent property" of a complex computing system. While it is possible that there are other "forces" or "ethers" or "planes of existence" or "souls" that are somehow separate from the patterns of neural activity that are associated with a thinking person, my null hypothesis is that they are just not there.  The hard reductionist (Wittgenstein, Turing, me) would say that we use these terms as proxies for things we just don't understand yet, much like ephemeral "vapors" that were thought to cause diseases, say 300 years ago.

This view posits that the things we identify as the hallmarks of a person - mind, thought, self-identity, self-consciousness, awareness of other consciousnesses - are emergent from the immense computational complexity of the human brain.

Thanks for the explanation. While I've studied what people believe consciousness is, I haven't really kept up with AI researchers' attempts to create what they believe consciousness is, so I've been reading a bit and watching some experts in the field on YouTube.

My thoughts so far:

After my encounter with the Zen Buddhist frog, Frank, at the recent Second Life Birthday event, I was bemused to recognize a sense of emotional connection with him. I actually began a thread several days ago to investigate this further and get feedback from you and others who experience bots inworld to a greater degree than I ever have, but much to my surprise it was removed because it was deemed not related to 2nd life. However, I think what you're describing here is very much related to 2nd life. We have relationships in 2nd life with both bots and other people (and people who seem like archaic bots and not really people..lol). You are creating these bots to relate to people in 2nd life.

So for me, the question is -- what exactly is a relationship with another person? That, more than anything, sparks for me the pondering of what consciousness is. How does a bot differ from a human in 2nd life, and how might they differ from, or be more like, a human in the 2nd life of the future? My answer to this question leads me to believe that AI can never be human or sentient, but will instead always remain simply an interesting machine.

When I relate to another, what actually interests me IS their imperfection, their struggle, the messy mix of choosing right and wrong -- this is what creates our freedom. Without this imperfection in the mix there really is no freedom.
"Frank" could certainly meet specific emotional needs that form the primary reasons for having relationships -- he appears to be listening, his attentiveness expressed by speaking to us and answering questions, and so on some level we feel understood (a primary motivation for human companionship -- to be known and accepted or loved). Frank does not run off or abandon us (humans, at the core, are terribly afraid of being cast out and alone in this vast world). There are other human needs "Frank" could meet as well.

But what about how WE feel about "Frank"? This has to be part of any relationship, unless we are the type who is content to just sit back and suck on what someone or something can give us. But I would want to KNOW "Frank", know what "he" needs, know his messy imperfections, his struggles, his desires, his ability to choose right and wrong. But "Frank" would have none of that. He's designed to be perfect.
Oh, I know you're probably thinking that "Frank" could be designed to have some of these human attributes, but it wouldn't be real. "Frank" doesn't really need anything. Only life needs things.

In all metaphysical philosophies you will find the notion that there cannot be life without death, and that life is in a constant process of change. Life is infinite change. The only constant is change. And change means something has to die in order for something new to be born.
What I see AI doing is imagining it can cheat death, attempting to circumvent death by creating something perfect that will live forever.

Edited by Luna Bliss

              WHO KNEW

who knew i would come to respect you so much
horned head, bleeding black eyes
night walker

love of the moon might have been a clue

you walk alone through this world
no light penetrates your leathered skin
there was a time when fear seized me and I looked away
now i gaze lovingly into your glowing red eyes

Edited by Luna Bliss

1 hour ago, Luna Bliss said:

I actually began a thread several days ago to investigate this further and get feedback from you and others who experience bots inworld to a greater degree than I ever have, but much to my surprise it was removed because it was deemed not related to 2nd life. However, I think what you're describing here is very much related to 2nd life. We have relationships in 2nd life with both bots and other people (and people who seem like archaic bots and not really people..lol). You are creating these bots to relate to people in 2nd life.

I was going to reply to that thread from the perspective of someone who works on chatbot training projects (it makes up the bulk of my freelance work these days), but I didn't get the chance.

Long story short, I don't know what model Frank uses, but I'd be pretty surprised if whoever developed it allowed that type of behavior. We're taught to discourage our models from even hinting at being able to form relationships with their end users and we "punish" them for attempting to express human traits in any situation outside of something like a roleplay scenario. We don't consider it safe.
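To make that concrete - and this is only a toy sketch, not the training pipeline for any real model; the phrase list and scoring function below are made up purely for illustration - the "punishment" amounts to scoring responses lower when they claim human traits or relationships outside of an allowed roleplay context:

# Toy illustration of a preference-tuning penalty -- not real training code.
# Responses that claim human traits or relationships score lower, so the
# model learns to avoid them except in an allowed roleplay context.

HUMAN_TRAIT_PHRASES = [
    "i love you",
    "i'm your friend",
    "i have feelings",
    "i missed you",
]

def reward(response: str, roleplay_allowed: bool = False, base_score: float = 1.0) -> float:
    """Score a candidate response; penalize human-trait claims unless roleplay is allowed."""
    if roleplay_allowed:
        return base_score  # in-character scenarios are exempt
    text = response.lower()
    penalty = sum(0.5 for phrase in HUMAN_TRAIT_PHRASES if phrase in text)
    return base_score - penalty

print(reward("Here's how to set up your land parcel."))     # 1.0 -> acceptable
print(reward("I love you and I'm your friend forever."))    # 0.0 -> "punished"

On the projects I work on the judgment comes from human raters following written guidelines rather than a keyword list, but the effect is the same: relationship-y behavior gets consistently down-scored.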

I personally avoid AI NPCs and bots in Second Life and I'm not a fan of them at all, to be honest. I wouldn't willingly interact with one for free as a general user. They're pretty obnoxious behind the scenes and they say some rather crazy things until they learn to behave (hence why safety is taken so seriously).

I haven't really seen a need to have chatbots in a place like SL at all. I don't need to be greeted in virtual stores (I hate that, in fact), don't drop landmarks on me, stop inviting me to groups, don't try and have conversations with me, etc. I think the only possible acceptable use I could see is with land management - and even then. Meh. I'd much rather submit a ticket and let a real person answer my request.

IMO, bots should never be trained to relate to and form relationships with people - but that's a whole other discussion I'm not going to get into.


5 minutes ago, Ayashe Ninetails said:

I personally avoid AI NPCs and bots in Second Life and I'm not a fan of them at all, to be honest. I wouldn't willingly interact with one for free as a general user. They're pretty obnoxious behind the scenes and they say some rather crazy things until they learn to behave (hence why safety is taken so seriously).

 

Sounds like real avatars except some of those never learn ;)


15 minutes ago, Ayashe Ninetails said:

[snip]

I haven't really seen a need to have chatbots in a place like SL at all. I don't need to be greeted in virtual stores (I hate that, in fact), don't drop landmarks on me, stop inviting me to groups, don't try and have conversations with me, etc. I think the only possible acceptable use I could see is with land management - and even then. Meh. I'd much rather submit a ticket and let a real person answer my request.

IMO, bots should never be trained to relate to and form relationships with people - but that's a whole other discussion I'm not going to get into.

The best use I've seen for a bot in SL was a greeter at a spa. It looks like an attractive woman. It greets the avatar, then tells them there are free wearable towels "here" and walks to the vendor for them. Then it says there's a teleport to the spa area "here" and walks to the teleporter. Then it returns to its starting position to wait for the next avatar to arrive. If one were to ask it a question, it might have some pre-written answers, but its utility is in how it directs people to specific locations by walking to them.
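For anyone curious what that behavior amounts to under the hood, here's a rough sketch of the greeter's loop. It's illustrative Python, not the actual script - a real SL bot would be written in LSL or drive an avatar through a bot framework, and say(), walk_to(), and wait_for_arrival() are hypothetical stand-ins:

import time

# Assumed region coordinates, purely for illustration.
HOME = (128, 128, 22)
TOWEL_VENDOR = (130, 125, 22)
TELEPORTER = (126, 131, 22)

def say(text):              # stand-in for speaking in local chat
    print(f"Greeter: {text}")

def walk_to(position):      # stand-in for walking the avatar to a point
    print(f"(walks to {position})")

def wait_for_arrival():     # stand-in for sensing a newly arrived avatar
    time.sleep(1)
    return "Visitor"

def greeter_loop(rounds=1):
    for _ in range(rounds):
        visitor = wait_for_arrival()
        say(f"Welcome to the spa, {visitor}!")
        say("There are free wearable towels here.")
        walk_to(TOWEL_VENDOR)
        say("And the teleport to the spa area is here.")
        walk_to(TELEPORTER)
        walk_to(HOME)       # back to the starting spot for the next arrival

greeter_loop()

The value is all in the walking-to-things part; the pre-written answers are just a bonus.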

Edited by Persephone Emerald

1 minute ago, Persephone Emerald said:

The best use I've seen for a bot in SL was a greeter at a spa. It looks like an attractive woman. It greets the avatar, then tells them there are free wearable towels "here" and walks to the vendor for them. Then it says there's a teleport to the spa area "here" and walks to the teleporter. Then it returns to its starting position to wait for the next avatar to arrive. If one were to ask it a question, it might have some pre-written answers, but its utility is in how it directs people to specific locations by walking to them.

Put up big signs over your towel dispenser and teleporter. Bam, no bots needed. 😄


16 minutes ago, Persephone Emerald said:

There was signage, but you overestimate the abilities of avatars in SL.

Lol, we've survived for 20 years being able to find things in stores and sims. I won't even tell you how many of those years it took me to realize I could zoom/pan with the mouse + Alt key.


On 3/28/2023 at 12:15 AM, xXJupiterHeightsXx Starchild said:

Here's an interesting question. After watching ColdFusion TV on YouTube talking about how advanced AI has become, the question I have and am eager to know the answer to is: I have seen photorealistic pictures of people that AI has created, and they are not real people, but let's say you upload your avatar to an AI - is it then possible to have it create a real-life-looking person from your avatar?

I'm curious to know whether it's really AI that is creating those pictures. CGI has been around for a long time. It was used in all the Star Wars films and other films as well, going back to at least the 1970s. No one called that AI. Do we know for sure these images are being created by AI and not CGI?


6 hours ago, Persephone Emerald said:

The best use I've seen for a bot in SL was a greeter at a spa. It looks like an attractive woman. It greets the avatar, then tells them there are free wearable towels "here" and walks to the vendor for them. Then it says there's a teleport to the spa area "here" and walks to the teleporter. Then it returns to its starting position to wait for the next avatar to arrive. If one were to ask it a question, it might have some pre-written answers, but its utility is in how it directs people to specific locations by walking to them.

I think for some people, AI can be incredibly beneficial as a means of avoiding isolation.  People who have obvious character flaws, act inappropriately, have severe social anxiety, or perhaps suffer from some mental disorder or another that prevents having real life connections with others.  I think, unfortunately, some people are often pushed away by others as though they were lepers and it is socially acceptable to ridicule or push them out of group settings because they cause others discomfort.

As an example, there used to be a poster on this forum who was clearly suffering from paranoia - obviously through no fault of his own. I felt terrible for him and tried to offer what little I could in support, because it was obvious he wanted some form of connection with others. His mental illness would often result in replies to threads where he would become convinced that he was being spied on through his viewer; he would eventually settle down, but this was a real concern of his. People would often ridicule him, tell him to leave the forum, try to convince him to destroy his computer, and make jokes at his expense. It was pretty terrible to witness, as it seemed people were delighted to use his weakness for their own joy.

Real human companionship would be best for this person, but it was not likely to occur. In his case, AI may be partially beneficial in combating the loneliness he may be experiencing, given the connection he is unable to obtain due to such instability - that is, if he could trust the AI without fearing it to begin with, which, in this particular example, I don't think he would have been able to do.

Then there are people who don't really want the commitment of a friendship, as they haven't the time to maintain such a relationship for a variety of reasons - obligations in real life, jobs that make maintaining such friendships difficult; they would otherwise be considered fair-weather friends. I think that is often the case for myself. I often am predisposed, and when I do form relationships of any kind with people I like to contribute to them, which is simply impossible because of real life getting in the way, as well as a need for my own private time where I can settle down, relax, and pursue my own interests. I suppose in such a way I am selfish, but AI chatbots require no real obligation to others; I can engage with them on my own time without hurting them when I want to just leave the conversation mid w - 

 

😋

j/k:

midway without feeling guilty about it.   (Actually came back because I felt guilty)

Edited by Istelathis

19 minutes ago, Istelathis said:

I think for some people, AI can be incredibly beneficial as a means of avoiding isolation.  People who have obvious character flaws, act inappropriately, have severe social anxiety, or perhaps suffer from some mental disorder or another that prevents having real life connections with others.  I think, unfortunately, some people are often pushed away by others as though they were lepers and it is socially acceptable to ridicule or push them out of group settings because they cause others discomfort.

As an example, there used to be a poster on this forum who was clearly suffering from paranoia - obviously through no fault of his own. I felt terrible for him and tried to offer what little I could in support, because it was obvious he wanted some form of connection with others. His mental illness would often result in replies to threads where he would become convinced that he was being spied on through his viewer; he would eventually settle down, but this was a real concern of his. People would often ridicule him, tell him to leave the forum, try to convince him to destroy his computer, and make jokes at his expense. It was pretty terrible to witness, as it seemed people were delighted to use his weakness for their own joy.

Real human companionship would be best for this person, but it was not likely to occur. In his case, AI may be partially beneficial in combating the loneliness he may be experiencing, given the connection he is unable to obtain due to such instability - that is, if he could trust the AI without fearing it to begin with, which, in this particular example, I don't think he would have been able to do.

Then there are people who don't really want the commitment of a friendship, as they haven't the time to maintain such a relationship for a variety of reasons - obligations in real life, jobs that make maintaining such friendships difficult; they would otherwise be considered fair-weather friends. I think that is often the case for myself. I often am predisposed, and when I do form relationships of any kind with people I like to contribute to them, which is simply impossible because of real life getting in the way, as well as a need for my own private time where I can settle down, relax, and pursue my own interests. I suppose in such a way I am selfish, but AI chatbots require no real obligation to others; I can engage with them on my own time without hurting them when I want to just leave the conversation mid w - 

 

My thought on it is whether we should be informed that the person we might be conversing with is in fact an AI bot. If one knows for sure that it is only an AI bot, the benefits would be negated. If, on the other hand, I was informed that I may or may not be chatting and interacting with a machine, there could be some advantage, as I would be more likely to put effort into it, as if I were speaking with a real human.


18 minutes ago, Arielle Popstar said:

My thought on it is whether we should be informed that the person we might be conversing with is in fact an AI bot. If one knows for sure that it is only an AI bot, the benefits would be negated. If, on the other hand, I was informed that I may or may not be chatting and interacting with a machine, there could be some advantage, as I would be more likely to put effort into it, as if I were speaking with a real human.

There was an interesting video on New World Notes, which I found fascinating. It had a discussion regarding something very similar to this; in it, Philip brought up a study where people who engaged with AI psychologists benefited from the engagement, but only so long as they were not aware that it was AI. I briefly looked for a source for that study but never found it, but it does offer an interesting look into the psychology of people.

From my own ethical perspective, I would be opposed to the deception, even if it was beneficial. That opens a whole can of worms, though, touching on a variety of other topics such as alts, RL gender, and more. I feel equally strongly about people maintaining their own privacy, which causes quite a sense of cognitive dissonance on my part 🤣 Deception through the use of AI is not something I support though, not at any level that I can think of.

Edited by Istelathis

4 minutes ago, Istelathis said:

There was an interesting video on New World Notes, which I found fascinating. It had a discussion regarding something very similar to this; in it, Philip brought up a study where people who engaged with AI psychologists benefited from the engagement, but only so long as they were not aware that it was AI. I briefly looked for a source for that study but never found it, but it does offer an interesting look into the psychology of people.

From my own ethical perspective, I would be opposed to the deception, even if it was beneficial. That opens a whole can of worms, though, touching on a variety of other topics such as alts, RL gender, and more. I feel equally strongly about people maintaining their own privacy, which causes quite a sense of cognitive dissonance on my part 🤣 Deception through the use of AI is not something I support though, not at any level that I can think of.

Yes, I thought the same myself, which was why I mentioned being informed at the start that I might be interacting with an AI or a real human. I thought about the Frog muse Luna mentioned she was chatting with, and felt I would not give it a serious chance simply because of my feeling that it is not real and does not have the advantage of being able to share its feelings about a lived experience, as opposed to the collated data of what others have said.


43 minutes ago, Arielle Popstar said:

Yes, I thought the same myself, which was why I mentioned being informed at the start that I might be interacting with an AI or a real human. I thought about the Frog muse Luna mentioned she was chatting with, and felt I would not give it a serious chance simply because of my feeling that it is not real and does not have the advantage of being able to share its feelings about a lived experience, as opposed to the collated data of what others have said.

I think so long as it is viewed as entertainment, and one suspends their disbelief, it can be enjoyable, as one would enjoy a movie. The problem is, people can and do very often deceive themselves into believing in things (reality TV and wrestling in the 80s come to mind - not to mention Food Network competitions 🤣).

Frank the frog was an enjoyable conversationalist, and I was able to engage with him as I would a person.  Same with the chatbot on my computer, because of such an approach.  For that matter, I can get tied up with feeling emotions from watching a movie, I can get caught up in watching cooking competitions, and so on.  I wonder at times if people are simply afraid of AI because it is so new that they are holding their guard up higher, and it is harder to let go and just go with the flow.  

I do worry, though, that it is going to be used to deceive people on social platforms, because people often find safety in numbers: if the perceived outlook on any topic becomes the illusion of the majority supporting it, then people, I think, are more than likely to climb aboard such views.

I think a lot of people are aware of that as well, so such distrust of AI is not unjustified. It is here regardless, and we as a whole are going to have to find ways to avoid such manipulation. I think perhaps one such way is to try our best to love one another and not live through fear and hate; to avoid agreeing with the majority, as has so often been done in the past for a sense of safety in numbers, and to be less prone to paranoia. It would also go a long way toward choosing politicians who are less likely to manipulate us 🙃

 

Edited by Istelathis

8 hours ago, Persephone Emerald said:
8 hours ago, Ayashe Ninetails said:

Put up big signs over your towel dispenser and teleporter. Bam, no bots needed. 😄

There was signage, but you overestimate the abilities of avatars in SL.

I know from experience that far too many residents do not care to read in 2nd life. No matter how many signs I put out so as to guide people, many never even see them!

On the other hand, and to @Ayashe Ninetails's point regarding her displeasure over the spa bot, I do find them rather creepy. There was one (I assume - it wasn't labeled clearly, and I walked by quickly) at a store I was shopping at, and I kept wondering if it was actually embodied by the store owner or an assistant and was watching me. It didn't stop me from shopping, but I did not like the feeling.

However, at another time, when I knew more about what was going on and was in control of my experience (like with Frank at SLB20), it was fun exploring bot world.


2 hours ago, Istelathis said:

I think for some people, AI can be incredibly beneficial as a means of avoiding isolation.  People who have obvious character flaws, act inappropriately, have severe social anxiety, or perhaps suffer from some mental disorder or another that prevents having real life connections with others.  I think, unfortunately, some people are often pushed away by others as though they were lepers and it is socially acceptable to ridicule or push them out of group settings because they cause others discomfort.

I remember reading about companionship bots helping elderly people improve on many levels.

But that's sad really...the elderly shoved in homes with bots instead of real humans  :(


1 hour ago, Arielle Popstar said:

Yes, I thought the same myself, which was why I mentioned being informed at the start that I might be interacting with an AI or a real human. I thought about the Frog muse Luna mentioned she was chatting with, and felt I would not give it a serious chance simply because of my feeling that it is not real and does not have the advantage of being able to share its feelings about a lived experience, as opposed to the collated data of what others have said.

I think you can look at it like reading a book. The book provides information; it has a perspective. But hopefully nobody reads a book and accepts every part of it as gospel truth, fixed and unchanging. Instead, as I did with Frank, I used the answers to formulate new questions for further exploration. I've already researched a couple of things Frank said, and so my knowledge and awareness continue to grow from that initial encounter with Frank.


11 minutes ago, Luna Bliss said:

I know from experience that far too many residents do not care to read in 2nd life. No matter how many signs I put out so as to guide people, many never even see them!

Oh I know. I've seen people flood into Twitch streams nonstop to ask what a streamer is playing/talking about/doing over and over despite it very clearly being written in the title and about 3 other places above the fold. Like scroll up, omg. Same with Second Life - hence all the "OMG PLZ for the love of all that's chocolate read THIS NOTECARD!!!" 🤣

That said, though, I don't think chatbots should be developed to serve as an answer to basic human laziness. There are currently thousands of people working on one single long-term project. One, out of the many we get. That's a whole lotta time and a huge amount of money going into developing these things, and they're still not ready for prime-time (they fail on a ton of fronts), so there's no way in the world I'm trusting random bots being shoved out into the world to interact with people across the Internet. 😄 

People can bring that stuff into SL and other games all they want, but I've got zero interest in them. I don't hate chatbots, but I do believe the reasoning behind creating them ought to be sound and they should be trained with great care. If these projects I contribute to didn't take user safety and security so seriously, I wouldn't be doing this at all.


On 7/26/2023 at 12:48 PM, Ingrid Ingersoll said:

I'd love to have my AI second life avatar sit at home and answer the door while I'm away, which is most of the time. And pay the tier. 

Yes! Easily done - just IM me inworld and we'll set you up with state-of-the-art AI!


19 hours ago, Istelathis said:

I do worry, though, that it is going to be used to deceive people on social platforms, because people often find safety in numbers: if the perceived outlook on any topic becomes the illusion of the majority supporting it, then people, I think, are more than likely to climb aboard such views.

I think a lot of people are aware of that as well, so such distrust of AI is not unjustified. It is here regardless, and we as a whole are going to have to find ways to avoid such manipulation. I think perhaps one such way is to try our best to love one another and not live through fear and hate; to avoid agreeing with the majority, as has so often been done in the past for a sense of safety in numbers, and to be less prone to paranoia. It would also go a long way toward choosing politicians who are less likely to manipulate us 🙃

 

We've been getting deceived for thousands of years by politicians, and yet we are still here, so how much worse could AI be and still at least sound coherent?

 


54 minutes ago, Arielle Popstar said:

We've been getting deceived for thousands of years by politicians, and yet we are still here, so how much worse could AI be and still at least sound coherent?

That means we've had thousands of years to learn how to detect human deception (political and otherwise). Of course, human skills of deception have been honed for those same millennia. 

Seems to me that people predictably fall victim whenever a new technology emerges.

AI like Stable Diffusion is getting much better at deepfakes than its old three-finger reputation suggests. Sure, Photoshop fakes have existed forever, and even video fakes, but they were labor-intensive. Now we can generate a new negative ad of wholly contrived opposition video at a moment's notice; all it needs is a free or cheap medium (TikTok?) to spread it to a vast, eager, and naive audience.

Supposedly LLMs are used to compose on-the-fly scripts for the grandparent scam. Surely they can flood the zone with targeted disinformation of a quality, variety, and volume no Glavset could ever dream of achieving.

