
Let's talk about ChatGPT and AI: is it possible for AI to take our avatars and turn them into real people?



1 hour ago, animats said:

Skin in SL, even with PBR, is not that good. We need subsurface scattering to get out of the uncanny valley. Without that, skin is in a range from "dead" to "plastic".

Most, if not all, SL avatars still use the Robin Wood templates from 2005 as a base UV wrapping. LL has said that 1024 by 1024 images are the maximum size we can use. That is very limiting given today's demands for detail. Our graphics cards are thousands of times more powerful than the computing power we had in the '60s when we went to the moon (yes, we did, don't be a *bleep* ;)). But SL is slow in moving forwards into the future when it comes to this, and you can't blame LL for it. This was a standard back then and we are still using and building upon it. It's not like you can change it in an instant with an update, the way a game could that isn't almost 100% user-created content.

This type of issue is a major, headache-inducing problem LL is constantly fighting with when it comes to introducing new stuff or updating.

 

I don't want to deviate from the topic, but I just wanted to reply to this.


24 minutes ago, Lindal Kidd said:

I'm currently on a 93 day success streak, with an equal number of "wins" at Spelling Bee.

Do I pass?

You got me beat. I'm now on a lucky 13-day streak as of today's #648, so I must have gone bust on #635, when I had a 100-plus-day streak going. There's a handy website that keeps an updated list of all the words by day, so I know my undoing was "CIDER".

SPOILER WARNING: This is the site, but it includes the current day's word, so beware!


1 hour ago, Lindal Kidd said:

I'm currently on a 93 day success streak, with an equal number of "wins" at Spelling Bee.

Do I pass?

Yeah, you pass. I had a run in the 90s until I went to Japan in February, got COVID, and was isolated. Somewhere in that mess, I missed a day, so I had to start all over again. I'm on day #34 again. 😒


On 3/28/2023 at 12:15 AM, xXJupiterHeightsXx Starchild said:

Here's an interesting question. After watching ColdFusion TV on YouTube talking about how advanced AI has become, the question I have and am eager to know the answer to is this: I have seen photorealistic pictures of people that AI has created, and they are not real people, but let's say you upload your avatar to an AI. Is it then possible to have it create a real-life-looking person from your avatar?

I made a ChatGPT robot with Pantera's latest script that enables it to work with SL. And I have to say it's not all that. I can't see adding it to my avatar, although the thought of having a "two-way wrist watch" that could spout social inanities during gallery openings while I'm AFK is attractive! Before you imagine this opens up vistas for AFK Motels, keep in mind that OpenAI has a TOS that does not allow adult speech.

So if you use the free version, the chief problem I see with it is that it doesn't really learn in SL or accumulate memory; those are paid-for plug-ins that I'm not sure work with SL yet. So I try putting it in quests, where I want it to learn to converse about possible quest hints; I put it in a wolf. But it never grasps that it should talk about potions or answer questions about them. Instead it spouts factoids like "There are only 23,000 wolves left in Montana." That's still kind of fun, but not quite what I was hoping for.

Be careful about putting out a bunch of them. A friend of mine did that with different accounts, and they spout the most insane rapid-fire and voluminous nonsense, i.e. not answering your SL-based questions but telling you about vitamin regimens or technical parameters; it just puts in random stuff and then simply delivers boxes when it is tired or overwhelmed.

I then tried to use it to help me with building things: I tried to get it to tell me the exact coordinates I need in a temp-on-rez script to get an object of XYZ size and location to rez out of XYZ prim.

It would sometimes not seem to understand what "rez" means and make up stuff; when it finally gave the right answer, it didn't remember it next time. It also gave wrong answers.
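
For reference, the sort of answer I was after looks roughly like the minimal sketch below. This is only an illustration, not what ChatGPT gave me: the inventory name "TempBox" is a placeholder, and the temp-on-rez behaviour comes from ticking "Temporary" on the rezzed object itself, not from anything in the script.

// Minimal rezzer sketch: rez an inventory object 2 m above this prim on touch.
// "TempBox" is a placeholder name; mark that object "Temporary" so it is temp-on-rez.
default
{
    touch_start(integer total_number)
    {
        vector offset = <0.0, 0.0, 2.0>;                  // offset in the prim's local frame
        vector rezPos = llGetPos() + offset * llGetRot(); // rotate the offset into region coordinates
        llRezObject("TempBox", rezPos, ZERO_VECTOR, llGetRot(), 0);
    }
}

The offset-times-rotation step is what converts a local offset into the region coordinates llRezObject expects.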

And of course it hallucinates, as I've seen it do on asking it to write my SL or RL biographies -- it manufactures stuff out of whole cloth based on some random Internet search that "sounds right".

I found it was best for making pictures using DALL-E.

 


  • 3 months later...

My definition of "real" makes me say no.

The thread and the pic that Scylla posted make me think we'll soon see many more "Real" Life pictures in the RL profile tab, where before there were none, or only SL pics. This might also impact the "Relationships" category soonish and inspire lively discussions.

YouTube, or my browser, I'm not sure which, lately bombards me with ads for a "Face Beautifying" app (interestingly, I often find the "before" more attractive than the "after", but maybe that's just down to bad handling of the app by the user). I also had a video suggestion that I actually watched, to see what's possible these days, in which an "average-looking woman of a certain age" showed her actual self and demonstrated, live, the app she constantly uses to beautify her videos, showing the different effects and degrees. It was... eye-opening, and henceforth I'll just assume that nothing is ever "real" until proven otherwise, like meeting the person in the flesh. And then I'll assume they had intensive beauty surgery, corrective surgery, or whatever it's called when it's not to fix an accident. And, of course, you could argue endlessly about whether "fixing" a nose that isn't acceptable to someone according to their current beauty standards makes them less or more real, or whatever.

Sometimes it does make you wonder whether we're heading into a worldwide world of hikikomori, because it will just be too disappointing to step outside and into reality.

Maybe a new kind of "includes artificially enhanced pics and footage" disclaimer should become the norm, like those "includes paid advertisements" ones, so there won't be generations growing up with the impression that everyone, everywhere other than where they live, is hyper-beautiful.

However, as long as it's clear that it's not real, it sure could be a lot of fun if it could be applied "live" and without lag in-world; roleplay with hyper-realistic-looking avatars, including fantasy creatures and animals, could be awesome. Maybe rather frightening, too, though, depending...

Edited by InnerCity Elf

Society is still having challenges with the idea of what is and is not real. Then defining what is and is not alive is another challenge. It is sort of the Pluto is/isn't a planet problem.

Love the Homer picture!

Stone Johnson has Adult Sex Robots in SL (Marketplace). The in-world store has demo models walking the floor. You can chat with them (you can get down and dirty... right there O.O); it is text and voice coming from the bot. It is GPT-3.x (I think 3.5) that runs the voice and chat text. One has to type in chat for the bot to hear you.

Stone's challenge for sexbots is convincing GPT to talk about sex. The demos I saw show he has figured out a way. He is working on getting his bots to be dominant; as it is now, they are very submissive. However, there is a model that will take over your RLV collar and does a good job of taking over to force a coupling with its... his... target. There were some glitches. But the bots were fun and novel.

I suspect getting GPT to draw Homer's private guy parts might run into some roadblocks. And... if GPT is going to remember people... do we need to be concerned about what it thinks of us?

Since these bots are the first try at blending SL and ChatGPT in an interactive exchange I expect it to be bumpy. It was.

If you are into talking dirty... that tended to fail in my experiments. While there is no doubt about what the bot was intending to say or do (well... it is obvious what the bot is doing visually), the chat is neither crude nor vulgar. So... think of a high-end call girl who is not going to say anything crude.

Will these bots take over the AFK Sex Bot places? I would think a good number of AFK Bot users would go for these. So I am thinking probably. But I also expect there will be a cost for using such bots.

Maybe one day SL will be filled with bots talking to bots.... 🙄


On 3/28/2023 at 5:15 AM, xXJupiterHeightsXx Starchild said:

let's say you upload your avatar to AI, is it then possible to have it create a real-life-looking person from your avatar?

Sure is!  Here's one of me, along with the original pic it was generated from.

[Two images: the AI-generated portrait, and the original SL snapshot it was generated from]

I had to cheat slightly; the original AI render omitted the tattoo, which I added back in from the original SL pic.


On 7/14/2023 at 3:13 PM, Nalates Urriah said:

Society is still having challenges with the idea of what is and is not real. Then defining what is and is not alive is another challenge. It is sort of the Pluto is/isn't a planet problem.

Love the Homer picture!

Stone Johnson has Adult Sex Robots in SL (Marketplace). The in-world store has demo models walking the floor. You can chat with them (you can get down and dirty... right there O.O); it is text and voice coming from the bot. It is GPT-3.x (I think 3.5) that runs the voice and chat text. One has to type in chat for the bot to hear you.

Stone's challenge for sexbots is convincing GPT to talk about sex. The demos I saw show he has figured out a way. He is working on getting his bots to be dominant; as it is now, they are very submissive. However, there is a model that will take over your RLV collar and does a good job of taking over to force a coupling with its... his... target. There were some glitches. But the bots were fun and novel.

I suspect getting GPT to draw Homer's private guy parts might run into some roadblocks. And... if GPT is going to remember people... do we need to be concerned about what it thinks of us?

Since these bots are the first try at blending SL and ChatGPT in an interactive exchange I expect it to be bumpy. It was.

If you are into talking dirty... that tended to fail in my experiments. While there is no doubt about what the bot was intending to say or do (well... it is obvious what the bot is doing visually), the chat is neither crude nor vulgar. So... think of a high-end call girl who is not going to say anything crude.

Will these bots take over the AFK Sex Bot places? I would think a good number of AFK Bot users would go for these. So I am thinking probably. But I also expect there will be a cost for using such bots.

Maybe one day SL will be filled with bots talking to bots.... 🙄

Thanks for the kind words, Nal! I love the idea that my sexbots are high-end call girls! Just what I want them to be! I was just chatting with a bartender using one of my systems that the owner had set up to be a, uh, sassy jerk, and he was so annoying! Of course you can set up their personalities to be whatever you like (there is a text "bio" in my systems that tells the AI servers how they should respond). I envision SL populated by many bots of this type, but because someone will have to pay for them, I would guess there would be a kind of natural balance of human to bot minds. We can think of a new race - Homo botticus - who could advise us on the best way to write that darn LSL code and then finish up with a happy ending!

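For anyone wondering what that "bio" amounts to in practice, here is a very rough, hypothetical sketch of the general pattern (not my actual system): a listener forwards whatever the bot hears, together with the persona bio as a system-style instruction, to an external relay, then says whatever comes back. The relay URL and the JSON field names are placeholders.

// Rough sketch only: forward heard chat plus a persona "bio" to an assumed external
// relay (RELAY_URL and the JSON keys are placeholders), then say the reply in chat.
string RELAY_URL = "https://example.com/chat-relay";
string BIO = "You are a friendly SL bartender. Stay in character and keep replies short.";

default
{
    state_entry()
    {
        llListen(0, "", NULL_KEY, "");   // listen on public chat
    }

    listen(integer channel, string name, key id, string message)
    {
        string body = llList2Json(JSON_OBJECT, ["system", BIO, "user", message]);
        llHTTPRequest(RELAY_URL,
            [HTTP_METHOD, "POST", HTTP_MIMETYPE, "application/json"],
            body);
    }

    http_response(key request_id, integer status, list metadata, string reply)
    {
        if (status == 200) llSay(0, reply);   // no memory: each exchange stands alone
    }
}

In a setup like this, changing the personality is just a matter of editing the BIO text.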


18 hours ago, musichero said:

Thanks for the kind words, Nal! I love the idea that my sexbots are high-end call girls! Just what I want them to be! I was just chatting with a bartender using one of my systems that the owner had set up to be a, uh, sassy jerk, and he was so annoying! Of course you can set up their personalities to be whatever you like (there is a text "bio" in my systems that tells the AI servers how they should respond). I envision SL populated by many bots of this type, but because someone will have to pay for them, I would guess there would be a kind of natural balance of human to bot minds. We can think of a new race - Homo botticus - who could advise us on the best way to write that darn LSL code and then finish up with a happy ending!


Interesting! And maybe a little disquieting.

To be clear, I have zero moral qualms about the use of AI to produce "realistic" partners for virtual sex. I mean, for me, sex is only meaningful -- and by that, I mean "a turn on" -- if I know that I'm engaged with a real person, but pornography has always been about fantasy, and doesn't involve real engagement either. One area of concern might be that sexbots are simply reproducing particular toxic social attitudes about sex and gender (for instance, reinforcing r*pe myths), but so long as the person using them knows that they are engaged with a programmed entity rather than a real person, that's not much different in effect than RP.

What I find disquieting is the possibility that our engagement with AI bots is going to gradually displace our interactions with real people. That's disturbing on two counts: first, that we are no longer going to be exposed to the real perspectives, views, and experiences of actual people, and second, that such bots can be used (as they are now on platforms such as Twitter) to push particular ideologies or perspectives, whether through misinformation or not.

I'll lay aside the second of these for the moment, as it's a larger issue relating to social media and online information in general.

My first named concern is maybe more subtle, but in some ways more potentially worrisome. We learn from our interactions with others. They teach us that there are other perspectives, give us access to new insights that might not otherwise be available to us, and train us in empathy and tolerance. An AI can only provide a simulacrum of such an experience.

We're all familiar with the idea of the "filter bubble": that algorithms tend to feed us more of what we already believe or like, rather than disrupting and questioning our biases and beliefs through exposure to difference. Someone here, a while ago, actually suggested that it would be a great thing if we could produce SL bots who reflected our own likes and dislikes -- who would, it was literally suggested, be exactly like ourselves. What a "perfect friend" that would be! Like talking to a mirror . . .


And because AI harvests its "ideas" and "attitudes" and discourse from the most visible and popular existing online sources, it's always going to reflect the "status quo" in terms of perspective. This is a recipe for personal and social stagnation.

My best friend in SL is someone who is, in most regards, entirely unlike me. I'm trained in the humanities, and she's an engineer. I'm straight; she's gay. I'm "vanilla," and she's a Domme. Her popular culture references tend to be from the 40s and 50s, while mine are more current. She has told me that no one sends her scurrying to Google Search more than I do, and the same is true on my side: I have learned so much from her. And the reason I have is because she is so very different than I am in many respects.

Why would I want to exchange her, and her insights and perspectives (not to mention her very human warmth and personality) for a programmed entity that feeds me what I want to hear?

Our bot overlords aren't going to rule over us in some sort of obviously authoritarian manner. They are going to be our best friends, our lovers, our "teachers" -- and they are going to oppress us by keeping us comfortably cocooned in unchallenged and unchallenging "truisms" that reflect the world, not as it is, but as we'd secretly like it to be.


16 minutes ago, Scylla Rhiadra said:

And because AI harvests its "ideas" and "attitudes" and discourse from the most visible and popular existing online sources, it's always going to reflect the "status quo" in terms of perspective. This is a recipe for personal and social stagnation.

   You mean bots exposed to too much vanilla are going to act offended when people try to engage in CNC fantasies with them?!



Just now, Orwar said:

   You mean bots exposed to too much vanilla are going to act offended when people try to engage in CNC fantasies with them?!


Well I suppose arguably the entire notion of "consent" becomes rather moot in the context of a bot. Which is another interesting point.

 Until we reach the moment of The Singularity, of course -- at which point we're all probably dead anyway! 😏

I have known people in SL who treat others as though they were disposable NPCs. For that matter, in RL I've seen people treat cashiers as though they were vending machines, or someone holding a door open for them as though they were an electronic door-opener, so . . .

Personally, yeah, I'd like to see a mechanism for consent built into a sex bot engaged in CNC -- not because it is necessary (the bot being in any case incapable of consenting) but simply to reinforce the point that consent is a vital component of human interactions -- which, of course, sex with a bot is simulating.

But whatevs. So long as people are well-educated about and aware of the importance of consent, it probably doesn't matter too much.


25 minutes ago, Scylla Rhiadra said:

What I find disquieting is the possibility that our engagement with AI bots is going to gradually displace our interactions with real people. That's disturbing on two counts: first, that we are no longer going to be exposed to the real perspectives, views, and experiences of actual people, and second, that such bots can be used (as they are now on platforms such as Twitter) to push particular ideologies or perspectives, whether through misinformation or not.

Then maybe we need to collectively ask ourselves what it is that brought us to a point where a preprogrammed bot is preferable to a live person.

If I may, I'd like to point out the contradiction here, where you imply that the ideologies and perspectives some people hold are not real, whereas to the contrary they are very much so and are held by a significant portion of the population. Are these not exactly the sort you propose we should be learning from elsewhere, rather than labelling them as mis- and disinformation?


1 minute ago, Love Zhaoying said:

Design bots to prefer chocolate over vanilla, with the caveat that "vanilla is a flavor too".

Oh, I'm pretty sure that there will be bots designed to prefer sh*t-flavoured ice cream. Which, again, whatevs. And a whole lot of flavours that are probably illegal in some countries.

O Brave New World!


Just now, Arielle Popstar said:

If I may, I'd like to point out the contradiction here, where you imply that the ideologies and perspectives some people hold are not real, whereas to the contrary they are very much so and are held by a significant portion of the population. Are these not exactly the sort you propose we should be learning from elsewhere, rather than labelling them as mis- and disinformation?

I am implying no such thing. A perspective, whether left, right, or centrist, is not "real" if it's being delivered to us by an algorithm, because a bot is incapable of having a "perspective" at all.

The issue isn't the particular ideological flavour: it's just as possible (and in fact, does of course happen) to have algorithms serve up left wing opinions as right wing ones. The issue is whether that opinion is derived from real lived experience and human thought, as opposed to constructed by a literally non-living, non-thinking bundle of code.

4 minutes ago, Arielle Popstar said:

Then maybe we need to collectively ask ourselves what it is that brought us to a point where a preprogrammed bot is preferable to a live person.

This is not, I'd submit, a new phenomenon, per se. We just have a new means at our disposal to avoid real people.


On 7/14/2023 at 4:13 PM, Nalates Urriah said:

Maybe one day SL will be filled with bots talking to bots.... 🙄

I've seen it suggested that AI bots -- built from ChatGPT, or using Midjourney or whatever -- are already starting to essentially reproduce themselves because they are harvesting their own "products." So, AI "art" based not on RL art, but on . . . other AI art. Opinions or information built from the opinions and information of other AI machines.

I suppose it'll be a bit like a game of telephone, because AI now tends to introduce a degree of noise into anything it reproduces: subtle or not-so-subtle errors ("Why does that woman have six fingers!") and biases that will, I suppose, become hardened through reproduction and repetition. The ultimate echo chamber, but one empty of human witness, and that resounds with increasingly distorted screaming . . .

If an AI-driven bot falls in a forest . . . does it matter?

Edited by Scylla Rhiadra

2 minutes ago, Scylla Rhiadra said:

I am implying no such thing. A perspective, whether left, right, or centrist, is not "real" if it's being delivered to us by an algorithm, because a bot is incapable of having a "perspective" at all.

The issue isn't the particular ideological flavour: it's just as possible (and in fact, does of course happen) to have algorithms serve up left wing opinions as right wing ones. The issue is whether that opinion is derived from real lived experience and human thought, as opposed to constructed by a literally non-living, non-thinking bundle of code.

This is not, I'd submit, a new phenomenon, per se. We just have a new means at our disposal to avoid real people.

OK, my thought is that algorithms simply serve up the ideas people have; they don't invent the thoughts in the first place. On most of the popular social media platforms originating in the West, centrist and right-wing opinions are suppressed, other than on Twitter at least.

I wish I had saved an opinion piece I ran across last week, where a leftist woman was describing what characteristics she wanted in a partner, and they all coincided with a typically right-wing male. When that was pointed out to her, she admitted she had already realized it, and that this was why she remained single: she couldn't bear the idea of having a partner on that side of the ideological line, even if that was what she was attracted to.

 


48 minutes ago, Scylla Rhiadra said:

Interesting! And maybe a little disquieting.

To be clear, I have zero moral qualms about the use of AI to produce "realistic" partners for virtual sex. I mean, for me, sex is only meaningful -- and by that, I mean "a turn on" -- if I know that I'm engaged with a real person, but pornography has always been about fantasy, and doesn't involve real engagement either. One area of concern might be that sexbots are simply reproducing particular toxic social attitudes about sex and gender (for instance, reinforcing r*pe myths), but so long as the person using them knows that they are engaged with a programmed entity rather than a real person, that's not much different in effect than RP.

What I find disquieting is the possibility that our engagement with AI bots is going to gradually displace our interactions with real people. That's disturbing on two counts: first, that we are no longer going to be exposed to the real perspectives, views, and experiences of actual people, and second, that such bots can be used (as they are now on platforms such as Twitter) to push particular ideologies or perspectives, whether through misinformation or not.

I'll lay aside the second of these for the moment, as it's a larger issue relating to social media and online information in general.

My first named concern is maybe more subtle, but in some ways more potentially worrisome. We learn from our interactions with others. They teach us that there are other perspectives, give us access to new insights that might not otherwise be available to us, and train us in empathy and tolerance. An AI can only provide a simulacrum of such an experience.

We're all familiar with the idea of the "filter bubble": that algorithms tend to feed us more of what we already believe or like, rather than disrupting and questioning our biases and beliefs through exposure to difference. Someone here, a while ago, actually suggested that it would be a great thing if we could produce SL bots who reflected our own likes and dislikes -- who would, it was literally suggested, be exactly like ourselves. What a "perfect friend" that would be! Like talking to a mirror . . .


And because AI harvests its "ideas" and "attitudes" and discourse from the most visible and popular existing online sources, it's always going to reflect the "status quo" in terms of perspective. This is a recipe for personal and social stagnation.

My best friend in SL is someone who is, in most regards, entirely unlike me. I'm trained in the humanities, and she's an engineer. I'm straight; she's gay. I'm "vanilla," and she's a Domme. Her popular culture references tend to be from the 40s and 50s, while mine are more current. She has told me that no one sends her scurrying to Google Search more than I do, and the same is true on my side: I have learned so much from her. And the reason I have is because she is so very different than I am in many respects.

Why would I want to exchange her, and her insights and perspectives (not to mention her very human warmth and personality) for a programmed entity that feeds me what I want to hear?

Our bot overlords aren't going to rule over us in some sort of obviously authoritarian manner. They are going to be our best friends, our lovers, our "teachers" -- and they are going to oppress us by keeping us comfortably cocooned in unchallenged and unchallenging "truisms" that reflect the world, not as it is, but as we'd secretly like it to be.

Interesting perspectives.

I do notice people talking about what AI is going to do. The actuality is, it will do what the programmers set it up to do. The information push is that AI will be able to think for itself. The benefit of that framing is that no one looks at those controlling the AI, just the AI. But, as musichero pointed out, there are settings to control how the AI responds. You can see how Microsoft and Google have set limits on what their AI units will say. People are testing Bard and ChatGPT in the political arena, asking the AI to write about Biden and Trump. You can try it yourself and start to see that AI has interesting biases and PC limits.

The big problem with AI is who controls it.


Just now, Arielle Popstar said:

OK, my thought is that algorithms simply serve up the ideas people have; they don't invent the thoughts in the first place.

Yes, of course they do -- although in reconstructing them, they produce new variations (like the woman with six fingers I mentioned above). They aren't just "sources of information" like an encyclopedia entry: they synthesize and rework, and they simulate analysis and logic. And the key word here is "simulate."

2 minutes ago, Arielle Popstar said:

On most of the popular social media platforms originating in the West, centrist and right-wing opinions are suppressed, other than on Twitter at least.

Uh huh . . . 🙄

2 minutes ago, Arielle Popstar said:

I wish I had saved an opinion piece I ran across last week, where a leftist woman was describing what characteristics she wanted in a partner, and they all coincided with a typically right-wing male. When that was pointed out to her, she admitted she had already realized it, and that this was why she remained single: she couldn't bear the idea of having a partner on that side of the ideological line, even if that was what she was attracted to.

Does this anecdote have a point, Arielle?

One of my best friends in SL is an ex-Texas cop who votes Republican (or did, until Trump). So what?


1 minute ago, Nalates Urriah said:

The big problem with AI is who controls it.

Absolutely, although we've begun to see instances of things like ChatGPT producing results that were not foreseen (or likely to be approved of) by its programmers.

I think it's an incredibly complicated set of mechanisms at work here . . . and control over AI is unquestionably one, but only one, of the biggest issues.


3 minutes ago, Arielle Popstar said:

OK, my thought is that algorithms simply serve up the ideas people have; they don't invent the thoughts in the first place. On most of the popular social media platforms originating in the West, centrist and right-wing opinions are suppressed, other than on Twitter at least.

I wish I had saved an opinion piece I ran across last week, where a leftist woman was describing what characteristics she wanted in a partner, and they all coincided with a typically right-wing male. When that was pointed out to her, she admitted she had already realized it, and that this was why she remained single: she couldn't bear the idea of having a partner on that side of the ideological line, even if that was what she was attracted to.

 

I agree to a point. But even the old ELIZA chat program would come up with novel combinations that presented new concepts. I suspect AI is doing enough of that to alarm those working with it. Not having good definitions of what is alive or self-aware leaves room for developers to misunderstand what they are seeing.

