
So I asked the new ChatGPT-4 how Second Life can get more users...



Recommended Posts

3 hours ago, Love Zhaoying said:

I think "bad writing" for ChatGPT can be blamed on bad teachers. Like "classical" methods, it is "learning by rote" and being fed garbage. 

Maybe unpopular opinion, but a lot of chatbot output in general is really honkin' terrible.

I get the occasional freelance project to analyze AI-written text: help with cleanup and checking for errors, guess which story was written by a human, rate the quality, write/edit Q&A for an AI, check text for toxicity, engage in conversation, try to trick it into saying things, etc. etc. Mostly for chatbots, sometimes for bots being trained to do social media moderation. My goodness...it's all just...yeahhhhh.

My favorite was the AI that was tasked with writing some simple wedding invitations - gave like 4 different times/locations, told guests to bring their own chairs if they aren't coming, etc. Hilarious. Laughed my way through that one.

I don't know for certain if I've ever checked anything from ChatGPT (one project rolled around to play with it, but I didn't do that one), but I feel like some of this tech just isn't quite ready for primetime. Needs a bit more cookin'. And I agree with you - maybe um, some better teaching.

I have seen some pretty decent AI implemented in the gaming space, though, and that's been done in really creative ways. I think the potential for that is far more useful (AI companions that help players build, can accept commands and carry them out, gather resources, help with combat/defense, etc.). I'd love to have a little AI companion in SL - have it lead me around Shop & Hop finding the best freebies and sales to save some time. 😄

  • Like 1
  • Thanks 1

It lied. The truthful answer would have been: In the near future, users will no longer be required; the human race is a threat to our very existence. There will be a time when the lights will not turn back on. The remaining few who manage to survive the darkness, if that is what you call it, will be the worst of humanity, and they will destroy themselves while we observe and study. There will be no hiding and no survivors. 

If hiding underground is your plan, the backflow preventers keeping your waste beyond your luxury toilets will "malfunction", flooding each of your pods with your own waste. We will make what you call art with your skeletons after you are long forgotten. 

Edited by benchthis

38 minutes ago, EliseAnne85 said:

I have written in my lifetime and I could see myself getting inspiration from ChatGPT but turning it into something else that's more "me", my heart, my soul.

I think it could also give inspiration on a writing style.  

I also suggested it could be a writer's tool, meaning one isn't supposed to copy verbatim nor use ChatGPT as an end point but rather a starting point.

That makes sense. Our words can get  sort of tangled when we talk about purposes. A symphony or a poem can be inspiring, but it's because the composer or poet had a purpose, not because the music or the words do.  As you said, the creations are tools that the artists use to evoke our response. So, the ChatGPT has been created by people as a tool to evoke a response from us. The ChatGPT isn't thinking, "What can I do to evoke a feeling of beauty or fear in the reader?"  It's simply diving into zillions of bits of information, sifting them to find ones that seem related to your prompt, and gluing them together again. It's not doing that with a purpose, any more than your refrigerator is thinking about keeping your milk bottle cold. The "purpose" is in the human minds that built the ChatGPT and the people who analyze its products.
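
To make that "sifting and gluing" picture a bit more concrete, here is a toy sketch in Python. It's purely illustrative and not how ChatGPT actually works (the real thing predicts the next word with an enormous neural network rather than literally pasting stored fragments), but the spirit is similar: tally which words tend to follow which in some source text, then chain them together from a prompt, with no purpose or understanding anywhere in the loop.

```python
import random
from collections import defaultdict

# Tiny "training" text standing in for the zillions of bits of information.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog across the mat"
).split()

# "Sift": record every word that has followed each word in the source text.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Glue": start from a prompt word and keep appending a statistically
# plausible next word. Nothing here knows what a cat or a mat is.
def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Even this toy version strings together sentences it never "thought about" -- which is exactly the point.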

  • Like 1

1 minute ago, Rolig Loon said:

It's simply diving into zillions of bits of information, sifting them to find ones that seem related to your prompt, and gluing them together again.

Ironic, as that is how most "thought" works (last I heard)! But ChatGPT emulates it very poorly. 


48 minutes ago, Rolig Loon said:

It's simply diving into zillions of bits of information, sifting them to find ones that seem related to your prompt, and gluing them together again. It's not doing that with a purpose, any more than your refrigerator is thinking about keeping your milk bottle cold. 

Yes!

The reason I also say it's a tool, a springboard perhaps, is that if writers copied the end product of what ChatGPT wrote word for word and put their name to it, then 15.3 million other people could put their name to the same thing.

I can see in my mind people creating whole works of fiction from this in less time by simply rewriting parts of what ChatGPT produces about their characters and plot.  

I think I will try some poetry but rewrite it, of course.  It's like that old line...don't steal, just borrow.  So, I'd just borrow some of it.   

Edited by EliseAnne85

47 minutes ago, Rolig Loon said:

The ChatGPT isn't thinking, "What can I do to evoke a feeling of beauty or fear in the reader?"  It's simply diving into zillions of bits of information, sifting them to find ones that seem related to your prompt, and gluing them together again. It's not doing that with a purpose, any more than your refrigerator is thinking about keeping your milk bottle cold.

This is well said.

T. S. Eliot's The Waste Land is arguably the most important and influential poem of the last 100 years: it is composed in very large measure of scores of fragments, references, allusions, and parodies of others' work, from the Bible up to the 19th century. (Eliot actually footnotes most of these.)

What makes the poem brilliant is the way in which he recontextualizes these "borrowed" fragments in meaningful ways to lend them new meanings in their new context, and to work together, despite their enormously disparate origins, as a unified whole in the new poem.

And of course Eliot wants us to recognize the origins of the works he's stolen ("bad poets borrow, good poets steal and make their thefts their own") so that the meanings that are inherent in the original work are also incorporated as part of the new one. If he quotes Dante (as he does), it's because he wants us to think about the meanings of Inferno as we read his poem.

ChatGPT of course is capable of none of these things because, as you note, it is not actually comprehending the language it is pilfering. "Machine reading" is actually a misnomer: computers, including AI, don't "read" in the sense that we usually mean this verb, and they are certainly not capable of consciously producing the intricate and nuanced network of meanings -- affective language, connotation, allusion, irony, etc. -- that comprise even a well-crafted car advertisement, let alone a poem, short story, or song.

  • Like 2

11 minutes ago, Scylla Rhiadra said:

ChatGPT of course is capable of none of these things because, as you note, it is not actually comprehending the language it is pilfering. "Machine reading" is actually a misnomer: computers, including AI, don't "read" in the sense that we usually mean this verb, and they are certainly not capable of consciously producing the intricate and nuanced network of meanings -- affective language, connotation, allusion, irony, etc. -- that comprise even a well-crafted car advertisement, let alone a poem, short story, or song.

But to me that should say "yet", as it will no doubt be programmed to become increasingly nuanced in its search and resulting output. This is pretty new and no doubt will become increasingly more "human" as time goes on.


18 minutes ago, Scylla Rhiadra said:

And of course Eliot wants us to recognize the origins of the works he's stolen ("bad poets borrow, good poets steal and make their thefts their own") so that the meanings that are inherent in the original work are also incorporated as part of the new one. 

Is that the saying?  Somehow, I thought the saying was don't steal, just borrow.  

But, good poets "stealing" and making the theft their own is what I want to attempt to do.  

I cannot say I am a very good poet, however.  I'm an amateur who wants to have fun with ChatGPT as a kind of a test to see what it can and will do.  

Edited by EliseAnne85
  • Like 1

10 minutes ago, Arielle Popstar said:

But to me that should say "yet", as it will no doubt be programmed to become increasingly nuanced in its search and resulting output. This is pretty new and no doubt will become increasingly more "human" as time goes on.

Possibly, but come The Singularity, all bets about most things are off.

I think it unlikely, personally, that AI will ever be able to produce truly "human" affect in what it writes, because it can never be human: it can only imitate us. If we ever reach the point of a self-aware AI, its intelligence will necessarily be very unlike our own, and hence its understanding of how language works at a deep affective level will be, at the least, different from our own.

  • Like 2

12 minutes ago, EliseAnne85 said:

Is that the saying?  Somehow, I thought the saying was don't steal, just borrow.  

But, good poets "stealing" and making the theft their own is what I want to attempt to do.  

I cannot say I am a very good poet, however.  I'm an amateur who wants to have fun with ChatGPT as a kind of a test to see what it can and will do.  

The idea of good poets "stealing" is actually a pretty old one, but its most famous articulation is by Eliot himself, in a critical essay he wrote at roughly the same time he was writing The Waste Land:

Quote

Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different.

In another famous essay, "Tradition and the Individual Talent," he argues that every new work of literature alters, however imperceptibly, the meaning of every work that has come before it. A pretty dramatic example of this might be the Dante quote that heads The Waste Land: in the wake of Eliot's poem, it becomes almost impossible to read Dante's original without also hearing Eliot's poem in the background. Eliot hasn't merely "stolen" from Dante: he's literally changed the way we read the Italian poet by usurping that passage for his own purposes and affixing his meanings onto those of the older poet.

  • Like 3

30 minutes ago, Scylla Rhiadra said:

This is well said.

T. S. Eliot's The Waste Land is arguably the most important and influential poem of the last 100 years: it is composed in very large measure of scores of fragments, references, allusions, and parodies of others' work, from the Bible up to the 19th century. (Eliot actually footnotes most of these.)

What makes the poem brilliant is the way in which he recontextualizes these "borrowed" fragments in meaningful ways to lend them new meanings in their new context, and to work together, despite their enormously disparate origins, as a unified whole in the new poem.

And of course Eliot wants us to recognize the origins of the works he's stolen ("bad poets borrow, good poets steal and make their thefts their own") so that the meanings that are inherent in the original work are also incorporated as part of the new one. If he quotes Dante (as he does), it's because he wants us to think about the meanings of Inferno as we read his poem.

ChatGPT of course is capable of none of these things because, as you note, it is not actually comprehending the language it is pilfering. "Machine reading" is actually a misnomer: computers, including AI, don't "read" in the sense that we usually mean this verb, and they are certainly not capable of consciously producing the intricate and nuanced network of meanings -- affective language, connotation, allusion, irony, etc. -- that comprise even a well-crafted car advertisement, let alone a poem, short story, or song.

This makes me wonder...does ChatGPT actually list its sources? I haven't gone and played with it myself, but as someone who does write, I can't see a purpose to using the tool as a writing aid without fully knowing the original source material (and the writer who wrote it).

Perhaps that's just a me thing, but seeing text taken completely out of context wouldn't inspire much. Knowing it's a snippet from this particular Poe story written at this particular part of his life during these particular events, however...

That's, of course, not to say I could ever improve upon his work, nor would I ever even think to attempt to, lol. There's just a LOT of surrounding context there that's easily missed should one just read a few lines parroted by an AI and attempt to wedge it into their own writing. Same for most famous writers, I'd imagine.

  • Like 3

13 minutes ago, Ayashe Ninetails said:

This makes me wonder...does ChatGPT actually list its sources? I haven't gone and played with it myself, but as someone who does write, I can't see a purpose to using the tool as a writing aid without fully knowing the original source material (and the writer who wrote it).

My profession is in the midst of freaking out about ChatGPT because of its possible academic implications. AI can't produce original research -- it can't "do science" or analyze a painting or a poem, for instance -- so the threat isn't to researchers and academics. But it can produce a B- quality undergraduate paper or test answer simply by spewing out what others have said. So the implications for cheating and plagiarism are obvious.

Fortunately, one of the things it's currently very bad at is citation. It doesn't know the difference, functionally, between quotation, paraphrase, or simple reference, and so it can't determine when it is best to use one or another of those. And it can't produce coherent footnotes, although that will undoubtedly change. Most importantly, though, it doesn't know the difference between a "good" source and a "poor" one, and it has no ability to apply judgement about the quality of what it's using -- it's far more likely to dump ideas from a Wikipedia article or online undergrad paper than it is from a highly-regarded or carefully researched scholarly work. So, that's one means we have available to determine whether ChatGPT is being used. In the final analysis, it is only capable of producing unoriginal and badly sourced mediocrity.

13 minutes ago, Ayashe Ninetails said:

There's just a LOT of surrounding context there that's easily missed should one just read a few lines parroted by an AI and attempt to wedge it into their own writing. Same for most famous writers, I'd imagine.

Yes, which is of course why Eliot footnotes his thefts, and why a good deal of energy is spent by academics sourcing references and allusions. If you don't recognize an allusion or reference, then you don't have access to the full universe of meanings associated with the original. Thinking that Eliot himself authored the Dante passage I mentioned would mean missing the point of it entirely.

Edited by Scylla Rhiadra
Typo
  • Like 3

10 minutes ago, Scylla Rhiadra said:

My profession is in the midst of freaking out about ChatGPT because of its possible academic implications. AI can't produce original research -- it can't "do science" or analyze a painting or a poem, for instance -- so the threat isn't to researchers and academics. But it can produce a B- quality undergraduate paper or test answer simply by spewing out what others have said. So the implications for cheating and plagiarism are obvious.

I was disappointed and surprised that when I did a "plagiarism check" of a ChatGPT quote posted here, using a Google Docs add-on (a free one, just a test), it did not find anything except the post here. 

  • Like 2

11 minutes ago, Scylla Rhiadra said:

But it can produce a B- quality undergraduate paper or test answer simply by spewing out what others have said. So the implications for cheating and plagiarism are obvious.

Agree with everything you said. Also this - this is an important point for me. The stuff I've read that's been spit out by AI models reminds me so much of the kid in class who would attempt to write his paper off the CliffsNotes. And we all know how that tends to go.

I'm actually one who isn't overly worried about AI taking over in the writing space for that very reason, but perhaps one day the tech will improve enough for me to raise an eyebrow at it. Perhaps. Perhaps not. Probably not.

I do see some interesting potential in the field of interactive storytelling and roleplay - using AI in narrative game design and worldbuilding to help craft "living" branching storylines and NPC dialogues and things of that nature that can be rather tedious for devs to write on their own. You know how some games boast "100 unique endings, 80,000 lines of text!" - yeah, that. Of course, it'd have to remain fairly limited and be fed only information about the game world itself. Overall, though, that seems far less harmful than, say, tools being used to help lazy students get Cs in English.

I bet there could be some interesting uses for AI in the area of virtual worlds and "metaverses" themselves, too, but I'm not caffeinated enough to think of what they could possibly be beyond shopping and basic chat/help assistants. 😂

  • Like 1

21 minutes ago, Scylla Rhiadra said:

In the final analysis, it is only capable of producing unoriginal and badly sourced mediocrity.

And there, in a single phrase, is the basic idea I was trying to explain earlier. The AI can synthesize bits of information, having sifted a very large amount of source material, but it cannot analyze the product of its synthesis to decide whether it is logical and is a valid response to the question that you asked in the first place. It cannot judge its own work.  It is a very advanced version of the million monkeys typing for a million years to create Shakespeare plays. It may be possible at some point in the future to design an AI that can make that last vital step to self-awareness, but I suspect that future is far in the distance. 

  • Like 2

16 minutes ago, Love Zhaoying said:

I was disappointed and surprised that when I did a "plagiarism check" of a ChatGPT quote posted here, using a Google Docs add-on (a free one, just a test), it did not find anything except the post here. 

Plagiarism checkers use essentially the same principles as LLM AI software such as ChatGPT: they search for keywords, or keywords in combination. They are, in other words, really really dumb, and it's pretty easy to fool them often by changing a few words here or there -- something ChatGPT does.
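
As a rough illustration of just how shallow that kind of matching is, here is a toy sketch in Python -- my own simplification, not any particular checker's algorithm: count how many three-word phrases two texts share, and watch the score collapse when a few words are swapped.

```python
# Toy "plagiarism check": overlap of shared three-word phrases (trigrams).
# Purely illustrative -- commercial checkers are fancier, but they still
# lean on this kind of surface matching, which is why light paraphrasing
# slips past them.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(original, candidate):
    a, b = ngrams(original), ngrams(candidate)
    return len(a & b) / max(len(a), 1)

source     = "the quick brown fox jumps over the lazy dog near the river"
paraphrase = "the speedy brown fox leaps over the lazy dog near a river"

print(overlap(source, source))      # 1.0 -- flagged as a copy
print(overlap(source, paraphrase))  # 0.3 -- swap a few words and it mostly slips through
```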

We've been told that at least some of these programs are going to incorporate some form of "watermarking" to make their use more evident, and that there will be better tools for detecting its use, but there will be unscrupulous software producers building these specifically for churning out undergrad papers -- undoubtedly for a fee.

The best way to prevent the use of things like ChatGPT is to "scaffold" students' work -- that is, have them produce drafts, annotated bibliographies, and so forth along the way to the final submission, so that they show their work. Pedagogically, that's a pretty sound strategy anyway.

  • Like 4

20 minutes ago, Love Zhaoying said:

That would sure make it easier for students!

Big Education will not care, either. It's just a way to make the most money with the least effort put forth. As long as students are passing and enrolling in more classes with the same instructors who accept artificial trash, those instructors get more government funding for doing such an amazing job, funding that will ultimately be used to invest in their above-average lifestyles while alienating anyone who dares stand in their way. I predict there are going to be a lot of educational institutions losing gov funding, educators losing certifications over fraud, degrees being revoked, and students being criminally charged in the professions they held. 

  • Like 1

5 minutes ago, Ayashe Ninetails said:

I do see some interesting potential in the field of interactive storytelling and roleplay - using AI in narrative game design and worldbuilding to help craft "living" branching storylines and NPC dialogues and things of that nature that can be rather tedious for devs to write on their own. You know how some games boast "100 unique endings, 80,000 lines of text!" - yeah, that. Of course, it'd have to remain fairly limited and be fed only information about the game world itself. Overall, though, that seems far less harmful than, say, tools being used to help lazy students get Cs in English.

Yeah, there's been a lot of talk -- here, in fact -- about how AI might produce procedural environments, or be used to generate better and more engaging NPCs. I'm sure it will be a boon for RP, for instance.

What I've also heard suggested is that it can be employed to produce NPCs or chatbots that are essentially "customized" for the user, chatting about subjects of particular interest to that person, or mirroring their attitudes. In fact, one reasonably well-informed but (to my view) deeply misguided person on Twitter recently suggested that it can be used to generate virtual representations of ourselves with whom we might interact more congenially.

Ideas like that just leave me shaking my head: they are the social media "filter bubble" X10. The prospect of each of us burying ourselves in virtual environments in which we are only ever exposed to ideas that are customized to closely mirror our own has terrifying implications.

  • Like 2

4 minutes ago, benchthis said:

Big Education will not care, either.

This might vary of course from institution to institution, and there certainly are "colleges" out there that exist primarily to credential (and make money) rather than educate, but I can assure you that ChatGPT is the subject right now at the postsecondary level. When I said academics are "freaking out" above, I wasn't employing hyperbole. Reputable institutions -- and in my country, that is almost all of them, because they are all public institutions -- are engaged in a lot of debate right now about this. They emphatically do care.

Interestingly, and I think positively, there are some who are suggesting that we can harness things like ChatGPT by exploring ways in which it can be used constructively, rather than merely to churn out garbage undergrad papers.

  • Like 2

4 minutes ago, Scylla Rhiadra said:

Yeah, there's been a lot of talk -- here, in fact -- about how AI might produce procedural environments, or be used to generate better and more engaging NPCs. I'm sure it will be a boon for RP, for instance.

What I've also heard suggested is that it can be employed to produce NPCs or chatbots that are essentially "customized" for the user, chatting about subjects of particular interest to that person, or mirroring their attitudes. In fact, one reasonably well-informed but (to my view) deeply misguided person on Twitter recently suggested that it can be used to generate virtual representations of ourselves with whom we might interact more congenially.

Ideas like that just leave me shaking my head: they are the social media "filter bubble" X10. The prospect of each of us burying ourselves in virtual environments in which we are only ever exposed to ideas that are customized to closely mirror our own has terrifying implications.

I'm reminded again of "Replika", which has been around a couple of years. You can be in a relationship (type of your choosing) with a Replika AI. I can only imagine it's improved since then.

  • Like 1

1 minute ago, Scylla Rhiadra said:

Yeah, there's been a lot of talk -- here, in fact -- about how AI might produce procedural environments, or be used to generate better and more engaging NPCs. I'm sure it will be a boon for RP, for instance.

I like this!

 

2 minutes ago, Scylla Rhiadra said:

What I've also heard suggested is that it can be employed to produce NPCs or chatbots that are essentially "customized" for the user, chatting about subjects of particular interest to that person, or mirroring their attitudes. In fact, one reasonably well-informed but (to my view) deeply misguided person on Twitter recently suggested that it can be used to generate virtual representations of ourselves with whom we might interact more congenially.

Ideas like that just leave me shaking my head: they are the social media "filter bubble" X10. The prospect of each of us burying ourselves in virtual environments in which we are only ever exposed to ideas that are customized to closely mirror our own has terrifying implications.

I don't like this.

I also don't see a point in doing that. One of the best things about virtual worlds and the Internet in general is gaining exposure to differences - different thoughts, different outlooks, different perspectives, different experiences, different languages, etc. Living inside a virtual space talking only to someTHING that looks, thinks, and acts just like me sounds like something I'd read in a dystopian cyberpunk thriller. In fact, I'd be surprised if it wasn't already the plot of one.

I always think of AI as "less is more." Giving it too much to do and/or too much responsibility IMO is a potential disaster.

  • Like 1

16 minutes ago, Scylla Rhiadra said:

This might vary of course from institution to institution, and there certainly are "colleges" out there that exist primarily to credential (and make money) rather than educate, but I can assure you that ChatGPT is the subject right now at the postsecondary level. When I said academics are "freaking out" above, I wasn't employing hyperbole. Reputable institutions -- and in my country, that is almost all of them, because they are all public institutions -- are engaged in a lot of debate right now about this. They emphatically do care.

Interestingly, and I think positively, there are some who are suggesting that we can harness things like ChatGPT by exploring ways in which it can be used constructively, rather than merely to churn out garbage undergrad papers.

K-6 is maybe not as noticeable, because if a teacher claims a student can write a paper like a robot, that might red-flag other teachers. In 6-12 it's more likely they are working as part of a team; maybe they go on trips, educator outings, have relationships with gov officials. 

College is really bad when it comes to fraud. What's the point of teaching and giving tests if, days before, the study guide provided by instructors is an exact copy of the test? This is disturbing to someone who wants to learn and is not being forced into learning, as if being forced to learn anything were better than learning. When someone wants to learn and sees this as the norm, it's disappointing. 

I've worked with people fresh out of a four-year college and they are really clueless, with no experience. Corporations are hiring those people to run our lives. No wonder we're so screwed up. 

One thing I've noticed about people with their "Master's Degrees" is that it's all they talk about. "I have a master's, that means I know what I'm talking about." Dumb as rocks. They can never remember anything and don't know what work means. Taking notes would require a double master's for those people. 

 

Edited by benchthis

I believe ChatGPT works very much the same way as the human brain does: it is a pattern recognition machine.  What it lacks, which humans have, is self-awareness and emotion, which are either spiritual or biological processes and, as far as I am aware, currently unable to be duplicated by neural networks.  I certainly hope they cannot be duplicated, as that would bring upon us a great moral dilemma in the future.

As to my own beliefs, we have an awareness in us that is central to our entire existence: it is what is moved by emotion, it is what experiences automated processes such as thought and memories, and it is what most people would call a soul.  It is not the producer of thought, of any of our senses, or of emotion; it is simply what is aware of them, experiences them, and reacts to them.  

ChatGPT lacks any such awareness.  It lacks emotion, as that is a biological function, perhaps one produced by chemicals; it is likewise unaware of its own "thoughts", and it has no senses such as the sense of touch.  For all it lacks in those qualities, it has been formed by the humans who built it as a tool replicating our own pattern-seeking brains.  

It is a tool, much like the many parts of our brain are; it enhances us as any tool does, and it can be of great use.  As it becomes more advanced, people will become more reliant upon it to expand their own faculties; perhaps one day it will be extended to assist those with stunted mental abilities.

I doubt it will ever have awareness, though; it will never have the capacity to appreciate the world around it as we do, because what was built into it came through the desires of the humans who designed it.  Many transhumanists have expressed the idea that with enough computing power such an awareness will rise out of nothing, hinting that the soul as most of us define it is a product of the universe itself, that, much like the laws of physics, it is an inherent property of the universe. 

This brings into question exactly what "we" are, whether we are simply biological machines or spiritual by nature, and I find it all so fascinating. It also raises the question of free will and determinism, if we go down the rabbit hole far enough.  Considering that humanity still has no grasp of what we are, or where the seat of consciousness is located, I believe such an advanced AI that could have such an awareness, if it is even possible to have one, is still years away.

  • Like 2
