
Recommended Posts

12 minutes ago, Scylla Rhiadra said:

Before we get too excited about integrating ChatGPT into Second Life, it might be worth bearing in mind some of its really rather disturbing problems.

This Twitter thread gives examples of the implicit gender bias that it reproduces. Apparently, ChatGPT doesn't believe that women can be doctors, or that men can be nurses, and it will try to correct you if you use a grammatical construction that implies this.

Do we really want NPCs with attitudes straight out of the 50s running around in SL?

Ew. I've never played with ChatGPT, so I can't speak to that one specifically, but knowing what I know about how other chatbot models are being trained, well...I'm not the least bit surprised.

The music example is also quite weird. ChatGPT has obviously never met Gunhild Carling. 😏

 

  • Like 6

Peeve of the week for me: when random people I hardly know think it's alright to approach me at an event and greet me not with the name of the main account I'm using, but with one of my alts' names. I ignored them and didn't respond. I felt rude, but I felt they were rude as well, since this isn't someone who would have access to any of my alts. I don't hide the fact that I have alts, but this was the wrong way to approach me about it.

Edited by Dafadilia Wayfarer
  • Like 5

20 hours ago, Ayashe Ninetails said:

Ew. I've never played with ChatGPT, so I can't speak to that one specifically, but knowing what I know about how other chatbot models are being trained, well...I'm not the least bit surprised.

The music example is also quite weird. ChatGPT has obviously never met Gunhild Carling. 😏

 

She is my new hero!

  • Like 1

Peeve: Bought an extra-cheap guitar (for my instrument collection), and of COURSE it buzzes. Just on one string and one fret, luckily.

Peeve2: Have to learn some basic luthier skills, and I'm not even Lutheran. <= Is joke!

(Googled how to fix guitar buzz, etc.)

Irony: I have at least one other instrument that you WANT to buzz (tanpura).

 

  • Haha 1

16 hours ago, Love Zhaoying said:

Peeve: Bought an extra-cheap guitar (for my instrument collection), and of COURSE it buzzes. Just on one string and one fret, luckily.

Peeve2: Have to learn some basic luthier skills, and I'm not even Lutheran. <= Is joke!

(Googled how to fix guitar buzz, etc.)

Irony: I have at least one other instrument that you WANT to buzz (tanpura).

 

What do you mean it buzzes, Love? The string vibrates against the fret? I had an issue like that with a cheap 5-string banjo I got from someone, but the frets were not even. The fix was to even out the frets... it was fun to troubleshoot! It sounds good now though 🦁

  • Like 1

11 minutes ago, Krystina Ferraris said:

What do you mean it buzzes, Love? The string vibrates against the fret? I had an issue like that with a cheap 5-string banjo I got from someone, but the frets were not even. The fix was to even out the frets... it was fun to troubleshoot! It sounds good now though 🦁

Correct. In some cases you have to adjust the bridge; in others, level the frets. Google was helpful. I may work on it this weekend.

ETA: I need to check out the banjo I got a few months ago lol!  This new super-cheap guitar is a 12-string!

Edited by Love Zhaoying
  • Like 2

But what about this thing others are doing, it's far worse than the identical thing I'm doing! I'm not peeing in the pool!! I'm peeing in a cup and then dropping it in the pool; I have class. No, of course it has nothing to do with politics; mine are impeccable and all about me.

 

  • Like 2
  • Thanks 2

4 hours ago, Coffee Pancake said:

But what about this thing others are doing, it's far worse than the identical thing I'm doing! I'm not peeing in the pool!! I'm peeing in a cup and then dropping it in the pool; I have class. No, of course it has nothing to do with politics; mine are impeccable and all about me.

 

[image attachment]

  • Like 2
  • Haha 1

On 4/28/2023 at 6:07 AM, Scylla Rhiadra said:

Before we get too excited about integrating ChatGPT into Second Life, it might be worth bearing in mind some of its really rather disturbing problems.

This Twitter thread gives examples of the implicit gender bias that it reproduces. Apparently, ChatGPT doesn't believe that women can be doctors, or that men can be nurses, and it will try to correct you if you use a grammatical construction that implies this.

Do we really want NPCs with attitudes straight out of the 50s running around in SL?

 

When whether or not someone is male or female in a hypothetical example -- one that has no effect on anything -- gets panties so bunched up that something can be written off as worthless, I really despair for humanity as a whole.

ChatGPT doesn't believe anything that the person programming it doesn't instill into it. It's not self-aware or self-functioning.

  • Like 1

10 minutes ago, Jordan Whitt said:

ChatGPT doesn't believe anything that the person programming it doesn't instill into it. It's not self-aware or self-functioning.

I don't disagree, but I think it's less about intentional "programming" than about the bias it learns from the data it's "fed": "Most things I read have male doctors, so when the subject is 'doctor' and gender is ambiguous, gender is male."

Peeve: Missed a delivery last night, either because the doorbell battery was out, or they didn't knock loudly enough.

 

  • Thanks 1

19 minutes ago, Jordan Whitt said:

When whether or not someone is male or female in a hypothetical example -- one that has no effect on anything -- gets panties so bunched up that something can be written off as worthless, I really despair for humanity as a whole.

ChatGPT doesn't believe anything that the person programming it doesn't instill into it. It's not self-aware or self-functioning.

Love is correct: you seem not to understand how ChatGPT works. The software uses a "Large Language Model" (LLM) to produce its output, collecting and reconstituting what it finds elsewhere.

"ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs). LLMs digest huge quantities of text data and infer relationships between words within the text."

The bias here isn't "instilled" by the programmer: it's replicated from the huge body of training data -- essentially, the Web -- from which the algorithm derives its answers. In other words, it's directly reflecting, and reproducing, larger cultural biases that are embedded in discourse across the entire web. Where a human author will look at a text and say, "Hey, that isn't right -- women can be doctors too!", ChatGPT's "understanding" is produced by automated inference based on the most statistically probable relationships between words. If there are more male than female doctors in its training text, for instance, it will necessarily assume -- statistically -- that the sentence opening "The gender of this doctor is . . ." should properly be completed by the word "male."
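The "statistically probable completion" point is easy to demonstrate. Here is a toy sketch -- the corpus and the function name are hypothetical, and this is simple word counting, nothing like ChatGPT's actual neural architecture -- of how a purely frequency-based completer inherits whatever skew its training text contains:

```python
from collections import Counter

# Toy corpus standing in for the web-scale text an LLM trains on.
# Note the skew: "doctor ... he" appears more often than "doctor ... she".
corpus = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the doctor said she would call",
    "the nurse said she was busy",
    "the nurse said she would call",
]

def most_likely_pronoun(noun):
    """Return the pronoun that most often follows '<noun> said' in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 2):
            if words[i] == noun and words[i + 1] == "said":
                counts[words[i + 2]] += 1
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("doctor"))  # "he"  (2 vs 1 in this skewed corpus)
print(most_likely_pronoun("nurse"))   # "she"
```

Scale the corpus up to the entire web and the counting up to a neural network, and the same dynamic applies: skew in the data becomes skew in the output, with no programmer ever "instilling" it.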

ChatGPT is increasingly being used to do everything from producing advertising copy to drafting company policy. It's not a toy: it's replacing human labour -- and human judgement -- in some pretty important spheres of economic, political, and social activity. These biases aren't "hypothetical": they are being reflected, subtly or otherwise, in the "real world" documents that it's being used to generate. It will get better at this, and interventions by the coders (which are being used now to "censor" some of the things ChatGPT says, in order to ensure that it's not using obscenities or being overtly racist) will eventually address this. But right now, the software has problems with this.

And yes, that is important.

  • Like 1
  • Thanks 1

4 minutes ago, Scylla Rhiadra said:

Love is correct: you seem not to understand how ChatGPT works. The software uses a "Large Language Model" (LLM) to produce its output, collecting and reconstituting what it finds elsewhere.

"ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs). LLMs digest huge quantities of text data and infer relationships between words within the text."

The bias here isn't "instilled" by the programmer: it's replicated from the huge body of training data -- essentially, the Web -- from which the algorithm derives its answers. In other words, it's directly reflecting, and reproducing, larger cultural biases that are embedded in discourse across the entire web. Where a human author will look at a text and say, "Hey, that isn't right -- women can be doctors too!", ChatGPT's "understanding" is produced by automated inference based on the most statistically probable relationships between words. If there are more male than female doctors in its training text, for instance, it will necessarily assume -- statistically -- that the sentence opening "The gender of this doctor is . . ." should properly be completed by the word "male."

ChatGPT is increasingly being used to do everything from producing advertising copy to drafting company policy. It's not a toy: it's replacing human labour -- and human judgement -- in some pretty important spheres of economic, political, and social activity. These biases aren't "hypothetical": they are being reflected, subtly or otherwise, in the "real world" documents that it's being used to generate. It will get better at this, and interventions by the coders (which are being used now to "censor" some of the things ChatGPT says, in order to ensure that it's not using obscenities or being overtly racist) will eventually address this. But right now, the software has problems with this.

And yes, that is important.

Considering how language is being butchered/erased/cancelled/changed, etc., it really doesn't matter to me.

And let's not get started on gender.

  • Like 1

13 minutes ago, Scylla Rhiadra said:

Love is correct: you seem not to understand how ChatGPT works. The software uses a "Large Language Model" (LLM) to produce its output, collecting and reconstituting what it finds elsewhere.

"ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs). LLMs digest huge quantities of text data and infer relationships between words within the text."

The bias here isn't "instilled" by the programmer: it's replicated from the huge body of training data -- essentially, the Web -- from which the algorithm derives its answers. In other words, it's directly reflecting, and reproducing, larger cultural biases that are embedded in discourse across the entire web. Where a human author will look at a text and say, "Hey, that isn't right -- women can be doctors too!", ChatGPT's "understanding" is produced by automated inference based on the most statistically probable relationships between words. If there are more male than female doctors in its training text, for instance, it will necessarily assume -- statistically -- that the sentence opening "The gender of this doctor is . . ." should properly be completed by the word "male."

ChatGPT is increasingly being used to do everything from producing advertising copy to drafting company policy. It's not a toy: it's replacing human labour -- and human judgement -- in some pretty important spheres of economic, political, and social activity. These biases aren't "hypothetical": they are being reflected, subtly or otherwise, in the "real world" documents that it's being used to generate. It will get better at this, and interventions by the coders (which are being used now to "censor" some of the things ChatGPT says, in order to ensure that it's not using obscenities or being overtly racist) will eventually address this. But right now, the software has problems with this.

And yes, that is important.

I also read that, as ChatGPT doesn't actually "understand" anything, the "training" can be compared to "good dog!", "bad dog!" training. My limited knowledge has me assuming the "trainers" can run a standard set of "training" for each generation (which consumes a new data set); and apparently, "gender bias" training is lacking.

I bet the same issues as with the "doctor" example come up with other professions: lawyer, landscaper, roofer, etc. But if the language model fixes the "subject/object" confusion, I think the "doctor" example would be fixed separately from gender bias. Scylla, I assume (being an English expert) you also noticed that the "root" issue was picking the wrong subject/object?

Peeve: misgendering stories, I got 'em!

  • Like 1

27 minutes ago, Scylla Rhiadra said:

ChatGPT is increasingly being used to do everything from producing advertising copy to drafting company policy. It's not a toy: it's replacing human labour -- and human judgement -- in some pretty important spheres of economic, political, and social activity. These biases aren't "hypothetical": they are being reflected, subtly or otherwise, in the "real world" documents that it's being used to generate. It will get better at this, and interventions by the coders (which are being used now to "censor" some of the things ChatGPT says, in order to ensure that it's not using obscenities or being overtly racist) will eventually address this. But right now, the software has problems with this.

And yes, that is important.

This right here is a massive peeve of mine, which is strange to say as someone who quite literally works on training projects for AI. I think the tech is extremely fascinating, yes, but it's also moving at a rapid pace and being tasked with, IMO, a bit too much responsibility. I struggle with this daily - do I want to continue contributing to these projects orrrr is it time to find another gig? Is it going too far? Do I really want to be part of this? I personally will not ever support AI in the realms of art, music, voice acting, writing/literature, creative endeavors, etc. - but is even working this close with it on a much smaller scale a good idea?

I work with what I assume are much smaller chatbots that will have (hopefully) far less impact than what ChatGPT is capable of. It's my understanding that the bots we work with are being trained as chat helpers and will be given rather mundane assistant/customer service functions, but even still - we have to be extremely careful in how we feed info, train, interact, and rate their performance. Is it being harmless, is it avoiding giving terrible advice, is it being toxic, should this be flagged, should I use this text as source, is it okay to say this, etc. Also, some are clearly being trained for chat moderation and threat detection - which is a whole other thing we have to be extremely careful with - is that a real threat, do I flag it for the bot or no... That can absolutely have an impact. I don't know if I trust a bot to make decisions on some of the stuff that comes across my "virtual desk." Like holy moly.

Obviously, engineers will review before data gets fed in and I'm sure there's a whole entire approval process once it leaves our hands, but there's still so much data coming in from so many different sources - some written by us, some reviewed by us, some rated by us. And we're just one team out of how many hundreds (thousands?) doing this today. The field is exploding at such a rapid pace - this is the first time I've ever been swamped with work as a freelancer. The whole thing is just wild to me.

  • Like 1
  • Thanks 1

11 minutes ago, Ayashe Ninetails said:

This right here is a massive peeve of mine, which is strange to say as someone who quite literally works on training projects for AI. I think the tech is extremely fascinating, yes, but it's also moving at a rapid pace and being tasked with, IMO, a bit too much responsibility. I struggle with this daily - do I want to continue contributing to these projects orrrr is it time to find another gig? Is it going too far? Do I really want to be part of this? I personally will not ever support AI in the realms of art, music, voice acting, writing/literature, creative endeavors, etc. - but is even working this close with it on a much smaller scale a good idea?

I work with what I assume are much smaller chatbots that will have (hopefully) far less impact than what ChatGPT is capable of. It's my understanding that the bots we work with are being trained as chat helpers and will be given rather mundane assistant/customer service functions, but even still - we have to be extremely careful in how we feed info, train, interact, and rate their performance. Is it being harmless, is it avoiding giving terrible advice, is it being toxic, should this be flagged, should I use this text as source, is it okay to say this, etc. Also, some are clearly being trained for chat moderation and threat detection - which is a whole other thing we have to be extremely careful with - is that a real threat, do I flag it for the bot or no... That can absolutely have an impact. I don't know if I trust a bot to make decisions on some of the stuff that comes across my "virtual desk." Like holy moly.

Obviously, engineers will review before data gets fed in and I'm sure there's a whole entire approval process once it leaves our hands, but there's still so much data coming in from so many different sources - some written by us, some reviewed by us, some rated by us. And we're just one team out of how many hundreds (thousands?) doing this today. The field is exploding at such a rapid pace - this is the first time I've ever been swamped with work as a freelancer. The whole thing is just wild to me.

I wonder how much this will impact something like, what courses will be worth taking in college in the future..

I mean if things are going to be bot-like in the future, it sure would suck to pay for college to find out that by the time you are finished, it's a bot environment now and you just blew major money because of the world becoming a so-called better place.. hehehe..

I wonder if we'll see a reduction in the cost of certain courses to keep them alive.

ETA: Just to add, I don't know much about this Chat GPT or whatever it's called.. So I don't even know just how far they are with it, but can see it's gonna be stirring up the world in the future.

 

Edited by Ceka Cianci
  • Like 3

1 minute ago, Ceka Cianci said:

I wonder how much this will impact something like, what courses will be worth taking in college in the future..

I mean if things are going to be bot-like in the future, it sure would suck to pay for college to find out that by the time you are finished, it's a bot environment now and you just blew major money because of the world becoming a so-called better place.. hehehe..

We're still a bit far off from that, I think. You can definitely tell which bots have the big budgets behind them and which ones...don't. I've seen some in the "write me a..." realm that make me laugh out loud. The emails, stories, invitations, etc. that they wrote actually made me spit water they were so bad/funny. 😂

ChatGPT and other huge projects like that have the money behind them. Other projects aren't so fortunate, but one day - who knows. Maybe they'll all take over, yeah. I can't see them replacing entire industries just yet, but in the future...maybe? Can't wait. *cries inside*

  • Like 2

37 minutes ago, Ayashe Ninetails said:

This right here is a massive peeve of mine, which is strange to say as someone who quite literally works on training projects for AI. I think the tech is extremely fascinating, yes, but it's also moving at a rapid pace and being tasked with, IMO, a bit too much responsibility. I struggle with this daily - do I want to continue contributing to these projects orrrr is it time to find another gig? Is it going too far? Do I really want to be part of this? I personally will not ever support AI in the realms of art, music, voice acting, writing/literature, creative endeavors, etc. - but is even working this close with it on a much smaller scale a good idea?

I work with what I assume are much smaller chatbots that will have (hopefully) far less impact than what ChatGPT is capable of. It's my understanding that the bots we work with are being trained as chat helpers and will be given rather mundane assistant/customer service functions, but even still - we have to be extremely careful in how we feed info, train, interact, and rate their performance. Is it being harmless, is it avoiding giving terrible advice, is it being toxic, should this be flagged, should I use this text as source, is it okay to say this, etc. Also, some are clearly being trained for chat moderation and threat detection - which is a whole other thing we have to be extremely careful with - is that a real threat, do I flag it for the bot or no... That can absolutely have an impact. I don't know if I trust a bot to make decisions on some of the stuff that comes across my "virtual desk." Like holy moly.

Obviously, engineers will review before data gets fed in and I'm sure there's a whole entire approval process once it leaves our hands, but there's still so much data coming in from so many different sources - some written by us, some reviewed by us, some rated by us. And we're just one team out of how many hundreds (thousands?) doing this today. The field is exploding at such a rapid pace - this is the first time I've ever been swamped with work as a freelancer. The whole thing is just wild to me.

Thanks, Ayashe. This is really valuable insight -- from the inside.

  • Like 1
