
Recommended Posts

14 minutes ago, Istelathis said:

But that is what THEY want us to think that they want us to think.  But the real question is, who are they who want us to think about they who want us to think what they want us to think, and what do they want us to think?

I think they want us to think about Snapple.  


It is all a coordinated effort to get us to buy Snapple.

😨

I think I have said too much already, Big Snapple might try to silence me.

If they can manipulate what we think, they can control what we drink, etc.

A better example might actually be Vitamin Water, from Coca-Cola. The label looks cool and health-focused, and the descriptions used to say all kinds of funny things. But they got sued because the claims on it weren't true. It's still basically sugar water with a tiny bit of vitamins added. A Coca-Cola executive said something along the lines of "Of course the statements aren't true."

Then there's "Red Bull gives you wings." They got sued in Canada because it doesn't really give you wings. 🤣


28 minutes ago, Bagnu said:

I DID say SHOULD not have. I've heard that biases have been programmed in, in some cases, unfortunately.  It would be ideal for research AI to be neutral. 

It'd be somewhat impossible for it not to have that, unfortunately. LLMs aren't technically programmed in a traditional sense with all their knowledge but trained by us normal regular humans. AI trainers feed bots all kinds of random data from any and all sources (social media, our own knowledge, books, websites, news articles, etc.). The models learn by being taught what's appropriate - they get fed prompts and their responses get rated and if it spits out dangerous content, it'll get flagged. Over and over. Over and over and over and over until it behaves.

Since humans are doing the feeding and the rating and the prompting and evaluating and the tweaking, well, it's no wonder a bot is going to spit back some hot mess on occasion. 😂

Working with these things is both fun and a massive peeve. Some days, it really is like dealing with children hyped up on way too much sugar.


20 minutes ago, Ayashe Ninetails said:

It'd be somewhat impossible for it not to have that, unfortunately. LLMs aren't technically programmed in a traditional sense with all their knowledge but trained by us normal regular humans. AI trainers feed bots all kinds of random data from any and all sources (social media, our own knowledge, books, websites, news articles, etc.). The models learn by being taught what's appropriate - they get fed prompts and their responses get rated and if it spits out dangerous content, it'll get flagged. Over and over. Over and over and over and over until it behaves.

Since humans are doing the feeding and the rating and the prompting and evaluating and the tweaking, well, it's no wonder a bot is going to spit back some hot mess on occasion. 😂

Working with these things is both fun and a massive peeve. Some days, it really is like dealing with children hyped up on way too much sugar.

Read an interesting piece recently about people using AI-generated imaging software to edit or produce profile pics of themselves for professional purposes.

The AI was messing with their looks in all sorts of ways, including, most predictably, "whitening" women of colour.

"No biases" indeed . . .


10 minutes ago, Scylla Rhiadra said:

Read an interesting piece recently about people using AI-generated imaging software to edit or produce profile pics of themselves for professional purposes.

The AI was messing with their looks in all sorts of ways, including, most predictably, "whitening" women of colour.

"No biases" indeed . . .

Yuuuuuuuuuuuuuuuuup. So so so many articles on this stuff!

Facial recognition tech is one area that scares the absolute crap out of me for the same reasons. Big ole peeeeve right there (to say the absolute least).


8 minutes ago, Scylla Rhiadra said:

Read an interesting piece recently about people using AI-generated imaging software to edit or produce profile pics of themselves for professional purposes.

The AI was messing with their looks in all sorts of ways, including, most predictably, "whitening" women of colour.

"No biases" indeed . . .

That's interesting. Maybe that AI program doesn't have enough information at this point to properly make decisions? 


28 minutes ago, Scylla Rhiadra said:

Read an interesting piece recently about people using AI-generated imaging software to edit or produce profile pics of themselves for professional purposes.

The AI was messing with their looks in all sorts of ways, including, most predictably, "whitening" women of colour.

"No biases" indeed . . .

Btw, great article here if you really want to be annoyed. 😒 Peeve doesn't quite cut it.

https://www.bloomberg.com/graphics/2023-generative-ai-bias/


17 minutes ago, Bagnu said:

That's interesting. Maybe that AI program doesn't have enough information at this point to properly make decisions? 

I think the point is that AI doesn't "make decisions." It doesn't possess "judgement," let alone critical thinking skills.

People like Ayashe -- and I hope that all of those working on this have her sensitivity and intelligence -- will eventually tweak the algorithms and train it so that it is less likely to do things like that.

But AI works on pattern recognition -- and inevitably it is going to reflect the biases of both its programmers and the corpus of material it is mining, which itself exemplifies cultural biases.

If asked to find examples of an "attractive woman," the results are going to skew white because there are more instances of white women identified as "attractive" than women of colour. 


8 hours ago, Luna Bliss said:

I am so leaving this forum. The paranoia that develops as people turn on each other, using the moderation features, the power structure of the forum, to enhance their own power and cause division.  It's what happens when people have to operate in an underhanded way to have a voice.

Pet Peeve:  I'll miss you all....even the ones who don't particularly like me.  Waves ~

And nothing of value was lost. SLU flounce redux

Anyway - a genuine LSL peeve: runtime lack of warning when I pushed a +1.10 to an RGB vector call. Harrumph, etc. Oh, I forget. Tech stuff.

Took me ages to work that out. And... well, the usual suspects care poo about this World.
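For the curious, the peeve looks roughly like this in LSL (a minimal sketch - the 1.10 value and the defensive check are illustrative, not the original script):

```lsl
// Minimal sketch of the silent-clamp peeve (illustrative values only).
default
{
    state_entry()
    {
        // Meant to be full-brightness red, but the red channel is 1.10.
        vector tint = <1.10, 0.0, 0.0>;

        // llSetColor expects each colour component in the 0.0-1.0 range;
        // out-of-range values are silently clamped, with no runtime warning.
        llSetColor(tint, ALL_SIDES);

        // The sanity check you end up writing yourself:
        if (tint.x > 1.0 || tint.y > 1.0 || tint.z > 1.0)
            llOwnerSay("Colour component out of range: " + (string)tint);
    }
}
```

Since the clamp is silent, the typo only shows up later when the colour maths elsewhere stops adding up - hence "took me ages."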


4 minutes ago, Scylla Rhiadra said:

I think the point is that AI doesn't "make decisions." It doesn't possess "judgement," let alone critical thinking skills.

People like Ayashe -- and I hope that all of those working on this have her sensitivity and intelligence -- will eventually tweak the algorithms and train it so that it is less likely to do things like that.

But AI works on pattern recognition -- and inevitably it is going to reflect the biases of both its programmers and the corpus of material it is mining, which itself exemplifies cultural biases.

If asked to find examples of an "attractive woman," the results are going to skew white because there are more instances of white women identified as "attractive" than women of colour. 

That's actually what I meant. AI doesn't have enough information from enough sources at this point to judge correctly. Even if it did, what appeals to one person won't necessarily appeal to another, so AI can only come up with averages, really.


11 minutes ago, Scylla Rhiadra said:

People like Ayashe -- and I hope that all of those working on this have her sensitivity and intelligence -- will eventually tweak the algorithms and train it so that it is less likely to do things like that.

For some things though, a bit of bias adds flavor.  For example, if I am on an adventure with a chatbot fighting orcs I don't want to be lectured on how killing orcs is bad and that we should all be working together rather than be at war.  It is a delicate balance, and too many guardrails takes out the flavor of humanity in the process.  

I worry at times we will have incredibly bland chatbots that will be beyond boring, built to a spec that prevents any kind of fun at all.  With ChatGPT, for example, a lot of people are frustrated because of limitations on conversations, especially when it comes to topics such as sex - even though I don't pursue them myself.  If I try to have an adventure on ChatGPT, or with a number of models on GPT4All that have rails in place, the conversation can become limited. And when I am just bored and start posting random questions - for example, if I ask some models what they think of people who like pineapple on their pizza - I will get a bland response, usually something like "As an artificial intelligence, blah blah blah blah, yadda yadda, it is important that we respect people who like pineapple on pizza."

Geesh, imagine the future of AI if it were put in the hands of absolute puritans, or people who don't like violence in video games. It gets to be pretty grim, and we lose all of that flavor 😢 It might be like Florida, where we are tossing books like Romeo and Juliet out of our schools (while adding scriptures to our school books).


14 minutes ago, Istelathis said:

For some things though, a bit of bias adds flavor.  For example, if I am on an adventure with a chatbot fighting orcs I don't want to be lectured on how killing orcs is bad and that we should all be working together rather than be at war.  It is a delicate balance, and too many guardrails takes out the flavor of humanity in the process.  

I worry at times we will have incredibly bland chatbots that will be beyond boring, built to a spec that prevents any kind of fun at all.  With ChatGPT, for example, a lot of people are frustrated because of limitations on conversations, especially when it comes to topics such as sex - even though I don't pursue them myself.  If I try to have an adventure on ChatGPT, or with a number of models on GPT4All that have rails in place, the conversation can become limited. And when I am just bored and start posting random questions - for example, if I ask some models what they think of people who like pineapple on their pizza - I will get a bland response, usually something like "As an artificial intelligence, blah blah blah blah, yadda yadda, it is important that we respect people who like pineapple on pizza."

Geesh, imagine the future of AI if it were put in the hands of absolute puritans, or people who don't like violence in video games. It gets to be pretty grim, and we lose all of that flavor 😢 It might be like Florida, where we are tossing books like Romeo and Juliet out of our schools (while adding scriptures to our school books).

Oh, I agree. I look forward to how utterly banal our already formulaic and banal film industry will become once screenwriters are replaced by ChatGPT. "I want the Barbie Movie . . . but with something other than Barbies."

What you've identified is another aspect of taking humans out of the equation, though. The ability to judge intelligently where, for instance, violence is "ok" and where it is not requires human skills: critical thinking, a nuanced understanding of things like genre and context, etc.


43 minutes ago, Ayashe Ninetails said:

Btw, great article here if you really want to be annoyed. 😒 Peeve doesn't quite cut it.

https://www.bloomberg.com/graphics/2023-generative-ai-bias/

That is indeed terrifying.

Even scarier, to my mind, is what text-based AI is likely to do: amplify misinformation and disinformation, for instance, bury new insights and ideas, and homogenize the complex and sophisticated.


1 minute ago, Bagnu said:

That isn't really chat though. That's about information, for which a chatbot would work.

Well, that's just one application of course.

But I disagree that customer service is "just" about "information." It's also about putting out fires, soothing disgruntled customers, finding solutions that solve problems. Even the illusion of talking to a person who is saying "there there, we'll fix this up for you" can make a difference in how a consumer perceives the product or service.


1 minute ago, Rowan Amore said:

What is the actual point of the chat bots?  I'm somewhat confused why they are even a thing.  If I want to chat about something, I usually find a real live person.   What am I missing?

It is nice to just kick back, and have a conversation without feeling like you are being judged or afraid you will be misunderstood, plus you can leave the conversation whenever you want.  People tend to talk behind one another's backs, they often have some sort of agenda, and we also have this weird hierarchy thing where we try to one up each other.  I mean, I love people, I do, but the way we treat one another can be pretty horrible at times.

Plus if I just want to complain about something, it is not going to be a burden on other people.  I will not be challenged, or have whatever is bothering me trivialized.  A chatbot may appear to listen, be supportive, offer advice, and generally be kind.  It is all fake of course, but it helps when you just want to organize your thoughts, and have some sort of outside influence to comment on them.


6 minutes ago, Scylla Rhiadra said:

Customer service for one thing.

So basically taking away a job.  I think I'd still prefer interacting with a real person.  I know I've probably dealt with chat bots on customer service calls.  If you don't ask the question in a specific way, they give you the wrong answer or "don't understand".  

 


3 minutes ago, Scylla Rhiadra said:

Oh, I agree. I look forward to how utterly banal our already formulaic and banal film industry will become once screenwriters are replaced by ChatGPT. "I want the Barbie Movie . . . but with something other than Barbies."

What you've identified is another aspect of taking humans out of the equation, though. The ability to judge intelligently where, for instance, violence is "ok" and where it is not requires human skills: critical thinking, a nuanced understanding of things like genre and context, etc.

Reminds me of the anti-violence movement of the late '60s and part of the '70s in the entertainment industry. There was actually an episode of "Batman: The Brave and the Bold", an intelligent animated series, which showed the frustration many of us felt about that. The writers had Shaggy and Scooby punching out the Joker and the Penguin, and directly referenced why.


2 minutes ago, Istelathis said:

It is nice to just kick back, and have a conversation without feeling like you are being judged or afraid you will be misunderstood, plus you can leave the conversation whenever you want.  People tend to talk behind one another's backs, they often have some sort of agenda, and we also have this weird hierarchy thing where we try to one up each other.  I mean, I love people, I do, but the way we treat one another can be pretty horrible at times.

Plus if I just want to complain about something, it is not going to be a burden on other people.  I will not be challenged, or have whatever is bothering me trivialized.  A chatbot may appear to listen, be supportive, offer advice, and generally be kind.  It is all fake of course, but it helps when you just want to organize your thoughts, and have some sort of outside influence to comment on them.

Seems as if we're getting further away from person-to-person and face-to-face interaction.  This doesn't bode well for the future of humanity.

 


6 minutes ago, Scylla Rhiadra said:

Well, that's just one application of course.

But I disagree that customer service is "just" about "information." It's also about putting out fires, soothing disgruntled customers, finding solutions that solve problems. Even the illusion of talking to a person who is saying "there there, we'll fix this up for you" can make a difference in how a consumer perceives the product or service.

I DO agree with you. I meant having a chatbot purely for information purposes, not the entirety of customer service.

