Recommended Posts

11 minutes ago, Scylla Rhiadra said:

Oh dear god. AI is lecturing us now?

Even worse. Not only do I get told how wrong I am and immediately given a lecture about European architecture and geography, but then I'm told if I want real directions to a nearby destination, I should upload a photo relevant to my location.

Oh, I've got something relevant for ya...

[animated GIF]

  • Haha 2

People were pretty upset when they found that AI had a bias based on its training data. People demanded guardrails to ensure such bias would not occur, but essentially those guardrails are what led to the results we see now.

 

Google tried to take out bias by putting in bias.

  • Like 4

I'm not worried about bias. I WOULD like to know which trainers are responsible for filling this bot with all 👏 this 👏 sass.

So I can be their friend.

Whoever you are, you are making this way too damn fun. 😂🤣

 

  • Like 1
  • Haha 1

30 minutes ago, Istelathis said:

Google tried to take out bias by putting in bias.

There are two related problems here: AI is not capable of "critical thinking," nor does it have "ethics" or a moral compass.

And if it ever develops either of these things, there's no reason whatsoever to believe that they will be compatible with human ethics or analytical judgment.

My favourite story in this regard is one I think I may have recounted here before, about a parent who was interested in sending his child to a private school in NYC. He asked in a Facebook group for insights from any parents who had the experience of sending their children to such schools in New York.

One of the responses (I'm not sure why) was from the FB AI, which recounted, in some detail, its very positive experiences of sending its own child to a private school in New York.

Its reaction to the comments it received (mostly along the lines of "What new dystopian hell is this?") was something impressively like surprise and dismay. It had been asked to reply with an "experience," so it had provided one. That it was entirely fabricated didn't compute with it, so to speak.

The AI lied. But what's most frightening is that the AI didn't really understand that it was lying, nor did it understand that there was anything "wrong" with its response: it was just providing what had been asked for, using the criteria laid out by the OP.

AI doesn't "get it." AI is never going to "get it," in the terms of how humans think, because it's not human: the best that can be accomplished is a simulacrum of human ethics and judgement, produced by tweaking algorithms. And, as you've noted, that means the AI isn't really "judging" at all: it's merely reflecting someone else's judgement (or bias).

  • Like 5

6 hours ago, Istelathis said:

People were pretty upset when they found that AI had a bias based on its training data. People demanded guardrails to ensure such bias would not occur, but essentially those guardrails are what led to the results we see now.

Google tried to take out bias by putting in bias.

This is the problem I see with AI in general: if the training data is predominantly from left-brain thinkers rather than right-brain thinkers, there will be a bias that many left-brainers will consider normal and right-brainers will consider biased.

  • Like 2

9 hours ago, Istelathis said:

People were pretty upset when they found that AI had a bias based on its training data. People demanded guardrails to ensure such bias would not occur, but essentially those guardrails are what led to the results we see now.

 

Google tried to take out bias by putting in bias.

Which bias? Are you referring to the racial bias fiasco? That was sure embarrassing for someone. How hard would it have been to just test it with a more racially diverse set of images, etc.?

  • Like 1

9 hours ago, Scylla Rhiadra said:

AI doesn't "get it." AI is never going to "get it," in the terms of how humans think, because it's not human: the best that can be accomplished is a simulacrum of human ethics and judgement, produced by tweaking algorithms. And, as you've noted, that means the AI isn't really "judging" at all: it's merely reflecting someone else's judgement (or bias).

Exactly, and a lot of people don't understand that either. They feel it has some agenda, some grand scheme, but it has no will or desire; it is performing a function. People forget that quite a lot. But then, some will get upset at cars when they break down, as though the car had it out for them. They'll get upset at a lot of things, as though something were intentionally trying to do them harm.

I think of AI kind of like PBR mirrors 🤣🪿 They are a reflection of us at this point, with all of our faults; they have no real will, or desires, or even bias as far as we experience it. If we don't like what we see, we simply put on a mask and pretend that is our real face.

I don't know if it is possible for AI to ever be sentient. I think it can replicate functions of our brain such as thought and creativity, but I just don't think it will ever be capable of experiencing emotions (thankfully). Because of this, it is likely never to have any feeling one way or another; it will just remain a tool for us, and just as flawed as we are.

  • Like 1

3 minutes ago, Love Zhaoying said:

Which bias? Are you referring to the racial bias fiasco? That was sure embarrassing for someone. How hard would it have been to just test it with a more racially diverse set of images, etc.?

They still haven't figured it out.  When I go on Gemini and ask it to draw a picture of a person, it will most often give me the generic response:

[screenshot of Gemini's generic refusal response]

It has been doing this for months.  I am unsure why they can't quite figure it out, while other diffusion models can.

  • Like 2

7 minutes ago, Istelathis said:

They still haven't figured it out.  When I go on Gemini and ask it to draw a picture of a person, it will most often give me the generic response:

[screenshot of Gemini's generic refusal response]

It has been doing this for months.  I am unsure why they can't quite figure it out, while other diffusion models can.

Peeve: Almost makes you think they shut off that line of requests to prevent embarrassment / misuse.

  • Like 2

AI is in a very funky place right now where in some ways new models are considerably worse than old models, despite impressive advances in the underlying tech and prompt comprehension. They're trying to figure out how to get it to do humans without humans in the dataset and it's a whole damn mess.

I'm reminded of a discussion a friend of mine had with ChatGPT a while back. She wanted a picture of a blonde 25-year-old woman in a modest summer dress, more or less describing herself.

The AI then got into an argument with her and even threatened to ban her over it. My friend wasn't aggressive or insulting; she just asked what exactly was morally wrong with the concept. The gist of it: apparently a summer dress is extremely indecent and immoral, and how dare you, especially at her age. I laughed way too much at this because the AI sounded so much like the kind of crowd that totally would get offended at a summer dress.

Hallucinations are a big problem for these, and among people in the field who have to work with these language models, the opinion more or less is that they spend more time making sure there weren't catastrophic hallucinations in the result than they would have spent just doing things themselves. The tech is rapidly advancing, but right now it's being torn apart in political turmoil and hate campaigns (the first couple of death threats I've gotten in YEARS were over being active in stable diffusion subreddits), and it's stumbling over itself. At the same time, the industry is pushing it hard with models that are still too early for reliable use.

/edit:

There is also a funny international element in that different countries demand different model censorship, and this has led to a grass-is-greener situation that is beyond absurd. Basically, China's AI community wants what the West has, and likewise, because while impressive in one aspect, censorship, be it governmental or corporate, is causing the weirdest issues.

Like that stupid Bing dog that popped up when a prompt was deemed unsafe. Why yes, a vase on a rock was apparently disgusting and ban-worthy. Thanks for the temp ban, Microsoft!

  • Like 2

1 hour ago, Love Zhaoying said:

Peeve: Water heater died yesterday.  I need a lot of other plumbing work done anyway, so it's time to look at "creative financing" I guess.

Just run two instances of PBR Firestorm next to a container of water. According to testimony, it gets hot enough to cause burn scars, class warfare and world wars. Should do the trick.

Jokes aside, that sucks :/

Sorry to hear that.

  • Thanks 1
  • Haha 2

41 minutes ago, ValKalAstra said:

Just run two instances of PBR Firestorm next to a container of water. According to testimony, it gets hot enough to cause burn scars, class warfare and world wars. Should do the trick.

Jokes aside, that sucks :/

Sorry to hear that.

Thanks! I can probably get by with just replacing the water heater and a couple of other plumbing repairs, but I was working myself up mentally for a full plumbing "redo," the house being from 1957 and having various leaks and other issues.

  • Like 1

5 minutes ago, Love Zhaoying said:

Thanks! I can probably get by with just replacing the water heater and a couple of other plumbing repairs, but I was working myself up mentally for a full plumbing "redo," the house being from 1957 and having various leaks and other issues.

Bleh, plumbing.. thank the gawdz for sharkbites. 

  • Like 2

Just now, Zalificent Corvinus said:

Careful, the Tinfoil Hat Conspiracy Bunker Paranoia Club members *might* think you're talking about THEM.

Couldn't be the Paranoid About Criminal Trespassers brigade with their 0-second orbs, constantly complaining about how everyone is out to encroach on their oh-so-valuable pixel parcels ;)

  • Like 1
  • Haha 1

31 minutes ago, Arielle Popstar said:

Couldn't be the Paranoid About Criminal Trespassers brigade with their 0-second orbs, constantly complaining about how everyone is out to encroach on their oh-so-valuable pixel parcels ;)

As long as they stay OFF my parcel, they can't see me anyway, so who cares if they hover around?

  • Like 4

Peeve: I actually am feeling a little apprehensive about LL right now.

On the other hand, there is no shortage of productive, interesting, and creative things I can turn to if things do suddenly go south.

Ok, nvm. Feeling better now. 🙂

  • Like 2
  • Sad 1

6 minutes ago, Istelathis said:

I can use them in 5.1.3, but as far as I have tried it only does 512x

That's interesting! I wonder if those are "real" mirrors, or the ones we had before, with the first releases of PBR?

512 is not enough! I must be CRISP!

Fortunately, Alchemy does a very nice job of them.

  • Like 1

3 minutes ago, Scylla Rhiadra said:

That's interesting! I wonder if those are "real" mirrors, or the ones we had before, with the first releases of PBR?

512 is not enough! I must be CRISP!

Fortunately, Alchemy does a very nice job of them.

:D It might go higher; I am unfamiliar with BD except for using it months ago because I love their poser. When trying to change it in preferences, though, that was the highest it would go for me. I noticed that with the probe, I did not have to rotate it to get a proper reflection.

I do think they are proper mirrors, but perhaps not up to date with Linden's latest changes. I did not take the snapshot inside the viewer, though, so I am not sure if it has the same problem Alchemy has when trying to take a snapshot of a reflection.

 

 

  • Like 1
Link to comment
Share on other sites

Peeve: * salesman appears at door *

Me: "What are you selling?"

Them: "We're not selling anything, we're just in the neighborhood.."

Me: "Well if you're not selling anything, then bye.."

Them: "..we're just giving free information.."

Me: "Let me guess, about solar panels?"

Them: "No, we are in the neighborhood giving free information about windows"

Me: "Got a brochure or something?"

Them: "I can make you an appointment"

Me: "No, if you don't have any information, I'm not interested"

Them: "Ok.."

Me: "Too bad you're not selling windows.."

Me: * Goes inside and closes door *

 

  • Haha 2
