SL replicants


SkylabPatel
Recommended Posts

What if bots on SL evolved, like replicants in Blade Runner... so that they created their own profiles, took inworld jobs, and acted like intelligent, human residents...

Would you be comfortable interacting with, having relationships with bots? Would you buy one for 100,000L if it came pre-programmed to be your lover or best friend or loyal club host and acted just like a human?


15 minutes ago, SkylabPatel said:

What if bots on SL evolved, like replicants in Blade Runner... so that they created their own profiles, took inworld jobs, and acted like intelligent, human residents...

Would you be comfortable interacting with, having relationships with bots? Would you buy one for 100,000L if it came pre-programmed to be your lover or best friend or loyal club host and acted just like a human?

No, absolutely not to both questions. I would much prefer a useful bot like the old Xelagot bots created for Active Worlds.

http://www.imatowns.com/xelagot/xelagot_x1.html


25 minutes ago, SkylabPatel said:

Would you be comfortable interacting [ ] with bots?

Sure.

26 minutes ago, SkylabPatel said:

Would you be comfortable [ ] having relationships with bots?

I really don't understand this question. It would be no different than the relationship I have with my refrigerator, so I would be comfortable with it, I suppose.

 

28 minutes ago, SkylabPatel said:

Would you buy one for 100,000L

No.  All the given conditions are pretty much the same.  Maybe if it were programmed to earn 1000L a week I would consider it.  If it is just going to be another programmed pet, I could see paying 5,000L if it were well done.


Side topic: in the next decade or two there will likely be discussion on the ethics of robots and artificial intelligence, and on giving future man-made creations rights of their own.

And that's 50/50 the coolest thing ever and a bit concerning on a social level, because there are always going to be a lot of people who see anything not alive as simply an object. We're getting very close to the point where AI is no longer a toy, no longer just lines of code and basic question-and-answer communication.

So I guess if we hit the point where AI was distinctly “individual”, and for some reason they played SL, you’d have to become normalized to them.


47 minutes ago, cheesecurd said:

Side topic: in the next decade or two there will likely be discussion on the ethics of robots and artificial intelligence, and on giving future man-made creations rights of their own.

And that's 50/50 the coolest thing ever and a bit concerning on a social level, because there are always going to be a lot of people who see anything not alive as simply an object. We're getting very close to the point where AI is no longer a toy, no longer just lines of code and basic question-and-answer communication.

So I guess if we hit the point where AI was distinctly “individual”, and for some reason they played SL, you’d have to become normalized to them.

is a pretty interesting problem this, and it is fast becoming appropriate to begin working through the ethical ramifications

self-driving cars are now a real thing. An ethical question posed is:

suppose a person in a self-driving car is motoring along. An obstacle crosses into their path, say a monster truck. The AI sees this and has to make a decision: take the hit or take evasive action. The AI calculates that taking the hit will result in 100% death for its rider, so it decides to take evasive action. However, its only evasive path is blocked by a parent and their child standing on the sidewalk

how does the AI determine whether it should take the evasive path, which could end up killing both the parent and the child, and if it does, how can that decision be deemed ethical by an objective human observer?

the AI can calculate a score for every death scenario.

a) hit the monster truck. 100% death score for my rider

b) avoid the monster truck and turn toward the pedestrians. Calculate the death score. The parent may snatch up the child and jump out of the way: say 30%, so 70% that we are going to hit them. Given distance, speed and trajectory: 60% of that 70 kills the child, 50% of that 70 kills the parent, and 10% kills our rider in the collision with the pedestrians

60% of 70 is 42. 50% of 70 is 35. 10% of 70 is 7. 42 + 35 + 7 = 84% death score

option b) is objectively the more ethical: 100% vs 84%

where option b) becomes unethical is when the pedestrians cannot escape being hit, when there is nowhere for them to escape to. The calculation is then: 60% of 100, 50% of 100 and 10% of 100 = 120% death score

to be ethical the AI would in this instance choose a): 100 vs 120. Take the hit, killing its rider
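
The arithmetic above is just an expected-value calculation, and it can be written out in a few lines. A minimal sketch of the sums in the post, using only the hypothetical probabilities assumed there (not a real driving algorithm):

```python
# Minimal sketch of the death-score arithmetic above.
# Every probability is the hypothetical figure assumed in the post.

def death_score(outcomes):
    """Sum of probability-weighted deaths, expressed as a percentage."""
    return 100 * sum(p_event * p_death for p_event, p_death in outcomes)

# Option a) hit the monster truck: the rider dies with certainty.
option_a = death_score([(1.0, 1.0)])                    # 100

# Option b) swerve toward the pedestrians, who have a 30% chance of
# getting clear, so a 70% chance that the car hits them.
p_hit = 0.70
option_b = death_score([
    (p_hit, 0.60),   # child killed in 60% of collisions  -> 42
    (p_hit, 0.50),   # parent killed in 50% of collisions -> 35
    (p_hit, 0.10),   # rider killed in 10% of collisions  ->  7
])                                                       # 84

# The same swerve when the pedestrians have nowhere to escape to (p_hit = 1.0).
option_b_trapped = death_score([(1.0, 0.60), (1.0, 0.50), (1.0, 0.10)])  # 120

# The rule in the post: pick whichever option has the lower score.
print(option_a, option_b, option_b_trapped)   # roughly 100, 84 and 120
```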


2 hours ago, Rhonda Huntress said:

I really don't understand this question. It would be no different than the relationship I have with my refrigerator, so I would be comfortable with it, I suppose.

In a purely logical sense, I agree.

But when I think of all the actually inanimate objects I've genuinely cared about, it probably wouldn't be that simple.

Besides, a lover I can just disable and throw into the basement whenever I'm not in the mood for interaction, without consequences? Yes please!


2 hours ago, Mollymews said:

is a pretty interesting problem this, and it is fast becoming appropriate to begin working through the ethical ramifications

self-driving cars are now a real thing. An ethical question posed is:

suppose a person in a self-driving car is motoring along. An obstacle crosses into their path, say a monster truck. The AI sees this and has to make a decision: take the hit or take evasive action. The AI calculates that taking the hit will result in 100% death for its rider, so it decides to take evasive action. However, its only evasive path is blocked by a parent and their child standing on the sidewalk

how does the AI determine whether it should take the evasive path, which could end up killing both the parent and the child, and if it does, how can that decision be deemed ethical by an objective human observer?

the AI can calculate a score for every death scenario.

a) hit the monster truck. 100% death score for my rider

b) avoid the monster truck and turn toward the pedestrians. Calculate the death score. The parent may snatch up the child and jump out of the way: say 30%, so 70% that we are going to hit them. Given distance, speed and trajectory: 60% of that 70 kills the child, 50% of that 70 kills the parent, and 10% kills our rider in the collision with the pedestrians

60% of 70 is 42. 50% of 70 is 35. 10% of 70 is 7. 42 + 35 + 7 = 84% death score

option b) is objectively the more ethical: 100% vs 84%

where option b) becomes unethical is when the pedestrians cannot escape being hit, when there is nowhere for them to escape to. The calculation is then: 60% of 100, 50% of 100 and 10% of 100 = 120% death score

to be ethical the AI would in this instance choose a): 100 vs 120. Take the hit, killing its rider

You've been reading/watching I, Robot, haven't you? ;)


22 minutes ago, Selene Gregoire said:

You've been reading/watching I, Robot, haven't you? ;)

i like the questions posed by these kinds of movies.  Is pretty interesting to try and work out how an answer could be arrived at.  Is not so much the answer that may result (except when you are the rider, or the person on the sidewalk, then the answer and its outcome are personal). Is more the process used to arrive at an answer that I find most interesting

on the more general topic, the ethics of owning a robot that does what we want on command: when the robot is, say, a vacuum cleaner, or a self-cleaning fridge, or a house humidity/temperature controller, or an orchard picker, then I think it is fair to treat it as a machine and not imbue it with attributes it doesn't actually have

this is said with my analytical brain turned on. But turning on the other part of my brain:

human beings do have a tendency to 'humanise' things.  Like I read an article the other day about a woman who cried when she handed over her laptop to the service person at the workshop where all good and faithful laptops go to their heavenly reward.  Her laptop had been her faithful and trusty companion through years of working toward her doctorate. Countless hours of her alone with her faithful companion, which was always there for her, sharing her joys, ups and downs on the long journey. The lady felt some sense of betrayal, so she cried a little at the workshop

i get this feeling. I still have my very first laptop, is in its box bed in my wardrobe and I can't bear to throw it away, even if it can hardly do anything anymore.  My mum is the same. Mum had a little red Mazda car for years. Did everything together they did. Spent heaps of money on it over the years to keep it alive (as Mum thought about it). Then one day it couldn't go anymore and Mum had to let it go. Mum cried about that as well

my Dad also. He has a bush jacket. Is nearly 40 years old now. And has been repaired and patched by Dad a zillion times over. Mum hates it, is smelly and ugly and Dad has to hang it up in the shed, never to be worn anywhere near the house.  One time Mum threw it out and bought him a new one. Dad never cried but he was mad as anything. Got it out of the rubbish bin and put it back in his shed.  And he said to Mum: Me and my jacket were together in the bush, long before you came into my life. Don't make this into a competition.  And I am looking at my Dad thinking seriously!? and Mum just rolled her eyes at him and at me and smiled. But it was out there and was never mentioned ever again. Imbuing a thing with something greater than what it actually is

so I think that no matter how 'intelligent' a thing may appear, people are going to continue to do this to some degree. Some will treat it the same way as a self-cleaning fridge and not think much beyond that, which for them is ok to do.  Other people will treat the thing as a companion, because to them it is, and imbue it with all the attributes that a companion brings to them

 


9 hours ago, SkylabPatel said:

[...] Would you buy one for 100,000L if it came pre-programmed to be your lover or best friend or loyal club host [...]

If I had to pay that to get a lover, a friend or a loyal host, I’d have bigger problems to worry about.


8 hours ago, Wulfie Reanimator said:

But when I think of all the actually inanimate objects I've genuinely cared about, it probably wouldn't be that simple.

But I really like my refrigerator.  It has a tablet built into the door.  I can ask it to find me recipes, it tells me the news and even has my email calendar on the door.  I even have it call me by a pet name.  I love my 'fridge.  But I'm not buying it a Christmas present.

 


I too am fascinated by the ethical dilemmas Molly poses. It will be interesting to see how the public reacts to the first example of an AI vehicle taking the utilitarian approach to a traffic accident. I don't think we're going to see deontological AI anytime soon, as it seems the creation of autonomous vehicles requires a fairly utilitarian mindset.

For a very long time, I have imagined a hypothetical situation in which humans design hardware underpinnings for AI that are significantly more capable than they anticipate. The human-designed AI algorithms that run on the hardware are impressive, but also vastly underutilize the hardware because the designers misunderstood their own design. One of the problems the humans eventually turn the AI machine loose on is improving its own algorithms. The machine hums along, trying out various re-imaginings of its own internal workings. After some days or months of musing, it discovers, midday, a configuration that produces a 10% improvement in its own ability. By 5 PM, as the researchers head for happy hour, it's up to 100%. At 4 AM the machine plateaus at a thousandfold improvement.

Later that morning, the researchers wake to a brave new world. We cannot think our way to a thousandfold improvement in intelligence overnight. Our neural circuits have a very limited capacity for change within one lifetime and a very slow rate of evolutionary adaptation over many. Once AI is inside its own design loop, it seems destined to outrun us. Yet that day won't result in robots taking over the world. Though that AI machine might suddenly advance 1000x and be naughty, there will still be too many human-controlled kill switches along the way to mass replication. I really don't foresee a future in which we're smart enough to create such AI, but dumb enough to put it in a position to capitalize on a massive step change in its own capability.

But we don't need to think about that future. The most rudimentary AI is having potentially profound and unseen effects on us today. Credit-rating algorithms are reflecting unconscious bias at mass scale. Political canvassing and ideological manipulation, powered by AI, are affecting societies around the world.

If AI is our road to ruin, we'll drive ourselves into the ditch long before it slithers off the ground and grabs the wheel.

If it's our road to salvation, I can't wait!


On the topic of self-driving cars, I'd like to note that such an ethical dilemma wouldn't realistically come about that way.

Self-driving cars right now kinda suck; they're learning. But even if they were great, we couldn't have them on the road. Our roads are terrible, among other issues, but people are the biggest problem for any self-driving vehicle. You can't have other cars on the road that don't communicate with each other the way the self-driving cars do. You'll just have the same traffic as now, but people will be asleep more often.

Until the only cars are self-driving, or there are dedicated self-driving-only expressways and districts, you won't see any self-driving cars on the road commercially. So in this future where such a thing could occur, the choice to kill the car's own passenger or potentially injure a passerby (which has a simple answer anyway: the passenger, always) would be such a rare issue to come across in a world of only self-driving cars that it wouldn't be accounted for.
What reason would the computer have to know what to do if someone randomly drove a non-self-driving car on the wrong side of the road? If there are no non-self-driving cars, it's not a concern.

As for the answer though, think of it in a legal sense. The driver of a self-driving car is letting a computer take over. The computer is not its own entity; it's owned by a company, or maybe even by the driver, meaning it's their legal responsibility. If the choice is between letting the car's passenger or passengers die and involving an unrelated party, it's going to avoid the unrelated party. Even the tiniest chance of harming the uninvolved party would be riskier fiscally than just taking the metaphorical L.

Outcome A is that a driver was killed in an accident because of some dude driving the wrong way.

Option B is potentially a person on the sidewalk or road being killed by a self-driving car that decided to risk killing them to save its passenger.
That's letting a computer put a value on a human life, even if only as a risk, and subsequently the company or the car's owner putting the value of their customer, or of their own life, over the value of others.

 

Oh, semi-related (pun intended): self-driving cars have the same issues as self-driving truck technology. There's a lot more to driving a truck than basically cruise control plus steering. Driving a car is a more complicated matter than our current technology can handle at a reasonable scale. Remember, self-driving cars don't work unless they are the only cars. So how do you train a self-driving car to manage the finer points of driving? Gas stations under construction, freshly paved roads with no signage or lines yet, avoidance situations that involve leaving the road?


19 hours ago, Mollymews said:

is a pretty interesting problem this, and it is fast becoming appropriate to begin working through the ethical ramifications

self-driving cars are now a real thing. An ethical question posed is:

suppose a person in a self-driving car is motoring along. An obstacle crosses into their path, say a monster truck. The AI sees this and has to make a decision: take the hit or take evasive action. The AI calculates that taking the hit will result in 100% death for its rider, so it decides to take evasive action. However, its only evasive path is blocked by a parent and their child standing on the sidewalk

how does the AI determine whether it should take the evasive path, which could end up killing both the parent and the child, and if it does, how can that decision be deemed ethical by an objective human observer?

the AI can calculate a score for every death scenario.

a) hit the monster truck. 100% death score for my rider

b) avoid the monster truck and turn toward the pedestrians. Calculate the death score. The parent may snatch up the child and jump out of the way: say 30%, so 70% that we are going to hit them. Given distance, speed and trajectory: 60% of that 70 kills the child, 50% of that 70 kills the parent, and 10% kills our rider in the collision with the pedestrians

60% of 70 is 42. 50% of 70 is 35. 10% of 70 is 7. 42 + 35 + 7 = 84% death score

option b) is objectively the more ethical: 100% vs 84%

where option b) becomes unethical is when the pedestrians cannot escape being hit, when there is nowhere for them to escape to. The calculation is then: 60% of 100, 50% of 100 and 10% of 100 = 120% death score

to be ethical the AI would in this instance choose a): 100 vs 120. Take the hit, killing its rider

I think this should be modifiable via user settings.  That way, the owner of a self-driving car could decide in advance just how moral or selfish she wants to be.
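
Purely as a thought experiment on that suggestion: a user-set "selfishness" value could be bolted onto the death-score comparison from the quoted scenario, discounting everyone who is not the rider before the two options are compared. The function name, the dial, and the numbers are all made up here for illustration:

```python
# Hypothetical sketch of a user-settable ethics dial for the scenario above.
# selfishness = 0.0 counts all lives equally (the rule in the quoted post);
# selfishness = 1.0 counts only the rider.

def weighted_score(rider_risk, bystander_risk, selfishness):
    return rider_risk + (1.0 - selfishness) * bystander_risk

# Percentage-point scores from the "pedestrians trapped" case in the quote.
hit_truck = (100, 0)    # option a: rider dies for sure, no bystanders involved
swerve    = (10, 110)   # option b: rider 10, child 60 + parent 50

for selfishness in (0.0, 0.5, 1.0):
    a = weighted_score(*hit_truck, selfishness)
    b = weighted_score(*swerve, selfishness)
    choice = "swerve" if b < a else "hit the truck"
    print(f"selfishness={selfishness}: a={a:.0f}, b={b:.0f} -> {choice}")

# selfishness=0.0: a=100, b=120 -> hit the truck
# selfishness=0.5: a=100, b=65  -> swerve
# selfishness=1.0: a=100, b=10  -> swerve
```

With the dial at zero this reproduces the 100-vs-120 comparison from the quote; in this particular scenario, any setting above roughly 0.18 flips the decision toward the swerve.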


picking up on a point that cheesecurd, Lindal and Maddy have raised

in the present mixed-mode vehicle circumstance (mixed mode meaning some vehicles are totally human-controlled and others are AI-assisted with human override), the ethical questions needing answers fall more to the legislative/regulatory bodies, rather than being left to the owner/driver or the vehicle manufacturer

in the mixed-mode free-path model, ethically the legislator/regulator would have to determine that the driver is always culpable: being assisted by an AI does not absolve the driver. Most people, including manufacturers, get this already. On the whys: some vehicles have cruise control, for example. Hitting a pedestrian while in cruise control doesn't absolve the driver. Hitting a pedestrian while the vehicle is in AI-assisted mode doesn't absolve the driver either, because the driver does have an override method, the same as with cruise control

and I think this will always be the case in the mixed-mode free-path circumstance, free path meaning the vehicle can be driven anywhere

the precursor to how totally self-driving vehicles could eventuate in the real world, and the most likely model I think, is already in SL: the Yavascript pod system

when we think about how this could be modeled in the real world, then every person could own their own pod. Pods can come in different models/styles/colors/etc. and we buy whichever model we want.  To go somewhere we jump in our pod, tell it where we want to go, and off it goes.  We don't drive it, we are a passenger

is like a railcar system, where each pod, like a railcar, can be the engine or a carriage

our pod drives out of our garage and then onto the grid. The grid is a fixed, laned system. Our pod, when entering a lane occupied by other pods, just slots into a gap. At busy times a lane will be filled with ribbons of pods, all nose to tail, interlocked like train carriages, moving at the same speed. In some really busy places there will be multiple lanes (think motorways), maybe even stacks of lanes (super motorways)

when the pod arrives at our destination it parks in a bay, we get out, and we're done

as we don't have any driver override capability in this model, the liability is on the grid operator, as it is with train services. I think for the grid operator to accept liability, when our pod goes to enter the grid the grid operator will do a safety check on it. If the pod fails the safety check, the grid operator will refuse entry. I think everybody else using the system would be happy with this safety check also
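
A rough sketch of what that gatekeeping step might look like. The pod fields, check names, and pass criteria below are invented purely to illustrate the idea of the grid operator vetting a pod before it may slot into a lane; they don't describe any real system:

```python
# Illustrative sketch only: a grid operator refusing entry to pods that
# fail a safety check, so that it can accept liability for pods it admits.

from dataclasses import dataclass

@dataclass
class Pod:
    pod_id: str
    brake_test_passed: bool
    sensors_ok: bool
    firmware_certified: bool

def grid_entry_check(pod: Pod) -> bool:
    """Admit a pod onto the grid only if every safety check passes."""
    checks = {
        "brakes": pod.brake_test_passed,
        "sensors": pod.sensors_ok,
        "firmware": pod.firmware_certified,
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        print(f"{pod.pod_id}: entry refused, failed {', '.join(failed)}")
        return False
    print(f"{pod.pod_id}: cleared to slot into a lane")
    return True

grid_entry_check(Pod("pod-42", True, True, True))    # cleared
grid_entry_check(Pod("pod-07", True, False, True))   # refused (sensors)
```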


On 11/10/2019 at 2:08 AM, SkylabPatel said:

What if bots on SL evolved, like replicants in Blade Runner... so that they created their own profiles, took inworld jobs, and acted like intelligent, human residents...

Would you be comfortable interacting with, having relationships with bots? Would you buy one for 100,000L if it came pre-programmed to be your lover or best friend or loyal club host and acted just like a human?

This would be great for NPCs in RP regions. I wouldn't object to interacting with a bot host at a club either, but I wouldn't tip it.  And I certainly wouldn't pay that much for one, no matter how perfect it was. That's like -  about £300. Nothing in SL is worth that much to me, except land.  The most I would pay for one - probably not more than L$5,000.  That's the point at which I have to start considering my RL budget as well as my SL budget.

 

