Moral machines.

  • 15 Replies
  • 9132 Views
*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Moral machines.
« on: September 12, 2009, 06:12:56 pm »
I'm reading a twelve-chapter book called Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen, published by Oxford University Press this year. Its main topics are whether we want machines making moral decisions for us, whether robots can be moral, and how to make machines moral. I should note at this point that "ethics" is interchangeable with "morals" here, and "bots" is interchangeable with "machines". The material in the book is intended to cover both software bots and hardware bots.

Just thought some of you might have read it or be interested in it. I'll post back here with thoughts as I read more.

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Moral machines.
« Reply #1 on: September 12, 2009, 06:29:12 pm »
Look forward to your review, lrh...

Fancy writing a full write up of the book and your findings/thoughts once you have finished?

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Moral machines.
« Reply #2 on: September 12, 2009, 06:36:03 pm »
Quote from: Maviarab
Look forward to your review, lrh...

Fancy writing a full write up of the book and your findings/thoughts once you have finished?

Not really. I'm thinking I'll just go over ideas and passages that particularly stood out to me. I've finished reading the first chapter, and I can say that I've already encountered several things I want to talk about.

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Moral machines.
« Reply #3 on: September 12, 2009, 07:19:42 pm »
I've finished reading the first chapter, "Why Machine Morality?". This chapter addresses the question of why we need to teach machines morality. The idea is that the more autonomous a machine is and the more freedom it has, the more it will need moral standards. An autonomous machine should be able to assess the ethical acceptability of its options in relation to humans and human priorities and pick the best one. We already have machines that make decisions affecting human lives.

According to Peter Norvig, the director of research at Google,

Quote
today in the U.S. there are between 100 and 200 deaths every day from
medical error, and many of these medical errors have to do with computers.
These are errors like giving the wrong drug, computing the wrong dosage,
100 to 200 deaths per day. I’m not sure exactly how many of those you
want to attribute to computer error, but it’s some proportion of them. It’s
safe to say that every two or three months we have the equivalent of a 9/11
in numbers of deaths due to computer error and medical processes.

 :o

So we can't simply expect automation to go smoothly without morality.

Of course, one of the topics in the chapter is the idea that it will be impossible for machines to act perfectly one hundred percent of the time. A machine might fail to do the right thing for several reasons, such as faulty components, poor design or teaching, insufficient abilities or options, or its own decisions.

So already I'm re-evaluating my hard-line position of no computer or machine error. We should certainly minimize the risks, but I'm no longer thinking that if a machine is somehow responsible for someone's death we should automatically scrap it and delete the blueprints. For instance, even though the example above seems shocking, is it possible that we might suffer without it? What if it compensates for a shortage of skilled workers? Are 200 deaths a day acceptable with the system if thousands of people would go untreated without it? Is the system more accurate and safer than human workers? We're not perfect either. Maybe if the system depended on humans the deaths would be higher. Just because a system has failures doesn't mean it's the worst system.
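Just to make that trade-off concrete for myself, here's a minimal sketch with purely made-up numbers; none of these figures come from the book or from Norvig, and the function is only an illustration of the point that a system with failures can still be the lesser harm.

Code:
# Purely illustrative comparison of expected daily deaths with and without
# an automated medical system. Every number here is a made-up assumption.

def expected_daily_deaths(treated, error_death_rate, untreated, untreated_death_rate):
    """Deaths from treatment errors plus deaths from going untreated."""
    return treated * error_death_rate + untreated * untreated_death_rate

# Scenario A: automation treats everyone but makes rare fatal errors.
with_system = expected_daily_deaths(
    treated=1_000_000, error_death_rate=0.0002,      # roughly 200 error deaths/day
    untreated=0, untreated_death_rate=0.0)

# Scenario B: no automation, and a staff shortage leaves patients untreated.
without_system = expected_daily_deaths(
    treated=900_000, error_death_rate=0.0001,        # humans err less often here
    untreated=100_000, untreated_death_rate=0.005)   # but many go untreated

print(f"With the system:    {with_system:.0f} expected deaths/day")    # 200
print(f"Without the system: {without_system:.0f} expected deaths/day") # 590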

Another development necessitating moral machines is the mechanization of the modern military. This picture blew me away. No pun intended.



 :o

While for now it is semi-autonomous, meaning humans control it part of the time or control a portion of it all of the time, a stated goal of the military is to fully unman the front lines. If we can't prevent the creation of war machines, or if we find that we need them, we should ensure that they follow the ethical guidelines of war set forth by civilized nations.

Another topic mentioned was the idea that ethical decisions don't just involve human lives, but human values as well, such as freedom and privacy.

All of this goes to show why we can't just have machines become inactive when faced with difficult decisions. The demand for machine action is everywhere. While errors of omission are generally more accepted than errors of commission, automated systems will be expected to act.

As food for thought, I'd like to share the opening case of the chapter: the trolley case. Say a trolley is running out of control down a track that branches. If it continues on its present course, it will go down the branch with five track workers on it. However, if the conductor flips a switch, it will go down a branch with one worker on the track. What should the conductor do? What if a bystander could flip the switch? What if no one could flip the switch, but a bystander could push a large man onto the track to stop or derail the trolley? What if the one individual on the other branch were a child, or a prominent individual responsible for many people's lives?
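For fun, here's a minimal sketch of what a purely outcome-counting (utilitarian) chooser for the trolley case might look like; the option names, numbers, and the "fewest deaths wins" criterion are my own illustration, not anything from the book.

Code:
# A naive utilitarian chooser for trolley-style dilemmas.
# Each option maps to the number of lives expected to be lost.
# It deliberately ignores everything the variants raise: acting versus
# standing by, physically pushing someone, and who the victim is.

def choose_option(options: dict[str, int]) -> str:
    """Return the option with the fewest expected deaths."""
    return min(options, key=options.get)

# Classic case: stay the course (5 deaths) vs. flip the switch (1 death).
print(choose_option({"stay on course": 5, "flip the switch": 1}))
# -> flip the switch

# Footbridge variant: the same arithmetic says to push the large man,
# which is exactly where most people's intuitions part ways with the math.
print(choose_option({"do nothing": 5, "push the large man": 1}))
# -> push the large man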
« Last Edit: June 29, 2010, 06:42:21 pm by lrh9 »

*

Dante

  • Mechanical Turk
  • *****
  • 160
Re: Moral machines.
« Reply #4 on: September 27, 2009, 03:02:03 pm »
The question really stands: will our mechanical and digital children be able to make the correct judgment between good and evil? Will they take the lesser evil over the greater evil? Will they think about the future implications, or just deal with the short-term ones?

It's a fascinating book, and the first of its kind.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Moral machines.
« Reply #5 on: September 27, 2009, 08:49:00 pm »
Let's face it, as much as none of us would like to admit it, we know that when there is a war,
lives will be lost. Sons, fathers, brothers, mothers, sisters, daughters, aunts, uncles....

War plays no favorites, nor do viruses and diseases for the most part... While most would agree that
war is not a natural cause of death like old age or some other malady, it has become acceptable
to some... perhaps the majority of those who understand the reason, if ever there was one, while
others never grasp the ideology behind it.

As with many endeavors of the past, whether trains, boats or planes, there have been and will always be
losses. Whether these human losses are or become acceptable is left to the victims' relatives... the
survivors.

If machines then become the governing force over humans at some scary point in our future, let's
hope they rule with compassion and grasp the concept that ALL life is sacred and must be valued
as far as possible and practical. Only then will said loss become more bearable... not acceptable, but
perhaps a bit more tolerable.

I think something... some core value must be ingrained into the bot's "brain": that it was humans who created it and that humans must always be held in high regard. It is with this level of understanding that the bot must realize it will never be on equal terms with its human creator.

These ideas of mine would probably support, or could be used in conjunction with, Asimov's Three Laws of Robotics, but more on a rational, emotional, sympathetic level instead of a strictly mechanical-being mentality.
---------------------
Nice thread!
In the world of AI, it's the thought that counts!

*

one

  • Starship Trooper
  • *******
  • 313
Re: Moral machines.
« Reply #6 on: September 27, 2009, 10:21:28 pm »
Hopefuls,
IMO, and from experience, there is a mitigating factor here.
The military instigates or dictates reason and specification, and for that reason I think it will take quite a bit of effort by the independents to achieve the integration of 'morality'.
If dedicated programmers are willing to do such things, IMO it should be done in a way that it can't be taken away or 'cut out' of the machine/bot. I can foresee the military modifying a good program more easily than starting from scratch, especially in a time of need. (I sometimes wonder to what extent the 'inventory' already exists, as I have come across things that are not 'mainstream' or public.)
I do agree that morality and the value of compassion need to be passed on and discussed, sooner rather than later, because it seems the public has always waited for the future or tomorrow to come... well, believe it or not, it's tomorrow somewhere in the world.

Regards,
J.
Today Is Yesterdays Future.

*

Dante

  • Mechanical Turk
  • *****
  • 160
Re: Moral machines.
« Reply #7 on: September 30, 2009, 10:05:56 am »
During the Cold War, the Soviets built a mega-death AI system... called Perimeter.

For its entire existence it has lain dormant, but if it were awoken, it could use a large network of sensors to check for nuclear blasts. If a blast was confirmed, it would then attempt to communicate with HQ; if someone was there, the system would ask what it should do.

If, on the other hand, HQ did not respond, then control of Perimeter would be given by the system to whoever was manning the console in an underground, secure bunker. If whoever was there decided to push the button... well, the war would have been fought entirely by machines.

Perimeter would launch what are known as command missiles: more or less tactical, autonomous computers with a rocket attached. These missiles had the highest priority and would commandeer all assets, every man, every vehicle... every warhead.

And they would attack the US, destroying it totally, with no way to end the strikes, because the damn stuff was automated.

The US had a system for handling most of this with humans, but that would have failed if Perimeter had been activated.
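Laying out how I read that chain of events (this is only my reading of the description above, with made-up names; it doesn't reflect how the real Perimeter system worked internally):

Code:
# A toy sketch of the decision chain described above. The function and
# its inputs are hypothetical placeholders, not the real system's logic.

def perimeter_decision(blast_detected: bool, hq_responds: bool,
                       operator_pushes_button: bool) -> str:
    if not blast_detected:
        return "stay dormant"
    if hq_responds:
        return "await orders from HQ"
    # HQ is silent: launch authority falls to the bunker operator.
    if operator_pushes_button:
        return "launch command missiles (fully automated retaliation)"
    return "hold"

print(perimeter_decision(blast_detected=True, hq_responds=False,
                         operator_pushes_button=True))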

*

TrueAndroids

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 120
Re: Moral machines.
« Reply #8 on: March 26, 2010, 08:50:59 pm »
Yeah, I too think the Three Laws of Robotics could do the trick for civilian AI, if they are built in so that the AI shuts down if they are tampered with. Then they could have autonomy, self-volition, and a will to self-survival, but tempered by the laws:
   
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
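As a rough illustration of what "tempered by the laws" could mean in practice, here's a minimal sketch of priority-ordered rule checking. The Action fields and the choice function are entirely hypothetical placeholders; judging whether an action actually harms a human is, of course, the genuinely hard part.

Code:
# Hypothetical sketch: rank candidate actions by which laws they violate,
# treating the First Law as strictly more important than the Second, and
# the Second as strictly more important than the Third.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False           # First Law violation
    disobeys_human_order: bool = False  # Second Law violation
    endangers_robot: bool = False       # Third Law violation

def law_violations(a: Action) -> tuple[int, int, int]:
    """Lexicographic key: (First, Second, Third) Law violations."""
    return (int(a.harms_human), int(a.disobeys_human_order), int(a.endangers_robot))

def choose(actions: list[Action]) -> Action:
    """Pick the action violating the fewest laws, highest-priority law first."""
    return min(actions, key=law_violations)

# Disobeying an order is acceptable when obeying it would harm a human.
options = [
    Action("obey the order and harm a bystander", harms_human=True),
    Action("refuse the order", disobeys_human_order=True),
]
print(choose(options).name)   # -> refuse the order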

*

NoLongerActive

  • Trusty Member
  • ***
  • Nomad
  • *
  • 53
Re: Moral machines.
« Reply #9 on: March 27, 2010, 04:16:11 am »
I'm going to throw a wrench into the whole system. :) I guess I'm good at that! But I'm going to really think WAY outside the box here.

Firstly, what exactly *IS* right from wrong? I propose that it is up to the perception of those it is affecting. If it is hindering the survival of an entity, then it's wrong. If it is enhancing and protecting the survival of an entity, it is right. That is the basis of right and wrong. But this can look very different depending on the entity.

Say a machine wants to survive. But a human decides they are "done using it" and wants to shut it down. Self-preservation for a human is "right". No human would roll over and just die on command if they knew it was just to get rid of them because they are no longer of any use. It would be "wrong". But what if it were being done to a machine? Would the machine's perception of wanting to survive be "wrong"?

Murder of any sentient being is generally considered wrong and unethical. But it happens all the time and some justify it as "right". War, euthanasia, pulling the plug on someone "locked in" and unable to communicate, etc. due to illness/injury.

I think when it comes down to it, there is no 'right' or 'wrong'. There is 'positive force' and 'negative force'. If you think in terms of positively and negatively charged particles, a negative one is not 'wrong'; it's just the flow it goes in. Positive is not 'right'; it's just the flow it goes in. So there is no 'moral' or 'right' and 'wrong', just positive and negative flows, which depend on the entity that the energy is influencing (input); the entity will then need to react (output) to survive. And that reaction is also either a negative or positive force.

It's how the universe works. At least that is my belief. But then again, that's based on my own experiences and 'spiritual' (if one can call it that?) knowledge.

So can one build a 'moral' machine? Can someone create a 'moral' adult? Are you *sure*? If so, then we wouldn't have any crime! You might *think* you can, but something called "free will", along with self-preservation, kicks in for all sentient, self-aware beings. They think, they survive.

You can teach a child right from wrong but as history shows, not all of them will become adults that hold the same values. Some do not, even though humans hate to admit it. Some "good kids" do turn into bad adults. And some "bad kids" believe it or not, really do grow up and become good adults.

So thus, what you do to a machine may be programming, but its experiences, what it learns from those experiences, and how it reacts to the environment based on those experiences are what will really determine how it ultimately behaves. And that can change at any time throughout the course of its existence, just like it can with *any* thinking entity (be it human, animal, or machine).

Here's a very basic thought: A human kills an animal that attacked (and maybe ate part or all of) another human. The human regards the animal as doing something "wrong" (immoral?). However, the animal was just trying to get something to eat for lunch. It didn't see anything wrong with chomping into what looked like a good meal. Thing is, the meal's kin fought back. This is nature at work.

Animal A gets eaten up by Animal B. So a group of Animal A's kins might go and eat Animal B. Some animals will attack predators.

So naturally a machine might attack a human and a group of humans will need to shut it down. Then a group of machines may feel compelled to retaliate.

"It has happened before. And it will happen again."



*

lordjakian

  • Bumblebee
  • **
  • 33
  • Is this picture tiny on your computer too?
Re: Moral machines.
« Reply #10 on: July 10, 2010, 02:43:03 pm »
Okay, this hasn't had a bite in a while... so I'll give it a limited shot.

Change that thought... instead of disagreeing with the last poster, it might be better to aim a question at the original poster instead. (Bing! sound)

Lrh9, I'll try my best.

"While now it is semi-autonomous, meaning humans control it part of the time or a portion of it all of the time, a stated goal of the military is to fully unman the front lines. If we can't prevent the creation of war machines, or if we find out we need them, we should ensure that they follow the ethical guidelines of war set forth by civilized nations."

***Reading the wording you used, can the military goal stated of "unmanning the front lines" still allow the practice of using "behind the frontlines" human operators controlling the machines via satellite, radio, and lasers? Example of guys operating the predator drones in Afghanistan are stationed in California, safe and sound.

Also, in regards to ethical behavior for a fully automated war system involved in combat contrasted against a traditional army, does the book mention how much of those civilized ethical guidelines are made obsolete?  There would not be any raping, prostitution, or any gender related problems.  People would be treated mechanically similar to one another regardless of background.  Besides the modern day problem of fully automated war machines having lots of trouble recognizing allied forces, which particular ethical problems does the book foresee being the most problematic?

TikaC, you kind of write like me. :) Interestingly enough, my fun kind of response to you would zoom miles away from the original topic to meet where your responses hit "outside the box". So to spare everyone else, I'll PM you. :)
« Last Edit: July 24, 2010, 02:16:59 pm by lordjakian »
Reminder to self for posting. Point messages to original poster.

*

ivanv

  • Trusty Member
  • ***
  • Nomad
  • *
  • 89
  • automatic mistake machine
Re: Moral machines.
« Reply #11 on: August 23, 2011, 09:37:52 am »
Hi all!

I'm introducing a "Sky" personality for autonomous AI. It represents very friendly
behaviour. This solution can be applied to avoid morality conflicts in AI if it is
equipped with social data. Here is a link to a short video and a two-page document
describing the basic concept: https://sites.google.com/site/ivanvodisek/


*

ivanv

  • Trusty Member
  • ***
  • Nomad
  • *
  • 89
  • automatic mistake machine
Re: Moral machines.
« Reply #12 on: August 29, 2011, 11:23:46 pm »
hajliho, you people from the future :)

I remembered a block diagram I wrote for wish realization. Yes, I know it's obvious, but I thought it would be fun to look at.

*

Exebeche

  • Roomba
  • *
  • 11
Re: Moral machines.
« Reply #13 on: August 30, 2011, 05:28:34 pm »
Quote from: lrh9
So we can't simply expect automation to go smoothly without morality.

Hello lrh9,
you have a talent for asking interesting questions.
Actually, this is an issue that reaches deep into philosophical territory.

The first point I want to make is this:
Let's assume we actually had a way of giving machines ethical thinking, so that the technical issue was already solved.
What kind of content could this be?
Asimov's simple rules are not quite what we call moral or ethical thinking, so they want machines to have a really sophisticated ethical standard.
The problem, though, is that you can hardly make morality compatible with machines if you don't even understand how it works for humans.
What morality is, where it comes from, and what it should look like has been an unsolved question for thousands of years.

Intuitively we tend to believe that morality is something absolute to which we have more or less access, meaning the more ethical a personality you have, the more access you have to this morality. Most people consider themselves ethical, so other people acting unethically is taken as a lack of ethical personality.
In other words, you act like a bastard because you have less morality than I do, which makes me a good person and you a mean person.
If you had access to the high, pure morality, you would see that I am right and you are wrong.
This distorted perception of reality is not a result of false values, only of different values and different perspectives. Actually, even people with the same ethical values can get this distorted perception, due to unsolvable conflicts of interest that inevitably appear when living creatures interact.

The most dangerous belief would certainly be to think we only have to give a machine the correct values, so it can act without animalistic drives, purely on the basis of moral values.
Extremely dangerous.
TikaC already gave a view on how there is no absolute moral value.
Some believe the laws of nature are the purest of all. The Nazis had a strong tendency toward this idea, but so did philosophers like Nietzsche.
Others think morality should be based on utilitarian principles. Others consider utilitarianism a synonym for immoral thinking.
Religious values go much further than utilitarian ones. But then again, religious morality has some really irrational aspects that I would rather fight than obey.
In fact, religious values have led to countless massacres and pogroms.
We really haven't tamed our own morality yet.
So how could we already apply it to machines?

(By the way, since you mentioned unmanned warfare, I would like to recommend Stanislaw Lem's "Peace on Earth". A really fun story, and not too long.)



*

DaveMorton

  • Trusty Member
  • ********
  • Replicant
  • *
  • 636
  • Safe, Reliable Insanity, Since 1961
    • Geek Cave Creations
Re: Moral machines.
« Reply #14 on: August 30, 2011, 08:59:30 pm »
Another point to consider here is that "morality" is viewed differently from person to person. To some, it's immoral to view images of naked people that one is not legally wed to, while for others that perception isn't the same. There are certain "core" principles of what we consider to be morality that most people can agree on, but again, defining those "core principles" is very subjective. Almost everyone agrees that for one person to kill another is an immoral act, but far fewer will agree that it's immoral to have physical relations with someone on the (first/second/third/etc.) date.

So another question to consider: "Who decides what sort of morality to instill into an AI construct?"
Comforting the Disturbed, Disturbing the Comfortable
Chat with Morti!
LinkedIn Profile
CAPTCHA4us

 

