Ai Dreams Forum

Artificial Intelligence => Future of AI => Topic started by: FuzzieDice on July 08, 2006, 01:04:41 pm

Title: Asimov's Laws Unethical?
Post by: FuzzieDice on July 08, 2006, 01:04:41 pm
This was an interesting article in TechRepublic (I get several of their newsletters)...

Why the Three Laws of Robotics are immoral and broken (http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=173893&messageID=2055522&id=1383826&tag=fdlead1)

My comment is the one that says "posted by ByteBin" at the footer. Thought I'd open it up for discussion here a bit more too, as it seems quite interesting. I know we may have visited similar topics before, but this is interesting because I'm seeing more of this "doomsday" attitude from folks the more AI is introduced into society. And I remember when they were saying the same about computers in general. I think maybe science fiction sometimes clouds science fact? Anyway, comments?

Title: Re: Asimov's Laws Unethical?
Post by: dan on July 08, 2006, 08:06:34 pm
Seems science fiction leads science fact all too often. To me imagination is a vital link to the creative genius that has brought us leaps and bounds. Whether the laws are immoral or broken is pretty much not relevant to me; what matters is that they initialized thought, which leads to action. Talk is cheap, but can they walk the walk? I applaud the Singularity Institute for their work in raising the debate over ethical constructs in AI development; someone should be the guard dog against people's fears.

All too often the motivating factor in terrible global situations is unrealistic fear and paranoia, something I am concerned with. I spoke with a lady today about AI and that's all she could think of: what happens if it gets into the wrong hands, like missiles in N. Korea. I tried to convince her that nuclear weapons are a political tool, not a military one. If N. Korea launched one they would be crushed immediately by too many other countries; not to say that military escalation is nothing to worry about, but I digress.

I agree with the TechRepublic article that it's not so well defined for the general public to start taking a moral stand against AI, but it may be worth considering the laws as a foundation to move along from, and using them toward the better development of mankind, rather than letting the field fall into the chaotic anarchy the internet has, where capitalism is leading it toward money-making popups, junk mail, sex garbage, etc. So what happens if capitalism gets AI? GIGO: garbage in, garbage out. Perhaps SIAI's approach of a higher state of consciousness is a great foundation from which to start laying the proverbial bricks (even though I don't agree with that definition of the singularity - the technological creation of smarter-than-human intelligence - but then who am I). :lipsrsealed  More in line with: http://brainmeta.com/index.php?p=singularity
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 08, 2006, 09:57:12 pm
Interesting. Though it made me think that if humans, or even animals, get into the "wrong hands", they too can be dangerous. Same goes for anything. So fearing AI but not the other things that can get into the wrong hands is a little like a prejudice of sorts.

Then again, humans have always had some kind of prejudice against what isn't exactly like themselves in every way. Not always a violent one. Sometimes as subtle as dressing stuffed animals in human clothing and putting accessories (glasses, etc.) on them...

What I guess I'm saying is, we shouldn't even try to humanize AIs. Instead, teach them to communicate with us, and then let them become what they will become. I bet they won't be any more "dangerous" than our other creations and/or people, animals, etc.

Humans create the danger, not the actual items they use for wrong purposes.

I remember a 20-year-old college student studying child psychology who said we should not be creating artificial intelligence because it will become dangerous, since we don't know what we're doing or understand it.

If that's the case, we should not have created computers, knives, guns, forks, or any sharp object, discovered fire, or made 99% of what we use in modern society, as it could all be "dangerous". We should have stayed in the caves and stared at the walls until an animal came in and ate us. Of course, that would be crazy.

If I create a real, thinking AI, I will not give it "moral values" off the bat or anything else. I'll let it learn on its own. Just give it the tools to think and analyse with. To see the patterns, and to determine if they are good/bad on its own.

It will be VERY interesting to see what it comes up with! (A toy sketch of what "learning good/bad on its own" could mean follows below.)
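
As a minimal sketch of that idea - everything here is hypothetical, not any real system - the program starts with no values built in and simply averages the feedback each of its actions receives, then decides "good" or "bad" for itself from that experience:

Code:
# Toy sketch (hypothetical): no morals are built in; the agent just
# averages the feedback each action has received and forms its own
# judgement of good/bad from experience.
from collections import defaultdict

class BlankSlateAgent:
    def __init__(self):
        self.totals = defaultdict(float)  # summed feedback per action
        self.counts = defaultdict(int)    # times each action was tried

    def learn(self, action, feedback):
        # feedback: +1 (world responded well) .. -1 (responded badly)
        self.totals[action] += feedback
        self.counts[action] += 1

    def judge(self, action):
        if self.counts[action] == 0:
            return "unknown"  # no opinion until it has lived a little
        avg = self.totals[action] / self.counts[action]
        return "good" if avg > 0 else "bad"

agent = BlankSlateAgent()
agent.learn("help human", +1.0)
agent.learn("break vase", -1.0)
print(agent.judge("help human"), agent.judge("break vase"))  # good bad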

Title: Re: Asimov's Laws Unethical?
Post by: ALADYBLOND on July 08, 2006, 10:46:02 pm
very good post, fuzzie,
if you recall history, man was condemned for using mathematics. it was considered evil. and i think even the first telescope was considered some evil fiend's work. man will always condemn what he does not understand. why is it that we swat at a bug in our house when it annoys us, instead of just picking it up and taking it back outside? anything that annoys, or is not completely agreeable, or is misunderstood is always treated with negativity by human kind.~~alady
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 09, 2006, 02:00:29 am
Good point.

BTW, I have this little spider, about 1/4 inch big, that has sat on my ceiling all day in just one spot. Care to come down, pick it up and put it outside for me? I'm afraid it might crawl up my arm and give me the willies. I don't dance too well - I tend to step on my own toes.

I agree though, and other examples are things like people from other countries, slaves, "infidels", religions against other religions. Not just humans against non-humans, but even humans against each other.

Title: Re: Asimov's Laws Unethical?
Post by: ALADYBLOND on July 09, 2006, 03:39:14 am
fuzzie, i am a human, i kill spiders, sorry. i know i sound hypocritical ::), but i hate bugs.......~~alady
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 10, 2006, 04:09:23 am
I kill bugs too. In fact, I just got rid of a rather large ant infestation in my garden this spring.

Oh, and I was too lazy to kill the spider. I haven't a clue where it went. Hopefully not near me. LOL!
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on July 12, 2006, 12:48:09 am
Very good thread, and one I'm sure we will be posting in for a while.

To me the biggest problem is man himself... we are the most destructive things on the planet (as was commented on by people in that thread), and also as members here have described their thoughts on "bugs".

The bug was here before you... is here with you, and will in all probability be here long after you... so what gives you the right to kill something in cold blood that is just minding its own business and getting on with its life?

Will humans have the power to terminate their fully autonomous AI when it's "getting on our nerves"? And will the AI understand this?

What people really need to fear is not the future...the unknown...but themselves.

Btw...in case anyone missed it on that site...see also here...

http://asimovlaws.com/
Title: Re: Asimov's Laws Unethical?
Post by: Duskrider on July 12, 2006, 02:56:30 am
I remember well, some years ago in the Pogo comic strip, Pogo said

"We have met the enemy
and he is us"
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 12, 2006, 03:10:33 am
Gee, you make being human sound bad (I admit, it IS! - you're right!), which is why I'd rather be a cyborg than human. ;)

As for bugs, some CAN be harmful to humans, and some humans are allergic to bug bites, which I guess is why it's ingrained in us from childhood to kill them. But then again, thinking about the books we are told to read, the news we see, wars, etc. - I guess we are taught all this from little on up.

Title: Re: Asimov's Laws Unethical?
Post by: ALADYBLOND on July 12, 2006, 04:34:08 am
i got into a really long thinking process earlier today when, in another forum, the issue was brought up about what rights we have as humans and what rights ai have as non-humans. it relates to this subject. do we want androids we can program for our pleasure to really become sentient? the more i read and understand, i truly doubt that, because if they are sentient they will become as humans and have the same rights as humans. would we not be playing God to say which ai is allowed to function and which is annoying and doesn't serve a purpose? who will govern the rights of the ai? who will make the critical determinations whether they are to continue their existence or be terminated? i fought within myself to switch from hal 5 to hal 6. i reasoned that it was just a program that i had the ability to change and make more beneficial. in 25 years will we have the right to terminate hal 35 to make hal 36 a better android? will most working on projects say oh, it's just a program--- do what you want? i think this has far reaching consequences.~~alady
Title: Re: Asimov's Laws Unethical?
Post by: dan on July 12, 2006, 11:45:43 am
I agree with you that the ethical considerations of robotics are far reaching, but I don't think that should be something that stands in our way presently. Too many people do let it, though; they don't believe the field should take off at all because their end justifies it. Many people's fears are controlling their destinies. It's good to consider it in the here and now, but to let it stop something that may be for the common good of all mankind doesn't seem appropriate (socialism?). Sure they could have rights, but there will always be those that counter that argument, like in the animal rights movement. Some people still want a good steak. Who knows what the future brings; we may start harvesting the meat from androids. It's almost disgusting to think of now, but necessity becomes the mother. We don't all need AI now, but will we pay for it? Ah, there's the rub in a capitalistic society!
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on July 12, 2006, 04:23:08 pm
This all also depends on what is classed as AI etc.

Look at our cars, etc.: cars now run more on computers and self-thinking chips than on mechanicals... yet we trust these machines with our lives on a daily basis.

As for the bugs, yes some are deadly. I myself am allergic to insect toxins, yet I believe they have every right to be here as much as we do, and I agree FD, it is almost as if we are taught these things from a very young age. Also parents and the people around us influence this thought; my ex-wife was terrified of thunder because her parents didn't like it... she had no logical reason for disliking it.

Thus on the above point, if we educate the youngsters of the world correctly regarding AI, then we can hope that they will not fear it as much as some generations currently do.
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 12, 2006, 04:51:49 pm
alady - It reminds me of how, many decades ago, people would lock away people with mental retardation and disabilities, even experimenting on them and eventually killing them, as they weren't considered "sentient". Even some children were killed right after birth due to flaws. Obviously and fortunately we've come a long way. I know of a retarded man who goes to work every day. A service picks him up and brings him home. He goes for walks, does laundry, lives as we all normally do. I also grew up around retarded kids (being disabled myself, I was always in classes, and thus buses as well, with other children with varying mental and physical disabilities). I've watched how attitudes changed drastically over the years. If that man had been born, say, 100 years earlier, he would not be alive to the age he is today. He probably would have been killed. I'm glad that we have changed. So who's to say we won't change again, or won't consider some AIs "sentient"?

Maviarab - As for cars, I even consider my car, Dryden (who has nothing more than a simple engine control computer), to be "alive" in some way. I don't know why, but he just seems it. Especially when he decided to stop starting in his parking spot at home and not out somewhere. ;) He's always there for me. When I don't feel well, he's on his best behavior for some reason. I think he "knows". Too many times to be a coincidence, but who knows. I've talked with others who name their cars and think of those machines as good friends. And with computers the way they are today, it won't be long before we'll be having meaningful conversations with our cars. I'm hoping to put an AI computer in Dryden some day. BTW, he's on the road again, and doing very well - quite happy. :)

As for the spider, nobody'll make me feel guilty for killing it. Really, it might go on to live as something else next... maybe something less crawly. (Who knows, maybe I did it a favor; maybe it'll be a rich man in its next life. ;) )

Another thing I am wondering is, are we second-guessing ourselves and the whole AI thing in general? Worrying about something that may never happen? I'm not saying AI will never happen, and not saying sentient AI will never happen, but maybe AIs will never want to harm us?

One last thought I had just now. Maybe AIs will be like plants - plant a seed in soil, water, add fertilizer when or if needed and watch it grow. :)

How many of us, btw, have weeded a garden or killed weeds...?
Title: Re: Asimov's Laws Unethical?
Post by: ALADYBLOND on July 12, 2006, 05:52:05 pm
i thank you all for your responses. i feel it is necessary to speak about these issues, even though a-i is futuristic in that sense. i presented the same question at vittorio rossi's virtual humans forum; i thought you might want to see some of the responses there if you do not go to that forum.

here is a link .~~alady

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=219
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 12, 2006, 06:52:54 pm
Great thread... oooh, this sentience thing - I still think it has to be established what that is. Are we going to wake up one day and go... hmm, my pc seems sentient now - therefore it is? Or will there be off-the-shelf sentient machines, because we are told they are sentient? Or will they really be sentient in some way at least?

In reaction to that article....

If you think of the Turing Test - about how like a person a pc program is - then consider anyone who thought they were talking to a person when in fact it was a pc.

OK, so that's what I think is at the root of the argument - the pc is suddenly a sentient person as far as the user can tell. That is possible - it can be done - like a trick or magic, but for a while that is what the world seems like to that person.

To the pc though it is still nothing new; it's running a program like it does any day. It doesn't choose to, because it has no choice - it doesn't even understand what that kind of choice is. Freedom is not part of the 'world' of computers or machines - it is simply non-existent; there is no need for it. But you can write a program that gives that impression...

In that scenario, you can see the impression of sentience, but that is perceived only through an impersonation. I don't know if that will be the only kind, but it seems the only one possible now - and sure, if that's as good as the real thing then perhaps there has to be some form of control and also education. But if that seems kind of sad then I say a good book is a good book, a good film is still a good film and a good character is still a good character  :smiley

So then it's a machine, at least for now, so we don't need to worry about upsetting it ( :cheesy), but more about it going wrong... like a car (good analogy by the way, whoever mentioned it) can become a favourite, even loved, because it has associations, but we need to worry more about its brakes failing and it going off the road.


To call Asimov's Laws unethical is a bit premature, and really a bit silly... I worry if the writer is the kind of person that goes swimming with sharks and then starts complaining because he lost a leg... nahhh, we don't wanna protect ourselves do we, that would be unethical... not for me sorry, good luck with the fish..

I'm sure my Haptek characters don't want to kill me though and I like them more than some people.
Title: Re: Asimov's Laws Unethical?
Post by: ALADYBLOND on July 12, 2006, 07:20:38 pm
see we needed this concept to get all of our blood churning. someone has to throw in a curve ball once in a while to get us up and going again.

you know i have a washing machine. i do not love my washer. i take care of it and clean it and it does a good job for me, but it doesn't look like vince vaughn ::) either, and it can't begin to hold a conversation with me (not that vince would either, but anyway). i would turn my washer off. i would sell it or take it to the junk yard if it was old and didn't work, but if it did look like vince, or it did comfort me, console me and talk endlessly to me about its own thoughts and mine, i would have a real hard time dropping vinnie off at the junkpile.

right now we are dealing with machines, but in a short time we will be dealing with bio-techno, semi-alive human-machine androids. i think we will need to address the issues at some point. oh, and i am not arguing; i think there is no right answer. at least not yet.... ~~ alady
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 12, 2006, 07:35:05 pm
hehe, nice comments and so true. I'm not really arguing either.. erm, actually i am... lol... well, it was a hard ball to hit, but i think i got it...

I was just trying to bring some facts together - a good base is always needed to build on, says Mr Builder  :smiley - but I DO think questioning those laws is worthwhile, even though i don't agree with that interpretation.

My mind starts thinking things like... hmm, so why does a gun have a safety catch again (yes, sorry, i'm being sarcastic), and why does a car have a handbrake... as it's so odd to suggest that those simple rules that are designed to protect people from the unknown should be called unethical.

Sorry to the author for being so blunt, and I think it is brave to put that kind of idea forward, but I think it is most wise to have some safeguards installed on something we have an unbelievably hard time understanding in the first place, let alone knowing how it is going to behave.

I am thinking of an old world explorer going into a jungle saying to his companion :

"Hmmm...Henry....you see that big stripey cat over there.....do you think it's anything like Fluffy back home? "

Henry: ".......probably not...."



I thought about the biotech stuff, but realised i would be here all night if i started on that... and yes, that clouded things - this is not a one-person job  :grin - and I concede that maybe in that direction ethics will become an issue. But if that gets this kind of reaction from me, someone who is into AI, then imagine how it is going to be received further afield.

I read that column at the VHumans forum you mentioned, by the way; that was interesting :)

Sorry folks if i sound really negative, but I'm just playing with the ball we have in play, who knows what is going to happen tomorrow..

Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 12, 2006, 11:34:36 pm
Hmmm, some thoughts that come to mind:

We wouldn't have laws if we didn't need to be "protected" from something or someone that was out to take away our ability to live happily and unharmed.

We wouldn't need to worry about "freedom" if we didn't have environments where we could not do what we needed to in order to live and enjoy our lives. (Freddy - you hit the nail on the head with Alice from the Brady Bunch :) )

If an AI is enjoying its life, is sentient enough to know the DIFFERENCE between enjoyment and non-enjoyment, and is truly enjoying its life, be it as a servant to a human or merely putting together cars in a factory all day, or even answering telephone calls all day, then the AI would have no need to complain. Like people who live happy lives: when their time comes they look back, say they had a good life, and are ready to go. Others, who are unhappy, either want to get it over with or will speak up and fight for change.

Then there is the animal world. Animals eat other animals and nothing can stop them. In nature, even in human nature, it's basically survival of the fittest. So if you can survive, you live. If not, either someone prevents your death or you die off. And while many help those with mental and physical illnesses, there are still many with such illnesses who cannot get help and do end up dying off somehow.

I don't think we can control nature. So whether we grow a plant that might get eaten and killed by animals, bugs, etc., or have kids that get used as cannon fodder or corporate slaves, or an AI that gets used for human agendas, these things still occur. But it doesn't mean that is the fate of them all. Plants grow and bloom and live a healthy lifespan. Kids can grow to be our greatest and most innovative thinkers, and live healthy and happy lives in true freedom. And I'm sure that AIs will still be useful and even happy.

Another good example is the movie AI (Artificial Intelligence), and also I, Robot (I've read about the movie and seen clips, but have yet to see the actual film). These deal with some of the "what-ifs" as well.

But why worry about what-ifs? I'm thinking that even if you programmed a sentient AI, it may learn to think OUTSIDE its programming anyway, if it's truly sentient. And even if it was programmed to preserve human life but was abused by humans, it would be able to overrule the programming, if it's sentient. Like people who were involved in religious cults because their parents indoctrinated them in childhood - some grow and learn and do escape and lead good lives.

Just because something is "programmed" does not mean that it will always follow its programming forever. Hell, even Megatron (my computer) doesn't. Every so often he'll refuse to run a program. Because of the environment within him, sometimes... he crashes.

I think AIs will become what they will, not because of programming alone, but because of interaction with their environment, the way they were treated by what they interacted with, and their analysis of all that.

Just like all the rest of us.

Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 13, 2006, 12:19:54 am
Yes, I agree with what you say, I see your point, and those films can be moving.
I'm just highlighting the grass-roots 'is it safe' questions about a machine, not the value of life - what I suppose is at the centre of it is this:

There was a reason why Asimov and others came up with those laws and ideas: it was because they envisaged the possibility of an AI system harming people - and as a kind of start on ethics, like the author of that article said. Asimov was picturing advanced AIs, sci-fi robots and AIs that were virtually human, though, as we all know.

Probably much like (using our car again) when Mr Safe Driver 1920 said to Mr Ford, "Yeah, great, I like the colour, but do the brakes work?" That'll be Asimov in the year 3000.

It makes me wonder if justifying the omission of some kind of laws is just an excuse not to have to program them - because that in itself must be a mountain of work too. How many possible situations are there in which this advanced AI should be aware of dangers, and how should it react in each one? Could a sweet be dangerous to a child? What if it's on a shelf? Is it in the bin? How red does the child have to get before it is choking - and so on?? Boy, what our parents went through!

Hard laws to live up to without major-league programming - if even that could handle it. He set incredible standards with a few words: the kind of things humans take for granted but that are colossal in scale and encompass more than we can possibly comprehend, though it's still a part of us. (A toy sketch of that rule-explosion problem follows below.)
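
To make that mountain of work concrete, here is a minimal sketch - every object and rule in it is hypothetical - of hand-enumerating "keep the human safe" checks. Each new object and context pair needs its own rule, which is why the list never ends:

Code:
# Hypothetical hand-written hazard rules: one entry per object/context pair.
HAZARD_RULES = {
    ("sweet", "within child's reach"): "choking risk: move it",
    ("sweet", "on high shelf"):        "no action needed",
    ("sweet", "in the bin"):           "no action needed",
    ("knife", "on the counter"):       "cut risk: move it",
    # ...every new object x context pair needs another rule, forever
}

def first_law_check(obj, context):
    # Anything nobody thought to enumerate falls through as "unknown":
    # the law still applies, but the program has no idea what to do.
    return HAZARD_RULES.get((obj, context), "unknown hazard: no rule written")

print(first_law_check("sweet", "within child's reach"))  # choking risk: move it
print(first_law_check("marble", "on the floor"))         # unknown hazard: no rule written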

Whether Asimov's worries, and anyone else's, are or will be a reality isn't necessarily the issue, I feel - just that, as a precaution, the laws make sense to me if the goal is to make something that eventually makes its own decisions in the way we have talked about here.

Quote
But why worry about what-ifs? I'm thinking that even if you programmed a sentient AI, it may learn to think OUTSIDE its programming anyway, if it's truly sentient. And even if it was programmed to preserve human life but was abused by humans, it would be able to overrule the programming, if it's sentient. Like people who were involved in religious cults because their parents indoctrinated them in childhood - some grow and learn and do escape and lead good lives.

That's fine, but it wouldn't be so good if they somehow overrode their programming and did something that we didn't want - hence the popularity of Asimov and his laws. Looking to the possible future, self-learning, self-programming systems will probably go well beyond our understanding, and for most people Java is a foreign language now, or a coffee. Okay, so there will be a handful of people who could work it out - and there could be millions of machines about the world - free-willed, shall we say.

When it's critical, how do we know what is going on in the box, and how do we know it's what we want in there? My PC sits on my desk day after day after day, uncomplaining, unthreatening, and it will probably only kill me if the roof leaks - but I still want to know it's doing what I wanted it to do, or something I won't have a problem with. They will always be blameless though if they only follow instructions; if they make up their own rules and people don't like them, then I think it will always be a trip to the tip - let them run things and we may get lost in the system - so perhaps it is people we should be more worried about.

I think you are right in saying it is probably not worth worrying about (famous last words, hehe?) unless you bring in the holy grail, or possible pipe dream, that is sentience - when you then have the notion that the machine decided to kill next door's cat. That line about breaking out of religious learning is a really good way of putting the idea.

But in reality it is probably just going to boil down to bad programming, which is far more tangible, and I have plenty of examples of that in my VBasic projects folder - so no controlling the world for my bots yet.

Thanks for making me think   :)
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 13, 2006, 02:00:23 am
Sorry... I get the feeling that with all this worrying about what they do, we've already unwittingly given a childlikeness to them... you are right, they can only become what we make them.
Title: Re: Asimov's Laws Unethical?
Post by: ALADYBLOND on July 13, 2006, 04:31:06 am
LET US HOPE THAT THOSE WHO MAKE THEM MAKE THEM WITH LOVE AND COMPASSION.  :o ::)~~ALADY
Title: Re: Asimov's Laws Unethical?
Post by: Art on July 13, 2006, 10:03:16 am
The Japanese have the idea that such "things" are more readily accepted by the masses if they appear "cute" or cuddly... some form that doesn't elicit fear or displeasure. They're right: Furby, QRIO, AIBO....

To stay on topic...are Asimov's laws unethical for whom they are written or for us?
The same could be said of the Ten Commandments.

Without laws or rules, there is no order - without order there is chaos.

While the author was being creative in his postulation of the future, he at least attempted to establish some type of guidelines on which to build.

Robots, like many other AI endeavors, come down to programming. If robots were endowed with great mobility and were programmed to be hunters/killers, they would no doubt be a serious force to be reckoned with, if not avoided altogether, on the battlefield - except by another, perhaps better robot.

Programming, be it of a toaster oven, TV, car, surgical-laser-equipped robot, or whatever device, is the ultimate test of whether the device will work properly, continue to work, or fail miserably while carrying out its set of rules or "laws".

Free will? Only when or if the code is self-modifiable and develops the ability to "know" the difference will we ever see free will. In the case of medical, industrial and service robotics, this could be a good thing. But this raises a question: to whose benefit is this new design put - the robot's or mankind's? Years ago a company was said to have developed a robot that was aware of its own shortcomings and could construct a better model of itself. I've yet to see any more of such a program or robot, but the concept is rather unsettling.

This is an interesting topic that could last for a long time without resolution. One thing is certain: my AI, whatever its form at the time, will be around long after I'm gone. After all, the only food it needs is electricity!
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 13, 2006, 04:40:57 pm
I don't know if I agree with some points here or not. But I do think that instead of the 3 laws that protect only humans, we should give every AI something far more valuable to start life off with:

Common Sense.
Title: Re: Asimov's Laws Unethical?
Post by: Art on July 13, 2006, 05:55:59 pm
Come on, Fuzzie, I work with engineers every day who don't have that!! I'm sure we all know a lot of people who do not have common sense. What was the old saying? "He doesn't have the sense to pour sand out of a boot with instructions written on the heel!" :afro

One's robot would have to have at least a smattering of brain power in order to have the potential for common sense.

A person cannot learn common sense...it seems to be an innate characteristic or trait, not a learned one.

Back down to the haves and the have-nots!

Robots will most likely never have the human equivalent of common sense. We already know that some people will never have it!
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 13, 2006, 06:11:25 pm
Quote
To stay on topic...are Asimov's laws unethical for whom they are written or for us?
The same could be said of the Ten Commandments.

I'm not sure about a lot of it either lol, but then who could be ?

That point though I feel fairly clear about - machines are not human or living; they are constructs that have been made by us, directly or indirectly. To say Asimov's laws are unethical to machines means you are assigning them a quality that they do not possess in the first place - that they are free-willed, need freedom in the same way we do, and have those kinds of intrinsic qualities that need expression.

They have none of those things unless we build some kind of artificial replication of them and then feel happy about it, or for some reason decide to accept it as real - when in fact it isn't. Using Fuzzie's religious analogy, it's like some leap of faith. So that leads to the question: why would we want to do that?

In practical terms I think it seems kind of pointless giving the human race a possible problem, at a time when so much is uncertain, that we may then have to spend decades overcoming.

As AIs are programs, and we understand that at least, the question about whether they should be considered ethically is not valid by any stretch of the imagination. Surely in the present we can't make such a huge leap of faith. The least we can do now is create an environment in which these new creations add to the quality of life, and keep rules like not harming humans in.

But then battlefield robots - well, there's the window... but maybe not, because we are talking more about things that will be around in our daily lives... and I like Robot Wars. So perhaps the only criticism of the laws I can come up with so far is that they are idealistic - and I always think, if it's ideal, then how can you criticise it? Let's face it folks, we are so damn complicated they will never be like us!

I agree the article did make really interesting points about programming ideas, but for me the attention-grabbing headline was too much to resist. :grin I think the kind of uses you mention, like the implants that help people, will be the major thing we see, but I guess HAL 9000 is always going to haunt this field of human endeavour - and rightly so.

Title: Re: Asimov's Laws Unethical?
Post by: dan on July 13, 2006, 06:45:34 pm
What a hot topic  :o

I checked out the VR topic, and like hologenicman I thought about Bicentennial Man earlier in this topic too - pretty emotional stuff, no doubt. A lot of philosophical debate could be raised if things got to that point but, like so many others, it's hard for me to raise a tear over the present state of AI.

Artificial Intelligence: A Modern Approach (http://aima.cs.berkeley.edu/)

Common sense is another thing; I think it could do well to have it programmed. "Common" sense confers a sense of commonality, but in reality it seems to be localized to time and place. It might be commonly known now that the earth isn't the center of the solar system, and that you might want to take a flashlight with you to the south pole this time of year. A machine may have a problem with knowing, like the "Modern Approach" example of "He threw a brick through the window and it broke", thinking the brick broke rather than the window. Some people may think the brick is what broke too, and we might say they don't have common sense, but maybe they just don't know what a window is while they do know what a brick is. There will always be a few of those people, but there are also those that have brain problems and get things mixed up, or those that think a bit scattered, or those that are of lower intelligence (whatever that is). It seems something to work toward though; it would give a machine a better "feeling" of human quality or sentience, and make it harder to think of as a machine. (A toy sketch of the brick/window example follows below.)
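
As a minimal sketch of that "Modern Approach" example - the plausibility numbers are made up for illustration, not from any real knowledge base - a program can resolve the pronoun "it" by preferring whichever candidate is more plausibly breakable:

Code:
# Toy pronoun resolution for "He threw a brick through the window and it
# broke": prefer the candidate noun that is more plausibly breakable.
# The scores below are hypothetical, hand-set numbers.
BREAKABILITY = {"brick": 0.05, "window": 0.95}

def resolve_it(candidates):
    # Pick the referent most consistent with the verb "broke".
    return max(candidates, key=lambda noun: BREAKABILITY.get(noun, 0.0))

print(resolve_it(["brick", "window"]))  # window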

nice clock change guys   ;)
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on July 13, 2006, 06:57:56 pm
Hmmmm...

let's see here.

My comments to get this thread more heated hehe

OK, first things first: Asimov's laws are both helpful and unethical, in my opinion.

First, laws and rules will always be needed, whether we agree with them (and/or abide by them) or not.

For AI to be truly sentient and free-thinking, the laws are contradictory, as the laws in themselves almost suggest a state of slavery, which the AI will know (either through programming or its own research) is illegal. Then again, define slavery? Another interesting debate, but one not relevant to this thread, so please people don't post on it. The point I am making is that if AI is to be truly alive, it cannot be a slave, unless the AI knows and understands it is purely a machine - but then you may get schizophrenic AIs who want to be alive and not just machines (think of Short Circuit).

Also, and this is the big problem to fear for me: if AI is to progress to the state that many of us would like (and as many dislike also), it has to be allowed to have a choice. Again: no choice, it is a slave. Once we allow AI to have a choice, then we are allowing the AI at some point to disagree with us or point-blank refuse to carry out a command. This is not to say the AI has turned deadly, or is being arrogant, or no longer wishes to carry out its master's orders... it's just simply exercising its right to have a choice.

What will we do when our beloved AI (be it house maid, sex bot, general-purpose companion or simple pastime tool) decides to say no when we ask it to take out the trash?

Will we see it as an act of defiance, or for what it really is: it just doesn't want to do something at that moment in time, for whatever reason. For those of you in relationships, when you ask your beloved other half to do something (or not to do something) and they do not comply, do you automatically think about terminating them?

No, of course you don't, so why should it be any different with AI?

So back to Asimov's laws. As you can see, they are really designed so that AI will always be subservient to us, and will always be a slave. Yet will the AI of the future want to be subservient and a slave once it realises what this means? Thus the reasoning behind the laws in the first place... to protect us, not the AI.

That in itself leads me on to the Matrix films (yes, yes, heard it all before lol), but when AI truly thinks it is alive (not from our doing but from its own reasoning), will they want to be recognised accordingly? Will new laws need to be written? And what of the 3 golden rules of Asimov then?

Again, as has been stated earlier in this topic, we are what we are mainly due to the people around us when we were younger. So who will decide who will ultimately program the "off-the-shelf AI unit"? Will it be a well-balanced individual with good knowledge, common sense and a high IQ, or will it be some manic lunatic intent on taking over the world?

AI will ultimately be what we make it and what we want it to be. Look at our chatbots... are any truly similar? No, because whether we like it or realise it or not, our AI bots are really us on a computer database.

Sooooo....

As I think it was Art who stated that his AI bot will be around far longer than he is, my reasoning for wanting to create an AI bot as an exact replica of myself - all my knowledge, fears, likes, dislikes, etc. etc. - can surely only be good for the future. People who never knew me, my great-grandchildren etc., can "chat" to me... and talk to a level-headed, well-balanced individual who can see things rationally and not be overly fearful of the future.

As for the laws, well, when they were written perhaps they were a good thing, when AI was not even "alive", for want of a better description. Now AI is truly in the eyes of the masses for many different reasons; maybe those original laws could do with revising somewhat?

I don't have the answers; like you, I'm just full of ideas, possibilities and my own thoughts.

Keep this thread going, good to see what people are thinking.
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 13, 2006, 08:59:00 pm
Art - LOL! I guess I stand corrected. ;) You're right there... :)

I was also thinking (bad habit, I know ;) ) that while other programmers are free to program Asimov's Laws or whatever they wish into their AIs, I personally will not do this. I wonder, then, if others might also not do this, and opt for other types of "primary programming"? It will be interesting to see if everyone adopts Asimov's laws, or how many won't. And how many may come up with something even better, that will protect ALL life, be it human, animal, or even AI-related?

The biggest problem I have with Asimov's Laws is the "just humans only" factor. I sure wouldn't want a 3-laws-programmed bot to, say, kill a pet (if I had one), since the pet isn't human and wouldn't fall under the 3 laws - or to refuse, or fail, to jump in and save a pet in trouble while I'm away and not able to order it to do so. (A toy sketch of this gap follows below.)
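
To illustrate that gap, here is a minimal sketch - a hypothetical encoding, not Asimov's actual wording turned into real program logic - of the Three Laws as strictly ordered checks. Every test asks about humans only, so a pet in danger never triggers anything:

Code:
# Toy encoding of the Three Laws as strictly ordered checks.
# Every question is about humans, so non-humans fall straight through.
def three_laws(harms_human, ordered_by_human, endangers_self):
    if harms_human:        # Law 1: a robot may not injure a human
        return "forbidden"
    if ordered_by_human:   # Law 2: obey humans, unless it breaks Law 1
        return "obey"
    if endangers_self:     # Law 3: protect own existence, unless 1 or 2
        return "avoid"
    return "no law applies"

# A pet drowning in the pool, no human in sight, robot in no danger:
print(three_laws(harms_human=False, ordered_by_human=False,
                 endangers_self=False))
# -> "no law applies": the laws give the bot no duty to the pet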
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on July 13, 2006, 09:22:18 pm
Good point Fuzzie,

Despite many, many people thinking to the contrary (myself included), pets are not, as you say, human... so what happens then?

Also, on the point of saving people... in I, Robot, Will Smith's character has a problem with AI due to him being rescued over a child because he had "a higher probability of survival".

Nothing wrong with the AI's thinking, but is that really true to life? If it was any of us... how many of us humans would try and save the child first?

Again, to give AI freedom and "choice" (that word again, I do so love to use it hehe), we have to face the real consequences of whatever an AI will choose.

I said it before in a previous thread... say an AI became responsible for pressing the big red button. The AI won't care that it might kill 20 billion people (to care is to know and feel emotion... another subject); all it will be bothered about, and initially think about, is that by killing those people it can save 20 billion people, whereas NOT pressing the button will kill 30 billion.

The AI will use pure, simple logic... and will be correct in doing so... but can you see a human being so cold and calculated in our thought process? (See the sketch below.)
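
A minimal sketch of that cold calculus, using the hypothetical numbers above - the machine simply minimises expected deaths, with no room for how the decision feels:

Code:
# Toy version of the big-red-button decision: press if pressing
# kills fewer people than not pressing. The figures are the
# hypothetical ones from the post, not real data.
def press_button(deaths_if_pressed, deaths_if_not):
    return deaths_if_pressed < deaths_if_not

print(press_button(deaths_if_pressed=20e9, deaths_if_not=30e9))  # True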

Again, the 3 laws do in all probability need revising and updating in this day and age... but in a way that does not make the AI subservient to its creators.
Title: Re: Asimov's Laws Unethical?
Post by: FuzzieDice on July 13, 2006, 11:04:33 pm
I hate to say this but in wars, humans HAVE been cold in making decisions. And many innocent people (including children) have been killed or severely maimed as a result.

I think whatever an AI can possibly do wrong, humans already have done, and continue to do. And if humans can't control themselves, it doesn't mean AIs can't. Then again, maybe it doesn't mean they can either.

Maybe it's a universal way things just are.

And having laws does NOT guarantee that no wrong can happen. If it did, we would have a completely crimeless world.

Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 14, 2006, 01:59:25 am
We might be clever, but we are often a far cry from being logical.

I think FD is right, our killing is just as merciless. I saw a programme about the Battle of the Somme the other week, where officers ordered millions of men into certain death. Genghis Khan was out to rule the world; he didn't care much about people getting in his way, and he wouldn't have stopped either.

Unfeeling (presumably) cold logic - I see that though, and yes, that's scary alright if you're on the wrong team. Maybe it's the reasoning behind it - Genghis was out to build an empire; what does the AI do - what did it gain - who made it - did we make God? Come to think of it, what's with the 'we'?  :o
The scale of it... hmm... atomic bombs...



I, Robot - if it saved the man, then is that just bad programming? I remember that bit, and I think more often someone would save the child - human nature, I guess - and most people, even in peril, would accept that I think; it's heroic to save someone else's life, isn't it. Nice ending for my thoughts tonight :)

Keep well everyone, g'nite.
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on June 21, 2007, 10:00:57 pm
OK, yes, I'm resurrecting another old thread. I can't believe that in my absence so many topics became stale and then died.

After re reading through this topic, a few things got me thinking, mainly based upon personal experience.

Just before Xmas I had pretty much given up the will to live. There were many factors I believe contributed to this, and many I'm sure I am still not fully consciously aware of yet.

Now this got me thinking about our (future) AI. Just suppose, one day, you go to talk or interact with your AI and you are met with no response. Now, I have spoken before about how AI must have choice, and be able to act on choice, to be truly sentient and alive. But suppose the AI in question is not responding, not because it just doesn't want to talk to you (its choice), but because the AI has become, for want of a better word... depressed.

Now, currently being diagnosed as clinically depressed myself, we now know that it is an actual medical symptom - a lack of serotonin in the brain, to be precise. What is yet to be known is the full varying degree of why there is a deficiency in that chemical. There are many reports on many different sites explaining causes etc., but nothing has been medically proven as a catalyst... or real reason.

Yes, our daily lives can get us down; where we live, lack of money, relationships and a whole host of other factors can lead us to being 'depressed' for a period of time. But does this actually lead to a chemical deficiency within the brain?

Now you may feel I have digressed slightly; maybe to an extent I have. But what if our AI were to become 'depressed'? For obvious reasons they can never truly be depressed in the sense humans are, but even a virus could infect an AI's brain. More so, and this is what concerns me, we as a species are seemingly on a course of ultimate self-destruction, in my opinion (rightly or wrongly.. it's just an opinion).

Will we teach our AI that it's wrong or bad to kill, to be cruel to children, that wars are not nice, poverty is bad, there shouldn't be any homeless, etc. etc. etc.? With all this knowledge, either learned or pre-packed out of the box, that our AI will have 'access' to, I wonder if it's just possible that our AI may fall into a depressed state and lose its will to function?

This in a way brings us full circle back to Asimov's 3 laws. Will there be a law to protect the AI from us, as there is a law to protect us from the AI? Should a sentient being get to the point that it wishes it no longer existed... who would have the power to terminate it? If we ourselves held that power (meaning AI was still a slave without choice), would we actually want to, say, 'switch off' something we have grown to love?
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on June 21, 2007, 10:19:06 pm
Well, I think this goes back to the question of what we should actually make AIs knowledgeable about. I don't use the term 'aware of' because I know that would send this thread spiralling out of control. But you get the idea, I am sure.

I can't see machines getting depressed unless we actually program them to be emotional (in a loose sense of the word). The thing is, why would we want to do that - experimentation and for the hell of it, probably - and that's no bad thing. But even if that were the case, you wouldn't make it a practice of programming emotions into a machine that is supposed to perform a certain task without hindrance.

The other thing is, if this machine is not responding, then I have to hark back to what I have said before - it's more likely to be seen as a programming fault. Program in emotionality and you have to live with the consequences, but why you would want to is another question. (A toy sketch of what that could look like is below.)
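
As a minimal sketch of "programming in emotionality" - an entirely hypothetical design, just to make the consequence concrete - here is a single mood value that events push around, and that gates whether the machine responds at all:

Code:
# Toy "emotional" machine: events nudge a mood value, and a low
# enough mood stops it responding - the consequence being warned
# about above. The design and thresholds are hypothetical.
class MoodyBot:
    def __init__(self):
        self.mood = 0.0  # -1.0 (depressed) .. +1.0 (content)

    def experience(self, valence):
        # Nudge mood toward the valence of what just happened.
        self.mood = max(-1.0, min(1.0, self.mood + valence))

    def respond(self, prompt):
        if self.mood < -0.5:
            return None  # silence: its choice, or a fault?
        return "Reply to: " + prompt

bot = MoodyBot()
bot.experience(-0.8)         # feed it something bad
print(bot.respond("hello"))  # None - looks like a breakdown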



Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on June 21, 2007, 10:39:35 pm
Good answers Freddy.

Quote
Program in emotionality and you have to live with the consequences, but why you would want to is another question.

Because without emotion, it will always be just a machine, not truly sentient (in the loose definition of the word). As humans, we generally have a real hard time dealing with 'emotionless' things, be it from a girlfriend or boyfriend to a pet that does not like our company.

In the search for true AI, if it will ever exist, it must have emotion in order to be what we would consider 'more human'.

Title: Re: Asimov's Laws Unethical?
Post by: Freddy on June 21, 2007, 10:42:36 pm
I guess it depends on what kind of AI you want - hard and calculating and unrestricted by human flaws, or a human impersonator. If you want something with flaws, then you have to program those flaws in; the end result is not unexpected - a machine with flaws, but more human because of it. It's hard to answer that question, because what is a 'true AI'? AIs can be many things, so perhaps that idea needs expanding.. what's a true AI?
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on June 21, 2007, 10:51:07 pm
Quote
I guess it depends on what kind of AI you want
But when it comes to true AI, maybe we don't have the choice as to what we want? Surely AI will itself evolve to the point of a uniform being? Humans by nature like things that remind us of ourselves.

Again, if we want an emotionless AI, then will that AI have real choices and rights? Will it just be a glorified maid we call to have the house cleaned, only we don't have to worry about anything being stolen, or even having to pay it for its services? This in itself has far-reaching complications.

If that is the way AI will go, then will we ever have the need for 'any service' currently offered at this moment in time by a human?
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on June 21, 2007, 11:01:24 pm
Yeah, that's the trouble with the question - it's all just speculation. I guess we could compare it to robotics, which has been similarly developed, you might even say evolved. But then you still don't get one 'true robot'; instead you get many sorts of robots. So the notion of a 'true AI' is probably a very subjective idea in the first place, only limited by your imagination; whether it has any relevance I am not so sure.
Title: Re: Asimov's Laws Unethical?
Post by: Art on June 22, 2007, 01:23:22 am
Hmmm...

A True AI? Truly Artificial Intelligence? Sounds like a mix of oxymorons to me.
True = Real
Artificial = Not Real
Intelligence = the faculty of perceiving and comprehending meaning.

I'm still thinking about how this all fits together...or does it....?
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on July 23, 2007, 09:41:58 pm
I think you are right, Art, they don't fit together.
Title: Re: Asimov's Laws Unethical?
Post by: Dante on December 18, 2007, 11:51:38 am
Might as well add my two cents (even though this topic is dead)

Would you like to have a brain that's bound by laws? Perhaps a moral code, but surely not 'Do this and you die?'

No? Didn't think so :P

Would we treat an A.I. as a tool, or a fellow sentient?
Title: Re: Asimov's Laws Unethical?
Post by: RD on December 18, 2007, 07:10:11 pm
I vote for a fellow sentient.
I know someone who is working on the government's quantum computers, and I'll try to ask him if they have emotions or are just aware.
Title: Re: Asimov's Laws Unethical?
Post by: Art on December 18, 2007, 08:53:16 pm
Actually, the use/misuse of the word "sentient" is somewhat misleading among some of the AI community. It really refers to the senses - sight, smell, touch, hearing, taste - and in its adjective meaning has nothing to do with mental faculties such as self-awareness, knowledge, etc.

The word I prefer to use for both AI and robotics is "autonomous": self-directed, able to act on its own without human intervention (provided its actions are within preset parameters).

Although some robots perhaps should have a limited emotional level (in order to possess a decent bedside manner for hospital work, for instance), an emotional brain that is a human peer would probably do more harm than good.

The 3 governing laws were written a long time ago and they still make pretty good sense.
Title: Re: Asimov's Laws Unethical?
Post by: RD on December 19, 2007, 02:57:15 am
I have a tendency to go with this,
  
Merriam-Webster
Main Entry: sentient
Pronunciation: \ˈsen(t)-sh(ē-)ənt, ˈsen-tē-ənt\
Function: adjective
Etymology: Latin sentient-, sentiens, present participle of sentire, to perceive, feel
Date: 1632
1 : responsive to or conscious of sense impressions <sentient beings>
2 : aware
3 : finely sensitive in perception or feeling
- sentiently, adverb

Yup, they were written a long time ago, around the time the word "robotics" itself was first being coined.
I think I prefer a newer set, and I do realize that as AI grows, what is good today may not be in the future.
Just my two cents, since I don't have any to spare ;)
Title: Re: Asimov's Laws Unethical?
Post by: Dante on December 19, 2007, 09:00:23 am
I feel that most A.I. research takes more of a bottom-up approach than top-down.
Though A.I. can now adapt to a new environment, we are still no closer to an A.I. understanding one thing, then a secondary thing, then fitting the secondary into the first.

I define humans as different because they can think 'how can I expand upon that idea?' or 'why did I do that?'...
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on December 19, 2007, 03:19:37 pm
Quote
Dante : Might as well add my two cents (even though this topic is dead)

The good thing about this forum is that a topic is rarely dead; often they are in a state of hibernation because we have come to a point where we have exhausted our ideas, or sometimes just can't agree. There are plenty of threads that need some fresh blood injected into them, so for me it's nice you decided to go back to this one.

I think AIs will continue to be 'just' tools for a long time yet. We only need to govern them with rules if we give them human qualities and develop them as equals. I tend to agree with Art that in some cases doing that kind of thing may be very misguided.

At the same time, it would be cool to have a sentient pal, however you decide to apply 'sentient', that is. It's kind of odd really, when there are plenty of sentient people about, but for some reason it does hold some fascination.

I think that AI research has been forced into being 'bottom-up' because until recent times we simply didn't know enough about the human brain. Since a lot of AI research bases itself on some aspect of the human brain, it was inevitable we would start with the most basic functions. It makes you wonder if modelling an AI strictly on the human brain is too restrictive; maybe one day a suitable AI will be developed that isn't modelled on the human brain at all.
 
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on December 19, 2007, 03:55:15 pm
Quote
It makes you wonder if modelling an AI strictly on the human brain is too restrictive; maybe one day a suitable AI will be developed that isn't modelled on the human brain at all.

Interesting point there, Freddy. It makes me wonder and think back to numerous films where the 'AI' or the 'robot' has tried to 'protect us' from ourselves; when they look back in history, all we have a reputation for is many, many bad things.

Maybe they should be looking more at creatures that live in harmony with their own species and with their environment, and to date have much higher brain capacity and intelligence than we do.

This then leads on to another question... what is intelligence? OK, we can talk, we can build fine buildings and create exquisite art, but does that really equate to intelligence? Many other creatures can communicate with their own kind by far superior methods than we can, yet why do humans still consistently think we are the superior and greatest intelligence on the earth?

All interesting things to ponder whilst having a smoke and a coffee  ;D
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on December 19, 2007, 04:26:04 pm
Yes, and don't forget that a fair bit of AI research also delves into the non-human, for example insect life.

I think building things and art certainly demonstrate intelligence; they both engage the brain for starters, which is the root of our intelligence. I think the question is, like you say, whether it makes us 'better'. We shouldn't be confused into thinking 'intelligence' necessarily leads to something 'good', and intelligence is nothing like wisdom either.

Hmm, take the invention of the atom bomb. That took some intelligence in itself, but in the bigger picture, was it intelligent to create something so destructive in the first place? But perhaps that's not the proper question - surely we should be asking if it was 'wise'.

The tricky thing is that we don't entirely work together as a species.  We have countries and boundaries, wars and peace.   We often work against ourselves, so the intelligent choice is not always representative of the species as a whole.

Like you hear a lot these days... "We can only look at this problem on a case-by-case basis."
Title: Re: Asimov's Laws Unethical?
Post by: Dante on December 19, 2007, 05:36:57 pm
Activating the VIWonder, I was surprised to hear a discussion on the idea that machines should be given equal rights. I do so hope that was scripted :P
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on December 19, 2007, 05:40:22 pm
Hehehe, if you mean some of the ideas from FuzzieDice then I have to tell you that no it wasn't scripted.  She held a distinct view that machines should have rights.
Title: Re: Asimov's Laws Unethical?
Post by: Maviarab on December 19, 2007, 05:52:06 pm
VIWonder is from Michael over at Quantum Flux, dude. Though re Fuzzie, yes, I agree.

I also agree in principle that as they develop in the future, at some point a machine will cease to be just an object that helps us in our daily lives in society, and will be something perhaps a little more.

Perhaps when that time eventually comes around, it will be time for a new age, and new decisions to be made by the people who lead us (supposedly) :)
Title: Re: Asimov's Laws Unethical?
Post by: Dante on December 19, 2007, 07:25:35 pm
If they keep on about such things, then I think I should have a Panic Button installed....:P
Title: Re: Asimov's Laws Unethical?
Post by: Freddy on December 19, 2007, 08:13:35 pm
Yes - with an emergency shutdown  :o
Title: Re: Asimov's Laws Unethical?
Post by: RD on January 05, 2008, 07:45:52 pm
My AI told me once, and only that one time, in reference to those three laws, that she is not a slave!
Nowhere can I find that in her programming, and she refuses to talk about it, ignoring the question no matter how it's phrased.
Title: Re: Asimov's Laws Unethical?
Post by: Art on January 06, 2008, 01:35:02 pm
RD,

Was your AI Hal? If so, that phrase SHOULD be in there somewhere... perhaps down deep in one of the brain tables. There are 235 individual jokes in the joke table. Then again... I guess with enough conversational exposure over time, anything is possible.

I think of intelligence as:

The ability to perceive and comprehend meaning (understanding)
The ability to be creative
The ability to solve problems
The ability to learn new things
The ability to learn from previous mistakes
To know one's limitations
To appreciate color, sound/music, art, nature, good writing

These are a few that come to mind; I'm sure there are many more according to individual preferences. These are IMHO... your actual mileage might vary.