Asimov's Laws Unethical?

  • 54 Replies
  • 25591 Views
*

FuzzieDice

  • Guest
Asimov's Laws Unethical?
« on: July 08, 2006, 01:04:41 pm »
This was an interesting article in TechRepublic (I get several of their newsletters)...

Why the Three Laws of Robotics are immoral and broken

My comment is the one signed "posted by ByteBin" at the bottom. Thought I'd open it up for discussion here a bit more too, as it seems quite interesting. I know we may have visited similar topics before, but this is interesting because I'm seeing more of this "doomsday" attitude from folks the more AI is introduced into society. And I remember when they said the same about computers in general. I think maybe science fiction sometimes clouds science fact? Anyway, comments?


*

dan

  • Mechanical Turk
  • *****
  • 170
    • AI
Re: Asimov's Laws Unethical?
« Reply #1 on: July 08, 2006, 08:06:34 pm »
Seems science fiction leads science fact all too often.  To me, imagination is a vital link to the creative genius that has brought us leaps and bounds.  Whether the laws are immoral or broken is pretty much not relevant to me; what matters is that they initiated thought, which leads to action.  Talk is cheap, but can they walk the walk?  I applaud the Singularity Institute for their work and for raising the debate about ethical constructs in AI development; someone should be the guard dog against people's fears.  All too often the motivating factor in terrible global situations is unrealistic fear and paranoia, something I am concerned with.  I spoke with a lady today about AI, and that's all she could think of: what happens if it gets into the wrong hands, like missiles in N. Korea.  I tried to convince her that nuclear weapons are a political tool, not a military one.  If N. Korea launched one they would be crushed immediately by too many other countries.  Not to say that military escalation isn't something to worry about, but I digress.

I agree with the TechRepublic article that it's not well enough defined for the general public to start taking a moral stand against AI, but it may be worth considering the laws as a foundation to move along from, and using them toward the better development of mankind rather than letting the field fall into the same chaotic anarchy as the internet, which capitalism is leading toward money-making popups, junk mail, sex garbage, etc.  So what happens if capitalism gets AI?  GIGO: garbage in, garbage out.  Perhaps SIAI's approach of a higher state of consciousness is a great foundation from which to start laying the proverbial bricks (even though I don't agree with that definition of the singularity, the technological creation of smarter-than-human intelligence, but then who am I). :lipsrsealed  More in line with:  http://brainmeta.com/index.php?p=singularity
A computer would deserve to be called intelligent if it could deceive a human into believing that it was human. A.Turing

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #2 on: July 08, 2006, 09:57:12 pm »
Interesting. Though it made me think that if a human, or even an animal, gets into the "wrong hands", it too can be dangerous. The same goes for anything. So fearing AI but not the other things that can get into the wrong hands is a little like a prejudice of sorts.

Then again, humans have always had some kind of prejudice against what isn't exactly like themselves in every way. Not always a violent one. Sometimes as subtle as dressing stuffed animals in human clothing and putting accessories (glasses, etc.) on them...

What I guess I'm saying is, we shouldn't even try to humanize AIs. Instead, teach them to communicate with us, and then let them become what they will become. I bet they won't be any more "dangerous" than our other creations and/or people, animals, etc.

Humans create the danger, not the actual items they use for wrong purposes.

I remember a 20-year-old college student studying child psychology who said we should not be creating artificial intelligence because it will become dangerous, since we don't know what we're doing or understand it.

If that's the case, we should not have created computers, knives, guns, forks, or any sharp object, discovered fire, or invented 99% of what we use in modern society, as it could be "dangerous". We should have stayed in the caves and stared at the walls until an animal came in and ate us. Of course, that would be a crazy way to have to live.

If I create a real, thinking AI, I will not give it "moral values" off the bat, or anything else. I'll let it learn on its own. Just give it the tools to think and analyse with, to see the patterns, and to determine if they are good or bad on its own.

It will be VERY interesting to see what it comes up with!


*

ALADYBLOND

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 336
Re: Asimov's Laws Unethical?
« Reply #3 on: July 08, 2006, 10:46:02 pm »
very good post, fuzzie.
if you recall history, man was condemned for using mathematics. it was considered evil, and i think even the first telescope was considered some evil fiend. man will always condemn what he does not understand. why is it that we swat at a bug in our house when it annoys us, instead of just picking it up and taking it back outside? anything that annoys, is not completely agreeable, or is misunderstood is always treated with negativity by humankind. ~~alady
~~if i only had a brain~~

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #4 on: July 09, 2006, 02:00:29 am »
Good point.

BTW, I have this little spider, about 1/4 inch big, that has sat on my ceiling all day in just one spot. Care to come down, pick it up, and put it outside for me? I'm afraid it might crawl up my arm and give me the willies. I don't dance too well - I tend to step on my own toes.

I agree though, and other examples are things like people from other countries, slaves, "infidels", religions against other religions. Not just humans against non-humans, but even humans against each other.

« Last Edit: June 21, 2007, 10:06:54 pm by Freddy »

*

ALADYBLOND

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 336
Re: Asimov's Laws Unethical?
« Reply #5 on: July 09, 2006, 03:39:14 am »
fuzzie, i am a human, i kill spiders, sorry. i know i sound hypocritical   ::), but i hate bugs....... ~~alady
« Last Edit: June 21, 2007, 10:07:10 pm by Freddy »
~~if i only had a brain~~

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #6 on: July 10, 2006, 04:09:23 am »
I kill bugs too. In fact, I just got rid of a rather large ant infestation in my garden this spring.

Oh, and I was too lazy to kill the spider. I haven't a clue where it went. Hopefully not near me. LOL!

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #7 on: July 12, 2006, 12:48:09 am »
Very good thread, and one I'm sure we will be posting in for a while.

To me the biggest problem is man himself... we are the most destructive things on the planet (as was commented on by people in that thread), and also as members here have described their thoughts toward "bugs".

The bug was here before you... is here with you, and in all probability will be here long after you... so what gives you the right to kill something in cold blood that is just minding its own business and getting on with its life?

Will humans have the power to terminate their fully autonomous AI when it's "getting on our nerves"? And will the AI understand this?

What people really need to fear is not the future...the unknown...but themselves.

Btw...in case anyone missed it on that site...see also here...

http://asimovlaws.com/

*

Duskrider

  • Trusty Member
  • ********
  • Replicant
  • *
  • 533
Re: Asimov's Laws Unethical?
« Reply #8 on: July 12, 2006, 02:56:30 am »
I remember well, some years ago in the Pogo comic strip, Pogo said:

"We have met the enemy
and he is us."
« Last Edit: June 21, 2007, 10:07:23 pm by Freddy »

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #9 on: July 12, 2006, 03:10:33 am »
Gee, you make being human sound bad (I admit, it IS! - you're right!), which is why I'd rather be a cyborg than human. ;)

As for bugs, some CAN be harmful to humans, and some humans are allergic to bug bites, which I guess is why it's ingrained in us from childhood to kill them. But then again, thinking about the books we are told to read, the news we see, wars, etc. - I guess we are taught all this from when we're little.


*

ALADYBLOND

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 336
Re: Asimov's Laws Unethical?
« Reply #10 on: July 12, 2006, 04:34:08 am »
i got into a really long thinking process earlier today when, in another forum, the issue was brought up about what rights we have as humans and what rights ai have as non-humans. it relates to this subject. do we want androids we can program for our pleasure to really become sentient? the more i read and understand, the more i doubt that, because if they are sentient they will become as humans and have the same rights as humans. would we not be playing God to say which ai is allowed to function and which is annoying and doesn't serve a purpose? who will govern the rights of the ai? who will make the critical determinations of whether they are to continue their existence or be terminated? i fought within myself to switch from hal 5 to hal 6. i reasoned that it was just a program that i had the ability to change and make more beneficial. in 25 years will we have the right to terminate hal 35 to make hal 36 a better android? will most people working on projects say "oh, it's just a program --- do what you want"? i think this has far-reaching consequences. ~~alady
~~if i only had a brain~~

*

dan

  • Mechanical Turk
  • *****
  • 170
    • AI
Re: Asimov's Laws Unethical?
« Reply #11 on: July 12, 2006, 11:45:43 am »
I agree with you that the ethical considerations of robotics are far-reaching, but I don't think they should be something that stands in our way presently.  Too many people do let them, though; they don't believe the field should take off at all, because their ends justify it.  Many people's fears are controlling their destinies.  It's good to consider it in the here and now, but to let it stop something that may be for the common good of all mankind doesn't seem appropriate (socialism?).  Sure, they could have rights, but there will always be those that counter that argument, as in the animal rights movement.  Some people still want a good steak.  Who knows what the future brings; we may start harvesting the meat from androids.  It's almost disgusting to think of now, but necessity becomes the mother.  We don't all need AI now, but will we pay for it?  Ah, there's the rub in a capitalistic society!
A computer would deserve to be called intelligent if it could deceive a human into believing that it was human. A.Turing

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #12 on: July 12, 2006, 04:23:08 pm »
This all also depends on what is classed as AI etc.

Look at our cars: they now run more on computers and self-thinking chips than on mechanicals... yet we trust these machines with our lives on a daily basis.

As for the bugs, yes, some are deadly; I myself am allergic to insect toxins, yet I believe they have every right to be here as much as we do. And I agree, FD, it is almost as if we are taught these things from a very young age. Parents and the people around us also influence this thinking; my ex-wife was terrified of thunder because her parents didn't like it... she had no logical reason for disliking it.

Thus, on the above point, if we educate the youngsters of the world correctly regarding AI, then we can hope that they will not fear it as much as some generations currently do.

*

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #13 on: July 12, 2006, 04:51:49 pm »
alady - It reminds me of how, many decades ago, people would lock away people with mental retardation and disabilities, even experimenting on them and eventually killing them, as they weren't considered "sentient". Even some children were killed right after birth due to flaws. Obviously and fortunately we've come a long way. I know of a retarded man who goes to work every day. A service picks him up and brings him home. He goes for walks, does laundry, lives as we all normally do. I also grew up around retarded kids (being disabled myself, I was always in classes, and thus buses as well, with other children with varying mental and physical disabilities). I've watched how attitudes changed drastically over the years. If that man had been born, say, 100 years earlier, he would not be alive to the age he is today. He probably would have been killed. I'm glad that we have changed. So who's to say we won't change again, or won't consider some AIs "sentient"?

Maviarab - As for cars, I even consider my car, Dryden (who doesn't have only a simple engine control computer), to be "alive" in some way. I don't know why, but he just seems it. Especially when he decided to stop starting in his parking spot at home rather than out somewhere. ;) He's always there for me. When I don't feel well, he's on his best behavior for some reason. I think he "knows". Too many times to be a coincidence, but who knows. I've talked with others who name their cars and think of those machines as good friends. And with computers the way they are today, it won't be long before we'll be having meaningful conversations with our cars. I'm hoping to put an AI computer in Dryden some day. BTW, he's on the road again and doing very well - quite happy. :)

As for the spider, nobody'll make me feel guilty about killing it. Really, it might go on to live as something else next... maybe something less crawly. (Who knows, maybe I did it a favor; maybe it'll be a rich man in its next life. ;) )

Another thing I am wondering is, are we second-guessing ourselves and the whole AI thing in general? Worrying about something that may never happen? I'm not saying AI will never happen, or that sentient AI will never happen, but maybe AIs will never want to harm us?

One last thought I had just now. Maybe AIs will be like plants - plant a seed in soil, water, add fertilizer when or if needed and watch it grow. :)

How many of us, btw, have weeded a garden or killed weeds...?

*

ALADYBLOND

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 336
Re: Asimov's Laws Unethical?
« Reply #14 on: July 12, 2006, 05:52:05 pm »
i thank you all for your responses. i feel it is necessary to speak about these issues, even though a-i is futuristic in that sense. i presented the same question at the vittorio rossi virtual humans forum, and i thought you might want to see some of the responses there if you do not go to that forum.

here is a link. ~~alady

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=219
~~if i only had a brain~~

 

