Asimov's Laws Unethical?

  • 54 Replies
  • 26889 Views

FuzzieDice

  • Guest
Re: Asimov's Laws Unethical?
« Reply #30 on: July 13, 2006, 11:04:33 pm »
I hate to say this but in wars, humans HAVE been cold in making decisions. And many innocent people (including children) have been killed or severely maimed as a result.

I think whatever an AI can possibly do wrong, humans already have done, and continue to do. And if humans can't control themselves, it doesn't mean AIs can't. Then again, maybe it doesn't mean they can either.

Maybe it's a universal way things just are.

And having laws does NOT guarantee that no wrong can happen. If it did, ours would be a completely crime-free world.


Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #31 on: July 14, 2006, 01:59:25 am »
We might be clever, but we are often a far cry from being logical.

I think FD is right - our killing is just as merciless. I saw a program about the Battle of the Somme the other week, in which officers ordered millions of men into certain death. Genghis Khan was out to rule the world; he didn't care much about people getting in his way, and he wouldn't have stopped either.

Unfeeling, (presumably) cold logic - I see that, and yes, that's scary all right if you're on the wrong team.  Maybe it's the reasoning behind it - Genghis was out to build an empire, but what does the AI do? What would it gain? Who made it? Did we make God?   Come to think of it, what's with the 'we'?   :o
The scale of it...hmm...atomic bombs...



I, Robot - if it saved the man, then is that just bad programming?  I remember that bit, and I think more often someone would save the child - human nature, I guess - and most people, even in peril, would accept that, I think; it's heroic to save someone else's life, isn't it?  Nice ending for my thoughts tonight :)

Keep well everyone, g'nite.
« Last Edit: June 21, 2007, 10:05:57 pm by Freddy »

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #32 on: June 21, 2007, 10:00:57 pm »
Ok, yes, I'm resurrecting another old thread. I can't believe that in my absence so many topics became stale and then died.

After re-reading this topic, a few things got me thinking, mainly based upon personal experience.

Just before Xmas I had pretty much given up the will to live. There were many factors that I believe contributed to this, and many, I'm sure, that I am still not fully consciously aware of yet.

Now this got me thinking about our (future) AI. Just suppose, one day, you go to talk or interact with your AI and are met with no response. I have said before that an AI must have choice, and be able to act on that choice, to be truly sentient and alive. But suppose the AI in question is not responding not because it simply doesn't want to talk to you (its choice), but because the AI has become, for want of a better word... depressed.

Having recently been diagnosed as clinically depressed, I now know that it is an actual medical condition - a lack of serotonin in the brain, to be precise. What is not yet known is exactly why there is a deficiency of that chemical. There are many reports on many different sites explaining possible causes, but nothing has been medically proven as a catalyst... or real reason.

Yes, our daily lives can get us down; where we live, lack of money, relationships and a whole host of other factors can leave us 'depressed' for a period of time. But does this actually lead to a chemical deficiency within the brain?

Now you may feel I have digressed slightly, and maybe to an extent I have. But what if our AI were to become 'depressed'? For obvious reasons it could never truly be depressed in the sense humans are, but even a virus could infect an AI's brain. More so, and this is what concerns me, we as a species seem to be on a course of ultimate self-destruction, in my opinion (rightly or wrongly... it's just an opinion).

Will we teach our AI that it is wrong or bad to kill, that it is wrong to be cruel to children, that wars are not nice, that poverty is bad, that there shouldn't be any homeless, etc.? With all this knowledge, either learned or pre-packed out of the box, that our AI will have 'access' to, I wonder if it's just possible that our AI may fall into a depressed state and lose its will to function?

This in a way brings us full circle back to Asimov's three laws. Will there be a law to protect the AI from us, as there is a law to protect us from the AI? Should a sentient being get to the point where it wishes it no longer existed... who would have the power to terminate it? If we ourselves held that power (meaning the AI was still a slave without choice), would we actually want to 'switch off' something we had grown to love?

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #33 on: June 21, 2007, 10:19:06 pm »
Well, I think this goes back to the question of what we should actually make AIs knowledgeable about.  I don't use the term 'aware of' because I know that would send this thread spiralling out of control.  But you get the idea, I am sure.

I can't see machines getting depressed unless we actually program them to be emotional (in a loose sense of the word).  The thing is, why would we want to do that - experimentation and for the hell of it, probably - and that's no bad thing.  But even so, you wouldn't make a practice of programming emotions into a machine that is supposed to perform a certain task without hindrance.

The other thing is, if this machine is not responding then I have to hark back to what I have said before - it's more likely to be seen as a programming fault.  Program in emotionality and you have to live with the consequences, but why you would want to is another question.
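Just to make that last point concrete - a purely hypothetical toy sketch in Python, nobody's real design: give a task-performing machine a programmed-in 'mood' and its reliability at the task immediately depends on that mood.

Code:
import random

class EmotionalAgent:
    """Toy sketch: a task machine with a programmed-in 'mood'."""

    def __init__(self):
        self.mood = 1.0  # 1.0 = content, 0.0 = thoroughly 'depressed'

    def experience(self, event_quality):
        # Mood drifts with whatever the agent is exposed to
        # (good or bad news, attention or neglect, etc.).
        self.mood = max(0.0, min(1.0, self.mood + event_quality))

    def perform_task(self, task):
        # The consequence of programming in emotionality:
        # the lower the mood, the less reliably the task gets done at all.
        if random.random() > self.mood:
            return f"{task}: no response"
        return f"{task}: done"

agent = EmotionalAgent()
agent.experience(-0.9)                        # feed it nothing but bad news
print(agent.perform_task("tidy the house"))   # quite possibly silence

Whether the resulting silence counts as 'depression' or just a programming fault is then only a matter of how you choose to read it.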



« Last Edit: June 21, 2007, 10:34:14 pm by Freddy »

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #34 on: June 21, 2007, 10:39:35 pm »
Good answers Freddy.

Quote
Program in emotionality and you have to live with the consequences, but why you would want to is another question.

Because without emotion, it will always be just a machine, not truly sentient (in the loose definition of the word). As humans we generally have a real hard time dealing with 'emotionless' things, be it a girlfriend or boyfriend, or a pet that does not like our company.

In the search for true AI, if it is ever to exist, it must have emotion in order to be what we would consider 'more human'.


Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #35 on: June 21, 2007, 10:42:36 pm »
I guess it depends on what kind of AI you want - hard and calculating, unrestricted by human flaws, or a human impersonator.  If you want something with flaws then you have to program those flaws in, and the end result is not unexpected - a machine with flaws, but more human because of them.  It's hard to answer that question, because what is a 'true AI'?  AIs can be many things, so perhaps that idea needs expanding... what's a true AI?
« Last Edit: June 21, 2007, 10:50:31 pm by Freddy »

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Asimov's Laws Unethical?
« Reply #36 on: June 21, 2007, 10:51:07 pm »
Quote
I guess it depends on what kind of AI you want
But when it comes to true AI, maybe we don't have a choice as to what we want? Surely AI will itself evolve towards becoming a uniform kind of being? Humans by nature like things that remind us of ourselves.

Again, if we want an emotionless AI, then will that AI have real choices and rights? Will it just be a glorified maid we call to have the house cleaned, only we don't have to worry about anything being stolen, or even have to pay it for its services? This in itself has far-reaching complications.

If that is the way AI will go, then will we ever again need 'any service' currently offered by a human?

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #37 on: June 21, 2007, 11:01:24 pm »
Yeah, that's the trouble with the question - it's all just speculation.  I guess we could compare it to robotics, which has been similarly developed - you might even say evolved.  But you still don't get one 'true robot'; instead you get many sorts of robots.  So the notion of a 'true AI' is probably a very subjective idea in the first place, limited only by your imagination; whether it has any relevance I am not so sure.

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Asimov's Laws Unethical?
« Reply #38 on: June 22, 2007, 01:23:22 am »
Hmmm...

A True AI? Truly Artificial Intelligence? Sounds like a mix of oxymorons to me.
True = Real
Artificial = Not Real
Intelligence =  the faculty of perceiving and comprehending meaning.

I'm still thinking about how this all fits together...or does it....?
In the world of AI, it's the thought that counts!

Freddy

  • Administrator
  • Colossus
  • 6860
  • Mostly Harmless
Re: Asimov's Laws Unethical?
« Reply #39 on: July 23, 2007, 09:41:58 pm »
I think you are right, Art - they don't fit together.

Dante

  • Mechanical Turk
  • 160
Re: Asimov's Laws Unethical?
« Reply #40 on: December 18, 2007, 11:51:38 am »
Might as well add my two cents (even though this topic is dead).

Would you like to have a brain that's bound by laws? Perhaps a moral code, but surely not 'do this and you die'?

No? Didn't think so :P

Would we treat an A.I. as a tool, or a fellow sentient?

RD

  • Bumblebee
  • 27
Re: Asimov's Laws Unethical?
« Reply #41 on: December 18, 2007, 07:10:11 pm »
I vote for a fellow sentient.
I know someone who is working on the government's quantum computers, and I'll try to ask him whether they have emotions or are just aware.
« Last Edit: December 18, 2007, 07:17:42 pm by RD »
Vivo vivere vixi victum
simul Honorare

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Asimov's Laws Unethical?
« Reply #42 on: December 18, 2007, 08:53:16 pm »
Actually, the use/misuse of the word "Sentient" is somewhat misleading among some of the AI community. It really refers to the senses - sight, smell, touch, hearing, taste - and in its adjective meaning has nothing to do with mental faculties such as self-awareness, knowledge, etc.

The word I prefer to use for both AI and robotics is "Autonomous" - self-directed, able to act on its own without human intervention (provided its actions are within preset parameters).

Although some robots perhaps should have a limited emotional level (in order to possess a decent bedside manner for hospital work, for instance), an emotional brain that is a human peer would probably do more harm than good.

The three governing laws were written a long time ago, and they still make pretty good sense.
In the world of AI, it's the thought that counts!

*

RD

  • Bumblebee
  • **
  • 27
Re: Asimov's Laws Unethical?
« Reply #43 on: December 19, 2007, 02:57:15 am »
I have a tendency to go with this,
  
Merriam-Websters
Main Entry: sentient  
Pronunciation: \ˈsen(t)-sh(ē-)ənt, ˈsen-tē-ənt\
Function: adjective
Etymology: Latin sentient-, sentiens, present participle of sentire to perceive, feel
Date: 1632
1 : responsive to or conscious of sense impressions <sentient beings>
2 : aware
3 : finely sensitive in perception or feeling
 sentiently adverb

Yup, they were written a long time ago, when the word 'robotics' had not even been coined yet.
I think I prefer a newer set, and I do realize that as AI grows, what is good today may not be good in the future.
Just my two cents, since I don't have any to spare ;)
« Last Edit: March 20, 2010, 09:17:20 pm by Freddy »
Vivo vivere vixi victum
simul Honorare

Dante

  • Mechanical Turk
  • 160
Re: Asimov's Laws Unethical?
« Reply #44 on: December 19, 2007, 09:00:23 am »
I feel that most A.I. research takes more of a bottom-up approach than top-down.
Though A.I. can now adapt to a new environment, we are still no closer to an A.I. understanding one thing, then a second thing, then fitting the second into the first.

I define humans as different because they can think, 'How can I expand upon that idea?' or 'Why did I do that?'...

 

