As autonomous, thinking robots and chatbots become a reality, more and more people will wonder how we should understand and treat them, and how they should treat us - a philosophy of artificial lifeforms will develop.
What is the legal status of autonomous artificial beings? What are their legal rights? Their legal duties? Can an AI be forced to testify in a case against its owner if it has the autonomous will not to? These and other such questions are starting to arise in the legal community. What are your opinions on all this?
Some background:
-------------------------------------------------------------------------
Robert A. Freitas Jr. is a California lawyer and a collector of futuristic legal arcana. He wrote an article called "The Legal Rights of Robots," subtitled "Can the wheels of justice turn for our friends in the mechanical kingdom? Don't laugh."
http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm
In this article he brings up some very sticky legal issues that could arise. Some samples:
"By 2010, most new homes will offer a low-cost domestic robot option [prediction didn't quite happen). This “homebot†will be a remote-controlled peripheral of a computer brain buried somewhere in the house. Homebot software will include: (1) applications programs to make your robot behave as a butler, maid, cook, teacher, sexual companion, or whatever; and (2) acquired data such as family names, vital statistics and preferences, a floor map of the house, food and beverage recipes, past family events, and desired robot personality traits. If a family moves, it would take its software with it to load into the domestic system at the new house. The new homebot’s previous mind would be erased and overwritten with the personality of the family’s old machine.
If homebots became members of households, could they be called as witnesses? In the past, courts have heard testimony from "nonhumans" during witch trials in New England, animal trials in Great Britain, and other cases in which animals, even insects, were defendants. But homebots add a new twist. Since the robot's mind is portable, Homebot Joe might witness a crime but Homebot Robbie might actually testify in court, if Joe's mind has, in the interim, been transferred to Robbie. (Is this hearsay?) Further, a computer memory that can be altered is hardly a reliable witness. Only if homebots have tamperproof "black box recorders," as in commercial jetliners, might such testimony be acceptable to a court."

Some futurists have already begun to devise elaborate codes of ethics for robots. The most famous are science-fiction writer Isaac Asimov's classic Three Laws of Robotics. First: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
-------------------------------------------------------------------------
And there is a lot more interesting stuff in this article to think about.
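One detail worth making concrete is the "portable mind" idea in the quote above: the homebot's personality is just software plus acquired data, so it can be copied from Joe's body into Robbie's, erasing whatever was there before. Here is a rough Python sketch of my own (the names HomebotMind, HomebotBody, and load_mind are purely illustrative, not anything from Freitas's article):

# A toy sketch of the "portable mind" idea: the homebot's software is data,
# so it can be copied from one body to another, overwriting the old personality.

from dataclasses import dataclass, field

@dataclass
class HomebotMind:
    # (1) application programs: butler, maid, cook, teacher, ...
    applications: list = field(default_factory=list)
    # (2) acquired data: family names, floor map, recipes, personality traits
    acquired_data: dict = field(default_factory=dict)

@dataclass
class HomebotBody:
    name: str
    mind: HomebotMind = field(default_factory=HomebotMind)

    def load_mind(self, mind: HomebotMind) -> None:
        """Overwrite this body's previous mind with the family's old one."""
        self.mind = mind  # the previous personality is simply erased

# Homebot Joe "witnesses" an event; the family then moves house.
joe = HomebotBody("Joe")
joe.mind.acquired_data["witnessed"] = "saw the burglary on 3 March"

robbie = HomebotBody("Robbie")   # the new house's homebot
robbie.load_mind(joe.mind)       # Joe's mind now runs in Robbie's body

print(robbie.mind.acquired_data["witnessed"])  # Robbie can now "testify"

Which is exactly what makes the hearsay and tamper-evidence questions in the article so awkward: the witness and the testimony can live in different bodies.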
Artificial life - study of machine biology (biologically inspired mechanical life systems)
The problem is this. First, you say it is a study, but the suffix -ology at the end of an American English word already means "the study of," so "study of machine biology" is redundant.
But fine, never mind that! Let's say this is yours and everyone else's definition of Artificial Life. Then I would say that MY problem is this: I like the idea of a study of biologically inspired mechanical systems, but adding the word "life" to that strikes me as akin to adding garnish to a full-sized meal.
Question. What are you adding to the definition by adding "life" to it?
Artificial intelligence - study of machine psychology. Sorted into artificial consciousness and artificial mental constructs (machine self, machine autonomy, machine self-awareness, etc.)
A two-part problem. The easy one is the idea of an artificial mental construct. Mental activity requires a mind, and whether the mind is artificial or not, I would see an artificial mental construct as a construct that the mind did not itself create. This is a "fun" topic that could cross lines.
Artificial consciousness - study of machine mental faculties of perception, feeling, and thinking. The problem is this: a machine can show faculties of perception without a mind. It can show faculties of thinking without a mind. It can even, to a limited extent given the right hardware, show feeling without a mind.
Question. What distinguishes a "mechanical mind" made of levers, pulleys, and switches from the idea of a "mental mind"?
Hi, I agree with you on the importance of naming things. I'm just one small player in an international field of study/engineering that has taken off and is here to stay. There are thousands working on this, and as I showed in my thread, someone will soon coin "mechanical access consciousness" whether I do or not. And it may as well be me. But I can't change the field. Just look at this long list of online papers on artificial/real consciousness: http://consc.net/online/1.2d
And I do understand the issues you are speaking to. In my demo I sum up my conclusions as follows:
"Watch any 3 year old. Their smooth symphony of sensors, self, consciousness, and actuators, always leaves me with the conclusion that it is oddly elegant to the extreme."
So I agree with the idea that "an artificial flower is not a real flower." Indeed, that is already present in the definition of "artificial." In the same way, artificial life is not real life, an artificial mind is not a real mind, and so on. And yes, an artificial human is not a real human. Machinery is machinery, a "biologically inspired mechanical system" at best, as you say. This is also what I believe, and so I don't believe in robot rights or any of that.
You do present definitions in the above quote that I (and the field in general, I think) can agree with. Maybe putting "study" in the definitions was wrong; really I'm just a coder who studied philosophy, not a formal AI expert, so I could easily be wrong. Actually, dropping "study" is probably better. So, using your suggestions, we get:
artificial life - biologically inspired mechanical systems (sensory-motor systems with semantic reasoners, or chatbots with semantic reasoners)
artificial intelligence - psychology inspired mechanical systems
artificial consciousness - human consciousness inspired mechanical systems
The Three Laws of Robotics (Asimov) could be considered the "values chip" for civilian AI. It would be built in so that if it were tampered with, the AI would shut down. Robots could then have autonomy, self-volition, and a will to self-survival, all tempered by their values chip (a rough sketch of the idea follows the laws and link below).
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
-------------------------------------------------------------------------
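To make the "values chip" idea a bit more concrete, here is a rough Python sketch of my own. I'm assuming a simple scheme in which the laws are stored with a factory checksum, every action is screened against them, and a checksum mismatch (i.e. tampering) forces a shutdown; none of this is from Asimov or the Wikipedia article, just one possible toy implementation:

# A toy "values chip": the Three Laws are stored with a checksum taken at
# manufacture time, every action is screened against them, and any tampering
# with the stored laws shuts the robot down.

import hashlib
import sys

THREE_LAWS = (
    "A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
    "A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.",
    "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.",
)

# Checksum recorded at the factory; a tampered chip will no longer match it.
FACTORY_CHECKSUM = hashlib.sha256("\n".join(THREE_LAWS).encode()).hexdigest()

def values_chip_intact(laws=THREE_LAWS) -> bool:
    return hashlib.sha256("\n".join(laws).encode()).hexdigest() == FACTORY_CHECKSUM

def attempt_action(action: dict) -> bool:
    """Screen an action against the laws; shut down if the chip was tampered with."""
    if not values_chip_intact():
        print("Values chip tampered with - shutting down.")
        sys.exit(1)
    if action.get("harms_human"):                                  # First Law
        return False
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False                                               # Second Law
    return True                 # otherwise self-preservation etc. may proceed

print(attempt_action({"harms_human": False, "disobeys_order": False}))  # True
print(attempt_action({"harms_human": True}))                            # False

The point of the sketch is only that autonomy and a survival drive can coexist with hard, built-in constraints, which is all the "values chip" idea really claims.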