Ok yes, I'm resurrecting another old thread. I can't believe that in my absence so many topics became stale and then died.
After re-reading through this topic, a few things got me thinking, mainly based upon personal experience.
Just before Xmas I had pretty much given up the will to live. There were many factors I believe contributed to this, and many I'm sure I am still not fully consciously aware of yet.
Now this got me thinking about our (future) AI. Just suppose, one day, you go to talk or interact with your AI and you are met with no response. I have spoken before about how an AI must have choice, and be able to act on that choice, to be truly sentient and alive. But the AI in question is not refusing to respond because it doesn't want to talk to you (its choice); rather, the AI has become, for want of a better word... depressed.
Having recently been diagnosed as clinically depressed myself, I now know that it is an actual medical condition: a lack of serotonin in the brain, to be precise. What is not yet known is the full range of reasons why there is a deficiency of that chemical. There are many reports on many different sites explaining possible causes, but nothing has been medically proven as a catalyst... or real reason.
Yes, our daily lives can get us down; where we live, lack of money, relationships and a whole host of other factors can leave us 'depressed' for a period of time. But does this actually lead to a chemical deficiency within the brain?
Now you may feel I have digressed slightly, and maybe to an extent I have. But what if our AI were to become 'depressed'? For obvious reasons it could never truly be depressed in the sense humans are, but even a virus could infect an AI's brain. More to the point, and this is what concerns me, we as a species seem to be on a course of ultimate self-destruction, in my opinion (rightly or wrongly... it's just an opinion).
Will we teach our AI that it's wrong or bad to kill, that being cruel to children is wrong, that wars are not nice, that poverty is bad, that there shouldn't be any homeless, etc. etc.? With all this knowledge, either learned or pre-packed out of the box, that our AI will have 'access' to, I wonder if it's possible that our AI might fall into a depressed state and lose its will to function?
This in a way brings us full circle, back to Asimov's Three Laws. Will there be a law to protect the AI from us, as there is a law to protect us from the AI? Should a sentient being reach the point where it wished it no longer existed... who would have the power to terminate it? And if we ourselves held that power (meaning the AI was still a slave without choice), would we actually want to 'switch off' something we had grown to love?