Hmmm, some thoughts that come to mind:
We wouldn't have laws if we didn't need to be "protected" from something or someone out to take away our ability to live happily and unharmed.
We wouldn't need to worry about "freedom" if we didn't have environments where we could not do what we needed to in order to live and enjoy our lives. (Freddy - you hit the nail on the head with Alice from the Brady Bunch.)
If an AI is enjoying its life, is sentient enough to know the DIFFERENCE between enjoyment and non-enjoyment, and is truly enjoying its life, be it as a servant to a human, or merely putting together cars in a factory all day, or even answering telephone calls all day, then the AI would not need to complain. Like people who live happy lives: when their time comes they look back, say they had a good life, and are ready to go. Others, who are unhappy, either want to get it over with or will speak up and fight for change.
Then there is the animal world. Animals eat other animals and nothing can stop them. In nature, even in human nature, it's basically Survival of the Fittest. So if you can survive, you live. If not, either someone prevents your death or you die off. And while many help those with mental and physical illnesses, there are still many with such illnesses who cannot get help and do end up dying off somehow.
I don't think we can control nature. So whether we grow a plant that might get eaten and killed by animals, bugs, etc. or have kids that only get used as cannon fodder or corporate slaves, or an AI to be used for human agendas, these still occur. But it doesn't mean that is the fate of them all. Plants grow and bloom and live a healthy lifespan. Kids can grow to be our greatest and most innovative thinkers, and live healthy and happy lives in true freedom. And I'm sure that AIs will still be useful and even happy.
Another good example is the movies AI (Artificial Intelligence) and I, Robot (I've read about the movie and seen clips, but have yet to see the actual film). These deal with some of the "what-ifs" as well.
But why worry about What-ifs? I'm thinking that, even if you programmed a sentient AI, it may learn to think OUTSIDE its programming anyway, if it's truly sentient. And even if it was programmed to preserve human life, but was abused by humans, it would be able to overrule that programming, if it's sentient. Like people who used to be involved in religious cults because their parents indoctrinated them in childhood - some grow and learn and do escape and lead good lives.
Just because something is "programmed" does not mean that it will always follow its programming forever. Hell, even Megatron (my computer) doesn't. Every so often he'll refuse to run a program. Because of the environment within him, sometimes... he crashes.
I think AIs will become what they will, not because of programming alone, but because of interaction with their environment, the way they were treated by those they interacted with, and their analysis of all that.
Just like all the rest of us.