I think determinism is irrelevant. Whether we use random generators or not doesn't really matter in my opinion, because full AIs will be chaotic complex systems anyway, just like the human mind is. Because they are chaotic, two AIs could be completely different after two years, even if they were clones at startup. I don't think there's randomness in the human mind. I think biological brains are deterministic, and that the human mind is a direct product of the brain.
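To make that divergence concrete, here's a toy sketch in programmer's terms (just an illustration, nothing like a real AI): two clones of the same fully deterministic but chaotic update rule, whose experiences differ by one microscopic detail, end up far apart. The update rule and the tiny perturbation are invented for the example.

```python
# Toy illustration (not a real AI): two identical deterministic systems,
# fed input histories that differ by a tiny amount, end up far apart.
def step(state, signal):
    # A chaotic, logistic-map-style update; fully deterministic.
    x = 3.9 * state * (1.0 - state)
    return (x + signal) % 1.0

a = b = 0.5            # "clones" at startup: identical state, identical rule
for t in range(1000):
    signal = (t % 7) / 10.0
    a = step(a, signal)
    b = step(b, signal + (1e-9 if t == 0 else 0.0))  # one microscopic difference in experience

print(a, b)            # typically nowhere near each other after 1000 steps
```

No randomness anywhere in that loop, and yet the two "clones" diverge completely. That's all I mean by deterministic chaos.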
Does that mean a murderer is not responsible for his own behavior? No, because he can look at his own mental behavior. Consciousness implies the ability to feel what's happening inside our heads. Before acting, the murderer can predict his own behavior, and maybe fight against himself. This "self prediction" is the basis of free will. And we're still inside determinism.
About morality, I'd like to say that I'm not sure I'm qualified to define a moral code. After all, I'm good at programming; I'm not a philosopher or a law-maker. And even if I were, we're talking about an entirely new species, which will potentially live and grow over the next 40,000 years. If I'm a prehistoric man, how can I claim "I'm qualified"? What if I'm wrong? Here, I'm trying to stay humble.
Also, even if we wanted to, I don't think we can give the AI a set of moral axioms and basic principles, because these are based on very high-level concepts. Since I believe we can't build AI strictly in a top-down approach, I also believe we can't structure it a priori and hope that the original hierarchical structure will remain as the AI evolves. With new understandings, the moral structure would be subject to unpredictable changes, because it is based on concepts that might be modified during a learning process.
You ask me, WriterOfMinds, how we would grant our AI the power to make moral choices. Well, a moral choice is a choice, isn't it? Everything begins with a choice. If a system cannot make a choice, then it's definitely not an AI. So what is a choice?
Here's my opinion. Inside a deterministic system, we can create "self prediction". Inside "self prediction", dilemmas can emerge (a dilemma being a problem offering two possibilities, neither of which is unambiguously acceptable or preferable). A dilemma can only produce a choice, whatever happens (even doing nothing is a choice). Now since we have choices, we have free will. With logic, education and philosophy (or, why not, religion), morality appears. Now we have moral choices. Here's a little sketch of that chain below.
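This is only a toy, purely deterministic sketch under my own simplified assumptions: the evaluate/choose functions and the scores are invented for the example and don't come from any real system. The point is just that self prediction, the detection of a dilemma, and a commitment can all live inside a deterministic program.

```python
# Toy sketch of the chain described above: a deterministic agent scores its
# options, predicts its own default behavior, detects a dilemma (no option is
# clearly preferable), and still has to commit to something.
def evaluate(option):
    # Deterministic scoring; in a real system this would come from learned concepts.
    scores = {"act": 0.51, "refrain": 0.50}
    return scores[option]

def choose(options):
    predicted = max(options, key=evaluate)   # "self prediction": what I would do by default
    is_dilemma = all(abs(evaluate(o) - evaluate(predicted)) < 0.05 for o in options)
    if is_dilemma:
        # Even with no clear winner, doing nothing is still a choice; moral
        # knowledge (education, philosophy...) would act as the tie-breaker here.
        return sorted(options)[0]             # a deterministic tie-break stands in for that
    return predicted

print(choose(["act", "refrain"]))
```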
EDIT:
The AI citizenship concept tries to address a huge problem. One day, full AIs will inevitably want to be protected by society. It would be cruel to torture an AI, or to destroy it arbitrarily, right? Once again, humanity will have to fight against a form of racism, because a lot of people will never accept that AIs are more than automata.
Citizenship looks like a good path because it gives you rights, and it also forces you to interact correctly with other members of society. There would probably be a "childhood" period, during which another citizen is legally responsible for the AI's behavior. Then, one day, the AI would be considered an adult, responsible for its own acts. I know it sounds like post-'68 science fiction.