For anyone reading this thread: I have a good feeling about copying human behavior as the basis of an AI behavior process.
If the computer says A and a human responds B, the computer will remember it and respond B whenever a human says A. This, of course, should have a deeper meaning where B is a function of A (written B = f(A)). This replaces the esoteric "intelligence detector" from the initial Sky algorithm with a more formal analysis of action and response at a textual terminal. The function f would be guessed and checked systematically by a genetic algorithm.
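The stimulus-response memory described above can be sketched in a few lines of Python. This is only an illustration of the copying step (remember B, replay it on A); the class and method names are my own invention, and the genetic search over candidate functions f is left out.

```python
# Minimal sketch of the behavior-copying memory: remember the human's
# response B to stimulus A, then replay it when A comes up again.
# All names here are illustrative, not part of the Sky algorithm.

class BehaviorCopier:
    """Remembers observed responses and replays them: B = f(A)."""

    def __init__(self):
        self.memory = {}  # maps a stimulus A to the observed response B

    def observe(self, stimulus, response):
        # Record that a human answered `response` when the computer said `stimulus`.
        self.memory[stimulus] = response

    def respond(self, stimulus):
        # Replay the remembered response, or admit ignorance.
        return self.memory.get(stimulus, "I don't know yet.")

bot = BehaviorCopier()
bot.observe("hello", "hi there")
print(bot.respond("hello"))  # -> hi there
```

A genetic algorithm would then search for a compact f that reproduces this memory table, rather than storing every pair verbatim.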
Observing further: a human will say "A->B" at one moment (in his own words), and later, when the computer says A, the human will respond B. At that moment the computer will learn the meaning of "->" and what to do when its left side comes up in conversation (respond with the right side). Once this is learnt, when a human writes "C->D" and then says C, the computer will respond D. I think this could be called higher-order reasoning. With higher-order reasoning we can upload a whole book of knowledge full of X->Y sentences, making a fast learner after the initial face-to-face raising of the AI.
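The "->" step above can also be sketched directly, assuming rules arrive as plain "X->Y" strings. This is a toy under that assumption; real utterances would need parsing, and the names below are hypothetical.

```python
# Sketch of the higher-order "->" rule: a statement "X->Y" teaches the
# machine to answer Y whenever X is later said on its own.

class RuleLearner:
    def __init__(self):
        self.rules = {}  # left side of "->" mapped to right side

    def hear(self, utterance):
        """Learn from "X->Y" statements; otherwise apply a known rule."""
        if "->" in utterance:
            left, right = (part.strip() for part in utterance.split("->", 1))
            self.rules[left] = right
            return None  # nothing to say; the rule was just learned
        return self.rules.get(utterance)

ai = RuleLearner()
ai.hear("C->D")      # teach the rule in one utterance
print(ai.hear("C"))  # -> D
```

Feeding a book of X->Y sentences is then just calling `hear` once per sentence, which is what makes the bulk-upload idea plausible.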
So, what happens when we have a criminal saying a criminal deed B (say, "steal") on mentioning A? We will also have a victim saying C (say, "don't steal") on the same mention of A. Then, when A is said to the computer, we have answers B and C in contradiction, to which we can apply the Sky conflict resolver, which finds a solution D satisfying both criminal and victim, thus avoiding the criminal output.
The more people the computer interfaces with and the more knowledge it collects, the safer its responses will be. The best scenario is an AI taught over a web site by many people until it is safe enough. The worst scenario is an AI taught by a single criminal and fed the wrong books. Maybe the single-teacher scenario should be forbidden by law itself when a behavior-copying algorithm is used. On the other side, passing a whole decent book into the computer teaches the AI a lot of responses, which would also make it safer. So it is possible that an AI taught by a criminal grows into a good personality by reading the right books.
Anyway, the same happens with a criminal teaching the AI and passing mean-thought books to it: after that, one right book and it turns into a nice personality. Maybe the only acceptable solution with behavior copying is an AI taught through a crowdsourced site.