So we can't simply expect automation to go perfectly smoothly without morality.
Hello Irh9, you have a talent for asking interesting questions.
Actually, this is an issue that reaches deep into philosophical territory.
The first point I want to make is this:
Let's assume we actually had a way of giving machines ethical thinking, so that the technical issue was already solved.
What would the content of that ethics be?
Asimov's simple rules are not quite what we call moral or ethical thinking; what people really want is for machines to have a genuinely sophisticated ethical standard.
The problem, though, is that you can hardly make morality compatible with machines if you don't even understand how it works for humans.
What morality is, where it comes from, and what it should look like have been unsolved questions for thousands of years.
Intuitively we tend to believe that morality is something absolute to which we have more or less access: the more ethical your personality, the more access you have to this morality. Most people consider themselves ethical persons, so other people acting unethically is attributed to a lack of ethical personality.
In other words: you act like a bastard because you have less morality than I do, which makes me a good person and you a mean one.
If you had access to that high, pure morality, you would see that I am right and you are wrong.
This distorted perception of reality is not a result of false values, only of different values and different perspectives. In fact, even people who share the same ethical values can fall into this distorted perception, because of the unsolvable conflicts of interest that inevitably arise when living creatures interact.
The most dangerous belief would certainly be to think that we only have to give a machine the correct values, so that it can act purely on moral values, free of any animalistic drives.
Extremely dangerous.
TikaC has already pointed out that there is no absolute moral value.
Some believe the laws of nature are the purest of all. The Nazis had a strong tendency toward this idea, but so did philosophers like Nietzsche.
Others think morality should be based on utilitarian principles. Still others consider utilitarianism a synonym for immoral thinking.
Religious values go much further than utilitarian ones. But then again, religious morality has some truly irrational aspects that I would rather fight than obey.
In fact, religious values have led to countless massacres and pogroms.
We really haven't tamed our morality yet.
So how could we already apply it to machines?
(By the way, since you mentioned unmanned warfare, I would like to recommend Stanislaw Lem's "Peace on Earth". A really fun story, and not too long.)