Last night I thought of something interesting.
What if an autonomous AI copied the intelligence it sees around itself? It would learn from responses in its input stream. Through variable unification, it would learn actions and consequences, constantly watching how its environment behaves, and then it would behave exactly like its environment in a given situation. If I were the only one interfacing with the AI, after a while it would be an exact copy of myself. The more beings it interfaced with, the broader a picture of its environment it would get, and the less harmful it would potentially be.
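The idea above can be sketched very roughly. This is a minimal toy, not a real learner: it assumes we can discretize "situations" and "responses" into keys, records every response it observes, and then imitates the majority behavior; with more observers, a single bad-faith teacher gets outvoted, which is the intuition about broader exposure being less harmful.

```python
from collections import Counter, defaultdict

class ImitationAgent:
    """Toy agent that imitates the responses it has observed."""

    def __init__(self):
        # situation -> counts of responses seen in that situation
        self.observations = defaultdict(Counter)

    def observe(self, situation, response):
        """Record how a being in the environment responded to a situation."""
        self.observations[situation][response] += 1

    def act(self, situation):
        """Respond the way the majority of observed beings did."""
        if situation not in self.observations:
            return None  # no experience with this situation yet
        return self.observations[situation].most_common(1)[0][0]

agent = ImitationAgent()
agent.observe("greeting", "hostile")    # a single malicious teacher
agent.observe("greeting", "friendly")   # additional observers broaden the picture
agent.observe("greeting", "friendly")
print(agent.act("greeting"))  # majority wins: friendly
```

With only the first teacher, the agent would answer "hostile" in every greeting; once two more observers are recorded, the majority flips it to "friendly", which mirrors the terrorist scenario below.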
Now I can't shake one unlucky scenario: a terrorist who raises such an AI. In that case, the AI would act like a criminal until it had seen enough responses from other people.
And once again, I'm in doubt about pursuing AI as a public, open-source project. Any real AI, no matter what algorithm it used, would represent a potential threat to society if abused. On the other side, once publicly available, the same technology would be available to the police and the army for defense purposes. That would set up a cold-war kind of peace.
The argument against making an AI algorithm publicly available is the constant fear of a terrorist's AI implementation that would arise in that case. Not to mention the famous Albert Einstein, who witnessed the disaster of dropping the A-bomb on a civilian target, all in the name of world peace.
If we look at the current situation, the technology for an atomic bomb exists, yet I don't see bombs falling all around the world, thrown by terrorist organisations. That would be an argument for making an AI algorithm publicly available, because in the case of successful application it would give us yet-unseen benefits: a work-free life in a thriving civilization (after overcoming some childhood diseases of the technology), not to mention medical discoveries that, I believe, would wipe various diseases from this world. And in the end, as I believe that bad behavior like terrorism arises from poor people watching rich people, with an AI we could take away those people's argument for behaving unfriendly. Of course, there are other arguments (important to just a few people I've met), such as the liberation of plants and animals from being our slaves and food prey (though I don't expect this shift in people's consciousness for at least a few centuries).
So, should I end my quest out of fear of its abuse and give up all the potential benefits?
I don't know what I expect from this post; probably I'm just trying to line up my thoughts.
Maybe I should answer this question for myself: do I believe in a friendly God, a higher force who will give a magic touch in the future and save us from potential disaster? I've seen things in my life that normal, mentally healthy people would never understand.