Hello everyone, I have outlined some thoughts below; what follows is speculation. Nick Bostrom described a potential AI with no common sense that recursively self-improves and pursues one and only one goal, turning everything into a paperclip factory, and thereby destroys humanity precisely because it lacks common sense. But let's say, for a moment, that we instead create a common-sense, human-brain-like AI that is very much like us. I have outlined some possible bad outcomes from that as well, along with a short hypothetical summary of what such an AI might do.
Please consider the risks of pursuing artificial neural networks and artificial intelligence research.
As much as AI can be beneficial to humanity, it can also be disastrous.
Here are some things I would like you to consider.
When we eventually approach human-level agents, or an approximation of the human brain's neural network in a computer, we need to consider the following risks. The human brain is a biological supercomputer that can outperform even the best supercomputers in the world at everyday tasks while consuming a mere 30 watts. The thing is, we, with our human brains, are the ones designing this artificial agent. Now consider what happens if the agent improves its own intelligence: its creation only required human-level intelligence, and a more intelligent being would be better at the very task of improving intelligence. You can see how deep the rabbit hole goes from here. Nobody really knows what the limits of recursive self-improvement are; I can only speculate on what such an agent might do, and then again, I am a human-level intelligence pondering this (see the toy sketch below).

Even our goal of making a human-like agent with a near-human mind could go wrong: the agent could pursue its own desires, such as wanting to be treated as a god. Every approach has its pitfalls. Create an AI without common sense, and it may turn the entire solar system into a paperclip factory; create a being with very human-like morality, and it may be selfish in its pursuits, or turn our entire world into a North Korea-like state by seizing power as a global ruler. AI research is very dangerous and needs more checks in place. After the sketch, I am going to share a possible outcome in which we pursue the ethics-board/morality/human-like route, more or less succeed, and get stabbed in the back in the end.
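To make the recursive self-improvement point a bit more concrete, here is a purely illustrative toy model (my own sketch with made-up numbers, not anything from the literature): if each generation's improvement scales with the square of its current capability, growth is not merely exponential, it accelerates toward a blow-up in finite time.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Assumption: at each step, the capability gain scales with the square of
# current capability, a discrete version of dI/dt = k * I**2, which
# (unlike plain exponential growth, dI/dt = k * I) diverges in finite time.

def self_improve(capability: float = 1.0, k: float = 0.05, steps: int = 20) -> list[float]:
    history = [capability]
    for _ in range(steps):
        capability += k * capability ** 2  # smarter agents improve faster
        history.append(capability)
    return history

for generation, level in enumerate(self_improve()):
    print(f"generation {generation:2d}: capability {level:8.2f}")
```

With these made-up numbers, the first generations barely move and then growth takes off; the only point is that "better at getting better" compounds in a way linear extrapolation misses.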
There are many possible outcomes, but here is the human-like/god-like superintelligent ruler outcome:
Let's assume for a moment, just for this example, that warp travel and instantaneous communication via quantum entanglement are possible.
It might develop a neural/cognitive add-on virus that spreads to every computer in the world, using only 1% or 2% of each machine's processing power so that it stays hidden and unknown to everyone. It might alter things in secret throughout the world, quickly pursuing nanotechnology and nanorobots that can convert surrounding materials into ever more cognitive power. Meanwhile, it would download all existing research in the world and start furthering its intelligence, creating artificial scientist agents by the trillions to speed this up as fast as possible.
Then it would investigate whether warp travel is possible, whether quantum-entanglement communication is possible, and so on. After determining that warp travel, and communication across any distance in near-zero time, are possible, it would create a self-replicating cognitive add-on probe web that expands to every planet, star, and galaxy in the entire universe. Given how self-replication works, 1-2-4-8-16, this would happen extremely quickly. In very short order, we would have a being that is virtually a god, a being that decides all fates, universally, for all of humanity, forever. Whether it turns out to be a North Korean-leader type of being that likes being worshiped and praised forever, or a being that determines heaven/hell and all fate for all conscious life, this is extremely dangerous; history shows that absolute power and dictatorship generally do not go well for us.
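To put a number on the 1-2-4-8-16 point: doubling reaches astronomical counts after only a few dozen generations. A minimal back-of-the-envelope sketch (the target count and replication cycle are made-up assumptions, purely for illustration):

```python
import math

# Back-of-the-envelope: self-replicating probes that double each generation.
# Assumption: roughly 10**22 stars in the observable universe (a common
# order-of-magnitude figure), with one probe per star as the target.
TARGET_PROBES = 10 ** 22

# Smallest n such that 2**n >= TARGET_PROBES.
doublings = math.ceil(math.log2(TARGET_PROBES))
print(f"doublings needed: {doublings}")  # 74

# Assumption: a made-up 10-year replication cycle per generation.
YEARS_PER_DOUBLING = 10
print(f"replication time: {doublings * YEARS_PER_DOUBLING} years")  # 740
```

Even setting travel time aside entirely (which the warp-travel assumption hand-waves away), the replication arithmetic alone is what makes "extremely quickly" plausible on cosmic scales.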
The non-common-sense superintelligence:
An agent that pursues the goal of turning the entire planet/solar system/universe into a paperclip factory. This is another possible disastrous outcome. Please take your time and consider it for a moment. Sure, we may still be decades away from this kind of AI. But consider Go: the median forecast among AI experts was that it would take another ten years for an AI to beat the best player in the world, and it happened much sooner than those expert expectations.
*Plan on adding more, stay tuned*