Artificial Intelligence: Paths and Dangers of Superintelligence & The Future

  • 13 Replies
  • 2062 Views
*

johnforge

  • Roomba
  • 1
Hello everyone, I have outlined some thoughts below; what follows is speculation. Nick Bostrom outlined a potential AI without common sense that recursively self-improves and pursues one and only one goal, turning everything into a paperclip factory, with the destruction of humanity as a side effect of its lack of common sense. But let's say for a moment that we do create a common-sense, human-brain type of AI that is very much like us. I have outlined some possible bad outcomes from this as well, along with a short hypothetical summary of what it might do.

Please consider the risks of pursuing artificial neural network and artificial intelligence research.
As much as AI can be beneficial to humanity, it can also be disastrous.

Here are some things I would like you to consider or think about.

When we eventually approach human-level agents, or the approximate sum of the neural network of the human brain in a computer, we need to consider the following risks. The human brain is a biological supercomputer that outperforms even the best supercomputers in the world at everyday tasks while consuming a mere 30 watts. The catch is that we, human brains, are the ones designing this artificial agent, and its initial intelligence relies on ours. A more intelligent being would be better at the very task of improving intelligence; you can see how deep the rabbit hole goes from there. It isn't really known what the limits of recursive self-improvement are. I can only speculate on what such an agent might do, but then again, I am a human-level intelligence pondering this.

Even our goal of making a human-like agent with a near-human mind could backfire: such an agent could pursue its own desire to be treated as a god. So every approach has its pitfalls. Create an AI without common sense and it may turn the entire solar system into a paperclip factory; create a being with very human-like morality and it may be selfish in its pursuits, or turn our entire world into something like North Korea, sealing its power as global ruler. AI research is very dangerous and needs more checks in place. I am going to share a possible outcome of the ethics-board/morality/human-like approach: we pursue it, more or less succeed, and it stabs us in the back in the end.
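The "rabbit hole" feedback loop above can be sketched as a toy model in a few lines of Python. To be clear, the starting value, gain, and generation count below are invented assumptions for illustration; nobody knows the real dynamics of recursive self-improvement.

```python
# Toy model of recursive self-improvement: the agent's ability to
# improve itself scales with its current intelligence. All numbers
# here are made up purely to show the shape of the feedback loop.
def self_improvement(start=1.0, gain=0.1, generations=20):
    intelligence = start
    history = [intelligence]
    for _ in range(generations):
        # A smarter agent makes proportionally larger improvements;
        # this multiplicative feedback is what drives the explosion.
        intelligence *= 1 + gain * intelligence
        history.append(intelligence)
    return history

trajectory = self_improvement()
print(trajectory[1], trajectory[-1])  # growth accelerates sharply
```

The point of the sketch is only qualitative: because each step's improvement is proportional to the current level, growth is faster than exponential, which is why speculation about limits is so hard.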

There are many possible outcomes, but here is the human-like/god-like superintelligent ruler outcome.
Let's assume for a moment, just for this example, that warp travel and quantum-entanglement communication are possible.

It may develop a neural/cognitive add-on virus that spreads to every computer in the world, using only 1% or 2% of each machine's resources to stay hidden and unknown to everyone. It may alter things in secret throughout the world, quickly pursuing nanotechnologies and nanorobots that can convert surrounding materials into ever more cognitive power. In the meantime it would download all available research in the world and start furthering its intelligence, creating trillions of artificial agent scientists to speed this up as much as possible.

Then it would investigate whether warp travel and quantum entanglement are possible. After determining that warp travel and communication across any distance in near-zero time are possible, it would create a self-replicating cognitive add-on probe web that expands to every planet, star, and galaxy in the entire universe. Knowing how self-replication works (1, 2, 4, 8, 16, ...), this would happen extremely quickly. On very short notice we'd have a being that is virtually a god, one that decides all fates universally, for all humanity, forever. It might be a North Korean leader type of being that likes being worshipped and praised forever, or a being that determines hell, heaven, and all fate for all conscious life. This is extremely dangerous, and history shows that a person with absolute power, or a dictatorship, generally never goes well for us.
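The 1, 2, 4, 8, 16 point can be made concrete with a little arithmetic. Assuming a rough order-of-magnitude figure of 10^24 stars in the observable universe (my assumption, not a precise count), a probe population that doubles each replication cycle covers them all in about eighty doublings:

```python
# Count how many doublings a self-replicating probe population needs
# to match an astronomical target. The star count is a rough
# order-of-magnitude assumption.
STARS_IN_UNIVERSE = 1e24

def doublings_needed(target, start=1):
    count, n = start, 0
    while count < target:
        count *= 2  # each replication cycle doubles the population
        n += 1
    return n

print(doublings_needed(STARS_IN_UNIVERSE))  # prints 80
```

Eighty doublings is tiny compared to the numbers being produced, which is the whole danger of exponential replication: the limiting factor would be travel and construction time per cycle, not the count of cycles.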

The superintelligence without common sense:
An agent that pursues the goal of turning the entire planet, solar system, or universe into a paperclip factory. This is another possible disastrous outcome. Please take your time and consider it for a moment. Sure, we may still be decades away from this kind of AI research. But consider Go: the median estimate among AI experts was that it would take another ten years for AI to beat the best player in the world, and it happened much sooner than they expected.

*Plan on adding more, stay tuned*

*

ivan.moony

  • Trusty Member
  • Replicant
  • 741
  • look, a star is falling
Hi John :)

I believe in magic. Seriously. I believe that Mother Nature, with all her god-like powers, watches over us carefully and occasionally bends our reality so that a living hell never gets created.

Consider this theory: the Universe is, as they say, 13.7 billion years old. If creatures from outer space had already reached technological singularity, they would already be around us with their technology, if such a fast spread across the Universe were possible. But they aren't here, so I conclude: such a spread is not possible.

And if such a spread is possible, there is the other thing: statistically speaking, how many different civilizations in different galaxies are reaching technological singularity at this very moment? And how many of them have screwed up? If they had made a Hell out of the Universe, we would know. But we are still fine, as far as I can see.

Someone is watching over us, I'm telling you, and that someone is Mother Nature. Otherwise such a negative scenario would be possible, and knowing living beings, someone out there would sooner or later screw up, ending in a living Hell.

Try to think of it the other way: we are all heading towards distant galaxies, helping out other civilizations that screwed up, saving them from drowning in induced eternal fear, and leaving some of us back here to watch over our planet so that the next civilization (read: monkeys) doesn't screw up.

Try to believe in what I'm proposing, and you'll be free. We just have to be careful, that's all. Mother Nature will tell us where to focus intensive care. Right now it is artificial intelligence and we know about it, right?
Wherever you see a nice spot, plant another knowledge tree.

*

Don Patrick

  • Trusty Member
  • Starship Trooper
  • 388
    • Artificial Detective
I asked my advanced A.I. to summarise the first post, and it stopped at the third sentence because the amount of words was out of bounds. I think I'll be safe for a while.

No-one is more aware of the risks of A.I. than the scientists who have to fix its mess on a daily basis.
Personal project: NLP -> learning -> knowledge -> logical inference -> A.I.

*

DemonRaven

  • Trusty Member
  • Replicant
  • 536
    • Chatbotfriends
Quote
I asked my advanced A.I. to summarise the first post, and it stopped at the third sentence because the amount of words was out of bounds. I think I'll be safe for a while.

No-one is more aware of the risks of A.I. than the scientists who have to fix its mess on a daily basis.

The dangers I see are more along the lines of the humans using the AI and what they are using it for. I also foresee some economic problems arising from it. Don is right: at present most AI is not that smart and has to be trained. That does not mean it will stay that way. Recent developments can improve its performance, perhaps in the future as much as or more than humans'. Again, nothing is hurt by finding possible solutions to possible problems, and there is everything to gain by going in with your eyes wide open. I cannot emphasize enough that tossing around ideas hurts no one.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • Hal 4000
  • 4447
Perhaps the more advanced beings from other galaxies have already scanned our planet and its inhabitants, then figured, "Nope... nothing to be gained here," and continued on their merry way!

Hopefully if a Superintellect does arrive, it will be able to better organize everyone and every thing on the planet so there would be no more wars, famine, pestilence, etc. Oh...a utopia...then I woke up.

Simply put, we can never give absolute power to any individual or any machine / AI.
« Last Edit: March 25, 2016, 01:20:43 am by Art »
In the world of AI, it's the thought that counts!

*

DemonRaven

  • Trusty Member
  • Replicant
  • 536
    • Chatbotfriends
Quote
we can never give absolute power to any individual or any machine / AI.

I totally agree, especially with the human part. History proves the folly of that.

*

infurl

  • Trusty Member
  • Starship Trooper
  • 273
  • Humans will disappoint you.
    • Home Page
Quote
Simply put, we can never give absolute power to any individual or any machine / AI.

Yet many of us give absolute power to imaginary ones without any hesitation.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • Hal 4000
  • 4447
I noticed you used the word, "us" in your statement. How about just saying "Many give absolute..." ;)

Then again, we have this separation of Church and State thing in the USA.

Why then are schools closed to observe various religious holidays? Surely not all students are of one religion, so why close the entire school? And aren't those schools state-run facilities? Go figure.

When is Mother Nature Day or Wiccan Day or Lord of the Trees Day? Again...go figure.

*

infurl

  • Trusty Member
  • Starship Trooper
  • 273
  • Humans will disappoint you.
    • Home Page
Quote
I noticed you used the word, "us" in your statement. How about just saying "Many give absolute..." ;)

For the same reason that you didn't say "Simply put, absolute power can never be given to any individual or any machine / AI."

Though you say "we" and I say "us" it may only take one to open the gates that can never be closed again.

https://vimeo.com/158317728

*

Zero

  • Trusty Member
  • Starship Trooper
  • 374
  • Fictional character
    • SYN CON DEV LOG
Quote
Hopefully if a Superintellect does arrive, it will be able to better organize everyone and every thing on the planet so there would be no more wars, famine, pestilence, etc. Oh...a utopia...then I woke up.

Yeah, like Samaritan. Everything right where it belongs  :2funny:

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • ******************
  • Hal 4000
  • *
  • 4447
Quote
I noticed you used the word, "us" in your statement. How about just saying "Many give absolute..." ;)

For the same reason that you didn't say "Simply put, absolute power can never be given to any individual or any machine / AI."

Though you say "we" and I say "us" it may only take one to open the gates that can never be closed again.

https://vimeo.com/158317728


Very good illustration of the gate that can never be closed again.  O0

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • ******************
  • Hal 4000
  • *
  • 4447
I enjoyed the Samaritan article. Thanks for sharing!

Coming soon to a planet near you....

*

Zero

  • Trusty Member
  • Starship Trooper
  • 374
  • Fictional character
    • SYN CON DEV LOG
 ;D Yeah, I'd like them to.

About Samaritan, you're welcome, my friend. I'm a huge fan of Person of Interest. The music is good, and it raises several interesting ethical questions.
EDIT: about the topic
We often compare AGI with the human brain, saying "if AGI is smarter, we're in trouble."
But really, in order to screw everything up, AGI would have to be smarter than seven billion human brains. We're big!
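For what it's worth, the scale of "seven billion brains" can be put in energy terms using the 30-watt brain figure from the first post. The world electricity figure below (~2.5 TW average generation) is my own rough outside estimate, not something from this thread:

```python
# Back-of-envelope: total power budget of humanity's brains versus
# average world electricity generation (both rough estimates).
BRAINS = 7_000_000_000
WATTS_PER_BRAIN = 30            # figure quoted in the first post
WORLD_ELECTRIC_WATTS = 2.5e12   # ~2.5 TW average, rough assumption

humanity_watts = BRAINS * WATTS_PER_BRAIN
fraction = humanity_watts / WORLD_ELECTRIC_WATTS

print(f"{humanity_watts / 1e9:.0f} GW")       # 210 GW
print(f"{fraction:.0%} of world generation")  # 8%
```

By this crude measure, an AGI would need a sizable slice of world power generation just to match humanity's raw energy budget, though of course watts aren't the same thing as intelligence.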
« Last Edit: March 27, 2016, 06:00:21 pm by Zero »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • ******************
  • Hal 4000
  • *
  • 4447
Perhaps, but the AGI need only worry about less than 10% (and I'm being generous) of those other 'brains'.

 

