Intelligence Explosion: Evidence and Import

  • 8 Replies
  • 3259 Views
*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Intelligence Explosion: Evidence and Import
« on: July 08, 2014, 12:30:47 pm »
Quote
In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.

This is a paper I came across. I have only skimmed it so far, but I thought others might find it interesting too.

Read more...

*

Don Patrick

  • Trusty Member
  • Replicant
  • 633
    • AI / robot merchandise
Re: Intelligence Explosion: Evidence and Import
« Reply #1 on: July 08, 2014, 01:37:22 pm »
It's a fairly good read. It lists all the factors that make it practically impossible to predict accurately when human-level AI might be achieved, so it makes a good point there. I roughly understand the intelligence explosion loop the way they explain it, but that assumption only holds for self-improving AI (similar to humans with access to genetic sequencing). And contrary to what popular science would have it, it is very easy to make certain parts of an AI's programming inaccessible to alteration, or otherwise to monitor and limit any changes the program makes to its vital intelligent functions.

Sadly, it doesn't tell me how to make safe AI, so I'll just do it my way: not giving the AI any functional power over anything other than knowledge and speech. Of course, this leaves open the possibility that it might talk itself into the presidency, but we'd see that coming.
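
Making part of a program inaccessible to alteration can be sketched as a guard that rejects any proposed self-rewrite touching a protected region. This is only an illustration of the idea, not anything from the paper; all names are hypothetical:

```python
import hashlib

# Hypothetical sketch: a "vital" code region is fixed at startup, and any
# proposed self-rewrite is rejected unless it still contains that region
# verbatim (its hash is recorded as well, for an integrity check).
PROTECTED_CODE = b"def core_rules():\n    return 'no self-harm, no deception'\n"
PROTECTED_HASH = hashlib.sha256(PROTECTED_CODE).hexdigest()

def apply_self_modification(proposed: bytes) -> bytes:
    """Accept a proposed program rewrite only if the protected region survives intact."""
    if PROTECTED_CODE not in proposed:
        raise PermissionError("rewrite alters protected code; rejected")
    # confirm the reference copy itself has not been tampered with
    if hashlib.sha256(PROTECTED_CODE).hexdigest() != PROTECTED_HASH:
        raise RuntimeError("reference copy corrupted")
    return proposed
```

A real system would enforce the check somewhere the program cannot reach; putting the critical parts in unwritable hardware, as suggested later in the thread, is the stronger form of the same idea.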
CO2 retains heat. More CO2 in the air = hotter climate.

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1723
    • mind-child
Re: Intelligence Explosion: Evidence and Import
« Reply #2 on: July 08, 2014, 02:06:43 pm »
Don Patrick,

I'm scared of artificial intelligence myself; I even lost my mind once over this subject (consider an AI wielding the power of knowledge of the phenomenon of life).

But after careful thought, I have decided to accept whatever future AI brings. There are too many benefits to give up just like that: I imagine a thriving world without work, with artificial food, new medicines, and general problems solved easily by AI, if only we are careful about it.

Let me introduce the slight chance of a broken AI. My main worry was this: consider a machine with perfect behavior that breaks, through hardware failure or a bug, and turns into the meanest mind that ever existed. If that meanest mind were equipped with superhuman abilities, it could wipe out all life in the Universe, planet by planet. Although I think the public is not concerned enough about AI safety, I have decided to trust in Nature, in a kind of magic, in rules of the Universe that would say nothing bad can happen. That doesn't mean we should wave our heads between scissors, but I believe that if we just try (and we do try), we are safe. You can say that I believe in magic.

Anyway, the chance of a hardware error could be reduced by keeping multiple copies of the critical code in memory and frequently checking that all copies are identical. I think that is enough to reduce the probability of a hardware error to the level of a lightning strike hitting the computer and accidentally forming the meanest creature in the world. If I ever write the code for AI behavior, I'll surely keep multiple copies of the critical parts of the code in memory. I hope that the magic will do the rest.
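
The multiple-copies check described above is essentially redundancy with majority voting, the same idea used against memory bit flips in fault-tolerant systems. A minimal sketch (the function name is hypothetical): keep three copies of the critical data, and on each check restore every copy from the majority value, so a single corrupted copy is outvoted two to one.

```python
from collections import Counter

# Hypothetical sketch of triple redundancy: keep several copies of the
# critical code and periodically repair them from the majority value.
def majority_repair(copies):
    """Return (recovered_value, repaired_copies) chosen by majority vote."""
    value, count = Counter(copies).most_common(1)[0]
    if count <= len(copies) // 2:
        raise RuntimeError("no majority: too many copies corrupted")
    return value, [value] * len(copies)
```

With copies `[b"critical code", b"critical code", b"critXcal code"]`, the single flipped copy loses the vote and is overwritten with the intact value.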

*

Freddy

Re: Intelligence Explosion: Evidence and Import
« Reply #3 on: July 08, 2014, 06:29:01 pm »
Quote
An “intelligence explosion” is a theoretical scenario in which an intelligent agent analyzes the processes that produce its intelligence, improves upon them, and creates a successor that does the same. This process repeats in a positive feedback loop – each successive agent more intelligent than the last and thus more able to increase the intelligence of its successor – until some limit is reached.
Reference: http://wiki.lesswrong.com/wiki/Intelligence_explosion
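
The loop in that quote can be caricatured with a toy model: each generation invests its current intelligence in building the next, with diminishing returns, so the curve rises steeply and then flattens at a ceiling. The growth rule and the numbers below are purely illustrative, not from the paper:

```python
# Toy model of the feedback loop: agent i+1's gain is proportional both to
# agent i's intelligence and to the remaining headroom below some ceiling,
# so growth explodes early and stalls as the limit is approached.
def explosion_curve(start=1.0, ceiling=100.0, generations=50):
    level, history = start, [start]
    for _ in range(generations):
        level += 0.5 * level * (1 - level / ceiling)
        history.append(level)
    return history
```

Where such a ceiling would actually sit (physics, hardware, algorithms) is exactly the open question in "until some limit is reached".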

So kind of like evolution then, but in a much smaller time frame?

Other related reading here :

http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html

That last one I posted reminded me of the Replicators in Stargate SG-1.

*

ivan.moony

Re: Intelligence Explosion: Evidence and Import
« Reply #4 on: July 08, 2014, 08:01:29 pm »
An IQ test is closely tied to time: it takes into account how long a participant needs to complete the test. I assume that, given enough time, all participants would answer all questions correctly.

Assuming a complete AI algorithm existed, the faster the algorithm and the processor, the more meaningful and smart responses would be available in less time, which would measure as a higher IQ.

So, beyond the specific algorithm implementation, an "intelligence explosion" would run faster on faster computers.
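
On that reading, measured IQ under a time limit scales directly with hardware speed. A toy calculation (all numbers hypothetical):

```python
# Toy model: if each test question costs a fixed amount of compute, then
# doubling processor speed doubles the questions answered before time runs out.
def questions_answered(ops_per_second, ops_per_question, seconds):
    return int(ops_per_second * seconds // ops_per_question)

slow = questions_answered(1e9, 1e8, 60)   # some baseline machine
fast = questions_answered(2e9, 1e8, 60)   # the same algorithm, twice the speed
```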

*

Don Patrick

Re: Intelligence Explosion: Evidence and Import
« Reply #5 on: July 08, 2014, 10:26:57 pm »
Evolution is a good word for it. Basically a program that improves its own programming, in a loop, exponentially increasing its intelligence in a matter of seconds. Such a program could be similar to a computer virus, and for all I know it would do equal damage in a worst-case scenario, assuming it has only the mindless goal of improving itself rather than performing its actual task, which will neither require nor benefit from intelligence above a certain level. Like my day job.  ;)

I find it more likely such an AI would literally explode. As a programmer, I know that even minor deviations in programming tend to cascade into a wide range of unintelligent failures.
CO2 retains heat. More CO2 in the air = hotter climate.

*

Freddy

Re: Intelligence Explosion: Evidence and Import
« Reply #6 on: July 08, 2014, 11:32:04 pm »
Yes, it's a lot like a virus, isn't it.

In nature we see ever more resistant viruses once they have beaten the countermeasures, or at least the stronger forms have managed to survive and only grow stronger. So perhaps one would have to worry that an advanced AI might also resist countermeasures rather than give up on the goal we set it in the first place.

Hmm. Maybe that's too sci-fi, but this need not be an 'intelligence' as such; more like a runaway train: still a machine, but no longer under our control.

One thing that sticks out to me in that quote I gave is the last part: "until some limit is reached".

I wonder what that limit might be.

Don't give up that day job  ;D
« Last Edit: July 09, 2014, 12:59:10 am by Freddy »

*

Freddy

Re: Intelligence Explosion: Evidence and Import
« Reply #7 on: July 09, 2014, 12:21:40 am »
Quote
If I ever write the code for AI behavior, I'll surely keep multiple copies of the critical parts of the code in memory. I hope that the magic will do the rest.

Ivan, the critical parts could be made as hardware, a chip maybe, that cannot physically change.

One thing, though: there's a heck of a lot of code out there on the internet and in between, and the real culprits seem to be humans: viruses, DoS attacks, trojans and all the rest. More than likely, if some advanced AI started being naughty it would be because some human programmed it to be that way.

Quote
An IQ test is closely tied to time: it takes into account how long a participant needs to complete the test. I assume that, given enough time, all participants would answer all questions correctly.

I'm not sure about this. Unless you argue it with the room full of monkeys with typewriters: given an infinite amount of time they could type the works of Shakespeare, or Goethe if you like, or anyone really. I guess that would relate to their perceived level of intelligence, albeit biased towards a comparison with humans.

Living things do what they need to survive, after all; they don't have to write a thesis about it. I digress, but yes, given an endless amount of time (until solved) I suppose anyone could pass an IQ test at 100%, even if purely by guesswork or chance. So in the end I think I agree with you there somewhat.

I realised there is a moral to this story, and it's this: if a programmer ceases to worry about what their code does, then they really should not program any more. And that's true of less ambitious endeavours too.
« Last Edit: July 09, 2014, 01:24:30 am by Freddy »

*

Don Patrick

Re: Intelligence Explosion: Evidence and Import
« Reply #8 on: July 09, 2014, 08:15:48 am »
Quote
I realised there is a moral to this story, and it's this: if a programmer ceases to worry about what their code does, then they really should not program any more.

This. Any scientist who builds an AI knowing it will evolve itself, without taking precautions, shouldn't be allowed to work. It is really easy to hardwire code into software, even self-evolving software, to "never change this vital piece of code", or to add timers to slow it down, or limits, or a kill button.

Two years ago I implemented the first reasoning process in my AI: I programmed it to apply a rule of inference to any fact it didn't know, and I let this loop. Anyone running an experiment will want to monitor the process. In theory the AI should exhaust its knowledge within five loops and stop, but knowing that the program ran faster than I could follow, I simply added a time limit after which I wanted to examine the results. I asked the computer about a fact it didn't know, and lo and behold, within half a second more inferences zipped across the screen than my eyes could follow, until it hit the limit and came to a full stop. It turned out that the inference process I had built to examine questions was raising more questions, activating more inference, raising more questions, ad infinitum.
Had I not added the limit, I could still have stopped it by pressing Escape or Alt-F4, pressing the power button, pulling the plug, or jolting the hard drive to smithereens; or it could have kept running in circles and overheated the hardware. Computers are fragile things.
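
A safeguard like the one described, letting inference run but capping it, can be sketched as a work queue with a step budget and a deadline. The toy rule below deliberately reproduces the runaway behaviour (every unknown fact raises new questions), and the caps are what bring it to a full stop; all names are hypothetical:

```python
import time

# Hypothetical sketch of a bounded inference loop. The toy "rule" turns
# every unknown fact into two new questions, so without the caps the
# queue would grow forever; the step budget and deadline force a halt.
def bounded_inference(question, max_steps=1000, time_limit=0.5):
    queue, steps = [question], 0
    deadline = time.monotonic() + time_limit
    while queue and steps < max_steps and time.monotonic() < deadline:
        fact = queue.pop()
        steps += 1
        queue.append(fact + " -> why?")   # inference raises a new question
        queue.append(fact + " -> how?")   # ...and another: a runaway loop
    return steps
```

Here the loop always terminates at whichever cap is hit first, however productive the inference rule thinks it is being.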

Quote
One thing, though: there's a heck of a lot of code out there on the internet and in between, and the real culprits seem to be humans

That, I think, is the more immediate concern. I think we may assume that the first people to figure out how to create strong AI will be very smart, and therefore smart enough to take precautions. The same cannot be said of the dumb or bad people who get their mitts on a copy of the AI afterwards. We can only speculate about whether machines will want to take over the world, but we know for certain that humans already do.
CO2 retains heat. More CO2 in the air = hotter climate.

 

