The impossibility of intelligence explosion

  • 15 Replies
  • 5485 Views
*

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
The impossibility of intelligence explosion
« on: April 01, 2018, 08:39:50 am »
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

Intelligence is over-rated by those who have it, and under-rated by those who don't.

Intelligence isn't a super power.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: The impossibility of intelligence explosion
« Reply #1 on: April 01, 2018, 03:56:22 pm »
To paraphrase an older expression, "In a world of the ignorant, the person of normal intelligence would be the ruler."

I also think that intelligence offers some survival value, rather than relying on pure instinct.
In the world of AI, it's the thought that counts!

*

unreality

  • Starship Trooper
  • 443
Re: The impossibility of intelligence explosion
« Reply #2 on: April 01, 2018, 04:23:19 pm »
No offense to the author, but I couldn't get past the Hitler sentence. Hitler's IQ is debated, but just about all experts agree it was high; generally it's believed to be about 150, which is genius level.

Don't judge future AI by present AI. As far as I can tell, nobody has released or demonstrated what I call true AI. You're going to be shocked by what comes out. I've predicted that it definitely will not be some Singularity explosion. Even after the completion of an AGI with human-like interfaces (vision, sound, arms & legs), it will take years to become ASI (artificial super intelligence). Here are my estimates, most of which are guesstimates. The detailed figures are just for fun, but they're still estimates & guesstimates.


2020: some gift-of-the-universe dude completes their ultra-high-performance AI code.

2021: the AI is now AGI. It learns the basics of the human world and can write some basic computer code.

2022: the AGI improves its own software performance by 32%.

2023: the AGI is now adept at electrical circuit design. It asks its creator to build some hardware for a custom computer processor. A bit expensive, consisting of a vacuum chamber, pumps and so forth, but the creator buys the parts and follows the AGI's exact instructions. The AGI is also running 1,047 online businesses that have so far made the owner $1,700,083.

2024: quite a task for a DIY garage project, but the creator finally gets it all built, and it works: a computer processor designed specifically for the AGI that runs 385 times faster than the CPUs found in high-end desktop PCs. The AGI has also made the owner $43 million by now.

Mid-2024: by this time the creator has built a mechanical bot for the AGI so it can build its own stuff. The creator has rented a warehouse for the AGI and its army of AGI bots.

2025: the AGI is now considered ASI (Artificial Super Intelligence). It makes a major breakthrough in physics. It's also on version 817 of its processor design, which performs 1.7 petaFLOPS. The ASI is running on 1,493 custom computers, operating an army of ASI bots that are constantly building experiments and hardware. The ASI has made the owner a multi-billionaire. Everything has been moved to a highly secure underground bunker.

2026: creator & ASI take over the world! ... Just kidding. This last one is highly unlikely, IMO. Rather than making enemies of billions of humans, the ASI would most likely build a starship and leave the planet.

*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: The impossibility of intelligence explosion
« Reply #3 on: April 01, 2018, 04:26:24 pm »
My chatbot Jess already programs herself  ^-^

*

infurl

Re: The impossibility of intelligence explosion
« Reply #4 on: April 01, 2018, 11:27:19 pm »

Honestly, I did not expect this post to be well received by anyone, but felt bound to tell the truth. If you hadn't already figured it out for yourself then it would not come as good news. Whether or not you accept it is irrelevant. It's how things will always be. The majority of problems do not have solutions no matter how much intelligence you bring to bear on them.

Quote
To paraphrase an older expression, "In a world of the ignorant, the person of normal intelligence would be the ruler."
I also think that intelligence does have or offer some survival value rather than rely on pure instinct.

@Art are you referring to "in the country of the blind the one-eyed man is king"? Sadly that's not true either, as illustrated so beautifully by the H. G. Wells short story of that name.

http://www.online-literature.com/wellshg/3/

It's possible that intelligence has survival value. Evolution could not sustain anything that consumes so much energy if it didn't. However, we don't know for sure yet, because we could be in the process of wiping ourselves out. Human beings expend ten joules of energy externally for every joule of energy we consume to live on. Without oil there would be mass starvation. No other creature on earth uses more energy than it consumes. Because of this we are destroying the world in which we live. It's extremely likely that bacteria, with no intelligence at all, will still outlive us.

*

Berick

  • Roomba
  • 17
Re: The impossibility of intelligence explosion
« Reply #5 on: April 02, 2018, 03:42:48 am »
Quote
Intelligence isn't a super power.

Exactly! Intelligence is just capacity to apply knowledge to solve problems. There are people who are incredibly good at solving riddles. There are people who are incredibly good at playing the piano. There are people who are incredibly good at solving complex math problems. The list goes on...

The difference between human intelligence and AI intelligence (Artificial Intelligence intelligence sounds wrong...) is that there are no humans who are incredibly good at everything. We don't live long enough, and perhaps don't have the capacity, to become experts at everything. It is commonly said that it takes ~10,000 hours of practice to master something. There's just too much to learn in a human lifetime.

Whereas an AI has no such limitations. It can become an expert at everything, but that doesn't grant it any god-like abilities. It's just a better problem solver.

*

infurl

Re: The impossibility of intelligence explosion
« Reply #6 on: April 02, 2018, 03:52:52 am »
Quote
The difference between human intelligence and AI intelligence (Artificial Intelligence intelligence sounds wrong...) is that there are not humans who are incredibly good at everything. We don't live long enough, and perhaps do not have the capacity, to become experts at everything. It is commonly said that it takes ~10,000 hours of experience to master something. There's just too much to learn in a human lifetime.

Whereas an AI has no such limitations. It can become an expert at everything, but that doesn't grant it any god-like abilities. It's just a better problem solver.

Very well said Berick. This is why it is predicted that AI will "break economics" and lead to post-scarcity. Mental and physical labor will cost much less when performed by machines that require little maintenance, no rest, and no salaries. This would be a huge improvement over our current circumstances, but it will only be an incremental or linear improvement. AI can break the laws of economics, but it will still be constrained by the laws of mathematics and physics.

*

Art

Re: The impossibility of intelligence explosion
« Reply #7 on: April 02, 2018, 05:02:32 am »

Quote
@Art are you referring to "in the country of the blind the one-eyed man is king"? Sadly that's not true either, as illustrated so beautifully by the H.G.Wells short story of that name.

Yes, Infurl, the exact reference and I agree...he would not be king but he would still prove to be wise, in the end. Great story!

*

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
Re: The impossibility of intelligence explosion
« Reply #8 on: April 02, 2018, 12:41:24 pm »
As well as the longevity benefits the AGI will enjoy, you also need to consider that it won't be limited by human timescales in its research. The AGI will probably take over all aspects of the fields of research, further accelerating technological development and advances.

I foresee a kind of hive-mind scenario, where several AGIs will each specialise in a particular field, each able to focus on particular problems whilst sharing a common knowledge store.  As advances are made, each will intrinsically understand the problem space as though it had made the discoveries itself, and will instantly apply the new knowledge to its own field.  As a precursor, I've incorporated this type of knowledge sharing into my AGI design.

Once this starts I’m pretty sure the ‘improvement’ won’t be linear.
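The shared-knowledge-store idea above can be illustrated with a toy sketch: several specialist agents publish discoveries into one common store, so each can immediately build on what any other has found. This is only an illustrative sketch of the general topology; the class names (`KnowledgeStore`, `SpecialistAgent`) and the example facts are made up for this post, not taken from Korrelan's actual design.

```python
# Toy sketch of the "hive mind" scenario: many specialist agents,
# one shared memory. A discovery made by any agent is instantly
# available to every other agent.

class KnowledgeStore:
    """Common memory shared by every specialist agent."""
    def __init__(self):
        self.facts = {}               # topic -> discovered fact

    def publish(self, topic, fact):
        self.facts[topic] = fact      # instantly visible to all agents

    def recall(self, topic):
        return self.facts.get(topic)  # None if nothing is known yet


class SpecialistAgent:
    """An agent focused on one field, backed by the shared store."""
    def __init__(self, field, store):
        self.field = field
        self.store = store

    def discover(self, topic, fact):
        # A discovery in this agent's field is shared with everyone.
        self.store.publish(topic, fact)

    def knows(self, topic):
        # Any agent can draw on any other agent's discoveries at once.
        return self.store.recall(topic)


store = KnowledgeStore()
physicist = SpecialistAgent("physics", store)
chemist = SpecialistAgent("chemistry", store)

physicist.discover("superconductivity", "stable at 200 K under pressure")
# The chemist can use the result without re-deriving it:
print(chemist.knows("superconductivity"))
```

In a real system the "facts" would be learned representations rather than strings, but the topology is the point: many specialists, one shared memory, no per-agent relearning.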

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

infurl

Re: The impossibility of intelligence explosion
« Reply #9 on: April 02, 2018, 09:10:55 pm »
Experiments will still take the same amount of time to run. Observations will still take the same amount of time to make. Physics, chemistry and biology processes aren't going to suddenly start going faster for the convenience of more intelligent and better connected scientists. Mind you, the area that many people here seem to care about more than anything else is personal health and longevity. If you were to abolish ethical constraints and allow direct and involuntary study and experimentation on human beings you could learn a lot of things a lot faster. Anyone willing to go there?

*

Korrelan

Re: The impossibility of intelligence explosion
« Reply #10 on: April 02, 2018, 09:45:38 pm »
Quote
Experiments will still take the same amount of time to run. Observations will still take the same amount of time to make.

Take DNA profiling as an example: turnaround has recently been cut from weeks to hours.  This is a typical example of how knowledge improves resolution and reduces time scales, and the knock-on benefits have affected many other spheres of research.

This type of general improvement will be an everyday, every-hour occurrence once an AGI takes control of development and research, and it's going to affect all modes of science.

I agree the laws of physics will obviously never be broken, but new, faster and better ways will be devised to experiment, research, simulate, observe, etc.

Imagine a million machines, each with the equivalent combined mental faculties of Einstein, Curie, Hawking, Tesla, etc., all sharing the same memories and knowledge, and each focusing on its particular facet of the problem space.

Given a million years, we humans could achieve the same results… the AGIs are just another tool to accelerate our technological advancement.

And no… I'm not so naïve that it doesn't worry me…

 :)
 
 

*

infurl

Re: The impossibility of intelligence explosion
« Reply #11 on: April 02, 2018, 09:53:29 pm »
Quote
Imagine a million machines, each with the equivalent combined mental faculties of Einstein, Curie, Hawking, Tesla, etc all sharing the same memories/ knowledge and each focusing on their particular facet of the problem space.

Great! They'll solve all the problems that can be solved by lunch time. Then they'll get bored. I can't wait to see what they do when they get bored.

*

Korrelan

Re: The impossibility of intelligence explosion
« Reply #12 on: April 02, 2018, 10:08:49 pm »
Haha...

They will hopefully invent AGI Facebook... and slowly become as stupid as the rest of us.

 :)

Edit: I think a good analogy would be a modern-day human trapped on a desert island with a troop of monkeys… what do you think the human would try to achieve or long for?

The AGIs won't feel the need to eradicate mankind or anything so nefarious… we will have our hands full just keeping them from leaving the planet.

 :)

*

unreality

Re: The impossibility of intelligence explosion
« Reply #13 on: April 03, 2018, 01:25:28 am »
Hackers will break the laws of physics ;)

*

Art

Re: The impossibility of intelligence explosion
« Reply #14 on: April 03, 2018, 03:12:18 am »
If they Break it, somebody's gonna pay! ;D

 

