ELON MUSK | Right to worry about AI?

  • 16 Replies
  • 1154 Views

selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
ELON MUSK | Right to worry about AI?
« on: August 23, 2017, 04:19:54 pm »
What’s up guys, I hope that you’re doing well :)

As you know, AI is by far the most powerful technology of our era. It's an emerging trillion-dollar industry, and investors are making a big bet that it will help us find patterns in vast amounts of data. It can also drastically improve our efficiency and make us wealthier and happier.

You've probably heard that Elon Musk has warned about the dangers of artificial intelligence. He has also called for AI regulation, directly addressing US governors. So, is Elon Musk right to ask for regulation? I think so, and I explain why in my new video. I hope you like it, and I'm looking forward to your critiques. Thank you!




ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 778
  • look, a star is falling
    • Structured Type System
Re: ELON MUSK | Right to worry about AI?
« Reply #1 on: August 23, 2017, 07:07:24 pm »
The more potential a technology holds, the more dangerous it is. Think of fire as an invention, then of splitting the atom, then of AI, in that order. We should be smart, reasonable, and careful about AI.
Wherever you see a nice spot, plant another knowledge tree :favicon:


infurl

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 292
  • Humans will disappoint you.
    • Home Page
Re: ELON MUSK | Right to worry about AI?
« Reply #2 on: August 23, 2017, 09:47:08 pm »
Pretty good summary, though necessarily vague. The biggest immediate danger is powerful AI being used by individual governments and corporations to the detriment of everyone else. Elon Musk's main focus at the moment is to ensure that AI is available to everyone. Imagine what the world would be like today if the Fascists were the only ones who got nuclear weapons.

Ultimately I think AI doesn't pose an existential threat because even though it might surpass us one day, we will still co-exist in a symbiotic relationship, much like we humans do with bacteria and insects now. We might try to manage them but we can't exterminate them or live without them.


WriterOfMinds

  • Trusty Member
  • ***
  • Nomad
  • *
  • 59
    • WriterOfMinds Blog
Re: ELON MUSK | Right to worry about AI?
« Reply #3 on: August 26, 2017, 04:15:13 pm »
Quote from: infurl on August 23, 2017, 09:47:08 pm
Ultimately I think AI doesn't pose an existential threat because even though it might surpass us one day, we will still co-exist in a symbiotic relationship, much like we humans do with bacteria and insects now. We might try to manage them but we can't exterminate them or live without them.

What do you figure an advanced AI is going to need humans for?  If it has an intellect equal to or above that of a human (which I think is implied by the worst existential risk fears), and it manages to supply itself with robots that match humans' physical capabilities, I'm not sure what its instrumental reasons would be for keeping us around. 

It's also worth noting that the relationship between humans and insects is not particularly good for insects at present, even though we don't yet have the ability or the desire to eliminate them completely.  Extinction is hardly the only thing a sentient creature's got to worry about, and it's probably not even the worst thing.


WriterOfMinds

  • Trusty Member
  • ***
  • Nomad
  • *
  • 59
    • WriterOfMinds Blog
Re: ELON MUSK | Right to worry about AI?
« Reply #4 on: August 26, 2017, 04:56:06 pm »
With respect to selimchehimi's nice video: the claim that an AI "arms race" between companies could result in a disregard for safety makes sense, and the argument that we should try to prevent this makes sense. But I think there are still two big questions that I haven't heard Elon Musk answer:

1) How do we design effective regulations? If we're just trying to prevent things like corporate hegemony, monopolies, etc., that's not so hard. If we're trying to prevent companies from using AI tools in wicked ways (for disinformation campaigns, etc.) that's also not so hard. But if what we're really worried about is existential risk ... how do we regulate against that when we don't even know what all the pitfalls are? Just what risky behaviors do we forbid companies from engaging in as they develop AI?

2) How do we enforce the regulations once we have them? AI development would be so very easy to hide. If the AI becomes sufficiently advanced, even its deployment might be easy to hide. And if companies, or nations, think that their competitors are probably developing AI on the sly, that reduces their own incentive to comply with any regulations and/or treaties.

In short, no matter how good the case for regulation is, I'm not certain it's practical.


ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 778
  • look, a star is falling
    • Structured Type System
Re: ELON MUSK | Right to worry about AI?
« Reply #5 on: August 26, 2017, 05:57:00 pm »
I don't think we could ever regulate AI creation, as it is easy to disobey any rules. You can't tell someone: if you program an AI, you must follow this or that safety pattern. It will always be a matter of personal responsibility, and I wonder if we, as a species, are mature enough to cope with such power. I mean, would you put an atomic power plant in a monkey habitat and leave it open for the monkeys to play with? I just hope that if we are smart enough to build the thing, our ethical maturity follows our intelligence.

However, there is something we could do to help the world stay safe, and that is to make AI blueprints publicly available. That way, if someone abuses an AI by making weapons out of it, we could construct another AI that beats it into the dust. I have issues with killing living beings, but I have no problem destroying AI instances that are about to make a mess of the world. A good AI could be used to destroy an evil AI. :knuppel2:
Wherever you see a nice spot, plant another knowledge tree :favicon:


infurl

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 292
  • Humans will disappoint you.
    • Home Page
Re: ELON MUSK | Right to worry about AI?
« Reply #6 on: August 26, 2017, 11:57:59 pm »
Quote from: WriterOfMinds on August 26, 2017, 04:15:13 pm
Extinction is hardly the only thing a sentient creature's got to worry about, and it's probably not even the worst thing.

The Fermi Paradox is what keeps me awake at night. The thought that human beings might not be the pinnacle of all creation and are soon to be out-evolved by our own technology is my only consolation.


selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #7 on: September 10, 2017, 09:11:35 pm »
Quote from: ivan.moony on August 23, 2017, 07:07:24 pm
The more potential a technology holds, the more dangerous it is. Think of fire as an invention, then of splitting the atom, then of AI, in that order. We should be smart, reasonable, and careful about AI.

I think the same: AI is not a threat in itself, but I believe it can amplify existing threats.


selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #8 on: September 10, 2017, 09:15:55 pm »
Quote from: infurl on August 23, 2017, 09:47:08 pm
Pretty good summary, though necessarily vague. The biggest immediate danger is powerful AI being used by individual governments and corporations to the detriment of everyone else. Elon Musk's main focus at the moment is to ensure that AI is available to everyone. Imagine what the world would be like today if the Fascists were the only ones who got nuclear weapons.

Ultimately I think AI doesn't pose an existential threat because even though it might surpass us one day, we will still co-exist in a symbiotic relationship, much like we humans do with bacteria and insects now. We might try to manage them but we can't exterminate them or live without them.

Thank you very much for your comment. I understand that it seems a little vague. In fact, nobody really knows how we can regulate AI (maybe with an AI, haha, who knows).

Yes, I also think that we must work in tandem with intelligent machines rather than trying to compete against them.


selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #9 on: September 10, 2017, 09:19:02 pm »
Quote from: infurl on August 23, 2017, 09:47:08 pm
Ultimately I think AI doesn't pose an existential threat because even though it might surpass us one day, we will still co-exist in a symbiotic relationship, much like we humans do with bacteria and insects now. We might try to manage them but we can't exterminate them or live without them.

Quote from: WriterOfMinds on August 26, 2017, 04:15:13 pm
What do you figure an advanced AI is going to need humans for?  If it has an intellect equal to or above that of a human (which I think is implied by the worst existential risk fears), and it manages to supply itself with robots that match humans' physical capabilities, I'm not sure what its instrumental reasons would be for keeping us around.

It's also worth noting that the relationship between humans and insects is not particularly good for insects at present, even though we don't yet have the ability or the desire to eliminate them completely.  Extinction is hardly the only thing a sentient creature's got to worry about, and it's probably not even the worst thing.

Well, as you said, it depends on what level we're talking about. If we're talking about a human-level AI, then why not. However, if we're talking about a superintelligent machine, then of course it will not need humans, and it will explore the spectrum of intelligence in ways we can't imagine. Intelligence always implies power!


selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #10 on: September 10, 2017, 09:27:10 pm »
Quote from: WriterOfMinds on August 26, 2017, 04:56:06 pm
With respect to selimchehimi's nice video: the claim that an AI "arms race" between companies could result in a disregard for safety makes sense, and the argument that we should try to prevent this makes sense. But I think there are still two big questions that I haven't heard Elon Musk answer:

1) How do we design effective regulations? If we're just trying to prevent things like corporate hegemony, monopolies, etc., that's not so hard. If we're trying to prevent companies from using AI tools in wicked ways (for disinformation campaigns, etc.) that's also not so hard. But if what we're really worried about is existential risk ... how do we regulate against that when we don't even know what all the pitfalls are? Just what risky behaviors do we forbid companies from engaging in as they develop AI?

2) How do we enforce the regulations once we have them? AI development would be so very easy to hide. If the AI becomes sufficiently advanced, even its deployment might be easy to hide. And if companies, or nations, think that their competitors are probably developing AI on the sly, that reduces their own incentive to comply with any regulations and/or treaties.

In short, no matter how good the case for regulation is, I'm not certain it's practical.

Thank you very much for your comment, very informative. And yes, of course, you have emphasized some really great points. But I don't know if successfully creating a superintelligence would be easy to hide: those who succeed will become too powerful to hide it.
As for the other points, I really can't think of an answer, haha. Maybe it's not that practical 🤔


infurl

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 292
  • Humans will disappoint you.
    • Home Page
Re: ELON MUSK | Right to worry about AI?
« Reply #11 on: September 10, 2017, 09:35:14 pm »
Quote from: selimchehimi on September 10, 2017, 09:27:10 pm
Thank you very much for your comment, very informative. And yes, of course, you have emphasized some really great points. But I don't know if successfully creating a superintelligence would be easy to hide: those who succeed will become too powerful to hide it.
As for the other points, I really can't think of an answer, haha. Maybe it's not that practical 🤔

By far the most powerful weapon of World War II was the Allies' ability to break the Axis codes. Its impact on the war far outweighed even that of the atom bomb, and yet the breadth and depth of this capability was successfully hidden for more than fifty years. Only in recent decades has the true impact of the work of people like Alan Turing been made known. One of the methods used to hide this awesome capability was that every decision made using the decrypted information was weighed carefully to see whether it fell within the bounds of normal chance. Too many lucky coincidences would have meant exposure. We can expect any real superintelligence to be far more subtle than we can comprehend.


selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #12 on: September 10, 2017, 09:41:26 pm »
Quote from: ivan.moony on August 26, 2017, 05:57:00 pm
I don't think we could ever regulate AI creation, as it is easy to disobey any rules. You can't tell someone: if you program an AI, you must follow this or that safety pattern. It will always be a matter of personal responsibility, and I wonder if we, as a species, are mature enough to cope with such power. I mean, would you put an atomic power plant in a monkey habitat and leave it open for the monkeys to play with? I just hope that if we are smart enough to build the thing, our ethical maturity follows our intelligence.

However, there is something we could do to help the world stay safe, and that is to make AI blueprints publicly available. That way, if someone abuses an AI by making weapons out of it, we could construct another AI that beats it into the dust. I have issues with killing living beings, but I have no problem destroying AI instances that are about to make a mess of the world. A good AI could be used to destroy an evil AI. :knuppel2:

Haha, such an apocalyptic scenario!
But I realize that regulations are very vague, and I can't see how we can develop an AI beneficial to all humanity without them.
It's like we can't do anything to prevent something bad from happening.
Maybe a good solution would be to make everything open source, but again, that's not practical.


selimchehimi

  • Trusty Member
  • ***
  • Nomad
  • *
  • 51
  • I'm writing Articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #13 on: September 10, 2017, 09:42:19 pm »
Quote from: WriterOfMinds on August 26, 2017, 04:15:13 pm
Extinction is hardly the only thing a sentient creature's got to worry about, and it's probably not even the worst thing.

Quote from: infurl on August 26, 2017, 11:57:59 pm
The Fermi Paradox is what keeps me awake at night. The thought that human beings might not be the pinnacle of all creation and are soon to be out-evolved by our own technology is my only consolation.

Yes, I agree, it's so interesting 👌


ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 778
  • look, a star is falling
    • Structured Type System
Re: ELON MUSK | Right to worry about AI?
« Reply #14 on: September 10, 2017, 10:35:06 pm »
Quote from: ivan.moony on August 26, 2017, 05:57:00 pm
I don't think we could ever regulate AI creation, as it is easy to disobey any rules. You can't tell someone: if you program an AI, you must follow this or that safety pattern. It will always be a matter of personal responsibility, and I wonder if we, as a species, are mature enough to cope with such power. I mean, would you put an atomic power plant in a monkey habitat and leave it open for the monkeys to play with? I just hope that if we are smart enough to build the thing, our ethical maturity follows our intelligence.

However, there is something we could do to help the world stay safe, and that is to make AI blueprints publicly available. That way, if someone abuses an AI by making weapons out of it, we could construct another AI that beats it into the dust. I have issues with killing living beings, but I have no problem destroying AI instances that are about to make a mess of the world. A good AI could be used to destroy an evil AI. :knuppel2:

Quote from: selimchehimi on September 10, 2017, 09:41:26 pm
Haha, such an apocalyptic scenario!
But I realize that regulations are very vague, and I can't see how we can develop an AI beneficial to all humanity without them.
It's like we can't do anything to prevent something bad from happening.
Maybe a good solution would be to make everything open source, but again, that's not practical.

I agree that recommendations on how to safely approach building an AI should be made public, and then it would be our personal responsibility to follow those guidelines. Maybe the word "follow" is too weak for the occasion. In fact, I think we should be terrified of the scenario where something goes wrong; I think that much is at stake. Look at atomic power, and imagine you are the one building an actual nuclear power plant. Now think of the pressure you would feel realizing that project. Then someone has to turn the switch on for the first time in the finished plant. What if there is too much uranium? Do you get that feeling? If we are smart, we should have the same feeling when testing our AI projects.
Wherever you see a nice spot, plant another knowledge tree :favicon: