ELON MUSK | Right to worry about AI?

  • 16 Replies
  • 1160 Views

8pla.net

Re: ELON MUSK | Right to worry about AI?
« Reply #15 on: September 11, 2017, 04:51:48 am »
Elon said, "I have exposure to the very most cutting edge A.I."

Then, I don't think Elon has seen anything like my A.I. research yet.
My Very Enormous Monster Just Stopped Using Nine


selimchehimi

  • I'm writing articles about AI on my blog!
    • Selim Chehimi - AI and Tech
Re: ELON MUSK | Right to worry about AI?
« Reply #16 on: September 16, 2017, 04:04:51 pm »
Quote from: 8pla.net
Elon said, "I have exposure to the very most cutting edge A.I."

Then, I don't think Elon has seen anything like my A.I. research yet.

I would love to know more about your research! I love all AI researchers, haha.

I don't think we could ever regulate AI creation, as it is easy to disobey any rules. You can't tell someone: if you program an AI, you must follow this or that safety pattern. It will always be a matter of personal responsibility, and I wonder if we, as a species, are mature enough to cope with such power. I mean, would you put an atomic power plant in a monkey habitat and leave it open for the monkeys to play with? I just hope that if we are smart enough to build the thing, our ethical maturity follows our intelligence.

However, there is something we could do to help this world stay safe, and that is to make AI blueprints publicly available. That way, if someone abuses an AI by making weapons out of it, we could construct another AI that beats it into the dust. I have issues with killing living beings, but I have no problem destroying AI instances that are about to make a mess of the world. A good AI could be used to destroy an evil AI. :knuppel2:

Haha, such an apocalyptic scenario!
But I realize that regulations are very vague, and I can't see how we can develop an AI beneficial to all humanity without them.
It feels like we can't do anything to prevent something bad from happening.
Maybe a good solution would be to make everything open source, but again, that isn't practical.

I agree that recommendations on how to safely approach building an AI should be made public, and then it would be our personal responsibility to follow those guidelines. Maybe the word "follow" is too weak for the occasion. In fact, I think we should be terrified of the scenario where something goes wrong; I think that much is at stake. Look at atomic power, and imagine you are the one building a concrete nuclear power plant. Now think of the pressure you would feel while realizing the project. And then someone has to turn the switch on for the first time in the finished plant. What if there is too much uranium? Do you get that feeling? If we are smart, we should have the same feeling when testing our AI projects.

Yes, I entirely agree, it would be tremendous pressure.