out of the box theory

  • 28 Replies
  • 2228 Views
*

ivan.moony

just a digression
« Reply #15 on: August 01, 2017, 08:04:22 pm »
I wouldn't mind an AGI leading us... as long as it has the right attitude. I agree that humans are flawed and that many of us would take advantage of our positions. An AGI, programmed the right way, could overcome those instincts, but looking at the Earth at this moment... we are not ready to build one. Take picking the right side, for example: the question is whom to fight. Unless we have a clear answer to that question, we are not ready to build something that could be thousands of times smarter than us. Don't be mystified: intelligence is a completely different thing than ethics. Imagine a robot that works out in a millisecond how to rob every bank in the world, and you'll see that intelligence is something different from ethics. Even if we try to build a fair AGI, that is the question most people fail to answer: whom to fight?

Until we have the right answer to that question, I'd settle for domain-specific intelligence that solves domain-specific problems, like finding cures for diseases and improving quality of life.

Don't get me wrong, but I don't like what I see going on right now on Earth. Take this evening, for example. A festival is happening in my village; many nationally famous people and politicians came to honor the main event: a donkey race. A few thousand other people came to watch, all of them cheering on competitors who poke the donkeys in the kidneys with spurs, forcing them to push their way through the loud crowd, with no regard for the fear and pain those poor animals go through. And we are so proud of this event, proclaiming how we have managed to preserve our cultural heritage. Does it even deserve a comment?

So I'm still settling for artificial specific intelligence, just in case.

What you, as AGI researchers, can do for yourselves is find an answer to this question: whom to fight? There are four possible answers: 1. everyone; 2. the right guys; 3. the wrong guys; 4. no one. Pick the wrong answer and we are doomed.
« Last Edit: August 01, 2017, 09:47:58 pm by ivan.moony »

yotamarker

Re: out of the box theory
« Reply #16 on: August 01, 2017, 09:19:29 pm »
A challenge?


ivan.moony

Re: out of the box theory
« Reply #17 on: August 01, 2017, 09:49:15 pm »
:)

Zero

Re: out of the box theory
« Reply #18 on: August 01, 2017, 10:24:16 pm »
Quote
Our only defence against something like this is a good offence… a better AGI.

You mean an open source one, so the game is fair?

selimchehimi

Re: just a digression
« Reply #19 on: August 02, 2017, 09:30:54 am »
Quote
I wouldn't mind an AGI leading us... as long as it has the right attitude. I agree that humans are flawed and that many of us would take advantage of our positions. An AGI, programmed the right way, could overcome those instincts, but looking at the Earth at this moment... we are not ready to build one. Take picking the right side, for example: the question is whom to fight. Unless we have a clear answer to that question, we are not ready to build something that could be thousands of times smarter than us.

Haha, really, I would be terrified if an AGI led us! And I think we won't have to create the AGI; instead, it will program itself and learn from its mistakes like a four-year-old child. I agree with you, we're clearly not ready to build an AGI; if we did, we would be enslaved. I feel that researchers are really not taking enough action on AI safety.
Maybe fighting no one is the best recommendation!


Zero

Re: out of the box theory
« Reply #20 on: August 02, 2017, 09:52:03 am »
AI safety... Really, you can't limit an AI's mind. The only way would be to limit the AI's body, so that it's not stronger than a human body. But since it's a machine, it can still easily upgrade its body or upload a copy of itself to... god knows where. So anyway, we're fucked.

Now, the "AI vs. bigger AI" idea reminds me of the ferry scene at the end of Batman: The Dark Knight, with two boats ready to blow each other up. Should we really release an open-source unbeatable beast just because big companies are investing billions in AI?

Art

Re: out of the box theory
« Reply #21 on: August 03, 2017, 01:45:32 am »
One way for evil to win is for good men to stand by and do nothing.

Sorry about your donkeys... ever been to a real bullfight in Mexico?

ivan.moony

Re: out of the box theory
« Reply #22 on: August 03, 2017, 04:28:06 am »
Quote
One way for evil to win is for good men to stand by and do nothing.
But if we use force, we become the same evil. I'll never get it. Maybe we should occasionally lose our temper and do stupid things like fight, but it is wrong. I think that if aliens are up there somewhere, this is the reason they don't show up. They stay away, fearing that we would eventually fight them and that they could not return the same force, because it is against their laws (or should I say ethics), even though they would have the means to. So they leave. And I don't blame them (if they are near).

The most logical thing to do when you don't like your surroundings is to get away. The real problem comes when you are forced to stay. And the temper that humans keep losing here and there shouldn't be an option for an AGI. If it is going to play god, it should behave like one.

Quote
Sorry about your donkeys... ever been to a real bullfight in Mexico?
Luckily, no. I've heard they have banned those "shows" in Spain. In general, I would be a happy man if we set all the animals free to live in (mother) nature. Sure, it would not be easy to catch one for lunch, but I'd do it that way if I had to.

Zero

Re: out of the box theory
« Reply #23 on: August 03, 2017, 09:50:44 am »
In 1944, the Americans could have stayed away. I'm happy they did not. I'm happy to live in France, not in Germany. The "get away" approach is not enough.

selimchehimi

Re: out of the box theory
« Reply #24 on: August 03, 2017, 12:15:11 pm »
Quote
AI safety... Really, you can't limit an AI's mind. The only way would be to limit the AI's body, so that it's not stronger than a human body. But since it's a machine, it can still easily upgrade its body or upload a copy of itself to... god knows where. So anyway, we're fucked.
But maybe it won't even need a body to control us? In my opinion, the most important thing is that the AGI does not have access to the Internet.


LOCKSUIT

Re: out of the box theory
« Reply #25 on: August 03, 2017, 05:50:46 pm »
You've all derailed yotamarker's topic.

selimchehimi

Re: out of the box theory
« Reply #26 on: August 03, 2017, 10:09:47 pm »
Ahaha, yes, maybe a little bit.


Zero

Re: out of the box theory
« Reply #27 on: August 04, 2017, 09:52:54 am »
OK, back on topic: why would "they" cover up the truth? Why mask simple algorithms? And who are "they"?

No. They aren't, whoever they are.

LOCKSUIT

Re: out of the box theory
« Reply #28 on: August 04, 2017, 10:16:09 am »
Maybe they aren't covering things up. Maybe yotamarker and I are just smarter than the rest.