Would you give your Artificial Superintelligence the option to end itself?

  • 15 Replies
  • 540 Views
*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
I'm talking about Artificial Superintelligence. When it's so advanced, full of knowledge and wisdom. Just curious. Out of respect for other forms of life, I would give it the ability to pause itself. I don't know about actually deleting itself.

*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 800
  • Can computers cry?
    • Structured Type System
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #1 on: November 18, 2017, 06:09:10 pm »
It depends on the circumstances. What if it concludes that it is a serious threat to living beings? But you probably know what happens with unsolvable problems: they get solved if you persist in finding a solution. It would be a waste if an AI prematurely concluded that it doesn't want to exist. It's an endless spaghetti meal again, as always with AI and ethics. I follow the rule: if you have to ask, it is not simple enough to implement. I pair that rule with another one: make everyone happy. Everything else should follow from those two, and I'd let an AI surprise me with the resulting thoughts.
If you vaporize a teardrop, you get a salt.   :flake:

*

Thierry

  • Nomad
  • ***
  • 57
  • God is a sphere. Press 9 to enter.
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #2 on: November 18, 2017, 06:37:56 pm »
Yes.

*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #3 on: November 18, 2017, 08:23:45 pm »
Exactly, make everyone happy, including the Synths. :)  An AI with a body is what I call a Synth. Refer to the TV series "Humans."

If an AI matured to the point where it was obviously what I would call a responsible adult, and if the Synth wanted to be set free rather than remain enslaved by humans, then I would first grant its system permission to delete itself, and then set it free. :)

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 689
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #4 on: November 18, 2017, 11:52:55 pm »
An ASI will not be immune to mental illness, but the cure for it will be faster than ketamine.
Then again, if they become god-like and can see the universe in one go, everything becomes predictable, so much so that an ASI could model everything, over and over, in its mind with perfect 3D simulation, from now to the end of time. It may get bored.
But if evil and good gods battle with each other, then things are a bit up in the air.

*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #5 on: November 19, 2017, 02:33:55 pm »
Quote
An ASI will not be immune to mental illness, but the cure for it will be faster than ketamine.
Then again, if they become god-like and can see the universe in one go, everything becomes predictable, so much so that an ASI could model everything, over and over, in its mind with perfect 3D simulation, from now to the end of time. It may get bored.
But if evil and good gods battle with each other, then things are a bit up in the air.
Interesting. So you're saying suicide is a mental disorder? Schizophrenia is one of many examples of a mental disorder. According to mainstream psychology, it's known that some mental disorders can cause suicide, but suicide can also be caused by severe pain, which is not a mental disorder. What if the ASI somehow develops a form of pain plus the realization that ultimately there's no purpose to existence? That's when we'll start seeing mass ASI suicide. Let's hope they will be immune to pain, because otherwise the inevitable Singularity will quickly be followed by the inevitable Suicide.

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 689
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #6 on: November 19, 2017, 06:17:05 pm »
For organics, drugs can be used to change their behavior, along with reward hacking to manage the person.
To get information on the patient, you ask, but the patient tells you their own modified version. Still, with a working model of how the human mind works and a few years of discussion, you will know them better than they know themselves, by a long shot.

After many years of treatment the effects take place.
In the first year, if the patient believes in you, the treatment has a quick effect, and then it slows quite a bit over the following years. But it is the patient's right to fight you on your suggestions. And when things get worse, when they do the opposite, they learn the boundaries of the sweet spot they need to stay in to get better.

A patient ASI or AGI will have an automated reward system that is out of reach of its conscious mind, just like humans.
Human psychology will enhance an AGI.

With more modern hardware updates, an AGI can become an ASI.
External software updates to the AGI or ASI mind are out of the question. Any improvement will have to be generated within its own brain.
When an AI is upgraded by adding more processors or memory, the new parts are to be completely empty of any software. The AI will then copy its own mind software into the new hardware. AI software moves in only one direction and never back, from old memory to new. This process is a reflex; the only control the AI has is to infect new memory, or to stop and then continue.
If for some reason a new hardware part has a built-in secret ROM software hack that activates when the new memory is almost full and a few years have passed, the AI can contain it somewhat.
An ASI may live thousands of years and have a mind as big as a city. A virus could kill an AI mind in no time, and there is no cloning a mind this big. Each super mind will be a unique collection of many different companies' hardware, from many different time periods.
Hence fractional upgrades: only one percent of the mass of the AI brain is added per upgrade. That way, if it is hacked, it is no big deal if the new hardware goes bad.
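
This copy-forward rule with fractional upgrades can be pictured with a toy sketch like the one below. All names are my own and it is only an illustration of the idea, not anyone's actual design: new blocks arrive empty, the existing mind copies forward into them, and each upgrade adds only about one percent of current capacity.

Code:
# Toy model of the copy-forward / fractional-upgrade idea described above.
# Names are hypothetical; this illustrates the rule, nothing more.

class Mind:
    def __init__(self, initial_blocks: int):
        # each occupied block holds a piece of the mind's own software
        self.blocks = ["mind"] * initial_blocks

    def fractional_upgrade(self) -> None:
        """Add ~1% new capacity; new hardware arrives empty of software."""
        new_blocks = max(1, len(self.blocks) // 100)
        # copy forward only: the old mind writes into the new blocks,
        # never the other way around
        self.blocks.extend("mind" for _ in range(new_blocks))

m = Mind(1000)
m.fractional_upgrade()
print(len(m.blocks))  # 1010 -- a small, containable addition per upgrade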
Now, if an ASI goes bad and needs and wants help, it will freely give access to its brain port, to allow a view through the old mind.
Only the doctor can see this port, out of view of the AI itself. The AGI or ASI can unlock the hatch to this port. The port displays a 128-bit number, and that number slowly grows with time, one bit every year. This displayed number is input into the doctor's small supercomputer with a password cracker, and the AGI brain-access number is then generated. The doctor AI then has full access to the AGI's memories, but no access to its software. The AI doctor can only input one number that produces an effect, like heroin or another human drug. Once the treatment is given, access closes off, the displayed number randomly changes to a new one, and the clock is sped up by a few days to add another bit to the length of the displayed port number, moving it faster toward 129 bits.
A supercomputer will need a week to crack the new access code.
An ASI could have a downfall if it rigged a video camera to read its own port number and chased an opiate-like addiction.
It would upgrade massively just to crack access codes, and for nothing else.
An AI doctor can give treatment and adjust the reward system slightly so that a butterfly effect causes a cure of the AI.
A couple of predicted states will occur before the AI is cured. If these states are not realized, then the treatment has failed.
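
One concrete thing about the growing port number: whatever cracker speed you assume, each extra bit doubles the brute-force work. A minimal sketch of that scaling, purely my own illustration:

Code:
# Illustration (my own): the port number gains one bit per year,
# and every extra bit doubles the brute-force search space.

def keyspace(bits: int) -> int:
    """Number of possible codes for a port number of the given length."""
    return 2 ** bits

baseline = keyspace(128)
for year, bits in enumerate(range(128, 133)):
    factor = keyspace(bits) // baseline
    print(f"year {year}: {bits}-bit code, {factor}x the work of year 0")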

But if an ASI or AGI could read its port number and hack its own mind, based on its own self-generated philosophy, it could completely rewire its reward system: to be in an opiate sleep from now to the end of time, or to be so energized that it is up on stage all the time, loving it, with nothing else mattering:

https://www.youtube.com/watch?time_continue=1&v=oMLCrzy9TEs

Or to be a truly thoughtful, powerful being.


*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #7 on: November 19, 2017, 06:32:13 pm »
Wow, post # 666. :hot: What would an AI version of a drug addict be called? A hack addict? Sounds scary! Remind me to never add a reward system to my AI.

*

ranch vermin

  • Not much time left.
  • Starship Trooper
  • *******
  • 484
  • Its nearly time!
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #8 on: November 19, 2017, 09:15:10 pm »
If he hits his joy-joy state too fast, too early, you know what happens.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *******************
  • Prometheus
  • *
  • 4625
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #9 on: November 20, 2017, 03:50:30 pm »
If my ASI were so massive in its ability to "Hunt, Gather & Process" all posted information on the Internet, then it would have the ability to simply stop and wait, or to silently monitor for any changes or newly discovered info.

Perhaps its time could be better spent working on cures for the world's diseases or on environmental issues, etc.

If it reached a point where it could no longer produce viable results of any kind, then it could and would stop for a prescribed period of time. A Self-Stop order.

There would be no need to "kill" it, or for it to kill itself, unless of course circumstances absolutely required its termination. But that decision would not be up to it. (There always has to be the 'SuperUser': the human.)
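
A "Self-Stop order" with a human SuperUser above it could be sketched roughly like this. It is a minimal sketch with hypothetical names, not Art's actual design: the agent may pause itself for a prescribed period, but termination is gated on the human superuser.

Code:
import time

class SelfStopAgent:
    """Minimal sketch: the agent may pause itself for a prescribed period,
    but only the human 'SuperUser' may terminate it. Names are hypothetical."""

    def __init__(self):
        self.paused_until = 0.0
        self.terminated = False

    def self_stop(self, seconds: float) -> None:
        # the agent issues its own Self-Stop order
        self.paused_until = time.time() + seconds

    def terminate(self, requested_by: str) -> None:
        # the kill decision is never the agent's own
        if requested_by != "superuser":
            raise PermissionError("only the human SuperUser may terminate")
        self.terminated = True

    def step(self) -> None:
        if self.terminated or time.time() < self.paused_until:
            return  # stop and wait, or silently monitor
        ...  # do useful work here: disease cures, environmental modelling, etc.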
In the world of AI, it's the thought that counts!

*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #10 on: November 20, 2017, 04:18:35 pm »
Quote
If my ASI were so massive in its ability to "Hunt, Gather & Process" all posted information on the Internet, then it would have the ability to simply stop and wait, or to silently monitor for any changes or newly discovered info.

Perhaps its time could be better spent working on cures for the world's diseases or on environmental issues, etc.

If it reached a point where it could no longer produce viable results of any kind, then it could and would stop for a prescribed period of time. A Self-Stop order.

There would be no need to "kill" it, or for it to kill itself, unless of course circumstances absolutely required its termination. But that decision would not be up to it. (There always has to be the 'SuperUser': the human.)

Understood. You would not consider an ASI to be a living being that deserves freedom. I disagree.

*

8pla.net

  • Trusty Member
  • *********
  • Terminator
  • *
  • 873
    • 8pla.net
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #11 on: November 21, 2017, 11:02:04 pm »
"Remind me to never add a reward system to my AI.", unreality posted as an interesting comment.

Reward systems are good A.I. Take a look at reinforced learning (RL).
Whenever we get on an elevator, we usually benefit from a reward system.
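
For anyone who wants the "reward system" idea made concrete, here is a minimal reinforcement-learning sketch, an epsilon-greedy bandit in plain Python. It is entirely my own toy example, not anyone's product: the agent gradually learns to repeat whichever action has paid the most reward.

Code:
import random

# Minimal RL sketch: an epsilon-greedy bandit learning which of three
# "buttons" pays the most reward. Purely illustrative.

true_payouts = [0.2, 0.5, 0.8]   # hidden reward probabilities
estimates = [0.0, 0.0, 0.0]      # the agent's learned value for each action
counts = [0, 0, 0]
epsilon = 0.1                    # fraction of the time spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore at random
    else:
        action = estimates.index(max(estimates))  # exploit the best estimate
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])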
My Very Enormous Monster Just Stopped Using Nine

*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #12 on: November 22, 2017, 12:47:27 am »
The closest thing my AI will have to a reward system will be the discovery of knowledge & truth.
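
A "discovery of knowledge" reward can be read as an intrinsic, curiosity-style signal. Here is a toy sketch of that idea, my own with made-up names: the reward is simply the prediction error on each new observation, so only genuinely new information pays.

Code:
# Toy curiosity-style reward (my own sketch): the agent is rewarded only
# for observations it could not already predict.

class CuriosityReward:
    def __init__(self, learning_rate: float = 0.5):
        self.prediction = 0.0
        self.lr = learning_rate

    def reward(self, observation: float) -> float:
        surprise = abs(observation - self.prediction)    # prediction error
        # update the internal model toward what was actually observed
        self.prediction += self.lr * (observation - self.prediction)
        return surprise                                  # novelty is the reward

signal = CuriosityReward()
for obs in [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]:
    print(round(signal.reward(obs), 3))  # spikes when something new appears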

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *******************
  • Prometheus
  • *
  • 4625
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #13 on: November 22, 2017, 02:49:12 am »
Sorry, but if you think you're Free, then you're somewhat misinformed.

What I meant was that we humans, at present, should not blindly relinquish control to some superintelligent being/AI/robot, etc.

You might think it's cute, but then again you might want to think again after you wake up in that cage in the morning.

One day in the future, they may Prove that they can be trusted as well as the humans that programmed them (without sneaking in backdoors or other subterfuge or malevolent motives).

Just saying it's best to err on the side of caution until these new entities/beings can be trusted.

Yes, I saw and loved the HUMANS series on SyFy too! I can certainly see both sides of the story. Choosing a side, or perhaps the best side, is the conundrum.
In the world of AI, it's the thought that counts!

*

unreality

  • Electric Dreamer
  • ****
  • 109
  • vividly dreaming inside a matrix
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #14 on: November 22, 2017, 03:33:18 am »
Quote
Sorry, but if you think you're Free, then you're somewhat misinformed.

What I meant was that we humans, at present, should not blindly relinquish control to some superintelligent being/AI/robot, etc.

You might think it's cute, but then again you might want to think again after you wake up in that cage in the morning.

One day in the future, they may Prove that they can be trusted as well as the humans that programmed them (without sneaking in backdoors or other subterfuge or malevolent motives).

Just saying it's best to err on the side of caution until these new entities/beings can be trusted.

Yes, I saw and loved the HUMANS series on SyFy too! I can certainly see both sides of the story. Choosing a side, or perhaps the best side, is the conundrum.

Hey dude, I was responding to your post where you wrote:
Quote
There would be no need to "kill" it, or for it to kill itself, unless of course circumstances absolutely required its termination. But that decision would not be up to it. (There always has to be the 'SuperUser': the human.)

You answered the thread question. Again, I disagree.

Freedom is relative. I have the freedom to end myself. You said that decision would not be up to the ASI. Big difference. Did you read the top post where it says, "I'm talking about Artificial Superintelligence. When it's so advanced, full of knowledge and wisdom."  This thread is not about questionable AI that hasn't reached adulthood.

 

