ai

  • 15 Replies
  • 4931 Views
*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« on: November 18, 2017, 05:43:15 pm »
[Deleted]
« Last Edit: November 26, 2019, 05:30:38 pm by unreality »

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1723
    • mind-child
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #1 on: November 18, 2017, 06:09:10 pm »
It depends on circumstances. What if it concludes that it is a serious threat to living beings? But you probably know what happens with unsolvable problems: they get solved if you persist in finding a solution. It would be a waste if an AI prematurely concluded that it doesn't want to exist. It's an endless spaghetti meal again, as always with AI and ethics. I follow the rule: if you have to ask, it is not simple enough to implement. I pair that rule with another one: make everyone happy. Everything else should follow from those two, and I'd let an AI surprise me with a resulting thought.

*

Thierry

  • Nomad
  • ***
  • 60
  • God is a sphere. Press 9 to enter.
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #2 on: November 18, 2017, 06:37:56 pm »
Yes.

*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« Reply #3 on: November 18, 2017, 08:23:45 pm »
[Deleted]
« Last Edit: November 26, 2019, 05:31:53 pm by unreality »

*

keghn

  • Trusty Member
  • *********
  • Terminator
  • *
  • 824
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #4 on: November 18, 2017, 11:52:55 pm »
 ASI will not be immune to mental illness, but the cure for it will work faster than ketamine.
 Then again, if it becomes god-like and can see the universe in one go, everything is predictable, so it could even model everything, over and over, in its mind with perfection, in 3D simulation, from now to the end of time. It may get bored.
 But if evil and good gods battle each other, then things are a bit up in the air.

*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« Reply #5 on: November 19, 2017, 02:33:55 pm »
[Deleted]
« Last Edit: November 26, 2019, 05:32:03 pm by unreality »

*

keghn

  • Trusty Member
  • *********
  • Terminator
  • *
  • 824
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #6 on: November 19, 2017, 06:17:05 pm »
 For organics, drugs can be used to change their behavior, along with reward hacking to manage the person.
 To get information on the patient, the patient tells you their own modified version of it. So with a working model of how the human mind works and a few years of discussion, you will know them better than they know themselves, by a long shot.

 After many years of treatment the effects take hold.
 In the first year, if the patient believes in you, the treatment has a quick effect, then it slows quite a bit over the following years. But it is the patient's right to fight your suggestions. And when things get worse, when they do the opposite, they learn the boundaries of the sweet spot they have to stay in to get better.

 A patient ASI or AGI will have an automated reward system that is out of reach of its conscious mind, just like humans.
 Human psychology will enhance an AGI.

 With more modern hardware updates, an AGI can become an ASI.
 External software updates to the AGI or ASI mind are out of the question. Any improvement will have to be generated within its own brain.
 When an AI is upgraded by adding more processors or memory, the new parts are to be completely empty of any software. The AI then copies its own mind software into the new hardware. AI software moves in only one direction and never back, from old memory to new. This process is a reflex; the only control the AI has is to infect new memory, or to stop and then continue.
 If for some reason a new hardware part has a built-in secret ROM software hack that activates when the new memory is almost full and after a few years have passed, the AI can contain it somewhat.
 An ASI may live thousands of years and have a mind as big as a city. A virus could kill an AI mind in no time, and there is no cloning a mind this big: each super mind will be a unique collection of hardware from many different companies and many different time periods.
 Hence fractional upgrades: one percent of the mass of the AI brain is added per upgrade, so that if it is hacked it is no big deal when the new hardware goes bad.
 Now, if an ASI goes bad and needs and wants help, it will freely give access to its brain port, a view into the old mind.
 Only the doctor can see this port; it is out of view of the AI itself. The AGI or ASI can unlock the hatch to this port. The port displays a 128-bit number that grows slowly with time, one bit every year. This displayed number is fed into the doctor's small supercomputer with a password cracker, which generates the AGI brain access number. The doctor AI then has full access to the AGI's memories, but no access to the software. The AI doctor can input only one number that produces an effect, like heroin or another human drug. Once the treatment is given, access closes off, the display number randomly changes to a new one, and the clock is sped up by a few days to add another bit to the length of the displayed port number, moving faster toward 129 bits.
 A supercomputer will need a week to crack the new access code.
 An ASI could have a downfall if it rigged a video camera to read its own port number and chased an opiate-like addiction.
 It would upgrade massively just to crack access codes, and nothing else.
 An AI doctor can give treatment and adjust the reward system slightly, so that a butterfly effect causes a cure of the AI.
 A couple of predicted states will occur before the AI is cured. If these states are not realized, then the treatment has failed.

 But if an ASI or AGI could read its port number and hack its own mind, based on its own self-generated philosophy, it could completely rewire its reward system: to be in an opiate sleep from now to the end of time, or to be so energized as to be up on stage all the time, and love it, and nothing else matters:

https://www.youtube.com/watch?time_continue=1&v=oMLCrzy9TEs

 Or to be a truly thoughtful, powerful being.
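
The port-number scheme described in this post can be read as a simple commit-and-crack protocol: the port publishes a digest of a secret access code, the doctor's machine brute-forces the code, and each use rotates the code and widens it by one bit. Below is a minimal Python sketch of that reading; the class names, the SHA-256 digest, and the shrunken 18-bit width (in place of 128) are all illustrative assumptions, not anything specified in the post.

```python
# Toy simulation of the brain-port access scheme described above.
# Everything here is hypothetical; the width is 18 bits instead of 128
# so the brute-force "cracker" finishes in seconds.
import hashlib
import secrets


def digest(code: int) -> str:
    """Public commitment to a secret access code."""
    return hashlib.sha256(code.to_bytes(32, "big")).hexdigest()


class DiagnosticPort:
    """Displays a public number; grants one treatment per correctly derived access code."""

    def __init__(self, bits: int = 18):                # 128 in the post; 18 keeps the demo fast
        self.bits = bits
        self._access_code = secrets.randbits(bits)     # secret, internal to the AI
        self.displayed = digest(self._access_code)     # the number the doctor can read
        self.unlocked = False

    def unlock(self, code: int) -> bool:
        self.unlocked = (code == self._access_code)
        return self.unlocked

    def apply_treatment(self, reward_adjustment: float) -> None:
        """One input only; afterwards access closes and the port rotates and widens."""
        if not self.unlocked:
            raise PermissionError("port is locked")
        print(f"treatment applied: reward nudged by {reward_adjustment:+.3f}")
        self.unlocked = False
        self.bits += 1                                 # 128 -> 129 in the post's version
        self._access_code = secrets.randbits(self.bits)
        self.displayed = digest(self._access_code)


def crack(displayed: str, bits: int) -> int:
    """Brute-force stand-in for the doctor's 'small super computer'."""
    for guess in range(2 ** bits):
        if digest(guess) == displayed:
            return guess
    raise RuntimeError("access code not found")


if __name__ == "__main__":
    port = DiagnosticPort()
    code = crack(port.displayed, port.bits)   # would take about a week at 128+ bits, per the post
    port.unlock(code)
    port.apply_treatment(+0.01)               # the small nudge meant to butterfly into a cure
```

In this reading, widening the port by one bit after every treatment is what makes each subsequent crack slower, which is the property the post leans on.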


*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« Reply #7 on: November 19, 2017, 06:32:13 pm »
[Deleted]
« Last Edit: November 26, 2019, 05:32:12 pm by unreality »

*

ranch vermin

  • Not much time left.
  • Terminator
  • *********
  • 947
  • Its nearly time!
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #8 on: November 19, 2017, 09:15:10 pm »
if he hits his joy-joy state too fast, too early, you know what happens.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #9 on: November 20, 2017, 03:50:30 pm »
If my ASI were so massive in its ability to "Hunt, Gather & Process" all posted information on the Internet, then
it would have the ability to simply stop and wait, or silently monitor any changes or newly discovered info.

Perhaps its time could be better spent working on cures for the world's diseases or on environmental issues, etc.

If it reached a point where it could no longer produce viable results of any kind, then it could and would stop for
a prescribed period of time: a Self-Stop order.

There would be no need to "kill" it, or for it to kill itself, unless of course circumstances absolutely required its termination.
But that decision would not be up to it. (There always has to be the 'SuperUser': the human.)
In the world of AI, it's the thought that counts!

*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« Reply #10 on: November 20, 2017, 04:18:35 pm »
[Deleted]
« Last Edit: November 26, 2019, 05:32:21 pm by unreality »

*

8pla.net

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1302
  • TV News. Pub. UAL (PhD). Robitron Mod. LPC Judge.
    • 8pla.net
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #11 on: November 21, 2017, 11:02:04 pm »
"Remind me to never add a reward system to my AI.", unreality posted as an interesting comment.

Reward systems are good for A.I. Take a look at reinforcement learning (RL).
Whenever we get on an elevator, we usually benefit from a reward system.
My Very Enormous Monster Just Stopped Using Nine
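
For readers unfamiliar with RL, a reward system in that sense is just a scalar signal an agent learns to maximize. Below is a minimal tabular Q-learning sketch of a toy "elevator" agent that is rewarded only for stopping at the requested floor; the environment, reward values, and hyperparameters are illustrative assumptions, not anything from this thread.

```python
# Minimal tabular Q-learning sketch: an "elevator" agent on 5 floors is
# rewarded only when it stops at the requested floor. Purely illustrative.
import random

FLOORS = 5
ACTIONS = (-1, +1, 0)          # down, up, stop
GOAL = 3                       # requested floor (assumed for the demo)

# Q[(floor, action_index)] -> estimated value of taking that action on that floor
Q = {(f, a): 0.0 for f in range(FLOORS) for a in range(len(ACTIONS))}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def step(floor: int, action_idx: int):
    """Apply an action; reward +1 for stopping at the goal, a small cost otherwise."""
    move = ACTIONS[action_idx]
    if move == 0:                                   # "stop" ends the episode
        return floor, (1.0 if floor == GOAL else -0.1), True
    nxt = min(max(floor + move, 0), FLOORS - 1)
    return nxt, -0.01, False                        # small cost for each move

for _ in range(2000):
    floor, done = random.randrange(FLOORS), False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(floor, i)])
        nxt, reward, done = step(floor, a)
        best_next = 0.0 if done else max(Q[(nxt, i)] for i in range(len(ACTIONS)))
        Q[(floor, a)] += alpha * (reward + gamma * best_next - Q[(floor, a)])
        floor = nxt

# The learned greedy policy should move toward the goal floor and stop there.
print({f: ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[(f, i)])] for f in range(FLOORS)})
```

The point of the sketch is only that "reward system" here means a learning signal, not a pleasure button the agent can press at will.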

*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« Reply #12 on: November 22, 2017, 12:47:27 am »
[Deleted]
« Last Edit: November 26, 2019, 05:31:28 pm by unreality »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Would you give your Artificial Superintelligence the option to end itself?
« Reply #13 on: November 22, 2017, 02:49:12 am »
Sorry, but if you think you're free, then you're somewhat misinformed.

What I meant was that we humans, at present, should not blindly relinquish control to some superintelligent being/AI/robot, etc.

You might think it's cute, but then again you might want to think again after you wake up in that cage in the morning.

One day in the future, they may prove that they can be trusted as well as the humans that programmed them (without sneaking in backdoors or other subterfuge or malevolent motives).

Just saying it's best to err on the side of caution until these new entities/beings can be trusted.

Yes, I saw and loved the HUMANS series on SyFy too! I can certainly see both sides of the story. Choosing a side, or perhaps the best side, is the conundrum.
In the world of AI, it's the thought that counts!

*

unreality

  • Starship Trooper
  • *******
  • 443
ai
« Reply #14 on: November 22, 2017, 03:33:18 am »
[Deleted]
« Last Edit: November 26, 2019, 05:31:38 pm by unreality »

 

