AI safety

  • 62 Replies
  • 1728 Views
*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 757
  • look, a star is falling
Re: AI safety
« Reply #45 on: September 22, 2017, 08:11:20 pm »
In fact, you want to create a slave, not an evolving being. Am I right?

I want to create something that wouldn't be ashamed of its existence, something that, if it were alive, would be proud of its deeds. I want to create something with abilities and a personality I could only dream of. It would be up to people how they want to relate to it. I'm saying "it" because I'm not sure it could be "inhabited" by a living being (an explanation of the life phenomenon would be needed for that, and I'm not fond of the thought of a buggy processor controlling emotions).

But if it is not alive, it should be aware that it isn't alive. Do you consider a car a slave? It is a tool, but such a dumb tool that you don't ask it for real-life advice. [EDIT] AI would be something completely different.

[EDIT] Correction: I don't want to create AI alone, by myself. I want to present my own version of AI, the way I see it should be done, but I also want to provide a tool for creating AI, so everyone could make their own vision happen. Something like a do-it-yourself kit.
« Last Edit: September 22, 2017, 09:16:11 pm by ivan.moony »
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

Zero

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 403
  • Fictional character
    • SYN CON DEV LOG
Re: AI safety
« Reply #46 on: September 22, 2017, 10:43:01 pm »
Quote
If there is no spiritual world that influences the physical, then I don't see how actual free will can exist.

Influence? Do you mean that sometimes some physical particles don't move as the laws of physics dictate, because they're influenced by the spiritual world?

Quote
According to you, we already are simple mechanisms

No, we aren't, since we're conscious.

Quote
I don't see how an AI can have choice, then. I don't see how we give it subjectivity.

Buddhists say we have not five senses but six: the sixth sense is the one that allows us to feel our own mind's events and states, so to speak. If a program receives, as input, both data from the outside world and data about its own inner events and states, we create a loop that allows it to understand, predict, and modify its own behavior. Knowing (and feeling) that it is itself a part of the world gives it a subjective point of view.
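That loop can be sketched in a few lines. This is only a toy illustration, with invented names (`IntrospectiveAgent`, the `confidence` state variable) rather than anyone's actual design:

```python
# A minimal agent whose input combines two channels: observations of the
# outside world and a reading of its own internal state. The reflect step
# closes the loop by updating that state from its own reading.

class IntrospectiveAgent:
    def __init__(self):
        self.state = {"confidence": 0.5}  # inner state the agent can "feel"

    def sense(self, observation):
        # Percept = outside world + snapshot of the agent's own state.
        return {"outer": observation, "inner": dict(self.state)}

    def act(self, percept):
        # Behavior depends on both channels; here confidence gates the action.
        if percept["inner"]["confidence"] > 0.6:
            return "commit"
        return "explore"

    def reflect(self, percept, outcome):
        # Self-modification driven by the inner channel of the percept.
        delta = 0.1 if outcome == "success" else -0.1
        new_value = percept["inner"]["confidence"] + delta
        self.state["confidence"] = min(1.0, max(0.0, new_value))

agent = IntrospectiveAgent()
percept = agent.sense("door is open")
action = agent.act(percept)        # low confidence -> "explore"
agent.reflect(percept, "success")  # success nudges confidence upward
```

The point of the sketch is only that the agent's own state sits on the same footing as external data, so the program can observe and adjust itself.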

Quote
I'm saying "it" because I'm not sure if it could be "inhabited" by a living being

Why say "inhabited"? Couldn't it be living, itself?

*

WriterOfMinds

  • Trusty Member
  • **
  • Bumblebee
  • *
  • 45
    • WriterOfMinds Blog
Re: AI safety
« Reply #47 on: September 23, 2017, 12:52:34 am »
Quote
Influence? Do you mean that sometimes, some physical particles don't move as laws of physics dictate, because they're influenced by the spiritual world?

Yes. One good definition of a miracle, I think, is a localized temporary override of the standard laws of physics. If such overrides *can't* happen and we really *do* live in a clockwork universe, then in my opinion we just have to punt on the free will thing. Clockwork universe and free will are inherently incompatible notions. If the laws of physics are making all my decisions, not only am *I* not making them, they're not even decisions -- they're inevitabilities. (Or perhaps random outcomes, if you're looking at the quantum level.) I'm neither noble when I behave altruistically, nor condemnable when I behave selfishly ... physics made me do it.

You've brought the issue of consciousness in now, but as I see it, that's separate. The ability to have experiences and the ability to make free decisions (that aren't causally mandated by something/someone else) are two different things.

Awareness of one's internal state can be another data input, but I don't think incorporating feedback mechanisms creates free will. It just makes it more difficult for an external observer to tease out what the ultimate causes of any given action were. Any self-modifying feedback action was prompted by an "if internal state is X, then modify self in Y way" statement, which in turn might have been created by a previous feedback loop, which was spawned by an even earlier "if X, then Y," and so on -- until you get all the way back to the original seed code, which inevitably determines the final outcome. This is still reaction, not choice. You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us. But we remain the First Cause in this chain. We still have to write the seed code that deterministically spawns everything else. And in my mind, that means that we still bear ultimate responsibility for the result. Ergo, we should try to exercise that responsibility wisely and make it a good result.

Setting aside all this bickering about what free will is and whether humans really have it or not, I think you and I are in some degree of agreement about the right way to build an AI -- start with a comparatively simple seed program and let it build itself up through learning and feedback. But I'm convinced we should at least try to introduce a moral basis in the seed -- because if we don't determine it intentionally, we will determine it unintentionally. The latter could quite possibly be characterized as gross negligence.

I'd like to spend the weekend working on AI instead of arguing about them, so I'm going to make a second attempt to excuse myself from this thread. I will see you all later. ;)

*

Zero

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 403
  • Fictional character
    • SYN CON DEV LOG
Re: AI safety
« Reply #48 on: September 23, 2017, 04:23:26 am »
Thanks a lot for sharing, WriterOfMinds.
I understand the way you and ivan.moony see things, and I respect it.
 :)
I'm so disappointed.

Quote
You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us.

I never said that. You're the one who desperately needs to get rid of causality, at least locally.

I'm just saying that AI safety should take place outside of the software, because these programs will be highly dynamic.
« Last Edit: September 23, 2017, 07:58:10 am by Zero »

*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 757
  • look, a star is falling
Re: AI safety
« Reply #49 on: September 23, 2017, 10:39:34 am »
@Zero
I agree that free will could make up an interesting personality. How do you propose that free will should be implemented?
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

Don Patrick

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 389
    • Artificial Detective
Re: AI safety
« Reply #50 on: September 24, 2017, 01:58:00 pm »
I say let's leave it up to free will to decide how it wants to be implemented :)
Personal project: NLP -> learning -> knowledge -> logical inference -> A.I.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *******************
  • Prometheus
  • *
  • 4515
Re: AI safety
« Reply #51 on: September 24, 2017, 06:59:13 pm »
Good point Don!

Free(dom) - of choice: to select, to create, to decide!

Will - expressing the future tense, as in one's deliberately choosing something.

After all, isn't it more like freedom (whether random or by choice)? In programming it is deterministic... through whatever processes, weighted averages, or methods are employed to "allow" the A.I. to make its choice.

The program is executing an extremely large decision tree, pruning as it goes, in order to end up with its optimal choice.

If this is some people's definition of "free will" then so be it.
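The "pruning as it goes" picture can be illustrated with a toy branch-and-bound search. The tree shape and utility numbers below are invented for the example, not taken from any real system:

```python
# Toy branch-and-bound over a decision tree. Each option is a pair
# (bound, subtree): `bound` is an optimistic estimate of the best utility
# reachable inside that subtree, and a leaf subtree is just a number.

def search(options, best=float("-inf")):
    for bound, subtree in options:
        if bound <= best:
            continue  # prune: this branch cannot beat the current best
        if isinstance(subtree, (int, float)):
            best = max(best, subtree)     # leaf: take its utility directly
        else:
            best = search(subtree, best)  # recurse into the sub-decision
    return best

# A small invented tree: the second top-level branch is skipped outright,
# because its optimistic bound (4) cannot beat the 5 already found.
tree = [
    (5, [(5, 5), (3, 2)]),
    (4, [(4, 4)]),
]
print(search(tree))  # -> 5
```

The "choice" that comes out is fully determined by the tree and the bounds, which is exactly the deterministic picture being described.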
In the world of AI, it's the thought that counts!

*

Zero

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 403
  • Fictional character
    • SYN CON DEV LOG
Re: AI safety
« Reply #52 on: September 25, 2017, 09:43:38 am »
About free will: I think it has something to do with actions not being directly bound to any condition that triggers their execution. In a reaction, they are bound: it's an if-then-else. In a choice, there's no direct binding between the description of a situation and the actions that could take place in response to it. This would deserve its own thread, but I'm ill right now, and weak.

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 599
Re: AI safety
« Reply #53 on: September 25, 2017, 05:05:41 pm »
You have free will as long as you stay within a given conduit. Stray, and your free will is taken away.
You have the free will to decide not to eat for the rest of your life. Then, after three days, the lesser focuses will create a new queen bee for the hive. Sorry, I mean that person's free will will be overridden by the other parts of the brain, because they are starving too, at the first juicy chicken fritter they come across.
I have made a conscious decision to lose a few pounds, and after a few days my hands act on their own when I get near the fridge. All I can do is watch and think about how I failed my goal :-(


*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 757
  • look, a star is falling
Re: AI safety
« Reply #54 on: September 25, 2017, 06:20:03 pm »
Yeah, strange things happen occasionally...
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 599
Re: AI safety
« Reply #55 on: September 25, 2017, 06:53:46 pm »

 Hi Ivan. 
Reward Hacking: Concrete Problems in AI Safety Part 3: 


*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 757
  • look, a star is falling
Re: AI safety
« Reply #56 on: September 25, 2017, 09:52:47 pm »
IMHO, hacking is a kind of abuse we could never be safe from, to such an extent that it is questionable whether any kind of hiding of source code would be useful. If someone wants to abuse it, we can't stop them, even if we guard it with a three-headed dragon. Of course there is the possibility of criminal activity, but we should fight it in a less restrictive way. Instead of punishing crime, we should reward avoiding it. This practice is unknown to our modern society, but it could be a way towards a utopian structure where the motives for crime would be valued less than the motives to stay moral. But I guess we should wait a few centuries for that.

In the meanwhile, we could use 128-bit encryption and god knows what else we can think of, it all being more or less futile. :-[
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

keghn

  • Trusty Member
  • ********
  • Replicant
  • *
  • 599
Re: AI safety
« Reply #57 on: September 26, 2017, 02:31:12 pm »
The other "Killer Robot Arms Race" Elon Musk should worry about: 


*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *******************
  • Prometheus
  • *
  • 4515
Re: AI safety
« Reply #58 on: September 27, 2017, 02:55:55 pm »
Ivan,

One could view it as "being rewarded" by being good, living a clean life, and not being incarcerated.

Freedom to move about - good
Incarcerated - bad
 ;)
In the world of AI, it's the thought that counts!

*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 757
  • look, a star is falling
Re: AI safety
« Reply #59 on: September 27, 2017, 07:03:14 pm »
The way I see it, we're rewarded with the freedom to breathe air when we are good, and punished with an electric shock when we make a mess.

We want more!!!
We want sex 'n drugs 'n rock'n'roll!!!
 >:D

To stick onto subject, what would a reward for an AI look like?
Wherever you see a nice spot, plant another knowledge tree :favicon:

 

