AI safety

  • 49 Replies
  • 804 Views

ivan.moony

  • Trusty Member
  • Replicant
  • 741
  • look, a star is falling
Re: AI safety
« Reply #45 on: September 22, 2017, 08:11:20 pm »
Quote
In fact, you want to create a slave, not an evolving being. Am I right?

I want to create something that wouldn't be ashamed of its existence, something that, if it were alive, would be proud of its deeds. I want to create something with abilities and a personality I could only dream of. It would be up to people how they want to relate to it. I'm saying "it" because I'm not sure if it could be "inhabited" by a living being (an explanation of the life phenomenon would be needed for that, and I'm not a fan of the thought that a buggy processor controls emotions).

But if it is not alive, it should be aware that it isn't alive. Do you consider a car a slave? It is a tool, but such a dumb tool that you don't ask it for real-life advice. [EDIT] AI would be something completely different.

[EDIT] Correction: I don't want to create AI alone, by myself. I want to present my own version of AI, the way I see it should be done, but I also want to provide a tool for creating AI, so everyone could make their own vision happen. Something like a do-it-yourself kit.
« Last Edit: September 22, 2017, 09:16:11 pm by ivan.moony »
Wherever you see a nice spot, plant another knowledge tree


Zero

  • Trusty Member
  • Starship Trooper
  • 374
  • Fictional character
    • SYN CON DEV LOG
Re: AI safety
« Reply #46 on: September 22, 2017, 10:43:01 pm »
Quote
If there is no spiritual world that influences the physical, then I don't see how actual free will can exist.

Influence? Do you mean that sometimes some physical particles don't move as the laws of physics dictate, because they're influenced by the spiritual world?

Quote
According to you, we already are simple mechanisms

No, we aren't, since we're conscious.

Quote
I don't see how an AI can have choice, then. I don't see how we give it subjectivity.

Buddhists say we have not five senses but six: the sixth sense is the one that allows us to feel our own mind's events and states, so to speak. If a program receives as input both data from the outside world and data about its own inner events and states, we create a loop that allows it to understand, predict, and modify its own behavior. Knowing (and feeling) that it is itself a part of the world gives it a subjective point of view.
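
A minimal sketch of that loop, the way I picture it (Python; every name here is a hypothetical illustration, not a real design): the program's input at each step combines an outside observation with a reading of its own state, and acting updates the state it will observe on the next step.

Code:
class IntrospectiveAgent:
    def __init__(self):
        # Inner events and states: the agent's "inner world".
        self.state = {"mood": 0.0, "last_action": None}

    def sense_self(self):
        # The "sixth sense": the agent's own state, exposed as ordinary input.
        return dict(self.state)

    def decide(self, inputs):
        # Toy policy: behavior depends on self-observation, not just the world.
        return "withdraw" if inputs["self"]["mood"] < 0 else "approach"

    def step(self, observation):
        # Combined input: outside world plus inner world.
        inputs = {"world": observation, "self": self.sense_self()}
        action = self.decide(inputs)
        # Feedback: acting updates the inner state the agent will observe
        # next time, closing the loop.
        self.state["last_action"] = action
        self.state["mood"] += 1.0 if observation == "light" else -2.0
        return action

agent = IntrospectiveAgent()
for obs in ["light", "noise", "light"]:
    print(obs, "->", agent.step(obs))  # the last step withdraws: the mood it feels changed its behavior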

Quote
I'm saying "it" because I'm not sure if it could be "inhabited" by a living being

Why say "inhabited"? Couldn't it be living, itself?


WriterOfMinds

  • Trusty Member
  • Bumblebee
  • 38
    • WriterOfMinds Blog
Re: AI safety
« Reply #47 on: Today at 12:52:34 am »
Quote
Influence? Do you mean that sometimes some physical particles don't move as the laws of physics dictate, because they're influenced by the spiritual world?

Yes. One good definition of a miracle, I think, is a localized temporary override of the standard laws of physics. If such overrides *can't* happen and we really *do* live in a clockwork universe, then in my opinion we just have to punt on the free will thing. Clockwork universe and free will are inherently incompatible notions. If the laws of physics are making all my decisions, not only am *I* not making them, they're not even decisions -- they're inevitabilities. (Or perhaps random outcomes, if you're looking at the quantum level.) I'm neither noble when I behave altruistically, nor condemnable when I behave selfishly ... physics made me do it.

You've brought the issue of consciousness in now, but as I see it, that's separate. The ability to have experiences and the ability to make free decisions (that aren't causally mandated by something/someone else) are two different things.

Awareness of one's internal state can be another data input, but I don't think incorporating feedback mechanisms creates free will. It just makes it more difficult for an external observer to tease out what the ultimate causes of any given action were. Any self-modifying feedback action was prompted by an "if internal state is X, then modify self in Y way" statement, which in turn might have been created by a previous feedback loop, which was spawned by an even earlier "if X, then Y," and so on -- until you get all the way back to the original seed code, which inevitably determines the final outcome. This is still reaction, not choice. You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us. But we remain the First Cause in this chain. We still have to write the seed code that deterministically spawns everything else. And in my mind, that means that we still bear ultimate responsibility for the result. Ergo, we should try to exercise that responsibility wisely and make it a good result.
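
To make that concrete, here's an illustrative sketch (Python, with hypothetical toy rules, nothing more) of a self-modifying rule chain: however many layers of feedback it installs along the way, the final state is a deterministic function of the seed.

Code:
def run(seed_rules, inner_state, steps):
    rules = dict(seed_rules)  # the seed fully determines everything below
    for _ in range(steps):
        # "if internal state is X, then modify self in Y way"
        for condition, modify in list(rules.items()):
            if condition == inner_state:
                inner_state, rules = modify(inner_state, rules)
    return inner_state, rules

# A seed whose rule installs a further rule: a feedback loop spawning a
# feedback loop, yet the outcome below is identical on every run.
seed = {
    "X": lambda s, r: ("Y", {**r, "Y": lambda s2, r2: ("Z", r2)}),
}
print(run(seed, "X", steps=3)[0])  # always "Z": reaction, not choice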

Setting aside all this bickering about what free will is and whether humans really have it or not, I think you and I are in some degree of agreement about the right way to build an AI -- start with a comparatively simple seed program and let it build itself up through learning and feedback. But I'm convinced we should at least try to introduce a moral basis in the seed -- because if we don't determine it intentionally, we will determine it unintentionally. The latter could quite possibly be characterized as gross negligence.
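
As a hedged sketch of what "a moral basis in the seed" could look like (the predicate below is a hypothetical placeholder, not a real metric): the seed's learning update is gated by an intentionally chosen value check, rather than leaving the effective values to whatever feedback happens to teach.

Code:
def violates_moral_basis(behavior):
    # Placeholder: in a real system this would encode the designers'
    # deliberately chosen constraints.
    return behavior.get("harm", 0) > 0

def learn(seed_policy, experiences):
    policy = dict(seed_policy)
    for situation, behavior in experiences:
        # Feedback still drives growth, but the seed vets each addition.
        if not violates_moral_basis(behavior):
            policy[situation] = behavior
    return policy

seed = {"default": {"act": "observe", "harm": 0}}
exp = [("threat", {"act": "retaliate", "harm": 1}),
       ("greeting", {"act": "greet", "harm": 0})]
print(learn(seed, exp))  # "retaliate" is filtered out by the seed's basis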

I'd like to spend the weekend working on AI instead of arguing about it, so I'm going to make a second attempt to excuse myself from this thread. I will see you all later. ;)


Zero

  • Trusty Member
  • Starship Trooper
  • 374
  • Fictional character
    • SYN CON DEV LOG
Re: AI safety
« Reply #48 on: Today at 04:23:26 am »
Thanks a lot for sharing, WriterOfMinds.
I understand the way you and ivan.moony see things, and I respect it.
 :)
I'm so disappointed.

Quote
You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us.

I never said that. You're the one who desperately needs to get rid of causality, at least locally.

I'm just saying that AI safety should take place outside of the software, because these programs will be highly dynamic.
« Last Edit: Today at 07:58:10 am by Zero »


ivan.moony

  • Trusty Member
  • Replicant
  • 741
  • look, a star is falling
Re: AI safety
« Reply #49 on: Today at 10:39:34 am »
@Zero
I agree that free will could make for an interesting personality. How do you propose that free will should be implemented?
Wherever you see a nice spot, plant another knowledge tree

 

