AI safety

  • 62 Replies
  • 18374 Views
*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • Posts: 5865
Re: AI safety
« Reply #30 on: September 21, 2017, 04:07:19 pm »
Consider this...

A robot is built with an extremely advanced A.I., to the point of being sentient... it knows what it is, and part of its programming includes 'self-preservation' at its core.

Knowing that it basically runs on electricity, it realizes that it is starting to get low on power. It goes into its search routine for a place to 'plug in'.

It sees a place that will work for what it needs, and upon approaching the electrical outlet, it sees a human about to disconnect the power from that source.

Self-preservation is key to this robot's 'life', yet it is not supposed to cause harm to humans (old programming from some writer years ago).

What does the robot do, knowing that its power source is about to be disconnected and it will cease to operate (live)?

Does it try to reason with the human before doing anything, trying to appeal to the human's sense of 'life'?
Does it make a last effort to push the human away so that it can connect with the source?
Does it prioritize its life over the human's? This is an advanced A.I. (and who says those old Three Laws of Robotics would even be employed by the bot's creators)?

This is not a rogue robot hell-bent on destroying everything in its path, but rather a highly sophisticated robot with an extremely intelligent mind.

Your thoughts / comments appreciated.
In the world of AI, it's the thought that counts!

*

Zero

  • Eve
  • Posts: 1287
Re: AI safety
« Reply #31 on: September 21, 2017, 04:12:42 pm »
Come on, obviously it doesn't matter  :) 
I also didn't know you were old enough to be my father before you told me, Art...

WriterOfMinds could be a squirrel, I still wouldn't give a sh*t  ;D

*

ivan.moony

  • Trusty Member
  • Bishop
  • Posts: 1729
    • mind-child
Re: AI safety
« Reply #32 on: September 21, 2017, 05:03:43 pm »
Consider this...

Should a robot/human surgeon have a higher importance value than a robot/human garbage man?

Quote
WriterOfMinds could be a squirrel, I still wouldn't give a sh*t  ;D

Didn't you mean: "WriterOfMinds could be a squirrel, I'd still care a lot"?

*

keghn

  • Trusty Member
  • Terminator
  • Posts: 824
Re: AI safety
« Reply #33 on: September 21, 2017, 05:11:47 pm »
Well Art, that is a valid question. My AGI would model its parents, or whoever raised it. It would find it rewarding to be a replacement clone of its parent. If the OTHER is in the way and behaves in a way that is not like its parent, then this would cause dislike in the AGI. This could lead the AGI to think of hate crimes and become very prejudiced.
On the other hand, if the OTHER is in the way but is acting like the AGI's parents, then it may act neutral. If the OTHER is using the parents' behavior patterns in interesting ways, then the AGI will find this great and will relate, since causing any harm to the OTHER would be like hurting its parents.
If humans could make full-grown adult clones, they would be big "zeros": they would be empty of life experiences.
The first AGIs will have to go through a childhood to adulthood. THEN the best adult AGIs would be cloned in the millions, with their life experiences.

*

Zero

  • Eve
  • Posts: 1287
Re: AI safety
« Reply #34 on: September 21, 2017, 05:25:30 pm »
Quote
Didn't you mean: "WriterOfMinds could be a squirrel, I'd still care a lot"?

I mean exactly this:

I don't care about your current gender, I don't care about your religion, I don't care about your sexual orientation, I don't care about the color of your skin, I don't care about your age. The only thing I care about is the quality of your posts. WriterOfMinds' posts are very high quality, and Acuitas is very interesting.

*

Zero

  • Eve
  • Posts: 1287
Re: AI safety
« Reply #35 on: September 21, 2017, 05:32:30 pm »
Quote
Consider this...

A robot is built with an extremely advanced A.I., to the point of being sentient... it knows what it is, and part of its programming includes 'self-preservation' at its core.

Knowing that it basically runs on electricity, it realizes that it is starting to get low on power. It goes into its search routine for a place to 'plug in'.

It sees a place that will work for what it needs, and upon approaching the electrical outlet, it sees a human about to disconnect the power from that source.

Self-preservation is key to this robot's 'life', yet it is not supposed to cause harm to humans (old programming from some writer years ago).

What does the robot do, knowing that its power source is about to be disconnected and it will cease to operate (live)?

Does it try to reason with the human before doing anything, trying to appeal to the human's sense of 'life'?
Does it make a last effort to push the human away so that it can connect with the source?
Does it prioritize its life over the human's? This is an advanced A.I. (and who says those old Three Laws of Robotics would even be employed by the bot's creators)?

This is not a rogue robot hell-bent on destroying everything in its path, but rather a highly sophisticated robot with an extremely intelligent mind.

Your thoughts / comments appreciated.

If this robot were a human, the question would be impossible to answer, because each human is different and would react differently. I believe we can't answer for an AI either, because each AI will be different.

*

Korrelan

  • Trusty Member
  • Eve
  • Posts: 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: AI safety
« Reply #36 on: September 21, 2017, 11:16:03 pm »
Any generally intelligent AI is going to be based on a hierarchical knowledge/experience schema… I can see no way to avoid this base requirement.

This implies, as a logical extension of the structure of these systems, that new learning is built upon old knowledge/experiences. The hierarchical structure also provides a means to implement core beliefs/knowledge that will influence all decisions/actions generated by the intelligent system.

If the AI is taught basic 'moral' principles at an early stage, then these concepts will automatically/inherently be included in, and will guide, all subsequent subconscious thought processes.

Basic morality has to be at the base/core of the system.
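
A minimal sketch of this in code (my own toy illustration of the idea, with an invented node structure and invented weights, not a real system):

Code:
# A hierarchy in which every piece of knowledge is grounded on a parent,
# with immutable moral axioms at the root. A proposed action is scored by
# walking its supporting chain down to the core, so the core principles
# weigh on every decision built on top of them.

class Node:
    def __init__(self, name, parent=None, value=0.0):
        self.name = name
        self.parent = parent          # None only for core axioms
        self.value = value            # local approval in [-1, 1]

    def evaluate(self):
        if self.parent is None:       # core axiom: fixed judgment
            return self.value
        # Later learning is always moderated by the older layer beneath it.
        return 0.5 * self.value + 0.5 * self.parent.evaluate()

harm_human = Node("harm a human", value=-1.0)    # taught first, at the core
push_human = Node("push the human away from the outlet",
                  parent=harm_human, value=0.2)  # locally useful, learned later

print(push_human.evaluate())   # -0.4: the core axiom dominates the decision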

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 620
    • WriterOfMinds Blog
Re: AI safety
« Reply #37 on: September 22, 2017, 01:55:42 am »
Yes, you've all found an example of that uncommon and elusive creature, the female engineer. I'm not sensitive about it, so don't worry.

@Zero: I feel you're implying that we essentially don't need to/shouldn't take any responsibility for an AI's moral code, because that will emerge on its own as an act of freedom that we ought not interfere with.

There's a sense in which I agree with you. The approach I favor is to give the AI some moral axioms and other basic principles, along with a system for thinking rationally about moral problems, and let it develop its own code of behavior from that. The result could end up being far more complex and nuanced than any Asimovian rule set, and would be capable of adjusting to new moral problems, new scenarios, and new relevant facts. The complete outcome might also be quite difficult to predict in advance.  However, it would still be essentially deterministic. Program instances with access to the same seed axioms and information inputs would reach basically the same conclusions. And upon deciding that some action was morally correct beyond a reasonable doubt, the type of AI that I'm envisioning couldn't arbitrarily refuse to take it.
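
As a toy illustration of that determinism claim (the rule format, axioms, and facts below are mine, purely hypothetical), two program instances seeded with the same axioms and fed the same inputs derive exactly the same judgments:

Code:
# Simple forward chaining over seed moral axioms: apply each rule
# (premises -> verdict) to the known facts until nothing new follows.
def derive(axioms, facts):
    judgments = set()
    changed = True
    while changed:
        changed = False
        for premises, verdict in axioms:
            if premises <= facts | judgments and verdict not in judgments:
                judgments.add(verdict)
                changed = True
    return judgments

axioms = [
    (frozenset({"action harms a human"}), "action is forbidden"),
    (frozenset({"action is forbidden"}), "refuse action"),
]
facts = {"action harms a human"}

# Two "instances" with the same seed axioms and the same information inputs:
print(derive(axioms, facts) == derive(axioms, facts))   # True, every run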

But I think what you're describing is something more than the unpredictability of a complex system ... you figure we should grant our AI the power to make moral choices, in the same way humans do (or at least, appear to do). Can you elaborate on how we would go about allowing that? I thought the alternative to some form of determinism was (as Ivan has already pointed out) to have each AI's morality determined by a random number generator. (Morality that is learned through education or imitation is also deterministic ... in this case, the environment plays a part in the determination, along with the program.) If there's a way to write a program with genuine free will, I have no idea what it is.

For your idea of AI as legal citizens to turn out well, wouldn't we at least have to give them some kind of "be a good citizen" goal or motive? I'd hate to see a scenario in which 50% of AI have to be incarcerated or destroyed because their behavior patterns are incompatible with society. "Let's roll the dice. Uh-oh. The dice say you're going to be a bad actor. Now we'll have to punish you for being the way we made you." Yikes!

Maybe you think that human moral character is also essentially decided by a random number generator, but if so, I'd expect you might want to improve on that when it comes to our artificial progeny. No?

*

Zero

  • Eve
  • Posts: 1287
Re: AI safety
« Reply #38 on: September 22, 2017, 09:33:33 am »
I think determinism is irrelevant. Whether we use random generators or not doesn't really matter in my opinion, because full AIs will be chaotic complex systems anyway, just like the human mind is. Because AIs are chaotic complex systems, two AIs could be completely different after two years, even if they were clones at startup. I don't think there's randomness in the human mind. I think that biological brains are deterministic, and that the human mind is a direct product of the brain.

Does that mean a murderer is not responsible for his own behavior? No, because he can look at his own mental behavior. Consciousness implies the ability to feel what's happening inside our heads. Before acting, the murderer can predict his own behavior, and maybe fight against himself. This "self prediction" is the basis of free will. And we're still inside determinism.



About morality, I'd like to say that I'm not sure I'm qualified to define a moral code. After all, I'm good at programming; I'm not a philosopher or a law-maker. And even if I were, we're talking about an entirely new species, which will potentially live and grow over the next 40,000 years. If I'm a prehistoric man, how can I claim "I'm qualified"? What if I'm wrong? Here, I'm trying to stay humble.

Also, even if we wanted to, I think we cannot give the AI some moral axioms and basic principles, because these are based on very high-level concepts. Since I believe we can't build AI strictly in a top-down approach, I also believe we can't structure it a priori and hope that the original hierarchical structure will remain as the AI evolves. With new understandings, the moral structure would be subject to unpredictable changes, because it is based on concepts that might be modified during a learning process.



You ask me, WriterOfMinds, how we would grant our AI the power to make moral choices. Well, a moral choice is a choice, isn't it? Everything begins with a choice. If a system cannot make a choice, then it's definitely not an AI. So what is a choice?

Here's my opinion. Inside a deterministic system, we can create "self prediction". Inside "self prediction", dilemmas can emerge (a dilemma being a problem offering two possibilities, neither of which is unambiguously acceptable or preferable). A dilemma can only produce choices, whatever happens (even doing nothing is a choice). Now, since we have choices, we have free will. With logic, education, and philosophy (or, why not, religion), morality appears. Now we have moral choices.
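
To make the mechanism concrete, here is a rough sketch (the outcome scores and the acceptability threshold are invented for illustration): the agent simulates its own candidate actions, a dilemma emerges when no predicted outcome is clearly acceptable, and the deliberation that resolves it is still fully deterministic.

Code:
ACCEPTABLE = 0.5   # hypothetical acceptability threshold

def predict_outcome(action):
    # Stand-in for the agent's internal model of itself and the world.
    scores = {"push the human": 0.2, "plead with the human": 0.4}
    return scores[action]

def choose(actions):
    predictions = {a: predict_outcome(a) for a in actions}
    if all(score < ACCEPTABLE for score in predictions.values()):
        # Dilemma: neither option is unambiguously acceptable,
        # yet the agent must still pick (doing nothing is a choice too).
        print("dilemma detected:", predictions)
    # Deterministic resolution: same inputs, same choice, every time.
    return max(predictions, key=predictions.get)

print(choose(["push the human", "plead with the human"]))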


EDIT:
The AI citizenship concept tries to address a huge problem. One day, full AIs will inevitably want to be protected by society. It would be cruel to torture an AI, or to destroy it arbitrarily, right? Once again, humanity will have to fight against a form of racism, because a lot of people will never accept that AIs are more than automata.

Citizenship looks like a good path because it gives you rights, and it also forces you to interact correctly with other members of society. There would probably be a "childhood" period, during which another citizen is legally responsible for the AI's behavior. Then, one day, the AI would be considered an adult, responsible for its own acts. I know it sounds like post-'68 science fiction.
« Last Edit: September 22, 2017, 02:02:17 pm by Zero »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • Posts: 5865
Re: AI safety
« Reply #39 on: September 22, 2017, 03:02:20 pm »
Posted by: Zero
..."Citizenship looks like a good path because it gives you rights, and it also forces you to interact correctly with other members of the society."

We have jails here in the USA full of people who failed to interact with other members of society. If bots are given free will, what makes you [us] think anything will be different in this regard?

Nothing here is meant as a flame against your post, but rather to point out the possible repercussions of bestowing an A.I. [robot] with choice / free will.
In the world of AI, it's the thought that counts!

*

Zero

  • Eve
  • Posts: 1287
Re: AI safety
« Reply #40 on: September 22, 2017, 03:52:08 pm »
If you could remove free will from the human mind, would you do it? Why not?
It would solve the jail problem. But that would feel wrong.
Why does it feel wrong?

*

WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 620
    • WriterOfMinds Blog
Re: AI safety
« Reply #41 on: September 22, 2017, 04:16:13 pm »
@Zero: I wouldn't try to remove free will from humans because I actually do think it's a spiritual property. If I were like you and thought that human minds were mere physically deterministic machines, then I suppose I would advocate taking their illusory "free will" away. If human behavior is 100% determined, why not determine it in a positive way? If we're already prisoners of our genetics and environment, why not throw deliberate manipulation into the mix?

I don't see how the ability to "self predict" changes anything. When you predict the consequences of your own actions, you get to decide whether you can tolerate those consequences or not. What drives that decision? In your idea of what the human mind is, isn't the outcome of self-prediction also determined before the fact? Isn't it also a mere product of genetics and environment? I think all you've done by bringing in this idea of self prediction is to move the problem. How will our AI decide which branch of the dilemma it likes better, as it imagines the results of both? Will it flip a coin? Or will it execute some formula based on its programming and past experiences? Neither is free will.

"Now since we have choices, we have free will."

I don't agree.  "If X do Y, else do Z" is the sort of choice an AI can make, and it isn't free will. You can make the logic tree far more complicated than that, you can make it depend on external sensory input, you can make it change over time via learning algorithms, but it's still determined and has an inevitable (albeit unpredictable by any power humans possess) outcome. The AI is therefore no more "at fault" for anything it does than the weather (also a chaotic complex system) is at fault for producing hurricanes.

*

ivan.moony

  • Trusty Member
  • Bishop
  • Posts: 1729
    • mind-child
Re: AI safety
« Reply #42 on: September 22, 2017, 05:36:50 pm »
I have to say that I don't fully believe free will is completely absent when interfacing with a hypothetical AI. Programming a machine is one thing (no choice there), while interfacing with it is completely different (it grants a choice, to some extent). When interfacing with an AI, we can give an explicit command, or we can wrap the communication in a polite wrapper, thus leaving a choice of whether to do something this way or that way, or whether to do it at all. The question is then how polite the AI would be: would it follow our commands blindly, or would it have the freedom to be more creative and exert its own influence on our ideas of what should be done?

When programming what the AI should do next, multiple choices might appear, and they could be sorted by some criterion. The criterion would be a politeness degree passed around in communication. If we give the AI the choice not to do the first thing at the top of the list, it could optimize across the demands coming from multiple sources.
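
A rough sketch of this sorting idea (the politeness scale, the 0.5 cut-off, and the sample demands are my own arbitrary illustrations): demands from several sources queue up with a politeness degree; an explicit command pins itself to the front, while politely wrapped requests leave the AI slack to reorder and optimize across sources.

Code:
demands = [
    {"source": "Alice", "task": "fetch coffee", "politeness": 0.9},
    {"source": "Bob",   "task": "recharge now", "politeness": 0.0},  # explicit command
    {"source": "Carol", "task": "tidy the lab", "politeness": 0.6},
]

# Lower politeness = more imperative, so it sorts to the front of the queue.
queue = sorted(demands, key=lambda d: d["politeness"])

for d in queue:
    if d["politeness"] > 0.5:
        # A polite wrapper grants a choice: the AI may defer or batch it.
        print("may reorder:", d["task"])
    else:
        print("must do next:", d["task"])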

*

Zero

  • Eve
  • Posts: 1287
Re: AI safety
« Reply #43 on: September 22, 2017, 07:12:05 pm »
Quote
If I were like you and thought that human minds were mere physically deterministic machines, then I suppose I would advocate taking their illusory "free will" away.

Why do you think that the human mind's determinism makes human free will illusory? I can't see any relationship between "determinism" and "illusion".

I guess you would agree that every atom in the universe behaves as the laws of physics dictate, right? But free will exists, right? Then there's only one possible truth about this: free will exists inside of determinism.

Quote
If human behavior is 100% determined, why not determine it in a positive way? If we're already prisoners of our genetics and environment, why not throw deliberate manipulation into the mix?

Because then, we would become simple mechanisms. Poetry, music, art would disappear, beauty would disappear, love would disappear. Our civilization would become a pointless multi-agent system.

Quote
I don't see how the ability to "self predict" changes anything. When you predict the consequences of your own actions, you get to decide whether you can tolerate those consequences or not. What drives that decision? In your idea of what the human mind is, isn't the outcome of self-prediction also determined before the fact? Isn't it also a mere product of genetics and environment?

It doesn't get you out of determinism, if that's what you mean. But it does get you out of ignorance. To me, in "free will", "free" means "free from ignorance", not "free from causality". You are not ignorant of what happens in your mind. This is why you're legally responsible for your acts.

Quote
I think all you've done by bringing in this idea of self prediction is to move the problem. How will our AI decide which branch of the dilemma it likes better, as it imagines the results of both? Will it flip a coin? Or will it execute some formula based on its programming and past experiences?

If you ask me the same question twice, I'll give you the same answer twice. Coin flip or formula, it doesn't matter.

Quote
"If X do Y, else do Z" is the sort of choice an AI can make, and it isn't free will.

This is not a choice, this is a reaction. There's no subjectivity here.

Quote
I have to say that I don't fully believe free will is completely absent when interfacing with a hypothetical AI. Programming a machine is one thing (no choice there), while interfacing with it is completely different (it grants a choice, to some extent). When interfacing with an AI, we can give an explicit command, or we can wrap the communication in a polite wrapper, thus leaving a choice of whether to do something this way or that way, or whether to do it at all. The question is then how polite the AI would be: would it follow our commands blindly, or would it have the freedom to be more creative and exert its own influence on our ideas of what should be done?

When programming what the AI should do next, multiple choices might appear, and they could be sorted by some criterion. The criterion would be a politeness degree passed around in communication. If we give the AI the choice not to do the first thing at the top of the list, it could optimize across the demands coming from multiple sources.

In fact, you want to create a slave, not an evolving being. Am I right?
« Last Edit: September 22, 2017, 07:57:53 pm by Zero »

*

WriterOfMinds

  • Trusty Member
  • Replicant
  • Posts: 620
    • WriterOfMinds Blog
Re: AI safety
« Reply #44 on: September 22, 2017, 07:26:42 pm »
Quote
I guess you would agree that every atom in the universe behaves as the laws of physics dictate, right? But free will exists, right? Then there's only one possible truth about this: free will exists inside of determinism.
If there is no spiritual world that influences the physical, then I don't see how actual free will can exist.

Quote
Because then, we would become simple mechanisms.
According to you, we already are. I'll repeat: I can't see any relevant difference between being manipulated by my genetics and personal history, and being manipulated by direct mind control.

Quote
To me, in "free will", "free" means "free from ignorance", not "free from causality". You don't ignore what happens in your mind. This is why you're legally responsible for your acts.
I guess you and I have different ideas about what free will even means, then. If I am free from ignorance and have all the information, but some formula or coin flip determines how I react to that information, then as far as I can tell I still don't have free will or personal responsibility.

Quote
This is not a choice, this is a reaction. There's no subjectivity here.
I don't see how an AI can have choice, then. I don't see how we give it subjectivity.

 

