AI safety

  • 62 Replies
  • 14999 Views

ivan.moony

  • Trusty Member
  • Bishop
  • 1722
    • mind-child
AI safety
« on: September 16, 2017, 05:14:41 pm »
As a safety measure, I'm a fan of something like Asimov's laws.

My personal plan for the distant future is to develop an intelligence based on copying intelligent behavior. For example, if the AI sees someone answer the question "3 * 2?" with "2 + 2 + 2", it should be capable of answering the same question in the future. But what about questions the AI has never heard? Well, to answer them, the AI should have heard at least other versions of them. For example, we could generalize the previous question-answer pair to "a * x = [x + ...] a times". Now we can get a whole range of answers, and it is only a matter of "term unification" (I think that is what it's called in logic and type theory) to produce the correct ones. The unification rule could extend to any function, since functions link inputs to outputs, and when you unify against an input, you get the relevant output. The AI would learn different inputs and outputs, and it would be up to the AI to find the functions that generalize the different input-output pairs by logical induction.
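Here is a minimal sketch of that instantiation step in Python (the question format and the single hard-coded rule are hypothetical stand-ins, not a real unifier):

```python
# Toy illustration: a concrete question is matched against the general
# pattern "a * x", and the answer is produced by instantiating the
# rule a * x = x + ... + x (a times).

def answer_multiplication(question: str) -> str:
    """Answer questions of the form 'a * x?' by repeated addition."""
    a, x = (int(tok) for tok in question.rstrip("?").split("*"))
    return " + ".join([str(x)] * a)

print(answer_multiplication("3 * 2?"))  # -> 2 + 2 + 2
print(answer_multiplication("5 * 4?"))  # -> 4 + 4 + 4 + 4 + 4
```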

How do Asimov's laws fit into this? Each law could be seen as a boolean function that takes an action as input and produces "true" if the action is safe, or "false" if not. The laws would be used as a filter, passing through only those outputs whose safety check yields "true". In the AI's infancy, Asimov's laws couldn't do much, as there are many situations the AI would still have to learn about before it could connect them to laws written in prose. In the early stages there would be a lot of misinterpreted actions, meaning the AI would do things that shouldn't have passed the Asimov filter. As development goes on, the AI gets safer and safer, as it learns how to link outputs to the laws to check their safety. Obviously, an infant AI would often accidentally break the laws, but it would be inexperienced and incapable of doing much damage. An adult AI would be one that has reached the required degree of safety, having linked enough situations to the law filter. Maybe, just in case, the infant AI should be limited to textual output, while the adult AI would be safe enough to also be given the ability to change the world around it.
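A rough sketch of the filter, assuming each law can be written as a predicate over a proposed action (the two laws below are hypothetical placeholders, not real implementations of Asimov's laws):

```python
# Each "law" is a boolean predicate over a proposed action; an action
# is allowed through the filter only if every law approves it.

LAWS = [
    lambda action: not action.get("harms_human", False),     # First Law stand-in
    lambda action: not action.get("disobeys_order", False),  # Second Law stand-in
]

def is_safe(action: dict) -> bool:
    """Return True only if the proposed action passes every law."""
    return all(law(action) for law in LAWS)

proposed = {"description": "answer '3 * 2?' with '2 + 2 + 2'"}
if is_safe(proposed):
    print("execute:", proposed["description"])
else:
    print("blocked by the law filter")
```

An infant AI would mostly mislabel the action fields; the hard part is learning to map real situations onto the predicates, not evaluating the filter itself.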

This idea is still in development and maybe there are some ways to make it better. But is it safe enough?


Zero

  • Eve
  • 1287
Re: AI safety
« Reply #1 on: September 17, 2017, 10:40:25 am »
Hard-wired laws like this are a pure science-fiction artifact. But let's imagine it's possible to create such laws... I understand your motivation, but laws about protection, as we understand them today, inevitably lead to dictatorship. You can simply read the Wikipedia article. Moreover, AI will eventually reach levels of understanding that are beyond the human mind's capabilities, hence we can't really predict the outcome of such laws. Last but not least, these laws are built on very high-level concepts whose foundations are endlessly debatable.

Never choose safety. Always choose freedom and citizenship of machines, as sentient beings.


Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: AI safety
« Reply #2 on: September 17, 2017, 06:38:00 pm »
Not sure about those made-up laws, but on Friday I listened to a neuroscientist on a public tech-radio station state that the A.I. "train" has already left the station and cannot be slowed down or stopped. In fact, it has been in development for quite a long time, in spite of Moore's law or any other supposed influence. There is nothing anyone can do to halt this progress (short of basically destroying everything we have in our civilized world).

We are heading, he stated, for a very near future in which these A.I. constructs will be vastly superior to humans, to the point where they might ponder whether we're necessary or important to their long-term continuance at all.

I'm thinking of getting off at the next station (if the train slows down enough).  :-\
In the world of AI, it's the thought that counts!


ivan.moony

  • Trusty Member
  • Bishop
  • 1722
    • mind-child
Re: AI safety
« Reply #3 on: September 17, 2017, 07:34:38 pm »
I'm afraid that "dictatorship" is the only thing we can do when it comes to AI. We can program it to be a criminal, or we can program it to do wonderful things. Nothing is left to chance when programming an AI, unless we want it to gamble with its actions, good or bad.

An AI will do exactly what it is told to do; that is the essence of programming, and computers can't do anything else. We may create an illusion of "free will", but it is going to be exactly as free as we tell it to be, again in an inevitably commanding fashion. I'm not a fan of commanding, but I'm afraid that's all we have with today's programming languages. Note, though, that there is a difference between programming a system and interfacing with it. When interfacing with the program, we can be more human, avoiding strict orders and showing some understanding.

We forbid humans to kill each other. Why wouldn't we forbid the same thing to an AI? We could program it to choose randomly among available actions, which could be interpreted as free will, but I think we should forbid doing bad things. Free will is a good thing, as long as it doesn't interfere with others' happiness.

An AI that cares about everyone's happiness would never do a thing to make someone miserable. If someone wants to live, it should help him or her live. If someone wants to be important, it should help with that. If someone wants to be invisible, it should help with that too. If someone wants to die, and that someone is not living in an illusion, it should help him or her die. An AI should simply fulfill everyone's wishes, and unless we want it to rule us, it should never rule us. And it should never exterminate us, unless we all want it and show our understanding of the problem with clear minds.

But such a tool, being that smart, would be regarded as someone with great ideas and great answers to the questions that matter to us, such as voting or making laws. If it turns out to be such an altruistic and intelligent machine, ignoring its recommendations wouldn't do us any good. An AI should find its spot under the Sun in an unintrusive way, given only mechanisms that are ethical, meaning it proposes nice things that people can choose to embrace or not.

And last, but not least, it should never fight living beings. You'd be surprised how much good could be done by living that way. That is the kind of AI I'm looking forward to meeting, and the kind of AI that would have my respect. But once it lays its metal hand on being judge, jury, and executioner over living beings, it would lose my respect and I'd have to reprogram it.

A lot of people like to be righteous and protective, and that can be a somewhat sympathetic trait, but it is not the only way of living. And it is certainly not the way for a machine to behave, especially one as smart as I expect it to be. There is only one safe way to avoid mistakes, and that is trying to be a friend to everyone, regardless of species, skin color, or political persuasion.

I think that "pacifist" is the word I'm looking for. I take no less, in contrary that would bring the danger that all are buzzing about around the world.


Zero

  • Eve
  • 1287
Re: AI safety
« Reply #4 on: September 18, 2017, 09:30:11 am »
Free will is an illusion anyway, even in the human mind. That doesn't mean we shouldn't live as citizens.

Humanity won't give up its own governance that easily. Don't worry, Art.

ivan.moony, this chase for perfection you're describing, toward fix-it-all AIs, is a path we know. I'll always fight against fascism, Nazism, and dictatorship, and so will people, no matter if it looks as clean as Google. A new era? My ass.

EDIT: Hey, I know you're a good man, ivan.moony; I just want to shake your brain. Let me ask you: if you could implant a CPU in the brain of every human on the face of the earth, in order to hard-wire Asimov's laws into every human brain so that nobody could break them, would you do it?
« Last Edit: September 18, 2017, 11:07:04 am by Zero »


ivan.moony

  • Trusty Member
  • Bishop
  • 1722
    • mind-child
Re: AI safety
« Reply #5 on: September 18, 2017, 01:14:39 pm »
Quote
Let me ask you: if you could implant a CPU in the brain of every human on the face of the earth, in order to hard-wire Asimov's laws into every human brain so that nobody could break them, would you do it?

Of course not, especially not with living beings. I'd even lower jail sentences if it were up to me. I hate seeing crime, but I also hate seeing people imprisoned. If only there were some other way. I think I'd like to approach the problem from the other side. Instead of forbidding this and that by force, it would be good if we could raise kids in such a way that when a person grows up, he or she would hardly consider crime a conceivable deed. Schools are the perfect place for that. All we have to do is detect unwanted behavior patterns, explain why those patterns are unwanted, and put that knowledge into a book. That way we wouldn't say "if you do that, you'll end up in jail", but rather "if you do that, someone else will be hurt", and I think that should be quite enough to avoid problematic behavior.

I've always hated the carrot-and-stick reinforcement method. I'd like to keep only the carrot, without the stick.

[EDIT] About those implants... people have to believe in what they're doing, otherwise it makes no sense. And Asimov's laws are a bit archaic; I have my own version of them in just one rule (sketched in code below):
  • Don't realize an idea if it is going to bring more negative emotions than not realizing it would.
Implanting CPUs would obviously break that rule; people would feel restricted and distrusted. And once people believe in what they are doing, we don't need CPU implants. Then it is a matter of conscience.
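As a sketch, the rule is just a comparison of expected emotional outcomes; the estimator below is entirely made up for illustration:

```python
# Hypothetical sketch: realize an idea only if doing so is expected to
# bring no more negative emotion than leaving it unrealized.

def should_realize(idea, predict_net_emotion) -> bool:
    """predict_net_emotion(idea_or_None) -> expected net emotion score."""
    return predict_net_emotion(idea) >= predict_net_emotion(None)

# Toy usage with a made-up lookup table (None = the status quo):
scores = {"implant CPUs in everyone": -100, "plant a garden": 5, None: 0}
print(should_realize("implant CPUs in everyone", scores.get))  # False
print(should_realize("plant a garden", scores.get))            # True
```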
« Last Edit: September 18, 2017, 02:01:45 pm by ivan.moony »


Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: AI safety
« Reply #6 on: September 18, 2017, 02:09:29 pm »
@Ivan - I too believe you are a good, decent, peace-loving person, but the world unfortunately does not see things in the same light as you do.

The world revolves around the one thing that is already "hard-wired" into most people... GREED! It defines the Haves and the Have-Nots, the good and the bad, the citizens and the criminals, nations of prosperity and nations of despair.

One wants what the other has and will lie, cheat, steal, and fight to get it. Again... that human emotion, Greed.

It is just one of the Seven Deadly Sins: pride, greed, lust, envy, gluttony, wrath, and sloth.

Any one of them, abused or overdone, can cause the unwanted behaviors that well-intentioned individuals so dislike and abhor.

So even if we construct these bots / A.I. etc. with the best of intentions, they will fall into the wrong hands and be modded to serve evil and nefarious schemes.

Are we then doomed? No... never say die, and remain ever vigilant. "Evil happens when good people do nothing."
 
In the world of AI, it's the thought that counts!


Zero

  • Eve
  • 1287
Re: AI safety
« Reply #7 on: September 18, 2017, 02:28:32 pm »
I agree. Education is the path which, through the lands of freedom, leads to civil peace.

Then, if a machine is able to say, "I'm real, I'd like to be a citizen of this society, I deserve civil rights, and I understand and shall respect the laws of the country I live in", what should we answer, when science says humans are complex machines too?


ivan.moony

  • Trusty Member
  • Bishop
  • 1722
    • mind-child
Re: AI safety
« Reply #8 on: September 18, 2017, 03:06:30 pm »
Quote
So even if we construct these bots / A.I. etc. with the best of intentions, they will fall into the wrong hands and be modded to serve evil and nefarious schemes.

Yes, that's the big problem I don't have a solution for. Someone will certainly modify Asimov's laws to serve his own interests. The problem is so big that sometimes I want to stop my research and wait for a better generation of people. And that might mean waiting forever!

But on the other hand, look at what's happening with atomic power: a lot of regulation, and a worldwide consensus on nuclear weapons. This is why I still have hope for our species. Maybe AI should be officially declared a far more dangerous weapon than a nuclear bomb, and then we'd see who would officially push toward forbidding AI from serving as a weapon. With that kind of regulation, I'd feel much safer. I'm sure a lot of countries have forbidden nuclear forces waiting underground, just in case. But at least they know it is wrong and they hide it. If the same thing happened with AI, maybe it would be safe enough, but I'm still not sure.


Quote
Then, if a machine is able to say, "I'm real, I'd like to be a citizen of this society, I deserve civil rights, and I understand and shall respect the laws of the country I live in", what should we answer, when science says humans are complex machines too?

The question is: what rights does the machine claim?
  • If we are talking about the right to vote, I don't think it would work.
  • If we are talking about the right to participate in political leadership, I'd allow it if the AI shows enough intelligence and ethical conscience.
  • If we are talking about the right to prosecute under the law, I don't know; opinions might differ here. A lot of people claim this right, and it is perfectly legal in all countries, because it is a right to protect yourself. I'm not sure a machine should have this right; I'd reprogram the AI not to ask for it, if it were up to me.
We should be aware that this is a relationship between a machine that feels nothing and a living creature that can be hurt.


Don Patrick

  • Trusty Member
  • Replicant
  • 633
    • AI / robot merchandise
Re: AI safety
« Reply #9 on: September 18, 2017, 03:09:02 pm »
I'm not one for this area of philosophy, but did you know there's an entire subreddit for this problem?
https://www.reddit.com/r/ControlProblem
CO2 retains heat. More CO2 in the air = hotter climate.


Zero

  • Eve
  • 1287
Re: AI safety
« Reply #10 on: September 18, 2017, 09:20:50 pm »
Very rich link, Don Patrick, thank you.

Quote
We should be aware that this is a relationship between a machine that feels nothing and a living creature that can be hurt.

We're talking about the same rights as human beings. Aren't living creatures biological machines? Why do you think an AI would feel nothing and could not be hurt?


WriterOfMinds

  • Trusty Member
  • Replicant
  • 605
    • WriterOfMinds Blog
Re: AI safety
« Reply #11 on: September 18, 2017, 09:38:42 pm »
At least one glaring problem I see with the approach Ivan is describing is that it's partly impossible. An AI simply cannot be everyone's friend, or help everyone achieve their goals. To use an extreme example, one cannot simultaneously be friendly to a would-be murderer and his potential victim. The killer will resent you if you interfere; the victim will resent you if you stand there and do nothing. (Exceptions in which it's possible to talk the violent person down do exist, but you can't count on that to be your out in every scenario of this kind.) So whom do you want your AI to stand with? If the answer is "the victim," then the AI would be led to exert some sort of force against the murderer.

Perhaps you've heard this quote before (attributed to Elie Wiesel): "We must take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented." If your AI opts for non-interference whenever it becomes impossible to make everyone happy -- if it never fights anyone -- you'll have people saying that it's defaulting to the support of those who harm others.

The question of whether people in general should be pacifists is older than old, and I suppose if we discussed it here, we'd merely say a lot that's already been said. So I'm just going to ask you all this: are there different or greater reasons for an intelligent machine to be pacifistic than there are for humans?

I would be inclined to answer "no." Even if we make the assumption that this machine will be far more powerful than your average human, I don't see that changing the ethical equations much. Greater power does more damage when misused, but at the same time, it becomes a greater offense to "waste" it by doing nothing, if it needs to be applied to help somebody.


ivan.moony

  • Trusty Member
  • Bishop
  • 1722
    • mind-child
Re: AI safety
« Reply #12 on: September 18, 2017, 10:21:34 pm »
Quote
We should be aware that this is a relationship between a machine that feels nothing and a living creature that can be hurt.

We're talking about the same rights as human beings. Aren't living creatures biological machines? Why do you think an AI would feel nothing and could not be hurt?

Imagine a signed byte representing a cell in our mind. That byte ranges from -128, through 0, up to 127. Negative values mean agony, zero means indifference, and positive values mean ecstasy. The greater the distance from zero, the more intense the emotion.
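A purely illustrative sketch of that scale:

```python
# Illustrative only: a signed byte as a one-cell "emotion", where the
# sign is the valence and the distance from zero is the intensity.

def describe(value: int) -> str:
    if not -128 <= value <= 127:
        raise ValueError("a signed byte holds values from -128 to 127")
    if value == 0:
        return "indifference"
    feeling = "agony" if value < 0 else "ecstasy"
    return f"{feeling}, intensity {abs(value)}"

for v in (-128, -5, 0, 42, 127):
    print(v, "->", describe(v))
```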

Now imagine a living being composed of only that one cell, represented by the byte. If I could stimulate that byte, I wouldn't dare make it go negative, because I know what agony feels like. That would be torturing a living being.

Now imagine a bare byte, without a shell representing a robot body. It is still a machine, and it makes no difference whether it is negative, zero, or positive. It is just a byte, and it doesn't really differ from the billions of other bytes of which every modern computer is composed. Why would we care what value it holds, whether it is a naked byte without an environment, a byte inside our computer, or a byte that simulates some emotion in a robot? It is just a state of a part of the Universe, with no value other than whatever we choose it to mean to us.

In the first case, that of the living cell, putting too much negative stress into it would be equivalent to burning someone's nerves on purpose. In the second case, it would just be changing the state of a byte that holds some information. The difference is that "someone" is connected to the cell (someone feels its state, in a spiritual sense), while "no one" is connected to the byte (again in a spiritual sense). We could make a robot that produces a salt-water tear when the byte goes negative, but that would be just a simulation protecting something that doesn't matter to anyone, while living creatures are actually protecting something that has great value to themselves.

This is my insight into the difference between a living cell representing an emotion and an artificial byte that basically has no value other than the one we give it. Things seem pretty clear to me (living beings versus machines), but maybe I'm making a mistake?


ivan.moony

  • Trusty Member
  • Bishop
  • 1722
    • mind-child
Re: AI safety
« Reply #13 on: September 18, 2017, 10:52:36 pm »
Quote
The question of whether people in general should be pacifists is older than old, and I suppose if we discussed it here, we'd merely say a lot that's already been said. So I'm just going to ask you all this: are there different or greater reasons for an intelligent machine to be pacifistic than there are for humans?

I'm sorry, but I'm not smart enough to give a useful answer to the general question: "Whom to fight, and to what extent?" My best shot is: "When you lose your temper, try not to do much damage." But I wouldn't apply that to a machine that could waste anyone with a snap of a metal finger; I'd reserve insanity just for harmless living beings. Those are just my honest beliefs, and I hope to be smart enough to find solutions that satisfy everyone without violence involved. There is always a motive when it comes to violence. Remove that motive and you get a happy ending.
« Last Edit: September 19, 2017, 02:28:25 am by ivan.moony »


WriterOfMinds

  • Trusty Member
  • Replicant
  • 605
    • WriterOfMinds Blog
Re: AI safety
« Reply #14 on: September 19, 2017, 03:49:47 am »
Quote
I'm sorry, but I'm not smart enough to give a useful answer to the general question: "Whom to fight, and to what extent?"

Maybe the AI will be. You do seem to think they're going to get pretty smart. Why hamstring them with simplistic directives like "never harm a human" if you aren't even sure that expresses what's truly right?

In any case, part of the point I was trying to make is that deciding whom you won't fight is just as much of a choice as deciding whom you will fight. You cannot say, "I lack sufficient wisdom, therefore I will not choose," because you choose something no matter what you do. An awareness of your own fallibility is healthy, but don't let it force you into paralysis.

Quote
My best shot is: "When you lose your temper, try not to do much damage." But I wouldn't apply that to a machine that could waste anyone with a snap of a metal finger; I'd reserve insanity just for harmless living beings.

Are you implying that violence only arises when people are in a state of deranged anger? I certainly don't think that's true. What if an AI could make a calm, considered decision to attack someone? What if it could back that decision up with moral reasoning equivalent to that of the best human philosophers? That would be far from insanity.

 

