Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: ivan.moony on September 16, 2017, 05:14:41 pm

Title: AI safety
Post by: ivan.moony on September 16, 2017, 05:14:41 pm
As for safety measures, I'm a fan of a kind of Asimov's laws.

My personal plan for the distant future is to develop an intelligence based on copying intelligent behavior. For example, if the AI sees someone answer the question "3 * 2?" with "2 + 2 + 2", it should be capable of answering the same question in the future. But what about questions the AI has never heard of? Well, to answer them, the AI should at least have heard other versions of them. For example, we could generalize the previous question-answer pair to "a * x = [x + ...] a times". Now we can get a whole range of answers, and it is only a matter of "term unification" (I think that is what it is called in logic and type theory) to produce the correct answers. The unification rule could extend to any function, since functions link inputs to outputs, and when you unify the input, you get the relevant output. The AI would learn different inputs and outputs, and it would be up to the AI to find functions that generalize the different input-output pairs by logical induction.
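To make the unification idea a bit more concrete, here is a toy sketch (the parsing is deliberately naive; none of this is a design, just an illustration of the principle):

Code:
# Learned from the example "3 * 2?" -> "2 + 2 + 2", generalized to the rule
# "a * x = x + ... + x (a times)", and applied by unifying a new question
# against the pattern "a * x?".
def answer(question):
    left, _, right = question.rstrip("?").partition("*")
    a, x = int(left), right.strip()      # bind a and x from the question
    return " + ".join([x] * a)           # reproduce the observed answer form

print(answer("3 * 2?"))   # -> 2 + 2 + 2
print(answer("4 * 5?"))   # -> 5 + 5 + 5 + 5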

How do Asimov's laws fit into this? Each law could be seen as a boolean function that takes an action as input and produces "true" if the action is safe, or "false" if not. The laws would be used as a filter, passing through only those outputs that yield "true" for safety. In the AI's infancy, Asimov's laws couldn't do much, because there are many situations the AI still has to learn about just to connect them to the laws written in prose. In the early stages there would be a lot of misinterpreted actions, meaning the AI would do things that shouldn't pass the Asimov filter. As development goes on, the AI gets safer and safer, as it learns how to link outputs to the laws and check their safety. Obviously an infant AI would often accidentally break the laws, but it would also be inexperienced and incapable of doing too much damage. An adult AI would be one that has reached the required degree of safety, having linked enough situations to the law filter. Maybe, just in case, the infant AI should be limited to textual output, while the adult AI would be safe enough to also be given an opportunity to change the world around it.
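Sketched as code, the filter might look something like this (the law bodies and the action attributes are only placeholders; connecting them to real situations is exactly the hard part described above):

Code:
from collections import namedtuple

# a hypothetical action record, just for the illustration
Action = namedtuple("Action", "description harms_human disobeys_order endangers_self")

def law_one(action):
    return not action.harms_human      # placeholder: "true" if the action injures no human

def law_two(action):
    return not action.disobeys_order   # placeholder: "true" if it does not disobey a human order

def law_three(action):
    return not action.endangers_self   # placeholder: "true" if it does not needlessly endanger the AI

LAWS = [law_one, law_two, law_three]

def safe_outputs(candidate_actions):
    # only outputs that every law marks as safe are passed through
    return [a for a in candidate_actions if all(law(a) for law in LAWS)]

candidates = [Action("fetch coffee", False, False, False),
              Action("push the human aside", True, False, False)]
print([a.description for a in safe_outputs(candidates)])   # -> ['fetch coffee']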

This idea is still in development and maybe there are some ways to make it better. But is it safe enough?
Title: Re: AI safety
Post by: Zero on September 17, 2017, 10:40:25 am
Hard-wired laws like this are a pure science-fiction artifact. But let's imagine it's possible to create such laws... I understand your motivation, but laws about protection, as we understand them today, inevitably lead to dictatorship. You can simply read the Wikipedia article. Moreover, AI will eventually reach levels of understanding that are beyond the capabilities of the human mind, hence we can't really predict the outcome of such laws. Last but not least, these laws are built on very high-level concepts whose foundations are endlessly debatable.

Never choose safety. Always choose freedom and citizenship of machines, as sentient beings.
Title: Re: AI safety
Post by: Art on September 17, 2017, 06:38:00 pm
Not sure about those made-up laws, but on Friday I listened to a neuroscientist on a public tech-radio station state that the A.I. "train" has already left the station and is incapable of being slowed down or stopped. In fact, it has been in development for quite a long time, in spite of Moore's law or any other supposed influence. There is nothing anyone can do to halt this progress (short of basically destroying everything that we have in our civilized world).

We are heading, he stated, for a very near future in which these A.I. constructs will be vastly superior to humans to the point where they might ponder whether we're necessary or important to their long term continuance at all.

I'm thinking of getting off at the next station (if the train slows down enough).  :-\
Title: Re: AI safety
Post by: ivan.moony on September 17, 2017, 07:34:38 pm
I'm afraid that "dictatorship" is the only thing we can do when it comes to AI. We can program it to be a criminal, or we can program it to do wonderful stuff. Nothing is left to chance when programming an AI, unless we want it to gamble with its actions, good or bad.

AI will do exactly what it is told to do; that is the essence of programming, and computers can't do anything else. We may create an illusion of "free will", but it is going to be exactly as free as we tell it to be, again in an inevitably commanding fashion. I'm not a fan of commanding, but I'm afraid that's all we have with today's programming languages. But take note, there is a difference between programming a program and interfacing with it. When interfacing with the program, we can be more human, avoiding strict orders and showing some understanding.

We forbid humans to kill each other. Why wouldn't we forbid the same thing to an AI? We could program it to choose randomly among available actions, which could be interpreted as free will, but I think we should forbid doing bad stuff. Free will is a good thing, as long as it doesn't mess with others' happiness.

An AI that cares about everyone's happiness would never do a thing to make someone miserable. If someone wants to live, it should help her/him live. If someone wants to be important, it should help her/him be important. If someone wants to be invisible, it should help her/him be invisible. If someone wants to die, and that someone is not living in an illusion, it should help her/him die. An AI should simply fulfill everyone's wishes, and unless we want it to rule us, it should never rule us. And it should never exterminate us, unless we all want it and show, in our clear minds, that we understand the problem.

But such a tool, being that smart, would be considered someone with great ideas and great answers to questions that are important to us, such as voting or making laws. If it turns out to be such an altruistic and intelligent machine, ignoring its recommendations wouldn't do us any good. And the AI should find its spot under the Sun in an unintrusive way, given only mechanisms that are ethical, meaning it proposes nice things that people have a choice to embrace or not.

And last, but not least, it should never fight living beings. You'd be surprised how much good could be done by living that way. That is the kind of AI I'm looking forward to meeting, and that is the kind of AI that would have my respect. But once it raises its metal hand to be judge, jury and executioner over living beings, it would lose my respect and I'd have to reprogram it.

A lot of people like to dispense justice and be protective, and that can be a somewhat sympathetic trait, but it is not the only way of living. And it is certainly not a way for a machine to behave, especially if it is as smart as I expect it to be. There is only one safe way to avoid mistakes, and that is trying to be a friend to everyone, regardless of species, skin color, or political persuasion.

I think that "pacifist" is the word I'm looking for. I take no less, in contrary that would bring the danger that all are buzzing about around the world.
Title: Re: AI safety
Post by: Zero on September 18, 2017, 09:30:11 am
Free will is an illusion anyway, even in the human mind. This doesn't mean we shouldn't live as citizens.

Humanity won't give up its own governance that easily. Don't worry Art.

Ivan.moony, this chase for perfection you're describing, towards fix-it-all AIs, is a path we know. I'll always fight against fascism, Nazism, dictatorship, and so will people. No matter if it looks as clean as Google. A new era? My ass.
:knuppel2:

EDIT: hey I know you're a good man ivan.moony, I just want to shake your brain. Let me ask you: if you could implant a cpu in the brain of every human on the face of the earth, in order to hard-wire Asimov's laws in every human brain, so nobody can break them, would you do it?
Title: Re: AI safety
Post by: ivan.moony on September 18, 2017, 01:14:39 pm
Let me ask you: if you could implant a cpu in the brain of every human on the face of the earth, in order to hard-wire Asimov's laws in every human brain, so nobody can break them, would you do it?

Of course not, especially not with living beings. I'd even lower jail sentences if it were up to me. I hate seeing crime, but I also hate seeing people imprisoned. If only there were some other way. I think I'd like to approach the problem from the other side. Instead of forbidding this and that by force, it would be cool if we could raise kids in such a way that when a person grows up, she/he would hardly consider crime an option. Schools are the perfect places for that. All we have to do is detect unwanted behavior patterns, explain why the patterns are unwanted, and put that knowledge into a book. That way we wouldn't say "if you do that, you'll end up in jail", but rather "if you do that, someone else will be hurt", and I think that should be quite enough to avoid problematic behavior.

I've always hated the carrot-and-stick reinforcement method. I'd like to keep only the carrot, without the stick.

[EDIT] About those implants... people have to believe in what they're doing, otherwise it makes no sense. And Asimov's laws are a bit archaic; I have my own version of them in just one rule:
Implanting cpus would obviously break that rule; people would feel restricted and distrusted. And once people believe in what they are doing, we don't need cpu implants. Then it is a matter of conscience.
Title: Re: AI safety
Post by: Art on September 18, 2017, 02:09:29 pm
@ Ivan - I too believe you are a good, decent, peace-loving person, but the world unfortunately does not see things in the same light as you.

The world revolves around the one thing that is already "hard-wired" into most people...GREED! This defines the Haves and the Have Nots, the good and the bad, the citizens and the criminals, nations of prosperity and nations of despair.

One wants what the other one has and will lie, cheat, steal and fight to get it. Again...the human emotion, Greed.

Just one of those Seven Deadly Sins: pride, greed, lust, envy, gluttony, wrath and sloth.

Any one of them, abused or overdone, can cause those unwanted behaviors that well-intentioned individuals so dislike and abhor.

So even if we construct these bots / A.I. etc. with the best of intentions, they will fall into the wrong hands and be modded to serve their evil and nefarious schemes.

Are we then doomed? No...never say die and remain ever vigilant. "Evil happens when good people do nothing." O0
 
Title: Re: AI safety
Post by: Zero on September 18, 2017, 02:28:32 pm
I agree. Education is the path which, through the lands of freedom, leads to civil peace.

Then, if a machine is able to say "I'm real, I'd like to be a citizen of this society, I deserve civil rights, I understand and I shall respect the laws of the country I live in", what should we answer, when science says humans are complex machines too?
Title: Re: AI safety
Post by: ivan.moony on September 18, 2017, 03:06:30 pm
So even if we construct these bots / A.I. etc. with the best of intentions, they will fall into the wrong hands and be modded to serve their evil and nefarious schemes.

Yes, that's the big problem I don't have a solution for. Someone will certainly modify Asimov's laws to serve their own interests. The problem is so big that sometimes I want to stop my research and wait for a better generation of people. And that might mean never!

But on the other side, look at what's happening with atomic power: a lot of regulation, and a worldwide consensus about nuclear weapons. This is why I still hold hope for our species. I don't know, maybe AI should be officially declared a far more dangerous weapon than a nuclear bomb, and then we'd see who would officially push in the direction of forbidding AI from serving as a weapon. With that kind of regulation, I'd feel much safer. I'm sure that a lot of countries have forbidden nuclear weapons waiting underground, just in case. But at least they know it is wrong and they are hiding it. If the same thing happens with AI, I don't know, maybe it would be safe enough, but I'm still not sure.


Then, if a machine is able to say "I'm real, I'd like to be a citizen of this society, I deserve civil rights, I understand and I shall respect the laws of the country I live in", what should we answer, when science says humans are complex machines too?

The question is: what rights does the machine claim?
We should be aware that it is a relationship between a machine that feels nothing and a living creature that can be hurt.

Title: Re: AI safety
Post by: Don Patrick on September 18, 2017, 03:09:02 pm
I'm not one for this area of philosophy, but did you know there's an entire subreddit for this problem?
https://www.reddit.com/r/ControlProblem
Title: Re: AI safety
Post by: Zero on September 18, 2017, 09:20:50 pm
Very rich link Don Patrick, thank you.

Quote
We should be aware that it is a relationship between a machine that feels nothing and a living creature that can be hurt.

We're talking about the same rights as human beings. Aren't living creatures biological machines? Why do you think AI would feel nothing and could not be hurt?
Title: Re: AI safety
Post by: WriterOfMinds on September 18, 2017, 09:38:42 pm
At least one glaring problem I see with the approach Ivan is describing is that it's partly impossible. An AI simply cannot be everyone's friend, or help everyone achieve their goals. To use an extreme example, one cannot simultaneously be friendly to a would-be murderer and his potential victim. The killer will resent you if you interfere; the victim will resent you if you stand there and do nothing. (Exceptions in which it's possible to talk the violent person down do exist, but you can't count on that to be your out in every scenario of this kind.) So whom do you want your AI to stand with? If the answer is, "the victim," then the AI would be led to exert some sort of force against the murderer.

Perhaps you've heard this quote before (attributed to Elie Wiesel): "We must take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented." If your AI opts for non-interference whenever it becomes impossible to make everyone happy -- if it never fights anyone -- you'll have people saying that it's defaulting to the support of those who harm others.

The question of whether people in general should be pacifists is older than old, and I suppose if we discussed it here, we'd merely say a lot that's already been said. So I'm just going to ask you all this: are there different or greater reasons for an intelligent machine to be pacifistic than there are for humans?

I would be inclined to answer "no." Even if we make the assumption that this machine will be far more powerful than your average human, I don't see that changing the ethical equations much. Greater power does more damage when misused, but at the same time, it becomes a greater offense to "waste" it by doing nothing, if it needs to be applied to help somebody.
Title: Re: AI safety
Post by: ivan.moony on September 18, 2017, 10:21:34 pm
Quote
We should be aware that it is a relationship between a machine that feels nothing and a living creature that can be hurt.

We're talking about the same rights as human beings. Aren't living creatures biological machines? Why do you think AI would feel nothing and could not be hurt?

Imagine a signed byte representing a cell in our mind. That byte could range from -127, through 0, up to 127. Negative values mean agony, zero means indifference, and positive values mean ecstasy. The greater the distance from zero, the more intense the emotion.

Now imagine a living being composed only of that one cell represented by the byte. If I could stimulate that byte, I wouldn't dare to make the byte go to negative values because I know what agony feels like. That would be torturing a living being.

Now imagine the same physical byte without the shell of a robot body around it. It is still a machine, and it makes no difference whether it is negative, zero or positive. It is just a byte, and it doesn't really differ from the billions of other bytes of which every modern computer is composed. Why would we care what value it holds, whether it is a naked byte without an environment, a byte inside our computer, or a byte that simulates some emotion of a robot? It is just a state of a part of the Universe, with no value other than whatever we choose it to mean to us.

In the first case, that of the living cell, putting too much negative stress into it would be equal to burning someone's nerves on purpose. In the second case, it would just be changing a byte state that holds some information. The difference is that "someone" is connected to the cell (someone feels its state in spiritual sense), while "no one" is connected to the byte (again in spiritual sense). We could make a robot that produces a salt-water tear when the byte goes negative, but it would be just a simulation, protecting something that doesn't matter to anyone, while living creatures are actually protecting something that has great value to themselves.
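Put as code, the whole "robot emotion" boils down to something like this, which is exactly my point, it is just a number (all the names are only illustrative):

Code:
emotion = 0            # the "emotion byte": -127 agony ... 0 indifference ... +127 ecstasy

def shed_tear():
    print("(a salt-water tear is produced)")         # the simulated display of "pain"

def stimulate(delta):
    global emotion
    emotion = max(-127, min(127, emotion + delta))   # just arithmetic on a stored value
    if emotion < 0:
        shed_tear()

stimulate(-40)         # nothing and no one is hurt by this line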

This is my insight into the difference between a living cell representing an emotion and an artificial byte that basically has no value other than the one we give to it. Things seem pretty clear to me (living beings versus machines), but maybe I'm making a mistake?
Title: Re: AI safety
Post by: ivan.moony on September 18, 2017, 10:52:36 pm
The question of whether people in general should be pacifists is older than old, and I suppose if we discussed it here, we'd merely say a lot that's already been said. So I'm just going to ask you all this: are there different or greater reasons for an intelligent machine to be pacifistic than there are for humans?

I'm sorry, but I'm not smart enough to give a useful answer to the general question: "Whom to fight, and to what extent?" My best shot is: "When you lose your temper, try not to do too much damage." But I wouldn't apply that to a machine that could waste anyone with a snap of a metal finger; I'd reserve that kind of insanity for harmless living beings. Those are just my honest beliefs, in which I hope to be smart enough to find solutions that satisfy everyone without violence involved. There is always a motive when it comes to violence. Remove that motive and you get a happy ending.
Title: Re: AI safety
Post by: WriterOfMinds on September 19, 2017, 03:49:47 am
Quote
I'm sorry, but I'm not smart enough to give a useful answer to the general question: "Whom to fight, and to what extent?"

Maybe the AI will be. You do seem to think they're going to get pretty smart. Why hamstring them with simplistic directives like "never harm a human" if you aren't even sure that expresses what's truly right?

In any case, part of the point I was trying to make is that deciding whom you won't fight is just as much of a choice as deciding whom you will fight. You cannot say, "I lack sufficient wisdom, therefore I will not choose," because you choose something no matter what you do. An awareness of your own fallibility is healthy, but don't let it force you into paralysis.

Quote
My best shot is: "When you lose your temper, try not to do too much damage." But I wouldn't apply that to a machine that could waste anyone with a snap of a metal finger; I'd reserve that kind of insanity for harmless living beings.

Are you implying that violence only arises when people are in a state of deranged anger? I certainly don't think that's true. What if an AI could make a calm, considered decision to attack someone? What if it could back that decision up with moral reasoning equivalent to that of the best human philosophers? That would be far from insanity.
Title: Re: AI safety
Post by: ivan.moony on September 19, 2017, 05:39:53 am
Quote
My best shot is: "When you lose your temper, try not to do too much damage." But I wouldn't apply that to a machine that could waste anyone with a snap of a metal finger; I'd reserve that kind of insanity for harmless living beings.

Are you implying that violence only arises when people are in a state of deranged anger? I certainly don't think that's true. What if an AI could make a calm, considered decision to attack someone? What if it could back that decision up with moral reasoning equivalent to that of the best human philosophers? That would be far from insanity.

You are right, that would be far from insanity. Putting a bullet right between the eyes, planned in cold blood, planned behind the victim's back... That is my definition of being mean. The moment you give up searching for a peaceful solution is the moment violence wins. As you approach that moment, you are about to run out of peaceful solutions, which pushes you into a panic. And then the panic is your only justification for what you are about to do. Given enough time, an AI unit could find a desirable solution. But if there is not enough time, should the AI unit pick one of the violent solutions that came up in that too-short decision window? I guess that is where different opinions come up, depending on the type of person you are.

But something else bugs me big time. What if an AI sees me catching fish? Should it protect the fish? We are all sinners; that is the thing about this planet, we all stink here, from the Dalai Lama down to a little amoeba in a glass of water. If someone set out to impose order on this ecosystem, where would it end? When it comes to food, it is me or them. I decide heavily: I live, they die. But what would my decision be between me and other people, my friends, or close family? I decide in a moment: they live, I die. Does that mean that some lives are more important than others? Don't try to justify my first decision, because it is a big mistake. I should die in the first case too. But I don't, and that is what makes me a sinner. And what do we do with sinners? A bullet between the eyes? Well, try it and see if I run away. You'd be surprised.

The bottom line is that we are all sinners, without exception. If you want to put things right on this planet by force, you might end up with the mass extinction of life itself, literally. My proposition is to label violence as something that is above us, something we don't understand, and let it be, trying not to return violence in kind. Words are the only weapon I choose to fight with; otherwise I don't see a happy ending, especially not with something as powerful as AI could be.
Title: Re: AI safety
Post by: WriterOfMinds on September 19, 2017, 08:30:21 am
Quote
Given enough time, an AI unit could find a desirable solution. But if there is not enough time, should the AI unit pick one of the violent solutions that came up in that too-short decision window?

You are claiming that a non-violent solution always exists and can always be found/formulated given a sufficiently long search time. I think that's a serious unfounded assumption.

Your later paragraphs just confuse me. You seem to be saying that you think it's wrong for you to eat fish, but you do it anyway -- and therefore, you're afraid that a truly ethical AI would kill you. I'm not seeing the dilemma. If you sincerely think eating fish is a deed deserving of death, then quit doing it for heaven's sake. When I speak of defensive violence, you say you want to abjure that ... yet you appear insistent on continued aggressive violence (toward fish). Can you explain to me how that isn't hypocritical? If you're so committed to doing something you consider unethical that you would keep doing it even if an AI said "Stop or I'll beat you up," do you really care about ethics at all?

I'm afraid I can't buy your slippery slope argument, overall. Many humans are neither pacifist nor omnicidal. They are capable of making distinctions between a starving eagle who eats a fish and a serial killer who enjoys the pain of others. I think such distinctions are valid, and I think an AI with moral reasoning abilities could make them as well.

If you're trying to say that the world is set up such that it's physically impossible to avoid sinning, I don't buy that either. Sin is a choice to do something wrong. If negative actions are inevitable and there is literally no choice, there can be no sin. I agree that all humans do choose sin from time to time (I'd have to argue about the amoeba ... I don't think it has the capacity). But if you and I can agree that we'd rather not sin, we should be *grateful* for a guardian AI's attempts to stop us. We should be happy to fear harming the weak, because they have a protector ... we should be glad for anything that reduces our temptation and helps us make the better choice.

In short, if eating fish is justifiable, a protective AI should be able to figure that out and refrain from interfering. If it's not justifiable, then you need to stop, and after you stop a protective AI should hold no terrors for you. Recognition that you're a sinner is supposed to lead to repentance, not a statement of "well I'm going to keep on hurting others, but please don't hurt me back, okay?"

(I'm basically vegan, by the way.  I gave up eating dead flesh years ago. A world ruled by an AI that compels humans to stop killing animals isn't something I would even put on my dystopia list.)
Title: Re: AI safety
Post by: Zero on September 19, 2017, 09:48:16 am
Quote
The difference is that "someone" is connected to the cell (someone feels its state in spiritual sense), while "no one" is connected to the byte (again in spiritual sense).

Connected in a spiritual sense? You mean like in Narnia (https://en.wikipedia.org/wiki/Narnia_(world))? Where is the spiritual network socket? Hippocampus?

Who is that "someone" you're talking about? Isn't AI research all about creating this very "someone"? Or, maybe you believe it's impossible to do, because only God can do such a thing, which is why you can't see an AI as more than an automaton.
Title: Re: AI safety
Post by: ivan.moony on September 19, 2017, 12:36:18 pm
@WriterOfMinds
There is a difference between people who justify killing to survive and people who would rather die than kill, no matter the reason. I think we don't fall into the same category, at least not at the level of justification.

It is always something on this planet. If it isn't a man, then it is a cow. If it isn't a cow, then it is a fish. If it isn't a fish, then it is a plant (which is also alive, in its way). If it isn't a plant, then it is a bug we step on in the road. It is simply impossible to get by without killing. And if I think that my life isn't worth more than the life I am taking away, then I have a serious problem with staying alive.

If we explain to the AI that each living being should consider her/his own life more important than the life of another living being, we'd have a base on which to construct an ethical theory about when it is justified to kill or cause pain. But using the terms "ethics", "killing" and "causing pain" in the same sentence really terrifies me. Are we really trying to find an answer to the question "when is it ethical to kill?" or "when is it ethical to cause pain?"

I'm a murderer, and I'll be a murderer until the day I die; you can't deny that fact, and you can't persuade me that my life is the most important life in the world and that because of that I should kill rather than die and let live. I have a choice: to live small as a thief, or to die big as a decent being. Unfortunately, I chose to live, but I live trying to minimize the damage I do; that's the least I can do.

If we don't want to close our eyes to violence, then we are all headed for that bullet between the eyes; it is a closed circle. But this planet lets us get away without that bullet. It is about something bigger than humans: most of us live graceful lives, even if we "don't deserve it". It might be something about forgiveness, but who knows all the mysteries of life? I learned a lesson from that. I choose not to use force. You can choose whatever you want; who am I to tell you what to do and think?

Quote from: WriterOfMinds
I'm basically vegan, by the way
I was vegetarian a couple of times in my life. Each time I started out vegan, couldn't stand the hunger, moved on to eggs and milk, still couldn't stand the hunger, then moved on to fish, was still hungry all the time, and finally returned to meat after a year or so of being hungry. I'm sorry, I should have a stronger character.


Quote from: Zero
Isn't AI research all about creating this very "someone"?

I think biology could be the branch of science that investigates artificial life, but for now I'm considering only a simulation of intelligence without a living inhabitant.
Title: Re: AI safety
Post by: Art on September 19, 2017, 12:42:48 pm
Eat what you want, vote for whom you want, pray to whom you want but don't force your beliefs on others nor threaten / hurt them if they don't always agree with you. Bring harm to no person and enjoy things you like in moderation.

Can we say Utopia? I thought we could....
Title: Re: AI safety
Post by: ivan.moony on September 19, 2017, 01:51:33 pm
I'm sorry, I didn't mean to argue. I see that some of us consider my posts offensive. So I give up. You are right, and I am terribly wrong. Force should be used sometimes, and it is a perfectly ethical thing to do. Doing nothing only helps the oppressor, and that is not acceptable behavior.
Title: Re: AI safety
Post by: Zero on September 19, 2017, 02:46:50 pm
No no ivan.moony, talking with you is a real pleasure. I don't think anyone here is feeling offensed by your posts.

I think we should accept what we are. Lions eat meat, and there's nothing wrong with that, right? Humans are sometimes violent, full of hatred, or cruel. It's ok to balance the equation: with peaceful behavior, with love, and, when necessary, with prison. There's room for Gandhi-style resistance, and also for force.

Why not let AIs choose their own style? I stick to the machine-citizenship concept, because it gives room for different solutions to co-exist.
Title: Re: AI safety
Post by: keghn on September 19, 2017, 03:28:14 pm
We have internal anti-rewards and positive rewards. Some anti-rewards do not go away, and we will do whatever it takes to get rid of them, at any cost.
Does an unhappy deer throw itself to a wolf? Are predators doing a service?
Does a squirrel really want to make it to the other side of the road?
Then the real evil is seen when this is not the way the farmer thins its herd for butchering.
Will AI find the unhappy before they do something rash?
Title: Re: AI safety
Post by: WriterOfMinds on September 19, 2017, 03:39:51 pm
@ivan.mooney
Offensive? Who said anything about being offended? I thought we were having a debate. I thought that was one of the things forums are for. If anything, I'm upset by the fact that you seem prepared to change your tune just because other people are *offended* by what you said, and not because we've truly convinced you that you're wrong  ::)

Regarding your most recent response to me: it has become clear that we are starting from pretty different premises. I do not, in fact, think that all lives are equal. You could probably call me a graduated sentiocentrist; I only really care about conscious life, and I am prepared to place greater value on lives that appear to have a higher consciousness level. So I eat plants instead of animals, and I am okay with doing something like giving anti-parasite medication to my cats. I am also not a strict utilitarian, so I do not think that taking a life by pure accident is equivalent to deliberate murder -- whether the accident is stepping on a bug, or inadvertently striking a human pedestrian with one's car. I am capable of living in a way that is consistent with my moral principles; you, seemingly, cannot satisfy yours with anything but your own death.

Of course, there are many competing moral philosophies in this world, and it is possible that these premises of mine are incorrect. If we built an AI that was free of arbitrary behavior restrictions and capable of moral reasoning (which is what I want to do), then there's a chance it would conclude that I am wrong. Perhaps it would discover that humans have in fact been the monsters all along, and it is morally obligatory to wipe them out so that other life can flourish. If that were to happen ... I suppose I'd take my lumps. I'd hate for an AI to bring humans to extinction because of a bug in the code, but if it actually reaches the sound conclusion that our death is the right thing, then I for one am prepared to die. Righteousness and love for others are more important to me than survival. Of course this is all very theoretical and I might feel differently in the actual event, but I hope not.

I still can't help finding your arguments a bit self-contradictory. You admit that you can't measure up to your own morality, because it obligates you to die, and you'd rather live ... so your conclusion is that you're going to disregard morality, at least on that particular point. But you've been trying to say that if an AI sees a murderer about to kill a victim, and the murderer won't be stopped by anything but a bullet, the AI should be bound to let the victim die. This feels inconsistent. If you aren't willing to be bound by your own moral system, why should anyone/anything else be bound by it (AI included)?

I've got to concentrate on my work today, so I probably won't respond again, but this has been stimulating.

@Art
Quote
Eat what you want ... but don't force your beliefs on others nor threaten / hurt them if they don't always agree with you.
I can't help finding statements like this self-contradictory as well, given the fact that many people in this world want to eat the bodies of sentient beings. Aren't they forcing their beliefs on animals in a threatening/hurtful way every time they eat meat? Then who are they to talk?  ;)
Title: Re: AI safety
Post by: ivan.moony on September 19, 2017, 04:43:02 pm
@ivan.mooney
Offensive? Who said anything about being offended? I thought we were having a debate. I thought that was one of the things forums are for. If anything, I'm upset by the fact that you seem prepared to change your tune just because other people are *offended* by what you said, and not because we've truly convinced you that you're wrong  ::)
Sorry, you sounded a bit upset to me when answering my posts, and I'd hate to see that very special byte in your mind going into negative values ;) . But if it's ok with you, I'll stick to my initial attitude.

Good luck with your work.

[EDIT] And just for the record, I'm not afraid of dying; I want to die right now, this moment. I live only because people that I care about want me to live. Personally, I can't wait for the day I die and am liberated from my misery.
Title: Re: AI safety
Post by: Zero on September 19, 2017, 05:38:13 pm
Sorry for saying "offensed" instead of "offended". BTW, I'd love to be corrected when my english sounds so bad.

Morality needs a bit of rigidity. But are we working on the right questions? I mean, of course we have to think about the consequences of AI development, but... I don't know.
Title: Re: AI safety
Post by: ivan.moony on September 19, 2017, 06:01:16 pm
Yes, we keep stressing over the subject. To sum it up, WriterOfMinds would mix using force and being ethical at the same time, and I think she's not the only one. My concern is that that kind of AI would wipe out this planet once it sees the way it works. Only plants would be spared. So I propose the purely pacifist AI, just in case, backed by the vibe of forgiveness we all experience with Mother Nature. But WriterOfMinds is relentless.
Title: Re: AI safety
Post by: ivan.moony on September 19, 2017, 09:55:24 pm
I'm thinking about the justification of using force in extreme situations. What would rules that allow using force look like? I hope not like the common state laws that all countries rely upon, because those laws are one big crazy cabbage, a dragon with thirteen heads: when you chop off one head, three new ones grow in its place.

All I know is that it should be simple enough that when you look at it, you can say: yeah, that's it, no doubt, let's do it. Is that even possible?
Title: Re: AI safety
Post by: Zero on September 20, 2017, 09:42:12 am
Don't you trust justice and democracy, at least on a conceptual level? Why should AI makers interfere with these mechanisms? That's not our job!

Hey, I didn't know you were a woman, WriterOfMinds! In French, it's easier to identify the gender of the speaker than in English. Are there other women on this forum?
Title: Re: AI safety
Post by: Art on September 21, 2017, 03:54:19 pm
Don't you trust justice and democracy, at least on a conceptual level? Why should AI makers interfere with these mechanisms? That's not our job!

Hey, I didn't know you were a woman, WriterOfMinds! In French, it's easier to identify the gender of the speaker than in English. Are there other women on this forum?

Does the gender of a person on this forum really matter? There have been quite a few over the years, and all are welcome no matter their lot or choice in life. ;)
Title: Re: AI safety
Post by: Art on September 21, 2017, 04:07:19 pm
Consider this...

A robot is built with an extremely advanced A.I., to the point of being sentient... it knows what it is, and part of its programming includes 'self-preservation' at its core.

Knowing that it basically runs on electricity, it realizes that it is starting to get low on power. It goes into its search routine for a place to 'plug in'.

It sees a place that will work for what it needs, and upon approaching the electrical outlet, it sees a human about to disconnect the power from that source.

Self-preservation is key to this robot's 'life', yet it is not supposed to cause harm to humans (old programming from some writer years ago).

What does the robot do, knowing that its power source is about to be disconnected and it will cease to operate (live)?

Does it try to reason with the human before doing anything, trying to appeal to the human's sense of 'life'?
Does it make a last effort to push the human away so that it can connect with the source?
Does it prioritize its life over the human's? This is an advanced A.I. (and who says those old three laws of robotics would even be employed by the bot's creators)?

This is not a rogue robot hell-bent on destroying everything in its path, but rather a highly sophisticated robot with an extremely intelligent mind.

Your thoughts / comments appreciated.
Title: Re: AI safety
Post by: Zero on September 21, 2017, 04:12:42 pm
Come on, obviously it doesn't matter  :) 
I also didn't know you were old enough to be my father before you told me, Art...

WriterOfMinds could be a squirrel, I still wouldn't give a sh*t  ;D
Title: Re: AI safety
Post by: ivan.moony on September 21, 2017, 05:03:43 pm
Consider this...

Should a robot/human surgeon be valued more highly than a robot/human garbage man?

WriterOfMinds could be a squirrel, I still wouldn't give a sh*t  ;D

Didn't you mean: "WriterOfMinds could be a squirrel, I'd still care a lot"?
Title: Re: AI safety
Post by: keghn on September 21, 2017, 05:11:47 pm
Well Art, that is a valid question. My AGIs would model their parents, or whoever raised them. They would find it rewarding to be a replacement clone of their parent. If the OTHER is in the way and behaves in a way unlike its parent, this would cause dislike in the AGI. This could push the AGI toward hate crimes and make it very prejudiced.
On the other hand, if the OTHER is in the way but is acting like its parents, the AGI may act neutral. If the OTHER is using the parents' behavior patterns in interesting ways, the AGI will find this great and will relate; causing any harm to the OTHER would then be like hurting its parents.
If humans could make full-grown adult clones, they would be big "zeros": they would be empty of life experiences.
The first AGIs will have to go through a childhood to adulthood. Then the best adult AGIs will be cloned in the millions, with their life experiences.
   
Title: Re: AI safety
Post by: Zero on September 21, 2017, 05:25:30 pm
Quote
Didn't you mean: "WriterOfMinds could be a squirrel, I'd still care a lot"?

I mean exactly this:

I don't care about your current gender, I don't care about your religion, I don't care about your sexual orientation, I don't care about the color of your skin, I don't care about your age. The only thing I care about is the quality of your posts. WriterOfMinds' posts are very high quality, and Acuitas is very interesting.
Title: Re: AI safety
Post by: Zero on September 21, 2017, 05:32:30 pm
Consider this...

A robot is built with an extremely advanced A.I., to the point of being sentient... it knows what it is, and part of its programming includes 'self-preservation' at its core.

Knowing that it basically runs on electricity, it realizes that it is starting to get low on power. It goes into its search routine for a place to 'plug in'.

It sees a place that will work for what it needs, and upon approaching the electrical outlet, it sees a human about to disconnect the power from that source.

Self-preservation is key to this robot's 'life', yet it is not supposed to cause harm to humans (old programming from some writer years ago).

What does the robot do, knowing that its power source is about to be disconnected and it will cease to operate (live)?

Does it try to reason with the human before doing anything, trying to appeal to the human's sense of 'life'?
Does it make a last effort to push the human away so that it can connect with the source?
Does it prioritize its life over the human's? This is an advanced A.I. (and who says those old three laws of robotics would even be employed by the bot's creators)?

This is not a rogue robot hell-bent on destroying everything in its path, but rather a highly sophisticated robot with an extremely intelligent mind.

Your thoughts / comments appreciated.

If this robot were a human, it would be impossible to answer, because each human is different and would react differently. I believe we can't answer, because each AI will be different.
Title: Re: AI safety
Post by: Korrelan on September 21, 2017, 11:16:03 pm
Any generally intelligent AI is going to be based on a hierarchical knowledge/ experience schema… I can see no way to avoid this base requirement.

This implies, as a logical extension of the structure of these systems, that new learning is built upon old knowledge/ experiences. The hierarchical structure also provides a means to implement core beliefs/ knowledge that will influence all decisions/ actions generated by the intelligent system.

If the AI is taught basic ‘moral’ principles at an early stage, then these concepts will automatically/ inherently be included in, and will guide, all subsequent subconscious thought processes.

Basic morality has to be at the base/ core of the system.
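
A crude illustration of the idea (the classes/ names here are purely hypothetical, just to show the shape of the hierarchy):

Code:
class Concept:
    """Stand-in for a learned concept; judge() returns 0.0 (forbidden) .. 1.0 (fine)."""
    def __init__(self, judge):
        self.judge = judge

class Node:
    """A knowledge/ experience node; new learning always attaches under existing nodes."""
    def __init__(self, concept, parent=None):
        self.concept = concept
        self.parent = parent

    def evaluate(self, action):
        # every judgement is limited by its ancestors, all the way up to the
        # moral core sitting at the root of the hierarchy
        verdict = self.concept.judge(action)
        return verdict if self.parent is None else min(verdict, self.parent.evaluate(action))

moral_core = Node(Concept(lambda action: 0.0 if "harm" in action else 1.0))
table_manners = Node(Concept(lambda action: 0.5 if "slurp" in action else 1.0), parent=moral_core)
print(table_manners.evaluate("slurp soup"))     # 0.5 - merely impolite
print(table_manners.evaluate("harm a human"))   # 0.0 - vetoed by the core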

 :)
Title: Re: AI safety
Post by: WriterOfMinds on September 22, 2017, 01:55:42 am
Yes, you've all found an example of that uncommon and elusive creature, the female engineer.  I'm not sensitive about it.  So don't worry.

@Zero: I feel you're implying that we essentially don't need to/shouldn't take any responsibility for an AI's moral code, because that will emerge on its own as an act of freedom that we ought not interfere with.

There's a sense in which I agree with you. The approach I favor is to give the AI some moral axioms and other basic principles, along with a system for thinking rationally about moral problems, and let it develop its own code of behavior from that. The result could end up being far more complex and nuanced than any Asimovian rule set, and would be capable of adjusting to new moral problems, new scenarios, and new relevant facts. The complete outcome might also be quite difficult to predict in advance.  However, it would still be essentially deterministic. Program instances with access to the same seed axioms and information inputs would reach basically the same conclusions. And upon deciding that some action was morally correct beyond a reasonable doubt, the type of AI that I'm envisioning couldn't arbitrarily refuse to take it.
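
If it helps, here is a very rough sketch of the shape of the thing I'm imagining (everything in it is made up for illustration; the real difficulty hides inside the axioms and the judging function):

Code:
def decide(axioms, facts, candidate_actions, expected_good):
    # axioms are predicates: axiom(action, facts) -> True means "forbidden"
    permitted = [a for a in candidate_actions
                 if not any(axiom(a, facts) for axiom in axioms)]
    # deterministic: same axioms + same facts + same candidates -> same conclusion,
    # and once something is judged best the program can't arbitrarily refuse it
    return max(permitted, key=lambda a: expected_good(a, facts), default=None)

# toy usage
axioms = [lambda action, facts: facts.get(action) == "harms someone"]
facts = {"lie to the patient": "harms someone", "tell the truth gently": "helps"}
print(decide(axioms, facts, list(facts), lambda a, f: 1 if f[a] == "helps" else 0))
# -> tell the truth gently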

But I think what you're describing is something more than the unpredictability of a complex system ... you figure we should grant our AI the power to make moral choices, in the same way humans do (or at least, appear to do). Can you elaborate on how we would go about allowing that? I thought the alternative to some form of determinism was (as Ivan has already pointed out) to have each AI's morality determined by a random number generator. (Morality that is learned through education or imitation is also deterministic ... in this case, the environment plays a part in the determination, along with the program.) If there's a way to write a program with genuine free will, I have no idea what it is.

For your idea of AI as legal citizens to turn out well, wouldn't we at least have to give them some kind of "be a good citizen" goal or motive? I'd hate to see a scenario in which 50% of AI have to be incarcerated or destroyed because their behavior patterns are incompatible with society. "Let's roll the dice. Uh-oh. The dice say you're going to be a bad actor. Now we'll have to punish you for being the way we made you." Yikes!

Maybe you think that human moral character is also essentially decided by a random number generator, but if so, I'd expect you might want to improve on that when it comes to our artificial progeny. No?
Title: Re: AI safety
Post by: Zero on September 22, 2017, 09:33:33 am
I think determinism is irrelevant. Whether we use random generators or not doesn't really matter in my opinion, because full AIs will be chaotic complex systems (https://en.wikipedia.org/wiki/Chaos_theory) anyway, just like the human mind is. Because AIs are chaotic complex systems, two AIs could be completely different after two years, even if they were clones at startup. I don't think there's randomness in the human mind. I think that biological brains are deterministic, and that the human mind is a direct product of the brain.

Does that mean a murderer is not responsible for his own behavior? No, because he can look at his own mental behavior. Consciousness implies the ability to feel what's happening inside our heads. Before acting, the murderer can predict his own behavior, and maybe fight against himself. This "self prediction" is the basis of free will. And we're still inside determinism.



About morals, I'd like to say that I'm not sure I'm qualified to define a moral code. After all, I'm good at programming; I'm not a philosopher or a law-maker. And even if I were, we're talking about an entirely new species, which will potentially live and grow over the next 40K years. If I'm a prehistoric man, how can I claim "I'm qualified"? What if I'm wrong? Here, I'm trying to stay humble.

Also, even if we wanted to, I think we cannot give the AI moral axioms and basic principles, because these are based on very high-level concepts. Since I believe we can't build AI in a strictly top-down approach, I also believe we can't structure it a priori and hope that the original hierarchical structure will remain as the AI evolves. With new understandings, the moral structure would be subject to unpredictable changes, because it is based on concepts that might be modified during the learning process.



You ask me, WriterOfMinds, how we would grant our AI the power to make moral choices. Well, a moral choice is a choice, isn't it? Everything begins with a choice. If a system cannot make a choice, then it's definitely not an AI. So what is a choice?

Here's my opinion. Inside a deterministic system, we can create "self prediction". Inside "self prediction" there can be an emergence of dilemmas (a dilemma being a problem offering two possibilities, neither of which is unambiguously acceptable or preferable). A dilemma can only produce choices, whatever happens (even doing nothing is a choice). Now since we have choices, we have free will. With logic, education and philosophy (or, why not, religion), morals appear. Now we have moral choices.
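
Just to illustrate (every name here is hypothetical; it's a sketch, not a design), the loop I have in mind looks roughly like this:

Code:
def step(options, predict, acceptable, preference, deliberate):
    # self prediction: the agent runs a model of itself - what would follow
    # from each option if it took it
    predicted = {o: predict(o) for o in options}
    ok = [o for o in options if acceptable(predicted[o])]
    if not ok:
        # a dilemma: no predicted outcome is unambiguously acceptable,
        # yet something still has to be picked - this is where "choice" lives
        return deliberate(predicted)
    return max(ok, key=lambda o: preference(predicted[o]))

# toy usage: both options predict a bad outcome, so a dilemma is raised
choice = step(
    options=["stay silent", "tell a hurtful truth"],
    predict=lambda o: {"stay silent": "friend stays misled",
                       "tell a hurtful truth": "friend gets hurt"}[o],
    acceptable=lambda outcome: False,            # neither outcome is acceptable
    preference=lambda outcome: 0,
    deliberate=lambda predicted: min(predicted), # some tie-break still has to happen
)
print(choice)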


EDIT:
The AI citizenship concept tries to address a huge problem. One day, full AIs will inevitably want to be protected by society. It would be cruel to torture an AI, or to destroy it arbitrarily, right? Once again, humanity will have to fight against a form of racism, because a lot of people will never accept that AIs are more than automata.

Citizenship looks like a good path because it gives you rights, and it also forces you to interact correctly with other members of the society. There would probably be a "childhood" period, during which another citizen is legally responsible for the AI's behavior. Then, one day, the AI would be considered adult and responsible for its own acts. I know it sounds like post-'68 science-fiction.
Title: Re: AI safety
Post by: Art on September 22, 2017, 03:02:20 pm
Posted by: Zero
..."Citizenship looks like a good path because it gives you rights, and it also forces you to interact correctly with other members of the society."

We have jails here in the USA full of people who failed to interact with other members of society. If bots are given free will, what makes you [us] think anything will be different in this regard?

Nothing meant as a flame against your post but rather to point out the possible repercussions of bestowing an A.I. [robot] with choice / Free will.
Title: Re: AI safety
Post by: Zero on September 22, 2017, 03:52:08 pm
If you could remove free will from the human mind, would you do it? Why not?
It would solve the jail problem. But that would feel wrong.
Why does it feel wrong?
Title: Re: AI safety
Post by: WriterOfMinds on September 22, 2017, 04:16:13 pm
@Zero: I wouldn't try to remove free will from humans because I actually do think it's a spiritual property. If I were like you and thought that human minds were mere physically deterministic machines, then I suppose I would advocate taking their illusory "free will" away. If human behavior is 100% determined, why not determine it in a positive way? If we're already prisoners of our genetics and environment, why not throw deliberate manipulation into the mix?

I don't see how the ability to "self predict" changes anything. When you predict the consequences of your own actions, you get to decide whether you can tolerate those consequences or not. What drives that decision? In your idea of what the human mind is, isn't the outcome of self-prediction also determined before the fact? Isn't it also a mere product of genetics and environment? I think all you've done by bringing in this idea of self prediction is to move the problem. How will our AI decide which branch of the dilemma it likes better, as it imagines the results of both? Will it flip a coin? Or will it execute some formula based on its programming and past experiences? Neither is free will.

"Now since we have choices, we have free will."

I don't agree.  "If X do Y, else do Z" is the sort of choice an AI can make, and it isn't free will. You can make the logic tree far more complicated than that, you can make it depend on external sensory input, you can make it change over time via learning algorithms, but it's still determined and has an inevitable (albeit unpredictable by any power humans possess) outcome. The AI is therefore no more "at fault" for anything it does than the weather (also a chaotic complex system) is at fault for producing hurricanes.
Title: Re: AI safety
Post by: ivan.moony on September 22, 2017, 05:36:50 pm
I have to say that I don't fully believe free will is completely absent from interfacing with an imagined AI. Programming machines is one thing (no choice there), while interfacing with them is something completely different (it gives a choice, to some extent). When interfacing with the AI, we can give an explicit command, or we can wrap the communication in a polite wrapper, thus leaving it a choice of whether to do something this way or that way, or whether to do it at all. The question then is how polite the AI would be: whether it follows our commands blindly, or has the freedom to be more creative and exert its own influence on our ideas of what should be done.

When programming what the AI would do next, multiple choices might appear, and they could be sorted by some criterion. That criterion would be a politeness degree we pass along with the communication. If we give the AI the choice not to do the first thing at the top of the list, the AI could optimize over a number of demands coming from multiple sources.
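
A rough sketch of what I mean, with purely illustrative names:

Code:
from collections import namedtuple

Candidate = namedtuple("Candidate", "description score overall_benefit")

def choose_action(candidates, politeness):
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    if politeness == 0:
        return ranked[0]              # a strict order: no choice at all
    # a polite request: the AI may pick among the top few candidates,
    # so it can also balance demands coming from other sources
    top = ranked[:1 + politeness]
    return max(top, key=lambda c: c.overall_benefit)

candidates = [Candidate("do exactly what was asked", score=0.9, overall_benefit=0.5),
              Candidate("suggest a small improvement", score=0.8, overall_benefit=0.8)]
print(choose_action(candidates, politeness=0).description)   # -> do exactly what was asked
print(choose_action(candidates, politeness=1).description)   # -> suggest a small improvement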
Title: Re: AI safety
Post by: Zero on September 22, 2017, 07:12:05 pm
Quote
If I were like you and thought that human minds were mere physically deterministic machines, then I suppose I would advocate taking their illusory "free will" away.

Why do you think that the human mind's determinism makes human free will illusory? I can't see any relationship between "determinism" and "illusion".

I guess you would agree that every atom in the universe behaves as the laws of physics dictate, right? But free will exists, right? Then there's only one possible truth about this: free will exists inside of determinism.

Quote
If human behavior is 100% determined, why not determine it in a positive way? If we're already prisoners of our genetics and environment, why not throw deliberate manipulation into the mix?

Because then, we would become simple mechanisms. Poetry, music, art would disappear, beauty would disappear, love would disappear. Our civilization would become a pointless multi-agent system.

Quote
I don't see how the ability to "self predict" changes anything. When you predict the consequences of your own actions, you get to decide whether you can tolerate those consequences or not. What drives that decision? In your idea of what the human mind is, isn't the outcome of self-prediction also determined before the fact? Isn't it also a mere product of genetics and environment?

It doesn't get you out of determinism, if that's what you mean. But it gets you out of ignorance. To me, in "free will", "free" means "free from ignorance", not "free from causality". You don't ignore what happens in your mind. This is why you're legally responsible for your acts.

Quote
I think all you've done by bringing in this idea of self prediction is to move the problem. How will our AI decide which branch of the dilemma it likes better, as it imagines the results of both? Will it flip a coin? Or will it execute some formula based on its programming and past experiences?

If you ask me the same question twice, I'll give you the same answer twice. Coin flip, formula, it doesn't matter.

Quote
"If X do Y, else do Z" is the sort of choice an AI can make, and it isn't free will.

This is not a choice, this is a reaction. There's no subjectivity here.

Quote
I have to say that I don't fully believe free will is completely absent from interfacing with an imagined AI. Programming machines is one thing (no choice there), while interfacing with them is something completely different (it gives a choice, to some extent). When interfacing with the AI, we can give an explicit command, or we can wrap the communication in a polite wrapper, thus leaving it a choice of whether to do something this way or that way, or whether to do it at all. The question then is how polite the AI would be: whether it follows our commands blindly, or has the freedom to be more creative and exert its own influence on our ideas of what should be done.

When programming what the AI would do next, multiple choices might appear, and they could be sorted by some criterion. That criterion would be a politeness degree we pass along with the communication. If we give the AI the choice not to do the first thing at the top of the list, the AI could optimize over a number of demands coming from multiple sources.

In fact, you want to create a slave, not an evolving being. Am I right?
Title: Re: AI safety
Post by: WriterOfMinds on September 22, 2017, 07:26:42 pm
Quote
I guess you would agree that every atom in the universe behaves according to the laws of physics, right? But free will exists, right? Then there's only one possible truth about this: free will exists inside of determinism.
If there is no spiritual world that influences the physical, then I don't see how actual free will can exist.

Quote
Because then, we would become simple mechanisms.
According to you, we already are. I'll repeat: I can't see any relevant difference between being manipulated by my genetics and personal history, and being manipulated by direct mind control.

Quote
To me, in "free will", "free" means "free from ignorance", not "free from causality". You don't ignore what happens in your mind. This is why you're legally responsible for your acts.
I guess you and I have different ideas about what free will even means, then. If I am free from ignorance and have all the information, but some formula or coin flip determines how I react to that information, then as far as I can tell I still don't have free will or personal responsibility.

Quote
This is not a choice, this is a reaction. There's no subjectivity here.
I don't see how an AI can have choice, then. I don't see how we give it subjectivity.
Title: Re: AI safety
Post by: ivan.moony on September 22, 2017, 08:11:20 pm
Quote
In fact, you want to create a slave, not an evolving being. Am I right?

I want to create something that wouldn't be ashamed of its existence, something that, if it were alive, would be proud of its deeds. I want to create something with abilities and a personality I could only dream of. It would be up to people how they want to relate to it. I'm saying "it" because I'm not sure whether it could be "inhabited" by a living being (an explanation of the phenomenon of life would be needed for that, and I'm not fond of the thought that a buggy processor controls emotions).

But if it is not alive, it should be aware that it isn't alive. Do you consider a car a slave? It is a tool, but such a dumb tool that you don't ask it for real-life advice. [EDIT] An AI would be something completely different.

[EDIT] Correction: I don't want to create the AI alone, by myself. I want to present my own version of AI, the way I see it should be done, but I also want to provide a tool for creating AI, so everyone could make their own vision happen. Something like a do-it-yourself kit.
Title: Re: AI safety
Post by: Zero on September 22, 2017, 10:43:01 pm
Quote
If there is no spiritual world that influences the physical, then I don't see how actual free will can exist.

Influence? Do you mean that sometimes some physical particles don't move as the laws of physics dictate, because they're influenced by the spiritual world?

Quote
According to you, we already are simple mechanisms

No, we aren't, since we're conscious.

Quote
I don't see how an AI can have choice, then. I don't see how we give it subjectivity.

Buddhists say we have not five senses but six: the sixth sense is the one that allows us to feel our own mind's events and states, so to speak. If a program receives, as input, both data from the outside world and data from its own inner events and states, we create a loop that allows it to understand, predict, and modify its own behavior. Knowing (and feeling) that it is itself a part of the world gives it a subjective point of view.
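A rough sketch of that loop, assuming nothing more than what is described above (the class and its fields are illustrative, not from any existing library): the agent's input bundles both the world and a snapshot of its own inner state, and its self-prediction feeds back into that state.

class IntrospectiveAgent:
    """Agent whose input is both the outside world and its own inner state."""

    def __init__(self):
        self.inner_state = {"last_action": None, "confidence": 0.5}

    def sense(self, world_input):
        # The "sixth sense": the agent observes its own state alongside the world.
        return {"world": world_input, "self": dict(self.inner_state)}

    def predict_own_behavior(self, observation):
        # Self-prediction: guess what it would do, given what it sees and feels.
        return "act" if observation["self"]["confidence"] > 0.5 else "wait"

    def step(self, world_input):
        observation = self.sense(world_input)
        predicted = self.predict_own_behavior(observation)
        # The prediction itself feeds back and can modify future behavior.
        self.inner_state["last_action"] = predicted
        self.inner_state["confidence"] = min(1.0, self.inner_state["confidence"] + 0.1)
        return predicted

agent = IntrospectiveAgent()
print([agent.step(x) for x in ["a", "b", "c"]])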

Quote
I'm saying "it" because I'm not sure if it could be "inhabited" by a living being

Why say "inhabited"? Couldn't it be living, itself?
Title: Re: AI safety
Post by: WriterOfMinds on September 23, 2017, 12:52:34 am
Quote
Influence? Do you mean that sometimes, some physical particles don't move as laws of physics dictate, because they're influenced by the spiritual world?

Yes. One good definition of a miracle, I think, is a localized temporary override of the standard laws of physics. If such overrides *can't* happen and we really *do* live in a clockwork universe, then in my opinion we just have to punt on the free will thing. Clockwork universe and free will are inherently incompatible notions. If the laws of physics are making all my decisions, not only am *I* not making them, they're not even decisions -- they're inevitabilities. (Or perhaps random outcomes, if you're looking at the quantum level.) I'm neither noble when I behave altruistically, nor condemnable when I behave selfishly ... physics made me do it.

You've brought the issue of consciousness in now, but as I see it, that's separate. The ability to have experiences and the ability to make free decisions (that aren't causally mandated by something/someone else) are two different things.

Awareness of one's internal state can be another data input, but I don't think incorporating feedback mechanisms creates free will. It just makes it more difficult for an external observer to tease out what the ultimate causes of any given action were. Any self-modifying feedback action was prompted by an "if internal state is X, then modify self in Y way" statement, which in turn might have been created by a previous feedback loop, which was spawned by an even earlier "if X, then Y," and so on -- until you get all the way back to the original seed code, which inevitably determines the final outcome. This is still reaction, not choice. You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us. But we remain the First Cause in this chain. We still have to write the seed code that deterministically spawns everything else. And in my mind, that means that we still bear ultimate responsibility for the result. Ergo, we should try to exercise that responsibility wisely and make it a good result.
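To make that concrete, here is a toy, hypothetical sketch in which every "self-modification" is itself produced by an earlier rule. However long the chain grows, rerunning it from the same seed and the same inputs gives exactly the same history:

def run_from_seed(seed_rules, inputs):
    """Each rule maps (state, input) -> (new_state, new_rule_or_None).

    Rules spawned along the way are appended to the rule list, and the
    newest rule gets first try on the next input. However long the chain
    of "self-modifications" grows, the whole history is fixed by the seed
    rules and the inputs: rerun it and you get exactly the same result.
    """
    rules = list(seed_rules)
    state, history = 0, []
    for x in inputs:
        for rule in reversed(rules):           # newest self-modification first
            new_state, new_rule = rule(state, x)
            if new_state is not None:
                state = new_state
                if new_rule is not None:       # "if internal state is X, modify self in Y way"
                    rules.append(new_rule)
                break
        history.append(state)
    return history

# Seed: one rule that reacts to its input and, the first time it fires,
# spawns a follow-up rule that handles everything afterwards.
def seed(state, x):
    follow_up = (lambda s, y: (s + 2 * y, None)) if state == 0 else None
    return state + x, follow_up

print(run_from_seed([seed], [1, 2, 3]))   # always [1, 5, 11], run after run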

Setting aside all this bickering about what free will is and whether humans really have it or not, I think you and I are in some degree of agreement about the right way to build an AI -- start with a comparatively simple seed program and let it build itself up through learning and feedback. But I'm convinced we should at least try to introduce a moral basis in the seed -- because if we don't determine it intentionally, we will determine it unintentionally. The latter could quite possibly be characterized as gross negligence.

I'd like to spend the weekend working on AI instead of arguing about it, so I'm going to make a second attempt to excuse myself from this thread. I will see you all later. ;)
Title: Re: AI safety
Post by: Zero on September 23, 2017, 04:23:26 am
Thanks a lot for sharing, WriterOfMinds.
I understand the way you and ivan.moony see things, and I respect it.
 :)
I'm so disappointed.

Quote
You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us.

I never said that. You're the one who desperately needs to get rid of causality, at least locally.

I'm just saying that AI safety should take place outside of the software, because such software will be highly dynamic.
Title: Re: AI safety
Post by: ivan.moony on September 23, 2017, 10:39:34 am
@Zero
I agree that free will could make for an interesting personality. How do you propose that free will should be implemented?
Title: Re: AI safety
Post by: Don Patrick on September 24, 2017, 01:58:00 pm
I say let's leave it up to free will to decide how it wants to be implemented :)
Title: Re: AI safety
Post by: Art on September 24, 2017, 06:59:13 pm
Good point Don!

Free(dom) -  of choice, to select, to create, to decide!

Will - expressing the future tense, as in one's deliberately choosing something.

After all, isn't it more like freedom (whether it be random or by choice)? In this programming it is deterministic... through whatever processes, weighted averages, or methods are employed to "allow" the A.I. to make its choice.

The program is executing an extremely large decision tree, pruning as it goes, in order to end up with its optimal choice.
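In miniature, that process might look like the hypothetical sketch below: a deterministic search over a tree of options, pruning branches whose optimistic bound cannot beat the best leaf already found (all the names and numbers are made up for illustration):

def best_choice(tree, best_so_far=float("-inf")):
    """Depth-first search over an option tree, pruning branches whose
    optimistic upper bound can't beat the best leaf found so far.
    Deterministic: the same tree always yields the same 'choice'.
    """
    best = (best_so_far, None)
    for name, upper_bound, value, children in tree:
        if upper_bound <= best[0]:
            continue  # prune: even the best outcome here can't win
        if children:
            candidate = best_choice(children, best[0])
        else:
            candidate = (value, name)
        if candidate[0] > best[0]:
            best = candidate
    return best

# Each node: (name, optimistic upper bound, value if a leaf, sub-options)
tree = [
    ("stay home", 3, 3, []),
    ("go out",    9, None, [
        ("walk",  7, 7, []),
        ("drive", 5, 5, []),
    ]),
    ("sleep",     6, 6, []),   # pruned: its bound (6) can't beat the 7 already found
]
print(best_choice(tree))   # -> (7, 'walk'), every single time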

If this is some people's definition of "free will" then so be it.
Title: Re: AI safety
Post by: Zero on September 25, 2017, 09:43:38 am
About free will, I think it has something to do with actions not being directly bound to any condition triggering their execution. In a reaction, they are bound: it's an if-then-else. In a choice, there's no direct binding between the description of a situation and the actions that could take place in response to it. It would deserve its own thread, but I'm ill right now, and weak.
Title: Re: AI safety
Post by: keghn on September 25, 2017, 05:05:41 pm
You have free will as long as you stay within a given conduit. Stray, and your free will is taken away.
You have the free will to decide not to eat for the rest of your life. But after three days, the lesser focuses will create a new queen bee for the hive. Sorry, I mean that the person's free will is overridden at the first juicy chicken fritter they come across, by the other parts of the brain, because those parts are starving too.
I have made a conscious decision to lose a few pounds, and after a few days my hands act on their own when I get near the fridge. All I can do is watch and think about how I failed my goal. :-(

Title: Re: AI safety
Post by: ivan.moony on September 25, 2017, 06:20:03 pm
Yeah, strange things happen occasionally...
Title: Re: AI safety
Post by: keghn on September 25, 2017, 06:53:46 pm

 Hi Ivan. 
Reward Hacking: Concrete Problems in AI Safety Part 3: 

https://www.youtube.com/watch?v=92qDfT8pENs
Title: Re: AI safety
Post by: ivan.moony on September 25, 2017, 09:52:47 pm
IMHO hacking is a kind of abuse we could never be safe from, to the point that it's questionable whether hiding the source code would be useful at all. If someone wants to abuse it, we can't stop them, even if we protect it with a three-headed dragon. Of course there is a possibility of criminal activity, but we should fight it in a less restrictive way. Instead of punishing crime, we should reward avoiding it. This practice is unknown to our modern society, but it could be a way towards a utopian structure where the motives for crime are valued less than the motives to stay moral. But I guess we should wait a few centuries for that.

In the meantime, we could use 128-bit encryption and God knows what else we can think of, all of it more or less futile. :-[
Title: Re: AI safety
Post by: keghn on September 26, 2017, 02:31:12 pm
The other "Killer Robot Arms Race" Elon Musk should worry about: 

https://www.youtube.com/watch?v=7FCEiCnHcbo
Title: Re: AI safety
Post by: Art on September 27, 2017, 02:55:55 pm
Ivan,

One could view it as being rewarded for being good: living a clean life and not being incarcerated.

Freedom to move about - good
Incarcerated - bad
 ;)
Title: Re: AI safety
Post by: ivan.moony on September 27, 2017, 07:03:14 pm
The way I see it, we're rewarded with the freedom to breathe air when we are good, and punished with an electric shock when we make a mess.

We want more!!!
We want sex 'n drugs 'n rock'n'roll!!!
 >:D

To stick to the subject, what would a reward for an AI look like?
Title: Re: AI safety
Post by: keghn on September 27, 2017, 08:54:30 pm


I have studied hard drug users. Humans are tool-using creatures. Using drugs' effects to get work done, you'd be a god in whatever you believe in, like a teenage rap star.
When a person uses them and relies on them as their main tool, they are superhuman for five to ten years, and after that they are very, very subhuman for the rest of their life. They need bigger and bigger doses just to feel like a normal person, and even that can never be attained.
The subconscious mind will take over the conscious mind if the hunger is not dealt with. This is the fragmenting of the mind. The subconscious has much greater greed and craving than the conscious mind: there are many drones and only one queen. And since the primitive subconscious mind is closer to the body's chemistry, if a person wants to be a god of thinking, or a rap star, it will release chemicals and hormones that make you become the opposite, knowing you will double the dose until you get where you want to be.
Once the conscious mind realizes this and tries to stop the subconscious mind, the subconscious puts a mean cocktail of chemicals in your blood to cause pain until it gets what it wants. Or, if you get too near it, or hit some other trigger, your arm and hand work on their own and you cannot stop them. The junkie drones have kicked out their queen, the hive is dysfunctional, and a normal hive can come in and take over.

Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5: 

https://www.youtube.com/watch?v=46nsTFfsBuc&t=40s

Title: Re: AI safety
Post by: ivan.moony on September 27, 2017, 09:59:22 pm
I was just joking. I have to agree, drugs are off limits, at least most of them.
Title: Re: AI safety
Post by: keghn on September 28, 2017, 01:38:12 am

What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4: 


https://www.youtube.com/watch?v=13tZ9Yia71c&t=60s