AI safety

  • 62 Replies
  • 18363 Views

ivan.moony

Re: AI safety
« Reply #15 on: September 19, 2017, 05:39:53 am »
Quote from: ivan.moony
My best shot is: "When you lose your temper, and then try not to do too much damage." But I wouldn't apply it to a machine that could waste anyone with a snap of a metal finger; I'd reserve insanity just for harmless living beings.

Quote from: WriterOfMinds
Are you implying that violence only arises when people are in a state of deranged anger? I certainly don't think that's true. What if an AI could make a calm, considered decision to attack someone? What if it could back that decision up with moral reasoning equivalent to that of the best human philosophers? That would be far from insanity.

You are right, that would be far from insanity. Putting a bullet right between the eyes, planned in cold blood, planned behind the victim's back... that is my definition of being mean. The moment you give up searching for a peaceful solution is the moment violence wins. When approaching that moment, you are about to run out of peaceful solutions, which pushes you into a panic. And then the panic is your only justification for what you are about to do. Given enough time, an AI unit could find a desirable solution. But if there is not enough time, should the AI unit pick one of the violent solutions that came up in that too-short decision window? I guess that is where different opinions arise, depending on the type of person you are.

But something else bugs me big time. What if an AI sees me catching fish? Should it protect the fish? We are all sinners; that is the thing about this planet. We all stink here, starting with the Dalai Lama and ending with a little amoeba in a glass of water. If someone set out to impose order on this ecosystem, where would it end? So when it comes to food, it is me or them. It is a heavy decision, but I decide: I live, they die. Yet what would my decision be between me and other people, my friends, or close family? I decide in a moment: they live, I die. Does that mean that some lives are more important than others? Don't try to justify my first decision, because it is a big mistake. I should die in the first case too. But I don't, and that is what makes me a sinner. And what to do with sinners? A bullet between the eyes? Well, try and see if I run away. You'd be surprised.

The bottom line is that we are all sinners, without exception. If you want to put things right on this planet by force, you might literally end up with the mass extinction of life itself. My proposal is to label violence as something that is above us, something we don't understand, and let it be, trying not to answer with violence of our own. Words are the only weapon I choose to fight with; otherwise I don't see a happy ending, especially not with something as powerful as an AI could be.
« Last Edit: September 19, 2017, 07:07:23 am by ivan.moony »


WriterOfMinds

Re: AI safety
« Reply #16 on: September 19, 2017, 08:30:21 am »
Quote from: ivan.moony
Given enough time, an AI unit could find a desirable solution. But if there is not enough time, should the AI unit pick one of the violent solutions that came up in that too-short decision window?

You are claiming that a non-violent solution always exists and can always be found/formulated given a sufficiently long search time. I think that's a serious unfounded assumption.

Your later paragraphs just confuse me. You seem to be saying that you think it's wrong for you to eat fish, but you do it anyway -- and therefore, you're afraid that a truly ethical AI would kill you. I'm not seeing the dilemma. If you sincerely think eating fish is a deed deserving of death, then quit doing it for heaven's sake. When I speak of defensive violence, you say you want to abjure that ... yet you appear insistent on continued aggressive violence (toward fish). Can you explain to me how that isn't hypocritical? If you're so committed to doing something you consider unethical that you would keep doing it even if an AI said "Stop or I'll beat you up," do you really care about ethics at all?

I'm afraid I can't buy your slippery slope argument, overall. Many humans are neither pacifist nor omnicidal. They are capable of making distinctions between a starving eagle who eats a fish and a serial killer who enjoys the pain of others. I think such distinctions are valid, and I think an AI with moral reasoning abilities could make them as well.

If you're trying to say that the world is set up such that it's physically impossible to avoid sinning, I don't buy that either. Sin is a choice to do something wrong. If negative actions are inevitable and there is literally no choice, there can be no sin. I agree that all humans do choose sin from time to time (I'd have to argue about the amoeba ... I don't think it has the capacity). But if you and I can agree that we'd rather not sin, we should be *grateful* for a guardian AI's attempts to stop us. We should be happy to fear harming the weak, because they have a protector ... we should be glad for anything that reduces our temptation and helps us make the better choice.

In short, if eating fish is justifiable, a protective AI should be able to figure that out and refrain from interfering. If it's not justifiable, then you need to stop, and after you stop a protective AI should hold no terrors for you. Recognition that you're a sinner is supposed to lead to repentance, not a statement of "well I'm going to keep on hurting others, but please don't hurt me back, okay?"

(I'm basically vegan, by the way.  I gave up eating dead flesh years ago. A world ruled by an AI that compels humans to stop killing animals isn't something I would even put on my dystopia list.)


Zero

Re: AI safety
« Reply #17 on: September 19, 2017, 09:48:16 am »
Quote from: ivan.moony
The difference is that "someone" is connected to the cell (someone feels its state in a spiritual sense), while "no one" is connected to the byte (again in a spiritual sense).

Connected in a spiritual sense? You mean like in Narnia? Where is the spiritual network socket? The hippocampus?

Who is that "someone" you're talking about? Isn't AI research all about creating this very "someone"? Or maybe you believe it's impossible to do, because only God can do such a thing, and that is why you can't see an AI as more than an automaton.


ivan.moony

Re: AI safety
« Reply #18 on: September 19, 2017, 12:36:18 pm »
@WriterOfMinds
There is a difference between people who justify killing in order to survive and people who would rather die than kill, no matter the reason. I think we do not fall into the same category, at least not at the level of justification.

It is always something on this planet. If it isn't a man, then it is a cow. If it isn't a cow, then it is a fish. If it isn't a fish, then it is a plant (which is also alive, in its own way). If it isn't a plant, then it is a bug on the road that we step on. It is simply impossible to get by without killing. And if I think that my life isn't worth more than the life I am taking away, then I have a serious problem with staying alive.

If we explain to an AI that each living being should consider its own life more important than the life of another living being, we'd have a basis for constructing an ethical theory about when it is justified to kill or cause pain. But using the terms "ethics", "killing", and "causing pain" in the same sentence really terrifies me. Are we really trying to answer the questions "when is it ethical to kill?" or "when is it ethical to cause pain?"

I'm a murderer, and I'll be a murderer until the day I die; you can't deny that fact, and you can't persuade me that my life is the most important life in the world and that because of it I should kill rather than die and let live. I have a choice: to live small as a thief, or to die big as a decent being. Unfortunately, I chose to live, but I live trying to minimize the damage I do. That's the least I can do.

If we don't want to close our eyes to violence, then we are all in line for that bullet between the eyes; it is a closed circle. But this planet lets us get away without that bullet. It is about something bigger than humans; most of us live graceful lives, even if we "don't deserve it". It might be something about forgiveness, but who can know all the mysteries of life? I learned a lesson from that: I choose not to use force. You can choose whatever you want; who am I to tell you what to do and think?

Quote from: WriterOfMinds
I'm basically vegan, by the way
I was a vegetarian a couple of times in my life. Each time I started as a vegan, couldn't stand the hunger, moved on to eggs and milk, still couldn't stand the hunger, then moved on to fish; I was still hungry all the time, and finally returned to meat after a year or so of being hungry. I'm sorry, I should have a stronger character.


Quote from: Zero
Isn't AI research all about creating this very "someone"?

I think biology could be the branch of science to investigate artificial life, but for now I'm considering only a simulation of intelligence, without a living inhabitant.
« Last Edit: September 19, 2017, 01:14:35 pm by ivan.moony »


Art

Re: AI safety
« Reply #19 on: September 19, 2017, 12:42:48 pm »
Eat what you want, vote for whom you want, pray to whom you want, but don't force your beliefs on others, nor threaten or hurt them if they don't always agree with you. Bring harm to no one, and enjoy the things you like in moderation.

Can we say Utopia? I thought we could....
In the world of AI, it's the thought that counts!


ivan.moony

Re: AI safety
« Reply #20 on: September 19, 2017, 01:51:33 pm »
I'm sorry, I didn't mean to argue. I see that some of us consider my posts offensive, so I give up. You are right, and I am terribly wrong. Force should be used sometimes, and it is a perfectly ethical thing to do. Doing nothing only helps the oppressor, and that is not acceptable behavior.


Zero

Re: AI safety
« Reply #21 on: September 19, 2017, 02:46:50 pm »
No, no, ivan.moony, talking with you is a real pleasure. I don't think anyone here is feeling offensed by your posts.

I think we should accept what we are. Lions eat meat, and there's nothing wrong with that, right? Humans are sometimes violent, full of hatred, or cruel. It's OK to balance the equation: with peaceful behavior, with love, and, when necessary, with prison. There's room for Gandhi-style resistance, and also for force.

Why not let AIs choose their own style? I stick to the machine-citizenship concept, because it gives room for different solutions to co-exist.


keghn

Re: AI safety
« Reply #22 on: September 19, 2017, 03:28:14 pm »
We have internal anti-rewards and positive rewards. Some anti-rewards do not go away, and we will do whatever it takes to get rid of them, at any cost.
Does an unhappy deer throw itself to a wolf? Are predators doing a service?
Does a squirrel really want to make it to the other side of the road?
The real evil shows when this is not the way the farmer thins his herd for butchering.
Will an AI find the unhappy before they do something rash?
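In toy form, the mechanism might look like this (a hypothetical sketch; the agent, actions, and numbers are all made up): an anti-reward that never decays eventually outweighs every ordinary reward, so the agent takes whatever action removes it, at any cost.

```python
# Hypothetical toy model of the point above: an agent carries ordinary
# rewards plus a persistent anti-reward ("pain") that never decays.
# Once accumulated pain is large enough, the action that ends the
# pain wins, no matter how costly it is otherwise.

def choose_action(actions, pain_level):
    """Pick the action with the best net value under the current pain.

    Each action is (name, reward, pain_relief); pain_relief is the
    fraction of the persistent anti-reward the action removes.
    """
    def net_value(action):
        _, reward, pain_relief = action
        return reward + pain_relief * pain_level  # relief scales with pain
    return max(actions, key=net_value)

actions = [
    ("forage", 5.0, 0.0),   # ordinary positive reward, no relief
    ("rest",   2.0, 0.1),   # small reward, small relief
    ("rash",  -50.0, 1.0),  # hugely costly, but ends the pain entirely
]

for pain in (1.0, 10.0, 100.0):
    print(pain, choose_action(actions, pain)[0])
# Prints "forage" while pain is small, but "rash" once the
# undecaying anti-reward has grown large enough.
```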


WriterOfMinds

Re: AI safety
« Reply #23 on: September 19, 2017, 03:39:51 pm »
@ivan.moony
Offensive? Who said anything about being offended? I thought we were having a debate. I thought that was one of the things forums are for. If anything, I'm upset by the fact that you seem prepared to change your tune just because other people are *offended* by what you said, and not because we've truly convinced you that you're wrong  ::)

Regarding your most recent response to me: it has become clear that we are starting from pretty different premises. I do not, in fact, think that all lives are equal. You could probably call me a graduated sentiocentrist; I only really care about conscious life, and I am prepared to place greater value on lives that appear to have a higher consciousness level. So I eat plants instead of animals, and I am okay with doing something like giving anti-parasite medication to my cats. I am also not a strict utilitarian, so I do not think that taking a life by pure accident is equivalent to deliberate murder -- whether the accident is stepping on a bug, or inadvertently striking a human pedestrian with one's car. I am capable of living in a way that is consistent with my moral principles; you, seemingly, cannot satisfy yours with anything but your own death.
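In toy form (an illustrative sketch only; the species list and numbers are invented), a graduated-sentiocentrist value function just weights each life by an estimated consciousness level:

```python
# Illustrative sketch of "graduated sentiocentrism": moral weight
# grows with an (invented) estimate of consciousness level.

CONSCIOUSNESS = {   # hypothetical scores, 0.0 = none, 1.0 = full
    "amoeba": 0.0,
    "fish":   0.3,
    "cat":    0.6,
    "human":  1.0,
}

def moral_weight(being: str) -> float:
    # Zero weight for non-conscious life; strictly increasing above that.
    return CONSCIOUSNESS.get(being, 0.0) ** 2

print(sorted(CONSCIOUSNESS, key=moral_weight))
# ['amoeba', 'fish', 'cat', 'human'] -- higher consciousness, higher weight
```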

Of course, there are many competing moral philosophies in this world, and it is possible that these premises of mine are incorrect. If we built an AI that was free of arbitrary behavior restrictions and capable of moral reasoning (which is what I want to do), then there's a chance it would conclude that I am wrong. Perhaps it would discover that humans have in fact been the monsters all along, and it is morally obligatory to wipe them out so that other life can flourish. If that were to happen ... I suppose I'd take my lumps. I'd hate for an AI to bring humans to extinction because of a bug in the code, but if it actually reaches the sound conclusion that our death is the right thing, then I for one am prepared to die. Righteousness and love for others are more important to me than survival. Of course this is all very theoretical and I might feel differently in the actual event, but I hope not.

I still can't help finding your arguments a bit self-contradictory. You admit that you can't measure up to your own morality, because it obligates you to die, and you'd rather live ... so your conclusion is that you're going to disregard morality, at least on that particular point. But you've been trying to say that if an AI sees a murderer about to kill a victim, and the murderer won't be stopped by anything but a bullet, the AI should be bound to let the victim die. This feels inconsistent. If you aren't willing to be bound by your own moral system, why should anyone/anything else be bound by it (AI included)?

I've got to concentrate on my work today, so I probably won't respond again, but this has been stimulating.

@Art
Quote from: Art
Eat what you want ... but don't force your beliefs on others, nor threaten or hurt them if they don't always agree with you.
I can't help finding statements like this self-contradictory as well, given the fact that many people in this world want to eat the bodies of sentient beings. Aren't they forcing their beliefs on animals in a threatening/hurtful way every time they eat meat? Then who are they to talk?  ;)


ivan.moony

Re: AI safety
« Reply #24 on: September 19, 2017, 04:43:02 pm »
Quote from: WriterOfMinds
@ivan.moony
Offensive? Who said anything about being offended? I thought we were having a debate. I thought that was one of the things forums are for. If anything, I'm upset by the fact that you seem prepared to change your tune just because other people are *offended* by what you said, and not because we've truly convinced you that you're wrong  ::)
Sorry, you sounded a bit upset to me when answering my posts, and I'd hate to see that very special byte in your mind going through negative values ;). But if it's OK with you, I'd stick to my initial attitude.

Good luck with your work.

[EDIT] And just for the record, I'm not afraid of dying; I want to die right now, this moment. I live only because the people I care about want me to live. Personally, I can't wait for the day I'll die and be liberated from my misery.
« Last Edit: September 19, 2017, 05:44:24 pm by ivan.moony »


Zero

Re: AI safety
« Reply #25 on: September 19, 2017, 05:38:13 pm »
Sorry for saying "offensed" instead of "offended". BTW, I'd love to be corrected when my English sounds so bad.

Morality needs a bit of rigidity. But are we working on the right questions? I mean, of course we have to think about the consequences of AI development, but... I don't know.


ivan.moony

Re: AI safety
« Reply #26 on: September 19, 2017, 06:01:16 pm »
Yes, we keep stressing the same subject. To sum it up: WriterOfMinds would mix using force with being ethical at the same time, and I think she's not the only one. My concern is that that kind of AI would wipe this planet clean once it sees the way the planet works; only the plants would be spared. So I propose a purely pacifist AI, just in case, backed by the vibe of forgiveness we all experience from Mother Nature. But WriterOfMinds is relentless.
« Last Edit: September 19, 2017, 07:09:55 pm by ivan.moony »


ivan.moony

Re: AI safety
« Reply #27 on: September 19, 2017, 09:55:24 pm »
I'm thinking about the justification of using force in extreme situations. What would rules that allow the use of force look like? I hope not like the common state laws that all countries rely upon, because those laws are one big crazy cabbage, a dragon with thirteen heads: when you chop off one head, three new ones grow in its place.

All I know is that it should be simple enough that when you look at it, you can say: yeah, that's it, no doubt, let's do it. Is this even possible?
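If such rules are possible at all, they might look like a handful of inspectable conditions rather than a law book. A minimal sketch (every rule name and field here is hypothetical):

```python
# Hypothetical sketch: force is permitted only if every condition in a
# short, human-readable list holds. The point is inspectability -- the
# whole "law" fits on one screen, so you can look at it and say "that's it".

RULES = [
    ("imminent harm to another",   lambda s: s["threat_imminent"]),
    ("no peaceful option remains", lambda s: not s["peaceful_options"]),
    ("force used is the minimum",  lambda s: s["force_is_minimal"]),
]

def force_permitted(situation):
    """Return (verdict, failed_rules) so every decision is auditable."""
    failed = [name for name, test in RULES if not test(situation)]
    return (not failed, failed)

ok, failed = force_permitted({
    "threat_imminent":  True,
    "peaceful_options": ["negotiate"],  # a peaceful option still exists
    "force_is_minimal": True,
})
print(ok, failed)  # False ['no peaceful option remains']
```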


Zero

Re: AI safety
« Reply #28 on: September 20, 2017, 09:42:12 am »
Don't you trust justice and democracy, at least on a conceptual level? Why should AI makers interfere with these mechanisms? That's not our job!

Hey, I didn't know you were a woman, WriterOfMinds! In French, it's easier to identify the gender of a speaker than in English. Are there other women on this forum?


Art

Re: AI safety
« Reply #29 on: September 21, 2017, 03:54:19 pm »
Quote from: Zero
Don't you trust justice and democracy, at least on a conceptual level? Why should AI makers interfere with these mechanisms? That's not our job!

Hey, I didn't know you were a woman, WriterOfMinds! In French, it's easier to identify the gender of a speaker than in English. Are there other women on this forum?

Does the gender of a person on this forum really matter? There have been quite a few women over the years, and all are welcome, no matter their lot or choices in life. ;)
In the world of AI, it's the thought that counts!

 

