Recent Posts

Pages: [1] 2 3 ... 10
1
General AI Discussion / Re: AI safety
« Last post by ivan.moony on Today at 06:01:16 pm »
Yes, we are stressing the subject all the time. To sum it up, WriterOfMinds would mix using force and being ethical at the same time, and I think she's not the only one. My concern is that that kind of AI would wipe out this planet when it sees the way it works. So I propose the purely pacifist AI, just in case, reinforced by the vibe of forgiveness we all experience with Mother Nature. But WriterOfMinds is relentless; she'd put a bullet between someone's eyes if they so much as farted. :uglystupid2:
2
General AI Discussion / Re: AI safety
« Last post by Zero on Today at 05:38:13 pm »
Sorry for saying "offensed" instead of "offended". BTW, I'd love to be corrected when my English sounds that bad.

Morality needs a bit of rigidity. But are we working on the right questions? I mean, of course we have to think about the consequences of AI development, but... I don't know.
3
General AI Discussion / Re: AI safety
« Last post by ivan.moony on Today at 04:43:02 pm »
Quote
@ivan.moony
Offensive? Who said anything about being offended? I thought we were having a debate. I thought that was one of the things forums are for. If anything, I'm upset by the fact that you seem prepared to change your tune just because other people are *offended* by what you said, and not because we've truly convinced you that you're wrong  ::)
Sorry, you sounded a bit upset to me when answering my posts, and I'd hate to see that very special byte in your mind going through negative values  ;) . But if it's okay with you, I'll stick to my initial attitude.

Good luck with your work.

[EDIT] And just for the record, I'm not afraid of dying; I want to die right now, this moment. I live only because the people I care about want me to live. Personally, I can't wait for the day I'll die and be liberated from my misery.
4
General AI Discussion / Re: AI safety
« Last post by WriterOfMinds on Today at 03:39:51 pm »
@ivan.moony
Offensive? Who said anything about being offended? I thought we were having a debate. I thought that was one of the things forums are for. If anything, I'm upset by the fact that you seem prepared to change your tune just because other people are *offended* by what you said, and not because we've truly convinced you that you're wrong  ::)

Regarding your most recent response to me: it has become clear that we are starting from pretty different premises. I do not, in fact, think that all lives are equal. You could probably call me a graduated sentiocentrist; I only really care about conscious life, and I am prepared to place greater value on lives that appear to have a higher consciousness level. So I eat plants instead of animals, and I am okay with doing something like giving anti-parasite medication to my cats. I am also not a strict utilitarian, so I do not think that taking a life by pure accident is equivalent to deliberate murder -- whether the accident is stepping on a bug, or inadvertently striking a human pedestrian with one's car. I am capable of living in a way that is consistent with my moral principles; you, seemingly, cannot satisfy yours with anything but your own death.

Of course, there are many competing moral philosophies in this world, and it is possible that these premises of mine are incorrect. If we built an AI that was free of arbitrary behavior restrictions and capable of moral reasoning (which is what I want to do), then there's a chance it would conclude that I am wrong. Perhaps it would discover that humans have in fact been the monsters all along, and it is morally obligatory to wipe them out so that other life can flourish. If that were to happen ... I suppose I'd take my lumps. I'd hate for an AI to bring humans to extinction because of a bug in the code, but if it actually reaches the sound conclusion that our death is the right thing, then I for one am prepared to die. Righteousness and love for others are more important to me than survival. Of course this is all very theoretical and I might feel differently in the actual event, but I hope not.

I still can't help finding your arguments a bit self-contradictory. You admit that you can't measure up to your own morality, because it obligates you to die, and you'd rather live ... so your conclusion is that you're going to disregard morality, at least on that particular point. But you've been trying to say that if an AI sees a murderer about to kill a victim, and the murderer won't be stopped by anything but a bullet, the AI should be bound to let the victim die. This feels inconsistent. If you aren't willing to be bound by your own moral system, why should anyone/anything else be bound by it (AI included)?

I've got to concentrate on my work today, so I probably won't respond again, but this has been stimulating.

@Art
Quote
Eat what you want ... but don't force your beliefs on others nor threaten / hurt them if they don't always agree with you.
I can't help finding statements like this self-contradictory as well, given the fact that many people in this world want to eat the bodies of sentient beings. Aren't they forcing their beliefs on animals in a threatening/hurtful way every time they eat meat? Then who are they to talk?  ;)
5
General AI Discussion / Re: outline from gradient mask
« Last post by keghn on Today at 03:32:08 pm »
6
General AI Discussion / Re: AI safety
« Last post by keghn on Today at 03:28:14 pm »
We have internal anti-rewards and positive rewards. Some anti-rewards do not go away, and we will do whatever it takes to get rid of them, at any cost.
Does an unhappy deer throw itself to a wolf? Are predators doing a service?
Does a squirrel really want to make it to the other side of the road?
Then the real evil is seen when this is not the way the farmer thins his herd for butchering.
Will AI find the unhappy before they do something rash?
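The "at any cost" dynamic keghn describes could be sketched, purely as an illustration (every name below is my own invention, not anyone's actual design), as an agent whose lingering anti-rewards dominate action selection, so that ordinary reward-seeking only resumes once they are cleared:

```python
def choose_action(actions, reward_estimates, remaining_anti_reward):
    """Hypothetical sketch: pick an action for an agent with persistent
    anti-rewards. While any anti-reward remains, the agent picks whatever
    action leaves the least of it behind -- "at any cost", i.e. ignoring
    how much positive reward it gives up. Only with no anti-reward left
    does it fall back to ordinary reward maximization."""
    if remaining_anti_reward:
        return min(actions, key=lambda a: remaining_anti_reward.get(a, 0.0))
    return max(actions, key=lambda a: reward_estimates.get(a, 0.0))

# Example: "risky_fix" has the worst ordinary reward, yet it is chosen
# because it is the only action that clears the outstanding anti-reward.
actions = ["rest", "flee", "risky_fix"]
rewards = {"rest": 1.0, "flee": 0.5, "risky_fix": -2.0}
pain = {"rest": 5.0, "flee": 3.0, "risky_fix": 0.0}
print(choose_action(actions, rewards, pain))  # risky_fix
print(choose_action(actions, rewards, {}))    # rest
```

Nothing deep, but it makes the point concrete: an agent wired this way can take an action that looks plainly self-destructive from the outside.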
7
General AI Discussion / Re: AI safety
« Last post by Zero on Today at 02:46:50 pm »
No no, ivan.moony, talking with you is a real pleasure. I don't think anyone here is feeling offensed by your posts.

I think we should accept what we are. Lions eat meat, and there's nothing wrong with that, right? Humans are sometimes violent, full of hatred, or cruel. It's okay to balance the equation: with peaceful behavior, with love, and, when necessary, with prison. There's room for Gandhi-style resistance, and also for force.

Why not let AIs choose their own style? I stick to the machine-citizenship concept, because it gives room for different solutions to co-exist.
8
General AI Discussion / Re: AI safety
« Last post by ivan.moony on Today at 01:51:33 pm »
I'm sorry, I didn't mean to argue. I see that some of us consider my posts offensive, so I give up. You are right, and I am terribly wrong. Force should be used sometimes, and it is a perfectly ethical thing to do. Doing nothing only helps the oppressor, and that is not acceptable behavior.
9
General Chatbots and Software / Grats to SquareBear
« Last post by Freddy on Today at 12:58:18 pm »
Loebner Prize 2017 winner - three times now, yes?

Well done Steve :)

Dr Wallace made a post over on Chatbots.org : https://www.chatbots.org/ai_zone/viewthread/3129/
10
General AI Discussion / Re: AI safety
« Last post by Art on Today at 12:42:48 pm »
Eat what you want, vote for whom you want, pray to whom you want but don't force your beliefs on others nor threaten / hurt them if they don't always agree with you. Bring harm to no person and enjoy things you like in moderation.

Can we say Utopia? I thought we could....
