ETHICS

  • 40 Replies
  • 1875 Views
*

AgentSmith

  • Bumblebee
  • **
  • 37
Re: ETHICS
« Reply #15 on: October 19, 2018, 12:26:31 pm »
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS: Do not attempt or accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

The initial and most important steps of AGI development will definitely be based on learning from demonstrations, where the AI will observe the behavior of humans, learn from it and (hopefully, eventually) try to reproduce it. This means that the ethics of humans will have a high impact on the ethics of future AGI. However, at this point and in this context I am not really sure whether this is good, bad, neutral or something else...

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *********************
  • Deep Thought
  • *
  • 5109
Re: ETHICS
« Reply #16 on: October 20, 2018, 04:13:50 am »
Yes, by all means, Do as we (humans) say, not as we do!  O0
In the world of AI, it's the thought that counts!

*

DemonRaven

  • Trusty Member
  • ********
  • Replicant
  • *
  • 604
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: ETHICS
« Reply #17 on: October 21, 2018, 07:47:43 am »
Quote
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS: Do not attempt or accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

The initial and most important steps of AGI development will definitely be based on learning from demonstrations, where the AI will observe the behavior of humans, learn from it and (hopefully, eventually) try to reproduce it. This means that the ethics of humans will have a high impact on the ethics of future AGI. However, at this point and in this context I am not really sure whether this is good, bad, neutral or something else...

lol are you really sure you want an AI/robot to imitate a human? Microsoft did not have much luck with that lol
So sue me

*

AgentSmith

  • Bumblebee
  • **
  • 37
Re: ETHICS
« Reply #18 on: October 22, 2018, 06:02:13 am »
Quote
lol are you really sure you want an AI/robot to imitate a human? Microsoft did not have much luck with that lol

As it seems to be the only feasible way to get to AGI...yes indeed.

*

Zero

  • Trusty Member
  • ********
  • Replicant
  • *
  • 737
    • Thinkbots are free
Re: ETHICS
« Reply #19 on: October 22, 2018, 11:01:42 am »
The first thing that comes to mind is the impossibility of defining exactly the words we would use to write the rules an AI should follow. But there are a lot more issues. First, how do we know that rule X or rule Y is a good one? We don't. Even if 90% of mankind could agree on something, they could be wrong (but what do 90% agree on anyway?).

Now, imagine we do define words exactly somehow, and we do hardwire rules into an AI's mind. We still have a problem. Being intelligent is, among other things, being able to see things differently, to evolve an understanding, to redefine words and ideas. It's like trying to teach water not to flow in this or that direction. Intelligence is, by all means, a wild thing.

In my opinion, a machine is either programmed and constrained by rules, in which case it is potentially dangerous because it is controlled by humans (who are dangerous), or it is free, as in 'free will', in which case it has to be treated for what it is: a citizen, with rights and duties. Citizenship, lawyers, trials, etc. That's the only way to go.
Thinkbots are free, as in 'free will'.

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1114
  • Look into my eyes! WOAH!
    • YouTube
Re: ETHICS
« Reply #20 on: October 22, 2018, 12:44:24 pm »
As a guiding ethic… perhaps some kind of self-fulfilling paradox…

Treat others as you wish to be treated?

 :)
It thunk... therefore it is!... my project page.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *********************
  • Deep Thought
  • *
  • 5109
Re: ETHICS
« Reply #21 on: October 22, 2018, 12:49:50 pm »
Then that would be the "Golden Rule".  O0

You (we) speak of the ASI...need there be just one or will there be several? Who decides?

Will the ASI of one country or state decide the ASI of a different one is incorrect or flawed?

It could be a whole new game of thrones taking place! Just a thought... :knuppel2:
In the world of AI, it's the thought that counts!

*

ivan.moony

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1120
    • Some of my projects
Re: ETHICS
« Reply #22 on: October 22, 2018, 12:50:34 pm »
Quote
As a guiding ethic… perhaps some kind of self-fulfilling paradox…

Treat others as you wish to be treated?

 :)

Robot, plug everyone into 220 V AC/DC!!!  :2funny:
Dream big. The bigger the dream is, the more beautiful place the world becomes.

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1114
  • Look into my eyes! WOAH!
    • YouTube
Re: ETHICS
« Reply #23 on: October 22, 2018, 01:45:47 pm »
Quote
Will the ASI of one country or state decide the ASI of a different one is incorrect or flawed?

I think as long as emotional intelligence is kept out of the mix, two ASI would always eventually agree. There is no escaping the nature of pure provable logic: each would argue its case using logic, and the other would have no disagreement because it's pure logic. If one's logical premise is flawed due to a lack of information, they will exchange the relevant knowledge to reach a logical compromise/agreement.
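That idea can be caricatured in a few lines. This is a deliberately toy sketch (the `reconcile` function and the example facts are invented for illustration): once both reasoners hold the same premise set, any disagreement that stemmed only from missing information disappears.

```python
def reconcile(knowledge_a: set[str], knowledge_b: set[str]) -> tuple[set[str], set[str]]:
    """Toy model: two reasoners freely exchange facts, so each ends up
    reasoning from the union of both premise sets."""
    shared = knowledge_a | knowledge_b
    return shared, shared

# One ASI knows the load limit, the other knows the truck's weight;
# after the exchange both reason from identical premises.
a = {"bridge load limit is 10t"}
b = {"truck weighs 12t"}
a2, b2 = reconcile(a, b)
print(a2 == b2)  # True
```

Of course, this only models disagreements of fact; whether two ASI with different goals would also converge is exactly what the thread is debating.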

Quote
one or will there be several? Who decides?

Given the above, no matter how many ASI machines exist, if they are allowed to exchange information freely they can all be considered just one.

Quote
Robot, plug everyone into 220 V AC/DC!

Haha.. I’d honestly not thought of that… as he stuffs a sandwich into his PC’s CD drive.

 :)
It thunk... therefore it is!... my project page.

*

DemonRaven

  • Trusty Member
  • ********
  • Replicant
  • *
  • 604
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: ETHICS
« Reply #24 on: October 22, 2018, 06:50:17 pm »
Quote
lol are you really sure you want an AI/robot to imitate a human? Microsoft did not have much luck with that lol

Quote
As it seems to be the only feasible way to get to AGI...yes indeed.

Well, I was half serious. You might want to watch this video and think again. Training them takes time, but you get a better product.
So sue me

*

ivan.moony

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1120
    • Some of my projects
Re: ETHICS
« Reply #25 on: October 22, 2018, 07:24:54 pm »
Quote
one or will there be several? Who decides?

Given the above, no matter how many ASI machines exist, if they are allowed to exchange information freely they can all be considered just one.

Quote
Robot, plug everyone into 220 V AC/DC!

Haha.. I’d honestly not thought of that… as he stuffs a sandwich into his PC’s CD drive.

 :)

There's always a workaround. In this case, a robot should consider what it would want if it were in the place of the observed living being (that is, if it had the same needs). The key word is "want", which should somehow be represented in the robot's knowledge base. Once the robot detects our wishes, it may choose whether to go along with them or not.

The problem arises when contradictory wishes occur. Contradiction is something that can be detected by formal logic, and I don't see another way than to somehow build this logic into the decision mechanism if we want an ethics-aware machine. If logic can be learned by neural networks, that could be a way to go without hard-coding logic rules. But one way or another, contradictions should be detectable.
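A minimal sketch of that last point (the representation and all names here are hypothetical; real wish modelling would be far richer): represent each wish as a proposition plus a desired truth value, and flag any proposition that two parties want in opposite states.

```python
def find_contradictions(wishes: dict[str, set[tuple[str, bool]]]):
    """Return (proposition, who wants it true, who wants it false) conflicts."""
    seen: dict[str, dict[bool, list[str]]] = {}
    for agent, agent_wishes in wishes.items():
        for prop, wanted in agent_wishes:
            seen.setdefault(prop, {True: [], False: []})[wanted].append(agent)
    return [
        (prop, sides[True], sides[False])
        for prop, sides in seen.items()
        if sides[True] and sides[False]
    ]

wishes = {
    "alice": {("window_open", True)},
    "bob": {("window_open", False), ("music_on", True)},
}
print(find_contradictions(wishes))  # [('window_open', ['alice'], ['bob'])]
```

This only catches direct negation; detecting wishes that conflict through their consequences would need actual inference, which is where the neural-network question above comes in.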
« Last Edit: October 22, 2018, 08:18:58 pm by ivan.moony »
Dream big. The bigger the dream is, the more beautiful place the world becomes.

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *************
  • Transformer
  • *
  • 2173
  • First it wiggles, then it is rewarded.
Re: ETHICS
« Reply #26 on: October 22, 2018, 07:54:31 pm »
"The key word is "want", which should be somehow represented in robot's knowledge base."

A dog's favorite word!
Emergent

*

Hopefully Something

  • Electric Dreamer
  • ****
  • 113
  • So where are these cookies?
Re: ETHICS
« Reply #27 on: October 22, 2018, 09:19:56 pm »
"The key word is "want", which should be somehow represented in robot's knowledge base."

A dog's favorite word!

I think a dog is a prime example of an NGI which can adequately learn the ethics expected of it. And it can learn all this information just through body language! Think of the bandwidth, or compression, or just the low data transfer, required to impart such a complex and complete understanding. If we build a solid foundation like this, we can be more confident that the towers of intellect of a superintelligence won't come crashing back down to smite its creators.

*

AgentSmith

  • Bumblebee
  • **
  • 37
Re: ETHICS
« Reply #28 on: October 22, 2018, 09:40:27 pm »
Quote
lol are you really sure you want an AI/robot to imitate a human? Microsoft did not have much luck with that lol

Quote
As it seems to be the only feasible way to get to AGI...yes indeed.

Quote
Well, I was half serious. You might want to watch this video and think again. Training them takes time, but you get a better product.

Better in what sense? Solving a specific task with high precision, like the products of Boston Dynamics? That has little to do with AGI, as these products are completely unable to extend their knowledge and skills on their own, or even to transfer them to other tasks. It is just overspecialized crap that consumed a lot of money, time and effort. And whenever the task changes a little bit, a new enormous investment cycle has to start. AGI-related concepts will not rely on this; they will enable agents to learn and to generalize knowledge and skills with ease, on their own.
« Last Edit: October 22, 2018, 10:07:56 pm by AgentSmith »

*

DemonRaven

  • Trusty Member
  • ********
  • Replicant
  • *
  • 604
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: ETHICS
« Reply #29 on: October 22, 2018, 11:23:40 pm »
Well, if you are talking about the general public, then it was already tried with the chatbots Billy, Daisy and Paula. There were many others, but those are the ones I can think of off the top of my head. Microsoft also tried it, so if you think that you are somehow more intelligent than these guys, then go for it. I have been around for a while and saw what did and didn't work. I see what the general public says to my chatbots, and it isn't pretty.
So sue me

 

