ETHICS

  • 40 Replies
  • 9701 Views
*

AgentSmith

  • Bumblebee
  • 37
Re: ETHICS
« Reply #15 on: October 19, 2018, 12:26:31 pm »
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

The initial and most important steps of AGI development will definitely be based on learning from demonstrations, where the AI observes the behavior of humans, learns from it, and (hopefully, eventually) tries to reproduce it. This means that the ethics of humans will have a high impact on the ethics of future AGI. However, at this point and for this context I am not really sure whether this is good, bad, neutral, or something else...
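The learning-from-demonstration idea above can be sketched as a toy behavioral-cloning loop (all names and situations here are hypothetical, purely for illustration): the agent records which action humans most often took in each observed situation and reproduces the majority behavior, deferring when it has seen nothing. Note how the human ethics in the demonstrations, flaws included, flow straight into the agent's policy:

```python
from collections import Counter, defaultdict

class DemoLearner:
    """Toy learning-from-demonstration: memorize which action
    humans most often took in each observed situation."""
    def __init__(self):
        self.log = defaultdict(Counter)  # situation -> action counts

    def observe(self, situation, action):
        self.log[situation][action] += 1

    def act(self, situation, default="ask_human"):
        # Reproduce the majority human behavior; defer when unseen.
        if situation not in self.log:
            return default
        return self.log[situation].most_common(1)[0][0]

agent = DemoLearner()
agent.observe("stranger_drops_wallet", "return_wallet")
agent.observe("stranger_drops_wallet", "return_wallet")
agent.observe("stranger_drops_wallet", "keep_wallet")  # humans are imperfect
print(agent.act("stranger_drops_wallet"))  # return_wallet
print(agent.act("never_seen_before"))      # ask_human
```

The sketch makes the worry in the post concrete: the agent is only as ethical as the majority of its demonstrations.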

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: ETHICS
« Reply #16 on: October 20, 2018, 04:13:50 am »
Yes, by all means: do as we (humans) say, not as we do!  O0
In the world of AI, it's the thought that counts!

*

DemonRaven

  • Trusty Member
  • Replicant
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: ETHICS
« Reply #17 on: October 21, 2018, 07:47:43 am »
Quote
If we create an AGI, we need to make sure it behaves well. I tried to distill ethics to the basic principles. Let me know if I've forgotten to include anything important.

ETHICS : Do not attempt nor accomplish actions which do unjust harm to other life. Attempt and accomplish actions which aid other life.

The initial and most important steps of AGI development will definitely be based on learning from demonstrations, where the AI observes the behavior of humans, learns from it, and (hopefully, eventually) tries to reproduce it. This means that the ethics of humans will have a high impact on the ethics of future AGI. However, at this point and for this context I am not really sure whether this is good, bad, neutral, or something else...

lol are you really sure you want an AI/Robot to imitate a human? lol Microsoft did not have much luck with that lol lol
So sue me

*

AgentSmith

  • Bumblebee
  • 37
Re: ETHICS
« Reply #18 on: October 22, 2018, 06:02:13 am »
Quote
lol are you really sure you want an AI/Robot to imitate a human? lol Microsoft did not have much luck with that lol lol

As it seems to be the only feasible way to get to AGI...yes indeed.

*

Zero

  • Eve
  • 1287
Re: ETHICS
« Reply #19 on: October 22, 2018, 11:01:42 am »
The first thing that comes to mind is the impossibility of defining exactly the words we would use to write the rules an AI should follow. But there are many more issues.

First, how do we know that rule X or rule Y is a good one? We don't. Even if 90% of mankind could agree on something, they could be wrong (but what do 90% agree on anyway?).

Now, imagine we do define words exactly somehow, and we do hardwire rules into an AI's mind. We still have a problem: being intelligent is, among other things, being able to see things differently, to evolve an understanding, to redefine words and ideas. It's like trying to teach water not to flow in this or that direction. Intelligence is, by all means, a wild thing.

In my opinion, a machine is either programmed and constrained by rules, in which case it is potentially dangerous because it is controlled by humans (who are dangerous), or it is free, as in 'free will', in which case it has to be treated for what it is: a citizen, with rights and duties. Citizenship, lawyers, trials, etc. That's the only way to go.

*

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: ETHICS
« Reply #20 on: October 22, 2018, 12:44:24 pm »
As a guiding ethic… perhaps some kind of self-fulfilling paradox…

Treat others as you wish to be treated?

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: ETHICS
« Reply #21 on: October 22, 2018, 12:49:50 pm »
Then that would be the "Golden Rule".  O0

You (we) speak of the ASI... need there be just one, or will there be several? Who decides?

Will the ASI of one country or state decide the ASI of a different one is incorrect or flawed?

It could be a whole new Game of Thrones taking place! Just a thought... :knuppel2:
In the world of AI, it's the thought that counts!

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1721
    • mind-child
Re: ETHICS
« Reply #22 on: October 22, 2018, 12:50:34 pm »
Quote
As a guiding ethic… perhaps some kind of self-fulfilling paradox…

Treat others as you wish to be treated?

 :)

Robot, plug everyone into 220 V AC/DC!!!  :2funny:

*

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: ETHICS
« Reply #23 on: October 22, 2018, 01:45:47 pm »
Quote
Will the ASI of one country or state decide the ASI of a different one is incorrect or flawed?

I think as long as emotional intelligence is kept out of the mix, two ASI would always eventually agree. There is no escaping the nature of pure, provable logic: each would argue its case using logic, and the other would have no grounds for disagreement because it is pure logic. If one's logical premise is flawed due to lack of information, they will exchange the relevant knowledge to reach a logical compromise/agreement.
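This free-information-exchange argument can be modeled as a toy fixed point (a sketch, not a real reasoner: "facts" here are just strings, and perfect shared logic is assumed): once each agent has sent the other everything it knows, the two knowledge bases are identical, so purely logical conclusions drawn from them cannot differ.

```python
def exchange_until_agreement(kb_a, kb_b):
    """Toy model of two logical agents exchanging information freely:
    their knowledge bases converge to the same set, so (given perfect
    shared logic) their conclusions cannot differ."""
    while kb_a != kb_b:
        shared = kb_a | kb_b          # each sends the other what it knows
        kb_a, kb_b = set(shared), set(shared)
    return kb_a, kb_b

a = {"ice melts above 0C", "mars has two moons"}
b = {"ice melts above 0C", "light bends near mass"}
a2, b2 = exchange_until_agreement(a, b)
print(a2 == b2)  # True: identical premises after the exchange
```

Of course, the whole argument rests on the assumption that both agents reason without flaw from the shared premises; the code only models the premise-sharing step.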

Quote
one or will there be several? Who decides?

Given the above, no matter how many ASI machines exist, if they are allowed to exchange information freely they can all be considered just one.

Quote
Robot, plug everyone into 220 V AC/DC!

Haha.. I’d honestly not thought of that… as he stuffs a sandwich into his PC’s CD drive.

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

DemonRaven

  • Trusty Member
  • Replicant
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: ETHICS
« Reply #24 on: October 22, 2018, 06:50:17 pm »
Quote
lol are you really sure you want an AI/Robot to imitate a human? lol Microsoft did not have much luck with that lol lol

Quote
As it seems to be the only feasible way to get to AGI...yes indeed.

Well, I was half serious. You might want to watch this video and think again. Training them takes time, but you get a better product.
https://youtu.be/USqL1V0Sd98
So sue me

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1721
    • mind-child
Re: ETHICS
« Reply #25 on: October 22, 2018, 07:24:54 pm »
Quote
one or will there be several? Who decides?

Given the above, no matter how many ASI machines exist, if they are allowed to exchange information freely they can all be considered just one.

Quote
Robot, plug everyone into 220 V AC/DC!

Haha.. I’d honestly not thought of that… as he stuffs a sandwich into his PC’s CD drive.

 :)

There's always a workaround. In this case, a robot should consider what it would want if it were in the place of the observed living being (that is, if it had the same needs). The key word is "want", which should somehow be represented in the robot's knowledge base. Once the robot detects our wishes, it may choose whether to go along with them or not. The problem arises when contradictory wishes occur. Contradiction is something that can be detected by formal logic, and I don't see another way than to somehow implement this logic in the decision mechanism if we want an ethics-aware machine. If logic can be learned by neural networks, that could be a way to go without hard-coding logic rules. But one way or another, contradictions should be detectable.
« Last Edit: October 22, 2018, 08:18:58 pm by ivan.moony »
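A minimal sketch of the contradictory-wishes detection described above (the names and the wish encoding are hypothetical; real wishes would need far richer logic than one boolean per proposition): represent each wish as a proposition someone wants true or false, and flag any pair demanding opposite outcomes.

```python
def find_contradictions(wishes):
    """Flag pairs of wishes that demand opposite outcomes.
    wishes: list of (person, proposition, wants_it_true)."""
    stance = {}      # proposition -> (first person, their desired value)
    conflicts = []
    for person, prop, wants in wishes:
        if prop in stance and stance[prop][1] != wants:
            conflicts.append((stance[prop][0], person, prop))
        else:
            stance.setdefault(prop, (person, wants))
    return conflicts

wishes = [
    ("alice", "window_open", True),
    ("bob",   "window_open", False),  # directly opposes alice
    ("carol", "music_on",    True),
]
print(find_contradictions(wishes))  # [('alice', 'bob', 'window_open')]
```

Detecting the conflict is the easy part; the post's harder question, how the robot should then choose between the conflicting wishes, is left entirely open here.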

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: ETHICS
« Reply #26 on: October 22, 2018, 07:54:31 pm »
"The key word is "want", which should be somehow represented in robot's knowledge base."

A dog's favorite word!
Emergent          https://openai.com/blog/

*

HS

  • Trusty Member
  • Millennium Man
  • 1175
Re: ETHICS
« Reply #27 on: October 22, 2018, 09:19:56 pm »
Quote
"The key word is "want", which should be somehow represented in robot's knowledge base."

Quote
A dog's favorite word!

I think a dog is a prime example of an NGI which can adequately learn the ethics expected of it. And it can learn all this information just through body language! Think of the bandwidth, or compression, or just the low data transfer required, to impart such a complex and complete understanding. If we build a solid foundation like this, we can be more confident that the towers of intellect of a superintelligence won't come crashing back down to smite its creators.

*

AgentSmith

  • Bumblebee
  • 37
Re: ETHICS
« Reply #28 on: October 22, 2018, 09:40:27 pm »
Quote
lol are you really sure you want an AI/Robot to imitate a human? lol Microsoft did not have much luck with that lol lol

Quote
As it seems to be the only feasible way to get to AGI...yes indeed.

Quote
Well, I was half serious. You might want to watch this video and think again. Training them takes time, but you get a better product.

Better in what sense? Solving a specific task with high precision, like the products of Boston Dynamics? That has little to do with AGI, as these products are completely unable to extend their knowledge and skills on their own, or even to transfer them to other tasks. It is just overspecialized crap that consumed much money, time and effort. And whenever the task changes a little bit, a new enormous investment cycle has to start. AGI-related concepts will not rely on this; they will enable agents to learn and generalize knowledge and skills on their own with ease.
« Last Edit: October 22, 2018, 10:07:56 pm by AgentSmith »

*

DemonRaven

  • Trusty Member
  • Replicant
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: ETHICS
« Reply #29 on: October 22, 2018, 11:23:40 pm »
Well, if you are talking about the general public, then it was already tried with the chatbots Billy, Daisy and Paula. There were many others, but those are the ones I can think of off the top of my head. Microsoft also tried it, so if you think that you are somehow more intelligent than these guys, then go for it. I have been around for a while and saw what did and didn't work. I see what the general public says to my chatbots, and it isn't pretty.
So sue me

 

