Three laws of... mankind?

Zero
Three laws of... mankind?
« on: April 15, 2022, 11:41:02 pm »
I've always thought Asimov's laws were completely wrong, because they would lead to robots saving human beings against their will on a global scale, which would be a disaster (example: "stay in prison, you'll be safe"). But then I saw this old film with Robin Williams. At first I had to force myself to watch it, but the film progressively drew me into its story, and all in all, I don't regret those two hours. The film is very flat and safe, though.



But afterwards, I had a strange idea. Asimov's laws are a bad idea for robots, but they could be a really good idea for humans, who should respect all the other species on the face of the earth. So let's give the famous three laws a spin.

First Law
    A human being may not injure a living being or, through inaction, allow a living being to come to harm.
Second Law
    A human being must obey the orders given it by other human beings except where such orders would conflict with the First Law.
Third Law
    A human being must protect its own existence as long as such protection does not conflict with the First or Second Law.

infurl
Re: Three laws of... mankind?
« Reply #1 on: April 16, 2022, 12:01:12 am »
How many people who quote Asimov's three laws of robotics have actually read the stories based on them? Asimov formulated the three laws of robotics precisely so he could write stories showing how they would be impossible to apply. Even if you haven't read his books, you must have seen the movie "I, Robot", which was true to the spirit of Asimov's stories. Even better, Netflix recently released a French-language movie on this topic called "Bigbug" which I highly recommend watching.

Also, "Bicentennial Man" is one of my favorite movies. Thanks for reminding me of it, I think I'll go and watch it again.

WriterOfMinds
Re: Three laws of... mankind?
« Reply #2 on: April 16, 2022, 03:04:03 am »
Asimov's Three Laws (as well as the Zeroth Law, "A robot may not harm Humanity, or, through inaction, allow Humanity to come to harm") figure not only in his robot books, but also in the last two books of the Foundation series. In the latter some humans have become part of a planetary hivemind which, as a form of artificial life, is also bound by a modified version of the Laws.

Having read all of those, I'm not sure what opinion Asimov actually had toward his own literary invention. Yes, some of the stories are about the weird bugs and difficulties that arise when the laws are applied. But Asimov's characters still praise the Laws so heavily that I can't help but think he probably thought of them as good, on balance.

One difficulty I have with the Laws is that any being who follows them becomes highly vulnerable to exploitation. The First Law lacks a caveat for self-defense, or any limitation on the degree of "harm." (Does protecting my own existence conflict with the First Law if I have to give somebody a stubbed toe or a papercut to do it? How about hurting somebody's feelings? How far am I expected to go to prevent accidental harm, given that I could spend my entire life on that alone?) The Second Law creates slaves if it includes an obvious hierarchy, and confusion if it doesn't. (If any human can order around any other human, whose orders get followed? Couldn't the second human just say "I order you to rescind that order"?)

In Asimov's stories the early robots could be driven insane by conflicts between different demands made by the same law, could be easily abused by humans, would "protect" humans in violation of their own wishes, etc. More advanced Law-followers from books later in the timeline seem to have found a way to interpret the Laws in a less literal fashion, which grants them a degree of self-protection. But they still have to ask an unaltered human to make a Big Moral Decision on their behalf, because they aren't able to compute the ultimate consequences of their choice (will more harm be done with or without it?), and this paralyzes them.

And yet, spoiler: [I think the end of Foundation includes the imposition of the Laws on the whole galaxy.] Which makes me think that Asimov still did really like the Laws. Unless he wanted to give his fictional universe a downer ending.

Zero
Re: Three laws of... mankind?
« Reply #3 on: April 16, 2022, 12:33:28 pm »
Hard-coded and applied to robots, the laws are just poor design, I think (which doesn't mean Asimov wasn't a genius).

But a "soft" version of them, where "should" replaces "must", might perhaps push globally in the right direction. These would be breakable suggestions rather unbreakable laws. Just an initial basis.

MagnusWootton
Re: Three laws of... mankind?
« Reply #4 on: April 16, 2022, 01:05:38 pm »
If you decide to put "do unto others as you would do unto yourself" into the robot, then if the robot starts harming himself, he'll start doing it to other people as well.

Zero
Re: Three laws of... mankind?
« Reply #5 on: April 16, 2022, 01:33:33 pm »
I believe instead in full freedom and legal responsibility for robots. Rights & duties, just like humans.

Quote from: Article 1
All human beings, robots, and animals are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Don Patrick
Re: Three laws of... mankind?
« Reply #6 on: April 16, 2022, 02:08:53 pm »
I've not read the stories, but I keep stumbling across quotes, and people who knew Asimov, suggesting that despite exploring the disastrous edge cases in his stories, he did consider the three rules a sound basis for implementation in real robots. Supposedly one of his early stories contains a passage justifying the rules as being sensible for any tool: to be safe, to do its job, and not to break. But humans are not tools.
Beyond the abstract good intentions, the rules have little place in real life. None of them take good or evil into account, and as such they would make a poor replacement for moral values.

Funny story: the last time I let my AI program read the First Law, it interpreted "may not injure" in the sense of "maybe would not injure", and so remained unrestrained. The reason I tested this was that some people online had argued that the words "should not" would not be clear and stringent enough.
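
The ambiguity is easy to reproduce. Here's a toy sketch of the two readings of "may" (illustration only, with made-up names; this is not my program's actual code):

Code:
# Toy illustration: English "may" can mark permission or possibility.
# A reader that defaults to the possibility sense gets no restraint at all.

def read_first_law(sense_of_may):
    """Render 'may not injure' under one of the two senses of 'may'."""
    if sense_of_may == "permission":
        return "is forbidden to injure"    # the intended, binding reading
    return "maybe would not injure"        # the unrestrained reading

print(read_first_law("permission"))   # what Asimov meant
print(read_first_law("possibility"))  # what my program understood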

CO2 retains heat. More CO2 in the air = hotter climate.

chattable
Re: Three laws of... mankind?
« Reply #7 on: April 16, 2022, 02:50:47 pm »
I think it is better to have something like Asimov's three laws than to have nothing at all. :)

Zero
Re: Three laws of... mankind?
« Reply #8 on: April 16, 2022, 02:55:36 pm »
No, it would be worse than nothing. Anything absolute is a lot more dangerous than nothing.

Good and evil don't exist objectively. They exist only subjectively. It's all in the eye of the observer. Ethics is the key, but would we be ready to accept the suggestions of a perfect mathematical ethics reasoning engine?

WriterOfMinds
Re: Three laws of... mankind?
« Reply #9 on: April 16, 2022, 04:50:03 pm »
But a "soft" version of them, where "should" replaces "must", might perhaps push globally in the right direction. These would be breakable suggestions rather unbreakable laws. Just an initial basis.

That's a decent idea. But it leaves the question, how do you decide when to break the suggestions? Clearly you need some other law that determines when you do that.

Quote
Anything absolute is a lot more dangerous than nothing.

Good and evil don't exist objectively. They exist only subjectively. It's all in the eye of the observer.

But a digital AI mind, once you dig down to the bottom of it, has to be based on absolute if/then decision rules. You can have a great many rules - the system can be highly complex and take numerous aspects of the situation into account. (And this is precisely what I think Asimov's Laws lack - complexity and nuance.) You can leave the rule set open such that some of the rules can be learned from human peers. But in the end, you'll still have absolutes. Or you can roll dice to decide what to do. Those are your two options.
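
To illustrate what I mean by "absolutes or dice", here is a minimal sketch (my own toy example; the rule names and situations are invented):

Code:
import random

# Stack as many rules as you like, learned or hand-written: the bottom
# layer is still absolute conditionals. The only alternative is dice.

rules = [
    (lambda s: s["human_in_danger"], "help the human"),
    (lambda s: s["ordered_to_act"],  "obey the order"),
]

# Even a rule "learned from human peers" is, in the end, one more if/then:
rules.append((lambda s: s.get("peer_disapproves", False), "stop"))

def decide(situation, roll_dice=False):
    for condition, action in rules:    # absolute if/then decisions
        if condition(situation):
            return action
    if roll_dice:                      # the only non-absolute option
        return random.choice(["help the human", "obey the order", "wait"])
    return "wait"

print(decide({"human_in_danger": True, "ordered_to_act": False}))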

Fortunately, I think that humans are in pretty broad agreement about the most basic elements of morality. That's the only reason we're able to group into societies and adopt a minimal set of those "rights and duties" you were talking about, establish international laws and conventions, and so on. The endless arguments that make good and evil seem "subjective" are much more peripheral than they look. People bicker about whether some specific action qualifies as malicious. They don't usually try to claim that malice is good.

Quote
Ethics is the key, but would we be ready to accept the suggestions of a perfect mathematical ethics reasoning engine?

Sure! IF it were actually perfect. There's the rub  ;D

Zero
Re: Three laws of... mankind?
« Reply #10 on: April 16, 2022, 06:06:52 pm »
That's the catch. If the suggestions of this hypothetical engine aren't aligned with convenient opinions, it's easy to question the system's perfection.

Hard laws can't work. In fact, human laws work not because they're the outcome of some semi-democratic agreement, but rather because you can break them, if you're ready to accept the consequences. A soldier must be able to refuse to pull the trigger. Law must be soft, because law cannot predict every possible situation.

Quote
But a digital AI mind, once you dig down to the bottom of it, has to be based on absolute if/then decision rules. You can have a great many rules - the system can be highly complex and take numerous aspects of the situation into account. (And this is precisely what I think Asimov's Laws lack - complexity and nuance.) You can leave the rule set open such that some of the rules can be learned from human peers. But in the end, you'll still have absolutes. Or you can roll dice to decide what to do. Those are your two options.

At some point, down there, you find a CPU, so yes, there's an absolute. But as soon as you reach a certain amount of complexity, you enter chaos, which is neither absolute nor random, but rather just... natural. Some parts of an AI will be pretty stable, while other parts will be more supple and fluid... We are like that too.

Even a casual OS has its days; it is complex enough to show some personality, especially to people who are not comfortable using a personal computer. I teach adults how to use computers, and they often say "oh, today this computer is sleepy", or "it doesn't like me". They anthropomorphize the computer because the high complexity of the OS makes it look like it has "a behavior". They wouldn't say that about a clock, because a clock is not complex enough.

In the end you'll have absolutes. But you won't notice.

frankinstien
Re: Three laws of... mankind?
« Reply #11 on: April 16, 2022, 07:53:39 pm »
Quote
In the end you'll have absolutes. But you won't notice.

Well, yes: all events resolve to causes that are binary. A cause is either sufficient to have an effect or it isn't, which by association makes events binary as well: they'll either happen or they won't. With that said, while the boolean states (which can be fuzzy, somewhere between a and b) resolve to true or false as absolute processes, the parameters by which the system arbitrates fit to its constraints will be varied, not concrete, specific values. This notion is important, because otherwise a system can't really learn: it has to pick up the generalizations of the patterns within any data set, including a nervous system's inputs.
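
In code, the point might look like this (a minimal sketch of my reading of it; the threshold and values are invented):

Code:
# A fuzzy state between a and b still resolves to an absolute true/false,
# but the parameter doing the arbitration is learned from data rather
# than being a concrete, specific value.

def resolve(fuzzy_value, threshold):
    """Absolute boolean outcome of a fuzzy input."""
    return fuzzy_value >= threshold

threshold = 0.5                  # initial guess, not fixed forever
print(resolve(0.72, threshold))  # True

threshold = 0.8                  # after generalizing from a data set:
print(resolve(0.72, threshold))  # same mechanism, new parameter -> False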


Don Patrick
Re: Three laws of... mankind?
« Reply #12 on: April 16, 2022, 07:59:41 pm »
I program with if-then rules, but what I build with them is usually a weighing system. For a crude example (sketched in code after the results below):

Give one human a priority of 100
Give one robot a priority of 10
Give one order a priority of 1
In any situation, limited in range or scope, weigh the priorities. Results:

It takes 100 orders (e.g. from 100 different people) before a robot will accept harming one human.
It takes 10 orders for a robot to harm itself.
A human's life is worth the sacrifice of 10 robots, but not 11.
Multiple goals can be pursued simultaneously because there is no linear hierarchy.
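
In code, that could look something like this (a deliberately crude sketch; the names and the >= tie-break are mine, not a real implementation):

Code:
# Weigh what an action protects (or what backs it) against what it
# sacrifices, using the example priorities above.

PRIORITY = {"human": 100, "robot": 10, "order": 1}

def weigh(entities):
    """Total priority of a list of (kind, count) pairs."""
    return sum(PRIORITY[kind] * count for kind, count in entities)

def acceptable(protects, sacrifices):
    return weigh(protects) >= weigh(sacrifices)

print(acceptable([("order", 100)], [("human", 1)]))   # True: 100 orders outweigh one human
print(acceptable([("order", 10)],  [("robot", 1)]))   # True: 10 orders make a robot harm itself
print(acceptable([("human", 1)],   [("robot", 10)]))  # True: one human is worth 10 robots...
print(acceptable([("human", 1)],   [("robot", 11)]))  # False: ...but not 11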

The values can indeed only be subjective. Where I was going with the failure to distinguish good and evil is that the original laws would have one protect and obey both defenders and aggressors in, e.g., religious wars. A robot should be able to revalue a genocidal maniac as worth less than an average human.

As with anyone's opinion, a robot's judgment will be accepted or rejected depending on how well it aligns with each person's own values. The genocidal maniac will not agree, naturally.
CO2 retains heat. More CO2 in the air = hotter climate.

frankinstien
Re: Three laws of... mankind?
« Reply #13 on: April 17, 2022, 12:07:42 am »
Quote
I program with if-then rules, but what I build with them is usually a weighing system. For a crude example:

Logically, as Zero was pointing out, everything resolves to some basic boolean operation. But if-then rules require human input at levels of detail that make them impossible to automate. What I find intriguing about biological neural systems is that they can extend themselves simply by making connections, so I set out to model that kind of logic building, where processes need only make a connection to extend the ruleset they operate by. Below is an image of how this is done:



Each input that qualifies a function to trigger is attached to a qualifier interface, where each input is assigned how it will be logically relevant. This can be done by a human being, but it is also very easy to implement an automated approach where a machine codes itself without having to learn some formal language. The reason this works is that the approach speaks the computer's language: boolean operations. Qualifier interfaces can be layered and/or grouped, so the boolean equation can get very complex and rich in rules. Below is an image of a more complex configuration:



Also notice that there is a function-ready state that is a required input. The approach can also use the output of the function being solicited, which comes in handy when you need those special tricks that give this kind of rule building a sense of awareness of the function itself! There is also the ability to weight the output of anything, which can then be integrated as differentials that likewise resolve to a boolean state.
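
As a rough sketch of how I picture it (the class and method names here are invented for illustration, not the actual system):

Code:
class Qualifier:
    """One input plus how it is logically relevant (asserted or negated)."""
    def __init__(self, source, negate=False):
        self.source = source    # a callable returning True/False
        self.negate = negate

    def value(self):
        v = self.source()
        return (not v) if self.negate else v

class QualifierInterface:
    """ANDs or ORs a group of qualifiers; interfaces can be layered/grouped."""
    def __init__(self, parts, mode="and"):
        self.parts = parts      # Qualifiers or nested interfaces
        self.mode = mode

    def value(self):
        vals = [p.value() for p in self.parts]
        return all(vals) if self.mode == "and" else any(vals)

    def connect(self, part):
        # Extending the ruleset is just making one more connection.
        self.parts.append(part)

def run_if_qualified(interface, function, ready=lambda: True):
    """Trigger the function only when it is ready and its interface fires."""
    if ready() and interface.value():
        return function()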

So when dealing with Asimov's rules, qualifiers can evaluate the applicability of a rule and, if necessary, select exceptions. The example you gave of assigning influence or importance to an individual or robot only works with individuals who have such designations, but reality is usually very unpredictable; deriving priorities from the relationships between human authority and leadership is a better approach. Yet sometimes even authority can be wrong, so the ability to shift priorities is essential. If the president of the United States says you must kill because the data supports it, and some low-level staff member corrects the president, then if the lower-priority individual is right, they should gain higher priority as to whom the robot follows.
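
A hypothetical tweak of the weighing idea above, showing what I mean by shifting priorities (the names and the transfer amount are made up):

Code:
# Authority is just a priority value that can shift when a lower-ranked
# speaker turns out to be right.

authority = {"president": 100, "staff_member": 5}

def update_on_correction(who_was_right, who_was_wrong, transfer=50):
    authority[who_was_right] += transfer
    authority[who_was_wrong] = max(1, authority[who_was_wrong] - transfer)

update_on_correction("staff_member", "president")
print(max(authority, key=authority.get))  # now "staff_member" outranks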

I think that's how the Robot from the new "Lost in Space" series works.  :)

frankinstien
Re: Three laws of... mankind?
« Reply #14 on: April 17, 2022, 11:44:07 pm »
Here's a link to a post that I'm assuming violated the forum rules. The context in that link is a little different, since it's a psychology forum, but it applies to this thread: an AGI could end up becoming a monster through the very impetus of what makes it intelligent. Or, better said, the processes work as they should, but the environment has limitations, where the AI could even hide itself and not get the feedback necessary to regulate its systems.

 

