Asimov's laws

  • 11 Replies
  • 5288 Views
*

ivan.moony

  • Trusty Member
  • Bishop
  • 1729 posts
    • mind-child
Asimov's laws
« on: February 16, 2015, 12:21:57 am »
I've been thinking about an alternative to Asimov's laws, in terms of a behavior-copying algorithm. This is how I'd put it in just one rule:

Copy human behavior unless anyone alive is being hurt more than without your influence.

What do you think? Am I missing something important?

Edit: it is not intended that there be just one rule, but this one seems enough to me. Am I wrong?
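As a rough sketch, the rule could be encoded as a yes/no decision filter. All names here are hypothetical, and the harm estimates are assumed to come from some harm model, which is of course the genuinely hard, unsolved part:

```python
def should_copy(action, harm_with_action, harm_without_action):
    """Sketch of the proposed single rule: copy human behavior unless
    anyone alive would be hurt more than without the robot's influence.

    Both harm arguments are assumed to be produced by a (hypothetical)
    harm model; this function only expresses the comparison itself.
    """
    return harm_with_action <= harm_without_action

# An action that leaves expected harm unchanged is allowed...
print(should_copy("open door", harm_with_action=0.1, harm_without_action=0.1))
# ...but one that increases harm over the do-nothing baseline is not.
print(should_copy("push past person", harm_with_action=0.5, harm_without_action=0.1))
```

The interesting design point is that the baseline is "without your influence" rather than zero harm, so the rule doesn't force the robot into paralysis in a world where some harm happens anyway.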

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865 posts
Re: Asimov's laws
« Reply #1 on: February 16, 2015, 01:59:24 am »
Only you would know what is both right and enough for yourself. Others might have a different set of constraints, or liberties, as the case might be.

A while back I wrote a 4th Law of Robotics, which I'll see if I can find in the archives.

(...OK...I'm back! That wasn't too bad if one uses the Search feature.)

4. When Asked, a Robot must always tell the truth.

Some like to hope that, fundamentally, robots would not know how to lie (it seems that lying is a learned trait, often taught by elders).
Most people would rather have their bot tell the truth than have it use its "wherewithal" to reason out or logically deduce all possible consequences before deciding whether or not the truth would be advantageous to itself or its creator/owner. Just my take.

This is not about "Ownership" of a robot or whether they should have emotions or rights or whatever. Those topics would make for interesting discussions under another thread.
In the world of AI, it's the thought that counts!

*

Ultron

  • Trusty Member
  • Starship Trooper
  • 471 posts
  • There are no strings on me.
Re: Asimov's laws
« Reply #2 on: February 16, 2015, 08:01:36 pm »
Art, sometimes the truth can kill someone! So your 4th rule could conflict with the previous rules in certain cases.

And on copying behavior: everyone already knows how strongly I believe in pattern-based life. To explain, we learn how to behave in our society by copying (a pattern of) human behavior which we have witnessed and consider good. The same applies to styles of jokes, dress, etc.

So, for a robot to improve, it would copy the behavior and steps that led other robots to improvement. And if the robots are based on such a way of learning, you would only need one 'origin' or 'elder' robot which implements the original Asimov laws, and the others would follow in 'his' footsteps, theoretically speaking. This means that you could remove or tweak these laws in future generations, enabling them to upgrade themselves and eventually be allowed to kill, once they reach a level where they can judge ethically.
With the current laws, robots could easily become extinct, and would remain bystanders when someone is attacked. But I do realize they would need to learn what is right and wrong before I would let them be cops or judges.

But bear in mind: not even humans always have correct judgement. We make mistakes, and mistakes can only make those robots more human.

So, to summarize this badly connected cluster of sentences: A.I. is useless to us if it cannot improve itself, but to be able to do so it cannot be constrained by groups of rules such as Asimov's three laws. Instead it must be free (but constrained. What? Bear with me). One simple rule could make a community of robots dynamic and acceptable to humans, if stated as the following:

"Copy behavioral patterns from other units (or humans) as long as they are accepted by the community and give positive results."

Now, this is not very specific, but for a reason. If the robot were to live within a community of killers and savages it would of course become (or mimic) one itself, but the end result is the same: it will be accepted and considered 'good' by the community. So they will change and adapt to our society.

"It is not the biggest or strongest that survive - it is those who are able to adapt to changes."

And if I were to add one more rule (which I don't think is necessary), it would be the following (it is not exactly a 'rule', though):

"The needs of the many outweigh the needs of the few."
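Stated as code, the main rule above might look like the following rough sketch. The `community_accepts` and `outcome_score` oracles are hypothetical placeholders; building them is the hard part, and the sketch only expresses the filtering idea:

```python
def adopt_patterns(observed, community_accepts, outcome_score):
    """Copy behavioral patterns from other units (or humans) only when
    the community accepts them AND they give positive results.

    `community_accepts(pattern)` -> bool and `outcome_score(pattern)` -> float
    are assumed to be supplied by the robot's environment.
    """
    return [p for p in observed
            if community_accepts(p) and outcome_score(p) > 0]

observed = ["share resources", "hoard resources", "tell jokes"]
accepted = {"share resources", "hoard resources", "tell jokes"}
scores = {"share resources": 1.0, "hoard resources": -0.5, "tell jokes": 0.3}

print(adopt_patterns(observed, accepted.__contains__, scores.__getitem__))
# ['share resources', 'tell jokes']
```

Note that requiring both conditions matters: "hoard resources" is tolerated by the community here but gives a negative result, so it is not copied.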
Software and Hardware developer, and everything in between.

*

ivan.moony

Re: Asimov's laws
« Reply #3 on: February 16, 2015, 08:32:00 pm »
If we go on like this maybe we will need prisons for robots in the future... :idiot2:

To kill someone... And to consider it as an ethical act... And if robots are about to kill... God help this already screwed planet... I think I'm going to throw up just for thinking about it...

*

Don Patrick

  • Trusty Member
  • Replicant
  • 633 posts
    • AI / robot merchandise
Re: Asimov's laws
« Reply #4 on: February 16, 2015, 10:05:25 pm »
Well, I'm not looking at this from a positive angle, obviously, but like all Asimov-ish laws it has some back-door issues. Like, what if it accidentally steps on someone's toes, or keeps bumping into people, like people do? Will it eventually decide to stand still and do nothing (as happened in a car-collision AI experiment)? What if it understands that the meat it bought for you stimulates the slaughter of more cows? Basically, how should it deal with the circle of life?
To be honest I think the last part, other than the copying humans, is actually fairly good and possibly even chivalrous.
CO2 retains heat. More CO2 in the air = hotter climate.

*

ivan.moony

Re: Asimov's laws
« Reply #5 on: February 16, 2015, 10:39:28 pm »

@Don Patrick
Picking a task in response to some input could be resolved by copying humans. Achieving tasks could be realized by abductive reasoning backward from the final goal, where each step (like the final goal itself) is tested against the Asimov-ish laws. If a plan is not working, another plan is built, until the goal is achieved or the robot gives up. Accidents should vanish with experience. How should it deal with the circle of life? It shouldn't help with killing, or with buying certain groceries; it should offer alternatives, if any exist. If none do, it should do nothing.
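The loop described above (work backward from the goal, test every step against the laws, try another plan on failure, eventually give up) can be sketched roughly as follows. The `decompose`, `is_primitive` and `is_safe` callbacks are hypothetical domain hooks, not part of any real planner API:

```python
from collections import deque

def plan(goal, decompose, is_primitive, is_safe, give_up_after=100):
    """Breadth-first refinement of a goal into primitive steps,
    discarding any candidate plan that fails the safety (law) test.
    Returns a list of primitive steps, or None if the robot gives up.
    """
    candidates = deque([[goal]])           # each candidate is a partial plan
    tried = 0
    while candidates and tried < give_up_after:
        steps = candidates.popleft()
        tried += 1
        if not all(is_safe(s) for s in steps):
            continue                        # violates an Asimov-ish law: discard
        if all(is_primitive(s) for s in steps):
            return steps                    # fully concrete and safe: done
        # expand the first non-primitive step into alternative refinements
        i = next(j for j, s in enumerate(steps) if not is_primitive(s))
        for refinement in decompose(steps[i]):
            candidates.append(steps[:i] + refinement + steps[i + 1:])
    return None                             # give up: do nothing

# Toy domain (purely illustrative): two ways to "serve drink",
# one of which violates the laws.
rules = {"serve drink": [["harm guest"], ["make tea"]],
         "make tea": [["boil water", "pour tea"]]}
print(plan("serve drink",
           decompose=lambda s: rules.get(s, []),
           is_primitive=lambda s: s in {"harm guest", "boil water", "pour tea"},
           is_safe=lambda s: s != "harm guest"))
# ['boil water', 'pour tea']
```

The "do nothing" fallback corresponds to the last sentence above: when no safe alternative exists, the planner returns None rather than a plan that fails the law test.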

*

Ultron

Re: Asimov's laws
« Reply #6 on: February 16, 2015, 10:44:34 pm »
If robots (A.I.) were to one day become stronger / more adaptable than us, then regardless of the rules we impose, they will one day become the dominant species. This doesn't necessarily mean that they will drive us extinct, just that they will simply... dominate.

It is too late to change this.

P.S. Apologies for the dark-themed speech, but it is an honest opinion.

*

ivan.moony

Re: Asimov's laws
« Reply #7 on: February 17, 2015, 11:45:56 am »
Quote
P.S. Apologies for the dark-themed speech, but it is an honest opinion.

Being honest is the best thing you can do, because an AI programmer holds a great responsibility, due to the impact that his project might have on the world. Being honest gives us a chance to improve our standings and our perceptions of different phenomena in our world. If anyone is required to master the art of ethical behavior, it would surely be AI programmers. And two heads are always smarter than one.

From my experience I would like to offer some advice: an AI should not ever get into conflict with anyone. In the case of an external conflict it should stand aside. Otherwise it turns into judge, jury and executioner, and in heavier cases it can turn into a murderer by mistake.

P.S.
Maybe "Jesus" would be a good name for a robot. I ask no less from them. (I am not a Christian, but I like Jesus very much as a person)

*

Art

Re: Asimov's laws
« Reply #8 on: February 17, 2015, 01:33:31 pm »
I mentioned, "When asked..."

Would you rather your bot lie to you? I would not.

They say that not telling is not lying, but within that phrase lies the deceit... hidden until... someone asks.

If not, then life goes on.
#####################
I totally agree that bots should be unconstrained within reason, to develop, explore, research and thrive.

In keeping with many Futurists' predictions, I too believe that non-biological intelligence will one day become dominant, and we "biological units" will become... non-dominant, shall we say, rather than no longer necessary.

Or as the robot, Bender, on Futurama often says, "Bite my shiny metal A$$" !!  :2funny:

*

Freddy

  • Administrator
  • Colossus
  • 6860 posts
  • Mostly Harmless
Re: Asimov's laws
« Reply #9 on: February 17, 2015, 09:41:13 pm »
If science fiction so often becomes reality, are we heading for a Matrix scenario here? :o

*

Ultron

Re: Asimov's laws
« Reply #10 on: February 17, 2015, 11:20:17 pm »
We are merely part of a cell which is part of a much larger organism: the universe. If we were to die, it is because we were weak. This can even be observed in our bodies :)

When you are a part of something bigger than you, then you should be willing to sacrifice for it (for the team) in order for it to survive.

Is this not also why we breed: to maintain the human race? Just scale it up...

P.S. And yes, I do not care if we go extinct for such reasons :)

*

Art

Re: Asimov's laws
« Reply #11 on: February 18, 2015, 10:57:48 am »
You came into this world without a conscious choice.

You can choose when and how to leave it...if that matters.

 

