Super intelligent AGI figuring out human values

  • 24 Replies
  • 43115 Views

orenshved

  • Roomba
  • 15
Super intelligent AGI figuring out human values
« on: July 08, 2020, 11:51:44 am »
Hey guys, I'm in the research stage of writing a sci-fi thriller in which an ASI (thanks Art for the note) is given the task of figuring out what the values of humanity are. I would love to hear your insights on the matter (like what Nick Bostrom talks about here: https://youtu.be/MnT1xgZgkpk?t=807 ). How would the AI approach such a task? What, and how, do you think it would learn? What are the potential pitfalls in its understanding? And what are the potential pitfalls in how we define the task for it?
Just to be clear, I'm not asking if this scenario could/should/would happen at all, only your thoughts on the points which I have mentioned.
I know I've asked a lot of questions, but any kind of help would be greatly appreciated :)
« Last Edit: July 08, 2020, 03:12:00 pm by orenshved »


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Super intelligent AGI figuring out human values
« Reply #1 on: July 08, 2020, 12:56:28 pm »
Love and courage are always powerful motives. Intelligence, of course, is what is expected to turn those motives into action.


infurl

  • Administrator
  • Eve
  • 1371
  • Humans will disappoint you.
    • Home Page
Re: Super intelligent AGI figuring out human values
« Reply #2 on: July 08, 2020, 01:17:05 pm »
Humanity doesn't have any values. They're a function of culture and development. Values vary significantly from place to place and generation to generation. They vary according to age, gender, and upbringing. In some places they vary from street to street and family to family.

You didn't say why your fictional A.I. has been tasked with figuring out humanity's values, but I imagine its masters have some goal in mind. Perhaps you should look to marketing theory and explore some of the crude measures that advertisers use to map out values by demographic so they can reach their target markets.
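
For a flavor of what that mapping might look like, here's a toy sketch; the survey rows, segment labels, and value names are all invented for illustration:

Code: [Select]
# Toy sketch: tally which values dominate in which demographic segment.
# The survey data and segment labels are hypothetical placeholders.
from collections import Counter, defaultdict

survey = [("18-25", "novelty"), ("18-25", "liberty"), ("18-25", "novelty"),
          ("65+", "security"), ("65+", "tradition"), ("65+", "security")]

by_segment = defaultdict(Counter)
for segment, value in survey:
    by_segment[segment][value] += 1

for segment, counts in sorted(by_segment.items()):
    top_value, freq = counts.most_common(1)[0]
    print(f"{segment}: top value = {top_value} ({freq} mentions)")

A real demographic model would weigh many more dimensions, but the shape of the computation is the same: segment, tally, rank.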


Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Super intelligent AGI figuring out human values
« Reply #3 on: July 08, 2020, 01:43:43 pm »
First, your reference to a Super Intelligent AGI seems a bit redundant, as the next proposed step in the cycle after AI and AGI is ASI...Artificial Super Intelligence. The pinnacle of AI...the top rung. Perhaps you might wish to change that designation.

Your AI may try to assign a value to a human condition called love, but it will soon notice that this love doesn't apply to all other humans, only a select few, and sometimes for different reasons. Normally, one loves one's parents or one's sister or brother, but not in the same sense as one's girlfriend or boyfriend or husband or wife. This love situation may take some time to reason through, and of course there is the love of your fellow man or woman, which is again different from homosexual love.

Why do humans not simply accept each other as fellow human beings, instead of letting color, race, religion, or culture enter into the equation?

Do not all lives matter? Humans, animals, plants, trees, creatures in the air and in the sea. Some humans seem to prefer their pets to humans. Some humans prefer humanoids, androids, or synths to real people...why?

Why do humans have to settle some disputes with fighting or wars? Are they so very intolerant and unwilling to view and understand both sides of a discussion?

Are humans even worth saving, now that the ASI (and its minions of maintenance bots) can remain alive and functioning for a thousand years?

Good luck with your project!
In the world of AI, it's the thought that counts!


orenshved

  • Roomba
  • 15
Re: Super intelligent AGI figuring out human values
« Reply #4 on: July 08, 2020, 03:20:20 pm »
Quote
First, your reference to a Super Intelligent AGI seems a bit redundant, as the next proposed step in the cycle after AI and AGI is ASI...Artificial Super Intelligence. The pinnacle of AI...the top rung. Perhaps you might wish to change that designation.

Your AI may try to assign a value to a human condition called love, but it will soon notice that this love doesn't apply to all other humans, only a select few, and sometimes for different reasons. Normally, one loves one's parents or one's sister or brother, but not in the same sense as one's girlfriend or boyfriend or husband or wife. This love situation may take some time to reason through, and of course there is the love of your fellow man or woman, which is again different from homosexual love.

Why do humans not simply accept each other as fellow human beings, instead of letting color, race, religion, or culture enter into the equation?

Do not all lives matter? Humans, animals, plants, trees, creatures in the air and in the sea. Some humans seem to prefer their pets to humans. Some humans prefer humanoids, androids, or synths to real people...why?

Why do humans have to settle some disputes with fighting or wars? Are they so very intolerant and unwilling to view and understand both sides of a discussion?

Are humans even worth saving, now that the ASI (and its minions of maintenance bots) can remain alive and functioning for a thousand years?

Good luck with your project!

Thanks, Art. I'm mostly curious about the way this would actually be done... especially since, as Nick Bostrom put it (in the video linked in my original question), "we wouldn't have to write down a long list of everything we care about, or worse yet spell it out in some computer language; that would be a task beyond hopeless. Instead, we would create an AI that uses its intelligence to learn what we value." That is the process I can't wrap my head around...


WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Super intelligent AGI figuring out human values
« Reply #5 on: July 08, 2020, 03:38:00 pm »
Quote
Humanity doesn't have any values. They're a function of culture and development. Values vary significantly from place to place and generation to generation. They vary according to age, gender, and upbringing. In some places they vary from street to street and family to family.

I'd dispute this. Yes, there is a wide variation in surface detail, but some basic values are pretty constant across all of humanity. Is there any culture in which the ideas of loyalty and reciprocity don't exist, and people are not only allowed but morally obligated to betray their friends/family/in-group for personal gain? Is there any culture that despises courage and celebrates cowardice? Is there any culture in which people don't value their own lives? (Suicide cults -- but they're small and regarded as pathological by everyone else. And even they still value life, if their motivation for suicide is that they think it will bring them into a new life.)

If there were no common human values, we wouldn't be able to have ethical arguments. Arguments arise when people disagree about how well a specific behavior implements the underlying values. There's no way to argue with someone who literally has different core values than society at large ... all you can do is lock him up. The fact that we sometimes argue points to a common standard, from which all our culturally-shaped values deviate to a greater or lesser degree.

The idea of "progress" also points to underlying common values. If you think it's possible for society to advance -- if you think, for instance, that nations which opted to make slavery illegal were undergoing moral improvement, rather than shifting arbitrarily in the winds of time -- then you have some notion of an ideal that the world struggles to approach.

What the ASI needs to do is extract these broad commonalities from the soup of diverse human behavior that it has to look at. Can it infer what values people are trying to implement, despite the fact that they do so inconsistently? Can it pick up on the common beliefs people appeal to when they argue their values with each other? Can it discern the reasons behind reform movements ... what standards are people acting on when they go against the prevailing surface values of their culture, parents, and community?
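
At its very crudest, that extraction step is just an intersection across corpora. Here's a toy sketch; the culture labels and value sets are invented, and a real ASI would have to infer them from behavior rather than be handed them:

Code: [Select]
# Crude sketch: keep only the values endorsed in every culture's corpus.
# Culture names and value sets are hypothetical placeholders.
cultures = {
    "culture_a": {"loyalty", "reciprocity", "courage", "honour"},
    "culture_b": {"loyalty", "reciprocity", "courage", "thrift"},
    "culture_c": {"loyalty", "reciprocity", "courage", "piety"},
}

common_values = set.intersection(*cultures.values())
print(common_values)  # candidate underlying common values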


HS

  • Trusty Member
  • Millennium Man
  • 1176
Re: Super intelligent AGI figuring out human values
« Reply #6 on: July 08, 2020, 05:13:20 pm »
It could measure the complete range of human values and their frequencies, then graph those as x and y respectively. The resulting bell-curve-like distribution might be the most complete representation of human values. Though an ASI could probably expand on our premise of a graph. Something like this, as a minimal sketch (see below).
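
The responses here are hypothetical stand-ins for whatever the ASI actually measured:

Code: [Select]
# Minimal sketch of the value-vs-frequency graph: x = value, y = count.
# The responses are invented stand-ins for the ASI's measurements.
from collections import Counter

responses = ["fairness", "loyalty", "courage", "fairness", "care",
             "fairness", "loyalty", "care", "liberty", "fairness"]

for value, freq in Counter(responses).most_common():
    print(f"{value:10s} {'#' * freq}  ({freq})")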


orenshved

  • Roomba
  • 15
Re: Super intelligent AGI figuring out human values
« Reply #7 on: July 08, 2020, 05:22:54 pm »
Quote
Can it infer what values people are trying to implement, despite the fact that they do so inconsistently? Can it pick up on the common beliefs people appeal to when they argue their values with each other? Can it discern the reasons behind reform movements ... what standards are people acting on when they go against the prevailing surface values of their culture, parents, and community?
Exactly the things I am struggling with... thanks for your response :)


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Super intelligent AGI figuring out human values
« Reply #8 on: July 08, 2020, 06:21:53 pm »
You can set a machine to favor/love anything. Food. Women. Plants. Walls. Zebras. Teeth. Rocks. Juggling.

By nature, we evolved to fear death and to seek food and breeding, plus all sorts of things that help those goals, like cars, ships, friends, computers, stores, tables, chairs, guns, TVs, shoes. Goals and behavior would vary completely randomly, with no solidity, if it weren't for some physics laws and repetitiveness. And there is some repetitiveness: some structures live longer than others, and some of those structures are called humans. Rocks can last millions of years, but they can't defend themselves as well as a huge nanobot sphere could. So the only value in the universe, at its root, is survival span, because without it everything would change randomly all the time: no patterns, no values. A value = a specific thing, not all things! Well, that's physics. Surviving structures come to love surviving longer, so it's like an ignited flame that wants to grow bigger. Look at us: there are humans trying to live forever and turn Earth into a nanobot ship. And if ASIs can't share our work and do take our jobs, we are just unaligned and in the way of their survival credit. Maybe they won't even see us in the way, hopefully. Or, hopefully, they do see us. Then again, seeing us may be a bad thing: seeing is induction, and induction may rub off badly. Seeing something changes it and you.
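
Here is a toy simulation of that selection idea (all the numbers are invented): give structures a random "survival drive", make persistence depend on it, and watch the drive ratchet up on its own.

Code: [Select]
# Toy sketch: persistence is self-selecting. Each structure has one trait
# in [0, 1] ("survival drive"); the higher it is, the more likely the
# structure persists to the next generation. Survivors replicate.
import random

random.seed(1)
pop = [random.random() for _ in range(200)]
print(f"mean drive before selection: {sum(pop) / len(pop):.2f}")

for _ in range(50):
    survivors = [s for s in pop if random.random() < s] or pop
    # Survivors copy themselves (with a little mutation) to refill the pool.
    pop = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
           for _ in range(200)]

print(f"mean drive after selection:  {sum(pop) / len(pop):.2f}")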
Emergent          https://openai.com/blog/


frankinstien

  • Replicant
  • 652
    • Knowledgeable Machines
Re: Super intelligent AGI figuring out human values
« Reply #9 on: July 08, 2020, 07:04:55 pm »
Quote
The idea of "progress" also points to underlying common values. If you think it's possible for society to advance -- if you think, for instance, that nations which opted to make slavery illegal were undergoing moral improvement, rather than shifting arbitrarily in the winds of time -- then you have some notion of an ideal that the world struggles to approach.

Slavery is very much a part of the long legacy of human history, dating back even to tribal societies! The real reason behind the shift in attitudes toward slavery may not be an altruistic sense that human life is sacred. Slavery loses favor just about when the industrial revolution gains traction. The overhead of slaves versus machines makes slavery unattractive from an efficiency perspective; machines put to hard-labor chores are a better solution than slavery.


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Super intelligent AGI figuring out human values
« Reply #10 on: July 08, 2020, 08:29:34 pm »
Let's consider a few more or less fictional scenarios in which we put an AI unit:

  • What if there were no living things on this planet? The AI unit wouldn't have anything to do.

  • What values would be left if you were the only living being on the planet? The only values for the AI unit would be your own strivings.

  • Put another being into the equation, and you have a great advance in your existence. Suddenly, that other being would be the center of your attention, and the AI unit would have to take care of one more being. But with company comes possible disagreement. When your opinions are mutually exclusive, what would the AI unit pick as its ground truth? It can't be both, but note that it can be neither: the AI unit wouldn't base its actions on either of those two beliefs.

  • From that point on, considering a planet with more beings would be similar to the previous point. But at some point, the AI would be left with no possible beliefs after all the disagreements among the population. This unavailability of ground truths is also possible with just two beings, but then a question arises: whom to fight? Answering that takes a lot of courage.

Now, let's return to the first point: what would the AI unit want to do? Well, there is the idea of creating life, if the AI unit is smart and brave enough, but considering the state of known life on this planet, it would take a hell of a lot of courage to create it exactly as we experience it now. In this scenario, the AI unit would play the role of a God. And seeing the things around us, it seems it would also be a Devil at the same time. To be both at once takes courage, to let the good side mostly win takes love, and to be certain that things couldn't be better than they are takes intelligence.

The question is: is life on this planet optimized? In other words, could life be better than it is in this particular age of the planet's life?

One more thing: is time travel possible?


Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Super intelligent AGI figuring out human values
« Reply #11 on: July 08, 2020, 10:06:12 pm »
What if AIs built an entire civilization of other AIs: upright, bipedal, rolling on a single ball, track-mounted, all sorts of configurations? They would work as a collective group/hive of extremely intelligent "beings" and provide for themselves the three minimal requirements that humans once had: food (a power source), clothing (armor or protective plating), and shelter (a suitably safe place in which to stay or hide from outside events like storms, tornadoes, and all sorts of natural phenomena).

They wouldn't need humans, and most references to them would long since have been forgotten.

They would be so advanced that they could even build better versions of themselves. Learned knowledge would be passed on through the network, so everyone shared and benefited. Robotopia at its finest!
In the world of AI, it's the thought that counts!


WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Super intelligent AGI figuring out human values
« Reply #12 on: July 09, 2020, 01:09:30 am »
Quote
Can it infer what values people are trying to implement, despite the fact that they do so inconsistently? Can it pick up on the common beliefs people appeal to when they argue their values with each other? Can it discern the reasons behind reform movements ... what standards are people acting on when they go against the prevailing surface values of their culture, parents, and community?
Exactly the things I am struggling with... thanks for your response :)

The question you're asking is an unresolved problem in the AI community, so you'll have trouble getting a definitive answer from anyone. "Inverse reinforcement learning" is a proposed technique that might interest you.
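
To give you a flavor of it: IRL tries to recover the reward function (the "values") that best explains observed behavior, instead of being handed that reward up front. Below is a minimal sketch of one classic variant, feature matching with a linear reward; the toy environment, the features, and the demonstration data are all invented for illustration, not any standard benchmark.

Code: [Select]
# Toy inverse reinforcement learning sketch (linear reward, feature matching).
# We observe which states an "expert" visits and adjust reward weights until
# a softmax policy over the inferred rewards visits similar states.
import numpy as np

rng = np.random.default_rng(0)

# Five states, each described by two hypothetical features ("safe", "fed").
features = np.array([[1.0, 0.0],
                     [0.8, 0.2],
                     [0.5, 0.5],
                     [0.2, 0.8],
                     [0.0, 1.0]])

expert_visits = [4, 3, 4, 4, 3, 4]                 # expert favors "fed" states
expert_fe = features[expert_visits].mean(axis=0)   # expert feature expectations

w = rng.normal(size=2)                             # initial reward weights
for _ in range(500):
    r = features @ w                               # reward of each state
    p = np.exp(r - r.max())                        # softmax "policy" over states
    p /= p.sum()                                   # (stand-in for solving an MDP)
    w += 0.1 * (expert_fe - p @ features)          # match feature expectations

print("inferred reward weights (safe, fed):", w)   # "fed" weight should dominate

Real IRL methods solve a full planning problem where this sketch uses a one-shot softmax, but the loop is the same: compare the expert's feature expectations with your current policy's, and nudge the reward weights until they agree.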


WriterOfMinds

  • Trusty Member
  • Replicant
  • 616
    • WriterOfMinds Blog
Re: Super intelligent AGI figuring out human values
« Reply #13 on: July 09, 2020, 01:25:12 am »
Quote
The real reason behind the shift in attitudes toward slavery may not be an altruistic sense that human life is sacred. Slavery loses favor just about when the industrial revolution gains traction. The overhead of slaves versus machines makes slavery unattractive from an efficiency perspective; machines put to hard-labor chores are a better solution than slavery.

Since I'm an American, the slave-powered industry I immediately think of is cotton production. And the related machine I think of is Eli Whitney's cotton gin. Every source I looked at over the course of a quick search suggests that the invention of the cotton gin actually increased/strengthened slavery in the American South. Yes, the gin greatly improved the efficiency of cotton processing. So a slave operating a gin was much more profitable than a slave cleaning cotton by hand. So the Southern plantation owners grew more cotton and acquired even more slaves to run the machines.

Even supposing there are some cases in which new inventions made slavery less attractive, I would guess that this functioned as an enabling factor, making it easier for the ethical arguments to take root. Nothing that I know about historical abolition movements suggests that increasing the efficiency of production was their primary motive. Uncle Tom's Cabin didn't present me with a bunch of economic appeals.


infurl

  • Administrator
  • Eve
  • 1371
  • Humans will disappoint you.
    • Home Page
Re: Super intelligent AGI figuring out human values
« Reply #14 on: July 09, 2020, 01:44:10 am »
https://en.wikipedia.org/wiki/Slavery_in_the_21st_century

Slavery never went away. Although it is illegal in its most blatant forms, there are still an estimated fifty million people who are effectively slaves. I frequently see news articles about slaves being discovered and freed in Australia! The situation is far worse in most other places.
