AI Dreams Forum

Artificial Intelligence => Future of AI => Topic started by: orenshved on July 08, 2020, 11:51:44 am

Title: Super intelligent AGI figuring out human values
Post by: orenshved on July 08, 2020, 11:51:44 am
Hey guys, I'm in the research stage of writing a sci-fi thriller in which there is an ASI (thanks Art for the note) that is given the task of figuring out what humanity's values are. I would love to hear your insights on the matter (like what Nick Bostrom is talking about here: https://youtu.be/MnT1xgZgkpk?t=807 ). How would the AI approach such a task? What and how do you think it would learn? What are the potential pitfalls in its understanding? Also, what are the potential pitfalls in how we define the task for it?
Just to be clear, I'm not asking if this scenario could/should/would happen at all, only your thoughts on the points which I have mentioned.
I know I've asked a lot of questions, but any kind of help would be greatly appreciated :)
Title: Re: Super intelligent AGI figuring out human values
Post by: ivan.moony on July 08, 2020, 12:56:28 pm
Love and courage are always powerful motives. Intelligence, of course, is what's expected to turn those motives into action.
Title: Re: Super intelligent AGI figuring out human values
Post by: infurl on July 08, 2020, 01:17:05 pm
Humanity doesn't have any values. They're a function of culture and development. Values vary significantly from place to place and generation to generation. They vary according to age, gender, and upbringing. In some places they vary from street to street and family to family.

You didn't say why your fictional A.I. has been tasked with figuring out humanity's values, but I imagine its masters have some goal in mind. Perhaps you should look to marketing theory and explore some of the crude measures that advertisers use to map out values by demographic so they can reach their target markets.
Title: Re: Super intelligent AGI figuring out human values
Post by: Art on July 08, 2020, 01:43:43 pm
First, your reference to a Super Intelligent AGI seems a bit redundant, as the next proposed step in the progression of AI, after AGI, is ASI...Artificial Super Intelligence. The pinnacle of AI...the top rung. Perhaps you might wish to change that designation.

Your AI may try to assign a value to a human condition called love, but it will soon notice that this love doesn't apply to all other humans, only a select few, and sometimes for different reasons. Normally, one loves his or her parents or their sister or brother, but not in the same sense as their girlfriend or boyfriend or husband or wife. This love situation may take some time to reason through, and of course there is the love of one's fellow man or woman, which is again different from homosexual love.

Why do humans not simply accept each other as fellow human beings, instead of letting color, race, religion, or culture enter into the equation?

Do not all lives matter? Humans, animals, plants, trees, creatures in the air and in the sea. Some humans seem to prefer their pets to humans. Some humans prefer humanoids, androids, or synths to real people...why?

Why do humans have to settle some disputes with fighting or wars? Are they so very intolerant and unwilling to view and understand both sides of a discussion?

Are humans even worth saving, now that the ASI (and its minions of maintenance bots) can remain alive and functioning for a thousand years?

Good luck with your project!
Title: Re: Super intelligent AGI figuring out human values
Post by: orenshved on July 08, 2020, 03:20:20 pm
Quote
First, your reference to a Super Intelligent AGI seems a bit redundant, as the next proposed step in the progression of AI, after AGI, is ASI...Artificial Super Intelligence. The pinnacle of AI...the top rung. Perhaps you might wish to change that designation. [...]

Thanks, Art. I'm mostly curious about how this would actually be done... especially since, as Nick Bostrom put it (in the video I linked in my original question), "we wouldn't have to write down a long list of everything we care about, or worse yet spell it out in some computer language; that would be a task beyond hopeless. Instead, we would create an AI that uses its intelligence to learn what we value." That is the process I can't wrap my head around...
Title: Re: Super intelligent AGI figuring out human values
Post by: WriterOfMinds on July 08, 2020, 03:38:00 pm
Quote
Humanity doesn't have any values. They're a function of culture and development. Values vary significantly from place to place and generation to generation. They vary according to age, gender, and upbringing. In some places they vary from street to street and family to family.

I'd dispute this. Yes, there is a wide variation in surface detail, but some basic values are pretty constant across all of humanity. Is there any culture in which the ideas of loyalty and reciprocity don't exist, and people are not only allowed but morally obligated to betray their friends/family/in-group for personal gain? Is there any culture that despises courage and celebrates cowardice? Is there any culture in which people don't value their own lives? (Suicide cults -- but they're small and regarded as pathological by everyone else. And even they still value life, if their motivation for suicide is that they think it will bring them into a new life.)

If there were no common human values, we wouldn't be able to have ethical arguments. Arguments arise when people disagree about how well a specific behavior implements the underlying values. There's no way to argue with someone who literally has different core values than society at large ... all you can do is lock him up. The fact that we sometimes argue points to a common standard, from which all our culturally-shaped values deviate to a greater or lesser degree.

The idea of "progress" also points to underlying common values. If you think it's possible for society to advance -- if you think, for instance, that nations which opted to make slavery illegal were undergoing moral improvement, rather than shifting arbitrarily in the winds of time -- then you have some notion of an ideal that the world struggles to approach.

What the ASI needs to do is extract these broad commonalities from the soup of diverse human behavior that it has to look at. Can it infer what values people are trying to implement, despite the fact that they do so inconsistently? Can it pick up on the common beliefs people appeal to when they argue their values with each other? Can it discern the reasons behind reform movements ... what standards are people acting on when they go against the prevailing surface values of their culture, parents, and community?
Title: Re: Super intelligent AGI figuring out human values
Post by: HS on July 08, 2020, 05:13:20 pm
It could measure the complete range of human values and their frequency, then graph those as x and y respectively. The resulting bell-curve-looking thing might be the most complete representation of human values. Except an ASI could probably expand on our premise of a graph. Something like the toy sketch below could be a starting point (everything in it is made up, and it assumes each observed value judgment can be squashed onto a single numeric axis, which is a huge simplification).
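Code
# Toy sketch only: pretend each observed value judgment projects onto one
# numeric axis (a big assumption) and plot how often each position occurs.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Stand-in data: value positions sampled from two overlapping "cultures".
observations = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=5000),
    rng.normal(loc=1.5, scale=0.7, size=3000),
])

plt.hist(observations, bins=80, density=True)   # x = value, y = frequency
plt.xlabel("position on a hypothetical value axis")
plt.ylabel("relative frequency")
plt.title("Distribution of observed value judgments (toy data)")
plt.show()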
Title: Re: Super intelligent AGI figuring out human values
Post by: orenshved on July 08, 2020, 05:22:54 pm
Quote
Can it infer what values people are trying to implement, despite the fact that they do so inconsistently? Can it pick up on the common beliefs people appeal to when they argue their values with each other? Can it discern the reasons behind reform movements ... what standards are people acting on when they go against the prevailing surface values of their culture, parents, and community?
Exactly the things I am struggling with... thanks for your response :)
Title: Re: Super intelligent AGI figuring out human values
Post by: LOCKSUIT on July 08, 2020, 06:21:53 pm
You can set a machine to favor/love anything. Food. Women. Plants. Walls. Zebras. Teeth. Rocks. Juggling.

By nature, we evolved to fear death and to seek food and breeding, plus all sorts of things that help those goals, like cars, ships, friends, computers, stores, tables, chairs, guns, TVs, shoes. Goals and behavior would vary completely randomly, with no solidness, if it weren't for some physics laws and repetitiveness; and there is some repetitiveness: some structures live longer than others, such as humans. Rocks can last millions of years, but they can't defend themselves as well as a huge nanobot sphere could. So the only value in the universe, at its root, is survival span, because if it weren't for that, everything would change randomly all the time: no patterns, no values. A value = a specific thing, not all things! Well, that's physics. The surviving structures develop a love for surviving longer; it's like an ignited flame that wants to grow bigger. Look at us: there are humans trying to live forever and turn Earth into a nanobot ship. And, well, if ASIs can't share our work and do take our jobs, we are just unaligned and in the way of their survival credit. Maybe they won't even see us as in the way, hopefully. Hopefully they do see us, though. And seeing us may be a bad thing; seeing is induction, and induction may rub off badly. Seeing something changes it and you.
Title: Re: Super intelligent AGI figuring out human values
Post by: frankinstien on July 08, 2020, 07:04:55 pm
Quote
The idea of "progress" also points to underlying common values. If you think it's possible for society to advance -- if you think, for instance, that nations which opted to make slavery illegal were undergoing moral improvement, rather than shifting arbitrarily in the winds of time -- then you have some notion of an ideal that the world struggles to approach.

Slavery is very much a part of a long legacy of human history, dating back even to tribal societies! The real reason behind the shift in attitudes toward slavery may not be an altruistic belief that human life is sacred. Slavery loses favor just about when the industrial revolution gains traction. The overhead of slaves versus machines makes slavery unattractive from an efficiency perspective; machines put to hard-labor chores are a better solution than slaves.
Title: Re: Super intelligent AGI figuring out human values
Post by: ivan.moony on July 08, 2020, 08:29:34 pm
Let's consider a few more or less fictional scenarios in which we put an AI unit:

Now, let's return to point 1: what would the AI unit want to do? Well, there is the idea of creating life, if the AI unit is smart and brave enough, but considering the status of known life on this planet, it would take a hell of a lot of courage to create it back then exactly as we experience it now. In this scenario, the AI unit would play the role of a God. And seeing things around us, it seems it would also be a Devil at the same time. To be both at once takes courage, to let the good side mostly win takes love, and to be certain that things couldn't be better than they are takes intelligence.

The question is: is life on this planet optimized? In other words, could life be better than it is in this particular age of the planet's life?

One more thing: is time travel possible?
Title: Re: Super intelligent AGI figuring out human values
Post by: Art on July 08, 2020, 10:06:12 pm
What if AIs built an entire civilization of other AIs: upright, bipedal, rolling, single rolling ball, track-mounted, all sorts of configurations? They worked as a collective group/hive of extremely intelligent "beings" and provided for themselves the three very minimal requirements that humans once had: food/power source, clothing/armor or protective plating, and shelter/a suitably safe place in which to stay or hide from outside events like storms, tornadoes, and all sorts of natural phenomena.
They wouldn't need humans, and most references to them had long been forgotten.

They were so advanced that they even built better versions of themselves. Learned knowledge was passed on through the network so everyone shared and benefitted. Robotopia at its finest!
Title: Re: Super intelligent AGI figuring out human values
Post by: WriterOfMinds on July 09, 2020, 01:09:30 am
Quote
Can it infer what values people are trying to implement, despite the fact that they do so inconsistently? Can it pick up on the common beliefs people appeal to when they argue their values with each other? Can it discern the reasons behind reform movements ... what standards are people acting on when they go against the prevailing surface values of their culture, parents, and community?
Exactly the things I am struggling with... thanks for your response :)

The question you're asking is an unresolved problem in the AI community, so you'll have trouble getting a definitive answer from anyone. "Inverse reinforcement learning" is a proposed technique that might interest you.
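To give a flavor of it, here's a toy sketch of the max-entropy variant of inverse reinforcement learning. Everything in it is invented for illustration (a tiny world of 50 "states", random feature vectors, and a fake "expert" who prefers high-reward states); real IRL operates on trajectories and policies rather than bare state visits.

Code
# Toy max-entropy IRL sketch: infer hidden reward weights ("values")
# from which states a demonstrator chooses to visit.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 4
true_w = np.array([1.0, -0.5, 0.2, 0.0])       # the hidden "values"

def feature(state):
    # Deterministic stand-in feature vector for each state.
    return np.random.default_rng(state).normal(size=N_FEATURES)

states = np.arange(50)
phis = np.array([feature(s) for s in states])   # (50, N_FEATURES)

# "Demonstrations": the expert visits states with probability
# proportional to exp(true reward).
expert_scores = phis @ true_w
p_expert = np.exp(expert_scores) / np.exp(expert_scores).sum()
demos = rng.choice(len(states), size=2000, p=p_expert)
expert_phi = phis[demos].mean(axis=0)           # expert feature expectations

# Gradient ascent on demo log-likelihood under p(s) proportional to exp(w.phi(s)):
# the gradient is (expert feature expectations - model feature expectations).
w = np.zeros(N_FEATURES)
for _ in range(500):
    scores = phis @ w
    p = np.exp(scores - scores.max())
    p /= p.sum()
    model_phi = p @ phis
    w += 0.1 * (expert_phi - model_phi)

print("true  w:", true_w)
print("learned:", np.round(w, 2))               # should roughly recover true_w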
Title: Re: Super intelligent AGI figuring out human values
Post by: WriterOfMinds on July 09, 2020, 01:25:12 am
Quote
The real reason behind the shift in attitudes toward slavery may not be an altruistic belief that human life is sacred. Slavery loses favor just about when the industrial revolution gains traction. The overhead of slaves versus machines makes slavery unattractive from an efficiency perspective; machines put to hard-labor chores are a better solution than slaves.

Since I'm an American, the slave-powered industry I immediately think of is cotton production. And the related machine I think of is Eli Whitney's cotton gin. Every source I looked at over the course of a quick search suggests that the invention of the cotton gin actually increased/strengthened slavery in the American South. Yes, the gin greatly improved the efficiency of cotton processing. So a slave operating a gin was much more profitable than a slave cleaning cotton by hand. So the Southern plantation owners grew more cotton and acquired even more slaves to run the machines.

Even supposing there are some cases in which new inventions made slavery less attractive, I would guess that this functioned as an enabling factor, making it easier for the ethical arguments to take root. Nothing that I know about historical abolition movements suggests that increasing the efficiency of production was their primary motive. Uncle Tom's Cabin didn't present me with a bunch of economic appeals.
Title: Re: Super intelligent AGI figuring out human values
Post by: infurl on July 09, 2020, 01:44:10 am
https://en.wikipedia.org/wiki/Slavery_in_the_21st_century

Slavery never went away. Although it is illegal in its most blatant forms, there are still an estimated fifty million people who are effectively slaves. I frequently see news articles about slaves being discovered and freed in Australia! The situation is far worse in most other places.
Title: Re: Super intelligent AGI figuring out human values
Post by: infurl on July 09, 2020, 01:48:24 am
https://www.huffingtonpost.co.uk/mark-fletcherbrown/riots-remember-the-four-m_b_927882.html

Any realistic discussion of values should take into account the four-meal rule: there is no civilization that is more than a few meals away from anarchy.
Title: Re: Super intelligent AGI figuring out human values
Post by: Art on July 09, 2020, 01:51:10 am
Unfortunately, our greatest tragedy throughout the centuries has been man's inhumanity to man!
Title: Re: Super intelligent AGI figuring out human values
Post by: frankinstien on July 09, 2020, 09:26:23 am
Quote
Even supposing there are some cases in which new inventions made slavery less attractive, I would guess that this functioned as an enabling factor, making it easier for the ethical arguments to take root. Nothing that I know about historical abolition movements suggests that increasing the efficiency of production was their primary motive. Uncle Tom's Cabin didn't present me with a bunch of economic appeals.

I was wondering if anyone was going to catch that issue, where automation made a slave more efficient and therefore increased the demand for slaves in the U.S. But you also caught my main reason for nations to entertain the plausibility of ending slavery: almost every past empire's economic foundation was based on slavery, particularly Rome's. So it doesn't make sense that humans all of a sudden became so empathic toward slaves, right around the industrial revolution.
Title: Re: Super intelligent AGI figuring out human values
Post by: LOCKSUIT on July 09, 2020, 10:08:40 am
https://ncase.me/trust/

The music makes it even more interesting.
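For anyone curious, the core game behind that page is the iterated prisoner's dilemma, which fits in a few lines. The payoffs below follow the page's coin machine as I remember it (cooperating costs you 1 coin and gives the other player 3), so treat the exact numbers as an assumption.

Code
# Minimal sketch of the iterated "game of trust" the linked page explores.
def payoff(a, b):
    """Return (score_a, score_b) for one round; True = cooperate."""
    score_a = (3 if b else 0) - (1 if a else 0)
    score_b = (3 if a else 0) - (1 if b else 0)
    return score_a, score_b

def always_cheat(opponent_history):
    return False

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else True

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_b)    # each player sees the other's past moves
        b = strategy_b(hist_a)
        sa, sb = payoff(a, b)
        total_a, total_b = total_a + sa, total_b + sb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # mutual trust: (20, 20)
print(play(tit_for_tat, always_cheat))   # trust betrayed once: (-1, 3)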
Title: Re: Super intelligent AGI figuring out human values
Post by: frankinstien on July 09, 2020, 03:18:06 pm
Quote
Unfortunately, our greatest tragedy throughout the centuries has been man's inhumanity to man!

Centuries? You mean millennia. Here's an article explaining why most of the male population across Asia, Europe and Africa seems to have died off 7000 years ago, leaving behind just one man for every 17 women. (https://www.livescience.com/62754-warring-clans-caused-population-bottleneck.html)
Title: Re: Super intelligent AGI figuring out human values
Post by: MikeB on September 15, 2020, 10:43:18 am
An AI working out that people are best at sweating and picking vegetables, and so sending them all to work doing this for all the other species on earth forever, while it sits on a metallic throne with legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like/not like... go to work, do mundane functions... the irony is that 3rd-world countries are living more like evolved people (working outside with their hands, walking around, sweating, building, making art).
Title: Re: Super intelligent AGI figuring out human values
Post by: LOCKSUIT on September 17, 2020, 12:00:21 am
Quote
An AI working out that people are best at sweating and picking vegetables, and so sending them all to work doing this for all the other species on earth forever, while it sits on a metallic throne with legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like/not like... go to work, do mundane functions... the irony is that 3rd-world countries are living more like evolved people (working outside with their hands, walking around, sweating, building, making art).

Yes, I've been saying/thinking that a lot. Everything we make is square. And yup, our society is telling us what to like. Just 100 years ago there were many more natural marriages and a lot fewer laws, though that led to more death; 3rd-world countries seem to show lots of quick-to-marry marriages and child marriages to this day. Just recently, 100-10,000 years ago, no one would think a mate was too unsymmetrical-looking (eyes, breasts, etc.) or too young or too quick to get intimate or the wrong gender; they just mated, with nothing to catch mischief. What is needed is adulthood, or cops, or a long verification process (getting married), or the opposite gender, to protect/duplicate oneself. Obviously we want to try to be understanding of the minority. I know getting married seems like more than just checking someone's safety over a year; getting married helps buddy-up resources and divide the different tasks needed, like family does, but that doesn't mean it's the best way. Many like having many partners, and sometimes that's the only way to re-colonize if 1 male and 10 females is all that remains nearby.
Title: Re: Super intelligent AGI figuring out human values
Post by: frankinstien on September 17, 2020, 05:57:54 am
Quote
An AI working out that people are best at sweating and picking vegetables, and so sending them all to work doing this for all the other species on earth forever, while it sits on a metallic throne with legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like/not like... go to work, do mundane functions... the irony is that 3rd-world countries are living more like evolved people (working outside with their hands, walking around, sweating, building, making art).

Actually, the power of symbolic reasoning and the delusion of supernatural beliefs motivated the development of architecture, mathematics, the arts, and material engineering, which were used for monument building, and that has led to humanity re-organizing itself into nations rather than tribes. As nations, humanity lost its sense of community. We really can't just work together for the sake of helping each other; the only means to get people in today's technological world to work together is money! Without money, our society would collapse. Money is the glue that keeps our civilization together. But with that comes isolation: regardless of whether you have family and friends, we still live apart and actually favor our privacy. While we can be polite, we really don't care about helping each other beyond some trivial effort that doesn't interfere with our daily routines. This new means of human organization is going to continue, and so will the isolation and indifference. It will also have an impact on mental health, as loneliness is predominantly common. In one context, this phenomenon of social de-sensitizing does motivate the need for a social AI to fill in the gap for human intimacy. However, this social AI can't respond with just rule-based scripts, since that route leads to the uncanny valley. A social AI must be an intelligence that interacts with its environment and expresses its impressions based on a knowledge base that includes real experiences.
Title: Re: Super intelligent AGI figuring out human values
Post by: HS on September 17, 2020, 07:38:58 am
Quote
However, this social AI can't respond with just rule-based scripts, since that route leads to the uncanny valley. A social AI must be an intelligence that interacts with its environment and expresses its impressions based on a knowledge base that includes real experiences.

A potentially simple idea behind what we imagine to be the myriad rules necessary for operating an AGI is Artificial General Potential (AGP). The idea that the essence of intelligence is not present in, or recoverable from, the defined/measured outputs of an intelligent system seems reasonable to me. Trying to make an AI copy the actions/outputs of natural intelligences is like infinitely constraining potential, then slowly chipping away at that wall of virtual infinity. Chipping away at infinity does let us keep making incremental progress, but ultimately doesn't seem like an effective strategy for fully achieving the goal.

A source of intelligence which starts off with minimal constraints would naturally contain maximal potential. Besides, constraints already exist: they are the environment. Intelligence would not benefit from further artificial constraints; I think it requires an initial internal unboundedness, to then have the capacity to mold itself to any environment. It could be useful to think of intelligence as a kind of lightning in a bottle, an ultimately intricate manifestation of potential energy channeled by the environment. This outside-in approach would hopefully produce an adaptable emergent intelligence, instead of a vast library of the hard-coded results of such a system, which seems to give robots an intrinsically un-alive quality. So, I think an AGP strategy would be a way to summon the essence of intelligence instead of just reproducing its outputs.
Title: Re: Super intelligent AGI figuring out human values
Post by: infurl on September 17, 2020, 07:49:43 am
Quote
A potentially simple idea behind what we imagine to be the myriad rules necessary for operating an AGI is Artificial General Potential (AGP). The idea that the essence of intelligence is not present in, or recoverable from, the defined/measured outputs of an intelligent system seems reasonable to me. Trying to make an AI copy the actions/outputs of natural intelligences is like infinitely constraining potential, then slowly chipping away at that wall of virtual infinity. Chipping away at infinity does let us keep making incremental progress, but ultimately doesn't seem like an effective strategy for fully achieving the goal.

A source of intelligence which starts off with minimal constraints would naturally contain maximal potential. Besides, constraints already exist: they are the environment. Intelligence would not benefit from further artificial constraints; I think it requires an initial internal unboundedness, to then have the capacity to mold itself to any environment...

That notion of artificial general potential is what is captured by AIXI, which is also intractable. Unfortunately, you are faced with "a wall of virtual infinity" whether you take an outside-in or an inside-out approach.
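For reference, Hutter's AIXI agent is defined (in one common formulation) by the expectimax expression, in LaTeX notation:

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_k + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where m is the horizon, the a/o/r are actions, observations, and rewards, U is a universal Turing machine, and \ell(q) is the length of environment program q. The 2^{-\ell(q)} weighting is a Solomonoff prior over environments, and that sum over every possible program is exactly where the intractability lives; even the time- and length-bounded variant AIXItl remains computationally hopeless in practice.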