Super intelligent AGI figuring out human values

  • 24 Replies
  • 4494 Views
*

infurl

  • Administrator
  • **********
  • Millennium Man
  • *
  • 1016
  • Humans will disappoint you.
    • Home Page
Re: Super intelligent AGI figuring out human values
« Reply #15 on: July 09, 2020, 01:48:24 am »
https://www.huffingtonpost.co.uk/mark-fletcherbrown/riots-remember-the-four-m_b_927882.html

Any realistic discussion of values should take into account the four meal rule. There is no civilization that is more than a few meals away from anarchy.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5864
Re: Super intelligent AGI figuring out human values
« Reply #16 on: July 09, 2020, 01:51:10 am »
Unfortunately, our greatest tragedy throughout the centuries has been man's inhumanity to man!
In the world of AI, it's the thought that counts!

*

frankinstien

  • Starship Trooper
  • *******
  • 353
    • Knowledgeable Machines
Re: Super intelligent AGI figuring out human values
« Reply #17 on: July 09, 2020, 09:26:23 am »
Quote:
Even supposing there are some cases in which new inventions made slavery less attractive, I would guess that this functioned as an enabling factor, making it easier for the ethical arguments to take root. Nothing that I know about historical abolition movements suggests that increasing the efficiency of production was their primary motive. Uncle Tom's Cabin didn't present me with a bunch of economic appeals.

I was wondering if anyone would catch the issue that automation made a slave more efficient and therefore increased the demand for slaves in the U.S. But you also caught my main reason for nations to entertain ending slavery: almost every past empire's economic foundation was based on slavery, particularly Rome's. So it doesn't make sense that humans suddenly became so empathetic towards slaves only at the industrial revolution.

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ******************
  • Hal 4000
  • *
  • 4352
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Super intelligent AGI figuring out human values
« Reply #18 on: July 09, 2020, 10:08:40 am »
https://ncase.me/trust/

The music makes it even more interesting.
Emergent
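The linked Evolution of Trust page is built on the iterated prisoner's dilemma. A minimal sketch of the dynamic it demonstrates (the payoffs follow my recollection of the page's coin machine, where cooperating costs 1 coin and gives the other player 3; the strategy names are the standard ones, not taken from the page):

```python
# Iterated prisoner's dilemma, the game behind "The Evolution of Trust".

def payoff(a, b):
    """Return (score_a, score_b) for one round; 'C' = cooperate, 'D' = defect."""
    score_a = (-1 if a == 'C' else 0) + (3 if b == 'C' else 0)
    score_b = (-1 if b == 'C' else 0) + (3 if a == 'C' else 0)
    return score_a, score_b

def copycat(history):      # tit-for-tat: start nice, then mirror the opponent
    return history[-1] if history else 'C'

def always_cheat(history): # defects no matter what
    return 'D'

def match(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = p1(h2), p2(h1)   # each strategy sees the *other's* past moves
        h1.append(a)
        h2.append(b)
        da, db = payoff(a, b)
        s1 += da
        s2 += db
    return s1, s2

print(match(copycat, always_cheat))  # → (-1, 3): copycat loses only round one
print(match(copycat, copycat))       # → (20, 20): mutual trust pays best
```

The point the page (and the music) drives home is that copycat only pays a one-round tax against a cheater, while cooperation compounds, so trust can emerge without anyone designing it in.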

*

MikeB

  • Bumblebee
  • **
  • 45
Re: Super intelligent AGI figuring out human values
« Reply #20 on: September 15, 2020, 10:43:18 am »
An AI working out that people are best at sweating and picking vegetables, and so are all sent to work doing this for all the other species on earth forever, while it sits on a metallic throne with legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like/not like... go to work, do mundane functions... the irony is that 3rd world countries are living more like evolved people (working outside with hands, walking around, sweating, building, making art).

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ******************
  • Hal 4000
  • *
  • 4352
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Super intelligent AGI figuring out human values
« Reply #21 on: September 17, 2020, 12:00:21 am »
Quote from: MikeB
An AI working out that people are best at sweating and picking vegetables, and so are all sent to work doing this for all the other species on earth forever, while it sits on a metallic throne with legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like/not like... go to work, do mundane functions... the irony is that 3rd world countries are living more like evolved people (working outside with hands, walking around, sweating, building, making art).

Yes, I've been saying/thinking that a lot. Everything we make is square. And yup, our society is telling us what to like. Just 100 years ago there were many more natural marriages and far fewer laws, though that led to more death; 3rd world countries still show lots of quick-to-marry marriages and child marriages to this day. As recently as 100-10,000 years ago, no one would think a mate was too unsymmetrical looking (eyes, etc.), too young, too quick to get intimate, or the wrong gender; they just mated, with nothing to catch mischief. What is needed is adulthood, or cops, or a long verification process (getting married), or the opposite gender, to protect/duplicate oneself. Obviously we want to try to be understanding of the minority. I know getting married seems like more than just checking someone's safety over a year; it helps buddy-up resources and divide the tasks needed, like family does. But that doesn't mean it's the best way: many like having many partners, and sometimes it's the only way to re-colonize if 1 male and 10 females are all that remain nearby.
Emergent

*

frankinstien

  • Starship Trooper
  • *******
  • 353
    • Knowledgeable Machines
Re: Super intelligent AGI figuring out human values
« Reply #22 on: September 17, 2020, 05:57:54 am »
Quote from: MikeB
An AI working out that people are best at sweating and picking vegetables, and so are all sent to work doing this for all the other species on earth forever, while it sits on a metallic throne with legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like/not like... go to work, do mundane functions... the irony is that 3rd world countries are living more like evolved people (working outside with hands, walking around, sweating, building, making art).

Actually, the power of symbolic reasoning, and the delusion of supernatural beliefs that motivated the development of architecture, mathematics, art, and material engineering for monument building, has led humanity to re-organize itself into nations rather than tribes. As nations, humanity lost its sense of community. We really can't just work together for the sake of helping each other; the only means to get people in today's technological world to work together is money! Without money, our society would collapse. Money is the glue that keeps our civilization together. But with that comes isolation: regardless of whether you have family and friends, we still live apart and actually favor our privacy. While we can be polite, we really don't care about helping each other beyond some trivial effort that doesn't interfere with our daily routines. This new means of human organization is going to continue, and so will the isolation and indifference. It will also have an impact on mental health, as loneliness is increasingly common. In one context, this social de-sensitizing motivates the need for a social AI to fill the gap in human intimacy. However, this social AI can't respond with just rule-based scripts, since that route leads to the uncanny valley. A social AI must be an intelligence that interacts with its environment and expresses its impressions based on a knowledge base that includes real experiences.

*

HS

  • Trusty Member
  • *********
  • Terminator
  • *
  • 986
  • Where are these cookies!?
Re: Super intelligent AGI figuring out human values
« Reply #23 on: September 17, 2020, 07:38:58 am »
Quote from: frankinstien
However, this social AI can't respond with just rule-based scripts, since that route leads to the uncanny valley. A social AI must be an intelligence that interacts with its environment and expresses its impressions based on a knowledge base that includes real experiences.

A potentially simple idea behind what we imagine to be the myriad necessary rules for operating AGI is Artificial General Potential. The idea that the essence of intelligence is not present in, or recoverable from, the defined/measured outputs of an intelligent system seems reasonable to me. Trying to make an AI copy the actions/outputs of natural intelligences is like infinitely constraining potential, then slowly chipping away at that wall of virtual infinity. Chipping away at infinity does let us keep making incremental progress, but ultimately doesn’t seem like an effective strategy towards fully achieving the goal.

A source of intelligence which starts off with minimal constraints would naturally contain maximal potential. Besides, constraints already exist: they are the environment. Intelligence would not benefit from further artificial constraints; I think it requires an initial internal unboundedness, to then have the capacity to mold itself to any environment. It could be useful to think of intelligence as a kind of lightning in a bottle, an ultimately intricate manifestation of potential energy channeled by the environment. This outside-in approach would hopefully produce an adaptable emergent intelligence, instead of a vast library of the hard-coded results of such a system, which seems to give robots an intrinsically un-alive quality. So, I think an AGP strategy would be a way to summon the essence of intelligence instead of just reproducing its outputs.

*

infurl

  • Administrator
  • **********
  • Millennium Man
  • *
  • 1016
  • Humans will disappoint you.
    • Home Page
Re: Super intelligent AGI figuring out human values
« Reply #24 on: September 17, 2020, 07:49:43 am »
Quote from: HS
A potentially simple idea behind what we imagine to be the myriad necessary rules for operating AGI is Artificial General Potential. The idea that the essence of intelligence is not present in, or recoverable from, the defined/measured outputs of an intelligent system seems reasonable to me. Trying to make an AI copy the actions/outputs of natural intelligences is like infinitely constraining potential, then slowly chipping away at that wall of virtual infinity. Chipping away at infinity does let us keep making incremental progress, but ultimately doesn’t seem like an effective strategy towards fully achieving the goal.

A source of intelligence which starts off with minimal constraints would naturally contain maximal potential. Besides, constraints already exist, they are the environment. Intelligence would not benefit from further artificial constraints, I think it requires an initial internal unboundedness, to then have the capacity to mold itself to any environment...

That notion of artificial general potential is what is captured by AIXI, which is also intractable. Unfortunately you are faced with "a wall of virtual infinity" whether you take an outside-in or an inside-out approach.
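For reference, Hutter's AIXI agent is usually written roughly as follows (reproduced from memory, so treat the exact indexing as approximate):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left[ r_k + \cdots + r_m \right]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum ranges over every program \(q\) that reproduces the observed history on a universal machine \(U\), weighted by \(2^{-\ell(q)}\). That universal quantification is exactly what makes the agent incomputable: maximal generality is bought with intractability, which is the "wall of virtual infinity" in either direction.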

 

