Super intelligent AGI figuring out human values

  • 24 Replies
  • 36143 Views

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: Super intelligent AGI figuring out human values
« Reply #15 on: July 09, 2020, 01:48:24 am »
https://www.huffingtonpost.co.uk/mark-fletcherbrown/riots-remember-the-four-m_b_927882.html

Any realistic discussion of values should take the four-meal rule into account: no civilization is ever more than a few meals away from anarchy.

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Super intelligent AGI figuring out human values
« Reply #16 on: July 09, 2020, 01:51:10 am »
Unfortunately, our greatest tragedy throughout the centuries has been man's inhumanity to man!
In the world of AI, it's the thought that counts!

frankinstien

  • Replicant
  • 642
    • Knowledgeable Machines
Re: Super intelligent AGI figuring out human values
« Reply #17 on: July 09, 2020, 09:26:23 am »
Quote:
Even supposing there are some cases in which new inventions made slavery less attractive, I would guess that this functioned as an enabling factor, making it easier for the ethical arguments to take root. Nothing that I know about historical abolition movements suggests that increasing the efficiency of production was their primary motive. Uncle Tom's Cabin didn't present me with a bunch of economic appeals.

I was wondering whether anyone would catch the issue that automation made a slave more efficient and therefore increased the demand for slaves in the U.S. But you also caught my main point about why nations would even entertain the plausibility of ending slavery. The economic foundation of almost every past empire, Rome in particular, was based on slavery. So it doesn't make sense that humans suddenly became so empathetic towards slaves only once the industrial revolution arrived.

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Super intelligent AGI figuring out human values
« Reply #18 on: July 09, 2020, 10:08:40 am »
https://ncase.me/trust/

The music makes it even more interesting.
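For anyone who hasn't clicked through: the link is Nicky Case's interactive explainer of the iterated prisoner's dilemma. Below is a minimal sketch of the kind of tournament it plays out; the payoff numbers and strategy set are my own illustrative choices, not taken from the page.

```python
# Toy iterated prisoner's dilemma tournament, in the spirit of ncase.me/trust.
# Payoffs and strategies are illustrative assumptions, not the page's exact numbers.

PAYOFF = {  # (my_move, their_move) -> my_points
    ("cooperate", "cooperate"): 2,
    ("cooperate", "cheat"): -1,
    ("cheat", "cooperate"): 3,
    ("cheat", "cheat"): 0,
}

def always_cheat(history):
    return "cheat"

def always_cooperate(history):
    return "cooperate"

def tit_for_tat(history):
    # Copy the opponent's previous move; cooperate on the first round.
    return history[-1] if history else "cooperate"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # what each player has seen the other do so far
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    for opponent in (always_cheat, always_cooperate, tit_for_tat):
        print(tit_for_tat.__name__, "vs", opponent.__name__,
              play(tit_for_tat, opponent))
```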
Emergent          https://openai.com/blog/


MikeB

  • Autobot
  • 219
Re: Super intelligent AGI figuring out human values
« Reply #20 on: September 15, 2020, 10:43:18 am »
An AI working out that humans are best at sweating and picking vegetables, and so sending everyone to do that work for all the other species on Earth forever, while it sits on a metallic throne with its legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like and not like... go to work, perform mundane functions... the irony is that third-world countries live more like evolved people (working outside with their hands, walking around, sweating, building, making art).

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Super intelligent AGI figuring out human values
« Reply #21 on: September 17, 2020, 12:00:21 am »
Quote from MikeB:
An AI working out that humans are best at sweating and picking vegetables, and so sending everyone to do that work for all the other species on Earth forever, while it sits on a metallic throne with its legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like and not like... go to work, perform mundane functions... the irony is that third-world countries live more like evolved people (working outside with their hands, walking around, sweating, building, making art).

Yes, I've been saying and thinking that a lot. Everything we make is square. And yes, our society tells us what to like. Just 100 years ago there were far more natural marriages and far fewer laws, though that led to more deaths, and third-world countries still show plenty of quick marriages and child marriages to this day. As recently as 100-10,000 years ago, no one would think a mate was too unsymmetrical-looking (eyes, breasts, etc.), too young, too quick to get intimate, or the wrong gender; they just mated, with nothing in place to catch mischief.

What is needed is adulthood, or police, or a long verification process (getting married), or the opposite gender, to protect and duplicate oneself. Obviously we want to try to be understanding of the minority. I know getting married seems like more than just checking someone's safety over a year; it also helps buddy up resources and divide the different tasks needed, like a family does. But that doesn't mean it's the best way: many people like having many partners, and sometimes that's the only way to re-colonize if one male and ten females are all that remain nearby.
Emergent          https://openai.com/blog/

frankinstien

  • Replicant
  • 642
    • Knowledgeable Machines
Re: Super intelligent AGI figuring out human values
« Reply #22 on: September 17, 2020, 05:57:54 am »
Quote from MikeB:
An AI working out that humans are best at sweating and picking vegetables, and so sending everyone to do that work for all the other species on Earth forever, while it sits on a metallic throne with its legs crossed... would be a nightmare.

Man has actually been evolving into robots for centuries... everyone is separated into different houses, programmed with what to like and not like... go to work, perform mundane functions... the irony is that third-world countries live more like evolved people (working outside with their hands, walking around, sweating, building, making art).

Actually, the power of symbolic reasoning, together with the delusion of supernatural beliefs that motivated the development of architecture, mathematics, the arts, and materials engineering for monument building, has led humanity to reorganize itself into nations rather than tribes. As nations, humanity lost its sense of community. We really can't just work together for the sake of helping each other; the only way to get people in today's technological world to cooperate is through money. Without money our society would collapse; money is the glue that holds our civilization together. But with that comes isolation: regardless of whether you have family and friends, we still live apart and actually favor our privacy. While we can be polite, we really don't care about helping each other beyond some trivial effort that doesn't interfere with our daily routines. This new means of human organization is going to continue, and so will the isolation and indifference. It will also have an impact on mental health, since loneliness has become pervasive.

In one sense, this social desensitizing motivates the need for a social AI to fill the gap in human intimacy. However, such a social AI can't respond with just rule-based scripts, since that route leads to the uncanny valley. A social AI must be an intelligence that interacts with its environment and expresses its impressions from a knowledge base that includes real experiences.
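To make that contrast concrete, here is a minimal sketch of a scripted responder versus one that answers from stored experience. The class names, memory format, and similarity measure are illustrative assumptions for this sketch, not anything from the Knowledgeable Machines project.

```python
# Illustrative contrast: a scripted responder vs. one grounded in stored experience.
# Names and the similarity measure are assumptions made purely for this sketch.

from difflib import SequenceMatcher

class ScriptedResponder:
    """Fixed stimulus -> response rules; anything off-script falls flat."""
    def __init__(self, rules):
        self.rules = rules  # exact phrase -> canned reply

    def respond(self, utterance):
        return self.rules.get(utterance.lower(), "I don't understand.")

class ExperienceGroundedResponder:
    """Answers by recalling its own most similar past experience."""
    def __init__(self):
        self.experiences = []  # (situation, what the agent observed or felt)

    def record(self, situation, impression):
        self.experiences.append((situation, impression))

    def respond(self, utterance):
        if not self.experiences:
            return "I have no experience of that yet."
        situation, impression = max(
            self.experiences,
            key=lambda e: SequenceMatcher(None, utterance.lower(), e[0].lower()).ratio(),
        )
        return f"That reminds me of {situation}: {impression}"

if __name__ == "__main__":
    scripted = ScriptedResponder({"hello": "Hi there!"})
    grounded = ExperienceGroundedResponder()
    grounded.record("walking in the rain", "the streets smelled of wet dust")
    print(scripted.respond("Do you like rainy days?"))  # falls back to "I don't understand."
    print(grounded.respond("Do you like rainy days?"))  # recalls its rain experience
```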

HS

  • Trusty Member
  • Millennium Man
  • 1175
Re: Super intelligent AGI figuring out human values
« Reply #23 on: September 17, 2020, 07:38:58 am »
Quote from frankinstien:
However, such a social AI can't respond with just rule-based scripts, since that route leads to the uncanny valley. A social AI must be an intelligence that interacts with its environment and expresses its impressions from a knowledge base that includes real experiences.

A potentially simple idea behind what we imagine to be the myriad necessary rules for operating AGI is Artificial General Potential. The idea that the essence of intelligence is not present in, or recoverable from, the defined/measured outputs of an intelligent system seems reasonable to me. Trying to make an AI copy the actions/outputs of natural intelligences is like infinitely constraining potential, then slowly chipping away at that wall of virtual infinity. Chipping away at infinity does let us keep making incremental progress, but ultimately doesn’t seem like an effective strategy towards fully achieving the goal.

A source of intelligence which starts off with minimal constraints would naturally contain maximal potential. Besides, constraints already exist: they are the environment. Intelligence would not benefit from further artificial constraints; I think it requires an initial internal unboundedness in order to have the capacity to mold itself to any environment. It could be useful to think of intelligence as a kind of lightning in a bottle, an ultimately intricate manifestation of potential energy channeled by the environment. This outside-in approach would hopefully produce an adaptable emergent intelligence, instead of a vast library of the hard-coded results of such a system, which seems to give robots an intrinsically un-alive quality. So, I think an AGP strategy would be a way to summon the essence of intelligence instead of just reproducing its outputs.
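One way to read the outside-in idea in code: start from an unconstrained (random) behaviour and let only the environment's feedback mould it, rather than hard-coding the desired outputs. Below is a toy hill-climbing sketch under that reading; the environment, behaviour encoding, and mutation scheme are invented purely for illustration.

```python
# Toy illustration of "outside-in" shaping: the agent starts with no built-in rules,
# and only environmental feedback (a score) moulds its behaviour.
# The environment, encoding, and mutation scheme are all invented for this sketch.

import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # stands in for "what the environment rewards"

def environment_feedback(behaviour):
    # The environment is the only constraint: it scores how well behaviour fits it.
    return sum(1 for b, t in zip(behaviour, TARGET) if b == t)

def mutate(behaviour, rate=0.2):
    return [1 - b if random.random() < rate else b for b in behaviour]

def shape_by_environment(steps=200):
    behaviour = [random.randint(0, 1) for _ in TARGET]   # maximal initial freedom
    score = environment_feedback(behaviour)
    for _ in range(steps):
        candidate = mutate(behaviour)
        candidate_score = environment_feedback(candidate)
        if candidate_score >= score:        # keep whatever the environment favours
            behaviour, score = candidate, candidate_score
    return behaviour, score

if __name__ == "__main__":
    final_behaviour, final_score = shape_by_environment()
    print("shaped behaviour:", final_behaviour, "fit:", final_score)
```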

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: Super intelligent AGI figuring out human values
« Reply #24 on: September 17, 2020, 07:49:43 am »
Quote from HS:
A potentially simple idea behind what we imagine to be the myriad necessary rules for operating AGI is Artificial General Potential. The idea that the essence of intelligence is not present in, or recoverable from, the defined/measured outputs of an intelligent system seems reasonable to me. Trying to make an AI copy the actions/outputs of natural intelligences is like infinitely constraining potential, then slowly chipping away at that wall of virtual infinity. Chipping away at infinity does let us keep making incremental progress, but ultimately doesn’t seem like an effective strategy towards fully achieving the goal.

A source of intelligence which starts off with minimal constraints would naturally contain maximal potential. Besides, constraints already exist, they are the environment. Intelligence would not benefit from further artificial constraints, I think it requires an initial internal unboundedness, to then have the capacity to mold itself to any environment...

That notion of artificial general potential is what AIXI captures, and AIXI is likewise intractable. Unfortunately you are faced with "a wall of virtual infinity" whether you take an outside-in or an inside-out approach.
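For reference, Hutter's AIXI picks each action by an expectimax over every computable environment q, weighted by the length ℓ(q) of its program on a universal Turing machine U. Writing it out (from memory, so treat it as a paraphrase of the standard formulation) makes the intractability plain, since the inner sum ranges over all programs consistent with the interaction history:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

No finite agent can evaluate that sum, which is why AIXI is a definition of optimal behaviour rather than an algorithm you can run.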

 

