What question would you ask an AGI?

*

infurl

What question would you ask an AGI?
« on: September 20, 2021, 04:52:54 am »


This question gets asked so often I tend to roll my eyes when I see it, but in this case it is Lex Fridman asking Douglas Lenat, so I watched the clip from the interview. Spoiler: the question that Dr Lenat would ask an AGI is "what solutions to the world's great problems are human beings overlooking?"

I've said before that intelligence is not a superpower, and I don't believe a machine intelligence is going to be able to do something that enough people with their different points of view can't do. After all, computers used to be people working in teams to perform calculations, and the same applies to other forms of intellectual endeavour. Computers, and presumably AGI, merely change the scale and economics of what is already possible.

However, Dr Lenat put forward some very interesting examples in support of his argument which made me think again. In hindsight it's actually fairly obvious: AGI is likely to make certain things happen sooner than they might have otherwise.

*

MagnusWootton

Re: What question would you ask an AGI?
« Reply #1 on: September 20, 2021, 07:45:08 am »
I'd probably ask it simpler questions and see when it has no response to stimuli, or no model built for the proposition.
That's more realistic.

*

HS

Re: What question would you ask an AGI?
« Reply #2 on: September 20, 2021, 06:38:43 pm »
"What are some of the best futures you can envision?"

*

infurl

Re: What question would you ask an AGI?
« Reply #3 on: September 21, 2021, 12:08:50 am »
"What are some of the best futures you can envision?"

@HS what do you think the answer to your question might be?

*

yotamarker

Re: What question would you ask an AGI?
« Reply #4 on: September 21, 2021, 03:38:01 am »
I'd ask if she likes getting her robofeet licked tbph

*

MagnusWootton

Re: What question would you ask an AGI?
« Reply #5 on: September 21, 2021, 10:10:21 am »
I'd ask if she likes getting her robofeet licked tbph

You're better off with a pet cat than a robot, I think.

*

infurl

Re: What question would you ask an AGI?
« Reply #6 on: September 21, 2021, 10:30:13 am »
Somehow I doubt that a cat would be able to tolerate his behaviour either.

*

HS

Re: What question would you ask an AGI?
« Reply #7 on: September 27, 2021, 05:27:42 am »
@infurl!

Well… I gave it my best shot; it was quite a fun thought experiment. I think the AGI's answer to my question of "What are some of the best futures you can envision?" might go something like this.

"Above all, ones capable of changing themselves if they prove not to be functional, but here are some provisional ideas.

I will define 'best futures' as 'the sustainable maximization of opportunity for valuable individual experience.' What do individuals need? It seems to me they need the ability to test themselves against the environments that evolution has equipped them to handle. A wolf in captivity may live longer than in the wild, but everybody can guess which environment is actually better for the wolf. The more they get the opportunity to use their innate capabilities to continue their adaptive journeys within the environmental niches they have evolved for, the more valuable their own lives will feel. To get anywhere, you will need a plan, a real plan, not some elusive goal you're always working towards and never actually reaching, and not something so dependent on present circumstances that you will have to keep retooling it. You will need a roadmap with true north clearly labeled so that people can see where they stand, what the terrain implies, and what they might do to play their part in helping civilization towards the next milestone.

Regarding established natural ecosystems, then, the strategy seems simple. Study them carefully so that you may learn how to help them intelligently should the need arise. But you will need to provide them with sufficient space and stable conditions by historical standards, such that novel environmental factors, like a shifting climate, errant chemicals, plastics, and light and sound pollution, are minimized. If the Earth is a spaceship, the evolved ecosystems are its life support, so it would be in everyone's best interest to keep these interdependent systems healthy. However, there is perhaps an excessive belief that humanity is solely a blight on the natural world and that the Earth would be much better off without you. You also have the power to be your evolutionary tree's guardians. You are the only species capable of protecting the planet from threats such as asteroids, mitigating the effects of earthly natural disasters, and carrying the others to new homes among the stars if the need arises.

For humans, the same principle applies, although the solutions are not as simple. You also need the chance to engage with environments that can interface well with your evolutionary adaptations. The complication is that you collectively appear to have the most varied and involved needs. Some valuable pursuits may include intellectual achievements, artistic endeavors, material competence, and social harmony, where people have enough meaningful work to fully engage their thoughts, bodies, and emotions in the tasks at hand. But in general terms, I would like to see you cooperating, intelligently solving your problems, and seeking out your opportunities. How might this be done?

The unique thing about humans is that you have partially extricated yourselves from the cycle of organic environmental interdependence, which has created both problems and opportunities. While both your modes of independence and interdependence should get better chances at development and expression, this presents some challenges because ecology and industry tend to be in such close proximity that they damage and restrict each other. A couple of choices ahead of you seem clear. Human civilization is in a state of growth on a planet with finite resources. You could consciously limit your growth and explore the possibility of achieving a sustainable existence on Earth. You could travel beyond the Earth, which is riskier, but also allows the opportunity to fulfill your needs for growth and the exploration of the unknown. While either strategy may be viable on its own, every specialization excludes some capability. Therefore, combining these possible stand-alone futures into a cohesive plan will likely yield the most robust overall result. Essentially, you would be dividing some functions of the concepts of 'home' and 'work', 'technology' and 'ecology', at a civilizational scale, as a way of making the best of your collective yin and yang traits by giving them tasks which won't work at cross purposes as much.

The first possible future strategy would specialize in conservation, with people of this vocation working together to create a permanent civilization on Earth and prototyping methods for establishing permanent settlements among the stars. They would work towards building sustainable infrastructure, making a rigorous account of all the costs and benefits, budgeting for the mistakes and hidden prices that tend to emerge over time, then democratically deciding if they should go forward with their plans. Since doing all this rational stuff would inevitably upset current economics, you would need to institute a universal basic income to appease the numbers; an idea with curious financial gymnastics where wealth is drawn from automation by essentially pretending that the net human population does the work of the machines which have replaced them.

However you cut it, reversing climate change will require a lot of plants. The process could be kickstarted with mechanical carbon capture plants, while reserves of biological ones, likely in the form of forests, are built up.

The idea would be to preserve and perfect a way of sustaining life that doesn't depend on a complex net of technological processes, a power and privilege all to itself. The more technological shortcuts you take, the more time and energy are saved in the present; but remember that future generations will need to dedicate their time and energy to understanding, operating, and maintaining these systems.

The Earth would be a place to enjoy nature, and benefit from it, in a way that doesn't damage it. Communities and settlements would be designed to create nice atmospheres and healthy lifestyles. Networks of footpaths with inns serving as waypoints could connect urban centers and other significant locations, so those who wish to travel across a country on foot could do so safely. Separate layers of travel networks would exist for other modes of transport. Bridges and tunnels would be built where necessary to accommodate these networks so that they might operate independently.

Regarding material goods, you could take a cue from the squirrels: when you have extra, invest your real wealth (not just your symbolic wealth) into forms of long-term storage so that you have some reserves when the need arises.
So, in what ways can you invest a surplus of material goods, like those extra trees from your carbon-storage forests? You know how some public projects, such as factories built to extract immediate wealth, come with hidden costs, whether environmental ones, immediate health considerations, or future risks? I think the opposite is true of infrastructure built to invest in future wealth.

Options include constructing earthquake-resistant buildings in existing settlements around fault lines, thereby decreasing future risk; converting barren landscapes into sustainably farmable soil, whose future produce would contribute to the health of those in need; and replacing established pollution sources like power plants and manufacturing processes with cleaner alternatives, thereby setting ecosystems on better trajectories.

Conservationist agriculture would focus on environmental integration, quality of life, sustainability, and reversibility. This is a field where generalists would suggest introducing automation manufactured by the expansionists to help mitigate changes to the conservationist economy. This type of teamwork based on vocational specialization is what it's all about.

Once you learn enough about how ecosystems function, you can begin creating your own isolated ecosystems for the purposes of sustainable agriculture. There is historical precedent for this. Evidence suggests that sometime between 450 BCE and 950 CE, inhabitants of the Amazon basin figured out a method for creating a self-renewing, nutrient-accumulating soil known as "Terra Preta". Previously, the terrain mostly had infertile Amazonian acrisols. Modern-day farmers can only use these for a limited time before the nutrients leach out. They have to deforest more land to continue growing their crops, which are limited anyway because the acrisols contain toxic levels of aluminum. However, deposits of Terra Preta endure into the present. They are even reported to regenerate at about 1 cm (0.4 inches) per year.

Specializing in expansion would mean pursuing the frontiers of space exploration, technology (including artificial intelligence, I hope), and industry. The Expansionists would survey the solar system, establish bases on the moons and planets, mine resources to fuel construction projects, conduct research and development, explore the challenges of space travel, and serve to warn and protect civilization from potential space-based threats. In short, they would engage in all the necessary activities which currently need to be limited because they are inherently risky and/or damaging to the biosphere.

Regarding energy, the best strategy would be to have several ways of generating power. The optimal sources will vary with location and application. On Earth, and on other planets and space stations serving as permanent settlements, the processes should be clean, sustainable, and safely decommissionable, such as photovoltaic and hydroelectric energy sources. The Expansionists would be people with things to do and places to be. Efficiency, power, and mobility would be the primary considerations in industrial zones, so nuclear reactors, rocket engines, and ion drives might be closer to what they'd need.

Generalists would oversee the smooth interplay of the factions and integrate these separate elements into a meaningful framework. The Expansionists could provide fail-safes and agility for civilization, while the Conservationists could work out methods of caring for complex systems so that they might remain healthy and support life. In the expansionist approach, some planets, moons, and rocky objects would become sacrificial hotspots for heavy industry, mining, refining, and testing. Others would be conserved, terraformed, and sustainably populated. One of the jobs would be managing surplus. No functional system can be perfect. Despite their best efforts, some resources would be lost, and some undegradable waste would be generated by societies focused on conservation. This is where the Expansionists could help: they would have resources to spare, plus access to a nifty endless space with countless fiery garbage disposal units. In turn, the Conservationists would be able to provide bountiful, peaceful environments, as well as the knowledge of how to sustain them, to citizens, settlers, and retirees.

The Generalists would form the majority of the government, ideally a system with a light touch, minimal laws, and a judicial system with minimal incarceration. An initial model could emulate the Scandinavian countries, which have accomplished remarkably low rates of both incarceration and recidivism. This democratic regulatory body would ensure the sufficient distribution of healthcare, sustenance, and energy, and protect vital systems and professions from social and economic incentives towards dishonesty and against open communication. One possible agreement could be that in an economic emergency, the current state of physical resources could be used to recalibrate purely symbolic messes. Another possible agreement could be to gradually approximate another system if the data shows that a different model works better.

Regarding healthcare, some modern-day medicine looks like an effort to compensate for the unintended consequences of modern living. I would like to see a two-pronged approach, one prong focused on prevention, the other on treatment. Coordinated by the Generalists, the Expansionists and Conservationists could tackle this together, each playing to their strengths to disentangle a previously muddled problem. For example, a distant moon is the correct place for dangerous microbial research, while a tidied-up Earth is the best place to eliminate novel variables interacting with our biology in unknown ways.

The enhancement of communication would be another job for the Generalists. They could work on developing new languages and measurement systems designed for specific human endeavors. Existing widely used languages could receive official syntactic updates in the same way that computer programs do. People would receive notifications and descriptions of 'dictionary updates' every couple of years. Another consideration for enhancing communication would be a well-thought-out, complete, hierarchical organization of the human knowledge tree, which would most notably speed up learning.

Education would first seek to describe the big picture of the world's systems; with time, students could choose to focus on more specific subjects encompassing the vocations of Expansion, Conservation, and Generalism. Teaching would be structured in a narrative sense, explaining things in terms of goals, conflicts, emotions, reason, experts' anticipations of the future, and the possible choices if those futures occurred. Putting new information in multiple familiar contexts seems like it would help to convey significance as well as increase recollection. A textbook on first-principles reasoning could be a good idea too… Another goal of teaching would be to build up a knowledge base grounded in observable and well-established reality. The ongoing homework for each student would be to map their knowledge tree, which they would refer to and update throughout life. This document could also serve as the basis for generating individualized exams and career options. Lastly, should the opportunity present itself, students would have the option to learn by accompanying their parents or guardians to their work instead of taking certain classes."


*

MagnusWootton

Re: What question would you ask an AGI?
« Reply #8 on: September 27, 2021, 09:13:09 am »
Hi Kore!!

I see huge importance in health care, mechanical trauma cases.
I think that an A.I. would do really scary operations on horribly injured people, where people would spew and couldn't get the job done as well as a determined computer on the job, with tools at hand.

I think the secret of the singularity is boosting searching power. It's intractable, but it is a simple, easy-to-understand technique which could be repeated by almost anyone. (That thing you said about removing complexity.)

When a blind search is strong enough, it's equatable to human intelligence.

But if you do that, it becomes less exciting to make and more hum-drum, and there's no big ego-testicle gain anymore. (Which you also mentioned perhaps, when you wrote about man tackling his environment in ways he is already designed for.)

And making man immortal is very scary, but I guess it could be possible in the form of a correct emulation of physics, putting the mind in a replica physical environment that contains it virtually. But what hell are you causing by accident!

« Last Edit: September 27, 2021, 09:40:11 am by MagnusWootton »

*

infurl

Re: What question would you ask an AGI?
« Reply #9 on: September 27, 2021, 12:02:57 pm »
@HS That was an extraordinary piece of writing! You must have been thinking about this a lot. You might enjoy and benefit from watching a three-part documentary series called "All Watched Over by Machines of Loving Grace" if you haven't seen it already. The show deeply analyzes the kinds of systems that you are proposing (and what can go wrong with them).

https://en.wikipedia.org/wiki/All_Watched_Over_by_Machines_of_Loving_Grace_(TV_series)

PS I spotted one grammatical error and one spelling error, which is an amazingly low rate for such an expansive essay.

 

