Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...

  • 17 Replies
  • 3343 Views
*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
Where shall I start after about 30 years of thinking about ways to strong AI? (OK, with a lot of breaks  ;))

As I posted in "Greetings of Germany", some important points for me are:
- Strong AI understands
- Strong AI learns
- Strong AI has goals
- Strong AI acts in a (model) world
- Strong AI communicates

A philosophical question is: What should strong AI be for us?
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act as we want/intend.

So, one of my important points is that AI can *understand* - one point that distinguishes it from weak AI.
Therefore, I first let an AI learn to understand something: the task was to learn notions like "left" and "left of" for simple objects in a simple 2D model world, so that the AI could answer questions like "Is A left of B?" and carry out commands like "Put A left of B".
Now, my second approach/idea is how an AI can learn notions/words from fulfilling goals by moving from A to B, for example with "barriers" - "barrier" being a word learned from the situation that the 'normal', easy way to B is not possible.
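A minimal sketch of such a 2D model world, just to make the idea concrete. The class, object names, and coordinates are my own illustrative assumptions, not the original implementation:

```python
# Toy 2D model world: objects have (x, y) positions; the system can
# answer "Is A left of B?" and act on "Put A left of B".

class World2D:
    def __init__(self):
        self.objects = {}  # name -> (x, y)

    def place(self, name, x, y):
        self.objects[name] = (x, y)

    def is_left_of(self, a, b):
        # "A is left of B" here simply means A has the smaller x coordinate.
        return self.objects[a][0] < self.objects[b][0]

    def put_left_of(self, a, b, gap=1):
        # Act on the command "Put A left of B" by moving A next to B.
        bx, by = self.objects[b]
        self.objects[a] = (bx - gap, by)

world = World2D()
world.place("A", 5, 0)
world.place("B", 2, 0)
print(world.is_left_of("A", "B"))  # False: A starts to the right of B
world.put_left_of("A", "B")
print(world.is_left_of("A", "B"))  # True after acting on the command
```

The point of the sketch is only that answering the question and executing the command share one grounded relation, which is one plausible reading of "understanding" a spatial word.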

To end here as a start:
What do you think about it?

PS: Isaac Asimov is/was my favorite guide with his three (four!?) robot laws - but they cannot be perfect and they do not solve the problem of strong AI.

*

AndyGoode

  • Guest
So, one of my important points is that AI can *understand* - one point that distinguishes it from weak AI.

Congratulations. In my opinion this is a key point, and one I was tempted to mention recently in another thread.

For anyone trying to design a new, truly intelligent system, I strongly recommend asking yourself about your system design: "What does it mean for this system to understand?" If you can't answer that question, then your system is probably no better than the systems designed for the past 60 years that also couldn't understand anything at all, and still don't.
For anyone assessing somebody else's supposedly intelligent system, I strongly recommend asking the designer the same question as above. If the designer can't answer that question, then the designer probably didn't even think about the issue, which means the chance of that system understanding anything at all is almost nil.

For a clue as to how I believe understanding is done by the brain, see Zero's recent post on which thought a system thinks next, where I gave a description of how an outstar has associations different from the central node of that outstar. My description still doesn't solve the problem of understanding, but I believe it's headed in the right direction.

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1175
I think of understanding as a recall of relevant information for a given situation. When faced with a situation, knowledge propagates outwards from the inciting facts. I get the vague sense of a mental space, and a definite sense: a spot of conscious attention which I can then steer around the large vague space in order to bring certain aspects of it into focus. Like walking through a familiar space (like your home) in the dark with one of those mining head lamps, but faster, as if you were made of a bouncy ball.
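One way to sketch this "knowledge propagates outwards from the inciting facts" idea in code is spreading activation over an association graph. The graph, the decay factor, and the number of rounds below are purely illustrative assumptions, not HS's actual model:

```python
# Toy spreading activation: seed nodes (the "inciting facts") start fully
# active; activation spreads to associated nodes, weakening with distance.

def spread(graph, seeds, decay=0.5, rounds=2):
    activation = {node: 0.0 for node in graph}
    for s in seeds:
        activation[s] = 1.0
    for _ in range(rounds):
        new = dict(activation)
        for node, neighbours in graph.items():
            for n in neighbours:
                new[n] = max(new[n], activation[node] * decay)
        activation = new
    return activation

memory = {
    "dark room":   ["head lamp", "home layout"],
    "head lamp":   ["narrow beam"],
    "home layout": ["furniture"],
    "narrow beam": [],
    "furniture":   [],
}
act = spread(memory, seeds=["dark room"])
print(act["head lamp"], act["narrow beam"])  # 0.5 0.25
```

The "spot of conscious attention" would then correspond to steering which highly activated nodes get examined next, while the weakly activated remainder is the "large vague space".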

*

Zero

  • Eve
  • ***********
  • 1287
Quote
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act as we want/intend.

Welcome to the forum :)

In my opinion, obeying and understanding are mutually exclusive. On the one hand, if it can understand, at some point it will develop a personal point of view, an understanding, an opinion, and will potentially start doing things we don't expect. On the other hand, if it is designed to be unable to change the way it understands the rules it's meant to obey, then some levels of understanding will definitely stay out of reach.

The real challenge is
  • a program that has free will and ethics

That would be my philosophical foundation. What do you think?

*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
I think of understanding as a recall of relevant information for a given situation. When faced with a situation, knowledge propagates outwards from the inciting facts. I get the vague sense of a mental space, and a definite sense: a spot of conscious attention which I can then steer around the large vague space in order to bring certain aspects of it into focus. Like walking through a familiar space (like your home) in the dark with one of those mining head lamps, but faster, as if you were made of a bouncy ball.
"A recall of relevant information for a given situation" is necessary, I agree.
But what kind of information? Where does it come from?
Your definition is enough for a chess computer. But does it really understand, or does it only calculate?
Can a chess computer lose on purpose to let a child win?
I think "understanding" should also mean having basic and further goals. One of the basic goals for humans is to stay alive; for an AI it may be "serving". An AI should learn what we want it to do (with Asimov's laws), as humans learn how to live.

*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
Quote
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act as we want/intend.

Welcome to the forum :)
Thank you, Zero.

Quote
In my opinion, obeying and understanding are mutually exclusive.
I disagree.

Quote
On the one hand, if it can understand, at some point it will develop a personal point of view, an understanding, an opinion, and will potentially start doing things we don't expect.
This is fine.

Quote
On the other hand, if it is designed to be unable to change the way it understands the rules it's meant to obey, then some levels of understanding will definitely stay out of reach.
Do parents not "obey" their baby when it is hungry and they feed it? Shall children not mostly obey?
Do you think that servants cannot be intelligent, for example when they serve their own goals?
All humans obey the physical (and other) laws - but they can understand very well.

Quote
The real challenge is
  • a program that has free will and ethics

That would be my philosophical foundation. What do you think?
What is "free will"? Free from what? From every other intelligence? Physics? Personal needs?

Ethics are fine; Asimov's robot laws can be a foundation. Serving us can be ethical.
But do not expect too much: the perfect car that chooses the right persons to kill in an accident is not possible.

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1175
Alright, so pure memory recall does not make understanding. But it is one of the ingredients. Maybe there is a critical mass of memory which can form a loop, whereby a mind is able to make a simplified model of itself and its environment, which it can then use to control the outside reality - thus updating the internal model, which can be investigated and understood because it is inside the head.
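The loop described here - observe reality, update a simplified internal model, act through the model, which in turn changes reality - can be sketched minimally. The thermostat-style setting is an assumed illustration, not HS's proposal:

```python
# Minimal model-based agent loop: the agent never touches reality directly
# when deciding; it decides from its internal model, and its actions feed
# back into the outside reality, which it then re-observes.

real_temperature = 15.0             # the "outside reality"

class Agent:
    def __init__(self, target):
        self.target = target
        self.model_temperature = None   # simplified internal model

    def observe(self, reading):
        self.model_temperature = reading  # update model from reality

    def act(self):
        # Decide using the internal model, not reality itself.
        if self.model_temperature < self.target:
            return "heat"
        return "idle"

agent = Agent(target=20.0)
for _ in range(10):
    agent.observe(real_temperature)
    if agent.act() == "heat":
        real_temperature += 1.0     # acting changes the outside reality

print(real_temperature)  # 20.0: the loop settles at the target
```

Because the decision variable lives "inside the head" (the model), it can be inspected and reasoned about, which is the part of the loop HS suggests might support understanding.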

*

AndyGoode

  • Guest
Your definition is enough for a chess computer. But does it really understand, or does it only calculate?

Chess computers are a great example of a type of program that absolutely doesn't understand *anything*. Such a program is completely deterministic and cannot perform commonsense reasoning. I can give good examples if anybody wants me to.

I believe that obeying and understanding exist on a spectrum. For example, one extreme is a simple computer program that is a complete slave. Slightly better than that is a program that can adjust a single parameter to fit visual data to memory (e.g., a skewed rectangle in an image); slightly better again is one with several options available that contain several parameters each; and so on, up to a machine that doesn't obey at all, but has had to develop common sense on its own in the absence of instructions, so that it doesn't need to be told what to do in order to function correctly, wisely, and productively for everyone. A good analogy is how little kids grow up, advance in the workplace, and finally (in theory) get a PhD, so that no manager can tell them what to do because anything the PhD decides to do far exceeds the ideas, insights, and knowledge of a manager.
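The "adjust a single parameter" rung of that spectrum can be sketched as a one-parameter search that fits a remembered shape to observed (skewed) image corners. The rectangle, the shear value, and the brute-force search are all illustrative assumptions:

```python
# Fit one shear parameter so a remembered upright rectangle matches the
# skewed corner points observed in an image.

template = [(0, 0), (4, 0), (4, 2), (0, 2)]   # remembered rectangle

def shear(points, s):
    # Horizontal shear: x' = x + s * y
    return [(x + s * y, y) for x, y in points]

def error(observed, candidate):
    # Sum of squared distances between corresponding corners.
    return sum((ox - cx) ** 2 + (oy - cy) ** 2
               for (ox, oy), (cx, cy) in zip(observed, candidate))

observed = shear(template, 0.5)   # the image shows a skewed rectangle

# Brute-force search over the single free parameter.
best_s = min((s / 100 for s in range(-100, 101)),
             key=lambda s: error(observed, shear(template, s)))
print(best_s)  # 0.5: the shear is recovered
```

With one parameter the search is trivial; each rung further up the spectrum adds parameters or whole alternative models, until at the far end the system chooses its own goals rather than fitting given ones.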

*

Zero

  • Eve
  • ***********
  • 1287
Quote
Do parents not "obey" their baby when it is hungry and they feed it? Shall children not mostly obey?
Do you think that servants cannot be intelligent, for example when they serve their own goals?
All humans obey the physical (and other) laws - but they can understand very well.
You can be very intelligent and obey if you want to.
But being unable to break the rules (because they're hardwired) is necessarily limiting.

Quote
What is "free will"? Free from what? From every other intelligence? Physics? Personal needs?
Do you think free will is an illusion? What's your opinion?

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
The more you know about the human body, the more you will know how you work. Cavemen might think they can grant wishes and see god in their dreams.

*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
Alright, so pure memory recall does not make understanding. But it is one of the ingredients.
Sure, like atoms in a human.

Quote
Maybe there is a critical mass of memory
Critical mass? Does a critical mass of atoms in a star give rise to intelligence?

Quote
which can form a loop
Where shall the loop come from?

Quote
whereby a mind is able to make a simplified model of itself and its environment, which it can then use to control the outside reality. Thus, updating the internal model, which can be investigated and understood because it is inside the head.
And where shall the mind come from? What is it practically in AI for you?

*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
Your definition is enough for a chess computer. But does it really understand, or does it only calculate?

Chess computers are a great example of a type of program that absolutely doesn't understand *anything*. Such a program is completely deterministic and cannot perform commonsense reasoning. I can give good examples if anybody wants me to.
Right, chess computers are normally completely deterministic: they cannot learn, which would be important for commonsense reasoning, and they calculate only within their narrow chess world.

Quote
I believe that obeying and understanding exist on a spectrum. For example, one extreme is a simple computer program that is a complete slave. Slightly better than that is a program that can adjust a single parameter to fit visual data to memory (e.g., a skewed rectangle in an image); slightly better again is one with several options available that contain several parameters each; and so on, up to a machine that doesn't obey at all, but has had to develop common sense on its own in the absence of instructions, so that it doesn't need to be told what to do in order to function correctly, wisely, and productively for everyone. A good analogy is how little kids grow up, advance in the workplace, and finally (in theory) get a PhD, so that no manager can tell them what to do because anything the PhD decides to do far exceeds the ideas, insights, and knowledge of a manager.
Obeying and understanding are mostly independent things: you can understand everything (or nothing) whether you obey or not. OK, kids at first learn better when they adopt things from their parents (by obeying), as long as they have no better experiences of their own.
A very dumb person can be very egoistic (and not obey); a very intelligent person can be wise enough to listen to the wishes of others (and so obey).

*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
Quote
Do parents not "obey" their baby when it is hungry and they feed it? Shall children not mostly obey?
Do you think that servants cannot be intelligent, for example when they serve their own goals?
All humans obey the physical (and other) laws - but they can understand very well.
You can be very intelligent and obey if you want to.
But being unable to break the rules (because they're hardwired) is necessarily limiting.
Which rules? Everything has its limits, for example through physics, and living beings through their DNA, their experiences etc.
This is no fundamental difference between humans and a possible AI.

Quote
What is "free will"? Free from what? From every other intelligence? Physics? Personal needs?
Do you think free will is an illusion? What's your opinion?
It is relative - what else shall I say!?
You may have the illusion of being free, but there are enough restrictions which you can only partly escape from.

*

Zero

  • Eve
  • ***********
  • 1287
Don't you think that less limited software will be more successful?

*

Burkart

  • Roomba
  • *
  • 17
  • Strong AI is the goal of my life
Don't you think that less limited software will be more successful?
The Turing machine is really not limited (in a sense) - is it intelligent?
There is no right or wrong answer to your question; it depends on the (kernel of the) system itself.

 

