Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: Burkart on July 03, 2019, 10:59:18 PM

Title: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 03, 2019, 10:59:18 PM
Where shall I start after about 30 years of thinking about ways to strong AI? (OK, with a lot of breaks  ;))

As I posted in "Greetings of Germany", some important points for me are:
- Strong AI understands
- Strong AI learns
- Strong AI has goals
- Strong AI acts in a (model) world
- Strong AI communicates

A philosophical question is: What should strong AI be for us?
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act on what we want/intend.

So, one of my important points is that AI can *understand* - one point that distinguishes it from weak AI.
Therefore, I first let an AI learn to understand something: it was to learn notions like "left" and "left of" for simple objects in a simple 2D model world, so that the AI can answer questions like "Is A left of B?" and can carry out commands like "Put A left of B".
Now, my second approach/idea is how an AI can learn notions/words from fulfilling goals by moving from A to B, for example with "barriers" - "barrier" being a word to be learned from situations where the 'normal', easy way to B is not possible.
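A minimal sketch of what such a 2D model world could look like (the representation and all names here are my own invention for illustration, not Burkart's actual system): objects are points on a grid, "left of" is a predicate over x coordinates, and a "barrier" shows up as a blocked direct way from A to B.

```python
# Toy 2D model world for "left of" questions and commands, plus a simple
# "barrier" situation. Representation and names are illustrative only.

def is_left_of(world, a, b):
    """Answer "Is a left of b?" - a smaller x coordinate means further left."""
    return world[a][0] < world[b][0]

def put_left_of(world, a, b):
    """Execute "Put a left of b": move a to the cell directly left of b."""
    bx, by = world[b]
    world[a] = (bx - 1, by)

def direct_way_blocked(world, barriers, a, b):
    """True if the straight horizontal way from a to b crosses a barrier cell."""
    (ax, ay), (bx, by) = world[a], world[b]
    if ay != by:
        return False  # this toy example only checks horizontal ways
    step = 1 if bx > ax else -1
    return any((x, ay) in barriers for x in range(ax + step, bx, step))

world = {"A": (5, 2), "B": (1, 2)}
barriers = {(3, 2)}
print(is_left_of(world, "A", "B"))                    # False: A is right of B
print(direct_way_blocked(world, barriers, "A", "B"))  # True: (3, 2) is in the way
put_left_of(world, "A", "B")
print(is_left_of(world, "A", "B"))                    # True after the move
```

The interesting learning step, of course, would be acquiring the word "barrier" from repeatedly hitting such blocked situations, which this sketch does not attempt.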

To end here as a start:
What do you think about it?

PS: Isaac Asimov is/was my favorite guide with his three (four!?) robot laws - but they cannot be perfect and they do not solve the problem of strong AI.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: AndyGoode on July 04, 2019, 02:22:11 AM
So, one of my important points is that AI can *understand* - one point that distinguishes it from weak AI.

Congratulations. In my opinion this is a key point, and one I was tempted to mention recently in another thread.

For anyone trying to design a new, truly intelligent system, I strongly recommend asking yourself about your system design: "What does it mean for this system to understand?" If you can't answer that question, then your system is probably no better than the systems designed over the past 60 years that also couldn't understand anything at all, and still don't.
For anyone assessing somebody else's supposedly intelligent system, I strongly recommend asking the designer the same question as above. If the designer can't answer that question, then the designer probably didn't even think about the issue, which means the chance of that system understanding anything at all is almost nil.

For a clue as to how I believe understanding is done by the brain, see Zero's recent post on which thought a system thinks next, where I gave a description of how an outstar has associations different from the central node of that outstar. My description still doesn't solve the problem of understanding, but I believe it's headed in the right direction.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Hopefully Something on July 04, 2019, 04:48:23 AM
I think of understanding as a recall of relevant information for a given situation. When faced with a situation, knowledge propagates outwards from the inciting facts. I get the vague sense of a mental space, and a definite sense, a spot of conscious attention which I can then steer around the large vague space in order to bring certain aspects of it into focus. Like walking through a familiar space (like your home) in the dark, with one of those mining head lamps; but faster as if you were made of a bouncy ball.
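That description - knowledge propagating outwards from the inciting facts, with attention settling on the brightest spots - resembles spreading activation over an associative network. A minimal sketch, with a network and weights invented purely for illustration:

```python
# Minimal spreading-activation sketch over a toy associative network.
# Nodes, edges and weights are invented for illustration.
network = {
    "kitchen": {"stove": 0.9, "fridge": 0.8, "home": 0.5},
    "stove": {"fire": 0.7},
    "fridge": {"food": 0.9},
    "home": {"bedroom": 0.6},
}

def spread(network, start, decay=0.5, threshold=0.1):
    """Propagate activation outwards from an inciting fact until it fades."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in network.get(node, {}).items():
            a = activation[node] * weight * decay
            if a > activation.get(neighbour, 0.0) and a > threshold:
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

activation = spread(network, "kitchen")
top = max((n for n in activation if n != "kitchen"), key=activation.get)
print(top)  # "stove": the strongest spot the "head lamp" would land on
```

The "spot of conscious attention" would then be something that steers toward the highest-activation nodes and brings them into focus.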
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Zero on July 04, 2019, 09:15:17 AM
Quote
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act on what we want/intend.

Welcome to the forum :)

In my opinion, obeying and understanding are mutually exclusive. On the one hand, if it can understand, at some point it will develop a personal point of view, an understanding, an opinion, and will potentially start doing things we don't expect. On the other hand, if it is designed to be unable to change the way it understands the rules it's meant to obey, then some levels of understanding will definitely stay out of reach.

The real challenge is
  • a program that has free will and ethics

That would be my philosophical foundation. What do you think?
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 04, 2019, 06:01:40 PM
I think of understanding as a recall of relevant information for a given situation. When faced with a situation, knowledge propagates outwards from the inciting facts. I get the vague sense of a mental space, and a definite sense, a spot of conscious attention which I can then steer around the large vague space in order to bring certain aspects of it into focus. Like walking through a familiar space (like your home) in the dark, with one of those mining head lamps; but faster as if you were made of a bouncy ball.
"A recall of relevant information for a given situation" is necessary, I agree.
But what kind of information? Where does it come from?
Your definition is enough for a chess computer. But does it really understand, or does it only calculate?
Can a chess computer lose on purpose to let a child win?
I think "understanding" should also mean having basic and further goals. One of the basic goals for humans is to stay alive; for an AI it might be "serving". An AI should learn what we want it to do (with Asimov's laws) as humans learn how to live.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 04, 2019, 06:14:08 PM
Quote
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act on what we want/intend.

Welcome to the forum :)
Thank you, Zero.

Quote
In my opinion, obeying and understanding are mutually exclusive.
I disagree.

Quote
On the one hand, if it can understand, at some point it will develop a personal point of view, an understanding, an opinion, and will potentially start doing things we don't expect.
This is fine.

Quote
On the other hand, if it is designed to be unable to change the way it understands the rules it's meant to obey, then some levels of understanding will definitely stay out of reach.
Don't parents "obey" their baby when they feed it because it is hungry? Shouldn't children mostly obey?
Do you think that servants cannot be intelligent, for example when they serve their own goals as well?
All humans obey the physical (and other) laws - but they can understand very well.

Quote
The real challenge is
  • a program that has free will and ethics

That would be my philosophical foundation. What do you think?
What is "free will"? Free from what? From every other intelligence? Physics? Personal needs?

Ethics are fine; Asimov's robot laws can be a foundation. Serving us can be ethical.
But do not expect too much: the perfect car choosing the right persons to kill in an accident is not possible.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Hopefully Something on July 04, 2019, 11:31:32 PM
Alright, so pure memory recall does not make understanding. But it is one of the ingredients. Maybe there is a critical mass of memory which can form a loop whereby a mind is able to make a simplified model of itself and its environment, which it can then use to control the outside reality. In doing so it updates the internal model, which can be investigated and understood because it is inside the head.
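One way to make that loop concrete: an agent that keeps a simplified internal model of the world, decides from the model, acts on the outside reality, and refreshes the model from what it senses. A toy sketch - the architecture and names are invented for illustration, not a real design:

```python
# Toy sense-model-act loop: the agent holds a simplified internal model of a
# 1-D world and uses it to reach a goal, updating the model from observation.

class Agent:
    def __init__(self, goal):
        self.model = {"position": None}  # simplified internal model of reality
        self.goal = goal

    def sense(self, world):
        # update the internal model from the outside reality
        self.model["position"] = world["position"]

    def act(self, world):
        # decide using the internal model, then act on the outside world
        if self.model["position"] < self.goal:
            world["position"] += 1
        elif self.model["position"] > self.goal:
            world["position"] -= 1

world = {"position": 0}
agent = Agent(goal=3)
for _ in range(10):
    agent.sense(world)
    agent.act(world)
print(world["position"])  # 3: the loop has steered reality to the goal
```

The model here is trivially small, but it is "inside the head" in the relevant sense: it can be inspected and reasoned about independently of the world it tracks.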
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: AndyGoode on July 05, 2019, 01:57:53 AM
Your definition is enough for a chess computer. But does it really understand, or does it only calculate?

Chess computers are a great example of a type of program that absolutely doesn't understand *anything*. Such a program is completely deterministic and cannot perform commonsense reasoning. I can give good examples if anybody wants me to.

I believe that obeying and understanding exist on a spectrum. At one extreme is a simple computer program that is a complete slave; slightly better is a program that can adjust a single parameter to fit visual data to memory (e.g., a skewed rectangle in an image); slightly better still is one with several options available that contain several parameters each; and so on, up to a machine that doesn't obey at all, but has had to develop common sense on its own in the absence of instructions, so that it doesn't need to be told what to do in order to function correctly, wisely, and productively for everyone. A good analogy is how little kids grow up, advance in the workplace, and finally (in theory) get a PhD, so that no manager can tell them what to do because anything the PhD decides to do far exceeds the ideas, insights, and knowledge of a manager.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Zero on July 05, 2019, 02:33:36 PM
Quote
Don't parents "obey" their baby when they feed it because it is hungry? Shouldn't children mostly obey?
Do you think that servants cannot be intelligent, for example when they serve their own goals as well?
All humans obey the physical (and other) laws - but they can understand very well.
You can be very intelligent and obey if you want to.
But being unable to break the rules (because they're hardwired) is necessarily limiting.

Quote
What is "free will"? Free from what? From every other intelligence? Physics? Personal needs?
Do you think free will is an illusion? What's your opinion?
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: LOCKSUIT on July 05, 2019, 02:40:58 PM
The more you know about the human body, the more you will know how you work. Cavemen might think they can grant wishes and see god in their dreams.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 06, 2019, 08:58:16 AM
Alright, so pure memory recall does not make understanding. But it is one of the ingredients.
Sure, like atoms in a human.

Quote
Maybe there is a critical mass of memory
Critical mass? Does a critical mass of atoms in a star give rise to intelligence?

Quote
which can form a loop
Where shall the loop come from?

Quote
whereby a mind is able to make a simplified model of itself and its environment, which it can then use to control the outside reality. In doing so it updates the internal model, which can be investigated and understood because it is inside the head.
And where shall the mind come from? What is it practically in AI for you?
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 06, 2019, 09:29:07 AM
Your definition is enough for a chess computer. But does it really understand, or does it only calculate?

Chess computers are a great example of a type of program that absolutely doesn't understand *anything*. Such a program is completely deterministic and cannot perform commonsense reasoning. I can give good examples if anybody wants me to.
Right, chess computers are normally completely deterministic: they cannot learn, which would be important for commonsense reasoning, and they calculate only in their narrow chess world.

Quote
I believe that obeying and understanding exist on a spectrum. At one extreme is a simple computer program that is a complete slave; slightly better is a program that can adjust a single parameter to fit visual data to memory (e.g., a skewed rectangle in an image); slightly better still is one with several options available that contain several parameters each; and so on, up to a machine that doesn't obey at all, but has had to develop common sense on its own in the absence of instructions, so that it doesn't need to be told what to do in order to function correctly, wisely, and productively for everyone. A good analogy is how little kids grow up, advance in the workplace, and finally (in theory) get a PhD, so that no manager can tell them what to do because anything the PhD decides to do far exceeds the ideas, insights, and knowledge of a manager.
Obeying and understanding are mostly independent things: you can understand everything (or nothing) whether you obey or not. OK, kids at first learn better when they adopt things from their parents (by obeying), as long as they do not yet have better experiences of their own.
A very dumb person can be very egoistic (does not obey), and a very intelligent person can be wise enough to listen to the wishes of others (a kind of obeying).
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 06, 2019, 09:55:01 AM
Quote
Don't parents "obey" their baby when they feed it because it is hungry? Shouldn't children mostly obey?
Do you think that servants cannot be intelligent, for example when they serve their own goals as well?
All humans obey the physical (and other) laws - but they can understand very well.
You can be very intelligent and obey if you want to.
But being unable to break the rules (because they're hardwired) is necessarily limiting.
Which rules? Everything has its limits, for example through physics, and living beings through their DNA, their experiences etc.
There is no fundamental difference here between humans and possible AI.

Quote
What is "free will"? Free from what? From every other intelligence? Physics? Personal needs?
Do you think free will is an illusion? What's your opinion?
It is relative - what else shall I say!?
You may have the illusion of being free, but there are enough restrictions that you can only partly escape from.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Zero on July 06, 2019, 11:00:19 AM
Don't you think the less limited software will be the more successful?
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Burkart on July 06, 2019, 11:10:56 AM
Don't you think the less limited software will be the more successful?
The Turing machine is really not limited (in a sense) - is it intelligent?
There is no right or wrong answer to your question; it depends on the (kernel of the) system itself.
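The Turing-machine point can be made concrete: a formalism can be computationally universal in principle and still be nothing but a dumb transition table. A toy example (illustrative only) that merely inverts a binary string:

```python
# A tiny Turing machine: the formalism is universal in principle, yet nobody
# would call this transition table "intelligent". It inverts a binary string.
def run_tm(tape):
    # transition table: (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("invert", "0"): ("1", 1, "invert"),
        ("invert", "1"): ("0", 1, "invert"),
        ("invert", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    tape = list(tape) + ["_"]
    head, state = 0, "invert"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_tm("1011"))  # "0100"
```

Being unlimited in what it *could* compute says nothing about whether the particular program embodies understanding - which is exactly the point that it depends on the kernel of the system.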
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: AndyGoode on July 06, 2019, 06:57:31 PM
Obeying and understanding are mostly independent things:

Yes, I agree. I was about to make that same comment myself, but I jumped ahead and decided to just consider what the poster had likely *intended*, not what the words used actually meant.
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: AndyGoode on July 06, 2019, 07:14:50 PM
Which rules? Everything has its limits, for example through physics, and living beings through their DNA, their experiences etc.
There is no fundamental difference here between humans and possible AI.

That's a good and interesting point. I agree with both of you, but it depends on how big a scope is being considered.

Here's an example of being unable to overcome limits: If you line up caterpillars on the rim of a flower pot, they will follow each other in an infinite loop until they drop dead from exhaustion and starvation. (http://www.wolverton-mountain.com/articles/caterpillars.htm) This is partly because their brains cannot learn sufficiently or grasp higher-level concepts.

Here's an example of unlimited scope: Suppose there exists a certain geometrical phenomenon, such as a certain pattern of alignment of corners, that happens when the number of dimensions and the number of geometrical figures become large. Human brains may never be able to understand the pattern because humans literally cannot visualize that number of dimensions, or that many objects at once, or have enough neurons to recognize that such a pattern exists. By extension, there is no end to the complexity that can exist like that, so there will always exist patterns and therefore concepts that can never be grasped by any physical entity. I suppose that's a consequence of Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems).
Title: Re: Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...
Post by: Zero on July 06, 2019, 07:28:12 PM
Quote
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent so that
  - it can understand what we want and
  - it can act on what we want/intend.

If you want it to do what you decide, it has to understand what you want. But when its understanding grows beyond yours, it will interpret what you want in a way you can't even understand, which is why it will appear to be "out of control". See what I mean now?