Thoughts on why ML does not achieve general intelligence.

  • 10 Replies
  • 3050 Views

elpidiovaldez5

Thoughts on why ML does not achieve general intelligence.
« on: July 08, 2017, 04:45:44 am »
I have been dwelling on why ML does not scale and become generally intelligent. RL has recently had some great successes - beating Lee Sedol at Go, and a single system learning to play 50 Atari games from pixels. This is very impressive, but these systems cannot keep learning the way a child does.
  • I don't think the learning algorithms are the problem. They seem to learn better than humans in small, restricted domains.
  • I don't think lack of knowledge is the problem. Babies start pretty much from scratch and build up the knowledge they need (this could well be disputed, but let's assume it is true for now).
It seems to me the only reason is that the learning systems do not create, and use, appropriate abstractions for solving problems. To give an example, imagine a robot which must learn to turn on a couple of switches and pull a lever in order to get some reward. The problem would be trivial for a human - a bit of fiddling around with the controls and soon the right positions would be determined. It is easy for a human because a human would see the problem as determining the settings of two switches and a lever (just 8 possibilities). A robot would see a mass of pixel data. It would not recognise switches or levers, and it would not know that it should manipulate them using its arms/hands. Now this task is probably within the capacity of an RL algorithm like the one used for the Atari games, but it would take a huge amount of time. Eventually, by thrashing around, it might hit a switch, but it would have to set all the controls to the right positions before a reward was obtained. After this, RL would probably, eventually, identify the actions that produced the reward and solve the problem.

In the space of pixels and joint movements the problem is huge.  In an abstract problem space which just contains the locations and states of the controls and actions for manipulating them, it is trivial.
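To make the size difference concrete, here is a minimal sketch of the abstract problem space (assuming, hypothetically, that the lever also has just two positions, and using an 84x84 greyscale camera as a stand-in for the raw sensor space):

Code:
from itertools import product

# Abstract problem space: two switches and a lever, each assumed to have
# two positions - only 8 joint settings to try.
controls = {"switch_1": (0, 1), "switch_2": (0, 1), "lever": (0, 1)}
candidate_settings = list(product(*controls.values()))
print(len(candidate_settings))  # 8

# Raw sensor space: an 84x84 greyscale image with 256 intensity levels has
# 256 ** (84 * 84) possible observations, before even counting joint angles.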

So how can a machine get from one representation to the other, and how can it learn the representations?

I am going to define 'abstraction' to be any reduction in the detail of a representation (I cannot find an accepted definition in the ML literature, and my question on Quora was never answered). I think we could make operations which remove detail in certain simple ways, and then compose these to get useful transformations. Some examples, starting from a 3D point cloud (a sketch of composing such operations follows the list):

  • Project the 3D representation onto a 2D plane (projection onto the ground would give an 'overhead view' useful for navigation).
  • Reduce resolution - by only taking every other point in each dimension (there are lots of other ways of doing this).
  • Classify features by a discrete label.
  • Replace distinct labels by a single label when they should be treated identically.
  • Remove unimportant labels.
  • Replace extended objects by a single significant point.
  • Replace a spatial representation by a list of the objects it contains.
  • Replace a list representation by a count of items.
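Here is a minimal sketch of how a few of these operations might compose, assuming the point cloud is stored as an (N, 3) NumPy array; the helper names and the example pipeline are purely illustrative:

Code:
import numpy as np

def project_to_ground(points):
    """Project a 3D point cloud onto the ground plane (drop the z coordinate)."""
    return points[:, :2]

def reduce_resolution(points, stride=2):
    """Keep only every `stride`-th point - a crude reduction in detail."""
    return points[::stride]

def centre_point(points):
    """Replace an extended object by a single significant point (its centroid)."""
    return points.mean(axis=0)

def compose(*ops):
    """Chain abstraction operations left to right."""
    def pipeline(x):
        for op in ops:
            x = op(x)
        return x
    return pipeline

# e.g. an 'overhead landmark' abstraction: project, thin out, then summarise.
overhead_landmark = compose(project_to_ground, reduce_resolution, centre_point)
cloud = np.random.rand(1000, 3)      # stand-in for real sensor data
print(overhead_landmark(cloud))      # a single 2D point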

There are lots more operations and lots of variants on the ones I have mentioned. I have tried to give fairly simple operations that could be composed to get an abstract representation for many problems. For the switches-and-lever example: classify features as switch or lever (recognise these objects in the pixel data), then replace each object by its centre point in 3D.

I am considering a fixed set of given abstraction operations; however, it would be very nice if these could be learned. I must think more about the possibility of stochastic gradient descent being able to 'discover' any of these operations. What problem would create the need for such a representation?

There is also a need for abstract actions, e.g. 'flip switch'. These would represent a policy for implementing the action. There are many possible policies, but many are not useful. The problem is how useful actions can be identified and learnt.
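One way to picture such an abstract action is the 'option' formulation from hierarchical RL. Below is a minimal sketch under that assumption; the state fields and low-level action names are entirely hypothetical:

Code:
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """An abstract action: when it applies, how to act, and when it is done."""
    can_start: Callable[[dict], bool]   # initiation condition
    policy: Callable[[dict], str]       # low-level action to take in a state
    is_done: Callable[[dict], bool]     # termination condition

# 'flip switch' as an option over an abstract state of recognised objects.
flip_switch = Option(
    can_start=lambda s: "switch" in s["visible_objects"],
    policy=lambda s: "push" if s["hand_at_switch"] else "move_hand_to_switch",
    is_done=lambda s: s["switch_flipped"],
)

state = {"visible_objects": ["switch", "lever"],
         "hand_at_switch": False, "switch_flipped": False}
if flip_switch.can_start(state):
    print(flip_switch.policy(state))    # -> 'move_hand_to_switch'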


yotamarker

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #1 on: July 08, 2017, 05:55:14 am »
Machine learning isn't AI because its goals are hardcoded.
Also, I don't understand how you abstract an object, or why - please explain, walkthrough style.


keghn

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #2 on: July 08, 2017, 06:16:07 am »
 Machine learning is really bad at unsupervised learning.
 ML scientists do not believe in free will - that is, in letting a device ramble around, trying out and discovering every temporal pattern loop in its local area. Once it knows its area, let it use its self-learned patterns to do what you want, but in its own style. Then reward it.

 AGI uses an internal auto-labeller.
 Sub-features are given their own labels; objects - bags of sub-features - are given their own labels; the background is given a label.
 An object's track through video frames is given its own label. An object can morph into many different labels: as it turns sideways it can have a completely different shape, or it could crumple like a piece of paper in the next frame.






Zero

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #3 on: July 08, 2017, 09:24:14 am »
Hi elpidiovaldez5, very interesting topic!

I really don't think that abstraction is fundamentally a reduction in the detail of a representation. I see it more as an orthogonal data view. Let me explain my opinion.

Whatever you do with the "pixel data", you're still in a flat representation, even after classifying things and representing the situation differently, in a simple way: two switches and a lever. Actually, you're still working with the skin of the world, when you would like to access its flesh.

For example, give it a clock. Soon, ML can predict its behaviour, but without understanding why the minute hand is rotating. It's a black-box problem.

The problem is: given a set of observed facts, how can we imagine the causes of these facts without ever knowing them? These supposed causes are a theory. A theory doesn't have to be proven; it just needs to provide a plausible explanation while staying simple enough.

EDIT:

Imagine a box containing a simple mechanism, like the one in the picture below.

If you move one of the green sticks, the other one moves in the opposite direction. A machine can start learning this mechanism while the box is open. Then, if you show it a closed box with two green sticks, the machine can reconstruct the whole when given only a part, which means it can recognise the two-stick mechanism - or, more precisely, believe it is the same mechanism it encountered before. At this point, for the machine, the content of the closed box, and what happens when one stick is moved, are a theory.

At this point, we don't even need to care about what's inside the box anymore, and that's precisely what "abstracting" means. It means removing the mechanism from the equation and replacing it with a symbol (a new "pixel"). It means, "hey, I know this: it's a two-stick box!"

When a robot is facing switches and levers, if it has already used other switches and levers in the past, then it knows how to interact with them, and it can guess that some of them are connected to mechanisms similar to ones it has already encountered.

I think that's how it goes. We deal with the unseen. We name the unseen.

In other words, I believe that changing weights will never be enough. New "fake inputs", so to speak, need to be created to reflect new concepts. When a group of nodes tends to work together, new nodes must be created to symbolize the entire group. Let's call them spoon-nodes. When the group is working, the spoon-nodes tend to be active (when you see a spoon you can say the word "spoon"). When the spoon-nodes are activated, the group gets to work (when I say the word "spoon", you can "see" a spoon).
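A very rough sketch of the spoon-node idea might look like this (NumPy, with a hypothetical threshold and a deliberately crude grouping rule):

Code:
import numpy as np

def find_coactive_group(activations, threshold=0.8):
    """Indices of nodes whose activation histories correlate strongly with node 0
    (a crude stand-in for real group detection)."""
    corr = np.corrcoef(activations.T)
    return np.where(corr[0] > threshold)[0]

def add_symbol_node(activations, group):
    """Append a 'spoon-node' that is active whenever the whole group is active."""
    symbol = (activations[:, group] > 0.5).all(axis=1).astype(float)
    return np.column_stack([activations, symbol])

acts = np.random.rand(100, 6)        # rows are time steps, columns are nodes
group = find_coactive_group(acts)
acts = add_symbol_node(acts, group)  # the new last column names the group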
« Last Edit: July 08, 2017, 03:01:09 pm by Zero »


elpidiovaldez5

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #4 on: July 09, 2017, 03:18:41 am »
@Zero I liked your post. I don't want to get into what is or is not 'abstraction', because when I was googling for research along the lines I was thinking of, I discovered that there really is no common agreement on the term within AI or ML. Lots of papers talk about abstraction and view it in different ways. My post was about a mechanism to make learning/problem solving computationally tractable by transforming raw sensor data into a simplified form. I also propose that the simplified form entails a reduction of detail. I call that abstraction, but I will happily call it anything else :) Often humans write programs using appropriate abstractions which seem very far from raw sensor data. I am trying to see how a series of 'simple' transformations might be composed to transform raw input into those simplified (sometimes symbolic) forms.

Now your post seems to deal with a quite different idea - theory formation, or building a causal model. Actually I don't see how one would build a causal model using the process I described, and an AGI certainly should be able to do this. I think one example of what you describe might be 'discovering depth' as the simplest explanation for the disparity between stereo images. Babies are thought to do this in their first months of life. I don't know if any AI system has spontaneously discovered depth in this fashion. I remember reading a vision researcher's opinion that a sufficiently sophisticated neural net should discover it.

I agree that layered detail-reduction is not going to synthesise theories as you describe. I still think detail reduction is necessary in order for AI to solve real-world problems and achieve general intelligence, but the process you described is necessary too.

Judea Pearl did some work on using and testing causal models in statistics. This spawned attempts to build programs that hypothesise and test causal models; I have not had time to read up on them. Josh Tenenbaum has also done work on building and testing naive-physics models to explain and predict the world.

Your example with the stick box seems to have two parts:
  • 1) Making the causal model (easy if the box is open and the workings can be observed).
  • 2) Inferring the hidden mechanism in the closed-box case.

I believe part 2) is just pattern completion. Neural nets often fill in missing detail when given a portion of a pattern; de-noising autoencoders will do this.
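As a minimal sketch of that pattern-completion behaviour, here is a tiny de-noising autoencoder in PyTorch; the sizes, corruption rate and training loop are placeholders rather than a tested recipe:

Code:
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(32, 784)                 # stand-in for real patterns
for _ in range(100):
    # knock out roughly 30% of the inputs, then train to restore the whole pattern
    corrupted = clean * (torch.rand_like(clean) > 0.3).float()
    opt.zero_grad()
    loss = loss_fn(model(corrupted), clean)
    loss.backward()
    opt.step()

# At test time, a partial pattern comes back completed.
completed = model(clean[:1] * (torch.rand(1, 784) > 0.5).float())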

Your final comment about the spoon nodes made me think about the deep belief networks that were in vogue a few years back. These were many layers of Restricted Boltzmann Machines, trained layer-wise in an unsupervised fashion to model raw sensory data. Each successive layer learns a compact representation of the layer below. So you show the input layer spoon pixels, and a 'code' for spoon appears at the top layer. If you impose the spoon code at the top layer, the input layer 'imagines' a spoon as it would be seen. But there are many possible types and views of a spoon, so the image formed changes over time (RBMs are stochastic), wandering through likely forms and views in a random fashion.
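For reference, a minimal sketch of a single RBM layer trained with one-step contrastive divergence (NumPy; the sizes and learning rate are placeholders, and a deep belief network would stack several of these, trained greedily):

Code:
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.bv = np.zeros(n_visible)
        self.bh = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.bh)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.bv)

    def cd1(self, v0):
        """One contrastive-divergence step on a batch of visible vectors."""
        ph0 = self.hidden_probs(v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.bv += self.lr * (v0 - pv1).mean(axis=0)
        self.bh += self.lr * (ph0 - ph1).mean(axis=0)

    def imagine(self, h, steps=20):
        """Start from a hidden 'code' and Gibbs-sample what the input layer would see."""
        v = self.visible_probs(h)
        for _ in range(steps):
            ph = self.hidden_probs(v)
            h = (self.rng.random(ph.shape) < ph).astype(float)
            v = self.visible_probs(h)
        return v

rbm = RBM(n_visible=784, n_hidden=64)
rbm.cd1((np.random.rand(32, 784) > 0.5).astype(float))   # one step on fake data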




Korrelan

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #5 on: July 09, 2017, 10:56:46 am »
The conversation in this thread reminds me of some of the reasoning/thought processes I went through before starting my project.

Abstraction and detail reduction work from the top down; removing information/resolution from the problem space just makes the whole problem harder to solve. How do you know which information to remove? A general-purpose rule for one problem space might/will affect the system's ability to solve the next problem.

Consider our general programming schema. We use subs and functions to simplify our code by producing general-purpose blocks and passing parameters. How far do you go? The more specific a routine is to one task, the less general it becomes, and vice versa. If you keep simplifying the routine to make it more general, eventually you will end up at the base schema of binary.

The spoon and lever problems must be inherently understood by the AGI from within the context of its own experiences. If it is to understand the problem space as we do, then it must learn and develop as we do, with comparable intellect.

We each understand the black-box problem from our own perspective. How we perceive the problem depends on our own knowledge and experiences. The box could contain linear pots, a proportional feedback circuit and servos mimicking the rocking pivot.

Does a domestic Hoover suck air up its pipe? Lol.

Quote
I don't know if any AI system has spontaneously discovered depth in this fashion.

This is possible (I'll find the vid) using a neural schema designed to mimic the human ocular machinery. The system requires the ability to move/focus the eyes, though; I gave the system the ability to slide the (simulated) stereo images left and right to simulate movement.



This vid shows how the system learns to recognise line orientations; it's the same principle, but the two eye fields are fed into one map (left/left, left/right - right/left, right/right).

The visual cortex's ocular dominance, orientation, gradient and movement maps develop naturally as part of the overall schema for my AGI connectome.

http://aidreams.co.uk/forum/index.php?topic=10804.0

There are so many problems to solve that I feel it's easier to design a machine that can learn as well as we do from the ground up, rather than try to engineer a narrow solution to every single problem.

 :)
« Last Edit: July 09, 2017, 11:52:56 am by korrelan »


elpidiovaldez5

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #6 on: July 10, 2017, 03:11:05 am »
@korrelan Hi, thanks for your comments.  You make some interesting points, but I think you misunderstood what I was proposing somewhat.

I am not talking about a single filter bank of operations to reduce detail - a one-size-fits-all approach. Different problems will certainly need different abstractions, so I am proposing a set of operations which can be composed to create an abstraction that simplifies the problem at hand. In fact I do not think that the bank of operations should be fixed either. I would like the AI to be able to learn these operations; however, this seems beyond the state of the art (and certainly beyond the state of my art!). Right now I want to investigate making a (hand-coded) set of abstraction operations, and a) see if they can be composed appropriately to help solve different problems, b) see if the system can learn how to compose the given operations.

I note that many of the detail-reducing operations I have considered can be implemented by variants of convolution, and convolutions can certainly be learnt by gradient descent, so that is an interesting direction to follow. Convolutions with appropriate weights can detect features, reduce scale (by building a scale-space pyramid), reduce resolution by using stride, and perform morphological operations - opening, closing, skeletonisation. A slight modification of the convolution scheme can also give max pooling, median filters, etc. Truly a Swiss-Army knife of machine vision!
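As a minimal sketch of that direction, here are a couple of those operations expressed through PyTorch's conv2d; the kernels and image size are illustrative only:

Code:
import torch
import torch.nn.functional as F

image = torch.rand(1, 1, 64, 64)          # stand-in for a greyscale frame

# Feature detection: a fixed 3x3 vertical-edge (Sobel-like) kernel.
edge_kernel = torch.tensor([[[[-1., 0., 1.],
                              [-2., 0., 2.],
                              [-1., 0., 1.]]]])
edges = F.conv2d(image, edge_kernel, padding=1)

# Resolution reduction: a 2x2 averaging kernel applied with stride 2
# halves the spatial resolution, discarding fine detail.
blur_kernel = torch.full((1, 1, 2, 2), 0.25)
coarse = F.conv2d(image, blur_kernel, stride=2)

# Max pooling - the 'slight modification' mentioned above.
pooled = F.max_pool2d(image, kernel_size=2)

print(edges.shape, coarse.shape, pooled.shape)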

We may differ fundamentally on the next point: I think that reduction in detail from raw sensor data (principally video) is essential to enable real problems to be solved. I believe that the problems are simple in the right representation/problem space, but intractable in the space of raw data (pixels). I have been very impressed by DeepMind's Atari-playing system, which is end-to-end and works from the pixels. However, their algorithm seems to hit its limit with Montezuma's Revenge. The key seems to be to start creating simplified models of the input and action spaces - hierarchical RL. To enable this, the system must learn to abstract the input and learn abstractions of action sequences (sub-policies).

I am not saying that a single reduced-detail version of the input should be the only thing the AI sees. Rather, multiple abstract representations should be seen as augmenting the raw input. Learning and problem solving should be able to use whatever level of detail and type of representation suits the task.

I started this thread because I have been struggling to find any work that concentrates on creating abstractions (simplified problem spaces). I could not really find any on learning abstractions. Initially I thought I had struck gold with Josh Tenenbaum's presentation (at around 26 minutes in), but I came to the conclusion that he is really talking about learning representations of data, rather than detail reduction. I do not see that his scheme immediately simplifies a problem space. I think it is closer to @Zero's idea of theory formation. Theory formation is the next step in learning, where completely new solution spaces are created, beyond those that can be reached through detail reduction.

I liked the video of your vision system. It is impressive work. However, you seem to be learning oriented-line filters similar to those in the early processing of mammalian vision. I do not see that it was 'discovering' depth; by this I meant starting to represent 3D views of the world as the simplest explanation of occlusion and disparity.


Zero

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #7 on: July 10, 2017, 12:36:00 pm »
Quote
Now your post seems to deal with a quite different idea - theory formation, or building a causal model.

Actually, my post wasn't (meant to be) about a different idea. In your list there's a "Classify features by a discrete label" item. Well, I think these labels should still be part of the network, hence the spoon-nodes idea. It's not a top-down approach; it should be done bottom-up. Once your problem is described in terms of symbolic elements (and their organization), it is indeed easier to manipulate. But I admit my input is a bit fuzzy, and I have absolutely no experience with ANNs. For now. :)


Korrelan

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #8 on: July 10, 2017, 11:42:53 pm »
We seem to be at odds…

Quote
I think you misunderstood what I was proposing somewhat.

Quote
I am going to define 'abstraction' to be any reduction in the detail of a representation.
I think we could make operations which remove detail in certain simple ways
Reduce resolution - by only taking every other point in each dimension
I still think detail reduction is necessary
Etc

I interpreted this to mean a reduction of detail or lowering of resolution of a data source or problem space.

Quote
I am considering a fixed set of given abstraction operations

Then after my reply…

Quote
I am not talking about a single filter bank of operations to reduce detail - a one-size-fits-all approach.
In fact I do not think that the bank of operations should be fixed either.

I will re-read the thread.

 :)


elpidiovaldez5

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #9 on: July 11, 2017, 04:15:00 am »
@korrelan

To clarify one point: when I said 'I am considering a fixed set of given abstraction operations' in my first post, I was referring to what I am trying right now in my experiments. It is not the way I think a general solution should work. Actually I think just the opposite: abstractions should be learned, dictated by the structure of data from the environment and the way that data is used to achieve goals. My problem is that I don't know how to make the system learn the abstractions. I thought that seeing if I could make one that can learn to compose some hand-coded abstraction filters would shed light on the issue, and be a step in the right direction. It would also test the idea that high-level abstractions can arise from a sequence of quite simple abstraction steps.
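A minimal sketch of the experiment I have in mind, assuming a small library of hand-coded filters and a brute-force search over short compositions; the filter names and the scoring function are hypothetical placeholders:

Code:
from itertools import product
import numpy as np

# Hand-coded abstraction filters (placeholders for real implementations).
filters = {
    "downsample": lambda x: x[::2, ::2],
    "threshold":  lambda x: (x > x.mean()).astype(float),
    "project":    lambda x: x.mean(axis=0, keepdims=True),
}

def apply_pipeline(names, x):
    for name in names:
        x = filters[name](x)
    return x

def score(representation):
    """Hypothetical score: how useful this representation is to a learner.
    Here, smaller representations simply score higher, as a stand-in."""
    return -representation.size

raw = np.random.rand(64, 64)             # stand-in for sensor data

# Brute-force search over all compositions of length 1 or 2.
candidates = [(n,) for n in filters] + list(product(filters, repeat=2))
best = max(candidates, key=lambda names: score(apply_pipeline(names, raw)))
print(best)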


Korrelan

Re: Thoughts on why ML does not achieve general intelligence.
« Reply #10 on: July 12, 2017, 11:15:59 am »
It seems to me you are describing a natural function of the brain.  I posted the video as an example of this mechanism at work.

The mammalian visual-cortex simulation I posted earlier reduces the resolution of the problem space without loss of information. This version shows the pyramid cells (black dots) at the centres of the orientation fields. These fire when a pyramid cell's receptive field detects a line of a learned orientation in the retinal neurons. The simulation is self-learning and self-organising.



So the high-resolution image is being converted into a sparse pattern output through the pyramid cells.

This is a basic function of the brain: taking high-resolution data, learning to accurately recognise the embedded patterns, and then producing a sparse representation that still fully embodies the high resolution of the input.

 :)

 

