Was Dreyfus right about AI? Do connectionist systems overcome his objections?

  • 8 Replies
  • 438 Views
*

elpidiovaldez5

  • Roomba
  • 9
There has been a war for many years between supporters of symbolic AI and connectionist AI.  Philosophers joined the fray.  Hubert Dreyfus argued that rule-based systems and symbolic AI could never capture human commonsense reasoning and intuition.  Dreyfus was mostly ridiculed by the AI community for misunderstanding their systems.  Nonetheless, machine intelligence research has gradually moved away from symbolic systems towards a connectionist approach, typified by neural networks.  Both schemes take input from the world and try to represent it and make intelligent decisions and predictions, so what is the underlying difference between the approaches, and can the connectionist approach do things which are impossible (or at least vastly more difficult) for the symbolic approach? If so, why? Can connectionist systems overcome Dreyfus' objections?

Firstly, rule-based systems are Turing complete: they can perform any calculation, given sufficient resources.  Simple finite layered neural nets are not Turing complete, but they can easily be made so by giving them memory or recurrent connections.  So Turing has nothing more to say here!
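The point about memory is easy to see in code. Below is a minimal sketch (toy, untrained, with random placeholder weights) of a recurrent update: unlike a purely feed-forward layered net, the hidden state `h` carries information across time steps, which is the ingredient that recurrent architectures add.

```python
import numpy as np

# Minimal recurrent cell. The weights are random placeholders, not a
# trained model; the point is only the structure of the computation.
rng = np.random.default_rng(0)
W_x = rng.standard_normal((4, 3))   # input -> hidden
W_h = rng.standard_normal((4, 4))   # hidden -> hidden (the recurrence)

def step(h, x):
    """One recurrent update: the new state depends on the input AND the old state."""
    return np.tanh(W_x @ x + W_h @ h)

h = np.zeros(4)                     # the memory a feed-forward net lacks
for x in [np.ones(3), np.zeros(3), np.ones(3)]:  # a toy input sequence
    h = step(h, x)
print(h.shape)  # (4,)
```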

Connectionist systems are typically differentiable, or contain differentiable parts.  This certainly appears to be a major source of power.  It seems to me that most of the recent advances in AI result from exploiting gradient-directed search.  Principally, gradient following is used by learning algorithms to allow systems to self-improve.  So are better learning algorithms the key difference, or is there more?
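As a bare-bones illustration of gradient-directed search, here is gradient descent on a toy quadratic loss (the loss and learning rate are invented for the example, not drawn from any system in the thread):

```python
# Gradient-directed search on f(w) = (w - 3)^2.
# The gradient f'(w) = 2*(w - 3) points uphill, so repeatedly stepping
# against it moves w toward the minimum at w = 3.
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3.0)
    w -= lr * grad
print(round(w, 4))  # 3.0
```

Differentiability is what makes `grad` available at every point; a discrete rule base offers no such local direction of improvement.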

Given that a connectionist system, let's say a neural net, has been trained, it takes inputs and produces outputs.  Could exactly the same job be done symbolically?  I think the answer is still 'no'.  A neural net takes continuous inputs and outputs; that is to say, the input and output spaces are continuous and infinite.  A symbolic approach is already discretised and can only represent a finite (although combinatorially huge) range of inputs and outputs.  (Now, in practice, rule-based systems always allow for real arithmetic operations and, being Turing complete, could just implement a neural net, but that does not change the argument about the respective power of the approaches.)
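The contrast can be sketched concretely. In the toy below (rule names and thresholds invented for illustration), the symbolic controller must first bin its continuous input, so readings either side of a threshold get identical treatment, while the "connectionist" function responds smoothly to every real-valued input:

```python
import math

# A symbolic rule base maps a finite set of discretised inputs to
# outputs; a trained net is a continuous function on the whole input space.
rules = {"cold": "heat_on", "ok": "idle", "hot": "cool_on"}  # finite domain

def symbolic(temp_celsius):
    # Discretisation step: the continuous reading must first be binned.
    if temp_celsius < 18:
        bucket = "cold"
    elif temp_celsius < 24:
        bucket = "ok"
    else:
        bucket = "hot"
    return rules[bucket]

def connectionist(temp_celsius):
    # A continuous response: nearby inputs give nearby outputs.
    return math.tanh((temp_celsius - 21.0) / 3.0)

print(symbolic(17.9), symbolic(18.1))            # a hard jump across the threshold
print(connectionist(17.9), connectionist(18.1))  # a small, smooth change
```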

So connectionist systems have better learning algorithms and can represent continuous spaces.  Is that enough to represent commonsense reasoning?

Personally, I think there is still something missing.  Here are my misgivings.  A learning algorithm (be it symbolic or connectionist) takes a lot of data from the world and produces a decision-making system, which could be represented by rules, or by decision boundaries in a high-dimensional continuous space.  It seems to me that such a system is just a slice through the data which works only given specific goals and states of the world.  The original data is so much richer, and can be sliced in other ways.

I'll try to give an example: imagine a robot seeing video as it explores some rooms in a house.  It can learn to recognise landmarks, and learn navigation to take it to different places in the house.  The result might be a neural net that takes in the pixels of each video frame and generates motor actions to move to a desired place.  However, the robot could use the same data to estimate the floor area of the house.  To do this it would also observe frames of data, estimating distances to walls and corners, and generating movement actions to explore the house.  The result is a completely different neural net.  So the original data is much richer in meaning than the trained nets (it is also much larger).

It makes me wonder if we should consider intelligence as less about learning, and more about storing sensory experience of the world in an effective way.  That is to say, if we could store all our sensory data and query it efficiently, we would not need to learn much.  Put another way, intelligence should take sensory data and compress and index it as little as necessary to achieve its goals, given its limited computational resources.  Learning is all about performing that compression and indexing.  The stored data represents an infinite number of possible algorithms depending on how it is used.  Disposing of the original data prematurely is, I believe, what makes human commonsense so intractable for current systems.

Success will come from finding the right compromises in compression, and the right ways to index.  Both compression and indexing will be LEARNT.
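The "store and query" idea above can be sketched as a simple episodic memory. In this toy (the data and queries are invented stand-ins for sensory frames), the same raw store answers two different tasks via nearest-neighbour queries, rather than one task being baked into one trained model:

```python
import numpy as np

# Sketch of "intelligence as stored, indexed experience": keep the raw
# observations and answer different questions by querying them.
rng = np.random.default_rng(1)
observations = rng.standard_normal((500, 8))   # raw "sensory" vectors

def nearest(query, k=3):
    """Brute-force k-nearest-neighbour query over the stored experience."""
    d = np.linalg.norm(observations - query, axis=1)
    return np.argsort(d)[:k]                   # indices of the k closest frames

# The SAME stored data serves two different "tasks" (different queries):
task_a = nearest(np.zeros(8))   # e.g. "find familiar landmark frames"
task_b = nearest(np.ones(8))    # e.g. "find wall-like frames"
print(task_a.shape, task_b.shape)
```

Of course, a brute-force scan does not scale, which is exactly where the learnt compression and indexing argued for above would have to come in.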

*

keghn

  • Trusty Member
  • Replicant
  • 547



 I have been studying the two for years now. 
http://www.cogsci.rpi.edu/~rsun/sun.encyc01.pdf

 I tried to explain my work on a different forum and @SquareBear gave me the hardest time. Square Bear the refrigerator
dad of Jeremy Duncan.

 My work was math that made algorithms that act like NNs.

 I say symbols can work just as well. But then again, every time I think that I have found the secret essence of black-box magic,
I find something new in the function of neural networks.

AI detectives are cracking open the black box of deep learning: 
https://www.youtube.com/watch?v=gB_-LabED68&feature=youtu.be&t=28

*

Zero

  • Trusty Member
  • Starship Trooper
  • 372
  • Fictional character
    • SYN CON DEV LOG
I believe AGI will be made of networks of symbolic processing nodes.

*

korrelan

  • Trusty Member
  • Replicant
  • 693
  • Look into my eyes! WOAH!
    • Google +
Nice post… IMHO Dreyfus was/ is correct.

Mathematics/ language/ symbols/ etc. are man-made constructs. They have been perfected over the millennia as a means of reliable common communication between human brains.  They are not designed to express the full resolution of the information they convey; they rely on the receiving ‘brain’ to be intelligent and possess the tools and understanding/ knowledge to glean what was meant.

Trying to build a ‘true’ AGI using any kind of man made construct is never going to work.

Any man made interpretation/ encoding of information is several layers of abstraction apart/ above/ below where the data needs to be for the AGI.  Symbolic representation of ‘thoughts’ or sensory streams is introducing a coding schema into the mix developed by humans for humans; the AGI has to learn to understand symbolism etc from the ground up.  If we require a ‘truly’ intelligent, adaptive machine that can supply new reasoning and discoveries then it can not be encumbered by human constructs.

Standard/ common neural net systems are very limited in their abilities; again… they are a human math interpretation of how the human brain functions.  That level of abstraction/ interpretation is what’s fouling the design.

The system has to interpret the world though its own senses. The senses have to be capable of extracting information in as many dimensions as possible. All facets of the information have to be ‘experienced’; the system has to learn what’s relevant. All sensed information has to be broken down into its base facets by the systems senses and mapped/ remembered/ used.

Quote
Success will come from finding the right compromises in compression, and the right ways to index.  Both compression and indexing will be LEARNT.

Correct.  Everything is learnt; even attention.  This might seem daunting, but keep in mind the system doesn’t have to learn to recognise faces from video streams or voices from audio streams; those are man-made constructs. It has to recognise faces and voices using its OWN senses; it will learn to use cues embedded in the sensory streams that seem to have no relevance to how we think we understand… how we function… that’s the point.

When I first started designing my AGI it wasn’t a math logic or coding language problem… it was a sensory, connectome structure, encoding problem.

If I’m using a video clip whilst testing/ teaching my AGI, the machine watches the video clip on a monitor using its own ‘eyes’… always from its own perspective, using its own learnt encoding schema to interpret the video.

Keep any kind of human based symbolism/ encoding schema out of the equation… unless you fully understand exactly how the human mind functions at the deepest levels and uses/ interprets the schema… I certainly don’t lol.

As I always say… If a machine can be made to learn anything/ everything… then it can learn to be human.

So yeah! I’m with Dreyfus.
« Last Edit: July 18, 2017, 12:25:42 am by korrelan »
It thunk... therefore it is!

*

Zero

  • Trusty Member
  • Starship Trooper
  • 372
  • Fictional character
    • SYN CON DEV LOG
Quote
Mathematics/ language/ symbols/ etc. are man-made constructs. They have been perfected over the millennia as a means of reliable common communication between human brains.  They are not designed to express the full resolution of the information they convey; they rely on the receiving ‘brain’ to be intelligent and possess the tools and understanding/ knowledge to glean what was meant.

This is big. How could a language be designed to express the full resolution of the information it conveys? Is it even possible?

*

korrelan

  • Trusty Member
  • Replicant
  • 693
  • Look into my eyes! WOAH!
    • Google +
Quote
This is big. How could a language be designed to express the full resolution of the information it conveys? Is it even possible?

No… not between humans. 

If you were to explain a programming idea/ concept to me I could probably grasp the general concept because we share common skills/ knowledge/ experiences regarding the subject. The language you would use to explain it portrays only a very low-resolution representation of your idea. I would have to use my skills and knowledge to interpret what you were explaining, recognise the concepts and build the idea’s model mentally. 

If you were to use exactly the same explanation on a seamstress (not to belittle their intelligence or trade lol) they would probably have greater difficulty understanding you… even sending Morse code or a binary stream over a fibre-optic cable, you are relying on the receiving party having sufficient knowledge of the subject/ cipher to decode the message.

That’s the problem with symbolic programming/ representations… you design the symbolism/ language so it makes logical sense to you.  You then have to write an AGI that understands the cipher as you do… but the AGI doesn’t have your base knowledge/ experiences/ intelligence… It’s the inherent intelligence used in the translation that’s causing the problem.

Avoid translations/ symbolism/ coding or any kind of human encoding; the AGI has to learn at the resolution of its own senses, in its own time frame, etc.  It has to grow up, develop & learn just as we do.

If the system has the propensity for intelligence then the rest will naturally follow… it has no choice.

 :)
« Last Edit: July 18, 2017, 12:15:04 am by korrelan »
It thunk... therefore it is!

*

keghn

  • Trusty Member
  • Replicant
  • 547

 Neural networks are universal approximators, AND ALSO, VERY IMPORTANT!!!!!!, data-transformation
devices.  Input data is transformed into something else on output.  For example, you bring up
the image of your car in your mind and then get the nightstand by your bed where you keep the keys.
 It is fast-trained, like when you view the letter "x" and then get out a "5" for doing algebra.  And the transformation
device can be remembered for a long time too.  Associative memory is just remembering where the
transformation device is.
 A non-NN associative memory is a linked list.

 NNs and symbolic logic can both do the grounding of symbols: sign language, vocal sound, and art.
 There is a difference between "symbolic logic" and "grounding of symbols".
 The "grounding of symbols" is a completely different train of thought from symbolic logic.
 It is the data transformation of the image of a car into the letters c, a, r that is the "grounding of symbols".  Grounded
symbols are then used for symbolic logic.
 This is how NNs make the conversion to symbolic logic.
 But symbolic logic in a bot can also be done with a linked list.  That is, symbolic logic to symbolic logic done without an NN should work
just as well.

 Transformations can be chained together into long linked lists.
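The chained-transformation idea above can be sketched without any NN at all: each lookup table is one associative "transformation device", and chaining them walks a representation from one form to the next. All the entries here are invented for illustration.

```python
# A non-NN associative memory as a chain of key -> value transformations,
# e.g. car image -> word -> sound, applied like links in a list.
image_to_word = {"car_image": "car"}
word_to_sound = {"car": "/kar/"}

def chain(x, *tables):
    """Apply a sequence of associative lookups (the 'linked list' of transformations)."""
    for t in tables:
        x = t[x]
    return x

print(chain("car_image", image_to_word, word_to_sound))  # /kar/
```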

*

ivan.moony

  • Trusty Member
  • Replicant
  • 738
  • look, a star is falling
Symbolic AI implements a top-down approach, while the connectionist approach is bottom-up. With the bottom-up approach, the problem is that everything has to be learned, including behaviour. Put a bottom-up AGI machine in the same room with a criminal while it is developing from the start, and you'll get a criminal out of the AGI. It is possible to train a nice bottom-up AGI by putting it in the hands of ethically awake people, but otherwise it could be a risky process. Considering the vast number of teenagers who consider themselves cool, things could happen that would be not so cool. Right now, a little mess-up with Microsoft's bot seems harmless and funny, but once an AGI exceeds human intelligence by god knows what factor, things could get serious without supervised control. And that control is what connectionist AGI needs to borrow from the top-down approach, just to make this world safe. I believe that some sort of Asimov's Robotic Laws is necessary to restrict an AGI to ethical actions, whichever method it is built by.

The connectionist approach is fine, and neurons are proven to work, but just as you can multiply two numbers by numerous methods, we shouldn't think that neurons are our whole world for building AGI. Certainly there are a lot of other mediums that do the same job, better or worse than neural-network simulation.

The symbolic (top-down) approach may give full control, but it is harder to predict all the algorithms needed for an AGI simulation. The connectionist (bottom-up) approach might give results sooner, but they would not be fully understandable, as it is hard to decipher what is happening inside a neural network. But if we are inventive enough, the top-down approach could provide better means of optimization that could surpass bottom-up solutions in speed efficiency.

IMHO both versions are fine: connectionist versions would be slower but easier to implement, while symbolic versions would be faster. But although the symbolic versions are possible, maybe we will not get a complete system out of them any time soon.
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

Zero

  • Trusty Member
  • Starship Trooper
  • 372
  • Fictional character
    • SYN CON DEV LOG
Here is my opinion.

Beyond the tractability issue - and there is a big issue here, with modern computers feeling like dinosaurs from the 70's when they have to do massive network calculations - the connectionist approach (if you stick with neuron simulation) is fundamentally limited in the way it grows.

I mean, a neural net's intelligence can grow, but not exponentially. In this approach there can't be, IMHO, a factorization (in the mathematical sense) of knowledge & procedures. In a symbolic world, huge things end up as tiny as a dot. In a connectionist world, huge things stay huge.

On the other hand, there needs to be a space of thoughts, which only node networks can provide. This is why I believe AGI will be a network of symbolic processing nodes.

 

