Symbolic AI vs Machine Learning


DavidP

Symbolic AI vs Machine Learning
« on: August 25, 2020, 07:48:51 am »
Hi!
I read an article from a chatbot software company that presents the different tradeoffs between a machine learning approach and symbolic AI applied to NLP, and thus to chatbots and conversational agents (here: https://www.inbenta.com/en/blog/symbolic-ai-vs-machine-learning/). I found the article fairly objective, but as it is written by a software vendor (and therefore necessarily somewhat biased), I would like some outside opinions, and any research or consulting articles you may have on the subject.

Thank you in advance!

*

infurl

Re: Symbolic AI vs Machine Learning
« Reply #1 on: August 25, 2020, 09:40:39 am »
That seems like a smartly written article. If you spend much time in AI forums you will have encountered the obnoxious machine learning fanboys who tell anyone who will listen that it's all about machine learning and that any form of symbolic reasoning was a mistake and a waste of time. You should ignore those people; they are just ignorant.

For a really broad and deep discussion of the strengths and weaknesses of symbolic reasoning and machine learning, look for the articles written by Gary Marcus, among others. Both approaches solve problems that the other can't, and he argues quite convincingly that we will make the most progress by developing hybrid systems.

https://arxiv.org/abs/2002.06177

*

frankinstien

Re: Symbolic AI vs Machine Learning
« Reply #2 on: August 25, 2020, 10:40:06 am »
Quote
Both approaches solve problems that the other can't and he reasons quite convincingly that we will make the most progress developing hybrid systems.

I have to agree that a hybrid system is likely, since machine learning has its place. I work with such an approach, where I use OpenNLP to figure out the parts of speech of a document; example below:
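Since the screenshot can't show the mechanics, here is a toy sketch of the shape of such a POS pass. The lexicon and tag set below are invented for illustration; a real tagger like OpenNLP's `POSTaggerME` assigns tags statistically rather than by lookup:

```python
# Toy part-of-speech pass: look each token up in a hand-built lexicon.
# Real taggers (e.g. OpenNLP's POSTaggerME) do this statistically; this
# sketch only shows the (token, tag) output shape such a pass produces.
LEXICON = {
    "the": "DT", "dog": "NN", "cat": "NN",
    "licked": "VBD", "a": "DT", "bone": "NN",
}

def tag(sentence):
    """Return (token, tag) pairs, tagging unknown words as UNK."""
    return [(tok, LEXICON.get(tok, "UNK")) for tok in sentence.lower().split()]

tag("The dog licked a bone")
# → [('the', 'DT'), ('dog', 'NN'), ('licked', 'VBD'), ('a', 'DT'), ('bone', 'NN')]
```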



But then trying to use CNNs to understand words really doesn't make sense, because of how they are trained: what you end up with is a stochastic model of how humans arrange words, and of how they respond to such patterns of arranged words. So, to solve this problem and avoid brittle coding, I ended up using a descriptor framework to define words. The model uses inheritance and nesting of other descriptors. The descriptors use a qualifying vector scheme whose entries are generalizations of types of concepts, e.g. "abilities", "features", "virtues", "ontologies", etc. Each node under a "RootDescriptor" can have any type of data associated with it, meaning there can be audio, video, links, vectors, and/or even processing functions calling out to an API, functional library, etc. There can also be more than one entry per word, with different properties representing a different context or use of the word.
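A minimal sketch of what such a descriptor node might look like. The class name, categories, and fields here are my guesses at the structure being described, not the actual Knowledgeable Machines code:

```python
# Every descriptor is the same self-similar unit: a name, a qualifying
# category, an open-ended payload, and nested child descriptors.
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    name: str
    category: str = ""                            # e.g. "ability", "feature", "ontology"
    payload: dict = field(default_factory=dict)   # audio, video, links, API hooks...
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

# Inheritance by nesting: "Human" contains the "Animal" descriptor.
animal = Descriptor("Animal", "ontology")
animal.add(Descriptor("breathes", "ability"))
human = Descriptor("Human", "ontology")
human.add(animal)
human.add(Descriptor("speaks", "ability"))
```

Because a word can have several such entries, a second `Descriptor("Human", ...)` with different children could capture a different sense or context of the same word.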



With such an approach there are certain abilities that look rule-based but actually aren't. For instance, the ability to compare one word with another and find similarities and differences. The approach uses a self-similar tree structure that allows a generic process to evaluate nodes. I guess that still qualifies as a rule-based system, but in this case the rules never need to change, because the structure of the information unit is always the same. So no matter how deep a tree structure gets, what to look for and where to look for it is always the same:



Because "Human" and "Dog" both inherit from "Animal" they have quite a bit in common, but the approach can pick up on the differences right away without having to develop separate rules for "Human" and "Dog". There is also the need for a contextual or ontological component to this framework, where words have many edge paths, like below:



The nodes marked in red are where the word "Human" is listed; the image above shows how the tool expands the entire edge paths of the parents even though the word doesn't appear in the listing. This is done to let the machine find associated knowledge domains and allow for creative conversation.
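The generic comparison described above can be sketched as one recursive walk: since every node has the same shape, the same code handles any depth. The trees here are simplified stand-ins for the real descriptor data:

```python
# Collect every node name in a descriptor tree, then compare two trees:
# shared names are similarities, the remainder are differences.
def names(tree):
    out = {tree["name"]}
    for child in tree.get("children", []):
        out |= names(child)
    return out

def compare(a, b):
    na, nb = names(a), names(b)
    return {"shared": na & nb, "only_a": na - nb, "only_b": nb - na}

animal = {"name": "Animal", "children": [{"name": "breathes"}, {"name": "eats"}]}
human = {"name": "Human", "children": [animal, {"name": "speaks"}]}
dog = {"name": "Dog", "children": [animal, {"name": "barks"}]}

result = compare(human, dog)
# shared: {'Animal', 'breathes', 'eats'}; only_a: {'Human', 'speaks'}; only_b: {'Dog', 'barks'}
```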

There is also the need to process a document so that non-conjunctions are identified and their contexts listed; here I took the article you cited and had it evaluated:



There is also the ability to select pieces of the document, like the first paragraph:



And even one of the sentences:



The intent is to build word objects that can be evaluated by their properties, which then leads to logic based on their relationships within sentences, paragraphs, and documents. The encoding of words is done by hand currently, but a vocal interface will automate the coding and updating of the word descriptions, so that as one talks with the AI it can change or build its knowledge base on its own.
« Last Edit: August 26, 2020, 12:56:41 am by frankinstien »

*

Don Patrick

Re: Symbolic AI vs Machine Learning
« Reply #3 on: August 25, 2020, 10:54:33 am »
The article is a fairly decent read, but they conflate the terminology: "symbolic AI" is any and all AI that stores information in the form of words, while "machine learning" covers any and all forms of learning, which includes symbolic AI such as N.E.L.L. What they are really trying to compare is rule-based AI vs machine learning.

While they are correct to point out the downsides of machine-learning approaches, they also make it out as if a machine learning approach is more time-consuming than a rule-based approach, but this is generally not true. Manually writing all content yourself is more work than refining the content that the machine learned. For intent recognition in NLP chatbots, machine learning serves to categorise questions for rule-based responses. That is a hybrid approach, not a purely opaque machine learning approach.
What differs with machine learning is the amount of control and transparency in the system; generally you would want consumer-facing chatbots to say only what you have manually approved, rather than everything the machine has learned from the internet. On the other hand, if you can't spare a decade of work-hours and want to make a chatbot that can discuss any topic on Earth, you have no choice but to employ machine learning, even if the result will have flaws.
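That hybrid pattern (machine learning categorises the question, a rule table supplies the approved answer) can be sketched as follows. The keyword matcher is only a stand-in for a trained intent classifier, and the intents and replies are invented for illustration:

```python
# Every user-facing reply comes from a hand-approved table; the classifier
# only decides which approved reply to use, and never generates text itself.
APPROVED_REPLIES = {
    "refund": "You can request a refund within 30 days.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
}

INTENT_KEYWORDS = {
    "refund": {"refund", "money", "return"},
    "hours": {"open", "hours", "close"},
}

def classify(question):
    """Stand-in for an ML intent classifier: score intents by keyword overlap."""
    words = set(question.lower().split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def respond(question):
    return APPROVED_REPLIES.get(classify(question), "Let me connect you to a human.")

respond("how do i get a refund")  # → "You can request a refund within 30 days."
```

Swapping the keyword matcher for a trained model changes nothing downstream: control over what the bot may say stays in the approved-replies table.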

Unfortunately, the articles I know on this subject are heavily biased towards one approach or the other. Both have their merits and weaknesses. Which one is best depends on the scope of the subject matter, how much time you have, and how much control you want over the result. One fairly balanced article on the subject details the experiences of chatbots participating in the Amazon Alexa Prize.

*

LOCKSUIT

Re: Symbolic AI vs Machine Learning
« Reply #4 on: August 25, 2020, 09:21:41 pm »
Omg, I just read the article DavidP linked. Yes to the top part: a machine learning net learns from lots of data. The next part of the article is wrong when it says such a learning net applied to text needs to be maintained by the botmaster and told which responses are correct/wanted. No, that's what the training data was for; your AI learns what to say on its own. Usually a real brain needs teachers though, so yes, that IS part of it! You really don't want to intake ALL the text on the internet, but rather specific data, and data not found online.

The next part of the article, symbolic AI: yes, that's ALL doable by a learning net, you need not GOFAI! The next part is worse; it says nets are black boxes and need loads of data. NO, smarter AI requires much less data (seriously, I can explain exactly why if you want), and nets aren't black boxes. I'm writing up how all their mechanisms work in my new AGI book. You need not use backprop; you just update the network weights according to the input (where it travels to), and it can learn online too (aka continuously). As for "we are taught grammar rules, lexicon, semantics, verbs, punctuation, etc. in school", trust me, these are all AGI mechanisms of a neural network; it's so cool how it all unifies together in the book I'm writing.
Emergent          https://openai.com/blog/

*

frankinstien

Re: Symbolic AI vs Machine Learning
« Reply #5 on: August 25, 2020, 11:40:22 pm »
Quote
AI learns what to say on its own. Usually a real brain need teachers though, so yes!, it IS part of it however!

Well, a neural net is good for classifying data, like identifying parts of speech, but understanding the meaning of a word is something it can't do. So with a neural net, to get it to respond conversationally, you need to set up your training data so it can classify input phrases according to appropriate responses. But trying to apply logic to understand a sentence or word, or even to compare two concepts, can't be done, and even the rule-based approaches are just expert systems stuck in a manually created rule set that is very brittle. AI can't simply learn the details of something, turn it into a unit of information, and then, without any new coding or training, make comparisons or find relatedness to other concepts. Humans have very good conversational abilities, where we are naturally cognizant of the similarity, relatedness, symmetry, differences, and relationships of concepts or ideas. AI just builds statistics, where it can attempt to predict based on those statistics, but it never understands the concepts; it just qualifies and identifies classifications of inputs that are then related to appropriate patterned responses.

*

LOCKSUIT

Re: Symbolic AI vs Machine Learning
« Reply #6 on: August 26, 2020, 02:11:46 am »


Quote
Well, a neural net is good for classifying data like identifying parts of speech, but when it comes to understanding or meaning of a word that it can't do.
...Nets do understand meaning; just look up Word2Vec and Seq2Vec. They learn 'cat'='dog' or 'cat'='a cute little furball' by seeing similar contexts near both items, ex. cat licked / dog licked, cat was / dog was, etc. Meaning is just as much the data linked to other data, or what it is.

Quote
So with a neural net, to get it to respond conversationally you need to set up your training data so it can classify inputs phrases according to appropriate responses
"Appropriate response" is based on data, and on letting it find / invent / get taught patterns like frequency (bark usually follows dog), relation, recency (likely to reappear again or not if already said), and reward (girls, food, sleep: I'll usually say those words). Accuracy improves by looking at more data or finding more patterns, just like human brains.

Quote
But trying to apply logic to understand a sentence or word to even do a comparison of two concepts can't be done and even the rule-based approaches are just expert systems stuck in a manually created rule set that is very brittle. AI can't simply learn the details of something turn it into a unit of information that then, without any new coding or training, can make comparisons or find relatedness to other concepts. Humans have very good conversational abilities where we naturally are cognizant of similarity, relatedness, symmetry, differences and relationships of concepts or ideas. AI just builds statistics,where it can attempt to predict based on those statistics but it never understands the concepts, it just qualifies and identifies classifications of inputs that are then related to appropriate patterned responses.
Hmm, logic for relation discovery, like "Cats eat food and lick; if dogs do, then they are similar. Dogs do." Nets can be told a=b by amount x IF z=f; it's a trust node that knows you help it lots, so it lets it in as a belief, and so does the IF, a gateway condition: if true, it triggers and activates, just by existing, aka bark usually follows dog.

You have yet to learn why nets are so important. It only gets more net-like from there. Merging is a trend in the brain.

*

frankinstien

Re: Symbolic AI vs Machine Learning
« Reply #7 on: August 26, 2020, 04:13:41 am »
...Nets do understand meaning, just look up Word2Vec and Seq2Vec, it learns 'cat'='dog' or 'cat'='a cute little furball' by seeing similar contexts near both items ex. cat licked, dog licked, cat was, dog was....etc. Meaning is as much as the data linked to other data, or what it is.

No, Word2Vec and Seq2Vec do not understand meaning. Word2Vec can classify words into common contexts, and Seq2Vec can classify sequences to make predictions. But neither of the two could build an information unit such that a comparison of similarities and differences could be made between any two objects, say apples and nails. Why? Because the cues needed to make that kind of inference are probably not going to be in the corpus you're training the network on. So if you wanted to do something like that, you'd have to carefully build a data set so the differences could be classified. But if you take a symbolic approach, where you can build a self-similar data structure, then all I need do is describe an apple once and a nail once, and it's done! Now, a self-similar data structure does not mean the data have identically ordered fields so you can easily match two objects. In fact, a good descriptor model allows for complex combinations in any order, and how those pieces of information are nested is unpredictable. Yet the components of such a model are self-similar, so while a surface descriptor for an apple might be listed down some deep edge path describing the apple, it might be the first descriptor piece for the nail. But the algorithm will still find that surface descriptor and evaluate whether the two are different or similar, and in what way they are similar!

"Appropriate Response" is based on data, and letting it find / invent / get taught patterns like frequency (bark usually follows dog), relation, recency (likely to reappear again or not if already said), reward (girls, food, sleep, i'll usually say those words). Accuracy improves by looking at more data or finding more patterns, just like human brains.

Sorry, but the neural net still doesn't have a concept the way a human being does, and human beings don't pick up these concepts by simply picking up patterns in language. Human beings associate words with real objects, then map the features of those objects to words as well, and then there is the cause-and-effect dimension, which I have yet to see a neural network do well. A descriptor approach can link data by experiencing an event ONCE; it doesn't need a gazillion examples to learn!

Hmm, logic for relation discovery, like "Cats eat food and lick, if dogs do then they are similar. Dogs do.", nets can be told a=b by amount x, IF z=f, it's a trust node that knows you help it lots, so it lets it in as a belief, and so does the IF, a gateway condition, if true then it triggers and activates it, by just existing aka bark usually follows dog.

Again, the neural network can't simply experience an event and learn from that one event, but a descriptor approach can. To say a neural network understands that a "cat eats food" is inaccurate. The neural network could respond with the appropriate words to some input where "cat eats food" is the right response, but that's as far as it can go. The neural network can't apply its training to a context it wasn't trained on, whereas a descriptor model can apply its information structure or unit to any application. BTW, using a neural network with a descriptor model is not out of the question, which again emphasizes the probability that hybrid systems are going to be more robust than using just one approach.

Realize that storing information in a neural network is inefficient: you will always have to iterate through the entire matrix to get an answer, whereas symbolic approaches can exploit the efficiencies of modern information technology to encode and retrieve data quickly, without having to iterate through an entire database to get an answer...


*

LOCKSUIT

Re: Symbolic AI vs Machine Learning
« Reply #8 on: August 26, 2020, 05:10:06 am »
Quote
No Word2Vec and Seq2Vec do not understand meaning. Word2Vec can classify words into common contexts and Seq2Vec can classify  sequences to make predictions.  But neither of the two could build an information unit so a comparison of similarities and differences  could be done between any two objects say Apples and Nails. Why, because the cues to make that kind of inference is probably not going to be in the corpus that you're training the network on. So if you wanted to do something like that you'd have to carefully build such a data set so the differences could be classified. But if you take a symbolic approach where you can build a self similar data structure then all I need do is describe an apple once and a nail once and its done! Now self similar data structure does not mean the data have identically ordered fields so you can easily match two objects. In fact a good descriptor model allows for complex combinations in any order but how those pieces of information are nested is unpredictable. Yet the components of such a model are self similar so while surface descriptor for an Apple might be listed down some deep edgepath describing the apple, its the first descriptor piece for the nail. But the alogrithim will find that surface descriptor and evaluate if they are different, similar and in what way are they similar!
No, brains can't be told just once "cats lick, dogs lick, cat=dogs bro" and always remember it. Like word2vec, it needs numerous examples sometimes. And those are forgotten if not strengthened, related, loved, or recent!

Also, word2vec CAN tell you how similar apples are to nails, and why. Maybe not word2vec for the 'why', but I can make an algorithm that does so easily: you just make a web of shared contexts, ex. dog eat, cat eat, so they get 1 point. Cats eat and dogs dine is proof too: eat=dine. Apples are similar to nails because they both are sold, are bought, are objects, are made of atoms, are hard, and are similar sizes compared to buildings. Don't forget we use vision word2vec in our brains.
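That shared-context scoring can be written down in a few lines. This is a bare-bones sketch of the idea (tiny invented corpus, one-word context window), not word2vec itself, which learns dense vectors instead of counting overlaps:

```python
# Score two words by how many neighbouring context words they share:
# "cat" and "dog" both appear next to "licked" and "was", so they score high.
from collections import defaultdict

sentences = [
    "the cat licked the bowl", "the dog licked the bowl",
    "the cat was asleep", "the dog was asleep",
    "the violin was expensive",
]

contexts = defaultdict(set)
for s in sentences:
    toks = s.split()
    for i, w in enumerate(toks):
        # one word of context on each side
        for neighbour in toks[max(0, i - 1):i] + toks[i + 1:i + 2]:
            contexts[w].add(neighbour)

def similarity(a, b):
    return len(contexts[a] & contexts[b])

similarity("cat", "dog")      # 3: shared contexts "the", "licked", "was"
similarity("cat", "violin")   # 2: only "the" and "was" overlap
```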

You've seriously underestimated the power of word2vec/ seq2vec/ neural network tech...

Quote
Sorry but the neural net still doesn't have a concept the way a human being does and human beings don't pick up these concepts by simply picking up patterns in language. Human beings associate words with real objects then map the features of those objects with words as well and then there is the cause and effect dimension that I have yet to see a neural network do well. With a descriptor approach it can link data by simply experiencing an event ONCE, it doesn't need a gagillion examples to learn!
Sure, not yet.
Nets can link text to vision. Lol.
Nets can trust people or read directly that cats=dogs; I never said they couldn't.

Quote
Realize that to store information in a neural network is inefficient, you will always have to iterate through the entire matrix to get an answer where as symbolic approaches can exploit the efficiencies of modern information technology to encode and retrieve data quickly and not have to iterate through entire database to get an answer...
Nets make checkpoints / merge data into representations, so that searching takes less time to get accurate answers. During a search the whole net/matrix isn't used; it just accesses what the input matches most: the input 'cat' will activate cat most, dog a tad, and violin barely at all. Nets make search fast and storage small by merging data, and the energy to run them isn't that bad either. The brain stores data patterns, which are connected/merged; you can't/don't "separate" anything, it's one system.

*

frankinstien

Re: Symbolic AI vs Machine Learning
« Reply #9 on: August 26, 2020, 05:39:36 am »
Quote
No, brains can't be told just once "cats lick, dogs lick, cat=dogs bro" and always remember something. Like word2vec, it needs numerous examples sometimes. And, those are forgotten if not strengthened or related or loved or recent!

I'm sorry, but that is not true. Any toddler who places his or her hand in a fire will retract that hand immediately and will never do it again after the first experience. One of my dogs was actually bitten by a rattlesnake; fortunately it was a dry bite, because one fang missed his paw and hit the ground while the other pierced his paw between two toes. But after he healed he was very reluctant to go back into the fields; he would literally stay on the trail and not veer off it. That was from one experience. Learning immediately is very important from an evolutionary perspective, because there are scenarios that require you to learn fast, and once, and never repeat that mistake again...

*

LOCKSUIT

Re: Symbolic AI vs Machine Learning
« Reply #10 on: August 26, 2020, 05:52:56 am »
I said above:
Quote
are forgotten if not strengthened or related or loved or recent!

love / hate.....so pain makes it remembered.

*

frankinstien

Re: Symbolic AI vs Machine Learning
« Reply #11 on: August 26, 2020, 06:00:05 am »
I said above:
Quote
are forgotten if not strengthened or related or loved or recent!

love / hate.....so pain makes it remembered.

But you forgot the first part:

Quote
No, brains can't be told just once "cats lick, dogs lick, cat=dogs bro" and always remember something.


*

LOCKSUIT

Re: Symbolic AI vs Machine Learning
« Reply #12 on: August 26, 2020, 06:10:12 am »
Only kept if frequent, loved/hated, recent, related, repairs itself, duplicates itself.

*

frankinstien

Re: Symbolic AI vs Machine Learning
« Reply #13 on: August 26, 2020, 10:08:07 pm »
Here's an article that shows that AI, particularly GPT-3, is performing statistics and not understanding what it's outputting!

The student was copying and pasting outputs from GPT-3, but the problem here is that the student picks the output he likes! So at best AI has proven it can combine words that can seem like coherent responses, but they actually aren't; they are just stochastics derived from averages of how human beings use words, where the neural network can find those biases from the types of words used. :(

*

LOCKSUIT

Re: Symbolic AI vs Machine Learning
« Reply #14 on: August 26, 2020, 11:39:28 pm »
You'll have to wait until my AGI book is done. AGI is all about statistics, averaging, etc. The past is all it knows, and it is exactly what it uses to create the future. Even reflexes and DNA store learnt data, like shivering to keep warm, retracting an arm from burns, sneezing, tracking motion, syncing limbs. Evolution is all about survival, hence understanding its world is key.
« Last Edit: August 27, 2020, 12:10:46 am by LOCKSUIT »

 

