Complex Language Understanding & Execution (CLUES) Engine


Freddy

Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #90 on: May 03, 2011, 01:38:41 pm »
Thanks guys, glad you approve.  :)

Victor, I took the liberty of assigning you an avatar, as for some reason your old one was not showing up.  Feel free to change it to whatever you like.

victorshulist
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #91 on: May 03, 2011, 05:51:24 pm »
Thanks Freddy, I noticed that.  Did the site get attacked?  I saw a weird front page a while back.

Freddy
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #92 on: May 03, 2011, 06:19:52 pm »
Yes, we had a visitor who decided to change the landing page.

This prompted the upgrade, which I was considering anyway.  I'll be working on the site for the next few days to get some of the features back - I set up a fresh set of files, so I'll be busy.  It needed an overhaul really, so it's not all bad.

sunama
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #93 on: May 03, 2011, 06:28:55 pm »
Hang on, just to clarify, are you saying that you have done this upgrade within 24 hours or so?

Freddy
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #94 on: May 03, 2011, 06:51:44 pm »
More or less, in terms of time spent on the live site.  I had done a lot of tests on my local machine beforehand to make sure everything would work.  If you count all my tests and research, then about three days were spent on everything.

I started the live upgrade at about 8.15pm GMT yesterday and I was done by about 11.00pm.

I still have to add a lot of features back though; at the moment I am trying them out on a mirror site I have.
« Last Edit: May 03, 2011, 11:17:08 pm by Freddy »

Art
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #95 on: May 04, 2011, 03:10:08 am »
Really putting all that newly learned knowledge through its paces! Good for you!! O0

squarebear
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #96 on: May 04, 2011, 11:23:25 am »
Hehe! Yes, Freddy has only just done his computer exam and has already redesigned the site! Quick work, my friend!

Freddy
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #97 on: May 04, 2011, 02:21:53 pm »
 :)

victorshulist
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #98 on: May 15, 2011, 08:47:00 pm »
root@r2d2:/gli# ./gli

GLI - General Language Intelligence

(running in mode: sqa-test)

<user says> did jack go to the dance

<Grace says> Sorry, I don’t have an answer for you.

<user says> jack was going to a party and a dance given by his company

<Grace says> Yes, I understand.  I have stored that information in ‘temp-sqa-test’

<user says> did jack go to the dance

<Grace says> If you mean *GIVEN BY HIS COMPANY* dance, *A* dance then yes, (but I’m not sure if it was *THE* dance).


==============================

Here we see Grace handling conjunctions easily.    We stated that Jack was going to both a "party" and "a dance" (so two objects of the preposition "to").

She is then able to do what I call "flexxy matching" (not "fuzzy", since that term only applies to simple pattern-matching technology; FLEXXY is fuzzy but with true NLU :) ).

She compares the subtrees of the question parse tree and the fact parse tree, finds the differences, and uses that information to build a response.

In the above, Grace's GLI knows that 'dance' is being modified *in the FACT parse tree* by the phrase "given by his company", but *NOT* in the question parse tree, so she knows to say "IF you mean....."
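Roughly, the idea looks like this (a minimal sketch in Python, purely illustrative -- not the actual GLI code; the data shapes, function name and response wording here are invented):

# Illustrative sketch only -- not the real GLI code.  The node shape,
# function name and response wording are invented for this example.

def compare_noun_phrases(question_np, fact_np):
    """Diff the modifiers attached to the same head noun in question vs. fact."""
    q_mods = set(question_np["modifiers"])
    f_mods = set(fact_np["modifiers"])
    return {
        "only_in_fact": f_mods - q_mods,      # -> "If you mean ..."
        "only_in_question": q_mods - f_mods,  # -> "not sure if it was ..."
    }

# Fact:     "... a dance given by his company"
fact_np = {"head": "dance", "modifiers": ["a", "given by his company"]}
# Question: "did jack go to the dance"
question_np = {"head": "dance", "modifiers": ["the"]}

diff = compare_noun_phrases(question_np, fact_np)

answer = "yes"
if diff["only_in_fact"]:
    extra = ", ".join(f"*{m.upper()}* {fact_np['head']}" for m in sorted(diff["only_in_fact"]))
    answer = f"If you mean {extra} then yes"
if diff["only_in_question"]:
    missing = ", ".join(f"*{m.upper()}* {fact_np['head']}" for m in sorted(diff["only_in_question"]))
    answer += f", (but I'm not sure if it was {missing})"

print(answer)
# If you mean *A* dance, *GIVEN BY HIS COMPANY* dance then yes, (but I'm not sure if it was *THE* dance)

The real engine does this on full parse trees rather than flat modifier lists, but the principle is the same: diff the two structures and turn the leftover pieces into hedged wording.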

Yes, Grace converts to upper case the parts she wants emphasis on.

Yes, later she will deal with bad grammar, bad spelling, etc.

white
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #99 on: July 16, 2011, 10:02:02 am »
Hi Victor,

I am new to the forum and am following your project with great interest. I am quite impressed by what you have accomplished so far and am looking forward to reading more updates. I am working on something similar, but you are obviously quite a bit ahead of me.

Just a question on how the knowledge is built up, so I can get a better understanding of what method you are using. Take your first sentence as an example, 'While I was in Africa, I shot an elephant in my pajamas': how did it learn to associate clothes with humans? Is that based on statistical data (people/humans are associated with clothes far more often than animals in the knowledge base / historical data), or is it because it has been told that 'humans wear clothes more often than animals'?

victorshulist
GLI formerly CLUES Engine
« Reply #100 on: July 25, 2011, 06:55:49 pm »
Hi White

The answer is your second option:

                'humans wear clothes more often than animals'

Humans use heuristics to understand language.    There are "usually" or "for-the-most-part" properties attached to objects in the bot's KB (which I call the NALADABA - Natural Language Data Base).    Pronounce NALADABA with the first and third 'A' as in the word "NAIL", and the second and fourth 'A' as in the word "BAT".    The bot will store all of its knowledge as natural language statements and reason between them.

So humans have a "usually-wears" property of "clothing".   

Animals don't have that property.    However, both animals and humans have another property, "able-to-wear", with the value "clothing".

Now, when the bot considers the idea of an elephant wearing pajamas, it can look not only at the list of properties of pajamas and the list of properties of its "group" (clothing), but also at the properties of elephant.   It can look at the properties of animals, but also at the properties that are specific to elephants.   Elephants could have a property of "commonly-used-as-pets", which would probably be false for an elephant, but true for a dog, cat or bird.   Then, there could be a rule that tells it that people, as a novelty, *may* (but not *usually*) put pajamas on their dog or cat (I personally think the idea is a bit messed up).   Elephant may have a "size" property also.    In the end, there could be a lengthy boolean expression that takes all the properties of elephant and pajamas (and their general groups, animal and clothing) into account to deduce that the idea of an elephant wearing pajamas has the *combined property* of (*probably*) not being true.

Right now, my bot doesn't have that many properties for words (it has of course the POS property -- ALL words it knows have a POS value (part of speech), which helps with parse tree generation).    Right now the only thing it knows about elephants (well, animals in general) is that they do NOT have a property of "usually-wears", but people *do* have a property of "usually-wears" (and the value of that property is 'clothing').   It also knows 'pajamas' has the property "usually-used-for" = "clothing" (and perhaps another property, "could-be-used-for", equal to "rag").

So both humans and animals COULD wear clothing, but only humans USUALLY wear clothing.     Another noun, say "fire", would of course have no value at all for either "able-to-wear" or "usually-wears".

Also, some knowledge can be more specific: instead of generalizing pajamas as clothing, you could have properties that are specific to pajamas and not shared by the 'parent' (clothing).
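To make that concrete, here is a tiny sketch in Python (illustrative only -- the property names and values below are invented for this example, they are not the real NALADABA contents) of how "usually-wears" versus "able-to-wear" could combine to rank how plausible "X wearing pajamas" is:

# Toy sketch only -- the property names and values below are invented,
# not taken from the real NALADABA.

properties = {
    "human":    {"usually-wears": "clothing", "able-to-wear": "clothing"},
    "elephant": {"able-to-wear": "clothing", "commonly-used-as-pets": False},
    "dog":      {"able-to-wear": "clothing", "commonly-used-as-pets": True},
    "fire":     {},   # can neither usually nor possibly wear anything
    "pajamas":  {"usually-used-for": "clothing", "could-be-used-for": "rag"},
}

def wearing_plausibility(wearer, item):
    """Classify 'wearer wearing item' as usual, possible, unlikely or implausible."""
    w = properties.get(wearer, {})
    category = properties.get(item, {}).get("usually-used-for")
    if w.get("usually-wears") == category:
        return "usual"                                   # a human in pajamas
    if w.get("able-to-wear") == category:
        # the novelty rule: people sometimes dress up their pets
        return "possible" if w.get("commonly-used-as-pets") else "unlikely"
    return "implausible"                                 # fire in pajamas

for wearer in ("human", "dog", "elephant", "fire"):
    print(wearer, "in pajamas:", wearing_plausibility(wearer, "pajamas"))
# human in pajamas: usual
# dog in pajamas: possible
# elephant in pajamas: unlikely
# fire in pajamas: implausible

The real thing would of course fold many more properties into the kind of longer boolean expression described above.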

BTW, CLUES has been renamed GLI - General Language Intelligence, pronounced like 'July'.
« Last Edit: July 25, 2011, 07:19:37 pm by victorshulist »

white
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #101 on: July 29, 2011, 10:22:03 am »
Thanks for the clarification. Looking forward to reading more updates on your project.

infurl
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #102 on: July 29, 2011, 11:59:48 am »
@White

You could save yourself a lot of time by reading one of the principal research papers on the subject. It explains everything in great detail along with effective methodologies for implementation.

"A Practical Semantic Representation For Natural Language Parsing" by Myroslava O. Dzikovska 2004

http://www.cs.rochester.edu/u/myros/
http://www.cs.rochester.edu/u/myros/thesis.pdf


victorshulist
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #103 on: July 29, 2011, 05:03:58 pm »
Yes, there is a lot of philosophy in common between that paper and my design.    Some differences though are:

-   I don’t have strict “types” and “arguments”
-   Possible grammatical parse trees are generated and do not have to pass any tests.   However, grammar parse trees that DO pass semantic tests are ‘promoted’ with semantic markup.    So grammar parse tree generation is in no way controlled or guided by having to pass any semantic tests.
-   The system will not convert a semantic parse tree (= grammar tree with semantic markup) to any kind of internal representation.  Reasoning will always be done on the semantic parse tree (the system may, on the fly, convert SPTs into some logical form for use with an inference engine), but long-term storage is the SPT.
-   One grammar tree can have many semantic interpretations
-   If the input doesn’t converge to a single semantic interpretation, then conversation context and more reasoning will be used to finally differentiate and select one.   (Later, the system will be able to interact and ask clarifying questions to help select the correct semantic interpretation – yes, there could be a lot of recursion here).


"The compositionality of meaning representations" , on page 77, is heavily used in GLI.    Semantic meanings of noun phrases (say a direct compliment of a verb) can combine with a predicate verb for example to generate a semantic meaning of the entire predicate.   
 
« Last Edit: July 29, 2011, 05:15:45 pm by victorshulist »

victorshulist
Re: Complex Language Understanding & Execution (CLUES) Engine
« Reply #104 on: January 29, 2012, 02:03:16 am »

It's been a while since an update... so I'll blow the dust off this thread....

Grace's language handling has become fairly flexible.....

This thread can also serve as a good record of testing.    Please excuse the same words in the examples.... too much work has been done on the guts of the engine... I will try and make 'fresh' examples involving completely different concepts :)

The following are a few snippets of a test "conversation" I had with Grace today...
Quote

   <user says>

   Henry was going to his closet

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Henry go to his big closet

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*BIG* closet]


The fact is a simple sentence; Grace sees that your question had 'big' modifying 'closet', but the closest fact to answer it did not.


Quote

   <user says>

   Did Henry go to his closet

   <GRACE says>

    YES, that exactly matches my knowledge.


This time all modifiers in the question (modifiers of the direct object 'closet') matched exactly with the fact -- there were no more and no fewer modifiers between question and fact, so she says it matches her knowledge exactly.


Quote

   <user says>

   Joe went to his closet

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Joe go to my closet

   <GRACE says>

   Yes, with conditions.  The conditions are that IF:

   you mean [*HIS* closet]
   you ignore [*MY* closet]


Note that Grace says she stored the fact in the directory "temp-sqa-test" - this is because these are considered temporary (nonsense) statements that just 'live' long enough to test her, statement by statement, and are typically cleared between test-case sessions.
Here Grace points out two differences in the way 'closet' is modified between your question and the closest fact she found that could answer your question.
Again, we're still on facts that are simple sentences, only being modified by *single-term* modifiers ('his', 'her', 'my', etc.).
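A minimal sketch of how those "you mean ..." / "you ignore ..." conditions could fall out of the modifier comparison (illustrative Python only, not the actual GLI algorithm or its data structures):

# Toy sketch -- not the real GLI code.  "you ignore" = modifier present in
# the question but not the fact; "you mean" = present in the fact but not
# the question.

def answer_conditions(head, question_mods, fact_mods):
    conditions = []
    for m in question_mods:
        if m not in fact_mods:
            conditions.append(f"you ignore [*{m.upper()}* {head}]")
    for m in fact_mods:
        if m not in question_mods:
            conditions.append(f"you mean [*{m.upper()}* {head}]")
    return conditions

# Fact: "Joe went to his closet"      Question: "Did Joe go to my closet"
for line in answer_conditions("closet", ["my"], ["his"]):
    print(line)
# you ignore [*MY* closet]
# you mean [*HIS* closet]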

Quote

   <user says>

   Sam went to a dance

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Sam go to a great dance

   <GRACE says>

   Yes, with conditions.  The conditions are that IF:

   you ignore [*GREAT* dance]


So here Sam *went* to a dance... and she points out the small fact that there is a 'modifier imbalance'.

Quote

   <user says>

   John was going to a fun party     

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   did John go to a huge fun party

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*HUGE* party]


So notice here, it is *WAS GOING* (and not *WENT* as above).   Right now Grace assumes that if someone "was going to X", they got there.  Later, she will be able to be told that the person perhaps did not actually make it to their destination.


Quote

   <user says>

   Victor was going to a dance given by his company

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Victor go to a dance

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you mean [*GIVEN BY HIS COMPANY* dance]


OK, now we are getting a bit "cute" -- Grace is able to recognise that, rather than 'dance' simply being modified by a *TERM MODIFIER*, it is being modified by the entire phrase "given by his company" -- she points out that difference between the question and the closest fact that could answer the question.
 

Quote

   <user says>

   Did Victor go to a dance given by his big company

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*BIG* company]


OK, so this is interesting.  Grace knows that 'dance', being modified by "given by his company" in the fact, *almost* exactly matches the way 'dance' is being modified in the question ("given by his big company").  She points out that the only difference is that 'company' (the object of the modifier phrase "given by his company") is modified by "big" in the question, but the closest fact doesn't have 'company' modified by 'big'.
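Pushing the earlier sketch one level down gives a feel for this (again purely illustrative -- the tree shape and names are invented, not GLI's real structures): when a modifier is itself a phrase, the same comparison is applied to the object inside that phrase.

# Toy sketch only.  A noun phrase carries simple term modifiers plus an
# optional phrase modifier whose own object can carry modifiers.

def diff_np(question, fact):
    conditions = []
    q_terms, f_terms = set(question.get("term_mods", [])), set(fact.get("term_mods", []))
    conditions += [f"you ignore [*{m.upper()}* {question['head']}]" for m in sorted(q_terms - f_terms)]
    conditions += [f"you mean [*{m.upper()}* {fact['head']}]" for m in sorted(f_terms - q_terms)]
    q_phrase, f_phrase = question.get("phrase_mod"), fact.get("phrase_mod")
    if q_phrase and f_phrase:
        # recurse into the object of the modifying phrase ("... by his company")
        conditions += diff_np(q_phrase["object"], f_phrase["object"])
    return conditions

fact = {"head": "dance", "term_mods": ["a"],
        "phrase_mod": {"object": {"head": "company", "term_mods": ["his"]}}}
question = {"head": "dance", "term_mods": ["a"],
            "phrase_mod": {"object": {"head": "company", "term_mods": ["his", "big"]}}}

for line in diff_np(question, fact):
    print(line)
# you ignore [*BIG* company]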


Quote

   <user says>

   Tim went to a party given by his employer

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Tim go to that fun party given by his rich company

   <GRACE says>

   Yes, with conditions.  The conditions are that IF:

   you ignore [*RICH* company]
   you ignore [*FUN* party]
   you ignore [*THAT* party]
   you mean [*A* party]


So the same basic thing is happening here, except Tim *went* (as opposed to Victor, who "was going")... which Grace considers to be pretty much the same semantically.

At this point, I deleted all the fact files in the temporary directory... then continued...

*  ROUND 2 Testing: Facts in the main clause of a complex sentence *******
(a complex sentence is one in which there is a main clause and a subordinate clause)

Quote

   <user says>

   Victor was going to his closet because they were going to a party     

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Victor go to his big closet

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*BIG* closet]


OK, so now the facts are complex sentences, where Grace will have to find the answer in either the main clause or perhaps the subordinate clause.

In this case, she answered the question from the complex sentence --- the answer was in the MAIN CLAUSE of the fact.
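A tiny sketch of that clause search (illustrative Python; the storage format and the substring test here are made up -- the real matching is of course done on parse trees, not raw strings):

# Toy sketch -- not the real fact storage.  A complex fact keeps its
# main and subordinate clauses separately; we answer from whichever
# clause mentions both the subject and the object of the question.

fact = {
    "main_clause":        "Victor was going to his closet",
    "subordinate_clause": "they were going to a party",
}

def find_answering_clause(subject, obj, fact):
    for label in ("main_clause", "subordinate_clause"):
        clause = fact[label]
        if subject in clause and obj in clause:
            return label, clause
    return None, None

print(find_answering_clause("Victor", "closet", fact))
# ('main_clause', 'Victor was going to his closet')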


Quote

   <user says>

   Joe went to his closet because they were going to a big party

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Joe go to your closet

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you mean [*HIS* closet]
   you ignore [*YOUR* closet]

Same, but the main clause has the verb "went" rather than "was going".

Quote

   <user says>

   Sam was going to a dance given by his company because Christine was going to a dance

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did Sam go to the dance given by your company

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*YOUR* company]
   you mean [*HIS* company]
   you ignore [*THE* dance]
   you mean [*A* dance]


OK, so here we see a good example of Grace finding the answer to the question where the source fact is a complex sentence and the answer is in the main clause of the sentence.  The direct object 'dance' is modified by the phrase 'given by his company', and the question also has 'dance' modified, but by a phrase that doesn't match exactly ('your' modifies 'company' in the question, for example... and other differences, which she points out).


Quote

   <user says>

   John went to a dance given by his great company because Christine was going to a party

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   Did John go to a dance given by my great company

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*MY* company]
   you mean [*HIS* company]


Same type of situation, just illustrating 'went' versus 'was going'.

At this point I cleared the temporary directory...

Now... with facts in the subordinate clause...

Quote

   <user says>

   John was going to his closet because Christine was going to a dance given by her company

   <GRACE says>

   Yes, I understand.  I have stored that information in 'temp-sqa-test'

   <user says>

   did Christine go to that big dance

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*THAT* dance]
   you ignore [*BIG* dance]
   you mean [*GIVEN BY HER COMPANY* dance]
   you mean [*A* dance]

So here Grace uses the information in a fact which is a complex sentence, but this time the data is in the subordinate clause.  She points out the differences, including the entire phrase "given by her company", which is in the fact but not in the question.

Quote

   <user says>

   Did Christine go to a dance given by her great company

   <GRACE says>

    Yes, with conditions.  The conditions are that IF:

   you ignore [*GREAT* company]


So, in this case, we made the question more specific and included the modifying phrase "given by her great company", which ALMOST exactly matches Grace's knowledge, but she points out that 'great' is modifying 'company' in our question, but NOT in the closest fact she used to pull an answer from.


These are very low-level examples, but I feel Grace's slow mastery of these things will allow her to build up to more complicated interactions.

As I said, sorry for the same dull (and kind of weird) examples, lol... my next posts will talk about other concepts... (does anybody have a specific subject in mind, perhaps? :) )




 

