Pattern based NLP

  • 28 Replies
  • 66296 Views
*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #15 on: January 06, 2021, 08:02:43 am »
Recently added both:
-Tone (9 levels - 3 negative, 3 positive, 3 grooming behaviour/patronising)
-WiC Challenge test (Words in Context - https://pilehvar.github.io/wic/)

The WiC test is one of the few NLP tests that can actually be done on this pattern based NLP, as it's not specifically prediction or knowledge based.

The WiC test (training data & results) is ~5500 lines. It completes in only 2 seconds (1980ms-2000ms); however, many of the lines include deep knowledge or some other non-literal meaning to trick everyone, including people, so it'll also trick this NLP... The human score is only 80%. Most NLPs get 60-75%.

Most of the NLP set up is complete, so this year I'll be adding words & sentences in order to get through this test... 

O0


*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1590
    • contrast-zone
Re: Pattern based NLP
« Reply #16 on: January 06, 2021, 09:42:45 am »
Hi MikeB :)

May I ask, how do you derive answers to the tests?
There exist some rules interwoven within this world. As much as it is a blessing, so much it is a curse.

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #17 on: January 07, 2021, 03:35:29 pm »
Hi MikeB :)

May I ask, how do you derive answers to the tests?

Hi Ivan, I ignore the selected word that the test says to match altogether, and just look to see if the underlying intention is the same.

In the line "He wore a jock strap with a metal cup. Bees filled the waxen cups with honey."... the word "cup" means the same thing in both. A traditional NLP would check whether "metal cup" and "waxen cup" mean the same based on knowledge linking, but in the pattern-matching NLP I just look to see if the basic underlying intention is the same. So both of these sentences would come under "Person describing" with sub-tags "clothing, material,..." and some others. If one sentence was a catchphrase or greatly different then it would return no match.

Another example... "I try to avoid the company of gamblers. We avoided the ball."... the word "avoid" means the same thing in both. Both have the intention "Person explain", so this would return true.

It should get at least 60% doing it this way. There is a way to add catchphrases to get a few more, and some other things I can do with tags. Trying to keep real knowledge linking and deducing as far away as possible...
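As a rough Python sketch of that matching rule (the intention tags and the per-sentence table here are invented for illustration; the real NLP derives them from its pattern rules):

```python
# Hypothetical intention tags for the example sentences above.
INTENTIONS = {
    "He wore a jock strap with a metal cup.": "person_describing",
    "Bees filled the waxen cups with honey.": "person_describing",
    "I try to avoid the company of gamblers.": "person_explain",
    "We avoided the ball.": "person_explain",
    "Cat got your tongue?": "catchphrase",
}

def wic_match(sent_a: str, sent_b: str) -> bool:
    # Ignore the target word entirely; compare only the underlying intention.
    return INTENTIONS[sent_a] == INTENTIONS[sent_b]
```

The catchphrase case falls out naturally: it carries a different intention tag, so it never matches a descriptive sentence.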

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #18 on: January 14, 2021, 02:53:06 pm »
Just comparing the two Intentions isn't working out too well. Going to start a specialised way of doing it (still without knowledge) by looking at the words before & after the selected word.

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #19 on: February 01, 2021, 07:27:24 am »
Restructured the WiC / Words-in-Context test to look at the words before & after the indicated word, similar to how we do it.

A brief overview...

1) Both sentences are formatted (look for odd symbols, double spaces, spelling, words spaced out like "h e l l o", extended laughing "hehehehe...").
2) Pattern-match each word to a predefined symbol from a list (only ~20 different symbols total, out of ~2300 English words. No stemming.).
3) Analyse WiC:
 a) Input: Both sentences, the 'lookup word', and both locations of the word.
 b) WiC function: Check the 'lookup word' (now a symbol shared with ~100 similar words) exists in the WiC / sentence compatibility table (~50-100 entries).
 c) WiC function: If at least one match, check all other words. Highest word count (3-5 words) is selected as a match. Remember compatibility ID. Now check second sentence for a match. Return match true/false.
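The three steps might be sketched like this (the symbol names, word list, and compatibility table are all invented for illustration; the real tables are far larger):

```python
# Step 2: pattern-match each word to one of a small set of symbols.
WORD_TO_SYMBOL = {
    "come": "MOVE", "came": "MOVE", "out": "DIR", "down": "DIR",
    "road": "PLACE", "closet": "PLACE", "he": "PERSON", "singing": "ART",
}

# Step 3b: compatibility table. For a lookup symbol, each compatibility ID
# names the neighbouring symbols that confirm one sense of the word.
COMPAT = {"MOVE": {1: {"DIR", "PLACE"}, 2: {"ART"}}}

def to_symbols(sentence):
    return [WORD_TO_SYMBOL.get(w.strip(".").lower()) for w in sentence.split()]

def best_compat_id(symbols, lookup_symbol):
    # Step 3c: pick the compatibility ID with the highest neighbour count.
    if lookup_symbol not in COMPAT:
        return None
    scores = {cid: sum(1 for s in symbols if s in wanted)
              for cid, wanted in COMPAT[lookup_symbol].items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def wic(sent_a, sent_b, word):
    # Match if both sentences select the same compatibility ID.
    sym = WORD_TO_SYMBOL[word]
    id_a = best_compat_id(to_symbols(sent_a), sym)
    id_b = best_compat_id(to_symbols(sent_b), sym)
    return id_a is not None and id_a == id_b
```

For example, `wic("Come out of the closet", "He came singing down the road", "come")` lands on the same compatibility ID in both sentences, so it returns a match.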

This is much more detailed than just checking the intention, as it can pick up the same context even if one sentence is an "instruction" and the other is a "person describing". E.g. "come/came": (1) "Come out of the closet" (2) "He came singing down the road".

I got the time down from 2000-2600ms to ~1400ms by removing most of the pre-formatting and only keeping the 'Double Space' check, as the test is already formatted...

Score is not worthy of publishing because I've only checked about 100 of the ~5500 records! A lot of sentences are reused though, so I shouldn't have to check all of them.


*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #20 on: March 01, 2021, 09:49:25 am »
Still working on the WIC test.

Making progress of about 0.1% per day. (20-30% to go)

There are now 3700 words (+1400) and 900 WIC pattern sentences (+800). Re-added spell-checking, so the full WIC test takes about 2.5 seconds to complete.

The scale and pickup are actually immense. Each of the 900 WIC pattern sentences has 3-6 "Symbolic Words". Each Symbolic Word represents 10-500 words, so each of the 900 WIC sentences actually picks up 500,000 - 20,000,000 variations.
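In other words, the variation count is just the product of the per-symbol word counts. A quick hypothetical check (the counts are invented, within the stated 10-500 range):

```python
from math import prod

# One hypothetical 4-symbol WIC pattern: each symbolic word stands in for
# a different number of real words.
words_per_symbol = [30, 120, 40, 60]
variations = prod(words_per_symbol)  # 30 * 120 * 40 * 60 = 8,640,000
```

Even modest per-symbol counts multiply out to millions of recognisable sentence variations per pattern.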

Many times I add 10-20 WIC patterns (~100,000,000 word-sentence variations) and it only picks up one solitary record in the 5428 record WIC test... So the test is basic... but the word formatting is still broad enough that you can't just cheese the test.

Another problem is lack of words... I'm estimating I'll need at least 5000-7000 total to get a good result, and all of these are hand-entered into specific categories, so it's going to take some months...

One side effect is that I'm probably going to drop the old "Intention" categories I used to use for the chatbot and use these new WIC categories instead as it picks up an interesting variety. There are about 50 different groups (will be merging some) along the lines of:
"person or thing started to move / person or thing has him..."
"the object/concept of a had-thing"
"had the concept when..."
"a motion was taken / apply a rule / have-take the concept-chance to..."
"i play/avoid the / objects moved/ordered/fell to the
"logic-action an object"
"moving-action the object"
"an object of objects / vivid objects/objectives of"

So these will be better in chatbot programming.

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #21 on: March 05, 2021, 08:51:40 am »
Sped up the processing thanks to Infurl's suggestion of adding Binary Searches.

Huge results.

Added to Spell Checking (800 words), and Word-token assignment (3700 words).

The original lists are unsorted, so they are hashed & sorted in program. (Hashed by ASCII adding.) There are typically 0-5 duplicate hash ids/collisions so the correct matches are checked letter-by-letter as well.

Processing 5428 sentence pairs:
Before: 2600ms
After: 76ms of preparing. Hashing & sorting spelling and word list.
After: linear searching the hash lists: 1700ms (900ms faster)
After: binary searching the hash lists: 930ms (1670ms faster)

There are other processes, but for the spell/word search alone, Hashed/Linear seems to make it ~50% faster, and Hashed/Binary seems to make it ~90% faster.
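A small Python sketch of that scheme: hash by summing character codes, sort once, then binary-search with a letter-by-letter check to resolve collisions ("opt" and "pot" hash identically):

```python
from bisect import bisect_left

def ascii_hash(word: str) -> int:
    # "Hashed by ASCII adding": the sum of the character codes.
    return sum(ord(c) for c in word)

def build_index(words):
    # Pair each word with its hash and sort by hash once, up front.
    return sorted((ascii_hash(w), w) for w in words)

def lookup(index, word: str) -> bool:
    h = ascii_hash(word)
    i = bisect_left(index, (h, ""))
    # Collisions are resolved by comparing the stored word letter-by-letter.
    while i < len(index) and index[i][0] == h:
        if index[i][1] == word:
            return True
        i += 1
    return False
```

`bisect_left` does the binary search; the short linear scan afterwards only runs over the handful of entries sharing the same hash.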

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1590
    • contrast-zone
Re: Pattern based NLP
« Reply #22 on: March 05, 2021, 11:12:46 am »
Great speedup! O0

And the good thing is that, using binary search, a growing search set doesn't slow things down on a linear scale but on a logarithmic scale (that's almost as good as constant speed). The bigger the search set is, the more you see the difference between linear search and binary search.
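To put numbers on the logarithmic claim, here are the worst-case comparison counts for one lookup:

```python
from math import ceil, log2

def worst_case_comparisons(n: int) -> dict:
    """Worst-case comparisons to find one item among n sorted items."""
    return {"linear": n, "binary": ceil(log2(n)) + 1}
```

Growing the word list from 1,000 to 100,000 entries multiplies the linear cost by 100 but only raises the binary search from 11 to 18 comparisons.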
« Last Edit: March 05, 2021, 01:02:30 pm by ivan.moony »
There exist some rules interwoven within this world. As much as it is a blessing, so much it is a curse.

*

infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1263
  • Humans will disappoint you.
    • Home Page
Re: Pattern based NLP
« Reply #23 on: March 06, 2021, 02:17:16 am »
The original lists are unsorted, so they are hashed & sorted in program. (Hashed by ASCII adding.) There are typically 0-5 duplicate hash ids/collisions so the correct matches are checked letter-by-letter as well.
...
After: 76ms of preparing. Hashing & sorting spelling and word list.

Pro-tip #2. There is no reason that you would have to do the preparation such as hashing and sorting at run-time. You could break out the portion of the code that does that preparation into a separate program which you run at compile time. This program does all the necessary preparation and then prints out all the data structures in a format that can be included by your final program and compiled in place into its final form. That will save you a chunk of time every time you run the actual program.

In my case I am parsing and processing millions of grammar rules which can take a considerable amount of time just to prepare. Although small grammars can be processed from start to finish at run-time, I have found it much faster to compile the different files that make up the grammar into intermediate partially processed files; these files in turn get loaded and merged into a final grammar definition which is then saved in source files that can be compiled and linked directly into my parser software, as well as a database format which can be loaded as a binary file at run-time.

That last feature has lots of advantages. The preprocessed files were so large that it was taking a long time just to compile them, but the best thing is that by separating the data files from the software, I can choose completely different processing options on the command line.
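In Python terms, the suggestion might look like a small build-time script that does the hashing and sorting once and writes the finished table out as source code (the file name and format are illustrative):

```python
def emit_table(words, out_path="prepared_table.py"):
    # Do the "76ms of preparing" once, at build time: hash by summing
    # character codes, sort, and write the result out as importable source.
    index = sorted((sum(ord(c) for c in w), w) for w in words)
    with open(out_path, "w") as f:
        f.write("# Generated at build time -- do not edit.\n")
        f.write(f"INDEX = {index!r}\n")
```

The run-time program then just imports `prepared_table` and starts searching immediately, with no preparation step at all.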

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #24 on: March 15, 2021, 07:08:02 am »
That might be an idea... If it takes longer than 500ms to load then I might do that... Only expecting about 5000-7000 words, but if I add more languages it could take a while.

I'm trying to keep the data slightly linked-in to the software so that it's harder to work out how it works, but it seems like it's just written in the DLL in plain English anyway... so I may end up separating them.

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #25 on: March 23, 2021, 08:38:35 am »
I finally ran into a problem with the word-token grouping not being separated enough, so I'm redoing all the groups.

Originally I was using 6 Logical groups, 6 Emotive groups, 1 generic 'Possessive/having' group, and a bunch of others including Person (1st/2nd/3rd person). The past/present/future tense is included in the logical/emotive groups as well, and this leads to having to double up WIC entries to cover bad grammar ("I run away", "I running away", "I runned away", "I ran away")... I'd rather have them in the same group and use a past/present/future tag on the word to analyse later... The context of "one person running" is the same (it's not "running a fridge"/operating; if it is, it's easy to pick up the extra words...).

The original theory is for 24 word groups, but 16 seems to be the best after I laid all the main keywords out. No prefix/suffix separation anymore...
4 Logical (concepts),
4 Emotive (everything that moves),
4 Burning (analytical/possessive/having/working),
4 Light (romantic/sense/pose/art).

The WIC entries should go down from ~800 to 200-400 with the same pickups and have more range... especially around the 'Burning' and 'Light' categories.

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #26 on: November 08, 2021, 08:33:01 am »
I'm still working on this. I recently completed the concept for mapping English words into the new grammar categories.

There are:

Four main groups for nouns, verbs (actions), and adjectives (modifiers). The groups are: Moving/living things, Analytical/laws/concepts, Logical subparts/binary actions, Light sense/stories/beautiful terms. (I.e. the four elemental/original groups each contain three sub-groups: Noun, Verb, Adjective.)

Seven other groups: Articles/Quantifier, Person/Agent, Question/Interrogative, Time-spatial, Direction-spatial, Conjunction/sentence breakers, Exclamation/grunt/hi/bye.

All eleven groups are also encoded with present/future/past and precise/optimistic/explaining at the same time. I.e. all Present things are Precise, all Future things are Optimistic, all Past things are Explaining. In the four main/elemental groups: Nouns are Precise/Present, Verbs are Future/Optimistic, Adjectives are Past/Explaining.

The last thing that breaks all grammar common sense is that each word is permitted to be in one category only. So words like "brush" must either be an action (verb) or the name of a thing (noun). The default is to be an action (verb), as nouns aren't heavily relied on in sentence matching.
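A minimal sketch of that single-category rule, with tense/mode falling out of the group itself (the group names follow the post; the word assignments are illustrative):

```python
# Each word lives in exactly one category -- no part-of-speech ambiguity.
CATEGORY = {"brush": "verb_moving", "law": "noun_analytical", "bright": "adj_light"}

# Tense and mode are implied by the sub-group, per the encoding above.
MODE = {"noun": ("present", "precise"),
        "verb": ("future", "optimistic"),
        "adj": ("past", "explaining")}

def tag(word):
    cat = CATEGORY[word]          # exactly one category per word
    pos = cat.split("_")[0]       # noun / verb / adj
    tense, mood = MODE[pos]
    return cat, tense, mood
```

One dictionary lookup yields category, tense, and mode together, which is what buys the speed.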

The whole concept is speed over quality... but as there's nothing for 3d environments between hard-written dialogue trees and GPT-3, this will sit right in between...

There are 3500 words to individually convert over, so it will take until the new year as I'm also looking at speech recognition software.

Speech recognition software today uses algorithms/n-grams/NNs and is really slow (1-3 seconds response time) and uses a lot of power... The speed of my FSM/FST/binary NLP is 0.1ms to process a sentence (all words & intention)... So if the speech rec software is fast as well then it's more suited to 3d environments even if not as good...

Combining the NLP with speech rec is as simple as writing phonemes next to each word in the dictionary... If the user is speaking via voice then the word-text searching can be skipped altogether... it can go straight from voice phoneme -> synonym symbols -> pattern sentence pickup -> intention grouping.

For audio processing I'm looking at OpenAL Soft (Open Audio Library) right now. There's nothing in the libraries for voice recognition, or even microphone low/high/band-pass filtering, but it's low-level enough to work on and has both speed and cross-compatibility with other OSs.

The fastest approach I've seen is to take about 50ms of audio (the shortest phonemes), generalise the pitch, then associate it with a phoneme (tuned to your accent). This takes about 1ms... but again, sits in between the best and something hard-coded.
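One very cheap way to "generalise the pitch" of a 50ms frame is counting zero crossings. This is only a sketch of the idea (a real system would also need the band energies to separate phonemes):

```python
import math

def frame_pitch(samples, sample_rate):
    """Estimate the pitch of one short frame by counting zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0.0) != (b < 0.0))
    duration = len(samples) / sample_rate
    # Each full cycle of a tone crosses zero twice.
    return crossings / (2.0 * duration)

# 50 ms of a 200 Hz sine at 16 kHz: 800 samples to process.
sr = 16000
frame = [math.sin(2 * math.pi * 200 * n / sr) for n in range(int(0.05 * sr))]
```

The estimate lands close to 200 Hz for a clean tone, and the whole computation is one pass over 800 samples, well inside a 1ms budget.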

One of the benefits if it works is that a responding chatbot can completely vary the response time to suit the situation... including interrupting the user, which adds another layer of humanness, but depends on how well words & intentions are picked up.

*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #27 on: November 24, 2021, 03:06:58 pm »
Update on audio speech recognition.

Traditional speech recognition uses transforms (FFT/DCT/DTT) to decode audio into voice phonemes. These capture 3 voice formants (frequency ranges specific to a phoneme) from one 'signature'. However, these use nested loops and are slow to process. DTT is the fastest, but I want to try it another way...

Most spoken phonemes have a range of different frequency areas combined to make the sound - bass/warmness, middle range, high range. E.g. "oh" is mostly bass, "ee" bass-middle, "ss" high.

The way I want to try is to separate the common frequency ranges first, then measure the power & complexity afterwards to tell if one range is loud/complex versus the others.

Separating the frequency ranges (band-passing) can be done in real time using just a few instructions, using pre-calculated IIR filters (http://www.schwietering.com/jayduino/filtuino/index.php). FIR filters are better quality but slow.
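A pre-calculated band-pass IIR of the kind Filtuino generates boils down to a biquad: five constants computed once, then a handful of multiply-adds per sample. Here is a sketch using the standard audio-EQ biquad form (the centre frequency and Q below are illustrative, not the post's actual settings):

```python
import math

def bandpass_coeffs(f0, fs, q=5.0):
    """Pre-calculate normalized band-pass biquad coefficients (audio-EQ form)."""
    w = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w) / (2.0 * q)
    a0 = 1.0 + alpha
    return (alpha / a0, 0.0, -alpha / a0,                  # b0, b1, b2
            -2.0 * math.cos(w) / a0, (1.0 - alpha) / a0)   # a1, a2

def filter_sample(x, state, coeffs):
    """One sample in, one sample out: just a few multiply-adds at run time."""
    b0, b1, b2, a1, a2 = coeffs
    x1, x2, y1, y2 = state
    y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
    return y, (x, x1, y, y1)
```

A tone at the centre frequency passes through nearly unchanged, while a tone a couple of octaves away is strongly attenuated, which is exactly the per-band separation the post describes.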

There is 20ms in between recorded audio frames to process the data, so I'm aiming to get both phoneme and NLP processing out of the way in 1-10ms, using the same thread as the one capturing the data.

This is some captured data for the word "wikipedia". The asterisks (*) represent good power & complexity levels versus background noise.

Currently there's little noise filtering and the band-pass filters need tightening up, but eventually, if the results are strongly reproducible, they can be added to tables as base values...


*

MikeB

  • Electric Dreamer
  • ****
  • 120
Re: Pattern based NLP
« Reply #28 on: November 26, 2021, 06:17:06 am »
An update.

I changed the IIR filters to resonators centered around 600Hz, 1250Hz, and 3150Hz and now have double the signal-to-noise ratio with more stable numbers.

This amplifies the signal for certain types of sounds, but in order for this to work I feel like I need about 10 filters, centered around different frequencies.

One FIR filter or a Fast Fourier Transform (the normal approaches to speech rec) is approx 50-100x slower than one pre-calculated IIR resonator filter, so there's plenty of room...

Signal is only 12.5%-25% over the noise background, and you need to speak close to the mic, so the SNR needs to be improved by at least 2x again to work...


 

