Ai Dreams Forum

Member's Experiments & Projects => AI Programming => Topic started by: frankinstien on November 17, 2021, 12:20:35 am

Title: Natural Language Cognitive Architecture
Post by: frankinstien on November 17, 2021, 12:20:35 am
LockSuit pointed me to this book "Natural Language Cognitive Architecture" by David Shapiro:

(https://i.imgur.com/zQchk8g.png)

The book focuses on natural language generative Transformers, which Shapiro hails as a milestone in AI. He extrapolates from transformers to a generalized AGI, described in concept by the diagram below:

(https://i.imgur.com/xvH8DzK.png)

The outer loop is a context-driven process that takes inputs from external influences in the environment, while the inner loop is a form of stream of consciousness that can reflect on the outer loop's influences as well as on its own inferences; both share a common database. The book doesn't go into much architectural detail, and Shapiro states that current transformer approaches are best suited to single-threaded designs. He also uses SQL relational databases. While generative transformers can confabulate and/or extrapolate contextual patterns from prompts and cues that provide a context or goal, they still suffer from cumbersome codifications because of how neural networks encode information. The whole approach starts from such a tiny data kernel: the characters of a language! The ANNs then have to build up structural elements within their layers that allow them to identify concepts. Burdening ANNs with that data is, IMO, an inefficient use of both the data and the ANNs!
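
To make the two-loop idea concrete, here's a minimal Python sketch of how I read it, not Shapiro's actual code: the outer loop ingests outside events, the inner loop ruminates on whatever is already in the shared store, and both write back to it.

Code:
shared_db = []   # stand-in for the common database both loops share

def outer_loop(external_input):
    # Context-driven: take whatever the environment offers and store it.
    if external_input:
        shared_db.append(("external", external_input))

def inner_loop(generate_thought):
    # Stream of consciousness: reflect on the latest entry, internal or external.
    if shared_db:
        shared_db.append(("internal", generate_thought(shared_db[-1])))

# Toy driver: one outside event, then the inner loop keeps ruminating on its own output.
for event in ["it started raining", None, None]:
    outer_loop(event)
    inner_loop(lambda last: f"thinking about: {last[1]}")

print(shared_db)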

An alternative approach would be to take the burden of storing the structural data of concepts off the network and let a database provide that resource. But a SQL database is the wrong tool. Why? Relational databases were designed to remove ambiguity, and as such they form data relationships that are very brittle. Tables have fixed columns, while language deals with concepts whose fields and data types vary with context. That's a problem: to make a relational database flexible enough you'd have to use join tables to represent varying field sets for different concepts, so you'd have a concept table that points to a field table. The trouble with that is you now have to do computationally expensive joins that become a nightmare to associate across varying ideas and contexts.
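
Here's a minimal sketch of that join-table pattern using Python's built-in sqlite3; the table and column names are made up just to show why every question about a concept turns into a join.

Code:
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE concept (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE concept_field (
        concept_id INTEGER REFERENCES concept(id),
        field_name TEXT,
        field_value TEXT
    );
""")
db.execute("INSERT INTO concept VALUES (1, 'apple')")
db.executemany(
    "INSERT INTO concept_field VALUES (1, ?, ?)",
    [("part_of_speech", "noun"), ("category", "fruit"), ("color", "red")],
)

# Every lookup about a concept becomes a join across the two tables, and
# associating concepts by shared fields means joining the field table against
# itself -- the cost piles up fast.
rows = db.execute("""
    SELECT c.name, f.field_name, f.field_value
    FROM concept c JOIN concept_field f ON f.concept_id = c.id
    WHERE c.name = 'apple'
""").fetchall()
print(rows)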

A better approach is to use an object-oriented data model. Looking at the diagram below:

(https://i.imgur.com/0AIyS8q.png)

We can see that a word or concept is described with feature vectors that need to be exposed so associations and comparisons can be made. This approach thrives on parallelism because we can build hash sets of the feature vectors, giving an O(1) lookup advantage where hundreds of thousands of lookups can happen in parallel! This removes the need for an ANN to form codifications that structure data into relationships. Those feature vectors cover grammar, context, and a plethora of other concepts, which lets the ANN do something much simpler: use the structured data together with functions that can search, compare, and process, learning patterns of calling external functions rather than performing those operations itself. Take GPT-3, for example: it learned to add, but why learn to add if adding is a functional process the machine already performs, coded far more efficiently than any ANN could do it? Wouldn't it be better to train an ANN to use a calculator rather than form internal logic to do arithmetic?
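
A minimal sketch of the hash-set idea, assuming each concept is described by a set of (feature, value) pairs; the names are illustrative, not Amanda's actual API.

Code:
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Inverted index: each (feature, value) pair hashes straight to the concepts
# that carry it, so a single lookup is O(1) on average.
index = defaultdict(set)

def add_concept(name, features):
    for pair in features.items():
        index[pair].add(name)

add_concept("apple",  {"pos": "noun", "category": "fruit", "color": "red"})
add_concept("banana", {"pos": "noun", "category": "fruit", "color": "yellow"})
add_concept("run",    {"pos": "verb", "category": "motion"})

def lookup(pair):
    return index.get(pair, set())

# Hundreds of thousands of these lookups can be farmed out in parallel;
# a thread pool stands in for that here.
queries = [("category", "fruit"), ("pos", "verb"), ("color", "red")]
with ThreadPoolExecutor() as pool:
    for q, hits in zip(queries, pool.map(lookup, queries)):
        print(q, "->", hits)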

Now look at Amanda's AGI approach:

(https://i.imgur.com/60ITH8B.png)

The diagram depicts a concept of time that has varying degrees of depth, which is very similar to what human brains do. Shapiro, by contrast, leans on time-stamping data, which effectively turns events into points in time and is not as effective as what nature invented. Time depth gives us a sense of work-effort, which biology relies on to conserve energy. Not only that, but Amanda's architecture implements parallelism and cross-platform capability, so when it needs to use a GPU it can, and that function has descriptors that describe it like any other concept. This lets the search for a function be easily associated with the concepts it processes through hash-set lookups of feature vectors!
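
And a tiny sketch of the "functions are described like any other concept" idea, with entirely hypothetical names; Amanda's real design may differ.

Code:
registry = {}      # concept name -> callable
descriptors = {}   # concept name -> feature dict describing the function

def register(name, func, features):
    registry[name] = func
    descriptors[name] = features

register("add", lambda a, b: a + b,
         {"kind": "function", "domain": "arithmetic", "inputs": "numbers"})

def find_function(**wanted):
    # Return any registered function whose descriptor matches the wanted features.
    for name, feats in descriptors.items():
        if all(feats.get(k) == v for k, v in wanted.items()):
            return registry[name]
    return None

# The ANN asks for a calculator instead of learning to add.
calc = find_function(domain="arithmetic")
print(calc(2, 3))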

Training an ANN to use functional capabilities, such as looking up data, comparing for appropriate word use, and evaluating documents, means stored knowledge is no longer solely the ANN's responsibility and focuses the ANN on finding effective patterns that combine functional processes with externally (external to the ANN) stored data! With this approach, there's no need to re-train or fine-tune just to memorize new data. Nor does it require the ANN to learn gigabytes of data: it learns spontaneously by interacting with its environment and can grow its vocabulary and experiences, while relying on other processes to do the heavy lifting of comparing, weeding out, and assessing best-fit data.

At least that's the theory, let's hope I'm right...
Title: Re: Natural Language Cognitive Architecture
Post by: MagnusWootton on November 17, 2021, 09:03:35 am
The NLP sentence analyzer that you showed in your other thread (not in this thread) looks really useful. Good luck, I hope you get some amazing success.

On my own work: I didn't do much NLP work myself.

The closest I ever got to doing NLP was sequences of verbs that complete a task. All I have to do to activate a verb is modify the symbolic environment (like "put dish in sink" = dish is in sink). Then I come up with the order via brute force, and I can get really long sequences (cost linear in the string length of verbs) by putting them in subsetted hierarchies like hierarchical temporal memory. There's a rough sketch of the idea further down.

Cool thing is it probably works for raw motor-sensory data as well. :) (Learning physics "snippets", then putting them in sequences of sequences, like verbs.) They are cause->effect snippets, recordings of the sensory->motor history.
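
Roughly what the verb sequencing looks like, as a toy Python sketch (all made-up names): a verb is just a change to the symbolic environment, and the order falls out of brute force.

Code:
from itertools import permutations

# Each verb is a (needs, results) pair over a set of symbolic facts.
verbs = {
    "pick_up_dish":     ({"dish on table"}, {"dish in hand"}),
    "put_dish_in_sink": ({"dish in hand"},  {"dish in sink"}),
    "turn_on_tap":      ({"dish in sink"},  {"dish is wet"}),
}

def run(order, state):
    state = set(state)
    for v in order:
        needs, results = verbs[v]
        if not needs <= state:   # verb can't activate in this environment
            return None
        state |= results         # activating the verb just modifies the environment
    return state

start, goal = {"dish on table"}, "dish is wet"
for order in permutations(verbs):          # brute force over orderings
    end = run(order, start)
    if end and goal in end:
        print(" -> ".join(order))
        break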
Title: Re: Natural Language Cognitive Architecture
Post by: MikeB on November 18, 2021, 08:32:12 am
Current NLPs are a "how can we have our cake and eat it too" prediction system. No matter how much context you add, at some point you need to sit down and rewrite the grammar into a form more suitable for the NLP. Do the hard work... What comes before words? Which point in the mind is activated when I'm thinking of this word (and similar words/synonyms in this language and any other language I know)?

Don't know how they can claim to be cognitive...
Title: Re: Natural Language Cognitive Architecture
Post by: MagnusWootton on November 18, 2021, 09:49:14 am
Don't know how they can claim to be cognitive...

OpenAI Codex is amazing. (If you believe them, you could think they are absolute charlatans!! hehe)