Below is my brain trying to define how a concept-driven AI chatbot could function. It's the first part of a set of notes from a brainstorm last night… I don't have the time to convert them into paragraphs, so I'm posting my 'thought' stream lol. I would imagine an ideal language/schema would probably include…
Concept engine?
Roundish?
Love/hate as length vectors: they were once very close but have become distant.
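(A minimal sketch of that idea, assuming concepts are plain numeric vectors and 'closeness' is just cosine similarity; the vectors and dimensions below are made up purely for illustration.)

```python
import numpy as np

def closeness(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction, ~0 = unrelated, -1.0 = opposite."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional concept vectors (dimension meanings are arbitrary).
love      = np.array([0.9, 0.8, 0.1, 0.2])
hate_then = np.array([0.8, 0.7, 0.2, 0.3])   # once very close to 'love'
hate_now  = np.array([-0.7, -0.6, 0.9, 0.1]) # has drifted away over time

print(closeness(love, hate_then))  # high: the concepts were close
print(closeness(love, hate_now))   # low/negative: they have become distant
```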
Every concept is multidimensional
A paragraph/set used to describe one word/concept…
I use slashes (/) to define concepts, provide extra meaning, and make links. Sometimes I'm just being lazy and can't think of the correct description/phrase/word/terminology… lol. It seems similar to 'a picture paints a thousand words'. A graphical link? A concept cross-link?
A concept thesaurus, concept objects? Object-oriented concepts?
A set of registers you can feed loads of concepts into, building up an overall pattern/picture that represents the main concept, like mixing colours.
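(A rough sketch of the colour-mixing register, assuming each concept arrives as a numeric vector and the register keeps a weighted running blend; the class name, weights, and dimensions are my own invention for illustration.)

```python
import numpy as np

class ConceptRegister:
    """Accumulates fed-in concept vectors into one blended 'picture', like mixing colours."""

    def __init__(self, dims: int):
        self.blend = np.zeros(dims)
        self.total_weight = 0.0

    def feed(self, concept: np.ndarray, weight: float = 1.0) -> None:
        self.blend += weight * concept
        self.total_weight += weight

    def picture(self) -> np.ndarray:
        """The overall pattern representing the main concept so far."""
        return self.blend / self.total_weight if self.total_weight else self.blend

register = ConceptRegister(dims=3)
register.feed(np.array([1.0, 0.0, 0.0]), weight=2.0)  # a dominant ingredient concept
register.feed(np.array([0.0, 1.0, 0.0]), weight=1.0)  # a secondary ingredient concept
print(register.picture())  # the mix leans towards the heavier ingredient
```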
A new subject linked into/overlaid over an existing template or concept map.
Concept spaces/maps with no fixed entry point; take it right back to the letters of the words?
A circular/3D concept map where layers rotate within each other, or points move closer to the current topic but keep their links? Use high-dimensional/adaptive concept maps?
Build up a topic picture/ pattern within the concept map.
https://en.wikipedia.org/wiki/Concept_map
Concept map links are themselves built from concepts, perhaps linked to emotional amounts/values.
So ‘is a’ or ‘same as’ have their own concepts.
Use the temporal ordering of letters/words in a sentence to form the concept map.
Not using labels but symbols/values/meta tags/hashes/custom encodings?
Good/ bad emotions/ feelings/ meanings on linear scales?
An object or concept is described by its links into the main concept map; no concept facets are duplicated, so all references become cross-linked. (Meta tags?)
Concept maps can be adapted by input sentences; links need strength values.
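(One way the last couple of points might look in code: a toy concept map whose links carry strength values, reinforced whenever an input sentence mentions two concepts in temporal order. The class, the adjacency rule, and the boost value are all assumptions for illustration, not a fixed design.)

```python
from collections import defaultdict

class ConceptMap:
    """Toy concept map: nodes are words/concepts, links carry adjustable strength values."""

    def __init__(self):
        # strength[a][b] is the strength of the link from concept a to concept b
        self.strength = defaultdict(lambda: defaultdict(float))

    def absorb(self, sentence: str, boost: float = 1.0) -> None:
        """Adapt the map from an input sentence: strengthen links between adjacent words."""
        words = sentence.lower().split()
        for earlier, later in zip(words, words[1:]):
            self.strength[earlier][later] += boost

    def links(self, concept: str):
        """Outgoing links for a concept, strongest first."""
        return sorted(self.strength[concept].items(), key=lambda kv: -kv[1])

cmap = ConceptMap()
cmap.absorb("a dog is a mammal")
cmap.absorb("a dog is a pet")
print(cmap.links("dog"))  # [('is', 2.0)] -- the 'dog -> is' link was reinforced twice
```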
The output from a concept map/block creates a unique value that affects how the next concept maps process the concepts. This helps limit the hierarchical depth. Binary-encoded/gated concept outputs, with one number specifying which outputs are active for that topic? Emotion could affect the binary gates?
The binary encodings can be changed by a description or experience of that concept.
Use set questions to resolve conflict between/ within concepts.
So a small dog (dog, size 2 (output 2))… a large dog (dog, size 5 (outputs 1, 3))? The binary value defines which output branches to include, bringing in different concepts based on size, i.e. patterns over eight branches.
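(A sketch of the binary-gating idea, assuming eight output branches, each gated by one bit of a pattern derived from a concept's size value; the thresholds, the branch numbering, and the emotion tweak are all invented here for illustration.)

```python
def size_to_gate(size: int, emotion_bias: int = 0) -> int:
    """Map a size value to a bit pattern selecting among eight possible branches.
    Emotion could flip gate bits (modelled here as a simple XOR). Thresholds are arbitrary."""
    gate = 0b010 if size <= 2 else 0b101   # small dog: branch 2; large dog: branches 1 and 3
    return gate ^ emotion_bias

def open_branches(gate: int) -> list[int]:
    """Decode which branches (numbered 1..8) the gate pattern opens."""
    return [i + 1 for i in range(8) if gate & (1 << i)]

print(open_branches(size_to_gate(2)))  # [2]     -> small dog
print(open_branches(size_to_gate(5)))  # [1, 3]  -> large dog
```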
Use time delay/ countdown on concepts to track conversation topics?
Concepts are fired linearly depending on the order of the letters/ words of the input stream. Already triggered concepts guide how the rest of the input stream is received/ encoded.
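(A toy sketch of the countdown and linear-firing notes above: each word fires a concept in input order, gets a countdown, and fades as more words arrive; whatever is still counting down is the current topic set. Treating each word as its own concept, and the lifetime of 3 ticks, are arbitrary assumptions.)

```python
class TopicTracker:
    """Fires concepts in input order and lets each one fade via a countdown."""

    def __init__(self, lifetime: int = 3):
        self.lifetime = lifetime
        self.countdown: dict[str, int] = {}   # concept -> remaining ticks

    def feed_word(self, word: str) -> None:
        # Tick down everything already active and drop concepts whose countdown expires.
        self.countdown = {c: t - 1 for c, t in self.countdown.items() if t > 1}
        # Fire the new concept (here simply the word itself) with a fresh countdown.
        self.countdown[word] = self.lifetime

    def active_topics(self) -> list[str]:
        """Concepts recent enough to still guide how new input is interpreted."""
        return sorted(self.countdown, key=self.countdown.get, reverse=True)

tracker = TopicTracker()
for w in "the dog chased the cat".split():
    tracker.feed_word(w)
print(tracker.active_topics())  # ['cat', 'the', 'chased'] -- 'dog' has already faded
```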
Expand the topic into as many concept dimensions as possible and only save the top few as the index? Encode high dimensional space?
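(A small sketch of 'expand then keep the top few': score a topic against lots of hypothetical concept dimensions, then store only the strongest ones as a sparse index. The dimension names and scores are invented.)

```python
def sparse_index(scores: dict[str, float], k: int = 3) -> dict[str, float]:
    """Keep only the top-k concept dimensions as the saved index."""
    top = sorted(scores.items(), key=lambda kv: -abs(kv[1]))[:k]
    return dict(top)

# Hypothetical expansion of the topic 'soup can' into many concept dimensions.
expansion = {
    "container": 0.92, "metal": 0.85, "food": 0.70,
    "cylinder": 0.66, "animal": 0.05, "emotion": 0.01,
}
print(sparse_index(expansion))  # {'container': 0.92, 'metal': 0.85, 'food': 0.7}
```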
?? Will a Mouse fit into a soup can?
‘Will’ (whole sentence?) – Question – concept – requires response – output
‘a Mouse’ (a) – mammal – rodent – size – volume – x,y,z
‘fit into’ – function – volume comparison – a < b
‘a soup can’ (b) – object – metal – container – cylindrical – size – volume – x,y,z
if volume (a) < volume (b) then ans = yes else no (define conceptually/ graphically/ spatially)
Use generic concept comparison functions to compare concept values?
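(The mouse/soup-can walk-through above might be sketched like this, assuming every physical concept carries rough x, y, z dimensions and a generic comparison function resolves 'fit into' as a volume comparison. The numbers are guesses, and a volume-only test obviously ignores shape.)

```python
from dataclasses import dataclass

@dataclass
class PhysicalConcept:
    name: str
    x: float  # rough dimensions, e.g. in cm
    y: float
    z: float

    @property
    def volume(self) -> float:
        return self.x * self.y * self.z

def fits_into(a: PhysicalConcept, b: PhysicalConcept) -> str:
    """Generic concept comparison: 'fit into' resolved as volume(a) < volume(b)."""
    return "yes" if a.volume < b.volume else "no"

mouse = PhysicalConcept("mouse", 8, 3, 3)         # mammal -> rodent -> size -> volume
soup_can = PhysicalConcept("soup can", 7, 7, 10)  # metal container -> cylindrical -> volume

print(fits_into(mouse, soup_can))  # 'yes' (72 < 490)
```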
And so on…
Let's set a starting point called "item". An item can be anything: a synonym relation, a description of the movement of a hand in a movie, a use case of a word in a sentence, etc.
Items have different types. Depending on the type, the underlying implementation can be very specific, sometimes very fast and low-level, sometimes slower but high-level. Different hardware resources may be used.
Items have something in common: they can be seamlessly used as actors (modifying their surroundings) and as sensors (interpreting their surroundings). For one item type, there is one semantics ruling both bottom-up activity (when the environment's state activates the item) and top-down activity (when the item's activation influences the environment's state).
Items can only be active or inactive, there's no in-between (because we respect the computer's "working on it or not working on it" nature).
Items only have:
- an active/inactive state
- a type
- links to higher items
- links to lower items (environment)
The structure is not strictly vertical; there can be a lot of loops and cyclic links.
Links to lower items are dynamic (you catch a face shape in a cloud).
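(A rough sketch of the 'item' idea as data, just to make the list above concrete. The field names, the Python encoding, and the sense/act methods are my guesses at one possible implementation, not the intended one.)

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """An item has only a state, a type, and links up/down; loops between items are allowed."""
    item_type: str
    active: bool = False                                  # strictly active or inactive
    higher: list["Item"] = field(default_factory=list)    # links to higher items
    lower: list["Item"] = field(default_factory=list)     # links to lower items (environment)

    def sense(self) -> None:
        """Bottom-up: the environment's state activates this item."""
        self.active = any(child.active for child in self.lower)

    def act(self) -> None:
        """Top-down: this item's activation influences its environment."""
        if self.active:
            for child in self.lower:
                child.active = True

# The 'face in a cloud' case: lower links are dynamic and can be re-pointed at whatever is there.
cloud_blob = Item("shape", active=True)
face = Item("face")
face.lower.append(cloud_blob)   # dynamically catch the face shape in the cloud
face.sense()
print(face.active)  # True -- the cloud's shape activated the 'face' item
```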
And so on...