1. induce new rules that hold on existing input
2. deduce answers from existing input by given rules (see the sketch just after this list)
3. generate semi-random thoughts for chit-chat if you're building a chatbot
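To make point 2 concrete, here is a minimal forward-chaining sketch, assuming rules are simple premise/conclusion pairs over triples; the names and the rule format are purely illustrative, not any existing library:

```python
# Facts the bot already holds, plus rules of the form (premise, conclusion).
facts = {("socrates", "is", "human")}
rules = [(("?x", "is", "human"), ("?x", "is", "mortal"))]

def deduce(facts, rules):
    # Naive forward chaining: apply every rule to every fact
    # until no new fact can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pvar, prel, pobj), (cvar, crel, cobj) in rules:
            for (s, r, o) in list(derived):
                if r == prel and o == pobj:
                    new_fact = (s, crel, cobj)  # bind ?x to the matching subject
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(deduce(facts, rules))  # now includes ("socrates", "is", "mortal")
```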
I agree, though semi-randomness may not be necessary if the system is complex enough.
If you are talking about adding a memory to a chatbot, the Personality Forge does that. I don't recall offhand what programming language it uses, but something along those lines would work.
No, I'm not talking about adding a memory to a chatbot. I'm talking about another kind of bot, whose main activity is not to produce answers for the user but rather to produce "thoughts": answers to itself.
What if thoughts were OWL files?
Yes, it's the same: thoughts are directed graphs, JSON docs containing references to one another. We can put OWL in them.
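For illustration, here is what two such thoughts could look like, written as Python dicts standing in for JSON docs; the id/refs fields and the OWL-style triple are my own assumed schema, not anything standard:

```python
# Two hypothetical thoughts; the second references the first via "refs",
# which is what makes the set of thoughts a directed graph.
thought_a = {
    "id": "t1",
    "content": {"subject": "Paris", "predicate": "isCapitalOf", "object": "France"},
    "refs": [],
}
thought_b = {
    "id": "t2",
    "content": {"question": "capital of France?"},
    "refs": ["t1"],  # directed edge: t2 -> t1
}
```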
The problem with this is: how did the thoughts get in there? If it's just what the robot saw, then it's just regurgitation of the text going in (which is what you are trying to avoid), though that is what we do ourselves.
Thoughts are generated by sensing previous thoughts, and by sensing things from outside the program (like user input or a web browser).
An AIML file is basically like:
trigger => template
trigger => template
trigger => template
trigger => template
...etc
There are two difficult things with AIML: 1) you end up with huge files that are hard to maintain, and 2) you end up with answers "shadowing" other answers (two triggers match the same input, only one wins, and the other is silently hidden). Moreover, AIML cannot handle structured data, only flat strings.
Now, what if, instead of strings, we were using JSON pieces? We could then react to JSON patterns, and emit JSON pieces, which we would call "thoughts".
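As a minimal sketch of that idea, assuming a deliberately naive pattern language where every key/value of the trigger must appear verbatim in the incoming thought (the function and field names here are hypothetical):

```python
def matches(pattern, thought):
    # Naive JSON pattern matching: every key/value pair of the
    # pattern must be present verbatim in the thought.
    return all(thought.get(k) == v for k, v in pattern.items())

# The JSON analogue of an AIML trigger => template pair.
rule = {
    "trigger": {"type": "greeting"},
    "emit": {"type": "reply", "text": "Hello!"},
}

incoming = {"id": "t42", "type": "greeting", "from": "user"}
if matches(rule["trigger"], incoming):
    # The emitted thought keeps a reference to the thought that caused it.
    new_thought = dict(rule["emit"], refs=[incoming["id"]])
    print(new_thought)
```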
Next thing: AIML and RiveScript brains are monolithic systems, I mean: the entire system reacts as a whole. But we know that this is not how real brains work. Real brains have specialised areas, each of which is good at doing one thing.
So, instead of having one single system taking input from the user and sending output to the user, we could have "areas", each taking input from other areas and sending output to other areas. It would probably be easier to maintain (and no more shadowing, since areas are specialised), and more importantly, we could have dozens of authors working in parallel on one brain, without interference.
But now, the brain file wouldn't look like AIML anymore (trigger/template + trigger/template + trigger/template); it would be a little more complicated. We would need to handle the area network topology, thought transfer, thought transformation, filtering, and probably many more things that I haven't imagined yet. Hence my question: what commands would you need to make your bot?
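To make the topology question concrete, here is a minimal sketch of how areas could be wired together, assuming a simple push-based design; all the names here are hypothetical, not an existing API:

```python
class Area:
    """A specialised unit: receives thoughts from upstream areas,
    transforms them, and forwards the results downstream."""

    def __init__(self, name, transform):
        self.name = name
        self.transform = transform  # thought -> thought, or None to filter out
        self.outputs = []           # downstream areas

    def connect(self, other):
        self.outputs.append(other)  # topology: add a directed link

    def receive(self, thought):
        result = self.transform(thought)
        if result is None:          # filtering: drop the thought here
            return
        for area in self.outputs:   # transfer: push to downstream areas
            area.receive(result)

# Hypothetical three-area brain: parse user input, reason, speak.
parser = Area("parser", lambda t: {"type": "parsed", "words": t["text"].split()})
reasoner = Area("reason", lambda t: {"type": "reply", "text": " ".join(t["words"])})
speaker = Area("speaker", lambda t: print("bot:", t["text"]))  # returns None: end of chain

parser.connect(reasoner)
reasoner.connect(speaker)
parser.receive({"type": "input", "text": "hello bot"})  # prints: bot: hello bot
```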
I would want to keep a great number of recent thoughts in active memory for fast access. It depends on what you mean by "thoughts", though.
In our case, thoughts would indeed be kept in RAM; it needs to be fast. I'm calling a "thought" a piece of structured data like JSON, with possible references to other data.
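A minimal sketch of such an in-memory store, assuming a bounded buffer of recent thoughts indexed by id (the class and its methods are my own invention for illustration):

```python
from collections import OrderedDict

class ThoughtStore:
    """Keeps the N most recent thoughts in RAM, indexed by id,
    so areas can dereference them quickly."""

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.thoughts = OrderedDict()  # id -> thought, oldest first

    def add(self, thought):
        self.thoughts[thought["id"]] = thought
        if len(self.thoughts) > self.capacity:
            self.thoughts.popitem(last=False)  # evict the oldest thought

    def get(self, thought_id):
        return self.thoughts.get(thought_id)

    def follow_refs(self, thought):
        # Dereference a thought's links to other thoughts.
        return [self.get(r) for r in thought.get("refs", [])]

store = ThoughtStore()
store.add({"id": "t1", "text": "the sky is blue", "refs": []})
store.add({"id": "t2", "text": "why is the sky blue?", "refs": ["t1"]})
print(store.follow_refs(store.get("t2")))
```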