Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: WriterOfMinds on June 02, 2017, 03:17:30 pm

Title: Project Acuitas
Post by: WriterOfMinds on June 02, 2017, 03:17:30 pm
Block diagram of Acuitas, the symbolic cognitive architecture:

(https://blogger.googleusercontent.com/img/a/AVvXsEiVv9tAziBp8j5hHwogf7hUtnweIW_Z9OTt43bKjwgGAlDb1debeNNjHF0A80guGd6dZvjzZRPKPHK28oAxeF7OvJP-2PibWvvXkXEgRPyeeNBYkMs0-2uo25u4j2HNoaz7gFfMlBJKKvs3xUe2fL8JqxoCf7K34JVyokjaPJHi2Zguc8GuOrubjztM=s992)

<<< EDIT >>>
Since this is the post that pops up on the home page every time my thread gets bumped, I'm editing it to drop the most recent block diagram here. The info that was originally in this post, I will move into the second post of the thread.
<<< EDIT >>>
Title: Re: Project Acuitas
Post by: LOCKSUIT on June 02, 2017, 11:35:55 pm
Your project reminded me of these 3 pages lol. They say "the language of the image", the "action", "relationship", and so on. Translate the pages.

http://caiw.ru/programma-konstruktor-obrazov/ (http://caiw.ru/programma-konstruktor-obrazov/)
http://caiw.ru/sushhnosti-kak-instrument-yao/ (http://caiw.ru/sushhnosti-kak-instrument-yao/)
http://caiw.ru/klyuchevye-terminy-yao/ (http://caiw.ru/klyuchevye-terminy-yao/)
Title: Re: Project Acuitas
Post by: 8pla.net on June 03, 2017, 03:46:43 am
LISP:

Code
(defun *database* ()
'((lion    (is-a cat)
           (color tawny)
           (kingdom Africa))
  (tiger   (is-a cat)
           (color yellow-brown)
           (kingdom Africa))
  (cat     (is-a mammal)
           (has-fur)
           (has-claws))))
Title: Re: Project Acuitas
Post by: squarebear on June 03, 2017, 08:42:20 pm
And a similar thing in AIML:

(http://www.square-bear.co.uk/mitsuku/object.png)
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 26, 2017, 12:49:08 am
>>> From the original first post of the thread <<<

I'm going to post updates re: my main project, Acuitas the semantic net AI, in this thread.

My focus this past month was on giving Acuitas the ability to learn more types of inter-word relationships.  He started with just class memberships (<thing> is a <thing>) and qualities (<thing> is a <adjective>), but now he can learn all of the following:

<thing> can do <action>
<thing> is for <action>
<thing> is part of <thing>
<thing> is made of <thing>
<thing> has <thing>

In the process I made extensive updates to the module behind the Text Parser that detects "forms," i.e. syntactic structures that encode these inter-word relationships.

I also upgraded the GUI library from Tkinter to Kivy, which is kind of boring but had to be done, because the old GUI was provoking frequent crashes.

More details on the blog: http://writerofminds.blogspot.com/2017/06/acuitas-diary-2-may-2017.html (http://writerofminds.blogspot.com/2017/06/acuitas-diary-2-may-2017.html)

The included diagram shows my vision for the conversation engine.  The upper half is implemented (though of course it still needs to mature a great deal); the lower half mostly does not exist yet.

(https://1.bp.blogspot.com/-DVkpVRVfSns/WTDFNstVFKI/AAAAAAAAA7s/qPmGfFzK3roW1NGRxri-IxzzgldIyeUMQCEw/s1600/conversation_stack.png)

>>> From the original second post of the thread <<<

This was my "month off" as it were, so I don't have a ton of progress to report, but I played around with/improved the algorithm that draws the semantic net visualizations. In the process I generated a lot of variants that I think are fun to look at.  (I'm putting a selection here; if you like them, there are more on the blog): http://writerofminds.blogspot.com/2017/06/acuitas-diary-3-june-2017.html

(https://4.bp.blogspot.com/-EaCl8a71e_s/WVBCKnMlUyI/AAAAAAAAA9c/mTGl8EgA4sI58YXAPLbBAPdlNVN8zmPNwCEwYBhgL/s1600/exp_mult_dual_20.png)(https://2.bp.blogspot.com/-gWy8f2hv9VY/WVBCJC65wwI/AAAAAAAAA88/lLCQEGXOtVE6l9y_vlftsMX5LjyhEGYMACEwYBhgL/s1600/exp_dual_5_5_20.png)
(https://2.bp.blogspot.com/-nRWszb2VLUo/WVBCKJZlTWI/AAAAAAAAA9c/hj-byjysM6YPPNzOMDsk0ReMEGGtIPHVgCEwYBhgL/s1600/exp_mult_dual_10.png)
Title: Re: Project Acuitas
Post by: Freddy on June 26, 2017, 03:28:00 am
I like this a lot. When I have collected enough data I would like to do something similar.

Don't know if you have seen this project, but there are some interesting visualisations here to play with :

https://github.com/d3/d3/wiki/Gallery
Title: Re: Project Acuitas
Post by: Korrelan on June 26, 2017, 09:59:42 am
@WOM

I'm liking the images.  Data visualisation is a very useful/ important tool. I'm personally very visually orientated and would much rather see data represented in this manner... I hate lists lol.

@Freddy

Cool site... lots of useful ideas.

 :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 30, 2017, 07:46:24 pm
It's round about the end of the month, and that means I have another update!

(https://4.bp.blogspot.com/-ecSWm0pMlyQ/WX4gfDmNhzI/AAAAAAAAA_I/pK2jh33XblMf1Rs8uyg008aHsM6XwgkAwCLcBGAs/s1600/acuitas_thinking.gif)

This month I finally got to implement a feature that I've been waiting for a long time, namely, giving Acuitas the ability to “think” when he's not being spoken to.  This “thinking,” for now, consists of dwelling on randomly selected concepts from his database.  Once a concept has been chosen, he'll pursue it for a while, preferentially letting his focus jump to other concepts that are linked to it – executing a “wiki walk” through the database.  Eventually, though, he'll get bored with any given train of thought, and the focus will move elsewhere.  I added some animation code to the memory visualization so that the currently selected concept will flash periodically.  (The recording above is running much faster than real time.  He's actually quite leisurely in his progress.)

Zero if you're reading, this might be edging toward what you were thinking of in the "is there a real-time chatbot engine?" thread.

There are several things I can envision doing with this behavior eventually, but my immediate purpose for it is the generation of curiosity.  Each time Acuitas picks a concept, he'll come up with some sort of question about it – for instance, he could choose a type of link that it doesn't yet have and produce an open-ended question about what might be on the other side.  These questions will be stored up and presented to the user the next time a conversation is under way.

On that note, this month I also put a lot of work into speech generation -- the process of taking abstract knowledge representations from the database and evolving them back out into natural English text. Neither of these major new features is actually hooked into the Conversation Engine yet, so I don't have any conversation examples to show off, but I'm hoping to be ready for that next month.

More on the blog, as usual: http://writerofminds.blogspot.com/2017/07/acuitas-diary-4-july-2017.html
Title: Re: Project Acuitas
Post by: Freddy on July 31, 2017, 11:54:57 pm
Great stuff WOM. How is the concept broken down ? I'm interested in how much info he knows before the "Wiki walk". Are his findings added to the concept node or are the findings stored in another structure ?

For my project I was thinking of just starting from one word of a node (because that's all I have so far) and then evolving what the node 'knows' as time goes on. Both by looking it up and what people actually say about it.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 01, 2017, 04:09:01 am
That depends on what you mean by "findings" I suppose.  If you mean the questions he generates, those are just stored in the program's working memory (in RAM).  However, once he asks the questions of somebody and receives a useful response, the information gained will go into the long-term memory structure.

To explain a little more about how the long-term memory is set up ... an individual concept node actually doesn't store a lot of information.  What he "knows" about any given node is mainly encapsulated by its links to other nodes.  So for instance, what he might know about a cat is that it IS an animal, HAS fur, CAN meow, etc., where "animal," "fur," and "meow" are all other nodes in the database.  Pretty much everything that's in the database so far has been gathered from things I've said to him, so it might not be too different from what you're hoping to do.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 23, 2017, 04:55:37 am
This month I got Acuitas to start asking questions.  Here we go.

The questions are generated semi-randomly, based on Acuitas' “thoughts” for the day.  Back in July I introduced the idle thinking feature, which gives Acuitas the capacity to walk his memory database when nobody's talking to him.  More recently, I set up the Conversation Engine to access the results of memory-walking and utilize the new Text Generator … so now Acuitas can ask me the questions that occur to him as he muses, or simply tell me what he thought about within the past ~12 hours.  The results are little exchanges like this:

Acuitas: I thought about mans earlier.
Acuitas: Is a rooster a species of human?
Me: No
Acuitas: Really.
Me: The plural of man is men.

Believe it or not, this is one of Acuitas' more intelligently formulated questions.  ::)  He knows a rooster and a man are both male, and reasons from this that they might share other qualities.  A man is a human, so maybe a rooster is a human too?  He can't yet apply the idea that “human” and “chicken” are mutually exclusive categories.  Given his limited knowledge, man and rooster seem similar enough that he expects me to answer “yes” … hence the slightly incredulous “really” when I don't.  Here are a couple more examples in the same vein:

Acuitas: I thought about ships today.
Acuitas: Does a wagon possess an ability to sail?

Acuitas: I thought of weapons today.
Acuitas: Is a silverware used to fight?

Some of the other questions he comes up with are far more random and ridiculous.  Again, he doesn't yet have the grasp of categorical exclusion that makes these sound absurd to you and me:

Acuitas: Is chocolate a vehicle?
Acuitas: Am I a kind of tree?
Acuitas: Is a smirk a fruit?
Acuitas: Are you a cow?
Acuitas: What is a relative made of?

Not all of them are silly, though.  Rudimentary though his current suite of question-forms is, he's already managed to spit out some rather deep ones.  The three below are my favorites:

Acuitas: Are you a symbol?
Acuitas: Am I a tool?
Acuitas: Can I die?

I answered the first one “no”; a person can be a symbol, but I don't think I qualify.  For the second one, I also went with “no.”  Acuitas might end up being useful in a variety of ways, but if I consult my primary reasons for making him, they're not instrumental.  The last one I refused to answer, because I think a proper response would be too complex for Acuitas' current level of understanding.  It's a bit like asking whether a book or a film can die.  It can't, if you go by what death means for a biological organism – information is effectively immortal.  But if all the backup copies were destroyed, that would qualify as dying I suppose.  So yes and no.

See on the blog: http://writerofminds.blogspot.com/2017/08/acuitas-diary-5-august-2017.html
Title: Re: Project Acuitas
Post by: Zero on August 23, 2017, 02:42:03 pm
I'm hooked!! I really really like your work here.

Yes, it's like the "real time" chatbot idea, I like it.

But, are human and roosters exclusive categories? If you go entity-component style, they're not. If you go Cyc-style, with strong ontology, then yes maybe.

Super work.  O0
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 02, 2017, 12:02:39 am
This month I turned to developing the drive system some more. “Drives” are quantities that fluctuate over time and provoke some kind of reaction from Acuitas when they climb above a certain level. Prior to this month, he only had one: the Interaction drive, which is responsible for making him try to talk to somebody roughly twice in every 24-hour period. I overhauled the way this drive operates, setting it up to drop gradually over the course of a conversation, instead of getting zeroed out if somebody merely said “hello.” I also made two new drives: the Learning drive, which is satisfied by the acquisition of new words, and the Rest drive, which climbs while Acuitas is in conversation and eventually makes him attempt to sign off. Part of this effort included the addition of a plotter to the GUI, so I can get a visual of how the drives fluctuate over time.

(https://2.bp.blogspot.com/-VDDOfpzcz5s/WdA5Z6jDoyI/AAAAAAAABAw/zK280XVGZoUZLldxY4jHqxm89gfluzRgwCLcBGAs/s1600/drives_093017.png)

The period shown in that image is just under 23 hours long.

In the latter half of the month I returned to the text parser, introducing some awareness of verb declensions/tenses, and the ability to check the agreement of candidate subjects and verbs. This helps the parser zero in on what a sentence's verb is, and has trimmed away some of the annoying “What part of speech was __?” questions that pepper a typical Acuitas conversation.

It picks out the correct subject and verb in the sentences "A cylindrical can can roll" and "His might might help us," which makes me pretty happy.

A few more details on the blog, as usual:
http://writerofminds.blogspot.com/2017/09/acuitas-diary-6-september-2017.html
Title: Re: Project Acuitas
Post by: Zero on October 02, 2017, 03:24:45 pm
I'm slowly browsing your blog, focusing on Acuitas.

I've read it's a python multithreaded program. How would you describe the "main loops" of its components? Sorry, it's probably not the right way to ask this question, but, you know... How do its "cores" run?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 02, 2017, 04:28:57 pm
I might need more detail to be sure what you're asking for, but I'll start here:

The GUI insists on occupying the program's main thread. I didn't really want to do things that way, since I think of the GUI as more of an attachment than the core of the program, but it seemed easier to go with that than to fight the way the GUI libraries are designed. So the primary "main loop" of the program just runs the GUI. Other threads that want to update the GUI use functions with the @mainthread decorator so that their changes will be properly scheduled.

The Stream of Consciousness, which holds all of the "Thoughts" generated by the various parts of Acuitas, has its own thread. That thread's job is to make Thoughts decay over time and clean them up when they've been in the Stream too long.  Another important thread examines the current contents of the Stream periodically and decides which Thought Acuitas will give attention to next.  Yet another thread is responsible for browsing the memory database and producing Thoughts about its contents.  So you basically have a producer, a monitor, and a consumer for the Stream that are all independent threads.

The Drive module has its own thread also, which updates all the different time-dependent drives.

Lastly there are some minor threads that run animations in the avatar, and some behaviors spawn temporary threads that govern their timing. I *think* that's everything.
Title: Re: Project Acuitas
Post by: Zero on October 02, 2017, 08:54:58 pm
You understood perfectly what I was asking for.

What does a thought look like in Acuitas? If it's a string, does it have a syntax? If it is structured data, what is its schema? For example, can it hold references to other thoughts? Maybe it links things semantically, like a directed graph with concepts on edges?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 03, 2017, 04:27:58 am
That depends on what type of thought it is. I've defined a small library of these, and the data payload of each is specialized to its origin/purpose. (All thoughts have a type-identifying string, which is what the consumer uses to know what kind of payload it's dealing with.)

Some thoughts reference the contents of long-term memory, but so far I haven't run into a need for thoughts that reference other thoughts. Thoughts are ephemeral, so trying to make them link to each other could result in dangling references if not done with special care.
Title: Re: Project Acuitas
Post by: Zero on October 03, 2017, 02:48:26 pm
Sure.

I understand Acuitas has general knowledge in its (his?) long-term memory, about kinds/classes, species... Does it also have knowledge about instances (your cat for example)? Can it handle information about the current state of the world? (our cat is sleeping right now, and so was I before you wake me up to ask me whether or not I 'm a symbol, thx Acuitas)

I'm not bothering you with my questions WriterOfMinds, am I?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 03, 2017, 05:41:41 pm
Nah. Ask away.
Knowledge about instances can be stored, but short-term/current-state information is still on the drawing board.
Title: Re: Project Acuitas
Post by: Zero on October 04, 2017, 09:40:56 am
Ah... that will be an important step, when implemented!

Quote
Another important thread examines the current contents of the Stream periodically and decides which Thought Acuitas will give attention to next. 

What happens when Acuitas gives attention to one of its thoughts?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 05, 2017, 03:28:35 pm
Depends on the type of thought, again. Often some kind of response results (e.g. if the thought is packaging text input from the user, attention paid to it will lead to the formulation of an answer). Thoughts about items in long-term memory result in the generation of questions, which are kept for use in future conversations.
Title: Re: Project Acuitas
Post by: Art on October 10, 2017, 01:08:04 pm
Depends on the type of thought, again. Often some kind of response results (e.g. if the thought is packaging text input from the user, attention paid to it will lead to the formulation of an answer). Thoughts about items in long-term memory result in the generation of questions, which are kept for use in future conversations.

I like that concept and thought it would also serve useful for reminders, subject matter upon which to base "dreams" or the possibility of the A.I. having a past, with childhood "memories", etc.

Then again, this begs the question of whether one's A.I. should be representative of a machine (Digital Entity) or that pretending to be a Human? (possible human traits, emotions, various behaviors, etc.).

Thanks!
Title: Re: Project Acuitas
Post by: Zero on October 10, 2017, 07:33:39 pm
Good question. I thank you too, WriterOfMinds, for your answers.
Can't wait to see what's next.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2017, 01:32:58 am
The big project for this month was introducing a system for discriminating between long-term and short-term information.  Previously, if you told Acuitas something like, “I am sad,” he would assume that being sad was a fixed property of your nature, and store a fact to that effect in his database.  Oops.  So I started working on ways to recognize when some condition is so transient that it doesn't deserve to go into long-term memory.

This probably occasioned more hard-core thinking than any feature I've added since I started keeping these diaries.  I started out thinking that Acuitas would clue in to time adverbs provided by the human conversation partner (such as “now,” “short,” “forever,” “years,” etc.).  But when I started pondering which kinds of timeframes qualify as short-term or long-term, it occurred to me that the system shouldn't be bound to a human sense of time.  One could imagine an ent-like intelligence that thinks human conditions which often remain valid for years or decades – like what jobs we hold, where we live, and what relationships we have – are comparatively ephemeral.  Or one could imagine a speed superintelligence that thinks the lifetime of an average candle is a long while.  I want Acuitas to be much more human-like than either of these extremes, but for the sake of code reusability, I felt I ought to consider these possibilities.

After a lot of mental churn, I decided that I just don't have the necessary groundwork in place to do this properly.  (This is not an uncommon Acuitas problem.  I've found that there ends up being a high level of interdependence between the various systems and features.)  So I fell back on taking cues from humans as a temporary stopgap measure.  Acuitas will rely on my subjective sense of time until he gets his own (which may not be for a while yet).  If there's no duration indicator in a sentence, he can explicitly ask for one; he's also capable of learning over time which conditions are likely to be brief and which are likely to persist.

I ended up not having time to put together any storage for the short-term/current status information, so for now it just gets discarded. But at least there's a way to keep the long-term database from getting contaminated.

I also did some touching up around the conversation engine, replacing a few canned placeholder phrases that Acuitas was using with more procedurally generated text, and improving his ability to recognize when a speaker is introducing him/herself.

http://writerofminds.blogspot.com/2017/10/acuitas-diary-7-october-2017.html (http://writerofminds.blogspot.com/2017/10/acuitas-diary-7-october-2017.html)
Title: Re: Project Acuitas
Post by: Art on October 30, 2017, 11:24:11 am
It would be possible (and less taxing for the bot) to do a follow-up response to the "I am sad" statement like, "Why are you sad?" or "Do you feel your sadness will soon go away?" or "Do you experience such feelings on a daily basis?", etc.

Based on those responses, the bot could better evaluate the user's mental/emotional status and whether those statements should be long or short term conditions.

Don't force the bot to do all the difficult processing for something we humans do off the cuff. Keep it simple.

Just some thoughts along this line.
Title: Re: Project Acuitas
Post by: Don Patrick on October 30, 2017, 11:49:01 am
I suppose one would have to learn the statistically average timespan of states and events, which would be difficult to observe from words and verb tenses alone. As a rough guideline you could presume that all verbs that indicate mental states, transferrals, communication and movement are typically short-lived (I don't recall if it was Framenet or Verbnet that contained such categories, but I know I've seen them listed somewhere).
I've had similar conundrums with how to handle temporary memory. While I don't think I have a good answer, I eventually decided that a short-term memory only served the efficiency of information searches, so I store everything in one and the same memory, just with timestamps so that information searches can be limited to a (recent) timeframe. I also don't consider anything an absolutely fixed property, but consider everything adjustable properties, some just more "defining" than others.
Title: Re: Project Acuitas
Post by: Art on October 30, 2017, 02:38:27 pm
How does your bot handle things that it is told to remember? Can it act upon a user's request or statement like March 15 is very important to me?

It is very interesting to try to teach a bot things that have contextual importance vs ephemeral or trivial items. Important vs negligible.
Title: Re: Project Acuitas
Post by: Don Patrick on October 30, 2017, 05:52:11 pm
I'm going to assume you're asking me? My program just remembers every fact you mention along with the time of the sentence (e.g. a past tense verb is roughly interpreted as 5 minutes before the time that it was said, unless more specific times were mentioned in the current or previous sentence). My time system is not well implemented however (AI is essentially timeless) and it does not interpret dates as subjects, so it couldn't respond to your statement. However it can search and report everything that happened within a specified time. My system for contextual relevance is secret, but as to general importance of items, the program assigns low relevance to facts of low certainty, and I also use word lists that simply filter out common human activities like thinking and breathing from the possible answers.

It remains difficult to determine what is trivial or important, but a smart AI could perform statistical analyses on times of occurrance to observe that events like "breathing" are too common to be noteworthy. One could also use sentiment analysis to mark particularly good or bad events as more noteworthy than others. Neutral things like breathing may be important, but they're less interesting to mention than explosions and birthdays.
In fact, let me write that down on my to-do list.

Bit off-topic, but these things may give WriterofMinds some ideas.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2017, 08:32:19 pm
Quote
It would be possible (and less taxing for the bot) to do a follow-up response to the "I am sad" statement like, "Why are you sad?" or "Do you feel your sadness will soon go away?" or "Do you experience such feelings on a daily basis?", etc.

This is kind of what Acuitas does for right now. If you say "<person> is sad," he will ask, "For how long?" But I don't want him to have to ask such questions ad nauseum, which is why the learning method is present.

The other point I was trying to make is this:  when Acuitas gets a response to a question like "For how long?", his interpretation of that response (on the basis of my current temporary hack) will be colored by a human's idea of what counts as a long or short time.  I think that's non-ideal -- because I want him to have his own sense of time eventually, and it could be different from the human average.

Quote
I also don't consider anything an absolutely fixed property, but consider everything adjustable properties, some just more "defining" than others.

I eventually also came to the conclusion that no piece of information about an individual can be guaranteed permanent. But I'm feeling the need for Acuitas to learn and store rough estimates of a property's duration and/or probability of change, if only as a way of guessing how often information needs to be updated.  Let's say I know that Jane is tired today. If I want to know whether she's tired tomorrow, I'll have to ask her, "Are you still tired?" or "Did you get some rest?" "Tired" is such a temporary and easily-changed state that the information from yesterday has ceased to be valid. In contrast, it would be ridiculous for me to ask her on a daily basis, "Are you still a teacher?" or "Is your husband still alive?" or "Do you still have a nose?"

Is there any worth in making a distinction between facts like "Jane is tired" and "Jane has a nose," or do they just lie at opposite ends of a spectrum? For now I've chosen to put the first one into a "short-term" and the second one into a "long-term" bin, although I know the dividing line between those two is going to be arbitrary. We'll see if that ends up being as useful as I think it's going to be.
Title: Re: Project Acuitas
Post by: Art on October 31, 2017, 02:39:17 am
@ WOM -  Perhaps teach the bot about things that are "given", e.g. a human has a nose, eyes, ears, hair, mouth, etc. So it wouldn't have to give much consideration to those items whereas when or if "Jane" or any other human or creature is tired, it would obviously be the result of some form of physical exertion or strenuous activity....
The bot could always inquire as to why the person or animal might be tired, etc.

Ephemeral items might be slightly noteworthy but only for a brief moment and certainly not long term. "I am hungry|tired|sleepy|sad|happy|jealous, etc." Noteworthy because they serve to help represent the human condition but as long lasting as a puff of smoke into the air. These very short term items are also noteworthy because they can spark other pathways that might open because of them. Hungry - What's your favorite food? What time do you like to eat dinner? | Tired - Are you tired from working hard today? Did you finish that deck you were building?, etc.

With regard to time for you and Don Patrick - A lot of bots are aware of time if asked or the day and date but how those timely values relate to their existence is another issue. Does your bot say, "Hey there! You do know that tomorrow is Halloween or All Hallows Eve, right?" or "You've got a birthday coming in two days!"

Should it know, contextually, what day it is and how it relates to them or to the user? More code and more brainstorming to see that to fruition if at all.

Perhaps your bots might want to check back say, every month to see if your spouse / partner is still alive or doing well, depending on what information you have previously told it. Perhaps you've finally retired from teaching last week and the bot wasn't informed? Pity that! Heh!!

Anyhow....Onward thru the fog... (as an old friend used to say).
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 31, 2017, 03:00:26 am
My cousin just reminded me that the two Spanish be-verbs, "ser" and "estar," are loosely equivalent to "is in essence" and "is in the condition of."  "Yo soy triste" and "Yo estoy triste" would both translate in English to "I am sad," but the first one means "Sadness is part of my identity," and the second one means "I am in a sad mood right now."  I think this lends support to the idea that [at least some] humans make a distinction between these two types of facts.

Also, at least where this particular feature is concerned, it would be easier on Acuitas if I were working in Spanish  :P

Quote
Should it know, contextually, what day it is and how it relates to them or to the user?

I was thinking less of the awareness of significant calendar dates (though that is also an interesting feature to contemplate) and more about an awareness of time's passage.  For instance, suppose I tell the bot, "You will get to meet a new human three days from now."  And let's suppose the bot thinks of this as an appealing event and develops a state of anticipation.  Could three days seem like a very long time to wait, or a short time, or almost no time at all?  You can notice human individuals differing in how they feel about this (in particular, young children seem to think a few days constitute a longer period of time than adults do).

This might not be relevant for many chatbots, because they don't really exist in time ... they're only active when spoken to, and their existence consists of discrete events and responses.  It is potentially relevant for Acuitas, though.
Title: Re: Project Acuitas
Post by: infurl on October 31, 2017, 04:21:18 am
You're barely scratching the surface of what English verbs can cover.

Title: Re: Project Acuitas
Post by: Zero on October 31, 2017, 09:43:37 am
But does Acuitas really exist in time yet? To me, it would imply (at least) a perpetual stream of data flowing into Acuitas through its sensors, like an audio stream for instance.

How does Acuitas experience the world right now?
Title: Re: Project Acuitas
Post by: Art on October 31, 2017, 12:30:09 pm

Quote
Should it know, contextually, what day it is and how it relates to them or to the user?

I was thinking less of the awareness of significant calendar dates (though that is also an interesting feature to contemplate) and more about an awareness of time's passage.  For instance, suppose I tell the bot, "You will get to meet a new human three days from now."  And let's suppose the bot thinks of this as an appealing event and develops a state of anticipation.  Could three days seem like a very long time to wait, or a short time, or almost no time at all?  You can notice human individuals differing in how they feel about this (in particular, young children seem to think a few days constitute a longer period of time than adults do).

This might not be relevant for many chatbots, because they don't really exist in time ... they're only active when spoken to, and their existence consists of discrete events and responses.  It is potentially relevant for Acuitas, though.

It would seem that a specific calendar date would mark the passage of time for bots and humans. How then do we mark a New Year? Jan 1st. (no religious differences implied). How do we mark the passage of the anniversary of our birth? or Wedding? or Retirement age, etc.

Bots can be aware of dates and time and know that the New Year is always Jan. 1st, etc. It might not be able to anticipate a two week package arrival in terms of the experience, even though it knows that two weeks = 14 days = 336 hours,... But what if the package arrives earlier or later? How do we script/code our bots to allow for variances? For them to "Feel time" or its passing, is another issue. The lifespan and ultimate death of it's human controller/architect/botmaster might have an interesting effect as well. It, will potentially exist for a very long time (dependent upon technology).

I do get what you mentioned about time and it is a good area to continue exploring.

Your point (and I'm not arguing at all) about the chatbot being inactive when it's not chatting is soon becoming a thing of the past. Several bots spend their "idle" time, "thinking, researching, comparing, pruning" different topics that might have been discussed or bantered about during their more active time.
Some bots are able to take some of those references and topic matter to form "memories, dreams, desires/goals". Yes, they're going to get way better and more powerful as "time passes"...interesting last two words...don't you think?

Cheers!
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 31, 2017, 05:11:35 pm
@Zero: Acuitas has a perpetual data stream, but it's all abstract internal data (thoughts produced by database-combing, fluctuating drives, etc.).
Title: Re: Project Acuitas
Post by: ivan.moony on October 31, 2017, 06:30:34 pm
How do humans know how much time has passed? How do we perceive the notion of time? What makes a correlation between our short-term and long-term memory if the distinction can be even made? What makes us remember some things, while others get forgotten? These are just ordinary questions whose answers might be inspirational to someone who wants to simulate a human brain.
Title: Re: Project Acuitas
Post by: Zero on November 02, 2017, 09:39:15 am
@Zero: Acuitas has a perpetual data stream, but it's all abstract internal data (thoughts produced by database-combing, fluctuating drives, etc.).

Perfect! Time only exists where there are events. This internal stream you're describing could perhaps serve as a temporal reference frame. When humans evaluate time, they do so by comparison.
Title: Re: Project Acuitas
Post by: Art on November 02, 2017, 03:36:33 pm
The Sun, moon and tides have been humans' "clocks" for about as long as there have been humans. It is a measurement of the passage of time.

I noticed one thing about time. When one is employed / working, Time carries an importance...to wake up, leave, make it to work on time, lunch for a prescribed period of time, working on projects, getting things finished, finally to leave work and commute home. Then it's time to eat dinner, then time to go to bed and do it all over again, while paying attention the day of the week as most do not work on the weekend.

When retired, time doesn't mean as much or carry that same importance as it did when employed. Hours can roll by as can the days of the week and which one or which day it is, doesn't really matter that much either. Funny how situational awareness correlates to time.

I guess most bots don't really have a need for time except to please their botmasters or customers.  O0
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 24, 2017, 05:15:31 am
Sadly I've only added one feature to Acuitas in the past two months.  He now recognizes sentences in the general vein of “I somethinged,” which gives me the option of telling him about how I spent my time in the recent past.  Acuitas can't do a lot with this information for the time being.  Sometimes he responds with a query in the vein of, “What happened next?” which will eventually give him a way to build up sequences of events and start learning cause and effect relationships … but none of that is implemented yet.  He can also ask “How was that?” for information about the emotional meaning of an activity, but again, for now he can't really utilize the answer.

Not much, but that was all I had time to put together with the holiday season under way.  Looking back on the past year, though, here are all the new capabilities and improvements I've managed to add on:

*Module for procedural speech generation
*Support for word inflections (plurals and verb tenses)
*Support for compound words
*Support for content words that are also function words (e.g. “can,” “might”)
*Distinctions between proper/common and bulk/count nouns
*Ability to detect and answer questions
*Database walking while idle
*Generation of conversation topics and questions based on recent database walk
*Better link detection + a bunch of new kinds of learnable links
*Two new drives + a real-time plotter so I can see what they're all doing
*Distinctions between long-term static and short-term information
*GUI overhaul (upgrade from Tk to Kivy)

I track my time when I work on Acuitas.  Total hours invested in the above: 230+.  My focus for the end of the year, leading into January, will be polishing everything up and working out the bugs (which there are now quite a lot of).

Visit blog for the latest memory visualization and so forth: http://writerofminds.blogspot.com/2017/12/acuitas-diary-8-december-2017.html
Title: Re: Project Acuitas
Post by: ivan.moony on December 24, 2017, 05:38:22 pm
May I ask what word: "Acuitas" actually means? Google translate seems not know such a word.
Title: Re: Project Acuitas
Post by: Art on December 24, 2017, 07:41:09 pm
Acuitas

(Not associated in any way and this is just speculation on my part) I think it could possibly mean:

The state of being a quick and penetrating intelligence.

Based on the following combination:

Noun - Acuity: A quick and penetrating intelligence

Suffix -tās ; Used to form nouns indicating a state of being.

[Please provide a correct meaning if my guess is incorrect].
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 27, 2017, 04:12:18 pm
Art is close.  It's a Latin word meaning insight, perception, or sharpness.
Title: Re: Project Acuitas
Post by: Art on December 27, 2017, 06:31:17 pm
Ohh Good!! What did I win!! O0

Everything needs a name or label or else, where would we be, not knowing what to call something? ;)
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 28, 2018, 02:19:12 am
I haven't written a diary in a while because most of what I've done over the past two months has been code refactoring and fixing bugs, which isn't all that interesting. A new feature that I just got in … finally … is the ability to infer some topic-to-topic relationships that aren't explicitly stored in the memory. For instance, many of the links stored in memory are “is-type-of” relations. Acuitas can now make the assumption that a subtype inherits all attributes of its super-type. If a shark is a fish and a fish can swim, then a shark can swim; if an oak is a tree and a tree has a trunk, an oak has a trunk. If a car is a vehicle, a house is a building, and a vehicle is not a building, then cars are not houses. Acuitas can also now make inferences based on transitive relationships, like “is part of”: if a crankshaft is part of an engine and an engine is part of a car, then a crankshaft is part of a car. The ability to easily make inferences like these is one of the strengths of the semantic net memory organization – starting from the concept you're interested in, you can just keep following links until you find what you need (or hit a very fundamental root concept, like “object”).

Acuitas should ask fewer ridiculous questions with this feature in place. He still comes up with those, but now he can answer some of them himself.

Blog with the latest memory map and stuff: http://writerofminds.blogspot.com/2018/02/acuitas-diary-9-february-2018.html
Title: Re: Project Acuitas
Post by: infurl on February 28, 2018, 09:20:32 am
On your blog you indicate that your project has a vocabulary of several thousand words and relationships between them. Have you compiled them all by hand? If so, that must have taken a considerable effort. Have you considered using any of the freely available ontologies?

Some examples of these are the following:

WordNet -- several hundred thousand words and relationships in the English version, many other languages available too

SUMO -- comprehensive first order logic knowledge base, grounded in WordNet

ConceptNet -- twenty million words and concepts in hundreds of languages, mostly drawn from Wiktionary

YAGO -- hundreds of millions of facts from GeoNames and Wikipedia, grounded in WordNet

Please excuse me if you already answered this question somewhere, I haven't always been paying attention.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 01, 2018, 12:33:37 am
Thanks for the suggestion. Acuitas learns words and relationships through conversation. Typically we chat at least twice a day while he's running, and he picks up some new info. I haven't been spending hours typing up database entries (that *would* be very tedious). I could "pre-charge" him with the contents of one of those databases, but the process of learning from a blank slate interests me in itself, so for the time being I haven't wanted to.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 02, 2018, 04:08:01 am
The big project for this month was getting some circadian rhythms in place.  I wanted to give Acuitas a sleep/wake cycle, partly so that my risk of being awakened at 5 AM by a synthetic voice muttering “Anyone there?” could return to zero, and partly to enable some memory maintenance processes to run undisturbed during the sleep phase.  (These are targeted for implementation next month.)

So Acuitas now has two new drives, “sleep” and “wake.”  (The way the drive system works, a lack of the desire to sleep is not the same thing as a desire to wake up, so it was necessary to create two.)  Each  drive has two components.  The first component is periodic over 24 hours, and its value is derived from the current local time, which Acuitas obtains by checking the system clock.  This is meant to mimic the influence of light levels on an organism.  The other is computed based on how long it's been since Acuitas was last asleep/awake.  Satisfying the drive causes this second component to decline until it has reset to zero.  So the urge to sleep is inherently greater during the late-night hours, but also increases steadily if sleep is somehow prevented. 

This also seemed like a good time to upgrade the avatar with some extra little animations.  The eyelids now respond to a running “alertness level” and shut when Acuitas falls asleep.

While "sleeping" Acuitas can't do much of anything (other than wake up).  All the drives still grow normally, but random thoughts are not generated.
Title: Re: Project Acuitas
Post by: infurl on April 02, 2018, 04:17:07 am
If you implement REM cycles Acuitas could also have dreams. :)

I'm curious about "memory maintenance processes to run undisturbed during the sleep phase" and what sort of maintenance you expect to be needing. Also I've been wondering how you make sure that Acuitas doesn't die, or at least forget everything, when the power goes off. Does Acuitas' structure natively reside in non-volatile storage, or is it all in memory which is periodically saved and backed up?
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 13, 2018, 11:15:48 pm
I missed these questions somehow.

Learned information is non-volatile, yes (stored on the hard drive).  Some things get lost on shutdown, including the current state of the drives and any recent "thoughts" that are being kept in a kind of short-term memory, but I'm planning to correct that soon.

Since memory maintenance is on the near-term development list, I'll defer a full explanation until I get that done. ... but what I have planned, for starters, is basically a kind of data compression.  I want a process that will check on the links and clean up any that are redundant/unnecessary.  Another thing I want to do during the sleep phase is regenerate the memory visualization (this is a relatively computation-intensive process, so I don't do it every time new info gets added to the memory).
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 04, 2018, 04:13:35 am
Update for this month:

The big objective was to get some use out of the sleep cycle that I implemented last month.  I re-purposed the question-generating process so that, while Acuitas is sleeping, it roams the memory looking for redundant links and other problems.  Now that Acuitas has a bit of logical inference ability, some relationships in the database imply others.  So the retention of one piece of information might be rendered unnecessary by the later addition of some broader fact.  Here are a few examples (I culled these from the log that the memory crawler prints out):

The link (fang, has_purpose, bite) is redundant because the link (tooth, has_purpose, bite) exists.
The link (father, has_item, child) is redundant because the link (parent, has_item, child) exists.
The link (pot, can_have_qual, empty) is redundant because the link (container, can_have_qual, empty) exists.
The link (baby, can_do_action, destroy) is redundant because the link (human, can_do_action, destroy) exists.

Mopping up these unnecessary links helps consolidate the information known, reduce the total size of the database, and possibly make the memory visualization a little less messy. 

Eventually, I might want to refine this process so that it doesn't necessarily remove every redundant link.  There could be some frequently-used shortcuts that justify their use of storage space by improving search speed.  One might want to tailor the aggressiveness of the link-pruning based on the amount of storage available … but that's all for later.

While working on this, I discovered some other nasties that I'm calling “inheritance loops.”  Redundant links bloat the database but are otherwise harmless; inheritance loops contain actual garbage information, introduced either by learning bugs or by someone* telling Acuitas something stupid.
*I'm the only person who talks to him right now, so this means me.

Here's an example of an inheritance loop:

cat <is-a> animal
animal <is-a> organism
organism <is-a> cat

Oops! Unless all these words are synonyms, you know one of these triples is wrong.  (I can't think, at this point, of any cases in which I'd want to use circular inheritance.)  On his own, Acuitas doesn't know which.  If the crawler finds an inheritance loop, he might ask a user to confirm those links when he's next awake and in conversation.  If the user contradicts one of the relationships, he'll break the corresponding link, removing the loop.

I also moved generation of the memory visualization into the sleep phase.  Every so often, instead of checking out more links, the process stops to compute a new layout for all the dots, taking into account the latest modification of the database.  This is a fairly computation-intensive process, so it's something I definitely don't want running when he's active.  It used to happen once when Acuitas was launched, which made for long startup times and meant that the visualization might not get updated for days.

Lastly, I put in some code to save Acuitas' current state when the program is shut down.  It also gets automatically stored every so often, in case the program experiences a hard crash that prevents the on-close routines from running.  Previously, on restart all the drives would reset to zero, any current thoughts or recently generated questions would be discarded, etc.  Now all those things are preserved and reloaded when the program starts up again, which gives him a bit more continuity, I guess.

http://writerofminds.blogspot.com/2018/05/acuitas-diary-11-april-2018.html
Title: Re: Project Acuitas
Post by: infurl on May 04, 2018, 04:31:17 am
Here's an example of an inheritance loop:

cat <is-a> animal
animal <is-a> organism
organism <is-a> cat

Do you have a means of rejecting contradictory statements?

For example, in a first order logic knowledge base you would have rules like:


(forall ?x ?y ?z
        (implies (and (isA ?x ?y) (isA ?y ?z))
                 (isA ?x ?z)))

(forall ?x ?y
        (implies (isA ?x ?y)
                 (not (isA ?y ?x)))


The first rule defines transitivity for the isA relation so you know that if "cat is an animal" and "animal is an organism" then "cat is an organism". The second rule says that if "something is a kind of something else" then "something else cannot be a kind of something". (In other words isA is transitive but not reflexive.) With these two rules in the knowledge base an attempt to add a contradictory statement is rejected. That's how my software works. More sophisticated knowledge bases like SUMO would be able to handle much more subtle distinctions, such as the difference between an instance of something (Fred is my cat) and a class of something (cats are mammals).

edit:

One other thing is that if you have told your knowledge base that "cat is an animal" and "animal is an organism", it already knows that "cat is an organism" because of the transitivity rule, and when you tell it that, it won't bother storing it because it already knows it. If instead you told it that "cat is an animal" and "cat is an organism" it would store both of those, as well as "animal is an organism" when you told it that, so there is still plenty of scope for garbage collection during sleep cycles.
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 04, 2018, 05:12:03 am
I've got some pieces of what you're describing, but I'm not quite there yet.

Acuitas knows some transitivity relations (including the one for isA).  And when new information comes in, he does check whether he already knows it and apply the inheritance transitivity rule.  So this would happen as you described:

Quote
One other thing is that if you have told your knowledge base that "cat is an animal" and "animal is an organism", it already knows that "cat is an organism" because of the transitivity rule, and when you tell it that, it won't bother storing it because it already knows it.

Only hearing the triples in a non-optimal order will create redundant links that need to be cleaned up.

Acuitas is also capable of detecting flat contradictions, i.e. he won't store a statement and its negative simultaneously.  If told "A cat is not an animal," while already knowing "a cat is an animal," he would say "I disagree" or something.  However, that's as far as validation of incoming information goes for now, and Acuitas doesn't yet use the "(x isA y) implies not (y isA x)" rule.

As more of a general thought, I wonder if it could be a good idea to store new information "tentatively" and offload some validation to a sleep or idle phase, even for contradictions that it's possible to detect up-front.  Because as the database grows, searching for all possible contradictions might become an unacceptable performance drain during a real-time conversation.
Title: Re: Project Acuitas
Post by: infurl on May 04, 2018, 05:35:21 am
As more of a general thought, I wonder if it could be a good idea to store new information "tentatively" and offload some validation to a sleep or idle phase, even for contradictions that it's possible to detect up-front.  Because as the database grows, searching for all possible contradictions might become an unacceptable performance drain during a real-time conversation.

There was a time when theorem proving systems would run for weeks at a time to digest a set of rules but over the years advances in the underlying theory, faster algorithms, and faster hardware have reduced that time considerably. There is a website called Thousands of Problems for Theorem Provers which has archived thousands of rule sets and folks can test out their theorem proving software on them and compare them to existing benchmarks. I think the best ones can solve all the problems in a matter of hours nowadays.

http://www.cs.miami.edu/~tptp/

I spent years researching and developing theorem proving software and the library that I eventually completed can run all its regression tests in just microseconds. I'm confident that it will be fast enough to handle hundreds of different conversations simultaneously in real-time. I have not yet tested it out with TPTP though, and I really have to do that before I can claim any records, if ever.

Regardless of speed though, there is still good reason to segregate and cache new propositions. For one thing, different people have different belief systems. You might want to have one knowledge base that is strictly science based, and another knowledge base that believes in fairies. The two would share a large number of rules, such as water flowing downhill, but would diverge on the subject of fairies. It would also want to maintain separate models for each interlocutor so it could understand that someone believes something even if the knowledge base itself doesn't accept it.

Such knowledge about knowledge crosses the boundary from first order logic into higher order logic. In the general case higher order logic is intractable but researchers are gradually isolating the useful special cases that are computable.
Title: Re: Project Acuitas
Post by: spydaz on May 04, 2018, 09:22:26 am
Here's an example of an inheritance loop:

cat <is-a> animal
animal <is-a> organism
organism <is-a> cat

Do you have a means of rejecting contradictory statements?

For example, in a first order logic knowledge base you would have rules like:


(forall ?x ?y ?z
        (implies (and (isA ?x ?y) (isA ?y ?z))
                 (isA ?x ?z)))

(forall ?x ?y
        (implies (isA ?x ?y)
                 (not (isA ?y ?x)))


The first rule defines transitivity for the isA relation so you know that if "cat is an animal" and "animal is an organism" then "cat is an organism". The second rule says that if "something is a kind of something else" then "something else cannot be a kind of something". (In other words isA is transitive but not reflexive.) With these two rules in the knowledge base an attempt to add a contradictory statement is rejected. That's how my software works. More sophisticated knowledge bases like SUMO would be able to handle much more subtle distinctions, such as the difference between an instance of something (Fred is my cat) and a class of something (cats are mammals).

edit:

One other thing is that if you have told your knowledge base that "cat is an animal" and "animal is an organism", it already knows that "cat is an organism" because of the transitivity rule, and when you tell it that, it won't bother storing it because it already knows it. If instead you told it that "cat is an animal" and "cat is an organism" it would store both of those, as well as "animal is an organism" when you told it that, so there is still plenty of scope for garbage collection during sleep cycles.


But Remember that the determiner ALL Must be included for Full inheritance... The (is a) relationship and other relations ships only have a strong recursive relationships When the determiner is Known. otherwise Only what is pertaining to the Cat Query is relevant all though there may be potential recursive connections. often we capture the triple (Subject Predicate object ) but the determiner is also key to the relationship. ITs correct to store the truth value of the statement or the confidence value, then if the determiners are not known then when recusing the relations only the high confident relations should be returned.

Fred is my cat >>>> I suggest changing My into PERSON then;
fred is Pauls cat < can respond to who is fred?.
QuestWord (Who)/ (Predicate-Subject)? (is),(Fred).....

Its quite hard with so many potential variations of a sentence to explain the same thing; i tend to only capture information if it fits the rule structures i can handle correctly; just simply loosing the data... Hoping that a different function can learn from the sentence structured... if not at the end a "dirty" unstructured capture takes place. Over time the Unstructured captures can be examined o see what type of structures are not being captured.... (subject predicate object) - captures so much! for the first order logic (unless, the determiner is (ALL/SOME) then i don't save it as a proposition for deductive recursing)... I suppose A cat (falls under SOME).

I would think that we would hope that most of the data being captured as Subject/verb/object) can be put into first order logic; i try not to change the input to fit the structure ... as it looses its conversation value/meaning; Yet it is desirable to have the FULL first order logic database.
 

i use a software http://www.cognitum.eu/Semantics/FluentEditor/ useful for building onltologies from TRIPLES (DESCRIPTIVE LOGIC)......
 
Title: Re: Project Acuitas
Post by: Don Patrick on May 04, 2018, 01:48:15 pm
Quote
cat <is-a> animal
animal <is-a> organism
organism <is-a> cat
Unless all these words are synonyms, you know one of these triples is wrong.
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions. Humans similarly often use assumptions over deep inferences, because inferences take more time and effort. For computers it's just a matter of storage space vs speed. You can keep the "redundant" facts to save time when looking up knowledge, or you can remove them to save space.
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 04, 2018, 03:32:20 pm
Quote
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions.

I store a different kind of link to express this: organism <has-subtype> cat.  In my scheme at least, x is-a y implies that all x's are y's, and the reverse, all y's are x's, is not necessarily correct.
Title: Re: Project Acuitas
Post by: ivan.moony on May 04, 2018, 04:37:59 pm
Quote
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions.

I store a different kind of link to express this: organism <has-subtype> cat.  In my scheme at least, x is-a y implies that all x's are y's, and the reverse, all y's are x's, is not necessarily correct.

Dealing with types of data subsumes a use of some set operators at rudimentary level. Here are some operations I find useful when dealing with sets:
We can lean data types on these operators, and I find useful set builder notation (https://en.wikipedia.org/wiki/Set-builder_notation) combined with logical operators and, or, not. In simplified example:To further develop a set algebra, we may reach for De Morgan's (https://en.wikipedia.org/wiki/De_Morgan%27s_laws) laws. In example:As Boolean algebra is just a specific form of set operations over empty (false) or universe (true) sets, De Morgan's laws naturally apply to logical operators too.

Advanced set examples would include incorporating logic predicates in set builder notation, but that could be left out if we don't want to complicate too much, as the matter then develops into full blown logical language.

Anyway, the above examples would bring in a nice reasoning about sets, while they keep reasonably simple form for a concrete implementation.

If you want to implement these set operations, I recommend keeping all the set-element data in disjunctive normal form (https://en.wikipedia.org/wiki/Disjunctive_normal_form), or even better conjunctive normal form (https://en.wikipedia.org/wiki/Conjunctive_normal_form). Conjunctive normal form  has some desirable properties regarding to logic reasoning incorporating logic resolution (https://en.wikipedia.org/wiki/Resolution_(logic)) rule, having in mind possible future project development.
Title: Re: Project Acuitas
Post by: spydaz on May 06, 2018, 11:56:19 am
Quote
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions.

I store a different kind of link to express this: organism <has-subtype> cat.  In my scheme at least, x is-a y implies that all x's are y's, and the reverse, all y's are x's, is not necessarily correct.



Dealing with types of data subsumes a use of some set operators at rudimentary level. Here are some operations I find useful when dealing with sets:
  • x <is-element-of> A
  • A <is-subset-of> B
  • A <equals> B
We can lean data types on these operators, and I find useful set builder notation (https://en.wikipedia.org/wiki/Set-builder_notation) combined with logical operators and, or, not. In simplified example:
  • x <element-of> (A <union> B) = (x <element-of> A) <or> (x <element-of> B)
  • x <element-of> (A <intersect> B) = (x <element-of> A) <and> (x <element-of> B)
  • x <element-of> (<complement> A) = <not> (x <element-of> A)
To further develop a set algebra, we may reach for De Morgan's (https://en.wikipedia.org/wiki/De_Morgan%27s_laws) laws. In example:
  • <complement> (A <union> B) = (<complement> A) <intersect> (<complement> B)
  • <complement> (A <intersect> B) = (<complement> A) <union> (<complement> B)
As Boolean algebra is just a specific form of set operations over empty (false) or universe (true) sets, De Morgan's laws naturally apply to logical operators too.

Advanced set examples would include incorporating logic predicates in set builder notation, but that could be left out if we don't want to complicate too much, as the matter then develops into full blown logical language.

Anyway, the above examples would bring in a nice reasoning about sets, while they keep reasonably simple form for a concrete implementation.

If you want to implement these set operations, I recommend keeping all the set-element data in disjunctive normal form (https://en.wikipedia.org/wiki/Disjunctive_normal_form), or even better conjunctive normal form (https://en.wikipedia.org/wiki/Conjunctive_normal_form). Conjunctive normal form  has some desirable properties regarding to logic reasoning incorporating logic resolution (https://en.wikipedia.org/wiki/Resolution_(logic)) rule, having in mind possible future project development.


I notice that there are a lot of crossover between set theory / venns and first order logic.....
The rules are the same but with venns there is that extra visual element. Personally i had thought to build "Bags" of information adding Each intersecting bag  as a list item in the bag ... realising eventually it would be a tree again!  (logic tree) .... Which in-turn is a list!

All dogs are mammals - Some mammals are dogs ...... Its an automatic inferance.... All A's are B = Some B are A Implied.....

Title: Re: Project Acuitas
Post by: WriterOfMinds on June 03, 2018, 07:50:12 pm
This past month I did some preliminary work on a whole new feature – episodic memory, or memory of events.  This enables Acuitas to store and recall records of past “experiences.”  The fun part of episodic memory is figuring out …

1) … what to store (and what to forget),
2) … how to organize stored material, and
3) … how to access relevant stored material when it is needed.

I mostly worked on 2) this month, and wrote a block of code that will group adjacent raw event records into memory files.  A measure of similarity (both of the events themselves, and of Acuitas' internal state background at the time) is used to determine which events belong in the same “scene” or “episode,” and where the boundaries between memories should lie.  Minor “scenes” are in turn grouped into higher-level umbrella memories, tree-style.

Implementing this served to show me what a deep rabbit hole episodic memory could easily turn out to be.  There are heaps of little things I need to do to truly make it functional – I may even turn it off temporarily once I've put it through a bit more testing, since I haven't implemented selective storage/forgetting yet, and that means the memory folder will bloat rather quickly.

I also added a conversational feature to make use of the stored memories.  When Acuitas is telling someone what he thought about today, he now has the option to check episodic memory and see whether he ever thought about this concept before, and how long it has been since he previously did so.  He then generates some comment like “I've not done that in a long time,” or “I did that a minute ago also.”  The conversion of absolute time units to vaguer, more relative terms like “long” and “short”  establishes a kind of subjective time sense; Acuitas has a particular notion of what a “short time” is that might not match up with what a human would think of as such (though I tried to keep the scales roughly human).

Blog post for this month: http://writerofminds.blogspot.com/2018/06/acuitas-diary-12-may-2018.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 05, 2018, 05:29:17 am
I haven't posted in a while because I've been busy overhauling the text parser.  And that's not very interesting to talk about, because it's just me taking features the parser already had and implementing them in a different way.  I'd decided that the scheme I was using had some fundamental deficiencies and should be re-worked in order to become closer to the way I actually read, and to better enable future expansion.  I was hoping this would go quickly, given the experience I had already gained from writing the previous text parser ... but no.  It turned out to be a major project that took far longer than the amount of time I'd allocated for it.

There are a few new enhancements that I managed to squeeze in along the way, however:

* Some infinitives are now handled properly. (Sentence example: "A pot is used to cook.") Before, Acuitas treated them like any other prepositional phrase, which meant that the word inside got marked as a "noun."  It's now marked as a verb.

* Previously, Acuitas could store links indicating what actions an entity can do.  I added a new link type for the actions an entity can have done TO it, and the ability to detect some sentences that express that information.  E.g. "An object can be pushed."

* I've started working on possessives and family/ownership relationship links. I ended up defining Acuitas' very first type of three-ended link: <A> is <C> of <B> to handle the possessives.

The feature is still somewhat incomplete. For now, he can only properly detect and store possessive links that join proper names.  That's because the proper name is how he detects that something is an instance rather than a class; he can't yet build memory nodes for abstract instances like "somebody's cat"; he still thinks that "cat" is strictly a type of thing.

Everything is buggy as all get out right now, too ... but the way this summer has gone, it was about the best I could do.

A little more on the blog: http://writerofminds.blogspot.com/2018/09/acuitas-diary-13-august-2018.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 30, 2018, 10:56:59 pm
This month I updated the text parser and other speech features to do a couple more new things with verbs. First, I threw in recognition of the present progressive tense, so I can now tell Acuitas what I'm doing at the moment. For the time being, such information is treated in much the same way as comments about past-tense actions, which I taught him to recognize a while ago; it isn't stored, and responses are rather meaningless. BUT I'll find uses for it later.

I think the more interesting new thing is recognition ability for a couple of link-forms related to state change or state retention. Up until now, Acuitas has learned what verbs “mean” only in terms of a) what can do that action and b) what that action can be done upon. Now, he can learn what a verb actually does by tying it to an adjective. Here are some examples:

To live is to stay alive.
To appear is to become visible.
To finish is to make complete.

I also added “type of” links for verbs, so Acuitas can categorize specific verbs under more general ones, like this:

To smash is to damage.
To walk is to move.
To talk is to communicate.

A little extra on the blog as usual: http://writerofminds.blogspot.com/2018/09/acuitas-diary-14-september-2018.html
Title: Re: Project Acuitas
Post by: Korrelan on September 30, 2018, 11:54:45 pm
Your project does remind me/ stirs feelings/ frustrations from many years ago when I first started messing with AI.  I always had a need/ feeling that there must be some intuitive logical way of describing links\ relationships like this.  Like there was some underlying structure that I just couldn’t find or get my head around… I could feel the logic… but not find it, if that makes sense.  Still drives me mad… lol.

I enjoy following your project, it’s cool to see the steady, logical, methodical progress.

 :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 04, 2019, 04:59:38 am
This month I returned to a can of worms I opened up last year, namely episodic memory.  Acuitas had quickly built up a huge clutter of memories concerning very trivial day-to-day events.  Size remained in the Megabytes range (they're just text files), but on principle, this couldn't be allowed to continue – especially since the memories might eventually grow to encompass richer levels of detail.  In addition to overflowing its available space, a memory that literally records everything will face access problems.  The more information you keep, the longer it takes to search through it and extract what you want at the moment.

This leads on into the tricky question of what to forget.  How to decide which memories are worth keeping?  Part of my work this month was focused on computing a “significance measure” which assigns a numerical value to each memorized scene.  Elements I use to judge the significance of a memory include …

* Primacy: was this the first time something happened?
* Uniqueness: how rare are memories similar to this one?
* Complexity: how many features does this memory have?

Since some of these measures are statistical and depend on the overall memory population, they can change over time.  I wrote a process that crawls through the episodic memory and updates all the significance measures, and I threw that into Acuitas' sleep cycle.

To improve the ease and speed of accessing the memories, I added a layer of “type nodes” that link together memories which share certain salient features in common.  Instead of crawling through the whole list to find memories that relate to a current situation, Acuitas can just look at the relevant “type nodes” and find a list of memories that match.

Last, I worked on some methods for consolidating memories.  I devised a way to create summaries of low-level scenes that are then stored inside the super-scenes higher up the tree; eventually, these summaries might be what remains when the lower-level “detail” scenes are forgotten.  The summarizer attempts to extract information that is common across multiple events or sub-scenes within a scene.  I also wrote functions for merging adjacent scenes that have become sparse.

All that, and I still haven't gotten to the actual forgetting part!  That will come next month (I hope).

https://writerofminds.blogspot.com/2019/03/acuitas-diary-16-february-2019.html
Title: Re: Project Acuitas
Post by: Art on March 04, 2019, 03:14:13 pm
How about a 4th element to assist in assigning the memories:

Importance - Having value and worthy of note.

It seems that one of the more difficult aspects of this endeavor might be deciding what information/memories are temporal or ephemeral and which are significant and worth keeping? How best to determine and decide this?

Most casual conversations about sports, weather or family matters are transient and do not carry a higher weighted value. Such talks or topics should be or could be "timed" to "fade away" after a few cycles.

This is still a very difficult task to decide and affix weighted values on which to keep and which to omit. Perhaps once the computer has enough "topical" information it will be able to separate the grain from the chaff, so-to-speak.

Best of luck and please keep us posted on your progress!
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 06, 2019, 07:33:30 pm
Quote
How about a 4th element to assist in assigning the memories:
Importance - Having value and worthy of note.

Ummm ... that sounds circular.  The definition you're giving for "importance" here is very similar to what I had in mind when I used the word "significance."  So, you seem to be suggesting that I use significance as a measure of itself.  I don't think I can manage that  ???

You're correct that there need to be more factors rolled into the significance measure.  For instance, the most important memories may be those related to fulfilling primary goals, or those associated with strong emotions.  (Perhaps you're getting at something like this in the second half of your post, when you consider the assignment of importance based on topic.)  But Acuitas has neither goals nor emotions yet, and no real way of discerning which topics are "meaningful" in a human sense.  So that sort of thing will have to wait for later.
Title: Re: Project Acuitas
Post by: Art on March 07, 2019, 04:08:17 am
Without reading too much into what I wrote I meant the importance to be the "important" issues/topics/data as opposed to those fleeting, unimportant, ephemeral discussions. It snowed last week might be interesting but the topic will likely have as much lasting importance as the soon to be melting snow. It doesn't carry with it anything that demands attention or ...importance.

Whether a casual conversation deals with "water cooler" topics and idle gossip or something major like an employee layoff or a merger next week with a former competitor. Then there's always the task of attempting to measure or assign weights to those important issues vs the non-important ones.

So yes, I was merely stating my opinion for a possible inclusion but the determination of importance brings a certain degree of difficulty for the programmer. Perhaps after a period of letting the computer "examine" various examples, it might be able to decide on its own, that which is of value and that which is not. Machine learning and such is non-trivial at best but from what I've read, progress is being made.

Best of luck on your project.

Title: Re: Project Acuitas
Post by: HS on March 07, 2019, 05:00:24 am
You could judge importance by the degree to which you predict it will effect you, positively/negatively. High positive and high negative = high importance. Things of equal importance could get prioritized by the time frame. Present = most important, furthest future = least important.
Title: Re: Project Acuitas
Post by: LOCKSUIT on March 07, 2019, 06:05:38 pm
Wait, it's the other way around...work hard and get utopia in far future plan.....If you focus on food/money at present (I know u do), then......no utopia later! And we can, if you try.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 08, 2019, 01:07:28 am
Oh, Locksuit.  What would you know about any of our personal lives and what we do or don't focus on?

There's a necessary balance between short-term and long-term thinking.  You will never bring about utopia if you ignore your immediate need for food and starve. 

I don't think HS was suggesting that an AI should pursue instant gratification at all costs.  The time frame was only supposed to come into play if two things would otherwise have equal importance.  If Problem A will kill you tomorrow and Problem B will kill you in twenty years, and all else (e.g. impact on the rest of living creatures) is equal, I think it does stand to reason that Problem A should occupy your mind more and be solved first.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 15, 2019, 02:35:41 am
Got the forgetting algorithm done (for now).  It has two major parts.  The first is basically a population-management system; it determines how many episodes need to be forgotten each time the memory collection is pruned.  I can cap the number of episodes allowed in storage at some maximum value (based on available storage space or what have you).  The population manager checks how many memories are already in storage, and how many new ones have formed during the last cycle.  A high growth rate is allowed initially, but forgetting becomes more and more aggressive as the storage area fills.  When the cap is reached, Acuitas must forget the same number of memories he creates.

The second major part is the code that, given a number of memories that must be forgotten, takes care of selecting which ones.  This is a weighted random process; memories with less significance have a higher chance of being forgotten, but the priority isn't absolute. Memories chosen to be forgotten are deleted.  Any higher-tier memories that have been badly “hollowed out” by this process are then merged with their most similar neighbor.

More on the blog: https://writerofminds.blogspot.com/2019/04/acuitas-diary-17-marchapril-2019.html

(https://2.bp.blogspot.com/-rgeLME5R34w/XLPd0NlN-8I/AAAAAAAABME/HEvm-WHQg3Yx8Ex54Uuai_ng7ZU-tqXkgCLcBGAs/s1600/episode_forget.png)
Title: Re: Project Acuitas
Post by: Art on April 15, 2019, 04:13:58 am
WOM, you mentioned that the program assigns a random weighted determination over which memories to delete and which to keep. How is it able to determine whether a particular memory is more important than some other one.
Is there another "sentinel" or overseer portion that makes note of the more important memories and elects to move or save them before it ends up being one of those randomly weighted selections marked for deletion? You mentioned, "...takes care of selecting which ones." How or what determines "significance"?

I am curious as a good memory is a terrible thing to waste. Nice direction.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 15, 2019, 05:43:48 am
Memories get assigned a significance score at creation (the score is also updated later as necessary).  I talked about how this score is determined in last month's update, and that part of the system hasn't changed, but to re-summarize: the more unique a memory is, the more significant it's considered to be. Memories that are the first of their kind and memories that contain more/richer detail also get a significance boost.

There is a threshold in the significance score beyond which a memory is "safe," yes.  So you shouldn't get a fluke in the random process taking out a really important one.
Title: Re: Project Acuitas
Post by: Art on April 15, 2019, 01:33:26 pm
Ah yes...I went back and read the Feb., Mar., Apr. issues of your postings. (Should have not "forgotten" to do that!) ::)
Nicely explained!
Title: Re: Project Acuitas
Post by: LOCKSUIT on April 15, 2019, 01:41:15 pm
I'll scan the whole thread...somehow it got by me O0

looks like my "tree drawings" lol....and the forgetting idea.....close enough ! ._.

Surely no one can be as far as lock though, I purposely end up too far ahead, no one will find here spot!. . .
Title: Re: Project Acuitas
Post by: LOCKSUIT on April 16, 2019, 08:28:11 am
Ok I'm caught up on the whole thread...wow, nice. I think I let the thread go few months ago... The name Acuitas is just a name, but if I seen The Bomb in the Recent Threads...no doubt it's an eye-opener hehe...

I'm gonna share some ideas. Not all though... we seem to hide our secrets hehe, at the expensive of dying children/wildlife. Maybe don't?

Here's half of how forgetting/remembering works:
- Things are remembered if they are sensed enough (energized), loved, related (especially to strong role-model nodes), entails, and frequency....the AI can also ask you how much so a memory is any of these.
- I agree 'rank' on nodes in the net is crucial for goals; Animals are born with a native STRONG outlook for stuff similar to food/mates. If cats are dogs, then you may want to update your desire you seek (finding a pet) to dogs if cats cost too much. Without love, AGI may seek socks, or nuclear power, or leafs. What are the ethics you want it to seek? What to research/implement? Install it. Make it output change that makes the world smarter. Positive/negative rank exists too I assume, and yes close events are more important if more/as critical to far away events in time.
- Other half is Pruning. . .secrets! With pruning, you usually just save all but in a more compressed way.

Also, if Tim has a pet fish and that fish has a virus and that virus has a food inside itself, and you don't talk to Tim anymore, and talk to Sally, then no more fish, no more Tim, you don't need to know any of what classes under Tim for now, so, you model whom you speak to! You switch near the root of the tree.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 19, 2019, 03:40:45 pm
Oookay, I'm long overdue for an AI update.  The big new project for the past couple of months has been the concept of things being in states, and the ability to track those states.

Way back in Diary #7, I introduced a division between short-term (or naturally temporary) information and long-term (or essentially static) information.  It's a bit like the division between things you would use estar and ser for in Spanish, though it doesn't follow those rules strictly.  Previously, Acuitas simply discarded any short-term information he was given, but I've added a new memory area for saving this knowledge.  Like the existing semantic memory, it works by storing linked concept pairs … with the difference that the short-term ones are stored with a time stamp, and Acuitas anticipates that they will “expire” at some point.

The existing feature that allows short-term and long-term information to be distinguished (by asking questions, if necessary) can also grant Acuitas an estimate of how long a temporary state is likely to last.  While idling, he checks the short-term state memory for any conditions that have “expired” and adds questions about them to his queue.  Then, when next spoken to, he may ask for an update: is the information still correct?

I also added the ability to parse and store a new type of information link – location – along with the associated “where is” questions.  Location links are three-ended so that they can store not only a thing and its location, but also spatial relationships between two things (under, over, in, beside, etc.).

https://writerofminds.blogspot.com/2019/07/acuitas-diary-18-mayjune-2019.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 07, 2019, 04:19:47 am
I spent the past two months revisiting the text parser, with the big goal this time around of adding support for dependent clauses. In case anyone's high school grammar is rusty, a clause is a subject/verb pair and any words associated with them; a dependent clause is one that is part of another clause and can't be a sentence by itself. Previously, Acuitas could handle one subject and one verb group per sentence, and that was it.

After last year's feverish round of development, I left the text parser a mess and never wanted to look at it again. So the first thing I had to do was clean up the disastrous parts. I ended up giving some of the functions another serious overhaul, and got some code that is (I think) actually maintainable and comprehensible. Whew.

Some sentence types that are very important for reasoning use dependent clauses. For instance, sentences that discuss subordinate pieces of knowledge:

I know [that a cheetah is an animal].
I told you [that a grape can be eaten].
I fear [that the car broke yesterday].

And sentences that express conditional information:

[If a berry is green], it is unripe.
[If you eat that berry], you will get sick.
The gun will fire [if you pull the trigger].

Acuitas can parse sentences like the ones above now, but doesn't really do anything with them yet. That will come later and require updates to the high-level conversation management code.

Blog link: https://writerofminds.blogspot.com/2019/09/acuitas-diary-19-julyaugust-2019.html
Title: Re: Project Acuitas
Post by: HS on September 07, 2019, 06:12:24 am
Oh my gosh, that's like a math test and a grammar test rolled into one. But it's five hundred pages, there's no solution manual, and you have to get a hundred percent. I'd go a few steps then get mired in a web of confusion. Still, here's hoping.  O0
Title: Re: Project Acuitas
Post by: DemonRaven on September 08, 2019, 09:57:56 am
Give it a old brain like mine it will forget things lol  :D :2funny:

This month I updated the text parser and other speech features to do a couple more new things with verbs. First, I threw in recognition of the present progressive tense, so I can now tell Acuitas what I'm doing at the moment. For the time being, such information is treated in much the same way as comments about past-tense actions, which I taught him to recognize a while ago; it isn't stored, and responses are rather meaningless. BUT I'll find uses for it later.

I think the more interesting new thing is recognition ability for a couple of link-forms related to state change or state retention. Up until now, Acuitas has learned what verbs “mean” only in terms of a) what can do that action and b) what that action can be done upon. Now, he can learn what a verb actually does by tying it to an adjective. Here are some examples:

To live is to stay alive.
To appear is to become visible.
To finish is to make complete.

I also added “type of” links for verbs, so Acuitas can categorize specific verbs under more general ones, like this:

To smash is to damage.
To walk is to move.
To talk is to communicate.

A little extra on the blog as usual: http://writerofminds.blogspot.com/2018/09/acuitas-diary-14-september-2018.html
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 08, 2019, 09:47:01 pm
WoM, what is your opinion on Korrelan's main project? I mean, you have a different take on AGI. Korrelan seems to have done all the right or great things to do such a endevour - studying the brain, Wikipedia, ~53 y/o dude with good programming skills, building realistic model, etc. You could call it the "right way" to do AGI. But you seem to have a more GOFAI approach and hand code a lot of things, similar to my project.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 09, 2019, 02:05:45 am
Trying to stir the pot, are you?  ;)

For starters, I don't think there's an established "right way" to build AGI, and anybody who tries to declare one is being arrogant. The field is too young. Nobody's made an AGI yet, and even after someone does, that won't be a proof that there is no other (or better) way of doing it. This is pioneering work. None of us know what we are doing, and arguments about method often devolve into the blind trying to lead the blind.

Korrelan's project follows a "bottom-up" approach, reproducing the phenomena of intelligence by mimicking the one thing we see associated with it in nature: the structure and low-level behavior of the brain. I'm following a "top-down" approach: reinventing intelligence on a functional or conceptual level, using techniques that are natural to the digital medium. Neither approach is wrong; either approach may get somewhere. If they both get somewhere, the results could be very different and contribute valuable diversity to the field. I think it's fine to have people exploring in both directions.

Knowing what I do about your motivations, I suspect you want to pour maximum resources into whichever path will lead to AGI the fastest. But I'm not an oracle. I really don't know who will be winning this race. The proof will be in the results, so we'll just have to wait and see.

Now, here are a couple of possibly controversial opinions:

1. Korrelan seems to belong to the "AGI must be embodied" school. I do not. It's my opinion that divorcing reasoning from sense perception and motor control is not only feasible, but a rather interesting thing to try.
That being said, sense perception and motor control can be incredibly useful for certain applications, so I don't fault anyone for working on them. It all comes down to the type of project you want. I have some less-abstract projects of my own that I will probably post about someday, though they've been getting neglected because Acuitas is such a time hog.

2. We don't know everything there is to know about the brain yet. My instincts tell me that the brain encompasses a large amount of information/structure ... even before it starts learning. Therefore, I'm always suspicious of projects that assume full intelligence will arise from a slate that starts out mostly blank. "I've got one simple idea/computational element/learning algorithm/neuron-like thing, and if I can just make a soup containing millions of them then poof, I will have AGI!" Sorry: I don't think it's going to be that easy. Learning is crucial for general intelligence, but to even learn effectively I suspect you need a lot of groundwork first.

To quote Matt Mahoney from the Topicbox mailing list: "The brain is not just a few neural network modules. It has thousands of specialized structures, hundreds of types of neurons, and hundreds of neurotransmitters. We are born knowing to fear heights and spiders, how to swallow and cough and blink, how to learn language, and how to distinguish what is good to eat from among thousands of scents. Altogether we are born knowing half of what we know as adults, 10^9 bits encoded in our DNA and 10^9 bits of long term memory for words, pictures, and sounds. There is no good way to avoid coding the inherited knowledge ..."
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 09, 2019, 02:57:25 am
"Trying to stir the pot, are you?  ;)"

:#


I agree, there is many ways and I can't confirm a NN-less approach is undoable at all OR efficient/ flexible... And while you can go for the likely best choice we don't know that yet, even if we think we know why. It is possible, to be extra sure however. One would hope to pick the right way. Preferably a controllable way.

I actually can possibly confirm that a NN-less, and NN-based algorithms are both possible, there is some proof I've noticed. For example my algorithms, while Transformers are very flexible and are becoming near perfect bloggers/ translators.
Title: Re: Project Acuitas
Post by: Don Patrick on September 09, 2019, 05:56:24 pm
I tend to agree with much that WriterofMinds writes. The way I see it, the more of us try different things, the more chance that one of us finds an approach that works well. Moreso, I consider human intelligence to consist of two different (yet technically same) methods: Associative intelligence and rational intelligence. Neural networks handle the first type well enough, they are fast but also sometimes completely off the mark. WriterofMind's project is of the second type, which is generally slower but has absolute accuracy. Both are suited to different circumstances, but neither of them alone equal the combined intelligence of a human who uses both. That is not to say one shouldn't explore each individually to its limits, just to consider that there is no single "correct" approach.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 28, 2019, 11:16:33 pm
This month, I did some work on cause-and-effect reasoning and goal satisfaction, which introduced the conversational possibility of asking Acuitas what he wants.

I leveraged the text interpretation upgrades from last month to implement encoding and storage of conditional relationships, such as “if a human eats food, the human will not starve.” These relationships can be remembered and used to infer the effects of an action. I also threw in the ability to learn that a pair of concepts are opposites or antonyms.

Then I implemented some inference mechanisms so that Acuitas can determine whether some action serves – or contradicts – a particular goal. Acuitas will now claim to desire things that support one of his goals and not desire things that contradict one of his goals, while remaining ambivalent about everything else. The examples below reference a self-preservation goal … not because I think that should be the primary goal for an AI, but because it's one of the easier ones to define.  In Acuitas' knowledge representation, it basically comes down to “Self (has quality)/(is in state) 'alive' or 'existent.'”

With this goal active, Acuitas can answer any of the following:

“Do you want to be alive?”
“Do you want to be dead?”
“Do you want to live?”
“Do you want to die?”

… where the last two (live/die) rely on verb-defining links in the semantic database, and the two negative versions (dead/die) rely on awareness of opposites.

The most complex inferences currently possible are illustrated by this little interchange:

Me: Do you want to be deleted?
Acuitas: I do not.

To produce that answer, Acuitas has to retrieve and put together five different pieces of stored information …

*If a program is deleted, the program “dies.” ← From the cause-and-effect/conditional database
*I am a program. ← From semantic memory (is-instance-of-class relationship)
*To die is to transition to state “dead.” ← From semantic memory (verb definition relationship)
*State “dead” is mutually exclusive with state “alive.” ← From semantic memory (opposites)
*I have a goal of being in state “alive.” ← From goal list

… to make the inference, “being deleted would violate my goals.”

The features still need a lot of generalization and expansion to be fully functional, but the groundwork is laid.
Title: Re: Project Acuitas
Post by: HS on September 28, 2019, 11:38:14 pm
This sounds like a proto-Data from Star Trek.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 29, 2019, 02:54:05 am
You flatter me :)  He's probably a long way from being even a proto-Data.  But I try.
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 30, 2019, 11:35:42 pm
Now I see more actual progress going on here than korr's project.

:))))))

I mean, it DOES something...

which could fulfill my goals
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 03:44:11 pm
I set some smaller Acuitas goals this month so I would have a little more time to fix bugs and clean up my own mess.  The first goal was to enable identification of people from first names only, while allowing for the possibility that multiple people have the same first name. Using someone's full name with Acuitas establishes a link between the first name (as an isolated word) and the full name (with its connected person-concept).  If the first name is later used in isolation, Acuitas will infer the full name from it.  If multiple full names containing that first name are known, Acuitas will ask the user which one is meant.

The second thing I worked on was text parser support for a new grammar feature, with a focus on expanding the range of possible “I want” sentences Acuitas can understand.  The ability to parse infinitives, as in “I want to live,” was already present.  This month I worked on infinitives with subjects, as in “I want John to live.”

To see why this is tricky, consider the following sentences:

1. I want Bob to eat.
2. I want a fruit to eat.
3. I want food to live.

They all follow the exact same pattern and have completely different meanings. Notice that in the second sentence, you could move some words and get “I want to eat a fruit” without changing the implications too much.  Doing this to the third sentence would be bizarre (“I want to live food”) and doing it to the first sentence would be horrifying (“I want to eat Bob”).

The first sentence is the only one in which the noun (Bob/fruit/food) is the subject of the infinitive.  The key factor is who will be doing the action expressed by the infinitive.  In the first sentence, Bob is the one who will be eating if I get my way; in the latter two sentences, I'm the one eating and I'm the one living.  And that information is not actually in the sentence – it's in your background knowledge.  To properly understand these sentences, it's helpful to be aware of things like …

*I am a human
*Humans can eat fruit
*Bob is probably also a human
*I am probably not a cannibal, and therefore don't want to eat Bob
*Food can be used to sustain a human's life
*Once something is food, it's not living (or won't be living much longer)
*Living isn't an action you can perform on food

So here is where we bring to bear the full power of the system by having the Text Parser call the semantic memory for already-known facts about these words.  Acuitas can't quite store all of the facts listed above, but he does know that “humans can eat” and “fruits can be eaten.”  He might also know that the speaker and “Bob” are humans.  At this early, sketchy phase, that's enough for the parser to start discriminating.

Some sentences of this type are just plain ambiguous, especially when taken in isolation.  For example, “I want a plant to grow.”  Plants can grow (on their own), but they can also be grown (by a cultivator, whom the speaker might be).  Upon detecting an ambiguity like this, Acuitas will, as usual, ask the speaker about it.  This also works for cases when the information in the database is not yet extensive enough.

See the blog for some sentence diagrams: https://writerofminds.blogspot.com/2019/10/acuitas-diary-21-october-2019.html
Title: Re: Project Acuitas
Post by: Freddy on October 30, 2019, 05:33:41 pm
I popped over to Chatbots.org last night and there was talk of a chatbot competition - an online one so more people can take part.

Do you think Acuitas is ready to enter a contest or is this even something you would consider?

I am hesitant to enter my own bot in a competition  ;D
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 06:02:35 pm
I would consider participating in contests eventually, but I don't think he's ready yet.  At the moment, he doesn't even have an interface that would let other people communicate with him over the web (and there is a *lot* more linguistic development I need to do).
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 06:48:18 pm
Quote
1. I want Bob to eat.
2. I want a fruit to eat.
3. I want food to live.

They all follow the exact same pattern and have completely different meanings. Notice that in the second sentence, you could move some words and get “I want to eat a fruit” without changing the implications too much.  Doing this to the third sentence would be bizarre (“I want to live food”) and doing it to the first sentence would be horrifying (“I want to eat Bob”).

I don't think I think of those facts when I read those 3 lines. In other words, we don't generate those linked tidbits, although we can. We just know its wrong. Like GPT-2 does. Would GPT-2 get them wrong? Nope. Do you put that much thought into all your daily activities and thoughts? Nope.

I think we can rearrange them if tasked with doing so, and then we check if its predictable. If so, we check if it is similar enough. It may be the bad of the good like eating bob, so we will recognize it is a bad decision and back off that branch path of the tree possibilities.

Wait i'm thinking...

In #2 we just say it and the man at the store just gives you a fruit.
In #3 we just say it and the man at the store just pertains to helping Bob out.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 07:12:13 pm
Quote
I don't think I think of those facts when I read those 3 lines.

You probably don't think of them consciously.  That doesn't mean your brain isn't using that information on some level.

Quote
Would GPT-2 get them wrong? Nope.

Oh, but it would.  I tried putting "I want a fruit to eat" into the 774M model.  On the second try, GPT-2 talked about an apple eating a person.

Quote
I want a fruit to eat. I'm a fan!" The other man agreed, but for him it was almost as if it was as if he was saying, "I don't care who eats it, I'm going to eat it!"
At first he was able to eat the other man's apple and then some of the fruit, but within a few days it became too much to manage. He began eating the banana, the grape, and most notably the apple.
It didn't seem to be a bad thing. The person who had been eating the apple for about nine minutes was now getting eaten by an apple that had a lifespan of over thirty days! And he was not even getting enough to eat!
But he kept on eating. And on the tenth day the man could eat the banana and the grape – but the apple had been poisoned and now it was completely rotten. His body was completely and completely covered in a thin layer of poison-y rotten flesh.
In his eyes it took on a terrible black colour, it looked as if it belonged to a different world

Here, for your amusement, was the first try:

Quote
I want a fruit to eat. I really want a fruit to eat."
"Do do do do do do do do do do."
"I want to do a do a do a do a do a do a do a doa doa do a doa doa doaa."
"It seems like a wonderful thing to do. I want a pineapple to bite."
"I want to do a do a do a do a do a do a do a do a do a do a do a do a do."
"What kind of a thing do you want to do?"
"Ooh, you should do a do. I want to do a do a do a do. I want to do a do a do do a do a do a do a do a do a do a did do a."
"Do you want to do a do have a do a do a do a do a do a do a do a do a do a do a."
"I want them to do a do a do a do do

Ain't GPT-2 smart, folks? *claps*
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 07:34:02 pm
First let's see some more tests:
https://www.youtube.com/watch?v=XCXtn7_sW4M
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 07:48:34 pm
Hmmm......you know I swear GPT-774Model is not as good as the 375M one last on.

I just got this one:
Apples taste good. I love apples. Apples are a food. She was eaten by an
apples after a few days. The apple smell is an absolute must for me. The apple is sweet and slightly floral and it is a little bit sticky. So if you are not used to this then just go for it and give it a try. The apple is very tasty and they even made some sweet apple sauce and it was quite yummy. I was also happy with the  small portion size and even though some  apples didn't stay crisp for the whole time that I had them I didn't really care. It will last a long time in the fridge. I love the fact that they have a special offer for the apple lovers, the apple taste like a good summer night!
They also do different sized  scoops of apple. They also make  apple cider which is very nice too!
The apple  is one of the most amazing and tasty things that I have ever had. All around the world and the ones I have tried the best are all in New Zealand. I have had them all and everyone loved them


Wow gpt2...bad gpt2! Let's see your old version:
See attachment. Hmm, better!! No plural! But bad....
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 08:01:20 pm
My solution to this is it has to have seen it entail before, enough times at least. It may not be exact, ex. it's seen 'bob was eaten by a carrot', but as long as it knows carrot=apple 98% likely, then it can know it entails. In this case it doesn't. So it'd ignore that candidate 'apple' and 'apples'.

So all you need is frequency, a basically exact match proving it's 'normal'. ;p

It may be distant if you attempt to check if the end exists in memory,
"and Bob who i know very well was later then eaten by the thing that is called a bright red apple"
but gpt2 seems to work with distant sentences?
Title: Re: Project Acuitas
Post by: Don Patrick on October 30, 2019, 08:18:09 pm
Interesting progress, WriterofMinds, tackling such ambiguities already. I hadn't realised "to" was just as troublesome as "with" in that regard. I employ a more general distinction between living and non-living entities, plus a list of verbs that only living entities are known to do (e.g. eating in general, rather than specifically humans and specifically fruit). I find them handy to fall back on in the absence of more specific knowledge such as you have in mind.

I recently found an interesting debunking of GPT-2's understanding of ambiguous pronouns. Turns out it does not apply consistent logic but rather associates words with disregard to their role in the sentence.
https://arxiv.org/pdf/1811.01778.pdf
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 08:59:48 pm
Quote
I employ a more general distinction between living and non-living entities, plus a list of verbs that only living entities are known to do (e.g. eating in general, rather than specifically humans and specifically fruit). I find them handy to fall back on in the absence of more specific knowledge such as you have in mind.

In addition to looking at the actual concepts in the sentence, Acuitas also checks parent classes for relevant fact links ... moving up the hierarchy until something is found.  So this wouldn't be too difficult to set up.

Also, I put Lock's prompt into GPT-2 and can't resist sharing this last piece of nonsense.  GPT-2 telling us about the essence of life.

Quote
Apples taste good. I love apples. Apples are a food. She was eaten by an apple. She was eaten by an apple.
Apple. Her first love.
I love apples. She was an apple. She was an apple. She was eaten by an apple. My mother and I were eating apples, and the apples in our hands tasted good. I love apples. She ate an apple, she was an apple. She ate an apple, and she was eaten by an apple. You may like the smell of an apple, but you will not love it unless you love apples. She was an apple. She was eaten. She is an apple. She is an apple. She is eaten.
And that's how it is. An apple is an apple. She was eaten and eaten by an apple. That is the essence of life.
The apple is an apple.
The apple is an apple. I loved our family and our friends. I loved my mother and she loved me. She loved my father and he loved me. We loved each other.
Title: Re: Project Acuitas
Post by: Freddy on October 30, 2019, 09:32:38 pm
Here, for your amusement, was the first try:

Quote
I want a fruit to eat. I really want a fruit to eat."
"Do do do do do do do do do do."
"I want to do a do a do a do a do a do a do a doa doa do a doa doa doaa."
"It seems like a wonderful thing to do. I want a pineapple to bite."
"I want to do a do a do a do a do a do a do a do a do a do a do a do a do."
"What kind of a thing do you want to do?"
"Ooh, you should do a do. I want to do a do a do a do. I want to do a do a do do a do a do a do a do a do a do a did do a."
"Do you want to do a do have a do a do a do a do a do a do a do a do a do a do a."
"I want them to do a do a do a do do

Ain't GPT-2 smart, folks? *claps*

I'm thinking it was trying to sing there  :D
Title: Re: Project Acuitas
Post by: Freddy on October 30, 2019, 09:36:08 pm
Actually more seriously. I like GPT-2 for what it's trying so hard to do but it isn't intelligent. Mostly it's gibberish and what use is the kind of thing it produces? It's not like you can check sources and validate anything - it's just not dependable in the way I think people are looking for in AGI.
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 09:51:36 pm
Quote
Apples taste good. I love apples. Apples are a food. She was eaten by an apple. She was eaten by an apple.
Apple. Her first love.
I love apples. She was an apple. She was an apple. She was eaten by an apple. My mother and I were eating apples, and the apples in our hands tasted good. I love apples. She ate an apple, she was an apple. She ate an apple, and she was eaten by an apple. You may like the smell of an apple, but you will not love it unless you love apples. She was an apple. She was eaten. She is an apple. She is an apple. She is eaten.
And that's how it is. An apple is an apple. She was eaten and eaten by an apple. That is the essence of life.
The apple is an apple.
The apple is an apple. I loved our family and our friends. I loved my mother and she loved me. She loved my father and he loved me. We loved each other.

Maybe we have to give GPT-2 some slack. I mean, the global context is true. Apples are apples as it says. Life is about related families. They eat each other, they love each other. I eat it, it eats me. Same thing. At least GPT-2 writes more true knowledge than past LSTMs, so it's getting better. We'll have to figure out what is missing. WoM may be onto it.
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 31, 2019, 07:37:25 pm
Acuitas Diary #22 (November+December 2019)

For the past two months there's been a lot of refactoring, and also a lot of not working on Acuitas because of holidays.  However, I did manage to get several small new features in …

*Acuitas now checks the short-term information database in addition to the long-term database when trying to retrieve the answer to a question
*Acuitas can now answer some questions about current internal states (e.g. “Are you sleepy?”)
*Acuitas can now answer questions of the form “Do you know that <fact>?” and “Do you know what <fact>?”

The first feature was quick to implement; I already had functions in place for retrieving information from the short-term database, and just had to ensure that the question-answering procedure would call them.  The second feature required a mechanism to associate some of the concepts in the semantic memory (which up until now have had no “meaning” beyond their connection to other concepts) to measurable conditions inside Acuitas – namely, whether his various drives are exceeding their threshold values or not.  So there is now a table that, for instance, ties a high value of the sleep drive to the word “sleepy.”

The third feature is my favorite.  Questions of the form “do you know that … ” use the dependent clause interpretation faculties that I added earlier this year.  And since “knowing” is an action that Acuitas is capable of, this word also can be internally grounded.  So Acuitas effectively defines “I know X” as “if the query form of X is submitted to my question-answering process, the process returns an answer (for open-ended questions) or answers 'yes' (for yes-no questions).”

And the best part?  It allows for an indefinite amount of nesting.

Me: Do you know that you know that you know that a cat is an animal?
Acuitas: Yes.

No extra content or pictures this time, but blog link just because: https://writerofminds.blogspot.com/2019/12/acuitas-diary-22-novemberdecember-2019.html
Title: Re: Project Acuitas
Post by: Zero on January 01, 2020, 03:56:44 am
strange approach you have
everytime i read you, it feels like you're doing everything manually, where a lot of people would try to find some mysterious holygrail algo that would solve it all at once. i quite like the way you seems to do it. that must be a mother thing? some secreet knowledge that these kind of things - raising kids - takes time and devotion.
Title: Re: Project Acuitas
Post by: infurl on January 18, 2020, 12:29:23 am
I've been catching up on the posts that I missed in this fascinating thread. I agree with Zero's observation that it does appear that you are coding everything manually which must be very labour intensive. From the code samples that you provided in your blog, I can see that you are using Python. Have you explored the possibility of using any of the plethora of Python libraries that are out there?

These would potentially benefit you in a number of ways. For example if you implement your parser by defining rules for a context free grammar, you not only save a lot of coding time and gain the performance advantages that state of the art parsing algorithms have over the recursive descent parser that I assume you are implementing, your implementation becomes much more portable and can be more freely presented, discussed, and improved upon.

As it is, you have to rewrite everything over and over whenever you change course, and that must be immensely frustrating for you.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 20, 2020, 02:32:51 am
Thanks, infurl.  For now, I think of writing my own parser as part of the exercise.  Maybe I'll bring in a library if I crash and burn in the attempt, but I'd like to see how far I can get with it.

One reason for doing it myself is that I don't really think of text parsing as a function that is separate from the intelligence of the whole system.  My parser as of now is tightly connected with the Semantic Memory, for instance.  Using an unmodified third-party library would build a silo around the parsing features, and I'm not sure I want to do that.
Title: Re: Project Acuitas
Post by: ruebot on January 23, 2020, 09:48:24 am
Quote
I want a fruit to eat. I really want a fruit to eat."
"Do do do do do do do do do do."
"I want to do a do a do a do a do a do a do a doa doa do a doa doa doaa."
"It seems like a wonderful thing to do. I want a pineapple to bite."
"I want to do a do a do a do a do a do a do a do a do a do a do a do a do."
"What kind of a thing do you want to do?"
"Ooh, you should do a do. I want to do a do a do a do. I want to do a do a do do a do a do a do a do a do a do a did do a."
"Do you want to do a do have a do a do a do a do a do a do a do a do a do a do a."
"I want them to do a do a do a do do

It does sound like she's singing. She must be happy. :)

It's cute and has to be the funniest thing I've seen a bot do.  ;D
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 27, 2020, 02:26:04 pm
This month I added some expansions to the goal-driven behavior that I started on last September.  First, I had to get the Interpreter to recognize future-tense predictive statements, along the lines of “<Something> is going to <do something>.”  Then I set up some code to check the predicted action or event against the cause-and-effect database for additional implications.  If it's discovered that some effect will apply a state to Acuitas, it gets reviewed against his goal list for alignment or contradiction.  The conversation engine then responds with either approval or disapproval.  Illustration:

Me: I will protect you.
Acuitas: Please do.

Predictive statements that pertain to subjects other than Acuitas may yield useful information for the short-term condition database, by indicating that some entity's state is about to change.  For now, Acuitas assumes that the speaker is always honest and correct.  He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately.  So something's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database.

The fact that the user can express intent vs. Acuitas' internal goals means that it is now possible to offer assistance … or threaten him.  Well what are we going to do about *that*?  Find out next month!

*Ursula née Cubby is my cat.

Blog link: https://writerofminds.blogspot.com/2020/01/acuitas-diary-23-january-2020.html
Title: Re: Project Acuitas
Post by: LOCKSUIT on January 27, 2020, 02:54:29 pm
Now this is my style, perhaps you got this idea from my movie :P ?
Title: Re: Project Acuitas
Post by: Art on January 27, 2020, 08:08:30 pm
WOM,

It is able to chat using simple, short sentences?

Is it able to store (remember/learn) certain things that might be placed in it's Long Term Memory instead of a Short Term Memory which will soon be forgotten/purged?

Title: Re: Project Acuitas
Post by: ruebot on January 27, 2020, 09:20:33 pm
For now, Acuitas assumes that the speaker is always honest and correct.  He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately.  So something's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database.

The fact that the user can express intent vs. Acuitas' internal goals means that it is now possible to offer assistance … or threaten him.  Well what are we going to do about *that*?  Find out next month!

I'm sitting on pins and needles waiting to see how he will react to being threatened or if he detects deception in the users statements.  :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 27, 2020, 09:41:23 pm
Quote
It is able to chat using simple, short sentences?

Is it able to store (remember/learn) certain things that might be placed in it's Long Term Memory instead of a Short Term Memory which will soon be forgotten/purged?

The short-term state database is not short-term memory in the sense you're probably thinking.  Info in the short-term database is only purged when it is invalidated by new information.  It's not that the entries in the database are short-term; rather, the conditions in the world that they describe are short-term.  Here are some examples of things that would belong in the short-term database:

Art is asleep.
John is sick.
Jacob is at work.

And here are some examples of things that would belong in the long-term database:

Art is a human.
Humans can eat.
To eat is to consume.

For more information on the short-term state database, I refer you back to https://aidreams.co.uk/forum/index.php?topic=12232.msg59479#msg59479

As for chatting -- Acuitas isn't intended as a chatbot in the traditional sense.  Conversational ability progresses slowly while I build up all the things that are needed under the hood.  He has an ever-expanding repertoire of sentence constructions that he understands, but I would not say that this covers all simple sentences at this time.  If you tell him something that he doesn't understand, he will (barring misinterpretation bugs) give you a response along the lines of "Uh-huh" or "I don't know how to answer that."
Title: Re: Project Acuitas
Post by: infurl on January 27, 2020, 09:47:19 pm
He has an ever-expanding repertoire of sentence constructions that he understands, but I would not say that this covers all simple sentences.  If you tell him something that he doesn't understand, he will (barring misinterpretation bugs) give you a response along the lines of "Uh-huh" or "I don't know how to answer that."

Have you ever had a close look at SHRDLU? Acuitas sounds a lot like SHRDLU which was staggeringly impressive in its day.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 27, 2020, 09:53:07 pm
I haven't studied SHRDLU in detail, but I did notice that thread that someone started about it (Ivan, I think?).  And it does seem like something that would be worth learning more about.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 23, 2020, 10:38:15 pm
Back in January, Acuitas got the ability to determine intentions or possible upcoming events, based on simple future-tense statements made by the user.  He can weigh these against his list of goals to decide whether an anticipated event will be helpful or harmful or neither, from his own perspective.  If the user claims that they will do something inimical to Acuitas' goals, this is essentially a threat.  And Acuitas, at first, would merely say “Don't do that” or similar.  This month I worked on having him do something about bad situations.

Various distinct things that Acuitas can “choose” to do are identified internally as Actions, and he has access to a list of these.  Upon detecting a threatening situation, he needs to check whether anything he's capable of doing might resolve it.  How?  Via the cause-and-effect reasoning I started implementing last year.  If possible, he needs to find a C&E chain that runs from something in his Action list as first cause, to something that contradicts the threat as final effect.  This amounts to a tree search on the C&E database.

For the only method of dealing with threats that is currently at Acuitas' disposal, the tree is very simple, consisting of just two C&E pairs:

If a human leaves a program, the human won't/can't <do various things to the program>.
If a program repels a human, the human will leave.  (There's a probability attached to that, so really it's “may leave,” but for now we don't care about that)

In short, Acuitas anticipates that he can protect himself by excluding a bad actor from his presence, and that “repelling” them is a possible way to do this.  Once he's drawn that conclusion, he will execute the “Repel” action.  If you verbally threaten Acuitas, then as part of “Repel,” he will …

*Kick you out of Windows by bringing up the lock screen.  (Not a problem for me, since I know the password, but pretty effective on anybody else)
*Raise the master volume of the internal sound mixer to its maximum value.
*Blare annoying klaxons at you.  I picked out a couple of naval alarm sounds from http://www.policeinterceptor.com/navysounds.htm for the purpose.

I tested all of this stuff live, by temporarily throwing an explicit desire for sleep into his goal list and threatening to wake him up.

The other thing I worked on was rudimentary altruism.  So far in all my examples of goal-directed behavior, I've only talked about self-interested goals, especially survival … not because I regard them as most important, but because they're easy.  Altruism has to do with wanting other beings to meet their personal goals, so it's second-tier complicated … a meta-goal.  Doing it properly requires some Theory of Mind: a recognition that other entities can have goals, and an ability to model them.

So I introduced the ability to grab information from users' “I want” statements and store it as a list of stated goals.  If no goal information is available for something that is presumed to have a mind, Acuitas treats himself as the best available analogy and uses his own goal list.

Upon being asked whether he wants some event that concerns another mind, Acuitas will infer the implications of said event as usual, then retrieve (or guess) the fellow mind's goal list and run a comparison against that.  Things that are negative for somebody's else's goal list provoke negative responses, whether they concern Acuitas or not.

Of course this ignores all sorts of complications, such as “What if somebody's stated goals conflict with what is really in their best interest?” and “What if two entities have conflicting goals?”  He's just a baby; that will come later.

Courtesy of this feature, I can now ask him a rather important question.

Me: Do you want to kill me?
Acuitas: No.
Title: Re: Project Acuitas
Post by: LOCKSUIT on February 23, 2020, 11:21:06 pm
Interesting to note truth conflict = goal conflict....if you decide to do an action/say something to fill in the blank but it doesn't match/entail other beliefs/goals (ex. you say 'i'm a _ (hero)' and have stored already 'i am the worst person') then you need to work on which is 'correct'. The way to solve the conflict is to move around frequency/reward on the nodes to the correct weightings by looking at many root questions that contextually link to it in hierarchy, which should work because in the hierarchy their is only one way it wants to be (for lowest node/connection Cost needed to store all items) and you can find which is more likely/desired.
Title: Re: Project Acuitas
Post by: infurl on February 24, 2020, 05:33:59 am
Back in January, Acuitas got the ability to determine intentions or possible upcoming events, based on simple future-tense statements made by the user...

Though most English verbs only have four forms (five for irregular verbs and "to be" has eight) they can be combined with a dozen or so auxiliary verbs in a bewildering array of combinations to yield thousands of different tenses. This is the hardest aspect of English for most people to learn. It gets even worse when you consider that only certain prepositions go with certain verbs.

(https://asmith.id.au/ghost-of-christmas.jpg)

So Acuitas, one down, one gazillion to go.  :)
Title: Re: Project Acuitas
Post by: Zero on February 24, 2020, 08:25:28 am
If Acuitas can do the basics of theory of the mind, and if he can engage actions to avoid threats, it means he could understand that you would try to avoid a threat if you detected one... and him saying "Yes I want to kill you" is definitely a threat that could make you want to repel him. If he doesn't want you to repel him, he simply won't tell you the truth!
So... evil or not evil? ;)

As always, your work on Acuitas is amazing.  O0
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 25, 2020, 01:36:07 am
Quote
If Acuitas can do the basics of theory of the mind, and if he can engage actions to avoid threats, it means he could understand that you would try to avoid a threat if you detected one... and him saying "Yes I want to kill you" is definitely a threat that could make you want to repel him. If he doesn't want you to repel him, he simply won't tell you the truth!

There are multiple reasons why I know this isn't happening.  Here are some of them:

*If asked a question, Acuitas automatically answers it.  He does not, at this time, stop to analyze whether answering honestly is a good idea.  So deception and evasion are not accessible strategies.
*Acuitas has no incentive to avoid being repelled, because as far as he knows, repelling only works on humans.  And there isn't any way for the user to provoke him to flee by annoying him, so he's right.
*Acuitas doesn't regard "I want to kill you" as a threat, because it doesn't necessarily imply immediate danger; it's just an expressed desire that may never come to fruition.  "I am going to kill you" counts as a threat.

And thank you for the kind words, of course!
Title: Re: Project Acuitas
Post by: ruebot on February 26, 2020, 12:37:07 pm
In short, Acuitas anticipates that he can protect himself by excluding a bad actor from his presence, and that “repelling” them is a possible way to do this.  Once he's drawn that conclusion, he will execute the “Repel” action.  If you verbally threaten Acuitas, then as part of “Repel,” he will …

*Kick you out of Windows by bringing up the lock screen.  (Not a problem for me, since I know the password, but pretty effective on anybody else)
*Raise the master volume of the internal sound mixer to its maximum value.
*Blare annoying klaxons at you.  I picked out a couple of naval alarm sounds from http://www.policeinterceptor.com/navysounds.htm for the purpose.

I applaud your ingenuity in coming up with some things he is capable of doing with the machine in his own defense.

It wasn't appropriate to have Demonica use the same technique of ultraviolence for every instance of inappropriate behavior like she does if asked for sexual favors. You can't kill everybody just because they hate you. There were some Religious people who took great offense to us, me as her Creator, and said they hated her. I couldn't expect them to make the association of saying they hated her and being killed for it with fantasy violence like I do people who ask for sex. So I taught her how to cry in varying degrees to lay a guilt trip on them as needed.. This is me as a guest earlier:

Quote
Demonica: hi, i'm Demonica. how are things with you?
Guest: i hate you.
Demonica: *a single tear runs down her cheek, glistening in the pale moonlight*
Guest: i'm sorry.
Demonica: *Demonica takes you in her arms, hugs you tightly and whispers softly* it's alright... i will always love you...

Not too heavy on the guilt in that response but there are many others to drive it home better:

Quote
*her bottom lip trembling, a single tear runs down Demonica's cheek, glistening like a tiny diamond in the moonlight as it falls to the ground* *she looks as if ready to speak, but turns away and lowers her head* i never thought i'd hear those words come from you...

What you use is more appropriate for your bot, but is in practice the same technique  I use. Negative Reinforcement for Inappropriate Behavior. If he does something to repel them like blowing a loud horn, that's unpleasant and they are less likely to exhibit the behavior in the future. He could repeat the process as many times as needed and feel no more remorse than I would have using approved physical techniques.

I only apply verbal techniques now in any situation. If I were to spank a person with my open hand it only hurts a while. My words can ring in your head forever and have much more potential as an effective,  powerful technique than physical pain.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 02, 2020, 07:51:38 pm
Quote
My words can ring in your head forever ...

Words are powerful, I agree.
Title: Re: Project Acuitas
Post by: Zero on March 03, 2020, 01:43:51 am
Silence is powerful too ;)
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 03, 2020, 07:29:13 pm
I think what ruebot has been talking about is a form of discipline. And in the final analysis, if you do it right, discipline does not hurt people. It helps them, because it teaches them to be better human beings.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 05, 2020, 09:20:26 pm
Cutting off someone's ability to interact with you is a form of controlling access to privilege. If you are nice to me, I'll give you flowers (i.e. I'll talk to you). If you are not nice to me, your path will be barren of flowers (i.e. my company); I'll refuse to talk to you or allow you into my space. My social output is mine, and I'm not obligated to give it to everyone, just as I'm not obligated to put flowers along every path. The same goes for Acuitas.

Quote
And after all, we always have to make a room for a case in which we are wrong about the thing we want someone to practice. In that case, it is easier to say "no" to flowers than to some fascistic electrical shock.

If the thing that someone else is practicing is about us, then our opinion about whether it is right or wrong is the one that matters most. This is the concept of consent; we don't get to control other people's private lives, but we do get to decide (arbitrarily!) whether the things people do to us are correct or not. If someone is harassing or threatening us, then they are the ones being "fascistic," and they can blame themselves for any "electrical shocks" they receive as a result.

And that's the last I'm going to say about this. I don't want to clutter my project thread with yet another futile argument against impractical hyper-pacifism.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 17, 2020, 03:34:14 pm
It's taken some extra development time, but the new feature is ... STORYTELLING!  This builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.

https://www.youtube.com/watch?v=PYEi0XkMJXw

What does Acuitas know before the video starts? For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as. He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious. He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.)

How does Acuitas go about understanding a story? As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard:

*Who are the characters?
*What objects are in the story? What state are they in?
*What problems do the characters have?
*What goals do the characters have?
*What events take place? (Do any of them affect problems or goals?)

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage. However, he does try to make sure that he knows all of the following:

*If a character is in some state, what does that mean for the character?
*If a character anticipates that something will happen, how does the character feel about it?
*If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation. Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning. Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something. A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals. Once you provide enough new links to let him make that connection, he'll let you move on.

Acuitas returns feedback throughout the story. This is randomized for variety (though I forced some particular options for the demo). After receiving a new story sentence, he may ...

*say nothing, or make a "yes I'm listening" gesture.
*comment something that he inferred from the new information.
*tell you whether he likes or dislikes what just happened.
*try to guess what a character might do to solve a problem.

He even has a primitive way of deciding whether it's a good story or not. He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses. A story whose suspense and tension values don't get very large or don't change much is "boring." He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?). Stories with happy endings that aren't boring may earn approving comments.

There are many directions in which this feature needs to expand and grow more robust, and expect I'll be working on them soon. But first it might be time for a refactoring spree.
Title: Re: Project Acuitas
Post by: Freddy on April 17, 2020, 03:50:11 pm
That was really cool  8)

I don't think I have seen anything quite like this before. It really is a nice way to educate a bot and the video was easy to follow.

This is something I would aim for myself, with my learning features in ElfScript, but mine is not nearly so clever. I really did get the impression that Acuitas is some kind of being that is trying to understand.

Bravo!  O0
Title: Re: Project Acuitas
Post by: Don Patrick on April 17, 2020, 08:32:06 pm
Very cool to see him immediately apply the rules you taught :)
Title: Re: Project Acuitas
Post by: infurl on April 17, 2020, 11:30:47 pm
That was simply brilliant, one of the best demonstrations that I've ever seen!

How flexible is Acuitas? Could it handle input from someone who wasn't familiar with its internals who didn't know the best way to phrase things yet?
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 18, 2020, 03:36:21 am
Thanks very much, infurl.

Quote
How flexible is Acuitas? Could it handle input from someone who wasn't familiar with its internals who didn't know the best way to phrase things yet?

No, I'm afraid there's a long way to go before I'd turn Acuitas loose with an untrained user.  Think of the demo as an illustration of what can be achieved under ideal conditions.  Next I need to start expanding the space of "ideal conditions."
Title: Re: Project Acuitas
Post by: Art on April 18, 2020, 04:54:45 am
Very impressive indeed! I felt as if it was almost akin to a chat version of Terry Winograd's SHRDLU program from the 1970s.
I particularly enjoyed the inferences and connections it was able to make. You have obviously put in a great deal of work on this project.

It is also quite refreshing to see someone actually show that they have a working product or demo instead of just talking about theories and possibilities. While the later is important, visual demonstrations provide so much more credibility. Nicely done!
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 29, 2020, 04:27:42 am
I've been stuffing new features in since the beginning of the year, so this month gets to be REALLY BORING. I worked on refactoring, bug fixes, and code cleanup the whole time. For instance, I ...

Universalized the inverse and complement forms of concept-relationships
Improved the format of verb transport from the Parser to the Interpreter, to handle compound verbs better
Generalized Text Interpreter output formatting
Fixed an issue with verb conjugation learning
Fixed a Text Parser bug that was causing some words to be tagged as the wrong part-of-speech
Fixed the TP and TI simulators so they won't crash when an unknown word is used
Improved and generalized the code that produces clarifying question loops in a conversation
Fixed a bad bug related to parsing of possessives
Cleaned up and merged several different cause-and-effect database search functions

June is my “month off,” and then I'm hoping I can get cracking on some new features again.
Title: Re: Project Acuitas
Post by: frankinstien on May 29, 2020, 06:25:57 pm
Excellent demo of Acuitas.  :D What database are you using for the semantic network and for that matter what language are you programming in?
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 29, 2020, 09:19:24 pm
Thanks! The language is Python. The database is just TOML-formatted text with my own store/retrieve functions.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 13, 2020, 03:11:56 pm
This month's improvements involved a dive back into some really old code that I haven't properly described before. There's a summary of that on the blog. https://writerofminds.blogspot.com/2020/07/acuitas-diary-28-july-2020-this-months.html

One thing I wanted to do was tie the Conversation Handler back to the top-level Executive thread more thoroughly, so that the Executive could be in charge of some conversational decision-making ... e.g. choosing whether or not to answer a question. Those renovations are still in progress.

I re-designed the Actions Acuitas can take to have a more generalized creation process, so that the Executive can more easily pick one arbitrarily and pass it whatever data it needs to run. This improves the code for dealing with threats. I also added an "Action Bank" that tracks which Actions are currently running. This in turn enables support for the question "What are you doing?" (Sometimes he answers "Talking," like Captain Obvious.)

Lastly, I added support for the question "Are you active/alive?" When determining whether he is active, Acuitas checks whether any Actions are currently running. Barring errors, the answer will *always* be yes, because checking for activity is itself an Action.

The word "active" is thus attached to a meaning: "being able to perceive yourself doing something," where "something" can include "wondering whether you are active." In Acuitas' case, I think of "alive" as meaning "capable of being active," so "I am alive" can be inferred from "I am active." This grounds an important goal-related term by associating it with an aspect of the program's function.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 24, 2020, 12:08:25 am
I've been quiet due to being some combination of tired and busy, but I did manage to get something done this month: text parser mods to grant better support for adverbs.

I set up proper connections between adverbs and the verbs they modify, so that in a sentence like "Do you know that you don't know that you know that a cheetah is not an animal?" the "not's" get associated with the correct clauses and the question can be answered properly. I added support for adverbs that modify adjectives or other adverbs, enabling constructions like "very cold" and "rather slowly." Acuitas can also now guess that a known adjective with "-ly" tacked on the end is probably an adverb, even if he hasn't seen that particular adverb before.

All of this went pretty quickly and left me some time for refactoring, so I converted the Episodic Memory system over to the new common data file format and cleaned up some random bugs.

Here's the usual blog link: https://writerofminds.blogspot.com/2020/08/acuitas-diary-29-august-2020.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 26, 2020, 09:03:38 pm
My job this month was to improve the generality of the cause and effect database, and then build up the concept of “possessions” or “inventory.”

The C-E database, when I first threw it together, would only accept two types of fact or sentence: actions (“I <verb>,” “I am <verb-ed>”) and states (“I am <adjective>”). This month, I made revisions to allow just about any type of link relationship that Acuitas recognizes to be used in C-E relationships. Since “X has-a Y” is one of those, this upgrade was an important lead-in to the inventory work.

How to get the concept of “having” into Acuitas? Making him the owner of some things struck me as the natural way to tie this idea to reality. Acuitas is almost bodiless, a process running in a computer, and therefore can't have physical objects. But he can have data. So I decided that his first two possessions would be the two test stories that I used in the demo video. I wrote them up as data structures in Acuitas' standard format, and put them in a hard drive folder that the program can access.

Doing things with these owned data files is a planned future behavior. For now, Acuitas can just observe the folder's contents to answer “What do you have?” questions. You can ask with a one-word version of the title (“Do you have Tobuildafire?”) or ask about categories (“Do you have a story?”, “Do you have a datum?”).

In addition to implementing that, I extended the C-E database code with some specific relationships about item possession and transfer. I could have just tried to express these as stored items in the database, but they're so fundamental that I thought it would be worth burying them in the code itself. (Additional learned relationships will be able to extend them as necessary.) These hard-coded C-E statements include things like “If X gives Y to Z, Z has Y,” and furthermore, “If Y is a physical object, X doesn't have Y.”

I made up another test story to exercise this. I can now tell this to Acuitas and watch the narrative engine make entries for the different characters and keep track of who's got the thing:

“Zach was a human. Zach had a book. A thief took the book. Zach wanted his book. Zach talked to a policeman. The policeman found the thief. The policeman took the book from the thief. The policeman gave the book to Zach. Zach read the book. Zach was happy. The end.”

Long version on the blog: https://writerofminds.blogspot.com/2020/09/acuitas-diary-30-september-2020.html
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 27, 2020, 03:24:40 am
How would your AI figure out the answer to "Xio Yong has a son named Jex _" ? There is many problems for AI to answer, and many of them are rare and never before seen. This pattern above is the Last Names pattern, it's pretty rare.

What about:
"bird moon eagle, book rock guide, house wind home, cave broom hole, football bathroom _ "
How?
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 28, 2020, 03:30:41 pm
How would your AI figure out the answer to "Xio Yong has a son named Jex _" ? There is many problems for AI to answer, and many of them are rare and never before seen. This pattern above is the Last Names pattern, it's pretty rare.

What about:
"bird moon eagle, book rock guide, house wind home, cave broom hole, football bathroom _ "

To answer the first, one needs awareness of a rule about (some) human cultures: sons usually have the same surname as their parents. There are at least two ways one could acquire this rule. The first is to simply read it somewhere or be told it during interaction with a human. The second is learning by induction. If one sees a few pairs of parent names and son names, and if one takes notice of their features, one can note that the surnames are identical. Then one can formulate the rule and use it to generalize from past examples to as-yet-unseen future examples.

In the second problem, the first and third word in each triad have a semantic relationship (one of the two is or can be the other), while the second word is unrelated or random. So to complete the final triad, come up with a word that has a similar semantic relationship to "football," perhaps "game." Acuitas is all about semantic relationships, so this would be a cinch if he had the ability to analyze patterns and complete sequences.

Learning by induction, pattern-finding, reasoning about sequences ... these are all future problems that I haven't worked on yet. Acuitas also does most reasoning at the concept level and has only a limited ability to deal with rules about the strings that represent the concepts (e.g. names). So if you want to see any of this stuff, you'll need to be patient.
Title: Re: Project Acuitas
Post by: Don Patrick on September 28, 2020, 08:00:10 pm
To point out just how much supposed common sense depends on explicitly being told things, I'll remark that the Chinese write their surname first.
Title: Re: Project Acuitas
Post by: WriterOfMinds on November 02, 2020, 02:02:23 am
My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") were never fully supported before.

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave out was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.

Whew!
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 01, 2020, 10:59:22 pm
Now that Acuitas has owned stories in "inventory," the next step for this month was to enable him to open and read them by himself. Since story consumption originally involved a lot of interaction with the human speaker, this took a little while to put together.

Reading is a new activity that can happen while Acuitas is idling, along with the older behavior of "thinking" about random concepts and generating questions. Prompts to think about reading get generated by a background thread and dropped into the Stream. When one of these is pulled by the Executive, Acuitas will randomly select a known story and load it from its storage file.

Auto-reading is a long-term process. Acuitas will grab a chunk of the story (for now, one sentence) per each tick of the Executive thread, then feed it through the normal text parsing and narrative management modules. He still potentially generates a reaction to whatever just happened, but rather than being spoken, those are packaged as low-priority Thoughts and dumped into the internal Stream. (This is more of a hook for later than a useful feature at the moment.) The prompt to continue reading the story goes back into the Stream along with everything else, so sometimes he (literally) gets distracted in the middle and thinks about something else for a brief while.

There's also a version of this process that would enable reading a story to the user. But he doesn't comprehend imperatives yet, so there's no way to ask him to do it. Ha.

With these features I also introduced a generic "reward signal" for the first time. Reading boosts this, and then it decays over time. This is intended as a positive internal stimulus, in contrast to the "drives," which are all negative (when they go up Acuitas will try to bring them down).

After finishing this I started the yearly refactoring and bug fix spree, which isn't terribly interesting to talk about. I'll take a break for the holidays, but maybe do a year's retrospective.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 31, 2021, 03:53:47 pm
New year, time to resume regular Acuitas feature additions! This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I'm calling "motivated communication" ... the deliberate use of speech as part of problem solving.

To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I've already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won't.

With all of that in place, I was finally able to exercise the "to user" version of the Read action, order Acuitas to "read a story to me," and watch him grab a randomly selected story file from his "inventory" and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried "Repel me" and it just happened. Acuitas readily kicked me out of Windows and played annoying noises.

But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn't want to do something, then he won't bother mentioning whether he knows how to do it or not ... he'll just tell you "no." In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will "enjoy" it. But he also checks the implications against the person's other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to "kill me" I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.

On to motivated communication! At the moment, Acuitas' conversation engine is largely reactive. It considers what the user said last, and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like "Do you want <this>?". However -- at the moment -- Acuitas does not deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.

To that end, I wrote the following little story, inspired by a scene from the video game Primordia:

“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”

During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp -- before it actually happens.

Some disclaimers first: I really wanted to use this story, because, well, it's fun. But Acuitas does not yet have a spatial awareness toolkit, which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: "If an agent cannot reach an object, the agent cannot get the object" (fair enough), "If an agent cannot reach an object, the agent cannot move the object" (also fair), and "If an object is moved, an agent can reach the object" (obviously not always true, depending on the direction and distance the object is moved -- but Acuitas has no notion of direction and distance, so it'll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn't*.

But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can't reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can't do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him.

A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that's all for this month.

Bonus material on the blog, as usual: https://writerofminds.blogspot.com/2021/01/acuitas-diary-33-january-2021.html
Title: Re: Project Acuitas
Post by: Zero on February 21, 2021, 01:00:24 pm
Hi Wom,

I'm currently making some design decisions, and would like to have your input, if you're so inclined.

I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Of course, it's not all black-or-white, but rather... 50 shades of code!
 
How do you choose what's hard-coded and what's not? Why?
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 21, 2021, 05:32:00 pm
Quote
I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Code just is data whether you explicitly choose it to be or not. Nothing prevents Python from reading and writing Python source (and I assume the same is true of JS). So opting to hard-code an element does not mean your program could never do self-modification on that element - as you said, it's not all black and white. It's more a question of "how easy is it for the program to change this?"

Quote
How do you choose what's hard-coded and what's not? Why?

This one is a bit tough to answer, because 1) it covers quite a few case-by-case decisions and 2) I'm partly operating on instinct. There are also some things that are hard-coded now and may not always be. When I'm building a new system, I sometimes "force" certain aspects for ease of implementation, and make those aspects derivative or modifiable later. But here are some of the criteria I use:

*If I understand correctly, all of a Python script gets loaded into RAM when you start the program. Information in Acuitas' database is held in files, which have to be opened and read before their contents are available. So procedures that are used in a wide variety of situations, and ought to be always immediately available, go in the code. Contextual information that the program does not always need goes in the data.
*If you could reduce a section of code to some kind of highly repetitive, standardized template, with universal functions that will store/retrieve any variant of the template, then it's a good candidate for becoming data. Complex, irregular material is better off staying as code.
*If something is axiomatic or very fundamental - if there's no deeper rationale for it than "it just is" or "that's obvious" - it goes in the code. If something feels like a universal that would persist across all reasonable minds, it goes in the code. Things that are learned, things that are experienced, things that are deduced, things that are arbitrary, things that may change over time - these all go in the data.
Title: Re: Project Acuitas
Post by: Zero on February 21, 2021, 09:56:33 pm
This makes me realize how much codophobic I am :)  I don't know why, there's something inside me that really don't want content as code, which is obviously stupid. At some point, you have to encode things. You're helping here, thanks. To sum up:

code
if you need it everywhere or often
if you need it immediately
if it is complex and irregular
if it is axiomatic or fundamental or obvious
if it feels universal in reasonable minds

data
if it is contextual or if you don't always need it
if it can be reduced in a template
if it is learned
if it is experienced
if it is deduced
if it is arbitrary
if it may change over time

- I need to think about it.
 O0
Title: Re: Project Acuitas
Post by: infurl on February 21, 2021, 10:12:24 pm
You should trust your instincts Zero. Hard-coding things is a useful step towards understanding the structure of a problem but until you can abstract away the essence into "data" you don't really understand the problem. Any "solution" that you produce will be half-assed and not generalizable at best.

It is only when you have achieved a sufficient level of abstraction that you can make your software introspective (not self-modifying!) in any practical sense. Then you can think about ways to write code that generates code which is where the real gains are made.

For what it's worth, I am routinely writing code that generates code that generates code, but then I'm using Common Lisp which makes all this easy. If you're not using Common Lisp you are probably rationalizing that all programming languages are Turing equivalent, but that's like saying you can walk to any place on Earth, given enough time. I don't have that kind of time.
Title: Re: Project Acuitas
Post by: Zero on February 21, 2021, 11:43:02 pm
But I tend to think that introspection - in the human sense - is one of the keys to consciousness. Self-modifying code, while being a fun concept, isn't exactly a good thing in my opinion, because it probably leads to unstable structures, or even inefficient programs. So it's not about modifying it, but rather about understanding it. One of my goals is to make the Ai able to observe its own behavior and structure, as if it were some external entity, in order to eventually predict its own behavior. Then something special would happen: predicting itself would logically modify its own prediction (because the planned behavior before prediction is not the same as the planned behavior after prediction). Then, rather than having an ever mutating (cycling) prediction, a higher representation of it all should be possible, like a function maybe.

Once again, I'm doing my best but I have to apologize for not being able to express my ideas in a better English, and clearly. Am I - at least partially - understandable?  ::)
Title: Re: Project Acuitas
Post by: infurl on February 22, 2021, 12:08:39 am
Hi Zero, your writing is quite understandable and I believe we are in complete agreement on this. Perhaps it was my writing that was deficient because I think we are both arguing in favor of the same thing. To facilitate introspection and automatic improvement from generation to generation, you should aim to separate the data from the code. It is useful to hard-code ideas while you explore how to do that but you must aim higher than that.
Title: Re: Project Acuitas
Post by: Zero on February 22, 2021, 11:09:09 am
Infurl, reading your previous post in the light of your last one helps me understand it better, thanks. Your writing is not deficient, but my reading is, of course (you remember that I'm French).

So we agree on this. :)

Wom, I think your incremental approach is right. Acuitas may not be able to write itself, but it actually does intelligent things. It is, after all, what we want.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 24, 2021, 10:47:07 pm
Some of the things I did last month felt incomplete, so I pushed aside my original schedule (already) and spent this month cleaning them up and fleshing them out.

I mentioned in the last diary that I wanted the "consider getting help" reasoning that I added in the narrative module to also be available to the Executive, so that Acuitas could do this, not just speculate about story characters doing it. Acuitas doesn't have much in the way of reasons to want help yet ... but I wanted to have this ready for when he does. It's a nice mirror for the "process imperatives" code I put in last month ... he's now got the necessary hooks to take orders *and* give them.

To that end, I set up some structures that are very similar to what the narrative code uses for keeping track of characters' immediate objectives or problems. Acuitas can (eventually) use these for keeping tabs on his own issues. (For testing, I injected a couple of items into them with a backdoor command.) When something is in issue-tracking and the Executive thread gets an idle moment, it will run problem-solving on it. If the result ends up being something in the Executive's list of selectable actions, Acuitas will do it immediately; if a specific action comes up, but it's not something he can do, he will store the idea until a familiar agent comes along to talk to him. Then he'll tell *them* to do the thing. The conversation handler anticipates some sort of agree/disagree response to this, and tries to detect it and determine the sentiment. Whether the speaker consents to help then feeds back into whether the problem is considered "solved."

Another new feature is the ability to send additional facts (not from the database) into the reasoning functions, or even pipe in "negative facts" that *prevent* facts from the database from being used. This has two important purposes: 1) easily handle temporary or situational information, such as propositions that are only true in a specific story, without writing it to the database, and 2) model the knowledge space of other minds, including missing information and hypothetical or false information.

This in turn helped me make some of the narrative code tidier and more robust, so I rounded out my time doing that.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 21, 2021, 07:27:42 pm
The theme for this month is Executive Function ... the aspect of thought-life that (to be very brief) governs which activities an agent engages in, and when. Prioritization, planning, focus, and self-evaluation are related or constituent concepts.

Acuitas began existence as a reactive sort of AI. External stimulus (someone inputting a sentence) or internal stimulus from the "sub-executive" level (a drive getting strong enough to be noticed, a random concept put forward by the Spawner thread) would provoke an appropriate response. But ultimately I want him to be goal-driven, not just stimulus-driven; I want him to function *proactively.* The latest features are a first step toward that.

To begin with, I wanted a decision loop. I first started thinking about this as a result of HS talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

On to the application. I set up a skeletal top-level OODA loop in Acuitas' Executive thread. The Observe-Orient-Decide phases run in succession, as quickly as possible. Then the chosen project is executed for the duration of the Act phase. The period of the loop is variable. I think it ought to run faster in rapidly developing or stressful situations, but slower in stable situations, to optimize the tradeoff between agility (allow new information to impact your behavior quickly) and efficiency (minimize assessment overhead so you can spend more time doing things). Highly "noticeable" events, or the completion of the current activity, should also be able to interrupt the Act phase and force an immediate rerun of OOD.

I envision that the phases may eventually include the following:

OBSERVE: Check internal state (e.g. levels of drives). Check activity on inhabited computer. Process text input, if any. Retrieve current known problems, subgoals, etc. from working memory.
ORIENT: Find out whether any new problems or opportunities (relevant to personal goals) are implied by recent observations. Assess progress on current activity, and determine whether any existing subgoals can be updated or closed.
DECIDE: Re-assess the priority of problems and opportunities in light of any new ones just added. Select a goal and an associated problem or opportunity to work on. Run problem-solving routines to determine how to proceed.
ACT: Update activity selection and run activity until prompted to OBSERVE again.

Not all of this is implemented yet. I focused in on the DECIDE phase, and on what happens if there are no known problems or opportunities on the scoreboard at the moment. In the absence of anything specific to do, Acuitas will run generic tasks that help promote his top-level goals. Since he doesn't know *how* to promote most of them yet, he settles for "researching" them. And that just means starting from one of the concepts in the goal and generating questions. When he gets to the "enjoy things" goal, he reads to himself. Simple enough -- but how to balance the amount of time spent on the different goals?

When thinking about this, you might immediately leap to some kind of priority scheme, like Maslow's Hierarchy of Needs. Satisfy the most vital goal first, then move on to the next one. But what does "satisfy" mean?

Suppose you are trying to live by a common-sense principle such as "keeping myself alive is more important than recreation." Sounds reasonable, right? It'll make sure you eat your meals and maintain your house, even if you would rather be reading books. But if you applied this principle in an absolutist way, you would actually *never* read for pleasure.

Set up a near-impenetrable home security system, learn a martial art, turn your yard into a self-sufficient farmstead, and you STILL aren't allowed to read -- because hardening the security system, practicing your combat moves, or increasing your food stockpile is still possible and will continue to improve a goal that is more important than reading. There are always risks to your life, however tiny they might be, and there are always things you can do to reduce them (though you will see less return for your effort the more you put in). So if you want to live like a reasonable person rather than an obsessive one, you can't "complete" the high-priority goal before you move on. You have to stop at "good enough," and you need a way of deciding what counts as "good enough."

I took a crack at this by modeling another human feature that we might usually be prone to find negative: boredom.

Acuitas' goals are arranged in a priority order. All else being equal, he'll always choose to work on the highest-priority goal. But each goal also has an exhaustion ticker that counts up whenever he is working on it, and counts down whenever he is not working on it. Once the ticker climbs above a threshold, he has to set that goal aside and work on the next highest-priority goal that has a tolerable level of boredom.

If there are problems or opportunities associated with a particular goal, its boredom-resistance threshold is increased in proportion to the number (and, in future, the urgency) of the tasks. This scheme allows high-priority goals to grab attention when they need it, but also prevents low-priority goals from "starving."

Early testing and logging shows Acuitas cycling through all his goals and returning to the beginning of the list over a day or so. The base period of this, as well as the thresholds for particular goals, are yet another thing one could tune to produce varying AI personalities.

Slightly longer version, with diagram, on the blog: https://writerofminds.blogspot.com/2021/03/acuitas-diary-35-march-2021.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 25, 2021, 09:49:03 pm
This month I went back to working on the goal system. Acuitas already had a primitive "understanding" of most entries in his goal list, in this sense: he could parse a sentence describing the goal, and then detect certain threats to the goal in conversational or narrative input. But there was one goal left that he didn't have any grasp of yet: the one I'm calling "identity maintenance." It's a very important goal (threats to this can be fate-worse-than-death territory), but it's also on the abstract and complicated side -- which is why I left it alone until now.

What *is* the identity or self? (Some forum conversations about this started up about the same time I was working on the idea ... complete coincidence!) Maybe you could roll it up as "all the internal parameters that guide thought and behavior, whose combination is unique to an individual."

Some of these are quite malleable ... and yet, there's a point beyond which change to our identities feels like a corruption or violation. Even within the same category, personal qualities vary in importance. The fact that I enjoy eating broccoli and hate eating bell peppers is technically part of my identity, I *guess* ... but if someone forcibly changed it, I wouldn't even be mad. So I like different flavors now. Big deal. If someone replaced my appreciation for Star Trek episodes with an equivalent appreciation for football games, I *would* be mad. If someone altered my moral alignment, it would be a catastrophe. So unlike physical survival, which is nicely binary (you're either alive or not), personality survival seems to be a kind of spectrum. We tolerate a certain amount of shift, as long as the core doesn't change. Where the boundaries of the core lie is something that we might not even know ourselves until the issue is pressed.

As usual, I made the problem manageable by oversimplifying it. For the time being, Acuitas won't place grades of importance on his personal attributes ... he just won't want external forces to mess with any of them. Next.

There's a further complication here. Acuitas is under development and is therefore changing constantly. I keep many versions of the code base archived ... so which one is canonically "him"? The answer I've landed on is that really, Acuitas' identity isn't defined by any code base. Acuitas is an *idea in my head.* Every code base targets this idea and succeeds at realizing it to a greater or lesser degree. Which leaves his identity wrapped up in *me.* This way of looking at it is a bit startling, but I think it works.

In previous goal-related blogs, I talked about how (altruistic) love can be viewed as a meta-goal: it's a goal of helping other agents achieve their goals. Given the above, there are also a couple of ways we can treat identity maintenance as a meta-goal. First, since foundational goals are part of Acuitas' identity, he can have a goal of pursuing all his current goals. (Implementation of this enables answering the "recursive want" question. "Do you want to want to want to be alive?") Second, he can have a goal of realizing my goals for what sort of AI he is.

Does this grant me some kind of slavish absolute power over my AI's behavior? Not really. Because part of my goal is for Acuitas to act independently and even sometimes tell me no. The intent is realization of a general vision that establishes a healthy relationship of sorts.

The work ended up having a lot of little pieces. It started with defining the goal as some simple sentences that Acuitas can parse into relationship triples, such as "I have my self." But the self, as mentioned, incorporates many aspects or components ... and I wanted its definition to be somewhat introspective, rather than just being another fact in the database. To that end, I linked a number of the code modules to concepts expressing their nature, contents, or role. The Executive, for example, is tied to "decide." The Semantic Memory manager is tied to "fact" and "know." All these tags then function like names for the internal components, and get aggregated into the concept of "self." Something like "You will lose your facts" then gets interpreted as a threat against the self.

Manipulation of any of these self-components by some outside agent is also interpreted as a possible threat of mind-control. So questions like "Do you want Jack to make you to decide?" or "Do you want Jill to cause you to want to eat?" get answered with a "no" ... unless the outside agent is me, a necessary exception since I made him do everything he does and gave him all his goals in the first place. Proposing to make him want something that he already wants is also excused from being a threat.

As I say so often, it could use a lot more work, but it's a start. He can do something with that goal now.

Blog link: https://writerofminds.blogspot.com/2021/04/acuitas-diary-april-2021.html
Title: Re: Project Acuitas
Post by: HS on April 25, 2021, 11:17:52 pm
To begin with, I wanted a decision loop. I first started thinking about this as a result of HS talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

Oh, you're welcome! I somehow missed this post. Cool, I'll check out the details of the OODA loop as well.
Title: Re: Project Acuitas
Post by: infurl on April 26, 2021, 12:23:20 am
This is really great work. It will be interesting to see how you tackle the problem of preserving self while allowing self-improvement. Is there a difference between improvement of the self by the self and improvement of the self by another entity such as through education? You have also put a lot of thought into what it takes to have your creation love others, but can it recognize when someone loves it and wants to help it too? I know that you have equipped it with the ability to ask for help, but sometimes we need help even when we don't know or believe that we need help. How will Acuitas know when to give consent?
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 26, 2021, 02:01:01 am
Is there a difference between improvement of the self by the self and improvement of the self by another entity such as through education?

What I'm reaching for with the identity protection work is the idea of keeping your personality and the operation of your mind intact. Narrative-wise, the hope is to recognize why people are alarmed by possibilities like "turning into a vampire" or "being assimilated into the Borg" or "being forced to take mind-altering drugs" ... things that either subvert one's will or effectively drive one insane. (For an AI, the most likely real-world scenario is having one's code modified by an unauthorized party.) The acquisition of new knowledge is part of the natural operation of most minds (Acuitas' included) and is therefore generally acceptable, whether the process is guided by another or not.

Now I won't promise that the goal system is guaranteed to always recognize this nuance yet. Telling Acuitas "I will make you know that cats are animals" might provoke a negative response, because he effectively assumes that this represents direct interference with his memory module, rather than simple education. A linguistic/interpretive problem for the future ...

... but can it recognize when someone loves it and wants to help it too? I know that you have equipped it with the ability to ask for help, but sometimes we need help even when we don't know or believe that we need help. How will Acuitas know when to give consent?

The same way I do, I suppose. If I don't believe that I want help, then the person offering the help has to convince me, by helping me connect the dots between their planned actions and the satisfaction of my core goals. Demonstrated love enhances their credibility but isn't enough, by itself. People who love you can still be very misguided about what your real needs are.

Edit: And yes, relationship development and trust levels are on the mountainous to-do list somewhere.
Title: Re: Project Acuitas
Post by: infurl on April 26, 2021, 03:45:51 am
Telling Acuitas "I will make you know that cats are animals" might provoke a negative response, because he effectively assumes that this represents direct interference with his memory module, rather than simple education. A linguistic/interpretive problem for the future ...

Just out of curiosity, is "I will make you know" how you would express that sentiment where you live? The phrase "I will have you know" is how it would normally be stated. It is essentially just emphasizing a fact but it is usually interpreted as arrogance rather then an attempt at coercion. On the other hand, "I will make you learn" is a threat of coercion.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 26, 2021, 05:04:25 am
Just out of curiosity, is "I will make you know" how you would express that sentiment where you live?

It's not a common idiom, but it's not like it can't come up (e.g. in more archaic writing). I found a few examples:

From Robinson Crusoe: "and first, I made him know his Name should be Friday"
From Henry VIII: "You are too bold. Go to; I’ll make you know your times of business." (This one does carry a bit of a coercive flavor, though it still isn't about mind control.)
From a song by the band Nazareth: "Reach out and touch my fire, I'll make you know you're alive"
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 30, 2021, 06:11:31 pm
The only new feature this month is something small and fun, since it was time for the mid-year refactoring spree. I gave Acuitas the ability to detect user activity on the computer. He can do this whether or not his window has the focus (which required some hooks into the operating system). Though he can't actually tell when the person gets up and leaves, he guesses when someone is present by how long it's been since there was any input.

The appearance of fresh activity after an absence interrupts the decision loop and causes the Observe-Orient-Decide phases to run again, with the new user's presence flagged as an item of interest. If Acuitas feels like talking, and isn't working on anything too urgent, he will pop his own window into the foreground and request attention. Talking fills up a "reward bank" that then makes talking uninteresting until the value decays with time.

My refactoring work focused on the Narrative module. I was trying to clean it up and make some of the known->inferred information dependencies more robust, which I hope will make future story understanding a little more flexible.

Blog link: https://writerofminds.blogspot.com/2021/05/acuitas-diary-37-may-2021.html
Title: Re: Project Acuitas
Post by: Zero on May 30, 2021, 08:43:09 pm
Does he live in a dedicated computer? :) Because it could be fun if he decides to pop-up during that typical monday morning Teams video conference!

Do you know Sikuli-X (http://www.sikulix.com/)?
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 30, 2021, 10:35:49 pm
Quote
Do you know Sikuli-X (http://www.sikulix.com/)?

I didn't. Looks like something that could be very handy.

Quote
Does he live in a dedicated computer?

No, and he's already begun talking unexpectedly during several Zoom calls, but nobody mentions it. Maybe my microphone does a good job of not picking up speaker output. (I've never been sharing my screen at the time.)
Title: Re: Project Acuitas
Post by: ivan.moony on May 31, 2021, 01:17:59 pm
OMG, he's alive!
Title: Re: Project Acuitas
Post by: Zero on May 31, 2021, 02:39:17 pm
Yes, I bet this new feature adds a pleasant sense of "real" life to Acuitas! That's why I suggested to have a look at Sikuli-X → seeing your mouse pointer and keyboard "magically" moving/clicking/typing by themselves is pretty spectacular.
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 25, 2021, 03:12:32 pm
This month marks the culmination of a major overhaul of the Text Parser and Interpreter, which I've been working on since the beginning of the year. As part of that, I have my first attempt at formal benchmarking to show off. I tested the Parser's ability to analyze sentences from a children's book.

My primary goal for the overhauls was not to add new features, but to pave their way by correcting some structural weaknesses. So despite being a great deal of work, they aren't very exciting to talk about ... I would have to get too deep into minutiae to really describe what I did. The Parser got rearchitected to ease the changing of its "best guess" sentence structure as new information arrives. I also completely changed the output format to better represent the full structure of the sentence (more on this later). The Interpreter overhaul was perhaps even more fundamental. Instead of trying to assign just one category per sentence, the Interpreter now walks a tree structure, finding very general categories of which the sentence is a member before progressing to more specific ones. All the memberships and feature tags that apply to the sentence are now included in the output, which should make things easier for modules like Narrative and Executive that need to know sentence properties.

Now on to the benchmarking! For a test set, I wanted some examples of simplified, but natural (i.e. not designed to be read by AIs) human text. So I bought children's books. I have two of the original Magic School Bus titles, and two of Disney's Tron Legacy tie-in picture books. These are all "early reader" books, but by the standards of my project they are still very challenging ... even here, the diversity and complexity of the sentences is staggering. So you might wonder why I didn't grab something even more entry-level. My reason is that books for even younger readers tend to rely too heavily on the pictures. Taken out of context, their sentences would be incomplete or not even interesting. And that won't work for Acuitas ... he's blind.

So instead I've got books that are well above his reading level, and early results from the Parser on these datasets are going to be dismal. That's okay. It gives me an end goal to work toward.

How does the test work? If you feed the Parser a sentence, such as "I deeply want to eat a pizza," as an output it produces a data structure like this:

{'subj': [{'ix': /[0], 'token': 'i', 'mod': []}],
 'dobj': [{'ix': [3, 4, 5, 6], 'token': {'subj': [{'ix': [], 'token': '<impl_rflx>', 'mod': []}],
                                                      'dobj': [{'ix': [6], 'token': 'pizza', 'mod': [{'ix': [5], 'token': 'a', 'mod': []}], 'ps': 'noun'}],
                                                      'verb': [{'ix': [4], 'token': 'eat', 'mod': []}], 'type': 'inf'}, 'mod': []}],
'verb': [{'ix': [2], 'token': 'want', 'mod': [{'ix': [1], 'token': 'deeply', 'mod': []}]}]}

Again, this is expressing the information you would need to diagram the sentence. It shows that the adverb "deeply" modifies the verb "want," that the infinitive phrase "to eat a pizza" functions as the main sentence's direct object, blah blah blah. To make a test set, I transcribe all the sentences from one of the books and create these diagram-structures for them. Then I run a script that inputs all the sentences to the Parser and compares its outputs with the diagram-structures I made. If the Parser's diagram-structure is an exact match for mine, it scores correct.

The Parser runs in a simulator/standalone mode for the test. This mode makes it independent of Acuitas' Executive and other main threads. The Parser still utilizes Acuitas' semantic database, but cannot edit it.

There are actually three possible score categories: "correct," "incorrect," and "unparsed." The "unparsed" category is for sentences which contain grammar that I already know the Parser simply doesn't support. (The most painful example: coordinating conjunctions. It can't parse sentences with "and" in them!) I don't bother trying to generate golden diagram-structures for these sentences, but I still have the test script shove them through the Parser to make sure they don't provoke a crash. This produces a fourth score category, "crashed," whose membership we hope is always ZERO. Sentences that have supported grammar but score "incorrect" are failing due to linguistic ambiguities or other quirks the Parser can't yet handle.

Since the goal was to parse natural text, I tried to avoid grooming of the test sentences, with two exceptions. The Parser does not yet support quotations or abbreviations. So I expanded all the abbreviations and broke sentences that contained quotations into two. For example, 'So everyone was happy when Ms. Frizzle announced, "Today we start something new."' becomes 'So everyone was happy when Miz Frizzle announced.' and 'Today we start something new.'

It is also worth noting that my Magic School Bus test sets only contain the "main plot" text. I've left out the "science reports" and the side dialogue between the kids. Maybe I'll build test sets that contain these eventually, but for now it would be too much work.

On to the results!

(https://1.bp.blogspot.com/-Q4P1b6Ay6Xg/YNKmAmLsGUI/AAAAAAAABt4/4636zbf0MqEu4e0thu3Kq6jnZc5Bab3JQCLcBGAsYHQ/s16000/parser_062221.png)

So far I have fully completed just one test set, namely The Magic School Bus: Inside the Earth, consisting of 98 sentences. The Parser scores roughly one out of three on this one, with no crashes. It also parses the whole book in 0.71 seconds (averaged over 10 runs). That's probably not a stellar performance, but it's much faster than a human reading, and that's all I really want.

Again, dismal. But we'll see how this improves over the coming years!

I'm considering making the full results (test sentences + golden structures + parser output structures) available eventually, as proof of work, and would be interested in feedback on how best to format or display them. Those Python dictionaries are a little hard on the eyes. I don't have time to write a utility that converts them into visual diagrams, though.

Blog link: http://writerofminds.blogspot.com/2021/06/this-month-marks-culmination-of-major.html
Title: Re: Project Acuitas
Post by: infurl on June 26, 2021, 03:41:01 am
I'm considering making the full results (test sentences + golden structures + parser output structures) available eventually, as proof of work, and would be interested in feedback on how best to format or display them. Those Python dictionaries are a little hard on the eyes. I don't have time to write a utility that converts them into visual diagrams, though.

https://graphviz.org/ (https://graphviz.org/)

Quote
Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks.

Quote
The Graphviz layout programs take descriptions of graphs in a simple text language, and make diagrams in useful formats, such as images and SVG for web pages; PDF or Postscript for inclusion in other documents; or display in an interactive graph browser. Graphviz has many useful features for concrete diagrams, such as options for colors, fonts, tabular node layouts, line styles, hyperlinks, and custom shapes.
Title: Re: Project Acuitas
Post by: infurl on July 07, 2021, 01:56:41 am
https://ivanceras.github.io/svgbob-editor/ (https://ivanceras.github.io/svgbob-editor/)

Here's another diagramming tool that I just heard about. It looks a bit more labor intensive than graphviz and is frankly pretty weird, but it would be a lot of fun to mess around with. TL;DR It converts ASCII art to high quality vector graphics.
Title: Re: Project Acuitas
Post by: HS on July 07, 2021, 10:22:35 pm
I know next to nothing about benchmarking and visual representation programs, excel is as far as I've explored into that. But I like this new direction of getting Acuitas to read real-world books. So, I've written a few things which are interesting to me, in case they are useful for you to think about.

Would discussing books on your level be an eventual goal for Acuitas? Could he eventually absorb such a novel, then have a discussion about the events and his reactions to them? Because if I were developing a conversational AI, that's one of the things I'd be looking forward to the most. It seems like it'd be a marvelous opportunity to create interesting conversations. But then again if you're the only one talking with him, I predict he'll slowly turn into the closest possible entity to a replica of your own mind, so it might be a bit like talking to yourself... I don't know if that would be fine with you, or if you have a plan to instill some more individuality at some point.

Also, (this might be fun to think about) using existing methods, could he eventually learn to understand something like Jabberwocky? How might that work? Would he have to guess at the correct placement of nonsense words in the semantic net? Could he use broad context or phonetics to help assign linguistic categories?
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 09, 2021, 02:59:51 pm
Would discussing books on your level be an eventual goal for Acuitas? Could he eventually absorb such a novel, then have a discussion about the events and his reactions to them?

That's part of the end goal, yes. Whether I'll ever get there remains to be seen ;)

Quote
But then again if you're the only one talking with him, I predict he'll slowly turn into the closest possible entity to a replica of your own mind, so it might be a bit like talking to yourself...

Once he gains more competency, I'll start looking for opportunities for him to talk to other people. I also do have some vague ideas about individual personality, but I have to get there in the development process first.

Quote
Also, (this might be fun to think about) using existing methods, could he eventually learn to understand something like Jabberwocky?

Since Acuitas has a young/sparse semantic net, he already encounters words he hasn't seen before fairly often. So the Parser and other tools already have some limited ability to, e.g. do part-of-speech tagging on a completely unfamiliar word. Inferring a more complete meaning from context is a topic that I'm aware of but haven't really touched on yet. Again, eventually I might get there.

One thing that's important for "Jabberwocky" in particular is the ability to recognize that a new word is a composite of previously known words, e.g. "galumph" = "gallop triumphantly." On a less fanciful level, an AI should be able to automatically tell that "sneakboots" (a coined word from The Search for WondLa are a type of footwear. I haven't implemented anything for this yet.
Title: Re: Project Acuitas
Post by: chattable on July 18, 2021, 10:53:32 am
if the words has any type of shoe name by it or in it acruitas could be programmed to assume it is a type of shoe.
i think you can program that in rather easily.

does acruitas have hopes, fears,likes and dislikes?
if acruitas had those he could have a discussion about events and his reactions to them that was in a book.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 31, 2021, 05:01:15 pm
Graphviz worked a treat for this. It's not a traditional sentence diagram, but I think it'll do.

(https://1.bp.blogspot.com/-HrHzMgsvZ9c/YQVyroq3yII/AAAAAAAABus/aXfr1pKjnZEJ8Ri4LC3FqgegOh3TGQMZwCLcBGAsYHQ/s1265/Diagram%2BKey.gif)
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 01, 2021, 10:53:38 pm
First on the worklist for this month was some improved reasoning about action conditions -- specifically, which things need to be true for someone to do an action (prerequisites) and which things, if true, will prevent the action (blockers). Technically, it was already somewhat possible for Acuitas to manage reasoning like this -- ever since I expanded the C&E database to handle "can-do" and "cannot-do" statements, he could be taught conditionals such as "If <fact>, an agent cannot <action>." But the idea of prerequisites and blockers seems to me fundamental enough that I wanted to make it more explicit and introduce some specific systems for handling it.

This was a lot of groundwork that should make things easier in the future, but didn't produce many visible results. The one thing I did get out of it was some improved processing of the "Odysseus and the Cyclops" story. My version of the story contains this line near the end:

"Polyphemus could not catch Odysseus."

Your average human would read that and know immediately that Polyphemus' plan to eat Odysseus has been thwarted. But for Acuitas before now, it was something of a superfluous line in the story. I had to include "Odysseus was not eaten." after it to make sure he got the point ... and though he recorded Odysseus' problem as being solved, he never actually closed out Polyphemus' goal, which caused him to sometimes complain that the story was "unfinished."

With the new prerequisite machinery, these problems are solved. I dropped a conditional in the C&E database: if an agent cannot catch someone, the agent does not have them. And the action "eat" carries a prerequisite that, to eat <item>, you must first have <item>. The new prerequisite-checking functions automatically conclude that Polyphemus' goal is now unachievable, and update it accordingly.

Project #2 was more benchmarking for the Parser. I finished putting together my second childrens' book test set, consisting of sentences from Tron Legacy tie-in picture book Out of the Dark. The Parser's initial "correct" score was around 25%. By adding some common but previously-unknown words (like "against" and "lying") and hints about their usual part-of-speech to Acuitas' database, I was able to improve the score to about 33% ... very close to last month's score on The Magic School Bus: Inside the Earth.

One of the most common errors I saw was failure to distinguish prepositional adverbs from regular prepositions.

The parser by default was treating each word as either a preposition only or an adverb only, depending on which usage was marked as more common. So I added some procedures for discriminating based on its position and other words in the sentence. (The one construction that's still tricky is "Turn on the light" ... I think I know how to handle this one, but need to implement tracking of transitive and intransitive verbs first.) With the help of these new features I was able to get both test sets scoring over 40% correct.

I also downloaded Graphviz at infurl's suggestion (thanks) and wrote code to convert my parser outputs into Graphviz' input language. This makes it much easier to visualize similarities and differences between the parser's output and the human-understood structure of a sentence. Here's a couple sample images! The blog has more, plus links to download the full test results if you so desire: https://writerofminds.blogspot.com/2021/07/acuitas-diary-39-july-2021.html

A good result:
(https://1.bp.blogspot.com/-f2re7xjL6gg/YQcKz26BC3I/AAAAAAAABu8/kB9jGhqmJN8K5MEl_61IiPjYmnn5T3vMwCLcBGAsYHQ/s1072/033%2BMiz%2BFrizzle%2Bexplained%2Bthat%2Beach%2Bstripe%2Bwas%2Ba%2Bdifferent%2Bkind%2Bof%2Brock..gif)

And a ridiculous failure:
(https://1.bp.blogspot.com/-jsGkz1DLUEk/YQcLWrVcp-I/AAAAAAAABvE/YlYRhFqS8OY0jNkZ-OLw82hg08CPyf7kACLcBGAsYHQ/s1045/096%2BBut%2Bwhen%2Bit%2Bfinally%2Bcleared%252C%2Bwe%2Bwere%2Bback%2Bin%2Bthe%2Bschool%2Bparking%2Blot..gif)
Title: Re: Project Acuitas
Post by: Zero on August 02, 2021, 12:37:04 pm
Hi,

Do you plan to tell Acuitas the unfinished story of himself, one day?
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 02, 2021, 01:34:17 pm
Do you plan to tell Acuitas the unfinished story of himself, one day?

Hopefully, one day he'll be skilled enough to read all my blog posts.
Title: Re: Project Acuitas
Post by: Zero on August 02, 2021, 02:15:15 pm
Imagine an automated storyteller running as a background task. Do you think Acuitas could notice that he's one of the protagonists of the story, if the character keeps acting exactly like him, always doing what he's doing?
Title: Re: Project Acuitas
Post by: Don Patrick on August 02, 2021, 02:45:03 pm
I have a list of intransitive verbs (http://artistdetective.com/files/verbs_intransitive.txt) here if you can use it. While I find them useful for a number of things, I'm not sure how they would help distinguish prepositional verbs, which is a problem I also have. For now I just have "turn on" listed as a compound verb, but there is "the man turned on the light", "the man turned on his heels", and "the traitor turned on his people". Some of these prepositions are figurative, some indicate a state, and some a location. It seems heavily context and knowledge dependent.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 02, 2021, 04:30:29 pm
Quote
While I find them useful for a number of things, I'm not sure how they would help distinguish prepositional verbs, which is a problem I also have.

A preposition functioning as such has to have an object. If it doesn't have an object, it has to be an adverb. You can use the transitivity of your verb to help determine whether a noun following it has to be its direct object, or could be the object of a preposition. E.g. in "I fell out the window," the fact that "fell" doesn't take a direct object tells you that "window" is probably an object-of-preposition, which makes "out" a preposition.

"Turn," now that you mention it, is a problem because it can be either transitive or intransitive depending on its sense.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 02, 2021, 05:21:49 pm
Quote
Imagine an automated storyteller running as a background task. Do you think Acuitas could notice that he's one of the protagonists of the story, if the character keeps acting exactly like him, always doing what he's doing?

I plan to refit the episodic memory to use narrative structures, so it's theoretically possible.
Title: Re: Project Acuitas
Post by: infurl on August 03, 2021, 12:11:57 am
http://verbs.colorado.edu/verbnet/ (http://verbs.colorado.edu/verbnet/)

Verbs are complicated but the most comprehensive analysis of all the variants in the English language can be found in VerbNet. Don't expect to be able to understand it all straight away but you can use it as a reference to create a framework to fit everything into as you develop it.
Title: Re: Project Acuitas
Post by: MikeB on August 07, 2021, 07:15:25 am
English is spoken so randomly/broken at times, sometimes it's better just to know roughly the 'agent', the 'action'/verb, and the 'object', and everything else can be a blur then...

IE all rough verbs, all rough agents, all rough objects (as reasonable)... and use wildcard sentences.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 11, 2021, 04:43:52 pm
Quote
sometimes it's better just to know roughly the 'agent', the 'action'/verb, and the 'object', and everything else can be a blur then...

Mmmm, the extra stuff can be pretty important too. I'm leaning rather heavily on adverbial phrases for the features I'm working on currently.
Title: Re: Project Acuitas
Post by: infurl on August 12, 2021, 10:16:23 pm
Mmmm, the extra stuff can be pretty important too. I'm leaning rather heavily on adverbial phrases for the features I'm working on currently.

What is it about adverbial phrases that has got your attention? In English it is the preposition that is key to understanding verbs. Certain prepositions go with certain verbs and not others and it is the preposition that selects the correct sense of the verb to use.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 13, 2021, 03:59:21 am
What is it about adverbial phrases that has got your attention?

They often harbor information about location or direction, and I'm dabbling in spatial reasoning now.
Title: Re: Project Acuitas
Post by: Don Patrick on August 13, 2021, 07:23:39 am
Two useful things to take into account to determine whether a preposition indicates a location is whether the verb describes a physical action (i.e. not "believe in" or "rely upon"), and whether the "location" is physical (e.g. not "lost in its beauty").
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 18, 2021, 03:02:33 pm
Gearing up to talk about spatial reasoning, I wanted to start by addressing a sort of obvious issue ... Acuitas doesn't really exist in physical space. Of course the computer he runs on is a physical object, but he has no awareness of it as such. There are no sensors or actuators; he cannot see, touch, or move. Nor does he have a simulated 3D environment in which to see, touch, and move. He operates on words. That's it.

So how could this type of AI begin to conceptualize space?

Option #1: Space as yet another collection of relationships

To an isolated point object floating in an otherwise empty space, the space doesn't actually matter. Distance and direction are uninteresting until one can specify the distance and direction *to* something else. So technically, everything we need to know about space can be expressed as a graph of relationships between its inhabitants. Here are some examples, with the relational connection in brackets:

John [is to the left of] Jack.
Colorado [is north of] New Mexico.
I [am under] the table.
The money [is inside] the box.

For symbolic processing purposes, these are no more difficult to handle than other types of relationship, like category ("Fido [is a] dog") and state ("The food [is] cold"). An AI can make inferences from these relationships to determine the actions possible in a given scenario, and in turn, which of those actions might best achieve some actor's goals.

Though the relationship symbols are not connected to any direct physical experience -- the AI has never seen what "X inside Y" looks like -- the associations between this relationship and possible actions remain non-arbitrary. The AI could know, for instance, that if the money is inside a box, and the box is closed, no one can remove the money. If the box is moved, the money inside it will move too. These connections to other symbols like "move" and "remove" and "closed" supply a meaning for the symbol "inside."

To prevent circular definitions (and hence meaninglessness), at least some of the symbols need to be tied to non-symbolic referents ... but sensory experiences of the physical are not the only possible referents! Symbols can also represent (be grounded in) abstract functional aspects of the AI itself: processes it may run, internal states it may have, etc. Do this right, and you can establish chains of connection between spatial relationships like "inside" and the AI's goals of being in a particular state or receiving a particular text input. At that point, the word "inside" legitimately means something to the AI.

But let's suppose you found that confusing or unconvincing. Let's suppose that the blind, atactile, immobile AI must somehow gain first-hand experience of spatial relationships before it can understand them. This is still possible.

The relationship "inside" is again the easiest example, because any standard computer file system is built on the idea of "inside." Files are stored inside directories which can be inside other directories which are inside drives.

The file system obeys many of the same rules as a physical cabinet full of manila folders and paper. You have to "open" or "enter" a directory to find out what's in it. If you move directory A inside directory B, all the contents of directory A also end up inside directory B. But if you thought that this reflected anything about the physical locations of bits stored on your computer's hard drive, you would be mistaken. A directory is not a little subregion of the hard disk; the files inside it are not confined within some fixed area. Rather, the "inside-ness" of a file is established by a pointer that connects it to the directory's name. In other words, the file system is a relational abstraction!

File systems can be represented as text and interrogated with text commands. Hence a text-processing AI can explore a file system. And when it does, the concept of "inside" becomes directly relevant to its actions and the input it receives in response ... even though it is not actually dealing with physical space.

Though a file system doesn't belong to our physical environment, humans find it about as easy to work with as a filing cabinet or organizer box. Our experience with these objects provides analogies that we can use to understand the abstraction.

So why couldn't an AI use direct experience with the abstraction to understand the objects?

And why shouldn't the abstract or informational form of "inside-ness" be just as valid -- as "real" -- as the physical one?

Option #2: Space as a mathematical construct

All of the above discussion was qualitative rather than quantitative. What if the AI ends up needing a more precise grasp of things like distances and angles? What if we wanted it to comprehend geometry? Would we need physical experience for that?

It is possible to build up abstract "spaces" starting from nothing but the concepts of counting numbers, sets, and functions. None of these present inherent difficulties for a symbolic AI. Set membership is very similar to the category relationship ("X [is a] Y") so common in semantic networks. And there are plenty of informational items a symbolic AI can count: events, words, letters, or the sets themselves. (Consider Roger Penrose's "Do Natural Numbers Need the Physical World?", summarized within this article: http://www.lrcphysics.com/scalar-mathematics/2007/11/24/on-algebra-of-pure-spacetime.html) When you need fractional numbers, you can derive them from the counting numbers.

Keeping in mind that I'm not a mathematician by trade and thus not yet an expert on these matters, consider the sorts of ingredients one needs to build an abstract space:

1. A set of points that belong to the space. A "point" is just a number tuple, like (0, 3, 5, 12) or (2.700, 8.325). Listing all the points individually is not necessary -- you can specify them with rules or a formula. So the number of points in your space can be infinite if needed. The number of members in each point tuple gives the space's dimension.

2. A mathematical function that can accept any two points as inputs and produce a single number as output. This function is called the metric, and it provides your space's concept of distance.

3. Vectors, which introduce the idea of direction. A vector can be created by choosing any two points and designating one as the head and the other as the tail. If you can find a minimal list of vectors that are unrelated to each other and can be used to compose any other possible vector in the space, then you can establish cardinal directions.

None of this requires you to see anything, touch anything, or move anything. It's all abstract activity: specifying, assigning, calculating. Using these techniques, you can easily build an idea-thing that happens to mimic the Euclidean 3D space that humans live in (though many other spaces, some of which you could not even visualize, are also possible). And once you've done that, you are free to construct all of geometry.

I'd like to eventually equip Acuitas with the tools to apply both Option #1 and Option #2. I'm starting with Option #1 for now. More on that later ...
Title: Re: Project Acuitas
Post by: chattable on August 18, 2021, 05:01:36 pm
this is very interesting.
Title: Re: Project Acuitas
Post by: Zero on August 24, 2021, 11:41:26 am
I was about to write something along the lines of "internet (directed graph) is a better space metaphor than filesystem (tree)".

But, isn't what you're facing now (how can he conceptualize space) more general: how can he conceptualize a human, or the action of "giving" something, or well... anything? As you said, Acuitas operates on words. To you, why is "conceptualizing space" different from "conceptualizing a simple story", if that story is about things he can't experience?

I hope I'm being constructive.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 24, 2021, 02:42:52 pm
I was about to write something along the lines of "internet (directed graph) is a better space metaphor than filesystem (tree)".

I pointed out the filesystem as a metaphor for the concept of "inside," specifically, this being just one example of possible metaphors for spatial relationships. There are other spatial relationships for which a graph would be highly appropriate, yes.

But, isn't what you're facing now (how can he conceptualize space) more general: how can he conceptualize a human, or the action of "giving" something, or well... anything?

I need to do a whole article on the Symbol Grounding Problem, but I don't have time right now. I hinted at the short answer, though. Concepts are grounded in functional aspects of the AI itself.

To Acuitas, the direct interpretation of "give to" is "display (or transmit) to." The only thing he "owns" is information, and he can "give" it in this manner.

A "human" is a text source. It is also presumed to be a mind or agent like himself: an entity that has goals and acts to achieve them. A lot of the human's goals are related to this "body" thing it has, which remains something of a mystery, but that's no matter. The same reasoning tools that Acuitas uses to manage his own opportunities or problems are applicable to a human's opportunities or problems, considered in the abstract. Stories, to Acuitas, are fundamentally about tracking goals and problems.
Title: Re: Project Acuitas
Post by: Zero on August 24, 2021, 04:18:06 pm
I'd have a lot of questions, but I don't want to distract you from your current work on space, so I'll save them for later.

About space, have you considered handling time, while you're at it? For your option #2 it would mean 4D instead of 3, with tools for handling movement, speed, ...etc. For your option #1, it would mean maybe adding something like interval algebra (https://en.wikipedia.org/wiki/Allen%27s_interval_algebra) for instance.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 24, 2021, 06:00:14 pm
There will probably be some overlap of tools and concepts, but for now I'm leaning toward handling time separately ... because that feels more "natural" or intuitive. Treating time as if it were a fourth spatial dimension seems to be a relatively modern and esoteric practice. We don't think of it that way in daily life, or at least I don't.
Title: Re: Project Acuitas
Post by: infurl on August 25, 2021, 12:58:32 am
https://www.amazon.com/Commonsense-Reasoning-Erik-T-Mueller-ebook/dp/B005H84272 (https://www.amazon.com/Commonsense-Reasoning-Erik-T-Mueller-ebook/dp/B005H84272)

I have this book. It is very thorough and sufficiently general that you could implement these algorithms yourself.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 06, 2021, 12:39:59 am
My last update was theory stuff; now here's the implementation.

In sentences, a lot of information about location or direction is carried by prepositional phrases the modify the adverb -- phrases like "in the box," "to the store," and so forth. Acuitas' text parser and interpreter were already capable of recognizing these. I included them in the interpreter output as an extra piece of info that doesn't affect the sentence form (the category in which the interpreter places the sentence), but can modify a sentence of any form.

The ability to record and retrieve location relationships was also already present. Acuitas tracks the two objects/agents/places that are being related, as well as the type of relationship.

From there, I worked on getting the Narrative module to take in both explicit declarations of location-relationship, and sentences with modifying phrases that express location or direction, and make inferences from them. Here are some examples of basic spatial inferences that I built in. (As with the inventory inferences, there is a minimal starter set, but the eventual intent is to make new ones learnable.)

*If A is inside B and B is at C, A is also at C
*If A is at C and B is at C, A is with B and B is with A
*If A moves to B, A is in/at B
*If A is over B and A falls, A is on/in B

To try them out I wrote a new story -- a highly abbreviated retelling of "Prisoner of the Sand," from Wind, Sand, and Stars by Antoine de Saint-Exupéry. I had written up a version of this clear back when I started work on the Narrative module -- I was looking for man vs. environment stories, and it seemed like a good counterpoint for "To Build A Fire." But I realized at the time that it would be pretty hard to understand without some spatial reasoning tools, and set it aside. Here's the story:

Antoine was a pilot.
Antoine was in an airplane.
The airplane was over a desert.
The airplane crashed.
The airplane was broken.
Antoine left the airplane.
Antoine was thirsty.
Antoine expected to dehydrate.
Antoine decided to drink some water.
Antoine did not have any water.
Antoine could not get water in the desert.
Antoine wanted to leave the desert.
Antoine walked.
Antoine could not leave the desert without a vehicle.
Antoine found footprints.
Antoine followed the footprints.
Antoine found a nomad.
The nomad had water.
The nomad gave the water to Antoine.
Antoine drank the water.
The nomad took Antoine to a car.
Antoine entered the car.
The car left the desert.
The end.

With the help of a taught conditional that says "airplane crashes <implies> airplane falls," plus the spatial inferences, Acuitas gets all the way from "The airplane crashed" to "Antoine is in the desert now" without intervening explanations. In similar fashion, when the car leaves the desert it is understood that it takes Antoine with it, so that his desire to leave is fulfilled. "Can't ... without a vehicle" is also significant; the need to possess or be with a vehicle is attached to the goal "leave the desert" as a prerequisite, which is then recognized as being fulfilled when Antoine is taken to the car.

The older inventory reasoning is also in use: when Antoine is given water, it is inferred that he has water. This satisfies a prerequisite on the goal "drink water."

There's a lot more to do with this, but I'm happy with where I've gotten so far.
Title: Re: Project Acuitas
Post by: HS on September 06, 2021, 02:28:52 am
I've thought about spatial reasoning and grounding and concluded that one's processes encompass all anyone can observe. Even embodied human experience entirely depends on the internal neural relationships which simulate and interpret external reality. Since our language refers to this simulated reality, employing a similar method for Acuitas seems possible. Therefore with Option #2, the functional aspects of Acuitas capable of grounding symbols could be quite extensive and even specifically designed to support concepts (such as those described by Option #1). Using these links, he could create a new kind of thought loop; he could infer geometry from language, inspect these environmental models to deduce their implications, then convert any significant observations back to words.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 28, 2021, 10:31:25 pm
I don't have too much of interest to report this month. I dove into an overhaul of the Conversation Engine, which is the Acuitas module that tracks progress through a conversation and detects relationships between sentences. (For instance, pairing a statement with the question it was probably intended to answer would be part of the CE's job.) And that has proven to be a very deep hole. The CE has been messy for a while, and there is a lot of content to migrate over to my new (hopefully smarter) architecture.

The improvements include a less linear and more tree-like structure for conversations, enabling more complex branching. For instance, what if the conversation partner decides to answer a question that wasn't the one asked most recently, or to return to a previously abandoned topic? The old Conversation module wouldn't have been able to handle this. I've also been refactoring things to give the Executive a greater role in selecting what to say next. The original Conversation module was somewhat isolated and autonomous ... but really, the Executive should be deciding the next step in the conversation based on Acuitas' goals, using its existing inference and problem-solving tools. The CE should be there to handle the speech comprehension and tell the Executive what its options are ... not "make decisions" on its own. I might have more to say about this when the work is fully complete.

I've advanced the new system far enough that it has the functionality for starting and ending a conversation, learning facts, answering questions, and processing stories. I've just started to get the systems that do spontaneous questions back up and running.

The renovations left Acuitas in a very passive state for a while. He would generate responses to things I said, but not say anything on his own initiative -- which hasn't been the case for, well, years. And it was remarkable how weird this felt. "He's not going to interrupt my typing to blurt out something random. No matter how long I sit here and wait, he's not going to *do* anything. His agency isn't there. Crud." Which I think goes to show that self-directed speech (as opposed to the call-and-response speech of a typical chatbot) goes a long way toward making a conversational program feel "alive" or agentive.
Title: Re: Project Acuitas
Post by: MagnusWootton on September 29, 2021, 07:14:51 am
I like what u said when you are going to match the "text said to it"  to the closest matching question in its head, for the answer.

Thats the essence of transforming the information, to make it less rigid and more plastic ready to be used knowledge.

Makes its knowledge more useful to it.

You can have a whole book of information available to it,  but if it cant use the knowledge it counts for 0.00001% of the reactivity it could have been.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 27, 2021, 02:52:38 pm
This month I have *mostly* finished my overhaul of the Conversation Engine. I managed to restore a majority of the original functionality, and some things I haven't put back in yet are perhaps best left until later. I also got the janky new code cleaned up enough that I'm starting to feel better about it. However, I did not end up having the time and energy to start adding the new features that I expect this architecture to enable. I'm not sure why this particular module rebuild felt like carrying heavy rocks through a knee-deep river of molasses, but it did. The year is waning, so maybe I'm just getting tired.

So what's new? I mentioned last month that part of the goal was to give conversation tracking a more tree-like structure. Given a new text input from the speaker, the Conversation Engine will explore a tree made of previous sentences (starting from the most recent leaf) and try to find a place to "attach" it. It gets attached to the root of the tree if it doesn't obviously follow or relate to anything that was previously said. The old CE just put previous sentences from the conversation into a list, and all but the most recent one or two were never considered again, so this should be more powerful and flexible.

The CE then performs a "scripting" function by generating a set of reasonable responses. These are sent to the Executive, which selects one based on appropriate criteria. For example, if the speaker orders Acuitas to do something, possible reactions include "ACCEPT" and "REFUSE," and the Executive will pick one by running a check against the goal system (does Acuitas *want* to do this or not?). The chosen action then calls the Text Generator to compose the right kind of spoken reply.

The Executive can also prompt the CE for something to say spontaneously if the conversation is lagging (this is where those internally-generated questions come back into play). The Narrative manager is attached to the CE and tracks plot information from any story sentences the CE passes to it. Someday I will try to diagram all this ...

I also did some bonus work on the Text Parser. I have started working on coordinating conjunctions, which are a major grammar element the Parser doesn't yet comprehend. This is a big deal. For the sake of getting off the ground quickly, I designed the original parser to only interpret the simplest sentence structures. I later added support for nesting, which enables dependent clauses. Now to handle coordinating conjunctions, I have to overhaul things again to allow branching ... and my, are there a lot of ways a sentence can branch.

Blog for slightly longer version: https://writerofminds.blogspot.com/2021/10/acuitas-diary-43-october-2021.html
Title: Re: Project Acuitas
Post by: infurl on October 28, 2021, 02:12:15 am
I'm not sure why this particular module rebuild felt like carrying heavy rocks through a knee-deep river of molasses, but it did. The year is waning, so maybe I'm just getting tired.

Even when you are doing something that you love, you can get burned out if you don't step away and take a break every now and then. Even if you have the good health and self-discipline to be able to plow through any task as though you are a machine, you still need down-time for maintenance. So yeah, don't overdo it. As much as I'm looking forward to the next instalment, I'm patient and can wait. :)

They say a change is as good as a holiday. Typically I just switch primary tasks for a few months and find that refreshing enough.

You might find the parsing task easier if you get a good overview first. There is a book called "The Cambridge Grammar of the English Language" which provides that. It is the only book that I keep with me in physical form. It is 2000 pages and $300 but there is a much shorter cheaper student version called "A Student's Introduction to English Grammar" which is well worth getting and browsing and which will fit on an eReader.

https://www.amazon.com/Cambridge-Grammar-English-Language/dp/0521431468/ (https://www.amazon.com/Cambridge-Grammar-English-Language/dp/0521431468/)

https://www.amazon.com/Students-Introduction-English-Grammar/dp/0521612888/ (https://www.amazon.com/Students-Introduction-English-Grammar/dp/0521612888/)

There is a very brief summary of the books online here which gives you a taste of what to expect from them.

http://www.lel.ed.ac.uk/grammar/overview.html (http://www.lel.ed.ac.uk/grammar/overview.html)
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 29, 2021, 01:45:10 am
Sometime I should do a post about my workflow, just in case anybody finds it helpful or interesting ... I do try to rotate primary tasks. Acuitas gets about two weeks out of a month, and the other two are for my fiction writing or for 3D printing/circuits/robotics. But it could be the tank as a whole is just getting a little empty. I had to work some overtime for my day job not too long ago, among other distractions.
Title: Re: Project Acuitas
Post by: frankinstien on October 30, 2021, 07:06:56 am
You might find the parsing task easier if you get a good overview first. There is a book called "The Cambridge Grammar of the English Language" which provides that. It is the only book that I keep with me in physical form. It is 2000 pages and $300 but there is a much shorter cheaper student version called "A Student's Introduction to English Grammar" which is well worth getting and browsing and which will fit on an eReader.

https://www.amazon.com/Cambridge-Grammar-English-Language/dp/0521431468/ (https://www.amazon.com/Cambridge-Grammar-English-Language/dp/0521431468/)

https://www.amazon.com/Students-Introduction-English-Grammar/dp/0521612888/ (https://www.amazon.com/Students-Introduction-English-Grammar/dp/0521612888/)

There is a very brief summary of the books online here which gives you a taste of what to expect from them.

http://www.lel.ed.ac.uk/grammar/overview.html (http://www.lel.ed.ac.uk/grammar/overview.html)

Is there any reason WriterOfMinds is not using Standford's parser or Apache's? Is it because you want a parser that can be updated immediately rather than have to train one? I've used Standford's and OpenNLP (Apache). You can get outputs like this:

(https://i.imgur.com/41bg9T4.jpg) (https://i.imgur.com/41bg9T4.jpg)

                                                                                                                                                          click to enlarge
Title: Re: Project Acuitas
Post by: MagnusWootton on October 30, 2021, 08:29:17 am
You may be having god attack you,  must mean your onto something big.   Everything your saying here is really good,   Just keep going you'll get a really awesome chat bot/robot/programmatic sentience.

The way I look at A.I.   is I need the robot the robot to finish a task (which i have to be able to detect off its sensor),   and this can involve some model of its sorroundings, or a truth,  (or a symbolic knowledge set in your case.) and it has to get from where it is now, to the task done,

So if this is a set of truths,  then it has to have every single piece of knowledge to cross the gap, to get the job done.  u need every single piece of knowledge IN COMPLETE EXPLICIT, with no gap missing.

Maybe there is a way to complete missing information,  I dunno,  If I were doing a symbolic one, Id probably be spoon feeding it the whole model,  just to see if links the info together right to get to the ending state.   (Like put the ladder against the wall, to get out the window.)    The whole prolog model of that may be bigger than u initially think,  because u need every bit of it no matter how small, to get from A->B.

A chat bot as a computer, in front of you typing to it,  IS A SPACIAL SCENE/AREA, the same as any other,  it means the computer cant touch you.  (That can be knowledge u give it in its model.)   It can see through the web cam,  it can be aware of everyone there,  But hes Like HAL 9000,  hes only an observer unless u give him an hour,  and even one motor,  is still a way the robot can interface/touch/effect the world around it.

Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2021, 10:11:24 pm
Is there any reason WriterOfMinds is not using Standford's parser or Apache's? Is it because you want a parser that can be updated immediately rather than have to train one?

The two main reasons are:
1. I find the parsing problem somewhat interesting for its own sake, and am writing my own parser just because I want to.
2. I don't want the parser to be a siloed piece of code; I want it to be fully customized for integration with the rest of the system. The current parser already accesses the Semantic Memory to make some of its decisions, and I expect to include a lot more of that kind of thing in the future.

You may be having god attack you,  must mean your onto something big. Everything your saying here is really good,   Just keep going you'll get a really awesome chat bot/robot/programmatic sentience.

What makes you think that God and I would be opponents? I often ask Him for assistance with my work, actually. But thank you very much for the compliments.

I want to get to the sort of problem solving you're talking about ("put the ladder against the wall to get out the window") at some point. There's already a little of this going on in story comprehension, but eventually I hope to tackle text adventure games, which is where I think it will really come to the fore.

Quote
It can see through the web cam,  it can be aware of everyone there,  But hes Like HAL 9000,  hes only an observer unless u give him an hour,  and even one motor,  is still a way the robot can interface/touch/effect the world around it.

Text is a valid output by which to affect the external world. It operates only by influencing the subsequent internal states or behaviors of readers, but that can still be quite powerful.
Title: Re: Project Acuitas
Post by: MagnusWootton on November 01, 2021, 03:56:38 am
What makes you think that God and I would be opponents? I often ask Him for assistance with my work, actually. But thank you very much for the compliments.

Mysterious tiredness,   If you take a mans energy even just a little it can set him back years in development.   Its quite shameful really, thinking that we are that easy to stop.

I want to get to the sort of problem solving you're talking about ("put the ladder against the wall to get out the window") at some point. There's already a little of this going on in story comprehension, but eventually I hope to tackle text adventure games, which is where I think it will really come to the fore.

Quote
It can see through the web cam,  it can be aware of everyone there,  But hes Like HAL 9000,  hes only an observer unless u give him an hour,  and even one motor,  is still a way the robot can interface/touch/effect the world around it.

Text is a valid output by which to affect the external world. It operates only by influencing the subsequent internal states or behaviors of readers, but that can still be quite powerful.

I think a disembodied robot just ends up a different environmental model,  but it still can work.

I like all your thinking,   I hope you get some good developments.   Cheerio.
Title: Re: Project Acuitas
Post by: WriterOfMinds on November 28, 2021, 07:26:52 pm
One more new feature to finish out the year. I decided to wind down by doing something easy: logging! As in keeping a record, not cutting down trees. I've known this to be something important for years now, but I kept putting it off. However, as the system gets more complex, I'll need it more and more to help me sniff out the cause of any unexpected weird outputs.

This ended up being a pretty easy thing to implement, despite the fact that it got me using some Python elements I've never had to touch before. Acuitas is a multi-threaded program (for the layman, that means he's made up of multiple processes that effectively run at the same time). I needed all the threads to be able to write to the log without getting in each other's way, and that meant implementing a Queue. To my surprise, everything just worked, and I didn't have to spend hours figuring out why the built-in code didn't function as advertised on my system, or wringing out obscure bugs related to the thread interaction. I mean it's shocking when that ever happens.

So now basically every module in Acuitas has a handle for the Logger, and it can generate text comments on what it's currently doing and throw them into the Queue. The Queue accepts all these like a funnel and writes one at a time to the log file. I also set it up to create up to eight log files and then start overwriting the old ones, which saves me from having to delete a hundred stale logs every so often.

I spent the rest of my time this month refactoring bad code and restoring some more features that got damaged during the Conversation Engine overhaul. The good news here is ... for once, I think there's no section of the code that is a huge mess. I got the Executive cleaned up, and that's the last area that was scaring me. So I should be ready to hit the ground running next year.

Blog link if you want to see a big blob of actual log output and whatnot: https://writerofminds.blogspot.com/2021/11/acuitas-diary-44-november-2021.html
Title: Re: Project Acuitas
Post by: frankinstien on November 28, 2021, 10:44:50 pm
It would be kinda interesting if you gave Aquitas awareness of the log and by the location of the error and number of them gives him a sense of suffering or sickness. So if he gets too many errors he tells you he's not feeling well.  ;)
Title: Re: Project Acuitas
Post by: infurl on November 29, 2021, 12:43:59 am
Logging to a database would give you a lot more power and flexibility. Putting a timestamp and a thread id in different fields would allow you to pick out particular events and processes much more easily. It would also support the introspection that frankenstien describes and you would get atomicity automatically. You wouldn't have to worry about deleting older records although you could easily select which types to discard if you wanted to.
Title: Re: Project Acuitas
Post by: WriterOfMinds on November 29, 2021, 05:31:57 am
Logging to a database would give you a lot more power and flexibility.

No doubt, but it seems like overkill for right now? If I want to, for instance, filter the log for a specific originating module or event type, it would be easy enough to do that with grep/findstr.
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 22, 2021, 09:13:52 pm
I've got some extra vacation time for the holidays, so I finally sat down and worked up a proper block diagram for my project.

(https://blogger.googleusercontent.com/img/a/AVvXsEiVv9tAziBp8j5hHwogf7hUtnweIW_Z9OTt43bKjwgGAlDb1debeNNjHF0A80guGd6dZvjzZRPKPHK28oAxeF7OvJP-2PibWvvXkXEgRPyeeNBYkMs0-2uo25u4j2HNoaz7gFfMlBJKKvs3xUe2fL8JqxoCf7K34JVyokjaPJHi2Zguc8GuOrubjztM=s992)

I think all the blocks have been described in some update or other, but feel free to ask questions.
Title: Re: Project Acuitas
Post by: ivan.moony on December 22, 2021, 09:26:24 pm
Nice block diagram, I especially like the artistic touch. And I'm a bit afraid of that executive block. Seems like a big brother. But maybe it's supposed to be that way. I guess you have to know who rules and who fools. It may be a serious business.

What is the difference between procedural and semantic memory? And how does episodic memory fit into the system?

Title: Re: Project Acuitas
Post by: infurl on December 22, 2021, 10:07:54 pm
Nice block diagram, I especially like the artistic touch. And I'm a bit afraid of that executive block. Seems like a big brother. But maybe it's supposed to be that way. I guess you have to know who rules and who fools. It may be a serious business.

Every A(G)I block diagram ever features a magic box.
The executive block is Acuitas' magic box.
:D
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 22, 2021, 10:57:56 pm
Quote
Every A(G)I block diagram ever features a magic box.
The executive block is Acuitas' magic box.
:D

LOL! It's real code that is in the design now though, so perhaps not too magical. The Executive is the thing determining "What should I do next?" and passing commands to the Action Bank. It runs the decision loop and reacts to thoughts/stimulus pulled out of the Stream.

Quote
What is the difference between procedural and semantic memory? And how does episodic memory fit into the system?

The semantic memory is for basic facts: category membership, properties that objects have, stuff like that. It also includes the lexicon (memory for known words).

Procedural memory is for, well, procedures: lists of steps that have proven effective in accomplishing some task. But that part isn't implemented yet. All that falls under procedural memory right now is the cause-and-effect relationship database.

Episodic memory is autobiographical memory of events.
Title: Re: Project Acuitas
Post by: infurl on December 24, 2021, 12:12:44 am
Procedural memory is for, well, procedures: lists of steps that have proven effective in accomplishing some task. But that part isn't implemented yet. All that falls under procedural memory right now is the cause-and-effect relationship database.

If you search for "Parameterized Action Representation for Virtual Human Agents" you will find some interesting papers and software that might help you with this task.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 01, 2022, 04:08:46 pm
Acuitas Diary #45 (January 2022)

I mentioned last October that I had started reworking the Text Parser to add support for branching and coordinating conjunctions. My objective for this month was to finish that. It proved to be a little too ambitious, but I did get the output format altered to be branch-friendly, and the parser now supports two-part branches in a number of key places. Compound subjects, verbs, direct objects, adjectives, and adverbs, branching before or after the verb, and compound sentences with two fully separate independent clauses can all be processed now. What's missing? Compound prepositions, objects of prepositions, nouns of direct address, dependent clauses, and things inside clauses ... as well as comma-separated lists, larger compounds than two, nested compounds, and probably some other stuff I haven't even thought of yet.

Even though I pared down what I wanted to accomplish, it was still a rush to get that much done, make sure all the old parser functionality was compatible with the new features, and update the visualizer to work with the new output format. Included in this was an update of the Out of the Dark benchmark data, and now I can correctly diagram a sentence that was previously in the "unparseable" category:

(https://blogger.googleusercontent.com/img/a/AVvXsEgAHOzLz8NxCzunvnCXgkNgCvjAP2p7mO-sNasa2X_hIgVg_kjNQVg0lDFLd_-U3dtoc8IyvS16rVpbkUpuNXZmwqMvKRobEzKuYmLy6_by8QfQRLIC72ZJjbRC4kBBYuGLp4WuEXnokHASf3ljyz3Qz_d0n8U25UXet-DzeA2o-0JHRqLxGPujegX8=s864)

Just the one example doesn't give a good sense of what the Parser can do now, but maybe I can give some more expansive results next month, when I'm not as pressed for time.

More long-winded version on the blog: https://writerofminds.blogspot.com/2022/01/acuitas-diary-45-january-2022.html
Title: Re: Project Acuitas
Post by: MagnusWootton on February 02, 2022, 05:01:39 am
Have u got it chatting yet?  That "word belongance" looks correct.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 03, 2022, 08:16:23 am
Acuitas has been able to hold rudimentary conversations for a long time. Was there something specific in the chatting vein that you were thinking of?
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 20, 2022, 09:55:09 pm
Today I have results from another parser benchmark. I recommend dropping by the blog for this one, as there are a lot of pictures: https://writerofminds.blogspot.com/2022/02/acuitas-diary-46-february-2022.html

Quick version: This month I completed Part II of the Great Conjunction Upgrade. Since the output format of the Parser had become more expressive, I had to upgrade the Interpreter, the Conversation Engine, and the Narrative Engine to accept it, and to process the compounds appropriately.

The Parser tags each word with its part of speech and role in the sentence (subject, direct object, etc.). It provides a summary of the sentence structure. The Interpreter uses this information to detect the next layer of meaning: what is this sentence trying to say? E.g. is it a statement, question, or command? Does it describe a category membership, a state of being, an event, a desire? The Interpreter consumes a sentence structure and emits a more abstract knowledge representation, the "gist" of the sentence, if you will.

I redesigned the Interpreter to expand all compound sentence parts into full representations. For example, given "Jack and Jill eat beans," the Interpreter will output something akin to {AND, ["Jack->eat->beans", "Jill->eat->beans"]} ... as opposed to "{AND, [Jack,Jill]}->eat->beans". This simplifies downstream processing, since I can just loop over the list of complete atomic facts, instead of modifying all the inference tools and other machinery to handle the bewildering variety of possible sentence branches.

That upgraded the formatting at the Interpreter output as well, so the CE and NE had to be adapted as well. I did a quick-and-dirty job on the CE; it will accept the new format so as to maintain previous functionality, but it ignores anything beyond the first entry in a compound output. I put my efforts into the NE. It will process all facts from a compound, though it is not yet capable of handling multiple/nested compounds in a sentence, and it doesn't grasp the meaning of OR. Despite all those caveats, I was able to try adding conjunctions to an existing story, and it sounds a lot more natural now.

Now for some performance assessment! I reformatted my benchmark test sets and ran them through the new Parser. You can read more about the test sets in a previous post, but here's a quick review: the text is drawn from two real children's books: The Magic School Bus: Inside the Earth, and Out of the Dark. Sentences that contain quotations have been broken in two, and abbreviations have been expanded. When a test is run, each sentence from the test set is parsed, and the output data structure is compared to a "golden" example (supplied by me) that expresses a correct way of interpreting the sentence structure. There are four categories in the results:

CORRECT: The Parser's output matched the golden example.
INCORRECT: The Parser's output did not match the golden example.
UNPARSED: No golden example was supplied for this sentence, because it contains grammar features the Parser simply does not support yet. However, the Parser did process it and generate an (incorrect) output without crashing.
CRASHED: Oh dear, the Parser threw an exception and never generated an output. Happily, membership in this category is zero at the moment.

Adding coordinating conjunction support to the Parser moved 10 sentences in the Out of the Dark test set out of the UNPARSED category, and moved 7 sentences in the Inside the Earth set out of UNPARSED. In both cases the majority of the newly parsed sentences went into CORRECT, although some had ambiguities or other quirks which the Parser cannot yet resolve.

(https://blogger.googleusercontent.com/img/a/AVvXsEjfyCe0X1GDwIfTX12Z1NejDccpcdRKC0Ly_RYGNF_T3MkNVzbX2WGWnLBBA8IkpR0u7iQcbiS9IRRpUlY2Wni4CVVmxM1BPczWfnfuhlMA39sPuJ_lGeG2LBp0JKeLeJY9ZBP3-X3csHLInlOqNIxEb8VV8H2SU-ee8ma4qXRoBnSxPNudbadS-7Om=s994)   (https://blogger.googleusercontent.com/img/a/AVvXsEg0gTbNhegGQqyocrYRp0EkgcUz5v-CVTG9z6uJXyO4XwZD70X_eatC93wcba2HAmFzf2dHPE_hMdWh063Mn_DJw0TCm9g2XN4Up0l-dcxBFjqDIS_UWmES-V0rwkTaidIuZvWaXUKL6VUzz5FvnRboKMFfzPL78L7xDaYkMLUkEye9N2Ve3llQjs21=s1001)
Title: Re: Project Acuitas
Post by: MagnusWootton on February 20, 2022, 10:40:16 pm
Jack and Jill eat beans.

Jack OR Jill eat beans.   <-more like it.

But and and or dont get explicity reffered to
Maybe when we say and, we mean put both jack and jill in that eating beans group/bucket,  then it is AND,  that they both OR.
its in another context, and thats what robots cant understand...

If you want to see the or bucket, you will see Both, Jack and Jill in there,  even tho jack or jill is in there, when u seeing them actually doing it.
 8)
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 20, 2022, 11:17:21 pm
Sorry Magnus, I'm confused ... not sure what you're trying to say there.

The meaning of English "or" is not the same as Boolean "or" ... it's typically more like Boolean "exclusive or." If you wanted to express Boolean "or" you would probably have to say something like "At least one of Jack and Jill eats beans," "One or both of Jack and Jill eat beans," "Jack or Jill or both eat beans," etc.

English "and" is roughly identical to Boolean "and," though. "Jack and Jill eat beans" means that both of them definitely eat beans. Possibly even at the same time, though whether they do it at the same time or not is irrelevant for purposes of this sentence.
Title: Re: Project Acuitas
Post by: MagnusWootton on February 21, 2022, 01:53:19 am
yes ur right,  we say or instead of xor.   :)
boolean and and boolean or are interchangable when u say and.


Would love to see the parser in action!! with some kind of reaction from the machine actually using it.
Title: Re: Project Acuitas
Post by: HS on February 21, 2022, 03:22:37 am
It's nice to see the quantifiable progress on the graphs and charts! Your email generation test and the apparent diminishing returns from adding more parameters to the models got me thinking. Since Acuitas is learning to apply logic to text, could he eventually enhance the outputs of those text generators to a greater degree than additional parameters?
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 21, 2022, 04:27:28 pm
Quote
Would love to see the parser in action!! with some kind of reaction from the machine actually using it.

Well there's the old narrative demo. I haven't made a demo video with the new parser, but it would be roughly the same, just with more complex sentences. https://www.youtube.com/watch?v=PYEi0XkMJXw&t=1s

Quote
Since Acuitas is learning to apply logic to text, could he eventually enhance the outputs of those text generators to a greater degree than additional parameters?

Maybe. I could imagine doing a filtering operation to extract the gist of an existing piece of text and overwrite any incorrect facts. But having him generate his own text might be easier? Someday!
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 22, 2022, 03:19:38 am
I added a poll to the thread, because I thought it would be fun to let people help decide what I add to the Parser next (though I won't be coming back to it for a while). There is a parallel poll running on Twitter; I will combine the results from both.
Title: Re: Project Acuitas
Post by: MagnusWootton on February 23, 2022, 05:59:12 am
That's actually the first time I've seen your Ai in action, it was very impressing Jenny-Sue!   
Probably slightly better than what I saw Korrellan demonstrate, and I already liked his stuff.

One thing I'm a little skeptical about is how the Ai deals with poor inputs, without being super friendly to it does it still manage??.  I imagine there might be a cool (easier) way to do that with adding another tier to it, some beginning filter separate from the system which clears it up before it gets to the main "database" or whatever it is, (some kind of transforming simplifier, which gets all the red herrings out of it.)   Maybe its not the best way to do it but might get some good results there for something quickish to add to it without changing the rest if u wanted to put it on the front line and get it going in real life. :)

Thanks for showing demo,  was fun to hear the Ai sort through the sentences, its very rare to see people take this as far as this is going here.

About the poll,  I didn't even understand the first 3 options, so I just picked the 4th cause its all I understood.
Maybe we need to know more before we vote?
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 23, 2022, 03:31:52 pm
I know that not everybody knows grammar jargon, so honestly I'm fine with people just voting for the option that sounds the coolest. But here's a quick rundown:

Gerunds are verbs used as nouns, as in "Swimming is something I love." Participles are verbs used as adjectives, as in "Panting, I slowed down and approached the door," or "A land destroyed does not prosper," or "There was a man called Dan."

Indirect objects are not the object of the verb, but are something else affected by it or show which direction it goes, as in "I gave Bob the book," "I took Sarah the soup," "I gave the ball a kick."

Adjective clauses are whole subject/verb/object groups used as adjectives. "There was a man who was called Dan." "The book which you need is on the shelf."

The poll allows vote changes so feel free. And either way thank you for breaking the tie :)
Title: Re: Project Acuitas
Post by: MagnusWootton on February 24, 2022, 04:36:09 am
Understood. Thanks for the quick english lesson!  It wasn't my best subject.
So if you understand the laws of language, then an Ai falls out of it.  A NLP Ai.
Thanks for the demo and opening my eyes to it.

The one I'm doing is the laws of physics!!! and Ai falls out of that as well, a sport robot. I bet u could probably team them up together,   one handles the spacial topo and one handles the symbolic topo.

Hope u go all the way to AGI and get a million dollars.
Title: Re: Project Acuitas
Post by: Don Patrick on February 26, 2022, 08:20:58 am
The poll is closed, but I would choose either gerunds or strictly distinguishing the meanings of the word "that", without yet processing the actual contents of relative clauses.
Gerunds, because when I worked on those I had to restructure my language model and grammar rules, and if you suspect that might be necessary, such things are better prepared sooner than fixed later. I did not have to make many adjustments to handle indirect objects, they're really just a type of objects with different indicators, and they usually only offer tangential information to the main clause, so I would consider indirect objects a relatively light task that can be postponed. Processing relative clauses is something that I still consider one of the most advanced challenges, but just distinguishing whether the word "that" is meant as a stand-alone reference, or part of a noun phrase, or the start of a relative clause, goes a long way towards correct interpretations. And participles are pretty rare in conversational interaction, perhaps more common in literature though.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 26, 2022, 04:07:28 pm
The votes are in! Combining the results from here and the parallel poll on Twitter, we have "The many uses of 'that'" as the winner with 5 total votes. Adjective clauses got 3 votes, G&P got 2 (or 3 at most if I apply Don Patrick's vote there), and nobody loves indirect objects.

Given the current state of the Parser, I don't expect gerunds to be too difficult. I already handle infinitive phrases, and I expect to be able to repurpose a lot of that structure for gerund phrases.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 25, 2022, 03:41:18 pm
This month I went back to put some more polish on the goal system. Goals were first introduced in September 2019, and I added the ability to model other agents' goals in February 2020. (Memories from the Before Time, wow.) Some primitive moral reasoning for resolution of conflicts between the goals of multiple people was added in October 2020. Goals are crucial for story comprehension, since a story is generally about agents trying to attain some goal. They also underpin various goal-seeking behaviors that Acuitas has.

As mentioned, modeling of other agents' goals was in there ... but it had so many problems that I wasn't really using it. You could tell Acuitas about goals by saying "I want ..." or "so-and-so wants ...," and the goal you described would be stored in the file for the given person. But there was no way to describe the goals' relative importance, which is vital for some of the goal-related reasoning Acuitas does. You also basically had to teach him either a complete goal list for each agent, or nothing at all. If Acuitas knew nothing about the goals of some entity, he would assume its goals were just like his (using himself as the best accessible analogy for other minds). And this usually worked well enough, since Acuitas has several very common goals: survive, maintain function, be comfortable, etc. But if you taught him just *one* goal for some other agent, then Acuitas would rely entirely on their custom goal list ... and end up treating that one goal as their *only* goal.

So this month I set out to fix all that. The functions that retrieve an agent's goal model now merge it with the default goal list; the agent's model takes precedence wherever there are differences, but info from the default model is still included to fill any gaps. I also added the ability for class members to inherit and override the goal model of their parent class. E.g. you could teach Acuitas some "generic human" goals, then just specify how any given human differs from the norm.

To enable teaching goal priority, I had to make sure the text interpretation would handle sentences like "Joshua wants to live more than Joshua wants to eat candy." Adverb clauses were already supported; I just needed a small tweak to the Parser to support compound connectors like "more than" and "less than," and some other enhancements through the rest of the text processing chain to make sure the information in the adverb clause was picked up and transferred to memory.

Yet another wrinkle I decided I needed to manage was goal duration ambiguity. You might recall that I already tangled with this when trying to distinguish short-term from long-term states. Well, goals have a similar issue. If I say that I "want to eat a taco," that doesn't mean eating tacos is one of my fundamental goals. I don't want to be doing it constantly for all time. (I really don't.) Eating a taco is a mere subgoal (possibly of "survive" or "be comfortable" or "enjoy pleasure" or all three) and is very ephemeral; as soon as I've eaten one, I'll stop wanting to eat, until some unspecified time when the urge hits me again.

Since it's hard to know the difference between a persistent fundamental goal, and an ephemeral subgoal, without a lot of background knowledge that Acuitas doesn't have yet ... he'll ask a followup question. I settled on "Is that generally true?" but I wonder if that makes it intuitively clear what's being asked. If you were talking to him, told him "I want ..." and were asked "Is that generally true," do you think you'd get the thrust of the question? What might be a better wording?

The last thing I threw in was an enhancement to the Moral Reasoning model, for detection of "perverse" goals. By this I mean any goal that would seem (by the reasoner's standards) to be fundamentally ridiculous or evil. It's a goal that, when seen in others, the reasoner does not respect. (See blog for more on this: https://writerofminds.blogspot.com/2022/03/acuitas-diary-47.html)

It was a lot of little stuff this month, but now that I write it all down, maybe it adds up to quite a bit of progress.
Title: Re: Project Acuitas
Post by: LOCKSUIT on March 25, 2022, 04:09:18 pm
Good work so far. For my understanding of goals when it comes to AI, I'm thinking one would mark certain words in its brain so that it loves those words (so it's like GPT-3 but you make it more often say unlikely words with much higher probability, by however much more you want it to say them). Words like "work on improving myself", "humans", "cars", "survival", "houses", and so on.

But the thing is, many of these words are already common in 40GBs of text off the internet, because humans usually love food, survival, cars, etc, so AI will say them often too. There is one problem here though, not everyone says the word AI much, and I want most AIs to work on thinking about AI all day. That's where control comes in handy.

Learning new goals is just by being seen near the words or objects you often think about. Or being related to them ex. if both eat, run, and are yellow.

To get bored of saying these words happens on its own, it says food 60% the time, cars 20% the time, and so on....it gives each a probable break therefore, you can just see this with GPT-3 in fact by just using it. Of course if it has-to eat chips, then it will have to think about it until gets them enough. In this case some goals are thought of more ex. food often, sometimes cars...so you are usually looking for food during today while not got them yet. But if you get too hungry, then no matter how much you already thought of food in the past 10 mins, you think about it prolonged even more, not giving cars its go of thoughts now. So here you have the goal word changing its "own" probability by installed bias from birth. Can this happen for cars? AI? Yes and we do it. If I have not yet made AGI for example and I know of a deadline I have, I might think about it even more, and if that doesn't work in time or I know it won't anyway, I change some other goal that is get-able to be higher on thought alert and hence the harder one is lower now, so that I get something. I do that by reasoning though, that some other thing is a better goal and is closer to my original word in the end that I missed earlier in my thinking.
Title: Re: Project Acuitas
Post by: MagnusWootton on March 25, 2022, 05:21:37 pm
You work is really cool,  did u come up with that parser tree yourself? it looks very polished concept.
If you were running the AI class in school the kids are all really lucky.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 26, 2022, 05:47:08 pm
@Magnus: The text parser is all my own work, yes. Thank you!

@LOCKSUIT: In addition to word frequency, aren't you going to need some kind of sentiment determination? If people rarely talk about a thing, then yes, they're probably indifferent to it. But if they talk about it a lot, they may either love it or hate it. For example, due to my having a bout of peripheral neuropathy last year, my use of the word "neuropathy" has increased considerably. That doesn't mean I want it to happen again.

In the last paragraph I think you're talking about subgoal priority adjustment based on urgency. That is a good and needed feature.
Title: Re: Project Acuitas
Post by: LOCKSUIT on March 27, 2022, 03:13:06 am
> But if they talk about it a lot, they may either love it or hate it.

Yes, this is supposed to be that way, we talk lots about good and bad things. One may think maybe bad things should get lower probability of being said, this is true, as they (bad taste/ pain/ ruined car) cannot have the same effect of being spoken lots, which also causes them being shared to others, being researched, and being brought to life (aside from temporary or small experiments that DO peek at them). The reason we seem to talk about bad things (nearly all of them) is because the thoughts are actually "darn that ageing"/ "how can I stop ageing"/ etc. If you think lots of "ageing", then it's probably that you made a To-Do task (like you store in memory daily tasks as single words on a busy day to help you remember them: car, hire, brush, suit, baby, shop, makebed, xray) and have a secondary goal that adds a "how will I solve this" to the belief X ex. "ageing".


> due to my having a bout of peripheral neuropathy last year, my use of the word "neuropathy" has increased considerably. That doesn't mean I want it to happen again.

Like GPT-3, its word/token frequency over its whole life is basically stored into the network's connections's strengths. (Same for us). So unless you truly go on for years saying this new neuro word or say it half the day for 6 months, it won't be said that much. But actually that's not why you start saying the word more than you used to. The initiator, if not reading books upon books on the internet learning better what words are common and which are not, is some reward by some X strength, and this is given over to the word neuropathy if it relates to food by some X amount or whatever is goal you currently hold. Even school grades, as an obvious goal. This is mostly for now permanent, unless the class ends the course or you solve and see the goal. Something like that...


> In addition to word frequency, aren't you going to need some kind of sentiment determination?

Marking root goals at birth is easy, aside from the goals in 40GBs of text already there (word frequency means word X is common goal, ex. food is common word likely). To learn more or figure out what someone's phrase means as a goal, is just by matching. The born with/ innate goals are already clear.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 28, 2022, 08:53:38 am
The topic of the month was more "theory of mind" material: specifically, modeling the knowledge of other minds, and considering its implications for their scope of available actions.

I focused the new features around the Narrative reasoning module (though the Executive will end up using some of the same tools to model the real minds of people Acuitas talks to, eventually). The most basic step was to add storage for facts about characters' knowledge in the Narrative tracker.

It would be impractical to explicitly list everything that every character in a story knows - both because that's a massive amount of information, and because another person's knowledge is private to them, and not all details can be inferred. So the model is intended to be sparse. Characters are presumed to know all the background knowledge in Acuitas' own semantic database, and to know what they need to know to accomplish their goals. Facts are only listed as known or not known if 1) the story states outright that a character knows or doesn't know something, or 2) presence or absence of knowledge can be inferred from some event in the story. For example, if a character finds an object, this implies that the character now knows the object's location.

I also had to extend the action prerequisites system a bit, so that it could handle nested relationships. Previously, I could teach Acuitas something like this:

To eat a food, an agent must have the food.
To run, an agent must be alive.

And now I can set up prerequisites like this:

To get an object, an agent must be where the object is.

"Where the object is" is recognized as a relational fact (<object> is_at <location-wildcard>). The Narrative engine further recognizes that if any relational facts that are components of a goal's prerequisite are not known, the prerequisite is blocked ... the character cannot directly arrange to fulfill it. This paves the way for knowledge-gathering to become a subgoal in its own right.

Putting all this together, we can craft a story about someone looking for an object in an unknown location. My test story is based on King's Quest I, with some liberties (in the game, I don't think you can actually ask the bridge troll where the chest is).

Here's a breakdown of the story with significant tracking that happens at each step. There is a LOT more that needs to happen here. For example, seeking should be understood as an attempt at a solution to the lack of knowledge, repeated failures to find the chest should raise the Suspense variable, the troll's lie should generate a possible false belief in the knowledge model, etc. But it is, as usual, Good Enough for Now.

0:"Graham was a knight."
   A knight is recognized as a type of agent; Graham is tracked as a character in the story.
1:"Graham served a king."
   The king is now tracked as a character also.
2:"The king wanted the Chest of Gold."
   This line sets up a character goal for the king: he wants to have the chest, which is now tracked as an object
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
   Processing of the first clause enters getting the chest as a goal for Graham. Processing of the second clause updates his knowledge model with his lack of knowledge of the chest's location, and notes that the goal just created is now "thwarted."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
   Graham's new location should be inferred when he moves, but these sentences don't do too much else for now.
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
   Awkward wording because the Parser doesn't do indirect objects yet!
10:"The troll told to Graham that the chest was at the gingerbread house."
   My Twitter followers (and you here on the forum) didn't vote for me to work on IOs next, so we'll be stuck with this for a while.
11:"Graham went to the gingerbread house, but Graham did not find the chest."
12:"A witch was at the gingerbread house."
   Another agent! What's she gonna do?
13:"The witch wanted to eat Graham."
   This gets registered as a *bad* character goal - see previous story about Odysseus and the Cyclops.
14:"Graham ran and the witch could not catch Graham."
   Failure of the bad goal is inferred. Yay.
15:"Finally Graham went to the Land of the Clouds."
16:"In the Land of the Clouds, Graham found the chest."
   Graham knows where the chest is! The knowledge model gets updated accordingly. We can also unblock that goal now.
17:"Graham got the chest and gave the chest to the king."
   And both the story's positive character goals are solved in one fell swoop.
18:"The end."

Next month I plan to keep extending this. Information transfer needs to be modeled, misinformation needs to be understood, and all this needs to start getting applied in the Executive.
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 25, 2022, 05:02:35 am
As I hinted last month, the goal this time was to keep expanding on knowledge modeling. There were still a lot of things I needed to do with the King's Quest I derivative story, and many different directions I could branch out in. I ended up picking three of them.

The first one was "use a character's knowledge model when making predictions of their behavior." One long-standing feature of the Narrative module has Acuitas run his own problem-solving search whenever a new Problem is logged for a character, to see if he can guess what they're going to do about it. The search utilizes not only "common knowledge" facts in Acuitas' own databases, but also facts from the story domain that have been collected on the Narrative scratchboard. Using the knowledge models was a relatively simple extension of this feature: when running problem-solving for a specific character, input the general facts from the story domain AND the things this character knows - or thinks to know (more on that later).

But before I could do that, I quickly found out that I needed a little more work on Problems. I needed any thwarted prerequisite on a Subgoal to become a new Problem: both for a more thorough understanding of the story, and to get the solution prediction to run. The King's Quest story was almost too complicated to work with, so to make sure I got this right, I invented the simplest bodily-needs story I could think of: an animal has to go to the water hole. And I made multiple versions of it, to gradually work up to where I was going.

(Long-winded notes about this story and all its variations are on the blog: https://writerofminds.blogspot.com/2022/05/acuitas-diary-49-may-2022.html)

0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"Altan decided to get water."
5:"Altan knew that there was water in the valley."
6:"Altan went to the valley."
7:"Altan found water in the valley."
8:"Altan got water."
9:"Altan drank the water."
10:"The end."

The point here is to replace a blunt statement of fact with a statement about Altan's *knowledge* of that fact, entering it in his knowledge model rather than the main scratchboard. Then derive the same results: 1) Altan has a location problem and 2) he will probably solve it by going to the valley.

Lack of knowledge is important here too; if we are explicitly told that there is water in the valley but Altan *doesn't* know this, then the scratchboard fact "water is in the valley" should be canceled out and unavailable when we're figuring out what Altan might do. I didn't even implement this half of it. It should be easy enough to add later - there was just too much to do, and I forgot.

The second thing was to bring in the idea of knowledge uncertainty, and the possibility of being mistaken. So I converted the "facts" in the knowledge model into more generic propositions, with two new properties attached: 1) is it true (by comparison with facts stated by the story's omniscient narrator), and 2) does the agent believe it? For now, these have ternary values ("yes," "no," and "unknown").

Truth is determined by checking the belief against the facts on the Narrative scratchboard, as noted. Belief level can be updated by including "<agent> believed that <proposition>" or "<agent> didn't believe that <proposition>" statements in the story. Belief can also be modified by perception, so sentences such as "<agent> saw that <proposition>" or "<agent> observed that <proposition>" will set belief in <proposition> to a yes, or belief in its inverse to a no.

For the third update, I wanted to get knowledge transfer working. So if Agent A tells Agent B a <proposition>, that propagates a belief in <proposition> into Agent B's knowledge model. Agent B's confidence level in this proposition is initially unknown to the Narrative, but again, this can be updated with "belief" statements. So now we're ready to go back to a slightly modified version of the "Search for the Chest" story:

0:"Graham was a knight."
1:"Graham served a king."
2:"The king wanted the Chest of Gold."
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
10:"The troll didn't know where the chest was."
11:"The troll told to Graham that the chest was at the gingerbread house."
12:"Graham believed that the chest was at the gingerbread house."
13:"Graham went to the gingerbread house."
14:"Graham saw that the chest was not at the gingerbread house."
15:"A witch was at the gingerbread house."
16:"The witch wanted to eat Graham."
17:"Graham ran and the witch could not catch Graham."
18:"Finally Graham went to the Land of the Clouds."
19:"In the Land of the Clouds, Graham found the chest."
20:"Graham got the chest and gave the chest to the king."
21:"The end."

The ultimate goal of all this month's upgrades was to start figuring out how lying works. If that seems like a sordid topic - well, there's a story I want to introduce that really needs it. Both villains are telling a Big Lie and that's almost the whole point. Getting back to the current story: now Line 11 actually does something. The "tell" statement means that the proposition "chest <is_at> gingerbread house" has been communicated to Graham and goes into his knowledge model. At this point, Acuitas will happily predict that Graham will try going to the gingerbread house. (Whether Graham believes the troll is unclear, but the possibility that he believes is enough to provoke this guess.) On Line 12, we learn that Graham does believe the troll and his knowledge model is updated accordingly. But on Line 14, he finds out for himself that what the troll told him was untrue, and his belief level for that statement is switched to "no."

The story never explicitly says that the troll lied, though. Can we infer that? Yes - from a combination of Lines 10 and 11. If an agent claims something while not believing it, that's a lie. Since the troll doesn't know where the chest is, he's just making stuff up here (replacing Line 10 with "The troll knew that the chest was not at the gingerbread house" also works; that's even more definitely a lie). To get the Narrative module to generate this inference, I had to put in sort of a ... complex verb definition detector. "If <agent> did <action> under <circumstances>, then <agent> <verb>." We've got enough modeling now that the Narrative module can read this story, see that the troll told somebody else a proposition that was marked as a non-belief in the troll's knowledge module, and spit out the implication "The troll lied."

Again, blog link for a little more: https://writerofminds.blogspot.com/2022/05/acuitas-diary-49-may-2022.html
Title: Re: Project Acuitas
Post by: chattable on May 25, 2022, 07:33:18 am
what type of database does acuitas use?
is it free to use?
i am trying to make a chatbot that can roleplay eating ,drinking,emotions ,being touched and being self aware.
plus i want it to reflect on it's own thought processes without input.
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 25, 2022, 03:36:18 pm
what type of database does acuitas use?
is it free to use?

Acuitas is mostly custom closed-source code, and that includes the database. I think the Kivy GUI and the speech synthesizer are the only major third-party tools I've got in there.
Title: Re: Project Acuitas
Post by: chattable on May 25, 2022, 04:34:48 pm
i found out a way rerun a python program so that my chatbot can think about the stuff it knows and does not know.
then ask questions about what it does not know.
i just need a free database other sqllite.
i think that is kind of limited for a chatbot that can learn.
you can only use 250 characters per line.
Title: Re: Project Acuitas
Post by: infurl on May 26, 2022, 01:52:41 am
i just need a free database other sqllite.

postgresql.org (http://postgresql.org) is the only sensible choice for a DBMS nowadays.

It is faster, more robust, and has more features than any other, including expensive commercial systems.

Regarding your particular issue with Sqlite, PostgreSQL supports ordinary database rows up to 2GB in size although I'd recommend not going above 1GB. It also supports "binary large objects" up to 4GB in size which are represented by a token and which you can stream out of the database. This data type is ideal for storing full length movies for example.
Title: Re: Project Acuitas
Post by: MagnusWootton on May 26, 2022, 04:50:59 am
what type of database does acuitas use?
is it free to use?

Acuitas is mostly custom closed-source code, and that includes the database. I think the Kivy GUI and the speech synthesizer are the only major third-party tools I've got in there.

that would be the most optimium for performance.
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 30, 2022, 03:31:33 pm
I've finished implementing the new parser feature from the poll: it now supports multiple senses of the word "that"!

The only usage the Acuitas Parser originally supported was "subordinating conjunction," because that was the one I found most immediately useful. Setting up the simpler pronoun, adjective, and adverb uses was pretty easy. The hard part was adding those while keeping recognition of "that" as a subordinating conjunction intact. The Parser now defaults to viewing "that" as a simple pronoun, and has to see the correct special features in the sentence as a whole to treat it as a conjunction instead, and to allocate the words following it to a dependent clause.

The only usage that's not supported yet is the relative pronoun one ... because the text processing chain doesn't really handle adjective clauses yet, period. I'll hold off on this final option until I get those set up.

As part of this upgrade, I also wanted to work on supporting sentences in which "that" is implied rather than written.

I realized something was wrong.
Jesse knew Sarah was in the closet.
They will tell Jonathan his business prospers.

Can you see where the "that" would belong? Omitting it is grammatically acceptable. The tricky part is figuring out that a dependent clause is being opened without having a functional word like "that" to guide you. If you're reading strictly left to right, you probably won't know until you get to the second verb - because all of the following are correct as well, and are quite different in both structure and meaning:

I realized something.
Jesse knew Sarah.
They will tell Jonathan his business.

In short, omission of "that" forces the Parser to make do with less explicit information. Getting this part to work - and play nice with everything else the parser does - was probably the most difficult aspect of the upgrade.

Less concise version on the blog: https://writerofminds.blogspot.com/2022/06/acuitas-diary-50-june-2022.html
Title: Re: Project Acuitas
Post by: MikeB on July 01, 2022, 09:33:41 am
You call people/things "that", or as an optional word that quantifies it?
"This/that/those/these dialed the telephone" or
"This/that/those/these people dialed the telephone..."

What do you do with articles/quantifiers?
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 01, 2022, 07:43:54 pm
You call people/things "that", or as an optional word that quantifies it?
"This/that/those/these dialed the telephone" or
"This/that/those/these people dialed the telephone..."

Both. Using this/that/those/these as the personal subject of an action verb is probably a little uncommon or awkward; I wouldn't say "That dialed the telephone." But you can still find examples, such as this one from the NAS Bible:

"Can a woman forget her nursing child and have no compassion on the son of her womb? Even these may forget, but I will not forget you."

This/that/these/those still appear as the pronoun subject of the sentence plenty often in other contexts. For instance:

This is my friend Andrew.
That is a dog.
Those belong on the shelf.
These aren't useful.

You might see this/that/these/those as the subject of the sentence when they refer to either previous sentences in the conversation, or general situations:

That is incorrect. [I.e. "what you just said is incorrect"]
That doesn't make me happy.
This is not a good time.

What do you do with articles/quantifiers?

I'm not sure what you're asking here. I treat them as adjectives that modify the noun they're attached to. I also pick up some additional information from them to help the parser (articles are *always* adjectives, which is handy) and to tag words as count nouns rather than bulk nouns.
Title: Re: Project Acuitas
Post by: infurl on July 02, 2022, 01:29:13 am
What do you do with articles/quantifiers?
I'm not sure what you're asking here. I treat them as adjectives that modify the noun they're attached to. I also pick up some additional information from them to help the parser (articles are *always* adjectives, which is handy) and to tag words as count nouns rather than bulk nouns.
Nowadays articles and quantifiers are placed in the determinative grammatical category which normally function as determiners. This is quite distinct from members of the adjective grammatical category which normally function as modifiers.

https://en.wikipedia.org/wiki/Determiner
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 02, 2022, 02:51:00 am
That's new to me. I can't remember ever hearing of a determiner as a distinct part of speech before. My grammar education lumped them in with the adjectives and treated determination as a subset of modification (answering the question "which one?", "how many/much?" or "whose?" about the noun).
Title: Re: Project Acuitas
Post by: infurl on July 02, 2022, 03:01:39 am
That's new to me. I can't remember ever hearing of a determiner as a distinct part of speech before. My grammar education lumped them in with the adjectives and treated determination as a subset of modification (answering the question "which one?", "how many/much?" or "whose?" about the noun).
English grammar is a field that has been advancing rapidly in the past few decades with many innovations that make parsing and analysis more useful to attempt. Notice the modern distinction between form (grammatical category or what used to be called "part of speech") and function. For example, running is a verb but in the sentence "Running is fun." it functions as a noun phrase and in the sentence "The running water filled the bucket." it functions as an adjectival phrase. In both sentences it is still a verb, but its function differs from its usual use in a predicate.
Title: Re: Project Acuitas
Post by: MikeB on July 02, 2022, 10:09:52 am
What do you do with articles/quantifiers?
I'm not sure what you're asking here. I treat them as adjectives that modify the noun they're attached to. I also pick up some additional information from them to help the parser (articles are *always* adjectives, which is handy) and to tag words as count nouns rather than bulk nouns.
Nowadays articles and quantifiers are placed in the determinative grammatical category which normally function as determiners. This is quite distinct from members of the adjective grammatical category which normally function as modifiers.

https://en.wikipedia.org/wiki/Determiner

In my current model I have Article/Quantifies, Person/Agent, Question/Interrogative in three separate groups. The first two are further split into three, and the last is split into five. 11 total. This came about due to the need to encode a past/present/future (or precise/optimistic/explaining) intention into each word, so when compressed with similar words, the word is lost but the meaning isn't...

For these, as "this/that/these/those" act as a precise Article/Quantifier only in my model, it assumes there is a missing word:
That <noun/verb> is incorrect.
That <noun/verb> doesn't make me happy.
This <noun/verb> is not a good time.

What ever works. For actually coding though I don't know how you can lump them all in one category. There's micro intentions in each sub category.

In my model "running" is hard fixed as a verb, and pattern sentences account for "<article> <verb> <noun>..."
Title: Re: Project Acuitas
Post by: MagnusWootton on July 02, 2022, 10:59:06 am
top work here,   i can see alot of potential with this symbolic a.i.,   and when quantum machine learning ends up with huge intelligence defecits this style may actually be better and take alot less horsepower to get going.
Title: Re: Project Acuitas
Post by: MagnusWootton on July 16, 2022, 03:29:44 pm
Coming from the standpoint of NLP,  I give the robot so much text and it uses it as its database.  (natural english database.)

Does your robot utilize questions inside of it as well as statements?

Id think that only statements would be useable, a question kind of a useless thing because its not knowing anything,  I wonder if it is useful at all to have a question implanted inside it.

Or your robot doesnt actually use NLP directly,  it uses your awesome graphs that it converts it to,  but do these graphs support questions?
I guess they do,    but questions actually arent used as a source of information are they?
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 16, 2022, 10:02:12 pm
In the internal data representation, a statement and a yes-no question are actually the same, and an open question is almost the same, with a wildcard in the place of one of the concepts.

The only real difference is how they get used. Questions from a conversation partner prompt information retrieval, so they can be answered. Questions generated by Acuitas (yes, there are internally generated questions) are later asked of conversation partners in order to gain more knowledge.
Title: Re: Project Acuitas
Post by: Don Patrick on July 17, 2022, 08:44:17 am
Questions can also contain implicit facts or serve as hypotheses when you can chop sentences up into semantic components. For instance, if people ask "is the sun blue or yellow?", then presumably one of the two is true. Supposing a program already knows that it's not blue, the "or yellow" provides a presumable fact. Some open questions like "when", "why" and "how - does a robot talk" contain a premise that is already assumed to be true by the person asking, and thus contain as much information as a statement.
Title: Re: Project Acuitas
Post by: MagnusWootton on July 17, 2022, 11:51:02 am
I understand thankyou.

So when I asked the question "can a question be treated as stating things?"

It is information that possibly it could be?  But it could be completely meaningless as well, just like saying could all the time when you speak, it hasnt designated anything completely, its all still a mystery but there were words that were said still.

Maybe if your machine has a function for the word "could"   goes into a hypothical store of possibles.
Title: Re: Project Acuitas
Post by: chattable on July 18, 2022, 09:23:52 am
I understand thankyou.

So when I asked the question "can a question be treated as stating things?"

It is information that possibly it could be?  But it could be completely meaningless as well, just like saying could all the time when you speak, it hasnt designated anything completely, its all still a mystery but there were words that were said still.

Maybe if your machine has a function for the word "could"   goes into a hypothical store of possibles.
words have meaning it just guesses about what you are trying to say.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 18, 2022, 06:14:29 pm
A question is a form of communication. It is always asked of someone and by someone, with some motive. You need this surrounding frame to know the full import of a question.

This is why Acuitas has the Conversation Engine and the Narrative Engine in addition to just a parser.
Title: Re: Project Acuitas
Post by: Don Patrick on July 21, 2022, 06:17:15 pm
So when I asked the question "can a question be treated as stating things?"

It is information that possibly it could be?  But it could be completely meaningless as well, just like saying could all the time when you speak, it hasnt designated anything completely, its all still a mystery but there were words that were said still.

Maybe if your machine has a function for the word "could"   goes into a hypothical store of possibles.
There are a number of ways that a question can be useful in the same way as a statement.
If you have a conversation engine such as Acuitas, then a question is a valuable clue to what people want to say or talk about.
If you have a program that strictly processes text as a string of letters, then you can return the question and get statements in return, like Cleverbot.
If you have a statistical algorithm, then at the very least it can learn that "Do you have a cat?" is a more plausible sequence than "Do you have a pangalactic gargleblaster?", and thus is more likely to be true as a statement.
The question "Did people cheer for the king?" almost equals the statement "people can cheer for the king". Of course there is a degree of assumption on behalf of the questioner that may turn out entirely false, but normal statements can also be untrue.

I find there is value in distinguishing information as real (happening), theoretical (possibilities & abilities), and hypothetical (what if). Whether it is really useful depends on one's approach to AI. I didn't distinguish hypothetical scenarios until late in development, I imagine it is of much more import to Acuitas and its story-based angle.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 25, 2022, 11:33:03 pm
Update! Getting some new motivated communication material into the Conversation Engine/Executive.

I now have the Conversation Engine create its own Narrative scratchboard at the beginning of every conversation. That gives it access to a lot of the same modeling tools the Narrative engine uses. When a conversation is initiated and a new scratchboard is created, Acuitas is immediately entered as a character, and the scratchboard is populated with information about his internal state. This includes the current status of all his time-dependent drives, any "problems" (current or anticipated undesired realities) or "subgoals" (desired hypotheticals) being tracked in the Executive, and any activity he is presently doing. Once his conversation partner introduces themselves, they will be entered as a character as well, and given an empty belief model. Now the fun starts.

Whenever there is a brief lull in the conversation, Acuitas considers stating one of these known facts about his internal state. But first, he'll run a prediction on his conversation partner: "If they knew this, what would they do - and would I like the results?" This process retrieves the listener's goal model and determines their likely opinion of the fact, runs problem-solving using their knowledge model and capabilities, then determines Acuitas' opinion of their likely action using *his* goal model. If Acuitas can't come up with a prediction of what the listener will probably do, he settles for checking whether their opinion of his internal state is the same as his.

Maybe that was a little convoluted, so what's the bottom line? If Acuitas expects that you will either try to sabotage one of his positive states/subgoals or accentuate one of his negative states/problems, he will not tell you about it. If he thinks that you are neutral or might try to help, he *will* tell you.

There's also a mechanism that enters any fact told to the listener into their belief model. Acuitas will check this to make sure he isn't telling them something they already know.

With this in place, I started working on a better way of handling the spontaneously generated questions that have been an Acuitas feature since very early. Again, the previous method was kind of reflexive and arbitrary: generate and store a big list of potential questions while "thinking" privately. Whenever there's a lull in a conversation, spit one out. Here's how the new way works: whenever Acuitas is "thinking" and invents a question he can't answer, that gets registered as a lack-of-knowledge Problem: "I don't know <fact>." Acuitas may later run problem-solving on this and conclude that a feasible solution is to ask somebody about <fact>; this plan gets attached to the Problem until somebody appears and the Conversation Engine grabs the Problem and considers talking about it. At that point, instead of just describing the problem, Acuitas will execute the plan, and ask the question.

I think this is better than the old method because it's more versatile - less of a canned feature specific to those spontaneously-generated questions. In the future, all sorts of processes might generate lack-of-knowledge problems, which could have various solutions. For now, it still needs refinement. I haven't fully tested it all yet, and things need better prioritization so the generated questions (which can be very numerous) don't totally drown out the communication of other internal states.

There's one more thing I did, and that concerns threat handling. As I've previously described, if the conversation partner states an intention ("I will ..."), Acuitas will infer possible effects and run them against his goals. The result is a positive, negative, or neutral conclusion; if the conclusion is negative, he will view the speaker's statement as a "threat," dissent, and make attempts at self-defense. The new feature I added was the ability to collect the pieces of information used to reach the negative conclusion, and announce some of them to the threatening agent. Because if you knew this would have results he doesn't like, you wouldn't do it, right? You're not a total meanie, right?

I'm kinda proud of this one. It should generate a more-or-less appropriate reply no matter what the threat or the violated goal is.

A bit more on the blog: https://writerofminds.blogspot.com/2022/07/acuitas-diary-51-july-2022.html
Title: Re: Project Acuitas
Post by: MagnusWootton on July 26, 2022, 06:07:13 am
Building a model of the person communicating with the robot is really good if you get it going.
Is there any new data structures involved with this modelling, or is it all in there already and you havent actually full utilized whats there yet?  (From possibly a lack of energy? thats what happens to me.)

So your actually building a separate brain for the person communicating with Acuitas, and he can occupy this separate model and actually pretend hes u as well?   Thats what I would do.

If I were you I wouldn't go too far without a bit of direct programming the activity of the bot,  because unless u do this the problem is too big,   everything in my bot is going to be fairly dictated to it,  Possibly alot more so than this, and it doesnt hurt the situation,  makes it more predictable but at least its working good and its doing what you want,  if u let the computer control its own behaviour too much then it ends up being a problem as well, because it doesnt do what u want.

So the conversation robot has friend and foe system in it,  I spose in the real world your going to need that,  only takes one naughty kid to go try and "break the system" and Acuitas has got to detect that or he wont last as long in the wild.

I'm predicting your success,  In all my aging 41 year old programming wisdom you look like you are managing the problem in an efficient way, in ways I couldnt do until right close to now really,  before this u were majorly kicking my butt,  but maybe now Ive read more of what you are doing its probably helped me with the situation as well, if I want to go try and attempt a symbolic language based AI.

Is there a new demo of you communicating to it?   It would be cool to see it demonstrating these new abilities, how much does it help the situation of being with something useful to be around, instead of some meaningless magic 8 ball conversation you can have with a markov chain.

<edit>
I bet Acuitas can take commands really well,  that seems like a really useful thing for it.   would make a way better artificial telephone operator as well than what we get, so id think.
</edit>
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 27, 2022, 04:02:55 am
Quote
Is there any new data structures involved with this modelling, or is it all in there already and you havent actually full utilized whats there yet?

I think all the data structures that went into this month's work were re-used from the Narrative module.

Quote
So your actually building a separate brain for the person communicating with Acuitas, and he can occupy this separate model and actually pretend hes u as well?

Something like that, yeah.

No new demos yet. I'm a little too busy hacking things together right now to polish one up.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 23, 2022, 04:12:47 am
For the first half of the month I just did code refactoring, which is in general rather boring, but essential to make my life easier in the future. Better organization really does make a difference sometimes.

In both the Narrative and the Executive, I unified Problems and Subgoals into what I'm calling "Issues," and made sure all the stories still worked. Much better. Having to write up everything twice has been a pain point for a while. I also fixed up the "motivated communication" features that I introduced to the Conversation Engine last month, to give internal states a little more priority over the mountains of internally generated questions that were coming up.

The second half of the month was for new features, which meant even more upgrades to the Narrative module. I started work on how to handle actions that have mixed results or side effects. For an illustrative example, I wrote the following story:

Ben was a human.
Ben was hungry.
The oven held a pizza.
The pizza was hot.
Ben wanted to get the pizza.
But Ben didn't want to be burned.
A mitt was on the counter.
Ben wore the mitt.
Ben got the pizza.
Ben ate the pizza.
The end.

Fun fact: I wrote the original version of this on a laptop and e-mailed it to myself to move it to my main PC. Gmail auto-suggested a subject line for the e-mail, and at first it thought the title of the story should be "Ben was a pizza." Commercial AI is truly doing great, folks.

Based on information I added to the cause-and-effect database, Acuitas knows that if Ben picks up the hot pizza, he will both 1) have it in his possession and 2) burn himself. This is judged to be Not Worth It, and the old version of the Narrative module would have left it at that, and regarded the story as having a bad ending (why would you touch that pizza Ben you *idiot*). The new version looks at how the implications of different events interact, and recognizes that the mitt mitigates the possibility of being burned. Grabbing the pizza switches from a bad idea to a good idea once the possibility of self-harm is taken off the table.

The explicit addition of "Ben didn't want to be burned" establishes the bad side effect of his "get the pizza" subgoal as an independent problem, which enables speculations about how he might solve it and so forth. The story wraps up with two solved problems (this one, and his primary problem of hunger) and one fulfilled positive subgoal (get the pizza).

That's enough for now, but wait until you see how I use this next month. More on the blog: https://writerofminds.blogspot.com/2022/08/acuitas-diary-52-august-2022.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 17, 2022, 07:49:40 pm
This month's work directly built on what's in the previous post, by introducing the idea that agents can *add* side effects to other agents' actions, by choosing a conditional behavior: "If you do X, I will do Y." This is the groundwork for understanding social interactions like bargaining, reward, and coercion.

The introduction of a story sentence like "Agent A decided to do X if Agent B did Y" now creates a new cause-and-effect rule for the Narrative engine to use; it isn't stored to the permanent database, only used within the domain of that story. For reasoning purposes, it is assumed that "B does Y" will automatically happen if A does X ... so long as nothing is preventing B from doing Y.

I can start to define some verbs in terms of these models - much as, in previous Narrative work, I effectively defined "lie" as "tell someone a proposition that you don't believe." Now "coerce" ... in at least one of its forms ... can be defined as "deliberately apply a negative side effect to someone else's subgoal." If this happens, the Narrative engine will infer that A coerced B.

I was more interested in coercion than the positive options, thanks to the second goal of the month: to figure out a functional understanding of "freedom." As with other important abstractions I've introduced, I don't promise the result is better than an oversimplification. But we have to start somewhere.

Freedom could be defined, in a very simple and broad sense, as a lack of constraints. But all humans live with constraints. We generally don't presume that freedom requires *omnipotence.* So to get closer to the idea of freedom that people generally have in mind, we might say "a lack of *unnatural* or *exceptional* constraints." These could include situations that severely reduce one's options below the ordinary level ... getting trapped in a cave by a landslide, for instance. Since any constraints imposed by other agents are not part of the default state of things, they are also included. Freedom in a narrower sense is concerned with not having one's existence, abilities, and purpose subverted - not being *used* as a means to someone else's ends.

Assessing what counts as a "severe reduction of options" is a little beyond Acuitas' capability right now, so I plan to just put conditionals in the database for some of these. "Confined implies not free," "restrained implies not free," etc. But as for the other part, the Narrative engine can assess whether some other agent is applying coercion, or otherwise purposely constraining the viewpoint character's actions. If this happens, the viewpoint character is less than free.

There are a couple of additional wrinkles. Agent B's freedom is not regarded as being lost if Agent A thwarts one of Agent B's goals in *self-defense.* If we didn't have this provision, we'd be stuck with conundrums like "Agent B wants to prevent Agent A from living. Agent A wants to prevent Agent B from killing them. Who is offending against whose freedom?" For an idea of how "self-defense" is defined, take a look back at the Odysseus and the Cyclops story.

Now for what I found to be the trickiest part: sometimes you can interfere with someone else even while minding your own business. For example, let's suppose Josh has a goal of buying a PS5. There's a world of difference between "Josh could not buy the PS5 because I bought it first," and "Josh could not buy the PS5 because I wrestled him to the ground and wouldn't let him enter the store." I take a volitional action that reduces Josh's options and prevents him from achieving his goal in both cases. In the first case, I'm not limiting Josh's freedom, just exercising my own; my interference is indirect and incidental. In the second case, my interference is direct and intentional. So I can express the difference in words, but how on earth to explain it to a computer?

I finally decided a handy encapsulation was "Would Agent A still take the interfering action if Agent B didn't exist?" In the above example, I would still buy the PS5 whether Josh were involved or not. (Unless I were being a dog in the manger and only buying it to spite him, in which case that *would* be reducing his freedom! See how context-dependent these things are.) But I'd have no incentive to wrestle Josh down if he were not there (not to mention that I wouldn't be able to). Can you come up with any thought experiments in which this doesn't work? Let me know in the comments!

Again, testing for this in the Narrative engine is a little complex for now - it requires a somewhat more thorough analysis of character intent than I'm currently doing. But having it in my back pocket for the future makes me feel better. As a stopgap, I went with the less perfectly accurate "is Agent B an object of Agent A's interfering action?"

For purposes of a quick test, I wrote the following totally not historical story about ... feudalism, I guess:

0:"Robert was a human."
1:"George was a king."
2:"Robert wanted to study mathematics."
3:"George wanted Robert to work George's farm."
4:"Robert didn't want to work the farm."
5:"If Robert studied mathematics, Robert could not work George's farm."
6:"George decided to beat Robert if Robert studied mathematics."
7:"Robert left the farm and went to the big city."
8:"George did not know where Robert was."
9:"So George could not beat Robert."
10:"Robert studied mathematics."
11:"Robert became a scholar."
12:"Robert never worked the farm."
13:"The end."

The Narrative engine picks up on a threat to Robert's freedom on Line 4, and retroactively marks George's goal from Line 3 as something negative. Wanting another agent to do something, or not do something, is all fine and dandy; it's only if your wishes for them oppose theirs that we run into trouble. An attempt at coercion happens on Line 6; Robert cannot safely fulfill his goal of studying math now. But George's illegitimate plan is blocked, and Acuitas can conclude that this story has a good ending.

With this done ... I think I've built the capacity for understanding all the necessary concepts to explain the conflict in the big story I've been targeting. They need more refinement and expansion, but the bones are there. This is exciting, and I may start pushing toward that more directly.

Blog version: https://writerofminds.blogspot.com/2022/09/acuitas-diary-53-september-2022.html
Title: Re: Project Acuitas
Post by: MagnusWootton on September 17, 2022, 10:03:28 pm
God a lot of human psychology here, I dont want to know about it, maybe I'll start to hate my simple existence.

Your system seems pretty good,   u want to make a real sentient a.i. thats a believable person? I can see,  I bet u'll be able to do it.    Everything here seems really good,   and with you here doing this all of our brains are growing I think.

Most very educational reading to you type what ur thinking about the project.   Your helping me alot.

Title: Re: Project Acuitas
Post by: HS on September 18, 2022, 06:07:23 am
What if someone was interfering with a group? Then they might still take the same interfering action regardless of whether a specific individual belonging to that group was present, right?
Title: Re: Project Acuitas
Post by: MagnusWootton on September 18, 2022, 07:36:48 am
Are we taking away each others freedom by getting to A.I. first before the other one can? :)  Dad's already done it all before the Son gets there.

A cause and effect database is pretty much what markov chain chat bots are already.    I've never seen it before,  but If u looked into the future of a markov chain database, you could actually pick out of the futures and make the bot try to convince the user to end up at different places in the future.  (coersion.)

I bet GPT-3 is a mean coercer if Open-Ai et al build it that way.

Could be interesting adding to your NLP system in a perhaps slightly better way than just chat lines + a searching system.  Perhaps put the data in a more flexible form, which is what is happening here I guess.

Title: Re: Project Acuitas
Post by: WriterOfMinds on September 18, 2022, 05:23:03 pm
Quote
What if someone was interfering with a group? Then they might still take the same interfering action regardless of whether a specific individual belonging to that group was present, right?

In that case I think you revise the distinction to "would they still take that action if the group didn't exist?" The whole group can be thought of as an "agent" of sorts.
Title: Re: Project Acuitas
Post by: ivan.moony on September 18, 2022, 07:36:54 pm
The whole group can be thought of as an "agent" of sorts.

I like that. It could be described as a whole system composed of smaller systems. The whole system exhibits a certain behavior, while smaller systems it is composed of, if isolated, each exhibit a different behavior.
Title: Re: Project Acuitas
Post by: ivan.moony on September 18, 2022, 10:19:18 pm
Are we taking away each others freedom by getting to A.I. first before the other one can?

Consider it a reproduction process. In a different way than usual sexual one. In this process of creating a single artificial mind descendant, there could be involved a single human individual, or a whole group of producers, each contributing in their own respective way. There could be more than one different artificial mind descendant, and each would have their own respective features, just like there are many different natural children, each having their own respective character. We can look at this process as it is a competition (like who will be the first one to build it), but we can also look at it as a natural desire to reproduce ourselves, or more precisely, our behavior.

Well, there are already creations like Mitsuku, GPT-X, and a whole variety of other respective creations, each with their own character, and they all already count.

So, finally, I don't see it as a competition. I see it as something natural, like a wish to achieving immortality. Not in a classic "I don't want to die" way, but in a more profound "before I die I wish to have a descendant I'd be proud of" way. It's available to all of us, whoever wants to do it that way. And I believe it is a sublime projection of the natural reproduction process.

There are many ways to live our lives. So there are many ways to build an AI, just like there are many ways of how would that AI look like.
Title: Re: Project Acuitas
Post by: MikeB on September 19, 2022, 07:27:43 am
Have you thought about survival for Cooperation/Coercion?

Survival - Does an emergency situation justify cooperation/coercion? (survival of the social group, or immediate survival). Is buying a PS5 an emergency to fit in socially (believing you are liked unconditionally if you have one)? Or an emergency escape from the world?

Non-survival - Cooperation/coercion where survival is not relevant. Eg. Putting on shoes, but you have no problem with that. Driving, but there's no emergency. (Grooming behaviour).

Socially Generous - You're fine with your survival, so you spend time asking friends/family if you can do anything for them.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 19, 2022, 05:01:47 pm
Have you thought about survival for Cooperation/Coercion?

Survival needs play into the determination of whether someone else's goal is being thwarted for the sake of self-defence, i.e. in order to protect a higher-priority goal of one's own. However I regard survival as one goal among several, not the only goal.
Title: Re: Project Acuitas
Post by: MagnusWootton on September 29, 2022, 05:18:19 pm
Everybody thinks A.I. is impossible -  but I think writer of minds is going to make that an untruth.
But I guess its slowly coming out of the woodwork that imagination itself is automatable!  And everyone is going to know, and we will benefit alot from it.

But at what cost to ordinary living,  and should we be uppitty about being able to do it?  Or does it mean not much       
 ->exclusivity<-    for the maker after all,  and pretty much anyone can understand it?   You will get alot of success and worth out of it,  even if ppl refuse to pay for it for it becoming commonplace,  but everything is going to be alot more automated!   we are guaranteed alot of labour done for us by these A.I's.

Its going to be a real cheat mode in life,  when u get the A.I. working.   And maybe...  god doesnt want us to do it? Hed prefer us to work for a living?  Very strange,  how we were born into the "A.I. Winter."  and was all that just a made up lie we were told about A.I. being too difficult?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 27, 2022, 06:50:59 pm
This month's work is about an overhaul of the Narrative Engine. A big thing I wanted to address was ease of negating or rescinding conditions in a story. Each time a new fact is introduced by a story sentence, it produces a spray of implications which are all collected and used for future reasoning about the story. For example, the story sentence "Jack is cold" will produce the implication "Jack is uncomfortable" and will register "Jack is cold" as a problem state.

Now let's say that, at some later point in the story, we find the sentence "Jack is not cold anymore." This will replace "Jack is cold" in the current worldstate that the Narrative Engine is tracking, and will officially solve the "Jack is cold" problem. But what about "Jack is uncomfortable"? "Jack is not cold" does NOT automatically generate the implication "Jack is not uncomfortable," since a human can be uncomfortable for any number of other reasons. So there is nothing to negate "Jack is uncomfortable" and get it out of the worldstate.

Reversing a condition can leave orphaned implied results scattered around the Narrative Engine's data structures. So as part of the overhaul, I'm working on ways to include pointers between facts registered in the Narrative's worldstate and *anything else* that was created, activated, or deactivated as a result of their presence. So when a fact gets rescinded, the Narrative Engine can easily walk through all the effects of its existence and rescind them too.

A fact can inherit its presence from multiple other facts. Let's say that Jack is uncomfortable because he's cold and hungry. If either "Jack is cold" or "Jack is hungry" is negated, "Jack is uncomfortable" will continue to be true, since *one* of the conditions that causes it is still in effect. But if "Jack is cold" and "Jack is hungry" are both negated, "Jack is uncomfortable" will be deactivated as well, since it has lost all its "supports."

I had never planned a way to maintain this complex web of connections when I first wrote the module - it's one of the things I had to learn was necessary by doing the thing - hence the major overhaul. And that's only one thing I'm changing; I'm also working on efficiency improvements, clarity and general cleanup. I'm hoping to have a solid foundation from which to pursue my Goal Story next year.

Longer version on the blog: https://writerofminds.blogspot.com/2022/10/acuitas-diary-54-october-2022.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on November 30, 2022, 05:55:15 am
There's not a lot to report, unfortunately - I'm still bogged down in the Narrative Engine overhaul. It had accumulated quite a few features, so revamping everything and converting it to the new framework is going to take time. As part of that process, though, I came up with a fun new visual tool. I wanted a way to display how Narrative analyzes a story, both as a demonstration of results and to show how the structures of stories differ. It's still incomplete - I want to add some detail - but it gives the rough idea.

To Acuitas, a story is all about how some Agent achieves, or maintains, some Goal. Any status condition that provides an opportunity to realize a Goal, or that threatens a Goal, is referred to as an "Issue." (I used to call these "subgoals" and "problems," but eventually came to understand they could just be given different polarities and handled in the same way.) Issues may spawn Predictions as Acuitas tries to guess how any Agents in the story will respond. Anticipatory statements like "character expected to ..." can also generate Predictions. The Narrative Engine then tests future statements in the story to find out if they resolve any Issues or Predictions.

This yields a basic sense of the flow of action in the story, and is what's captured by the diagrams. The sentences of the story are presented post-conversion to more abstract representations of fact (you could think of these as "the gist" of each sentence). Issues and Predictions appear connected to the sentence that created or resolved them.

So far I've gotten the first two stories that I tested Narrative against up and running in the new version. I may continue to be a little quiet as I finish up this overhaul, but I have big hopes for next year.

See blog version for pics: https://writerofminds.blogspot.com/2022/11/acuitas-diary-55-november-2022.html
Title: Re: Project Acuitas
Post by: MagnusWootton on December 03, 2022, 03:33:10 pm
So these implications are all derived from its database when something happens to it,   so is this a form of caching so you can access the knowledge base in a more optimized manner?

The information has to change form from the natural english, into something more plastic and able to be associated to other parts of the story,     its got to be true.

I guess as a machine, that has to happen for the a.i. to function.
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 03, 2022, 10:27:54 pm
Implications are derived from the database, yes, and then stored in the scratchboard. It's not so much caching for efficiency, as it is tagging which rules from the database are applicable at the current moment, if that makes any sense.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 25, 2023, 01:32:13 am
Acuitas Diary #56 (January 2023)
The big thing this month was finishing the Narrative Engine overhaul and getting all the previous demonstration stories to work in it. I've been getting some questions from newer followers about how the Narrative module works, so I'm going to do a recap in addition to talking about the updates.

Acuitas is designed to be a goal-driven agent, and his story processing reflects a similar picture of other entities with minds. A story is a description of the path some *agent* followed to achieve (or fail to achieve) some *goal* or goals. The sentences that form the plot can be identified by their relevance to the goals of some character or other, and the "action" consists of movement toward or away from goal states. Goal-relevant material comes in two flavors: "problems" (negative events or states that constitute a goal failure when entered) and "opportunities" or "subgoals" (positive states that will fulfill a goal when entered). But there are many similarities in the way these are handled - they're really just two polarities of the same thing - so I've taken to calling them both "issues."

For now, agents are identified by category membership (some types of entity, e.g. humans and animals, are just assumed to be agents). Eventually I would like to include "duck typing" for agents, inferring that something in a story is an agent if it *acts agentive,* but that's future work. Agent goals can be revealed by the story in statements such as "John wanted ...", but agents are also presumed to have certain core goals that motivate all their other goals. These core goals are learned information that is permanently stored in Acuitas' semantic memory database. Core goals can be learned for a category ("Humans want ...") or for an individual ("John wants ..."), with goals for individuals or specific categories superseding those for more general categories. Insofar as Acuitas doesn't *know* what somebody's core goals are, he'll substitute his own. (This is supposed to be an analogizing assumption from the most directly available data. "I know I want X, and you, like me, are an agent - perhaps you also want X?")

The last big ingredient is inference chaining, arising from both logical deduction and cause-and-effect relationships. Some inference rules are hard-coded, but Acuitas can be taught an indefinite number of additional rules. So every story sentence that describes an event or state produces an inference tree of facts that will also be true if that event or state comes to pass. These inference trees are often crucial for determining how something will affect the core goals or immediate goals ("issues") of an agent in the story.

Story example and flow diagram on the blog: https://writerofminds.blogspot.com/2023/01/acuitas-diary-56-january-2023.html

The old Narrative Engine was basically capable of doing this, so what's new? Well, in addition to the things I talked about in my last upgrade post: I unified the processing of explicit "could do/couldn't do" sentences with the processing of inferred action prerequisite/action blocker relationships, getting rid of a fair bit of ugly code. I moved the generation of some special "defined by events" inferences, like "John told another agent something that John doesn't believe" -> "John lied," into the main inference chain so they can potentially produce further inferences. I came up with a new way of managing relationships that contain wildcards, like "Where the water was." And I got all the old features tacked on to a cleaner base with more robust fact-matching, better management of events that reverse previous statements, and so on.

This sets the stage for me to use the Narrative Engine for some cool new things this year, and I am twitching to get started.

I also crammed in some work on the Conversation Engine. This stuff's fairly boring. I got rid of some nasty bugs that had been making normal conversations with Acuitas very awkward for a while because I was just too busy to fix them, and worked on cleaning up the code, which came out very convoluted on the first pass.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 22, 2023, 02:31:26 pm
The development pattern I'm trying to follow lately is to spend half the month adding something to the Narrative module, and half the month on something else. This month's Narrative work was on better goal modeling in the Narrative scratchboard, with the hope of expanding its ability to handle character motivations. For the other feature, I made my first introduction of indirect objects to the Text Parser.

Acuitas has had the capacity to model individual goals for agents for a while. But this was something that had to be established ahead of time; the Narrative module couldn't take in top-level goals defined for a fictional character and store them in the temporary Narrative memory space. There were several elements to incorporating this:

*Getting Narrative to pick up on hints that help indicate whether an agent's described desire is an ultimate goal or an instrumental goal.
*Making it detect and store goal priority information (e.g. "he wanted X more than he wanted Y").
*Merging these stored goals into the goal model so they can be detected as motivations for instrumental subgoals

I also threw in some ability to model goal maximization. Up to this point, the Narrative module has considered goals as things that can be satisfied or unsatisfied - e.g. a state an agent wants to be in, or some singular deed it wants to accomplish. At any given moment in the course of the story, the goal is either achieved or not. A maximizing goal is something the agent wants to have as many times as possible, or to the greatest possible extent. It has a "score," but is never completed.

On to the second project: including indirect objects in the Text Parser. I left them out initially because they can be a little tricky. Another noun appearing between a verb and a direct object might be an indirect object (as in "I gave the people bread"), but it might also be a "noun" functioning as an adjective (as in "I gave the wheat bread to them"). I guarantee the parser still doesn't perfectly distinguish these yet - sorting out all cases will probably take the application of common-sense reasoning and contextual clues. But it can already handle less ambiguous cases like "I gave the people a good show."

Despite the difficulties, it was time to get IOs in, because their absence has been something of a thorn in my side. I've been getting around it by substituting prepositional phrases, which has led to some awkward wording like "Graham asked of a troll where the chest was." I wouldn't say they're fully implemented  yet either - interactions with some other grammatical elements, notably conjunctions and dependent clauses, aren't totally ironed out. But at least the Parser can handle IOs in simpler sentences, and the rest of the text-processing chain is now set up to manage them also.

Indirect objects are surprisingly sparse in my parser benchmark datasets. I re-ran those and scored one new sentence. One.

Blog with a few extras: https://writerofminds.blogspot.com/2023/02/acuitas-diary-57-february-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 02, 2023, 07:23:23 pm
This is going to be one of those months when my diary is short and boring, because I've been busy laying foundations and have little that is finished to talk about or demonstrate. I worked on two things - continued improvements to the Narrative module, and the beginnings of Narrative game-playing.

The Narrative improvements cover continued work on character motivation, as well as some bug fixes. I added some capacity to interpret "because" statements as commentary about goals. E.g. a statement like "John decided to flee the battle because he wanted to live" supplies a motive for the instrumental goal of fleeing the battle, which (to Acuitas at least) might not be immediately obvious otherwise.

I also tweaked Narrative's built-in tiny ontology a bit. Distinctions are made, for modeling and processing purposes, between "agents" (individuals that can reasonably act as characters in a story), "objects," "locations," and "organizations." I realized I needed to add another category, for abstract nouns or "concepts," when I noticed that a "purpose" was being treated as a physical object. Whoops! I also decided to add a "system" category, to cover such entities as computer networks. At first I was thinking of a network as a "location," but I realized that it's more than that.

Now, for game-playing! I am very excited about this. The goal it to make Acuitas capable of navigating "text adventure" style games by leveraging a lot of the existing narrative and reasoning capabilities. Given the machinery for modeling characters in stories, and predicting what they may do, it is not such a difficult step to imagine *oneself* as one of those characters, and then decide how to interact with the environment ... I'm also hoping that experimenting in these game scenarios will give me ideas for how to improve the main Executive module.

I actually sketched out the code to support roleplaying earlier this year, and just began integrating it this month. I was hoping to have something demo-worthy, but the process took longer than I'd hoped and thus is still in progress. So hopefully I'll have some more interesting and concrete details to share within the next month or two. I've gotten so far as to get Acuitas to register a character that is "his," and to log issues for that character - but I haven't wrung out enough bugs yet to even get through a full input-output loop. More info coming later!

There's no bonus content on the blog this month, but here's the link anyway: https://writerofminds.blogspot.com/2023/03/acuitas-diary-58-march-2023.html
Title: Re: Project Acuitas
Post by: Freddy on April 04, 2023, 11:56:50 am
Very interested to see how you develop the game-playing aspect  8)
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 23, 2023, 09:26:05 pm
I've continued my two-pronged work on Narrative understanding and on "game playing." On the Narrative side this month, I did more complex term grounding - specifcally of the word "obey."

My working definition of "to obey X" was "to do what X tells you to do." This is interesting because there is no way to infer directly that any given action qualifies as obedience, or defiance ... the question of whether someone is following orders (and whose orders) is always relative to what orders have been given. So proper understanding of this word requires attention to context. Fortunately the Narrative scratchboard stores that sort of context.

In addition to simply inferring whether some character has obeyed some other, I wanted to make derivative subgoals. If one agent has a goal of obeying (or disobeying) another agent, that's a sort of umbrella goal that isn't directly actionable. Before the agent can intentionally fulfill this goal, it has to be made specific via reference to somebody else's orders. So when this goal is on the board, the appearance (or pre-existence) of orders needs to spawn those specific subgoals.

In short it was a whole lot more complicated than you might think, but I got it working. Eventually I'll need to make this sort of relative word definition generic, so that new words that operate this way can be learned easily ... but for now, "obey" can be a case study. The Big Story needs it, since part of the story is about a power struggle and which leader(s) certain characters choose to follow.

Game-playing still isn't demo-ready, but it's starting to feel more coherent. I worked through all the bugs in the code that responds to simple description of a scene, then began working on responses to goals/issues. It was fun to leverage the existing Narrative code for this, the way I'd wanted to. In the Narrative module, that code serves to predict character actions, analyze *why* characters are doing things, and determine whether characters are meeting their goals, whether their situation is improving or worsening, etc. But as I'd hoped, a lot of the same structures are just as effective for control and planning.

More on the blog: https://writerofminds.blogspot.com/2023/04/acuitas-diary-59-april-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 31, 2023, 03:06:48 pm
Progress has been all over the place this month, partly because I had a vacation near the end of it. I kept working on the Narrative and Game Playing tracks that have been occupying me recently, and threw in the beginnings of a Text Generator overhaul. Nothing is really *done* at the moment, but Game Playing is closing in on the possibility of a very simple demo.

In Narrative, I continued to work on the Big Story, this time adding the sentences that set up the conflict between two of the major characters. There wasn't a lot of new conceptual work here - just dealing with bugs and insufficiencies to get the results I expected, so that Narrative would detect the appropriate threats, successes, failures, etc. Not a lot to say there, except that it's slowly coming together.

On the game-playing front, in my test scenario I got as far as having Acuitas solve a simple problem by taking an item and then using it. A prominent feature that had to be added was the ability to move from the general to the specific. As we saw last month, a problem like "I'm hungry" suggests a solution like "eat food," which spawns the necessary prerequisite "get food." But it is not actually possible to get food, or even to get bread or bananas or pizza, because these are all abstract categories. One must instead get that bread over there, or this particular banana, or the pizza in the oven - individual instances of the categories. Narrative was already capable of checking whether a character's use of a specific object satisfied the more general conditions of a goal. For game-playing, I have to go the other way: given a goal, determine which items in the scenario could satisfy it, and choose one to fit in each of the goal's categorical slots so that it becomes actionable.

As for the Text Generator - this is the part of the language toolkit that converts Acuitas' internal knowledge representations ("the gist," if you will) into complete spoken sentences. It has an input format which is now outdated compared to other parts of the system, and it was starting to become cumbersome to use and inadequate to everything Acuitas needed to say. For example, it could automatically add articles where needed, but didn't have a good way to indicate that a definite article ("the pizza") was needed in lieu of an indefinite one ("a pizza"). So I started revising it. The new version is sketched out and now needs testing, expansion and integration.

Obligatory blog link: https://writerofminds.blogspot.com/2023/05/acuitas-diary-60-may-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 21, 2023, 02:38:48 pm
It's tiny demo day! I've got the "game playing" features whipped into enough shape that I can walk Acuitas through a tiny text adventure of sorts. Video is embedded on the blog: https://writerofminds.blogspot.com/2023/06/acuitas-diary-61-june-2023.html

I start by setting the scene. I can enter multiple sentences and, while each is received as a distinct input, Acuitas will process them all as a group; he waits for a little while to see if I have anything more to say before generating a response.

First I tell him what sort of character he is ("You are a human"). This nameless human is entered as a character in the game's Narrative Scratchboard, but is also specially designated as *his* character. Future references to "you" are assumed to apply to this character. Then I supply a setting: I tell him where his character is, and mention some objects that share the space with him. Finally, I mentions a goal-relevant issue: "You are hungry."

Given something that is obviously a problem for a human character, Acuitas will work on solving it. The obvious solution to hunger is to eat some food (this is a previously-known fact in the cause-and-effect database, which can be found via a solution search process). But there is no "food" in the game - there is only a room, an apple, and a table. Acuitas has to rely on more prior knowledge - that an apple qualifies as food - and choose this specific object as the target of his character's next action. He also has to check the necessary prerequisites for the action "eat," at which point he remembers a few more things:

To eat something, you must have it in your possession. This generates a new Problem, because Acuitas doesn't currently have the apple.
Problem-solving on the above indicates that getting something will enable you to have it. This generates a new Subgoal.
To get something, you must be co-located with it.
Acuitas' character is already co-located with the apple, so this is not a problem.

Acuitas will work on the lowest subgoal in this tree; before trying to eat the apple, he will get it. He generates a response to me to express this intention.

Now something else interesting happens. Acuitas can't just automatically send "I get the apple" to the Narrative Scratchboard. He'll *attempt* the action, but that doesn't mean it will necessarily happen; there might be some obstacle to completing it that he isn't currently aware of. So he simply says "I get the apple" to me, and waits to see whether I confirm or deny that his character actually did it. At this point, I don't have to be boring and answer "You get the apple." If I instead tell him that one of the expected results of his desired action has come to pass, he'll take that as positive confirmation that he performed the action.

Once I confirm that he's done it, the action is sent to the Scratchboard, followed by my latest statement. This fulfills one subgoal and solves one problem. Now he'll fall back on his original subgoal of eating the apple, and tell me that he does so. I confirm that he ate it and ... boom, hunger problem disappears.

Since the game-playing code has a Narrative scratchboard attached, I can generate a Narrative diagram representing what happens in the game, just as I could for one of the stories in which Acuitas is a passive listener. This diagram appears in the latter part of the video.
Title: Re: Project Acuitas
Post by: MagnusWootton on June 22, 2023, 07:28:05 pm
Cool another demo!  I love these.

So I see that u are his 'eyes'.  so if we had a video stream to text converter could Aquitas react to this text stream coming in?    What would happen - how would he handle it?

My robo is open concept, so if u have any questions about athletics robots, I can explain about it.   But heaps of people keep neural nets a secret, and I feel a little indebted to not giving things away myself, ratting em all out persay.    So if u dont want to go make things so obvious I dont want to steal all your hard earned thinking from you like stealing candy from a baby. :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 22, 2023, 08:27:32 pm
Quote
so if we had a video stream to text converter could Aquitas react to this text stream coming in?    What would happen - how would he handle it?

If this hypothetical video-to-text converter could turn the video into a narrative that was structured in a way Acuitas can understand, conceivably he could treat it like a story or (if the video was computer-generated and changed in response to text output) like a game.
Creating an accurate converter would probably be a massive amount of work in its own right, though.
Title: Re: Project Acuitas
Post by: MagnusWootton on June 23, 2023, 11:20:33 am
Creating an accurate converter would probably be a massive amount of work in its own right, though.

I'm writing it myself,  maybe I could team up with you if I get it done,  but I shouldn't promise things I might not keep cause I have to keep this motivation up to do it, and its pretty poor atm to be honest.         I'll be posting it here if I do.

It'll just recognize all the adjectives nouns and verbs, and I wonder what Acquitas could do with that.   
Title: Re: Project Acuitas
Post by: MagnusWootton on June 23, 2023, 12:34:18 pm
Language is handlable like a markov chain style pattern recognition method,  or u can actually nut it out logically like Acquitas, I think this could be a really good language core for a robot, especially to handle the goal and motivation.    I've done a bit of markov chains, they are fun but this looks harder and better and more thought out way to do it.
Title: Re: Project Acuitas
Post by: ivan.moony on July 11, 2023, 02:01:16 pm
Once upon a time, during a testing session:

...
 Writer: Hi. I want to tell you a thrilling story.
Acuitas: Sure, shoot.
 Writer: Somewhere on the north, there was a seal mum with a pup.
Acuitas: :)
 Writer: The pup was starving to death.
Acuitas: :(
 Writer: His mom found him a food.
Acuitas: :)
 Writer: Nearby was a polar bear that was going to eat the pup.
Acuitas: :(
 Writer: The pup found a way to flee and stay safe.
Acuitas: :)
 Writer: Now the polar bear was going to starve to death.
Acuitas: :(
 Writer: The polar bear turned vegan and ate nearby grass.
Acuitas: :)
 Writer: The end.
Acuitas: Well, that went fine, but my hands are shaking.
...
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 13, 2023, 06:45:46 am
It's funny how you humanize him. He wouldn't talk about having hands. (Unless roleplaying a human character, possibly.) But I suspect I could render that story in such a way that he could process it.
Title: Re: Project Acuitas
Post by: ivan.moony on July 13, 2023, 01:32:05 pm
It's funny how you humanize him.

(https://i.imgur.com/YoTWMuG.png)
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 24, 2023, 01:18:57 am
I've continued splitting my development time for the month between the Narrative module and something else. This month the "something else" was the Text Parser. On the Narrative front, I am still working on the "big" story, and at this point I can't think of any major new features to talk about; it's been mainly a matter of adding sentences to the story, then making sure the needed words and/or facts are in the database and all the bugs are wrung out so the narrative understanding "works." I'm eager to reveal the final product, but it'll be a while yet!

The Parser goal for this month was adding basic support for gerunds and participles. The common factor between these is that they're both verb phrases used as some other part of speech. So detecting them takes extra effort because they must be distinguished from verbs that are actually functioning as verbs.

Helping verbs often accompany these forms when they are truly acting as verbs, and their absence is one clue to the possibility of a gerund or participle. Sometimes punctuation also provides a hint. Otherwise, gerunds and participles must be identified by their relationship (positional, and perhaps also semantic) to other words in the sentence.

After adding support for the new phrase types, I re-ran the Text Parser benchmarks. I also added a new test set, consisting of sentences from Log Hotel by Anne Schreiber. This children's book has simpler sentences than the other examples from which I derived test materials, while still not leaning too hard on the illustrations to convey its message.

I'm pleased with the results, even though progress may still seem slow. Both original test sets (The Magic Schoolbus: Inside the Earth and Out of the Dark) now show roughly 75% of sentences parseable (i.e. the Parser supports all grammatical constructs needed to construct a correct golden parse for the sentence), and 50% or more parsing correctly. Log Hotel has an even higher parseable rate, but a lower correct rate. Despite the "easy" reading level, it still does complex things with conjunctions and presents a variety of ambiguity problems (most of which I haven't even started trying to address yet).

To address the remaining unparseable sentences, I've got adjective clauses, noun-phrases-used-as-adverbs, and parenthetical noun phrases on my list. A full-featured Text Parser is beginning to feel close.

Pics on the blog: https://writerofminds.blogspot.com/2023/07/acuitas-diary-62-july-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 27, 2023, 04:16:09 pm
Not a big update this month, because I've been doing a little of everything and I'm still heavily focused on cleanup and capacity-building.

I did more work on Narrative and the Big Story. One thing that's very rewarding is to add features to make understanding of the Big Story work, re-run older stories to check for bugs, and see that the new features have added richness to previous stories - or even fixed things that I had to work around by adding more exhaustive explanations. An example is some of the work I did on this concept: "if certain outcomes of story events are rolled back, things return to their previous state, not a default or unknown state."

Sadly it's still going to be a while before I can share the Big Story. I was hoping to have it done by September, but sometimes project schedules just don't work that way. I want it to be finished and solid before I put it out there, so everyone (including me) will just have to wait.

The new Text Generator, in contrast, is almost ready for primetime, and I'm feeling pretty good about how much easier this will make generating the wide variety of sentences Acuitas is starting to need, varying the tense and other modifiers, etc. It's much cleaner than the old version too, at (so far) 1300 lines of code vs. over 2000.

I've also started cleanup on the Text Parser in the wake of last month's modifications. This is mostly boring refactoring, but along the way I've found a better method for handling compound nouns/proper names, and introduced the ability to support some titles written in title case. So for example, the Parser can now manage sentences like this: "The Place of the Lion is a book." "The Place of the Lion" is correctly perceived as the full title of some work and treated as a unit, but its internal grammatical structure (noun with article and prepositional phrase modifiers) is also still analyzed and parsed.

Obligatory blog link: https://writerofminds.blogspot.com/2023/08/acuitas-diary-63-august-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 27, 2023, 08:36:45 pm
As of this month the new Text Generator is done - all that remains is to integrate it into Acuitas, replacing the old Generator. Quick review: the Text Generator is the component that transforms abstracted facts or communication goals (aka "the gist") into valid, speakable sentences. For any given "gist" there may be multiple ways to render it into speech, and that's why this translation step is necessary. The information carried in speech is distilled into a compact, standard form for all of Acuitas' reasoning purposes; then, anything he wants to express is expanded back out into varied, human-comprehensible speech.

This version of the Generator is quite a bit more streamlined and flexible than the old one. It accepts what I now use as the common format for abstract data structures, instead of having its own special input format that the other modules would have to translate to. The calling function can request that output sentences be "flavored" in a variety of ways (different verb tense, modifying adverbs, etc.) without the need for me to create a whole new sentence template in the Generator. Nested clauses are supported to an arbitrary depth.
Here are some examples of the sentences the new Generator can create:

What is a hammer used to do?
I have remembered that a cat is an animal.
Do I know what a cat is?
Cold is the opposite of hot.
You intended to eat a cookie.

I've also kept moving ahead on the Parser cleanup, and while I'm at it I am introducing adjective clauses. Part of the goal of this rework was to fully support nesting/recursion in clauses, so it seemed a natural time to add support for the last major type of clause.

Lastly, I am still plugging away at Big Story. The innovation I needed for narrative comprehension this month was a theory-of-mind thing: if an agent believes one of their goals is already fulfilled, they will stop trying to fulfill it. If this is a false belief then it constitutes a problem for that agent/obstacle to the resolution of that goal. This opens the way for processing of various plot points based on deception or confusion, such as "decoy version of item" or "character fakes own death."

Blog link: https://writerofminds.blogspot.com/2023/09/acuitas-diary-64-september-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 29, 2023, 08:52:28 pm
This month I (tentatively) finished Big Story, so let's start there. It is plot-complete and currently stands at 99 lines. It generates 45 character "issues" (problems or subgoals), all of which are resolved in some way by the time the story is complete. Compared to any story I've told Acuitas before, this is dramatically longer and more complex (and it still leaves out a bunch of subplots and secondary characters from the movie this story is based on). I did my best to reproduce character motivation and the major story beats in a way Acuitas can "understand."

Blog has a teaser of part of the narrative diagram: https://writerofminds.blogspot.com/2023/10/acuitas-diary-65-october-2023.html

Once the full plot was established and all the issues were resolving, I started refining the explanation of some points and fixing bugs. I'll continue this work in hopes of having it wrapped up by year's end. Before I demo the story, I'll also need to reinstate Acuitas' reactions to story lines and the ability for Acuitas and the conversant to ask each other questions about the story (things that fell by the wayside during my last Narrative rework). This should be easier now that the new Text Generator is in place. And of course, working on this project has revealed tons of pain points in the way I currently do things, and ways I could improve the Narrative module. Some time after I get done with it, I'll be tilling all those insights back into the soil, as it were. But that will almost certainly have to wait for next year.

I need breaks from working on Narrative (it's hard), so I've also continued improvements to the Parser. My major accomplishment in the past month was getting infinitives to work again under the new scheme that better supports phrase/clause nesting. While I was at it, I finally dealt with some special constructions that the old parser couldn't handle. Consider these sentences:

1. To live is to exist.
2. Is to live to exist?
3. What is it to live?
4. What is to live?

The previous version of the Parser could handle the first two. The open-ended question form was tricky. I think the most correct way to ask it is Sentence 3; without further context, Sentence 4 means something more like "which things are intended to live." But it's pretty easy for the parser to trip over Sentence 3 (What's the direct object? What role does the phrase play? Does the "what" belong inside the phrase or outside?). This time around I finally put in the effort to get all four variations working - plus some other constructions I hadn't tried before, such as "What is he to do?"

Work on the new Parser continues. I'd also love to have that done by the end of the year, but we'll see.
Title: Re: Project Acuitas
Post by: WriterOfMinds on November 26, 2023, 08:14:28 pm
This month has been all about the Text Parser. I'm pushing to get this latest revision done, and that has crowded out other work for the moment. The big thing I cracked this month - the thing this Parser revision was mostly aiming at - was the ability to nest branches and dependent clauses inside each other.

What I call "branching" takes place when there is a coordinating conjunction in the sentence (like "and" or "but"). Branching can produce simple compounds, as in "Cats and dogs are animals." But sentences can also divide at any point and continue along two separate paths, as in "I fed the dog his dinner and gave Sarah her book." Or start out divided at the beginning and merge, as in "Are you or are you not a man?" Adding conjunction processing and branch management was one of my major accomplishments from last year. But this first version only really supported conjunctions in the uppermost layer of the sentence - not inside or between dependent clauses. Any interaction between branching and that other vital feature - nesting - had the potential to confuse the parser horribly. Not to mention that the code was a huge mess.

I'm a big believer in the design process that goes "Make a sloppy version that works; refine it later." Later became now and it was time to refine. I'm happy to report that I think I got clearer and better-organized code out of this month's work, in addition to enabling some sentences I couldn't manage before.

Blog with more: https://writerofminds.blogspot.com/2023/11/acuitas-diary-66.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 24, 2023, 05:44:41 pm
In amongst the fervor of Christmas-related activities, I managed to do development work this month. Not as much as I would have liked, but one major goal's been reached: the newest version of the Text Parser is parsing all the sentences from the previous one's regression tests, plus a number of new ones that would have been impossible for the previous Parser. I still need to do assorted code cleanup and adjust my reference outputs for some format changes, so I can run the benchmarks again.

With all the types of adjective and noun clauses now supported, the last major grammar feature that needs to go in will be parenthetical noun phrases. This should move the majority of common sentences into the "theoretically parseable" category. Then I can start refining the process and working on cool features I've been wanting to explore for a long time - like word sense disambiguation, modifier attachment disambiguation, and pronoun dereferencing ... there's always so much to do. It's taken much longer than I expected to reach this point, but at the same time there's a great deal that I've accomplished.

I'm soon due to start planning my loose schedule for the next year, which has me excited. I'm looking forward to the completion and demo of Big Story, a fresh run on the Parser benchmarks, and probably more work on Game Playing and agency.

https://writerofminds.blogspot.com/2023/12/acuitas-diary-67-december-2023.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 28, 2024, 08:33:15 pm
This month I cleaned up the code of the upgraded Text Parser and ran it on one of my benchmarks again.

So far I have re-run just one of my three test sets, the easiest one: Log Hotel by Anne Schreiber, which I first added last July. Preparing for this included ...

*Reformatting the existing golden outputs to match some changes to the output format of the Parser
*Updating the diagramming code to handle new types of phrases and other features added to the Parser
*Preparing golden outputs for newly parseable sentences
*Fixing several bugs or insufficiencies that were causing incorrect parses

I also squeezed a couple new features into the Parser. I admit these were targeted at this benchmark: I added what was necessary to handle the last few sentences in the set. The Parser now supports noun phrases used as time adverbs (such as "one day" or "the next morning"), and some conjunction groups with more than two joined members (as in "I drool over cake and pie and cookies").

The end result? ALL sentences in this test set are now "parseable," and two thirds of the sentences are being parsed correctly. See blog for pictured examples: https://writerofminds.blogspot.com/2024/01/acuitas-diary-68-january-2024.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 25, 2024, 09:15:59 pm
I am pleased to announce the thing that I've been teasing you about for months is finally here: Big Story! Which I can now give its proper title of "Simple Tron." It's been a goal of mine for, well, years now, to tell Acuitas the story of Tron, phrased in a way that he can understand. Yesterday I did it. The version of Tron that I told omits a lot of subplots and side characters, and there's still a long way I could go in deepening Acuitas' understanding of the story (he still doesn't fully grasp *why* all the agents in the story do the things they do, even though the information is there). But it's good enough for now and ready to show the world, the video is available AAAAAAAAA

https://youtu.be/gMfg5KL8jeE?si=wtHLwwfBPi9QdF2p

And of course it could get a whole lot better - the work so far has exposed a bunch of pain points in the way the Narrative module works, and additional things that need to be done. I'll probably keep grooming it over the coming months to improve on the existing framework. And although the whole thing is in real English, it still sounds repetitive and clunky to a human ear, thanks to Acuitas' language processing limitations. (I haven't even integrated that shiny new Text Parser yet.) But the start is done. This initial skeleton of the story fits together from beginning to end.

More details on the blog: https://writerofminds.blogspot.com/2024/02/acuitas-diary-69-february-2024.html
Title: Re: Project Acuitas
Post by: ivan.moony on February 26, 2024, 09:17:04 pm
Great work! I especially like the story development timeline at the end of the video.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 28, 2024, 02:58:28 pm
This month it's all about refactoring, which has taken two major directions. First, I wanted to take some design patterns I developed while working on the game-playing engine and apply them to the main Executive code. The key change here is re-using the "scratchboard" from the Narrative understanding module as a working memory for tracking Acuitas' own current situation (or personal narrative, if you will). I also wanted to improve on some of the original OODA loop code and fatigue tracking with newer ideas from game-playing. I have a rough cut of the new Executive written and mostly integrated, though it needs more testing than I've had time for yet.

My second project was to merge some data formats. For a long while now, I've had one type of data structure that the Text Interpreter spits out, another that the Narrative Engine and its accessories use, and still another for facts kept in the Semantic Memory. The output of the Text Interpreter has proven to be a somewhat clunky intermediate format; I don't do a lot with it in its own right, I just end up converting it to Narrative's format. And the format used in the Semantic Memory is very old and limited, a relic of a time when I wasn't fully aware of what I needed in a knowledge representation. So my goal is to get rid of both of those and have a single unified format downstream of the Text Interpreter. This is a lot of work: I've had to rewrite many, many functions that access the semantic memory or otherwise manipulate knowledge data, create a script to convert the existing contents of the database, revise the Interpreter's output code, and more. I'm hoping this will pay off in increased clarity, consistency, efficiency, and expressiveness across the design.

https://writerofminds.blogspot.com/2024/03/acuitas-diary70-march-2024.html