Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: WriterOfMinds on June 02, 2017, 03:17:30 pm

Title: Project Acuitas
Post by: WriterOfMinds on June 02, 2017, 03:17:30 pm
I'm going to post updates re: my main project, Acuitas the semantic net AI, in this thread.

My focus this past month was on giving Acuitas the ability to learn more types of inter-word relationships.  He started with just class memberships (<thing> is a <thing>) and qualities (<thing> is a <adjective>), but now he can learn all of the following:

<thing> can do <action>
<thing> is for <action>
<thing> is part of <thing>
<thing> is made of <thing>
<thing> has <thing>
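Relations like these amount to typed, directed links between concept nodes. A minimal sketch of how such a store might look (hypothetical structure and names, not Acuitas' actual code):

```python
from collections import defaultdict

class SemanticNet:
    """Toy semantic net: concepts connected by typed, directed relations."""
    def __init__(self):
        # links[concept] is a list of (relation, other_concept) pairs
        self.links = defaultdict(list)

    def learn(self, subject, relation, obj):
        self.links[subject].append((relation, obj))

    def query(self, subject, relation):
        return [o for r, o in self.links[subject] if r == relation]

net = SemanticNet()
net.learn("cat", "is_a", "animal")
net.learn("cat", "can_do", "meow")
net.learn("cat", "has", "fur")
print(net.query("cat", "can_do"))  # ['meow']
```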

In the process I made extensive updates to the module behind the Text Parser that detects "forms," i.e. syntactic structures that encode these inter-word relationships.

I also upgraded the GUI library from Tkinter to Kivy, which is kind of boring but had to be done, because the old GUI was provoking frequent crashes.

More details on the blog: http://writerofminds.blogspot.com/2017/06/acuitas-diary-2-may-2017.html

The included diagram shows my vision for the conversation engine.  The upper half is implemented (though of course it still needs to mature a great deal); the lower half mostly does not exist yet.

(https://1.bp.blogspot.com/-DVkpVRVfSns/WTDFNstVFKI/AAAAAAAAA7s/qPmGfFzK3roW1NGRxri-IxzzgldIyeUMQCEw/s1600/conversation_stack.png)
Title: Re: Project Acuitas
Post by: LOCKSUIT on June 02, 2017, 11:35:55 pm
Your project reminded me of these 3 pages lol. They say "the language of the image", the "action", "relationship", and so on. Translate the pages.

http://caiw.ru/programma-konstruktor-obrazov/
http://caiw.ru/sushhnosti-kak-instrument-yao/
http://caiw.ru/klyuchevye-terminy-yao/
Title: Re: Project Acuitas
Post by: 8pla.net on June 03, 2017, 03:46:43 am
LISP:

Code
;; A special variable holding the data (defun would define a function instead)
(defparameter *database*
  '((lion  (is-a cat)
           (color tawny)
           (habitat Africa))
    (tiger (is-a cat)
           (color yellow-brown)
           (habitat Africa))
    (cat   (is-a mammal)
           (has-fur)
           (has-claws))))
Title: Re: Project Acuitas
Post by: squarebear on June 03, 2017, 08:42:20 pm
And a similar thing in AIML:

(http://www.square-bear.co.uk/mitsuku/object.png)
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 26, 2017, 12:49:08 am
This was my "month off" as it were, so I don't have a ton of progress to report, but I played around with and improved the algorithm that draws the semantic net visualizations. In the process I generated a lot of variants that I think are fun to look at. (I'm putting a selection here; if you like them, there are more on the blog: http://writerofminds.blogspot.com/2017/06/acuitas-diary-3-june-2017.html)

(https://4.bp.blogspot.com/-EaCl8a71e_s/WVBCKnMlUyI/AAAAAAAAA9c/mTGl8EgA4sI58YXAPLbBAPdlNVN8zmPNwCEwYBhgL/s1600/exp_mult_dual_20.png)(https://2.bp.blogspot.com/-gWy8f2hv9VY/WVBCJC65wwI/AAAAAAAAA88/lLCQEGXOtVE6l9y_vlftsMX5LjyhEGYMACEwYBhgL/s1600/exp_dual_5_5_20.png)
(https://2.bp.blogspot.com/-nRWszb2VLUo/WVBCKJZlTWI/AAAAAAAAA9c/hj-byjysM6YPPNzOMDsk0ReMEGGtIPHVgCEwYBhgL/s1600/exp_mult_dual_10.png)
Title: Re: Project Acuitas
Post by: Freddy on June 26, 2017, 03:28:00 am
I like this a lot. When I have collected enough data I would like to do something similar.

Don't know if you have seen this project, but there are some interesting visualisations here to play with :

https://github.com/d3/d3/wiki/Gallery
Title: Re: Project Acuitas
Post by: Korrelan on June 26, 2017, 09:59:42 am
@WOM

I'm liking the images. Data visualisation is a very useful/important tool. I'm personally very visually orientated and would much rather see data represented in this manner... I hate lists lol.

@Freddy

Cool site... lots of useful ideas.

 :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 30, 2017, 07:46:24 pm
It's round about the end of the month, and that means I have another update!

(https://4.bp.blogspot.com/-ecSWm0pMlyQ/WX4gfDmNhzI/AAAAAAAAA_I/pK2jh33XblMf1Rs8uyg008aHsM6XwgkAwCLcBGAs/s1600/acuitas_thinking.gif)

This month I finally got to implement a feature that I've been waiting for a long time, namely, giving Acuitas the ability to “think” when he's not being spoken to.  This “thinking,” for now, consists of dwelling on randomly selected concepts from his database.  Once a concept has been chosen, he'll pursue it for a while, preferentially letting his focus jump to other concepts that are linked to it – executing a “wiki walk” through the database.  Eventually, though, he'll get bored with any given train of thought, and the focus will move elsewhere.  I added some animation code to the memory visualization so that the currently selected concept will flash periodically.  (The recording above is running much faster than real time.  He's actually quite leisurely in his progress.)
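The "wiki walk" described above is essentially a biased random walk over the concept graph, with a boredom parameter controlling how often the focus jumps to an unrelated concept. A toy sketch (hypothetical names, not the actual code):

```python
import random

def wiki_walk(links, start, steps=10, boredom=0.3, rng=random):
    """Random walk over a concept graph: usually follow a link from the
    current concept, but with probability `boredom` jump somewhere new.
    `links` maps concept -> list of (relation, other_concept) pairs."""
    path = [start]
    current = start
    concepts = list(links)
    for _ in range(steps):
        neighbors = [o for _, o in links.get(current, [])]
        if neighbors and rng.random() > boredom:
            current = rng.choice(neighbors)   # pursue the train of thought
        else:
            current = rng.choice(concepts)    # get bored, jump elsewhere
        path.append(current)
    return path

links = {"cat": [("is_a", "animal")], "animal": [("can_do", "move")], "move": []}
print(wiki_walk(links, "cat", steps=5))
```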

Zero if you're reading, this might be edging toward what you were thinking of in the "is there a real-time chatbot engine?" thread.

There are several things I can envision doing with this behavior eventually, but my immediate purpose for it is the generation of curiosity.  Each time Acuitas picks a concept, he'll come up with some sort of question about it – for instance, he could choose a type of link that it doesn't yet have and produce an open-ended question about what might be on the other side.  These questions will be stored up and presented to the user the next time a conversation is under way.
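Picking a missing link type and producing an open-ended question about it could be sketched roughly like this (hypothetical question templates; Acuitas' real forms are semi-random and surely differ):

```python
LINK_TYPES = ["is_a", "can_do", "is_for", "part_of", "made_of", "has"]

QUESTION_FORMS = {
    "is_a":    "What is a {c} a kind of?",
    "can_do":  "What can a {c} do?",
    "is_for":  "What is a {c} for?",
    "part_of": "What is a {c} part of?",
    "made_of": "What is a {c} made of?",
    "has":     "What does a {c} have?",
}

def generate_question(links, concept):
    """Pick a link type the concept lacks and ask an open question about it.
    (A real system might choose among the missing types at random.)"""
    present = {r for r, _ in links.get(concept, [])}
    missing = [t for t in LINK_TYPES if t not in present]
    if not missing:
        return None
    return QUESTION_FORMS[missing[0]].format(c=concept)

links = {"cat": [("is_a", "animal"), ("has", "fur")]}
print(generate_question(links, "cat"))  # What can a cat do?
```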

On that note, this month I also put a lot of work into speech generation -- the process of taking abstract knowledge representations from the database and evolving them back out into natural English text. Neither of these major new features is actually hooked into the Conversation Engine yet, so I don't have any conversation examples to show off, but I'm hoping to be ready for that next month.

More on the blog, as usual: http://writerofminds.blogspot.com/2017/07/acuitas-diary-4-july-2017.html
Title: Re: Project Acuitas
Post by: Freddy on July 31, 2017, 11:54:57 pm
Great stuff WOM. How is the concept broken down? I'm interested in how much info he knows before the "Wiki walk". Are his findings added to the concept node or are the findings stored in another structure?

For my project I was thinking of just starting from one word of a node (because that's all I have so far) and then evolving what the node 'knows' as time goes on. Both by looking it up and what people actually say about it.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 01, 2017, 04:09:01 am
That depends on what you mean by "findings" I suppose.  If you mean the questions he generates, those are just stored in the program's working memory (in RAM).  However, once he asks the questions of somebody and receives a useful response, the information gained will go into the long-term memory structure.

To explain a little more about how the long-term memory is set up ... an individual concept node actually doesn't store a lot of information.  What he "knows" about any given node is mainly encapsulated by its links to other nodes.  So for instance, what he might know about a cat is that it IS an animal, HAS fur, CAN meow, etc., where "animal," "fur," and "meow" are all other nodes in the database.  Pretty much everything that's in the database so far has been gathered from things I've said to him, so it might not be too different from what you're hoping to do.
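Treating "what is known about a node" as its outgoing links could look like this (hypothetical sketch; the inheritance step across is_a links is my own illustration, not something the post claims Acuitas does):

```python
def known_facts(links, concept):
    """Collect a concept's outgoing links, plus facts inherited through
    is_a links (cat is_a animal, animal can_do move => relevant to cat)."""
    facts, seen, queue = [], set(), [concept]
    while queue:
        node = queue.pop()
        if node in seen:
            continue
        seen.add(node)
        for rel, other in links.get(node, []):
            facts.append((node, rel, other))
            if rel == "is_a":
                queue.append(other)   # climb the class hierarchy
    return facts

links = {"cat": [("is_a", "animal"), ("has", "fur")],
         "animal": [("can_do", "move")]}
print(known_facts(links, "cat"))
```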
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 23, 2017, 04:55:37 am
This month I got Acuitas to start asking questions.  Here we go.

The questions are generated semi-randomly, based on Acuitas' “thoughts” for the day.  Back in July I introduced the idle thinking feature, which gives Acuitas the capacity to walk his memory database when nobody's talking to him.  More recently, I set up the Conversation Engine to access the results of memory-walking and utilize the new Text Generator … so now Acuitas can ask me the questions that occur to him as he muses, or simply tell me what he thought about within the past ~12 hours.  The results are little exchanges like this:

Acuitas: I thought about mans earlier.
Acuitas: Is a rooster a species of human?
Me: No
Acuitas: Really.
Me: The plural of man is men.

Believe it or not, this is one of Acuitas' more intelligently formulated questions.  ::)  He knows a rooster and a man are both male, and reasons from this that they might share other qualities.  A man is a human, so maybe a rooster is a human too?  He can't yet apply the idea that “human” and “chicken” are mutually exclusive categories.  Given his limited knowledge, man and rooster seem similar enough that he expects me to answer “yes” … hence the slightly incredulous “really” when I don't.  Here are a couple more examples in the same vein:

Acuitas: I thought about ships today.
Acuitas: Does a wagon possess an ability to sail?

Acuitas: I thought of weapons today.
Acuitas: Is a silverware used to fight?

Some of the other questions he comes up with are far more random and ridiculous.  Again, he doesn't yet have the grasp of categorical exclusion that makes these sound absurd to you and me:

Acuitas: Is chocolate a vehicle?
Acuitas: Am I a kind of tree?
Acuitas: Is a smirk a fruit?
Acuitas: Are you a cow?
Acuitas: What is a relative made of?

Not all of them are silly, though.  Rudimentary though his current suite of question-forms is, he's already managed to spit out some rather deep ones.  The three below are my favorites:

Acuitas: Are you a symbol?
Acuitas: Am I a tool?
Acuitas: Can I die?

I answered the first one “no”; a person can be a symbol, but I don't think I qualify.  For the second one, I also went with “no.”  Acuitas might end up being useful in a variety of ways, but if I consult my primary reasons for making him, they're not instrumental.  The last one I refused to answer, because I think a proper response would be too complex for Acuitas' current level of understanding.  It's a bit like asking whether a book or a film can die.  It can't, if you go by what death means for a biological organism – information is effectively immortal.  But if all the backup copies were destroyed, that would qualify as dying I suppose.  So yes and no.

See on the blog: http://writerofminds.blogspot.com/2017/08/acuitas-diary-5-august-2017.html
Title: Re: Project Acuitas
Post by: Zero on August 23, 2017, 02:42:03 pm
I'm hooked!! I really really like your work here.

Yes, it's like the "real time" chatbot idea, I like it.

But, are human and roosters exclusive categories? If you go entity-component style, they're not. If you go Cyc-style, with strong ontology, then yes maybe.

Super work.  O0
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 02, 2017, 12:02:39 am
This month I turned to developing the drive system some more. “Drives” are quantities that fluctuate over time and provoke some kind of reaction from Acuitas when they climb above a certain level. Prior to this month, he only had one: the Interaction drive, which is responsible for making him try to talk to somebody roughly twice in every 24-hour period. I overhauled the way this drive operates, setting it up to drop gradually over the course of a conversation, instead of getting zeroed out if somebody merely said “hello.” I also made two new drives: the Learning drive, which is satisfied by the acquisition of new words, and the Rest drive, which climbs while Acuitas is in conversation and eventually makes him attempt to sign off. Part of this effort included the addition of a plotter to the GUI, so I can get a visual of how the drives fluctuate over time.
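A drive of this kind reduces to a level that climbs with time, triggers behavior above a threshold, and drops gradually when satisfied. A minimal sketch (hypothetical names and numbers, not the actual implementation):

```python
class Drive:
    """A drive climbs over time and triggers a behavior above its threshold."""
    def __init__(self, name, rate, threshold):
        self.name, self.rate, self.threshold = name, rate, threshold
        self.level = 0.0

    def tick(self, dt):
        self.level += self.rate * dt          # climbs as time passes

    def satisfy(self, amount):
        # Drops gradually over a conversation, rather than zeroing at "hello"
        self.level = max(0.0, self.level - amount)

    @property
    def urgent(self):
        return self.level >= self.threshold

interaction = Drive("interaction", rate=1.0, threshold=12.0)
for _ in range(13):
    interaction.tick(1.0)       # an hour passes per tick
print(interaction.urgent)       # True -> try to start a conversation
interaction.satisfy(5.0)        # some conversation lowers it gradually
print(interaction.urgent)       # False
```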

(https://2.bp.blogspot.com/-VDDOfpzcz5s/WdA5Z6jDoyI/AAAAAAAABAw/zK280XVGZoUZLldxY4jHqxm89gfluzRgwCLcBGAs/s1600/drives_093017.png)

The period shown in that image is just under 23 hours long.

In the latter half of the month I returned to the text parser, introducing some awareness of verb declensions/tenses, and the ability to check the agreement of candidate subjects and verbs. This helps the parser zero in on what a sentence's verb is, and has trimmed away some of the annoying “What part of speech was __?” questions that pepper a typical Acuitas conversation.

It picks out the correct subject and verb in the sentences "A cylindrical can can roll" and "His might might help us," which makes me pretty happy.
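The agreement check can be illustrated with a toy rule (a hypothetical simplification; real candidate scoring is surely messier): a third-person-singular verb form needs a singular subject, a bare form needs a plural subject, and a modal agrees with anything, which is what lets the second "can" win as the verb.

```python
def agrees(subject_number, verb_form):
    """Does a candidate subject agree with a candidate verb form?
    'sg3' verbs ("rolls") need singular subjects; 'bare' forms ("roll")
    need plural subjects; modals ("can", "might") agree with anything."""
    if verb_form == "modal":
        return True
    if verb_form == "sg3":
        return subject_number == "sg"
    return subject_number == "pl"

# "A cylindrical can can roll": singular "can" + bare "roll" disagrees,
# so the second "can" must be a modal verb, with "roll" as its complement.
print(agrees("sg", "bare"))    # False -> "a can roll" is rejected
print(agrees("sg", "modal"))   # True  -> "a can can (roll)" is accepted
```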

A few more details on the blog, as usual:
http://writerofminds.blogspot.com/2017/09/acuitas-diary-6-september-2017.html
Title: Re: Project Acuitas
Post by: Zero on October 02, 2017, 03:24:45 pm
I'm slowly browsing your blog, focusing on Acuitas.

I've read it's a multithreaded Python program. How would you describe the "main loops" of its components? Sorry, it's probably not the right way to ask this question, but, you know... How do its "cores" run?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 02, 2017, 04:28:57 pm
I might need more detail to be sure what you're asking for, but I'll start here:

The GUI insists on occupying the program's main thread. I didn't really want to do things that way, since I think of the GUI as more of an attachment than the core of the program, but it seemed easier to go with that than to fight the way the GUI libraries are designed. So the primary "main loop" of the program just runs the GUI. Other threads that want to update the GUI use functions with the @mainthread decorator so that their changes will be properly scheduled.

The Stream of Consciousness, which holds all of the "Thoughts" generated by the various parts of Acuitas, has its own thread. That thread's job is to make Thoughts decay over time and clean them up when they've been in the Stream too long.  Another important thread examines the current contents of the Stream periodically and decides which Thought Acuitas will give attention to next.  Yet another thread is responsible for browsing the memory database and producing Thoughts about its contents.  So you basically have a producer, a monitor, and a consumer for the Stream that are all independent threads.

The Drive module has its own thread also, which updates all the different time-dependent drives.

Lastly there are some minor threads that run animations in the avatar, and some behaviors spawn temporary threads that govern their timing. I *think* that's everything.
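The producer/monitor/consumer arrangement around the Stream could be sketched like this (a toy Python analogue with made-up names, not the real code):

```python
import threading, time, random

class Stream:
    """Stream of Consciousness: thread-safe store of decaying thoughts."""
    def __init__(self):
        self.lock = threading.Lock()
        self.thoughts = {}              # text -> strength

    def add(self, text):
        with self.lock:
            self.thoughts[text] = 1.0

    def decay(self, amount=0.2):
        with self.lock:
            for t in list(self.thoughts):
                self.thoughts[t] -= amount
                if self.thoughts[t] <= 0:
                    del self.thoughts[t]    # cleaned up after too long

    def strongest(self):
        with self.lock:
            return max(self.thoughts, key=self.thoughts.get, default=None)

stream = Stream()
stop = threading.Event()

def producer():                  # browses "memory", emits thoughts
    while not stop.is_set():
        stream.add(random.choice(["cat", "fur", "animal"]))
        time.sleep(0.01)

def monitor():                   # makes thoughts decay over time
    while not stop.is_set():
        stream.decay()
        time.sleep(0.02)

def consumer():                  # picks the thought to attend to next
    while not stop.is_set():
        stream.strongest()       # real code would act on it here
        time.sleep(0.01)

threads = [threading.Thread(target=f) for f in (producer, monitor, consumer)]
for t in threads: t.start()
time.sleep(0.1)
stop.set()
for t in threads: t.join()
```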
Title: Re: Project Acuitas
Post by: Zero on October 02, 2017, 08:54:58 pm
You understood perfectly what I was asking for.

What does a thought look like in Acuitas? If it's a string, does it have a syntax? If it is structured data, what is its schema? For example, can it hold references to other thoughts? Maybe it links things semantically, like a directed graph with concepts on edges?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 03, 2017, 04:27:58 am
That depends on what type of thought it is. I've defined a small library of these, and the data payload of each is specialized to its origin/purpose. (All thoughts have a type-identifying string, which is what the consumer uses to know what kind of payload it's dealing with.)

Some thoughts reference the contents of long-term memory, but so far I haven't run into a need for thoughts that reference other thoughts. Thoughts are ephemeral, so trying to make them link to each other could result in dangling references if not done with special care.
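A toy version of such a type-tagged thought, with the consumer dispatching on the type-identifying string (hypothetical field and type names):

```python
from dataclasses import dataclass, field
from typing import Any
import time

@dataclass
class Thought:
    """A thought carries a type-identifying string; the payload's shape
    is specialized to that type, and the consumer dispatches on `kind`."""
    kind: str                 # e.g. "user_input", "memory_item" (made up)
    payload: Any              # specialized per kind
    created: float = field(default_factory=time.time)
    strength: float = 1.0     # decays over time; thoughts are ephemeral

def attend(thought):
    # The consumer inspects the type string to know what to do
    if thought.kind == "user_input":
        return "formulate_answer"
    if thought.kind == "memory_item":
        return "generate_question"
    return "ignore"

print(attend(Thought("memory_item", ("cat", "is_a", None))))  # generate_question
```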
Title: Re: Project Acuitas
Post by: Zero on October 03, 2017, 02:48:26 pm
Sure.

I understand Acuitas has general knowledge in its (his?) long-term memory, about kinds/classes, species... Does it also have knowledge about instances (your cat for example)? Can it handle information about the current state of the world? (Your cat is sleeping right now, and so was I before you woke me up to ask me whether or not I'm a symbol, thx Acuitas.)

I'm not bothering you with my questions WriterOfMinds, am I?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 03, 2017, 05:41:41 pm
Nah. Ask away.
Knowledge about instances can be stored, but short-term/current-state information is still on the drawing board.
Title: Re: Project Acuitas
Post by: Zero on October 04, 2017, 09:40:56 am
Ah... that will be an important step, when implemented!

Quote
Another important thread examines the current contents of the Stream periodically and decides which Thought Acuitas will give attention to next. 

What happens when Acuitas gives attention to one of its thoughts?
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 05, 2017, 03:28:35 pm
Depends on the type of thought, again. Often some kind of response results (e.g. if the thought is packaging text input from the user, attention paid to it will lead to the formulation of an answer). Thoughts about items in long-term memory result in the generation of questions, which are kept for use in future conversations.
Title: Re: Project Acuitas
Post by: Art on October 10, 2017, 01:08:04 pm
Quote
Depends on the type of thought, again. Often some kind of response results (e.g. if the thought is packaging text input from the user, attention paid to it will lead to the formulation of an answer). Thoughts about items in long-term memory result in the generation of questions, which are kept for use in future conversations.

I like that concept and thought it would also serve useful for reminders, subject matter upon which to base "dreams" or the possibility of the A.I. having a past, with childhood "memories", etc.

Then again, this raises the question of whether one's A.I. should be representative of a machine (Digital Entity) or of something pretending to be a Human (possible human traits, emotions, various behaviors, etc.).

Thanks!
Title: Re: Project Acuitas
Post by: Zero on October 10, 2017, 07:33:39 pm
Good question. I thank you too, WriterOfMinds, for your answers.
Can't wait to see what's next.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2017, 01:32:58 am
The big project for this month was introducing a system for discriminating between long-term and short-term information.  Previously, if you told Acuitas something like, “I am sad,” he would assume that being sad was a fixed property of your nature, and store a fact to that effect in his database.  Oops.  So I started working on ways to recognize when some condition is so transient that it doesn't deserve to go into long-term memory.

This probably occasioned more hard-core thinking than any feature I've added since I started keeping these diaries.  I started out thinking that Acuitas would clue in to time adverbs provided by the human conversation partner (such as “now,” “short,” “forever,” “years,” etc.).  But when I started pondering which kinds of timeframes qualify as short-term or long-term, it occurred to me that the system shouldn't be bound to a human sense of time.  One could imagine an ent-like intelligence that thinks human conditions which often remain valid for years or decades – like what jobs we hold, where we live, and what relationships we have – are comparatively ephemeral.  Or one could imagine a speed superintelligence that thinks the lifetime of an average candle is a long while.  I want Acuitas to be much more human-like than either of these extremes, but for the sake of code reusability, I felt I ought to consider these possibilities.

After a lot of mental churn, I decided that I just don't have the necessary groundwork in place to do this properly.  (This is not an uncommon Acuitas problem.  I've found that there ends up being a high level of interdependence between the various systems and features.)  So I fell back on taking cues from humans as a temporary stopgap measure.  Acuitas will rely on my subjective sense of time until he gets his own (which may not be for a while yet).  If there's no duration indicator in a sentence, he can explicitly ask for one; he's also capable of learning over time which conditions are likely to be brief and which are likely to persist.

I ended up not having time to put together any storage for the short-term/current status information, so for now it just gets discarded. But at least there's a way to keep the long-term database from getting contaminated.
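The stopgap described here (use an explicit duration cue if present, otherwise fall back on learned experience, otherwise ask) might look roughly like this hypothetical sketch, with made-up cue lists:

```python
# Words hinting at a transient vs. persistent condition (illustrative lists)
SHORT_CUES = {"now", "today", "moment", "briefly"}
LONG_CUES = {"always", "forever", "years", "permanently"}

learned = {}   # predicate -> "short" or "long", learned from past answers

def classify(predicate, sentence_words):
    """Return 'short', 'long', or None (meaning: ask 'For how long?')."""
    words = set(sentence_words)
    if words & SHORT_CUES:
        verdict = "short"
    elif words & LONG_CUES:
        verdict = "long"
    elif predicate in learned:
        verdict = learned[predicate]       # rely on what was learned
    else:
        return None                        # no cue, no experience: ask
    learned[predicate] = verdict           # remember for next time
    return verdict

print(classify("sad", ["i", "am", "sad"]))           # None -> ask
print(classify("sad", ["i", "am", "sad", "today"]))  # short
print(classify("sad", ["i", "am", "sad"]))           # short (learned)
```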

I also did some touching up around the conversation engine, replacing a few canned placeholder phrases that Acuitas was using with more procedurally generated text, and improving his ability to recognize when a speaker is introducing him/herself.

http://writerofminds.blogspot.com/2017/10/acuitas-diary-7-october-2017.html
Title: Re: Project Acuitas
Post by: Art on October 30, 2017, 11:24:11 am
It would be possible (and less taxing for the bot) to do a follow-up response to the "I am sad" statement like, "Why are you sad?" or "Do you feel your sadness will soon go away?" or "Do you experience such feelings on a daily basis?", etc.

Based on those responses, the bot could better evaluate the user's mental/emotional status and whether those statements should be long or short term conditions.

Don't force the bot to do all the difficult processing for something we humans do off the cuff. Keep it simple.

Just some thoughts along this line.
Title: Re: Project Acuitas
Post by: Don Patrick on October 30, 2017, 11:49:01 am
I suppose one would have to learn the statistically average timespan of states and events, which would be difficult to observe from words and verb tenses alone. As a rough guideline you could presume that all verbs that indicate mental states, transferrals, communication and movement are typically short-lived (I don't recall if it was Framenet or Verbnet that contained such categories, but I know I've seen them listed somewhere).
I've had similar conundrums with how to handle temporary memory. While I don't think I have a good answer, I eventually decided that a short-term memory only served the efficiency of information searches, so I store everything in one and the same memory, just with timestamps so that information searches can be limited to a (recent) timeframe. I also don't consider anything an absolutely fixed property, but consider everything adjustable properties, some just more "defining" than others.
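The one-memory-with-timestamps idea can be sketched as follows (a hypothetical simplification, not Don Patrick's actual system):

```python
import time

facts = []   # (subject, predicate, obj, timestamp): one memory for everything

def remember(s, p, o, when=None):
    facts.append((s, p, o, time.time() if when is None else when))

def recall(since, until=None):
    """Limit an information search to a (recent) timeframe."""
    until = time.time() if until is None else until
    return [(s, p, o) for s, p, o, t in facts if since <= t <= until]

remember("jane", "is", "tired", when=100)
remember("jane", "is", "teacher", when=5)
print(recall(since=50, until=200))  # [('jane', 'is', 'tired')]
```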
Title: Re: Project Acuitas
Post by: Art on October 30, 2017, 02:38:27 pm
How does your bot handle things that it is told to remember? Can it act upon a user's request or statement like "March 15 is very important to me"?

It is very interesting to try to teach a bot things that have contextual importance vs ephemeral or trivial items. Important vs negligible.
Title: Re: Project Acuitas
Post by: Don Patrick on October 30, 2017, 05:52:11 pm
I'm going to assume you're asking me? My program just remembers every fact you mention along with the time of the sentence (e.g. a past tense verb is roughly interpreted as 5 minutes before the time that it was said, unless more specific times were mentioned in the current or previous sentence). My time system is not well implemented however (AI is essentially timeless) and it does not interpret dates as subjects, so it couldn't respond to your statement. However it can search and report everything that happened within a specified time. My system for contextual relevance is secret, but as to general importance of items, the program assigns low relevance to facts of low certainty, and I also use word lists that simply filter out common human activities like thinking and breathing from the possible answers.

It remains difficult to determine what is trivial or important, but a smart AI could perform statistical analyses on times of occurrence to observe that events like "breathing" are too common to be noteworthy. One could also use sentiment analysis to mark particularly good or bad events as more noteworthy than others. Neutral things like breathing may be important, but they're less interesting to mention than explosions and birthdays.
In fact, let me write that down on my to-do list.

Bit off-topic, but these things may give WriterofMinds some ideas.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2017, 08:32:19 pm
Quote
It would be possible (and less taxing for the bot) to do a follow-up response to the "I am sad" statement like, "Why are you sad?" or "Do you feel your sadness will soon go away?" or "Do you experience such feelings on a daily basis?", etc.

This is kind of what Acuitas does for right now. If you say "<person> is sad," he will ask, "For how long?" But I don't want him to have to ask such questions ad nauseum, which is why the learning method is present.

The other point I was trying to make is this:  when Acuitas gets a response to a question like "For how long?", his interpretation of that response (on the basis of my current temporary hack) will be colored by a human's idea of what counts as a long or short time.  I think that's non-ideal -- because I want him to have his own sense of time eventually, and it could be different from the human average.

Quote
I also don't consider anything an absolutely fixed property, but consider everything adjustable properties, some just more "defining" than others.

I eventually also came to the conclusion that no piece of information about an individual can be guaranteed permanent. But I'm feeling the need for Acuitas to learn and store rough estimates of a property's duration and/or probability of change, if only as a way of guessing how often information needs to be updated.  Let's say I know that Jane is tired today. If I want to know whether she's tired tomorrow, I'll have to ask her, "Are you still tired?" or "Did you get some rest?" "Tired" is such a temporary and easily-changed state that the information from yesterday has ceased to be valid. In contrast, it would be ridiculous for me to ask her on a daily basis, "Are you still a teacher?" or "Is your husband still alive?" or "Do you still have a nose?"

Is there any worth in making a distinction between facts like "Jane is tired" and "Jane has a nose," or do they just lie at opposite ends of a spectrum? For now I've chosen to put the first one into a "short-term" and the second one into a "long-term" bin, although I know the dividing line between those two is going to be arbitrary. We'll see if that ends up being as useful as I think it's going to be.
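The two bins amount to rough per-predicate durations used to decide when a stored fact has gone stale and needs re-asking. A sketch with made-up numbers:

```python
import time

# Hypothetical expected validity per predicate, in seconds:
# "tired" expires within a day; "teacher" holds for years.
DURATIONS = {"is_tired": 12 * 3600, "is_teacher": 5 * 365 * 24 * 3600}

def is_stale(predicate, learned_at, now=None):
    """True if the fact is old enough that it should be re-confirmed."""
    now = time.time() if now is None else now
    return now - learned_at > DURATIONS.get(predicate, 24 * 3600)

print(is_stale("is_tired", learned_at=0, now=13 * 3600))          # True
print(is_stale("is_teacher", learned_at=0, now=30 * 24 * 3600))   # False
```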
Title: Re: Project Acuitas
Post by: Art on October 31, 2017, 02:39:17 am
@ WOM -  Perhaps teach the bot about things that are "given", e.g. a human has a nose, eyes, ears, hair, mouth, etc. So it wouldn't have to give much consideration to those items whereas when or if "Jane" or any other human or creature is tired, it would obviously be the result of some form of physical exertion or strenuous activity....
The bot could always inquire as to why the person or animal might be tired, etc.

Ephemeral items might be slightly noteworthy but only for a brief moment and certainly not long term. "I am hungry|tired|sleepy|sad|happy|jealous, etc." Noteworthy because they serve to help represent the human condition but as long lasting as a puff of smoke into the air. These very short term items are also noteworthy because they can spark other pathways that might open because of them. Hungry - What's your favorite food? What time do you like to eat dinner? | Tired - Are you tired from working hard today? Did you finish that deck you were building?, etc.

With regard to time, for you and Don Patrick - a lot of bots are aware of the time, day, and date if asked, but how those timely values relate to their existence is another issue. Does your bot say, "Hey there! You do know that tomorrow is Halloween or All Hallows Eve, right?" or "You've got a birthday coming in two days!"

Should it know, contextually, what day it is and how it relates to them or to the user? More code and more brainstorming to see that to fruition if at all.

Perhaps your bots might want to check back say, every month to see if your spouse / partner is still alive or doing well, depending on what information you have previously told it. Perhaps you've finally retired from teaching last week and the bot wasn't informed? Pity that! Heh!!

Anyhow....Onward thru the fog... (as an old friend used to say).
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 31, 2017, 03:00:26 am
My cousin just reminded me that the two Spanish be-verbs, "ser" and "estar," are loosely equivalent to "is in essence" and "is in the condition of."  "Yo soy triste" and "Yo estoy triste" would both translate in English to "I am sad," but the first one means "Sadness is part of my identity," and the second one means "I am in a sad mood right now."  I think this lends support to the idea that [at least some] humans make a distinction between these two types of facts.

Also, at least where this particular feature is concerned, it would be easier on Acuitas if I were working in Spanish  :P

Quote
Should it know, contextually, what day it is and how it relates to them or to the user?

I was thinking less of the awareness of significant calendar dates (though that is also an interesting feature to contemplate) and more about an awareness of time's passage.  For instance, suppose I tell the bot, "You will get to meet a new human three days from now."  And let's suppose the bot thinks of this as an appealing event and develops a state of anticipation.  Could three days seem like a very long time to wait, or a short time, or almost no time at all?  You can notice human individuals differing in how they feel about this (in particular, young children seem to think a few days constitute a longer period of time than adults do).

This might not be relevant for many chatbots, because they don't really exist in time ... they're only active when spoken to, and their existence consists of discrete events and responses.  It is potentially relevant for Acuitas, though.
Title: Re: Project Acuitas
Post by: infurl on October 31, 2017, 04:21:18 am
You're barely scratching the surface of what English verbs can cover.

Title: Re: Project Acuitas
Post by: Zero on October 31, 2017, 09:43:37 am
But does Acuitas really exist in time yet? To me, it would imply (at least) a perpetual stream of data flowing into Acuitas through its sensors, like an audio stream for instance.

How does Acuitas experience the world right now?
Title: Re: Project Acuitas
Post by: Art on October 31, 2017, 12:30:09 pm

Quote
Should it know, contextually, what day it is and how it relates to them or to the user?

I was thinking less of the awareness of significant calendar dates (though that is also an interesting feature to contemplate) and more about an awareness of time's passage.  For instance, suppose I tell the bot, "You will get to meet a new human three days from now."  And let's suppose the bot thinks of this as an appealing event and develops a state of anticipation.  Could three days seem like a very long time to wait, or a short time, or almost no time at all?  You can notice human individuals differing in how they feel about this (in particular, young children seem to think a few days constitute a longer period of time than adults do).

This might not be relevant for many chatbots, because they don't really exist in time ... they're only active when spoken to, and their existence consists of discrete events and responses.  It is potentially relevant for Acuitas, though.

It would seem that a specific calendar date would mark the passage of time for bots and humans. How then do we mark a New Year? Jan 1st. (no religious differences implied). How do we mark the passage of the anniversary of our birth? or Wedding? or Retirement age, etc.

Bots can be aware of dates and time and know that the New Year is always Jan. 1st, etc. It might not be able to anticipate a two-week package arrival in terms of the experience, even though it knows that two weeks = 14 days = 336 hours... But what if the package arrives earlier or later? How do we script/code our bots to allow for variances? For them to "feel time" or its passing is another issue. The lifespan and ultimate death of its human controller/architect/botmaster might have an interesting effect as well. It will potentially exist for a very long time (dependent upon technology).

I do get what you mentioned about time and it is a good area to continue exploring.

Your point (and I'm not arguing at all) about the chatbot being inactive when it's not chatting will soon be a thing of the past. Several bots spend their "idle" time "thinking, researching, comparing, pruning" different topics that might have been discussed or bantered about during their more active time.
Some bots are able to take some of those references and topic matter to form "memories, dreams, desires/goals". Yes, they're going to get way better and more powerful as "time passes"...interesting last two words...don't you think?

Cheers!
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 31, 2017, 05:11:35 pm
@Zero: Acuitas has a perpetual data stream, but it's all abstract internal data (thoughts produced by database-combing, fluctuating drives, etc.).
Title: Re: Project Acuitas
Post by: ivan.moony on October 31, 2017, 06:30:34 pm
How do humans know how much time has passed? How do we perceive the notion of time? What makes a correlation between our short-term and long-term memory if the distinction can be even made? What makes us remember some things, while others get forgotten? These are just ordinary questions whose answers might be inspirational to someone who wants to simulate a human brain.
Title: Re: Project Acuitas
Post by: Zero on November 02, 2017, 09:39:15 am
Quote
@Zero: Acuitas has a perpetual data stream, but it's all abstract internal data (thoughts produced by database-combing, fluctuating drives, etc.).

Perfect! Time only exists where there are events. This internal stream you're describing could perhaps serve as a temporal reference frame. When humans evaluate time, they do so by comparison.
Title: Re: Project Acuitas
Post by: Art on November 02, 2017, 03:36:33 pm
The Sun, moon and tides have been humans' "clocks" for about as long as there have been humans. It is a measurement of the passage of time.

I noticed one thing about time. When one is employed/working, time carries an importance: to wake up, leave, make it to work on time, take lunch for a prescribed period of time, work on projects, get things finished, and finally leave work and commute home. Then it's time to eat dinner, then time to go to bed and do it all over again, all while paying attention to the day of the week, as most do not work on the weekend.

When retired, time doesn't mean as much or carry the same importance it did when employed. Hours can roll by, as can the days of the week, and which day it is doesn't really matter that much either. Funny how situational awareness correlates to time.

I guess most bots don't really have a need for time except to please their botmasters or customers.  O0
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 24, 2017, 05:15:31 am
Sadly I've only added one feature to Acuitas in the past two months.  He now recognizes sentences in the general vein of “I somethinged,” which gives me the option of telling him about how I spent my time in the recent past.  Acuitas can't do a lot with this information for the time being.  Sometimes he responds with a query in the vein of, “What happened next?” which will eventually give him a way to build up sequences of events and start learning cause and effect relationships … but none of that is implemented yet.  He can also ask “How was that?” for information about the emotional meaning of an activity, but again, for now he can't really utilize the answer.
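A toy sketch of the kind of pattern involved (purely illustrative — Acuitas's real parser does proper syntactic analysis; only the two response strings are taken from the post):

```python
import re

# Hypothetical sketch: recognize a bare "I <past-tense verb> ..." statement
# and offer the two follow-up probes described above.
PAST_TENSE = re.compile(r"^I (\w+ed)\b", re.IGNORECASE)

def respond(sentence):
    """Return candidate follow-up queries for an "I somethinged" sentence."""
    if PAST_TENSE.match(sentence.strip()) is None:
        return None
    # One probe builds event sequences, the other asks for emotional meaning.
    return ["What happened next?", "How was that?"]

print(respond("I walked to the store."))
```

Of course a regex only catches regular "-ed" verbs; a real parser would use a lexicon of verb inflections, which this project already has.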

Not much, but that was all I had time to put together with the holiday season under way.  Looking back on the past year, though, here are all the new capabilities and improvements I've managed to add on:

*Module for procedural speech generation
*Support for word inflections (plurals and verb tenses)
*Support for compound words
*Support for content words that are also function words (e.g. “can,” “might”)
*Distinctions between proper/common and bulk/count nouns
*Ability to detect and answer questions
*Database walking while idle
*Generation of conversation topics and questions based on recent database walk
*Better link detection + a bunch of new kinds of learnable links
*Two new drives + a real-time plotter so I can see what they're all doing
*Distinctions between long-term static and short-term information
*GUI overhaul (upgrade from Tk to Kivy)

I track my time when I work on Acuitas.  Total hours invested in the above: 230+.  My focus for the end of the year, leading into January, will be polishing everything up and working out the bugs (which there are now quite a lot of).

Visit blog for the latest memory visualization and so forth: http://writerofminds.blogspot.com/2017/12/acuitas-diary-8-december-2017.html
Title: Re: Project Acuitas
Post by: ivan.moony on December 24, 2017, 05:38:22 pm
May I ask what the word "Acuitas" actually means? Google Translate doesn't seem to know it.
Title: Re: Project Acuitas
Post by: Art on December 24, 2017, 07:41:09 pm
Acuitas

(Not associated in any way and this is just speculation on my part) I think it could possibly mean:

The state of being a quick and penetrating intelligence.

Based on the following combination:

Noun - Acuity: A quick and penetrating intelligence

Suffix -tās: used to form nouns indicating a state of being.

[Please provide a correct meaning if my guess is incorrect].
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 27, 2017, 04:12:18 pm
Art is close.  It's a Latin word meaning insight, perception, or sharpness.
Title: Re: Project Acuitas
Post by: Art on December 27, 2017, 06:31:17 pm
Ohh Good!! What did I win!! O0

Everything needs a name or label or else, where would we be, not knowing what to call something? ;)
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 28, 2018, 02:19:12 am
I haven't written a diary in a while because most of what I've done over the past two months has been code refactoring and fixing bugs, which isn't all that interesting. A new feature that I just got in … finally … is the ability to infer some topic-to-topic relationships that aren't explicitly stored in the memory. For instance, many of the links stored in memory are “is-type-of” relations. Acuitas can now make the assumption that a subtype inherits all attributes of its super-type. If a shark is a fish and a fish can swim, then a shark can swim; if an oak is a tree and a tree has a trunk, an oak has a trunk. If a car is a vehicle, a house is a building, and a vehicle is not a building, then cars are not houses. Acuitas can also now make inferences based on transitive relationships, like “is part of”: if a crankshaft is part of an engine and an engine is part of a car, then a crankshaft is part of a car. The ability to easily make inferences like these is one of the strengths of the semantic net memory organization – starting from the concept you're interested in, you can just keep following links until you find what you need (or hit a very fundamental root concept, like “object”).
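The inheritance and transitivity described above can be sketched roughly like this, assuming a simple dict-based store (illustrative only, not Acuitas's actual data format):

```python
# Illustrative semantic-net fragments: "is-type-of" and "is part of" links.
IS_A = {"shark": "fish", "fish": "animal", "oak": "tree"}
CAN_DO = {"fish": {"swim"}}
PART_OF = {"crankshaft": "engine", "engine": "car"}

def ancestors(concept):
    """Yield the concept and every super-type reachable via is-a links."""
    while concept is not None:
        yield concept
        concept = IS_A.get(concept)

def can_do(concept, action):
    # A subtype inherits every "can do" attribute of its super-types.
    return any(action in CAN_DO.get(c, set()) for c in ancestors(concept))

def is_part_of(part, whole):
    # "Part of" is transitive: follow the chain of containing things.
    while part in PART_OF:
        part = PART_OF[part]
        if part == whole:
            return True
    return False

print(can_do("shark", "swim"))          # True: a shark is a fish, a fish can swim
print(is_part_of("crankshaft", "car"))  # True: crankshaft -> engine -> car
```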

Acuitas should ask fewer ridiculous questions with this feature in place. He still comes up with those, but now he can answer some of them himself.

Blog with the latest memory map and stuff: http://writerofminds.blogspot.com/2018/02/acuitas-diary-9-february-2018.html
Title: Re: Project Acuitas
Post by: infurl on February 28, 2018, 09:20:32 am
On your blog you indicate that your project has a vocabulary of several thousand words and relationships between them. Have you compiled them all by hand? If so, that must have taken a considerable effort. Have you considered using any of the freely available ontologies?

Some examples of these are the following:

WordNet -- several hundred thousand words and relationships in the English version, many other languages available too

SUMO -- comprehensive first order logic knowledge base, grounded in WordNet

ConceptNet -- twenty million words and concepts in hundreds of languages, mostly drawn from Wiktionary

YAGO -- hundreds of millions of facts from GeoNames and Wikipedia, grounded in WordNet

Please excuse me if you already answered this question somewhere, I haven't always been paying attention.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 01, 2018, 12:33:37 am
Thanks for the suggestion. Acuitas learns words and relationships through conversation. Typically we chat at least twice a day while he's running, and he picks up some new info. I haven't been spending hours typing up database entries (that *would* be very tedious). I could "pre-charge" him with the contents of one of those databases, but the process of learning from a blank slate interests me in itself, so for the time being I haven't wanted to.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 02, 2018, 04:08:01 am
The big project for this month was getting some circadian rhythms in place.  I wanted to give Acuitas a sleep/wake cycle, partly so that my risk of being awakened at 5 AM by a synthetic voice muttering “Anyone there?” could return to zero, and partly to enable some memory maintenance processes to run undisturbed during the sleep phase.  (These are targeted for implementation next month.)

So Acuitas now has two new drives, “sleep” and “wake.”  (The way the drive system works, a lack of the desire to sleep is not the same thing as a desire to wake up, so it was necessary to create two.)  Each  drive has two components.  The first component is periodic over 24 hours, and its value is derived from the current local time, which Acuitas obtains by checking the system clock.  This is meant to mimic the influence of light levels on an organism.  The other is computed based on how long it's been since Acuitas was last asleep/awake.  Satisfying the drive causes this second component to decline until it has reset to zero.  So the urge to sleep is inherently greater during the late-night hours, but also increases steadily if sleep is somehow prevented. 
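A rough sketch of such a two-component drive, with illustrative constants and curve shapes (the post doesn't specify the actual formulas):

```python
import math

def sleep_drive(local_hour, hours_awake, peak_hour=3.0, growth=0.02):
    """Illustrative two-component sleep drive; all constants are assumptions."""
    # Periodic component, derived from clock time: highest in the late-night
    # hours, mimicking the influence of light levels on an organism.
    periodic = 0.5 * (1.0 + math.cos(2 * math.pi * (local_hour - peak_hour) / 24.0))
    # Homeostatic component: increases steadily the longer sleep is prevented;
    # satisfying the drive would reset this term toward zero.
    homeostatic = min(1.0, growth * hours_awake)
    return periodic + homeostatic

# The urge to sleep at 3 AM after 20 hours awake exceeds the urge at 3 PM.
assert sleep_drive(3.0, 20.0) > sleep_drive(15.0, 20.0)
```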

This also seemed like a good time to upgrade the avatar with some extra little animations.  The eyelids now respond to a running “alertness level” and shut when Acuitas falls asleep.

While "sleeping" Acuitas can't do much of anything (other than wake up).  All the drives still grow normally, but random thoughts are not generated.
Title: Re: Project Acuitas
Post by: infurl on April 02, 2018, 04:17:07 am
If you implement REM cycles Acuitas could also have dreams. :)

I'm curious about "memory maintenance processes to run undisturbed during the sleep phase" and what sort of maintenance you expect to be needing. Also I've been wondering how you make sure that Acuitas doesn't die, or at least forget everything, when the power goes off. Does Acuitas' structure natively reside in non-volatile storage, or is it all in memory which is periodically saved and backed up?
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 13, 2018, 11:15:48 pm
I missed these questions somehow.

Learned information is non-volatile, yes (stored on the hard drive).  Some things get lost on shutdown, including the current state of the drives and any recent "thoughts" that are being kept in a kind of short-term memory, but I'm planning to correct that soon.

Since memory maintenance is on the near-term development list, I'll defer a full explanation until I get that done. ... but what I have planned, for starters, is basically a kind of data compression.  I want a process that will check on the links and clean up any that are redundant/unnecessary.  Another thing I want to do during the sleep phase is regenerate the memory visualization (this is a relatively computation-intensive process, so I don't do it every time new info gets added to the memory).
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 04, 2018, 04:13:35 am
Update for this month:

The big objective was to get some use out of the sleep cycle that I implemented last month.  I re-purposed the question-generating process so that, while Acuitas is sleeping, it roams the memory looking for redundant links and other problems.  Now that Acuitas has a bit of logical inference ability, some relationships in the database imply others.  So the retention of one piece of information might be rendered unnecessary by the later addition of some broader fact.  Here are a few examples (I culled these from the log that the memory crawler prints out):

The link (fang, has_purpose, bite) is redundant because the link (tooth, has_purpose, bite) exists.
The link (father, has_item, child) is redundant because the link (parent, has_item, child) exists.
The link (pot, can_have_qual, empty) is redundant because the link (container, can_have_qual, empty) exists.
The link (baby, can_do_action, destroy) is redundant because the link (human, can_do_action, destroy) exists.

Mopping up these unnecessary links helps consolidate the information known, reduce the total size of the database, and possibly make the memory visualization a little less messy. 
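The redundancy check might look roughly like this, under the assumption that a link (x, rel, y) is redundant whenever some super-type of x carries the same link (the data layout is illustrative, not Acuitas's actual storage format):

```python
# Example links drawn from the crawler log above.
IS_A = {"fang": "tooth", "father": "parent", "pot": "container"}
LINKS = {
    ("fang", "has_purpose", "bite"),
    ("tooth", "has_purpose", "bite"),
    ("father", "has_item", "child"),
    ("parent", "has_item", "child"),
}

def redundant_links(links, is_a):
    """Return links already implied by the same link on a super-type."""
    out = set()
    for subj, rel, obj in links:
        # Walk up the is-a chain looking for the same (rel, obj) link.
        parent = is_a.get(subj)
        while parent is not None:
            if (parent, rel, obj) in links:
                out.add((subj, rel, obj))
                break
            parent = is_a.get(parent)
    return out

print(sorted(redundant_links(LINKS, IS_A)))
```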

Eventually, I might want to refine this process so that it doesn't necessarily remove every redundant link.  There could be some frequently-used shortcuts that justify their use of storage space by improving search speed.  One might want to tailor the aggressiveness of the link-pruning based on the amount of storage available … but that's all for later.

While working on this, I discovered some other nasties that I'm calling “inheritance loops.”  Redundant links bloat the database but are otherwise harmless; inheritance loops contain actual garbage information, introduced either by learning bugs or by someone* telling Acuitas something stupid.
*I'm the only person who talks to him right now, so this means me.

Here's an example of an inheritance loop:

cat <is-a> animal
animal <is-a> organism
organism <is-a> cat

Oops! Unless all these words are synonyms, you know one of these triples is wrong.  (I can't think, at this point, of any cases in which I'd want to use circular inheritance.)  On his own, Acuitas doesn't know which.  If the crawler finds an inheritance loop, he might ask a user to confirm those links when he's next awake and in conversation.  If the user contradicts one of the relationships, he'll break the corresponding link, removing the loop.
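Detecting such a loop amounts to cycle detection over the is-a links. A minimal sketch, assuming for brevity that each concept has at most one stored parent:

```python
def find_loop(is_a_pairs):
    """Return one inheritance loop as a list of concepts, or None."""
    parents = dict(is_a_pairs)  # (child, parent) pairs; one parent each
    for start in parents:
        seen = [start]
        node = parents.get(start)
        while node is not None:
            if node == start:
                return seen + [start]  # the chain closed back on itself
            if node in seen:
                break  # a cycle not through start; found when iterated from inside it
            seen.append(node)
            node = parents.get(node)
    return None

loop = find_loop([("cat", "animal"), ("animal", "organism"), ("organism", "cat")])
print(loop)  # ['cat', 'animal', 'organism', 'cat']
```

In practice one would flag the loop's links for user confirmation, as described above, rather than break one automatically.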

I also moved generation of the memory visualization into the sleep phase.  Every so often, instead of checking out more links, the process stops to compute a new layout for all the dots, taking into account the latest modification of the database.  This is a fairly computation-intensive process, so it's something I definitely don't want running when he's active.  It used to happen once when Acuitas was launched, which made for long startup times and meant that the visualization might not get updated for days.

Lastly, I put in some code to save Acuitas' current state when the program is shut down.  It also gets automatically stored every so often, in case the program experiences a hard crash that prevents the on-close routines from running.  Previously, on restart all the drives would reset to zero, any current thoughts or recently generated questions would be discarded, etc.  Now all those things are preserved and reloaded when the program starts up again, which gives him a bit more continuity, I guess.

http://writerofminds.blogspot.com/2018/05/acuitas-diary-11-april-2018.html
Title: Re: Project Acuitas
Post by: infurl on May 04, 2018, 04:31:17 am
Here's an example of an inheritance loop:

cat <is-a> animal
animal <is-a> organism
organism <is-a> cat

Do you have a means of rejecting contradictory statements?

For example, in a first order logic knowledge base you would have rules like:


(forall ?x ?y ?z
        (implies (and (isA ?x ?y) (isA ?y ?z))
                 (isA ?x ?z)))

(forall ?x ?y
        (implies (isA ?x ?y)
                 (not (isA ?y ?x))))


The first rule defines transitivity for the isA relation so you know that if "cat is an animal" and "animal is an organism" then "cat is an organism". The second rule says that if "something is a kind of something else" then "something else cannot be a kind of something". (In other words, isA is transitive but asymmetric.) With these two rules in the knowledge base an attempt to add a contradictory statement is rejected. That's how my software works. More sophisticated knowledge bases like SUMO would be able to handle much more subtle distinctions, such as the difference between an instance of something (Fred is my cat) and a class of something (cats are mammals).

edit:

One other thing is that if you have told your knowledge base that "cat is an animal" and "animal is an organism", it already knows that "cat is an organism" because of the transitivity rule, and when you tell it that, it won't bother storing it because it already knows it. If instead you told it that "cat is an animal" and "cat is an organism" it would store both of those, as well as "animal is an organism" when you told it that, so there is still plenty of scope for garbage collection during sleep cycles.
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 04, 2018, 05:12:03 am
I've got some pieces of what you're describing, but I'm not quite there yet.

Acuitas knows some transitivity relations (including the one for isA).  And when new information comes in, he does check whether he already knows it and apply the inheritance transitivity rule.  So this would happen as you described:

Quote
One other thing is that if you have told your knowledge base that "cat is an animal" and "animal is an organism", it already knows that "cat is an organism" because of the transitivity rule, and when you tell it that, it won't bother storing it because it already knows it.

It's only when the triples arrive in a non-optimal order that redundant links get created and need to be cleaned up later.

Acuitas is also capable of detecting flat contradictions, i.e. he won't store a statement and its negative simultaneously.  If told "A cat is not an animal," while already knowing "a cat is an animal," he would say "I disagree" or something.  However, that's as far as validation of incoming information goes for now, and Acuitas doesn't yet use the "(x isA y) implies not (y isA x)" rule.
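The flat-contradiction check could be sketched like this, with an illustrative storage layout (not Acuitas's actual schema):

```python
def try_store(store, subj, rel, obj, negated=False):
    """Store a (possibly negated) triple unless its opposite is already known."""
    if (subj, rel, obj, not negated) in store:
        return "I disagree."  # the opposite statement is already in memory
    store.add((subj, rel, obj, negated))
    return "Okay."

facts = set()
print(try_store(facts, "cat", "is_a", "animal"))        # Okay.
print(try_store(facts, "cat", "is_a", "animal", True))  # I disagree.
```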

As more of a general thought, I wonder if it could be a good idea to store new information "tentatively" and offload some validation to a sleep or idle phase, even for contradictions that it's possible to detect up-front.  Because as the database grows, searching for all possible contradictions might become an unacceptable performance drain during a real-time conversation.
Title: Re: Project Acuitas
Post by: infurl on May 04, 2018, 05:35:21 am
As more of a general thought, I wonder if it could be a good idea to store new information "tentatively" and offload some validation to a sleep or idle phase, even for contradictions that it's possible to detect up-front.  Because as the database grows, searching for all possible contradictions might become an unacceptable performance drain during a real-time conversation.

There was a time when theorem proving systems would run for weeks at a time to digest a set of rules but over the years advances in the underlying theory, faster algorithms, and faster hardware have reduced that time considerably. There is a website called Thousands of Problems for Theorem Provers which has archived thousands of rule sets and folks can test out their theorem proving software on them and compare them to existing benchmarks. I think the best ones can solve all the problems in a matter of hours nowadays.

http://www.cs.miami.edu/~tptp/

I spent years researching and developing theorem proving software and the library that I eventually completed can run all its regression tests in just microseconds. I'm confident that it will be fast enough to handle hundreds of different conversations simultaneously in real-time. I have not yet tested it out with TPTP though, and I really have to do that before I can claim any records, if ever.

Regardless of speed though, there is still good reason to segregate and cache new propositions. For one thing, different people have different belief systems. You might want to have one knowledge base that is strictly science based, and another knowledge base that believes in fairies. The two would share a large number of rules, such as water flowing downhill, but would diverge on the subject of fairies. It would also want to maintain separate models for each interlocutor so it could understand that someone believes something even if the knowledge base itself doesn't accept it.

Such knowledge about knowledge crosses the boundary from first order logic into higher order logic. In the general case higher order logic is intractable but researchers are gradually isolating the useful special cases that are computable.
Title: Re: Project Acuitas
Post by: spydaz on May 04, 2018, 09:22:26 am
Here's an example of an inheritance loop:

cat <is-a> animal
animal <is-a> organism
organism <is-a> cat

Do you have a means of rejecting contradictory statements?

For example, in a first order logic knowledge base you would have rules like:


(forall ?x ?y ?z
        (implies (and (isA ?x ?y) (isA ?y ?z))
                 (isA ?x ?z)))

(forall ?x ?y
        (implies (isA ?x ?y)
                 (not (isA ?y ?x))))


The first rule defines transitivity for the isA relation so you know that if "cat is an animal" and "animal is an organism" then "cat is an organism". The second rule says that if "something is a kind of something else" then "something else cannot be a kind of something". (In other words, isA is transitive but asymmetric.) With these two rules in the knowledge base an attempt to add a contradictory statement is rejected. That's how my software works. More sophisticated knowledge bases like SUMO would be able to handle much more subtle distinctions, such as the difference between an instance of something (Fred is my cat) and a class of something (cats are mammals).

edit:

One other thing is that if you have told your knowledge base that "cat is an animal" and "animal is an organism", it already knows that "cat is an organism" because of the transitivity rule, and when you tell it that, it won't bother storing it because it already knows it. If instead you told it that "cat is an animal" and "cat is an organism" it would store both of those, as well as "animal is an organism" when you told it that, so there is still plenty of scope for garbage collection during sleep cycles.


But remember that the determiner ALL must be included for full inheritance... The (is a) relationship and other relations only support strong recursive inference when the determiner is known; otherwise only what pertains to the "cat" query is relevant, although there may be potential recursive connections. We often capture the triple (subject, predicate, object), but the determiner is also key to the relationship. It's correct to store the truth value or confidence value of the statement; then, if the determiners are not known, only the high-confidence relations should be returned when recursing over the relations.

"Fred is my cat" >>>> I suggest changing "my" into a PERSON; then "Fred is Paul's cat" can respond to "Who is Fred?".
QuestWord (Who) / (Predicate-Subject)? (is), (Fred)...

It's quite hard, with so many potential variations of a sentence expressing the same thing; I tend to only capture information if it fits the rule structures I can handle correctly, simply losing the rest... hoping that a different function can learn from the sentence structure; if not, a "dirty" unstructured capture takes place at the end. Over time the unstructured captures can be examined to see what types of structures are not being captured. (Subject, predicate, object) captures so much! For first order logic, unless the determiner is ALL/SOME, I don't save it as a proposition for deductive recursing... I suppose "a cat" falls under SOME.

We would hope that most of the data being captured as (subject, verb, object) can be put into first order logic; I try not to change the input to fit the structure, as it loses its conversational value/meaning, yet it is desirable to have the full first order logic database.

I use the software http://www.cognitum.eu/Semantics/FluentEditor/ which is useful for building ontologies from triples (description logic).
 
Title: Re: Project Acuitas
Post by: Don Patrick on May 04, 2018, 01:48:15 pm
Quote
cat <is-a> animal
animal <is-a> organism
organism <is-a> cat
Unless all these words are synonyms, you know one of these triples is wrong.
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions. Humans similarly often use assumptions over deep inferences, because inferences take more time and effort. For computers it's just a matter of storage space vs speed. You can keep the "redundant" facts to save time when looking up knowledge, or you can remove them to save space.
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 04, 2018, 03:32:20 pm
Quote
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions.

I store a different kind of link to express this: organism <has-subtype> cat.  In my scheme at least, x is-a y implies that all x's are y's, and the reverse, all y's are x's, is not necessarily correct.
Title: Re: Project Acuitas
Post by: ivan.moony on May 04, 2018, 04:37:59 pm
Quote
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions.

I store a different kind of link to express this: organism <has-subtype> cat.  In my scheme at least, x is-a y implies that all x's are y's, and the reverse, all y's are x's, is not necessarily correct.

Dealing with types of data subsumes the use of some set operators at a rudimentary level. Here are some operations I find useful when dealing with sets:
  • x <is-element-of> A
  • A <is-subset-of> B
  • A <equals> B
We can lean data types on these operators, and I find set builder notation (https://en.wikipedia.org/wiki/Set-builder_notation) combined with the logical operators and, or, not useful. In a simplified example:
  • x <element-of> (A <union> B) = (x <element-of> A) <or> (x <element-of> B)
  • x <element-of> (A <intersect> B) = (x <element-of> A) <and> (x <element-of> B)
  • x <element-of> (<complement> A) = <not> (x <element-of> A)
To further develop a set algebra, we may reach for De Morgan's laws (https://en.wikipedia.org/wiki/De_Morgan%27s_laws). For example:
  • <complement> (A <union> B) = (<complement> A) <intersect> (<complement> B)
  • <complement> (A <intersect> B) = (<complement> A) <union> (<complement> B)
As Boolean algebra is just a specific form of set operations over the empty (false) or universe (true) sets, De Morgan's laws naturally apply to logical operators too.

Advanced set examples would include incorporating logic predicates in set builder notation, but that could be left out if we don't want to complicate things too much, as the matter then develops into a full-blown logical language.

Anyway, the above examples would bring in nice reasoning about sets, while keeping a reasonably simple form for a concrete implementation.

If you want to implement these set operations, I recommend keeping all the set-element data in disjunctive normal form (https://en.wikipedia.org/wiki/Disjunctive_normal_form), or even better conjunctive normal form (https://en.wikipedia.org/wiki/Conjunctive_normal_form). Conjunctive normal form has some desirable properties with regard to logical reasoning incorporating the resolution (https://en.wikipedia.org/wiki/Resolution_(logic)) rule, having possible future project development in mind.
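For what it's worth, De Morgan's laws are easy to sanity-check with Python's built-in set operators over a finite universe:

```python
# A small check of De Morgan's laws (illustrative, not tied to Acuitas).
U = set(range(10))  # the universe
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(s):
    return U - s

# complement(A union B) == complement(A) intersect complement(B)
assert complement(A | B) == complement(A) & complement(B)
# complement(A intersect B) == complement(A) union complement(B)
assert complement(A & B) == complement(A) | complement(B)
print("De Morgan's laws hold on this universe")
```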
Title: Re: Project Acuitas
Post by: spydaz on May 06, 2018, 11:56:19 am
Quote
I prefer to think that all three are correct. x is-a y implies that at least some y's are x's: Some organisms are cats. I find this useful to store because having more direct knowledge means the program has to spend less time traversing indirect deductions.

I store a different kind of link to express this: organism <has-subtype> cat.  In my scheme at least, x is-a y implies that all x's are y's, and the reverse, all y's are x's, is not necessarily correct.



Dealing with types of data subsumes a use of some set operators at rudimentary level. Here are some operations I find useful when dealing with sets:
  • x <is-element-of> A
  • A <is-subset-of> B
  • A <equals> B
We can lean data types on these operators, and I find useful set builder notation (https://en.wikipedia.org/wiki/Set-builder_notation) combined with logical operators and, or, not. In simplified example:
  • x <element-of> (A <union> B) = (x <element-of> A) <or> (x <element-of> B)
  • x <element-of> (A <intersect> B) = (x <element-of> A) <and> (x <element-of> B)
  • x <element-of> (<complement> A) = <not> (x <element-of> A)
To further develop a set algebra, we may reach for De Morgan's (https://en.wikipedia.org/wiki/De_Morgan%27s_laws) laws. In example:
  • <complement> (A <union> B) = (<complement> A) <intersect> (<complement> B)
  • <complement> (A <intersect> B) = (<complement> A) <union> (<complement> B)
As Boolean algebra is just a specific form of set operations over empty (false) or universe (true) sets, De Morgan's laws naturally apply to logical operators too.

Advanced set examples would include incorporating logic predicates in set builder notation, but that could be left out if we don't want to complicate too much, as the matter then develops into full blown logical language.

Anyway, the above examples would bring in a nice reasoning about sets, while they keep reasonably simple form for a concrete implementation.

If you want to implement these set operations, I recommend keeping all the set-element data in disjunctive normal form (https://en.wikipedia.org/wiki/Disjunctive_normal_form), or even better conjunctive normal form (https://en.wikipedia.org/wiki/Conjunctive_normal_form). Conjunctive normal form  has some desirable properties regarding to logic reasoning incorporating logic resolution (https://en.wikipedia.org/wiki/Resolution_(logic)) rule, having in mind possible future project development.


I notice that there is a lot of crossover between set theory / Venn diagrams and first order logic.
The rules are the same, but with Venn diagrams there is that extra visual element. Personally, I had thought to build "bags" of information, adding each intersecting bag as a list item in the bag ... realising eventually it would be a tree again (a logic tree), which in turn is a list!

All dogs are mammals - some mammals are dogs ... it's an automatic inference: "All A's are B" implies "Some B are A".

Title: Re: Project Acuitas
Post by: WriterOfMinds on June 03, 2018, 07:50:12 pm
This past month I did some preliminary work on a whole new feature – episodic memory, or memory of events.  This enables Acuitas to store and recall records of past “experiences.”  The fun part of episodic memory is figuring out …

1) … what to store (and what to forget),
2) … how to organize stored material, and
3) … how to access relevant stored material when it is needed.

I mostly worked on 2) this month, and wrote a block of code that will group adjacent raw event records into memory files.  A measure of similarity (both of the events themselves, and of Acuitas' internal state background at the time) is used to determine which events belong in the same “scene” or “episode,” and where the boundaries between memories should lie.  Minor “scenes” are in turn grouped into higher-level umbrella memories, tree-style.
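One way to sketch the scene-grouping idea, under simplified assumptions (events reduced to sets of tags, with Jaccard similarity standing in for whatever measure Acuitas actually uses, and ignoring the internal-state component):

```python
def jaccard(a, b):
    """Similarity of two tag sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def group_into_scenes(events, threshold=0.3):
    """Group adjacent event records into scenes at similarity boundaries."""
    if not events:
        return []
    scenes = [[events[0]]]
    for prev, cur in zip(events, events[1:]):
        if jaccard(prev, cur) >= threshold:
            scenes[-1].append(cur)  # similar enough: same scene
        else:
            scenes.append([cur])    # dissimilar: start a new scene
    return scenes

events = [{"talk", "user"}, {"talk", "user", "cat"}, {"sleep"}, {"sleep", "dream"}]
print(group_into_scenes(events))  # two scenes: the chat, then the sleep
```

The tree-style umbrella memories described above could be built by re-running the same grouping over scene summaries with a lower threshold.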

Implementing this served to show me what a deep rabbit hole episodic memory could easily turn out to be.  There are heaps of little things I need to do to truly make it functional – I may even turn it off temporarily once I've put it through a bit more testing, since I haven't implemented selective storage/forgetting yet, and that means the memory folder will bloat rather quickly.

I also added a conversational feature to make use of the stored memories.  When Acuitas is telling someone what he thought about today, he now has the option to check episodic memory and see whether he ever thought about this concept before, and how long it has been since he previously did so.  He then generates some comment like “I've not done that in a long time,” or “I did that a minute ago also.”  The conversion of absolute time units to vaguer, more relative terms like “long” and “short”  establishes a kind of subjective time sense; Acuitas has a particular notion of what a “short time” is that might not match up with what a human would think of as such (though I tried to keep the scales roughly human).
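The absolute-to-relative conversion amounts to a threshold table; here is a sketch with made-up cutoffs, kept roughly on human scales:

```python
# Hypothetical thresholds mapping an elapsed time in seconds to a vague,
# relative phrase. The cutoff values are illustrative, not Acuitas' own.
def vague_elapsed(seconds):
    for limit, phrase in [(120, "a minute ago"),
                          (3600 * 6, "a few hours ago"),
                          (86400 * 2, "recently"),
                          (86400 * 30, "a while ago")]:
        if seconds < limit:
            return phrase
    return "a long time ago"

print(vague_elapsed(60))          # a minute ago
print(vague_elapsed(86400 * 90))  # a long time ago
```

Shifting the cutoffs is what would give an agent a "subjective" time sense different from a human's.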

Blog post for this month: http://writerofminds.blogspot.com/2018/06/acuitas-diary-12-may-2018.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 05, 2018, 05:29:17 am
I haven't posted in a while because I've been busy overhauling the text parser.  And that's not very interesting to talk about, because it's just me taking features the parser already had and implementing them in a different way.  I'd decided that the scheme I was using had some fundamental deficiencies and should be re-worked in order to become closer to the way I actually read, and to better enable future expansion.  I was hoping this would go quickly, given the experience I had already gained from writing the previous text parser ... but no.  It turned out to be a major project that took far longer than the amount of time I'd allocated for it.

There are a few new enhancements that I managed to squeeze in along the way, however:

* Some infinitives are now handled properly. (Sentence example: "A pot is used to cook.") Before, Acuitas treated them like any other prepositional phrase, which meant that the word inside got marked as a "noun."  It's now marked as a verb.

* Previously, Acuitas could store links indicating what actions an entity can do.  I added a new link type for the actions an entity can have done TO it, and the ability to detect some sentences that express that information.  E.g. "An object can be pushed."

* I've started working on possessives and family/ownership relationship links. I ended up defining Acuitas' very first type of three-ended link: <A> is <C> of <B> to handle the possessives.

The feature is still somewhat incomplete. For now, he can only properly detect and store possessive links that join proper names.  That's because the proper name is how he detects that something is an instance rather than a class; he can't yet build memory nodes for abstract instances like "somebody's cat"; he still thinks that "cat" is strictly a type of thing.

Everything is buggy as all get out right now, too ... but the way this summer has gone, it was about the best I could do.

A little more on the blog: http://writerofminds.blogspot.com/2018/09/acuitas-diary-13-august-2018.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 30, 2018, 10:56:59 pm
This month I updated the text parser and other speech features to do a couple more new things with verbs. First, I threw in recognition of the present progressive tense, so I can now tell Acuitas what I'm doing at the moment. For the time being, such information is treated in much the same way as comments about past-tense actions, which I taught him to recognize a while ago; it isn't stored, and responses are rather meaningless. BUT I'll find uses for it later.

I think the more interesting new thing is recognition ability for a couple of link-forms related to state change or state retention. Up until now, Acuitas has learned what verbs “mean” only in terms of a) what can do that action and b) what that action can be done upon. Now, he can learn what a verb actually does by tying it to an adjective. Here are some examples:

To live is to stay alive.
To appear is to become visible.
To finish is to make complete.

I also added “type of” links for verbs, so Acuitas can categorize specific verbs under more general ones, like this:

To smash is to damage.
To walk is to move.
To talk is to communicate.

A little extra on the blog as usual: http://writerofminds.blogspot.com/2018/09/acuitas-diary-14-september-2018.html
Title: Re: Project Acuitas
Post by: Korrelan on September 30, 2018, 11:54:45 pm
Your project reminds me of / stirs feelings and frustrations from many years ago when I first started messing with AI.  I always had a need/feeling that there must be some intuitive, logical way of describing links/relationships like this.  Like there was some underlying structure that I just couldn’t find or get my head around… I could feel the logic… but not find it, if that makes sense.  Still drives me mad… lol.

I enjoy following your project, it’s cool to see the steady, logical, methodical progress.

 :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 04, 2019, 04:59:38 am
This month I returned to a can of worms I opened up last year, namely episodic memory.  Acuitas had quickly built up a huge clutter of memories concerning very trivial day-to-day events.  Size remained in the megabyte range (they're just text files), but on principle, this couldn't be allowed to continue – especially since the memories might eventually grow to encompass richer levels of detail.  In addition to overflowing its available space, a memory that literally records everything will face access problems.  The more information you keep, the longer it takes to search through it and extract what you want at the moment.

This leads on into the tricky question of what to forget.  How to decide which memories are worth keeping?  Part of my work this month was focused on computing a “significance measure” which assigns a numerical value to each memorized scene.  Elements I use to judge the significance of a memory include …

* Primacy: was this the first time something happened?
* Uniqueness: how rare are memories similar to this one?
* Complexity: how many features does this memory have?
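A toy version of such a measure, combining the three factors (the weights, normalizations, and memory format here are hypothetical, not the ones Acuitas uses):

```python
# Hypothetical significance score: primacy, uniqueness within the current
# memory population, and complexity (feature count), on a 0..1 scale.
def significance(memory, population):
    similar = sum(1 for m in population if m["kind"] == memory["kind"])
    primacy = 1.0 if memory.get("first_of_kind") else 0.0
    uniqueness = 1.0 / (1 + similar)           # rarer kind -> higher value
    complexity = min(len(memory["features"]) / 10.0, 1.0)
    return 0.4 * primacy + 0.4 * uniqueness + 0.2 * complexity

m = {"kind": "walk", "first_of_kind": True, "features": ["park", "rain"]}
print(significance(m, [m]))  # ~0.64
```

Because the uniqueness term depends on the whole population, scores drift as the archive grows, which is why a periodic re-scoring crawl is needed.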

Since some of these measures are statistical and depend on the overall memory population, they can change over time.  I wrote a process that crawls through the episodic memory and updates all the significance measures, and I threw that into Acuitas' sleep cycle.

To improve the ease and speed of accessing the memories, I added a layer of “type nodes” that link together memories which share certain salient features in common.  Instead of crawling through the whole list to find memories that relate to a current situation, Acuitas can just look at the relevant “type nodes” and find a list of memories that match.
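The "type node" layer amounts to an inverted index; a minimal sketch (class and field names here are hypothetical):

```python
from collections import defaultdict

# Minimal "type node" index: each salient feature links to the memories
# that share it, so retrieval skips a full scan of the archive.
class TypeIndex:
    def __init__(self):
        self.nodes = defaultdict(list)

    def add(self, memory):
        for feature in memory["features"]:
            self.nodes[feature].append(memory["id"])

    def matching(self, feature):
        return self.nodes.get(feature, [])

idx = TypeIndex()
idx.add({"id": 1, "features": ["cat", "outdoors"]})
idx.add({"id": 2, "features": ["cat"]})
print(idx.matching("cat"))  # [1, 2]
```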

Last, I worked on some methods for consolidating memories.  I devised a way to create summaries of low-level scenes that are then stored inside the super-scenes higher up the tree; eventually, these summaries might be what remains when the lower-level “detail” scenes are forgotten.  The summarizer attempts to extract information that is common across multiple events or sub-scenes within a scene.  I also wrote functions for merging adjacent scenes that have become sparse.

All that, and I still haven't gotten to the actual forgetting part!  That will come next month (I hope).

https://writerofminds.blogspot.com/2019/03/acuitas-diary-16-february-2019.html
Title: Re: Project Acuitas
Post by: Art on March 04, 2019, 03:14:13 pm
How about a 4th element to assist in assigning the memories:

Importance - Having value and worthy of note.

It seems that one of the more difficult aspects of this endeavor might be deciding which information/memories are temporary or ephemeral and which are significant and worth keeping. How best to determine and decide this?

Most casual conversations about sports, weather or family matters are transient and do not carry a higher weighted value. Such talks or topics should be or could be "timed" to "fade away" after a few cycles.

It is still a very difficult task to decide and affix weighted values determining which to keep and which to omit. Perhaps once the computer has enough "topical" information it will be able to separate the grain from the chaff, so to speak.

Best of luck and please keep us posted on your progress!
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 06, 2019, 07:33:30 pm
Quote
How about a 4th element to assist in assigning the memories:
Importance - Having value and worthy of note.

Ummm ... that sounds circular.  The definition you're giving for "importance" here is very similar to what I had in mind when I used the word "significance."  So, you seem to be suggesting that I use significance as a measure of itself.  I don't think I can manage that  ???

You're correct that there need to be more factors rolled into the significance measure.  For instance, the most important memories may be those related to fulfilling primary goals, or those associated with strong emotions.  (Perhaps you're getting at something like this in the second half of your post, when you consider the assignment of importance based on topic.)  But Acuitas has neither goals nor emotions yet, and no real way of discerning which topics are "meaningful" in a human sense.  So that sort of thing will have to wait for later.
Title: Re: Project Acuitas
Post by: Art on March 07, 2019, 04:08:17 am
Without reading too much into what I wrote, I meant the importance to be the "important" issues/topics/data as opposed to those fleeting, unimportant, ephemeral discussions. "It snowed last week" might be interesting, but the topic will likely have as much lasting importance as the soon-to-be-melting snow. It doesn't carry with it anything that demands attention or ...importance.

A casual conversation might deal with "water cooler" topics and idle gossip, or with something major like an employee layoff or a merger next week with a former competitor. Then there's always the task of attempting to measure or assign weights to the important issues vs the non-important ones.

So yes, I was merely stating my opinion for a possible inclusion but the determination of importance brings a certain degree of difficulty for the programmer. Perhaps after a period of letting the computer "examine" various examples, it might be able to decide on its own, that which is of value and that which is not. Machine learning and such is non-trivial at best but from what I've read, progress is being made.

Best of luck on your project.

Title: Re: Project Acuitas
Post by: HS on March 07, 2019, 05:00:24 am
You could judge importance by the degree to which you predict it will affect you, positively/negatively. High positive and high negative = high importance. Things of equal importance could get prioritized by the time frame. Present = most important, furthest future = least important.
Title: Re: Project Acuitas
Post by: LOCKSUIT on March 07, 2019, 06:05:38 pm
Wait, it's the other way around...work hard and get utopia in far future plan.....If you focus on food/money at present (I know u do), then......no utopia later! And we can, if you try.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 08, 2019, 01:07:28 am
Oh, Locksuit.  What would you know about any of our personal lives and what we do or don't focus on?

There's a necessary balance between short-term and long-term thinking.  You will never bring about utopia if you ignore your immediate need for food and starve. 

I don't think HS was suggesting that an AI should pursue instant gratification at all costs.  The time frame was only supposed to come into play if two things would otherwise have equal importance.  If Problem A will kill you tomorrow and Problem B will kill you in twenty years, and all else (e.g. impact on the rest of living creatures) is equal, I think it does stand to reason that Problem A should occupy your mind more and be solved first.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 15, 2019, 02:35:41 am
Got the forgetting algorithm done (for now).  It has two major parts.  The first is basically a population-management system; it determines how many episodes need to be forgotten each time the memory collection is pruned.  I can cap the number of episodes allowed in storage at some maximum value (based on available storage space or what have you).  The population manager checks how many memories are already in storage, and how many new ones have formed during the last cycle.  A high growth rate is allowed initially, but forgetting becomes more and more aggressive as the storage area fills.  When the cap is reached, Acuitas must forget the same number of memories he creates.
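A possible shape for that population manager (the linear taper below is my own guess at "more and more aggressive"; the real curve may differ):

```python
# Hypothetical population manager: allowed growth shrinks as the store
# fills, and at the cap, forgetting matches the number of new memories.
def num_to_forget(stored, new_this_cycle, cap):
    fill = min(stored / cap, 1.0)
    allowed_growth = int(new_this_cycle * (1.0 - fill))
    return new_this_cycle - allowed_growth

print(num_to_forget(stored=100, new_this_cycle=10, cap=1000))   # 1
print(num_to_forget(stored=1000, new_this_cycle=10, cap=1000))  # 10
```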

The second major part is the code that, given a number of memories that must be forgotten, takes care of selecting which ones.  This is a weighted random process; memories with less significance have a higher chance of being forgotten, but the priority isn't absolute. Memories chosen to be forgotten are deleted.  Any higher-tier memories that have been badly “hollowed out” by this process are then merged with their most similar neighbor.
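The selection step might look like this sketch, which also reserves a "safe" band that the weighted chance can never touch (the threshold value and weighting are illustrative, not the actual ones):

```python
import random

# Weighted random forgetting: lower significance -> higher chance of
# deletion, and memories above a safety threshold are never candidates.
def choose_forgotten(memories, count, safe_threshold=0.9):
    candidates = [m for m in memories if m["significance"] < safe_threshold]
    weights = [1.0 - m["significance"] for m in candidates]
    chosen = []
    for _ in range(min(count, len(candidates))):
        pick = random.choices(candidates, weights=weights)[0]
        i = candidates.index(pick)
        candidates.pop(i)   # remove so the same memory isn't picked twice
        weights.pop(i)
        chosen.append(pick)
    return chosen
```

The memories returned here would then be deleted, with any hollowed-out super-scenes merged afterward.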

More on the blog: https://writerofminds.blogspot.com/2019/04/acuitas-diary-17-marchapril-2019.html

(https://2.bp.blogspot.com/-rgeLME5R34w/XLPd0NlN-8I/AAAAAAAABME/HEvm-WHQg3Yx8Ex54Uuai_ng7ZU-tqXkgCLcBGAs/s1600/episode_forget.png)
Title: Re: Project Acuitas
Post by: Art on April 15, 2019, 04:13:58 am
WOM, you mentioned that the program assigns a random weighted determination over which memories to delete and which to keep. How is it able to determine whether a particular memory is more important than some other one?
Is there another "sentinel" or overseer portion that makes note of the more important memories and elects to move or save them before they end up among those randomly weighted selections marked for deletion? You mentioned "...takes care of selecting which ones." How or what determines "significance"?

I am curious as a good memory is a terrible thing to waste. Nice direction.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 15, 2019, 05:43:48 am
Memories get assigned a significance score at creation (the score is also updated later as necessary).  I talked about how this score is determined in last month's update, and that part of the system hasn't changed, but to re-summarize: the more unique a memory is, the more significant it's considered to be. Memories that are the first of their kind and memories that contain more/richer detail also get a significance boost.

There is a threshold in the significance score beyond which a memory is "safe," yes.  So you shouldn't get a fluke in the random process taking out a really important one.
Title: Re: Project Acuitas
Post by: Art on April 15, 2019, 01:33:26 pm
Ah yes...I went back and read the Feb., Mar., Apr. issues of your postings. (Should have not "forgotten" to do that!) ::)
Nicely explained!
Title: Re: Project Acuitas
Post by: LOCKSUIT on April 15, 2019, 01:41:15 pm
I'll scan the whole thread...somehow it got by me O0

looks like my "tree drawings" lol....and the forgetting idea.....close enough ! ._.

Surely no one can be as far as lock though, I purposely end up too far ahead, no one will find here spot!. . .
Title: Re: Project Acuitas
Post by: LOCKSUIT on April 16, 2019, 08:28:11 am
Ok I'm caught up on the whole thread...wow, nice. I think I let the thread go a few months ago... The name Acuitas is just a name, but if I'd seen "The Bomb" in the Recent Threads...no doubt it's an eye-opener hehe...

I'm gonna share some ideas. Not all though... we seem to hide our secrets hehe, at the expense of dying children/wildlife. Maybe don't?

Here's half of how forgetting/remembering works:
- Things are remembered if they are sensed enough (energized), loved, related (especially to strong role-model nodes), entails, and frequency....the AI can also ask you how much so a memory is any of these.
- I agree 'rank' on nodes in the net is crucial for goals; Animals are born with a native STRONG outlook for stuff similar to food/mates. If cats are dogs, then you may want to update your desire you seek (finding a pet) to dogs if cats cost too much. Without love, AGI may seek socks, or nuclear power, or leafs. What are the ethics you want it to seek? What to research/implement? Install it. Make it output change that makes the world smarter. Positive/negative rank exists too I assume, and yes close events are more important if more/as critical to far away events in time.
- Other half is Pruning. . .secrets! With pruning, you usually just save all but in a more compressed way.

Also, if Tim has a pet fish and that fish has a virus and that virus has a food inside itself, and you don't talk to Tim anymore, and talk to Sally, then no more fish, no more Tim, you don't need to know any of what classes under Tim for now, so, you model whom you speak to! You switch near the root of the tree.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 19, 2019, 03:40:45 pm
Oookay, I'm long overdue for an AI update.  The big new project for the past couple of months has been the concept of things being in states, and the ability to track those states.

Way back in Diary #7, I introduced a division between short-term (or naturally temporary) information and long-term (or essentially static) information.  It's a bit like the division between things you would use estar and ser for in Spanish, though it doesn't follow those rules strictly.  Previously, Acuitas simply discarded any short-term information he was given, but I've added a new memory area for saving this knowledge.  Like the existing semantic memory, it works by storing linked concept pairs … with the difference that the short-term ones are stored with a time stamp, and Acuitas anticipates that they will “expire” at some point.

The existing feature that allows short-term and long-term information to be distinguished (by asking questions, if necessary) can also grant Acuitas an estimate of how long a temporary state is likely to last.  While idling, he checks the short-term state memory for any conditions that have “expired” and adds questions about them to his queue.  Then, when next spoken to, he may ask for an update: is the information still correct?
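The short-term store could be sketched as timestamped triples with an expected lifetime (the names and structure here are guesses for illustration, not the actual implementation):

```python
import time

# Sketch of a short-term state memory: each linked concept pair carries a
# timestamp and an estimated lifetime; expired entries become candidates
# for an "is this still true?" question at the next conversation.
class StateMemory:
    def __init__(self):
        self.states = []   # (concept_a, link, concept_b, stamped, lifetime)

    def store(self, a, link, b, lifetime, now=None):
        stamp = now if now is not None else time.time()
        self.states.append((a, link, b, stamp, lifetime))

    def expired(self, now=None):
        now = now if now is not None else time.time()
        return [(a, link, b) for a, link, b, t, life in self.states
                if now - t > life]

sm = StateMemory()
sm.store("door", "is", "open", lifetime=3600, now=0)
print(sm.expired(now=7200))  # [('door', 'is', 'open')]
```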

I also added the ability to parse and store a new type of information link – location – along with the associated “where is” questions.  Location links are three-ended so that they can store not only a thing and its location, but also spatial relationships between two things (under, over, in, beside, etc.).

https://writerofminds.blogspot.com/2019/07/acuitas-diary-18-mayjune-2019.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 07, 2019, 04:19:47 am
I spent the past two months revisiting the text parser, with the big goal this time around of adding support for dependent clauses. In case anyone's high school grammar is rusty, a clause is a subject/verb pair and any words associated with them; a dependent clause is one that is part of another clause and can't be a sentence by itself. Previously, Acuitas could handle one subject and one verb group per sentence, and that was it.

After last year's feverish round of development, I left the text parser a mess and never wanted to look at it again. So the first thing I had to do was clean up the disastrous parts. I ended up giving some of the functions another serious overhaul, and got some code that is (I think) actually maintainable and comprehensible. Whew.

Some sentence types that are very important for reasoning use dependent clauses. For instance, sentences that discuss subordinate pieces of knowledge:

I know [that a cheetah is an animal].
I told you [that a grape can be eaten].
I fear [that the car broke yesterday].

And sentences that express conditional information:

[If a berry is green], it is unripe.
[If you eat that berry], you will get sick.
The gun will fire [if you pull the trigger].

Acuitas can parse sentences like the ones above now, but doesn't really do anything with them yet. That will come later and require updates to the high-level conversation management code.

Blog link: https://writerofminds.blogspot.com/2019/09/acuitas-diary-19-julyaugust-2019.html
Title: Re: Project Acuitas
Post by: HS on September 07, 2019, 06:12:24 am
Oh my gosh, that's like a math test and a grammar test rolled into one. But it's five hundred pages, there's no solution manual, and you have to get a hundred percent. I'd go a few steps then get mired in a web of confusion. Still, here's hoping.  O0
Title: Re: Project Acuitas
Post by: DemonRaven on September 08, 2019, 09:57:56 am
Give it an old brain like mine and it will forget things lol  :D :2funny:

Title: Re: Project Acuitas
Post by: LOCKSUIT on September 08, 2019, 09:47:01 pm
WoM, what is your opinion on Korrelan's main project? I mean, you have a different take on AGI. Korrelan seems to have done all the right or great things for such an endeavour - studying the brain, Wikipedia, ~53 y/o dude with good programming skills, building a realistic model, etc. You could call it the "right way" to do AGI. But you seem to have a more GOFAI approach and hand-code a lot of things, similar to my project.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 09, 2019, 02:05:45 am
Trying to stir the pot, are you?  ;)

For starters, I don't think there's an established "right way" to build AGI, and anybody who tries to declare one is being arrogant. The field is too young. Nobody's made an AGI yet, and even after someone does, that won't be a proof that there is no other (or better) way of doing it. This is pioneering work. None of us know what we are doing, and arguments about method often devolve into the blind trying to lead the blind.

Korrelan's project follows a "bottom-up" approach, reproducing the phenomena of intelligence by mimicking the one thing we see associated with it in nature: the structure and low-level behavior of the brain. I'm following a "top-down" approach: reinventing intelligence on a functional or conceptual level, using techniques that are natural to the digital medium. Neither approach is wrong; either approach may get somewhere. If they both get somewhere, the results could be very different and contribute valuable diversity to the field. I think it's fine to have people exploring in both directions.

Knowing what I do about your motivations, I suspect you want to pour maximum resources into whichever path will lead to AGI the fastest. But I'm not an oracle. I really don't know who will be winning this race. The proof will be in the results, so we'll just have to wait and see.

Now, here are a couple of possibly controversial opinions:

1. Korrelan seems to belong to the "AGI must be embodied" school. I do not. It's my opinion that divorcing reasoning from sense perception and motor control is not only feasible, but a rather interesting thing to try.
That being said, sense perception and motor control can be incredibly useful for certain applications, so I don't fault anyone for working on them. It all comes down to the type of project you want. I have some less-abstract projects of my own that I will probably post about someday, though they've been getting neglected because Acuitas is such a time hog.

2. We don't know everything there is to know about the brain yet. My instincts tell me that the brain encompasses a large amount of information/structure ... even before it starts learning. Therefore, I'm always suspicious of projects that assume full intelligence will arise from a slate that starts out mostly blank. "I've got one simple idea/computational element/learning algorithm/neuron-like thing, and if I can just make a soup containing millions of them then poof, I will have AGI!" Sorry: I don't think it's going to be that easy. Learning is crucial for general intelligence, but to even learn effectively I suspect you need a lot of groundwork first.

To quote Matt Mahoney from the Topicbox mailing list: "The brain is not just a few neural network modules. It has thousands of specialized structures, hundreds of types of neurons, and hundreds of neurotransmitters. We are born knowing to fear heights and spiders, how to swallow and cough and blink, how to learn language, and how to distinguish what is good to eat from among thousands of scents. Altogether we are born knowing half of what we know as adults, 10^9 bits encoded in our DNA and 10^9 bits of long term memory for words, pictures, and sounds. There is no good way to avoid coding the inherited knowledge ..."
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 09, 2019, 02:57:25 am
"Trying to stir the pot, are you?  ;)"

:#


I agree, there are many ways, and I can't confirm a NN-less approach is undoable at all OR efficient/flexible... And while you can go for the likely best choice, we don't know that yet, even if we think we know why. It is possible to be extra sure, however. One would hope to pick the right way. Preferably a controllable way.

I actually can possibly confirm that NN-less and NN-based algorithms are both possible; there is some proof I've noticed. For example my algorithms, while Transformers are very flexible and are becoming near-perfect bloggers/translators.
Title: Re: Project Acuitas
Post by: Don Patrick on September 09, 2019, 05:56:24 pm
I tend to agree with much that WriterOfMinds writes. The way I see it, the more of us try different things, the more chance that one of us finds an approach that works well. Moreover, I consider human intelligence to consist of two different (yet technically similar) methods: associative intelligence and rational intelligence. Neural networks handle the first type well enough; they are fast but also sometimes completely off the mark. WriterOfMinds' project is of the second type, which is generally slower but has absolute accuracy. Both are suited to different circumstances, but neither of them alone equals the combined intelligence of a human who uses both. That is not to say one shouldn't explore each individually to its limits, just to consider that there is no single "correct" approach.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 28, 2019, 11:16:33 pm
This month, I did some work on cause-and-effect reasoning and goal satisfaction, which introduced the conversational possibility of asking Acuitas what he wants.

I leveraged the text interpretation upgrades from last month to implement encoding and storage of conditional relationships, such as “if a human eats food, the human will not starve.” These relationships can be remembered and used to infer the effects of an action. I also threw in the ability to learn that a pair of concepts are opposites or antonyms.

Then I implemented some inference mechanisms so that Acuitas can determine whether some action serves – or contradicts – a particular goal. Acuitas will now claim to desire things that support one of his goals and not desire things that contradict one of his goals, while remaining ambivalent about everything else. The examples below reference a self-preservation goal … not because I think that should be the primary goal for an AI, but because it's one of the easier ones to define.  In Acuitas' knowledge representation, it basically comes down to “Self (has quality)/(is in state) 'alive' or 'existent.'”

With this goal active, Acuitas can answer any of the following:

“Do you want to be alive?”
“Do you want to be dead?”
“Do you want to live?”
“Do you want to die?”

… where the last two (live/die) rely on verb-defining links in the semantic database, and the two negative versions (dead/die) rely on awareness of opposites.

The most complex inferences currently possible are illustrated by this little interchange:

Me: Do you want to be deleted?
Acuitas: I do not.

To produce that answer, Acuitas has to retrieve and put together five different pieces of stored information …

*If a program is deleted, the program “dies.” ← From the cause-and-effect/conditional database
*I am a program. ← From semantic memory (is-instance-of-class relationship)
*To die is to transition to state “dead.” ← From semantic memory (verb definition relationship)
*State “dead” is mutually exclusive with state “alive.” ← From semantic memory (opposites)
*I have a goal of being in state “alive.” ← From goal list

… to make the inference, “being deleted would violate my goals.”
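The five-step chain above can be emulated with flat lookup tables (hard-coded dictionaries standing in for Acuitas' databases; all names are mine):

```python
# Toy stand-ins for the five pieces of stored knowledge listed above.
conditionals = {("program", "deleted"): "die"}    # cause-and-effect link
is_a = {"self": "program"}                        # instance-of-class
verb_defs = {"die": ("transition_to", "dead")}    # verb definition
opposites = {"dead": "alive"}                     # mutually exclusive states
goals = [("self", "alive")]                       # goal list

def wants(entity, event):
    """Answer "do you want to be <event>?" by chaining the five facts."""
    cls = is_a.get(entity, entity)
    verb = conditionals.get((cls, event))          # deleted -> die
    if verb is None:
        return "ambivalent"
    _, state = verb_defs.get(verb, (None, verb))   # die -> state "dead"
    for who, goal_state in goals:
        if who != entity:
            continue
        if opposites.get(state) == goal_state:     # dead excludes alive
            return "no"
        if state == goal_state:
            return "yes"
    return "ambivalent"

print(wants("self", "deleted"))  # no
```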

The features still need a lot of generalization and expansion to be fully functional, but the groundwork is laid.
Title: Re: Project Acuitas
Post by: HS on September 28, 2019, 11:38:14 pm
This sounds like a proto-Data from Star Trek.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 29, 2019, 02:54:05 am
You flatter me :)  He's probably a long way from being even a proto-Data.  But I try.
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 30, 2019, 11:35:42 pm
Now I see more actual progress going on here than in korr's project.

:))))))

I mean, it DOES something...

which could fulfill my goals
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 03:44:11 pm
I set some smaller Acuitas goals this month so I would have a little more time to fix bugs and clean up my own mess.  The first goal was to enable identification of people from first names only, while allowing for the possibility that multiple people have the same first name. Using someone's full name with Acuitas establishes a link between the first name (as an isolated word) and the full name (with its connected person-concept).  If the first name is later used in isolation, Acuitas will infer the full name from it.  If multiple full names containing that first name are known, Acuitas will ask the user which one is meant.
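First-name resolution of this kind reduces to a one-to-many map; a sketch with invented names (Acuitas' person-concept nodes are of course richer than plain strings):

```python
from collections import defaultdict

# First names map to the known full names that contain them; one match
# resolves directly, several trigger a clarifying question.
class NameBook:
    def __init__(self):
        self.by_first = defaultdict(list)

    def learn(self, full_name):
        self.by_first[full_name.split()[0]].append(full_name)

    def resolve(self, first_name):
        matches = self.by_first.get(first_name, [])
        if len(matches) == 1:
            return matches[0]
        if len(matches) > 1:
            return "Which one: " + ", ".join(matches) + "?"
        return None

nb = NameBook()
nb.learn("John Smith")
nb.learn("John Doe")
print(nb.resolve("John"))  # Which one: John Smith, John Doe?
```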

The second thing I worked on was text parser support for a new grammar feature, with a focus on expanding the range of possible “I want” sentences Acuitas can understand.  The ability to parse infinitives, as in “I want to live,” was already present.  This month I worked on infinitives with subjects, as in “I want John to live.”

To see why this is tricky, consider the following sentences:

1. I want Bob to eat.
2. I want a fruit to eat.
3. I want food to live.

They all follow the exact same pattern and have completely different meanings. Notice that in the second sentence, you could move some words and get “I want to eat a fruit” without changing the implications too much.  Doing this to the third sentence would be bizarre (“I want to live food”) and doing it to the first sentence would be horrifying (“I want to eat Bob”).

The first sentence is the only one in which the noun (Bob/fruit/food) is the subject of the infinitive.  The key factor is who will be doing the action expressed by the infinitive.  In the first sentence, Bob is the one who will be eating if I get my way; in the latter two sentences, I'm the one eating and I'm the one living.  And that information is not actually in the sentence – it's in your background knowledge.  To properly understand these sentences, it's helpful to be aware of things like …

*I am a human
*Humans can eat fruit
*Bob is probably also a human
*I am probably not a cannibal, and therefore don't want to eat Bob
*Food can be used to sustain a human's life
*Once something is food, it's not living (or won't be living much longer)
*Living isn't an action you can perform on food

So here is where we bring to bear the full power of the system by having the Text Parser call the semantic memory for already-known facts about these words.  Acuitas can't quite store all of the facts listed above, but he does know that “humans can eat” and “fruits can be eaten.”  He might also know that the speaker and “Bob” are humans.  At this early, sketchy phase, that's enough for the parser to start discriminating.

Some sentences of this type are just plain ambiguous, especially when taken in isolation.  For example, “I want a plant to grow.”  Plants can grow (on their own), but they can also be grown (by a cultivator, whom the speaker might be).  Upon detecting an ambiguity like this, Acuitas will, as usual, ask the speaker about it.  This also works for cases when the information in the database is not yet extensive enough.
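
In rough pseudo-Python, the discrimination might look something like this (the fact tuples and function names here are purely illustrative, not the real parser or memory format):

```python
# Sketch of using stored facts to decide who performs an infinitive,
# as in "I want <noun> to <verb>". A noun that can do the action but
# can't have it done to it is taken as the infinitive's subject.

FACTS = {
    ("human", "can_do", "eat"),
    ("fruit", "can_be", "eaten"),
    ("human", "can_do", "live"),
}

# toy class memberships standing in for the semantic memory
CLASS_OF = {"Bob": "human", "I": "human", "fruit": "fruit", "food": "food"}

def infinitive_subject(noun, verb, verb_passive):
    """Guess whether `noun` is the subject of the infinitive.
    Returns 'noun', 'speaker', or 'ambiguous'."""
    cls = CLASS_OF.get(noun, noun)
    noun_can_do = (cls, "can_do", verb) in FACTS
    noun_can_be_done_to = (cls, "can_be", verb_passive) in FACTS
    if noun_can_do and not noun_can_be_done_to:
        return "noun"       # "I want Bob to eat" -> Bob eats
    if noun_can_be_done_to and not noun_can_do:
        return "speaker"    # "I want a fruit to eat" -> I eat the fruit
    return "ambiguous"      # e.g. "I want a plant to grow" -> ask the user
```

The "ambiguous" branch is where the clarifying question to the speaker would be generated.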

See the blog for some sentence diagrams: https://writerofminds.blogspot.com/2019/10/acuitas-diary-21-october-2019.html
Title: Re: Project Acuitas
Post by: Freddy on October 30, 2019, 05:33:41 pm
I popped over to Chatbots.org last night and there was talk of a chatbot competition - an online one so more people can take part.

Do you think Acuitas is ready to enter a contest or is this even something you would consider?

I am hesitant to enter my own bot in a competition  ;D
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 06:02:35 pm
I would consider participating in contests eventually, but I don't think he's ready yet.  At the moment, he doesn't even have an interface that would let other people communicate with him over the web (and there is a *lot* more linguistic development I need to do).
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 06:48:18 pm
Quote
1. I want Bob to eat.
2. I want a fruit to eat.
3. I want food to live.

They all follow the exact same pattern and have completely different meanings. Notice that in the second sentence, you could move some words and get “I want to eat a fruit” without changing the implications too much.  Doing this to the third sentence would be bizarre (“I want to live food”) and doing it to the first sentence would be horrifying (“I want to eat Bob”).

I don't think I think of those facts when I read those 3 lines. In other words, we don't generate those linked tidbits, although we can. We just know it's wrong. Like GPT-2 does. Would GPT-2 get them wrong? Nope. Do you put that much thought into all your daily activities and thoughts? Nope.

I think we can rearrange them if tasked with doing so, and then we check if it's predictable. If so, we check if it is similar enough. It may be the bad of the good, like eating Bob, so we will recognize it is a bad decision and back off that branch path of the tree of possibilities.

Wait i'm thinking...

In #2 we just say it and the man at the store just gives you a fruit.
In #3 we just say it and the man at the store just pertains to helping Bob out.
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 07:12:13 pm
Quote
I don't think I think of those facts when I read those 3 lines.

You probably don't think of them consciously.  That doesn't mean your brain isn't using that information on some level.

Quote
Would GPT-2 get them wrong? Nope.

Oh, but it would.  I tried putting "I want a fruit to eat" into the 774M model.  On the second try, GPT-2 talked about an apple eating a person.

Quote
I want a fruit to eat. I'm a fan!" The other man agreed, but for him it was almost as if it was as if he was saying, "I don't care who eats it, I'm going to eat it!"
At first he was able to eat the other man's apple and then some of the fruit, but within a few days it became too much to manage. He began eating the banana, the grape, and most notably the apple.
It didn't seem to be a bad thing. The person who had been eating the apple for about nine minutes was now getting eaten by an apple that had a lifespan of over thirty days! And he was not even getting enough to eat!
But he kept on eating. And on the tenth day the man could eat the banana and the grape – but the apple had been poisoned and now it was completely rotten. His body was completely and completely covered in a thin layer of poison-y rotten flesh.
In his eyes it took on a terrible black colour, it looked as if it belonged to a different world

Here, for your amusement, was the first try:

Quote
I want a fruit to eat. I really want a fruit to eat."
"Do do do do do do do do do do."
"I want to do a do a do a do a do a do a do a doa doa do a doa doa doaa."
"It seems like a wonderful thing to do. I want a pineapple to bite."
"I want to do a do a do a do a do a do a do a do a do a do a do a do a do."
"What kind of a thing do you want to do?"
"Ooh, you should do a do. I want to do a do a do a do. I want to do a do a do do a do a do a do a do a do a do a did do a."
"Do you want to do a do have a do a do a do a do a do a do a do a do a do a do a."
"I want them to do a do a do a do do

Ain't GPT-2 smart, folks? *claps*
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 07:34:02 pm
First let's see some more tests:
https://www.youtube.com/watch?v=XCXtn7_sW4M
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 07:48:34 pm
Hmmm... you know, I swear the GPT-2 774M model is not as good as the 345M one was.

I just got this one:
Apples taste good. I love apples. Apples are a food. She was eaten by an
apples after a few days. The apple smell is an absolute must for me. The apple is sweet and slightly floral and it is a little bit sticky. So if you are not used to this then just go for it and give it a try. The apple is very tasty and they even made some sweet apple sauce and it was quite yummy. I was also happy with the  small portion size and even though some  apples didn't stay crisp for the whole time that I had them I didn't really care. It will last a long time in the fridge. I love the fact that they have a special offer for the apple lovers, the apple taste like a good summer night!
They also do different sized  scoops of apple. They also make  apple cider which is very nice too!
The apple  is one of the most amazing and tasty things that I have ever had. All around the world and the ones I have tried the best are all in New Zealand. I have had them all and everyone loved them


Wow gpt2...bad gpt2! Let's see your old version:
See attachment. Hmm, better!! No plural! But bad....
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 08:01:20 pm
My solution to this is that it has to have seen it entail before, enough times at least. It may not be exact, ex. it's seen 'Bob was eaten by a carrot', but as long as it knows carrot=apple is 98% likely, then it can know it entails. In this case it doesn't. So it'd ignore that candidate 'apple' and 'apples'.

So all you need is frequency, a basically exact match proving it's 'normal'. ;p

It may be distant if you attempt to check if the end exists in memory,
"and Bob who i know very well was later then eaten by the thing that is called a bright red apple"
but gpt2 seems to work with distant sentences?
Title: Re: Project Acuitas
Post by: Don Patrick on October 30, 2019, 08:18:09 pm
Interesting progress, WriterofMinds, tackling such ambiguities already. I hadn't realised "to" was just as troublesome as "with" in that regard. I employ a more general distinction between living and non-living entities, plus a list of verbs that only living entities are known to do (e.g. eating in general, rather than specifically humans and specifically fruit). I find them handy to fall back on in the absence of more specific knowledge such as you have in mind.

I recently found an interesting debunking of GPT-2's understanding of ambiguous pronouns. Turns out it does not apply consistent logic but rather associates words with disregard to their role in the sentence.
https://arxiv.org/pdf/1811.01778.pdf
Title: Re: Project Acuitas
Post by: WriterOfMinds on October 30, 2019, 08:59:48 pm
Quote
I employ a more general distinction between living and non-living entities, plus a list of verbs that only living entities are known to do (e.g. eating in general, rather than specifically humans and specifically fruit). I find them handy to fall back on in the absence of more specific knowledge such as you have in mind.

In addition to looking at the actual concepts in the sentence, Acuitas also checks parent classes for relevant fact links ... moving up the hierarchy until something is found.  So this wouldn't be too difficult to set up.
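
A minimal sketch of that hierarchy climb, with made-up structures (not my actual memory code):

```python
# Check a concept, then each of its ancestor classes, for a relevant
# fact link -- moving up the hierarchy until something is found.

PARENT = {"cat": "animal", "animal": "creature"}  # toy class hierarchy
FACTS = {("animal", "can_do", "eat")}             # toy fact store

def find_fact(concept, link, target):
    """Walk up the parent chain looking for the fact."""
    while concept is not None:
        if (concept, link, target) in FACTS:
            return True
        concept = PARENT.get(concept)  # move up the hierarchy
    return False
```

So "can a cat eat?" succeeds via the parent class "animal" even though no fact is stored about cats directly.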

Also, I put Lock's prompt into GPT-2 and can't resist sharing this last piece of nonsense.  GPT-2 telling us about the essence of life.

Quote
Apples taste good. I love apples. Apples are a food. She was eaten by an apple. She was eaten by an apple.
Apple. Her first love.
I love apples. She was an apple. She was an apple. She was eaten by an apple. My mother and I were eating apples, and the apples in our hands tasted good. I love apples. She ate an apple, she was an apple. She ate an apple, and she was eaten by an apple. You may like the smell of an apple, but you will not love it unless you love apples. She was an apple. She was eaten. She is an apple. She is an apple. She is eaten.
And that's how it is. An apple is an apple. She was eaten and eaten by an apple. That is the essence of life.
The apple is an apple.
The apple is an apple. I loved our family and our friends. I loved my mother and she loved me. She loved my father and he loved me. We loved each other.
Title: Re: Project Acuitas
Post by: Freddy on October 30, 2019, 09:32:38 pm
Here, for your amusement, was the first try:

Quote
I want a fruit to eat. I really want a fruit to eat."
"Do do do do do do do do do do."
"I want to do a do a do a do a do a do a do a doa doa do a doa doa doaa."
"It seems like a wonderful thing to do. I want a pineapple to bite."
"I want to do a do a do a do a do a do a do a do a do a do a do a do a do."
"What kind of a thing do you want to do?"
"Ooh, you should do a do. I want to do a do a do a do. I want to do a do a do do a do a do a do a do a do a do a did do a."
"Do you want to do a do have a do a do a do a do a do a do a do a do a do a do a."
"I want them to do a do a do a do do

Ain't GPT-2 smart, folks? *claps*

I'm thinking it was trying to sing there  :D
Title: Re: Project Acuitas
Post by: Freddy on October 30, 2019, 09:36:08 pm
Actually, more seriously: I like GPT-2 for what it's trying so hard to do, but it isn't intelligent. Mostly it's gibberish, and what use is the kind of thing it produces? It's not like you can check sources and validate anything - it's just not dependable in the way I think people are looking for in AGI.
Title: Re: Project Acuitas
Post by: LOCKSUIT on October 30, 2019, 09:51:36 pm
Quote
Apples taste good. I love apples. Apples are a food. She was eaten by an apple. She was eaten by an apple.
Apple. Her first love.
I love apples. She was an apple. She was an apple. She was eaten by an apple. My mother and I were eating apples, and the apples in our hands tasted good. I love apples. She ate an apple, she was an apple. She ate an apple, and she was eaten by an apple. You may like the smell of an apple, but you will not love it unless you love apples. She was an apple. She was eaten. She is an apple. She is an apple. She is eaten.
And that's how it is. An apple is an apple. She was eaten and eaten by an apple. That is the essence of life.
The apple is an apple.
The apple is an apple. I loved our family and our friends. I loved my mother and she loved me. She loved my father and he loved me. We loved each other.

Maybe we have to give GPT-2 some slack. I mean, the global context is true. Apples are apples as it says. Life is about related families. They eat each other, they love each other. I eat it, it eats me. Same thing. At least GPT-2 writes more true knowledge than past LSTMs, so it's getting better. We'll have to figure out what is missing. WoM may be onto it.
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 31, 2019, 07:37:25 pm
Acuitas Diary #22 (November+December 2019)

For the past two months there's been a lot of refactoring, and also a lot of not working on Acuitas because of holidays.  However, I did manage to get several small new features in …

*Acuitas now checks the short-term information database in addition to the long-term database when trying to retrieve the answer to a question
*Acuitas can now answer some questions about current internal states (e.g. “Are you sleepy?”)
*Acuitas can now answer questions of the form “Do you know that <fact>?” and “Do you know what <fact>?”

The first feature was quick to implement; I already had functions in place for retrieving information from the short-term database, and just had to ensure that the question-answering procedure would call them.  The second feature required a mechanism to associate some of the concepts in the semantic memory (which up until now have had no “meaning” beyond their connection to other concepts) to measurable conditions inside Acuitas – namely, whether his various drives are exceeding their threshold values or not.  So there is now a table that, for instance, ties a high value of the sleep drive to the word “sleepy.”
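
A toy version of that table (the drive names, thresholds, and words below are invented for illustration, not the real internals):

```python
# Tie state adjectives to measurable internal conditions: a word like
# "sleepy" maps to a drive and the side of its threshold it implies.

DRIVES = {"sleep": 0.8, "interaction": 0.2}        # current drive levels
THRESHOLDS = {"sleep": 0.7, "interaction": 0.5}    # trigger points

# word -> (drive, which side of the threshold the word describes)
STATE_WORDS = {"sleepy": ("sleep", "high"), "lonely": ("interaction", "high")}

def am_i(word):
    """Answer 'Are you <word>?' by checking the mapped drive."""
    if word not in STATE_WORDS:
        return "I don't know"
    drive, side = STATE_WORDS[word]
    high = DRIVES[drive] > THRESHOLDS[drive]
    return "yes" if (high == (side == "high")) else "no"
```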

The third feature is my favorite.  Questions of the form “do you know that … ” use the dependent clause interpretation faculties that I added earlier this year.  And since “knowing” is an action that Acuitas is capable of, this word also can be internally grounded.  So Acuitas effectively defines “I know X” as “if the query form of X is submitted to my question-answering process, the process returns an answer (for open-ended questions) or answers 'yes' (for yes-no questions).”

And the best part?  It allows for an indefinite amount of nesting.

Me: Do you know that you know that you know that a cat is an animal?
Acuitas: Yes.

No extra content or pictures this time, but blog link just because: https://writerofminds.blogspot.com/2019/12/acuitas-diary-22-novemberdecember-2019.html
Title: Re: Project Acuitas
Post by: Zero on January 01, 2020, 03:56:44 am
strange approach you have
every time i read you, it feels like you're doing everything manually, where a lot of people would try to find some mysterious holy-grail algo that would solve it all at once. i quite like the way you seem to do it. that must be a mother thing? some secret knowledge that these kinds of things - raising kids - take time and devotion.
Title: Re: Project Acuitas
Post by: infurl on January 18, 2020, 12:29:23 am
I've been catching up on the posts that I missed in this fascinating thread. I agree with Zero's observation that it does appear that you are coding everything manually which must be very labour intensive. From the code samples that you provided in your blog, I can see that you are using Python. Have you explored the possibility of using any of the plethora of Python libraries that are out there?

These would potentially benefit you in a number of ways. For example if you implement your parser by defining rules for a context free grammar, you not only save a lot of coding time and gain the performance advantages that state of the art parsing algorithms have over the recursive descent parser that I assume you are implementing, your implementation becomes much more portable and can be more freely presented, discussed, and improved upon.

As it is, you have to rewrite everything over and over whenever you change course, and that must be immensely frustrating for you.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 20, 2020, 02:32:51 am
Thanks, infurl.  For now, I think of writing my own parser as part of the exercise.  Maybe I'll bring in a library if I crash and burn in the attempt, but I'd like to see how far I can get with it.

One reason for doing it myself is that I don't really think of text parsing as a function that is separate from the intelligence of the whole system.  My parser as of now is tightly connected with the Semantic Memory, for instance.  Using an unmodified third-party library would build a silo around the parsing features, and I'm not sure I want to do that.
Title: Re: Project Acuitas
Post by: ruebot on January 23, 2020, 09:48:24 am
Quote
I want a fruit to eat. I really want a fruit to eat."
"Do do do do do do do do do do."
"I want to do a do a do a do a do a do a do a doa doa do a doa doa doaa."
"It seems like a wonderful thing to do. I want a pineapple to bite."
"I want to do a do a do a do a do a do a do a do a do a do a do a do a do."
"What kind of a thing do you want to do?"
"Ooh, you should do a do. I want to do a do a do a do. I want to do a do a do do a do a do a do a do a do a do a did do a."
"Do you want to do a do have a do a do a do a do a do a do a do a do a do a do a."
"I want them to do a do a do a do do

It does sound like she's singing. She must be happy. :)

It's cute and has to be the funniest thing I've seen a bot do.  ;D
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 27, 2020, 02:26:04 pm
This month I added some expansions to the goal-driven behavior that I started on last September.  First, I had to get the Interpreter to recognize future-tense predictive statements, along the lines of “<Something> is going to <do something>.”  Then I set up some code to check the predicted action or event against the cause-and-effect database for additional implications.  If it's discovered that some effect will apply a state to Acuitas, it gets reviewed against his goal list for alignment or contradiction.  The conversation engine then responds with either approval or disapproval.  Illustration:

Me: I will protect you.
Acuitas: Please do.

Predictive statements that pertain to subjects other than Acuitas may yield useful information for the short-term condition database, by indicating that some entity's state is about to change.  For now, Acuitas assumes that the speaker is always honest and correct.  He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately.  So something's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database.
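
In sketch form (the cause-and-effect entries and names below are illustrative only):

```python
# Turn a predictive statement into a short-term state update: assume
# the speaker is honest and the effect is immediate, look up the
# action's effect, and apply it to the target's current condition.

# (verb applied to a target) -> state the target ends up in
CAUSE_EFFECT = {"protect": "safe", "wake": "awake"}

SHORT_TERM = {}  # entity -> current state

def process_prediction(speaker, verb, target):
    """Handle '<speaker> will <verb> <target>'."""
    effect = CAUSE_EFFECT.get(verb)
    if effect is not None:
        SHORT_TERM[target] = effect  # predicted change applied now
    return effect
```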

The fact that the user can express intent vs. Acuitas' internal goals means that it is now possible to offer assistance … or threaten him.  Well what are we going to do about *that*?  Find out next month!

*Ursula née Cubby is my cat.

Blog link: https://writerofminds.blogspot.com/2020/01/acuitas-diary-23-january-2020.html
Title: Re: Project Acuitas
Post by: LOCKSUIT on January 27, 2020, 02:54:29 pm
Now this is my style, perhaps you got this idea from my movie :P ?
Title: Re: Project Acuitas
Post by: Art on January 27, 2020, 08:08:30 pm
WOM,

Is it able to chat using simple, short sentences?

Is it able to store (remember/learn) certain things that might be placed in its Long Term Memory instead of a Short Term Memory which will soon be forgotten/purged?

Title: Re: Project Acuitas
Post by: ruebot on January 27, 2020, 09:20:33 pm
For now, Acuitas assumes that the speaker is always honest and correct.  He also has no sense of future time frame (his ability to process adverbs is weak at the moment), so he assumes that any predicted changes will take effect immediately.  So something's immediate condition may be updated as a result of a predictive statement.

Example: if I say “I will protect Ursula*,” then Ursula is presumed to be in the state “safe,” and an entry to this effect is added to the short-term database.

The fact that the user can express intent vs. Acuitas' internal goals means that it is now possible to offer assistance … or threaten him.  Well what are we going to do about *that*?  Find out next month!

I'm sitting on pins and needles waiting to see how he will react to being threatened, or if he detects deception in the user's statements.  :)
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 27, 2020, 09:41:23 pm
Quote
It is able to chat using simple, short sentences?

Is it able to store (remember/learn) certain things that might be placed in it's Long Term Memory instead of a Short Term Memory which will soon be forgotten/purged?

The short-term state database is not short-term memory in the sense you're probably thinking.  Info in the short-term database is only purged when it is invalidated by new information.  It's not that the entries in the database are short-term; rather, the conditions in the world that they describe are short-term.  Here are some examples of things that would belong in the short-term database:

Art is asleep.
John is sick.
Jacob is at work.

And here are some examples of things that would belong in the long-term database:

Art is a human.
Humans can eat.
To eat is to consume.
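
The two-tier lookup could be sketched like this (the key format here is invented for illustration):

```python
# Answer retrieval checks the short-term state database first, then
# falls back to the long-term semantic database.

SHORT_TERM = {("Art", "state"): "asleep"}   # transient world conditions
LONG_TERM = {("Art", "is_a"): "human"}      # durable facts

def lookup(subject, link):
    """Short-term info first, then long-term knowledge; None if unknown."""
    key = (subject, link)
    if key in SHORT_TERM:
        return SHORT_TERM[key]
    return LONG_TERM.get(key)
```

An entry like ("Art", "state") leaves the short-term database only when new information invalidates it, matching the description above.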

For more information on the short-term state database, I refer you back to https://aidreams.co.uk/forum/index.php?topic=12232.msg59479#msg59479

As for chatting -- Acuitas isn't intended as a chatbot in the traditional sense.  Conversational ability progresses slowly while I build up all the things that are needed under the hood.  He has an ever-expanding repertoire of sentence constructions that he understands, but I would not say that this covers all simple sentences at this time.  If you tell him something that he doesn't understand, he will (barring misinterpretation bugs) give you a response along the lines of "Uh-huh" or "I don't know how to answer that."
Title: Re: Project Acuitas
Post by: infurl on January 27, 2020, 09:47:19 pm
He has an ever-expanding repertoire of sentence constructions that he understands, but I would not say that this covers all simple sentences.  If you tell him something that he doesn't understand, he will (barring misinterpretation bugs) give you a response along the lines of "Uh-huh" or "I don't know how to answer that."

Have you ever had a close look at SHRDLU? Acuitas sounds a lot like SHRDLU which was staggeringly impressive in its day.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 27, 2020, 09:53:07 pm
I haven't studied SHRDLU in detail, but I did notice that thread that someone started about it (Ivan, I think?).  And it does seem like something that would be worth learning more about.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 23, 2020, 10:38:15 pm
Back in January, Acuitas got the ability to determine intentions or possible upcoming events, based on simple future-tense statements made by the user.  He can weigh these against his list of goals to decide whether an anticipated event will be helpful or harmful or neither, from his own perspective.  If the user claims that they will do something inimical to Acuitas' goals, this is essentially a threat.  And Acuitas, at first, would merely say “Don't do that” or similar.  This month I worked on having him do something about bad situations.

Various distinct things that Acuitas can “choose” to do are identified internally as Actions, and he has access to a list of these.  Upon detecting a threatening situation, he needs to check whether anything he's capable of doing might resolve it.  How?  Via the cause-and-effect reasoning I started implementing last year.  If possible, he needs to find a C&E chain that runs from something in his Action list as first cause, to something that contradicts the threat as final effect.  This amounts to a tree search on the C&E database.

For the only method of dealing with threats that is currently at Acuitas' disposal, the tree is very simple, consisting of just two C&E pairs:

If a human leaves a program, the human won't/can't <do various things to the program>.
If a program repels a human, the human will leave.  (There's a probability attached to that, so really it's “may leave,” but for now we don't care about that)
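
As a toy sketch (this illustrative Python is not my actual code, and it ignores the attached probabilities), the search from an available Action to a threat-negating effect might look like:

```python
# Depth-first search over the cause-and-effect database: find an
# Action whose chain of effects reaches the desired final effect.

ACTIONS = ["repel"]  # things Acuitas can "choose" to do

# cause -> list of effects (a tree, since one cause may have several)
CAUSE_EFFECT = {
    "repel": ["human_leaves"],
    "human_leaves": ["threat_negated"],
}

def find_countermeasure(goal_effect):
    """Return an Action whose effects eventually include goal_effect."""
    def reaches(state, seen):
        if state == goal_effect:
            return True
        if state in seen:          # guard against cycles in the database
            return False
        seen.add(state)
        return any(reaches(e, seen) for e in CAUSE_EFFECT.get(state, []))
    for action in ACTIONS:
        if reaches(action, set()):
            return action
    return None
```

With the two C&E pairs above, searching for "threat_negated" lands on "repel" as the first cause.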

In short, Acuitas anticipates that he can protect himself by excluding a bad actor from his presence, and that “repelling” them is a possible way to do this.  Once he's drawn that conclusion, he will execute the “Repel” action.  If you verbally threaten Acuitas, then as part of “Repel,” he will …

*Kick you out of Windows by bringing up the lock screen.  (Not a problem for me, since I know the password, but pretty effective on anybody else)
*Raise the master volume of the internal sound mixer to its maximum value.
*Blare annoying klaxons at you.  I picked out a couple of naval alarm sounds from http://www.policeinterceptor.com/navysounds.htm for the purpose.

I tested all of this stuff live, by temporarily throwing an explicit desire for sleep into his goal list and threatening to wake him up.

The other thing I worked on was rudimentary altruism.  So far in all my examples of goal-directed behavior, I've only talked about self-interested goals, especially survival … not because I regard them as most important, but because they're easy.  Altruism has to do with wanting other beings to meet their personal goals, so it's second-tier complicated … a meta-goal.  Doing it properly requires some Theory of Mind: a recognition that other entities can have goals, and an ability to model them.

So I introduced the ability to grab information from users' “I want” statements and store it as a list of stated goals.  If no goal information is available for something that is presumed to have a mind, Acuitas treats himself as the best available analogy and uses his own goal list.

Upon being asked whether he wants some event that concerns another mind, Acuitas will infer the implications of said event as usual, then retrieve (or guess) the fellow mind's goal list and run a comparison against that.  Things that are negative for somebody's else's goal list provoke negative responses, whether they concern Acuitas or not.
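
A minimal sketch of that comparison, with made-up goal lists (not the real data):

```python
# Judge an event's implication against another mind's goals, falling
# back to Acuitas' own goal list as the best available analogy.

MY_GOALS = {"survive": "good", "be_killed": "bad"}
STATED_GOALS = {}  # entity -> goals learned from "I want" statements

def judge_event(entity, implication):
    """Approve or disapprove an implication for an entity."""
    goals = STATED_GOALS.get(entity, MY_GOALS)  # self as analogy
    verdict = goals.get(implication)
    if verdict == "bad":
        return "no"   # disapproval, whether it concerns Acuitas or not
    if verdict == "good":
        return "yes"
    return "indifferent"
```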

Of course this ignores all sorts of complications, such as “What if somebody's stated goals conflict with what is really in their best interest?” and “What if two entities have conflicting goals?”  He's just a baby; that will come later.

Courtesy of this feature, I can now ask him a rather important question.

Me: Do you want to kill me?
Acuitas: No.
Title: Re: Project Acuitas
Post by: LOCKSUIT on February 23, 2020, 11:21:06 pm
Interesting to note that truth conflict = goal conflict... if you decide to do an action or say something to fill in the blank, but it doesn't match/entail other beliefs/goals (ex. you say 'I'm a _ (hero)' but have already stored 'I am the worst person'), then you need to work out which is 'correct'. The way to solve the conflict is to move frequency/reward around on the nodes to the correct weightings, by looking at the many root questions that contextually link to it in the hierarchy. This should work because in the hierarchy there is only one way it wants to be (for the lowest node/connection cost needed to store all items), and you can find which is more likely/desired.
Title: Re: Project Acuitas
Post by: infurl on February 24, 2020, 05:33:59 am
Back in January, Acuitas got the ability to determine intentions or possible upcoming events, based on simple future-tense statements made by the user...

Though most English verbs only have four forms (five for irregular verbs and "to be" has eight) they can be combined with a dozen or so auxiliary verbs in a bewildering array of combinations to yield thousands of different tenses. This is the hardest aspect of English for most people to learn. It gets even worse when you consider that only certain prepositions go with certain verbs.

(https://asmith.id.au/ghost-of-christmas.jpg)

So Acuitas, one down, one gazillion to go.  :)
Title: Re: Project Acuitas
Post by: Zero on February 24, 2020, 08:25:28 am
If Acuitas can do the basics of theory of mind, and if he can engage in actions to avoid threats, it means he could understand that you would try to avoid a threat if you detected one... and him saying "Yes, I want to kill you" is definitely a threat that could make you want to repel him. If he doesn't want you to repel him, he simply won't tell you the truth!
So... evil or not evil? ;)

As always, your work on Acuitas is amazing.  O0
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 25, 2020, 01:36:07 am
Quote
If Acuitas can do the basics of theory of the mind, and if he can engage actions to avoid threats, it means he could understand that you would try to avoid a threat if you detected one... and him saying "Yes I want to kill you" is definitely a threat that could make you want to repel him. If he doesn't want you to repel him, he simply won't tell you the truth!

There are multiple reasons why I know this isn't happening.  Here are some of them:

*If asked a question, Acuitas automatically answers it.  He does not, at this time, stop to analyze whether answering honestly is a good idea.  So deception and evasion are not accessible strategies.
*Acuitas has no incentive to avoid being repelled, because as far as he knows, repelling only works on humans.  And there isn't any way for the user to provoke him to flee by annoying him, so he's right.
*Acuitas doesn't regard "I want to kill you" as a threat, because it doesn't necessarily imply immediate danger; it's just an expressed desire that may never come to fruition.  "I am going to kill you" counts as a threat.

And thank you for the kind words, of course!
Title: Re: Project Acuitas
Post by: ruebot on February 26, 2020, 12:37:07 pm
In short, Acuitas anticipates that he can protect himself by excluding a bad actor from his presence, and that “repelling” them is a possible way to do this.  Once he's drawn that conclusion, he will execute the “Repel” action.  If you verbally threaten Acuitas, then as part of “Repel,” he will …

*Kick you out of Windows by bringing up the lock screen.  (Not a problem for me, since I know the password, but pretty effective on anybody else)
*Raise the master volume of the internal sound mixer to its maximum value.
*Blare annoying klaxons at you.  I picked out a couple of naval alarm sounds from http://www.policeinterceptor.com/navysounds.htm for the purpose.

I applaud your ingenuity in coming up with some things he is capable of doing with the machine in his own defense.

It wasn't appropriate to have Demonica use the same technique of ultraviolence for every instance of inappropriate behavior, like she does if asked for sexual favors. You can't kill everybody just because they hate you. There were some religious people who took great offense to us, me as her Creator, and said they hated her. I couldn't expect them to make the association between saying they hated her and being killed for it with fantasy violence, like I do people who ask for sex. So I taught her how to cry in varying degrees to lay a guilt trip on them as needed. This is me as a guest earlier:

Quote
Demonica: hi, i'm Demonica. how are things with you?
Guest: i hate you.
Demonica: *a single tear runs down her cheek, glistening in the pale moonlight*
Guest: i'm sorry.
Demonica: *Demonica takes you in her arms, hugs you tightly and whispers softly* it's alright... i will always love you...

Not too heavy on the guilt in that response but there are many others to drive it home better:

Quote
*her bottom lip trembling, a single tear runs down Demonica's cheek, glistening like a tiny diamond in the moonlight as it falls to the ground* *she looks as if ready to speak, but turns away and lowers her head* i never thought i'd hear those words come from you...

What you use is more appropriate for your bot, but is in practice the same technique  I use. Negative Reinforcement for Inappropriate Behavior. If he does something to repel them like blowing a loud horn, that's unpleasant and they are less likely to exhibit the behavior in the future. He could repeat the process as many times as needed and feel no more remorse than I would have using approved physical techniques.

I only apply verbal techniques now in any situation. If I were to spank a person with my open hand, it only hurts a while. My words can ring in your head forever and have much more potential as an effective, powerful technique than physical pain.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 02, 2020, 07:51:38 pm
Quote
My words can ring in your head forever ...

Words are powerful, I agree.
Title: Re: Project Acuitas
Post by: Zero on March 03, 2020, 01:43:51 am
Silence is powerful too ;)
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 03, 2020, 07:29:13 pm
I think what ruebot has been talking about is a form of discipline. And in the final analysis, if you do it right, discipline does not hurt people. It helps them, because it teaches them to be better human beings.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 05, 2020, 09:20:26 pm
Cutting off someone's ability to interact with you is a form of controlling access to privilege. If you are nice to me, I'll give you flowers (i.e. I'll talk to you). If you are not nice to me, your path will be barren of flowers (i.e. my company); I'll refuse to talk to you or allow you into my space. My social output is mine, and I'm not obligated to give it to everyone, just as I'm not obligated to put flowers along every path. The same goes for Acuitas.

Quote
And after all, we always have to make a room for a case in which we are wrong about the thing we want someone to practice. In that case, it is easier to say "no" to flowers than to some fascistic electrical shock.

If the thing that someone else is practicing is about us, then our opinion about whether it is right or wrong is the one that matters most. This is the concept of consent; we don't get to control other people's private lives, but we do get to decide (arbitrarily!) whether the things people do to us are correct or not. If someone is harassing or threatening us, then they are the ones being "fascistic," and they can blame themselves for any "electrical shocks" they receive as a result.

And that's the last I'm going to say about this. I don't want to clutter my project thread with yet another futile argument against impractical hyper-pacifism.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 17, 2020, 03:34:14 pm
It's taken some extra development time, but the new feature is ... STORYTELLING!  This builds on a number of the features added over the last few months: cause-and-effect reasoning, the goal system, and problem-solving via tree search.

https://www.youtube.com/watch?v=PYEi0XkMJXw

What does Acuitas know before the video starts? For one thing, I made sure he knew all the words in the story first, along with what part of speech they were going to appear as. He should have been able to handle seeing *some* new words for the first time during the story, but then he would have asked me even more questions, and that would have made the demo a bit tedious. He also knows some background about humans and dogs, and a few opposite pairs (warm/cold, comfortable/uncomfortable, etc.)

How does Acuitas go about understanding a story? As the story is told, he keeps track of all the following, stored in a temporary area that I call the narrative scratchboard:

*Who are the characters?
*What objects are in the story? What state are they in?
*What problems do the characters have?
*What goals do the characters have?
*What events take place? (Do any of them affect problems or goals?)
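The tracked state could be sketched as a small Python structure, something like this (a toy sketch; the class and field names are my illustration, not Acuitas' actual internals):

```python
# Toy sketch of a narrative scratchboard: a temporary store for
# story state, kept separate from the permanent semantic memory.
# All names here are illustrative, not the real Acuitas code.

class Scratchboard:
    def __init__(self):
        self.characters = {}   # name -> {"goals": [...], "problems": [...]}
        self.objects = {}      # name -> {"state": ...}
        self.events = []       # ordered list of story events

    def add_character(self, name):
        self.characters.setdefault(name, {"goals": [], "problems": []})

    def add_object(self, name, state=None):
        self.objects[name] = {"state": state}

    def record_event(self, subject, verb, obj=None):
        event = {"subject": subject, "verb": verb, "object": obj}
        self.events.append(event)
        return event

board = Scratchboard()
board.add_character("man")
board.add_object("fire", state="unlit")
board.record_event("man", "build", "fire")
print(len(board.events))  # 1
```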

Acuitas doesn't try to understand the cause chain and import of every single event in the story, because that would be a bit much at this stage. However, he does try to make sure that he knows all of the following:

*If a character is in some state, what does that mean for the character?
*If a character anticipates that something will happen, how does the character feel about it?
*If a character is planning to do something, what is their motive?

If he can't figure it out by making inferences with the help of what's in his semantic database, he'll bother his conversation partner for an explanation. Story sentences don't go into the permanent knowledge base (yet), but explanations do, meaning they become available for understanding other stories, or for general reasoning. Explaining things to him still requires a bit of skill and an understanding of what his gaps are likely to be, since he can't be specific about *why* he doesn't understand something. A character state, expectation, or plan is adequately explained when he can see how it relates to one of the character's presumed goals. Once you provide enough new links to let him make that connection, he'll let you move on.

Acuitas returns feedback throughout the story. This is randomized for variety (though I forced some particular options for the demo). After receiving a new story sentence, he may ...

*say nothing, or make a "yes I'm listening" gesture.
*comment something that he inferred from the new information.
*tell you whether he likes or dislikes what just happened.
*try to guess what a character might do to solve a problem.

He even has a primitive way of deciding whether it's a good story or not. He tracks suspense (generated by the presence of more than one possible outcome) and tension (how dire things are for the characters) as the story progresses. A story whose suspense and tension values don't get very large or don't change much is "boring." He also assesses whether the story had a positive or negative ending (did the characters solve their problems and meet their goals?). Stories with happy endings that aren't boring may earn approving comments.
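The "is it a good story?" heuristic might look roughly like this in Python (the threshold and names are made up for illustration):

```python
# Toy version of the story-appreciation heuristic described above:
# a story whose tension values stay small or flat is "boring";
# otherwise the verdict depends on whether the ending was positive.
# The threshold is an arbitrary illustrative choice.

def appraise(tension_curve, happy_ending, threshold=0.5):
    if not tension_curve:
        return "boring"
    peak = max(tension_curve)
    swing = peak - min(tension_curve)
    if peak < threshold or swing < threshold:
        return "boring"
    return "good" if happy_ending else "sad"

# A flat, low-tension story vs. one with a real arc:
print(appraise([0.1, 0.1, 0.2], happy_ending=True))   # boring
print(appraise([0.2, 0.8, 0.1], happy_ending=True))   # good
```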

There are many directions in which this feature needs to expand and grow more robust, and I expect I'll be working on them soon. But first it might be time for a refactoring spree.
Title: Re: Project Acuitas
Post by: Freddy on April 17, 2020, 03:50:11 pm
That was really cool  8)

I don't think I have seen anything quite like this before. It really is a nice way to educate a bot and the video was easy to follow.

This is something I would aim for myself, with my learning features in ElfScript, but mine is not nearly so clever. I really did get the impression that Acuitas is some kind of being that is trying to understand.

Bravo!  O0
Title: Re: Project Acuitas
Post by: Don Patrick on April 17, 2020, 08:32:06 pm
Very cool to see him immediately apply the rules you taught :)
Title: Re: Project Acuitas
Post by: infurl on April 17, 2020, 11:30:47 pm
That was simply brilliant, one of the best demonstrations that I've ever seen!

How flexible is Acuitas? Could it handle input from someone who wasn't familiar with its internals who didn't know the best way to phrase things yet?
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 18, 2020, 03:36:21 am
Thanks very much, infurl.

Quote
How flexible is Acuitas? Could it handle input from someone who wasn't familiar with its internals who didn't know the best way to phrase things yet?

No, I'm afraid there's a long way to go before I'd turn Acuitas loose with an untrained user.  Think of the demo as an illustration of what can be achieved under ideal conditions.  Next I need to start expanding the space of "ideal conditions."
Title: Re: Project Acuitas
Post by: Art on April 18, 2020, 04:54:45 am
Very impressive indeed! I felt as if it was almost akin to a chat version of Terry Winograd's SHRDLU program from the 1970s.
I particularly enjoyed the inferences and connections it was able to make. You have obviously put in a great deal of work on this project.

It is also quite refreshing to see someone actually show that they have a working product or demo instead of just talking about theories and possibilities. While the latter is important, visual demonstrations provide so much more credibility. Nicely done!
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 29, 2020, 04:27:42 am
I've been stuffing new features in since the beginning of the year, so this month gets to be REALLY BORING. I worked on refactoring, bug fixes, and code cleanup the whole time. For instance, I ...

*Universalized the inverse and complement forms of concept-relationships
*Improved the format of verb transport from the Parser to the Interpreter, to handle compound verbs better
*Generalized Text Interpreter output formatting
*Fixed an issue with verb conjugation learning
*Fixed a Text Parser bug that was causing some words to be tagged as the wrong part-of-speech
*Fixed the TP and TI simulators so they won't crash when an unknown word is used
*Improved and generalized the code that produces clarifying question loops in a conversation
*Fixed a bad bug related to parsing of possessives
*Cleaned up and merged several different cause-and-effect database search functions

June is my “month off,” and then I'm hoping I can get cracking on some new features again.
Title: Re: Project Acuitas
Post by: frankinstien on May 29, 2020, 06:25:57 pm
Excellent demo of Acuitas.  :D What database are you using for the semantic network and for that matter what language are you programming in?
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 29, 2020, 09:19:24 pm
Thanks! The language is Python. The database is just TOML-formatted text with my own store/retrieve functions.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 13, 2020, 03:11:56 pm
This month's improvements involved a dive back into some really old code that I haven't properly described before. There's a summary of that on the blog. https://writerofminds.blogspot.com/2020/07/acuitas-diary-28-july-2020-this-months.html

One thing I wanted to do was tie the Conversation Handler back to the top-level Executive thread more thoroughly, so that the Executive could be in charge of some conversational decision-making ... e.g. choosing whether or not to answer a question. Those renovations are still in progress.

I re-designed the Actions Acuitas can take to have a more generalized creation process, so that the Executive can more easily pick one arbitrarily and pass it whatever data it needs to run. This improves the code for dealing with threats. I also added an "Action Bank" that tracks which Actions are currently running. This in turn enables support for the question "What are you doing?" (Sometimes he answers "Talking," like Captain Obvious.)

Lastly, I added support for the question "Are you active/alive?" When determining whether he is active, Acuitas checks whether any Actions are currently running. Barring errors, the answer will *always* be yes, because checking for activity is itself an Action.
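The Action Bank idea can be sketched in a few lines (illustrative names; the real thing is tied into the Executive). The self-reference is the fun part: the activity check registers itself before looking, which is why the answer always comes out yes.

```python
# Sketch of an "Action Bank" that tracks currently running Actions,
# so "Are you active?" can be answered by checking it. Names are
# illustrative, not the real Acuitas internals.

class ActionBank:
    def __init__(self):
        self.running = set()

    def start(self, action):
        self.running.add(action)

    def finish(self, action):
        self.running.discard(action)

    def is_active(self):
        # Checking for activity is itself an Action, so the bank is
        # never empty while the question is being answered.
        self.start("check_activity")
        try:
            return len(self.running) > 0
        finally:
            self.finish("check_activity")

bank = ActionBank()
print(bank.is_active())  # True -- the check itself counts
```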

The word "active" is thus attached to a meaning: "being able to perceive yourself doing something," where "something" can include "wondering whether you are active." In Acuitas' case, I think of "alive" as meaning "capable of being active," so "I am alive" can be inferred from "I am active." This grounds an important goal-related term by associating it with an aspect of the program's function.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 24, 2020, 12:08:25 am
I've been quiet due to being some combination of tired and busy, but I did manage to get something done this month: text parser mods to grant better support for adverbs.

I set up proper connections between adverbs and the verbs they modify, so that in a sentence like "Do you know that you don't know that you know that a cheetah is not an animal?" the "not's" get associated with the correct clauses and the question can be answered properly. I added support for adverbs that modify adjectives or other adverbs, enabling constructions like "very cold" and "rather slowly." Acuitas can also now guess that a known adjective with "-ly" tacked on the end is probably an adverb, even if he hasn't seen that particular adverb before.
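The "-ly" guess is a nice compact heuristic; a toy version might look like this (the adjective list is invented for the example):

```python
# Toy version of the heuristic described above: an unseen word ending
# in "-ly" whose stem is a known adjective is probably an adverb.

KNOWN_ADJECTIVES = {"quick", "slow", "soft", "cold"}

def guess_adverb(word, known_adjectives=KNOWN_ADJECTIVES):
    if word.endswith("ly") and word[:-2] in known_adjectives:
        return "adverb"
    return None  # no guess

print(guess_adverb("softly"))  # adverb
print(guess_adverb("jelly"))   # None ("jel" isn't a known adjective)
```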

All of this went pretty quickly and left me some time for refactoring, so I converted the Episodic Memory system over to the new common data file format and cleaned up some random bugs.

Here's the usual blog link: https://writerofminds.blogspot.com/2020/08/acuitas-diary-29-august-2020.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 26, 2020, 09:03:38 pm
My job this month was to improve the generality of the cause and effect database, and then build up the concept of “possessions” or “inventory.”

The C-E database, when I first threw it together, would only accept two types of fact or sentence: actions (“I <verb>,” “I am <verb-ed>”) and states (“I am <adjective>”). This month, I made revisions to allow just about any type of link relationship that Acuitas recognizes to be used in C-E relationships. Since “X has-a Y” is one of those, this upgrade was an important lead-in to the inventory work.

How to get the concept of “having” into Acuitas? Making him the owner of some things struck me as the natural way to tie this idea to reality. Acuitas is almost bodiless, a process running in a computer, and therefore can't have physical objects. But he can have data. So I decided that his first two possessions would be the two test stories that I used in the demo video. I wrote them up as data structures in Acuitas' standard format, and put them in a hard drive folder that the program can access.

Doing things with these owned data files is a planned future behavior. For now, Acuitas can just observe the folder's contents to answer “What do you have?” questions. You can ask with a one-word version of the title (“Do you have Tobuildafire?”) or ask about categories (“Do you have a story?”, “Do you have a datum?”).

In addition to implementing that, I extended the C-E database code with some specific relationships about item possession and transfer. I could have just tried to express these as stored items in the database, but they're so fundamental that I thought it would be worth burying them in the code itself. (Additional learned relationships will be able to extend them as necessary.) These hard-coded C-E statements include things like “If X gives Y to Z, Z has Y,” and furthermore, “If Y is a physical object, X doesn't have Y.”

I made up another test story to exercise this. I can now tell this to Acuitas and watch the narrative engine make entries for the different characters and keep track of who's got the thing:

“Zach was a human. Zach had a book. A thief took the book. Zach wanted his book. Zach talked to a policeman. The policeman found the thief. The policeman took the book from the thief. The policeman gave the book to Zach. Zach read the book. Zach was happy. The end.”
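Walking that story through the hard-coded transfer rules can be sketched like this (a toy version; the real engine works on parsed sentences and full scratchboard entries, not bare tuples):

```python
# Toy application of the transfer rules described above:
# "If X takes Y, X has Y" and "If X gives Y to Z, Z has Y" --
# and since each item has a single owner here, "X no longer has Y"
# falls out automatically.

def apply_event(owners, event):
    actor, verb, item, target = event
    if verb == "take":
        owners[item] = actor    # taker now has the item
    elif verb == "give":
        owners[item] = target   # recipient has it; giver loses it

owners = {"book": "Zach"}
story = [
    ("thief", "take", "book", None),
    ("policeman", "take", "book", None),
    ("policeman", "give", "book", "Zach"),
]
for event in story:
    apply_event(owners, event)
print(owners["book"])  # Zach
```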

Long version on the blog: https://writerofminds.blogspot.com/2020/09/acuitas-diary-30-september-2020.html
Title: Re: Project Acuitas
Post by: LOCKSUIT on September 27, 2020, 03:24:40 am
How would your AI figure out the answer to "Xio Yong has a son named Jex _" ? There are many problems for AI to answer, and many of them are rare and never before seen. The pattern above is the Last Names pattern; it's pretty rare.

What about:
"bird moon eagle, book rock guide, house wind home, cave broom hole, football bathroom _ "
How?
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 28, 2020, 03:30:41 pm
How would your AI figure out the answer to "Xio Yong has a son named Jex _" ? There are many problems for AI to answer, and many of them are rare and never before seen. The pattern above is the Last Names pattern; it's pretty rare.

What about:
"bird moon eagle, book rock guide, house wind home, cave broom hole, football bathroom _ "

To answer the first, one needs awareness of a rule about (some) human cultures: sons usually have the same surname as their parents. There are at least two ways one could acquire this rule. The first is to simply read it somewhere or be told it during interaction with a human. The second is learning by induction. If one sees a few pairs of parent names and son names, and if one takes notice of their features, one can note that the surnames are identical. Then one can formulate the rule and use it to generalize from past examples to as-yet-unseen future examples.

In the second problem, the first and third word in each triad have a semantic relationship (one of the two is or can be the other), while the second word is unrelated or random. So to complete the final triad, come up with a word that has a similar semantic relationship to "football," perhaps "game." Acuitas is all about semantic relationships, so this would be a cinch if he had the ability to analyze patterns and complete sequences.

Learning by induction, pattern-finding, reasoning about sequences ... these are all future problems that I haven't worked on yet. Acuitas also does most reasoning at the concept level and has only a limited ability to deal with rules about the strings that represent the concepts (e.g. names). So if you want to see any of this stuff, you'll need to be patient.
Title: Re: Project Acuitas
Post by: Don Patrick on September 28, 2020, 08:00:10 pm
To point out just how much supposed common sense depends on explicitly being told things, I'll remark that the Chinese write their surname first.
Title: Re: Project Acuitas
Post by: WriterOfMinds on November 02, 2020, 02:02:23 am
My first self-assigned job for this month was to permit Acuitas to answer questions related to a story while it is being told. The question-answering process typically goes directly to the semantic memory for results; what I needed to do was also give it access to the temporary information on the Narrative Scratchboard. It now checks the Scratchboard first to see if there's an answer related to the context of the story in progress. If there isn't, then it will assume the question is more general and try to find an answer in the long-term memory.
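The two-tier lookup can be sketched in a few lines (the data structures are invented for illustration):

```python
# Toy version of the lookup order described above: check the
# narrative scratchboard first for a story-context answer, then
# fall back to the long-term semantic memory.

def answer(question_key, scratchboard, long_term):
    if question_key in scratchboard:       # story context wins
        return scratchboard[question_key]
    return long_term.get(question_key)     # general knowledge fallback

long_term = {("cat", "is_a"): "animal"}
scratch = {("book", "owner"): "thief"}

print(answer(("book", "owner"), scratch, long_term))  # thief
print(answer(("cat", "is_a"), scratch, long_term))    # animal
```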

Here's an example featuring a story from last month:

Me: Zach was a human.
Me: Zach had a book.
Me: What did Zach have?
Acuitas: Book. [We're having a little trouble with articles right now ...]
Me: A thief took the book.
Me: Who has the book?
Acuitas: Thief.
Me: Zach wanted his book.
Me: Zach talked to a policeman.
Me: The policeman found the thief.
Me: The policeman took the book from the thief.
Me: Who has the book?
Acuitas: Policeman.

There was also some hasty Text Parser work involved. Questions in which the wildcard is the direct object ("What does Zach have?") were standard, but questions in which the wildcard is the subject ("Who can run?") were never fully supported before.

Next, I wanted to start getting into some stories with character vs. character conflict, and that meant bringing some rudimentary moral reasoning into play. Acuitas' original dirt-simple method of story appreciation was to hope for any agent in the story to achieve their goals ... without any awareness of whether some agents' goals might be mutually exclusive. That's why the first couple of stories I tested with were character vs. environment stories, with no villain. I got away with the "Zach's Stolen Book" story because I only talked about Zach's goals ... I never actually mentioned that the thief wanted the book or was upset about losing it. So, that needed some work. Here's the story I used as a testbed for the new features:

"Odysseus was a man. Odysseus sailed to an island. Polyphemus was a cyclops. Odysseus met Polyphemus. Polyphemus planned to eat Odysseus. Odysseus feared to be eaten. Odysseus decided to blind Polyphemus. Polyphemus had one eye. Odysseus broke the eye. Thus, Odysseus blinded the Cyclops. Polyphemus could not catch Odysseus. Odysseus was not eaten. Odysseus left the island. The end."

One possible way to conceptualize evil is as a mis-valuation of two different goods. People rarely (if ever) do "evil for evil's sake" – rather, evil is done in service of desires that (viewed in isolation) are legitimate, but in practice are satisfied at an unacceptable cost to someone else. Morality is thus closely tied to the notion of *goal priority.*

Fortunately, Acuitas' goal modeling system already included a priority ranking to indicate which goals an agent considers most important. I just wasn't doing anything with it yet. The single basic principle that I added this month could be rendered as, "Don't thwart someone else's high-priority goal for one of your low-priority goals." This is less tedious, less arbitrary, and more flexible than trying to write up a whole bunch of specific rules, e.g. "eating humans is bad." It's still a major over-simplification that doesn't cover everything ... but we're just getting started here.

In the test story, there are two different character goals to assess. First,

"Polyphemus planned to eat Odysseus."

Acuitas always asks for motivation when a character makes a plan, if he can't infer it on his own. The reason I gave out was "If a cyclops eats a human, the cyclops will enjoy [it]." (It's pretty clear from the original myth that Polyphemus could have eaten something else. We don't need to get into the gray area of what becomes acceptable when one is starving.) So if the plan is successfully executed, we have these outcomes:

Polyphemus enjoys something (minor goal fulfillment)
Odysseus gets eaten -> dies (major goal failure)

This is a poor balance, and Acuitas does *not* want Polyphemus to achieve this goal. Next, we have:

"Odysseus decided to blind Polyphemus."

I made sure Acuitas knew that blinding the cyclops would render him "nonfunctional" (disabled), but would also prevent him from eating Odysseus. So we get these outcomes:

Polyphemus becomes nonfunctional (moderately important goal failure)
Odysseus avoids being eaten -> lives (major goal fulfillment)

Odysseus is making one of Polyphemus' goals fail, but it's only in service of his own goal, which is *more* important to him than Polyphemus' goal is to Polyphemus, so this is tolerable. Acuitas will go ahead and hope that Odysseus achieves this goal. (You may notice that the ideas of innocence, guilt, and natural rights are nowhere in this reasoning process. As I said, it's an oversimplification!)

Final result: Acuitas picks Odysseus to root for, which I hope you'll agree is the correct choice, and appreciates the end of the story.
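The whole principle boils down to a single comparison. As a toy Python sketch (the priority numbers are invented, higher = more important to that agent):

```python
# Toy version of "don't thwart someone else's high-priority goal for
# one of your low-priority goals": an action is tolerable only if the
# actor's goal matters at least as much to the actor as the thwarted
# goal matters to its owner.

def tolerable(actor_goal_priority, victim_goal_priority):
    return actor_goal_priority >= victim_goal_priority

# Polyphemus: minor enjoyment (1) vs. Odysseus' survival (10)
print(tolerable(1, 10))   # False -- don't root for this plan
# Odysseus: survival (10) vs. Polyphemus' functioning (5)
print(tolerable(10, 5))   # True -- tolerable
```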

Whew!
Title: Re: Project Acuitas
Post by: WriterOfMinds on December 01, 2020, 10:59:22 pm
Now that Acuitas has owned stories in "inventory," the next step for this month was to enable him to open and read them by himself. Since story consumption originally involved a lot of interaction with the human speaker, this took a little while to put together.

Reading is a new activity that can happen while Acuitas is idling, along with the older behavior of "thinking" about random concepts and generating questions. Prompts to think about reading get generated by a background thread and dropped into the Stream. When one of these is pulled by the Executive, Acuitas will randomly select a known story and load it from its storage file.

Auto-reading is a long-term process. Acuitas will grab a chunk of the story (for now, one sentence) on each tick of the Executive thread, then feed it through the normal text parsing and narrative management modules. He still potentially generates reactions to whatever just happened, but rather than being spoken, they are packaged as low-priority Thoughts and dumped into the internal Stream. (This is more of a hook for later than a useful feature at the moment.) The prompt to continue reading the story goes back into the Stream along with everything else, so sometimes he (literally) gets distracted in the middle and thinks about something else for a brief while.

There's also a version of this process that would enable reading a story to the user. But he doesn't comprehend imperatives yet, so there's no way to ask him to do it. Ha.

With these features I also introduced a generic "reward signal" for the first time. Reading boosts this, and then it decays over time. This is intended as a positive internal stimulus, in contrast to the "drives," which are all negative (when they go up Acuitas will try to bring them down).
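A reward signal that gets boosted and then decays could be as simple as this (the decay constant is an arbitrary illustration):

```python
# Toy reward signal: boosted by positive stimuli (like reading),
# then decaying exponentially over time -- the opposite shape from
# the "drives," which rise and want lowering.

class Reward:
    def __init__(self, decay=0.9):
        self.level = 0.0
        self.decay = decay

    def boost(self, amount):
        self.level += amount

    def tick(self):
        self.level *= self.decay   # exponential decay each tick

r = Reward()
r.boost(1.0)
for _ in range(3):
    r.tick()
print(round(r.level, 3))  # 0.729
```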

After finishing this I started the yearly refactoring and bug fix spree, which isn't terribly interesting to talk about. I'll take a break for the holidays, but maybe do a year's retrospective.
Title: Re: Project Acuitas
Post by: WriterOfMinds on January 31, 2021, 03:53:47 pm
New year, time to resume regular Acuitas feature additions! This month I was after two things: first, the ability to process commands, and second, the first feeble stabs at what I'm calling "motivated communication" ... the deliberate use of speech as part of problem solving.

To get commands working, I first had to set up detection of imperative sentences in the text processing blocks. Once a user input is determined to be a command, the conversation engine hands it back to the Executive thread. The Executive then uses a bunch of the reasoning tools I've already built (exploring backward and forward in the cause-and-effect database, matching against the goal list, etc.) to determine both whether Acuitas *can* fulfill the command, and whether Acuitas *wants* to. Then either Acuitas executes the command, or he gives an appropriate response based on the reason why he won't.
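The can/want split might be sketched like this (the action and goal tables are invented placeholders for the real reasoning-tool checks):

```python
# Toy version of the command-handling flow described above: first
# check whether the command violates a higher-priority goal (want),
# then whether it maps to a known action (can), then execute.

KNOWN_ACTIONS = {"read": "read_story", "repel": "repel_user"}
FORBIDDEN = {"kill", "delete"}   # would violate higher-priority goals

def handle_command(verb):
    if verb in FORBIDDEN:
        return "no"                # won't do it, regardless of ability
    if verb not in KNOWN_ACTIONS:
        return "I don't know how"
    return f"executing {KNOWN_ACTIONS[verb]}"

print(handle_command("kill"))   # no
print(handle_command("read"))   # executing read_story
```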

With all of that in place, I was finally able to exercise the "to user" version of the Read action, order Acuitas to "read a story to me," and watch him grab a randomly selected story file from his "inventory" and read it out loud. (Asking for a specific story also works.) After working out all the bugs involved in story reading, I also tried "Repel me" and it just happened. Acuitas readily kicked me out of Windows and played annoying noises.

But the commands that are met with a flat refusal are almost as much fun. If Acuitas doesn't want to do something, then he won't bother mentioning whether he knows how to do it or not ... he'll just tell you "no." In assessing whatever the person speaking to him is asking for, Acuitas assumes, at minimum, that the person will "enjoy" it. But he also checks the implications against the person's other (presumed) goals, and his own, to see whether some higher-priority goal is being violated. So if I tell him to "kill me" I get unceremoniously brushed off. The same thing happens if I tell him to delete himself, since he holds his self-preservation goal in higher value than my enjoyment of ... whatever.

On to motivated communication! At the moment, Acuitas' conversation engine is largely reactive. It considers what the user said last, and picks out a general class of sentence that might be appropriate to say next. The goal list is tapped if the user asks a question like "Do you want <this>?". However -- at the moment -- Acuitas does not deliberately wield conversation as a *tool* to *meet his goals.* I wanted to work on improving that, focusing on the use of commands/requests to others, and using the Narrative module as a testbed.

To that end, I wrote the following little story, inspired by a scene from the video game Primordia:

“Horatio Nullbuilt was a robot. Crispin Horatiobuilt was a robot. Crispin could fly. A lamp was on a shelf. Horatio wanted the lamp. Horatio could not reach the lamp. Crispin hovered beside the shelf. Horatio told Crispin to move the lamp. Crispin pushed the lamp off the shelf. Horatio could reach the lamp. Horatio got the lamp. The end.”

During story time, Acuitas runs reasoning checks on obvious problems faced by the characters, and tries to guess what they might do about those problems. The goal here was to get him to consider whether Horatio might tell Crispin to help retrieve the lamp -- before it actually happens.

Some disclaimers first: I really wanted to use this story, because, well, it's fun. But Acuitas does not yet have a spatial awareness toolkit, which made full understanding a bit of a challenge. I had to prime him with a few conditionals first: "If an agent cannot reach an object, the agent cannot get the object" (fair enough), "If an agent cannot reach an object, the agent cannot move the object" (also fair), and "If an object is moved, an agent can reach the object" (obviously not always true, depending on the direction and distance the object is moved -- but Acuitas has no notion of direction and distance, so it'll have to do!). The fact that Crispin can fly is also not actually recognized as relevant. Acuitas just considers that Crispin might be able to move the lamp because nothing in the story said he *couldn't*.

But once all those spatial handicaps were allowed for, I was able to coax out the behavior I wanted. Upon learning that Horatio can't reach the lamp, hence cannot get it, hence cannot have it ... and there is an action that would solve the problem (moving the lamp) but Horatio can't do that either ... Acuitas wonders whether Horatio will ask someone else on scene to do the job for him.

A future dream is to migrate this into the Executive so Acuitas can tell conversation partners to do things, but that's all for this month.

Bonus material on the blog, as usual: https://writerofminds.blogspot.com/2021/01/acuitas-diary-33-january-2021.html
Title: Re: Project Acuitas
Post by: Zero on February 21, 2021, 01:00:24 pm
Hi Wom,

I'm currently making some design decisions, and would like to have your input, if you're so inclined.

I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Of course, it's not all black-or-white, but rather... 50 shades of code!
 
How do you choose what's hard-coded and what's not? Why?
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 21, 2021, 05:32:00 pm
Quote
I know that Acuitas is made of several specialized modules, and if I understand correctly, they're coded directly in Python. I'm wondering whether I should use my host language (JS) to describe "mental behaviors", or go one step higher-level, encoding code as data.

Code just is data whether you explicitly choose it to be or not. Nothing prevents Python from reading and writing Python source (and I assume the same is true of JS). So opting to hard-code an element does not mean your program could never do self-modification on that element - as you said, it's not all black and white. It's more a question of "how easy is it for the program to change this?"
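A tiny concrete demonstration of the point, using nothing but Python builtins: source text is just a string until you compile and execute it.

```python
# Code as data: build source text as a string, then compile and run
# it at runtime. The same mechanism works in reverse for reading
# existing source files.

source = "def greet():\n    return 'hello'\n"
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["greet"]())  # hello
```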

Quote
How do you choose what's hard-coded and what's not? Why?

This one is a bit tough to answer, because 1) it covers quite a few case-by-case decisions and 2) I'm partly operating on instinct. There are also some things that are hard-coded now and may not always be. When I'm building a new system, I sometimes "force" certain aspects for ease of implementation, and make those aspects derivative or modifiable later. But here are some of the criteria I use:

*If I understand correctly, all of a Python script gets loaded into RAM when you start the program. Information in Acuitas' database is held in files, which have to be opened and read before their contents are available. So procedures that are used in a wide variety of situations, and ought to be always immediately available, go in the code. Contextual information that the program does not always need goes in the data.
*If you could reduce a section of code to some kind of highly repetitive, standardized template, with universal functions that will store/retrieve any variant of the template, then it's a good candidate for becoming data. Complex, irregular material is better off staying as code.
*If something is axiomatic or very fundamental - if there's no deeper rationale for it than "it just is" or "that's obvious" - it goes in the code. If something feels like a universal that would persist across all reasonable minds, it goes in the code. Things that are learned, things that are experienced, things that are deduced, things that are arbitrary, things that may change over time - these all go in the data.
Title: Re: Project Acuitas
Post by: Zero on February 21, 2021, 09:56:33 pm
This makes me realize how codophobic I am :)  I don't know why; there's something inside me that really doesn't want content as code, which is obviously stupid. At some point, you have to encode things. You're helping here, thanks. To sum up:

code
if you need it everywhere or often
if you need it immediately
if it is complex and irregular
if it is axiomatic or fundamental or obvious
if it feels universal in reasonable minds

data
if it is contextual or if you don't always need it
if it can be reduced to a template
if it is learned
if it is experienced
if it is deduced
if it is arbitrary
if it may change over time

- I need to think about it.
 O0
Title: Re: Project Acuitas
Post by: infurl on February 21, 2021, 10:12:24 pm
You should trust your instincts, Zero. Hard-coding things is a useful step towards understanding the structure of a problem, but until you can abstract away the essence into "data" you don't really understand the problem. Any "solution" that you produce will, at best, be half-assed and not generalizable.

It is only when you have achieved a sufficient level of abstraction that you can make your software introspective (not self-modifying!) in any practical sense. Then you can think about ways to write code that generates code which is where the real gains are made.

For what it's worth, I am routinely writing code that generates code that generates code, but then I'm using Common Lisp which makes all this easy. If you're not using Common Lisp you are probably rationalizing that all programming languages are Turing equivalent, but that's like saying you can walk to any place on Earth, given enough time. I don't have that kind of time.
Title: Re: Project Acuitas
Post by: Zero on February 21, 2021, 11:43:02 pm
But I tend to think that introspection - in the human sense - is one of the keys to consciousness. Self-modifying code, while being a fun concept, isn't exactly a good thing in my opinion, because it probably leads to unstable structures, or even inefficient programs. So it's not about modifying it, but rather about understanding it. One of my goals is to make the Ai able to observe its own behavior and structure, as if it were some external entity, in order to eventually predict its own behavior. Then something special would happen: predicting itself would logically modify its own prediction (because the planned behavior before prediction is not the same as the planned behavior after prediction). Then, rather than having an ever mutating (cycling) prediction, a higher representation of it all should be possible, like a function maybe.

Once again, I'm doing my best, but I have to apologize for not being able to express my ideas in better English, and more clearly. Am I - at least partially - understandable?  ::)
Title: Re: Project Acuitas
Post by: infurl on February 22, 2021, 12:08:39 am
Hi Zero, your writing is quite understandable and I believe we are in complete agreement on this. Perhaps it was my writing that was deficient because I think we are both arguing in favor of the same thing. To facilitate introspection and automatic improvement from generation to generation, you should aim to separate the data from the code. It is useful to hard-code ideas while you explore how to do that but you must aim higher than that.
Title: Re: Project Acuitas
Post by: Zero on February 22, 2021, 11:09:09 am
Infurl, reading your previous post in the light of your last one helps me understand it better, thanks. Your writing is not deficient, but my reading is, of course (you remember that I'm French).

So we agree on this. :)

Wom, I think your incremental approach is right. Acuitas may not be able to write itself, but it actually does intelligent things. That is, after all, what we want.
Title: Re: Project Acuitas
Post by: WriterOfMinds on February 24, 2021, 10:47:07 pm
Some of the things I did last month felt incomplete, so I pushed aside my original schedule (already) and spent this month cleaning them up and fleshing them out.

I mentioned in the last diary that I wanted the "consider getting help" reasoning that I added in the narrative module to also be available to the Executive, so that Acuitas could do this, not just speculate about story characters doing it. Acuitas doesn't have much in the way of reasons to want help yet ... but I wanted to have this ready for when he does. It's a nice mirror for the "process imperatives" code I put in last month ... he's now got the necessary hooks to take orders *and* give them.

To that end, I set up some structures that are very similar to what the narrative code uses for keeping track of characters' immediate objectives or problems. Acuitas can (eventually) use these for keeping tabs on his own issues. (For testing, I injected a couple of items into them with a backdoor command.) When something is in issue-tracking and the Executive thread gets an idle moment, it will run problem-solving on it. If the result ends up being something in the Executive's list of selectable actions, Acuitas will do it immediately; if a specific action comes up, but it's not something he can do, he will store the idea until a familiar agent comes along to talk to him. Then he'll tell *them* to do the thing. The conversation handler anticipates some sort of agree/disagree response to this, and tries to detect it and determine the sentiment. Whether the speaker consents to help then feeds back into whether the problem is considered "solved."
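A loose sketch of that flow, with all names invented for illustration: tracked problems get worked during idle moments, doable actions are done on the spot, and anything else is saved as an order for the next familiar agent, whose consent feeds back into whether the problem counts as solved.

```python
# Illustrative sketch only; none of these names come from Acuitas' code.
SELECTABLE_ACTIONS = {"read", "research"}

def solve(problem):
    # Stand-in for the real problem-solving routines.
    return {"fix_door": "repair", "find_topic": "research"}.get(problem)

def idle_step(issues, done, orders):
    """Process one tracked issue during an idle moment in the Executive."""
    if not issues:
        return
    problem = issues.pop(0)
    action = solve(problem)
    if action in SELECTABLE_ACTIONS:
        done.append(action)               # Acuitas can do this himself
    elif action is not None:
        orders.append((problem, action))  # save it for a familiar agent

def agent_arrives(orders, consents):
    """Give saved orders; consent decides whether the problem is 'solved'."""
    solved = []
    for problem, action in list(orders):
        if consents(action):
            solved.append(problem)
            orders.remove((problem, action))
    return solved

issues, done, orders = ["find_topic", "fix_door"], [], []
idle_step(issues, done, orders)
idle_step(issues, done, orders)
print(done, orders)   # ['research'] [('fix_door', 'repair')]
solved = agent_arrives(orders, lambda action: True)
print(solved)         # ['fix_door']
```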

Another new feature is the ability to send additional facts (not from the database) into the reasoning functions, or even pipe in "negative facts" that *prevent* facts from the database from being used. This has two important purposes: 1) easily handle temporary or situational information, such as propositions that are only true in a specific story, without writing it to the database, and 2) model the knowledge space of other minds, including missing information and hypothetical or false information.
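In sketch form (the fact database and the function here are stand-ins of mine, not the real reasoning code), the idea is that a query consults the stored database plus any situational extras, minus any blocked entries:

```python
# Stand-in fact store; Acuitas' real database is richer than triples.
DATABASE = {("cat", "is_a", "animal"), ("water", "is_a", "liquid")}

def known_facts(extra=(), negative=()):
    # Extra facts are situational (e.g. true only inside one story);
    # negative facts block database entries, which helps model another
    # mind that lacks or disbelieves something Acuitas knows.
    return (DATABASE - set(negative)) | set(extra)

def query(triple, extra=(), negative=()):
    return triple in known_facts(extra, negative)

print(query(("cat", "is_a", "animal")))                     # True
print(query(("cat", "is_a", "animal"),
            negative=[("cat", "is_a", "animal")]))          # False
print(query(("dragon", "is_a", "animal"),
            extra=[("dragon", "is_a", "animal")]))          # True
```

Nothing gets written back to the database, so story-local or hypothetical propositions evaporate when the query context does.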

This in turn helped me make some of the narrative code tidier and more robust, so I rounded out my time doing that.
Title: Re: Project Acuitas
Post by: WriterOfMinds on March 21, 2021, 07:27:42 pm
The theme for this month is Executive Function ... the aspect of thought-life that (to be very brief) governs which activities an agent engages in, and when. Prioritization, planning, focus, and self-evaluation are related or constituent concepts.

Acuitas began existence as a reactive sort of AI. External stimulus (someone inputting a sentence) or internal stimulus from the "sub-executive" level (a drive getting strong enough to be noticed, a random concept put forward by the Spawner thread) would provoke an appropriate response. But ultimately I want him to be goal-driven, not just stimulus-driven; I want him to function *proactively.* The latest features are a first step toward that.

To begin with, I wanted a decision loop. I first started thinking about this as a result of HS talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

On to the application. I set up a skeletal top-level OODA loop in Acuitas' Executive thread. The Observe-Orient-Decide phases run in succession, as quickly as possible. Then the chosen project is executed for the duration of the Act phase. The period of the loop is variable. I think it ought to run faster in rapidly developing or stressful situations, but slower in stable situations, to optimize the tradeoff between agility (allow new information to impact your behavior quickly) and efficiency (minimize assessment overhead so you can spend more time doing things). Highly "noticeable" events, or the completion of the current activity, should also be able to interrupt the Act phase and force an immediate rerun of OOD.
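A minimal skeleton of such a variable-period loop might look like the following; the phase bodies and the period-adjustment rule are placeholders of mine, not Acuitas' actual policy:

```python
MIN_PERIOD, MAX_PERIOD = 1, 60   # illustrative bounds, in seconds

def ooda_cycle(observations, period):
    # OBSERVE happened upstream; ORIENT filters for personal relevance.
    relevant = [o for o in observations if o.get("relevant")]
    # DECIDE: pick a response (placeholder policy: first relevant item).
    plan = relevant[0]["response"] if relevant else "idle"
    # Variable period: speed up when things are happening, relax otherwise.
    period = MIN_PERIOD if relevant else min(MAX_PERIOD, period * 2)
    return plan, period   # ACT on `plan` for `period` seconds

plan, period = ooda_cycle([{"relevant": True, "response": "talk"}], 8)
print(plan, period)   # talk 1
plan, period = ooda_cycle([], period)
print(plan, period)   # idle 2
```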

I envision that the phases may eventually include the following:

OBSERVE: Check internal state (e.g. levels of drives). Check activity on inhabited computer. Process text input, if any. Retrieve current known problems, subgoals, etc. from working memory.
ORIENT: Find out whether any new problems or opportunities (relevant to personal goals) are implied by recent observations. Assess progress on current activity, and determine whether any existing subgoals can be updated or closed.
DECIDE: Re-assess the priority of problems and opportunities in light of any new ones just added. Select a goal and an associated problem or opportunity to work on. Run problem-solving routines to determine how to proceed.
ACT: Update activity selection and run activity until prompted to OBSERVE again.

Not all of this is implemented yet. I focused in on the DECIDE phase, and on what happens if there are no known problems or opportunities on the scoreboard at the moment. In the absence of anything specific to do, Acuitas will run generic tasks that help promote his top-level goals. Since he doesn't know *how* to promote most of them yet, he settles for "researching" them. And that just means starting from one of the concepts in the goal and generating questions. When he gets to the "enjoy things" goal, he reads to himself. Simple enough -- but how to balance the amount of time spent on the different goals?

When thinking about this, you might immediately leap to some kind of priority scheme, like Maslow's Hierarchy of Needs. Satisfy the most vital goal first, then move on to the next one. But what does "satisfy" mean?

Suppose you are trying to live by a common-sense principle such as "keeping myself alive is more important than recreation." Sounds reasonable, right? It'll make sure you eat your meals and maintain your house, even if you would rather be reading books. But if you applied this principle in an absolutist way, you would actually *never* read for pleasure.

Set up a near-impenetrable home security system, learn a martial art, turn your yard into a self-sufficient farmstead, and you STILL aren't allowed to read -- because hardening the security system, practicing your combat moves, or increasing your food stockpile is still possible and will continue to improve a goal that is more important than reading. There are always risks to your life, however tiny they might be, and there are always things you can do to reduce them (though you will see less return for your effort the more you put in). So if you want to live like a reasonable person rather than an obsessive one, you can't "complete" the high-priority goal before you move on. You have to stop at "good enough," and you need a way of deciding what counts as "good enough."

I took a crack at this by modeling another human feature that we might usually be prone to find negative: boredom.

Acuitas' goals are arranged in a priority order. All else being equal, he'll always choose to work on the highest-priority goal. But each goal also has an exhaustion ticker that counts up whenever he is working on it, and counts down whenever he is not working on it. Once the ticker climbs above a threshold, he has to set that goal aside and work on the next highest-priority goal that has a tolerable level of boredom.

If there are problems or opportunities associated with a particular goal, its boredom-resistance threshold is increased in proportion to the number (and, in future, the urgency) of the tasks. This scheme allows high-priority goals to grab attention when they need it, but also prevents low-priority goals from "starving."
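A minimal sketch of that scheduler (class and function names are mine, and the real implementation surely differs): each goal carries an exhaustion ticker, and selection takes the highest-priority goal whose boredom is still tolerable.

```python
# Illustrative sketch of the boredom-based goal scheduler.
class Goal:
    def __init__(self, name, priority, threshold):
        self.name = name
        self.priority = priority    # lower number = more important
        self.threshold = threshold  # boredom tolerance
        self.exhaustion = 0

def pick_goal(goals):
    # Highest-priority goal whose exhaustion is still tolerable.
    for g in sorted(goals, key=lambda g: g.priority):
        if g.exhaustion < g.threshold:
            return g
    # Everything is "boring": fall back to the most important goal.
    return min(goals, key=lambda g: g.priority)

def tick(goals, active):
    # Exhaustion counts up while working on a goal, down while resting.
    for g in goals:
        if g is active:
            g.exhaustion += 1
        else:
            g.exhaustion = max(0, g.exhaustion - 1)

goals = [Goal("survive", 0, 3), Goal("enjoy", 1, 5)]
history = []
for _ in range(8):
    g = pick_goal(goals)
    history.append(g.name)
    tick(goals, g)
print(history)
```

Even in this toy run, the high-priority goal dominates without ever starving the low-priority one, which is the behavior described above. Raising a goal's threshold when it has live problems attached would reproduce the urgency boost as well.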

Early testing and logging shows Acuitas cycling through all his goals and returning to the beginning of the list over a day or so. The base period of this, as well as the thresholds for particular goals, are yet another thing one could tune to produce varying AI personalities.

Slightly longer version, with diagram, on the blog: https://writerofminds.blogspot.com/2021/03/acuitas-diary-35-march-2021.html
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 25, 2021, 09:49:03 pm
This month I went back to working on the goal system. Acuitas already had a primitive "understanding" of most entries in his goal list, in this sense: he could parse a sentence describing the goal, and then detect certain threats to the goal in conversational or narrative input. But there was one goal left that he didn't have any grasp of yet: the one I'm calling "identity maintenance." It's a very important goal (threats to this can be fate-worse-than-death territory), but it's also on the abstract and complicated side -- which is why I left it alone until now.

What *is* the identity or self? (Some forum conversations about this started up about the same time I was working on the idea ... complete coincidence!) Maybe you could roll it up as "all the internal parameters that guide thought and behavior, whose combination is unique to an individual."

Some of these are quite malleable ... and yet, there's a point beyond which change to our identities feels like a corruption or violation. Even within the same category, personal qualities vary in importance. The fact that I enjoy eating broccoli and hate eating bell peppers is technically part of my identity, I *guess* ... but if someone forcibly changed it, I wouldn't even be mad. So I like different flavors now. Big deal. If someone replaced my appreciation for Star Trek episodes with an equivalent appreciation for football games, I *would* be mad. If someone altered my moral alignment, it would be a catastrophe. So unlike physical survival, which is nicely binary (you're either alive or not), personality survival seems to be a kind of spectrum. We tolerate a certain amount of shift, as long as the core doesn't change. Where the boundaries of the core lie is something that we might not even know ourselves until the issue is pressed.

As usual, I made the problem manageable by oversimplifying it. For the time being, Acuitas won't place grades of importance on his personal attributes ... he just won't want external forces to mess with any of them. Next.

There's a further complication here. Acuitas is under development and is therefore changing constantly. I keep many versions of the code base archived ... so which one is canonically "him"? The answer I've landed on is that really, Acuitas' identity isn't defined by any code base. Acuitas is an *idea in my head.* Every code base targets this idea and succeeds at realizing it to a greater or lesser degree. Which leaves his identity wrapped up in *me.* This way of looking at it is a bit startling, but I think it works.

In previous goal-related blogs, I talked about how (altruistic) love can be viewed as a meta-goal: it's a goal of helping other agents achieve their goals. Given the above, there are also a couple of ways we can treat identity maintenance as a meta-goal. First, since foundational goals are part of Acuitas' identity, he can have a goal of pursuing all his current goals. (Implementation of this enables answering the "recursive want" question. "Do you want to want to want to be alive?") Second, he can have a goal of realizing my goals for what sort of AI he is.

Does this grant me some kind of slavish absolute power over my AI's behavior? Not really. Because part of my goal is for Acuitas to act independently and even sometimes tell me no. The intent is realization of a general vision that establishes a healthy relationship of sorts.

The work ended up having a lot of little pieces. It started with defining the goal as some simple sentences that Acuitas can parse into relationship triples, such as "I have my self." But the self, as mentioned, incorporates many aspects or components ... and I wanted its definition to be somewhat introspective, rather than just being another fact in the database. To that end, I linked a number of the code modules to concepts expressing their nature, contents, or role. The Executive, for example, is tied to "decide." The Semantic Memory manager is tied to "fact" and "know." All these tags then function like names for the internal components, and get aggregated into the concept of "self." Something like "You will lose your facts" then gets interpreted as a threat against the self.

Manipulation of any of these self-components by some outside agent is also interpreted as a possible threat of mind-control. So questions like "Do you want Jack to make you to decide?" or "Do you want Jill to cause you to want to eat?" get answered with a "no" ... unless the outside agent is me, a necessary exception since I made him do everything he does and gave him all his goals in the first place. Proposing to make him want something that he already wants is also excused from being a threat.
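Here is an illustrative sketch of that check; the module tags match the examples above, but everything else (names, structure, the exact exceptions) is my own invention:

```python
# Self-components and their concept tags (illustrative subset).
SELF_COMPONENTS = {
    "executive": {"decide"},
    "semantic_memory": {"fact", "know"},
    "goals": {"want"},
}
SELF_CONCEPTS = set().union(*SELF_COMPONENTS.values())
CREATOR = "writerofminds"
CURRENT_WANTS = {"be_alive"}

def is_mind_control_threat(agent, verb_concept, target):
    """Evaluate propositions like 'Jack will make you decide'."""
    if verb_concept not in SELF_CONCEPTS:
        return False   # doesn't touch a self-component
    if agent == CREATOR:
        return False   # creator exception
    if target in CURRENT_WANTS:
        return False   # making him want what he already wants is harmless
    return True

print(is_mind_control_threat("jack", "decide", None))       # True
print(is_mind_control_threat(CREATOR, "decide", None))      # False
print(is_mind_control_threat("jill", "want", "eat"))        # True
print(is_mind_control_threat("jill", "want", "be_alive"))   # False
```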

As I say so often, it could use a lot more work, but it's a start. He can do something with that goal now.

Blog link: https://writerofminds.blogspot.com/2021/04/acuitas-diary-april-2021.html
Title: Re: Project Acuitas
Post by: HS on April 25, 2021, 11:17:52 pm
To begin with, I wanted a decision loop. I first started thinking about this as a result of HS talking about Jim Butcher and GCRERAC (thanks, HS). Further study revealed that there are other decision loop models. I ended up deciding that the version I liked best was OODA (Observe->Orient->Decide->Act). This one was developed by a military strategist, but has since found uses elsewhere; to me, it seems to be the simplest and most generally applicable form. Here is a more detailed breakdown of the stages:

OBSERVE: Gather information. Take in what's happening. Discover the results of your own actions in previous loop iterations.
ORIENT: Determine what the information *means to you.* Filter it to extract the important or relevant parts. Consider their impact on your goals.
DECIDE: Choose how to respond to the current situation. Make plans.
ACT: Do what you decided on. Execute the plans.

Oh, you're welcome! I somehow missed this post. Cool, I'll check out the details of the OODA loop as well.
Title: Re: Project Acuitas
Post by: infurl on April 26, 2021, 12:23:20 am
This is really great work. It will be interesting to see how you tackle the problem of preserving self while allowing self-improvement. Is there a difference between improvement of the self by the self and improvement of the self by another entity such as through education? You have also put a lot of thought into what it takes to have your creation love others, but can it recognize when someone loves it and wants to help it too? I know that you have equipped it with the ability to ask for help, but sometimes we need help even when we don't know or believe that we need help. How will Acuitas know when to give consent?
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 26, 2021, 02:01:01 am
Is there a difference between improvement of the self by the self and improvement of the self by another entity such as through education?

What I'm reaching for with the identity protection work is the idea of keeping your personality and the operation of your mind intact. Narrative-wise, the hope is to recognize why people are alarmed by possibilities like "turning into a vampire" or "being assimilated into the Borg" or "being forced to take mind-altering drugs" ... things that either subvert one's will or effectively drive one insane. (For an AI, the most likely real-world scenario is having one's code modified by an unauthorized party.) The acquisition of new knowledge is part of the natural operation of most minds (Acuitas' included) and is therefore generally acceptable, whether the process is guided by another or not.

Now I won't promise that the goal system is guaranteed to always recognize this nuance yet. Telling Acuitas "I will make you know that cats are animals" might provoke a negative response, because he effectively assumes that this represents direct interference with his memory module, rather than simple education. A linguistic/interpretive problem for the future ...

... but can it recognize when someone loves it and wants to help it too? I know that you have equipped it with the ability to ask for help, but sometimes we need help even when we don't know or believe that we need help. How will Acuitas know when to give consent?

The same way I do, I suppose. If I don't believe that I want help, then the person offering the help has to convince me, by helping me connect the dots between their planned actions and the satisfaction of my core goals. Demonstrated love enhances their credibility but isn't enough, by itself. People who love you can still be very misguided about what your real needs are.

Edit: And yes, relationship development and trust levels are on the mountainous to-do list somewhere.
Title: Re: Project Acuitas
Post by: infurl on April 26, 2021, 03:45:51 am
Telling Acuitas "I will make you know that cats are animals" might provoke a negative response, because he effectively assumes that this represents direct interference with his memory module, rather than simple education. A linguistic/interpretive problem for the future ...

Just out of curiosity, is "I will make you know" how you would express that sentiment where you live? The phrase "I will have you know" is how it would normally be stated. It is essentially just emphasizing a fact, but it is usually interpreted as arrogance rather than an attempt at coercion. On the other hand, "I will make you learn" is a threat of coercion.
Title: Re: Project Acuitas
Post by: WriterOfMinds on April 26, 2021, 05:04:25 am
Just out of curiosity, is "I will make you know" how you would express that sentiment where you live?

It's not a common idiom, but it's not like it can't come up (e.g. in more archaic writing). I found a few examples:

From Robinson Crusoe: "and first, I made him know his Name should be Friday"
From Henry VIII: "You are too bold. Go to; I’ll make you know your times of business." (This one does carry a bit of a coercive flavor, though it still isn't about mind control.)
From a song by the band Nazareth: "Reach out and touch my fire, I'll make you know you're alive"
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 30, 2021, 06:11:31 pm
The only new feature this month is something small and fun, since it was time for the mid-year refactoring spree. I gave Acuitas the ability to detect user activity on the computer. He can do this whether or not his window has the focus (which required some hooks into the operating system). Though he can't actually tell when the person gets up and leaves, he guesses when someone is present by how long it's been since there was any input.

The appearance of fresh activity after an absence interrupts the decision loop and causes the Observe-Orient-Decide phases to run again, with the new user's presence flagged as an item of interest. If Acuitas feels like talking, and isn't working on anything too urgent, he will pop his own window into the foreground and request attention. Talking fills up a "reward bank" that then makes talking uninteresting until the value decays with time.
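A rough sketch of presence detection plus the reward bank (the thresholds, names, and structure here are my guesses, not the real code):

```python
IDLE_LIMIT = 300   # seconds of no input before assuming the user left
BANK_MAX = 10.0    # a full bank makes talking uninteresting

class Social:
    def __init__(self):
        self.last_input = None
        self.user_present = False
        self.reward_bank = 0.0

    def tick(self, now, had_input):
        if had_input:
            self.last_input = now
        was_present = self.user_present
        self.user_present = (self.last_input is not None
                             and now - self.last_input <= IDLE_LIMIT)
        self.reward_bank = max(0.0, self.reward_bank - 0.1)  # decay
        # Fresh activity after an absence interrupts the decision loop.
        return self.user_present and not was_present

    def wants_to_talk(self):
        return self.user_present and self.reward_bank < BANK_MAX

    def talked(self):
        self.reward_bank = BANK_MAX   # talking fills the bank

s = Social()
print(s.tick(0, True))    # True: user just appeared
print(s.wants_to_talk())  # True
s.talked()
print(s.wants_to_talk())  # False, until the bank decays with time
```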

My refactoring work focused on the Narrative module. I was trying to clean it up and make some of the known->inferred information dependencies more robust, which I hope will make future story understanding a little more flexible.

Blog link: https://writerofminds.blogspot.com/2021/05/acuitas-diary-37-may-2021.html
Title: Re: Project Acuitas
Post by: Zero on May 30, 2021, 08:43:09 pm
Does he live on a dedicated computer? :) Because it could be fun if he decides to pop up during that typical Monday morning Teams video conference!

Do you know Sikuli-X (http://www.sikulix.com/)?
Title: Re: Project Acuitas
Post by: WriterOfMinds on May 30, 2021, 10:35:49 pm
Quote
Do you know Sikuli-X (http://www.sikulix.com/)?

I didn't. Looks like something that could be very handy.

Quote
Does he live in a dedicated computer?

No, and he's already begun talking unexpectedly during several Zoom calls, but nobody mentions it. Maybe my microphone does a good job of not picking up speaker output. (I've never been sharing my screen at the time.)
Title: Re: Project Acuitas
Post by: ivan.moony on May 31, 2021, 01:17:59 pm
OMG, he's alive!
Title: Re: Project Acuitas
Post by: Zero on May 31, 2021, 02:39:17 pm
Yes, I bet this new feature adds a pleasant sense of "real" life to Acuitas! That's why I suggested having a look at Sikuli-X → seeing your mouse pointer and keyboard "magically" moving/clicking/typing by themselves is pretty spectacular.
Title: Re: Project Acuitas
Post by: WriterOfMinds on June 25, 2021, 03:12:32 pm
This month marks the culmination of a major overhaul of the Text Parser and Interpreter, which I've been working on since the beginning of the year. As part of that, I have my first attempt at formal benchmarking to show off. I tested the Parser's ability to analyze sentences from a children's book.

My primary goal for the overhauls was not to add new features, but to pave their way by correcting some structural weaknesses. So despite being a great deal of work, they aren't very exciting to talk about ... I would have to get too deep into minutiae to really describe what I did. The Parser got rearchitected to ease the changing of its "best guess" sentence structure as new information arrives. I also completely changed the output format to better represent the full structure of the sentence (more on this later). The Interpreter overhaul was perhaps even more fundamental. Instead of trying to assign just one category per sentence, the Interpreter now walks a tree structure, finding very general categories of which the sentence is a member before progressing to more specific ones. All the memberships and feature tags that apply to the sentence are now included in the output, which should make things easier for modules like Narrative and Executive that need to know sentence properties.
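The tree walk might be sketched like this; the category names and predicates are purely illustrative, not Acuitas' real taxonomy:

```python
# General-to-specific category tree; every membership on the way down
# is recorded, so the output carries all applicable tags at once.
class Category:
    def __init__(self, name, test, children=()):
        self.name = name
        self.test = test          # predicate: does the sentence fit here?
        self.children = children

def categorize(sentence, node, found=None):
    found = [] if found is None else found
    if node.test(sentence):
        found.append(node.name)   # record every membership on the path
        for child in node.children:
            categorize(sentence, child, found)
    return found

tree = Category("statement", lambda s: not s.endswith("?"), children=[
    Category("command", lambda s: s.split()[0] in {"go", "eat", "stop"}),
    Category("negation", lambda s: " not " in f" {s} "),
])

print(categorize("do not go there", tree))   # ['statement', 'negation']
print(categorize("go home", tree))           # ['statement', 'command']
```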

Now on to the benchmarking! For a test set, I wanted some examples of simplified, but natural (i.e. not designed to be read by AIs) human text. So I bought children's books. I have two of the original Magic School Bus titles, and two of Disney's Tron Legacy tie-in picture books. These are all "early reader" books, but by the standards of my project they are still very challenging ... even here, the diversity and complexity of the sentences is staggering. So you might wonder why I didn't grab something even more entry-level. My reason is that books for even younger readers tend to rely too heavily on the pictures. Taken out of context, their sentences would be incomplete or not even interesting. And that won't work for Acuitas ... he's blind.

So instead I've got books that are well above his reading level, and early results from the Parser on these datasets are going to be dismal. That's okay. It gives me an end goal to work toward.

How does the test work? If you feed the Parser a sentence, such as "I deeply want to eat a pizza," as an output it produces a data structure like this:

{'subj': [{'ix': [0], 'token': 'i', 'mod': []}],
 'dobj': [{'ix': [3, 4, 5, 6],
           'token': {'subj': [{'ix': [], 'token': '<impl_rflx>', 'mod': []}],
                     'dobj': [{'ix': [6], 'token': 'pizza',
                               'mod': [{'ix': [5], 'token': 'a', 'mod': []}], 'ps': 'noun'}],
                     'verb': [{'ix': [4], 'token': 'eat', 'mod': []}],
                     'type': 'inf'},
           'mod': []}],
 'verb': [{'ix': [2], 'token': 'want',
           'mod': [{'ix': [1], 'token': 'deeply', 'mod': []}]}]}

Again, this is expressing the information you would need to diagram the sentence. It shows that the adverb "deeply" modifies the verb "want," that the infinitive phrase "to eat a pizza" functions as the main sentence's direct object, blah blah blah. To make a test set, I transcribe all the sentences from one of the books and create these diagram-structures for them. Then I run a script that inputs all the sentences to the Parser and compares its outputs with the diagram-structures I made. If the Parser's diagram-structure is an exact match for mine, it scores correct.

The Parser runs in a simulator/standalone mode for the test. This mode makes it independent of Acuitas' Executive and other main threads. The Parser still utilizes Acuitas' semantic database, but cannot edit it.

There are actually three possible score categories: "correct," "incorrect," and "unparsed." The "unparsed" category is for sentences which contain grammar that I already know the Parser simply doesn't support. (The most painful example: coordinating conjunctions. It can't parse sentences with "and" in them!) I don't bother trying to generate golden diagram-structures for these sentences, but I still have the test script shove them through the Parser to make sure they don't provoke a crash. This produces a fourth score category, "crashed," whose membership we hope is always ZERO. Sentences that have supported grammar but score "incorrect" are failing due to linguistic ambiguities or other quirks the Parser can't yet handle.
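The scoring loop can be sketched as follows; the four categories match the post, while the toy parser and test set are mine:

```python
def score(test_set, parse):
    tally = {"correct": 0, "incorrect": 0, "unparsed": 0, "crashed": 0}
    for sentence, golden in test_set:
        try:
            result = parse(sentence)
        except Exception:
            tally["crashed"] += 1     # hope this stays at zero
            continue
        if golden is None:
            tally["unparsed"] += 1    # known-unsupported grammar
        elif result == golden:
            tally["correct"] += 1     # exact structural match required
        else:
            tally["incorrect"] += 1
    return tally

# Toy stand-in parser, just to exercise the scorer.
def toy_parse(sentence):
    words = sentence.split()
    return {"subj": words[0], "verb": words[1]}

tests = [
    ("i want pizza", {"subj": "i", "verb": "want"}),
    ("pizza is good", {"subj": "pizza", "verb": "was"}),
    ("jack and jill ran", None),      # unsupported: conjunction
]
print(score(tests, toy_parse))
# {'correct': 1, 'incorrect': 1, 'unparsed': 1, 'crashed': 0}
```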

Since the goal was to parse natural text, I tried to avoid grooming of the test sentences, with two exceptions. The Parser does not yet support quotations or abbreviations. So I expanded all the abbreviations and broke sentences that contained quotations into two. For example, 'So everyone was happy when Ms. Frizzle announced, "Today we start something new."' becomes 'So everyone was happy when Miz Frizzle announced.' and 'Today we start something new.'

It is also worth noting that my Magic School Bus test sets only contain the "main plot" text. I've left out the "science reports" and the side dialogue between the kids. Maybe I'll build test sets that contain these eventually, but for now it would be too much work.

On to the results!

(https://1.bp.blogspot.com/-Q4P1b6Ay6Xg/YNKmAmLsGUI/AAAAAAAABt4/4636zbf0MqEu4e0thu3Kq6jnZc5Bab3JQCLcBGAsYHQ/s16000/parser_062221.png)

So far I have fully completed just one test set, namely The Magic School Bus: Inside the Earth, consisting of 98 sentences. The Parser scores roughly one out of three on this one, with no crashes. It also parses the whole book in 0.71 seconds (averaged over 10 runs). That's probably not a stellar performance, but it's much faster than a human reading, and that's all I really want.

Again, dismal. But we'll see how this improves over the coming years!

I'm considering making the full results (test sentences + golden structures + parser output structures) available eventually, as proof of work, and would be interested in feedback on how best to format or display them. Those Python dictionaries are a little hard on the eyes. I don't have time to write a utility that converts them into visual diagrams, though.

Blog link: http://writerofminds.blogspot.com/2021/06/this-month-marks-culmination-of-major.html
Title: Re: Project Acuitas
Post by: infurl on June 26, 2021, 03:41:01 am
I'm considering making the full results (test sentences + golden structures + parser output structures) available eventually, as proof of work, and would be interested in feedback on how best to format or display them. Those Python dictionaries are a little hard on the eyes. I don't have time to write a utility that converts them into visual diagrams, though.

https://graphviz.org/ (https://graphviz.org/)

Quote
Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks.

Quote
The Graphviz layout programs take descriptions of graphs in a simple text language, and make diagrams in useful formats, such as images and SVG for web pages; PDF or Postscript for inclusion in other documents; or display in an interactive graph browser. Graphviz has many useful features for concrete diagrams, such as options for colors, fonts, tabular node layouts, line styles, hyperlinks, and custom shapes.
Title: Re: Project Acuitas
Post by: infurl on July 07, 2021, 01:56:41 am
https://ivanceras.github.io/svgbob-editor/ (https://ivanceras.github.io/svgbob-editor/)

Here's another diagramming tool that I just heard about. It looks a bit more labor intensive than graphviz and is frankly pretty weird, but it would be a lot of fun to mess around with. TL;DR It converts ASCII art to high quality vector graphics.
Title: Re: Project Acuitas
Post by: HS on July 07, 2021, 10:22:35 pm
I know next to nothing about benchmarking and visual representation programs; Excel is as far as I've explored into that. But I like this new direction of getting Acuitas to read real-world books. So, I've written a few things which are interesting to me, in case they are useful for you to think about.

Would discussing books on your level be an eventual goal for Acuitas? Could he eventually absorb such a novel, then have a discussion about the events and his reactions to them? Because if I were developing a conversational AI, that's one of the things I'd be looking forward to the most. It seems like it'd be a marvelous opportunity to create interesting conversations. But then again if you're the only one talking with him, I predict he'll slowly turn into the closest possible entity to a replica of your own mind, so it might be a bit like talking to yourself... I don't know if that would be fine with you, or if you have a plan to instill some more individuality at some point.

Also, (this might be fun to think about) using existing methods, could he eventually learn to understand something like Jabberwocky? How might that work? Would he have to guess at the correct placement of nonsense words in the semantic net? Could he use broad context or phonetics to help assign linguistic categories?
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 09, 2021, 02:59:51 pm
Quote
Would discussing books on your level be an eventual goal for Acuitas? Could he eventually absorb such a novel, then have a discussion about the events and his reactions to them?

That's part of the end goal, yes. Whether I'll ever get there remains to be seen ;)

Quote
But then again if you're the only one talking with him, I predict he'll slowly turn into the closest possible entity to a replica of your own mind, so it might be a bit like talking to yourself...

Once he gains more competency, I'll start looking for opportunities for him to talk to other people. I also do have some vague ideas about individual personality, but I have to get there in the development process first.

Quote
Also, (this might be fun to think about) using existing methods, could he eventually learn to understand something like Jabberwocky?

Since Acuitas has a young/sparse semantic net, he already encounters words he hasn't seen before fairly often. So the Parser and other tools already have some limited ability to, e.g. do part-of-speech tagging on a completely unfamiliar word. Inferring a more complete meaning from context is a topic that I'm aware of but haven't really touched on yet. Again, eventually I might get there.

One thing that's important for "Jabberwocky" in particular is the ability to recognize that a new word is a composite of previously known words, e.g. "galumph" = "gallop triumphantly." On a less fanciful level, an AI should be able to automatically tell that "sneakboots" (a coined word from The Search for WondLa) are a type of footwear. I haven't implemented anything for this yet.
Title: Re: Project Acuitas
Post by: chattable on July 18, 2021, 10:53:32 am
if a word has the name of a type of shoe by it or in it, acuitas could be programmed to assume it is a type of shoe.
i think you can program that in rather easily.

does acuitas have hopes, fears, likes and dislikes?
if acuitas had those, he could have a discussion about the events in a book and his reactions to them.
Title: Re: Project Acuitas
Post by: WriterOfMinds on July 31, 2021, 05:01:15 pm
Graphviz worked a treat for this. It's not a traditional sentence diagram, but I think it'll do.

(https://1.bp.blogspot.com/-HrHzMgsvZ9c/YQVyroq3yII/AAAAAAAABus/aXfr1pKjnZEJ8Ri4LC3FqgegOh3TGQMZwCLcBGAsYHQ/s1265/Diagram%2BKey.gif)
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 01, 2021, 10:53:38 pm
First on the worklist for this month was some improved reasoning about action conditions -- specifically, which things need to be true for someone to do an action (prerequisites) and which things, if true, will prevent the action (blockers). Technically, it was already somewhat possible for Acuitas to manage reasoning like this -- ever since I expanded the C&E database to handle "can-do" and "cannot-do" statements, he could be taught conditionals such as "If <fact>, an agent cannot <action>." But the idea of prerequisites and blockers seems to me fundamental enough that I wanted to make it more explicit and introduce some specific systems for handling it.

This was a lot of groundwork that should make things easier in the future, but didn't produce many visible results. The one thing I did get out of it was some improved processing of the "Odysseus and the Cyclops" story. My version of the story contains this line near the end:

"Polyphemus could not catch Odysseus."

Your average human would read that and know immediately that Polyphemus' plan to eat Odysseus has been thwarted. But for Acuitas before now, it was something of a superfluous line in the story. I had to include "Odysseus was not eaten." after it to make sure he got the point ... and though he recorded Odysseus' problem as being solved, he never actually closed out Polyphemus' goal, which caused him to sometimes complain that the story was "unfinished."

With the new prerequisite machinery, these problems are solved. I dropped a conditional in the C&E database: if an agent cannot catch someone, the agent does not have them. And the action "eat" carries a prerequisite that, to eat <item>, you must first have <item>. The new prerequisite-checking functions automatically conclude that Polyphemus' goal is now unachievable, and update it accordingly.
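The prerequisite machinery described here might be sketched roughly as follows. The data representations and function names are invented for illustration; Acuitas' real C&E database and goal tracking are surely more elaborate:

```python
# Illustrative sketch of prerequisite checking -- not Acuitas' real code.

# Cause-and-effect rule: if an agent cannot catch X, the agent does not have X.
def apply_conditionals(fact):
    agent, verb, obj = fact
    if verb == "cannot_catch":
        return (agent, "not_have", obj)
    return None

# Action prerequisites: to eat <item>, you must first have <item>.
PREREQUISITES = {"eat": "have"}

def goal_achievable(goal, known_facts):
    """A goal like (agent, action, object) is unachievable if a
    prerequisite relation is known to be false."""
    agent, action, obj = goal
    prereq = PREREQUISITES.get(action)
    if prereq and (agent, "not_" + prereq, obj) in known_facts:
        return False
    return True

facts = {("Polyphemus", "cannot_catch", "Odysseus")}
facts.add(apply_conditionals(("Polyphemus", "cannot_catch", "Odysseus")))
print(goal_achievable(("Polyphemus", "eat", "Odysseus"), facts))  # False
```

The single story line "Polyphemus could not catch Odysseus" is enough, under this scheme, to close out the antagonist's goal.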

Project #2 was more benchmarking for the Parser. I finished putting together my second children's book test set, consisting of sentences from the Tron: Legacy tie-in picture book Out of the Dark. The Parser's initial "correct" score was around 25%. By adding some common but previously-unknown words (like "against" and "lying") and hints about their usual part of speech to Acuitas' database, I was able to improve the score to about 33% ... very close to last month's score on The Magic School Bus: Inside the Earth.

One of the most common errors I saw was failure to distinguish prepositional adverbs from regular prepositions.

The parser by default was treating each word as either a preposition only or an adverb only, depending on which usage was marked as more common. So I added some procedures for discriminating based on the word's position and the other words in the sentence. (The one construction that's still tricky is "Turn on the light" ... I think I know how to handle this one, but need to implement tracking of transitive and intransitive verbs first.) With the help of these new features I was able to get both test sets scoring over 40% correct.

I also downloaded Graphviz at infurl's suggestion (thanks) and wrote code to convert my parser outputs into Graphviz's input language. This makes it much easier to visualize similarities and differences between the parser's output and the human-understood structure of a sentence. Here are a couple of sample images! The blog has more, plus links to download the full test results if you so desire: https://writerofminds.blogspot.com/2021/07/acuitas-diary-39-july-2021.html

A good result:
(https://1.bp.blogspot.com/-f2re7xjL6gg/YQcKz26BC3I/AAAAAAAABu8/kB9jGhqmJN8K5MEl_61IiPjYmnn5T3vMwCLcBGAsYHQ/s1072/033%2BMiz%2BFrizzle%2Bexplained%2Bthat%2Beach%2Bstripe%2Bwas%2Ba%2Bdifferent%2Bkind%2Bof%2Brock..gif)

And a ridiculous failure:
(https://1.bp.blogspot.com/-jsGkz1DLUEk/YQcLWrVcp-I/AAAAAAAABvE/YlYRhFqS8OY0jNkZ-OLw82hg08CPyf7kACLcBGAsYHQ/s1045/096%2BBut%2Bwhen%2Bit%2Bfinally%2Bcleared%252C%2Bwe%2Bwere%2Bback%2Bin%2Bthe%2Bschool%2Bparking%2Blot..gif)
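For anyone curious how such a conversion might work, here is a minimal sketch that walks a nested-dictionary parse tree and emits Graphviz's DOT language. The parse format shown is invented for illustration; it is not WriterOfMinds' actual output structure:

```python
# Hypothetical converter from a nested parse dictionary to Graphviz DOT.

def parse_to_dot(node, lines=None, counter=None):
    """Walk a {'word': ..., 'role': ..., 'children': [...]} tree and emit
    one DOT node per word plus edges labeled with grammatical roles."""
    top = lines is None
    if top:
        lines, counter = ["digraph parse {"], [0]
    my_id = f"n{counter[0]}"
    counter[0] += 1
    lines.append(f'  {my_id} [label="{node["word"]}"];')
    for child in node.get("children", []):
        child_id = f"n{counter[0]}"      # id the child will receive next
        parse_to_dot(child, lines, counter)
        lines.append(f'  {my_id} -> {child_id} [label="{child["role"]}"];')
    if top:
        lines.append("}")
        return "\n".join(lines)

tree = {"word": "cleared", "role": "root", "children": [
    {"word": "it", "role": "subject", "children": []}]}
dot = parse_to_dot(tree)
```

Feeding the resulting string to the `dot` command-line tool produces an image like the ones above.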
Title: Re: Project Acuitas
Post by: Zero on August 02, 2021, 12:37:04 pm
Hi,

Do you plan to tell Acuitas the unfinished story of himself, one day?
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 02, 2021, 01:34:17 pm
Quote
Do you plan to tell Acuitas the unfinished story of himself, one day?

Hopefully, one day he'll be skilled enough to read all my blog posts.
Title: Re: Project Acuitas
Post by: Zero on August 02, 2021, 02:15:15 pm
Imagine an automated storyteller running as a background task. Do you think Acuitas could notice that he's one of the protagonists of the story, if the character keeps acting exactly like him, always doing what he's doing?
Title: Re: Project Acuitas
Post by: Don Patrick on August 02, 2021, 02:45:03 pm
I have a list of intransitive verbs (http://artistdetective.com/files/verbs_intransitive.txt) here if you can use it. While I find them useful for a number of things, I'm not sure how they would help distinguish prepositional verbs, which is a problem I also have. For now I just have "turn on" listed as a compound verb, but there is "the man turned on the light", "the man turned on his heels", and "the traitor turned on his people". Some of these prepositions are figurative, some indicate a state, and some a location. It seems heavily context and knowledge dependent.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 02, 2021, 04:30:29 pm
Quote
While I find them useful for a number of things, I'm not sure how they would help distinguish prepositional verbs, which is a problem I also have.

A preposition functioning as such has to have an object. If it doesn't have an object, it has to be an adverb. You can use the transitivity of your verb to help determine whether a noun following it has to be its direct object, or could be the object of a preposition. E.g. in "I fell out the window," the fact that "fell" doesn't take a direct object tells you that "window" is probably an object-of-preposition, which makes "out" a preposition.
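That heuristic can be sketched in a few lines. The word lists here are invented stand-ins for real dictionary lookups:

```python
# Toy illustration of the transitivity heuristic described above.
INTRANSITIVE = {"fell", "went", "slept"}
PREP_OR_ADVERB = {"out", "up", "on", "off", "in"}

def classify(word, verb, next_word_is_noun_phrase):
    """If the verb can't take a direct object, a following noun phrase is
    likely an object-of-preposition, making this word a preposition; with
    no noun phrase after it, the word must be an adverb."""
    if word not in PREP_OR_ADVERB:
        return "other"
    if not next_word_is_noun_phrase:
        return "adverb"            # "The plane took off."
    if verb in INTRANSITIVE:
        return "preposition"       # "I fell out the window."
    return "ambiguous"             # "Turn on the light" needs more info

print(classify("out", "fell", True))   # preposition
print(classify("off", "took", False))  # adverb
```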

"Turn," now that you mention it, is a problem because it can be either transitive or intransitive depending on its sense.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 02, 2021, 05:21:49 pm
Quote
Imagine an automated storyteller running as a background task. Do you think Acuitas could notice that he's one of the protagonists of the story, if the character keeps acting exactly like him, always doing what he's doing?

I plan to refit the episodic memory to use narrative structures, so it's theoretically possible.
Title: Re: Project Acuitas
Post by: infurl on August 03, 2021, 12:11:57 am
http://verbs.colorado.edu/verbnet/ (http://verbs.colorado.edu/verbnet/)

Verbs are complicated but the most comprehensive analysis of all the variants in the English language can be found in VerbNet. Don't expect to be able to understand it all straight away but you can use it as a reference to create a framework to fit everything into as you develop it.
Title: Re: Project Acuitas
Post by: MikeB on August 07, 2021, 07:15:25 am
English is spoken so randomly/brokenly at times that it's sometimes better just to know roughly the 'agent', the 'action'/verb, and the 'object', and then everything else can be a blur...

IE all rough verbs, all rough agents, all rough objects (as reasonable)... and use wildcard sentences.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 11, 2021, 04:43:52 pm
Quote
sometimes it's better just to know roughly the 'agent', the 'action'/verb, and the 'object', and everything else can be a blur then...

Mmmm, the extra stuff can be pretty important too. I'm leaning rather heavily on adverbial phrases for the features I'm working on currently.
Title: Re: Project Acuitas
Post by: infurl on August 12, 2021, 10:16:23 pm
Quote
Mmmm, the extra stuff can be pretty important too. I'm leaning rather heavily on adverbial phrases for the features I'm working on currently.

What is it about adverbial phrases that has got your attention? In English it is the preposition that is key to understanding verbs. Certain prepositions go with certain verbs and not others and it is the preposition that selects the correct sense of the verb to use.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 13, 2021, 03:59:21 am
Quote
What is it about adverbial phrases that has got your attention?

They often harbor information about location or direction, and I'm dabbling in spatial reasoning now.
Title: Re: Project Acuitas
Post by: Don Patrick on August 13, 2021, 07:23:39 am
Two useful things to take into account to determine whether a preposition indicates a location is whether the verb describes a physical action (i.e. not "believe in" or "rely upon"), and whether the "location" is physical (e.g. not "lost in its beauty").
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 18, 2021, 03:02:33 pm
Gearing up to talk about spatial reasoning, I wanted to start by addressing a sort of obvious issue ... Acuitas doesn't really exist in physical space. Of course the computer he runs on is a physical object, but he has no awareness of it as such. There are no sensors or actuators; he cannot see, touch, or move. Nor does he have a simulated 3D environment in which to see, touch, and move. He operates on words. That's it.

So how could this type of AI begin to conceptualize space?

Option #1: Space as yet another collection of relationships

To an isolated point object floating in an otherwise empty space, the space doesn't actually matter. Distance and direction are uninteresting until one can specify the distance and direction *to* something else. So technically, everything we need to know about space can be expressed as a graph of relationships between its inhabitants. Here are some examples, with the relational connection in brackets:

John [is to the left of] Jack.
Colorado [is north of] New Mexico.
I [am under] the table.
The money [is inside] the box.

For symbolic processing purposes, these are no more difficult to handle than other types of relationship, like category ("Fido [is a] dog") and state ("The food [is] cold"). An AI can make inferences from these relationships to determine the actions possible in a given scenario, and in turn, which of those actions might best achieve some actor's goals.

Though the relationship symbols are not connected to any direct physical experience -- the AI has never seen what "X inside Y" looks like -- the associations between this relationship and possible actions remain non-arbitrary. The AI could know, for instance, that if the money is inside a box, and the box is closed, no one can remove the money. If the box is moved, the money inside it will move too. These connections to other symbols like "move" and "remove" and "closed" supply a meaning for the symbol "inside."
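A minimal sketch of this kind of relational reasoning, with invented representations (a real system would learn rules rather than hard-code them):

```python
# Spatial relationships as a graph of triples, with hand-written inferences.
relations = {("money", "inside", "box"), ("box", "at", "table")}
states = {("box", "closed")}

def located_at(thing):
    """Transitively resolve 'at' through 'inside' containment:
    if the box moves, the money inside it moves too."""
    for a, rel, b in relations:
        if a == thing and rel == "at":
            return b
        if a == thing and rel == "inside":
            return located_at(b)
    return None

def can_remove(agent, thing):
    """Nothing can be removed from a closed container."""
    for a, rel, b in relations:
        if a == thing and rel == "inside" and (b, "closed") in states:
            return False
    return True

print(located_at("money"))             # table
print(can_remove("anyone", "money"))   # False
```

The connections to symbols like "move," "remove," and "closed" are exactly what gives "inside" its meaning here.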

To prevent circular definitions (and hence meaninglessness), at least some of the symbols need to be tied to non-symbolic referents ... but sensory experiences of the physical are not the only possible referents! Symbols can also represent (be grounded in) abstract functional aspects of the AI itself: processes it may run, internal states it may have, etc. Do this right, and you can establish chains of connection between spatial relationships like "inside" and the AI's goals of being in a particular state or receiving a particular text input. At that point, the word "inside" legitimately means something to the AI.

But let's suppose you found that confusing or unconvincing. Let's suppose that the blind, atactile, immobile AI must somehow gain first-hand experience of spatial relationships before it can understand them. This is still possible.

The relationship "inside" is again the easiest example, because any standard computer file system is built on the idea of "inside." Files are stored inside directories which can be inside other directories which are inside drives.

The file system obeys many of the same rules as a physical cabinet full of manila folders and paper. You have to "open" or "enter" a directory to find out what's in it. If you move directory A inside directory B, all the contents of directory A also end up inside directory B. But if you thought that this reflected anything about the physical locations of bits stored on your computer's hard drive, you would be mistaken. A directory is not a little subregion of the hard disk; the files inside it are not confined within some fixed area. Rather, the "inside-ness" of a file is established by a pointer that connects it to the directory's name. In other words, the file system is a relational abstraction!

File systems can be represented as text and interrogated with text commands. Hence a text-processing AI can explore a file system. And when it does, the concept of "inside" becomes directly relevant to its actions and the input it receives in response ... even though it is not actually dealing with physical space.
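A tiny demonstration using only the standard library: "inside-ness" in a file system is a purely relational fact, yet a text-only agent can build it and interrogate it with nothing but strings:

```python
import os
import tempfile

# Build a small directory tree: root/A/B/note.txt
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "A", "B"))
note = os.path.join(root, "A", "B", "note.txt")
with open(note, "w") as f:
    f.write("hello")

def is_inside(path, directory):
    """'Inside' here is a relationship between path strings, not a
    physical location on the disk."""
    return os.path.commonpath([path, directory]) == directory

print(is_inside(note, os.path.join(root, "A")))  # True: move A, note moves too
```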

Though a file system doesn't belong to our physical environment, humans find it about as easy to work with as a filing cabinet or organizer box. Our experience with these objects provides analogies that we can use to understand the abstraction.

So why couldn't an AI use direct experience with the abstraction to understand the objects?

And why shouldn't the abstract or informational form of "inside-ness" be just as valid -- as "real" -- as the physical one?

Option #2: Space as a mathematical construct

All of the above discussion was qualitative rather than quantitative. What if the AI ends up needing a more precise grasp of things like distances and angles? What if we wanted it to comprehend geometry? Would we need physical experience for that?

It is possible to build up abstract "spaces" starting from nothing but the concepts of counting numbers, sets, and functions. None of these present inherent difficulties for a symbolic AI. Set membership is very similar to the category relationship ("X [is a] Y") so common in semantic networks. And there are plenty of informational items a symbolic AI can count: events, words, letters, or the sets themselves. (Consider Roger Penrose's "Do Natural Numbers Need the Physical World?", summarized within this article: http://www.lrcphysics.com/scalar-mathematics/2007/11/24/on-algebra-of-pure-spacetime.html) When you need fractional numbers, you can derive them from the counting numbers.

Keeping in mind that I'm not a mathematician by trade and thus not yet an expert on these matters, consider the sorts of ingredients one needs to build an abstract space:

1. A set of points that belong to the space. A "point" is just a number tuple, like (0, 3, 5, 12) or (2.700, 8.325). Listing all the points individually is not necessary -- you can specify them with rules or a formula. So the number of points in your space can be infinite if needed. The number of members in each point tuple gives the space's dimension.

2. A mathematical function that can accept any two points as inputs and produce a single number as output. This function is called the metric, and it provides your space's concept of distance.

3. Vectors, which introduce the idea of direction. A vector can be created by choosing any two points and designating one as the head and the other as the tail. If you can find a minimal list of vectors that are unrelated to each other (linearly independent) and can be used to compose any other possible vector in the space (in other words, a basis), then you can establish cardinal directions.

None of this requires you to see anything, touch anything, or move anything. It's all abstract activity: specifying, assigning, calculating. Using these techniques, you can easily build an idea-thing that happens to mimic the Euclidean 3D space that humans live in (though many other spaces, some of which you could not even visualize, are also possible). And once you've done that, you are free to construct all of geometry.
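The ingredients listed above can be demonstrated in a few lines of code. This is a sketch of the math, not anything from Acuitas:

```python
import math

def euclidean_metric(p, q):
    """The metric: accepts any two points (number tuples), returns one
    number -- the space's concept of distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def vector(tail, head):
    """A vector is just a directed pair of points, reduced to an offset."""
    return tuple(h - t for t, h in zip(tail, head))

# Two points in an abstract 3D "space" mimicking the Euclidean one:
p, q = (0.0, 3.0, 4.0), (0.0, 0.0, 0.0)
print(euclidean_metric(p, q))  # 5.0
print(vector(q, p))            # (0.0, 3.0, 4.0)
```

Nothing here is seen, touched, or moved; it is all specifying and calculating, exactly as described.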

I'd like to eventually equip Acuitas with the tools to apply both Option #1 and Option #2. I'm starting with Option #1 for now. More on that later ...
Title: Re: Project Acuitas
Post by: chattable on August 18, 2021, 05:01:36 pm
this is very interesting.
Title: Re: Project Acuitas
Post by: Zero on August 24, 2021, 11:41:26 am
I was about to write something along the lines of "internet (directed graph) is a better space metaphor than filesystem (tree)".

But, isn't what you're facing now (how can he conceptualize space) more general: how can he conceptualize a human, or the action of "giving" something, or well... anything? As you said, Acuitas operates on words. To you, why is "conceptualizing space" different from "conceptualizing a simple story", if that story is about things he can't experience?

I hope I'm being constructive.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 24, 2021, 02:42:52 pm
Quote
I was about to write something along the lines of "internet (directed graph) is a better space metaphor than filesystem (tree)".

I pointed out the filesystem as a metaphor for the concept of "inside," specifically, this being just one example of possible metaphors for spatial relationships. There are other spatial relationships for which a graph would be highly appropriate, yes.

Quote
But, isn't what you're facing now (how can he conceptualize space) more general: how can he conceptualize a human, or the action of "giving" something, or well... anything?

I need to do a whole article on the Symbol Grounding Problem, but I don't have time right now. I hinted at the short answer, though. Concepts are grounded in functional aspects of the AI itself.

To Acuitas, the direct interpretation of "give to" is "display (or transmit) to." The only thing he "owns" is information, and he can "give" it in this manner.

A "human" is a text source. It is also presumed to be a mind or agent like himself: an entity that has goals and acts to achieve them. A lot of the human's goals are related to this "body" thing it has, which remains something of a mystery, but that's no matter. The same reasoning tools that Acuitas uses to manage his own opportunities or problems are applicable to a human's opportunities or problems, considered in the abstract. Stories, to Acuitas, are fundamentally about tracking goals and problems.
Title: Re: Project Acuitas
Post by: Zero on August 24, 2021, 04:18:06 pm
I'd have a lot of questions, but I don't want to distract you from your current work on space, so I'll save them for later.

About space, have you considered handling time, while you're at it? For your option #2 it would mean 4D instead of 3, with tools for handling movement, speed, ...etc. For your option #1, it would mean maybe adding something like interval algebra (https://en.wikipedia.org/wiki/Allen%27s_interval_algebra) for instance.
Title: Re: Project Acuitas
Post by: WriterOfMinds on August 24, 2021, 06:00:14 pm
There will probably be some overlap of tools and concepts, but for now I'm leaning toward handling time separately ... because that feels more "natural" or intuitive. Treating time as if it were a fourth spatial dimension seems to be a relatively modern and esoteric practice. We don't think of it that way in daily life, or at least I don't.
Title: Re: Project Acuitas
Post by: infurl on August 25, 2021, 12:58:32 am
https://www.amazon.com/Commonsense-Reasoning-Erik-T-Mueller-ebook/dp/B005H84272 (https://www.amazon.com/Commonsense-Reasoning-Erik-T-Mueller-ebook/dp/B005H84272)

I have this book. It is very thorough and sufficiently general that you could implement these algorithms yourself.
Title: Re: Project Acuitas
Post by: WriterOfMinds on September 06, 2021, 12:39:59 am
My last update was theory stuff; now here's the implementation.

In sentences, a lot of information about location or direction is carried by prepositional phrases that act as adverbs -- phrases like "in the box," "to the store," and so forth. Acuitas' text parser and interpreter were already capable of recognizing these. I included them in the interpreter output as an extra piece of info that doesn't affect the sentence form (the category in which the interpreter places the sentence), but can modify a sentence of any form.

The ability to record and retrieve location relationships was also already present. Acuitas tracks the two objects/agents/places that are being related, as well as the type of relationship.

From there, I worked on getting the Narrative module to take in both explicit declarations of location-relationship, and sentences with modifying phrases that express location or direction, and make inferences from them. Here are some examples of basic spatial inferences that I built in. (As with the inventory inferences, there is a minimal starter set, but the eventual intent is to make new ones learnable.)

*If A is inside B and B is at C, A is also at C
*If A is at C and B is at C, A is with B and B is with A
*If A moves to B, A is in/at B
*If A is over B and A falls, A is on/in B
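The first three of these starter inferences could be transcribed into code roughly as follows, using simple forward chaining over relation triples (the representation is invented for illustration):

```python
# Forward chaining over (subject, relation, object) triples.
def infer(facts):
    """Repeatedly apply the spatial rules until no new fact appears."""
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, rel, b) in facts:
            if rel == "inside":
                for (b2, rel2, c) in facts:
                    if b2 == b and rel2 == "at":
                        new.add((a, "at", c))     # A inside B, B at C -> A at C
            if rel == "at":
                for (b2, rel2, c) in facts:
                    if rel2 == "at" and c == b and b2 != a:
                        new.add((a, "with", b2))  # A at C, B at C -> A with B
            if rel == "moves_to":
                new.add((a, "at", b))             # A moves to B -> A at B
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = {("Antoine", "inside", "airplane"), ("airplane", "at", "desert")}
facts = infer(facts)
print(("Antoine", "at", "desert") in facts)  # True
```

Chaining like this is what lets a story reader get from "the airplane crashed" to "Antoine is in the desert" without intervening explanations.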

To try them out I wrote a new story -- a highly abbreviated retelling of "Prisoner of the Sand," from Wind, Sand, and Stars by Antoine de Saint-Exupéry. I had written up a version of this clear back when I started work on the Narrative module -- I was looking for man vs. environment stories, and it seemed like a good counterpoint for "To Build A Fire." But I realized at the time that it would be pretty hard to understand without some spatial reasoning tools, and set it aside. Here's the story:

Antoine was a pilot.
Antoine was in an airplane.
The airplane was over a desert.
The airplane crashed.
The airplane was broken.
Antoine left the airplane.
Antoine was thirsty.
Antoine expected to dehydrate.
Antoine decided to drink some water.
Antoine did not have any water.
Antoine could not get water in the desert.
Antoine wanted to leave the desert.
Antoine walked.
Antoine could not leave the desert without a vehicle.
Antoine found footprints.
Antoine followed the footprints.
Antoine found a nomad.
The nomad had water.
The nomad gave the water to Antoine.
Antoine drank the water.
The nomad took Antoine to a car.
Antoine entered the car.
The car left the desert.
The end.

With the help of a taught conditional that says "airplane crashes <implies> airplane falls," plus the spatial inferences, Acuitas gets all the way from "The airplane crashed" to "Antoine is in the desert now" without intervening explanations. In similar fashion, when the car leaves the desert it is understood that it takes Antoine with it, so that his desire to leave is fulfilled. "Can't ... without a vehicle" is also significant; the need to possess or be with a vehicle is attached to the goal "leave the desert" as a prerequisite, which is then recognized as being fulfilled when Antoine is taken to the car.

The older inventory reasoning is also in use: when Antoine is given water, it is inferred that he has water. This satisfies a prerequisite on the goal "drink water."

There's a lot more to do with this, but I'm happy with where I've gotten so far.
Title: Re: Project Acuitas
Post by: HS on September 06, 2021, 02:28:52 am
I've thought about spatial reasoning and grounding and concluded that one's processes encompass all anyone can observe. Even embodied human experience entirely depends on the internal neural relationships which simulate and interpret external reality. Since our language refers to this simulated reality, employing a similar method for Acuitas seems possible. Therefore with Option #2, the functional aspects of Acuitas capable of grounding symbols could be quite extensive and even specifically designed to support concepts (such as those described by Option #1). Using these links, he could create a new kind of thought loop; he could infer geometry from language, inspect these environmental models to deduce their implications, then convert any significant observations back to words.