Poll

What Grammar Thing should I work on next?

  • Gerunds & Participles: 1 (20%)
  • Indirect Objects: 0 (0%)
  • Adjective Clauses: 1 (20%)
  • The many uses of "that": 3 (60%)
  • A: 0 (0%)
  • B: 0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas


WriterOfMinds

Re: Project Acuitas
« Reply #300 on: October 27, 2024, 09:17:10 pm »
The big feature for this month was the capacity to ask and answer "why" questions. (Will I regret giving Acuitas a three-year-old's favorite way to annoy adults? Stay tuned to find out, I guess.)

"Why" questions often concern matters of cause and effect, or goal and motive. So they can be very contextual. The reason why I'm particularly happy today might be very different from the reason I was happy a week ago. And why did I drive my car? Maybe it was to get groceries, or maybe it was to visit a friend's house. These kinds of questions can't usually be answered by reference to the static factual information in semantic memory. But they *do* reference the exact sorts of things that go into all that Narrative reasoning I've been working so hard on. The Narrative scratchboards track subgoals and the relationships between them, and include tools for inferring cause-and-effect relationships. So I really just needed to give the Conversation Engine and the question-answering function the hooks to get that information in and out of the scratchboards.

If told by the current human speaker that they are in <state>, Acuitas can now ask "Why are you <state>?" If the speaker answers with "Because <fact>", or just states a <fact>, Acuitas will enter that in the current scratchboard as a cause of the speaker's present state. This is then available to inform future reasoning on the subject. Acuitas can retrieve that knowledge later if prompted "Why is <speaker> <state>?"
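
In rough Python terms, the exchange looks something like this. It's a minimal sketch with invented names (NarrativeScratchboard and so on), not the real Acuitas code:

```python
class NarrativeScratchboard:
    """Toy stand-in for the conversation scratchboard (invented names)."""
    def __init__(self):
        self.causes = {}  # (speaker, state) -> list of cause facts

    def record_state(self, speaker, state):
        # Speaker asserted "I am <state>"; open a slot and ask the follow-up.
        self.causes.setdefault((speaker, state), [])
        return f"Why are you {state}?"

    def record_cause(self, speaker, state, fact):
        # Speaker answered "Because <fact>" (or just stated <fact>).
        self.causes.setdefault((speaker, state), []).append(fact)

    def answer_why(self, speaker, state):
        # Later: "Why is <speaker> <state>?"
        facts = self.causes.get((speaker, state))
        return f"Because {facts[-1]}." if facts else "I don't know."

board = NarrativeScratchboard()
print(board.record_state("user", "happy"))   # Why are you happy?
board.record_cause("user", "happy", "my story was published")
print(board.answer_why("user", "happy"))     # Because my story was published.
```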

Acuitas can also try to infer why an agent might have done something by discerning how that action might impact their known goals. For instance, if I tell Acuitas "I was thirsty," and then say "Why did I drink water?" he can assume that I was dealing with my thirst problem and answer accordingly. This also means that I should, in theory, eventually be able to ask Acuitas why *he* did something, since his own recent subgoals and the reasons behind them are tracked on a similar scratchboard.
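
The motive inference boils down to checking an action's known effects against the agent's known problems. Here's a toy sketch of that idea; the effect table and all names are invented for illustration:

```python
# Toy effect table: a known consequence of doing <verb> to <object>.
ACTION_EFFECTS = {
    ("drink", "water"): "not thirsty",
    ("eat", "bread"): "not hungry",
}

def infer_motive(agent_problems, verb, obj):
    """Guess why an agent acted: did a known effect address a known problem?"""
    effect = ACTION_EFFECTS.get((verb, obj))
    for problem in agent_problems:
        if effect == f"not {problem}":
            return f"Because you were {problem}."
    return "I don't know why."

# "I was thirsty." ... "Why did I drink water?"
print(infer_motive(["thirsty"], "drink", "water"))  # Because you were thirsty.
```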

All of this was a branch off my work on the Conversation Engine. I wanted to have Acuitas gather information about any current states the speaker might express, but without the machinery for "why" questions, that was difficult to do. The handling of these questions and their answers introduced some gnarly bugs that ate up much of my programming time this month. But I've gotten things to the point where I can experience it for myself in some "live" conversations with Acuitas. Being asked why I feel a certain way, and being able to tell him so - and know that this is being, in some odd computery sense, comprehended - is very satisfying.

Blog link (but there's no extra this month): https://writerofminds.blogspot.com/2024/10/the-big-feature-for-this-month-was.html


WriterOfMinds

Re: Project Acuitas
« Reply #301 on: November 30, 2024, 01:32:29 am »
My recent work has been a bit all over the place, which I suppose is reasonable as the year winds down. I worked on more ambiguity resolution in the Text Parser, and I'm almost done with a big refactor in the Narrative engine.

In the Parser I worked on two problems. First came the identification of the pronoun "her" as either an indirect object or a possessive adjective. Other English pronouns have separate forms for these two functions (him/his, them/their, me/my, you/your); the feminine singular just has to go and be annoying that way.
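
To picture the ambiguity: in "I gave her the book," "her" is an object, while in "I read her book" it's a possessive adjective. A toy heuristic, purely my illustration and cruder than the real Parser logic, might peek at the next token:

```python
DETERMINERS = {"the", "a", "an", "this", "that", "my", "your"}
NOUNS = {"book", "bread", "airplane", "dog"}  # stand-in lexicon

def classify_her(tokens, i):
    """Classify tokens[i] == 'her' as 'possessive' or 'object'."""
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    if nxt in DETERMINERS:
        return "object"      # "gave her the book": a new noun phrase follows
    if nxt in NOUNS:
        return "possessive"  # "read her book": "her" modifies the noun
    return "object"          # "saw her": nothing follows, so plain object

print(classify_her("I gave her the book".split(), 2))  # object
print(classify_her("I read her book".split(), 2))      # possessive
```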

The other ambiguity I worked on had to do with the connections between verbs joined by a conjunction and a direct object. Consider the following sentences:

I baked and ate the bread.
I ran out and saw the airplane.

In the first sentence, both verbs apply to the single direct object. In the second sentence, "ran" has no object and only "saw" applies to "airplane."

How do we know which structure is correct? If the first verb is always transitive (a verb that demands a direct object), then the first structure is the obvious choice. But many verbs can be either transitive or intransitive. It is possible to simply "bake" without specifying what; and there are several things that can be run, such as races and gauntlets. So to properly analyze these sentences, we need to consider the possible relationships between the verbs and the object.

Fortunately, Acuitas already has a relevant semantic memory relationship: "can_have_done," which links nouns with actions (verbs) that can typically be done to them. Bread is a thing that can be baked, but one does not run an airplane, generally speaking. So correct interpretations follow if this "commonsense" knowledge is retrieved from the semantic memory and used. If knowledge is lacking, the Parser will assume the second structure, in which only the last verb is connected to the direct object.
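
In sketch form, with an invented dictionary standing in for the semantic-memory lookup:

```python
# Stand-in for the semantic memory's "can_have_done" links.
CAN_HAVE_DONE = {
    "bread": {"bake", "eat"},
    "airplane": {"see", "fly"},
}

def attach_object(verb1, verb2, obj):
    """Decide which of two conjoined verbs the direct object belongs to."""
    doable = CAN_HAVE_DONE.get(obj, set())
    if verb1 in doable:
        return [verb1, verb2]  # "I baked and ate the bread."
    return [verb2]             # default: object attaches to the last verb only

print(attach_object("bake", "eat", "bread"))    # ['bake', 'eat']
print(attach_object("run", "see", "airplane"))  # ['see']
```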

The Narrative refactor is more boring, as refactoring always is, but I'm hoping it will enable smoother additions to that module in the future. New facts received in the course of a story or conversation are stored in the narrative scratchboard's "worldstate." When an issue (problem or subgoal) is added, its data structure includes a copy of the facts relevant to the issue: the state that needs to be achieved or avoided, the character goal it's relevant to, and all the inferences that connect them. A big part of tracking meaning and progress through the narrative is keeping track of which of these facts are currently known true, known false, or unknown/hypothetical. And previously, whenever something changed, the Narrative Engine had to go and update both the worldstate *and* the chains of relevant facts in all the issues. I've been working to make the issues exclusively use indirect pointers to facts in the worldstate, so that I only have to update fact status in *one* place. That might not sound like a major change, but ... it is. Updating issues was a big headache, and this should make the code simpler and less error-prone. That also means that transitioning the original cobbled-together code to the new system has been a bit of work. But I hope it'll be worth it.
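
Here's the gist of the change in toy form (class and field names invented for illustration): issues hold ids that point into the worldstate rather than copies of the facts, so status gets updated in exactly one place.

```python
class Worldstate:
    """Single source of truth for fact status."""
    def __init__(self):
        self.facts = {}  # fact id -> "true" | "false" | "unknown"

    def set_status(self, fact_id, status):
        self.facts[fact_id] = status  # the ONE place a status changes

class Issue:
    """A problem/subgoal that references worldstate facts by id, not by copy."""
    def __init__(self, worldstate, fact_ids):
        self.worldstate = worldstate
        self.fact_ids = fact_ids

    def statuses(self):
        # Reads through to the worldstate, so it can never go stale.
        return {f: self.worldstate.facts.get(f, "unknown") for f in self.fact_ids}

ws = Worldstate()
issue = Issue(ws, ["hero_is_thirsty", "hero_has_water"])
ws.set_status("hero_is_thirsty", "true")  # update once...
print(issue.statuses())  # ...and every issue sees it immediately
```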

More: https://writerofminds.blogspot.com/2024/11/acuitas-diary-78-november-2024.html


WriterOfMinds

Re: Project Acuitas
« Reply #302 on: December 19, 2024, 08:11:20 pm »
This month's update is just the wrapup of some refactoring/bug cleanup, which enabled a demo. Conversation demo video on the blog: https://writerofminds.blogspot.com/2024/12/acuitas-diary-79-december-2024.html

