Poll

What Grammar Thing should I work on next?

Gerunds & Participles
1 (20%)
Indirect Objects
0 (0%)
Adjective Clauses
1 (20%)
The many uses of "that"
3 (60%)
A
0 (0%)
B
0 (0%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

  • 302 Replies
  • 763309 Views
WriterOfMinds

  • Trusty Member
  • Replicant
  • 620
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #225 on: March 26, 2022, 05:47:08 pm »
@Magnus: The text parser is all my own work, yes. Thank you!

@LOCKSUIT: In addition to word frequency, aren't you going to need some kind of sentiment determination? If people rarely talk about a thing, then yes, they're probably indifferent to it. But if they talk about it a lot, they may either love it or hate it. For example, due to my having a bout of peripheral neuropathy last year, my use of the word "neuropathy" has increased considerably. That doesn't mean I want it to happen again.

In the last paragraph I think you're talking about subgoal priority adjustment based on urgency. That is a good and needed feature.

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Project Acuitas
« Reply #226 on: March 27, 2022, 03:13:06 am »
> But if they talk about it a lot, they may either love it or hate it.

Yes, it's supposed to work that way: we talk a lot about both good and bad things. One might think bad things should get a lower probability of being said, and that's true in a sense: bad things (bad taste/ pain/ a ruined car) can't be allowed the same effect of being spoken about a lot, since that also leads to them being shared with others, being researched, and being brought to life (aside from temporary or small experiments that DO peek at them). The reason we seem to talk about bad things (nearly all of them) anyway is that the thoughts are actually "darn that ageing"/ "how can I stop ageing"/ etc. If you think about "ageing" a lot, it's probably because you made a To-Do task for it (the way you store daily tasks in memory as single words on a busy day to help you remember them: car, hire, brush, suit, baby, shop, makebed, xray) and have a secondary goal that attaches a "how will I solve this" to the belief X, e.g. "ageing".


> due to my having a bout of peripheral neuropathy last year, my use of the word "neuropathy" has increased considerably. That doesn't mean I want it to happen again.

Like GPT-3, your word/token frequency over your whole life is basically stored in the network's connection strengths (same for us). So unless you truly go on saying this new neuro word for years, or say it half the day for six months, it won't be said that much. But actually that's not why you start saying the word more than you used to. The initiator, if it isn't reading books upon books on the internet and learning which words are common and which are not, is some reward of some strength X, and this gets handed over to the word "neuropathy" if it relates by some amount to food or whatever goal you currently hold. Even school grades work, as an obvious goal. For now this is mostly permanent, unless the course ends or you solve the goal and see it met. Something like that...


> In addition to word frequency, aren't you going to need some kind of sentiment determination?

Marking root goals at birth is easy, aside from the goals already present in 40 GB of text (word frequency means word X is a common goal; e.g. "food" is likely a common word). Learning more, or figuring out what someone's phrase means as a goal, is done just by matching. The innate goals one is born with are already clear.
Emergent          https://openai.com/blog/

WriterOfMinds

Re: Project Acuitas
« Reply #227 on: April 28, 2022, 08:53:38 am »
The topic of the month was more "theory of mind" material: specifically, modeling the knowledge of other minds, and considering its implications for their scope of available actions.

I focused the new features around the Narrative reasoning module (though the Executive will end up using some of the same tools to model the real minds of people Acuitas talks to, eventually). The most basic step was to add storage for facts about characters' knowledge in the Narrative tracker.

It would be impractical to explicitly list everything that every character in a story knows - both because that's a massive amount of information, and because another person's knowledge is private to them, and not all details can be inferred. So the model is intended to be sparse. Characters are presumed to know all the background knowledge in Acuitas' own semantic database, and to know what they need to know to accomplish their goals. Facts are only listed as known or not known if 1) the story states outright that a character knows or doesn't know something, or 2) presence or absence of knowledge can be inferred from some event in the story. For example, if a character finds an object, this implies that the character now knows the object's location.
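
In rough Python (simplified, with invented names - not the actual code), the sparse-model idea looks something like this: facts absent from a character's model fall back to shared background knowledge, and only explicit statements or inferred events create entries.

```python
# Simplified sketch of a sparse per-character knowledge model.
# Names and structures here are illustrative only.

class KnowledgeModel:
    def __init__(self):
        self.entries = {}          # fact tuple -> True (known) / False (not known)

    def mark(self, fact, known):
        self.entries[fact] = known

    def knows(self, fact, background):
        if fact in self.entries:   # explicitly stated or inferred
            return self.entries[fact]
        return fact in background  # sparse default: shared background knowledge

def on_find(model, obj, location):
    # Event-based inference: finding an object implies knowing where it is.
    model.mark((obj, "is_at", location), True)

background = {("sky", "is", "blue")}   # stand-in for the semantic database
graham = KnowledgeModel()
graham.mark(("chest", "is_at", "Land of the Clouds"), False)  # stated lack of knowledge
on_find(graham, "chest", "Land of the Clouds")                # a later event overrides it
```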

I also had to extend the action prerequisites system a bit, so that it could handle nested relationships. Previously, I could teach Acuitas something like this:

To eat a food, an agent must have the food.
To run, an agent must be alive.

And now I can set up prerequisites like this:

To get an object, an agent must be where the object is.

"Where the object is" is recognized as a relational fact (<object> is_at <location-wildcard>). The Narrative engine further recognizes that if any relational facts that are components of a goal's prerequisite are not known, the prerequisite is blocked ... the character cannot directly arrange to fulfill it. This paves the way for knowledge-gathering to become a subgoal in its own right.

Putting all this together, we can craft a story about someone looking for an object in an unknown location. My test story is based on King's Quest I, with some liberties (in the game, I don't think you can actually ask the bridge troll where the chest is).

Here's a breakdown of the story with significant tracking that happens at each step. There is a LOT more that needs to happen here. For example, seeking should be understood as an attempt at a solution to the lack of knowledge, repeated failures to find the chest should raise the Suspense variable, the troll's lie should generate a possible false belief in the knowledge model, etc. But it is, as usual, Good Enough for Now.

0:"Graham was a knight."
   A knight is recognized as a type of agent; Graham is tracked as a character in the story.
1:"Graham served a king."
   The king is now tracked as a character also.
2:"The king wanted the Chest of Gold."
   This line sets up a character goal for the king: he wants to have the chest, which is now tracked as an object.
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
   Processing of the first clause enters getting the chest as a goal for Graham. Processing of the second clause updates his knowledge model with his lack of knowledge of the chest's location, and notes that the goal just created is now "thwarted."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
   Graham's new location should be inferred when he moves, but these sentences don't do too much else for now.
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
   Awkward wording because the Parser doesn't do indirect objects yet!
10:"The troll told to Graham that the chest was at the gingerbread house."
   My Twitter followers (and you here on the forum) didn't vote for me to work on IOs next, so we'll be stuck with this for a while.
11:"Graham went to the gingerbread house, but Graham did not find the chest."
12:"A witch was at the gingerbread house."
   Another agent! What's she gonna do?
13:"The witch wanted to eat Graham."
   This gets registered as a *bad* character goal - see previous story about Odysseus and the Cyclops.
14:"Graham ran and the witch could not catch Graham."
   Failure of the bad goal is inferred. Yay.
15:"Finally Graham went to the Land of the Clouds."
16:"In the Land of the Clouds, Graham found the chest."
   Graham knows where the chest is! The knowledge model gets updated accordingly. We can also unblock that goal now.
17:"Graham got the chest and gave the chest to the king."
   And both the story's positive character goals are solved in one fell swoop.
18:"The end."

Next month I plan to keep extending this. Information transfer needs to be modeled, misinformation needs to be understood, and all this needs to start getting applied in the Executive.

WriterOfMinds

Re: Project Acuitas
« Reply #228 on: May 25, 2022, 05:02:35 am »
As I hinted last month, the goal this time was to keep expanding on knowledge modeling. There were still a lot of things I needed to do with the King's Quest I derivative story, and many different directions I could branch out in. I ended up picking three of them.

The first one was "use a character's knowledge model when making predictions of their behavior." One long-standing feature of the Narrative module has Acuitas run his own problem-solving search whenever a new Problem is logged for a character, to see if he can guess what they're going to do about it. The search utilizes not only "common knowledge" facts in Acuitas' own databases, but also facts from the story domain that have been collected on the Narrative scratchboard. Using the knowledge models was a relatively simple extension of this feature: when running problem-solving for a specific character, input the general facts from the story domain AND the things this character knows - or thinks they know (more on that later).
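
Sketched out (simplified, invented names), the fact-selection step is roughly:

```python
# The problem-solver run for a given character sees the story's general
# facts plus whatever that character believes - not the omniscient set.

def facts_for_character(scratchboard, knowledge_models, character):
    visible = set(scratchboard)                    # general story-domain facts
    for fact, believed in knowledge_models.get(character, {}).items():
        if believed:                               # include believed propositions
            visible.add(fact)
    # (The reverse filter - removing scratchboard facts the character
    # explicitly does NOT know - is the half I haven't implemented yet.)
    return visible
```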

But before I could do that, I quickly found out that I needed a little more work on Problems. I needed any thwarted prerequisite on a Subgoal to become a new Problem: both for a more thorough understanding of the story, and to get the solution prediction to run. The King's Quest story was almost too complicated to work with, so to make sure I got this right, I invented the simplest bodily-needs story I could think of: an animal has to go to the water hole. And I made multiple versions of it, to gradually work up to where I was going.

(Long-winded notes about this story and all its variations are on the blog: https://writerofminds.blogspot.com/2022/05/acuitas-diary-49-may-2022.html)

0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"Altan decided to get water."
5:"Altan knew that there was water in the valley."
6:"Altan went to the valley."
7:"Altan found water in the valley."
8:"Altan got water."
9:"Altan drank the water."
10:"The end."

The point here is to replace a blunt statement of fact with a statement about Altan's *knowledge* of that fact, entering it in his knowledge model rather than the main scratchboard. Then derive the same results: 1) Altan has a location problem and 2) he will probably solve it by going to the valley.

Lack of knowledge is important here too; if we are explicitly told that there is water in the valley but Altan *doesn't* know this, then the scratchboard fact "water is in the valley" should be canceled out and unavailable when we're figuring out what Altan might do. I didn't even implement this half of it. It should be easy enough to add later - there was just too much to do, and I forgot.

The second thing was to bring in the idea of knowledge uncertainty, and the possibility of being mistaken. So I converted the "facts" in the knowledge model into more generic propositions, with two new properties attached: 1) is it true (by comparison with facts stated by the story's omniscient narrator), and 2) does the agent believe it? For now, these have ternary values ("yes," "no," and "unknown").

Truth is determined by checking the belief against the facts on the Narrative scratchboard, as noted. Belief level can be updated by including "<agent> believed that <proposition>" or "<agent> didn't believe that <proposition>" statements in the story. Belief can also be modified by perception, so sentences such as "<agent> saw that <proposition>" or "<agent> observed that <proposition>" will set belief in <proposition> to a yes, or belief in its inverse to a no.
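
As a toy version (the class and function names are mine, for illustration), a proposition with ternary truth and belief values, plus the perception rule, looks like:

```python
# Ternary values: "yes" / "no" / "unknown".

class Proposition:
    def __init__(self, fact):
        self.fact = fact
        self.true = "unknown"     # checked against the narrator's facts
        self.belief = "unknown"   # does this agent believe it?

def check_truth(prop, scratchboard):
    # Truth comes from the omniscient scratchboard; facts neither
    # confirmed nor contradicted there stay "unknown".
    if prop.fact in scratchboard:
        prop.true = "yes"

def on_perception(prop, saw_it_hold):
    # "<agent> saw that <proposition>" -> belief yes;
    # seeing the inverse sets belief in the proposition to no.
    prop.belief = "yes" if saw_it_hold else "no"
```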

For the third update, I wanted to get knowledge transfer working. So if Agent A tells Agent B a <proposition>, that propagates a belief in <proposition> into Agent B's knowledge model. Agent B's confidence level in this proposition is initially unknown to the Narrative, but again, this can be updated with "belief" statements. So now we're ready to go back to a slightly modified version of the "Search for the Chest" story:

0:"Graham was a knight."
1:"Graham served a king."
2:"The king wanted the Chest of Gold."
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
10:"The troll didn't know where the chest was."
11:"The troll told to Graham that the chest was at the gingerbread house."
12:"Graham believed that the chest was at the gingerbread house."
13:"Graham went to the gingerbread house."
14:"Graham saw that the chest was not at the gingerbread house."
15:"A witch was at the gingerbread house."
16:"The witch wanted to eat Graham."
17:"Graham ran and the witch could not catch Graham."
18:"Finally Graham went to the Land of the Clouds."
19:"In the Land of the Clouds, Graham found the chest."
20:"Graham got the chest and gave the chest to the king."
21:"The end."

The ultimate goal of all this month's upgrades was to start figuring out how lying works. If that seems like a sordid topic - well, there's a story I want to introduce that really needs it. Both villains are telling a Big Lie and that's almost the whole point. Getting back to the current story: now Line 11 actually does something. The "tell" statement means that the proposition "chest <is_at> gingerbread house" has been communicated to Graham and goes into his knowledge model. At this point, Acuitas will happily predict that Graham will try going to the gingerbread house. (Whether Graham believes the troll is unclear, but the possibility that he believes is enough to provoke this guess.) On Line 12, we learn that Graham does believe the troll and his knowledge model is updated accordingly. But on Line 14, he finds out for himself that what the troll told him was untrue, and his belief level for that statement is switched to "no."

The story never explicitly says that the troll lied, though. Can we infer that? Yes - from a combination of Lines 10 and 11. If an agent claims something while not believing it, that's a lie. Since the troll doesn't know where the chest is, he's just making stuff up here (replacing Line 10 with "The troll knew that the chest was not at the gingerbread house" also works; that's even more definitely a lie). To get the Narrative module to generate this inference, I had to put in sort of a ... complex verb definition detector. "If <agent> did <action> under <circumstances>, then <agent> <verb>." We've got enough modeling now that the Narrative module can read this story, see that the troll told somebody else a proposition that was marked as a non-belief in the troll's knowledge module, and spit out the implication "The troll lied."
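
The core of the inference, stripped to a minimal sketch (names invented): telling others a proposition that your own knowledge model marks as disbelieved, or that you admittedly don't know, counts as lying.

```python
def infer_lie(teller_beliefs, claimed):
    belief = teller_beliefs.get(claimed)
    if belief == "no":
        return "lied"           # claimed what they believed was false
    if belief == "unknown":
        return "lied"           # just making stuff up
    return "no inference"       # believed it, or no model entry at all

chest_claim = ("chest", "is_at", "gingerbread house")
troll = {chest_claim: "unknown"}   # Line 10: the troll didn't know
```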

Again, blog link for a little more: https://writerofminds.blogspot.com/2022/05/acuitas-diary-49-may-2022.html

chattable

  • Electric Dreamer
  • 127
Re: Project Acuitas
« Reply #229 on: May 25, 2022, 07:33:18 am »
what type of database does acuitas use?
is it free to use?
i am trying to make a chatbot that can roleplay eating ,drinking,emotions ,being touched and being self aware.
plus i want it to reflect on it's own thought processes without input.

WriterOfMinds

Re: Project Acuitas
« Reply #230 on: May 25, 2022, 03:36:18 pm »
> what type of database does acuitas use?
> is it free to use?

Acuitas is mostly custom closed-source code, and that includes the database. I think the Kivy GUI and the speech synthesizer are the only major third-party tools I've got in there.

chattable

Re: Project Acuitas
« Reply #231 on: May 25, 2022, 04:34:48 pm »
i found out a way to rerun a python program so that my chatbot can think about the stuff it knows and does not know,
then ask questions about what it does not know.
i just need a free database other than sqlite.
i think that is kind of limited for a chatbot that can learn.
you can only use 250 characters per line.
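
For what it's worth, SQLite itself shouldn't be the bottleneck: its default maximum string length is about 1 GB, so a 250-character ceiling more likely comes from a particular wrapper or tool than from SQLite. A quick check with Python's built-in sqlite3 module and an in-memory database:

```python
import sqlite3

# SQLite TEXT values are not capped at 250 characters.
# (SQLite's default SQLITE_MAX_LENGTH is 1,000,000,000 bytes.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memory (fact TEXT)")

long_fact = "x" * 10_000                      # far beyond 250 characters
conn.execute("INSERT INTO memory VALUES (?)", (long_fact,))

(stored,) = conn.execute("SELECT fact FROM memory").fetchone()
```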

infurl

  • Administrator
  • Eve
  • 1372
  • Humans will disappoint you.
    • Home Page
Re: Project Acuitas
« Reply #232 on: May 26, 2022, 01:52:41 am »
> i just need a free database other than sqlite.

postgresql.org is the only sensible choice for a DBMS nowadays.

It is faster, more robust, and has more features than any other, including expensive commercial systems.

Regarding your particular issue with Sqlite, PostgreSQL supports ordinary database rows up to 2GB in size although I'd recommend not going above 1GB. It also supports "binary large objects" up to 4GB in size which are represented by a token and which you can stream out of the database. This data type is ideal for storing full length movies for example.

MagnusWootton

  • Replicant
  • 650
Re: Project Acuitas
« Reply #233 on: May 26, 2022, 04:50:59 am »
> what type of database does acuitas use?
> is it free to use?

> Acuitas is mostly custom closed-source code, and that includes the database. I think the Kivy GUI and the speech synthesizer are the only major third-party tools I've got in there.

that would be the most optimal for performance.

WriterOfMinds

Re: Project Acuitas
« Reply #234 on: June 30, 2022, 03:31:33 pm »
I've finished implementing the new parser feature from the poll: it now supports multiple senses of the word "that"!

The only usage the Acuitas Parser originally supported was "subordinating conjunction," because that was the one I found most immediately useful. Setting up the simpler pronoun, adjective, and adverb uses was pretty easy. The hard part was adding those while keeping recognition of "that" as a subordinating conjunction intact. The Parser now defaults to viewing "that" as a simple pronoun, and has to see the correct special features in the sentence as a whole to treat it as a conjunction instead, and to allocate the words following it to a dependent clause.

The only usage that's not supported yet is the relative pronoun one ... because the text processing chain doesn't really handle adjective clauses yet, period. I'll hold off on this final option until I get those set up.
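
A toy version of the default-plus-override approach (the word list is invented and the real parser uses much richer features than this):

```python
# Default "that" to a pronoun; promote it to a subordinating
# conjunction only when the words after it look like a complete
# clause with its own subject and verb.

VERBS = {"was", "is", "knew", "barked", "prospers"}

def classify_that(tokens, i):
    after = tokens[i + 1:]
    for j, word in enumerate(after):
        if word in VERBS and j >= 1:   # a verb with a subject before it
            return "conjunction"       # e.g. "I knew that the dog barked."
    return "pronoun"                   # e.g. "I knew that." / "That is a dog."
```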

As part of this upgrade, I also wanted to work on supporting sentences in which "that" is implied rather than written.

I realized something was wrong.
Jesse knew Sarah was in the closet.
They will tell Jonathan his business prospers.

Can you see where the "that" would belong? Omitting it is grammatically acceptable. The tricky part is figuring out that a dependent clause is being opened without having a functional word like "that" to guide you. If you're reading strictly left to right, you probably won't know until you get to the second verb - because all of the following are correct as well, and are quite different in both structure and meaning:

I realized something.
Jesse knew Sarah.
They will tell Jonathan his business.

In short, omission of "that" forces the Parser to make do with less explicit information. Getting this part to work - and play nice with everything else the parser does - was probably the most difficult aspect of the upgrade.
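
The second-verb trigger can be sketched like this (tiny invented word list; the real logic is far more involved):

```python
# Reading left to right, the parser can't commit to an unmarked
# dependent clause until a second verb shows up.

VERBS = {"realized", "knew", "was", "prospers"}

def implied_that_split(tokens):
    """Return (main_clause, dependent_clause_or_None)."""
    verb_idxs = [i for i, w in enumerate(tokens) if w in VERBS]
    if "that" in tokens or len(verb_idxs) < 2:
        return (tokens, None)            # no unmarked clause detected
    # Second verb found: the dependent clause opened right after the
    # first verb, taking the intervening words as its subject.
    split = verb_idxs[0] + 1
    return (tokens[:split], tokens[split:])
```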

Less concise version on the blog: https://writerofminds.blogspot.com/2022/06/acuitas-diary-50-june-2022.html

MikeB

  • Autobot
  • 224
Re: Project Acuitas
« Reply #235 on: July 01, 2022, 09:33:41 am »
You call people/things "that", or as an optional word that quantifies it?
"This/that/those/these dialed the telephone" or
"This/that/those/these people dialed the telephone..."

What do you do with articles/quantifiers?

WriterOfMinds

Re: Project Acuitas
« Reply #236 on: July 01, 2022, 07:43:54 pm »
> You call people/things "that", or as an optional word that quantifies it?
> "This/that/those/these dialed the telephone" or
> "This/that/those/these people dialed the telephone..."

Both. Using this/that/those/these as the personal subject of an action verb is probably a little uncommon or awkward; I wouldn't say "That dialed the telephone." But you can still find examples, such as this one from the NAS Bible:

"Can a woman forget her nursing child and have no compassion on the son of her womb? Even these may forget, but I will not forget you."

This/that/these/those still appear as the pronoun subject of the sentence plenty often in other contexts. For instance:

This is my friend Andrew.
That is a dog.
Those belong on the shelf.
These aren't useful.

You might see this/that/these/those as the subject of the sentence when they refer to either previous sentences in the conversation, or general situations:

That is incorrect. [I.e. "what you just said is incorrect"]
That doesn't make me happy.
This is not a good time.

> What do you do with articles/quantifiers?

I'm not sure what you're asking here. I treat them as adjectives that modify the noun they're attached to. I also pick up some additional information from them to help the parser (articles are *always* adjectives, which is handy) and to tag words as count nouns rather than bulk nouns.
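
For instance, a trivial version of the count-noun tagging (an invented helper, just to show the idea; the real handling also has to skip over intervening adjectives, which this sketch doesn't):

```python
# The indefinite articles "a"/"an" tag the following word as a count noun.

def tag_count_nouns(tokens):
    tags = {}
    for i, word in enumerate(tokens):
        if word in ("a", "an") and i + 1 < len(tokens):
            tags[tokens[i + 1]] = "count"
    return tags
```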

infurl

Re: Project Acuitas
« Reply #237 on: July 02, 2022, 01:29:13 am »
> What do you do with articles/quantifiers?

> I'm not sure what you're asking here. I treat them as adjectives that modify the noun they're attached to. I also pick up some additional information from them to help the parser (articles are *always* adjectives, which is handy) and to tag words as count nouns rather than bulk nouns.

Nowadays articles and quantifiers are placed in the determinative grammatical category, whose members normally function as determiners. This is quite distinct from the adjective grammatical category, whose members normally function as modifiers.

https://en.wikipedia.org/wiki/Determiner

WriterOfMinds

Re: Project Acuitas
« Reply #238 on: July 02, 2022, 02:51:00 am »
That's new to me. I can't remember ever hearing of a determiner as a distinct part of speech before. My grammar education lumped them in with the adjectives and treated determination as a subset of modification (answering the question "which one?", "how many/much?" or "whose?" about the noun).

infurl

Re: Project Acuitas
« Reply #239 on: July 02, 2022, 03:01:39 am »
> That's new to me. I can't remember ever hearing of a determiner as a distinct part of speech before. My grammar education lumped them in with the adjectives and treated determination as a subset of modification (answering the question "which one?", "how many/much?" or "whose?" about the noun).
English grammar is a field that has been advancing rapidly in the past few decades with many innovations that make parsing and analysis more useful to attempt. Notice the modern distinction between form (grammatical category or what used to be called "part of speech") and function. For example, running is a verb but in the sentence "Running is fun." it functions as a noun phrase and in the sentence "The running water filled the bucket." it functions as an adjectival phrase. In both sentences it is still a verb, but its function differs from its usual use in a predicate.

 

