What Grammar Thing should I work on next?

Gerunds & Participles: 1 (20%)
Indirect Objects: 0 (0%)
Adjective Clauses: 1 (20%)
The many uses of "that": 3 (60%)

Total Members Voted: 5

Voting closed: February 26, 2022, 03:17:15 am

Project Acuitas

  • 227 Replies


  • Trusty Member
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #225 on: March 26, 2022, 05:47:08 pm »
@Magnus: The text parser is all my own work, yes. Thank you!

@LOCKSUIT: In addition to word frequency, aren't you going to need some kind of sentiment determination? If people rarely talk about a thing, then yes, they're probably indifferent to it. But if they talk about it a lot, they may either love it or hate it. For example, due to my having a bout of peripheral neuropathy last year, my use of the word "neuropathy" has increased considerably. That doesn't mean I want it to happen again.
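The frequency-vs-sentiment point can be shown with a toy sketch (the function names and the tiny valence lexicon here are my own inventions for illustration, not anything from Acuitas or LOCKSUIT's project): two topics can be equally frequent while a separate valence score tells attraction apart from aversion.

```python
# Toy sentiment lexicon -- purely illustrative.
VALENCE = {"love": 1.0, "great": 0.8, "hate": -1.0, "painful": -0.9}

def topic_interest(sentences, topic):
    """Return (frequency, mean valence of lexicon words co-occurring with topic)."""
    freq = 0
    valence_sum, valence_n = 0.0, 0
    for s in sentences:
        words = s.lower().split()
        if topic in words:
            freq += 1
            for w in words:
                if w in VALENCE:
                    valence_sum += VALENCE[w]
                    valence_n += 1
    mean_valence = valence_sum / valence_n if valence_n else 0.0
    return freq, mean_valence

corpus = [
    "I love chocolate",
    "chocolate is great",
    "neuropathy is painful",
    "I hate neuropathy",
]

# Both topics appear twice; only the valence scores separate them.
print(topic_interest(corpus, "chocolate"))   # frequent, positive valence
print(topic_interest(corpus, "neuropathy"))  # equally frequent, negative valence
```

Frequency ranks how much a topic occupies the mind; valence decides whether that attention means "want more" or "want it gone."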

In the last paragraph I think you're talking about subgoal priority adjustment based on urgency. That is a good and needed feature.



  • Emerged from nothing
  • Trusty Member
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Project Acuitas
« Reply #226 on: March 27, 2022, 03:13:06 am »
> But if they talk about it a lot, they may either love it or hate it.

Yes, it's supposed to be that way; we talk a lot about both good and bad things. One might think bad things should get a lower probability of being said, and that's true in one sense: they (bad taste / pain / a ruined car) can't have the same effect of being spoken about a lot, which also leads to them being shared with others, researched, and brought to life (aside from temporary or small experiments that DO peek at them). The reason we seem to talk about bad things (nearly all of them) is that the thoughts are actually "darn that ageing" / "how can I stop ageing", etc. If you think "ageing" a lot, it's probably because you made a To-Do task (the way you store daily tasks in memory as single words on a busy day to help you remember them: car, hire, brush, suit, baby, shop, make bed, x-ray) and have a secondary goal that adds a "how will I solve this?" to the belief X, e.g. "ageing".

> due to my having a bout of peripheral neuropathy last year, my use of the word "neuropathy" has increased considerably. That doesn't mean I want it to happen again.

Like GPT-3, a word/token's frequency over its whole life is basically stored in the network's connection strengths (same for us). So unless you truly go on saying this new "neuro" word for years, or say it half the day for six months, it won't be said that much. But actually, that's not why you start saying the word more than you used to. The initiator, if it isn't reading books upon books on the internet and learning which words are common and which are not, is some reward of some strength X, and this gets handed over to the word "neuropathy" if it relates to food by some amount X, or to whatever goal you currently hold. Even school grades, as an obvious goal. This is for now mostly permanent, unless the course ends or you solve and see the goal through. Something like that...

> In addition to word frequency, aren't you going to need some kind of sentiment determination?

Marking root goals at birth is easy, aside from the goals already sitting in 40 GB of text (word frequency means word X is a common goal; e.g. "food" is likely a common word). Learning more, or figuring out what someone's phrase means as a goal, is done just by matching. The innate goals we're born with are already clear.



  • Trusty Member
    • WriterOfMinds Blog
Re: Project Acuitas
« Reply #227 on: April 28, 2022, 08:53:38 am »
The topic of the month was more "theory of mind" material: specifically, modeling the knowledge of other minds, and considering its implications for their scope of available actions.

I focused the new features around the Narrative reasoning module (though the Executive will end up using some of the same tools to model the real minds of people Acuitas talks to, eventually). The most basic step was to add storage for facts about characters' knowledge in the Narrative tracker.

It would be impractical to explicitly list everything that every character in a story knows - both because that's a massive amount of information, and because another person's knowledge is private to them, and not all details can be inferred. So the model is intended to be sparse. Characters are presumed to know all the background knowledge in Acuitas' own semantic database, and to know what they need to know to accomplish their goals. Facts are only listed as known or not known if 1) the story states outright that a character knows or doesn't know something, or 2) presence or absence of knowledge can be inferred from some event in the story. For example, if a character finds an object, this implies that the character now knows the object's location.
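A minimal sketch of such a sparse, explicit-entries-only knowledge model, assuming a class name and fact representation of my own choosing (this is not Acuitas code): a character is presumed to know anything in the shared background database, and only explicitly stated or inferred knowledge/ignorance is stored per character.

```python
KNOWN, UNKNOWN = "known", "unknown"

class CharacterModel:
    """Sparse per-character knowledge: store only explicit facts."""
    def __init__(self, background):
        self.background = background  # shared semantic database (set of fact tuples)
        self.explicit = {}            # fact -> KNOWN / UNKNOWN

    def mark(self, fact, status):
        self.explicit[fact] = status

    def knows(self, fact):
        """Explicit entries win; otherwise fall back to background knowledge."""
        if fact in self.explicit:
            return self.explicit[fact] == KNOWN
        return fact in self.background  # presumed known if it's common knowledge

background = {("sky", "is", "blue")}
graham = CharacterModel(background)

# "Graham did not know where the chest was." -> explicit ignorance.
graham.mark(("chest", "is_at", "?"), UNKNOWN)
# "Graham found the chest." -> the finding event implies location knowledge.
graham.mark(("chest", "is_at", "?"), KNOWN)
```

The key property is the default: anything never mentioned falls through to the background database, so the per-character store stays tiny.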

I also had to extend the action prerequisites system a bit, so that it could handle nested relationships. Previously, I could teach Acuitas something like this:

To eat a food, an agent must have the food.
To run, an agent must be alive.

And now I can set up prerequisites like this:

To get an object, an agent must be where the object is.

"Where the object is" is recognized as a relational fact (<object> is_at <location-wildcard>). The Narrative engine further recognizes that if any relational fact that is a component of a goal's prerequisite is not known, the prerequisite is blocked: the character cannot directly arrange to fulfill it. This paves the way for knowledge-gathering to become a subgoal in its own right.
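One way to sketch that blocking rule, using a tuple representation and function names I made up for illustration (not the Narrative engine's actual structures): a prerequisite may embed a relational fact with a wildcard slot, and the prerequisite counts as blocked while any embedded fact is unknown to the character.

```python
WILDCARD = "?"

def relational_components(prereq):
    """Yield any nested relational facts embedded in a prerequisite tuple."""
    for term in prereq:
        if isinstance(term, tuple):
            yield term

def prereq_blocked(prereq, knows):
    """Blocked if any embedded relational fact is not known.

    `knows` is a callable mapping a relational fact to True/False."""
    return any(not knows(fact) for fact in relational_components(prereq))

# "To get an object, an agent must be where the object is."
# The location slot is itself a relational fact with a wildcard.
get_chest_prereq = ("Graham", "is_at", ("chest", "is_at", WILDCARD))

knowledge = set()  # Graham knows no location facts yet
blocked = prereq_blocked(get_chest_prereq, lambda f: f in knowledge)
print(blocked)  # blocked: Graham cannot directly arrange to fulfill it

# "Graham found the chest" -> the location fact becomes known.
knowledge.add(("chest", "is_at", WILDCARD))
print(prereq_blocked(get_chest_prereq, lambda f: f in knowledge))  # unblocked
```

Passing `knows` as a callable keeps the check decoupled from however the knowledge model is actually stored.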

Putting all this together, we can craft a story about someone looking for an object in an unknown location. My test story is based on King's Quest I, with some liberties (in the game, I don't think you can actually ask the bridge troll where the chest is).

Here's a breakdown of the story with the significant tracking that happens at each step. There is a LOT more that needs to happen here eventually. For example, seeking should be understood as an attempted solution to the lack of knowledge, repeated failures to find the chest should raise the Suspense variable, the troll's lie should generate a possible false belief in the knowledge model, etc. But it is, as usual, Good Enough for Now.

0:"Graham was a knight."
   A knight is recognized as a type of agent; Graham is tracked as a character in the story.
1:"Graham served a king."
   The king is now tracked as a character also.
2:"The king wanted the Chest of Gold."
   This line sets up a character goal for the king: he wants to have the chest, which is now tracked as an object.
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
   Processing of the first clause enters getting the chest as a goal for Graham. Processing of the second clause updates his knowledge model with his lack of knowledge of the chest's location, and notes that the goal just created is now "thwarted."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
   Graham's new location should be inferred when he moves, but these sentences don't do too much else for now.
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
   Awkward wording because the Parser doesn't do indirect objects yet!
10:"The troll told to Graham that the chest was at the gingerbread house."
   My Twitter followers (and you here on the forum) didn't vote for me to work on IOs next, so we'll be stuck with this for a while.
11:"Graham went to the gingerbread house, but Graham did not find the chest."
12:"A witch was at the gingerbread house."
   Another agent! What's she gonna do?
13:"The witch wanted to eat Graham."
   This gets registered as a *bad* character goal - see previous story about Odysseus and the Cyclops.
14:"Graham ran and the witch could not catch Graham."
   Failure of the bad goal is inferred. Yay.
15:"Finally Graham went to the Land of the Clouds."
16:"In the Land of the Clouds, Graham found the chest."
   Graham knows where the chest is! The knowledge model gets updated accordingly. We can also unblock that goal now.
17:"Graham got the chest and gave the chest to the king."
   And both the story's positive character goals are solved in one fell swoop.
18:"The end."
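
The thwart/unblock lifecycle in steps 5 and 16 can be sketched as a tiny tracker (class and method names are illustrative assumptions of mine, not the real Narrative module's API):

```python
class CharacterGoal:
    def __init__(self, agent, action):
        self.agent, self.action = agent, action
        self.thwarted = False
        self.achieved = False

class NarrativeTracker:
    def __init__(self):
        self.goals = []
        self.knowledge = {}   # (character, fact) -> True/False

    def add_goal(self, agent, action):
        g = CharacterGoal(agent, action)
        self.goals.append(g)
        return g

    def note_unknown(self, agent, fact, goal):
        # Step 5: lacking the relational fact thwarts the dependent goal.
        self.knowledge[(agent, fact)] = False
        goal.thwarted = True

    def note_learned(self, agent, fact, goal):
        # Step 16: learning the fact unblocks the goal.
        self.knowledge[(agent, fact)] = True
        goal.thwarted = False

tracker = NarrativeTracker()
# 5: "Graham wanted to get the chest, but Graham did not know where the chest was."
goal = tracker.add_goal("Graham", ("get", "chest"))
tracker.note_unknown("Graham", ("chest", "is_at", "?"), goal)
# 16: "In the Land of the Clouds, Graham found the chest."
tracker.note_learned("Graham", ("chest", "is_at", "?"), goal)
# 17: "Graham got the chest..." -- the goal can now be achieved.
goal.achieved = not goal.thwarted
```

The point of the sketch is just the ordering: ignorance thwarts the goal at step 5, the finding event flips the knowledge entry at step 16, and only then can step 17 resolve the goal.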

Next month I plan to keep extending this. Information transfer needs to be modeled, misinformation needs to be understood, and all this needs to start getting applied in the Executive.

