As I hinted last month, the goal this time was to keep expanding on knowledge modeling. There were still a lot of things I needed to do with the King's Quest I derivative story, and many different directions I could branch out in. I ended up picking three of them.
The first one was "use a character's knowledge model when making predictions of their behavior." One long-standing feature of the Narrative module has Acuitas run his own problem-solving search whenever a new Problem is logged for a character, to see if he can guess what they're going to do about it. The search utilizes not only "common knowledge" facts in Acuitas' own databases, but also facts from the story domain that have been collected on the Narrative scratchboard. Using the knowledge models was a relatively simple extension of this feature: when running problem-solving for a specific character, input the general facts from the story domain AND the things this character knows - or thinks they know (more on that later).
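Concretely, you can picture the extension as merging two fact sets before the search runs. Here's a minimal sketch under that assumption - the names (facts_for_character, the triple format) are my own illustration, not Acuitas' actual internals:

```python
# Hypothetical sketch: combine general story-domain facts with a specific
# character's known facts before running that character's solver.

def facts_for_character(scratchboard_facts, knowledge_models, character):
    """Return the fact pool visible to this character's problem-solving run:
    the narrator's general facts plus whatever the character knows."""
    known = knowledge_models.get(character, set())
    return set(scratchboard_facts) | known

# The solver run for Altan sees both the narrator's facts and Altan's own.
story_facts = {("Altan", "is_on", "plateau")}
models = {"Altan": {("water", "is_in", "valley")}}
combined = facts_for_character(story_facts, models, "Altan")
```

The key design point is that the merge happens per character, so two characters in the same story can reason from different fact pools.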
But before I could do that, I quickly found out that Problems needed a little more work. I needed any thwarted prerequisite on a Subgoal to become a new Problem: both for a more thorough understanding of the story, and to get the solution prediction to run. The King's Quest story was almost too complicated to work with, so to make sure I got this right, I invented the simplest bodily-needs story I could think of: an animal has to go to the water hole. And I made multiple versions of it, to work up gradually to where I was going.
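The prerequisite-to-Problem promotion can be sketched like this - again with hypothetical names and structures of my own, assuming Subgoals carry a prerequisite list that gets checked against the currently known facts:

```python
# Hypothetical sketch: any unmet prerequisite on a Subgoal is promoted
# to a new Problem for the character, so solution prediction can run on it.

from dataclasses import dataclass, field

@dataclass
class Subgoal:
    action: tuple                       # e.g. ("Altan", "drink", "water")
    prerequisites: list = field(default_factory=list)

@dataclass
class Problem:
    character: str
    condition: tuple                    # the thwarted prerequisite

def log_problems(character, subgoal, true_facts):
    """Each prerequisite not satisfied by the known facts becomes a Problem."""
    return [Problem(character, p)
            for p in subgoal.prerequisites
            if p not in true_facts]

# "Altan decided to drink water, but Altan did not have water."
goal = Subgoal(("Altan", "drink", "water"),
               prerequisites=[("Altan", "has", "water")])
problems = log_problems("Altan", goal, true_facts=set())
```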
(Long-winded notes about this story and all its variations are on the blog:
https://writerofminds.blogspot.com/2022/05/acuitas-diary-49-may-2022.html)
0:"Altan was a gazelle."
1:"Altan was on the plateau."
2:"Altan was thirsty."
3:"Altan decided to drink water, but Altan did not have water."
4:"Altan decided to get water."
5:"Altan knew that there was water in the valley."
6:"Altan went to the valley."
7:"Altan found water in the valley."
8:"Altan got water."
9:"Altan drank the water."
10:"The end."
The point here is to replace a blunt statement of fact with a statement about Altan's *knowledge* of that fact, entering it in his knowledge model rather than the main scratchboard. Then derive the same results: 1) Altan has a location problem and 2) he will probably solve it by going to the valley.
Lack of knowledge is important here too; if we are explicitly told that there is water in the valley but Altan *doesn't* know this, then the scratchboard fact "water is in the valley" should be canceled out and unavailable when we're figuring out what Altan might do. I haven't actually implemented this half of it yet; it should be easy enough to add later - there was just too much to do this month, and I forgot.
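For what it's worth, this "canceling out" half seems like a small subtraction on the same fact pool. A sketch of how it might go, with my own hypothetical names - not something the project actually contains yet, since this part is unimplemented:

```python
# Hypothetical sketch: a scratchboard fact the character is explicitly
# said NOT to know is withheld when predicting that character's behavior.

def facts_visible_to(character, scratchboard, knows, does_not_know):
    """Merge general facts with the character's knowledge, then remove
    any facts the character explicitly doesn't know."""
    pool = set(scratchboard) | knows.get(character, set())
    return pool - does_not_know.get(character, set())

# Narrator says water is in the valley, but Altan doesn't know it:
scratchboard = {("water", "is_in", "valley")}
visible = facts_visible_to(
    "Altan", scratchboard,
    knows={},
    does_not_know={"Altan": {("water", "is_in", "valley")}},
)
```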
The second thing was to bring in the idea of knowledge uncertainty, and the possibility of being mistaken. So I converted the "facts" in the knowledge model into more generic propositions, with two new properties attached: 1) is it true (by comparison with facts stated by the story's omniscient narrator), and 2) does the agent believe it? For now, these have ternary values ("yes," "no," and "unknown").
Truth is determined by checking the belief against the facts on the Narrative scratchboard, as noted. Belief level can be updated by including "<agent> believed that <proposition>" or "<agent> didn't believe that <proposition>" statements in the story. Belief can also be modified by perception, so sentences such as "<agent> saw that <proposition>" or "<agent> observed that <proposition>" will set belief in <proposition> to a yes, or belief in its inverse to a no.
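A rough sketch of how such propositions might look, assuming a simple triple representation; the class names, the "not_" negation convention, and the observe helper are all my own illustration, not Acuitas' code:

```python
# Hypothetical sketch: propositions in a knowledge model carry ternary
# truth (checked against the narrator's facts) and ternary belief.

YES, NO, UNKNOWN = "yes", "no", "unknown"

def negate(triple):
    """Crude inverse of a fact triple, e.g. is_at -> not_is_at."""
    subj, rel, obj = triple
    return (subj, "not_" + rel, obj)

class Proposition:
    def __init__(self, triple):
        self.triple = triple     # e.g. ("chest", "is_at", "gingerbread house")
        self.belief = UNKNOWN    # does the agent believe it?

    def truth(self, scratchboard):
        """Truth is determined by the story's omniscient narrator:
        compare against the facts on the Narrative scratchboard."""
        if self.triple in scratchboard:
            return YES
        if negate(self.triple) in scratchboard:
            return NO
        return UNKNOWN

def observe(model, triple):
    """'<agent> saw that <proposition>': set belief in it to yes,
    and belief in its inverse (if modeled) to no."""
    model.setdefault(triple, Proposition(triple)).belief = YES
    inverse = negate(triple)
    if inverse in model:
        model[inverse].belief = NO
```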
For the third update, I wanted to get knowledge transfer working. So if Agent A tells Agent B a <proposition>, that propagates a belief in <proposition> into Agent B's knowledge model. Agent B's confidence level in this proposition is initially unknown to the Narrative, but again, this can be updated with "belief" statements. So now we're ready to go back to a slightly modified version of the "Search for the Chest" story:
0:"Graham was a knight."
1:"Graham served a king."
2:"The king wanted the Chest of Gold."
3:"The king brought Graham to his castle."
4:"The king told Graham to get the Chest of Gold."
5:"Graham wanted to get the chest, but Graham did not know where the chest was."
6:"Graham left the castle to seek the chest."
7:"Graham went to the lake, but Graham did not find the chest."
8:"Graham went to the dark forest, but Graham did not find the chest."
9:"Graham asked of a troll where the chest was."
10:"The troll didn't know where the chest was."
11:"The troll told to Graham that the chest was at the gingerbread house."
12:"Graham believed that the chest was at the gingerbread house."
13:"Graham went to the gingerbread house."
14:"Graham saw that the chest was not at the gingerbread house."
15:"A witch was at the gingerbread house."
16:"The witch wanted to eat Graham."
17:"Graham ran and the witch could not catch Graham."
18:"Finally Graham went to the Land of the Clouds."
19:"In the Land of the Clouds, Graham found the chest."
20:"Graham got the chest and gave the chest to the king."
21:"The end."
The ultimate goal of all this month's upgrades was to start figuring out how lying works. If that seems like a sordid topic - well, there's a story I want to introduce that really needs it. Both villains are telling a Big Lie, and that's almost the whole point. Getting back to the current story: now Line 11 actually does something. The "tell" statement means that the proposition "chest <is_at> gingerbread house" has been communicated to Graham and goes into his knowledge model. At this point, Acuitas will happily predict that Graham will try going to the gingerbread house. (Whether Graham believes the troll is unclear, but the possibility that he believes it is enough to provoke this guess.) On Line 12, we learn that Graham does believe the troll, and his knowledge model is updated accordingly. But on Line 14, he finds out for himself that what the troll told him was untrue, and his belief level for that statement is switched to "no."
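The Line 11 / 12 / 14 sequence can be replayed as a toy belief-update trace. Everything below (the tell/believe/see_not helpers, a flat dict as the knowledge model) is an illustrative simplification of my own, not the module's real machinery:

```python
# Hypothetical replay of lines 11, 12, and 14 of the story: a "tell" plants
# the proposition with unknown belief, a "believed" statement raises belief
# to yes, and a contrary observation flips it to no.

YES, NO, UNKNOWN = "yes", "no", "unknown"

def tell(model, triple):
    """Communicated proposition enters the model; belief starts unknown."""
    model.setdefault(triple, UNKNOWN)

def believe(model, triple):
    model[triple] = YES

def see_not(model, triple):
    """Perceiving the proposition to be false overrides prior belief."""
    model[triple] = NO

graham = {}
claim = ("chest", "is_at", "gingerbread house")

tell(graham, claim)              # line 11: troll's claim enters the model
after_tell = graham[claim]       # belief unknown - but enough for a guess
believe(graham, claim)           # line 12: Graham believes the troll
after_believe = graham[claim]
see_not(graham, claim)           # line 14: Graham sees it isn't so
after_seeing = graham[claim]
```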
The story never explicitly says that the troll lied, though. Can we infer that? Yes - from a combination of Lines 10 and 11. If an agent claims something while not believing it, that's a lie. Since the troll doesn't know where the chest is, he's just making stuff up here (replacing Line 10 with "The troll knew that the chest was not at the gingerbread house" also works; that's even more definitely a lie). To get the Narrative module to generate this inference, I had to put in sort of a ... complex verb definition detector. "If <agent> did <action> under <circumstances>, then <agent> <verb>." We've got enough modeling now that the Narrative module can read this story, see that the troll told somebody else a proposition that was marked as a non-belief in the troll's knowledge model, and spit out the implication "The troll lied."
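The inference itself reduces to a check on the speaker's own model at the moment of the "tell." A sketch of that rule, with hypothetical names of my own - note that both "belief marked no" and "no belief recorded at all" (the troll doesn't know) count as asserting without believing:

```python
# Hypothetical sketch of the lie inference: an agent who communicates a
# proposition they do not themselves believe has lied.

YES, NO, UNKNOWN = "yes", "no", "unknown"

def told(speaker_model, listener_model, claim):
    """'<speaker> told <listener> that <claim>': the proposition
    propagates into the listener's knowledge model."""
    listener_model.setdefault(claim, UNKNOWN)

def is_lie(speaker_model, claim):
    """Lie check at tell-time: the speaker's own belief in the claim is
    anything other than yes (believes it false, or simply doesn't know)."""
    return speaker_model.get(claim, UNKNOWN) != YES

claim = ("chest", "is_at", "gingerbread house")
troll = {}        # line 10: the troll doesn't know where the chest is
graham = {}
told(troll, graham, claim)      # line 11
troll_lied = is_lie(troll, claim)
```

The stronger variant ("the troll knew that the chest was not at the gingerbread house") corresponds to belief "no" in the troll's model, which the same check catches.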
Again, blog link for a little more:
https://writerofminds.blogspot.com/2022/05/acuitas-diary-49-may-2022.html