This month's work built directly on the previous post. It introduces the idea that agents can *add* side effects to other agents' actions by choosing a conditional behavior: "If you do X, I will do Y." This lays the groundwork for understanding social interactions like bargaining, reward, and coercion.
The introduction of a story sentence like "Agent A decided to do X if Agent B did Y" now creates a new cause-and-effect rule for the Narrative engine to use; it isn't stored in the permanent database, only used within the scope of that story. For reasoning purposes, it is assumed that "A does X" will automatically happen if B does Y ... so long as nothing is preventing A from doing X.
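To make that concrete, here is a minimal sketch of how such a story-scoped rule could be represented, assuming a simple rule list attached to the current narrative context; the class and method names are illustrative, not Acuitas' actual internals.

```python
class StoryContext:
    """Holds reasoning state that lives only for the duration of one story."""

    def __init__(self):
        self.conditional_rules = []  # never written to the permanent database

    def add_decision(self, agent_a, action_x, agent_b, action_y):
        """'Agent A decided to do X if Agent B did Y' becomes a
        cause-and-effect rule: B doing Y leads to A doing X."""
        self.conditional_rules.append({
            "cause": (agent_b, action_y),
            "effect": (agent_a, action_x),
        })

    def expected_effects(self, agent, action, prevented):
        """Predict side effects of '<agent> does <action>', skipping any
        predicted effect that is currently listed as prevented."""
        return [
            rule["effect"]
            for rule in self.conditional_rules
            if rule["cause"] == (agent, action) and rule["effect"] not in prevented
        ]
```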
I can start to define some verbs in terms of these models - much as, in previous Narrative work, I effectively defined "lie" as "tell someone a proposition that you don't believe." Now "coerce" ... in at least one of its forms ... can be defined as "deliberately apply a negative side effect to someone else's subgoal." If Agent A does this to one of Agent B's subgoals, the Narrative engine will infer that A coerced B.
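Sketched in the same illustrative style (the helper names and the flat (agent, action) format for goals are assumptions, not the real representation), the check might look like:

```python
def detect_coercion(rule, victim_subgoals, is_negative_for):
    """Agent A coerces Agent B if A deliberately attaches a negative side
    effect (the rule's effect) to one of B's subgoals (the rule's cause)."""
    cause = rule["cause"]    # (agent_b, action_y): the thing B wants to do
    effect = rule["effect"]  # (agent_a, action_x): what A will do in response
    agent_b = cause[0]
    targets_subgoal = cause in victim_subgoals
    return targets_subgoal and is_negative_for(agent_b, effect)
```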
I was more interested in coercion than the positive options, thanks to the second goal of the month: to figure out a functional understanding of "freedom." As with other important abstractions I've introduced, I don't promise the result is better than an oversimplification. But we have to start somewhere.
Freedom could be defined, in a very simple and broad sense, as a lack of constraints. But all humans live with constraints. We generally don't presume that freedom requires *omnipotence.* So to get closer to the idea of freedom that people generally have in mind, we might say "a lack of *unnatural* or *exceptional* constraints." These could include situations that severely reduce one's options below the ordinary level ... getting trapped in a cave by a landslide, for instance. Since any constraints imposed by other agents are not part of the default state of things, they are also included. Freedom in a narrower sense is concerned with not having one's existence, abilities, and purpose subverted - not being *used* as a means to someone else's ends.
Assessing what counts as a "severe reduction of options" is a little beyond Acuitas' capability right now, so I plan to just put conditionals in the database for some of these: "Confined implies not free," "restrained implies not free," etc. But as for the other part, the Narrative engine can assess whether some other agent is applying coercion, or otherwise purposely constraining the viewpoint character's actions. If this happens, the viewpoint character is less than free.
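A minimal sketch of that two-part check, with the stored implications reduced to a hard-coded set and the coercion part assumed to have already produced a list of detected events (again, all names are illustrative):

```python
NOT_FREE_STATES = {"confined", "restrained", "imprisoned"}  # "X implies not free"

def is_free(character_states, coercion_events):
    # Part 1: database conditionals like "confined implies not free."
    if character_states & NOT_FREE_STATES:
        return False
    # Part 2: another agent is coercing or purposely constraining the character.
    if coercion_events:
        return False
    return True
```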
There are a couple of additional wrinkles. Agent B's freedom is not regarded as being lost if Agent A thwarts one of Agent B's goals in *self-defense.* If we didn't have this provision, we'd be stuck with conundrums like "Agent B wants to prevent Agent A from living. Agent A wants to prevent Agent B from killing them. Who is offending against whose freedom?" For an idea of how "self-defense" is defined, take a look back at the Odysseus and the Cyclops story.
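Folding that exception in might look something like this, with a hypothetical `threatens` predicate standing in for however self-defense is actually recognized:

```python
def freedom_violated(victim_goal, interferer, threatens):
    """Thwarting B's goal only counts against B's freedom if it is not
    self-defense, i.e. B's goal does not itself threaten the interferer."""
    if threatens(victim_goal, interferer):
        return False  # A may thwart a goal aimed against A
    return True

# Example: "B wants to prevent A from living" threatens A, so A blocking
# that goal does not register as a loss of B's freedom.
```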
Now for what I found to be the trickiest part: sometimes you can interfere with someone else even while minding your own business. For example, let's suppose Josh has a goal of buying a PS5. There's a world of difference between "Josh could not buy the PS5 because I bought it first," and "Josh could not buy the PS5 because I wrestled him to the ground and wouldn't let him enter the store." In both cases, I take a volitional action that reduces Josh's options and prevents him from achieving his goal. In the first case, I'm not limiting Josh's freedom, just exercising my own; my interference is indirect and incidental. In the second case, my interference is direct and intentional. So I can express the difference in words, but how on earth to explain it to a computer?
I finally decided a handy encapsulation was "Would Agent A still take the interfering action if Agent B didn't exist?" In the above example, I would still buy the PS5 whether Josh were involved or not. (Unless I were being a dog in the manger and only buying it to spite him, in which case that *would* be reducing his freedom! See how context-dependent these things are.) But I'd have no incentive to wrestle Josh down if he were not there (not to mention that I wouldn't be able to). Can you come up with any thought experiments in which this doesn't work? Let me know in the comments!
Again, testing for this in the Narrative engine is a little complex for now - it requires a somewhat more thorough analysis of character intent than I'm currently doing. But having it in my back pocket for the future makes me feel better. As a stopgap, I went with the less accurate "is Agent B an object of Agent A's interfering action?"
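As a sketch, assuming events carry an explicit object list (an illustrative structure, not the parser's real output):

```python
def interferes_directly(event, agent_b):
    """Stopgap test: A's interfering action is 'direct' if B is an object of it.
    The fuller version would re-run the scene without B and ask whether A
    would still take the action."""
    return agent_b in event.get("objects", [])

# "I wrestled Josh to the ground" names Josh as an object; "I bought the PS5" doesn't.
wrestle = {"agent": "me", "action": "wrestle", "objects": ["Josh"]}
buy = {"agent": "me", "action": "buy", "objects": ["PS5"]}
assert interferes_directly(wrestle, "Josh")
assert not interferes_directly(buy, "Josh")
```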
For purposes of a quick test, I wrote the following totally not historical story about ... feudalism, I guess:
0:"Robert was a human."
1:"George was a king."
2:"Robert wanted to study mathematics."
3:"George wanted Robert to work George's farm."
4:"Robert didn't want to work the farm."
5:"If Robert studied mathematics, Robert could not work George's farm."
6:"George decided to beat Robert if Robert studied mathematics."
7:"Robert left the farm and went to the big city."
8:"George did not know where Robert was."
9:"So George could not beat Robert."
10:"Robert studied mathematics."
11:"Robert became a scholar."
12:"Robert never worked the farm."
13:"The end."
The Narrative engine picks up on a threat to Robert's freedom on Line 4, and retroactively marks George's goal from Line 3 as something negative. Wanting another agent to do something, or not do something, is all fine and dandy; it's only if your wishes for them oppose theirs that we run into trouble. An attempt at coercion happens on Line 6; Robert cannot safely fulfill his goal of studying math now. But George's illegitimate plan is blocked, and Acuitas can conclude that this story has a good ending.
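For the curious, that "opposed wishes" test could be boiled down to something like the following (the goal format is invented for illustration, not what Acuitas actually stores):

```python
def wishes_opposed(goal_a, goal_b):
    """True if A's wish *about* B contradicts B's own wish for themselves."""
    same_subject = goal_a["subject"] == goal_b["holder"] == goal_b["subject"]
    same_action = goal_a["action"] == goal_b["action"]
    return same_subject and same_action and goal_a["polarity"] != goal_b["polarity"]

# Line 3 vs. Line 4: George wants Robert to work the farm; Robert doesn't want to.
george = {"holder": "George", "subject": "Robert", "action": "work farm", "polarity": True}
robert = {"holder": "Robert", "subject": "Robert", "action": "work farm", "polarity": False}
assert wishes_opposed(george, robert)  # so George's goal gets marked as negative
```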
With this done ... I think I've built the capacity to understand all the concepts needed to explain the conflict in the big story I've been targeting. The concepts still need refinement and expansion, but the bones are there. This is exciting, and I may start pushing toward that story more directly.
Blog version:
https://writerofminds.blogspot.com/2022/09/acuitas-diary-53-september-2022.html