I'm introducing a programming language I'm working on. It's not complete yet, but I already have a prototype that produces interesting results.
It is based on beliefs. The notation looks like this:
A =:> B | C
A is believed to provoke B in the context C
A =/> B | C
A is believed to prevent B in the context C
It works on symbolic descriptions of observations (A and B are observations), in discrete time. The two rules above implicitly mean "at the next step". There's also an extended notation to express time gaps greater than one step:
A [2]=:> B | C
A is believed to provoke B two steps later, in the context C
A [3]=/> B | C
A is believed to prevent B three steps later, in the context C
So,
sky rain =:> grass wet | garden
This example means "if (I can see that) it rains at step N, I believe that (I will see that) the grass will be wet at step N+1 (if my 'garden' sensor is working)".
Say you want an automatic garden controller in charge of keeping the grass alive: it should water the grass when it doesn't rain, but not when it does, because you don't want to waste water. In a typical setup, you would create a model of the situation (the sky, the grass, the sprinklers, and how they interact), give it goals, and connect it to the sensors and actuators. With my programming language, all you do is connect it to the sensors and actuators and tell it "keep the grass wet and the water consumption low"; it will figure out the rest all by itself.
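To make that concrete, here is what a learned rule base for the garden might look like. This is hypothetical output, not what the prototype actually produces; the "sprinkler on", "water high" and "meter" observations are made up for the illustration, and only the rain rule comes from the example above:
sky rain =:> grass wet | garden
sprinkler on =:> grass wet | garden
sprinkler on =:> water high | meter
Given the goal ordering "grass wet first, water consumption low second", these three beliefs are all the controller needs to decide when turning the sprinkler on is worth it.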
What it does looks a bit like FSM (finite-state machine) induction. Based on its observations, it logically infers which observations cause or prevent which others. Then it can make predictions. This part already works, but I want to go further. In the end, there will be logical variables on both sides of the formula, like "if you water {something}, and if {something} is a plant, then {something} will survive". So a bit like Prolog, except it programs itself.
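The surface syntax for variables isn't settled yet, but to give a flavor, that watering rule might end up written something like this (hypothetical notation; in particular, whether the "is a plant" test belongs on the left-hand side or in the context is still open):
water {x} plant {x} =:> {x} alive | garden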
The context part (the C after the vertical bar) is a recent addition, not implemented yet; its purpose is to handle hidden states. That's what I'm working on right now.
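To illustrate what contexts are for, here is a hypothetical example (not from the prototype): a door whose lock you cannot observe directly. The same action then supports two opposite beliefs, kept apart by their contexts:
push door =:> door open | unlocked
push door =/> door open | locked
"locked" and "unlocked" are never observed directly; they are hidden states the agent has to postulate so that its predictions about pushing the door stop contradicting each other.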
After that, the prediction system (and in fact the whole system) will be augmented to include predictions about the behavior of the agent itself. The action for the current step will then be chosen so as to realize the most interesting prediction, based on an ordered list of desired outcomes. Basically, this is active inference without the Bayesian math. But I'm not there yet.
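Using the hypothetical garden rules from earlier, one selection step could go like this (again a sketch of the intended mechanism, not of working code): the ordered desires are "grass wet" first, "water consumption low" second. At a step where "sky rain" is not observed, predicting forward shows that "sprinkler on" realizes "grass wet" (at the cost of "water high"), while doing nothing realizes neither desire, so the agent turns the sprinkler on. At a step where "sky rain" is observed, "grass wet" is predicted either way, so the lower-ranked desire breaks the tie and the sprinkler stays off.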
That's it for now.