Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: Zero on January 31, 2022, 09:00:18 am

Title: Belief based programming language
Post by: Zero on January 31, 2022, 09:00:18 am
I'm introducing a programming language I'm working on. It's not complete yet, but I already have a prototype that shows interesting outcomes.

It is based on beliefs. The notation goes like:
Code: text
    A =:> B | C

A is believed to provoke B in the context C

    A =/> B | C

A is believed to prevent B in the context C

It works on a symbolic description of observations (A and B are observations), in discrete time. The two rules above implicitly mean "at the next step". There's also an extended notation to express time gaps greater than 1, like this:
Code: text
    A [2]=:> B | C

A is believed to provoke B two steps later, in the context C

    A [3]=/> B | C

A is believed to prevent B three steps later in the context C

So,
Code: text
    sky rain =:> grass wet | garden
This example means "if (I can see that) it rains at step N, I believe that (I will see that) the grass will be wet at step N+1 (if my 'garden' sensor is working)".



Say you want an automatic garden controller in charge of keeping the grass alive: it waters the grass when it doesn't rain, but not when it rains, because you don't want to waste water. In a typical setup, you would create a model of the situation (the sky, the grass, the sprinklers, and how they interact), give it goals, and connect it to the sensors and actuators. With my programming language, all you do is connect it to the sensors and actuators and tell it "keep the grass wet and the water consumption low"; it figures out the rest by itself.
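As an illustration, in the same notation, the kinds of beliefs such a controller might hold could look like the rules below. (These exact rules are hypothetical; "self water" and "water wasted" are made-up observation symbols, not part of the prototype.)
Code: text
    sky rain =:> grass wet | garden
    self water =:> grass wet | garden
    self water =:> water wasted | sky rain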

What it does looks a bit like FSM induction. It logically infers what causes what, based on observations, and can then make predictions. This part already works, but I want to go further. In the end, there will be logical variables on both sides of the formula, like "if you water {something}, and if {something} is a plant, then {something} will survive". So a bit like Prolog, except it programs itself.
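To give a feel for the induction idea, here is a minimal sketch of one way it could work (this is illustrative code, not the actual prototype; the function name and data layout are assumptions): every pair of consecutive observation sets yields candidate "A =:> B" theories, and a single counter-example kills a theory for good.

```javascript
// history: an array of steps, each step an array of observation strings.
// Returns the "provoke" theories that survive every counter-example check.
function induce(history) {
    const theories = new Map(); // "A =:> B" -> still plausible?

    // 1) Generate candidates from every consecutive pair of steps.
    for (let t = 0; t + 1 < history.length; t++) {
        for (const a of history[t]) {
            for (const b of history[t + 1]) {
                const key = `${a} =:> ${b}`;
                if (!theories.has(key)) theories.set(key, true);
            }
        }
    }

    // 2) Falsify: "A =:> B" dies if A is ever observed without B next step.
    for (let t = 0; t + 1 < history.length; t++) {
        for (const a of history[t]) {
            for (const [key, alive] of theories) {
                if (!alive) continue;
                const [ka, kb] = key.split(" =:> ");
                if (ka === a && !history[t + 1].includes(kb)) {
                    theories.set(key, false);
                }
            }
        }
    }

    return [...theories].filter(([, v]) => v).map(([k]) => k);
}
```

On a tiny history like `[["sky rain"], ["sky rain", "grass wet"], ["sky blue", "grass wet"]]`, the theory `sky rain =:> grass wet` survives while `sky rain =:> sky rain` is refuted by the last step.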

The context part (the C after the vertical bar) is a recent addition, not yet implemented, whose purpose is to handle hidden states. That's what I'm about to work on right now.

After that, the prediction system (and the whole system, in fact) will be augmented to contain predictions about the behavior of the agent itself. Then the action for each step will be chosen to realize the most interesting prediction, based on an ordered list of desired outcomes. Basically, this is active inference without the Bayesian math. But I'm not there yet.
 ::)
That's it for now.
Title: Re: Belief based programming language
Post by: MagnusWootton on January 31, 2022, 09:32:28 am
That's nice, you should be able to fit that inside of Prolog. It's an expert system.

There's a tricky thing left to do, and that is you need to involve it with a computer vision system (or other sensors) to flag the situations that fire the logic, to get the result of the model.

I suggest entering the truths in for it, rather than having it work them out itself, that's a lot trickier... maybe we could talk about it? It usually isn't implemented. I suggest maybe a goalless approach of just discovering everything it can.

If you gave it 1000 different sensory detections, then it could work out the logic of how they go together itself? And it's aimless how it does it, it just works out as much as it can, sort of a curiosity-based motivation, just try everything (but it could end up a disaster, trying some things just for the hell of it.)

Especially if you gave it motors, to affect the environment, it could end up trying to burn the house down, just to see what'll happen. :)
Title: Re: Belief based programming language
Post by: Zero on January 31, 2022, 09:38:12 am
The balance between curiosity and... well, things that should not happen, is meant to be built-in. If it can reach a nice outcome, then it will go for it. That's exploitative behavior. Else, I want it to get in situations where it can learn things, which is explorative behavior. If you ask it, "please go get my keys", if it knows where they are, it will go there directly, else, it will start scanning the house to find them.

Here is an example output of the prototype; the world is like this:
Code: javascript
// A simple simulated world: cloudy skies always turn to rain, and rain
// at step N wets the grass at step N+1. `a` is the prototype agent,
// which receives one pair of observations per step.
let prev;
let grass = "grass dry";
for (let i = 0; i < 200; i++) {

    // pick a random sky, then apply the world's deterministic rules
    let w = ["sky blue", "sky cloudy", "sky rain"][Math.floor(3 * Math.random())];
    if (prev == "sky blue") grass = "grass dry";
    if (prev == "sky rain") grass = "grass wet";
    if (prev == "sky cloudy") w = "sky rain";
    prev = w;

    a.observe(w, grass);
}

Then if we do
Code: javascript
console.log(a.prettyPrintBProvokeTheories());
console.log(a.prettyPrintBPreventTheories());
console.log(a.calculatePredictionsNextStep());

We get
Code: text
[
  'sky rain =:> grass wet',
  'sky rain, grass dry =:> grass wet',
  'sky cloudy =:> sky rain',
  'sky cloudy, grass wet =:> sky rain',
  'sky cloudy [2]=:> grass wet',
  'sky blue =:> grass dry',
  'sky blue, grass wet =:> grass dry',
  'sky cloudy, grass dry =:> sky rain',
  'sky cloudy, grass dry [2]=:> grass wet'
]
[
  'sky rain =/> grass dry',
  'sky rain, grass wet =/> grass dry',
  'sky cloudy, grass wet =/> grass dry',
  'sky cloudy [2]=/> grass dry',
  'sky cloudy, grass wet [2]=/> grass dry',
  'sky blue =/> grass wet',
  'sky cloudy, grass dry =/> grass wet',
  'sky cloudy =/> sky blue',
  'sky cloudy, grass dry =/> sky blue',
  'sky cloudy, grass wet =/> sky blue',
  'sky blue, grass dry =/> grass wet'
]
[
  [ 'grass dry' ],
  [ 'grass dry', 'sky blue' ],
  [ 'grass dry', 'sky cloudy' ],
  [ 'sky rain', 'grass dry' ]
]

That's a start.

Of course, you can partially load it with initial beliefs through scripts. Actually, this is even how we tell it what we want it to do. To be precise, we don't tell it "please do this"; we tell it "you believe that you will do this". Which is why it does it: it does things because it positively believes it WILL do them. It's a notion of... optimism as motivation.
Title: Re: Belief based programming language
Post by: MagnusWootton on January 31, 2022, 09:59:37 am
It would be quite magic to get it running. The tools to get expert systems to happen are simple, but it's tricky to put them in working formation, and I wouldn't expect much help, other than a whole lot of cynical people saying it doesn't do anything interesting, which is completely false.

I'm interested in this technique, but my time is invested somewhere else at the moment. (I've got something cool going too.) You are definitely sparkin' my lightbulb though! I could speculate and suggest that it's easy to shoot yourself in the foot when building a larger logic system. I think starting simple like this is good; see how functional it is now before adding more complexity to it, it probably does a lot already.

The AI pool, you could say, is a fairly deep one (perhaps never ending...) where you can drown and not get anywhere if you dive in too deep too early, stuck at the drawing board without implementing anything. I suggest getting something working first and going from there.

Excellent ideas.   O0

I actually haven't done much logic programming, but if I did, maybe I'd put all the truths of Space Invaders in, see if I can get correct responses out of it, then see if I can launch the game and it makes the right decisions in the game. (Prolog Invaders.) I swear that would be possible. I think the tricky thing is making it specific enough; doing things too generally, it still couldn't play the game properly.

An expert system still can't actually date a girl. Even if it knows it has to "date a girl", it needs the more specific information of how to do it as well, even though you can give it trainer wheels by supposing automatic success.

One more thing... supposing automatic success actually is a good idea, because you could separate the things it does into tiers of specificity. The main expert system leaves it to another expert system to define the tasks that are given to it, and the main system supposes automatic success, even though it isn't a success unless the more specific system can pass the test.
Title: Re: Belief based programming language
Post by: infurl on January 31, 2022, 10:03:42 am
It's certainly an idea worth exploring. The Non-Axiomatic Reasoning System (NARS) was developed along these lines and there are some mature software packages available. I believe it has quite a following.

https://www.opennars.org/ (https://www.opennars.org/)
Title: Re: Belief based programming language
Post by: Zero on January 31, 2022, 10:24:36 am
Quote
I actually haven't done much logic programming, but if I did, maybe I'd put all the truths of Space Invaders in, see if I can get correct responses out of it, then see if I can launch the game and it makes the right decisions in the game. (Prolog Invaders.) I swear that would be possible. I think the tricky thing is making it specific enough; doing things too generally, it still couldn't play the game properly.

Yes, it would be possible. I studied Prolog a lot a few months ago; it's an awesome language. Like, functions getting data in and out through their parameters, it's just incredible. I considered implementing my system in Prolog, then I thought, well, yet another "Prolog thingy" that people will never use, because it would imply installing SWI-Prolog or some other interpreter, and I know that's a no-go for most people. Also, while it is cool, it is also hard to use for big projects (there are solutions, some object-oriented Prolog whose name I can't remember right now), and more importantly, slow.

So my current proto is JavaScript, because I can write JS while I'm asleep. C is too... C-ish. I know Pascal quite well, it's fast and all, but let's face it: it's Pascal. Then I'm considering Go. The language is simple and clever, and it produces small binaries. I think it's got what it takes.

Quote
An expert system still can't actually date a girl. Even if it knows it has to "date a girl", it needs the more specific information of how to do it as well, even though you can give it trainer wheels by supposing automatic success.

 ;D  All experts, in that matter, started off with a trial & error explorative behavior, I believe!

Quote
It's certainly an idea worth exploring. The Non-Axiomatic Reasoning System (NARS) was developed along these lines and there are some mature software packages available. I believe it has quite a following.

https://www.opennars.org/

Ah Infurl. You always have the right pointer to share at the right time. This helps so much. Thank you, it definitely looks like something I can learn a lot from!
Title: Re: Belief based programming language
Post by: Zero on January 31, 2022, 02:57:44 pm
I just tried to implement the notions of potential attractors and repulsors, like this:
Quote
an attractor is something that tends to happen even when nothing provokes it
an attractor always happens if nothing prevents it

a repulsor is something that tends NOT to happen even when nothing prevents it
a repulsor never happens if nothing provokes it

You know, like... "balls tend to fall to the ground". Sadly, it doesn't yield anything useful. My intention was to use it for action. I'll find another idea. I need to sit in front of my computer doing nothing for a while.


Edit:

I'm discovering that, at least in my case, a prediction is like a function closure. It's an entire world in itself that includes absolutely everything, including a new version of the self. Namely, theories are supposed to be updated if this or that happens; knowledge can be gained, or not. And all of this is necessary if you're going to have a sort of curiosity-driven agent. Clearly this rings the (hell's) bells of a complete rewrite. Fresh start. And since I know myself, I know it is time to switch to Go, because I won't do it again and again once I get it to work. Too bad I won't be able to code in Go on my phone.
Title: Re: Belief based programming language
Post by: MagnusWootton on January 31, 2022, 11:03:58 pm
Isn't an action just a "symbol" connected to output, and exactly the same as any other symbol?

Like, you have so many words/symbols in your system: some are inputs (connected to your sensors), some are intermediates (that form the temporary computation space, the chain to output), then you have the outputs (which are the actions).

And they can be as general as you want, you just might not be able to see them through without the extra information.
Title: Re: Belief based programming language
Post by: Zero on February 01, 2022, 06:19:04 am
In fact my approach is more humble than this. I'd be very happy to emulate the behavior of a cat, that would already be awesome.

I understand the system you describe perfectly (input, intermediate, output), but this is not how I see it. The way I see it, there's only input and output. Even outputs are inputs, so there's only inputs. The only difference is that outputs are generated by the agent in a special manner, of course, but basically it feels what it does, hence they are inputs too.

So, just see it as a reflex-machine. A tool that's meant to be programmed, just like Prolog. Can you imagine it?

Now, imagine a second layer that observes the first layer.
 :)
Title: Re: Belief based programming language
Post by: MagnusWootton on February 01, 2022, 06:26:25 am
It sounds like a cool way to code something. I haven't actually done it before, but the input pings off so many symbols, then sets some internal state? or something? then those go off, etcetera, then you ping off the motor neuron at the end. :) I gotta do it one day, get this logic paradigm thing happening.

if (A) then (B) if context (C): that sounds like a good way to shape it. That's the tricky thing, you're allowed any form you want to do it, which can be a little overwhelming perhaps.

Logic paradigm patterns... maybe there needs to be a book written for that, instead of object-oriented paradigm patterns.
Title: Re: Belief based programming language
Post by: WriterOfMinds on February 01, 2022, 06:49:20 am
This reminds me rather heavily of what Acuitas' "Cause and Effect Database" does. So I like it. I think of the C&ED contents as data or knowledge representations, rather than a programming language, but that's maybe just a semantic difference. (In practice, they are stored in a text file and formatted in my own corrupted version of TOML.)
Title: Re: Belief based programming language
Post by: MagnusWootton on February 01, 2022, 07:25:49 am
Yeh Im going to have a go at this one day.

Symbols work over time if you want an ordered arrangement to get an outcome over time. This works well in a hierarchy of goals and subgoals, like hierarchical temporal memory: each tier fills in the details of the tier before it, the main tier existing over very long time durations, the lesser tiers plugging the gaps with subgoals/subtasks.

But this way of looking at it makes it look like something that has all of its options open spatially at any moment, firing here and there regardless of time. It would also work well in a taxonomy of generality-specificity. Probably better.
Title: Re: Belief based programming language
Post by: Zero on February 01, 2022, 03:17:31 pm
I caught the covid, sorry I can't think right now
I'll live  O0
Title: Re: Belief based programming language
Post by: infurl on February 01, 2022, 10:38:46 pm
Make sure you get plenty of rest. Don't make the mistake of trying to push through because that increases your chances of getting long covid and not being able to fully recover. I hope you're fully vaccinated too as that significantly reduces its ability to kill you.
Title: Re: Belief based programming language
Post by: Zero on February 13, 2022, 05:56:17 pm
Still alive! But my brain feels like cookie dough.

This work relies on an assumption I'm not sure of.

It goes... in the world of computers, either things have a causal relation or they don't. There's no probability involved; the causal link is true or false. Therefore we can learn everything we need (about a deterministic finite state machine) without probabilities, only with logical statements.

Two questions I'd like to ask:
- Am I being understandable?  ???
- Am I correct when I assert that probabilities are not needed in this case?
Title: Re: Belief based programming language
Post by: WriterOfMinds on February 13, 2022, 06:41:47 pm
I think the assumption of determinism in FSMs is fine. It may not be 100% true, given the existence of (for instance) radiation-induced errors, but it's true enough that we can probably act as if it were always true.

However, I don't think that means you can dispense with probabilities - because that would require the additional assumption that your AI mind can acquire absolute knowledge of all FSMs it has to interact with. And unless the AI's environment is very simple, that may not be possible. Probability in a piece of cause-and-effect knowledge is not just a measure of how likely A is to cause B, in an objective sense. It's also a measure of how confident we are that we know A causes B. Dealing with this is a key aspect of the NARS design and is, I think, one of the things it does well.

But you're creating a virtual world here, right? If you are planning to give your AI direct access to objective facts about its environment (i.e. it can just ask about a rule of the world and simply be told the rule), then you might be able to assume absolute knowledge as well. If you are planning to do inductive learning or anything like that, then you can't.

Acuitas' C&E database does not include probabilities yet - relationships there are absolute. But I consider this a simplifying approximation and a shortcoming, and I plan to introduce probabilities eventually, in some form.
Title: Re: Belief based programming language
Post by: ivan.moony on February 13, 2022, 07:21:51 pm
It may be the case that only on/off rules hold: either they hold, or they don't. Probability may be introduced within rules that hold, once the ruleset is extended with real numbers. New rules that operate on these real numbers can then be introduced, holding facts about probabilities.

But I assume dealing with it that way may introduce a bottleneck in valuable processor time.

I imagine a set of probability rules accelerated by some low-level code for efficiency. Something like how you can program addition/subtraction/multiplication/division in a programming language using only strings, but it will be slower than using the commands provided by the processor. On the other hand, accelerated low-level commands are bounded by word size, like 64-bit numbers, while the string version would be bounded only by available memory.

In short, I believe probability can be simulated by rigid rules above the rule engine, at the consciousness level, and they would probably be more flexible; but execution would be faster if they were low-level coded beneath the consciousness level, right into the rule engine, using native programming language benefits.
Title: Re: Belief based programming language
Post by: Zero on February 13, 2022, 09:39:17 pm
I agree: if probabilities are used, they would be better and faster as "native" (built in the engine) rather than as entities in the emulation (upper) layer. Now the question is, are they needed in such a case?

The agent does not have direct access to the rules, it only makes observations: it perceives a part of the state of the world, nothing more.

The notion of confidence is exactly what I want to reduce, with the following scheme:

1) one observed occurrence is enough to say "maybe it holds"
2) no matter how many additional occurrences you observe, from a logical (absolute) perspective you can never get past "maybe it holds" (to some "I know it holds")
3) one observed counter-example is enough to say "I know it doesn't hold"

Point 2 is a consequence of point 3.
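That discrete scheme is small enough to sketch directly (illustrative code, not the prototype; the function and state names are assumptions):

```javascript
// Discrete confidence for a theory: "unobserved" -> "may hold" on the
// first supporting observation; a single counter-example makes it
// "doesn't hold", and that refutation is final (point 3). No number of
// supporting observations ever upgrades "may hold" (point 2).
function updateStatus(status, supported) {
    if (status === "doesn't hold") return status; // refutation is permanent
    if (!supported) return "doesn't hold";        // one counter-example suffices
    return "may hold";                            // never more than "maybe"
}
```

So a thousand confirmations leave a theory exactly where one confirmation put it, while one failure ends it.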

If we push the notion of confidence to its purest mathematical essence, absolute certainty is unreachable, unless the world is over and won't evolve anymore.

So, while probabilities certainly bring a rich ecosystem of valuable knowledge, can we imagine that an agent with only a discrete confidence system (unobserved / may hold / doesn't hold) would still be useful and capable of intelligence? Would it be able to learn anything it needs, in this deterministic LogicLand?

Edit:
Wait, since the world is deterministic, being more confident after 1000 occurrences than after 10 would be a fallacy, wouldn't it?
Title: Re: Belief based programming language
Post by: HS on February 14, 2022, 01:44:32 am
Quote
Wait, since the world is deterministic, being more confident after 1000 occurrences than after 10 would be a fallacy, wouldn't it?

If, like a block on an inclined slope, a system is simple enough so that the relevant initial conditions are preserved, then yes. With more complicated systems, I think a distant variable could affect the result once every 11 trials, or in an unpredictable pattern; then every occurrence should change your confidence.
Title: Re: Belief based programming language
Post by: WriterOfMinds on February 14, 2022, 02:34:45 am
HS is right ... let me try an example.

Let's say your world contains the 100% deterministic relationship "(A and not B) always causes C." And let's suppose that B co-occurs with A on a periodic cycle, 1 out of every 100 times that A occurs. After the first observation of C following A, your agent would develop a belief that "A maybe causes C." It would need (on average, depending on where in the cycle it made its first observation) 50 observations to learn that this is false and to discover the correct relationship.

To complete the rationale for more observations yielding a higher confidence, I think you need an Occam's Razor-like assumption: relationships which take many observations to discover (due to complexity, separation in time and space, etc.) are less common than simple relationships.
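The scenario can be sketched as a quick simulation (illustrative code, with assumed names): B co-occurs with A once every 100 steps, "(A and not B) causes C" holds deterministically, and a learner that believes "A causes C" keeps that belief until the first A-without-C counter-example.

```javascript
// How many observations of A does the learner need before its too-simple
// theory "A causes C" is refuted? Depends on where in B's 100-step cycle
// the observations start.
function stepsToRefute(offset) {
    for (let n = 1; ; n++) {
        const b = (offset + n) % 100 === 0; // B co-occurs with A periodically
        const c = !b;                       // world rule: (A and not B) causes C
        if (!c) return n;                   // first counter-example: A without C
    }
}
```

Averaged over all starting offsets, refutation takes about 50 observations, matching the reasoning above.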
Title: Re: Belief based programming language
Post by: MagnusWootton on February 15, 2022, 03:20:08 am
I vote deterministic.

Not the chance of something happening, who cares about that; why is it fricken happening? That's determined.
Title: Re: Belief based programming language
Post by: MikeB on February 15, 2022, 05:20:35 am
What good is an agent that only observes and guesses? Huge computing power used for an unreliable result...

The life of an insect is working with what you have for the best result for yourself, then being generous to others in the tribe, then to other species... almost no computing power used.

FSMs/solid-state machines are literally like an insect... nothing to learn, only decisions to make based on the environment. I.e., everything is already known. Not the fate, but the directions only.
Title: Re: Belief based programming language
Post by: MagnusWootton on February 15, 2022, 06:13:16 am
If only you could get the computer to program itself via brute-force validation. But it's unfortunately going to take 2^synapses trials to find the result. Then it would be more than just some statistical observation->prediction machine; it literally develops the closest matching algorithm to its environment!!!!!

But I guess doing it that way isn't quite rocket science, and people may actually show quite some distaste for it when they see what you're hiding under the carpet about the situation.   :)

Like maybe, it's not quite so genius. Since most bums on the street have probably heard about the man that walks in all directions from now and finds where the wine is.

It's a simplistic thing to know, but a lot of very important truths in life are simply knowable, such as how to make cheese: it's a bit of a conundrum working out WHY it happens, but you can fluke doing it with vinegar and milk together. Another simple one is amalgamating chalk powder into solid chalk: if you put it in water, vibrate it, heat it, put it in an enclosed vessel with a small leak, apply pressure and make it acidic, it will become like glass after a few days. It's amazing, but there's nothing really that challenging about it, especially mentally; actually going through the practical steps poses some engineering riddles/hurdles to overcome.

But the strange thing about A.I. is that it's not even really hard to know how/why it works (brute-force validation is a fairly simple topic anyone can understand); it's just that we can't do it without a quantum computer. So that's the less exciting thing about something that would grant someone greedy power beyond, so it's best if we didn't have it anyway.

These kinds of things are so banal you feel silly for even appreciating them at all, and you may as well just be a brickie's labourer.

"Brute force" is a good way to explain something that doesn't have finesse, like maybe mental arithmetic. And the concept is simple.

Title: Re: Belief based programming language
Post by: Zero on February 15, 2022, 09:38:55 am
I want to thank you for your help. I still cannot explain why, but I know that it is possible to build a Prolog-like tool that has full access to all three rungs of Judea Pearl's ladder of causation. I'll do my best to explore this, and report what I find!