In my research I have evidence suggesting that generating = proving. Are you really sure yours can prove but not generate? How so? ...It is because normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.
Uhm.....makes no sense....can you give an example instead? Like this: The cup fell off the table, therefore water soaked into the [carpet]
The above begins with a question, "The cup fell off the table,", and answers it, adding word by word, or even just the last word/atom, [carpet]. The last word is the last atom shown, as you said.
If your algorithm adds just the last word, that is just as good for this discussion. The last word/atom is added/generated onto the sentence. But to add it (generate) requires validation/verification/truth. It's one and the same operation, 1:1.
this sentence:
(
    (
        fell (
            (what cup)
            (from table)
        )
    ) (
        soaked (
            (what water)
            (to carpet)
        )
    )
)
normalized gives something like this:
(
    [
        (
            [
                fell,
                (
                    [
                        what
                    ] cup
                ),
                from
            ] table
        ),
        soaked,
        (
            [
                what
            ] water
        ),
        to
    ] carpet
)
Things get more complicated with variables instead of constants. The question is: what can you read from this, possibly after adding more sentences? Right now we have only `A -> B`. Not much can be derived from that unless we also add something like `A`. Then we should be able to derive `B`.
[Edit]
I'm thinking of changing the algorithm so that `(A -> B) -> C` can be queried with something like `(A -> ?) -> B`.
[Edit2]
On second thought, the algorithm should already support expressions like `((A -> @X) -> C) -> @X`, so things are going better than I thought.
Look, those brackets are a programming thing. You just stitch all the lines together, and you get the unreadable result. For example:
(walking ((who I) (whereTo store)))
is written in more readable form like this:
(
    walking (
        (
            who
            I
        )
        (
            whereTo
            store
        )
    )
)
Ok?
code:
((fell ((what cup) (from table))) (soaked ((what water) (to carpet))))
normalized:
([([fell, ([what] cup), from] table), soaked, ([what] water), to] carpet)
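If it helps, here is how I'd sketch that normalization in JavaScript. This is my own reconstruction from the example above, not the actual Implika source:

```javascript
// Reconstruction of the normalization shown above (not the actual
// Implika code): a pair tree is flattened along its right spine, so
// only the last atom stays exposed outside the brackets.
// Atoms are strings; pairs are two-element arrays.
function normalize(node) {
  if (typeof node === "string") return node;          // atom
  const items = [];
  while (typeof node !== "string") {                  // walk the right spine
    const head = node[0];
    items.push(typeof head === "string" ? head : "(" + normalize(head) + ")");
    node = node[1];
  }
  return "[" + items.join(", ") + "] " + node;        // node is the last atom
}

const sentence =
  [["fell", [["what", "cup"], ["from", "table"]]],
   ["soaked", [["what", "water"], ["to", "carpet"]]]];
console.log("(" + normalize(sentence) + ")");
// prints: ([([fell, ([what] cup), from] table), soaked, ([what] water), to] carpet)
```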
I've changed the name of the library from Esperas to Implika. Esperas is now meant to be a bundle of a graphical user interface and Implika.
Implika was originally conceived as a JavaScript nest for the Logos metatheory language, but it turned out it may be a stand-alone project, replacing Logos.
I just finished a JavaScript implementation. That's about it; I hunted some bugs, but I'm sure there are more hiding within. Time will tell, I'm on it. The whole implementation took 152 lines of JavaScript code, yet it resembles a rule-based inference engine with a pattern matching and implication mechanism that aims to be Turing complete. There is no distinction between knowledge base representation and rules: each expression behaves as both a rule and data from which implicit information may be derived by other rules.
It inputs an s-expression and outputs a JSON object enriched by a ForeChain property (forward chainer). Backward chaining is supported indirectly. S-expressions are interpreted so that the left side expression implies the right side expression, and this aspect goes recursively deep, down to atoms. Note that there are no other operators, just implication in the form of pairs. The BNF of the input looks like this:
s-exp := ()
       | constant
       | @variable
       | (s-exp s-exp)
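For reference, a minimal parser for this grammar fits in a few lines. This is my own toy sketch, not the library's actual parser:

```javascript
// Minimal parser for the BNF above (my own sketch, not Implika's code).
// Constants and variables become strings; pairs become two-element
// arrays; the empty expression () becomes an empty array.
function parse(src) {
  const toks = src.match(/[()]|[^\s()]+/g) ?? [];
  let i = 0;
  function expr() {
    if (toks[i] !== "(") return toks[i++];          // constant or @variable
    i++;                                            // consume "("
    if (toks[i] === ")") { i++; return []; }        // the empty expression ()
    const a = expr(), b = expr();                   // (s-exp s-exp)
    if (toks[i++] !== ")") throw new Error("expected )");
    return [a, b];
  }
  return expr();
}

console.log(parse("((man @x) (mortal @x))"));
// a pair of pairs: [["man", "@x"], ["mortal", "@x"]]
```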
The only rule of inference is modus ponens:
(A B)    A
----------
    B
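In ground (variable-free) form, that single rule already gives a forward chainer. Here is a toy sketch of the idea, mine alone, not Implika's actual 152 lines:

```javascript
// Sketch of forward chaining by modus ponens alone (a toy, not
// Implika's implementation). Expressions are atoms (strings) or pairs
// [A, B], where [A, B] reads "A implies B".
function chain(exprs) {
  const key = e => JSON.stringify(e);
  const known = new Map(exprs.map(e => [key(e), e]));
  let grew = true;
  while (grew) {
    grew = false;
    for (const e of [...known.values()]) {
      // modus ponens: from (A B) and A, derive B
      if (Array.isArray(e) && known.has(key(e[0])) && !known.has(key(e[1]))) {
        known.set(key(e[1]), e[1]);
        grew = true;
      }
    }
  }
  return [...known.values()];
}

// "rains implies wet" together with "rains" also yields "wet":
console.log(chain([["rains", "wet"], "rains"]));
```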
Who would say that this little thingie hides a system capable of describing any kind of computation?
One may find the expert-system-like example among the provided examples interesting. A knowledge base is stated, and a query is put in the form of a conjunction. The answer appears in the ForeChain property of the output.
I still have to finish a description of an implementation of Hilbert calculus (https://en.wikipedia.org/wiki/Hilbert_system) within the system, but you can already play with it in the provided test suite. Hilbert calculus is mainly used to describe different kinds of logic, inference systems, and lambda calculus (a Turing complete representation of using functions to calculate results) in various reincarnations. Provided that Implika can describe Hilbert calculus, it can describe any kind of system.
I also see Implika as something capable of replacing the core AtomSpace (https://wiki.opencog.org/w/AtomSpace) of the OpenCog (https://en.wikipedia.org/wiki/OpenCog) project. Maybe, as a curiosity, I'll drop a post in their group once that description is fully finished, but I doubt they will be totally crazy about it. They are a bunch of PhDs, and they have invested too much time in polishing AtomSpace to let it go so easily.
Read about it here: https://github.com/e-teoria/Implika
Test it in action here: https://e-teoria.github.io/Implika/test/
What makes you so sure you can't 'know it all'?
See Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems). There are two notions when constructing theories: completeness (describing all that exists) and consistency (being noncontradictory). What Gödel proved is that we can't have both at the same time. Thus, a complete theory has to be contradictory and, as such, can't be described by classical logic within a single logical system. Implika tries to be complete while providing mechanisms to constrain parts of the whole system to be consistent, although these parts may contradict each other (being illogical as a whole). These consistent parts would be switched by implications like:
(
    Case1 -> (... Theory1 ...)
) /\ (
    Case2 -> (... Theory2 ...)
) /\ ...
Now, if we want to analyze some `(... Environment1 ...)` by `Theory1`, we write:
(Analytics -> (Case1 /\ (... Environment1 ...)))
If `Analytics` turns out to be contradictory, then we may conclude that `Theory1` does not hold for `(... Environment1 ...)`, and we can move on to check `Case2`. Moreover, these checks may be done iteratively, in a recursive loop, to report which theory actually holds for some environment, but that is already a matter of programming, and I don't want to complicate this post that much.
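The loop I have in mind could be sketched roughly like this. Everything here is hypothetical: the names and the `contradicts` predicate are stand-ins, not Implika features:

```javascript
// Hypothetical sketch of the case-switching loop described above.
// "contradicts" is a stand-in for whatever consistency check the
// host program would run; it is not part of Implika.
function findHoldingTheory(cases, environment, contradicts) {
  for (const { name, theory } of cases) {
    if (!contradicts(theory, environment)) return name; // first consistent case
  }
  return null;                                          // no theory holds
}

// Toy usage with a dummy contradiction check that rejects Theory1:
const cases = [
  { name: "Case1", theory: "Theory1" },
  { name: "Case2", theory: "Theory2" },
];
const dummyContradicts = (theory, env) => theory === "Theory1";
console.log(findHoldingTheory(cases, "Environment1", dummyContradicts)); // prints "Case2"
```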
These are definitions of logical operators that currently do not work, and that I still have to make work with Implika:
(@A -> @B) (@A @B)
(@A /\ @B) ((@A -> (@B -> @C)) -> @C)
(@A \/ @B) ((@A -> @C) -> ((@B -> @C) -> @C))
(~ @A) (@A -> @C)
(! @X . @F) (@X -> @F)
(? @X . @F) ((@X -> (@F -> @C)) -> @C)
First, logical implication is defined. Then everything else is defined from that implication and the complement `@C`, including the universal quantifier `!` and the existential quantifier `?`. I think I'll have to implant the complementation behavior in the form of one of De Morgan's laws and the double complementation law:
(A /\ B)^{c} = (A^{c} \/ B^{c})
(A^{c})^{c} = A
There is a story behind complementation in Implika. A fresh variable that was not defined before stands for *any* possible expression. Since Implika is made to be complete (see Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)), it has to be inconsistent, so *any* expression also includes a falsehood and thus evaluates to false, providing a case for complementation detection. So, stating `x -> @C` is the same as stating `x -> false`, which is actually the definition of the `not` operator in implicational logic. This behavior aligns with the principle of explosion (https://en.wikipedia.org/wiki/Principle_of_explosion), which says: from falsehood, anything follows.
There is also something interesting in the above definitions: if we take a look at the `/\` (and), `\/` (or), and `~` (not) operators, we may interpret `@C` as complementation. But we may also consider `@C` a placeholder for any other expression, and then the definitions become theorems preserving the ground properties of the `\/`, `/\`, and `~` operators.
I could probably get away without low-level complementation support, but these properties seem so natural that I decided to grapple with this dragon.
Hi :)
I'd like to check whether I understand correctly. In the following example:
(
    ((man @x) (mortal @x)) (
        (man Socrates) ()
    )
)
We first set a context ((man @x) (mortal @x)). That's our "rule". Then, our "current state of the world" is (man Socrates).
The dummy () is actually the place where we can finally use the conclusion we obtained (that Socrates is mortal), am I right?
How would I express a world where Plato is a man too? If I understand correctly, I would create a nested structure with (man Plato) and (man Socrates)? How would that look?
(
    ((man @x) (mortal @x)) (
        (man Socrates) (
            (man Plato) ()
        )
    )
)
It is a matter of transforming implicational logic into normal logic. `A -> ( B -> ( C -> D ) )` translates to `(A /\ B /\ C) -> D`, so `A`, `B`, and `C` are in the same context and may interact with each other. We may use `D` as a dummy, or as something more meaningful.
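Through a Curry–Howard lens, this translation is just currying. A throwaway JavaScript analogy (my own illustration, nothing from the Implika docs):

```javascript
// Currying analogy for A -> (B -> (C -> D)) vs (A /\ B /\ C) -> D:
// a function taking three arguments at once is interchangeable with
// a chain of one-argument functions.
const uncurried = (a, b, c) => a + b + c;  // (A /\ B /\ C) -> D
const curried = a => b => c => a + b + c;  // A -> (B -> (C -> D))

console.log(uncurried(1, 2, 3)); // prints 6
console.log(curried(1)(2)(3));   // prints 6
```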
In (a b), how would you call a and b? What are the official terms you choose?
Nothing special; it's all about implications. I'd say `a` is a cause, while `b` is a consequence; thus `a` implies `b`. But the real beauty is that we may use combinations of implications to form data structures. For example, if we have a magical operator `+`, we may write the following function definition and its two use cases:
(
    ( ( add ( left @x ) ( right @y ) ) ( @x + @y ) ) (
        ( add ( left 1 ) ( right 2 ) ) (
            ( add ( left 4 ) ( right 5 ) ) ()
        )
    )
)
The first meaningful line is the function definition. The second line results in `1 + 2`. The third results in `4 + 5`. Try to copy and paste this example into the Implika test suite (https://e-teoria.github.io/Implika/test/) and see what happens to the `ForeChain` property. Remember, Implika is a very simple variant of implicational logic, but we have just been doing math with it.
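Under the hood, something like the following match-and-substitute must be happening. This is a hypothetical sketch; the names and representation are mine, not Implika's:

```javascript
// Hypothetical sketch of the pattern matching behind the example
// above (not Implika's real code). Variables start with "@";
// expressions are strings or arrays.
function match(pat, expr, env = {}) {
  if (typeof pat === "string" && pat.startsWith("@")) {
    if (pat in env)                       // bound: must match the same value
      return JSON.stringify(env[pat]) === JSON.stringify(expr) ? env : null;
    return { ...env, [pat]: expr };       // fresh: bind it
  }
  if (typeof pat === "string" || typeof expr === "string")
    return pat === expr ? env : null;     // constants must be equal
  if (pat.length !== expr.length) return null;
  return pat.reduce((e, p, i) => e && match(p, expr[i], e), env);
}

function subst(tpl, env) {
  return typeof tpl === "string" ? (env[tpl] ?? tpl) : tpl.map(t => subst(t, env));
}

const def = ["add", ["left", "@x"], ["right", "@y"]];
const call = ["add", ["left", "1"], ["right", "2"]];
const env = match(def, call);             // binds @x to 1 and @y to 2
console.log(subst(["@x", "+", "@y"], env).join(" ")); // prints "1 + 2"
```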
Thank you for reminding me of this project; I put it aside in favor of the super-duper-logic-able-doing-parser Esperas. Nevertheless, I might finish Implika one day; I'd still like to simplify parent-child interaction between different contexts.
I think that if all logical operators could be expressed in Implika, then `( ( A -> B ) /\ B ) -> A` would simulate backward chaining without any special treatment. Try pasting the following example into the Implika test suite (https://e-teoria.github.io/Implika/test/):
(
    (
        ( @x -> @y ) (
            ( ( fore @x ) @y ) (
                ( ( back @y ) @x ) ()
            )
        )
    ) (
        ( rains -> wet ) ()
    )
)
After drawing conclusions, the expression `rains -> wet` should have, in its ForeChain property, both forward and backward chained cause and consequence.
However, extracting correct results from `fore rains` and `back wet` requires some crazy stunts with contexts, which would be more easily solved with still-unimplemented hierarchical contexts.
Yes, I thought about an uppercase first letter too, à la Prolog.
About comments: having them between double quotes is unusual, but this is how Smalltalk does it (http://rigaux.org/language-study/syntax-across-languages-per-language/Smalltalk.html), for example. It seems reasonable to me. One way or another, I think comments are very important in a language; you should choose a syntax for them.
( "one meaningful context"
    (("pattern" man <x>) ("template" mortal <x>)) (
        ("starting from" man Socrates) ()
    )
)
Another question: I would expect the state after an inference step to be represented in the same format. Currently, a ForeChain key is added, with the result as an array. This breaks the head/tail format. How do we represent the result of the inference in Implika syntax?
The reason behind my question was a hope to build a system where forward chaining can be applied repeatedly, to obtain a tree-shaped cellular automaton, so to speak.
It should already be quite possible. For example, this expression never terminates:
(
    (@a (@a + 1)) (
        0 ()
    )
)
`(@a (@a + 1))` turns `@a` into `@a + 1`. Then `@a + 1` is fed back into `(@a (@a + 1))` to produce `(@a + 1) + 1`, and after applying it to `0`, the loop goes on forever. To terminate it, it is necessary to express a condition under which `@a` becomes `@a + 1`. For example:
(
    ((@a < 10) (@a (@a + 1))) (
        0 ()
    )
)
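In plain JavaScript, the behavior that guarded rule is meant to express looks like this; just an illustration of the intended semantics, not anything Implika runs today:

```javascript
// What the guarded rule above is meant to express: rewrite @a into
// @a + 1 only while @a < 10, so the rewriting stops after ten steps.
let a = 0;
let steps = 0;
while (a < 10) {
  a = a + 1;
  steps = steps + 1;
}
console.log(a, steps); // prints 10 10
```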
This should loop only 10 times, starting from 0, but it won't work right away. At first sight, the problem seems to be that we have to internally code the less-than predicate and an addition function. But on second thought, I believe it is possible to do with Implika something like Church encoding (https://en.wikipedia.org/wiki/Church_encoding) in lambda calculus. This is something I only mentioned in the current Implika documentation, but I'll try to shed more light on this problem in the next GitHub update. I'll try to provide a simple example of implementing lambda calculus, which is for now (without hierarchically related contexts) more complicated, but possible (I believe).
In other words, the above example would work only after defining comparison and addition operations on a set of numbers within Implika notions. Otherwise (and for the sake of performance), these definitions should be added internally by manually altering the Implika code.
By the way, are you aware of the untyped lambda calculus (https://en.wikipedia.org/wiki/Lambda_calculus)? It possesses a hypnotic simplicity while revealing unbounded power. It would be wise to take it under consideration as a base language. I plan to do so with etml (https://github.com/e-teoria/E-Teoria-Markup-Language) because I think lambda calculus is simpler than Implika, while Implika's inference automation is not a necessary accessory in raw document production.