AI Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: ivan.moony on October 05, 2019, 10:46:34 PM

Title: Implika automated inference engine
Post by: ivan.moony on October 05, 2019, 10:46:34 PM
Interested in symbolic AI (GOFAI) (https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence)?

I think this might be a place to start off. I just defined, on paper, an inference engine based entirely on implication. Input and output forms are defined, a detailed pseudocode prototype (65 lines) is already written, and a Javascript implementation and bug extermination are pending.

If we manage to get a logical representation of a chatbot (https://en.wikipedia.org/wiki/Chatbot) input, we can pass it through this engine to derive implicitly contained information and answer questions.  Also, a logical expert system (https://en.wikipedia.org/wiki/Expert_system) could be based on this engine. The only built-in rule of inference is modus ponens, which says that from `A -> B` and `A` follows `B`. All other rules are combined and derived from this one, and this even includes and/or/not combinations. If we fill in a falsehood that we make sure always fails, we have a functionally complete (https://en.wikipedia.org/wiki/Functional_completeness) combination of implication and negation. Further, if we add a few axioms, we have a real implicational logic (https://en.wikipedia.org/wiki/Implicational_propositional_calculus) and a form of theorem prover. Developing the thought further, a search for proofs is basically automated algorithm construction... Well, all of this is just a part of a theory that always looks like a rainbow to me, but it remains to be seen how many rainy days there are along the way.
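As a toy illustration of that single rule at work, here is a hypothetical sketch in Javascript (none of these names come from the actual project code): modus ponens applied repeatedly until no new facts appear.

```javascript
// Hypothetical sketch: repeatedly apply modus ponens ("from A -> B and A,
// conclude B") to a set of facts and a list of implications, until the
// set of known facts stops growing.
function forwardChain(facts, implications) {
    const known = new Set(facts);
    let changed = true;
    while (changed) {
        changed = false;
        for (const [a, b] of implications) {   // each pair [a, b] reads "a -> b"
            if (known.has(a) && !known.has(b)) {
                known.add(b);
                changed = true;
            }
        }
    }
    return known;
}
```

For example, `forwardChain(["rain"], [["rain", "wet"], ["wet", "slippery"]])` derives both `wet` and `slippery` from the single fact `rain`.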

In short, Esperas is planned to be a Javascript programming library for logical bottom-up reasoning that derives conclusions based on input. If you are interested, a short read about Esperas can be found here (https://github.com/e-teoria/Esperas). I'll try to implement it and inform you about the progress as soon as possible.

(https://i.imgur.com/sfomX8P.png)

I would still like to see how to fuse this approach with neural networks. Maybe condition-consequence pairs from this kind of system can be seen as action-reaction pairs in a trained NN?

[important edit]
The old Esperas project has been renamed to the Implika project. The new Implika link is here (https://github.com/e-teoria/Implika), while Esperas has turned into another project based on top-down parsing technology.
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 05, 2019, 11:06:07 PM
Cool.

My friend once told me, a year ago, that even my idea needed what a NN has: it knows exactly how close cat is to dog.....not just cat=dog, but rather cat=dog at, e.g., 68%. The way to store all these is exactly what W2V does: it uses a net and is efficient. We may be stuck with NNs being a part of AGI.
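A graded similarity score like that is usually computed as the cosine between two word vectors. A minimal sketch (the tiny vectors below are made up for illustration; real word2vec vectors have hundreds of dimensions):

```javascript
// Hypothetical sketch: cosine similarity between two word vectors,
// giving a graded score instead of a yes/no match.
function cosine(u, v) {
    let dot = 0, nu = 0, nv = 0;
    for (let i = 0; i < u.length; i++) {
        dot += u[i] * v[i];
        nu  += u[i] * u[i];
        nv  += v[i] * v[i];
    }
    return dot / Math.sqrt(nu * nv);
}

// Made-up toy vectors: "cat" and "dog" point in similar directions,
// "car" points elsewhere, so cosine(cat, dog) > cosine(cat, car).
const cat = [0.9, 0.8, 0.1];
const dog = [0.8, 0.9, 0.2];
const car = [0.1, 0.2, 0.9];
```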
Title: Re: Esperas automated inference engine
Post by: AndyGoode on October 05, 2019, 11:47:28 PM
We may be stuck with NNs being a part of AGI.

No way, if I understand you correctly. UMSs--Uncertainty Management Systems--are standard in rule-based expert systems, and they do the estimation of likelihoods of inferences and matches that you mention. Neural networks can also only estimate, just in a different way.
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 06, 2019, 12:23:41 AM
Every single word connects to every other single word.........with a =s score weight.........that's a lot of connections and weights. And you need them to be precise to progress further plus utilize the translation.
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 06, 2019, 02:05:59 PM
We have NNs today because they were proven to be successful in their simplest forms, in the most primitive organisms, some 540 million years ago. All this time nature has had time to upgrade and fine-tune them, while we got the finished product to analyze today. I bet there are other ways to recognize and produce information, both statistical and strict, but right now we have a finished work that carries the weight of 540 million years. I only wish there were a way to use and analyze it without abusing any living being.  :-[
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 06, 2019, 02:20:31 PM
Although it still crosses ethical lines, it would be more logically harmless, and brave, to take any and every dead person (young humans/animals) and experiment/study. Though they would not be alive.
Many are already dying in pain; it's the right thing to do.

Alternatively you can take live cells.

I still believe I could make cryonics work with all the time I'd put into it. I think it looks easy to test....so many things to compare to see what improves it!! Lots of tests could be done. Just take clumps of cells.
Title: Re: Esperas automated inference engine
Post by: Hopefully Something on October 07, 2019, 02:16:39 AM
Volunteers.
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 07, 2019, 02:56:35 AM
All ya gotta do is get a Cryonics Degree, get a lab, and make a hugggge notepad or whiteboard and fill it up with a bizzilion ideas (with tests). Take out some 20 years and we'll get there in no-time. Put the chips down and you will find the gloryful gates to a utopia with much, much more chips than 80years will feed you. That sadistic high you get when eating is nothing, it will be again. An endless feast.
Title: Re: Esperas automated inference engine
Post by: AndyGoode on October 07, 2019, 05:03:15 AM
I would still like to see how to fuse this approach with neural networks. Maybe condition-consequence pairs from this kind of system can be seen as action-reaction pairs in trained NN?

I refrained from getting into this topic because I would probably come across as too negative. Specifically, in the 1990s, when neural networks were the latest and greatest hope for AI, many dozens, if not thousands, of software engineers realized that since symbolic AI and neural networks have different strengths and weaknesses, a combined system might be the ultimate solution: each weakness of one paradigm would be covered by the other. So they published numerous articles proposing 'hybrid systems' composed of rules *and* neural networks. I got tired of seeing so many articles on such proposals at every conference, and to my knowledge, not one of those proposals has survived as a system that is still being used today, which is what I predicted back then.

So yes, there exist numerous ways to create hybrid systems, like the way you suggest, but ultimately something critical is missing from both paradigms, so I don't believe you'll ever get general intelligence arising from hybrid system approaches. That's where I'm so extremely negative that it doesn't even interest me to discuss the details of the pros and cons of those paradigms. I don't want to detract from your project, though, since it's a solid, legitimate, straightforward project, so you should probably just ignore my views on this topic and keep working on that, if that's what interests you.

----------

(p. 36)
Classical AI
techniques are best suited for natural language processing, planning, or explicit reasoning,
whereas neural networks are best suited for lower-level perceptual processes, pattern
matching, and associative memories.

Haykin, Simon. 1994. Neural Networks: A Comprehensive Foundation. New York, New York: Macmillan College Publishing Company.

Title: Re: Esperas automated inference engine
Post by: Hopefully Something on October 07, 2019, 05:55:17 AM
I think two different systems just sitting beside each other would have a data-transmission bottleneck, and might even require a third, translator unit. There's got to be a way of combining them that's not janky. Maybe symbolic logic could be transmitted within a neural network, so it accomplishes high- and low-level tasks in parallel.
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 07, 2019, 02:31:39 PM
I managed to do the normalization, and it seems to do the correct thing, but I encountered some problems with partial inference and schematic variables before even beginning to implement them. It seems that the system would only answer yes/no, if specific consequences could be paired with assumptions, but wouldn't actually generate an answer of what follows from some assumptions. It would be just a prover, not a generator.

Here is the normalization testing interface: https://e-teoria.github.io/Esperas/test/ (https://e-teoria.github.io/Esperas/test/)
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 07, 2019, 11:20:10 PM
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...
Title: Re: Esperas automated inference engine
Post by: 8pla.net on October 08, 2019, 05:18:08 AM
JavaScript is actually a good choice, but it may limit your project to the client-side web browser, which seems to be constantly changing.  So, you may want to consider a server-side language such as PHP, which changes more slowly and already has neural networks and expert systems written for it.

I don't believe JavaScript does.  Of course, you may post the results from the JavaScript to PHP on the server side, but that could slow things down.  There are many PHP tutorials, references and examples that are easy to find to help with your project.  But this is just my opinion. As I said, JavaScript is a great choice, especially with HTML5, which is a nice feature.

Title: Re: Esperas automated inference engine
Post by: goaty on October 08, 2019, 06:32:49 AM
(https://raw.githubusercontent.com/e-teoria/Esperas/master/drawing.svg?sanitize=true)

This picture is fractal data/code. I've put a lot of exploration into the data structure in this picture!  =)

The main problem is that you run out of page space by 4 scopes - you can't even continue drawing it!
Title: Re: Esperas automated inference engine
Post by: goaty on October 08, 2019, 06:37:24 AM
I think two different systems just sitting beside each other would have a data-transmission bottleneck, and might even require a third, translator unit. There's got to be a way of combining them that's not janky. Maybe symbolic logic could be transmitted within a neural network, so it accomplishes high- and low-level tasks in parallel.

Any program can be put into a perceptron feedforward structure; they are just data I/O!!   It's just that, in general, anything can go into one.
I mean you can probably put both systems into the same net; it doesn't matter what they are.
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 08, 2019, 08:39:12 AM
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...

It is because the normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.

JavaScript is actually a good choice, but it may limit your project to the client-side web browser, which seems to be constantly changing.  So, you may want to consider a server-side language such as PHP, which changes more slowly and already has neural networks and expert systems written for it.

I don't believe JavaScript does.  Of course, you may post the results from the JavaScript to PHP on the server side, but that could slow things down.  There are many PHP tutorials, references and examples that are easy to find to help with your project.  But this is just my opinion. As I said, JavaScript is a great choice, especially with HTML5, which is a nice feature.

I thought about it (in a way), but server-side scripting is computationally expensive. I wanted to use client resources, as I believe that's where the real power is. Take data mining, for example: clients could do process-intensive data mining and upload only the final results. And there is a feature I want to provide in the form of a downloadable HTML application, so that users don't depend on the server if anything goes wrong. Besides, I can always use Node.js if I want server-side scripting. Also, there is WebAssembly if I need performance and want to go low level.

PHP is a nice language, and it has its benefits, but it doesn't develop so fast, and I find that a drawback, in a way. Javascript advances all the time, and I find those advances reasonably backwards compatible: Javascript applications from the first day still work on today's versions. Moreover, I'd like some low-level extensions to Javascript in the direction of Typescript (for performance reasons), but that's the problem with standards - they are slowly adopted and it's hard for them to take root. WebAssembly should change this.

And I like that the Google V8 engine is publicly available for integration into your own native app.

(https://i.imgur.com/sfomX8P.png)

This picture is fractal data/code. I've put a lot of exploration into the data structure in this picture!  =)

The main problem is that you run out of page space by 4 scopes - you can't even continue drawing it!

Did you think about zooming in/out?  :D

Title: Re: Esperas automated inference engine
Post by: goaty on October 08, 2019, 08:58:17 AM
Yes,  and ill give you a hot tip.

A fractal is exponentially expanding, but you can report it in a logged-down fashion, which just makes it an ordinary key. :)  But hush hush, I must keep the rest a secret.
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 08, 2019, 02:06:57 PM
Quote
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...
Quote
It is because the normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.

Uhm.....makes no sense....can you give an example instead? Like this: The cup fell off the table, therefore water soaked into the [carpet]

Above, it begins with a question, "The cup fell off the table,", and answers it, adding word by word, or even just the last word/atom [carpet]. The last word is the last atom shown, as you said.

If...your algorithm adds just the last word, that is just as good for me to discuss here. The last word/atom is added/generated onto the sentence. But to add it (generate it) requires validation/verification/truth. It's one and the same operation, 1:1.
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 08, 2019, 03:57:21 PM
Quote
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...
Quote
It is because the normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.

Uhm.....makes no sense....can you give an example instead? Like this: The cup fell off the table, therefore water soaked into the [carpet]

Above, it begins with a question, "The cup fell off the table,", and answers it, adding word by word, or even just the last word/atom [carpet]. The last word is the last atom shown, as you said.

If...your algorithm adds just the last word, that is just as good for me to discuss here. The last word/atom is added/generated onto the sentence. But to add it (generate it) requires validation/verification/truth. It's one and the same operation, 1:1.

this sentence:
Code: [Select]
(
    (
        fell (
            (what cup)
            (from table)
        )
    ) (
        soaked (
            (what water)
            (to carpet)
        )
    )
)

normalized gives something like this:
Code: [Select]
(
    [
        (
            [
                fell,
                (
                    [
                        what
                    ] cup
                ),
                from
            ] table
        ),
        soaked,
        (
            [
                what
            ] water
        ),
        to
    ] carpet
)

Things get more complicated with variables instead of constants. The question is: what can you read from this, possibly adding more sentences. Now we have only `A -> B`. Nothing much can be derived from this unless we add something like `A` too. Then we should be able to derive `B`.

[Edit]
I'm thinking of changing the algorithm, so `(A -> B) -> C` can be queried with something like `(A -> ?) -> B`.

[Edit2]
On another thought, the algorithm already should support expressions like `((A -> @X) -> C) -> @X`, so things are going better than I thought.
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 09, 2019, 12:48:14 AM
hmm..........segmentation? See my attachment below; can you show me both of yours like that instead? The brackets of yours are not making sense to me.

But... where is the proving method without the generating of anything? If you can prove something, it is the same as generating it. Not that it's faster; it takes longer to discover an answer, but the same technology is used.

cats eat food can be proven because dogs eat food.....generatable plus provable.
Title: Re: Esperas automated inference engine
Post by: goaty on October 09, 2019, 07:59:58 AM
Lock's right.

Generating a circumstance accurately, or even inaccurately, means it's the equivalent of the model, and computers only understand the models that were built for them.   But that thing about the tail must be some amazing theory in your head, so I'd keep that if I were you.

Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 09, 2019, 08:06:51 AM
Lock, those brackets are a programming thing. You just stitch all the lines together, and you get the unreadable result. For example:

Code: [Select]
(walking ((who I) (whereTo store)))

is written in more readable form like this:
Code: [Select]
(
    walking (
        (
            who
            I
        )
        (
            whereTo
            store
        )
    )
)

Ok?
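Incidentally, that stitching can be mechanized. A small, hypothetical sketch (illustrative only, not part of the project): print one token per line, indented by bracket depth.

```javascript
// Hypothetical sketch: print an s-expression with vertically
// distributed brackets, one token per line, indented by depth.
function prettyPrint(src, indent = "    ") {
    let depth = 0, out = "";
    for (const tok of src.match(/[()]|[^()\s]+/g) || []) {
        if (tok === "(")      out += indent.repeat(depth++) + "(\n";
        else if (tok === ")") out += indent.repeat(--depth) + ")\n";
        else                  out += indent.repeat(depth) + tok + "\n";
    }
    return out;
}
```

`prettyPrint("(walking ((who I) (whereTo store)))")` then yields a vertical layout similar to the one above, except that each opening word and its bracket land on separate lines.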
Title: Re: Esperas automated inference engine
Post by: goaty on October 09, 2019, 08:57:51 AM
Lock, those brackets are a programming thing. You just stitch all the lines together, and you get the unreadable result. For example:

Code: [Select]
(walking ((who I) (whereTo store)))

is written in more readable form like this:
Code: [Select]
(
    walking (
        (
            who
            I
        )
        (
            whereTo
            store
        )
    )
)

Ok?

But why did it walk to the store?   (Because it was written in its head to.)
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 09, 2019, 03:18:21 PM
It's not computing! :D Ivan can you just show me the cup fell table post like that? (both you were comparing). I don't think I'm seeing it lol.
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 09, 2019, 03:23:58 PM
code:
Code: [Select]
((fell ((what cup) (from table))) (soaked ((what water) (to carpet))))

normalized:
Code: [Select]
([([fell, ([what] cup), from] table), soaked, ([what] water), to] carpet)
Title: Re: Esperas automated inference engine
Post by: LOCKSUIT on October 09, 2019, 05:30:59 PM
2 more questions now

1) Why does the sentence not sound grammatically correct? Or what is it trying to do...
2) Unsure if you are adding on/validating words/phrase parts? When? Where? Why? Do you add a new segmentation + word/phrase as long as the cup will soak an object with property x?
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 09, 2019, 05:56:14 PM
Lock, I'll write about it soon. The algorithm is rapidly changing now, and I want to settle down with something steady before implementing it. I want to have an algorithm that is clean, simple and fully usable. Right now it is only partially usable, and that's not enough.

[Edit]
Lock, you ask many questions, but I find one of them particularly surprising for not being understood. It is about saw-tooth code and brackets distributed vertically. It is a great programming invention that improves readability, and it seems to us programmers to be god-given, but actually, someone back then had to invent it, I suppose. It is all about reading braces top-down, where the upper brace opens, while the lower brace closes, a scope. A great invention; I hope you will find it as amazing as I do.
Title: Re: Esperas automated inference engine
Post by: ivan.moony on October 13, 2019, 11:29:23 AM
https://www.youtube.com/watch?v=UjY7n0-z-p0 (https://www.youtube.com/watch?v=UjY7n0-z-p0)
Title: Re: Implika automated inference engine
Post by: ivan.moony on November 02, 2019, 07:45:33 AM
I've changed the name of the library from Esperas to Implika. Esperas is meant to be a bundle of a graphical user interface and Implika.

Implika was originally thought of as a Javascript nest for the Logos metatheory language, but it turned out it may be a stand-alone project, replacing Logos.

I just finished a Javascript implementation. That's about it; I hunted some bugs, but I'm sure there are more hiding within. Time will tell, and I'm on it. The whole implementation took 152 lines of Javascript code, while it resembles a rule-based inference engine with a pattern matching and implication mechanism that aims to be Turing complete. There is no distinction between knowledge base representation and rules; each expression behaves as both a rule and data from which implicit information may be derived by other rules.

It inputs an s-expression, and outputs a JSON object enriched by a ForeChain property (forward chainer). Backward chaining is supported indirectly. S-expressions are interpreted in a way that the left side expression implies the right side expression, and this aspect goes recursively deep, down to atoms. Note that there are no other operators, just implication in the form of pairs. The BNF of the input looks like this:
Code: [Select]
    s-exp := ()
           | constant
           | @variable
           | (s-exp s-exp)
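
A minimal recursive-descent reader for this grammar might look like the sketch below (hypothetical, not the actual Implika parser): pairs become two-element arrays, `()` an empty array, and atoms stay strings.

```javascript
// Hypothetical sketch: parse the s-exp grammar above into arrays/strings.
function parse(src) {
    const toks = src.match(/[()]|[^()\s]+/g) || [];
    let pos = 0;
    function expr() {
        const t = toks[pos++];
        if (t !== "(") return t;                      // constant or @variable
        if (toks[pos] === ")") { pos++; return []; }  // the empty pair ()
        const left = expr();                          // (s-exp s-exp)
        const right = expr();
        if (toks[pos++] !== ")") throw new Error("expected ')'");
        return [left, right];
    }
    return expr();
}
```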

The only rule of inference is modus ponens:
Code: [Select]
  (A B)          A
---------------------
        B

Who would say that this little thingie hides a system capable of describing any kind of computation?
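One application of that rule, with `@variables` included, can be sketched like so (a hypothetical illustration of the mechanism, not the engine's actual code; a pair `[a, b]` reads as "a implies b"):

```javascript
// Hypothetical sketch: unify a pattern against a fact, binding @variables,
// then apply one modus ponens step: from rule (A B) and a fact matching A,
// derive B with the bindings substituted in.
function match(pattern, fact, env = {}) {
    if (typeof pattern === "string" && pattern.startsWith("@")) {
        if (pattern in env) return match(env[pattern], fact, env);
        return { ...env, [pattern]: fact };          // bind the variable
    }
    if (Array.isArray(pattern) && Array.isArray(fact)) {
        if (pattern.length === 0 || fact.length === 0)
            return pattern.length === fact.length ? env : null;
        const e = match(pattern[0], fact[0], env);
        return e && match(pattern[1], fact[1], e);
    }
    return pattern === fact ? env : null;            // constants must be equal
}

function substitute(expr, env) {
    if (typeof expr === "string")
        return expr in env ? substitute(env[expr], env) : expr;
    return expr.map(part => substitute(part, env));
}

function modusPonens(rule, fact) {                   // rule is a pair [A, B]
    const env = match(rule[0], fact, {});
    return env && substitute(rule[1], env);
}
```

For instance, `modusPonens([["human", "@x"], ["mortal", "@x"]], ["human", "socrates"])` yields `["mortal", "socrates"]`, while a non-matching fact yields `null`.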

One may find interesting the appearance of a logic-like expert system among the examples. A knowledge base is stated, and a query is put in the form of a conjunction. The answer appears in the ForeChain property of the output.

I still have to finish a description of the implementation of Hilbert calculus (https://en.wikipedia.org/wiki/Hilbert_system) within the system, but you can already play with it within the provided test suite. Hilbert calculus is mainly used to describe different kinds of logic, inference systems, and lambda calculus (lambda calculus is a Turing complete representation of using functions to calculate results) in its various incarnations. Provided that Implika can describe Hilbert calculus, it can describe any kind of system.

I also see Implika as something capable of replacing the core AtomSpace (https://wiki.opencog.org/w/AtomSpace) of the OpenCog (https://en.wikipedia.org/wiki/OpenCog) project. Maybe, as a curiosity, I'll drop a post in their group once the description is fully finished, but I doubt they will be totally crazy about it. They are a bunch of PhDs, and they have invested too much time in polishing AtomSpace to let it go so easily.

Read about it here: https://github.com/e-teoria/Implika (https://github.com/e-teoria/Implika)
Test it in action here: https://e-teoria.github.io/Implika/test/ (https://e-teoria.github.io/Implika/test/)
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 07, 2019, 08:08:09 PM
I believe I found a design flaw in the algorithm relating to a phantom forward-chaining update, but I'll correct it as soon as possible. I'm also upgrading the current behavior of Implika to make contexts hierarchically dependent, which would simplify general problem setups (and defining Hilbert calculus). All of this requires rewriting the entire algorithm, and I'm on it.
Title: Re: Implika automated inference engine
Post by: goaty on December 07, 2019, 08:35:23 PM
Hope you get your original idea working.   Sounds really good; if it's simple, it's probably a good sign that it works.  Everything is.
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 07, 2019, 08:41:19 PM
Hope you get your original idea working.   Sounds really good; if it's simple, it's probably a good sign that it works.  Everything is.

Thank you for the kind words. Luckily, the original idea has not changed. The issue is only with the implementation, which will hopefully stay simple. I also plan to support a kind of expression complementation, which will have positive properties for detecting forms of input (more precisely, detecting whether an input does not follow the provided constraints).
Title: Re: Implika automated inference engine
Post by: goaty on December 07, 2019, 09:32:40 PM

Thank you for the kind words. Luckily, the original idea has not changed. The issue is only with the implementation, which will hopefully stay simple. I also plan to support a kind of expression complementation, which will have positive properties for detecting forms of input (more precisely, detecting whether an input does not follow the provided constraints).

Have you heard of assimilation & accommodation?    It's a pretty easy idea, so maybe you don't need to look it up, but it's the psychology of having to let go of false beliefs, and how you never feel like it.
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 07, 2019, 10:07:56 PM
Have you heard of assimilation & accommodation?    It's a pretty easy idea, so maybe you don't need to look it up, but it's the psychology of having to let go of false beliefs, and how you never feel like it.

?
Title: Re: Implika automated inference engine
Post by: goaty on December 08, 2019, 01:26:00 AM
Ok, I'll explain ->
Just like a human, a robot is trying to complete a similar task, yet in an artificial way.  So there is actually an intersecting set between people and robots, and when conflicting information enters the system, what is wrong? The new information or the old information?   When you have a GPT-2 system, it has no ability to tell wrong from right; it's all equally right to it, and it continually contradicts itself. The same thing happens to people; only with a lot of experience, and getting punished a few times, can you know whether you're doing the wrong thing, or whether what you are currently acting on is false.   An artificial system has a very hard time with this: you can set up a Boolean algebra model for answering "how do I fix a computer", and if you give it the relation to "use a hammer" it may be a little biased, but as far as the robot is concerned it is solving the puzzle appropriately.

It's ok if you see evil; it definitely happens in the world, but the robot has to know that what others do is different from what it does. Then it's ok to store such information, but it's not motivated to repeat it itself; then it becomes a goal-based problem.

But that might not be the right way to go about it, and this is one of the unsolved mysteries of A.I. to date.
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 08, 2019, 11:06:59 AM
I think it is about relating experience with theory. Experience is the absolute truth about the environment, and should be referred to when constructing theories. Theory is a relative notion, and often there is more than one way to correctly explain experience by theory. If we build up two competing theories that contradict each other, we have only experience to decide which theory is correct.

We don't yet have a grand unified theory of everything, and according to Gödel, we never will. Hence, it will always be a juggling of fragments that work in this or that occasion. Because of that, it would be necessary to have a mechanism for switching between this or that theory according to the environment, which is the only truly solid point.

Theories are about predicting data in spatio-temporal space. There can't exist a theory that always predicts correct data, whatever the environment looks like, but we can have domain-specific theories which we choose to operate on isolated, specific environments. We can detect whether those domain-specific theories are correct by comparing their results to environmental data.
Title: Re: Implika automated inference engine
Post by: LOCKSUIT on December 08, 2019, 11:32:45 AM
What makes you so sure you can't 'know it all'? The whole AI field is about learning patterns. A pattern lets you understand/solve multiple cases. Our little brain already understands the whole universe, sorta. Just imagine when the Earth becomes all brains at the nano level - it will have so much more data; cats eat, dogs eat, cats sleep, dogs sleep. Discovery. The universe is made up of just a few laws, so patterns are everywhere.

Yes goaty, we have best-candidate answers; sometimes the top 20 can all be the same rank. We usually have only 1 or 2 favorites. GPT-2 does too, but it's not that good yet, though it's truer than past generators, that's for sure. It is already a very slightly useful discoverer/researcher.
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 08, 2019, 12:18:40 PM
What makes you so sure you can't 'know it all'?

See Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems). There are two notions when constructing theories: completeness (describing all that exists), and consistency (being noncontradictory). What Gödel proved is that we can't have both at the same time. Thus, a complete theory has to be contradictory, and as such, can't be described by classical logic in a single logical system. Implika tries to be complete, while providing mechanisms to constrain only parts of the whole system to be consistent, although these parts may contradict each other (being illogical as a whole). These consistent parts would be switched by implication like:

Code: [Select]
(
    Case1 -> (... Theory1 ...)
) /\ (
    Case2 -> (... Theory2 ...)
) /\ ...

Now, if we want to analyze some `(... Environment1 ...)` by `Theory1`, we write:

Code: [Select]
(Analytics -> (Case1 /\ (... Environment1 ...)))

If `Analytics` turns out to be contradictory, then we may conclude that `Theory1` does not hold for `(... Environment1 ...)`, and we can move on to check `Case2`. Moreover, these checks may be done iteratively in a recursive loop to report which theory actually does hold for some environment, but that is already a matter of programming, and I don't want to complicate this post that much.
Title: Re: Implika automated inference engine
Post by: LOCKSUIT on December 08, 2019, 02:05:55 PM
Ok, but you agree 10 patterns can allow the AGI agent to answer 100 questions, right? Patterns work for unseen cases: 10 cases per pattern, hence 100. It still allows a smartness explosion that is prepared for huge encounters of space lol.

If you happen to get any big question that involves a lot of possibilities, you can narrow down the search space.
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 11, 2019, 01:41:03 PM
These are definitions of logical operators that currently do not work, and that I still have to make work with Implika:

Code: [Select]
(@A -> @B)  (@A @B)
(@A /\ @B)  ((@A -> (@B -> @C)) -> @C)
(@A \/ @B)  ((@A -> @C) -> ((@B -> @C) -> @C))
(~ @A)      (@A -> @C)
(! @X . @F) (@X -> @F)
(? @X . @F) ((@X -> (@F -> @C)) -> @C)

First, the logical implication is defined. Then everything else is defined from that implication and the complement `@c`, including the universal quantifier `!` and the existential quantifier `?`. I think I'll have to implant the complementation behavior in the form of one of De Morgan's laws and the double complementation law:

(A /\ B)c = (Ac \/ Bc)
(Ac)c = A

There is a story behind complementation in Implika. A fresh variable that is not defined before stands for *any* possible expression. Since Implika is made to be complete (see Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)), it has to be inconsistent, so *any* expression also includes a falsehood, and thus evaluates to false, providing a case for complementation detection. So, stating `x -> @c` is the same as stating `x -> false`, which is actually the definition of the `not` operator in implicational logic. This behavior aligns with the principle of explosion (https://en.wikipedia.org/wiki/Principle_of_explosion), which says: from falsehood anything follows.

There is also something interesting in the above definitions: if we take a look at the `/\` (and), `\/` (or), and `~` (not) operators, we may interpret `@c` as complementation. But we may also consider `@c` a placeholder for any other expression, and then the definitions become theorems preserving the ground properties of the `\/`, `/\`, and `~` operators.

I could probably get away without low-level complementation support, but these properties seem so natural that I decided to grapple with this dragon.
Title: Re: Implika automated inference engine
Post by: goaty on December 12, 2019, 08:07:21 AM
I think it is about relating experience with theory. Experience is absolute truth about environment, and should be referred when constructing theories. Theory is a relative notion, and often there is more than one way to correctly explain experience by theory. If we build up two competing theories that contradict each other, we have only experience to decide which theory is correct.

We don't yet have a grand unified theory of everything, and according to Gödel, we never will. Hence, it will always be a matter of juggling fragments which work in this or that occasion. Because of that, it is necessary to have a mechanism for switching between theories according to the environment, which is the only truly solid point.

Theories are about predicting data in spatio-temporal space. There can't exist a theory that always predicts correct data whatever the environment looks like, but we can have domain-specific theories which we choose to operate on isolated, specific environments. We can detect whether those domain-specific theories are correct by comparing their results to environmental data.

I see... so the model isn't the environment. In all my studies I actually make the model THE environment, but that isn't worth basing behaviour on! Thanks for that one Ivan. It definitely is easier to just make the model the environment, but to develop the robot's behaviour you have to copy what's in it, so it's not the true solution to a.i.
Title: Re: Implika automated inference engine
Post by: ivan.moony on December 16, 2019, 10:09:41 AM
I've been trying to detect whether a formula is not a consequence of some assumptions (i.e. whether `A -> B` is not the case). This particular question represented an issue to me, since I wanted to separate the some-kind-of-solid logic language definition from dirty programming tricks.

In classical logic semantics, it is possible to detect a contradiction or a tautology, while detecting merely satisfiable formulas is a problem. I think what I really needed was a way to detect whether the negation of `A -> B`, which translates to `(A /\ ¬B)`, is satisfiable, and that is where the detection problem arose.

In classical logic, I think deriving `failure` from `(A /\ ¬B) -> failure` requires explicitly stating or deriving both `A` and `¬B`. But if `B` does not follow from `A`, then `¬B` is not automatically concluded, and thus `failure` doesn't get propagated.

So I decided to bend the rules a bit to match that last remark. I planned to make a system where if `B` does not follow from `A`, then `~B` is automatically concluded from `A` (or at least treated like so, so as not to bloat the output). In fact, in this system, if `X` is not part of the formula set, then `~X` automatically is. In this system, to prove that `B` doesn't follow from `A`, it would be enough to derive `~(A -> B)`. This completely changes the raw logic semantics, and it is not classical logic anymore.
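As an illustration only (not Implika's actual code), the closed-world behavior just described can be sketched in a few lines of Javascript: `~X` holds exactly when `X` is absent from the formula set.

Code: [Select]

```javascript
// Toy sketch of "if X is not in the formula set, then ~X is".
// Formulas are plain strings; "~" marks complementation.
function holds(formulaSet, formula) {
    if (formula.startsWith("~")) {
        // ~X holds precisely when X cannot be found among the formulas
        // (negation as failure under a closed-world assumption).
        return !formulaSet.has(formula.slice(1));
    }
    return formulaSet.has(formula);
}

const facts = new Set(["A", "A -> B"]);
// B was never stated or derived here, so its complement holds automatically:
const complementHolds = holds(facts, "~B");  // true
const aHolds = holds(facts, "A");            // true
```

A real system would of course test derivability rather than bare set membership, but the asymmetry with classical `¬` is already visible in this toy.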

However, the classical not operation can still coexist in the form of the statement `(¬X) = (X -> false)`, but now the negation `¬` is a completely different operator than the complementation `~`. The important thing is that they should both be able to exist in the same system.

The last time I checked Martin-Löf's type theory (a long time ago), it had an interesting property: sets are inhabited by proofs of their existence, while sets that are not inhabited at all are considered false. This would somewhat correspond to my little system, so finally, all of this might not be a completely new thing after all.

And then I found that the subject bugging me has already been worked out: autoepistemic logic (https://en.m.wikipedia.org/wiki/Autoepistemic_logic), negation as failure (https://en.m.wikipedia.org/wiki/Negation_as_failure), and stable model semantics (https://en.m.wikipedia.org/wiki/Stable_model_semantics). These are based on evidence, or a lack of evidence, of some formula being true or false.

Of course, me wouldn't be me if I didn't reinvent hot water all by myself first. Why do I always have to do it the hard way?
Title: Re: Implika automated inference engine
Post by: MikeB on January 01, 2020, 02:36:38 PM
I don't know how you could ever get an A = B inference engine working...

"Does a ball bounce?" is the same as saying "do all balls always bounce anywhere everywhere anytime".

You could never link the data...

You'd need deep relationships... "given this and this and this and this and this... does this do this?", then looking up whether all those things are true... If the question is asked too simply, it's their fault...
Title: Re: Implika automated inference engine
Post by: ivan.moony on January 01, 2020, 02:50:25 PM
I don't know how you could ever get an A = B inference engine working...

That's why we have variables. If we mention the same variable twice, like in the expression `(@a = @a) -> equal`, the right part of the implication would be deduced only if we say `5 = 5` or `a = a`, but not `2 = 3`. The same variable at various places is expected to be substituted for the same expression.
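For illustration, here is a hedged Javascript sketch of such matching, where a repeated variable must bind to the same expression everywhere it occurs. The representation (variables as strings starting with `@`, expressions as strings or nested arrays) is an assumption for the example; the real Implika matcher differs in details.

Code: [Select]

```javascript
// Match an expression against a pattern, threading variable bindings.
// Returns the bindings object on success, or null on failure.
function match(pattern, expr, bindings = {}) {
    if (typeof pattern === "string" && pattern.startsWith("@")) {
        if (pattern in bindings) {
            // Repeated variable: must equal its earlier binding.
            return JSON.stringify(bindings[pattern]) === JSON.stringify(expr)
                ? bindings : null;
        }
        return { ...bindings, [pattern]: expr };  // fresh variable binds freely
    }
    if (Array.isArray(pattern) && Array.isArray(expr)
        && pattern.length === expr.length) {
        for (let i = 0; i < pattern.length; i++) {
            bindings = match(pattern[i], expr[i], bindings);
            if (bindings === null) return null;
        }
        return bindings;
    }
    return pattern === expr ? bindings : null;  // constants must be identical
}

// `(@a = @a)` matches `5 = 5` but not `2 = 3`:
const ok = match(["@a", "=", "@a"], ["5", "=", "5"]);   // { "@a": "5" }
const fail = match(["@a", "=", "@a"], ["2", "=", "3"]); // null
```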
Title: Re: Implika automated inference engine
Post by: Zero on March 19, 2020, 08:41:08 PM
Hi :)
I'd like to check whether I understand correctly. In the following example:
Code: [Select]
(
    ((man @x) (mortal @x)) (
        (man Socrates) ()
    )
)

We first set a context ((man @x) (mortal @x)). That's our "rule". Then, our "current state of the world" is (man Socrates).
The dummy () is actually the place where we can finally use the conclusion we obtained (that Socrates is mortal), am I right?

How would I express a world where Plato is a man too? If I understand correctly, I would create a nested structure with (man Plato) and (man Socrates)? What would it look like?
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 19, 2020, 09:09:07 PM
Code: [Select]
(
    ((man @x) (mortal @x)) (
        (man Socrates) (
            (man Plato) ()
        )
    )
)

It is a matter of transforming implicational logic into normal logic. `A -> ( B -> ( C -> D ) )` translates to `(A /\ B /\ C) -> D`; `A`, `B`, and `C` are in the same context and may interact with each other. We may use `D` as a dummy, or as something more meaningful.
Title: Re: Implika automated inference engine
Post by: Zero on March 19, 2020, 09:14:52 PM
In (a b), what would you call a and b? What are the official terms you chose? I can't choose between condition/context/rule and consequence/situation...
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 19, 2020, 09:37:48 PM
In (a b), what would you call a and b? What are the official terms you chose?

Nothing special, it's all about implications. I'd say `a` is a cause, while `b` is a consequence. Thus `a` implies `b`. But the real beauty is that we may use combinations of implications to form data structures. For example, if we have a magical operator `+`, we may write the following function definition and two of its use cases:

Code: [Select]
(
    ( ( add ( left @x ) ( right @y ) ) ( @x + @y ) ) (
        ( add ( left 1 ) ( right 2 ) ) (
            ( add ( left 4 ) ( right 5 ) ) ()
        )
    )
)

The first meaningful line is the function definition. The second line results in `1 + 2`. The third results in `4 + 5`. Try to copy and paste this example into the implika test suite (https://e-teoria.github.io/Implika/test/) and see what happens to the `ForeChain` property. Remember, Implika is a very simple variant of implicational logic, yet we have just been doing math with it.
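The mechanism filling `ForeChain` is, at its core, repeated modus ponens. As a hedged illustration (a toy over ground, variable-free facts, not Implika itself), forward chaining can be sketched in Javascript like this:

Code: [Select]

```javascript
// Minimal forward chaining built on modus ponens, the engine's only
// built-in rule: from `A -> B` and `A`, conclude `B`. Rules are
// [cause, consequence] pairs of plain strings.
function foreChain(facts, rules) {
    const known = new Set(facts);
    let changed = true;
    while (changed) {
        changed = false;
        for (const [a, b] of rules) {
            if (known.has(a) && !known.has(b)) {
                known.add(b);  // modus ponens fires
                changed = true;
            }
        }
    }
    return known;  // all derivable facts
}

const derived = foreChain(["rains"], [["rains", "wet"], ["wet", "slippery"]]);
// derived contains "rains", "wet", and "slippery"
```

The real engine also matches patterns with variables, but the fixed-point loop above is the basic shape of bottom-up reasoning.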

Thank you for reminding me of this project; I put it aside in favor of the super-duper-logic-able-parsing Esperas. Nevertheless, I might finish Implika one day. I'd still like to simplify the parent-child interaction between different contexts.
Title: Re: Implika automated inference engine
Post by: Zero on March 19, 2020, 10:08:41 PM
Yes, I can see that it can represent data structures. Actually, I was working on an expansion of consnets in OrientDB (see the other thread?). Trying to capture the essence of things, I came to a point where I introduced "truth" and "context" for handling first order logic. Implika popped up in my mind, I reread it, and found it would fit perfectly here.

First, I still need a way to make things move; I don't have rules in my system yet, and you know how much I love minimalism. But I also need to express contexts. I feel context is a very primitive concept.

You probably remember that consnet is all about pairs. Pairs are enough to create everything else. But then, there's a schema that comes up again and again, which is [LinkType [Left Right]], so I thought that "type" was primitive enough to deserve its own existence. Then, following OpenCog, I wondered about truth and context, and the next step was to add wildcards too, because who doesn't like patterns, right?
:)

Now, with Implika, I think "truth" and "context" are actually the very same thing. Maria Keet says: What matters in logic is not the actual truth of a statement, but rather the relationship between the truth of one statement and that of another.
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 19, 2020, 10:21:07 PM
Be careful if you use only implications; I'm still uncertain about them representing sequences. There was something wrong with it, but I forgot the solution. In logic, if we have `( ( ( A -> B ) -> C ) -> D )`, and if we say `C` in the top context, then from `C` follows `D` without even mentioning `A` and `B`. I don't know how to feel about this logical property, but it seems a nasty one.

About types, you can get away without a special operator by demanding that the whole expression be a tautology (it is then said to be valid). If the top expression is `SomeA -> SomeB`, and we consider it a tautology, then `SomeA` is of type `SomeB`. However, the expressions constituting `SomeA` and `SomeB` may be only satisfiable, making room for some needed acrobatics. The third case is the treatment of contradictions, which may be reported as errors, but they (Gödel) say that that kind of system is then not complete. For a while now, I haven't believed them anymore. I suppose there may be multiple onion-skin metalayers, in which the above may be valid or satisfiable (in contrast to contradictory), while retaining the ability to express a lower system which may be contradictory if we want.
Title: Re: Implika automated inference engine
Post by: Zero on March 19, 2020, 10:33:41 PM
Man, this baby needs a little love. To me, Implika might be one of your greatest works.

And what about backward chaining? Do you think it's possible to enhance it?
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 19, 2020, 10:55:30 PM
I think if all logical operators could be expressed in Implika, then `( ( A -> B ) /\ B ) -> A` would simulate backward chaining without any special treatment. Try pasting the following example into the implika test suite (https://e-teoria.github.io/Implika/test/):

Code: [Select]
(
    (
        ( @x -> @y ) (
            ( ( fore @x ) @y ) (
                ( ( back @y ) @x ) ()
            )
        )
    ) (
        ( rains -> wet ) ()
    )
)

After drawing conclusions, the expression `rains -> wet` should have, in its ForeChain property, the forward- and backward-chained cause and consequence.

However, extracting correct results from `fore rains` and `back wet` requires some crazy stunts with contexts, which would be more easily solved with still-unimplemented hierarchical contexts.
Title: Re: Implika automated inference engine
Post by: Zero on March 19, 2020, 11:43:53 PM
Ok, so to avoid crazy stunts, would you need to expand the model? Or would it be more like changing the algo?
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 20, 2020, 12:10:47 AM
It would be an algo adjustment. No additional syntax, just a few loop rearrangements, if I remember correctly (I was about to do this before something else attracted my attention). But I think I'd need at least a week to remember what I was doing and to plan things out before making the adjustments (so as not to inject more code than really needed; I want to keep it below 200 lines of Javascript).

As motivation, I think we can expect an example implementation of the entire Hilbert calculus (https://en.wikipedia.org/wiki/Hilbert_system) in about 15-20 lines of more readable Implika code, instead of the current unnecessarily messy 47 lines.
Title: Re: Implika automated inference engine
Post by: Zero on March 20, 2020, 12:58:09 AM
I think it's worth it, especially if it is a building block of a bigger project like e-Teoria.

One interesting improvement could be the addition of a syntax for inline comments. I suggest "double quotes anywhere", or [square brackets], being ignored by the parser (together with a good-practice section explaining how to use them). Because pairs can conceptually be anything, the readability of Implika code is currently absolutely awful!!

You're the master of Implika, but while I'm at it, I find the @var syntax not so pretty. May I suggest <var> or var? instead? Sorry, probably not my business...
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 20, 2020, 06:30:50 AM
You're the master of Implika, but while I'm at it, I find the @var syntax not so pretty. May I suggest <var> or var? instead?

I have to admit, `<var>` looks much prettier to me too. I also kind of like capital letters denoting variables and lowercase letters denoting constants.
Title: Re: Implika automated inference engine
Post by: Zero on March 20, 2020, 08:22:52 AM
Yes, I thought about an uppercase first letter too, à la Prolog.

About comments, having them between double quotes is unusual, but this is how Smalltalk does it (http://rigaux.org/language-study/syntax-across-languages-per-language/Smalltalk.html) for example. It seems reasonable to me. One way or another, I think comments are very important in a language, you should choose a syntax for them.

Code: [Select]
( "one meaningful context"

    (("pattern" man <x>) ("template" mortal <x>)) (

        ("starting from" man Socrates) ()
    )
)

Another question. I would expect the state after inference-step to be represented in the same format. Currently, a ForeChain key is added, with the result as an array. This breaks the head/tail format. How do we represent the result of the inference in Implika syntax?
Title: Re: Implika automated inference engine
Post by: krayvonk on March 20, 2020, 09:02:27 AM
One funny thing I think about symbolic logic is that the "framework" that's running your system could just be more symbolic logic. Very confusing topic.
If anyone cracks how to do it really well, it puts hairs on your chest.

I love that first picture. Why don't you put symbolic logic in a fractal! It's like that thing I just said, the system running the system, or it would be like "10 people with 10 forks with 10 steaks with 10 blobs of ketchup with 10 little bits of parmesan cheese particles." - couldn't get more micronically nested than that in this case.
Title: Re: Implika automated inference engine
Post by: Zero on March 20, 2020, 09:27:39 AM
He did :)
https://aidreams.co.uk/forum/index.php?topic=12849.msg60380#msg60380 (https://aidreams.co.uk/forum/index.php?topic=12849.msg60380#msg60380)
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 20, 2020, 09:44:14 AM
About comments...

Well, there is already a way to state `(pattern @a) @a`, and later use it as `pattern (man @x)`, just to increase verbosity. But general comments will certainly be taken into consideration.

I would expect the state after inference-step to be represented in the same format. Currently, a ForeChain key is added, with the result as an array. This breaks the head/tail format.

Sounds extraordinary. I should give it a try.

How do we represent the result of the inference in Implika syntax?

Nohow. Inference particles are just there in the forechain to interact with other initial/forechain particles, making new forechain particles.

[EDIT]
continuing in the next post
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 21, 2020, 07:45:19 AM
I would expect the state after inference-step to be represented in the same format. Currently, a ForeChain key is added, with the result as an array. This breaks the head/tail format. How do we represent the result of the inference in Implika syntax?

After a few thoughts, a solution may be to populate existing contexts with inference results, thus altering the original expression. This solution suffers from a lack of information about which conclusion came from which node, while this information may be valuable in theorem proving. I think it would be wise to keep conclusions separated from the starting code, as they are now, leaving the user the possibility to flatten the resulting JSON into pure head/tail format in the following way. There may be multiple conclusions from the same node. The solution is to replace a `node` with `(node ( result1 ( result2 ( result3 () ) ) ) )`, with the mandatory dummy at the end, because this expression behaves like the logical expression `(node -> (result1 -> (result2 -> (result3 -> ()))))`, which translates to `(node /\ result1 /\ result2 /\ result3) -> ()`. A function to do this flattening would take only a dozen lines of code with standard tree traversal (https://en.wikipedia.org/wiki/Tree_traversal) and node reassignment.
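That flattening can indeed be sketched in about a dozen lines of Javascript. The `head`/`tail` field names follow the internal format the library accepts, with `null` standing in for the dummy `()`; the exact shape of the real output is an assumption here.

Code: [Select]

```javascript
// Flatten a node and its conclusions into pure head/tail form:
// "node" with [r1, r2, r3] becomes the analog of
// (node ( r1 ( r2 ( r3 () ) ) ) ), the mandatory dummy last.
function flatten(node, conclusions) {
    // Build the nested tail right-to-left, ending with the dummy.
    let tail = null;
    for (let i = conclusions.length - 1; i >= 0; i--) {
        tail = { head: conclusions[i], tail: tail };
    }
    return { head: node, tail: tail };
}

const flat = flatten("node", ["result1", "result2", "result3"]);
// flat.head is "node"; flat.tail.head is "result1"; and so on,
// down to the null dummy at the deepest tail
```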

How do we represent the result of the inference in Implika syntax?

In fact, Implika (like logic) has a nice property of being able to detect internal inference. The inference process is homoiconic (https://en.wikipedia.org/wiki/Homoiconicity) in Implika, if that's the right word to use. For example, we can write `(a b) c` to derive `c` if `b` may be implicitly derived from `a` in the same context, even without explicitly stating `(a b)`. IMHO, logic in general is a true dragon that could be worth studying even for its own sake, just like mathematics is.
Title: Re: Implika automated inference engine
Post by: Zero on March 21, 2020, 08:50:02 AM
It surely is a dragon! It's hard for me to study. For example, I don't yet understand the difference between material conditional (https://en.wikipedia.org/wiki/Material_conditional) and logical consequence (https://en.wikipedia.org/wiki/Logical_consequence). Logic is an entire world in itself.

The reason behind my question was a hope to build a system where forward chaining can be applied repeatedly, to obtain a tree-shaped cellular automaton, so to speak.

BTW, expressions like (a b c d) associate to the left; may I ask why you chose left? Expressions like `(a (b (c ())))` seem to be everywhere...
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 21, 2020, 11:54:08 AM
BTW, expressions like (a b c d) associate to the left; may I ask why you chose left? Expressions like `(a (b (c ())))` seem to be everywhere...

Yes, I decided not to follow standards and to associate implications to the left for a reason. It is because `(a (b (c ())))` is commutative (since it is analogous to logical conjunction), while in `((((a) b) c) d)` the order matters. I predicted that someone might want to construct sequences with more than two elements, and I wanted to avoid writing braces in those cases (i.e. expressions like `select <x> where <y> order by <z>`). I don't know if I made an optimal decision, but it's just a matter of parsing that is external to the library anyway, as in the test suite. Internally, the algorithm accepts `{head: ..., tail: ...}` JS objects (or, alternatively, two-element arrays processed by the `Conv` function), which require explicit grouping. The library itself is actually meant to be used as a bare essential for integrating with bigger projects, which may decide their own grouping policy.
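A `Conv`-like conversion from two-element arrays to `{head, tail}` objects can be sketched as below; this is a hypothetical reconstruction for illustration, and the real `Conv` function's behavior may differ in details.

Code: [Select]

```javascript
// Recursively turn two-element arrays into {head, tail} objects,
// the internal form the algorithm accepts. Anything else (strings,
// other atoms) passes through unchanged.
function conv(expr) {
    if (Array.isArray(expr) && expr.length === 2) {
        return { head: conv(expr[0]), tail: conv(expr[1]) };
    }
    return expr;
}

// `(rains wet)` with explicit grouping:
const pair = conv(["rains", "wet"]);
// pair is { head: "rains", tail: "wet" }
```

Note that the input already carries explicit grouping; deciding how `(a b c d)` groups is exactly the parsing policy left to the embedding project.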

By the way, I think I know what to do with hierarchical context relations. It would be just a few more lines of code. I'll try it on Monday.
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 21, 2020, 05:20:45 PM
The reason behind my question was a hope to build a system where forward chaining can be applied repeatedly, to obtain a tree-shaped cellular automaton, so to speak.

It should already be quite possible. For example, this expression never terminates:

Code: [Select]
(
    (@a (@a + 1)) (
        0 (
        )
    )
)

`(@a (@a + 1))` turns `@a` into `@a + 1`. Then `@a + 1` is fed back to `(@a (@a + 1))` to produce `(@a + 1) + 1`, and after applying it to `0`, the loop goes on forever. To terminate it, it is necessary to express a condition under which `@a` becomes `@a + 1`. For example:

Code: [Select]
(
    ((@a < 10) (@a (@a + 1))) (
        0 (
        )
    )
)

This should loop only 10 times, starting from 0, but it won't work right away. At first sight, the problem seems to be that we have to internally code the less-than predicate and an addition function. But on second thought, I believe it is possible to do with Implika something like Church encoding (https://en.wikipedia.org/wiki/Church_encoding) in lambda calculus. This is something I only mentioned in the current Implika documentation, but I'll try to shed more light on this problem in the next github update. I'll try to provide a simple example of implementing lambda calculus, which is now (without hierarchically related contexts) more complicated, but possible (I believe).

In other words, the above example would work only after defining the comparison and addition operations on the set of numbers within Implika notions. Otherwise (and for the sake of speed performance), these definitions should be added internally by manually altering the Implika code.

By the way, are you aware of the untyped lambda calculus (https://en.wikipedia.org/wiki/Lambda_calculus)? It possesses a hypnotic simplicity, while revealing unbounded power. It would be wise to take it into consideration as a base language. I plan to do so with etml (https://github.com/e-teoria/E-Teoria-Markup-Language), because I think lambda calculus is simpler than Implika, while Implika's inference automation is not a necessary accessory in raw document production.
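For readers unfamiliar with the Church encoding mentioned above, here is a minimal Javascript sketch of Church numerals: a numeral n is simply a function that applies f n times.

Code: [Select]

```javascript
// Church numerals as plain higher-order functions.
const zero = (f) => (x) => x;                       // apply f zero times
const succ = (n) => (f) => (x) => f(n(f)(x));       // one more application
const add  = (m) => (n) => (f) => (x) => m(f)(n(f)(x));  // m + n applications

// Convert a Church numeral back to a native number for inspection.
const toNumber = (n) => n((k) => k + 1)(0);

const one = succ(zero);
const two = succ(one);
const three = add(one)(two);
// toNumber(three) gives 3
```

Comparison predicates like less-than can be encoded in the same style, which is why such definitions could in principle live inside Implika rather than in its internals.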
Title: Re: Implika automated inference engine
Post by: Zero on March 22, 2020, 10:51:46 AM
Do you plan to introduce macros at some point? :)
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 22, 2020, 12:39:00 PM
Do you plan to introduce macros at some point? :)

What kind of macros? I thought every implication is a kind of macro. The result of a head pattern is a tail pattern, according to some rule.
Title: Re: Implika automated inference engine
Post by: Zero on March 22, 2020, 12:57:37 PM
You're right, it's a rewriting system (silly me). I think what's bothering me is having to nest things when I'd like to syntactically juxtapose them. I'm not sure I'm being clear.
Title: Re: Implika automated inference engine
Post by: ivan.moony on March 22, 2020, 01:16:39 PM
Yes, it is kind of itchy that every bond is a consequence, whether we need it or not. It bothered me so much that I introduced a space separator aside from the implication operators, but it all turned into Esperas (https://github.com/e-teoria/Esperas), which I'm still developing.

[edit]
I only kept implika because I believe in minimalism.