Implika automated inference engine

  • 68 Replies
  • 13929 Views
*

goaty

Re: Implika automated inference engine
« Reply #30 on: December 07, 2019, 08:35:23 pm »
Hope you get your original idea working. Sounds really good. If it's simple, it's probably a good sign that it works. Everything is.

*

ivan.moony

Re: Implika automated inference engine
« Reply #31 on: December 07, 2019, 08:41:19 pm »
Quote from: goaty
Hope you get your original idea working. Sounds really good. If it's simple, it's probably a good sign that it works. Everything is.

Thank you for the kind words. Luckily, the original idea hasn't changed; the issue is only with the implementation, which will hopefully stay simple. I also plan to support a kind of expression complementation, which should have positive properties for detecting forms of input (more precisely, detecting whether an input does not follow the provided constraints).

*

goaty

Re: Implika automated inference engine
« Reply #32 on: December 07, 2019, 09:32:40 pm »

Quote from: ivan.moony
Thank you for the kind words. Luckily, the original idea hasn't changed; the issue is only with the implementation, which will hopefully stay simple. I also plan to support a kind of expression complementation, which should have positive properties for detecting forms of input (more precisely, detecting whether an input does not follow the provided constraints).

Have you heard of assimilation & accommodation? It's a pretty easy idea, so maybe you don't need to look it up, but it's the psychology of having to let go of false beliefs, and how you never feel like doing it.

*

ivan.moony

Re: Implika automated inference engine
« Reply #33 on: December 07, 2019, 10:07:56 pm »
Quote from: goaty
Have you heard of assimilation & accommodation? It's a pretty easy idea, so maybe you don't need to look it up, but it's the psychology of having to let go of false beliefs, and how you never feel like doing it.

?

*

goaty

Re: Implika automated inference engine
« Reply #34 on: December 08, 2019, 01:26:00 am »
OK, I'll explain.
Just like a human, a robot is trying to complete a similar task, only in an artificial way, so there is actually an intersecting set between people and robots. When conflicting information enters the system, which is wrong: the new information or the old information? A GPT-2 system has no ability to tell wrong from right; it's all equally right to it, and it continually contradicts itself. The same thing happens to people: only through a lot of experience, and getting punished a few times, do you learn whether you're doing the wrong thing, or whether what you are currently acting on is false. But an artificial system has a very hard time with it. You can set up a Boolean algebra model for answering "how do I fix a computer?", and if you give it the relation "use a hammer" it may be a little biased, but as far as the robot is concerned, it's solving the puzzle appropriately.

It's OK if the robot sees evil; it definitely happens in the world. But the robot has to know that what others do is different from what it does. Then it's OK to store such information without being motivated to repeat it itself; it becomes a goal-based problem.

But that might not be the right way to go about it, and this is one of the unsolved mysteries of AI to date.

*

ivan.moony

Re: Implika automated inference engine
« Reply #35 on: December 08, 2019, 11:06:59 am »
I think it is about relating experience with theory. Experience is the absolute truth about the environment, and should be referred to when constructing theories. Theory is a relative notion, and often there is more than one way to correctly explain experience by theory. If we build two competing theories that contradict each other, we have only experience to decide which theory is correct.

We don't yet have a grand unified theory of everything, and according to Gödel, we never will. Hence, it will always be a matter of juggling fragments that work on this or that occasion. Because of that, it is necessary to have a mechanism for switching between theories according to the environment, which is the only truly solid point.

Theories are about predicting data in spatiotemporal space. There can't exist a theory that always predicts correct data, whatever the environment looks like, but we can have domain-specific theories which we choose to operate on isolated, specific environments. We can detect whether those domain-specific theories are correct by comparing their results to environmental data.

*

LOCKSUIT

Re: Implika automated inference engine
« Reply #36 on: December 08, 2019, 11:32:45 am »
What makes you so sure you can't 'know it all'? The whole AI field is about learning patterns. A pattern lets you understand/solve multiple cases. Our little brain already understands the whole universe, sorta. Just imagine when the Earth becomes all brains at the nano level: it will have so much more data; cats eat, dogs eat, cats sleep, dogs sleep. Discovery. The universe is made up of just a few laws, so patterns are everywhere.

Yes goaty, we have best-candidate answers; sometimes the top 20 can all be the same rank. We usually have only 1 or 2 favorites. GPT-2 does too; it's not that good yet, but it's truer than past generators, that's for sure. It is already a very slightly useful discoverer/researcher.

*

ivan.moony

Re: Implika automated inference engine
« Reply #37 on: December 08, 2019, 12:18:40 pm »
Quote from: LOCKSUIT
What makes you so sure you can't 'know it all'?

See Gödel's incompleteness theorems. There are two notions when constructing theories: completeness (describing all that exists) and consistency (being noncontradictory). What Gödel proved is that we can't have both at the same time. Thus, a complete theory has to be contradictory, and as such can't be described by classical logic within a single logical system. Implika tries to be complete, while providing mechanisms to constrain parts of the whole system to be consistent, although these parts may contradict each other (making the whole illogical). These consistent parts would be switched by implication, like:

Code
(
    Case1 -> (... Theory1 ...)
) /\ (
    Case2 -> (... Theory2 ...)
) /\ ...

Now, if we want to analyze some `(... Environment1 ...)` using `Theory1`, we write:

Code
(Analytics -> (Case1 /\ (... Environment1 ...)))

If `Analytics` turns out to be contradictory, then we may conclude that `Theory1` does not hold for `(... Environment1 ...)`, and we can move on to check `Case2`. Moreover, these checks may be done iteratively, in a recursive loop, to report which theory actually does hold for some environment, but that is already a matter of programming, and I don't want to complicate this post that much.
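
Just for illustration, that loop could look something like the following Python sketch. This is not Implika itself; `entails_false` is a hypothetical stand-in for the contradiction check, and the names are mine.

Code
# Sketch of the theory-switching loop described above (not Implika itself).
# `entails_false` is a hypothetical placeholder for the contradiction check.

def entails_false(assumptions):
    """Return True if the given set of assumptions derives falsehood."""
    raise NotImplementedError  # placeholder; a real check would run inference

def select_theory(theories, environment):
    """Return the name of the first theory consistent with the environment.

    `theories` is a list of (name, axioms) pairs, `environment` a set of
    observed facts; a theory is rejected when axioms + facts explode.
    """
    for name, axioms in theories:
        if not entails_false(axioms | environment):
            return name
    return None  # no theory survived; the environment defeats them all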
« Last Edit: December 08, 2019, 12:51:08 pm by ivan.moony »

*

LOCKSUIT

Re: Implika automated inference engine
« Reply #38 on: December 08, 2019, 02:05:55 pm »
OK, but you agree 10 patterns can allow the AGI agent to answer 100 questions, right? Patterns work for unseen cases; at 10 cases per pattern, that's 100. It still allows a smartness explosion that is prepared for huge encounters of space, lol.

If you get a big question that involves a lot of possibilities, you can narrow down the search space.

*

ivan.moony

Re: Implika automated inference engine
« Reply #39 on: December 11, 2019, 01:41:03 pm »
These are the (currently not working) definitions of logical operators that I still have to get working in Implika:

Code
(@A -> @B)  (@A @B)
(@A /\ @B)  ((@A -> (@B -> @C)) -> @C)
(@A \/ @B)  ((@A -> @C) -> ((@B -> @C) -> @C))
(~ @A)      (@A -> @C)
(! @X . @F) (@X -> @F)
(? @X . @F) ((@X -> (@F -> @C)) -> @C)

First, logical implication is defined. Then everything else is defined from that implication and the complement `@C`, including the universal quantifier `!` and the existential quantifier `?`. I think I'll have to implant the complementation behavior in the form of one of De Morgan's laws and the double complementation law:

~(A /\ B) = (~A \/ ~B)
~(~A) = A

There is a story behind complementation in Implika. A fresh variable that was not defined before stands for *any* possible expression. Since Implika is meant to be complete (see Gödel's incompleteness theorems), it has to be inconsistent, so *any* expression also includes falsehood, and thus evaluates to false, providing a case for complementation detection. So, to state `x -> @C` is the same as stating `x -> false`, which is actually the definition of the `not` operator in implicational logic. This behavior aligns with the principle of explosion, which says: from falsehood, anything follows.
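
As a sanity check, the encodings above collapse to the classical connectives when `@C` is read as falsehood. A few lines of Python (again just an illustration, outside Implika) confirm it:

Code
# Truth-table check: the implicational encodings above agree with the
# classical connectives when the complement placeholder C is read as False.
from itertools import product

def imp(a, b):
    return (not a) or b  # material implication

C = False  # reading @C as falsehood

for a, b in product([False, True], repeat=2):
    assert imp(imp(a, imp(b, C)), C) == (a and b)         # /\ encoding
    assert imp(imp(a, C), imp(imp(b, C), C)) == (a or b)  # \/ encoding
    assert imp(a, C) == (not a)                           # ~  encoding

print("encodings agree with classical and/or/not")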

There is also something interesting in the above definitions: if we take a look at the `/\` (and), `\/` (or), and `~` (not) operators, we may interpret `@C` as complementation. But we may also consider `@C` a placeholder for any other expression, and then the definitions become theorems preserving the ground properties of the `\/`, `/\`, and `~` operators.

I could probably get away without low-level complementation support, but these properties seem so natural that I decided to grapple with this dragon.
« Last Edit: December 16, 2019, 11:07:49 am by ivan.moony »

*

goaty

Re: Implika automated inference engine
« Reply #40 on: December 12, 2019, 08:07:21 am »
Quote from: ivan.moony
I think it is about relating experience with theory. Experience is the absolute truth about the environment, and should be referred to when constructing theories. Theory is a relative notion, and often there is more than one way to correctly explain experience by theory. If we build two competing theories that contradict each other, we have only experience to decide which theory is correct.

We don't yet have a grand unified theory of everything, and according to Gödel, we never will. Hence, it will always be a matter of juggling fragments that work on this or that occasion. Because of that, it is necessary to have a mechanism for switching between theories according to the environment, which is the only truly solid point.

Theories are about predicting data in spatiotemporal space. There can't exist a theory that always predicts correct data, whatever the environment looks like, but we can have domain-specific theories which we choose to operate on isolated, specific environments. We can detect whether those domain-specific theories are correct by comparing their results to environmental data.

I see... so the model isn't the environment. In all my studies I actually make the model THE environment, but that isn't worth basing behaviour on! Thanks for that one, Ivan. It definitely is easier to just make the model the environment, but then to develop the robot's behaviour you have to copy what's in it, so it's not the true solution to AI.

*

ivan.moony

Re: Implika automated inference engine
« Reply #41 on: December 16, 2019, 10:09:41 am »
I've been trying to detect whether a formula is not a consequence of some assumptions (i.e. whether `A -> B` is not the case). This particular question was an issue for me, as long as I wanted to separate the some-kind-of-solid logic language definition from dirty programming tricks.

I think there is a thing about semantics in classical logic in which it is possible to detect a contradiction or a tautology, while it is a problem to detect merely satisfiable formulas. What I really needed was a way to detect whether the negation of `A -> B`, which translates to `(A /\ ¬B)`, is satisfiable, and that is where the detection problem arose.

In classical logic, I think deriving `failure` from `(A /\ ¬B) -> failure` requires explicitly stating or deriving both `A` and `¬B`. But if `B` does not follow from `A`, then `¬B` is not automatically concluded, and thus `failure` doesn't get propagated.

So I decided to bend the rules a bit, to match that last remark. I planned to make a system where, if `B` does not follow from `A`, then `~B` is automatically concluded from `A` (or at least treated like so, so as not to bloat the output). In fact, in this system, if `X` is not part of the formula set, then `~X` automatically is. In this system, to prove that `B` doesn't follow from `A`, it is enough to derive `~(A -> B)`. This completely changes the raw logic semantics, and it is not classical logic anymore.
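
A toy illustration of that complement-by-absence behavior in Python (the names here are mine, just for the sketch, not Implika's):

Code
# Toy sketch of complement-by-absence (negation as failure): ~X is taken
# to hold exactly when X cannot be derived. Derivability here is plain
# set membership; a real system would run actual inference.

def derivable(formula, facts):
    return formula in facts

def holds(formula, facts):
    if formula.startswith("~"):
        return not derivable(formula[1:], facts)  # ~X holds iff X is underivable
    return derivable(formula, facts)

facts = {"A", "B"}
print(holds("B", facts))   # True: B is derivable
print(holds("~C", facts))  # True: C is underivable, so ~C holds
print(holds("~A", facts))  # False: A is derivable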

However, the classical not operation can still coexist, in the form of the statement `(¬X) = (X -> false)`, but now the negation `¬` is a completely different operator from the complementation `~`. The important thing is that both should be able to exist in the same system.

The last time I checked Martin-Löf's type theory (a long time ago), it had an interesting property: sets are inhabited by proofs of their existence, while sets not inhabited at all are considered false. This somewhat corresponds to my little system, so finally, all of this might not be a completely new thing after all.

And then I found that the subject bugging me has already been worked out: autoepistemic logic, negation as failure, and stable model semantics. Those are based on evidence, or a lack of evidence, of some formula being true or false.

Of course, I wouldn't be me if I didn't reinvent hot water all by myself first. Why do I always have to do it the hard way?
« Last Edit: December 16, 2019, 11:10:37 am by ivan.moony »

*

MikeB

Re: Implika automated inference engine
« Reply #42 on: January 01, 2020, 02:36:38 pm »
I don't know how you could ever get an A = B inference engine working...

"Does a ball bounce?" is the same as saying "do all balls always bounce, anywhere, everywhere, anytime?".

You could never link the data...

You'd need deep relationships: "given this and this and this and this and this... does this do this?", then looking up whether all those things are true... If the question is asked too simply, it's their fault...

*

ivan.moony

Re: Implika automated inference engine
« Reply #43 on: January 01, 2020, 02:50:25 pm »
Quote from: MikeB
I don't know how you could ever get an A = B inference engine working...

That's why we have variables. If we mention the same variable twice, as in the expression `(@a = @a) -> equal`, the right part of the implication is deduced only if we say `5 = 5` or `a = a`, but not `2 = 3`. The same variable at various places is expected to be substituted with the same expression.
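
A small illustration of that repeated-variable check, written as generic pattern matching in Python (this shows the general idea, not Implika's actual matcher):

Code
# Generic pattern matching with variables: the same variable must bind to
# the same expression everywhere it appears. `@`-prefixed strings are
# variables; everything else must match literally.

def match(pattern, expr, bindings=None):
    bindings = {} if bindings is None else dict(bindings)
    if isinstance(pattern, str) and pattern.startswith("@"):
        if pattern in bindings:  # variable seen before: values must agree
            return bindings if bindings[pattern] == expr else None
        bindings[pattern] = expr  # first occurrence binds the variable
        return bindings
    if isinstance(pattern, tuple) and isinstance(expr, tuple) \
            and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            bindings = match(p, e, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == expr else None

print(match(("@a", "=", "@a"), ("5", "=", "5")))  # {'@a': '5'}
print(match(("@a", "=", "@a"), ("2", "=", "3")))  # None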
« Last Edit: January 02, 2020, 11:11:16 am by ivan.moony »

*

Zero

Re: Implika automated inference engine
« Reply #44 on: March 19, 2020, 08:41:08 pm »
Hi :)
I'd like to check whether I understand correctly. In the following example:
Code
(
    ((man @x) (mortal @x)) (
        (man Socrates) ()
    )
)

We first set a context ((man @x) (mortal @x)). That's our "rule". Then, our "current state of the world" is (man Socrates).
The dummy () is actually the place where we can finally use the conclusion we obtained (that Socrates is mortal), am I right?

How would I express a world where Plato is a man too? If I understand correctly, I would create a nested structure with (man Plato) and (man Socrates)? What would it look like?
« Last Edit: March 19, 2020, 09:09:20 pm by Zero »

 

