Implika automated inference engine

  • 68 Replies
  • 13779 Views

ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Esperas automated inference engine
« Reply #15 on: October 08, 2019, 08:39:12 am »
Quote
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...

It is because normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within the previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.

Quote
JavaScript is actually a good choice, but it may limit your project to the client side (the web browser), which is constantly changing, it seems. So you may want to consider a server-side language such as PHP, which changes more slowly and already has neural networks and expert systems written for it.

Quote
I don't believe JavaScript does. Of course, you may post the results from the JavaScript to PHP on the server side, but that could slow things down. There are many PHP tutorials, references and examples that are easy to find to help with your project. But this is just my opinion. As I said, JavaScript is a great choice, especially with HTML5, which is a nice feature.

I thought about it (in a way), but server-side scripting is computationally expensive. I wanted to use client resources, as I believe that is where the real power is. Take data mining, for example: clients could do the process-intensive data mining and upload only the final results. And there is a feature I want to provide in the form of a downloadable HTML application, so that users don't depend on the server if anything goes wrong. Besides, I can always use Node.js if I want server-side scripting. Also, there is WebAssembly if I need performance and want to go low-level.

PHP is a nice language and it has its benefits, but it doesn't develop as fast, and I find that a drawback, in a way. JavaScript advances all the time, and I find those advances reasonably backwards compatible: JavaScript applications from day one still work on today's versions. Moreover, I'd like some low-level extensions to JavaScript in the direction of TypeScript (for performance reasons), but that's the problem with standards: they are adopted slowly and it's hard for them to take root. WebAssembly should change this.

And I like that the Google V8 engine is publicly available for integration into your own native app.



Quote
This picture is fractal data/code. I've put a lot of exploration into the data structure of this picture! =)

The main problem is you run out of page space by 4 scopes - you can't even continue drawing it!

Did you think about zooming in/out?  :D

« Last Edit: December 09, 2019, 11:05:34 am by ivan.moony »


goaty

  • Trusty Member
  • Replicant
  • 552
Re: Esperas automated inference engine
« Reply #16 on: October 08, 2019, 08:58:17 am »
Yes, and I'll give you a hot tip.

A fractal is exponentially expanding, but you can report it in a logged-down fashion, which just makes it an ordinary key. :) But hush hush, must keep the rest a secret.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Esperas automated inference engine
« Reply #17 on: October 08, 2019, 02:06:57 pm »
Quote
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...
Quote
It is because normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within the previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.

Uhm..... makes no sense..... can you give an example instead? Like this: The cup fell off the table, therefore water soaked into the [carpet]

Above, it begins with a question, "The cup fell off the table,", and answers it, adding word by word, or even just the last word/atom [carpet]. The last word is the last atom exposed, as you said.

If your algorithm adds just the last word, that is just as good for me to discuss here. The last word/atom is added/generated onto the sentence. But to add it (generate it) requires validation/verification/truth. It's one and the same operation, 1:1.
Emergent          https://openai.com/blog/


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Esperas automated inference engine
« Reply #18 on: October 08, 2019, 03:57:21 pm »
Quote
In my research I have evidence suggesting generating=proving. Are you really sure yours can prove but not generate? How so...
Quote
It is because normalization exposes only the last atom in a sequence to be proved. The nature of the provided algorithm is that only the final tail can be derived, but information could be contained within the previous atoms too, in the case of partial parameter application. I think I'll opt for separate code (instead of complicating the existing algorithm) to support generation.

Quote
Uhm..... makes no sense..... can you give an example instead? Like this: The cup fell off the table, therefore water soaked into the [carpet]

Above, it begins with a question, "The cup fell off the table,", and answers it, adding word by word, or even just the last word/atom [carpet]. The last word is the last atom exposed, as you said.

If your algorithm adds just the last word, that is just as good for me to discuss here. The last word/atom is added/generated onto the sentence. But to add it (generate it) requires validation/verification/truth. It's one and the same operation, 1:1.

This sentence:
Code
(
    (
        fell (
            (what cup)
            (from table)
        )
    ) (
        soaked (
            (what water)
            (to carpet)
        )
    )
)

normalized gives something like this:
Code
(
    [
        (
            [
                fell,
                (
                    [
                        what
                    ] cup
                ),
                from
            ] table
        ),
        soaked,
        (
            [
                what
            ] water
        ),
        to
    ] carpet
)

Things get more complicated with variables instead of constants. The question is: what can you read from this, possibly adding more sentences? Now we have only `A -> B`. Not much can be derived from it unless we add something like `A` too. Then we should be able to derive `B`.
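
The `A -> B` plus `A` gives `B` step can be sketched in a few lines of JavaScript (a minimal illustration with flat string atoms, not the Esperas code; the name `forwardChain` is my own):

```javascript
// Minimal sketch (not the project's code): facts are either atoms
// (strings) or [antecedent, consequent] pairs; modus ponens fires
// whenever the antecedent of a pair is already a known fact.
function forwardChain(facts) {
  const known = new Set(facts.filter(f => typeof f === "string"));
  let changed = true;
  while (changed) {                  // iterate until a fixed point
    changed = false;
    for (const f of facts) {
      if (Array.isArray(f) && known.has(f[0]) && !known.has(f[1])) {
        known.add(f[1]);             // derive B from (A B) and A
        changed = true;
      }
    }
  }
  return known;
}
```

With the knowledge base `{A -> B, A}`, the set returned by `forwardChain` contains the derived `B`; without the fact `A`, nothing fires.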

[Edit]
I'm thinking of changing the algorithm, so `(A -> B) -> C` can be queried with something like `(A -> ?) -> B`.

[Edit2]
On another thought, the algorithm already should support expressions like `((A -> @X) -> C) -> @X`, so things are going better than I thought.
« Last Edit: October 08, 2019, 04:22:51 pm by ivan.moony »


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Esperas automated inference engine
« Reply #19 on: October 09, 2019, 12:48:14 am »
hmm.......... segmentation? See my attachment below; can you show me both of yours like that instead? The brackets of yours are not making sense to me.

But... where is the proving method without the generating of anything? If you can prove something, it is the same as generating it. Not that it's faster; it takes longer to discover an answer, but the same technology is used.

"Cats eat food" can be proven because dogs eat food..... generatable plus provable.


goaty

  • Trusty Member
  • Replicant
  • 552
Re: Esperas automated inference engine
« Reply #20 on: October 09, 2019, 07:59:58 am »
Lock's right.

Generating a circumstance accurately, or even inaccurately, means it's the equivalent of the model, and computers only understand what models were built for them. But that thing about the tail must be some amazing theory in your head, so I'd keep that if I were you.



ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Esperas automated inference engine
« Reply #21 on: October 09, 2019, 08:06:51 am »
Lock, those brackets are a programming thing. If you stitch all the lines together, you get the unreadable one-line result. For example:

Code
(walking ((who I) (whereTo store)))

is written in more readable form like this:
Code
(
    walking (
        (
            who
            I
        )
        (
            whereTo
            store
        )
    )
)

Ok?
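
The line-stitching also works in reverse: a few lines of JavaScript can reproduce that vertical layout (a hypothetical sketch representing each pair as a two-element array; not part of the project):

```javascript
// Hypothetical sketch: print a pair (a two-element array) in the
// vertical "saw-tooth" style, one bracket per line, 4-space indents.
function pretty(exp, indent = 0) {
  const pad = "    ".repeat(indent);
  if (!Array.isArray(exp)) return pad + exp;   // atom on its own line
  return pad + "(\n" +
         pretty(exp[0], indent + 1) + "\n" +   // left part, indented
         pretty(exp[1], indent + 1) + "\n" +   // right part, indented
         pad + ")";                            // closing brace aligned
}
```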
« Last Edit: October 09, 2019, 08:41:01 am by ivan.moony »


goaty

  • Trusty Member
  • Replicant
  • 552
Re: Esperas automated inference engine
« Reply #22 on: October 09, 2019, 08:57:51 am »
Quote
Lock, those brackets are a programming thing. If you stitch all the lines together, you get the unreadable one-line result. For example:

Code
(walking ((who I) (whereTo store)))

is written in more readable form like this:
Code
(
    walking (
        (
            who
            I
        )
        (
            whereTo
            store
        )
    )
)

Ok?

But why did it walk to the store? (Because it was written in its head to.)


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Esperas automated inference engine
« Reply #23 on: October 09, 2019, 03:18:21 pm »
It's not computing! :D Ivan, can you just show me the cup-fell-off-the-table post like that? (both versions you were comparing). I don't think I'm seeing it lol.


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Esperas automated inference engine
« Reply #24 on: October 09, 2019, 03:23:58 pm »
code:
Code
((fell ((what cup) (from table))) (soaked ((what water) (to carpet))))

normalized:
Code
([([fell, ([what] cup), from] table), soaked, ([what] water), to] carpet)
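
The flattening can be reproduced in a few lines (my reading of the transformation shown above, with pairs as two-element arrays; not the project's actual code): the right spine of a pair is collected into a head list until a lone atom remains as the exposed tail.

```javascript
// Sketch of the normalization: walk a pair's right spine, collecting
// normalized heads, until an atom is left as the exposed tail.
function normalize(exp) {
  if (!Array.isArray(exp)) return exp;   // atoms stay as they are
  const heads = [];
  let rest = exp;
  while (Array.isArray(rest)) {
    heads.push(normalize(rest[0]));      // normalize each head in turn
    rest = rest[1];                      // descend along the right spine
  }
  return [heads, rest];                  // ([h1, ..., hn] tail)
}

// Render the normalized form in the ([...] tail) notation.
function show(n) {
  if (!Array.isArray(n)) return n;
  return "([" + n[0].map(x => show(x)).join(", ") + "] " + show(n[1]) + ")";
}
```

Running `show(normalize(...))` on the cup/carpet sentence reproduces the normalized string above, with `carpet` as the only exposed tail.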


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Esperas automated inference engine
« Reply #25 on: October 09, 2019, 05:30:59 pm »
Two more questions now:

1) Why does the sentence not sound grammatically correct? Or what is it trying to do?
2) I'm unsure if you are adding on/validating words/phrase parts. When? Where? Why? Do you add a new segmentation + word/phrase as long as the cup will soak an object with property x?


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Esperas automated inference engine
« Reply #26 on: October 09, 2019, 05:56:14 pm »
Lock, I'll write about it soon. The algorithm is changing rapidly now, and I want to settle on something steady before implementing it. I want an algorithm that is clean, simple and fully usable. Right now it is only partially usable, and that's not enough.

[Edit]
Lock, you ask many questions, but I find one of them particularly surprising for not being understood. It is about saw-tooth code and brackets distributed vertically. It is a great programming invention that improves readability, and it seems to us programmers to be god-given, but actually someone back then had to invent it, I suppose. It is all about reading braces top-down, where the upper brace opens a scope and the lower brace closes it. A great invention; I hope you will find it as amazing as I do.
« Last Edit: October 09, 2019, 07:41:48 pm by ivan.moony »


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Esperas automated inference engine
« Reply #27 on: October 13, 2019, 11:29:23 am »


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Implika automated inference engine
« Reply #28 on: November 02, 2019, 07:45:33 am »
I've changed the name of the library from Esperas to Implika. Esperas is meant to be a bundle of a graphical user interface and Implika.

Implika was originally conceived as a JavaScript nest for the Logos metatheory language, but it turned out it may be a stand-alone project, replacing Logos.

I just finished a JavaScript implementation. That's about it; I hunted some bugs, but I'm sure there are more hiding within. Time will tell; I'm on it. The whole implementation took 152 lines of JavaScript code, while it resembles a rule-based inference engine with a pattern matching and implication mechanism that aims to be Turing complete. There is no distinction between knowledge base representation and rules: each expression behaves as both a rule and data, from which implicit information may be derived by other rules.

It inputs an s-expression and outputs a JSON object enriched by a ForeChain property (the forward chainer). Backward chaining is supported indirectly. S-expressions are interpreted in such a way that the left-side expression implies the right-side expression, and this aspect goes recursively deep, down to atoms. Note that there are no other operators, just implication in the form of pairs. The BNF of the input looks like this:
Code
    s-exp := ()
           | constant
           | @variable
           | (s-exp s-exp)
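
A recursive-descent reader for this grammar fits in a dozen lines (a hypothetical sketch in JavaScript, representing `(s-exp s-exp)` as a two-element array; Implika's own parser may differ):

```javascript
// Hypothetical parser for the BNF above: atoms are bare words,
// "@x" is a variable, "()" the empty expression, "(x y)" a pair.
function parse(src) {
  const tokens = src.match(/[()]|[^\s()]+/g) || [];
  let pos = 0;
  function expr() {
    const t = tokens[pos++];
    if (t !== "(") return t;                        // constant or @variable
    if (tokens[pos] === ")") { pos++; return []; }  // () - empty expression
    const left = expr();
    const right = expr();
    if (tokens[pos++] !== ")") throw new Error("expected ')'");
    return [left, right];                           // (s-exp s-exp)
  }
  return expr();
}
```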

The only rule of inference is modus ponens:
Code
  (A B)          A
---------------------
        B
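
A single modus ponens step with `@variable` patterns can be sketched as follows (my illustration of the idea, not Implika's implementation; pairs are two-element arrays, atoms are strings):

```javascript
// Sketch of one modus ponens step: unify the antecedent of (A B)
// against a known fact, then substitute the bindings into B.
function match(pat, fact, env = {}) {
  if (typeof pat === "string" && pat.startsWith("@")) {
    if (pat in env) return match(env[pat], fact, env);  // already bound
    env[pat] = fact;                                    // bind variable
    return env;
  }
  if (typeof pat === "string" || typeof fact === "string")
    return pat === fact ? env : null;                   // atoms must agree
  const e = match(pat[0], fact[0], env);                // match both halves
  return e && match(pat[1], fact[1], e);
}

function substitute(exp, env) {
  if (typeof exp === "string") return exp in env ? env[exp] : exp;
  return [substitute(exp[0], env), substitute(exp[1], env)];
}

function modusPonens(rule, fact) {      // rule = [A, B], i.e. (A B)
  const env = match(rule[0], fact, {});
  return env && substitute(rule[1], env);
}
```

For example, the rule `((eats @x) (feeds @x))` applied to the fact `(eats cat)` yields `(feeds cat)`; a fact that fails to unify yields nothing.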

Who would say that this little thingie hides a system capable of describing any kind of computation?

One may find interesting the appearance of a logic-style expert system among the examples. The knowledge base is stated, and the query is put in the form of a conjunction. The answer appears in the ForeChain property of the output.

I still have to finish the description of the implementation of Hilbert calculus within the system, but you can already play with it within the provided test suite. Hilbert calculus is mainly used to describe different kinds of logic, inference systems, and lambda calculus (a Turing-complete representation of using functions to calculate results) in various reincarnations. Provided that Implika can describe Hilbert calculus, it can describe any kind of system.

I also see Implika as something capable of replacing the core AtomSpace from the OpenCog project. Maybe, as a curiosity, I'll drop a post in their group once that description is fully finished, but I doubt they will be totally crazy about it. They are a bunch of PhDs, and they have invested too much time in polishing AtomSpace to let it go so easily.

Read about it here: https://github.com/e-teoria/Implika
Test it in action here: https://e-teoria.github.io/Implika/test/
« Last Edit: November 02, 2019, 08:17:13 am by ivan.moony »


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Implika automated inference engine
« Reply #29 on: December 07, 2019, 08:08:09 pm »
I believe I found a design flaw in the algorithm relating to a phantom forward-chaining update, but I'll correct it as soon as possible. I'm also upgrading the current behavior of Implika to make contexts hierarchically dependent, which would simplify general problem setups (and defining Hilbert calculus). All of this requires rewriting the entire algorithm, and I'm on it.
« Last Edit: December 07, 2019, 08:33:56 pm by ivan.moony »

 

