Thinkbot

  • 44 Replies
  • 1733 Views

Zero

  • Trusty Member
  • *********
  • Terminator
  • *
  • 869
  • Ready?
    • Thinkbots are free
Thinkbot
« on: December 16, 2019, 11:22:52 AM »
The thinkbot project, at last.

https://github.com/ThinkbotsAreFree/thinkbot
Thinkbots are free, as in 'free will'

Zero
Re: Thinkbot
« Reply #1 on: December 17, 2019, 01:58:52 PM »
Basic loading structures are done. https://github.com/ThinkbotsAreFree/thinkbot/wiki

Now's the moment of truth: the main loop.

Hey, nice xmas skin, Freddy!

Zero
Re: Thinkbot
« Reply #2 on: December 17, 2019, 04:04:52 PM »
I'm exploring Seneca, an interesting "microservices toolkit for Nodejs". I like the idea.

Zero
Re: Thinkbot
« Reply #3 on: December 18, 2019, 08:34:29 AM »
Switching back to Lunr instead of elasticLunr, because elasticLunr doesn't handle in-query AND/OR combinations correctly. Smells like crap.

And... elasticLunr is ugly, while Lunr is beautiful.

Zero
Re: Thinkbot
« Reply #4 on: December 18, 2019, 09:45:32 AM »
Arrrrgh, switching to lunr-mutable-indexes because the lunr 2.x index is immutable.  ;D
The lunr maintainer makes decisions as if he knows what's good better than the library users do.

Back to work.



I need to think out loud now. Say we're in the middle of a cycle. We just ran a search, and we have results like this JSON:
Code: [Select]
[
  {
    id: 'module1.f5',
    relation: 'EvaluationLink beautiful $X',
    score: 1.5308861812688752
  },
  {
    id: 'test.module2.f5',
    relation: 'EvaluationLink beautiful $X',
    score: 1.5308861812688752
  },
  {
    id: 'module1.f3',
    relation: 'EvaluationLink boring $X',
    score: 0.45325544674057694
  },
  {
    id: 'module1.f4',
    relation: 'EvaluationLink young $X',
    score: 0.45325544674057694
  },
  {
    id: 'test.module2.f3',
    relation: 'EvaluationLink boring $X',
    score: 0.45325544674057694
  },
  {
    id: 'test.module2.f4',
    relation: 'EvaluationLink young $X',
    score: 0.45325544674057694
  }
]

We have the scores and the relations. Now we want to know what to do next. The obvious idea is to pattern-match the results to find the relevant piece of code to execute in response. But we don't want over-complicated pattern definitions.

Seneca uses key-value patterns, like "role:math,cmd:sum". Here we have lists, but these lists are ordered, which means they're like key-value pairs whose keys are "first item, second item, ...".

A list like "ImplicationLink f2 f3" can be translated to
Code: [Select]
{
  "type": "ImplicationLink",
  "rest": "f2 f3",
  "i1": "f2",
  "i2": "f3"
}

which Seneca can handle. The "rest" key is there to reward "full hits" more than just the sum of partial matches.
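That translation is mechanical; here is a minimal sketch of it (a hypothetical helper, not code from the repo):

```javascript
// Turn an ordered relation list like "ImplicationLink f2 f3" into a
// flat key-value pattern. Positional keys i1, i2, ... carry the items,
// and "rest" carries the whole argument tail so a full match can be
// rewarded more than the sum of partial matches.
function toPattern(relation) {
    const tokens = relation.split(' ');
    const pattern = { type: tokens[0], rest: tokens.slice(1).join(' ') };
    tokens.slice(1).forEach((item, i) => { pattern['i' + (i + 1)] = item; });
    return pattern;
}
```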

That's a simple solution. However, maybe it lacks flexibility.

And it doesn't take multiple inputs into account anyway, with scores and all.



There's another solution. If relations have fixed arity (for example, AndLink always has 2 arguments), then we can rewrite the search results as a list of trees, replacing ids with the relations they identify.

f1: ImplicationLink f2 f3
f2: AndLink f4 f5
f3: is $X boring
f4: is $X beautiful
f5: is $X young

gets rewritten as ImplicationLink AndLink is $X beautiful is $X young is $X boring
We obtain prefix-notation code, which can be run through... a chatbot! Yep, again.
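The expansion can be sketched like this (defs is the toy table above; the function is mine, not the project's code):

```javascript
// Repeatedly substitute ids with the relations they name, until the
// string stops changing; the result is prefix-notation code.
const defs = {
    f1: 'ImplicationLink f2 f3',
    f2: 'AndLink f4 f5',
    f3: 'is $X boring',
    f4: 'is $X beautiful',
    f5: 'is $X young'
};

function expand(relation) {
    let previous = null;
    while (previous !== relation) {
        previous = relation;
        relation = relation.split(' ').map(t => defs[t] || t).join(' ');
    }
    return relation;
}
```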

Maybe mixing the two solutions can bring out interesting possibilities. Chatbots are good at replying, so they could provide the next search query, and Seneca could provide local execution of thought.

Or... we can remove ambiguity by using opening/closing tags, like
ImplicationLink AndLink Is $X beautiful EndIs Is $X young EndIs EndAndLink Is $X boring EndIs EndImplicationLink
« Last Edit: December 18, 2019, 01:18:47 PM by Zero »

Zero
Re: Thinkbot
« Reply #5 on: December 18, 2019, 01:29:17 PM »
- make a search
- get results
- make "overview sentence" out of results
- send overview sentence to rivescript
- rivescript renders preliminary query
- run seneca against results
- seneca microservices act
- with final query: loop
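The steps above can be sketched as one control loop; every engine here is an injected stand-in function (none of these names come from the repo), only the flow is the point:

```javascript
// One run of the cycle: search, summarize, ask rivescript for a
// preliminary query, let seneca act and produce the final query, loop.
function runCycle(engines, query, maxTurns) {
    const trace = [];
    while (query && maxTurns-- > 0) {
        const results = engines.search(query);           // make a search, get results
        const sentence = engines.overview(results);      // "overview sentence"
        const preliminary = engines.riveReply(sentence); // rivescript renders preliminary query
        query = engines.senecaAct(preliminary, results); // seneca microservices act
        trace.push(query);                               // with final query: loop
    }
    return trace;
}
```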

Zero
Re: Thinkbot
« Reply #6 on: December 18, 2019, 06:32:07 PM »
All candy.

Listening to this: https://www.radio.fr/s/synthwaveretrowave

Also, switching back to lunr, because lunr-mutable-indexes just rebuilds the index every now and then, which I can do by myself (and... lunr is beautiful while lunr-mutable-indexes is ugly). Rivescript brains would need to be rebuilt too, whenever the system learns something.

Learning is sooo overrated in the AI world...

I do have my "overview function" now.

Code: [Select]
function overview(results) {
    // Index the results by id so expansion can look them up quickly.
    var ref = {};
    results.forEach(result => { ref[result.id] = result; });
    results.forEach(result => {
        var takenTokens = [];
        var tokenCount = -1;
        result.expandedScore = result.score;
        // Keep expanding until a pass adds no new token, i.e. the
        // relation no longer contains any id known to the index.
        while (tokenCount != takenTokens.length) {
            tokenCount = takenTokens.length;
            result.expandedRelation = result.relation.split(' ').map(token => {
                if (ref[token]) {
                    // Count each referenced result's score only once.
                    if (!takenTokens.includes(token)) {
                        takenTokens.push(token);
                        result.expandedScore += ref[token].score;
                    }
                    return ref[token].relation;
                }
                return token;
            }).join(' ');
            result.relation = result.expandedRelation;
        }
    });
    return results;
}
It works. Going to plug Rivescript in.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *****************
  • Sentinel
  • *
  • 3767
  • First it wiggles, then it is rewarded.
Re: Thinkbot
« Reply #7 on: December 18, 2019, 07:21:32 PM »
where's the download button, for the song...

i think i found it, if not then it gone:
https://www.youtube.com/watch?v=4rYfodqEF7E
https://www.youtube.com/watch?v=7RK6mLL_gog
I think goaty will like the one above though hehe
I came across H.P. Lovecraft somewhere lol..................

It's gone dude....but I think I found it below let me see now.......hmm, nope
https://www.youtube.com/watch?v=N6O1o3hGgtQ

EDITED
« Last Edit: December 18, 2019, 11:38:00 PM by LOCKSUIT »
Emergent

Zero
Re: Thinkbot
« Reply #8 on: December 18, 2019, 08:50:56 PM »
here

Zero
Re: Thinkbot
« Reply #9 on: December 20, 2019, 12:43:18 PM »
Data files now look something like:

# meta

author:   zero
date:     2019/12/16
keywords: eno test2 content

# data

f1: ImplicationLink, f2, f3
f2: AndLink, f4, f5
f3: EvaluationLink, boring, $X
f4: EvaluationLink, young, $X
f5: EvaluationLink, beautiful, $X

# graph

test1:
i = one, *, three
o = fruit 1

test2:
i = one, two
o = fruit 2

test3:
i = one, *
o = fruit 3

test2:
i = *, three
o = fruit 4


#data is for the search engine, #graph is for the graphmaster.
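A minimal reader for the `# data` lines above could look like this (assumed line shape `id: Type, arg1, arg2`; this is a guess, not the actual enodoc reader):

```javascript
// Parse "id: Type, arg, arg" lines from a "# data" section into a map
// of id -> token list, ignoring any line that doesn't match the shape.
function parseData(section) {
    const facts = {};
    section.split('\n').forEach(line => {
        const m = line.match(/^(\w+):\s*(.+)$/);
        if (m) facts[m[1]] = m[2].split(',').map(s => s.trim());
    });
    return facts;
}
```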

Zero
Re: Thinkbot
« Reply #10 on: December 20, 2019, 09:39:24 PM »
I now have
- my home grown graphmaster,
- my mini search engine,
- a basic enodoc reader import.

I'm adding an observer. Various engines in the loop will register interest in read/write access to certain items in the search index or graphmaster, and get activated when things happen.

The search object is really the swiss army knife here; it acts as the main memory. Say, if the core.ObjectType of eng2 is core.Engine, then search.neighbourhood(["eng2", "core.ObjectType"]) gives ["core.Engine"], while search.neighbourhood(["core.Engine", "core.ObjectType"]) gives ["eng1", "eng2"].
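The neighbourhood lookup could behave like this toy version (the real search object lives in the repo; this is only a guess at its contract):

```javascript
// Given triples and a list of known items, return the items of every
// triple that contains all the known ones, minus the known items.
function neighbourhood(triples, known) {
    const out = [];
    triples.forEach(triple => {
        if (!known.every(k => triple.includes(k))) return;
        triple.forEach(item => {
            if (!known.includes(item) && !out.includes(item)) out.push(item);
        });
    });
    return out;
}

const facts = [
    ['eng1', 'core.ObjectType', 'core.Engine'],
    ['eng2', 'core.ObjectType', 'core.Engine']
];
```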

What's more powerful than graphmaster + inverted index + observer pattern ...?

Edit: yeah, beowulf  ;)

Zero
Re: Thinkbot
« Reply #11 on: December 21, 2019, 10:13:10 AM »
Ivan, how do you approach uncertainty in logic?


ivan.moony

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1357
    • e-teoria
Re: Thinkbot
« Reply #12 on: December 21, 2019, 10:53:56 AM »
Quote from Zero: "Ivan, how do you approach uncertainty in logic?"

I'm glad you asked, that is an interesting topic. See autoepistemic logic and negation as failure. I try to simulate it with complements. When written `(Ac) -> X`, `X` will be derived if `A` can't be derived (is not certain). That's the starting construct. I further complicate it by introducing `syntax` and `semantics` definitions, and I write: if the complement of the top grammar symbol is derived, then `Wrong` is derived as a failure. But this requires asserting a dual for each regular implication, so each `A -> B` is paired with `Bc -> Ac`. I'm still having issues with this approach.

Note that complement is not the same as negation. Negation of A is the total opposite of A (¬A must be explicitly stated to hold), while the complement of A is anything but A (Ac holds when A is not stated).
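The `(Ac) -> X` construct can be illustrated with a toy backward-chainer (the names and shape are this sketch's, not Ivan's actual system):

```javascript
// Derive a goal from facts and rules [premise, conclusion]. A premise
// ending in "c" is a complement: it holds exactly when the base atom
// cannot be derived (negation as failure, not classical negation).
function derivable(goal, facts, rules) {
    if (facts.includes(goal)) return true;
    return rules.some(([premise, conclusion]) => {
        if (conclusion !== goal) return false;
        if (premise.endsWith('c')) {
            return !derivable(premise.slice(0, -1), facts, rules);
        }
        return derivable(premise, facts, rules);
    });
}
```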

I think it's about how logic treats expressions. In pure logic we can calculate whether some expression is always true or always false. But uncertainty involves cases when a formula is merely satisfiable (may be true, may be false).

OpenCog wires in uncertainty by annotating each starting formula with a decimal between 0 and 1. PLN then calculates those values for inferred formulas.
« Last Edit: December 21, 2019, 11:28:36 AM by ivan.moony »
There exist some rules interwoven within this world. As much as it is a blessing, so much it is a curse.

LOCKSUIT
Re: Thinkbot
« Reply #13 on: December 21, 2019, 11:46:16 AM »
https://medium.com/datadriveninvestor/uncertainty-in-machine-learning-predictions-fead32abf717

When you give a network some big, diverse data of words, it can learn unsupervised relations like cat=dog, and how much boat=meow. Uncertainty is when you have little data on some specific topic, or lots of data that counters itself (e.g. my meow, my boat, his meow, his boat; now w2v thinks boat=meow!). Or you show it a 7 that looks like a 1 and both activate equally; if your brain quantizes w2v you may see zero difference, e.g. not =1 0.2624%, =7 0.2637%, but only =1 0.2%, =7 0.2%! In such a case you go with the best choice, or a guess (or treat it as both a 1 and a 7). But in a thinking brain, you can ask Master for new data or go another route (generating your own data at the same time) if you get stumped on how to invent better HDDs.

So:
1) big diverse data
2) low quantization
3) go with one or both possibilities to see what happens next
4) go another route + generate your own data and ask/search for new data

Zero
Re: Thinkbot
« Reply #14 on: December 21, 2019, 02:38:31 PM »
Thank you for your quick answers. Honestly, I'm not intelligent enough to clearly understand everything you said, Ivan. I carefully read the Wikipedia articles you mentioned.

I noticed how OpenCog uses PLN, but I don't understand how confidence can be anything other than crisp. To me, either the program knows something (because it's inside of itself), or the program believes something (because it's outside of itself), in which case it's directly a probability, without fractional uncertainty. Subjectivity is one component of probability; like any other component, it doesn't deserve any special handling.

Also, I'm interested in multi-valued logic. For example, I find it nice to imagine:
- True,
- False,
- Both true and false,
- Neither true nor false,
- Currently unknown,
- Meaningless.
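As a toy, those six values fit in a small table; here negation only flips True and False and leaves the rest alone (one arbitrary design choice among many):

```javascript
// A six-valued truth type matching the list above, with a negation
// that flips True/False and keeps the other four values fixed.
const Truth = Object.freeze({
    TRUE: 'true',
    FALSE: 'false',
    BOTH: 'both',
    NEITHER: 'neither',
    UNKNOWN: 'unknown',
    MEANINGLESS: 'meaningless'
});

function not(value) {
    if (value === Truth.TRUE) return Truth.FALSE;
    if (value === Truth.FALSE) return Truth.TRUE;
    return value;
}
```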