the consnet

  • 8 Replies
  • 4571 Views
*

Zero

  • Eve
  • 1287
the consnet
« on: June 26, 2018, 11:02:01 pm »
For a long time, I've been looking for a data structure on which I could apply pattern learning. Ten years ago, where my research stood, I had a global picture of the building, but I was missing the elemental brick that would let me build it. I've found it!

A cons cell, in the Lisp world, is a memory cell that holds two things, generally one value and one pointer to the next cons cell, to make lists. I steal the name "cons cell" for something that's a bit like it, but not entirely. I'll use the following syntax:

    identifier: left content, right content

Ids can be human-made, with meaningful names, or automatically generated, yielding id1, id2, id3, ...etc.

So, here is how I would represent a simple situation: I have a car.

    id1: Zero, Berlingo
    id2: id1, have
    id3: Berlingo, car
    id4: id3, is a

You get the idea. First you link two things, then you link that link to the type of link it belongs to. "id1" is linked to "have", which means it's a "have" kind of link.
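To make it concrete, here is a minimal sketch of such a cell store in JS (the names "cons" and "cells" and the auto-increment scheme are mine, not the actual prototype):

Code
var cells = {};      // id -> [left content, right content]
var nextId = 0;

function cons(left, right) {     // create one cell, return its id
    var id = "id" + (++nextId);
    cells[id] = [left, right];
    return id;
}

// "I have a car":
cons(cons("Zero", "Berlingo"), "have");    // creates id1, then id2
cons(cons("Berlingo", "car"), "is a");     // creates id3, then id4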

It's not a powerful representation. It's verbose, and obviously not meant to be written directly by developers. But it has two properties:
1- It is 100% liquid: you can go as deep as you need. Say you're not sure that "id1" is a "have" kind of link; you can represent that easily by adding links about "id2" (see the example right after this list).
2- It is a good candidate for pattern learning and compression, which is crucial to create a conscious program.
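For instance, a confidence annotation about "id2" could itself be just two more cells (one possible encoding among many): id5 links "id2" to a value, and id6 marks id5 as a "confidence" kind of link.

    id5: id2, 0.4
    id6: id5, confidence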

How do we compress things? By creating a new cell that means the simultaneous presence of several cells.

    id3: Berlingo, car
    id4: id3, is a

    id7: Peugeot, car
    id8: id7, is a


We have this pattern:

    %1 = ?, #1
    ?  = %1, #2

    ////////////// where

    ?   appears only once
    %   is the same within one block
    #   is the same in both blocks


We make one cell "id9" out of #1 and #2 (here "car" and "is a"), and we get

    id10: Berlingo, id9
    id11: Peugeot, id9


We can see that id9 means "is a car". "id9" forms a recognized situation, and what happens in this now known situation can be linked to "id9", not to a swarm of disseminated cells.
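Here is a naive sketch of that compression step, building on the little JS store above (the brute-force double scan is mine, and it skips the "? appears only once" check for brevity):

Code
// Find every two-cell block   a: [x, t1]   b: [a, t2]
// and rewrite it as a single cell [x, merged].
function compress(t1, t2) {
    var merged = cons(t1, t2);             // e.g. id9: car, is a
    var found = [];
    for (var a in cells)                   // collect matching blocks first
        if (cells[a][1] === t1)
            for (var b in cells)
                if (cells[b][0] === a && cells[b][1] === t2)
                    found.push([a, b]);
    found.forEach(function(pair) {         // then rewrite them
        var x = cells[pair[0]][0];
        delete cells[pair[0]];             // the two old cells...
        delete cells[pair[1]];
        cons(x, merged);                   // ...become one: e.g. id10: Berlingo, id9
    });
    return merged;
}

compress("car", "is a");   // id3/id4 and id7/id8 collapse into id10 and id11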

I believe this is an important step in my path to AI.

I made a prototype out of this, in JS. It counts things that could be compressed, but I still don't know whether it should compress the big counts first or the small counts first. I'm leaning toward small first.
« Last Edit: March 26, 2022, 10:40:28 am by Zero »

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: the consnet
« Reply #1 on: June 27, 2018, 09:04:26 am »
You're dealing with a very powerful concept there. I wonder where you'll end up with it.

Though, I'm not sure what kind of compression you are talking about.

*

Zero

  • Eve
  • 1287
Re: the consnet
« Reply #2 on: June 27, 2018, 10:40:56 am »
I believe it will end up... where the solution lies!

Compression is the very key to consciousness. There's a video in another thread. If I remember correctly, the guy explains how compression fits in the system they built.

Here is how I see things:
Imagining, perceiving and acting rely on the same "summarizing" engine.

When you learn how to drive a car, you first have to be really focused on what to do, because nothing is automated yet.

But after repeating the same small actions together again and again, these low-level actions get summarized by a single higher-level action, or rather by several higher-level configurable actions that can run autonomously. It's like function definitions, really.

Then once you're used to driving, you can drive and talk to someone at the same time, because the driving engine is almost entirely autonomous. Only willingness or unexpected events will draw attention back to the road, with heavy focus.

While multiple tasks are running, focus often switches from one to another, not to act at the low level, but to pilot the acting engine, which is like modifying the parameters you give to a function.

Wikipedia, as always, is a good resource on the topic.
I was using the word "summarize", and now I use "compress", but it's the same idea.

Say you have a conscious program. You save a snapshot of it, so you can resume execution later, or on another machine. The whole program's mind, in its state, is contained in 1 file. If the program is conscious, it means that one part of its mind contains a representation of its own state. In other words, the state contains the state. The only way you can do this is by compressing the state, to obtain some sort of fractal structure. It's like looking in a mirror when you're between two mirrors, you know, when you see your image reproduced infinitely.

Concretely, pattern learning and compression are two sides of the same coin. If you learn a pattern, then you create a minified version of this pattern, like "is a car" meaning both "is a" and "car".

Imagine the base of a pyramid. You have "instances" that can happen: stimuli, events, states, whatever. These instances "bubble" up: when an instance exists, every pattern this instance is a part of is "excited", and gets its internal counter incremented by one. When a pattern is fully excited (for example "is a car" got incremented twice, once for "is a" and once for "car"), it outputs an instance of itself. This process builds a second level of the pyramid, with instances of simple patterns. Then a third level, and so on.
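A sketch of that "bubbling", in the same JS spirit (the counter-reset rule is my guess at the mechanics):

Code
var patterns = [
    { id: "id9", parts: ["is a", "car"], count: 0 }   // "is a car"
];

// An instance excites every pattern it is a part of; a fully
// excited pattern resets and emits an instance one level up.
function instance(x, emitted) {
    for (var i = 0; i < patterns.length; i++) {
        var p = patterns[i];
        if (p.parts.indexOf(x) >= 0 && ++p.count === p.parts.length) {
            p.count = 0;
            emitted.push(p.id);
        }
    }
}

var level2 = [];
instance("is a", level2);   // "is a car": counter goes to 1
instance("car", level2);    // counter reaches 2: it fires
console.log(level2);        // ["id9"]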

The shape of the current pyramid is the program's interpretation of the current situation. This current situation probably leads to an obvious action (even if this action is "wondering what to do", it's still an obvious action). This action is summarized (example: I need to eat now), symbolized. We now have something new at the top of our pyramid. Since actions are done but also sensed, they're compressed just like any other situation (it's the idea of acting = sensing). So we're able to decompose the top-level action representation into its lower-level components, until things get doable (move arms, ...etc.).

I hope I'm being clear; my English is not accurate.

Edit:

When does consciousness pop up?

You remember this:
Code
function log(x) { console.log(JSON.stringify(x)); }


var entity = {};


entity.externalSensors = [0, 0, 0, 0, 0, 0, 0, 0];


entity.externalActuators = [0, 0, 0, 0];


entity.internalSensors = [0, 0, 0, 0];


// Four rows of 16 values: each row is one snapshot of
// externalSensors (8) + internalSensors (4) + externalActuators (4).
entity.instantMemory = [
    [0, 4, 4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
];


entity.target = [18, 1, 2, 3];


entity.behavior = {

    3: [0, 2, 3, 0],
    5: [0, 1, 0, 0],
    8: [1, 0, 2, 0]
}


// Act, and sense the act: every actuator change is echoed to the internal sensors.
entity.setActuator = function(actuatorId, newValue) {

    entity.externalActuators[actuatorId] = newValue;

    entity.internalSensors = [0, actuatorId, newValue, 0];
}


// Append the newest snapshot as a row and drop the oldest one.
entity.refreshInstantMemory = function() {

    entity.instantMemory.push(
        entity.externalSensors
            .concat(entity.internalSensors)
            .concat(entity.externalActuators)
    );

    entity.instantMemory.shift();
}


entity.logInstantMemory = function() {

    for (var i=0; i<entity.instantMemory.length; i++)
        console.log(JSON.stringify(entity.instantMemory[i]));
    console.log();
}


// Move one of the four attention targets, and sense that too.
entity.setTarget = function(targetSlot, newAddress) {

    entity.target[targetSlot] = newAddress;

    entity.internalSensors = [1, targetSlot, newAddress, 0];
}


// Map a flat address 0..63 onto the 4x16 instantMemory grid.
entity.xy = function(n) { return { x: n%16, y: Math.floor(n/16) }; }


// Read the four memory cells the four targets currently point at.
entity.fetch = function() {

    var values = [];

    for (var t=0; t<4; t++) {

        var addr = entity.xy(entity.target[t]);
        values.push(entity.instantMemory[addr.y][addr.x]);
    }

    return values;
}


// Interpretation: twice the largest value that appears at least twice among the four fetched values.
entity.interpret = function(values) {

    var candidate = 0;

    for (var i1=0; i1<3; i1++) {

        for (var i2=i1+1; i2<4; i2++) {

            if ((values[i1]==values[i2]) && (values[i1]>candidate))

                candidate = values[i1];
        }
    }

    return candidate*2;
}


// Dispatch an action triple; the guard covers situations with no recorded behavior.
entity.do = function(action) {

    if (!action) return;
    if (action[0]==0) entity.setActuator(action[1], action[2]);
    if (action[0]==1) entity.setTarget(action[1], action[2]);
}



entity.run = function() {

    while (1) {

        entity.refreshInstantMemory();

        entity.do(
            entity.behavior[
                entity.interpret(
                    entity.fetch())]);
    }
}


entity.run();

It's rather easy to follow, even though it's not meant to be used as is, since the entity isn't connected to anything. It just shows the outline of the system.

We start from a typical program:


       →  →  →  External sensors  →  →  →
     ↗                                    ↘
( WORLD )                               [ PROGRAM ]
     ↖                                    ↙
       ←  ←  ← External actuators ←  ←  ←



And we add an internal sensors loop:


       →  →  →  External sensors  →  →  →         ←  Internal sensors  ←
     ↗                                    ↘    ↙                         ↖
( WORLD )                               [ PROGRAM ]                        ↑
     ↖                                    ↙    ↘                         ↗
       ←  ←  ← External actuators ←  ←  ←         →   Activity logger  →



So the program is part of the world it observes and learns from.
« Last Edit: June 27, 2018, 11:30:24 am by Zero »

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: the consnet
« Reply #3 on: June 27, 2018, 11:35:25 am »

*

Zero

  • Eve
  • 1287
Re: the consnet
« Reply #4 on: June 27, 2018, 02:07:35 pm »
Generalization, precisely.

The hard part is generalizing while still taking care of unusual properties, features, differences. For example, you would say "a big dog". That's straightforward: the expression returns the common mental picture of a dog, plus a modifier "big".
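In base consnet syntax, one guess at that encoding could be:

    id1: Rex, dog
    id2: id1, is a
    id3: Rex, big
    id4: id3, is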

It's really to the conceptual model, the one created by the pyramid, that we can attach reason and logic. You can talk about someone and say "he's afraid of big dogs". It makes sense because "dog" is the generalized concept of a lot of things (cells) that, all together, form our idea of a dog. Actually, ten years ago, I was calling generalizing cells "concepts" and generalized cells "instances".

In a running system, there can be several tasks running in parallel, like driving a car while talking to a passenger. If the road is monotonous and boring, you'll focus on the discussion and drive your car in some sort of "auto-pilot" mode; we all know this feeling. You can even take the wrong path just because you're so used to it that your hands automagically turned the wheel.

What that means is that once the pyramid is up and receiving data flow from its base, the top of the pyramid is what fires pre-recorded responses. We land in a well-known field: good things we reproduce, bad things we don't.

I'm not saying that everything in the system should be in the consnet. I don't know, maybe it should, but maybe not. We're not aware of everything that happens in our heads, far from it. Currently I'd rather opt for a statistical calculus, to determine what reaction should pop up in a given situation.

However, in the end, we still have to actually do things. Think, speak, move. We have only a small structuring element: pairs of things. How we program with this, that's the question.

I just came up with a pet language, 1arg asm, in another thread:
hi,

based on this old thread, here's a super simple esolang. it's an asm: command-colon-argument. commands take only one argument. "this" is the result of the last command. "about" puts something in "this".


[fib]
about: argument
if less than: 3
return: 1
about: argument
decrement: 1
fib: this
store as: fib1
about: argument
decrement: 2
fib: this
increment: fib1
return: this


now we use one-character names for common commands


[fib]
? @
< 3
^ 1
? @
- 1
fib #
$ fib1
? @
- 2
fib #
+ fib1
^ #


and we lose the brackets and newlines, so it looks like hell


fib: ? @ < 3 ^ 1 ? @ - 1 fib # $ fib1 ? @ - 2 fib # + fib1 ^ #

Whether or not it's Turing complete is irrelevant: I did not publish the entire language, just a few keywords to show the general idea. No one said it has to be low-level: you could do very complicated stuff if you have very advanced keywords. I'm currently learning Golang, and I thought a language as simple as this would be enough to "drive" very high-level Golang functions and routines. Take this guy for instance, he made a lovely machine called FooVM (look at that name, the guy's a god of social efficiency). When I saw it, I said hey, why restrict ourselves to mimicking register-based hardware? We can do whatever we want, and that's what I do with 1argasm.

1argasm has an interesting property: one code line takes only two things, a command, and 1 argument. Two things. Like a cons cell. If consnet is a database structure, then 1argasm is its "imperative style" counterpart. Obviously we could do logic programming or whatever, and it's probably going to be needed at some point.
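To show that the pairs really are executable, here is a toy JS interpreter for the fib listing above (the dispatch table, and the rule that a failed "if less than" skips the next line, are my guesses at the semantics):

Code
function run(program, name, argument) {
    var lines = program[name];
    var vars = {};
    var acc = null;                          // "this"

    function val(a) {                        // resolve an argument
        if (a === "this") return acc;
        if (a === "argument") return argument;
        if (a in vars) return vars[a];
        return Number(a);
    }

    for (var i = 0; i < lines.length; i++) {
        var cmd = lines[i][0], arg = lines[i][1];
        if      (cmd === "about")        acc = val(arg);
        else if (cmd === "if less than") { if (!(acc < val(arg))) i++; }  // skip next line
        else if (cmd === "decrement")    acc -= val(arg);
        else if (cmd === "increment")    acc += val(arg);
        else if (cmd === "store as")     vars[arg] = acc;
        else if (cmd === "return")       return val(arg);
        else                             acc = run(program, cmd, val(arg));  // a call, e.g. "fib"
    }
}

var program = {
    fib: [["about", "argument"], ["if less than", "3"], ["return", "1"],
          ["about", "argument"], ["decrement", "1"],    ["fib", "this"],
          ["store as", "fib1"],  ["about", "argument"], ["decrement", "2"],
          ["fib", "this"],       ["increment", "fib1"], ["return", "this"]]
};

console.log(run(program, "fib", 10));   // 55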

What interests me right now is that this "consnet assembly" is readily representable in the consnet. So we should be able to compress it, and then use it both to interpret things and to apply things.

Expressing the fib function from 1argasm in consnet base syntax would yield:

id01: about, argument
id02: if less than, 3
id03: return, 1
id04: about, argument
id05: decrement, 1
id06: fib, this
id07: store as, fib1
id08: about, argument
id09: decrement, 2
id10: fib, this
id11: increment, fib1
id12: return, this

which is totally acceptable. To be executable though, we would need to link "id01" to "id02", and "id02" to "id03", ...etc. The left/right order matters, so one link per pair of commands is enough to encode the flow of computation. And, evil gotoz become a breeze. Well, it's an assembly after all.
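Concretely, one extra cell per consecutive pair would do (again, just one possible encoding):

    id13: id01, id02
    id14: id02, id03
    id15: id03, id04
    ...

and a goto is then nothing more than a flow cell whose right side points somewhere unusual.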

Hand-coded procedures would soon be sensed and generalized by the program, which would "get used to them". Comparing the outcomes of various situations, it would become able to choose to apply some procedures instead of others. For this to work, it needs a satisfaction system. Procedures would even melt together.

*

Zero

  • Eve
  • 1287
Re: the consnet
« Reply #5 on: July 03, 2018, 12:48:38 pm »
From now on, I won't be able to come visit the forum between 17:00 and 9:00. I'll have time to think! Last night, I had a flash-thought about what I was supposed to do.

I have to admit that deep "hollywood style" AGI is currently out of reach, at least for me. I won't try to reach it.

My opinion about intelligent agents is that, even though learning capabilities matter, initial knowledge is very important. Chatbots are a very good example of applied technology: clever algorithms are employed to create concrete, usable programs that imitate adult humans (not empty-brained babies). Sitting in front of an IDE to write hundreds of Question/Answer couples (among other things) seems natural to all of us. Then why would it be weird to write hundreds of Event/Thought couples, to build an agent's mind one line at a time?

I mean, take the concept of a chatbot, and extend it as much as you can. What's in a mind?
Emotions,
logic,
memories,
associations,
inner-discourse,
...etc.

Why don't we fake'em all?

Emotions can be easy to emulate. There can be several labelled values, with modifiers between them. It's up to the botmaster to choose what emotions he wants for his bot, and how emotions modify each other. It would work like a force-directed graph.
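A tiny sketch of that emotion engine (the labels and weights below are made up for illustration):

Code
var emotions  = { joy: 0.5, fear: 0.1, anger: 0.0 };

var modifiers = {                    // how a source emotion pushes the others, per tick
    fear:  { joy: -0.2, anger: 0.1 },
    anger: { joy: -0.1, fear: 0.05 }
};

function tick() {
    var next = Object.assign({}, emotions);
    for (var src in modifiers)
        for (var dst in modifiers[src])
            next[dst] += modifiers[src][dst] * emotions[src];
    for (var e in next)              // clamp to [0, 1]
        emotions[e] = Math.min(1, Math.max(0, next[e]));
}

tick();
console.log(emotions);   // fear pulls joy down a little and pushes anger up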

Logic is even easier. Rules are well known, it's a field where a lot of solutions exist already. This one isn't fake: logic is logic!

Memories could be like an event-log, but where non-interesting things tend to disappear. I'm not talking about learning the name of the user, I'm talking about remembering that we had a chat yesterday, and that we talked about this and that, ...etc.
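Such a decaying event log could be as simple as this (the decay rule is my own guess):

Code
var memories = [];

function remember(what, interest) {
    memories.push({ what: what, interest: interest });
}

function decay() {                   // every pass, everything fades a bit
    for (var i = 0; i < memories.length; i++)
        memories[i].interest -= 0.1;
    memories = memories.filter(function(m) { return m.interest > 0; });
}

remember("we had a chat yesterday", 0.9);
remember("user cleared his throat", 0.2);
decay(); decay();                    // two passes later...
console.log(memories.length);        // ...only the interesting memory is left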

Association would work by cross-counting references, and keeping what gets high scores. Example: every blue object rings the bell "blue", every green object rings the bell "green", ...etc., and the color that gets the most hits wins.
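A sketch of that cross-counting (the bell-counter layout is mine):

Code
var bells = {};                      // "object|label" -> hit count

function observe(object, label) {
    var key = object + "|" + label;
    bells[key] = (bells[key] || 0) + 1;
}

function associate(object) {         // the label with the most hits wins
    var best = null, bestCount = 0;
    for (var key in bells) {
        var parts = key.split("|");
        if (parts[0] === object && bells[key] > bestCount) {
            best = parts[1];
            bestCount = bells[key];
        }
    }
    return best;
}

observe("sky", "blue"); observe("sky", "blue"); observe("sky", "grey");
console.log(associate("sky"));   // "blue"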

Inner-discourse is quite like what chatbots already do: it's a Question/Answer database, except the answer is not directly sent to the user, but rather used as an "internal command", an order the bot gives to itself. An instruction.

*

spydaz

  • Trusty Member
  • Starship Trooper
  • 325
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: the consnet
« Reply #6 on: July 03, 2018, 01:23:31 pm »
Quote from: Zero on July 03, 2018, 12:48:38 pm


Aha... Why don't we fake them all...

In the beginning days (for me)... that was the case... I was working on building these "KEY-Topics" as the Question/Answer (detect/response) method... and today not much has changed from that style of chatbot (except) mainstream is catching up to where we started years before, even claiming victory for themselves... they have added "intents"... a tree-type conversation (based on data collected from telephone-based systems; they understand the structure of the dialog required for that purpose)... called intents.... you can design directed arguments with the same system but call the topics intents... (more fake)... Siri/Cortana/Alexa use this type of system.... Other Google/Microsoft/IBM style "BOTS" are data-analysis bots, merely front-ends for machine learning algorithms... and yet still, all of these styles, although they can be very amusing, actually even passing the Turing test when well crafted... show no intelligence....

Each person has their own goals... for myself it is understanding - I would like the AI to understand what it is saying or reading.... building its ontology based on its understanding / inferring truths... without programming a direct response.... constructing a response to the input received, furnished with its learned knowledge. (Probably faking the emotional responses - at this time.) Truth is probabilistic... as they say,

nothing can be 100% truth, only perspective truth.

I think this is why it's taking a lifetime to create AI.... it's hard to program for every situation of learning and response... and AI is not a single algorithm.

(Zero) the data structure you're searching for is the TRIE TREE.... The binary tree is not comprehensive enough....

Car Has id:
Car has Make:
Car has Color:
Car Has description:

When adding these to the tree, the first pass adds the Car node and its sub nodes; the second pass only needs to add the sub nodes below HAS.
By adding multiple items to the tree:
Car
Van
Person
related properties are held as a node under their related node....
Question: does car have name?
Does person have name?
By going to the Person node, all related properties are listed under the node, and all learned data regarding a specific item is contained in the end node related to the property.
This allows multiple types of the same object to be stored in the same nodes... reducing size, increasing structure...
It is akin to a relational database... but as a single data object...
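A toy version of that property trie in JS (the layout is my reading of the description above):

Code
var trie = {};

function insert(path) {              // e.g. ["Car", "has", "Color"]
    var node = trie;
    for (var i = 0; i < path.length; i++)
        node = node[path[i]] = node[path[i]] || {};
}

function has(path) {                 // walk the path; true if it exists
    var node = trie;
    for (var i = 0; i < path.length; i++) {
        if (!(path[i] in node)) return false;
        node = node[path[i]];
    }
    return true;
}

insert(["Car", "has", "Make"]);
insert(["Car", "has", "Color"]);
insert(["Person", "has", "Name"]);

console.log(has(["Car", "has", "Color"]));   // true: shared "has" prefix
console.log(has(["Car", "has", "Name"]));    // false: only Person has Name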

I was searching for a way to combine dialog trees into a single tree... THE TRIE TREE... can handle such functions.... it's very underused! There is only some simple documentation about it... on the journey to suffix arrays... which by their cryptic nature create unreadable information which needs reconstruction to make sense, losing the value of the structure entirely... in the quest for reaching O(M) time and space.... The transformation from trie tree to suffix array (tree) can probably be used for compression of tree data... the search function is too complicated to write to search the compressed tree... but it's possible! which they seem to forget (code execution speed, based on length and iterations of code)...



 

*

Zero

  • Eve
  • 1287
Re: the consnet
« Reply #7 on: July 03, 2018, 03:11:46 pm »
Nice, thank you!

What if a car suddenly becomes a subtype of vehicle? Do you have to rebuild the entire tree?

*

spydaz

  • Trusty Member
  • Starship Trooper
  • 325
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: the consnet
« Reply #8 on: July 03, 2018, 05:04:45 pm »
Quote from: Zero on July 03, 2018, 03:11:46 pm

I would say not... that node would need to be inserted above the previous one... so potentially creating a new node, copying to the new node, then deleting the old... as each node contains all its sub nodes, it's easy to shift with code.

But other inference scripts can also provide linkage... "All cars are vehicles".... With the propositional logic, I was in two minds whether I should make a relational tree for the items, but I found that because of the inferred logic the tree was basically just a list... the magic was in the searching function.... Get descendants (children) of P/Q, or get ascendants (parents), collecting all the related P's & Q's... visually you can't see structure... but the search produces structured results (parents and children)... yet still a flat list... this enables non-related items in the structure to still be connected if a single link between them is created...
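With the toy trie sketched earlier (still just my guess at the layout), shifting "Car" under a new "Vehicle" parent is indeed a single re-link rather than a rebuild:

Code
trie["Vehicle"] = { Car: trie["Car"] };   // the new parent adopts the whole Car subtree
delete trie["Car"];                       // unlink the old top-level node

console.log(has(["Vehicle", "Car", "has", "Make"]));   // true, nothing was rebuilt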




 

