Imagining, perceiving and acting rely on the same "summarizing" engine. I used to say "summarize", and now I say "compress", but it's the same idea.
When you learn how to drive a car, you first have to be really focused on what to do, because nothing is automated yet.
But after repeating the same small actions together again and again, these low-level actions get summarized into a single higher-level action, or rather several higher-level configurable actions, that can run autonomously. It's like function definitions, really.
Then once you're used to driving, you can drive and talk to someone at the same time, because the driving engine is almost entirely autonomous. Only deliberate intent or unexpected events will draw attention back to the road, with heavy focus.
While multiple tasks are running, focus often switches from one to another, not to act at the low level, but to pilot the acting engine, which is like changing the parameters you pass to a function.
Wikipedia (https://en.wikipedia.org/wiki/Attentional_shift), as always, is a good resource on the topic.
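The function analogy can be sketched in a few lines of JavaScript (the action names are invented for illustration, not taken from any real driving model):

```javascript
// Toy illustration: once the low-level steps of a maneuver are rehearsed
// enough, they get bundled into one higher-level, configurable action,
// exactly like defining a function with parameters.
var actions = [];                                  // trace of low-level steps
function pressClutch()   { actions.push("clutch down"); }
function selectGear(g)   { actions.push("gear -> " + g); }
function releaseClutch() { actions.push("clutch up"); }

// The "summarized" action: one call replaces three conscious steps.
function shiftTo(gear) {
  pressClutch();
  selectGear(gear);
  releaseClutch();
}

shiftTo(3); // runs autonomously; attention only picks the parameter
```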
function log(x) { console.log(JSON.stringify(x)); }
var entity = {};
entity.externalSensors = [0, 0, 0, 0, 0, 0, 0, 0];
entity.externalActuators = [0, 0, 0, 0];
entity.internalSensors = [0, 0, 0, 0];
entity.instantMemory = [
[0, 4, 4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
];
entity.target = [18, 1, 2, 3];
entity.behavior = {
3: [0, 2, 3, 0],
5: [0, 1, 0, 0],
8: [1, 0, 2, 0]
}
entity.setActuator = function(actuatorId, newValue) {
entity.externalActuators[actuatorId] = newValue;
entity.internalSensors = [0, actuatorId, newValue, 0];
}
entity.refreshInstantMemory = function() {
entity.instantMemory.push(
entity.externalSensors
.concat(entity.internalSensors)
.concat(entity.externalActuators)
);
entity.instantMemory.shift();
}
entity.logInstantMemory = function() {
for (var i=0; i<entity.instantMemory.length; i++)
console.log(JSON.stringify(entity.instantMemory[i]));
console.log();
}
entity.setTarget = function(targetSlot, newAddress) {
entity.target[targetSlot] = newAddress;
entity.internalSensors = [1, targetSlot, newAddress, 0];
}
entity.xy = function(n) { return { x: n%16, y: Math.floor(n/16) }; }
entity.fetch = function() {
var values = [];
for (var t=0; t<4; t++) {
var addr = entity.xy(entity.target[t]);
values.push(entity.instantMemory[addr.y][addr.x]);
}
return values;
}
entity.interpret = function(values) {
var candidate = 0;
for (var i1=0; i1<3; i1++) {
for (var i2=i1+1; i2<4; i2++) {
if ((values[i1]==values[i2]) && (values[i1]>candidate))
candidate = values[i1];
}
}
return candidate*2;
}
entity.do = function(action) {
if (!action) return; // guard: interpret may return a key with no behavior entry
if (action[0]==0) entity.setActuator(action[1], action[2]);
if (action[0]==1) entity.setTarget(action[1], action[2]);
}
entity.run = function() {
while (1) {
entity.refreshInstantMemory();
entity.do(
entity.behavior[
entity.interpret(
entity.fetch())]);
}
}
entity.run();
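To see what the decision step does in isolation, here is a standalone copy of entity.interpret with a couple of hand-checked inputs: it returns twice the largest value that appears at least twice among the four fetched values (0 when there is no pair), and that even number is the key looked up in entity.behavior.

```javascript
// Standalone copy of entity.interpret, for hand-checking the pairing rule.
function interpret(values) {
  var candidate = 0;
  for (var i1 = 0; i1 < 3; i1++) {
    for (var i2 = i1 + 1; i2 < 4; i2++) {
      // keep the largest value that occurs in at least two slots
      if ((values[i1] == values[i2]) && (values[i1] > candidate))
        candidate = values[i1];
    }
  }
  return candidate * 2;
}

interpret([5, 4, 4, 5]); // 10: the pair of 5s beats the pair of 4s
interpret([1, 2, 3, 4]); // 0: no pair at all
```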
hi,
based on this (http://aidreams.co.uk/forum/index.php?topic=8892.msg35782#msg35782) old thread, here's a super simple esolang. it's asm-like: command, colon, argument. commands take only one argument. "this" is the result of the last command. "about" puts something in "this".
[fib]
about: argument
if less than: 3
return: 1
about: argument
decrement: 1
fib: this
store as: fib1
about: argument
decrement: 2
fib: this
increment: fib1
return: this
now we use 1-character names for the common commands
[fib]
? @
< 3
^ 1
? @
- 1
fib #
$ fib1
? @
- 2
fib #
+ fib1
^ #
and we lose the brackets and newlines, so it looks like hell
fib: ? @ < 3 ^ 1 ? @ - 1 fib # $ fib1 ? @ - 2 fib # + fib1 ^ #
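The one-liner is still executable if you pin down the token meanings. The semantics below are my reading of the long-form listing above, so this is a sketch, not the author's implementation: "?" loads a value into "this", "@" is the current argument, "#" is the current "this", "<" runs the next command only when "this" is smaller than its argument (otherwise skips it), "^" returns, "-" subtracts, "+" adds, "$" stores "this" under a name, and a bare word like "fib" is a recursive call.

```javascript
// Minimal interpreter for the compact fib program, assuming the token
// semantics described above. The program is a flat list of
// command/argument pairs, walked two tokens at a time.
var program = "? @ < 3 ^ 1 ? @ - 1 fib # $ fib1 ? @ - 2 fib # + fib1 ^ #".split(" ");

function run(argument) {
  var acc = 0;   // the esolang's "this"
  var vars = {}; // named storage written by "$"
  function value(token) {
    if (token === "@") return argument;       // the call's argument
    if (token === "#") return acc;            // the current "this"
    if (token in vars) return vars[token];    // a stored variable
    return parseInt(token, 10);               // a literal number
  }
  for (var pc = 0; pc < program.length; pc += 2) {
    var cmd = program[pc], arg = program[pc + 1];
    if (cmd === "?") acc = value(arg);
    else if (cmd === "<") { if (!(acc < value(arg))) pc += 2; } // skip next pair
    else if (cmd === "^") return value(arg);
    else if (cmd === "-") acc -= value(arg);
    else if (cmd === "+") acc += value(arg);
    else if (cmd === "$") vars[arg] = acc;
    else if (cmd === "fib") acc = run(value(arg)); // recursive call
  }
}

run(10); // 55
```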
From now on, I won't be able to visit the forum between 17:00 and 9:00. I'll have time to think! Last night, I had a flash of insight about what I was supposed to do.
I have to admit that deep "Hollywood-style" AGI is currently out of reach, at least for me. I won't try to reach it.
My opinion about intelligent agents is that, even though learning capabilities matter, initial knowledge is very important. Chatbots are a very good example of applied technology: clever algorithms (https://thesai.org/Downloads/Volume6No7/Paper_12-Survey_on_Chatbot_Design_Techniques_in_Speech_Conversation_Systems.pdf) are employed to create concrete, usable programs that imitate adult humans (not empty-brained babies). Sitting in front of an IDE to write hundreds of Question/Answer pairs (among other things) seems natural to all of us. Then why would it be weird to write hundreds of Event/Thought pairs, to build an agent's mind one line at a time?
I mean, take the concept of a chatbot, and extend it as much as you can. What's in a mind?
Emotions (https://en.wikipedia.org/wiki/Emotion),
logic (https://en.wikipedia.org/wiki/Logic),
memories (https://en.wikipedia.org/wiki/Memory),
associations (https://en.wikipedia.org/wiki/Association_(psychology)),
inner-discourse (https://en.wikipedia.org/wiki/Internal_discourse),
...etc.
Why don't we fake'em all?
Emotions can be easy to emulate. There could be several labelled values, with modifiers between them. It's up to the botmaster to choose which emotions he wants for his bot, and how they modify each other. It would work like a force-directed graph (https://en.wikipedia.org/wiki/Force-directed_graph_drawing).
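A minimal sketch of that idea in JavaScript (all the labels and weights here are invented): each emotion is a named value, and a table of modifiers says how much one emotion pulls on another at every update, loosely like the springs of a force-directed graph.

```javascript
// Invented example data: three labelled emotion values in [0, 1].
var emotions = { joy: 0.8, fear: 0.1, anger: 0.1 };

// Modifiers chosen by the botmaster: [source, target, weight].
var modifiers = [
  ["fear",  "anger", +0.3], // fear feeds anger
  ["joy",   "fear",  -0.2], // joy dampens fear
  ["anger", "joy",   -0.1]  // anger dampens joy
];

function updateEmotions() {
  // Compute all pulls from the current values first, then apply them.
  var delta = { joy: 0, fear: 0, anger: 0 };
  for (var i = 0; i < modifiers.length; i++) {
    var m = modifiers[i];
    delta[m[1]] += emotions[m[0]] * m[2];
  }
  for (var name in delta)
    emotions[name] = Math.min(1, Math.max(0, emotions[name] + delta[name]));
}

updateEmotions(); // joy pushes fear down, fear pushes anger up, ...
```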
Logic is even easier. The rules are well known; it's a field where plenty of solutions already exist. This one isn't fake: logic is logic!
Memories could be like an event log, where non-interesting things tend to fade away. I'm not talking about learning the user's name; I'm talking about remembering that we had a chat yesterday, and that we talked about this and that, etc.
Association would work by cross-counting references, and keeping what gets high scores. Example: every blue object rings the "blue" bell, every green object rings the "green" bell, etc., and the color that gets the most hits wins.
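A toy version of that cross-counting, with invented data: every observed object rings the bell of its color, and the color with the highest score wins the association.

```javascript
// Invented observations: each one rings its color's bell.
var observations = ["blue", "green", "blue", "blue", "green"];

var bells = {};
for (var i = 0; i < observations.length; i++) {
  var color = observations[i];
  bells[color] = (bells[color] || 0) + 1; // one more hit for this bell
}

// The association that wins is the bell with the most hits.
var winner = null;
for (var c in bells)
  if (winner === null || bells[c] > bells[winner]) winner = c;
// winner === "blue" (3 hits vs 2)
```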
Inner-discourse is quite like what chatbots already do: a Question/Answer database, except the answer is not sent directly to the user, but used as an "internal command", an order the bot gives to itself. An instruction.
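Here is a sketch of inner-discourse on top of a plain pattern/answer table (all the entries are invented): the matched "answer" is never shown to the user; it is parsed as an internal command the bot executes on itself.

```javascript
// Invented Event/Thought pairs: the right-hand side is an internal command,
// not a reply for the user.
var innerRules = {
  "user greeted me":  "set mood happy",
  "user insulted me": "set mood angry"
};

var state = { mood: "neutral" };

function handleEvent(event) {
  var command = innerRules[event];
  if (!command) return;                    // no thought for this event
  var parts = command.split(" ");          // e.g. ["set", "mood", "happy"]
  if (parts[0] === "set") state[parts[1]] = parts[2];
}

handleEvent("user greeted me");
// state.mood === "happy": the rule fired as an instruction, not as a reply
```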
Nice, thank you!
What if a car suddenly becomes a subtype of vehicle? Do you have to rebuild the entire tree?