Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Human Experience and Psychology => Topic started by: Zero on June 22, 2019, 10:46:35 pm

Title: What thought next
Post by: Zero on June 22, 2019, 10:46:35 pm

When you're in a state of mind, what should be the next state? I can see 4 cases.

* Association

   ocean
      makes you think
   blue
      makes you think
   DeepBlue
      makes you think
   Ai
      makes you think
   AiDreams

* Utility

   it can cause pleasure or satisfaction
      it matters, think about it
   it can cause pain or be uncomfortable
      it matters, think about it
   it's neutral
      it's irrelevant, no need to focus on this

* Plan

   I'm starting a compound activity
      go down and do first step
   I'm in the middle of an activity
      do next step
   the current activity is over
      go up and start next activity

* Surprise

   standard predicted stuff happened
      why bother
   something unexpected happened
      analysis needed, could be important
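
Here's a minimal sketch of how the four cases could be wired together as one "next thought" selector (Python; all the names, like pick_next and Thought, are invented just to make the idea concrete):

Code:

import random
from dataclasses import dataclass, field

@dataclass
class Thought:
    name: str
    valence: float = 0.0          # >0 pleasant, <0 unpleasant, 0 neutral

@dataclass
class Activity:
    steps: list = field(default_factory=list)

def pick_next(current, associations, expected, observed, plan_stack):
    # Surprise: anything unexpected gets a few cycles first
    if observed != expected:
        return ("analyze", observed)
    # Plan: continue the current compound activity, or unwind it
    while plan_stack:
        activity = plan_stack[-1]
        if activity.steps:
            return ("do", activity.steps.pop(0))
        plan_stack.pop()          # activity over: go up a level
    # Utility: emotionally charged associations win the focus
    candidates = associations.get(current.name, [])
    charged = [t for t in candidates if abs(t.valence) > 0.5]
    if charged:
        return ("focus", max(charged, key=lambda t: abs(t.valence)))
    # Association: otherwise just drift (ocean -> blue -> DeepBlue...)
    if candidates:
        return ("drift", random.choice(candidates))
    return ("idle", None)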



===== am I missing something?

Title: Re: What thought next
Post by: HS on June 22, 2019, 11:23:29 pm
I think a broad pattern can be determined from what writers use to create believable characters: goal, conflict, result, emotion, reason, prediction, choice. It just keeps going in a loop. If those are in the wrong order, the person will seem crazy, or at least unrelatable. So I like to start with the foundation of this sequence, because our emotions respond to it very reliably.
Title: Re: What thought next
Post by: Zero on June 23, 2019, 07:57:16 pm
Quote
Goal, conflict, result, emotion, reason, prediction, choice.

Are you saying this is the exact sequence? Or is it an example?
Title: Re: What thought next
Post by: AndyGoode on June 23, 2019, 08:08:33 pm
I recommend dividing up your list of four items via computer science categorizations like poll-driven, interrupt-driven, and maybe one other (https://stackoverflow.com/questions/3072815/polling-or-interrupt-based-method). "Surprise" is clearly interrupt-driven, whereby the normal flow of logic is interrupted by a high priority event, such as an emergency. "Plan" would be the normal flow of processing (I don't think there exists a formal name for this type), "Association" would be a more creative mode of processing, like dreaming (I don't think there exists a formal name for this type, though "train of thought" is an accurate informal name (https://en.wikipedia.org/wiki/Train_of_thought)), and I don't understand what you mean by "Utility" (it may not fit into the system I'm proposing).
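
To make the distinction concrete, here's a toy sketch (Python; handle_emergency and execute are made-up stand-ins): the plan is the normal flow, and each cycle the loop polls a queue that interrupt sources push into.

Code:

import queue

interrupts = queue.Queue()          # high-priority events are pushed here

def on_surprise(event):             # interrupt-driven: callers push to us
    interrupts.put(event)

def handle_emergency(event):
    print("interrupted by:", event)

def execute(step):
    print("doing:", step)

def main_loop(planned_steps):
    for step in planned_steps:      # "Plan": the normal flow of processing
        try:                        # poll-driven: check once per cycle
            handle_emergency(interrupts.get_nowait())
        except queue.Empty:
            pass
        execute(step)

on_surprise("bee sting")
main_loop(["boil water", "steep tea"])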
Title: Re: What thought next
Post by: HS on June 23, 2019, 08:30:00 pm
Quote
Goal, conflict, result, emotion, reason, prediction, choice.

Are you saying this is the exact sequence? Or is it an example?


I was going from memory and made some different word choices, but that is the sequence, in that order. Here are some quotes from my source: https://jimbutcher.livejournal.com/ (it's in the material from 2006)

Quote
GOAL
CONFLICT
SETBACK

The explanations for the above are a bit long to post here.

Quote
1) EMOTIONAL REACTION:
2) REVIEW, LOGIC, & REASON:
3) ANTICIPATION:
4) CHOICE:

1) An immediate emotional response.
2) A review of what happened, applying logic and reason to the events and why they turned out that way, and of what options are open to them.
3) Anticipation of what might follow the pursuit of those options. (Highly important, this one. Never underestimate the effects of anticipation on a reader.)
4) Your character makes up his mind and decides what to do next. IE, he makes a CHOICE.
Now, it's possible to SKIP some of these steps, or to abbreviate some of them so severely that you all but skip them. But you CAN'T CHANGE THE ORDER. 

Emotion, Reason, Anticipation, Choice. That reaction is typical to people, regardless of their sex, age, or background. It's psychologically hardwired into us--so take advantage of it. By having your character react in this very typically human way, you establish an immediate sense of empathy with the reader. If you do it right, you get the reader nodding along with that character going "Damn right, that's what I'd do." Or better yet, you get them opening their mouth in horror as they read, seeing the character's thought process, hating every step of where it's going while it remains undeniably understandable and genuine to the way people behave.
Title: Re: What thought next
Post by: Zero on June 23, 2019, 10:33:20 pm
Thank you both for these interesting posts!
 
Quote
"Surprise" is clearly interrupt-driven, whereby the normal flow of logic is interrupted by a high priority event, such as an emergency.

Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it. At first, all you know is that it's unexpected. It deserves to be allocated a few CPU cycles because it COULD be important.

Quote
"Plan" would be the normal flow of processing (I don't think there exists a formal name for this type)

Behavior trees (http://www.csc.kth.se/~miccol/Michele_Colledanchise/Publications_files/2013_ICRA_mcko.pdf)?
Or, if you want something very formal, NASA's PLEXIL (https://en.m.wikipedia.org/wiki/PLEXIL).
But I'd prefer not having this as the "normal" processing mode. It feels a bit too mechanical. After all, we want human-like software!
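
For reference, the core of a behavior tree fits in a few lines (a sketch, not PLEXIL): a Sequence runs its children in order and stops at the first failure; a Selector tries children until one succeeds.

Code:

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()            # True = success, False = failure

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self):                 # all children must succeed, in order
        return all(c.tick() for c in self.children)

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self):                 # first success wins
        return any(c.tick() for c in self.children)

# e.g. a compound activity: try the fridge, fall back to the shop
get_food = Selector(Sequence(Action(lambda: True),   # open fridge
                             Action(lambda: False)), # find leftovers: fail
                    Action(lambda: True))            # go to the shop
print(get_food.tick())              # True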

Quote
and I don't understand what you mean by "Utility"

Among everything that could pop into your mind, things associated with satisfaction or discomfort, or that are otherwise emotionally charged, are more likely to be focused on.

Quote
I was going from memory, made some different word choices, but that is the sequence, in that order.

Got it. Works very well!
Do you think it also works for small things like "goal = I want a sandwich"?
Title: Re: What thought next
Post by: AndyGoode on June 24, 2019, 12:22:24 am
Quote
Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it.

Actually, in higher animals it does. Reflexes, for instance, are hardwired, and our bodies react without needing to think about the stimulus: a sudden sting, burn, or injury causes a limb to pull back involuntarily. Evidently nature built in that special-purpose system for survival, since if we had to analyze every sensation we would have a harder time responding appropriately in time. I imagine a value could even be assigned to a given reflex's priority, probably based mostly on the level of pain.
http://factmyth.com/hardwired-versus-softwired/
Title: Re: What thought next
Post by: HS on June 24, 2019, 12:30:08 am
Quote
Do you think it also works for small things like "goal = I want a sandwich"?
Yes, I think it will drive along any process that a human is engaged in. If it's a common positive experience, some steps will be small or skipped entirely.
You could go: Goal (get sandwich), Conflict (N/A, or small, e.g. human vs. sticky fridge door), Result (success), Emotion (positive), Reason (that sandwich really helped me out; the door was sticky due to dirt), Anticipation (could eat another one like this, producing such-and-such results; could clean the fridge door like this, producing so-and-so results), Choice (clean door), which is the next Goal.
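
That fixed order, with skippable steps, is easy to state as code (a sketch; the handler names are mine):

Code:

PHASES = ["goal", "conflict", "result", "emotion",
          "reason", "anticipation", "choice"]

def run_scene(handlers, state):
    """handlers maps phase -> function(state) -> state.
    Missing phases are skipped, but the order never changes."""
    for phase in PHASES:
        if phase in handlers:
            state = handlers[phase](state)
    return state                    # the choice becomes the next goal

state = run_scene({"goal": lambda s: s + ["get sandwich"],
                   "result": lambda s: s + ["success"],
                   "choice": lambda s: s + ["clean fridge door"]}, [])
print(state)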
Title: Re: What thought next
Post by: AndyGoode on June 24, 2019, 01:37:37 am
Quote
Behavior trees (http://www.csc.kth.se/~miccol/Michele_Colledanchise/Publications_files/2013_ICRA_mcko.pdf)?
Or, if you want something very formal, NASA's PLEXIL (https://en.m.wikipedia.org/wiki/PLEXIL).
But I'd prefer not having this as the "normal" processing mode. It feels a bit too mechanical. After all, we want human-like software!

Attached is a diagram of how I think of the general process of human thought...
[I can't upload my diagram. Do I need to be a member longer, or something?]
Title: Re: What thought next
Post by: HS on June 24, 2019, 02:03:01 am
@Andy, I've always uploaded images to whatever site comes up when I google "upload image" and just pasted the link into the post. ImgBB, I think it's called.
Title: Re: What thought next
Post by: AndyGoode on June 24, 2019, 02:14:00 am
test...

(https://i.ibb.co/ZJ9P9xR/processing-modes.jpg)

Well, HS, in my posts on this site I can put IMG tags around the direct-link URL of a picture uploaded to ImgBB, but I can't attach it the way you do.

Anyway, here's my description:

Attached is a diagram of how I think of the general process of human thought. It's much more general than any specific type of data structure or programming language. There exists a focus of attention (FOA) single-node highlighting mechanism that shifts from node to node (meaning topic to topic), layer to layer. Under normal processing conditions it has selected, via the search mode selector switch, a method for node searching, such as by tree or graph. That processing can be interrupted by a real-world interrupt, shown as a parallel layer of awareness interfacing continually with the real world, or by its own ending criteria that was set by the mode switch in the Virtual layer, possibly via a more complicated switch. That incorporates all four methods of topic focus transfer Zero mentioned:

Association - just another mode on the search mode selector, searches node-by-node by associated nodes on the Virtual layer
Utility - if this means personal values, then this either influences the importance of nodes in the Virtual layer, or searches them first
Plan - follow node-by-node on the Virtual layer according to the strategy set via the search mode selector
Surprise - an interrupt shifts the FOA from the Virtual layer to the Real layer
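
A toy version of that machinery might look like this (all names invented; it assumes every node has at least one outgoing link, and Utility could be added as weights on the links):

Code:

import random

class Mind:
    def __init__(self, graph):
        self.graph = graph            # node -> list of linked nodes
        self.foa = next(iter(graph))  # the single highlighted node
        self.mode = "association"     # the search mode selector

    def step(self, interrupt=None):
        if interrupt is not None:     # Surprise: Real layer preempts
            self.foa = interrupt
        elif self.mode == "plan":     # Plan: follow links deterministically
            self.foa = self.graph[self.foa][0]
        else:                         # Association: drift to any neighbour
            self.foa = random.choice(self.graph[self.foa])
        return self.foa

m = Mind({"ocean": ["blue"], "blue": ["DeepBlue"], "DeepBlue": ["ocean"]})
print([m.step() for _ in range(4)])
print(m.step(interrupt="bee sting"))  # FOA yanked to the Real layer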
Title: Re: What thought next
Post by: Zero on June 24, 2019, 09:17:45 am
Quote
Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it.

Quote from: AndyGoode
Actually, in higher animals it does. Reflexes, for instance, are hardwired, and our bodies react without needing to think about the stimulus: a sudden sting, burn, or injury causes a limb to pull back involuntarily. Evidently nature built in that special-purpose system for survival, since if we had to analyze every sensation we would have a harder time responding appropriately in time. I imagine a value could even be assigned to a given reflex's priority, probably based mostly on the level of pain.
http://factmyth.com/hardwired-versus-softwired/

Ok, I see what you mean. I was calling "surprise" something a lot more common: things that aren't as usual. Say, if you have an old clock in your kitchen that ticks every second (tic, tic, tic), you don't even hear it anymore because you're so used to it. But someone new to the room will notice the sound. Same goes if a painting has disappeared from a wall: you'll notice it immediately. It's not an emergency, but it could indicate that someone got in while you were out for a walk. This is a powerful mechanism that lets you focus on what possibly matters.

Then yeah, you can hard-wire instant reflexes to handle common cases of emergency: a kick in the ass, low battery, and so on...

Quote from: Hopefully Something
Yes, I think it will drive along any process that a human is engaged in. If it’s a common positive experience some steps will be small or skipped entirely.

...but still in the right order, I get it. Also, if there are multiple activities going on, there are several loops, some of which are suspended while others are active, right?

Quote from: AndyGoode
Attached is a diagram of how I think of the general process of human thought. It's much more general than any specific type of data structure or programming language. There exists a focus of attention (FOA) single-node highlighting mechanism that shifts from node to node (meaning topic to topic), layer to layer. Under normal processing conditions it has selected, via the search mode selector switch, a method for node searching, such as by tree or graph. That processing can be interrupted by a real-world interrupt, shown as a parallel layer of awareness interfacing continually with the real world, or by its own ending criteria that was set by the mode switch in the Virtual layer, possibly via a more complicated switch. That incorporates all four methods of topic focus transfer Zero mentioned:

Association - just another mode on the search mode selector, searches node-by-node by associated nodes on the Virtual layer
Utility - if this means personal values, then this either influences the importance of nodes in the Virtual layer, or searches them first
Plan - follow node-by-node on the Virtual layer according to the strategy set via the search mode selector
Surprise - an interrupt shifts the FOA from the Virtual layer to the Real layer

Yeah that's really nice, and feels familiar. The way I do it, as you saw in the other thread, is all in a very specific graph: a directed one, where vertices have an outdegree (https://en.wikipedia.org/wiki/Directed_graph#Indegree_and_outdegree) of 2, which is equivalent to Lisp's primary structures. Oversimplifying it, there's a "mind blackboard" containing graphs of current thoughts. The content of this blackboard, I believe, corresponds to your single-node FOA. Unused nodes eventually disappear when space/time is needed by the system, like erosion, or natural selection if you prefer. Now the question is, what do we add to the graph next?

The ever-changing content of this blackboard forms a stream (https://en.wikipedia.org/wiki/Stream_of_consciousness_(psychology)).

It all depends on what you want to do. You may want to predict what will happen after the situation you're thinking about, or try to understand how this situation happened. The way I see it, since it's a program, there should be different modules able to manipulate the graphs to achieve different things.
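
A sketch of that blackboard, with cons-like cells (outdegree 2) and erosion of cells that go cold (the names are mine, just for illustration):

Code:

class Cell:
    """A vertex with outdegree 2, like a Lisp cons."""
    def __init__(self, car=None, cdr=None):
        self.car, self.cdr = car, cdr
        self.heat = 1.0               # how recently/usefully it was touched

blackboard = set()                    # the current thoughts

def cons(car, cdr=None):
    cell = Cell(car, cdr)
    blackboard.add(cell)
    return cell

def touch(cell):
    cell.heat = 1.0                   # using a thought keeps it alive

def erode(decay=0.9, threshold=0.1):
    """Erosion / natural selection: cold cells disappear."""
    for cell in blackboard:
        cell.heat *= decay
    blackboard.difference_update(
        {c for c in blackboard if c.heat < threshold})

thought = cons("ocean", cons("blue"))
for _ in range(30):
    touch(thought)                    # the head stays warm...
    erode()
print(len(blackboard))                # ...the unused tail has eroded: 1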

One thing I'm always wondering is why people do things. Why do we make decisions, and then stick to our goals? It seems perfectly artificial to me.

Beyond understanding how the human mind works, there's something else. If we're gonna make something revolutionary, what's the philosophy behind it? An obvious one could be: the topmost goal of the entity we create will be to learn as much as it can. This is the idea of an eternally hungry Ai. But somehow, I have a strong feeling that anything infinite (here the hunger for knowledge is infinite) is extremely dangerous. Another one would be: the topmost goal is to take care of mankind. Then again, what does "taking care" mean? I feel that lack of precision is also extremely dangerous. Then you have the "free will" version. Very sexy: no specific goal, just set your own goal as you see fit. Yeah, well. So what, is it random or something?

I really like the idea of the "story behind the character" loop, but it's very high-level. A lot of mechanisms in this sequence are left implicit, and seem hard to define.

BTW, Neo said "everything begins with a choice". :) Choice, Goal, Conflict, Result, Emotion, Reason, Prediction. Kiddin'
Title: Re: What thought next
Post by: HS on June 24, 2019, 05:21:55 pm
Quote
One thing I'm always wondering is why people do things. Why do we make decisions, and then stick to our goals? It seems perfectly artificial to me.
Yup.
Quote
Beyond understanding how the human mind works, there's something else. If we're gonna make something revolutionary, what's the philosophy behind it? An obvious one could be: the topmost goal of the entity we create will be to learn as much as it can. This is the idea of an eternally hungry Ai. But somehow, I have a strong feeling that anything infinite (here the hunger for knowledge is infinite) is extremely dangerous. Another one would be: the topmost goal is to take care of mankind. Then again, what does "taking care" mean? I feel that lack of precision is also extremely dangerous. Then you have the "free will" version. Very sexy: no specific goal, just set your own goal as you see fit. Yeah, well. So what, is it random or something?
Good question. The problem I keep bumping up against (after a minute of thinking I've definitely found the answer) is that any one philosophy, or modus operandi, doesn't apply well to all situations. It's the situation of "The Tao that can be told is not the eternal Tao." Maybe we need some kind of flexible philosophy. "The Watercourse Code." Sounds cool already!

Quote
I really like the idea of the "story behind the character" loop, but it's very high-level. A lot of mechanisms in this sequence are left implicit, and seem hard to define.
Yup! So do I! It'll require some thought, but it's just too good to give up on. I'm gonna do my best to incorporate it into my project.

Quote
BTW, Neo said "everything begins with a choice". :) Choice, Goal, Conflict, Result, Emotion, Reason, Prediction. Kiddin'
The end is the beginning... Oooo Mysterious ways...
 ;D
Title: Re: What thought next
Post by: HS on June 24, 2019, 07:36:31 pm
Quote
...but still in the right order, I get it. Also, if there are multiple activities going on, there are several loops, some of which are suspended while others are active, right?

Whoops, missed this one. Yes, that would make sense. We've got one-track minds, no conscious multitasking, but we can jump from one "plot thread" to another. And sometimes even remember previous ones. Like, oh shoot! There was a question that I had been meaning to respond to.  :)
Title: Re: What thought next
Post by: AndyGoode on June 25, 2019, 12:49:26 am
Oversimplifying it, there's a "mind blackboard" containing graphs of current thoughts. The content of this blackboard, I believe, corresponds to your single-node FOA.

Close, but not quite.

First, you should be careful of the word "blackboard" because there exists a variation on expert systems called a blackboard system (https://en.wikipedia.org/wiki/Blackboard_system), so I'm not sure if you're referring to a blackboard system or the common notion of a blackboard.

Either way, I am using a symmetric directed graph (https://en.wikipedia.org/wiki/Directed_graph) to represent knowledge in general, where each node might be a neural network node, semantic net node, tree node, digraph node, or whatever, and represents a specific concept. That entire graph would represent whatever your system has in its memory, and my FOA is a single highlighted node from that graph that represents the single concept the system is currently thinking about. Only one node can be the FOA at any point in time, and I'm assuming that the next FOA can only be a node that has a link to the current FOA. It's sort of like a lighted marquee, where changing which light is lit can give the illusion of motion through the network.

In the following diagram you can see a single FOA change location across three time steps. The graph stays the same, and is mostly independent of what is going on with the FOA (unless maybe you suddenly remove part of the graph where the FOA was active, in which case I don't know what would happen):

(https://i.ibb.co/kxqHYRp/foa-050.jpg)

P.S.--This system has hierarchies and free will built into it. (1) Hierarchies: interrupts will inevitably occur (hunger, restlessness, etc.), which keeps the system motivated to keep changing its state in order to survive. Survival is the #1 goal. (2) Free will: in some search modes, such as free association or random selection between equally appealing alternative actions, the system's behavior is not completely predictable. This is not a deterministic system.

I also forgot to mention that when the FOA jumps to a different node, that new FOA can be in a different layer, such as in the Real layer, such as if a bee sting causes an interrupt during the solution of a problem of lesser importance in the Virtual layer.
Title: Re: What thought next
Post by: Zero on June 26, 2019, 10:38:36 am
Quote
Yup! So do I! It'll require some thought, but it's just too good to give up on. I'm gonna do my best to incorporate it into my project.

We should keep in mind that while it might be part of the implementation, it could instead be an a posteriori observation one would make when looking at a well-designed system. Wow, this sentence is weird. Am I making sense?

Quote
any one philosophy, or modus operandi, doesn't apply well to all situations

Oh yes, I've felt this very often during my research. I think I've finally got something that does apply well to anything, at least the way I see it. I strongly believe there isn't one single AGI solution, but many ways it can be achieved.

Quote
First, you should be careful of the word "blackboard" because there exists a variation on expert systems called a blackboard system (https://en.wikipedia.org/wiki/Blackboard_system), so I'm not sure if you're referring to a blackboard system or the common notion of a blackboard.

This wikipedia article describes exactly what I'm talking about. A blackboard.
Please don't presume I don't know what I'm talking about.

Quote
Close, but not quite.

I know, I wasn't suggesting that we were doing it identically. On the contrary! You do FOA with a single node; I do it differently, because I don't think one node is enough to represent the current state of mind. What do you think?

Quote
P.S.--This system builds hierarchies and free will into it. (1) Hierarchies: Because interrupts will inevitably occur (hunger, restlessness, etc.), this keeps the system motivated to keep changing its state in order to survive. Survival is the #1 goal. (2) Free will: In some search modes, such as free association or random selections between equally appealing alternative actions, the system's behavior is not completely predictable. This is not a deterministic system.

Do you consider that free will is only a result of randomness?
Title: Re: What thought next
Post by: AndyGoode on June 27, 2019, 11:56:43 pm

Quote
This wikipedia article describes exactly what I'm talking about. A blackboard.
Please don't presume I don't know what I'm talking about.

I didn't presume. I was implying that *I* didn't understand which concept you meant. Anyway, now I know so you can unruffle your feathers.

Quote
You do FOA with a single node; I do it differently, because I don't think one node is enough to represent the current state of mind. What do you think?

I was using a simplified description of what I'm fairly sure is happening in the brain. Here are a few complications my diagram didn't show: (1) What I'm showing as a single node in the above diagram is more accurately described as an outstar, where all the nodes within one link of the central FOA node are also activated. Presumably all those target nodes 1-link distant have a different character, maybe a different phase, maybe a different frequency, so that the brain knows those target nodes are only associations and not the main concept being considered. (2) Because the brain is constantly learning, even while reasoning, a fast, dynamic mechanism of clustering nodes into a single outstar collection is presumably used. In that way the entire state of the immediately applicable part of a blackboard (of a blackboard system) can temporarily be encoded as a single node (or more accurately, outstar) used as the FOA. (3) After posting my diagram I realized I didn't take into account the subconscious, which would be a third level of processing, operating in parallel to the other two levels. There are some complications with the subconscious layer, mostly because it doesn't normally communicate in the same manner as the other two layers, but its functioning is roughly similar. If I get some time I'll post another diagram to include the subconscious layer and maybe more detail to show that outstars are being used.
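
For what it's worth, complication (1) is simple to sketch (invented names): the centre fires fully, and every node one link away gets a weaker, differently tagged activation.

Code:

def outstar(graph, centre, assoc_level=0.3):
    """Activate the FOA node plus its 1-link neighbourhood."""
    activation = {centre: ("focus", 1.0)}
    for neighbour in graph.get(centre, []):
        # tagged differently so associations aren't confused with the FOA
        activation.setdefault(neighbour, ("assoc", assoc_level))
    return activation

graph = {"ocean": ["blue", "wave"], "blue": ["DeepBlue"]}
print(outstar(graph, "ocean"))
# {'ocean': ('focus', 1.0), 'blue': ('assoc', 0.3), 'wave': ('assoc', 0.3)}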

Quote
Do you consider that free will is only a result of randomness?

No, not at all. A person must make decisions all the time, especially decisions that require moral judgment, so whatever heuristics a given person uses for such decisions will often determine the outcome of the decision-making process. I believe an intelligent person makes a conscious decision as to how much each of the applicable heuristics applies, so they definitely have free will based on intelligent, conscious thought.
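
As a sketch of that (my names): if the weights on the heuristics are themselves consciously chosen, the decision is neither random nor externally fixed.

Code:

def decide(options, heuristics, weights):
    """Each heuristic scores an option in [0, 1]; the agent has
    consciously chosen how much weight each heuristic gets."""
    def score(option):
        return sum(w * h(option) for h, w in zip(heuristics, weights))
    return max(options, key=score)

cheap  = lambda o: 1.0 - o["cost"]
kind   = lambda o: o["kindness"]
choice = decide([{"name": "A", "cost": 0.2, "kindness": 0.1},
                 {"name": "B", "cost": 0.6, "kindness": 0.9}],
                [cheap, kind], weights=[0.3, 0.7])
print(choice["name"])               # 'B': kindness was weighted higher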

P.S.--Here's my updated diagram. I hope it's more understandable:

(https://i.ibb.co/6B0x715/subconscious.jpg)
Title: Re: What thought next
Post by: Zero on July 02, 2019, 08:36:45 pm
Quote
I didn't presume. I was implying that *I* didn't understand which concept you meant. Anyway, now I know so you can unruffle your feathers.

My feathers  ;D  my bad, I didn't read carefully, sorry.

Quote
I believe an intelligent person makes a conscious decision as to how much each of the applicable heuristics applies

Interesting. I'm not sure a lot of people make conscious decisions about this. Maybe a little. One way or another, the activation chain has to start from somewhere behind the curtain anyway.

Yes, your diagram is very clear; thank you for sharing it.
Title: Re: What thought next
Post by: AndyGoode on July 02, 2019, 10:31:47 pm
Quote
Yes, your diagram is very clear; thank you for sharing it.

You're welcome. I think it's the most general model of cognition that exists. I once applied for a job that wanted an expert in all the *published* models of thinking that existed, especially SOAR (https://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29). I had read most of Newell's book on SOAR, but that wasn't good enough for the company, so they didn't hire me; I ended up moving out of state and coming up with a model that is probably better than any published model that anybody they hired could have shown them. Go figure.

One thing that's nice about the model is that it also dreams: all you do is turn the mode selector to an uncontrolled mode (unconsciousness), whereupon thoughts wander somewhat randomly during that period, following associations or being influenced by interruptions that cause partial wakefulness (such as a noise or physical sensation while asleep), with the subconscious sometimes kicking in to produce some clever plotting of the dream's outcome, like a hidden script writer.

As for how the *average* person makes decisions, I would have no respect for their "method" except maybe as humor.
Title: Re: What thought next
Post by: Zero on July 03, 2019, 08:32:45 am
Quote
I ended up moving out of state and coming up with a model that is probably better than any published model that anybody they hired could have shown them.

Did you implement your model?
Title: Re: What thought next
Post by: AndyGoode on July 03, 2019, 10:50:35 pm
Quote
Did you implement your model?

Mostly not. The basic idea comes from my dissertation, so that part was simulated, but I only came up with the idea of the subconscious while responding to your thread. If you or somebody else wants to code the idea, you're going to run into trouble with the mode selector switch, because the design for that is much deeper and more general than the simple one I showed in the diagram. And this general model of cognition doesn't address the key issues of intelligence either, only the top-level functioning of a brain-like system.