What thought next

*

Zero

What thought next
« on: June 22, 2019, 10:46:35 pm »

When you're in a state of mind, what should be the next state? I can see 4 cases.

* Association

   ocean
      makes you think
   blue
      makes you think
   DeepBlue
      makes you think
   Ai
      makes you think
   AiDreams

* Utility

   it can cause pleasure or satisfaction
      it matters, think about it
   it can cause pain or be uncomfortable
      it matters, think about it
   it's neutral
      it's irrelevant, no need to focus on this

* Plan

   I'm starting a compound activity
      go down and do first step
   I'm in the middle of an activity
      do next step
   the current activity is over
      go up and start next activity

* Surprise

   standard predicted stuff happened
      why bother
   something unexpected happened
      analysis needed, could be important



Am I missing something?
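A toy Python sketch of how these four cases might compete for the next thought (hypothetical names and weights, just to illustrate the idea):

Code:
import random

def next_thought(state):
    """Pick the next focus from a (hypothetical) mental state:
    'unexpected' events, a 'plan' of pending steps, and 'associations'
    given as (thought, utility) pairs."""
    # Surprise: unexpected stuff pre-empts everything else.
    if state.get("unexpected"):
        return ("analyze", state["unexpected"].pop(0))
    # Plan: if an activity is underway, do its next step.
    if state.get("plan"):
        return ("do", state["plan"].pop(0))
    # Utility: neutral associations are irrelevant, charged ones matter.
    charged = [(t, u) for t, u in state.get("associations", []) if u != 0]
    if charged:
        # Association: wander among the charged candidates, weighted by |utility|.
        thought, _ = random.choices(charged, weights=[abs(u) for _, u in charged], k=1)[0]
        return ("think about", thought)
    return ("idle", None)

state = {"unexpected": [], "plan": ["spread butter"],
         "associations": [("blue", 0.0), ("DeepBlue", 0.4)]}
print(next_thought(state))   # ('do', 'spread butter')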


*

HS

Re: What thought next
« Reply #1 on: June 22, 2019, 11:23:29 pm »
I think a broad pattern can be determined from what writers use to create believable characters: goal, conflict, result, emotion, reason, prediction, choice. It just keeps going in a loop. If those are in the wrong order the person will seem crazy, or at least unrelatable. So I like to start with the foundation of this sequence, because our emotions respond well to it very consistently.

*

Zero

Re: What thought next
« Reply #2 on: June 23, 2019, 07:57:16 pm »
Quote
Goal, conflict, result, emotion, reason, prediction, choice.

Are you saying this is the exact sequence, or is it an example?

*

AndyGoode

Re: What thought next
« Reply #3 on: June 23, 2019, 08:08:33 pm »
I recommend dividing up your list of four items via computer science categorizations like poll-driven, interrupt-driven, and maybe one other (https://stackoverflow.com/questions/3072815/polling-or-interrupt-based-method). "Surprise" is clearly interrupt-driven, whereby the normal flow of logic is interrupted by a high priority event, such as an emergency. "Plan" would be the normal flow of processing (I don't think there exists a formal name for this type), "Association" would be a more creative mode of processing, like dreaming (I don't think there exists a formal name for this type, though "train of thought" is an accurate informal name (https://en.wikipedia.org/wiki/Train_of_thought)), and I don't understand what you mean by "Utility" (it may not fit into the system I'm proposing).
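A minimal Python sketch of that poll-driven vs. interrupt-driven split (toy names, nothing from a real system): the main loop polls its own agenda, while a surprise arrives out of band and pre-empts the next planned step:

Code:
import queue

interrupts = queue.Queue()        # "Surprise" events land here (interrupt-driven)

def post_interrupt(event):
    """Stand-in for a sensor pushing an unexpected, high-priority event."""
    interrupts.put(event)

def main_loop(agenda):
    """Normal flow ("Plan"): poll the agenda step by step,
    but drain any pending interrupts before each step."""
    for step in agenda:
        while not interrupts.empty():
            print("interrupt:", interrupts.get())
        print("planned step:", step)

post_interrupt("loud bang")
main_loop(["read mail", "write code"])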

*

HS

Re: What thought next
« Reply #4 on: June 23, 2019, 08:30:00 pm »
Quote
Goal, conflict, result, emotion, reason, prediction, choice.

Are you saying this is the exact sequence, or is it an example?


I was going from memory and made some different word choices, but that is the sequence, in that order. Here are some quotes from my source: https://jimbutcher.livejournal.com/ (it's in the stuff from 2006).

Quote
GOAL
CONFLICT
SETBACK

The explanations for the above are a bit long to post here.

Quote
1) EMOTIONAL REACTION:
2) REVIEW, LOGIC, & REASON:
3) ANTICIPATION:
4) CHOICE:

1) An immediate emotional response.
2) A review of what happened, applying logic and reason to the events and why they turned out that way, and of what options are open to them.
3) Anticipation of what might follow the pursuit of those options. (Highly important, this one. Never underestimate the effects of anticipation on a reader.)
4) Your character makes up his mind and decides what to do next. IE, he makes a CHOICE.
Now, it's possible to SKIP some of these steps, or to abbreviate some of them so severely that you all but skip them. But you CAN'T CHANGE THE ORDER. 

Emotion, Reason, Anticipation, Choice. That reaction is typical to people, regardless of their sex, age, or background. It's psychologically hardwired into us--so take advantage of it. By having your character react in this very typically human way, you establish an immediate sense of empathy with the reader. If you do it right, you get the reader nodding along with that character going "Damn right, that's what I'd do." Or better yet, you get them opening their mouth in horror as they read, seeing the character's thought process, hating every step of where it's going while it remains undeniably understandable and genuine to the way people behave.
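As a sketch of that constraint in Python (hypothetical handler functions, purely illustrative): the stages live in a fixed list, any of them can be skipped or abbreviated, but the order never changes, and the choice feeds the next goal:

Code:
# Fixed order, per the quoted source; steps may be skipped but never reordered.
STAGES = ["goal", "conflict", "setback", "emotion", "reason", "anticipation", "choice"]

def run_scene(handlers, goal):
    """handlers: hypothetical dict mapping a stage name to a function(context)."""
    context = {"goal": goal}
    for stage in STAGES:
        handler = handlers.get(stage)
        if handler:                      # missing handler = abbreviated/skipped stage
            context[stage] = handler(context)
    return context.get("choice")         # the choice becomes the next goal

next_goal = run_scene(
    {"emotion": lambda c: "frustration",
     "choice": lambda c: "try again tomorrow"},
    goal="get the sandwich")
print(next_goal)                          # try again tomorrow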

*

Zero

Re: What thought next
« Reply #5 on: June 23, 2019, 10:33:20 pm »
Thank you both for these interesting posts!
 
Quote
"Surprise" is clearly interrupt-driven, whereby the normal flow of logic is interrupted by a high priority event, such as an emergency.

Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it. At first, all you know is that it's unexpected. It deserves to be allocated a few CPU cycles because it COULD potentially be important.

Quote
"Plan" would be the normal flow of processing (I don't think there exists a formal name for this type)

Behavior trees?
Or, if you want something very formal, NASA's PLEXIL.
But I'd prefer not to have this as the "normal" processing mode. It feels a bit too mechanical. After all, we want human-like software!
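For reference, the "Plan" case can be captured by a couple of lines of behavior-tree-style code; here is a bare-bones Python sketch (not PLEXIL, just the idea of going down into a compound activity, doing its steps in order, then going back up):

Code:
class Action:
    def __init__(self, name):
        self.name = name
    def tick(self):
        print("doing:", self.name)
        return True                       # step succeeded

class Sequence:
    """Run children in order: entering it = go down and do the first step;
    finishing all children = go back up to the parent activity."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return all(child.tick() for child in self.children)

make_sandwich = Sequence(Action("get bread"), Action("add filling"), Action("eat"))
make_sandwich.tick()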

Quote
and I don't understand what you mean by "Utility"

Of all the things that could pop into your mind, those associated with satisfaction or discomfort, or that are emotionally charged, are more likely to be focused on.

Quote
I was going from memory, made some different word choices, but that is the sequence, in that order.

Got it. Works very well!
Do you think it also works for small things like "goal = I want a sandwich"?

*

AndyGoode

Re: What thought next
« Reply #6 on: June 24, 2019, 12:22:24 am »
Quote
Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it.

Actually, in higher animals it does. Reflexes, for example, are hardwired, and our bodies react to them without needing to think about the stimulus. For example, a sudden sting, burn, or injury causes a limb to pull back involuntarily. Evidently nature built in that special-purpose system for survival, since if we had to analyze every sensation we would have a harder time responding appropriately in time. I imagine a value could even be assigned to a given reflex's priority, probably based mostly on the level of pain.
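If you wanted to put a number on it, a toy sketch (Python, invented priorities): each reflex carries a pre-wired priority roughly tracking the level of pain, and the most painful pending stimulus gets handled first, with no deliberation at all:

Code:
import heapq

REFLEX_PRIORITY = {"burn": 9, "sting": 6, "itch": 1}   # hypothetical hardwired values

pending = []
for stimulus in ("itch", "burn", "sting"):
    heapq.heappush(pending, (-REFLEX_PRIORITY[stimulus], stimulus))  # min-heap, so negate

while pending:
    _, stimulus = heapq.heappop(pending)
    print("react to:", stimulus)          # burn, then sting, then itch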
http://factmyth.com/hardwired-versus-softwired/

*

HS

Re: What thought next
« Reply #7 on: June 24, 2019, 12:30:08 am »
Quote
Do you think it also works for small things like "goal = I want a sandwich"?
Yes, I think it will drive along any process that a human is engaged in. If it's a common positive experience, some steps will be small or skipped entirely.
You could go: Goal (get sandwich), Conflict (N/A, or small, e.g. human vs. sticky fridge door), Result (success), Emotion (positive), Reason (that sandwich really helped me out; that door was sticky due to dirt), Anticipation (could eat another one like this, producing such-and-such results; could clean the fridge door like this, producing so-and-so results), Choice (clean door), which is the next Goal.

*

AndyGoode

Re: What thought next
« Reply #8 on: June 24, 2019, 01:37:37 am »
Quote
Behavior trees?
Or, if you want something very formal, NASA's PLEXIL.
But I'd prefer not to have this as the "normal" processing mode. It feels a bit too mechanical. After all, we want human-like software!

Attached is a diagram of how I think of the general process of human thought...
[I can't upload my diagram. Do I need to be a member longer, or something?]

*

HS

Re: What thought next
« Reply #9 on: June 24, 2019, 02:03:01 am »
@Andy, I've always uploaded images to a site that came up when I googled "upload image" and just pasted the link into the chat. ImgBB, I think it's called.

*

AndyGoode

Re: What thought next
« Reply #10 on: June 24, 2019, 02:14:00 am »
test...



Well, HS, in my post on this site I could put IMG tags around the direct-link URL of the picture I uploaded to ImgBB, but I couldn't add it as an attachment the way you do.

Anyway, here's my description:

Attached is a diagram of how I think of the general process of human thought. It's much more general than any specific type of data structure or programming language. There is a focus-of-attention (FOA) single-node highlighting mechanism that shifts from node to node (meaning topic to topic), and layer to layer. Under normal processing conditions it has selected, via the search mode selector switch, a method for node searching, such as by tree or graph. That processing can be interrupted by a real-world interrupt, shown as a parallel layer of awareness interfacing continually with the real world, or by its own ending criteria that were set by the mode switch in the Virtual layer, possibly via a more complicated switch. That incorporates all four methods of topic-focus transfer Zero mentioned:

Association - just another mode on the search mode selector, searches node-by-node by associated nodes on the Virtual layer
Utility - if this means personal values, then this either influences the importance of nodes in the Virtual layer, or searches them first
Plan - follow node-by-node on the Virtual layer according to the strategy set via the search mode selector
Surprise - an interrupt shifts the FOA from the Virtual layer to the Real layer
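One rough way to read that mapping in code (a Python sketch with invented node names; this is an interpretation, not the actual design behind the diagram): a single FOA, a pluggable search mode that decides which linked node it moves to, and an interrupt that yanks it to the Real layer:

Code:
import random

# Hypothetical Virtual-layer graph: neighbours plus a utility weight per node.
graph = {
    "ocean":    {"links": ["blue"],              "utility": 0.0},
    "blue":     {"links": ["ocean", "DeepBlue"], "utility": 0.0},
    "DeepBlue": {"links": ["blue", "Ai"],        "utility": 0.7},
    "Ai":       {"links": ["DeepBlue"],          "utility": 0.2},
}

def step_foa(foa, mode, interrupt=None):
    """Move the single focus of attention to a linked node, or to the Real layer."""
    if interrupt:                                   # Surprise: jump layers entirely
        return ("REAL", interrupt)
    links = graph[foa]["links"]
    if mode == "association":                       # wander link by link
        return ("VIRTUAL", random.choice(links))
    if mode == "utility":                           # prefer emotionally charged neighbours
        return ("VIRTUAL", max(links, key=lambda n: graph[n]["utility"]))
    if mode == "plan":                              # deterministic: first link = next step
        return ("VIRTUAL", links[0])
    raise ValueError(mode)

print(step_foa("blue", "utility"))                          # ('VIRTUAL', 'DeepBlue')
print(step_foa("blue", "plan", interrupt="bee sting"))      # ('REAL', 'bee sting')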

*

Zero

Re: What thought next
« Reply #11 on: June 24, 2019, 09:17:45 am »
Quote
Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it.
Quote from: AndyGoode
Actually, in higher animals it does. Reflexes, for example, are hardwired, and our bodies react to them without needing to think about the stimulus. For example, a sudden sting, burn, or injury causes a limb to pull back involuntarily. Evidently nature built in that special-purpose system for survival, since if we had to analyze every sensation we would have a harder time responding appropriately in time. I imagine a value could even be assigned to a given reflex's priority, probably based mostly on the level of pain.
http://factmyth.com/hardwired-versus-softwired/

OK, I see what you mean. I was calling "surprise" something a lot more common: things that aren't as usual. Say you have an old clock in your kitchen that ticks every second (tick, tick, tick); you don't even hear it anymore because you're so used to it. But someone new to the room will notice the sound. The same goes if a painting has disappeared from a wall: you'll notice it immediately. It's not an emergency, but it could indicate that someone got in while you were out for a walk. This is a powerful mechanism that lets you focus on what possibly matters.

Then yeah, you can hard-wire instant reflexes to handle common cases of emergency: a kick in the ass, low battery, and so on...

Quote from: Hopefully Something
Yes, I think it will drive along any process that a human is engaged in. If it’s a common positive experience some steps will be small or skipped entirely.

...but still in the right order, I get it. Also, if there are multiple activities going on, there are several loops, some of which are suspended while others are active, right?

Quote from: AndyGoode
Attached is a diagram of how I think of the general process of human thought. It's much more general than any specific type of data structure or programming language. There is a focus-of-attention (FOA) single-node highlighting mechanism that shifts from node to node (meaning topic to topic), and layer to layer. Under normal processing conditions it has selected, via the search mode selector switch, a method for node searching, such as by tree or graph. That processing can be interrupted by a real-world interrupt, shown as a parallel layer of awareness interfacing continually with the real world, or by its own ending criteria that were set by the mode switch in the Virtual layer, possibly via a more complicated switch. That incorporates all four methods of topic-focus transfer Zero mentioned:

Association - just another mode on the search mode selector, searches node-by-node by associated nodes on the Virtual layer
Utility - if this means personal values, then this either influences the importance of nodes in the Virtual layer, or searches them first
Plan - follow node-by-node on the Virtual layer according to the strategy set via the search mode selector
Surprise - an interrupt shifts the FOA from the Virtual layer to the Real layer

Yeah, that's really nice, and it feels familiar. The way I do it, as you saw in the other thread, is all in one very specific kind of graph: a directed one where vertices have an outdegree of 2, which is equivalent to Lisp's primary structure, the cons cell. Oversimplifying, there's a "mind blackboard" containing graphs of current thoughts. The content of this blackboard, I believe, corresponds to your single-node FOA. Unused nodes eventually disappear when they become useless and when space/time is needed by the system, like erosion, or natural selection if you prefer. Now the question is: what do we add to the graph next?

The ever-changing content of this blackboard forms a stream.

It all depends on what you want to do. You may want to predict what will happen after the situation you're thinking about, or try to understand how this situation happened. The way I see it, since it's a program, there should be different modules able to manipulate the graphs to achieve different things.
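A toy rendering of that blackboard idea in Python (only an illustration of the erosion mechanic, not the actual implementation from the other thread): cons-like nodes with two outgoing slots, an activation level that decays each cycle, and eviction of nodes that stop being used:

Code:
class Node:
    """Cons-like vertex: exactly two outgoing slots plus an activation level."""
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right
        self.activation = 1.0

blackboard = {}                            # label -> Node: the "mind blackboard"

def touch(label, left=None, right=None):
    node = blackboard.setdefault(label, Node(label, left, right))
    node.activation = 1.0                  # using a node refreshes it
    return node

def erode(decay=0.5, threshold=0.1):
    """Each cycle, unused nodes fade and are eventually evicted."""
    for label in list(blackboard):
        blackboard[label].activation *= decay
        if blackboard[label].activation < threshold:
            del blackboard[label]

touch("sandwich", left="bread", right="filling")
touch("clock")
for _ in range(4):
    erode()
    touch("sandwich")                      # keep thinking about the sandwich
print(sorted(blackboard))                  # ['sandwich']  ('clock' has eroded away)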

One thing I'm always wondering is why people do things. Why do we make decisions, and then stick to our goals? It seems perfectly artificial to me.

Beyond understanding how the human mind works, there's something else. If we're gonna make something revolutionary, what's the philosophy behind it? An obvious one could be: the topmost goal of the entity we create will be to learn as much as it can. This is the idea of an eternally hungry Ai. But somehow, I have a strong feeling that anything infinite (here, the hunger for knowledge is infinite) is extremely dangerous. Another one would be: the topmost goal is to take care of mankind. Then again, what does "taking care" mean? I feel a lack of precision is also extremely dangerous. Then you have the "free will" version. Very sexy: no specific goal, just set your own goal as you see fit. Yeah, well. So what, is it random or something?

I really like the idea of the "story behind the character" loop, but it's very high-level. A lot of mechanisms in this sequence are left implicit, and seem hard to define.

BTW, Neo said "everything begins with a choice". :) Choice, Goal, Conflict, Result, Emotion, Reason, Prediction. Kiddin'

*

HS

Re: What thought next
« Reply #12 on: June 24, 2019, 05:21:55 pm »
Quote
One thing I'm always wondering is why people do things. Why do we make decisions, and then stick to our goals? It seems perfectly artificial to me.
Yup.
Quote
Beyond understanding how the human mind works, there's something else. If we're gonna make something revolutionary, what's the philosophy behind it? An obvious one could be: the topmost goal of the entity we create will be to learn as much as it can. This is the idea of an eternally hungry Ai. But somehow, I have a strong feeling that anything infinite (here, the hunger for knowledge is infinite) is extremely dangerous. Another one would be: the topmost goal is to take care of mankind. Then again, what does "taking care" mean? I feel a lack of precision is also extremely dangerous. Then you have the "free will" version. Very sexy: no specific goal, just set your own goal as you see fit. Yeah, well. So what, is it random or something?
Good question. The problem I keep bumping up against (after a minute of thinking I've definitely found the answer) is that any one philosophy, or modus operandi, doesn't apply well to all situations. It's the situation of "The Tao that can be told is not the eternal Tao." Maybe we need some kind of flexible philosophy. "The Watercourse Code." Sounds cool already!

Quote
I really like the idea of the "story behind the character" loop, but it's very high-level. A lot of mechanisms in this sequence are left implicit, and seem hard to define.
Yup! So do I! It'll require some thought, but it's just too good to give up on. I'm gonna do my best to incorporate it into my project.

Quote
BTW, Neo said "everything begins with a choice". :) Choice, Goal, Conflict, Result, Emotion, Reason, Prediction. Kiddin'
The end is the beginning... Oooo, mysterious ways...
 ;D

*

HS

Re: What thought next
« Reply #13 on: June 24, 2019, 07:36:31 pm »
Quote
...but still in the right order, I get it. Also, if there are multiple activities going on, there are several loops, some of which are suspended while others are active, right?

Whoops, missed this one. Yes, that would make sense. We've got one-track minds, no conscious multitasking, but we can jump from one "plot thread" to another. And sometimes even remember previous ones. Like, oh shoot! There was a question I had been meaning to respond to.  :)
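A tiny Python sketch of those suspended "plot threads" (generators standing in for goal loops; purely illustrative): only one is conscious at a time, the rest sit suspended mid-step, and attention hops between them until each one finishes:

Code:
def activity(name, steps):
    """One goal loop; yielding suspends it in the middle of the activity."""
    for step in steps:
        yield name + ": " + step

threads = [activity("sandwich", ["get bread", "add filling", "eat"]),
           activity("reply",    ["reread question", "answer"])]

active = 0                                    # one-track mind: a single active thread
while threads:
    try:
        print(next(threads[active]))          # advance the current plot thread
    except StopIteration:
        threads.pop(active)                   # that goal is finished
    if threads:
        active = (active + 1) % len(threads)  # jump to another suspended thread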

*

AndyGoode

Re: What thought next
« Reply #14 on: June 25, 2019, 12:49:26 am »
Quote
Oversimplifying, there's a "mind blackboard" containing graphs of current thoughts. The content of this blackboard, I believe, corresponds to your single-node FOA.

Close, but not quite.

First, you should be careful of the word "blackboard" because there exists a variation on expert systems called a blackboard system (https://en.wikipedia.org/wiki/Blackboard_system), so I'm not sure if you're referring to a blackboard system or the common notion of a blackboard.

Either way, I am using a symmetric directed graph (https://en.wikipedia.org/wiki/Directed_graph) to represent knowledge in general, where each node might be a neural network node, semantic net node, tree node, digraph node, or whatever, and represents a specific concept. That entire graph represents whatever your system has in its memory, and my FOA is a single highlighted node from that graph, representing the single concept that the system is currently thinking about. Only one node can be the FOA at any point in time, and I'm assuming that the next FOA can only be a node that has a link to the current FOA. It's sort of like a lighted marquee, where changing which light is lit can give the illusion of motion through the network.

In the following diagram you can see a single FOA change location across three time steps. The graph stays the same, and is mostly independent of what is going on with the FOA (unless maybe you suddenly remove part of the graph where the FOA was active, in which case I don't know what would happen):



P.S.: This system has hierarchies and free will built into it. (1) Hierarchies: because interrupts will inevitably occur (hunger, restlessness, etc.), the system stays motivated to keep changing its state in order to survive. Survival is the #1 goal. (2) Free will: in some search modes, such as free association or random selection between equally appealing alternative actions, the system's behavior is not completely predictable. This is not a deterministic system.

I also forgot to mention that when the FOA jumps to a different node, that new FOA can be in a different layer, such as the Real layer, for instance if a bee sting causes an interrupt during the solution of a less important problem in the Virtual layer.
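A minimal Python sketch of just that marquee constraint (toy graph, invented labels): the next FOA is always drawn from the current node's links, and ties between equally appealing neighbours are broken at random, so the walk is not fully deterministic:

Code:
import random

links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}   # toy knowledge graph

def advance_foa(start, steps=3):
    """Light one node per time step; each new FOA must be linked to the current one."""
    path = [start]
    for _ in range(steps):
        path.append(random.choice(links[path[-1]]))   # only adjacent nodes are reachable
    return path

print(advance_foa("A"))    # e.g. ['A', 'C', 'A', 'B']: one lit node per time step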
« Last Edit: June 25, 2019, 01:14:31 am by AndyGoode »

 

