Yes, clearly. But the unexpectedness doesn't come with a given priority attached to it.
Actually, in higher animals it does. Reflexes, for example, are hardwired: our bodies react without needing to think about the stimulus. A sudden sting, burn, or injury causes a limb to pull back involuntarily. Evidently nature built in that special-purpose system for survival, since if we had to analyze every sensation, we would have a harder time responding appropriately in time. I imagine a value could even be assigned to a given reflex's priority, probably based mostly on the level of pain.
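That "priority mostly from pain" idea could be sketched roughly like this. This is a toy illustration, not a claim about how nervous systems actually work; the function names, weights, and threshold are all my own assumptions:

```python
# Toy sketch: a reflex's priority derived mostly from pain level.
# Weights and the 0.5 threshold are arbitrary illustrative assumptions.

def reflex_priority(pain_level, threat_to_survival=0.0):
    """Combine pain (0..1) with a smaller survival term into a priority."""
    return 0.8 * pain_level + 0.2 * threat_to_survival

def handle_stimulus(stimulus):
    """Bypass deliberation when a hardwired reflex fires."""
    priority = reflex_priority(stimulus["pain"], stimulus.get("threat", 0.0))
    if priority > 0.5:
        return "withdraw limb"  # special-purpose reflex path, no analysis
    return "analyze stimulus"   # slow, deliberate path

print(handle_stimulus({"pain": 0.9, "threat": 0.7}))  # reflex path
print(handle_stimulus({"pain": 0.1}))                 # deliberate path
```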
http://factmyth.com/hardwired-versus-softwired/
Ok, I see what you mean. I was calling "surprise" something a lot more common: things that are out of the ordinary. Say you have an old clock in your kitchen that ticks every second (tic, tic, tic); you don't even hear it anymore because you're so used to it. But someone new to the room will notice the sound. The same goes if a painting has disappeared from a wall: you'll notice it immediately. It's not an emergency, but it could indicate that someone got in while you were out for a walk. This is a powerful mechanism that lets you focus on what might matter.
Then yeah, you can hard-wire instant reflexes to handle common emergencies: a kick in the ass, a low battery, and so on...
Yes, I think it will drive along any process that a human is engaged in. If it’s a common positive experience some steps will be small or skipped entirely.
...but still in the right order, I get it. Also, if there are multiple activities going on, there are several loops, some of which are suspended while others are active, right?
Attached is a diagram of how I think of the general process of human thought. It's much more general than any specific type of data structure or programming language. There exists a focus of attention (FOA) single-node highlighting mechanism that shifts from node to node (meaning topic to topic), layer to layer. Under normal processing conditions it has selected, via the search mode selector switch, a method for node searching, such as by tree or graph. That processing can be interrupted by a real-world interrupt (shown as a parallel layer of awareness interfacing continually with the real world), or by its own ending criteria, which were set by the mode switch in the Virtual layer, possibly via a more complicated switch. That incorporates all four methods of topic focus transfer Zero mentioned:
Association - just another mode on the search mode selector, searches node-by-node by associated nodes on the Virtual layer
Utility - if this means personal values, then this either influences the importance of nodes in the Virtual layer, or searches them first
Plan - follow node-by-node on the Virtual layer according to the strategy set via the search mode selector
Surprise - an interrupt shifts the FOA from the Virtual layer to the Real layer
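One way to read the diagram as code: an FOA pointer that walks nodes under a selected search mode until a Real-layer interrupt preempts it. This is only a hedged sketch of my understanding; the class name, mode names, and the interrupt callback are all inventions for illustration, not anything from the actual diagram:

```python
from collections import deque

class FocusOfAttention:
    """Single-node highlight walking a node graph (the Virtual layer)
    under a selected search mode, until a Real-layer interrupt fires.
    Mode names and the interrupt mechanism are illustrative assumptions."""

    def __init__(self, graph, mode="tree"):
        self.graph = graph   # {node: [neighbor, ...]}
        self.mode = mode     # search mode selector: "tree", "graph", "association"
        self.current = None  # the highlighted node (current topic)

    def run(self, start, interrupted):
        frontier = deque([start])
        visited, trace = set(), []
        while frontier:
            if interrupted():  # a real-world interrupt always wins
                trace.append("<interrupt: shift FOA to Real layer>")
                break
            # "graph" mode searches breadth-first; the others depth-first
            self.current = frontier.popleft() if self.mode == "graph" else frontier.pop()
            if self.current in visited:
                continue
            visited.add(self.current)
            trace.append(self.current)
            frontier.extend(self.graph.get(self.current, []))
        return trace

g = {"walk": ["kitchen"], "kitchen": ["clock", "painting"]}
foa = FocusOfAttention(g, mode="graph")
print(foa.run("walk", interrupted=lambda: False))
```

Swapping the `interrupted` callback for one that fires on real sensor input would model the Surprise case: the trace stops mid-search and attention shifts to the Real layer.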
Yeah, that's really nice, and feels familiar. The way I do it, as you saw in the other thread, is all in a very specific graph: a directed one where vertices have an outdegree of 2, which is equivalent to Lisp's primary structures. Oversimplifying, there's a "mind blackboard" containing graphs of current thoughts.
The content of this blackboard, I believe, corresponds to your single-node FOA. Unused nodes eventually disappear when they become useless and when space/time is needed by the system, like erosion, or natural selection if you prefer. The ever-changing content of this blackboard forms a stream. Now the question is: what do we add to the graph next?
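A node with outdegree 2 is essentially a Lisp cons cell (two outgoing pointers), so the blackboard-with-erosion idea can be sketched in a few lines. Everything here beyond "outdegree 2" and "unused nodes disappear" is my assumption, including the age-based erosion rule:

```python
# Sketch of the "mind blackboard": nodes with outdegree 2, like Lisp
# cons cells, plus a simple erosion rule that drops nodes untouched
# for too long. The aging mechanism is an illustrative assumption.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left, self.right = left, right  # the two outgoing edges

class Blackboard:
    def __init__(self, max_age=3):
        self.nodes = {}         # node -> ticks since last touch
        self.max_age = max_age

    def touch(self, node):
        self.nodes[node] = 0    # refreshed: this thought is in use

    def tick(self):
        """One step of erosion: age everything, drop the stale."""
        for n in list(self.nodes):
            self.nodes[n] += 1
            if self.nodes[n] > self.max_age:
                del self.nodes[n]  # the useless node disappears

bb = Blackboard(max_age=2)
a, b = Node("clock"), Node("painting")
bb.touch(a); bb.touch(b)
bb.tick(); bb.touch(a)   # keep thinking about the clock
bb.tick(); bb.tick()     # the painting node erodes away
print([n.value for n in bb.nodes])
```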
It all depends on what you want to do. You may want to predict what will happen after the situation you're thinking about, or try to understand how this situation happened. The way I see it, since it's a program, there should be different modules able to manipulate the graphs to achieve different things.
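The "different modules manipulating the same graphs" idea could look like this in miniature: each module is just a function from a situation to a new situation, one running forward (prediction), one backward (explanation). For brevity the situation here is a linear chain rather than a full graph, and the rule tables are made-up toy content:

```python
# Toy modules over a shared structure: one predicts forward,
# one explains backward. Rule contents are invented examples.

def predict(situation):
    """Forward module: guess what follows the current situation."""
    rules = {"clouds": "rain", "rain": "wet streets"}
    return situation + [rules[situation[-1]]] if situation[-1] in rules else situation

def explain(situation):
    """Backward module: guess how the situation came about."""
    causes = {"wet streets": "rain", "rain": "clouds"}
    return [causes[situation[0]]] + situation if situation[0] in causes else situation

print(predict(["clouds"]))       # extend the chain forward
print(explain(["wet streets"]))  # extend the chain backward
```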
One thing I'm always wondering is: why do people do things? Why do we make decisions and then stick to our goals? It seems perfectly artificial to me.
Beyond understanding how the human mind works, there's something else. If we're gonna make something revolutionary, what's the philosophy behind it? An obvious one could be: the topmost goal of the entity we create is to learn as much as it can. This is the idea of an eternally hungry AI. But somehow, I have a strong feeling that anything infinite (here, the hunger for knowledge) is extremely dangerous. Another would be: the topmost goal is to take care of mankind. Then again, what does "taking care" mean? I feel a lack of precision is also extremely dangerous. Then you have the "free will" version. Very sexy: no specific goal, just set your own goal as you see fit. Yeah, well. So what, is it random or something?
I really like the idea of the "story behind the character" loop, but it's very high-level. A lot of mechanisms in this sequence are left implicit and seem hard to define.
BTW, Neo said "everything begins with a choice".
Choice, Goal, Conflict, Result, Emotion, Reason, Prediction. Kiddin'