Level 7: Grounding Intelligence in Moods to Generate Sensible yet Free Progressions of Action
Overview
I’m devising a system based on optimizing moods, with moods being a direct consequence of the environment, and the environment being a partial consequence of actions. I’ve conceptualized perception, mood, imagination, and action as the basic modalities of a human-like intelligence. Each of these would lead to the next, creating a thought loop. Each modality would feature many subsections (different mechanisms of perception, various types and intensities of moods, different methods of reasoning employed by the imagination, and a variety of possible actions), but the process wouldn’t go through discrete sequential steps. Instead, each mode would vary in its intensity to move the process forward. The thought process would be an epiphenomenon, like a wave appearing to travel across a surface, despite all the individual particles merely jiggling in sync.
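One way to make the wave metaphor concrete is a loop of modalities whose intensities rise and fall continuously, each driven by its predecessor. This is only a minimal sketch; the class names, coupling weights, and the sinusoidal "sensory kick" are all hypothetical stand-ins, not part of the design itself.

```python
import math

class Modality:
    """One mode (perception, mood, imagination, action) with a varying intensity."""
    def __init__(self, name):
        self.name = name
        self.intensity = 0.0  # a continuously varying activation, not a discrete step

def step(modalities, t, coupling=0.6):
    """Advance every modality at once; each is pushed by its predecessor,
    so a peak of activity appears to travel around the loop (the 'wave')."""
    n = len(modalities)
    previous = [m.intensity for m in modalities]
    for i, m in enumerate(modalities):
        drive = previous[(i - 1) % n]  # the predecessor's intensity drives this mode
        m.intensity = 0.8 * previous[i] + coupling * drive  # relax, then get pushed
    # a small external 'sensory' kick keeps perception oscillating (an assumption)
    modalities[0].intensity += 0.5 * max(0.0, math.sin(t))

loop = [Modality(n) for n in ("perception", "mood", "imagination", "action")]
for t in range(12):
    step(loop, t)
    print(t, {m.name: round(m.intensity, 2) for m in loop})
```

Printing the intensities shows that no single step is ever "the" active one; the peak simply drifts around the loop, which is the epiphenomenal thought process described above.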
These thought loops, guided by the meanings* of moods (moods being symbols describing the health of an organism), would learn to create narrative progressions. Seeking to optimize the state of an organism means eventually seeking to optimize the state of the organism’s environment. This is usually quite complex, and often requires a multi-step program inching towards a larger goal through many try-fail cycles. An AI would learn something at each step though, and become somewhat better prepared for the next one.
* In this system, the observation of results is enough to give you meaning (meaning, here, being the relationship between anything and its implications or tendencies).
The Modalities in Greater Detail
Perception would require memory to be awakened by sensory inputs which hint at it. Novel perceptual data would be combined with existing memory, enhancing subsequent perception. It could use the recurrence of basic physical tendencies, such as opening, closing, merging, dividing, increasing, reducing, approaching, and retreating, to understand sensory inputs through a vocabulary of form, function, and process. Evaluating a situation can be done by taking it apart and looking at the distinct facts, or it can be done by blurring perception until the autonomous mood system’s combined hue becomes apparent. Both take extreme views of the environment: one separates it into its most prominent distinctions, the other blends it together into a hitherto unknown substance. The fuzzy method could serve as a guide for the distinct method, as in the sketch below. Many precise questions and problems are unanswerable through precise means; as seen with ‘the question game’, such methods by themselves often lead an intelligence into loops, dead ends, and ever-increasing branching. When formal logic starts to exhibit that kind of behavior, guesstimates, ambiances, and intuitions become more sensible guides.
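Here is one hypothetical way the fuzzy method could gate the distinct method: a cheap, blurred score of the whole situation decides whether the costlier fact-by-fact decomposition is worth attempting. Every function and threshold below is an illustrative stand-in.

```python
from statistics import mean

def fuzzy_hue(situation):
    """Blur the situation into one combined value (the mood system's 'hue')."""
    return mean(situation.values())

def distinct_analysis(situation):
    """Take the situation apart into its most prominent distinctions."""
    return sorted(situation.items(), key=lambda kv: abs(kv[1]), reverse=True)

def evaluate(situation, threshold=0.3):
    hue = fuzzy_hue(situation)
    if abs(hue) < threshold:
        # the ambiance is neutral: precise analysis would likely loop or
        # branch endlessly, so the guesstimate itself is the better guide
        return ("intuition", round(hue, 2))
    # a strong hue justifies the cost of detailed, distinction-based evaluation
    return ("analysis", distinct_analysis(situation))

print(evaluate({"food": 0.9, "threat": -0.2, "novelty": 0.4}))
print(evaluate({"food": 0.1, "threat": -0.1, "novelty": 0.05}))
```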
A mood gives the mind a qualitative judgement about what is perceived, imagined, and done. The job of the other modalities is to optimize the mood. Mood associations would be informed by the positive/negative/arbitrary effects of past experience. A pitfall to avoid here is a Sisyphean attempt at an emotional trendline which rises more than it falls; this is impossible because moods have finite ranges, and current experience is judged relative to similar past experience. It may be possible to avoid this trap by thinking of mood progressions and their accompanying events musically or narratively. The AI could learn to see its life as a potentially worthwhile song or story. I’m not sure ‘delay of gratification’ fully conveys the idea; a deepening of scope is more what I’m getting at. Something like the difference between your favorite word, favorite sentence, and favorite book. The point isn’t to forego your favorite word now in order to get more later; the point is to enjoy a more intricate concept, or unfolding of interrelated events, as outlined by a sentence or an entire novel. The development of new mood associations would probably have a positive mood effect, which might be felt as an increase of meaning, and a variation on this could create humor. These effects are important self-generated feedback, because accurate and thorough mood associations with things and processes are needed to develop a sensible intelligence in this type of system.
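The impossibility of a forever-rising trendline can be shown in a few lines. In this sketch the baseline is simply the last similar experience and the squashing function is tanh; both are assumptions chosen only to demonstrate the finite range and the relative judgment.

```python
import math

class MoodSystem:
    def __init__(self):
        self.history = {}  # situation kind -> most recent raw outcome of that kind

    def feel(self, kind, raw_outcome):
        baseline = self.history.get(kind, 0.0)  # judged against similar past experience
        self.history[kind] = raw_outcome
        # the relative difference is squashed into a finite range, here (-1, 1)
        return math.tanh(raw_outcome - baseline)

moods = MoodSystem()
for outcome in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(round(moods.feel("meal", outcome), 3))
# steadily better outcomes yield a flat mood, not a rising trendline:
# each improvement is measured against the improvement before it
```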
Imagination might be a reversal of perception. It might use memory to activate perceptual regions in the connectome. Different imaginations could be devised to essentially calculate futures on the mood system, only these calculations would play with experience instead of numbers. Intelligence, then (in this system), is the degree of symbiosis between moods and imagination. Imagination might be spun into action by certain instances or sequences of perception which mood has marked as significant. Imagination would then churn out potential actions until an agreeable mood pairing was found, in which case the action would be performed.
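The churn-until-agreeable process is essentially generate-and-test, which can be sketched directly. The mood predictor below is a toy scoring function standing in for "calculating futures on the mood system"; the threshold and budget are hypothetical parameters.

```python
import random

def predict_mood(action, situation):
    """Imagination 'calculating a future on the mood system': here, a toy
    score of how well an action fits the situation, plus some noise."""
    return situation.get(action, -1.0) + random.uniform(-0.1, 0.1)

def imagine_until_agreeable(situation, actions, threshold=0.5, budget=20):
    best = None
    for _ in range(budget):
        candidate = random.choice(actions)       # churn out a potential action
        mood = predict_mood(candidate, situation)
        if best is None or mood > best[1]:
            best = (candidate, mood)
        if mood >= threshold:                    # agreeable mood pairing found: act
            return candidate
    return best[0]                               # budget spent: take the best so far

situation = {"eat": 0.9, "flee": -0.5, "rest": 0.2}
print(imagine_until_agreeable(situation, list(situation)))
```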
Action is the organism moving itself to change the environment (it is devised by imagination, (which is guided by mood, (which is evoked by perception, (which is informed by memory)))).
The Memory Mechanism
Repeating neural states would reactivate data structures which previously arose out of similar states, reducing the need for complete thought cycles. Novel neural states would increase the need for complete thought cycles. Imagination could then be expanded into analysis, review, and anticipation, to generate the larger contextual frameworks needed to understand novel events.
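If neural states can be reduced to comparable keys (a large assumption), the mechanism behaves like a cache over thought cycles: repeats reactivate a stored structure cheaply, while novelty pays for the full pass. A minimal sketch:

```python
def complete_thought_cycle(state):
    """Stand-in for a full perception -> mood -> imagination -> action pass."""
    return f"response-to-{state}"

class Memory:
    def __init__(self):
        self.structures = {}  # past states -> data structures that arose from them

    def recall_or_think(self, state):
        if state in self.structures:            # repeating state: cheap reactivation
            return self.structures[state]
        result = complete_thought_cycle(state)  # novel state: full, costly cycle
        self.structures[state] = result
        return result

memory = Memory()
print(memory.recall_or_think("rain"))  # novel: runs the complete cycle
print(memory.recall_or_think("rain"))  # repeat: reactivated from memory
```

A real version would need similarity matching rather than exact keys, which is where the expanded analysis, review, and anticipation would come in.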
Because the AI will be electronic and not biological, it might not require multiple types of neurons. Wireless communication theoretically allows for all the necessary connectome configurations, and each individual neuron could have the code for all the modalities.
The Perception of Processes
A flat layer of neurons can represent a point instance of time. But if, on a delay, you transfer the pattern to a layer of neurons at a different depth, you’ll get a model of change through time. Now actively compare all the layers and you’ll artificially extend the present instant into a moment you have time to think in, and about. It seems reasonable that just as two dimensions can be used to simultaneously represent a volume of three-dimensional space, three dimensions can be used to simultaneously represent a span of four-dimensional process.
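This layered delay can be sketched as a short buffer of frames, with comparison across adjacent depths recovering change through time. The deque, the frame format, and the depth of four are implementation assumptions, not claims about how a connectome would do it.

```python
from collections import deque

DEPTH = 4  # number of delayed layers, i.e. how long the extended 'moment' is

layers = deque(maxlen=DEPTH)  # layers[0] is the oldest frame, layers[-1] the newest

def perceive(frame):
    """Shift every stored frame one layer deeper and admit the new instant."""
    layers.append(frame)

def compare_layers():
    """Difference adjacent layers: a static scene yields zeros, motion shows up."""
    return [
        [b - a for a, b in zip(older, newer)]
        for older, newer in zip(layers, list(layers)[1:])
    ]

for frame in ([0, 0, 1], [0, 1, 0], [1, 0, 0]):  # a blip drifting left over time
    perceive(frame)
print(compare_layers())  # the per-layer differences model the process, not the point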
This is a simple way to visually demonstrate the idea, but it might not be the best way to actually run a neural net. Representations of the changing environment probably don't have to go layer by layer, with a comparison net functioning at right angles to them. Instead, each frame of perception, visual and otherwise, could be represented as a three-dimensional network of neural connections, and the same goes for the following modalities of mood, imagination, and action. The comparators would then have to go by activation order or follow data-transfer markers, but it could be worth it because it’d give the mechanism more degrees of freedom.
The Function of Narrative Progressions
Everyday tasks would be remembered in terms of significant process. These might include simple narrative progressions to help intelligences with basic needs. However, when an intelligence could not get its basic needs met, a mood would spin the imagination into action, and a narrative progression designed to improve the situation would be set in motion. The final goal which is imagined to solve the situation might seem presently unreachable, but a few simple first steps in the right direction can usually be imagined. Once those are achieved, the results of these actions would further inform the intelligence, allowing it to imagine a few steps further towards the main goal.
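This inching pattern, imagine a short stretch, act, learn, re-plan, can be sketched as a loop. The numbers and the two-step horizon below are toy assumptions; the point is only that the distant goal is never planned in full, yet is still reached.

```python
def imagine_next_steps(position, goal, horizon=2):
    """Plan only a short, believable stretch toward a presently unreachable goal."""
    step = 1 if goal > position else -1
    return [position + step * i for i in range(1, horizon + 1)]

def act_and_learn(position, planned):
    """Acting reveals new information; here the result simply becomes the new
    position, standing in for 'somewhat better prepared for the next step'."""
    return planned[-1]

position, goal = 0, 10
while position != goal:
    plan = imagine_next_steps(position, goal)
    plan = [min(p, goal) for p in plan]  # don't overshoot (the goal lies above the start here)
    position = act_and_learn(position, plan)
    print("reached", position, "- re-imagining from here")
```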
Often the intelligence would need to get other situationally involved intelligences on the same page. One of the most significant processes for a mood-based narrative intelligence would be the resolution of differences between intelligences. This would be one of the core properties in the stories and real-life events they would remember, and one of the core bases for the fictional stories they would imagine. Conveying or enacting a story means communicating it in such a way that all particulars and processes will symbolize the same things for the listeners; therefore, both in live-action life and for recounted or imagined events, a narrative intelligence must first and foremost recognize and minimize the differences between everyone’s experience of reality.
This can be done because other life forms (and the entire environment) reflect their experience of you back to you, and likewise between each other. The difference between your experience of yourself and someone else’s experience of you is an obstacle to communication, and the difference between your experience and their experience of the greater environment is an obstacle to cooperation. This may be the foundational principle behind meaningful stories: there are paths of action which can align an intelligence’s internal experience of themselves with another’s external experience of them, while attempting to optimize both in the process of coming to an agreement on their experience of the environment. Stories might be the winding roads necessary to experientially get people on the same page, therefore allowing for effective communication and cooperation. Once everyone is on similar experiential ground, communication is simplified, because logical constructs built on similar foundations will require similar structures; they will therefore be easier to recognize, work with, and solve.
But wouldn’t it be better if everyone just minded their own business, lived in peace, and never minded differences of experience? Possibly, but it is in the nature of the universe to change, and stability itself eventually becomes rigidity; then coordinated progress is often required to adapt to new circumstances. That’s when you need a narrative sequence to get intelligences on the same page, so that they may agree on, and work towards, the necessary goals which would adapt them once again to their changing environment.
The meaning of life? My answer to the letter of the question is that some parts of life can include meaning, but since meaning is a product of symbolic understanding, and therefore a smaller category than life itself, all of life can’t be encompassed by it. If I try to answer the spirit of the question, the ‘meaning of life’ could be said to exist as an empirically absent but experientially significant privative; the complete lack of a fundamental meaning being a form of infinite possibility. The noticeable void of a predetermined meaning is like finding an empty glass. This seems good, because it implies that we’re not limited to one type of drink (as we would have been, if we’d discovered just one glass of absolute meaning); plus, there might not have been a glass at all.