Since there is a huge number of possible algorithms, I'll translate it to general program operation.
Mood/demeanor govern how experiences are interpreted, which responses are selected, and how intense those responses are; moods, in turn, are derived from emotions. Demeanor by itself is an optimization, for it whittles down the possible number of interpretations and responses that could be made. As the diversity of personalities in the population exhibits, there is clearly a genetic foundation for a person's normal emotional type and affect level - or simply a default state. So, I began wondering what architectural features in the brain would establish this normal default state. A couple of features immediately came to mind: the allocation of resources to domains representing the various emotional types, and the number of connections made with an LTM (representing the affect levels). The smaller the amount of cortical real estate allocated to, say, the core type of anger, the fewer the experiences that could be instantiated or indexed to provoke an angry response. Seems logical.
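The allocation idea above can be sketched in code. This is a minimal, purely illustrative model: the profile names, the allocation fractions, and the connection counts are all my own assumptions, not measurements of anything.

```python
# Hypothetical "default state" profile: each core emotion type gets a
# share of cortical real estate (allocation) and a number of LTM
# connections (representing affect reach). Values are invented.
DEFAULT_STATE = {
    "Anger":     {"allocation": 0.05, "ltm_connections": 8},
    "Happiness": {"allocation": 0.30, "ltm_connections": 40},
}

def indexable_experiences(core_type, experiences_per_unit=100):
    """The smaller the allocation, the fewer experiences that can be
    instantiated or indexed to provoke that emotion's response."""
    profile = DEFAULT_STATE[core_type]
    return round(profile["allocation"] * experiences_per_unit)

print(indexable_experiences("Anger"))      # 5
print(indexable_experiences("Happiness"))  # 30
```

The point of the sketch is only the proportionality: shrink the allocation for a core type and you shrink the set of experiences that can trigger it.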
Next, I had to decide whether I could agree with the fashionable list of core emotional types, and how many affect levels existed for each type. Well, I took exception to 2 of the core types that are so fashionable in neuropsychology.
Popular Science: 1. Happiness, 2. Surprise, 3. Fear, 4. Sadness, 5. Disgust, 6. Anger
My List: 1. Happiness, 2. Love, 3. Fear, 4. Sadness, 5. Hate, 6. Anger
In looking into the core types, I leaned heavily on words that describe levels of intensity for each term. Surprise and disgust don't have any such terms, while I could easily conjure up affect terms for all the rest, including the ones I've added as replacements. In fact, I see Disgust as an affect of Hate, and I see surprise as a cognitive reflex (akin to startle). I do agree with the popular notion that there are six basic core types of emotion, and I've selected 4 fundamental affect levels for each type.
Core Affective States
--------------- (Increasing Intensity) --------------->
Happiness:  Content    Joyful      Elated      Ecstatic
Sadness:    Depressed  Melancholy  Morose      Tearful
Love:       Like       Fondness    Passionate  Bonded
Hate:       Dislike    Despise     Disgust     Hate
Anger:      Annoyance  Mad         Furious     Homicidal
Fear:       Avoidance  Scared      Terror      Paralysis
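The table above maps directly onto a small data structure. Here is one way to encode it (the structure and names are just a convenient rendering of the table, nothing more):

```python
# The six core emotion types and their four affect levels, in order of
# increasing intensity, taken straight from the table above.
AFFECT_LEVELS = {
    "Happiness": ["Content", "Joyful", "Elated", "Ecstatic"],
    "Sadness":   ["Depressed", "Melancholy", "Morose", "Tearful"],
    "Love":      ["Like", "Fondness", "Passionate", "Bonded"],
    "Hate":      ["Dislike", "Despise", "Disgust", "Hate"],
    "Anger":     ["Annoyance", "Mad", "Furious", "Homicidal"],
    "Fear":      ["Avoidance", "Scared", "Terror", "Paralysis"],
}

# Every (core type, affect level) pair is one possible emotional tag.
ALL_TAGS = [(core, level)
            for core, levels in AFFECT_LEVELS.items()
            for level in levels]

print(len(ALL_TAGS))  # 6 core types x 4 affect levels = 24 tags
```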
My thinking on this stems from the prosodic content (vocal inflections) exhibited in spoken sentences. It seemed to me at the time that each and every word carries some sort of emotional tag associated with it, and this evolved into adopting Deacon's 3 levels of memory representation (iconic, indexical, and symbolic). I have since expanded the iconic level to include a feature-collection layer that includes multimodal features.
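The resulting four-layer representation could be sketched as a single record per word. The field names, the example content, and the attachment of the emotional tag at the word level are my own illustrative assumptions about how the layers fit together:

```python
from dataclasses import dataclass, field

@dataclass
class WordRepresentation:
    symbol: str                 # symbolic level: the word itself
    indexes: list               # indexical level: associated referents
    icons: list                 # iconic level: sensory images
    features: dict = field(default_factory=dict)  # feature collections, keyed by modality
    emotional_tag: tuple = ("Happiness", "Content")  # (core type, affect level)

dog = WordRepresentation(
    symbol="dog",
    indexes=["my neighbor's terrier"],
    icons=["barking sound", "wagging tail"],
    features={"auditory": ["bark"], "visual": ["fur", "four legs"]},
    emotional_tag=("Love", "Fondness"),
)
print(dog.emotional_tag)  # ('Love', 'Fondness')
```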
Next, I had to confront the problem of how emotions are triggered and assigned. It may seem pretty obvious to all of you, but it took me quite a while to figure this out. Eventually it struck me that it had to come from context: all that peripheral stuff outside of what is being attended to and in focus, yet actively residing in awareness (the input buffer). It wasn't long until I realized that almost everything could constitute some form of context and influence interpretation. There has to be some sort of background priming going on, initiated by the peripheral elements in an experience, and most likely the many possible contextual elements are competing for dominance so as to set the emotional foundation for interpretation and response selection.
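One simple way that competition for dominance might be modeled: each peripheral element primes an emotional tag with some activation strength, activations for the same tag accumulate, and the strongest primed tag wins. The element names and strengths below are invented purely for illustration.

```python
def dominant_context(peripheral_elements):
    """Return the emotional tag primed most strongly by the periphery.

    Each element votes for one (core type, affect level) tag with a
    given activation; votes for the same tag sum, and the winner sets
    the emotional foundation for interpretation.
    """
    totals = {}
    for element in peripheral_elements:
        tag = element["primes"]
        totals[tag] = totals.get(tag, 0.0) + element["activation"]
    return max(totals, key=totals.get)

periphery = [
    {"item": "dim lighting",  "primes": ("Fear", "Avoidance"),  "activation": 0.4},
    {"item": "raised voices", "primes": ("Anger", "Annoyance"), "activation": 0.7},
    {"item": "slammed door",  "primes": ("Anger", "Annoyance"), "activation": 0.5},
]
print(dominant_context(periphery))  # ('Anger', 'Annoyance'): 1.2 beats 0.4
```

A winner-take-all sum is only one candidate rule; a biological competition would presumably involve inhibition and decay as well, but the sketch captures the basic idea of peripheral elements jointly biasing the outcome.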
So, context sets demeanor, which biases the LTMs indexed in that domain by core type and affect level - a reference domain. The feature set in focus primes the appropriate LTM of the moment, which is then output to a comparison layer along with the present input collection that's in focus. All in all, with 6 core types and 4 affect levels each, there are potentially 24 possible interpretations. I believe that this scheme could be applied to perception in any of the modalities (perhaps with minor modifications to the input buffer). This approach could be looked upon as a form of content-addressable memory (at the feature-collection layer), and it looks like it could be a very fast and optimal way of recall and response.
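The recall step can be sketched as a content-addressable lookup: every stored LTM is scored by its feature overlap with the collection in focus, weighted by how strongly the current demeanor biases that LTM's emotional tag. The store contents, the bias values, and the multiplicative scoring rule are all my assumptions, not a claim about how the brain actually combines them.

```python
def recall(focus_features, ltm_store, demeanor_bias):
    """Content-addressable recall at the feature-collection layer.

    Score = (feature overlap with the set in focus) x (demeanor bias
    for the LTM's emotional tag); the best-scoring LTM is retrieved.
    """
    def score(ltm):
        overlap = len(focus_features & ltm["features"])
        return overlap * demeanor_bias.get(ltm["tag"], 1.0)
    return max(ltm_store, key=score)

ltm_store = [
    {"name": "friendly greeting", "features": {"smile", "wave"},
     "tag": ("Happiness", "Content")},
    {"name": "mocking gesture",   "features": {"smile", "sneer"},
     "tag": ("Anger", "Annoyance")},
]

# An anger-leaning demeanor tips the interpretation of an ambiguous smile.
bias = {("Anger", "Annoyance"): 2.0}
print(recall({"smile"}, ltm_store, bias)["name"])  # mocking gesture
```

The same ambiguous input retrieves a different memory under a different demeanor, which is exactly the biasing role context plays in the scheme described above.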