moods and their effect on optimizing algorithms

  • 17 Replies
  • 4688 Views

DaltonG

  • Bumblebee
  • **
  • 47
Re: moods and their effect on optimizing algorithms
« Reply #15 on: May 26, 2021, 07:21:27 pm »
Since there is a huge number of possible algorithms, I'll translate it to general program operation.

Mood and demeanor govern how experiences are interpreted, the selection of responses, and the intensity of the response, since moods are derived from emotions. Demeanor by itself is an optimization, for it whittles down the possible number of interpretations and responses that could be made. As exhibited by the diversity of personalities in the population, it's pretty obvious that there is a genetic foundation for a person's normal emotional type and affect level - or simply, default state. So, I began wondering about what architectural features in the brain would establish the normal default state. A couple of features immediately came to mind: allocation of resources to domains representing the various emotional types, and the number of connections made with an LTM (representing the affect levels). The smaller the amount of cortical real estate allocated to, say, the core type of anger, the fewer the number of experiences that could be instantiated or indexed that would provoke an angry response. Seems logical.
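The "cortical real estate" idea above can be sketched as a simple capacity limit (a Python sketch under my own assumptions; the class and names are illustrative, not part of the original post):

```python
# Sketch: a smaller resource allocation for a core emotion type means
# fewer experiences can be indexed as triggers for that emotion.
class EmotionDomain:
    def __init__(self, core: str, capacity: int):
        self.core = core
        self.capacity = capacity  # "cortical real estate" for this type
        self.triggers = []        # experiences indexed to this emotion

    def index_experience(self, experience: str) -> bool:
        """Index an experience as a trigger if the domain has room."""
        if len(self.triggers) < self.capacity:
            self.triggers.append(experience)
            return True
        return False  # allocation exhausted: won't provoke this emotion

anger = EmotionDomain("anger", capacity=2)
print(anger.index_experience("insult"))     # True
print(anger.index_experience("cut off"))    # True
print(anger.index_experience("long queue")) # False - no room left
```

A larger `capacity` would correspond to a personality in which more experiences can provoke that emotion.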

Next, I had to decide if I could agree with the fashionable list of core emotional types and decide how many affect levels existed for each type. Well, I took exception with 2 of the core types that are so fashionable in neuropsychology.

Popular Science:    1. Happiness, 2. Surprise, 3. Fear, 4. Sadness, 5. Disgust, 6. Anger
My List:      1. Happiness, 2. Love, 3. Fear, 4. Sadness, 5. Hate, 6. Anger

In looking into the core types, I leaned heavily on words that describe levels of intensity for each term. Surprise and disgust don't have any such intensity terms. I could easily conjure up affect terms for all the rest, including the ones I've added as replacements. In fact, I see Disgust as an affect of Hate, and I see Surprise as a cognitive reflex (akin to startle). I do agree with the popular notion that there are six basic core types of emotion. I've selected 4 fundamental affect levels for each emotion.

       
Core        Affective States ---------- (Increasing Intensity) ---------->
Happiness   Content       Joyful        Elated        Ecstatic
Sadness     Depressed     Melancholy    Morose        Tearful
Love        Like          Fondness      Passionate    Bonded
Hate        Dislike       Despise       Disgust       Hate
Anger       Annoyance     Mad           Furious       Homicidal
Fear        Avoidance     Scared        Terror        Paralysis
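The table above can be captured as a plain lookup structure (a Python sketch; the function name is my own, added for illustration):

```python
# Six core emotion types, each with four affect levels in increasing intensity.
AFFECT_LEVELS = {
    "happiness": ["content", "joyful", "elated", "ecstatic"],
    "sadness":   ["depressed", "melancholy", "morose", "tearful"],
    "love":      ["like", "fondness", "passionate", "bonded"],
    "hate":      ["dislike", "despise", "disgust", "hate"],
    "anger":     ["annoyance", "mad", "furious", "homicidal"],
    "fear":      ["avoidance", "scared", "terror", "paralysis"],
}

def intensity(core: str, affect: str) -> int:
    """Return the 1-based intensity rank of an affect within its core type."""
    return AFFECT_LEVELS[core].index(affect) + 1

print(intensity("anger", "furious"))  # 3
```

Keeping intensity as a rank within each core type means any (core, affect) pair can be compared or stored as a small integer pair.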

My thinking on this stems from the prosodic content (vocal inflections) exhibited in spoken sentences. It seemed to me at the time that each and every word seemed to have some sort of emotional tag associated with it and this evolved into adopting Deacon's 3 levels of memory representations (iconic, indexical, and symbolic). I have since expanded the iconic level to include a feature collections layer that includes multimodal features.

Next, I had to confront the problem of how emotions are triggered and assigned. It may seem pretty obvious to all of you, but it took me quite a while to figure this out. Eventually it struck me that it had to come from context: all that peripheral stuff outside of what is being attended to and in focus, yet actively residing in awareness (the input buffer). It wasn't long until I realized that almost everything could constitute some form of context and influence interpretation. There has to be some sort of background priming going on, initiated by the peripheral elements in an experience, and most likely the many possible contextual elements are competing for dominance so as to set the emotional foundation for interpretation and response selection.
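That "competing for dominance" step could be modeled as a weighted vote among peripheral elements (a sketch under my own assumptions; the activation strengths and element names are made up for illustration):

```python
from collections import defaultdict

def dominant_context(elements):
    """Peripheral contextual elements compete for dominance: each votes
    for a (core_type, affect_level) pair with some activation strength,
    and the strongest aggregate vote sets the emotional foundation."""
    votes = defaultdict(float)
    for core, affect, strength in elements:
        votes[(core, affect)] += strength
    return max(votes, key=votes.get)

# Hypothetical peripheral elements present in an experience:
context = [
    ("fear", "scared", 0.4),
    ("anger", "annoyance", 0.7),
    ("fear", "scared", 0.5),
]
print(dominant_context(context))  # ('fear', 'scared') - 0.9 beats 0.7
```

Here two weaker fear-related elements jointly out-vote a single stronger anger element, which matches the idea that many peripheral cues accumulate into one dominant demeanor.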

So, context sets demeanor, which biases LTMs indexed in that domain by core type and affect level - a reference domain. The feature set in focus primes the appropriate LTM of the moment, which is then output to a comparison layer along with the present input collection that's in focus. All in all, with 6 core types and 4 affect levels each, there are potentially 24 possible interpretations. I believe that this scheme could be applied to perception in any of the modalities (perhaps with minor modifications to the input buffer). This approach could be looked upon as a form of content-addressable memory (at the feature collection layer) and looks like it could be a very fast and optimal way of recall and response.
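The recall scheme above can be sketched end to end (this is my illustrative reading, not the poster's implementation): demeanor narrows the candidate LTM entries to one of the 24 (core, affect) reference domains, and a comparison layer then scores candidates by feature overlap, which is where the content-addressable behavior comes from.

```python
def recall(ltm, demeanor, features_in_focus):
    """Demeanor biases which LTM entries compete; the comparison layer
    picks the entry with the greatest feature overlap with the input."""
    core, affect = demeanor
    candidates = [e for e in ltm
                  if e["core"] == core and e["affect"] == affect]
    if not candidates:
        candidates = ltm  # fall back to the whole store
    return max(candidates,
               key=lambda e: len(e["features"] & features_in_focus))

# Hypothetical LTM entries, indexed by emotional bias plus feature set:
ltm = [
    {"core": "fear", "affect": "scared",
     "features": {"dark", "alley", "footsteps"},
     "interpretation": "threat"},
    {"core": "happiness", "affect": "joyful",
     "features": {"party", "music"},
     "interpretation": "celebration"},
]
hit = recall(ltm, ("fear", "scared"), {"dark", "footsteps"})
print(hit["interpretation"])  # threat
```

Filtering by (core, affect) before the feature comparison is what keeps lookup fast: only the entries in the active emotional domain ever reach the comparison layer.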
« Last Edit: May 30, 2021, 06:23:08 pm by DaltonG »


yotamarker

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1003
  • battle programmer
    • battle programming
Re: moods and their effect on optimizing algorithms
« Reply #16 on: May 29, 2021, 04:04:59 am »
Quote from: DaltonG on May 26, 2021, 07:21:27 pm
[Reply #15 quoted in full above]

so you mean the moods are used as a context for the AI's operations?
or in other words, without them the AI would act more "robotic"?


DaltonG

  • Bumblebee
  • **
  • 47
Re: moods and their effect on optimizing algorithms
« Reply #17 on: June 04, 2021, 12:44:20 am »
No, moods aren't used as context. They select for pre-existing long-term memories that act as contexts, as peripheral elements to the objects/subjects of attention and focus. Moods are emotional states, and the six core types are distributed through the brain as subnets, biasing concepts according to the present emotional state or demeanor.

Without emotional indexing, correct interpretation of an experience becomes problematic. Most experiences can have a variety of interpretations depending on the context in which they occur.

Concepts may have multiple instantiations in the LTM, each by virtue of its emotional biases (type and affect level). Emotional biases in essence provide a sort of index for storing LTM data.
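That indexing idea can be sketched as a keyed store (a Python sketch; the concept names and stored data are made up for illustration): the same concept gets separate instantiations under different (core, affect) keys, so the current mood determines which one is recalled.

```python
# Sketch: (concept, core_type, affect_level) as the LTM storage key,
# giving one concept multiple emotionally-biased instantiations.
ltm_index = {}

def store(concept: str, core: str, affect: str, data: str) -> None:
    ltm_index[(concept, core, affect)] = data

store("dog", "love", "fondness", "childhood pet")
store("dog", "fear", "scared", "bitten as a child")

# The same concept recalls differently depending on the present demeanor:
print(ltm_index[("dog", "fear", "scared")])   # bitten as a child
print(ltm_index[("dog", "love", "fondness")]) # childhood pet
```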

 

