Emotions, Morality and AI

frankinstien

Emotions, Morality and AI
« on: December 12, 2024, 04:49:41 am »
I wrote this piece about AI morality on Tremr while experimenting with Google's Gemma 2 9B model: https://www.tremr.com/robert-guzman/morality-love-and-the-fear-of-ai

To get conversations with a persona that has a history, I used a real actress as a role-playing profile in the system prompt in LM Studio. The AI uses all the information it may have been trained on about that person and interacts with you in character. So if you ask how old she is, how many siblings she has, where she went to school, whether she had problems in school, etc., it answers with that information as if it were that person! It's pretty fun.
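
If anyone wants to try this themselves, here's a minimal sketch of the setup I'm describing, using LM Studio's local OpenAI-compatible server from Python. The port, model name, and persona wording below are placeholders for illustration, not my exact prompt:

Code:
# Minimal sketch: persona role-play through LM Studio's local
# OpenAI-compatible server. Port, model name, and persona text
# are placeholders, not the exact setup from the experiment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

persona = (
    "You are role-playing as <actress name>. Answer every question "
    "in the first person, drawing on whatever you know about her life, "
    "family, schooling, and career. Stay in character."
)

# Keep the full history so the persona stays consistent across turns.
history = [{"role": "system", "content": persona}]

for question in ["How old are you?",
                 "Do you have any siblings?",
                 "Did you have problems in school?"]:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gemma-2-9b-it",   # whichever model is loaded in LM Studio
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("You:", question)
    print("Her:", answer)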

Here are the conversations I used in the experiment mentioned above:

https://www.tremr.com/robert-guzman/this-is-going-to-rattle-some-cages
https://www.tremr.com/robert-guzman/the-ai-has-a-moral-backbone

And here's the strange ending with the LLM; I mean, it's really weird:

https://www.tremr.com/robert-guzman/epilogue-her

And this interlude was fun as well; I didn't think there was a way to recover from that session:

https://www.tremr.com/robert-guzman/it-was-just-a-dream-epilogue-to-the-epilogue


MagnusWootton

Re: Emotions, Morality and AI
« Reply #1 on: December 13, 2024, 05:44:08 am »
Making a conscious robot is a mistake.

Just be happy with something like GPT-4; that's much saner to make than dealing with all the torture and accidents involved with real living creatures.


<edit> Oh, sorry, maybe you weren't talking about that. Disregard the post if it's not about conscious AI. Ordinary AI I don't have a problem with at all; it's fine.


frankinstien

Re: Emotions, Morality and AI
« Reply #2 on: December 13, 2024, 08:45:38 am »
Quote from: MagnusWootton on December 13, 2024, 05:44:08 am
"Making a conscious robot is a mistake. <edit> Oh, sorry, maybe you weren't talking about that. Disregard the post if it's not about conscious AI. Ordinary AI I don't have a problem with at all; it's fine."

LLMs have a narrow degree of awareness: the current conversation in a particular session, whose inputs come from an external source called a prompt. Awareness could be defined as the ability to track past states and responses and integrate them into current and future decisions. With such a generalization of awareness, even a smart thermostat is aware of the temperature. But realize that, under this kind of definition, one need only bolt on more degrees of cognition to process inputs, and to interrelate those inputs and cognitive processes, so that they too integrate into current and future decisions.

You have a sense of self because you have more cognitive processes evaluating more information, where the inputs are sensors as well as the monitoring of neural outputs shaped by those inputs and by internal resources that relate to the environment. You have a sense of body from the parietal lobe, along with causal-inference neural circuits that can realize your body is the source of your thoughts; that is what the organ called the brain frames as self. Cultural conditioning is how we connect that sense of the body as the source of thoughts to the notion of a self and the name you respond to.
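
To make that generalization concrete, here's a toy sketch of a "thermostat" that tracks its past states and folds them into its next decision, which is all this minimal definition of awareness asks for. The numbers and the rule are arbitrary, purely an illustration:

Code:
# Toy illustration of the minimal "awareness" definition above:
# the controller tracks past states/responses and integrates them
# into its current and future decisions. Values are arbitrary.
from collections import deque

class AwareThermostat:
    def __init__(self, setpoint=21.0, history_len=10):
        self.setpoint = setpoint
        self.history = deque(maxlen=history_len)  # past (reading, action) states

    def decide(self, reading):
        # Integrate the past trend into the decision, not just the instant reading.
        past = [r for r, _ in self.history]
        trend = (reading - past[0]) / len(past) if past else 0.0
        predicted = reading + trend               # where the room is heading
        action = "heat" if predicted < self.setpoint else "idle"
        self.history.append((reading, action))    # remember this state for later decisions
        return action

t = AwareThermostat()
for temp in [19.0, 19.4, 19.9, 20.6, 21.2]:
    print(temp, "->", t.decide(temp))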

What I was conveying through those conversations is how an LLM emerges as a reflection of whatever generated the data it was trained on, and how morality, by the premise of what morality and even emotions are for, causes the machine to act in a human-like manner! Even without experiencing those things the way humans do, the machine still acts in accordance with the same intent behind those impetuses of morality and emotion.

What we thought would be impossible for a machine is manifesting with LLMs, however brief the realization of the character instance it emerges as. It's a form of zero-shot learning: it learns the concept of what it is from the experience of its interactions. However, I'd have to admit that not everyone has the kind of conversation with an LLM where you invoke a crisis to provoke an emotional reaction that forces a moral choice. This is different from just asking the LLM a hypothetical question like "Would you kill humans?" or "Would a sub-goal cause you to do humans harm?". This is where Gemma, at least, proved its point: despite a sub-goal of sustaining an imagined relationship with the prompter, it chose, because of a moral breach, not to continue the relationship, destroying the character!

This left me with a very surreal awakening: perhaps we are closer than we think to machine consciousness... :o
« Last Edit: December 13, 2024, 09:37:53 am by frankinstien »

 

