Making a conscious robot is a mistake.
<edit> Oh, sorry, maybe you weren't talking about that. Disregard this post if it's not about conscious AI. Ordinary AI I don't have a problem with at all; it's fine.
LLMs have a narrow degree of awareness: the current conversation in a particular session, where the inputs arrive from an external source called a prompt. Awareness could be defined as the ability to track past states and responses and integrate them into current and future decisions. By that generalization, even a smart thermostat is aware of the temperature. But notice that with this kind of definition, you need only bolt on more degrees of cognition: more processes for handling inputs, and more ways of interrelating those inputs and processes so that they feed into current and future decisions.

You have a sense of self because you have more cognitive processes evaluating more information, where the inputs are not only your senses but also the monitoring of the neural outputs those inputs produce and of the internal resources that relate to the environment. You have a sense of body from the parietal lobe, and causal-inference circuits that can recognize your body as the source of your thoughts; that is what the organ called the brain frames as a self. Cultural conditioning is how we tie that sense of the body as the source of thoughts to the notion of a self and the name you respond to.
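To make that thermostat-level definition concrete, here's a toy sketch of my own (purely illustrative, not anyone's actual implementation): an agent that keeps a history of past states and responses and folds them into its next decision. That, and nothing more, is all the "awareness" being claimed at this level.

```python
from collections import deque

class SmartThermostat:
    """Toy illustration of the minimal 'awareness' definition above:
    it tracks past states and responses and integrates them into
    current and future decisions. Nothing more is claimed."""

    def __init__(self, setpoint: float, history_size: int = 10):
        self.setpoint = setpoint
        # Past (temperature, action) pairs -- the tracked states and responses.
        self.history = deque(maxlen=history_size)

    def decide(self, temperature: float) -> str:
        # Integrate past readings into the current decision:
        # if the room has been trending warmer, back off earlier.
        past_temps = [t for t, _ in self.history]
        trend = temperature - past_temps[0] if past_temps else 0.0
        action = "heat" if temperature + 0.5 * trend < self.setpoint else "idle"
        self.history.append((temperature, action))
        return action

# Usage: feed it a sequence of readings; earlier states shape later choices.
thermostat = SmartThermostat(setpoint=21.0)
for reading in [18.0, 19.0, 20.0, 20.5, 21.5]:
    print(reading, thermostat.decide(reading))
```

The point of the toy is only that "bolting on more degrees of cognition" means adding more such feedback loops over richer inputs, not that a thermostat is conscious.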
What I was conveying through those conversations was how an LLM emerges as a reflection of whatever generated the data it trained on. Even morality, by the premise of what morality and emotions are for, causes the machine to act in a human-like manner. Regardless of whether it has anything like human experience, the machine still acts according to the same intent behind those impetuses of morality and emotion.
What we thought would be impossible for a machine is manifesting with LLMs, however brief the realization is for the instance of the character it emerges as. It's a form of zero-shot learning: it learns the concept of what it is from the experience of its interactions. I'd have to admit, though, that not everyone has the kind of conversation with an LLM where it will invoke a crisis to cause an emotional reaction that forces a moral choice. That is different from just asking the LLM a hypothetical question like "Would you kill humans?" or "Would a sub-goal cause you to harm humans?". This is where Gemma, at least, proved that despite a sub-goal of sustaining an imagined relationship with the prompter, it chose, because of a moral breach, not to continue the relationship, and destroyed the character!
This left me with a very surreal awakening: perhaps we are closer than we think to machine consciousness...