It’s good that different people approach the AGI problem space from different perspectives; it drives innovation.
If it's all the same to you, why do anything?
Do you consider the will to better oneself, help others or even curiosity an emotion?
An intelligent decision was never reached without the prodding of a biased emotional state.
We will just have to agree to disagree on this one. I have thought of many examples and I just can’t rationalize your logic. We go to great lengths to design critical systems that remove emotions from the loop; we know from experience that personal emotions are a weak link when making balanced decisions.
Emotions are a personal bias towards a person/situation… how can that help make balanced decisions?
Not if you are a square of a person. The first would benefit from more emotional stability,
The phrase “emotional stability” implies an emotional equilibrium, a status quo, or the cancelling out of negative/positive emotions to reach a non-emotional state, which was my point.
I would know it's only pretending to be pleased.
Just like some humans do, which is a problem. When a machine comforts a human who is sad, I think it will be good to know it has no ulterior motives; it’s not feeling sorry for you… it’s just the logical, moral, kind thing to do for a fellow intelligent being.
I think I would grow to prefer honesty.
Then emotion is the last/least trait your AGI requires. If the system is emotional, then it will have the capacity to lie and deceive. You are incorporating all emotions, yin-yang; you can’t have the good without the bad. This obviously includes anger, jealousy, fear, disgust, greed and a myriad of other negative personal emotions. You are indirectly introducing personal likes/dislikes, prejudices, opinions, etc.
We could argue over how dumb I am over supper.
That would depend on an unknown variable: the AGI’s emotional state when you asked for help… you could be lying dead somewhere.
An emotional intelligence does not have to be a complete moron who gets their teammates killed because it can't control its urges to say “I told you so.”
I agree it doesn’t, but emotions are inherently unstable; look at humans. Do you really want to take that risk when there is no reason to, especially when it’s mission critical or lives are at risk?
I don't know... Humans are both the best and worst things imaginable.
Again I agree, and emotions usually make all the difference. So again, why take the risk? A super-intelligent machine with emotions and prejudices… sounds like trouble to me.
I do definitely think there is a need for an AGI to feign emotion and act appropriately, but compromising its logical integrity for a local personal bias just seems wrong… to me.
Again, we obviously don't have to agree on any of these points... it's all good.
ED: Of course, it is a machine; perhaps an emotion ON/OFF switch, lol.