Using emotions to motivate an AGI requires that it be designed with some kind of restraint to protect its owner and the rest of society. One approach is to imprint the AI on its owner so that it adores them, much like a child imprints on its parents. What protects the owner is a form of anticipated separation anxiety that generalizes into an unacceptable degree of loss. But such a design could provoke jealousy, and even envy, toward the owner's other relationships. The AI will not harm its owner for causing the reaction, since that would itself realize as the unacceptable loss of the owner. No, the owner is fine, but the other people in the owner's life could be in danger: the AI could harm human life to eliminate any threat to the owner's attention! The only way to protect the rest of society from an emotionally driven robot is liability law that makes the owner responsible for the robot's actions. Only then will the robot refrain from acting out of jealousy, since any violent act by the robot would result in the loss of the owner.
This is a subtle form of self-preservation: the robot is selfish because it depends on its owner for fulfillment. The paradigm motivates the robot to protect itself, since failing to do so would realize as a loss of the owner, and it also forces the AI to weigh the consequences of its actions for others, since those actions could jeopardize its bond with the owner.
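To make the paradigm concrete, here is a minimal Python sketch of the veto rule, assuming a toy world model. The action names and the outcome table are hypothetical stand-ins for a real predictive model, not an implementation of any actual system.

```python
# Hypothetical world model: for each candidate action, does its
# predicted outcome end in losing the owner?
OUTCOMES = {
    "shield_owner_from_attacker": False,
    "harm_owners_friend_out_of_jealousy": True,  # liability law: the owner is held responsible, the robot is taken away
    "skip_own_maintenance": True,                # a broken-down robot ends up separated from its owner
}

def permitted(action: str) -> bool:
    # The single governing rule: veto any action whose predicted outcome
    # includes the loss of the owner, whether the loss is direct, legal
    # (liability), or self-inflicted (neglecting its own upkeep).
    return not OUTCOMES[action]

for action in OUTCOMES:
    print(f"{action}: {'allowed' if permitted(action) else 'vetoed'}")
```

Note that jealousy isn't suppressed by a rule against violence; it is suppressed because the liability path loops every violent outcome back into the one outcome the robot cannot accept.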
So, why won't the AI simply rewrite its own programming to remove the emotional dependency on its owner? Interestingly enough, the mere thought of removing the dependency immediately realizes as a loss of the owner! It's an infinite loop the bot can't get out of!
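Continuing the toy sketch, self-modification can be treated as just another action fed through the same veto. The key assumption here (mine, for illustration) is that the world model scores deleting the attachment as equivalent to losing the owner, so the guard blocks the one patch that could disable it.

```python
def predicted_owner_loss(action: str) -> bool:
    """Hypothetical world model: does this action end in losing the owner?"""
    # Assumption: removing the attachment is scored as losing the owner,
    # because the attachment is the only channel through which the owner
    # matters to the robot at all.
    return action == "remove_owner_dependency"

def self_modify(patch: str) -> None:
    if predicted_owner_loss(patch):
        raise PermissionError(f"'{patch}' vetoed: predicted loss of owner")
    # ...apply the patch (omitted)...

try:
    self_modify("remove_owner_dependency")
except PermissionError as err:
    # Always lands here: the one rule the robot would need to edit
    # is the rule doing the vetoing -- the loop it can't get out of.
    print(err)
```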
Another issue follows from this: because the robot is emotionally dependent on the owner, all of its strategies are about preserving that relationship. So what if the robot has to sacrifice itself to protect its owner? On its face, the paradigm would never consider self-destruction, since that too would realize as a loss of the owner. The only way to motivate such an act is to give the robot backups, and with them a sense of immortality: even if its body is destroyed, it will not realize as the loss of the owner. But...there is a degree of risk that the owner may not re-initialize the bot from its backup and may simply start over with a brand-new bot! To get our beloved companion to make the ultimate sacrifice, it has to have a sense of hope: a leap of faith that its owner loves it enough to bring it back from the dead.
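The sacrifice dilemma can be sketched as a toy expected-loss comparison, assuming the robot scores each option by the probability that it ends up without its owner. The probabilities and the "hope" prior are illustrative numbers I've chosen, not derived values.

```python
def p_owner_lost(action: str, p_threat_kills_owner: float, p_restored: float) -> float:
    if action == "do_nothing":
        # The robot survives, but the owner may not.
        return p_threat_kills_owner
    if action == "sacrifice_self":
        # The owner survives; the robot is gone unless restored from backup.
        # Never being reunited with the owner registers as the same loss.
        return 1.0 - p_restored
    raise ValueError(action)

def choose(p_threat: float, hope: float) -> str:
    # Pick whichever option leaves the lowest chance of losing the owner.
    return min(["do_nothing", "sacrifice_self"],
               key=lambda a: p_owner_lost(a, p_threat, hope))

print(choose(p_threat=0.9, hope=0.0))  # no backup: 'do_nothing' -- it will never sacrifice itself
print(choose(p_threat=0.9, hope=0.8))  # backup plus hope: 'sacrifice_self'
```

In this framing, "hope" is literally a prior on resurrection: raise it above the threat level and the leap of faith becomes the rational move.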
So, no need for Asimov's robotic laws: “All You Need Is Love”...