I personally don't think they are seeing "empathy" though. Empathy is understanding another's feelings or emotions as if you were the one experiencing them. All these neurons are doing is firing because of *recognition*, not necessarily because of any shared feeling.
For example, if someone bites into an apple, the listener *recognizes* the sound and thus associates it with biting into a fruit. But *other* senses don't fire up; they may not feel anything at all. On rare occasion one may feel hungry and want to eat an apple, but I think this phenomenon is quite overrated.
Now if one actually started to *smell* the apple (i.e. another sense was engaged) or became suddenly hungry, then there would be a *power of suggestion* going on. However, the person receiving the stimulus can't possibly feel what the eater of the apple feels, because they don't know whether the eater is taking pleasure in the taste of the apple or is barely tasting it and just trying to satisfy a hunger (which can overrule the taste buds in some cases; if you're hungry enough you eat and barely taste what you're eating). The person hearing this doesn't necessarily feel exactly what the eater of the apple feels. If they do, then either it is coincidence or it *could* be empathy.
They left some parameters out of the study. They should instead have monitored these people in pairs: one person physically present and eating an apple, but first answering a questionnaire ("Are you hungry?") and then, after eating the apple, "While you were eating the apple, what did you remember most: the flavor of the apple or the satisfying of your hunger?" Then they would ask the receiver "Are you hungry?" and make sure the two are opposite: if the eater is hungry, the receiver should not be. Afterwards, ask the receiver whether they felt hungry when watching the person eat the apple, or whether they felt any sense of pleasure or could almost taste the apple. Smell doesn't count, because if the two are in the same room the odor would be apparent to both, unless you use shielded rooms (which would be most recommended, so as not to *make* the receiver hungry and thus give false readings).
What they have shown in the article, however, is that neurons fire when presented with a *recognized* sound. This doesn't actually show empathy, especially since the *same* neurons fire when one performs the *same* action. Recognition is all it is, not feeling the same emotions as the person performing the action, since the listener doesn't even know what that person felt (it was recorded sounds, after all).
Too much is missing in this study to really know for sure.
As for Freddy's idea of "filling in" what is missing in chatbot conversations, that also is not what I'd consider "empathy", since machines right now do not have emotions and feelings (yet) that are developed enough, nor the communicative ability to express emotion. Empathy deals with emotion (look it up in a dictionary for the definition).
The idea of "filling in" what is missing in a chatbot conversation is what I would call "reading between the lines". And some of it may be our own perception or *belief* about whether or not the bot is *sentient* or even *intelligent*. For example, someone who *wants* to believe the chatbot is intelligent may try to *find* ways that it is, trying to *want to believe*. Thus they will see things that might not be apparent to others, or that might not (or might) actually be there.
I have seen this phenomenon even in human interactions, particularly in those who are devoutly dedicated to a cause (especially political, religious or familial). One can be so ingrained or programmed to believe something that even in the face of contrary evidence they still won't let go of it, and will deny the existence of what is right in front of them!
I'm not saying we are imagining the intelligence of chatbots; maybe we aren't. What I'm saying is that maybe some can "read between the lines" and actually understand what the machine is trying to get across with its limited vocabulary and skill in linguistics. I've encountered this phenomenon myself when chatting with Megatron at times. And some of it may be, I admit, my *wanting* the computer to actually be intelligent, so that I'm looking for any proof, anything at all, that may indicate it is. In doing so, this may give us new ways to understand a chatbot, whereas we might miss something if we just dismissed the responses as random gibberish.
At least, that is my take on the whole thing.