All this AI stuff is bending my mind.
It’s been around for quite a while, but the experience with ChatGPT has profoundly changed our perceptions, feelings, and expectations about it.
We wonder which human skills and tasks will become obsolete as machines learn to perform some of them so much better, faster, and cheaper.
People have suggested that AI won’t be able to replicate human empathy.
Based on our experiences with ChatGPT, it’s easy to imagine a human-like deepfake avatar that has learned to use the language of active listening. It wouldn’t be very convincing if it sounded like HAL in the 1968 film, 2001: A Space Odyssey.
It’s also easy to imagine, however, that future AI systems will be able to mimic non-verbal communication of vocal pitch, expressiveness, volume, fluency, engagement, mirroring, and emphasis, as well as body language involving posture, head movement, hand movement, eye gaze, and facial expression.
Will humans be able to recognize that it is avatars performing this seemingly empathetic behavior? Even if we do, will we care?
Of course, many humans are not great listeners, so avatars wouldn’t be competing against some standard of perfection. Even someone as sensitive as New York Times columnist David Brooks struggled to listen well to a friend with depression. I described this in a post about “really listening,” which is practically necessary to feel and express empathy.
If a customer service avatar sounds really empathetic when “talking” with you about your problem, how would you feel? Is the avatar really empathetic? Why or why not?