Why Is Emotional Intelligence in AI Important and Challenging?
Artificial intelligence and emotional intelligence: Historically, these two terms have not gone hand in hand. After all, how can artificial emotional intelligence work?
And even if it could work, is that a good thing? A basic definition of empathy, according to Merriam-Webster, is “the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another.”
So in the context of AI, emotional intelligence, or more specifically empathy, means giving an AI agent the ability to interpret a user’s feelings, thoughts or experiences, and enabling that agent to adjust how it interacts with the user based on that interpretation.
To many this sounds like science fiction, but this capability is around the corner thanks to generative and conversational AI.
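To make that definition concrete, here is a minimal sketch of the interpret-and-adapt loop in Python. Everything in it is an assumption for illustration: a real agent would use a trained emotion-recognition model and a language model rather than the keyword lookup and canned tone guidance shown here.

```python
def classify_emotion(message: str) -> str:
    """Naive keyword lookup standing in for a real emotion-recognition model."""
    lowered = message.lower()
    if any(word in lowered for word in ("angry", "furious", "unacceptable")):
        return "frustrated"
    if any(word in lowered for word in ("thanks", "great", "love")):
        return "pleased"
    return "neutral"

# Hypothetical tone guidance the agent folds into its response strategy.
TONE_GUIDANCE = {
    "frustrated": "Acknowledge the frustration, apologize, offer a concrete next step.",
    "pleased": "Match the positive tone and keep the reply brief.",
    "neutral": "Answer directly and politely.",
}

def generate_reply(message: str) -> str:
    emotion = classify_emotion(message)   # 1. interpret the user's state
    guidance = TONE_GUIDANCE[emotion]     # 2. adapt how the agent responds
    # In a real system this guidance would shape an LLM prompt; here we
    # simply surface it to show the two-step loop.
    return f"[{emotion}] ({guidance})"

print(generate_reply("This is unacceptable, I've been waiting an hour!"))
```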
Integrating emotional intelligence into AI is important for two reasons: First, humans naturally want empathy in their day-to-day interactions, particularly as consumers.
We can all recall customer support calls that felt robotic and lacked emotion; we expected empathy in a moment of need and didn’t receive it. Conversely, we remember the calls where the agent was incredibly empathetic and we left feeling understood. Now imagine a customer support call fielded by an AI agent that exhibits emotional intelligence; we’d no doubt experience a mixture of confusion and awe. Today we look at computers primarily as time-saving tools – we have never expected emotional intelligence from our advanced technology any more than we would from a wrench or a bicycle. Emotional intelligence is a new paradigm for technology; for the first time in our history, humans will start to build basic relationships with computer programs.
Second, AI is the most rapidly adopted technology in human history. ChatGPT famously reached over 100 million users in approximately two months, and according to McKinsey’s latest research, businesses could reach the 50% AI-adoption threshold by 2030. As customers, employees and business owners, we will begin to interact with AI numerous times every day, and we should expect more of it than we would of a simple tool. The most likely use case for artificial emotional intelligence is AI assistants. Bill Gates and many other thought leaders are touting a rapidly approaching era in which each of us will have an AI personal assistant that helps us get tasks done – from conducting research to booking meetings to ordering lunch.
AI assistants will likely be at the forefront of the convergence of AI and emotional intelligence, as our expectations of these assistants will quickly move beyond basic tasks. Imagine if your AI assistant could detect that you’re fatigued and offer to clear space in your calendar so you can recharge, or order your favorite beverage to rehydrate.
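As an illustration only, such an assistant might map a detected user state to proactive suggestions. The state label, the `Suggestion` type and the action names below are all invented for this sketch; a real assistant would hook into actual calendar and ordering APIs.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str   # hypothetical action identifier
    reason: str   # what the assistant would say to the user

def suggest_for_state(state: str) -> list[Suggestion]:
    """Map a detected user state to proactive, empathetic suggestions."""
    if state == "fatigued":
        return [
            Suggestion("clear_calendar_block",
                       "You seem tired; shall I free up 30 minutes to recharge?"),
            Suggestion("order_favorite_beverage",
                       "Want me to order your usual drink?"),
        ]
    return []  # unknown or neutral states produce no suggestions

for s in suggest_for_state("fatigued"):
    print(f"{s.action}: {s.reason}")
```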
Enabling empathy within AI is hard, but it is not a novel concept. The AI community is still iterating in this space, as there isn’t yet a definitive answer to how you enable a machine to interpret someone’s feelings, or how you instruct it to respond and adapt.
With the first generation of AI personal assistants – like Siri, Google Assistant and Alexa – there were enormous leaps in conversational interaction with machines, but their instruction sets were largely limited to programmed responses around language. Generative AI gives us the opportunity to significantly widen the aperture on how we build this. There are hundreds of ways to recognize that someone is frustrated, happy or curious – from language to vocal tone to facial expression.
Put simply, there are countless ways for AI to interpret and respond to those cues, and no two humans or cultures exhibit them in exactly the same way. It’s a new frontier. Training AI on emotional intelligence will take hard, iterative work – expect baby steps, not a turnkey AI psychologist.
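One hedged sketch of how those channels might be combined: treat each modality as a vote and fuse them. The three cue labels are assumed inputs from hypothetical text, voice and facial classifiers; production systems use far more sophisticated fusion than a majority vote.

```python
from collections import Counter

def fuse_cues(text_cue: str, vocal_cue: str, facial_cue: str) -> str:
    """Majority vote across modalities; on a three-way tie, fall back to the
    text cue, since language is often the most reliable channel in chat."""
    votes = Counter([text_cue, vocal_cue, facial_cue])
    label, count = votes.most_common(1)[0]
    return label if count > 1 else text_cue

print(fuse_cues("frustrated", "frustrated", "neutral"))  # frustrated
print(fuse_cues("curious", "neutral", "happy"))          # curious (tie fallback)
```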
While the technical challenges are profound, so too is the need to ensure user safety and prevent artificial emotional intelligence from being used in undesirable ways. The greatest risk is not the AI itself, but the humans behind it teaching AI agents to exploit emotional tipping points to influence human actions. The guardrails we put in place today will be critical to the safe and ethical use of this technology. The most obvious first step: AI agents should never be used to convince humans that they, too, are human; users should always know they’re interacting with an AI agent so that a clear boundary remains.
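As a rough illustration of that first guardrail, an agent’s replies could pass through a disclosure check before reaching the user. The wording, the `turn_number` convention and the string check below are assumptions for the sketch, not any real library’s API.

```python
DISCLOSURE = "Note: you are chatting with an AI assistant."

def enforce_disclosure(reply: str, turn_number: int) -> str:
    """Prefix the first reply with a disclosure and block claims of humanity."""
    lowered = reply.lower()
    if "i am human" in lowered or "i'm human" in lowered:
        # Never let the agent assert that it is human.
        reply = "I'm an AI assistant, not a human, but I'm happy to help."
    if turn_number == 0:
        reply = f"{DISCLOSURE}\n{reply}"
    return reply

print(enforce_disclosure("Happy to help with your order!", turn_number=0))
```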
Generative and conversational AI are changing the way humans interact with machines.
For the first time in history, machines will be able to interpret our emotions and adapt their interactions with us based on those interpretations. This new paradigm will undoubtedly bring a broad mix of questions, polarizing reactions and excitement. If we can do it safely and incrementally, it has the potential to transform experiences for customers and employees alike in ways we’ve never anticipated or imagined.
I, for one, am cautiously excited!