Why AI has to overcome 3 challenges to become authentically human
Our expectations of our interactions with computers have developed in the age of search. As users, when we make a query, we get an answer. Sometimes there’s a really good match between what we wanted to know and the answers we get. Sometimes there isn’t. Search is good for some types of query but bad for others, and when the latter applies, we look for human help.
Tom Wood is a founding partner of Foolproof
In the future though, a question like “what’s the best mobile phone?” will be answered with a question (perhaps “what do you use your current phone to do?”). In fact, the path to a satisfactory answer will be a whole series of questions with you providing the answers.
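The question-led pattern described above can be sketched as a simple dialogue tree: each node either asks a clarifying question or delivers a recommendation. This is a hypothetical illustration, not any vendor's implementation; the questions, answers, and structure are invented.

```python
# Hypothetical sketch: an agent that answers "what's the best phone?"
# with clarifying questions rather than a single search result.
# Each node is either a question with follow-up branches keyed by the
# user's answer, or a final recommendation.
DIALOGUE = {
    "question": "What do you use your current phone to do?",
    "branches": {
        "photos": {"recommendation": "a phone with a strong camera"},
        "work": {"recommendation": "a phone with long battery life"},
    },
}

def run_dialogue(node, answer_for):
    """Walk the tree, asking questions until a recommendation is reached."""
    while "recommendation" not in node:
        answer = answer_for(node["question"])
        # Fall back to the first branch if the answer isn't recognised.
        node = node["branches"].get(answer, next(iter(node["branches"].values())))
    return node["recommendation"]

print(run_dialogue(DIALOGUE, lambda q: "photos"))
```

In a real system the `answer_for` callback would be a live chat or voice exchange, and the tree would be generated from a knowledge model rather than hand-written.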
Artificial intelligence must overcome three challenges if it is to reach mass acceptance by customers: changing how computers answer questions, showing empathy, and making the interface genuinely intuitive.
For AI to succeed in customer service there will have to be a shift in expectation. Soon computers won’t serve ‘dumb answers’ but ‘smart questions’. So what are the rules and etiquette of computers asking us questions and conversing with us?
The earliest moments of interaction with AI are likely to be critical, and they probably need deliberate design. As well as adjusting to the new conceptual model of AI, the algorithmic models from which the technology delivers data-driven responses must also be able to gauge the users’ emotional state so as not to alienate or offend.
AI vendor Rainbird has told us that many AI ventures are looking to the five-factor model as a way to gauge the emotional state of the human they are addressing. It is a theory of five broad types of personality and psyche. The ‘Big Five’ are:
- Openness to experience
- Conscientiousness
- Extraversion
- Agreeableness
- Neuroticism
By programming a degree of emotional awareness into the system, the software will begin to identify and differentiate between data which are fixed and factual (e.g. name, date of birth) and those which are transitory such as mood. The AI system can then draw inference from all of the information it is gathering and assess the customer’s emotional state and personality type. It can then tailor its own style in handling the contact. For the user experience of AI to be successful there will need to be a lot of experimentation around the tone and style of the AI agent.
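The separation described above can be sketched as code: stable facts are stored, transitory signals drive the agent's tone. This is a minimal hypothetical illustration; the field names, signal values, and threshold are all invented, and a real system would infer mood from language rather than receive it as a number.

```python
# Hypothetical sketch of splitting fixed, factual data from transitory
# signals such as mood, then choosing a conversational style from them.
FIXED_FIELDS = {"name", "date_of_birth"}       # stable facts about the customer
TRANSITORY_FIELDS = {"mood", "frustration"}    # signals that change per contact

def split_profile(data):
    """Separate stable facts from transitory emotional signals."""
    fixed = {k: v for k, v in data.items() if k in FIXED_FIELDS}
    transitory = {k: v for k, v in data.items() if k in TRANSITORY_FIELDS}
    return fixed, transitory

def choose_tone(transitory):
    """Pick a conversational style from the transitory signals (invented threshold)."""
    if transitory.get("frustration", 0) > 0.7:
        return "brief and apologetic"
    return "friendly and exploratory"

fixed, transitory = split_profile(
    {"name": "Alex", "date_of_birth": "1990-01-01", "frustration": 0.9}
)
print(choose_tone(transitory))
```

The design point is the one the paragraph makes: facts persist across contacts, while tone is re-derived every time from whatever transitory signals the current conversation supplies.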
Our suspicion is that successful early implementations will personify as a ‘dumb but well-meaning robot’ until users become familiar with the new interaction dynamic and chalk up some successes with AI agents.
Intuitive User Interface
There seem to be two big challenges here.
The first is about successfully building knowledge models. Rainbird was quick to point out that (pretty obviously) the tools for inputting human knowledge into AI systems have been developed by AI software engineers, not by people who know about tomatoes or savings accounts. If the cost and effort of building useful knowledge models is too great, AI will never get out of the starting blocks.
So there’s a huge user experience challenge in building development systems that can be populated with knowledge without someone with a PhD in machine learning having to be in the room. This harks back to the very earliest days of commercial computing, when only computer scientists could make or use computers. Some lessons from those times will need to be relearned. Usability skills will probably mark out the winners from the losers in the race to commercial success for AI software firms.
The second, more obvious challenge, is in interaction design between computer and human. Screen-based interactions (e.g. webchat) are probably going to be easier. There’s already lots of learning about user experience for this kind of interaction - and clearer expectations from customers about what 'good' looks like.
For this reason it’s likely that the first mass-market implementations of AI will be in this space. But we can expect some ‘uncanny valley’ moments as we move from spotting whether a human or an AI is driving the interaction, to not being able to tell, to finally not caring.
Voice interface with AI seems harder and riskier. Customer expectations are set by the two extremes of warm, flexible human conversation and the stilted, robotic interactive voice response (IVR) that we already encounter in call-centre queues. Natural language interfaces like Siri have already set some user expectations about how it will work and feel, but when the AI is running the conversation by posing questions, it can’t be guaranteed that this interface style is the solution. Short conversations with Siri can be useful; long ones can be a drag.
So what happens next?
There’s simply too much money on the table for major brands not to start experimenting with AI-driven customer service in the near future.
History teaches us that bad early implementations – where the needs of the customer are considered secondary to the needs of the business – will slow the rate of adoption and acceptance. If poorly planned, poorly tested AI implementations are forced onto customers, they will avoid them. And annoying AI interactions will become a meme, much as offshore call centres had by the mid-00s.
To avoid that, AI services will need to be implemented with care and patience. Web-based services are probably the place to start. But it’s also possible that AI-assisted human customer service can be a gateway to full automation of voice-interface in the future. Let’s not be neurotic about it.