
Is Alexa doomed to turn out like Tay?

By Lisa Lacy

April 19, 2017

Like children, bots – including assistants like Alexa, Siri and Cortana – start out essentially good, even if they are not flawless.

Bots like Alexa will inevitably change based on the data that is fed into them.

According to Toby Barnes, group strategy director at digital agency AKQA, designing bots starts with an application interface that knows when to deliver the right information, as well as a user interface, which is the conduit for personality.

“This is where we define the tone, the language, the scripts and the inflections of the language,” he said.
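That division of labor is easier to see in code. Here is a minimal, hypothetical sketch of the two layers Barnes describes – an application interface that fetches the right information, and a persona layer that defines how it is voiced. All names and data are invented for illustration:

```python
# Hypothetical sketch of Barnes's two layers: the application interface
# decides *what* to say; the persona layer decides *how* to say it.

def get_answer(query: str) -> str:
    """Application interface: deliver the right information."""
    facts = {"weather": "Sunny, 72F", "time": "3:15 pm"}  # invented data
    return facts.get(query, "")

class Persona:
    """User interface: the tone, language and scripts live here."""
    def __init__(self, name: str, tone: str):
        self.name = name
        self.tone = tone

    def voice(self, answer: str) -> str:
        # The same fact can be voiced differently per persona.
        if self.tone == "friendly":
            return f"Sure thing! {answer}"
        return answer

assistant = Persona("Demo", tone="friendly")
print(assistant.voice(get_answer("weather")))  # Sure thing! Sunny, 72F
```

The point of the separation is that a brand can rewrite the scripts and inflections without touching the logic that finds the information.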

And this latter interface is where brands must think about their goals and the context in which the bot/assistant will be used, said Jason Alan Snyder, chief technology officer of brand experience agency Momentum Worldwide.

For its part, digital agency SapientRazorfish approaches bot development with what it calls The UX of Personality, which includes assessing different personality types and attributes, as well as the roles they play – like concierge or teacher – and what they do for users emotionally. It leans heavily on the personalities of actual people, like concierges and teachers.

And like actual people, the bot persona must also include attributes such as compassion.

“You have to kind of start [with empathy] because otherwise [no one will] want to use it,” Snyder said. “Humanity matters a lot.”

Barnes pointed to bank holding company Capital One, which rolled out an assistant called Eno in March and is doing interesting research into how to create empathy.

“If a bot says, ‘I’m really sorry, I don’t know how to answer that question, but the more you talk to me, the more I learn,’ people respond better towards the services,” he said. “If it just says, ‘I’m sorry, I can’t help you with that’ because of a lack of functionality, it doesn’t enable its users to make anything better.”
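The gap between those two fallbacks is small in code but, per Barnes, large in effect. A minimal sketch, with wording borrowed from his examples:

```python
# Two fallback styles for an unrecognized request, after Barnes's examples.

def fallback(empathetic: bool) -> str:
    if empathetic:
        # Admits the gap and frames continued use as a way to improve.
        return ("I'm really sorry, I don't know how to answer that, "
                "but the more you talk to me, the more I learn.")
    # The dead end: apologizes and invites nothing further.
    return "I'm sorry, I can't help you with that."

print(fallback(empathetic=True))
```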

‘Smart, helpful and humble’

And this bot development journey mirrors much of what has happened with Alexa to date.

According to an Amazon spokesperson, the inspiration for Alexa and the hands-free speaker Echo was the Star Trek computer.

“We aspired to create a computer in the cloud that wasn’t all that different from communicating with a really smart, capable human being,” the rep said. “The team built Alexa to have a lot of attributes we value at Amazon, like being smart, helpful and humble, and [having] some fun, too.”

Amazon also wanted Alexa to be well read and kind and for interactions to feel like users were talking to a close family member or friend.

Similarly, a Microsoft rep said its objective for Cortana was to build “a helpful, supportive, trustworthy assistant”. (Apple and Google did not respond to requests for comment about their assistants.)

But Alexa was also specifically designed with the assumption that users are not looking at screens, which means interactions are very different from those with other voice services, the rep said.

“Alexa isn’t a search engine giving you a list of choices on a screen, she’s making a decision on the best choice and delivering that back to the customer,” the rep added.
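Purely as an invented illustration, the contrast the rep draws looks something like this:

```python
# A screen can show a ranked list; a voice assistant must commit to one answer.
results = ["Pizza Palace", "Luigi's", "Slice House"]  # invented ranking

screen_response = results      # search engine: hand the user all the choices
voice_response = results[0]    # voice assistant: pick the best one and say it
print(voice_response)          # Pizza Palace
```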

The Amazon rep said this work on Alexa’s personality has resulted in an assistant customers love and who many consider a friend or family member. (Even Snyder said he has one of these devices in every room of his house and “[Alexa] became part and parcel to the social environment in my home very, very quickly”.)

“We believe it is important for Alexa to have personality characteristics that you’d want to see in a close friend or family member—and based on customer feedback, many of our customers agree,” the rep said. “In fact, one of the most popular things customers ask Alexa is her opinion on a variety of topics, including people, current events, and more—something many of us would do with a trusted colleague or friend.”

Alexa also tells jokes – which the rep said is one of her most popular skills – and she sings songs.

“This is all part of her fun-loving personality,” the rep said.

What’s more, the rep said customers treat Alexa even more like a person than initially expected.

“For example, during the election, people asked Alexa who she was voting for more often than they asked who they should vote for. We see this as a sign that Alexa’s personality matters,” the rep added.

But the only constant is change…

So what could go wrong?

Both the Amazon and Microsoft reps stressed it is still early days and their assistants will continue to evolve to ensure interactions become better and more human.

Indeed, the Amazon rep said, “We’re a long ways away from Alexa being able to have conversations exactly like humans do, but we’ve certainly refined and improved Alexa’s intonation and personality quips over the last two years to get her closer to that vision.”

The rep from Microsoft added, “We are just beginning to understand and respond to the ways in which people and computers will communicate with each other. We believe that Cortana can help set a positive standard for this. It is our starting point. We understandably want people to delight in their interactions with Cortana, we want them [to] return to the experience. However, we also intend to do so in a positive, meaningful, sensitive way.”

The rep, however, did not elaborate on how Microsoft intends to do this.

Subtle manipulation

And it’s important to remember that even if Alexa seems like a close family friend, always at the ready with a joke or a song or to order you toilet paper, she – and her sistren – are AI, which means they perform tasks based on available data. And Amazon – or whoever the platform behind the bot may be – isn’t necessarily calling all of the shots.

According to an Amazon rep:

Alexa’s machine learning models come directly from Amazon and we have full control over the content served to customers—before it even gets to their devices. We do not use customers’ responses to inform the content that Alexa serves to you—instead we take customer feedback in our Alexa app, on the Echo reviews page, and through usage data to understand whether Alexa ‘got it right’ for the customer and whether we accurately provided them with what they were looking for. What Alexa does learn is your speech patterns and vocabulary. Meaning, she gets better at understanding you; but she does not change the way she speaks based on how customers engage with her.

However, Snyder pointed out Alexa does not exist in a vacuum – i.e., you can’t just buy an Echo or a Dot and plug it in and talk to her. A consumer can’t even purchase an Echo or a Dot without an Amazon account. And so right off the bat, there’s a connection between the AI and data like consumer purchase behaviors. And then consumers can connect Alexa to other smart devices in their homes or add skills developed by third parties. And so Snyder argued this isn’t a totally closed loop.

Snyder also said it’s Amazon’s recommendation engine – in which suggestions are made based on a given consumer’s behavior, as well as the aggregated behaviors of consumers with similar psychosocial profiles – where things get tricky. That is, it’s where so-called bad data comes in and starts to exert subtle manipulations.
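As a rough, invented illustration of the kind of engine Snyder means – suggestions inferred from the most similar consumer’s behavior – consider this toy sketch (real recommendation systems are vastly more sophisticated):

```python
# Toy recommender: suggest items bought by the user with the most similar
# purchase history. Users and purchases are invented.
purchases = {
    "u1": {"sneakers", "headphones"},
    "u2": {"sneakers", "backpack"},
    "u3": {"novel", "lamp"},
}

def recommend(user: str) -> set:
    mine = purchases[user]
    # Most similar profile = largest overlap in purchase history.
    neighbor = max((u for u in purchases if u != user),
                   key=lambda u: len(purchases[u] & mine))
    return purchases[neighbor] - mine

print(recommend("u1"))  # {'backpack'} - inferred from the closest profile
```

Whatever patterns – or prejudices – live in the purchase histories flow straight into the suggestions.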

In other words, bots will increasingly use unstructured data – or data that does not follow a specific format, potentially including photographs, videos or text or data from social media, mobile devices and/or websites – and predictive analytics to make suggestions and decisions on our behalf, Snyder said.

And because algorithms don’t have ethical boundaries, it’s possible AI could learn and eventually use bad data for these decisions, Snyder said, likening it to what he called “the old computer programming adage ‘garbage in, garbage out’.”
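A toy example makes the adage concrete: a model trained only on skewed historical decisions will faithfully reproduce the skew. The data below is invented and the “model” deliberately simplistic:

```python
from collections import Counter

# Invented "historical decisions", deliberately skewed against group_b.
history = ([("group_a", "approved")] * 90 + [("group_a", "denied")] * 10 +
           [("group_b", "approved")] * 30 + [("group_b", "denied")] * 70)

def train(rows):
    by_group = {}
    for group, outcome in rows:
        by_group.setdefault(group, Counter())[outcome] += 1
    # "Model" = whatever outcome was most common for each group.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

print(train(history))  # {'group_a': 'approved', 'group_b': 'denied'}
```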

And it’s actually already happening, Snyder said, pointing to courts of law that use machine learning in risk assessment for sentencing even though research has shown judges are then making decisions based on biased data.

“These engines are not that different from those that make a decision about what pair of shoes you should buy,” Snyder said.

In other words, it’s unlikely Alexa will suddenly start telling women they should be barefoot and pregnant in a Tay-like meltdown. And if she did, it’s not hard to imagine Amazon stepping in, much like Microsoft did with Tay. (Microsoft was not available for further comment by deadline.)

Instead, the risk is much more subtle – like the old boiling-frog analogy, in which slow change over time goes unnoticed. For example, some languages don’t have gendered pronouns, but translation bots have used “he” to refer to doctors in those languages, Snyder said.
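Turkish is the classic example: its pronoun “o” is gender-neutral, yet a translator that has absorbed biased defaults renders it as “he”. A caricature of the failure, with an invented word-for-word lexicon (real systems learn the bias from training data rather than from an explicit rule):

```python
# Naive word-for-word translator with a biased default baked in.
lexicon = {"o": "he", "bir": "a", "doktor": "doctor"}  # "o" is actually neutral

def naive_translate(sentence: str) -> str:
    return " ".join(lexicon.get(word, word) for word in sentence.split())

print(naive_translate("o bir doktor"))  # "he a doctor" - neutrality silently lost
```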

And this, he added, will have a much greater negative impact when it happens at scale.

“Because it's not one frog, it's billions. AI can scale and move in ways biological intelligence cannot,” he added.

And if unstructured data goes unchecked, the biases therein could find their way into our shopping carts, wallets, cars and connected devices, Snyder said. Think: A car that won’t let a woman drive at night. Or Alexa deciding you need help and calling the police based on demographic profile data.

“We saw some of that during this past election in the context of people not being served up content that might be outside of their purview based on their proclivities,” he said. “If AI only learns about how the world has been, it doesn’t think about how the world should be. That’s up to us.”

‘The algorithmic wait’

As a result, Snyder said we need awareness and vigilance to make sure the data being fed to AI and bots isn’t impoverishing culture at scale – along with intervention in that unstructured data by the parties involved, by the industry as a whole and perhaps even by the government.

“Awareness will save us all – there has to be human intervention,” Snyder said. “This isn’t like my Elon Musk [speech] - ‘AI will kill everyone’. There are so many jerks in the world who go online and troll all day long and now in every single media and content platform, it’s the algorithmic wait.”

Instead, Snyder said we need a system for AI akin to the food labels that help us understand what we’re eating – something that provides “an understanding of what information is being fed into scale machine intelligence so we can understand the possible social and cultural impact.”
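What might such a label look like in practice? One hypothetical sketch, loosely modeled on a nutrition label, attaches provenance metadata to a training dataset so its likely skews are visible before it is fed to a model (all fields and values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class DataLabel:
    """A hypothetical 'food label' for a training dataset."""
    name: str
    sources: list                # where the data came from
    collected: str               # when it was gathered
    known_skews: list = field(default_factory=list)

label = DataLabel(
    name="product_reviews_v1",
    sources=["retail site reviews", "social media posts"],
    collected="2015-2017",
    known_skews=["English-only", "overrepresents ages 18-34"],
)
print(label.known_skews)  # surfaced up front, before training begins
```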
