Getting past the AI hype at SXSW
The prevailing winds of artificial intelligence (AI) are blowing toward a not-so-distant future in which machines take over and humans become surplus to requirements.
AI’s hype machine, perpetuated in part by the media, has some notable surrogates who espouse the existential threat. Elon Musk, at a 2015 MIT symposium, said: “With artificial intelligence we are summoning the demon.” Bill Gates echoed Musk’s contention, saying in a Reddit AMA that he is “in the camp that is concerned about super intelligence.” More directly, physicist Stephen Hawking said in a BBC interview that he thinks “the development of full artificial intelligence could spell the end of the human race.”
Indeed, AI looks set to automate some of the more rote roles and processes in work and life. But to Kate Darling, research specialist and fellow at the MIT Media Lab, and Nilesh Ashra, director of creative technology at Wieden+Kennedy Portland and creative director and co-leader of W+K Lodge, the agency’s technology practice, the hype may not be close to the reality.
“Names like that are drumming up a lot of attention for this and it's just interesting that none of these people actually work in AI or know much about it,” said Darling. “We are so quick to ascribe intent to the system itself rather than looking at the people building the systems and the effect that might have.”
The big names are discussing the role of artificial super-intelligence, yet the practice and reality of AI is still very much nascent. To Ashra, the narrative of humans becoming obsolete has been around for a while.
“There's a cultural tension there that's fascinating — the whole impending ‘Robo-pocalypse’ has just been a part of science fiction and culture for so long,” he noted. “I think that's the current rhetoric. A lot of what I want to do is get up there and say really, there seems to be some very interesting and also very untended-to existing opportunities and challenges with AI as it relates to humanity.”
To that end, Darling and Ashra look to illustrate the reality of AI in their SXSW talk entitled “AI: Actually Still Terrible,” on March 16th.
The big disconnect
As it stands, the AI universe, and especially the consumer ecosystem, has been inundated with practicality (see: Siri, Alexa, Google Home), yet seems to miss the empathy and more human side of the technology’s promise.
“It all feels like it was made out of the same place. A very serious, earnest female sounding, slightly dystopian AI,” said Ashra.
The disconnect at the moment lies where functionality meets humanity, a cohesion that seems to be lost in the hype shuffle. As AI helps optimize everyday tasks such as driving a car, shopping or accessing information, there is ample opportunity for the two to work together.
“We're not looking at a few decades of robots gradually replacing people,” said Darling, who works mainly in the robotics space, specifically robot ethics. “What we're looking at in the next few decades is robotic systems and AI systems working together with people because the technology is simply not anywhere near being able to take over most human activity. There are a lot of pitfalls in overestimating what the technology can do and underestimating some of its flaws.”
That maturity (or lack thereof) of the technology is a critical consideration. For every Gates, Musk or Hawking, there are numerous others who work on the more practical side, away from the discussion of the world’s impending doom. Interestingly, the voices of woe tend to be more white and male. The likes of Kate Crawford, principal researcher at Microsoft Research New York City, and Heather Roff, senior research fellow at Oxford, point out that though there are issues and questions to address, a broad-stroke view is dangerous. The bias that comes from big names or companies is potentially driving the narrative.
“It is remarkable how much of the investment is happening out of three post codes in the Bay Area,” said Ashra. “There’s an over-emphasis on the AI that is happening at scale via big tech companies, I think there's such a heavy bias felt on multiple factors. There's bias on what the founder makeup looks like of those companies; there's bias on what Wall Street will value as companies; there’s bias because those companies are basically looking for product market fit now. And it's a challenge for people like Kate and I who are actually coming up through this via a design and technology practice born out of experimenting in a very early field.”
Indeed, with the huge investments and mindshare in AI, those toiling away at the technology day in and day out can get lost in the shuffle.
“I think it's time to shine a light on both the concerns that smaller groups are having with those big implementations of AI — and what smaller design and technology studios and practices are doing to experiment with alternative applications of AI that are more honest and grounded,” said Ashra.
The big opportunities, including emotion
For Darling and Ashra, though they are frustrated with how AI is playing out publicly, and though the title of their SXSW talk may raise eyebrows, they certainly see myriad opportunities.
Part of the ethos at Wieden+Kennedy is connecting hearts and minds. To Ashra, the practice of character design, creating more whimsical, honest, personable and unique personas that counter the current state of play, is one field that could become massive. It is worth discussing, and one of many takeaways that both hope to give the audience in Austin.
“Let’s put emotional intelligence and emotional tolerance back in the center of the way that we design these AI experiences,” said Ashra when asked about what he hopes the audience will glean, in addition to practical advice. “One of the primary motives should be to understand the other side. AI should be more understanding of the human on the other side, and the human should be more understanding of AI on the other side.”
“If people just come away with a little bit more skepticism about some of the claims being made by these big names I think that would be a success too,” concluded Darling. “I love that we're going to try and be constructive and not just puncture the hype — but also hopefully provide some takeaways for people on how to design systems or what to focus on.”