
What our debate taught us about AI and bots

By Jon Wilkins, Chairman

March 27, 2019 | 9 min read

At SXSW this year we presented ‘The Battle of Extremes: An AI Moderated Debate’ to festival-goers. The idea was to explore whether an AI bot could follow a conversation about some of the biggest issues of our time and call out extreme thinking, absolutism, heightened emotion, and swings between positivity and negativity. From data protection and privacy to personalisation and ethics, the meta twist is that these are some of the central debates stemming from the existence of AI itself.

What our bot taught us

The idea first came about as a convergence of different strands of conversation within the agency. At Karmarama, our mantra is ‘good works’, and our client work and culture follow that credo. For this project, we wanted to do something interesting with technology and ask ourselves honestly: is it useful? Can it do more than cut costs? That is undoubtedly a good commercial reason for working with emerging technologies, but we wanted to take it further.

We landed on the format of a debate fairly easily, driven by what SXSW stands for: the convergence of diverse topics and the assorted people who discuss them. We chose AI because it is one of the most exciting and promising sets of technologies in existence right now, and one still in its relative infancy.

We wanted to explore whether it was possible to prove or disprove AI’s usefulness in society, so we set about running an experiment to test whether it could be socially positive. We considered how AI could best take part in a debate, and settled on having it analyse, visualise and moderate the discussion in collaboration with a human moderator on stage. The team decided to model the bot after, well, me. They told me it was in the spirit of demonstrating how human and machine can work together, but until you’ve been interviewed by yourself, don’t tell me it isn’t a bit weird.

The format wasn’t quite right, however. The issue lay in the binary nature of debating: one side wins the argument and the other loses. What we wanted was a constructive dialogue, so the panel members would need to start from their differing points of view. The job of Jon-Bot would be to help them and the audience see different perspectives and pinpoint where their views diverged. We didn’t want to use technology to force consensus, but to help each side find common ground.

We built the bot ourselves, starting by searching through existing services. Amazon, Microsoft and IBM all provide off-the-shelf speech-to-text, natural language processing, machine learning and sentiment analysis capabilities, which suited our challenge: building an AI that could analyse a conversation in real time and assess the topic being discussed and its related content, the breadth of that content, the emotions on display and the accuracy of the claims being made. When it came to analysing how the debaters performed against one another, we needed to see whether they were converging or diverging, and on which topics they found common ground.
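
To make that concrete, here is a minimal sketch of the kind of off-the-shelf building block we mean, using Amazon Comprehend’s sentiment and key-phrase detection via the boto3 Python SDK. This is not the code behind Jon-Bot; the choice of Comprehend, the region, and the naive ‘common ground’ heuristic are illustrative assumptions.

```python
# Minimal sketch: score one debater utterance with Amazon Comprehend (boto3).
# Assumes AWS credentials are already configured; the region is illustrative.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

def analyse_utterance(text: str) -> dict:
    """Return overall sentiment and key phrases for a single utterance."""
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
    return {
        "sentiment": sentiment["Sentiment"],    # e.g. POSITIVE / NEGATIVE / NEUTRAL
        "scores": sentiment["SentimentScore"],  # per-class confidence values
        "key_phrases": {p["Text"].lower() for p in phrases["KeyPhrases"]},
    }

def common_ground(side_a: dict, side_b: dict) -> set:
    """Naive convergence heuristic: key phrases both sides have used."""
    return side_a["key_phrases"] & side_b["key_phrases"]
```

Tracking how that shared phrase set grows or shrinks over the course of a debate is one crude way to visualise convergence and divergence.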

We worked closely with Accenture’s team at The Dock in Dublin, which specialises in applied intelligence. They set about creating a bespoke AI platform to perform the tasks required, working closely with our research and insights team, who gathered a corpus of data on the topics of the debate.

At the start of our experiment, the AI in our minds was human-like, with a broad range of knowledge; our expectation was that it could understand what was happening and respond. This is known as ‘general intelligence’, and it doesn’t exist in AI yet. The point at which it arrives is what’s known as ‘the singularity’, when machine intelligence surpasses human intelligence. It’s an exciting fantasy for futurologists, but that’s all it is for now.

For our bot, we had to get down to brass tacks. To train the AI to understand the themes of a debate about society, the easiest route is pattern recognition. This led to further considerations. Can the AI fact-check? How does it know what is true? How does it match a fact it has picked up against sources it trusts? And how do you identify the themes in the first place?
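
One naive way to picture the fact-matching question: compare each claim against a small, human-curated set of trusted statements and flag anything that resembles none of them. In the sketch below, the trusted statements, the TF-IDF similarity measure and the threshold are all invented for illustration; it shows the general pattern-matching idea, not our actual fact-checker.

```python
# Sketch: naive fact matching against a curated, trusted corpus.
# The trusted statements and the similarity threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

TRUSTED_FACTS = [
    "GDPR came into force in the European Union in May 2018",
    "Amazon scrapped an AI recruiting tool that penalised female applicants",
]

vectoriser = TfidfVectorizer().fit(TRUSTED_FACTS)
fact_matrix = vectoriser.transform(TRUSTED_FACTS)

def check_claim(claim: str, threshold: float = 0.3) -> str:
    """Mark a claim 'supported' if it is close to any trusted statement."""
    similarity = cosine_similarity(vectoriser.transform([claim]), fact_matrix)
    return "supported" if similarity.max() >= threshold else "unverified"

print(check_claim("Amazon dropped a recruiting AI biased against women"))
```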

There’s a common misconception that you can set machine learning a task and it will sort itself out, a bit like using Google search: for example, asking it to seek out the debates happening online about data and society and train itself on those subjects. That simply isn’t possible with today’s technology. It only works when a human being identifies and collates the information and then feeds it to the algorithm. This has been a major lesson for many: the majority of the total time spent on a project goes into getting access to data and organising it so that machine learning can do something with it.
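
A toy example makes that division of labour clear. In the sketch below, the labelled statements stand in for the corpus our researchers gathered by hand; the themes, example sentences and scikit-learn model are all illustrative assumptions. The point is simply that the labelling step comes first and is done by people.

```python
# Sketch: a theme classifier only works after humans have labelled a corpus.
# The example statements and theme labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1 (the slow, human part): researchers collect statements from real
# debates and tag each one with the theme it belongs to.
labelled_corpus = [
    ("Companies should never sell personal data without consent", "privacy"),
    ("Anonymised datasets can still be re-identified", "privacy"),
    ("Algorithms inherit the prejudices of their training data", "bias"),
    ("Recruiting tools have discriminated against women", "bias"),
    ("Automation should augment workers, not replace them", "human_agency"),
    ("Relying on machines erodes our own judgement", "human_agency"),
]
texts, themes = zip(*labelled_corpus)

# Step 2 (the fast, machine part): fit a simple bag-of-words classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, themes)

# At debate time, new utterances can be mapped onto the curated themes.
print(model.predict(["Who audits the data these systems learn from?"]))
```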

As part of our build journey, we undertook a huge amount of research to understand the debates around AI and society, to land on the ones we would use to educate our bot. Among the many topics, here are some we explored:

  • Human and machine – The technology should augment human beings, but there is no shortage of fearmongering from the press and from pressure groups. We discovered that many institutions, governments and organisations are concerned, and that serious, high-quality research is being put into this important issue
  • Abuse of power by commercial bodies – If the expectation is that investment in AI will produce a commercial benefit in the long run, is that good or bad for society? How should it be controlled, and where do regulation and law come in? On the flipside, as people place less and less trust in governments, big business has a de facto responsibility to use its power for good. And this goes beyond ‘greenwashing’: look at Unilever’s Sustainable Living Brands, an excellent example of a business finding growth through doing good
  • Fear over exclusion and bias – Who is training the AI, and do they have the capacity to be unbiased? How do you make AI fair? Amazon had to pull the plug last year on a recruiting tool that was biased against female applicants
  • Loss of privacy – To be truly intelligent, machine learning needs to know everything — so how do you get around that? Is it through anonymising data?
  • Human agency – The more machine learning improves, the more human beings risk being disempowered as they become reliant on the technology

We also looked at solution areas, from human collaboration to ethical policies and democratic safeguards. It’s clear that education will play a central role: everybody needs to be aware of the technology and its potential so that they understand the associated risks. One of the most promising solutions we learned about is developing machine learning collaboratively. Take healthcare, for example: you could optimise machine learning in that field by inviting members of the community to help put in checks and balances that eliminate bias.

When it was time to perform, we learned a lot from our bot. As it followed the debate, it did an interesting job of listening for those themes, and the debaters discovered that when you’re in the heat of a discussion you don’t always realise you’re plugging into multiple meta-themes. We used the speech-to-text capability to transcribe the debate in real time, from which the AI analysed the themes and highlighted the topic areas. We also trained the AI to recognise phrases and words that indicated the speaker’s mood, such as whether they were happy, shocked or angry. It also pointed out any dubious or incorrect facts; the effect was as embarrassing as it was educational for our debaters.
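
For a flavour of that mood-recognition step, here is a deliberately simple sketch that flags a speaker’s mood from indicative words and phrases in the live transcript. The cue lists below are invented examples, not the vocabulary we actually trained on.

```python
# Sketch: flagging a speaker's mood from indicative words and phrases.
# The cue lists are invented examples, not our actual training vocabulary.
import re

MOOD_CUES = {
    "angry":   ["outrageous", "unacceptable", "how dare", "nonsense"],
    "shocked": ["unbelievable", "staggering", "i can't believe"],
    "happy":   ["delighted", "great point", "exactly right", "wonderful"],
}

def detect_moods(utterance: str) -> list[str]:
    """Return every mood whose cue words appear in the utterance."""
    text = utterance.lower()
    return [
        mood
        for mood, cues in MOOD_CUES.items()
        if any(re.search(r"\b" + re.escape(cue) + r"\b", text) for cue in cues)
    ]

print(detect_moods("That is outrageous and, frankly, unbelievable."))
# -> ['angry', 'shocked']
```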

Our research shows that discourse around this technology is moving in the same direction, towards collaboration, which is a reassuring thought. There are plenty of initiatives out there using AI for good, such as Colorintech’s work on inclusion and bias and ProPublica’s investigation of bias in the COMPAS risk-assessment tool. But what we’ve come to understand through projects like ours is that these applications need to offer more solutions and ask the difficult questions, like how we can contribute to society. Only then can we deepen our knowledge and understand our limits.

Jon Wilkins is the chairman of Karmarama, part of Accenture Interactive

The team involved were speakers Hannah Matthews from Karmarama and Sonoo Singh from The Drum, plus Jon Wilkins as moderator. Pete Dolukhanov of Karmarama was technology lead, with front end digital development, interface, apps and UX from the Karmarama Creative Products team, and Kaihaan Jamshidi as strategy lead. AV production was by Kream. Thanks go to Accenture’s The Dock – in particular, Rory Timlin, Antonio Penta and Fernando Lucini – for developing the AI that powered this project.
