As media companies across the globe experiment with artificial intelligence (AI) to relieve pressure on the newsroom, there are fears robot reporters will displace quality content creators. However, without the emotional intelligence necessary for engaging storytelling, AI is unlikely to replace quality writers.
The move towards robot reporting
The use of AI in journalism isn’t an entirely new phenomenon; it is regularly used to automate monotonous tasks and free up reporters’ time. The Associated Press uses AI to automate corporate earnings reporting, freeing up 20% of the time dedicated to this task while boosting the volume of stories from 300 to 4,400 per quarter. As far back as the 2016 Rio Olympics, The Washington Post used its AI-based Heliograf technology to publish real-time reports on results and medal counts, allowing reporters to focus on the athletes' stories. For these data-based tasks, AI may even do a better job than journalists. Algorithms don’t get bored or make mistakes when they haven’t had enough coffee.
However, the use of AI in the newsroom is accelerating and becoming more editorial in nature. It is scanning social networks and other media sources to identify breaking news so journalists can respond quickly. It is trawling through mounds of information to suggest interesting angles for reporters to pursue, with Quartz’s AI Studio recently generating an article about Lyft’s unique risk factors after analysing IPO and financial filings. It is delivering localised fact-based news to regional titles, with the Press Association’s Radar platform expected to produce tens of thousands of stories a month jointly written by journalists and AI.
As the use of AI to generate automated content becomes more prevalent, so does the possibility that robots could eventually replace journalists. In fact, an unsupervised AI writer built by OpenAI can generate text so coherent and convincing that the technology was initially withheld for fear it could be used to mass-produce fake news.
There will always be a need for the human touch
While AI may be able to produce content that is factually accurate and even logical, it is still a world away from authentic, meaningful content written by passionate, enthusiastic, knowledgeable writers. AI-generated content will never truly engage readers or speak to them on a deeper level, because robots lack the human emotions and characteristics that feed great content, from humour and curiosity to frustration and reflection.
Journalists and writers share similar experiences with content consumers – whether these are work-related occurrences, personal hobbies or life-stage interests – and this empathy allows their stories to resonate with readers. Technology will struggle to achieve this depth, as it is a long way from having the human experience that produces the emotion behind a story. Great content is not built only on templates and algorithms, but on instinct, opinions and talent.
AI can provide valuable tools to help content creators do what they do better. It can help writers quickly and efficiently analyse masses of data to identify pertinent information. But machines must work with writers rather than replace them, to avoid eroding content quality. Reuters understands this and is aiming to create a ‘cybernetic’ newsroom, using its in-house AI tool to marry “the best of machine capability and human judgement to drive better journalism, rather than asking one to be a second-rate version of the other.” The goal isn’t to get AI to write stories, but to generate data insights that answer journalists’ questions and then present those insights for human evaluation and reporting.
AI can enhance journalism, not replace it
Interest in using AI to augment the work of journalists – and in tackling the problems AI poses to news and content – is high across the globe. A recent AI and News Open Challenge hosted by the Ethics and Governance of AI Initiative received more than 500 applications. Applicants were invited to submit experimental approaches to challenges created by the intersection of AI and news across four specific issues: empowering journalism, reimagining AI and news, stopping bad actors, and governing AI platforms. The contest ultimately funded seven new initiatives, which will join existing AI solutions on the market. These include Factmata, a human-focused machine learning platform that identifies fake or spoof content, politically extreme views and hate speech, and the Associated Press’s Verify program, which analyses user-generated content.
The use of AI in the newsroom is an unstoppable trend. It offers real benefits: automating mundane tasks, mining data for unique angles, and helping to prevent the spread of fake news. That said, it’s my opinion that quality content will always need passion, enthusiasm, and real-life experience – qualities well beyond the capabilities of robot reporters.