AI marketing: how seriously should you take artificial intelligence?
Recent years have seen the rise of artificial intelligence (AI) adoption in the marketing and media industries. While 'AI' is often a buzzword used by marketers to make their work sound more exciting, the real benefits to brands center on using machines to carry out deep learning and make humans' jobs easier.
AI is certainly growing in prominence, with up to 85% of UK businesses expected to invest in the field by 2020. In addition, studies have shown the gradual uptake of voice assistants in the home – 23-32% of households in the US and 18% of households in the UK have at least one, the most popular being Amazon’s Alexa or Google Assistant. Moreover, Apple claimed in 2018 that a staggering 500 million of its users now regularly use Siri, its pre-installed voice assistant.
The Drum examines the ways in which AI can be, and has been, used to benefit brands.
Though many regard AI as being in its infancy, the technology has already come to simplify many aspects of our day-to-day lives, predictive text among them. Similarly, AI can already be applied to business processes to alleviate much menial labor, particularly data-driven analysis. Less investment is then required in these roles, meaning more budget can be dedicated to the more creative, lateral areas of the business. This is a feeling echoed by The Trade Desk’s Anna Forbes, who told The Drum: “No longer do we have to waste time on boring processes and hard number-crunching - we can now focus on being more strategic and creative.”
We have already seen industry creatives unlock the power of AI to produce notable feats of advertising. Examples include Sony’s 2016 ‘Daddy’s Car’ ad, a Frankenstein effort to construct a new Beatles song from the band’s back catalog, and The Times’ ‘un-silencing’ of JFK. In both cases, creatives leveraged AI to revive past icons and construct meaningful ad experiences that both shock and resonate emotionally with audiences.
While the merits of AI’s place in the creative industries have been widely questioned, the technology is gaining traction every year. In 2017 the digital director of Coca-Cola announced that the Fortune 500 company would trial the use of AI in content creation, rather than relying solely on human marketing teams. In the same year, the talents of AI were pushed further still when the creative director of McCann Millennials was tasked with creating a music video for Japanese Kawaii band Magical Punchline before the band had even written the music. To do this, the AI analyzed a number of music-based TV ads and created the concept it deemed would work best for the band. After the music video was realized, the band went on to write four songs, all of which were subsequently released with the same music video.
Similarly, Lexus called on AI-powered creative direction when putting together an ad for its ES series at the close of 2018. A combined effort between Lexus and IBM, the ad draws on the numerous AI features built into the ES models. It was directed by Oscar-winner Kevin Macdonald and, while visually stunning, its narrative does leave the audience somewhat perplexed. Even so, it signals a huge feat in ad production by an AI.
We often discuss AI as a future focus for many advertising agencies, when the truth is that it is already a present aspect of marketing. Chatbots and targeted advertising are just two applications of AI that have come to define modern advertising strategy.
Media organizations have readily embraced AI too, from Quartz using it to analyze Lyft’s risk factors in its IPO filing to the Press Association trialing it to take the heavy lifting out of everyday stories as part of its RADAR (Reporters and Data and Robots) project.
At the Associated Press, AI has been used to examine videos sent in by eyewitnesses to events. This use of AI, in development since 2017, has allowed journalists to avoid spending hours poring over user-generated content to verify its authenticity, and instead release news to the public more quickly. Given the immediacy of social media and the proliferation of “fake news” in recent years, this tool is particularly effective in safeguarding the legitimacy of AP’s reporting and differentiating it from other news agencies.
The development of AI technology has the potential to have a profound impact on careers. One of the most ambitious uses of it so far has been seen at holding company Publicis Groupe, which has developed the AI platform Marcel. One of the most interesting ambitions for Marcel is for it to be used on client briefs, where the tech can find the most appropriate teams from some 80,000 employees. Arthur Sadoun, chief executive of Publicis Groupe, has championed funding for the technology, citing the platform’s detached, unemotional matching of employees with client requirements as an unbiased means of finding the right person for the right job while keeping the client at the heart of the company’s work.
More broadly, AI may eventually mean the end of some roles, with machines taking on the less skilled labor within a business – but this tech still needs to be managed. Alongside creating a new careers sector, it presents an opportunity for current employees to upskill: with more menial labor handled by AI, staff can be retrained to take on more complex roles within the business. Take the banking sector, which has come to rely more heavily on machine learning in recent years: its employment rate has not been negatively impacted. Despite the efficacy of AI, there are a number of issues that only human intuition can handle, and this is an opportunity to upskill workers to deal with these more cognitive challenges.
One ethical objection raised against the adoption of AI is educational elitism. The argument runs that a high level of education isn’t attainable for everyone, and that by taking away the opportunities presented by unskilled roles, AI could leave many people unemployed and without the social mobility required to pursue a career in the field.
Despite these concerns, AI also has the ability to make a positive impact on people’s personal lives. One example is the news that Instagram filters could be used to identify those suffering from low moods or depression: the AI analyzes facial expressions in conjunction with the filter selected by the user and can estimate the emotional and mental state of the individual. While still somewhat rudimentary, this signals the untapped potential of social media platforms to identify and help combat personal issues. It is especially relevant given the series of incidents born of social media’s irresponsible sharing of damaging and harmful content. This development is an opportunity for social media to reposition itself on the frontline of tackling the personal problems experienced by its users, rather than exacerbating them.
More broadly, people are becoming increasingly excited by the use of AI to improve small aspects of everyday life. A recent report in New Scientist has shown how AI can be leveraged to identify potential inequalities in areas where there is little statistical data. Similarly, medical professionals are looking to AI as a means of improving the health service: its unrivalled ability to quickly sift through patient records could massively cut down referral times for doctors and allow medical problems to be tackled more efficiently.
Despite the bright future promised by businesses’ significant investments in AI, others are not quite so enthusiastic. In separate interviews with AI experts Jaan Tallinn and Nick Bostrom, The Guardian reported the general unease and dystopian perspective of both scientists. Tallinn and Bostrom have founded institutions – the Centre for the Study of Existential Risk and the Future of Humanity Institute respectively – which predict and prepare for global threats to human existence, including the rise of AI. Although these cautionary institutions may seem overdramatic, there are already signs of how AI can go horribly wrong.
Consider the unfortunate fate of Tay, Microsoft’s Twitter chatbot, which within 24 hours had been corrupted by users into spouting hate speech and promoting right-wing propaganda. Dismissed by many as an immature joke, the incident does prompt questions about the responsibility of humans in developing this highly specialized technology, and whether we can be trusted to do so. A machine programmed by humans can be imbued with our fallibilities.
Moreover, there is very real evidence that the lack of diversity in the teams developing AI technology has significantly impaired the technology’s ability to recognize women and people of color. With evidence such as this, it is necessary to question what biases could be transferred into these machines.