By Rebecca Stewart, Trends Editor

June 6, 2019 | 14 min read

The Drum goes behind the scenes of Facebook's content moderation centre in Barcelona, gleaning some insight into the people who moderate the darkest parts of the web and how the social network is investing in AI to make its platform cleaner for users and brands.

Facebook has long made it clear it doesn’t want to be the editor of the internet. However, amid increasing pressure from users – and the brands that fund it – its hand has been forced to be more transparent about how it creates (and implements) the policies that keep abusive and illegal content from circulating within its walls.

In a rare move, the company opened the doors to its content moderation centre in Barcelona to The Drum and a handful of other press, revealing just how far it has to go in perfecting this policing, as well as a vision of how AI could help it separate the wheat from the chaff.

Fifteen minutes from Barcelona’s famous Las Ramblas, in the sunny El Clot area of the city, tourists and locals weave in and out of shops as loud muzak floods into a courtyard. Above it, a Gherkin-esque tower dominates the skyline.

The silver structure is Torre Glòries, the Jean Nouvel-designed building Facebook chose to house one of its European content moderation hubs in 2018.

Each day, hundreds of young people (a cursory scan of the room suggests few are over 30) flood into window-lined, clinical-white, open-plan offices where they will spend eight hours policing the murkiest parts of the internet. Beheadings, bestiality, stabbings, child pornography, racism, and sexual exploitation are just some of the things that could flicker across their screens, unbeknownst to the tourists who meander just a few meters below, sipping oversized cappuccinos and perusing linen sundresses.

As with other Facebook moderation hubs, the content reviewers in Torre Glòries are part of an outsourced operation. In the US, the likes of Accenture and Cognizant take on similar contracts, but the Barcelona site is managed by a company called Competence Call Center (CCC).

However, there are still the familiar Facebook hallmarks. A series of big blue thumbs (the social network’s ‘like’ motif) are dotted along walls alongside posters you’d see in Facebook HQ, featuring slogans like ‘Even busy bees stop and smell the roses’ and ‘Move fast and break things’ without a smidge of irony.

For confidentiality reasons, the moderators pause their work and Facebook's community standards swiftly appear on their screens as I, an outsider, am shuffled along the room, flanked by a Facebook and CCC representative.

This is one of several Facebook moderation centres in Europe, including an outpost in Berlin which, like Barcelona, was opened as part of a pledge from chief executive Mark Zuckerberg to hire thousands of moderators following a spate of murders and suicides broadcast on Facebook Live in 2017.

In 2018, Cambridge Analytica whistle-blower Christopher Wylie ensured a year of reckoning for Facebook on the data regulation front as politicians finally started to seriously question its role in democracy and watchdogs pursued it with abandon.

In the aftermath of Facebook’s annus horribilis, though, the live-streaming of a terrorist attack in New Zealand’s Christchurch – in which 51 people were murdered – saw public conversation around Facebook shift sharply to how it should police distressing content, as well as misinformation.

The Christchurch video was initially broadcast on Facebook Live, but the social network later revealed that its content moderators had purged a further 1.5m versions of it from the platform. It’s a figure that gives some indication of the scale and speed needed to moderate user-generated content on Facebook.

Australia reacted to the news with a law that could see execs like Zuckerberg put behind bars if they don’t do enough to remove violent or abusive content. The UK is toying with a similar idea. In New Zealand, the privacy commissioner called the company “morally bankrupt pathological liars”.

Following the massacre, Facebook restricted access to Live for users who had previously “broken certain rules” in broadcasting content on Facebook. But the horse had already bolted.

Where politicians saw an opportunity to open up further debate about social media regulation, advertisers – Facebook’s lifeblood – remained largely silent.

Unlike the extremist videos flagged as part of YouTube’s brand safety crisis of 2017, the Christchurch broadcast hadn’t been monetised, meaning there was little chance of their brands appearing directly adjacent to the noxious content.

However, in April this year the World Federation of Advertisers (WFA), which represents some of the biggest brands on the planet, drew a line in the sand and called on members to “think carefully” about funding platforms like Facebook.

It stopped short of asking brands to freeze investment but highlighted the “moral” responsibility of big spenders to hold social media giants to account.

“Marketers must reflect on the extent and terms on which they fund these platforms,” said WFA boss Stephan Loerke in April. “Conversely, the platforms must do more to assuage the growing number of advertiser concerns.”

For all this, brands have yet to put their money where their mouths are (Facebook’s ad revenue topped $14.9bn in the first quarter of the year), even though the threat still looms.

This rare glimpse of Facebook’s biggest content moderation centre in Europe shows both the commitment the company has made to getting its house in order for brands (and users) but also the momentous challenge it faces in policing human behaviour online.

Deciding what actually makes it on to the news feed

Unlike its reviewers, Facebook’s policy team – which write the community guidelines enforced by moderators – are housed internally across 10 of the company’s global offices.

Ex-lawyers, law enforcers and politicians work alongside experts in the fields of human rights, charities and communications to devise the blueprint – which is updated constantly. Before a new policy is implemented, these staff will consult with relevant outsider groups. For instance, a charity like NSPCC might be invited to counsel on a child safety issue.

The complexity of navigating this space as a business that refuses to define itself as a media company, or to work to a set of editorial values, is something that has long plagued Facebook.

It’s faced criticism for the way its algorithms allow for the spread of toxic content and been hauled over the coals for showing bias and being over-zealous.

The perceived failures of Facebook’s content policies have been highlighted by both liberals, horrified at Zuckerberg’s defence of allowing Holocaust denial on the news feed, and right-wingers who believe Alex Jones should be allowed to use Facebook as a soapbox.

The social network has also found itself under fire for censoring post-mastectomy photos from breast cancer survivors and blocking Nick Ut’s Pulitzer Prize-winning ‘Napalm Girl’ image on the grounds of child nudity – decisions it U-turned on. It has since baked updates into its policies designed to prevent such mistakes from happening again.

In other words, Facebook can’t really win.

Its ever-growing list of community standards covers everything from self-harm to bullying, nudity, commercial spam, graphic violence and cruel or insensitive humour. None of which advertisers want to find themselves next to.

Sitting on the team that writes the rules is Paula Garuz Naval, a safety product policy manager at Facebook. She says policy changes are fed to moderators once a month, and that they need to be “principled, operable and explicit” before going live.

‘It’s not a job for everyone’

From Barcelona, some 800 staff, fluent in more than 50 languages between them, are responsible for moderating markets around the globe.

After a tour of the space, which features a cushion-lined ‘wellness’ room on each floor, some of the reviewers spoke to The Drum under the condition of anonymity.

Under the gaze of senior comms representatives, the group of young Europeans detail how they were recruited to the job and casually say that they haven’t been surprised by the type of imagery they see each day.

“If you look at what’s going on in the world, Facebook is just like a mirror – reflecting things back,” says one.

Reviewers are assigned to their region based on their native language and market knowledge – which Facebook claims gives them a better understanding of context when it comes to matters like politics, hate speech and humour.

Some 1.52 billion people use Facebook every day, and its now 15,000-strong army of reviewers across 20 delivery centres globally – in APAC, EMEA and North America – checks over 2m posts each day. Since October, they’ve deleted 3.4bn fake accounts and 7.3m hate speech posts with some help from Facebook’s AI detection tools.

Mark Davidson, director of vendor partner management at Facebook, is responsible for overseeing the operations of some of the centres, including Barcelona. His remit is to ensure Facebook’s partners, in this case CCC, find the right people to moderate this type of content.

“This job isn’t for everyone,” he admits. “The recruitment process is really important in terms of trying to make sure we get the best fit every time.”

He explains that once candidates have progressed through the interview stage (which also features psychological tests) they then have to complete an intense two-week training period where they learn about Facebook’s community guidelines and make a judgement call on real-world examples of questionable content.

“We monitor all of the people moving through that training programme to see how they’re responding to the content they’re seeing and look out for early signs of people who are struggling.”

CCC claims that since the Barcelona centre opened its doors last year, not a single new hire has dropped out of training.

The median pay at Facebook itself is $240,000 a year, but the outsourced moderators at Facebook’s EMEA centres are paid in euros, averaging the equivalent of between $28,000 and $33,000. Reviewers globally are also offered private health cover.

‘If we didn’t moderate it, it would be disgusting’

To ensure consistency, the moderators are assessed every day by ‘quality auditors’ who appraise the decisions they take. The auditors are audited too, says Davidson. If reviewers and auditors disagree on whether a piece of content should be removed, a process called a ‘dispute’ begins and the original arbitrator has to argue their case.

Much has been written about the secret lives of Facebook moderators. Last year, The Verge detailed how some current and former US employees, shackled to NDAs and deeply impacted by the content they’d consumed, were developing mental health issues including post-traumatic stress disorder (PTSD).

Testimonies made to The Guardian in 2017 by moderators painted a picture of staff who were “underpaid and undervalued”. And earlier this year the business faced a lawsuit from two former reviewers who alleged they had suffered psychological trauma and symptoms of PTSD caused by reviewing violent images.

Davidson insists Facebook and its contractors invest “heavily” in the “wellbeing and resilience” of their reviewers. He reels off how Facebook ensures the moderation centres themselves are well lit, with dedicated spaces should reviewers need to step away from their desks. Staff are also offered access to mental health support and counselling, either in a group setting or one-on-one.

“We also have five psychologists who sit internally within Facebook building the frameworks that we use,” Davidson adds.

When the group of moderators, cherry-picked to answer journalists’ questions, are asked how they’d respond to negative headlines about their job, they largely shrug it off. There is no one present who moderates a market like the Middle East, or South America, where more violent content is likely to be flagged. Because they moderate for European countries, the content, they say, isn’t as harrowing.

“The misinterpretation is that all we’re watching every day is ‘war, war, war’ and that we must be heartless psychopaths sitting in dark rooms and watching things, but it’s not like that,” says one.

Another insists: “I find a lot of meaning in my work. We know we can handle the content and that’s why we’re here. We can actually respond to real-life situations and make a difference.”

When asked what Facebook would look like without them, though, they are less pragmatic. “Like World War Three,” exclaims one young man.

Another chimes in: “It would be disgusting, the filtering is needed.”

Could AI protect Facebook’s advertising business model?

When Facebook released its Community Standards Enforcement report last month, it detailed the wealth of violating content removed by automation before people even had to report it.

Some 99.8% of fake accounts were expunged in this way, as were 99.9% of spam and 99.2% of child nudity and sexual exploitation material.

Its AI is less well trained in removing hate speech, which was taken down by a machine in 65.4% of cases. Algorithms perform worst when it comes to detecting bullying and harassment, where automated detection accounted for just 14.1% of removals.

Since 2015, Facebook’s engineers have developed sophisticated software that can identify keywords and images that flout policies.

More recently, the company has worked on multi-modal understanding techniques, which allow computers to identify non-obvious hallmarks of offending content – recognising images of drugs by their packaging, for instance, instead of only being able to discern a picture of a marijuana plant or joint.
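
Facebook hasn’t published how these systems are built, but the basic idea of multi-modal classification – fusing signals from an image and its accompanying text so a post can be flagged even when neither is conclusive on its own – can be sketched simply. The example below is a minimal, illustrative PyTorch sketch: the pre-computed embeddings, layer sizes and four policy categories are all assumptions for illustration, not Facebook’s actual architecture.

```python
# Minimal sketch of a multi-modal policy classifier (illustrative only).
# Assumes image and text embeddings are pre-computed by separate encoders;
# the fusion layers learn to combine both signals, so a post can be flagged
# even when neither the picture nor the caption is conclusive on its own.
import torch
import torch.nn as nn

class MultiModalPolicyClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=300, hidden_dim=256, num_policies=4):
        super().__init__()
        # Project each modality into a shared space before fusing.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_policies),  # one score per policy area
        )

    def forward(self, image_embedding, text_embedding):
        # Concatenate the projected image and text representations.
        fused = torch.cat(
            [self.image_proj(image_embedding), self.text_proj(text_embedding)], dim=-1
        )
        # Sigmoid gives an independent violation probability per policy area.
        return torch.sigmoid(self.classifier(fused))

# Usage with dummy embeddings for a single post:
model = MultiModalPolicyClassifier()
image_vec = torch.randn(1, 512)      # e.g. output of an image encoder
text_vec = torch.randn(1, 300)       # e.g. averaged word embeddings of the caption
scores = model(image_vec, text_vec)  # shape (1, 4): per-policy violation scores
```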

Simon Cross, product manager, community integrity at Facebook – who leads the team behind this AI – says the end goal would be “to do this all automatically and not have to rely on people reporting stuff”.

“Ideally we'd enforce our community standards completely accurately so much so that there would be nothing violating for people to report,” he says. “But it’s not a state we will likely ever reach.”

Zuckerberg stressed last month that the investment Facebook had made to clean up content on its platforms had come at the expense of business growth.

The Facebook boss also outlined, however, that he’d expect the progress to eventually be helpful in giving ad revenues a bump – a prediction that’s likely to prove right in an industry where advertisers and their media agencies are seeking out greater brand safety assurances from tech giants.

Though the bulk of advertisers have yet to vote with their wallets and pull spend from Facebook over content moderation concerns (be that overzealous censorship or the quick spread of disturbing content), what is clear is that Facebook is on an expensive journey to convince users, brands and journalists that it’s trying to do better.

Its content moderation setup certainly isn’t perfect now, and it might never be. And while it’s at pains to respond to criticism around how subcontractors treat the humans eyeballing the very worst of the internet, Facebook might pose a fresh question for brands 10 years down the line: what is the human cost of keeping the internet clean for them?
