4 ways the UK might tame 'wild west' Facebook and force it to fix its fake news flaws

Facebook is under siege as everything from how it uses data to the spread of fake news is under increasing global scrutiny. This week, a British government committee set up to investigate its practices accused it of acting like a “digital gangster” in a damning report. So where does it go from here?

The UK Digital, Culture, Media and Sport (DCMS) select committee studied Facebook's role in fake news and disinformation for 18 months in the aftermath of the Cambridge Analytica scandal. In a report published yesterday (18 February), it said that Facebook deliberately obstructed its inquiries, with chief executive Mark Zuckerberg said to have “shown contempt towards both the UK Parliament and the ‘International Grand Committee’”.

The report made a number of recommendations for how Facebook and other tech giants could be regulated.

1. More than ‘platforms’

There was a call to reclassify tech giants in response to Facebook's argument that it is a platform and not a media company. This claim has, in the past, diminished its responsibility for content on its site.

These companies “cannot hide behind the claim of being merely a ‘platform’,” the report said.

There was a call for a new category to be created which tightens tech companies’ liabilities, acknowledging the gap between ‘platform’ and ‘publisher’. This approach would see the tech companies "assume legal liability for content identified as harmful after it has been posted by users.”

Currently, regulatory bodies including Ofcom, the Advertising Standards Authority, the Information Commissioner’s Office, the Electoral Commission and the Competition and Markets Authority all have a remit to tackle breaches of content, data and business conduct. However, there was a view that these different groups could better join the dots with a new purpose-built entity funded by a digital levy.

Tamara Littleton, founder and chief executive of The Social Element agency, said that “Facebook has positioned itself as a utility - a platform with no responsibility for the information and news it hosts”. This, she said, is no longer the case.

She said: “Facebook knows so much about us. It knows when we fall in love, when we split up, when we’re suffering from depression. Its algorithms are so sophisticated that it knows these things before we tell anyone, by the pattern of our posting. It has invested in image recognition and content tagging. All this is done for marketing, of course. But that technology could also be the solution in identifying fake news sources, protecting data and avoiding foreign intervention in elections. It’s all possible, with the right resources behind it.”

However, Facebook has failed to act quickly enough and, as a result, “regulation is inevitable… however, the big question for me is whether our lawmakers will understand enough about the technology to regulate sensibly," said Littleton. "And whether Facebook will take the UK seriously enough to comply with the spirit of the law”.

2. Battling misinformation (particularly in politics)

With liability for content established, social networks would have to take greater responsibility for the information within their ecosystems.

Germany has been ahead in this space. After a “failed” period of self-regulation, the Network Enforcement Act came into force in January 2018. Under it, companies have 24 hours to remove hate speech, spurred on by the threat of a fine of up to €20m.

Today, “one in six of Facebook’s moderators now work in Germany, which is practical evidence that legislation can work," the report said.

In France, a controversial law passed in November 2018 granted judges the power to order the removal of 'fake news' articles during election campaigns. Furthermore, Facebook had to disclose the source of political ad money, while the French national broadcasting agency can also suspend TV channels controlled by or under the influence of a foreign state if they “deliberately disseminate false information likely to affect the sincerity of the ballot”.

Under UK proposals, in addition to removing disinformation, networks could be responsible for publicising instances of foreign interference.

To this end, the investigation was of the opinion that “electoral law is not fit for purpose” and should be “changed to reflect changes in campaigning techniques”. This included acknowledging what actually constitutes political advertising, in particular, the use of unpaid campaigns and private groups to propagate campaign ideas.

“There needs to be absolute transparency of online political campaigning, including clear, persistent banners on all paid-for political adverts and videos, indicating the source and the advertiser; a category introduced for digital spending on campaigns; and explicit rules surrounding designated campaigners’ role and responsibilities.”

Additionally: “Paid-for political advertising should be publicly accessible, clear and easily recognisable.”

The Institute of Practitioners in Advertising (IPA) responded to the report, with director general Paul Bainsfair reiterating its calls for an online repository of UK political ads.

“When it comes to political advertising online, the trend reveals increasing spend, increasingly refined targeting and increased automation, all leading to a mushrooming body of microtargeted online political advertising. Trump alone deployed 5.9 million different online executions. By tying a cost to each creative, we hope it will lead to a reduction in the number of dark ads being microtargeted at consumers.”

3. Targeting illegal?

"Senator, we run ads," Zuckerberg famously told Congress in April 2018.

Facebook’s business is built upon advertising using a database of inferred user data - it makes these leaps by tracking users' web habits and studying their moods and 'likes'. This marketing model has been used by political parties to identify supporters using ‘lookalike audience’ ad tools, including by the investigation-laden Vote Leave movement that drove much Brexit discourse.

Facebook's argument is that this information is inferred and therefore does not constitute private user data. The report disagreed, arguing that this data should be as protected under the law as personal information.

It added: “Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the government studies the way in which the protections of privacy law can be expanded... this will ensure that inferences about individuals are treated as importantly as individuals’ personal information.”

As UK information commissioner Elizabeth Denham previously said: “We do not want to use the same model that sells us holidays and shoes and cars to engage with people and voters. People expect more than that.”

The report echoed ICO findings: “If this information is based on assumptions about individuals’ interests and preferences and can be attributed to specific individuals, then it is personal information and the requirements of data protection law apply to it.”

4. A Code of Ethics

The report suggested the creation of a code of ethics for tech companies, similar to Ofcom’s Broadcasting Code, might be a solution to dealing with issues on social media today as well as those likely to crop up in the future.

The report said: “The Code of Ethics should be developed by technical experts and overseen by the independent regulator, in order to set down in writing what is and is not acceptable on social media.”

The code should establish clear legal liability for tech companies, urging them to act against “agreed harmful and illegal content”. They should also have the relevant systems in place to highlight and remove ‘types of harm’.

“Large fines” should be delivered to those who failed to comply with the code, it added.

Samir Patel, chief innovation strategist at Blue State Digital, an agency that worked on President Obama's digital campaigns and with the UK government, said the recommendation for a compulsory code of ethics is a welcome one.

"This is the wild west. Facebook and its top dogs are not only acting like digital gangsters, but they are also behaving like cowboys and bank robbers… Facebook data has been collected, stolen and misused to influence the outcome of elections across the world," Patel added.

“The company has refused to give simple answers to government and is essentially acting as if it is above the law. For a platform that is supposed to be about people, this is unacceptable.”

The suggested code of ethics, he said, would mark the “first time that the duopoly could be held to account by someone outside of their ‘inside circle’”.

“When it comes to the duopoly, we haven’t seen these issues start to impact their revenue yet, but with stricter regulation and greater enforcement, we may start to.”

With the social network planning to bring its chat app ecosystem of Facebook Messenger, Instagram and WhatsApp into the same infrastructure, affecting 2.6bn web users, governments are keen to impose laws and restrictions on the social network which the report outlined has breached “fifteen voluntary codes of practice” since 2008.

The report claimed: “Where we are now is an absolute indictment of a system that has relied far too little on the rule of law.”

In the report, Facebook was a much greater focus than rival Google. One week prior, the government's report into the sustainability of journalism, the Cairncross Review, outlined numerous ways the duopoly could do more to support the media, especially considering they secured 54% of online ad revenue in the UK in 2017.
