Culture secretary Karen Bradley has said the government needs to be "careful" when considering the legal classification of Facebook and Google amid calls for the internet giants to be regulated like publishers.
On Tuesday (10 October) Patricia Hodgson, chairwoman of the media regulator Ofcom, said it was her personal view that social media companies were publishers rather than merely platforms.
But Bradley warned that applying publisher legislation to internet companies could impact civil liberties.
"I am not sure the publisher definition in UK law would necessarily work in the way that people would like it to work. I think it would end up being very restrictive and make the internet not work in the way we want it to work," she said.
Bradley was addressing members of the Digital, Culture, Media and Sport Committee about her department’s new Internet Safety Strategy, launched on Wednesday, which includes a proposed levy on tech companies to fund efforts to tackle online bullying.
“We need to be careful here that what we do is not a sledgehammer to crack a nut – a piece of legislation where we say under UK common law these platforms are now publishers, which could impact on freedom of speech, civil liberties and the ability of people to enjoy the benefits that the internet brings. But we have to do this in a way that doesn’t allow harm," she added.
Bradley's words echo those of Patrick Walker, Facebook’s director of media partnerships for EMEA, who in July said the company is a “very different type of social media company”, and that “old media terms” cannot be applied to what it does.
Currently the two companies are considered ‘information conduits’ rather than editorial gatekeepers, which means they bear minimal responsibility for the content shared on their platforms.
However, because these platforms offer anyone (subject to country-level restrictions) a free means of exchanging content, they have inadvertently facilitated the spread of fake and extremist material online.
While both companies have made moves to tackle the spread of extremist and fake content on their platforms - predominantly through the use of artificial intelligence (AI) - they have repeatedly denied taking on an editorial role when it comes to vetting content. Yet there have been a number of instances where their identity as tech companies has become muddied, including both Facebook and Google's decision to hinder access to white supremacist pages and apps in the wake of the Charlottesville violence.
These inconsistencies - vetting content that violates their 'hate speech' policies while continuing to support Backpage.com despite its association with trafficking - have mounted pressure on Facebook and Google to take responsibility for the content shared on their platforms rather than simply acting as ‘pipes.'