Singapore’s parliament may have passed the Protection from Online Falsehoods and Manipulation Act (POFMA), but the debate around its possible implications rages on.
The act, more commonly referred to as the anti-fake news law, compels online media companies to correct or remove any content the government determines is false. Criminal charges may also be brought where the false information is knowingly propagated with malicious intent, with penalties including lengthy prison terms and hefty fines.
Journalists, human rights groups and tech companies fear the new regulation could be used as a tool to suppress freedom of speech. They argue it gives the government too much power to decide what is true and what is false and discourages the expression of any viewpoint or sharing of any information that can’t be backed up with hard evidence. The government, on the other hand, is offering assurances the law won’t impact the average citizen or freedom of speech but is intended to support democracy by tackling misinformation propagated by bots, trolls, and fake accounts. It states the act is designed purely to stop the deliberate circulation of falsehoods online, which are being used to divide society and spread hate.
A worthy cause but a vague and contentious solution
The new regulation presents something of a conundrum. Any activity that tackles fake news and contributes to making the internet a more open and transparent environment should be supported. But regional regulations that are equivalent to censorship are not necessarily the right approach to achieve this transparency. POFMA’s definition of fake news is vague and open to interpretation as well as potential misuse, stating:
“A statement of fact is a statement which a reasonable person seeing, hearing or otherwise perceiving it would consider being a representation of fact. A statement is false if it is false or misleading, whether wholly or in part, and whether on its own or in the context in which it appears.”
Corrections are permitted under the act, which is important in a media landscape where the occasional genuine mistake in reporting is inevitable. However, there appear to be no clear guidelines as to when correction directions should be used instead of content removal or blocking.
The world wide web is no longer accessible worldwide
Sir Tim Berners-Lee intended the world wide web to be “an open platform that would allow everyone, everywhere to share information, access opportunities, and collaborate across geographic and cultural boundaries.” But fast forward thirty years and the internet no longer feels like an open book. It is increasingly split into access zones, governed by a patchwork of regional regulations.
Last year the EU’s General Data Protection Regulation (GDPR) caused thousands of US-based websites to block EU citizens from accessing content for fear of breaching the rules. At the same time, new portability regulations ensured EU citizens had increased access to subscription content they had paid for, but only within EU countries. YouTube is warning it may have to block access to certain video content under the EU’s copyright directive, Article 13, which holds media companies responsible for infringements on their platforms. And the upcoming California Consumer Privacy Act (CCPA) in the US will mean different rules for different states, potentially resulting in further access restrictions across the country.
Tackling fake news needs a sensible regulatory approach
In its current form, POFMA could be yet another regional restriction that limits free access to information on the world wide web. To tackle fake news, what is needed instead is a sensible regulatory approach, applied across the internet, with clear, transparent and universally agreed-upon criteria for classifying any piece of information as fake. Instead of blocking or removing content, which feels too close to censorship, there could be a means of signposting content identified as potentially false or misleading. In much the same way native advertising is clearly labelled as such, and Wikipedia flags when an article requires fact-checking, this type of approach informs users and allows them to make up their own minds.
To quote the web’s inventor once more, “the web is for everyone and collectively we hold the power to change it.” This change won’t come from distinct regions creating their own laws and blocking access to content, but from a sensible and proportional regulatory approach, with a clear and transparent definition of fake news that can be applied at a global level.
Nickolas Rekeda is CMO at MGID.