Prime Minister Theresa May reiterated her call for an end to the “safe spaces” that internet companies provide to terrorists in a statement made today (4 June) in the wake of the attacks at London Bridge and nearby Borough Market last night.
According to the latest update from the Metropolitan Police, seven people have died and 48 are injured after a van drove into pedestrians on London Bridge at around 10pm. The attackers then took to the streets – where a number of people were subsequently stabbed – before being shot and killed by armed police.
In a frank speech outside Downing Street, May condemned the attackers and their “evil” ideology “that is a perversion of Islam” and said “things need to change” to counter the terrorist threat.
She went on to call out the “internet companies” which have provided a “safe space” for terrorism to breed.
“We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet and the big companies that provide internet-based services provide,” she said.
“We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorism planning. And we need to do everything we can at home to reduce the risk of extremism online. Third, while we need to deprive extremists of their safe spaces online, we must not forget about the safe spaces that exist in the real world.”
Following the speech, home secretary Amber Rudd appeared on ITV’s Peston on Sunday, where she clarified that the prime minister had announced an international forum to press internet companies to address radicalisation online.
“We want them to focus on two things,” Rudd said. “Taking down material that radicalises people online; and stopping people exploiting end-to-end encryption.”
Rudd referred to May’s meeting with G7 leaders last month, where she urged them to unite in forcing internet companies like Google, Facebook and Twitter to do more to detect and suppress extremist content online, as well as to actively block and report individuals if there is evidence of imminent harm.
That request came after the Manchester Arena attack, in which 22 people died.
Google, Facebook and Twitter had not returned The Drum’s request for comment at the time of writing.
Update: The Drum received the following statements from Google, Facebook and Twitter.
A Google spokesperson said: “Our thoughts are with the victims of this shocking attack, and with the families of those caught up in it. We are committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online.

“We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges.”
Facebook's director of policy, Simon Milner, said: “We condemn the attacks that took place in London on Saturday night and our thoughts are with the families of the victims and those who are injured. Facebook’s Safety Check was activated by the local community last night. We hope the people in the area found the tool a helpful way to let their friends and family know they are okay.
“We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it – and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement. Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.”
Twitter’s UK head of public policy, Nick Pickles, said: “Terrorist content has no place on Twitter. We continue to expand the use of technology as part of a systematic approach to removing this type of content. We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society and academia.”