Updated: how tech platforms are responding to Russian disinformation around Ukraine
Platforms have previously been criticized for their response to Covid-related misinformation
Over the course of the past few days, a number of tech platforms have updated their responses to the invasion of Ukraine. For the most part, that has taken the form of effective blocks on the ability of Russian media and businesses to monetize on those platforms at all.
On Thursday March 3, Google announced it had stopped selling online advertising in Russia – a ban that covers search, YouTube and outside publishing partners. It is an expansion of its previous efforts, which amounted to preventing Russian state-owned media from buying and selling ads across its platforms. In light of the ongoing invasion, however, the company has taken further steps.
In a statement Google said: “In light of the extraordinary circumstances, we’re pausing Google ads in Russia. The situation is evolving quickly, and we will continue to share updates when appropriate.”
Earlier in the week both Twitter and Snap Inc announced similar measures, restricting the ability of companies within Russia to monetize on those platforms. Twitter had previously banned state-owned media from purchasing ads in 2019; the latest move is a recognition that ad networks are being used as a workaround for the spread of misinformation.
As of Sunday February 27, Google and its most notable subsidiary platforms have joined the ranks of organizations imposing financial strictures on Russian state media. It announced that it was pausing the ability of Russian media outlets to monetize through advertising across its platforms, including YouTube. A Google spokesperson told Reuters that due to “extraordinary circumstances” it was “pausing a number of channels’ ability to monetize on YouTube.”
Meta has also taken the step of preventing Russian media from buying ad space or monetizing content across the Facebook ecosystem. Its head of security policy, Nathaniel Gleicher, stated: “We are now prohibiting Russian state media from running ads or monetizing on our platform anywhere in the world. These changes have already begun rolling out and will continue into the weekend.”
Original story: The Drum asked the major tech platforms what they’re doing to stop misinformation following Russia’s invasion of Ukraine. Here’s what they said.
Online disinformation has erupted following Russia’s invasion of Ukraine. Numerous fact-checking teams, both on-the-ground and international, have warned of the dangers of spreading deliberate and inadvertent misinformation on social networks, particularly from unverified sources.
Within Russia information is tightly controlled, with Vladimir Putin’s government forbidding the dissemination of any information about the invasion that does not originate from the government itself. That, combined with the relative freedom of Ukraine’s citizens to film and share videos of the invasion, has created an ecosystem in which disinformation is rapidly escalating and being disseminated across social media.
The situation is exacerbated by the presence of state-affiliated channels such as Russia Today (RT), which have a legal broadcasting licence in many countries outside of Russia but are widely known to broadcast pro-Russia propaganda. The UK’s culture secretary Nadine Dorries has written to Ofcom asking it to look into RT’s role specifically in disseminating disinformation.
The counterargument has been made that, since RT’s footprint in the UK is relatively small, silencing it would provoke a larger response from Russia. That could very well include the forced shutdown of BBC activities in the country, which in turn could take accurate information out of the ecosystem. A number of bodies yesterday wrote to the government requesting that the BBC’s funding be maintained, especially given that possibility.
Organizations including Poynter have launched or updated resources to help the public spot misinformation across social media, but the lion’s share of the responsibility for weeding out misinformation at its source lies with the social platforms themselves.
It follows much criticism of the platforms’ inability to effectively counter misinformation about the Covid pandemic.
YouTube and Facebook were singled out as the key vectors for misinformation in an IPG study last year, with Joshua Lowcock, global chief brand safety officer at Mediabrands network agency UM Worldwide, stating: “While some platforms have policies on disinformation and misinformation, they are often vague or inconsistent, opening the door to bad actors exploiting platforms in a way that causes real-world harm to society and brands.”
YouTube did not provide The Drum with a comment when asked what steps it is taking to limit misinformation about the Ukraine invasion on the platform.
It did, however, note that it is applying its pre-existing misinformation detection strategies, such as prominently surfacing authoritative news content for topics related to Russia and Ukraine, with its top news shelf highlighting videos from authoritative news sources. It also noted that, for RT specifically, it is using information panels beneath videos to provide publisher context, highlighting that RT is a state-affiliated broadcaster.
Facebook and Instagram owner Meta told The Drum: “Our thoughts are with everyone affected by the escalating military conflict in Ukraine. We have established a Special Operations Center to respond to activity across our platform in real-time. It is staffed by experts from across the company, including native speakers, to allow us to closely monitor the situation so we can remove content that violates our Community Standards faster. We also launched a new feature in Ukraine that allows people to lock their profile to provide an extra layer of privacy and security protection over their information.”
Twitter has a curated Moment providing up-to-date information, and stated: “Twitter’s top priority is keeping people safe, and we have longstanding efforts to improve the safety of our service. As we do around major global events, our safety and integrity teams are monitoring for potential risks associated with the conflict to protect the health of the service, including identifying and disrupting attempts to amplify false and misleading information and to advance the speed and scale of our enforcement.
“We’re proactively monitoring for emerging narratives that are violative of the Twitter Rules, including our synthetic and manipulated media policy and platform manipulation policy, as the situation develops. We remain vigilant and will continue to closely monitor the situation on the ground.”
Shout out to Twitter for upping its efforts to remove misinformation and old videos being passed off as current events in the Ukraine. It's time the social networks step up to battle bots, trolls, fake news and misinformation.
— Will Guyatt (@willguyatt) February 24, 2022
A LinkedIn spokesperson said: “Like everyone else, we’re watching the developments in Ukraine, and our teams at LinkedIn are focused on keeping members safe and informed. Our team of global editors are also keeping members updated with news from trusted sources. Our safety teams are closely monitoring conversations on the platform and we’ll take action on content that doesn’t follow our Professional Community Policies. We also encourage members to report content that might violate our Professional Community Policies.”
A Snap spokesperson said: “We are shocked and saddened by the events in Ukraine. We do not allow misinformation on Snapchat. The app has actually been designed to make it hard for misinformation to spread. We limit the size of group chats and snaps disappear. Unlike traditional social platforms, we don’t feature an open, unvetted newsfeed and the content on the public parts of the app - Discover and Spotlight - only host pre-moderated content. If we find misinformation, we remove it immediately.”