The UK Government has been forced to deny claims that it is behind a bot network imitating NHS staff on Twitter.
Although it is yet to release an official statement, the Department of Health and Social Care has been replying directly to accusations on Twitter, calling the claims “categorically false” and condemning users for sharing disinformation as it “undermines the national effort against coronavirus.”
It took the opportunity to advise Twitter users to adopt the 'SHARE checklist' in order to stop the spread of harmful content and the sharing of unsubstantiated claims online.
These claims are categorically false.
To share disinformation of this kind undermines the national effort against coronavirus.
Before anyone shares unsubstantiated claims online, use the SHARE checklist to help stop the spread of harmful content:
— Department of Health and Social Care (@DHSCgovuk) April 20, 2020
The allegation traces back to a Twitter account held by a member of Far-Right Watch, an anti-nationalist political group. The account holder took to Twitter last night (20 April), posting: "Regarding those 128 fake #NHS Staff accounts posting for 'Herd Mentality' and support of the Govt that were set up by @DHSCgovuk or their marketing agency ... Posts were sent using Hootsuite, a mass-posting tool. Account registered to 1 person with 4 assigned contributors." The tweet has since racked up 25.5K likes and 20.1K retweets.
While the account holder claims to have identified a UK government employee behind the '128 fake NHS accounts,' they have yet to provide evidence that this is the case.
Other Twitter accounts have argued that the accounts are more likely trolls than bots.
And let's have a look at some of the tweets from the account in question. It's pretty obviously an incredibly unfunny right-wing troll + in my opinion that's all it is.
If you wanted to impersonate an NHS worker and be taken seriously you wouldn't be posting stuff like this pic.twitter.com/eaPODUDC1I
— Jimmy (@JimmySecUK) April 20, 2020
Bots – automated accounts that publish content and infiltrate online communities to try to influence online debates – exist on all the major tech platforms.
Last year, Twitter suspended over 5,000 pro-Trump accounts tied to a network denouncing the Mueller Report as a ‘RussiaGate hoax.’
During the 2019 general election, the Conservative party was accused of using bots to sway votes after thousands of nearly identical messages, claiming 'I support Boris 100%' and 'Brilliant,' were posted online.
But an investigation by a BBC journalist found that the messages of support for the prime minister were posted by pranksters mimicking a bot-like response. The growing prevalence of such bot-mimicking comments has made genuine automated activity harder to spot.
Outside the political realm, Amazon was forced last year to defend its army of 'fulfillment centre ambassadors' – a group of warehouse employees who were paid to write positively about Amazon's working conditions.
While the general public had been aware of the 'ambassadors' for over a year, the accounts came under scrutiny after Twitter users pointed out the use of 'robotic' or 'scripted' language as evidence that employees were "paid to lie."
Persistent efforts to manipulate online debate through automation and deception, whether real or imitated, illustrate the obstacles social platforms face in policing their networks.