Facebook's and Twitter's autoplay functions have helped the social media platforms cozy up to advertisers looking to get more eyeballs on their videos, but this week’s shooting of two journalists in Virginia exposed some inherent problems with the feature.
Because users were automatically served the video posted by shooter Vester Lee Flanagan without warning, questions have been raised about whether the social media giants have a moral obligation to disable the feature when footage that most would consider extremely graphic is being disseminated.
Autoplay largely benefits advertisers since it plays ads automatically, driving up engagement without asking users whether they want to view a video. Facebook rolled out the feature in late 2013 and reported earlier this year that it was serving four billion video views every day, up from one billion per day last September.
Twitter unveiled the feature in June. In a blog post announcing the tool, Twitter said that during autoplay testing, brand and publishing partners saw improved view rates that “resulted in lower cost-per-views for marketers and increased video recall by consumers.”
Yet Flanagan, who went by the name of Bryce Williams on Facebook and Twitter, exploited the function in an unintended way on Wednesday when he posted a video he filmed of the shootings to both of his profiles. Twitter shut down his account mere minutes after he posted the video, and Facebook followed suit.
While both platforms quickly stopped the video from spreading and made it clear that the autoplay function could be disabled, it’s unclear whether their actions will set a precedent or whether they will take further steps to prevent a similar event from happening again.
Jo Farmer, partner at law firm Lewis Silkin, said the only reason Facebook and Twitter bother with autoplay is to attract more viewers and therefore more ad revenue. She noted that the feature becomes problematic in situations such as this one, when viewers get no warning that they’re about to see potentially distressing content, whereas television stations can caution viewers beforehand.
“Instead it just goes out to the people and then if enough people complain about it or if it starts getting flagged as inappropriate, then the platforms react swiftly to take it down,” she said.
Farmer also pointed out that from a legal standpoint, significant change to combat this issue is more likely to come from within the industry rather than the government.
“If brands start saying we don’t like Facebook and Twitter as a platform anymore because there’s too much of this content that gets autoplayed and our customers don’t like it, then it’s going to turn the brands off from using them as a legitimate platform,” she said. “That’s probably where the tension is going to come from if anything’s going to happen.”
The tragic incident also shows that while social media can certainly be a force for good – the Ice Bucket Challenge being a primary example – its bells and whistles can also have dire consequences.
Brett Gary, associate professor of media, culture and communication at NYU, said “there’s nothing good that can come of dissemination of the shooter’s strategic media deployment of his act of mayhem.”
He added: “The conversation might well be how the shooter understood the logic of the media, and could count on the continuously repeated display of his lunatic acts.”
To highlight the potential audience that Flanagan's post may have reached, Facebook has revealed that it now has one billion daily users.