Facebook has hit back at The Wall Street Journal’s coverage of the ‘Facebook Files’, specifically its implication that the company has known its Instagram platform has negative effects on young women’s mental health.
The WSJ had contended that Facebook knew its photo-sharing social app contributes to low self-worth and mental health issues among teenage girls, but chose to bury that information rather than address the problem.
The WSJ is currently publishing a series of articles and analysis based on documents leaked from within Facebook. Among the revelations are reports that Facebook chose to prioritize on-platform engagement over combating misinformation, that it bent its own rules to allow celebrities to continue posting harmful content across its platforms, and that it turned a blind eye to human trafficking in countries outside the US.
However, the issue that has grabbed the most attention due to its emotive nature is the contention that Facebook and Instagram knowingly allowed harm to happen to teenage girls’ mental health.
The central claim is that 32% of teenage girls surveyed said that when they felt bad about their bodies, Instagram made them feel worse. The WSJ alleges that as a result of an algorithm designed to create more engagement, Instagram had created a feedback loop in which vulnerable teenagers saw more of the content that negatively impacted their mental health.
In response Facebook has issued a statement denying that Instagram is “toxic” for young women, and arguing that in 11 of the 12 cases it examined “teenage girls who said they struggled with those difficult issues also said that Instagram made them better rather than worse”.
The statement takes specific exception to the idea that Instagram is on the whole bad for mental health, claiming that its research demonstrates that it is simply on ‘body issues’ that it negatively impacts more teenagers than it does positively: “While the headline in the internal slide does not explicitly state it, the research shows one in three of those teenage girls who told us they were experiencing body image issues reported that using Instagram made them feel worse – not one in three of all teenage girls.”
Despite the rebuttal, Facebook announced today that it is pausing the launch of its ‘Instagram for kids’ product. The app, which was designed for children aged 10-12, will still roll out at a later date, according to the company.
The pitfalls of self-regulation
Facebook also claims that 22% of the girls surveyed said that using Instagram made them feel better about their body image issues, and that 45.5% said it made them feel neither better nor worse. It should be noted, however, that in the same statement Facebook says that “some of” the research was based on just 40 teenagers – a sample size that undercuts its own arguments just as much as those of The WSJ.
Crucially, The WSJ reports that “13% of British users and 6% of American users traced the desire to kill themselves to Instagram”. Facebook counters that just 1% of suicidal thoughts among the surveyed teens began on Instagram. The social network does, however, say that it has put in place tools and support for teenagers on the platform, citing its work to remove images of self-harm and suicide from Instagram entirely.
The Drum has previously reported on advertisers’ responses to the Facebook Files, with many expressing wariness about purchasing until the facts are clearer. Phil Smith, director general at ISBA, said: “What this story really tells us is the urgent need for independent regulatory oversight, which ISBA has been calling for on behalf of its members for a long time. [Social platforms are] optimized for user attention and engagement in a competitive market, in the absence of regulatory constraints.”
While Facebook argues in its statement that The WSJ has deliberately misinterpreted the facts, it also admits that its own documents show at least the potential for harm. As the fallout from the leaks continues, advertisers will need to grapple with their own approach to spending across social platforms, given both the lack of transparency about their impact on users and the need to choose brand-safe environments.