Early in the morning of December 4, press coverage started rolling in about Facebook's new "Messenger Kids" app, an extension of its chat product designed for the under-13 set.
The articles entirely miss the twin pernicious goals underlying Facebook's new product.
I'll get to what's wrong with Messenger Kids shortly, but let's start with what the various articles do cover: Messenger Kids creates protected Facebook Messenger accounts for pre-teens where bad words aren't allowed (a nanny AI is watching) and parents control their kids' contact lists. This sounds good on the surface: with Messenger Kids, parents can feel safe knowing that their kids won't be trading messages with strangers or sketchy non-strangers, and parents can also relax knowing that message content won't be dirty, hurtful or scary.
The articles all focus on what Facebook says it is trying to achieve with Messenger Kids; they read as if they were inspired by a press release, which a little rapid searching confirmed.
Elsewhere online, an article from the Associated Press included a more skeptical note:
Is Messenger Kids simply a way for Facebook to rope in the young ones?
Stephen Balkam, CEO of the nonprofit Family Online Safety Institute, said "that train has left the station."
Federal law prohibits internet companies from collecting personal information on kids under 13 without their parents' permission and imposes restrictions on advertising to them. This is why Facebook and many other social media companies prohibit younger kids from joining. Even so, Balkam said millions of kids under 13 are already on Facebook, with or without their parents' approval.
He said Facebook is trying to deal with the situation pragmatically by steering young Facebook users to a service designed for them.
Balkam throws up his hands at the very idea that parents can keep their children off social media, agreeing with Facebook that it's better for the kids to have water wings and a blow-up pool than the scary, undertow-filled ocean of real Facebook.
This is like saying, "well, those pesky kids are going to take heroin anyway, but at least we can control the dose!"
So what are the articles missing?
The danger of Messenger Kids has nothing to do with the content of the messages or the senders of the messages. It has everything to do with the fact of the messages themselves.
It's not about the content: it's about the container.
Inviting a notification-filled interruption machine like Messenger Kids into the lives of children impairs their ability to focus and think at exactly the time when they are building critical skills that will serve them for the next several decades of their lives.
With Messenger Kids, according to the article from Fast Company, "Kids have access to a number of kid-approved GIFs, masks, emojis, and sound effects also available to play with that were designed to be appropriate to youngsters in the 6-11 age range."
While the content of the GIFs and emojis might be appropriate, the fact of the GIFs and emojis is inappropriate.
At six or seven, for some kids it's the Rainbow Fairies, for others it's Captain Underpants, for still others it's Harry Potter, but regardless of the stories, this is the age when — if the kid manages to sail between the Scylla of video games and the Charybdis of YouTube — many kids cross that magical barrier between listening to stories other people read to them and being able to read independently.
An endless river of colorful interruptions will not help kids learn to read, do math, or absorb the basic rules of social interaction with their peers at school and on the playground.
Adults have trouble resisting the allure of the screens in their lives. A recent study called "Brain Drain: The Mere Presence of One's Own Smartphone Reduces Available Cognitive Capacity" demonstrates that you have to turn your smartphone off and move it to an entirely different room before it stops distracting you. (I write more about this study and its implications here.)
Children by definition have less self-regulation than adults.
As Tristan Harris, a design ethicist formerly of Google, famously observed in an Atlantic article:
"You could say that it's my responsibility" to exert self-control when it comes to digital usage, he explains, "but that's not acknowledging that there's a thousand people on the other side of the screen whose job is to break down whatever responsibility I can maintain."
The message that Facebook is sending to parents with Messenger Kids is that it's okay to give their kids untrammeled access to this messaging service because the messages themselves are safe.
It is not okay.
It is not safe to expose children with highly plastic minds to an addictive technology — the messaging service. (For more on addictive technologies, see Adam Alter's remarkable recent book, Irresistible.)
At the start of this column I said that Facebook had two pernicious goals in launching Messenger Kids. By this point the first goal is obvious: to create more generations of people who will be lifelong addicts of social media.
Facebook's second pernicious goal is to do an end-run around Snapchat, which for teenagers in 2017 is crack compared to Facebook's mulled wine. If Facebook can bond pre-teens to its messaging platform, then those kids will be more likely to stick with Facebook (because that's where their contacts already are) once they hit 13.
To wrap this up, the heavy burden of parenting in today's technology-filled age is that there is no technology that replaces the time-consuming, energy-draining, usually thankless work of actually paying attention to what your kids are doing. This is why the Common Sense Media data that Emily Price cites in Fast Company is terrifying: 93% of 6- to 12-year-olds in the U.S. have access to tablets and smartphones, and 68% have a device of their own.
There is no device or service to which you can outsource monitoring what your kids are doing with tech.
Not even Facebook's new Messenger Kids.