When we look at our kids, we see ourselves
On online safety for all
When it comes to AI policy in 2026, one thing is abundantly clear: kids’ online safety will remain a banner issue, both in legislatures and in the court of public opinion.
Some folks were caught off guard by just how quickly child safety became an AI policy flashpoint in 2025. I’m not sure we should be that surprised.
The energy clearly drew from Jonathan Haidt’s wild success with his 2024 book The Anxious Generation. Whatever you think about Haidt, his ability to galvanize policymakers on this issue can be chalked up to more than polemics. His book struck a chord with the so-called anxious generation’s even more anxious Millennial and Gen X parents, who worry that their kids’ lives will be worse than their own.
But behind this trend, I think those same parents and advocates are channeling more than concerns about child welfare. Research suggests that adults are deeply conflicted about their own relationship with technology. Consider the numbers: a survey by Harmony Healthcare IT found that although Gen Z spends more time daily on their phones, Millennials and Gen X aren’t far behind.
In a testament to Haidt’s influence, the vast majority of respondents from all generations agreed that high schools should ban cell phones. But those same adults are also looking for ways to curb their own addictions. Over half of all Gen Z, Millennials, and Gen Xers want to cut down on phone usage themselves.
Source: Harmony Healthcare IT, 2025
Kids are even picking up on their parents’ struggles to unplug: 46% of teens say their parent is at least sometimes distracted by their phone when they’re trying to talk to them.
Concerns about AI follow some of the same patterns. Researchers at Pew found that nearly half of 18- to 50-year-olds are more concerned than excited about the increased use of AI in daily life. Half also said they’re worried that using AI will make people worse at forming meaningful relationships with others. (Note: this issue isn’t cut and dried. The same Pew survey found that people who use AI less are more worried about its negative effects… suggesting concerns could start to abate with greater exposure to the technology.)
That data suggests that kids’ safety is the tip of a public policy iceberg: beneath it sit broader fears that inundating our lives with AI might be a net negative for society writ large.
This data, combined with my own deep ambivalence about my reliance on tech, has got me wondering: as adults continue to talk about protecting kids in the age of AI, how much are we actually talking about ourselves?
Politically speaking, child safety is an arena where both sides of the aisle can (sometimes) admit that free speech and self-determination may have to take a back seat to well-being. But it’s also becoming a politically safe space for working out adults’ own concerns about the profound social, psychological, and privacy risks AI poses in general, not just to the youngest users.
I’ve found that debates that start out technical and legal often turn personal. In most conversations I hear on this topic, advocates, researchers, and industry leaders eventually start talking about battling their own kids over screen time… and most eventually admit that they themselves struggle to put their phones down.
It’s easy to argue that healthy relationships are the operating system kids should grow up inside. But most adults know deep down that that’s true for us too. Human development extends long after we turn 18.
Could AI policies protecting kids’ relationships, privacy, and well-being serve as a blueprint for AI policy that centers adult development in general? As my colleague Gregory Nadeau recently wrote, “safe AI for minors is politically doable and forces the issues of safe AI for all.” If we can figure out the right guardrails and independent oversight to safeguard kids, that same infrastructure could inform greater safety and transparency across the market.
There are major headwinds, of course, to that happening. Sam Altman’s “treat adults like adults” mantra is emblematic of much more than OpenAI’s particular philosophy and profit motive. It reflects a libertarian bent, running deep on both the left and the right, that limits how far either side is willing to impose on adult users’ rights when it comes to tech.
But I think parents (of all political persuasions) are going to prove a unique constituency in this mix: we’re wrestling in real time with the ways in which unbridled freedom to use tech can become its own trap. We’ve seen two decades of social media shape our own social habits, not just younger generations’. And we can remember the days when our own parents couldn’t check email at the dinner table.
That memory matters. It reminds us that what feels inevitable now was not preordained—and that different choices are still possible. And maybe preferable.
So this year, like many in the AI world, I’ll be closely watching how the kids online safety debate unfolds. But I’ll especially be listening for what the debate tells us not just about what we want for our kids, but about what we may also be trying to find for ourselves.