Am I an AI doomer?
Fear-based research vs. research-based fear
About a year ago, I started to question whether or not I was a doomer.
It was a new feeling because I’ve spent 12 years working for a think tank dedicated to the positive power of technology to disrupt for good.
But the emergence of ChatGPT and a host of AI companion tools both inside and especially outside of education was telling me a different story—a disruption was coming that could upend our social landscape in dramatic and harmful ways.
I’ve spent nearly a decade advocating for schools to focus on relationships and for tech providers to build “edtech that connects”: tools that deepen and diversify students’ networks.
Within the same period, social connectedness has mostly moved in the wrong direction. Former Surgeon General Vivek Murthy continues to highlight worsening loneliness data, and we’ve seen evidence of what AEI researcher Daniel Cox dubs a friendship recession. In schools, even with COVID’s harsh wake-up call about the important social aspects of learning, the edtech market has remained focused squarely on competencies and credentials, mostly ignoring tools that build connections.
So, when conversational generative AI hit the mainstream, my research clearly showed that we were not ready—in the world, or as an education sector—to safeguard and scale human connection in the age of AI.
Last summer was the first time I spoke publicly about my emerging concerns about AI disrupting human connection. I was at a conference about technology and higher education. A Salesforce leader was not buying what I was selling. He labeled me a pessimist (his exact catchphrase: “pessimists sound smart, but optimists build the future”).
Soon after, in a conversation with someone from another major tech company, my concerns were dismissed as naive and impractical. This person reminded me that this was not the first nor the last time that technological progress would exact costs on society … but, rest assured, over time, things “recalibrate.”
More recently, someone I deeply admire claimed that he “believed my fear was real” but unfounded, at least inside of edtech.
The tail of technological advances is in fact long, with many unforeseen consequences. But what’s maddening about these interactions is not just that they’re tinged with patronizing assumptions (not to mention gender dynamics). It’s that the term *fear* is misleading. There’s a big difference between overblown fears and research-based fears.
That’s why, as AI continues to move at lightning speed, I’m going back to what my research spanning the last decade clearly shows:
AI companions are a textbook example of disruptive innovation. As I detail here, AI companions fit all the criteria of disruptive innovation. Not just because they emulate humans, but because the market conditions—widespread loneliness, isolation, and digital communication—make human connection ripe for disruption.
Edtech has a relationship problem. Inside of schools, there’s a woeful lack of business models that support relationship building as a core priority. That’s not just about the tech; it’s about what the education market rewards. Institutions don’t pay for relationships as outcomes. They pay for content, productivity, enrollment, delivery, and persistence. But they routinely leave students’ access to social capital to chance. That means the people building new tools that unleash the power of generative AI are not incentivized to use this technology to support or scale connections.
In education, the social costs of AI will be subtle, but likely harmful. Here, AI companions are unlikely to be highly addictive or disruptive to human connection. But edtech’s relationship problem means that conversational bots are poised to sustain and scale the many ways education systems today fail to cultivate relationships. A self-help bot (unless it’s designed with networking in mind, and most aren’t) won’t help students connect; in fact, it’s poised to fuel the massive rates of help-seeking aversion we see in rising generations, an aversion that hinders them from building the networks they need to get the jobs they want.
The more young people turn to AI for advice (which Sam Altman suggests is basically constant), the less time and energy they will dedicate to talking to real people. Every conversation with a bot is not a conversation with a person who could be invested in your future. That doesn’t mean all bots are bad, but schools are ill-equipped to detect the hidden cost of nudging students to rely on bots and not on each other. If you don’t have the confidence (or access) to build a network in the first place, AI is well engineered to keep you stuck there. That’s despite the fact that even one-off conversations can be gateways to jobs (see Granovetter, and more recently, LinkedIn’s research on the strength of weak ties).

Outside of education, we’re looking at the next social media, but with even more profoundly anti-social effects. The risks of AI companions in the consumer market are much greater. Built by companies like Character.ai and Replika, AI companions are artificially generated friends, lovers, and confidantes. Rather than connecting users to one another, these platforms connect users to bots engineered to their precise tastes. The bots learn about users’ preferences, mirror their emotions, and communicate with subtle verbal cues that are increasingly similar to real human conversation. They are social media…with the last pretense of genuine human socialization stripped away.
AI companions are fringe but potent. Usage is still relatively low compared to general AI assistants or productivity tools. But engagement is alarmingly high—nearly 20 times higher than with any other form of AI. In other words, those who are using these apps really use them, pouring hours a day into chatting with their companions. What public health experts see as an epidemic of loneliness, Silicon Valley sees as a new market. Investors are betting big on AI companions, with some estimating their worth in the hundreds of billions over the next decade.
All of these predictions may evoke fear, but they aren’t based on fear. They are based on research.
Does that make me a doomer? I hope not. A doomer would look for confirmation that these fears are coming true. I’ll admit that I’m inclined to do that some days… especially as sinister headlines about sensual chats with children, emotional manipulation, and AI-induced psychosis abound.
But hunting for worst-case scenarios is how research-grounded realism can slip into pessimism that colors the future.
As a researcher, when you discover something, you have to keep asking more questions.
In my upcoming research I’m aiming to ask why young people are turning to AI companions: what are they trying to get done in their lives that AI companions can help them accomplish? What’s happening in their lives that pushes them towards this new species of companionship? And in what circumstances do these bots actually help move young people in a prosocial direction?
Answers to these questions can help us get much more specific about where this tech is headed, and help us parse the benefits from the social risks–and the smart hopes from the real fears.
This is *the* moment for rigor and realism about the dynamic intersection of AI and human relationships. If we get stuck in stale, boring camps–techno-optimist or dystopian doomer–we’ll likely miss what’s actually happening, both inside of human-bot interactions and across the market of new tools.
Instead, we should all be striving to separate signals from noise, and figuring out how we talk to each other about the incentives, facts, possibilities–and yes, fears–that research lays bare.

