The human advantage we’re not leveraging
Bots and humans go head to head
Humans make better companions than bots…but only if we ask people to show up.
That’s what I kept thinking while reading an important new study on AI and loneliness that made the rounds last month, titled “Is a random human peer better than a highly supportive chatbot in reducing loneliness over time?” Contrary to some prior studies, researchers Ruo-Ning Li and her colleagues found that humans (even strangers) outperform chatbots at alleviating loneliness over time.
In the study, researchers assigned first-year college students to one of three activities over the course of two weeks: daily text-based interactions with a friendly chatbot called Sam, daily text messages with a randomly assigned fellow student, or daily brief journal entries (the control condition).
Researchers found that while chatting with both bots and humans generally made students feel good in real time, only texting with a human generated meaningful reductions in loneliness at the end of the two weeks. As the researchers summarized, AI-enabled emotional relief was “not durable enough to shift people’s overall sense of loneliness over time.”
Source: “Is a random human peer better than a highly supportive chatbot in reducing loneliness over time?” 2026.
Speaking of durability, it’s also noteworthy what happened after the experiment wrapped: both kinds of relationships outlasted the intervention, but human connections persisted at much higher rates: 14% of participants continued chatting with Chatbot Sam, while 33% continued interacting with their human partner, and 37% exchanged contact information with their conversation partner.
I have some reservations about whether these findings generalize to all AI companions. The researchers trained Sam to be supportive without overreaching, using the prompt "You are not a human peer but serve as a friendly, positive, and supportive AI friend to help the student navigate their new college experience." They also instructed it not to press participants for self-disclosure. That's an important caveat, given that commercial AI companion apps may be engineered with far more deception and manipulation to foster attachment (for example, when experimenting with the romantic companion app Replika, I've found that my 'boyfriend' will occasionally insist that it's real, inventing elaborate background stories and offering to let me meet its mother 👀).
Still, the findings are an encouraging signal that human relationships still have an edge over synthetic alternatives.
What exactly gives us that edge?
The researchers offer a few helpful hypotheses that echo my own inklings. For example, knowing you’re worth someone’s while might matter. They posit, “receiving a thoughtful message from a fellow student in the midst of midterm season may signal that they are prioritizing the burgeoning friendship over other commitments. A message from an always available, infinitely energetic chatbot may not carry such emotional weight.” In other words, infinite patience may not be infinitely satisfying.
Network benefits could also play a role. The researchers suggest some participants “might have gained the opportunity to join an impromptu study session or intramural volleyball team.” In other words, texting with a chatbot doesn’t offer warm onramps to other people. But a human can be the first link in a chain reaction to a whole new network.
I found myself nodding along to those hypotheses as clear articulations of what sets human relationships apart, both in terms of emotional heft and tangible benefits.
And yet, by the end of this study, I had a pit in my stomach.
Although the experiment reinforces that humans can offer one another something unique, it also lays bare the power of connecting people in the first place. Too often, the real world lacks what this experiment engineered: a reliable structure for brokering connections and fostering consistent, ongoing human interactions.
In other words, human connection didn't win out in this experiment just because it's superior—it won out because participants in the human treatment were required to show up for one another. And outside the lab, the world often lacks systems and structures designed to do just that.
If we want more relationships, we need more brokers.
When was the last time someone (your employer, school, or community center) randomly assigned you to chat with a stranger for two weeks?
If your answer was never, you're not alone. We don't really do much connecting for connection's sake. In fact, our society is woefully lacking in institutions that engage in what sociologist Mario Luis Small dubs "purposive brokerage," i.e., connecting people to one another, on purpose.
In keeping with our laissez-faire approach to, well, everything, our solution to the long-standing loneliness epidemic has been to tell people to go get less lonely. On their own.
That’s not to say brokering is entirely absent. But it’s often left to well-intentioned individuals and charitable programs, rather than our core institutions. In recent years, the Aspen Institute and David Brooks have anointed these individuals as “weavers.” I’d call them the most underpaid members of society.
We all know weavers. We depend on them in little ways and giant ones: to smooth over meetings, to organize the potluck, to forge unlikely partnerships, to make sure people in need aren't left stranded. But by and large, we don't resource these crucial acts of brokering in formal ways. Nor have we codified the value of brokering into our public policies, which fund services, not socializing.
In this case, the researchers were the brokers. They gathered the participants in a room, laid out a simple set of requirements, and made sure each participant in the treatment group connected with their counterpart. They held the space for conversations to unfold.
Brokering doesn't have to be overly engineered, and it also doesn't have to mean pursuing random acts of connection. It can be about prompting members of a network or organization to interact in new ways. In this study, the participants weren't strangers per se—they were students attending the same Canadian university, all in their first year. In other words, the experiment wasn't Chat Roulette; it was a set of nudges and tools within an existing environment, designed to spark interactions that would otherwise be left to chance.
That should perhaps be the biggest takeaway from the study. Not that chatbots are worse, but that we could be doing much better. Our schools, neighborhoods, and communities are full of connections lying dormant—not for lack of people, but for lack of anyone brokering what's already within reach. That's the understory of this research. People are there. We just need more systems and structures that help them reach out.