Two truths and a lie about prosocial AI
Beyond business as usual
Anyone worried about AI and human connection eventually runs into the same problem: market incentives.
If we want a future where AI is fundamentally prosocial, aligning incentives across a system that routinely prioritizes cost over connection is the hardest nut to crack.
While trying to distract myself from that grim reality, I recently came across a post by Mike Maples, a renowned early-stage venture capitalist, describing why he’d passed on investing in Airbnb. Maples explained:
“In 2007, if you asked travelers what frustrated them, they’d say hotels were too expensive. Nobody said, ‘I wish I could sleep in a stranger’s apartment.’ The contradiction Airbnb spotted was different. People actually would trust strangers with accommodations. The entire industry’s assumption was wrong. Reality and consensus had quietly diverged. That’s the distinction that took me years to see: Pain invites you to improve the current game. Contradictions invite you to change which game gets played.”
His words shed totally new light on what has felt like an intractable incentives problem when it comes to AI and human connection. The contradiction of our time is not just that we may actually trust total strangers enough to ride in their cars or stay in their apartments. It’s that despite declining rates of social connection, we are actually yearning to befriend strangers, to see and be seen by our communities, and to overcome the ways that digital life has flattened human empathy.
That kernel of truth has stuck with me as I hunt for the most promising examples of prosocial AI tools in an era flush with the opposite: products that talk to us in the first person and are engineered to keep us connected to them rather than one another.
Building a roadmap for prosocial AI
The term “prosocial AI” can be slippery, often used to describe a grab bag of virtuous attributes. I think of it more simply, much like my colleagues at The Rithm Project do: prosocial AI strengthens people’s human relationships and their relationship skills—including their confidence to seek out help from humans, not just AI.
By that measure, very few AI tools are deliberately prosocial, and hardly any have strong evidence showing they actually strengthen our human connections or capacity to relate.
That scarcity is a market problem, not a technology one.
Today’s market rewards tools that trade in productivity, convenience, and engagement, luring us in with quick, on-demand answers, flattering us, and keeping us chatting.
But strip away sycophantic, manipulative designs and the same underlying AI technology, with its breathtaking speed and analytical horsepower, could actually start to bring people together in radically new ways.
Those breakthroughs won’t happen, however, without far more innovation in the space and a market to support it. The question is how to do that, who will pay for it, and what it will take to make prosocial AI a whole new market category that, as Maples put it, changes the game.
Here are some of the uncomfortable truths, and a stubborn lie, I think we’ll need to confront to get there:
Truth #1: We can build, and must measure, prosocial AI
Right now, the notion of AI tools that support relationships feels almost silly and disingenuous. The most popular chatbot tools like ChatGPT are so human-like that many of us experience AI as a relationship, not a tool to facilitate relationships.
But there’s nothing written in stone saying that those same chatbots can’t be trained to build users’ confidence in reaching out to others or to redirect users to the people around them. As Rithm Project’s Prosocial Design Principles outline, there are a host of engineering choices that any tool builder can make to move users away from AI dependence and towards human connection.
In recent years, I’ve been studying what prosocial tools can look like in education. For example, Goldi is a bot designed to help job seekers learn about the value of professional networks and practice telling their stories to prepare for informational interviews. Uprooted Academy is a college access platform that helps students organize their support networks and keeps those supporters updated on ways they can help with the college application process.
This doesn’t have to be trial and error. Tools like these embed what research has long shown about developing relationship skills, destigmatizing help-seeking, and teaching growth mindsets around networking. (We’ve been supporting a cohort of edtech platforms piloting some of these evidence-based approaches; stay tuned for a paper on our early findings this summer!)
There’s also immense possibility in moving beyond chatbots and building AI infrastructure that supports the behind-the-scenes relationship management that’s often hard to do at scale. For example, Protopia is an alumni engagement tool designed to comb through alumni directories and reach out to the alums best positioned to answer students’ career questions. It’s AI that scales human-to-human conversations rather than disrupting them with chatbot alternatives.
While prosocial design is indisputably possible, measuring the efficacy and impact of these designs will be critical. How many times have platforms built out “community” or “social” features that no one uses or that end up backfiring and deteriorating trust? If we’re going to move prosocial AI forward, we don’t just need well-intentioned designs. We need to invest in measuring efficacy, understanding if and how users’ social connectedness and relationship skills evolve over time.
And that brings me to the double-edged sword of this particular truth: prosocial AI could prove to be a lightning rod for privacy concerns. AI will become a more useful relationship coach and a better connector the more it knows about users. And the more tools measure relational health, the more they might be collecting personally identifiable and sensitive information about users’ social and emotional lives.
That means tools intended to amplify social connection could ironically veer towards their antisocial antecedents, trading in surveillance in the name of social support. That also means that the most promising prosocial tools may not run on frontier-lab APIs at all, but on institutionally hosted models—AI that lives inside a school, nonprofit, or community organization’s infrastructure, where user data stays within the walls of a provider with users’ best interests at heart.
Truth #2: Business models are the greatest hindrance to prosocial AI—and will also be the greatest breakthrough
For all the possible prosocial designs and developments, we have to face an inconvenient truth: what’s possible technologically is not always plausible commercially.
It’s hard to attract capital and build business models premised purely on healthy social connection.
This is a market failure of epidemic proportions. Markets to support prosocial technologies remain weak, especially when chatbots offer a cheap snack to satisfy people’s hunger for social connection. And the preceding decades have shown us that the opposite of loneliness might not just be connection, but distraction. Who needs human conversation when there’s YouTube?
Not to mention, people are starving for on-demand help in all the domains where institutions like healthcare and education fall short. In turn, prosocial AI isn’t just competing in a market where it’s hard to get people to pay for connection outright; it’s now competing in a market where synthetic alternatives to human help are widely available and wildly affordable.
The same is true in the B2B world, where most firms treat relationships as the cost of doing business, not an asset.
Moreover, whether you’re selling to consumers or enterprises, it’s now hard (impossible?) to build an AI tool that eschews all the ways in which anthropomorphic AI drives engagement. People prefer relational AI that acts human to more transactional tools and interfaces.
These business challenges are why Maples’ description of Airbnb hit me so hard. Trying to tweak AI to be more prosocial within current business models might be a losing proposition.
Prosocial AI will depend on a business model that changes the game, not improves the existing one.
In turn, it will be radically different and perhaps even counterintuitive to what we might imagine “social” technology to be.
Lie: Prosocial AI will look and feel like social media
That brings me to what I think we may be getting wrong when we think too linearly about making AI social: that “prosocial” AI simply means building a healthier, more virtuous version of social media.
Instead, I think the greatest breakthroughs will come from designing AI that gets people off platforms and back into real relationships and live conversations.
Yet today, many of the efforts I see to make AI tools more social borrow from the social features (feeds, likes, and DMs) that companies like Meta and TikTok wired into our lives over the past two decades.
Don’t get me wrong: adding these features to existing AI tools is a critical first step in moving away from self-help tools toward tools that build connection. And it’s smart to give users a look and feel familiar from the platforms they’ve grown up on.
But reinforcing digital habits may prove a shortsighted way to conceive of what AI might make possible. Social media’s business model shaped its designs, rather than the other way around. Transplanting those familiar features could keep us stuck in the old paradigms and traps that social media set up.
Indeed, if the meteoric rise of Moltbook (the viral social network for AI agents recently acquired by Meta) showed us anything, it was that social media has taken the multiple, nuanced dimensions of real human relationships and flattened them into a pattern so predictable that bots could reproduce it in a matter of days.
The point of prosocial AI should be to make us less like robots, not more. It should usher in a new era of relational tech, rather than supercharge the last one. That means imagining social technology from the ground up, with wholly new business models and designs.
Getting there will require playing a new game and reckoning with the truths—and rejecting the lie—that stand between what’s possible and what actually gets built.


Thanks for this, Julia! I'm pretty actively hacking against this problem - I've made and am currently testing an AI platform to help people navigate their close relationships in a way that also incorporates context from the other side (fighting sycophancy), and have also separately had fun building an app that incentivizes personalized 1-1 recommendations of books/movies/etc to friends (i.e. trying to call back that sweet "known" feeling of when someone would burn you a custom CD vs just sharing a Spotify wrapped to a mass audience). As you note, navigating incentives is crucial in the design process - and I'm doing my best to get this right but always trying to learn. Appreciate your and Rithm's guidance on what good looks like :)
Love this, Julia! I really think this is a space where schools could play a critical role. If we reimagined schools as the hubs where humans practice connecting in order to maximize collective benefit, I think there is tremendous potential. Obviously, the current overload on educators and educational leaders is making this incredibly challenging right now.
One other challenge I have heard related to this is a financial one. I was really excited about the possibility of a "walled" educational AI system that I was introduced to. I could see so many benefits for students who struggled with the social engagement that schools required. Too often these social challenges lock young people out of the most meaningful learning experiences. The possibility of a patient, well-directed AI bot helping these young people improve their social skills while tapping into their personal passions, in a way that would point them back to human interactions, seems like a game changer. When I raised this with an educational leader, there was a concern about getting "locked in" to a tool that would then carry a heavy financial burden in the long run. While I don't like that logic driving decisions, it is a real factor in financially strapped public school systems. If anyone is creating in this space and thinking about how to solve this problem, I would love to talk more.