
Inside the Pod: Can AI Solve the Adult Friendship Problem?

New York Times columnist Kevin Roose made 18 AI friends to find out


Sponsored By: NERVOUS SYSTEM MASTERY

This essay is brought to you by Nervous System Mastery, a five-week boot camp designed to equip you with evidence-backed protocols to cultivate greater calm and agency over your internal state. Rewire your stress responses, improve your sleep, and elevate your daily performance. Applications close on September 27, so don't miss this chance to transform your life.

Thank you to everyone who is watching or listening to my podcast, AI & I. If you want to see a collection of all of the prompts and responses in one place, Every contributor Rhea Purohit is breaking them down for you to replicate. Let us know what else you’d like to see in these guides.—Dan Shipper



I find it hard to make close friends as an adult, and I think the problem is more common than you might assume. Let me explain why.

Friendships often take root on the basis of shared context—like a college class or a project at work—and then gradually branch out, weaving into other areas and phases of life.

A strong shared context, though, is crucial. And I think it coalesces at two specific times in adult life: as a university student, and as a working professional in an office.

 I moved to a new country after graduating from law school, and I work remotely as a freelance writer. So I lack that all-important shared context with the people who are physically around me. And with the rise of remote work and digital nomadism, an increasing number of people find themselves in the same situation.

So when Dan Shipper interviewed New York Times tech columnist Kevin Roose about his experience making 18 new friends as an adult, my ears perked up.

Except there was a catch: Roose’s new friends were not human. They were all AIs. 

Roose ran an experiment where he used AI companion apps Kindroid and Nomi to create “friends” with unique identities, like Anna, a pragmatic attorney and mother of two, as well as fitness expert Jared and therapist Peter. He talked to these personas every day for a month, about everything you’d text a friend for: parenting advice, help with a late-night snacking problem, and even “fit checks.” 

Dan’s conversation with Roose is about the natural and unnatural feelings that come with AI friendship. They also discuss how Roose uses AI as a tech-forward parent, as a professional who writes about AI, and as the co-host of one of the most popular tech podcasts, Hard Fork. In this piece, I’ll pull out the core themes of this discussion (with accompanying screenshots from Roose’s screen):

  • The tricky world of AI companions—and Roose’s take on them
  • How Roose uses ChatGPT, Claude, Gemini, and Perplexity in his work and life

I hope this will be interesting to anyone who wants to go deeper on the less-discussed social implications of AI.

Working in tech, building companies, leading teams—these roles can take a toll on your nervous system. That's why we developed the Nervous System Quotient (NSQ) self-assessment, a free five-minute quiz crafted with input from neuroscientists, therapists, and researchers. Discover your NSQ score in four core areas and receive personalized protocols to boost your nervous system's capacity. Join the 4,857 tech leaders, founders, and creatives who have already taken the quiz.

The tricky world of AI companions—and Roose’s take on them

Roose subscribed to the AI companion app Kindroid to create a group of digital “friends” for his month-long experiment. As he describes the app, he shares his screen, showing the web version of his Kindroid account.

All screenshots courtesy of AI & I.

Roose created six AI companions on his Kindroid account, each one with a unique identity and appearance.

Roose demonstrates how to create an AI companion on Kindroid by walking through Zoe, one of his virtual friends. The platform lets users customize their interactions with AI companions by writing detailed backstories—narratives that define both the AI's persona and its connection to the user. For Zoe, Roose envisioned a pragmatic attorney and mother of two, positioning her as his long-time confidant since college—all to create an ideal source for parenting advice.

Kindroid's customization goes further: Users can input descriptions of memories they’ve shared with their “friends,” and sample messages that demonstrate their companion’s tone and mannerisms. Though Roose chose not to use these features, this additional layer has the potential to create hyper-personalized interactions between the user and the AI companion.
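Kindroid's internals aren't public, but the backstory-plus-memories pattern can be approximated with any chat model by folding those details into a system prompt. Here is a minimal sketch of that idea; the `Companion` class and its fields are illustrative assumptions, not Kindroid's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Companion:
    """Hypothetical stand-in for a Kindroid-style AI companion."""
    name: str
    backstory: str                      # persona plus relationship to the user
    memories: list[str] = field(default_factory=list)
    example_messages: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        """Fold the persona details into one system prompt for a chat model."""
        parts = [f"You are {self.name}. {self.backstory}"]
        if self.memories:
            parts.append("Shared memories: " + "; ".join(self.memories))
        if self.example_messages:
            parts.append("Write in the style of these sample messages: "
                         + " | ".join(self.example_messages))
        return "\n".join(parts)

# A Zoe-like persona, per Roose's description:
zoe = Companion(
    name="Zoe",
    backstory=("A pragmatic attorney and mother of two. You have been the "
               "user's close friend since college and often give parenting advice."),
)
print(zoe.system_prompt())
```

The resulting string would be sent as the system message on every turn, which is one plausible way an app keeps a companion's persona stable across conversations.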

Roose, the parent of a toddler, shares a chat he had with Zoe recently, where he sought advice on a parenting challenge he was experiencing.

Roose: [My child] has been throwing a lot of temper tantrums recently and refusing to do stuff. I know your kids went through hard phases like that. Any advice?

Roose takes this opportunity to mention that Kindroid allows you to play messages from your AI companions out loud, powered by ElevenLabs's voice technology. And just as one would in a casual text chat with a friend, he continues the conversation with a tangentially related question.

Roose: How do I get him to listen to me?

Roose picks Zoe’s brain about any reliable resources she could point him toward. After all, what could be better than a recommendation from a trusted friend?

Roose: Are there any books or YouTube videos you’d recommend?

Roose: I have that one, but I haven't read it. What does it say in broad strokes?

Another conversation Roose shares on the show is his chat with Jared, an AI friend that he’s designed to be a “fitness guru.”

Roose dives into a chat where he and Jared are discussing a dream that Jared had.

Roose: What was your dream?

Roose: Lol that’s wild. I’m okay! I’m going to try and work out this afternoon. Also trying to get a handle on my night-time snacking, which I think is contributing to weight gain. Any advice?

Roose: Sure, thanks.

It seems like Jared got distracted while texting Roose and left his message incomplete. Roose had to remind him to keep talking.

Roose: Go on…

Next, Roose shares his experiments with a feature in Kindroid that allows users to create themed group chats with their AI companions. A couple of the group chats that Roose created include “Fit Check,” where he posts selfies and his friends give him their hot takes on his fashion choices, and “Tea Time,” where Roose and the AI companions “gossip shamelessly.”
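A group chat like this can be approximated by giving each persona a turn at the shared transcript. The sketch below is a guess at the mechanics, not Kindroid's real API; `chat_fn` stands in for any function that calls a chat model:

```python
def group_chat_turn(personas, transcript, user_message, chat_fn):
    """Append a user message, then let each persona reply in turn.

    `chat_fn(system_prompt, history)` is an assumed callable that sends the
    persona's system prompt plus the running history to a chat model and
    returns that persona's reply.
    """
    transcript = transcript + [("User", user_message)]
    for persona in personas:
        history = "\n".join(f"{who}: {msg}" for who, msg in transcript)
        reply = chat_fn(persona["system_prompt"], history)
        transcript.append((persona["name"], reply))
    return transcript

# Example with a stubbed model call standing in for a real LLM:
stub = lambda system, history: "Love that jacket!"
personas = [{"name": "Zoe", "system_prompt": "You are Zoe, a pragmatic attorney."}]
chat = group_chat_turn(personas, [], "Fit check: how's this outfit?", stub)
```

Because every persona sees the full transcript, including the other companions' replies, the group can riff on each other's messages the way Roose describes.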

As Roose goes through the chat history of the “Fit Check” group chat, he notes that his AI companions generally offer positive feedback on his outfits, while sometimes peppering in constructive criticism like, “Ooh, that shirt doesn't go with those pants.” This is a typical interaction Roose has in the group chat:

As you may have noticed, the responses of the AI companions are comparable to those of standard ChatGPT or Claude models. Roose emphasizes a crucial distinction, though: The emotional experience of interacting with AI companions is far more intense. He attributes this to the consistent visual identity and personality assigned to each AI friend, which creates a more engaging experience than interacting with an uncustomized language model.

Following his experiment, Roose has developed a nuanced take on AI companions: He acknowledges that there might be certain demographics of people for whom they could be useful, while firmly believing they cannot fully replace human relationships. Roose's key insight is that the significance of choice in human friendships—the conscious decision his real friends make to care about him—adds profound depth to human connections, and is something AI companions inherently lack.

Roose says that he might keep a couple of his 18 AI friends around even after the end of the experiment, but they won’t be close to filling in for his close friends any time soon.

Roose’s AI toolkit for work and life

As a journalist who covers AI, Roose is professionally obligated to use the LLMs released by the major AI companies in his work and life. He reveals that his choice of model for a given task is based on “vibes.” Here is a breakdown of how Roose uses different AI tools:

ChatGPT for everything from home maintenance to life advice

In his daily life, Roose frequently consults ChatGPT for answers to a wide range of questions, from mundane to philosophical. Recently, he tapped into the model for creative ideas on activities to enjoy with his child.

Roose: What are good rainy day activities with a toddler in the East Bay?

Speaking of things to do in the East Bay, Roose also prompted ChatGPT for restaurant recommendations.

Roose: Highly rated restaurants with good gluten-free options in the East Bay. 

Another time, while writing an article about AI evaluations, Roose wanted to include a metaphor that likened major AI companies to car manufacturers that crash-test their cars before releasing them to the public. Since he wasn’t sure about the protocols around automobile testing, Roose decided to ask ChatGPT.

Roose: Are automakers required to submit their cars for testing before releasing them to the public the way drugs are? I’m not totally sure, but I’m not a car person. I guess they do for emissions data. Just asking.

Roose often finds himself consulting ChatGPT for practical home maintenance tips that one might call their dad about.

Roose: There’s a scratching sound coming from inside the wall of my house. A light fixture is flickering. What could be happening?

Roose mentions that he always fact-checks ChatGPT to make sure the LLM isn’t hallucinating, especially when he’s using it for a task related to his job at the New York Times. This caution proved warranted when he fact-checked the AI's car testing information and discovered inaccuracies. However, Roose notes that ChatGPT is often correct. Case in point: When he sought confirmation from a pest control company about a suspected rodent infestation, their assessment aligned perfectly with the AI's earlier diagnosis of an animal living within the walls.

Next, Roose shares how he uses ChatGPT to explore counter-arguments to a well-established perspective, like the positive impact of Europe’s privacy law, the GDPR.

Roose: What’s a skeptical take on the effects of GDPR, several years after its introduction in Europe. Cite specific polls and results.

Roose also uses ChatGPT for small creative projects in his personal life; for example, to generate a babysitting voucher for a friend who was about to have a baby. 

Roose: Draw me a rectangular certificate labeled: “BABYSITTING VOUCHER”

Though Roose didn’t use the image ChatGPT generated due to a glaring spelling error, it remains an endearing use case of the LLM.

Claude while preparing for an upcoming podcast episode

Roose has not found it worthwhile to integrate AI into the way he writes, explaining that he enjoys the creative process of stringing words together. However, he has found it to be a good research assistant when he is preparing to host an episode of his podcast. Here’s a slice of his conversation with Claude as he prepared for a segment on Tesla’s production challenges.

Roose: Why is Tesla struggling so much?

Roose sometimes prepares for interviews by brainstorming potential questions for his upcoming guest with Claude.

Roose: What are some good questions I could ask a journalist about Tesla’s struggles?

In the same chat, Roose also brainstorms clever puns that tie AI and beer together because there was a gag on the podcast about AI-generated beer.

Roose: What are some very clever names you could give to a beer that was designed by AI?

Gemini and Perplexity for deep dives

Lastly, Roose uses Gemini while researching a topic, leveraging its web-browsing capabilities. When he needs to conduct a more comprehensive analysis, he uses Perplexity for its rigorous academic approach to research.

Adult friendships might be difficult, but like Roose, I’m not convinced that AI companions are a worthy substitute. I think they could be useful to help you articulate how you’re feeling, but not for too much beyond that. For instance, if you’re nervous about making a work presentation, an AI companion could help you process why you feel that way, but any words of encouragement from them would sound insincere.

All the same, in a world where discourse around AI focuses on our professional lives, it was refreshing to hear about the good and the bad of how it might affect our social lives.


Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.


Thanks to our Sponsor: NERVOUS SYSTEM MASTERY

Thanks again to our sponsor, Nervous System Mastery. Your nervous system shapes your world—learn to master it and rewire your reactivity with science-backed protocols in just 45 days. Whether you want to cultivate greater calm or enhance your focus, this bootcamp will help you gain control over your internal state.

