
TL;DR: Today we’re releasing a new episode of our podcast AI & I. Dan Shipper goes in depth with author and researcher Nadia Asparouhova. Watch on X or YouTube, or listen on Spotify or Apple Podcasts. Here’s a link to the episode transcript.
Was this newsletter forwarded to you? Sign up to get it in your inbox.
Nadia Asparouhova wasn’t looking for a spiritual awakening.
She was just curious why the jhanas—states of deep meditative bliss that can be reached through an intensive program of concentration practice—had become so popular in tech circles. The more she read, the more she wanted to experience them firsthand. Nadia went to her first jhana retreat without having a regular meditation practice, and on her second one, she reached what practitioners call “cessation,” the final stage of the path, in which one slips out of consciousness at will.
Afterward, something shifted. Nadia found that she had better control over her attention, emotions, and distracting self-talk. It challenged her assumption that these messy qualities are an integral part of being human—and made her wonder if we’re more like LLMs than we know.
Asparouhova is a deep thinker, writer, and researcher. She published Working in Public, a book that chronicles the evolution of open-source software development, with Stripe Press in 2020. More recently, she’s the author of Antimemetics, a book about why certain ideas, however compelling, fail to spread on an internet optimized for attention.
In this episode of AI & I, Dan Shipper and Asparouhova talk about her journey into jhana practice, and how its goal-oriented structure sets it apart from other forms of meditation (she has since stopped practicing, saying that the experience of cessation was akin to completing a video game). They also get into how our modern sense of self is the product of centuries of cultural evolution, and what that means for how we perceive large language models today.
You can check out their full conversation here:
If you want a quick summary, here are some of the themes they touch on:
Maybe LLMs don’t lack consciousness—maybe we’ve just misunderstood ours (00:19:11)
When we wonder if LLMs are sentient, we tend to evaluate them against traits of consciousness we recognize in ourselves—like the inner monologue most of us have (and have often wished would quiet down). But Asparouhova challenges the idea that our current sense of self is as timeless or universal as we assume. She draws on the work of psychologist Julian Jaynes, whose theory of the "bicameral mind" suggests that what we now consider consciousness may have only developed in humans a few thousand years ago. Asparouhova also points to historical examples like the emergence of soliloquies in Shakespeare’s plays, where characters spoke their thoughts out loud, revealing the existence of an inner world.
This raises a provocative question: If our sense of self has evolved so significantly, why are we so certain that LLMs lack something essential when it comes to consciousness or intelligence? “I think a lot of people take for granted that [our inner monologue] is just part of what it means to be human, but it may not actually be… I think there's a mountain of evidence that it's at least fluid and has changed over time,” she says.
You don’t have to understand AI to use it well (00:33:55)
While contemplating the nature of selfhood, Dan reflects on how difficult it is to understand the causes of conditions that disrupt our sense of self, like obsessive-compulsive disorder (OCD) and depression. Despite the many books and theories, every answer that’s proffered—whether it points to parenting, brain chemistry, or trauma—feels both partially true and yet somehow incomplete. He draws a parallel to how LLMs predict which token comes next, a process shaped by thousands of subtle correlations, too contextual and complex to reduce to any single explanation.
Asparouhova finds something compelling in that ambiguity. To her, one of the quieter joys of AI is accepting that we don’t fully understand how these models work—and using them anyway. As someone who enjoys “resisting legibility,” she finds that the unknowability of LLMs draws her toward them even more. This prompted me to think about why there’s no right way to use AI: The fuzzy nature of LLMs is an invitation to experiment with them to find your own rhythms and workflows.
No one ever needs to write in a vacuum again (00:38:03)
According to Asparouhova, the trope of the brooding writer who spends all day wrestling with their ideas alone is beginning to shift (I tend to agree, but more on that in a moment). For her, integrating ChatGPT into the writing process has made the act feel far less isolating. She turns to language models at the bookends of her work: In the early stages, when messy, half-formed ideas are taking shape, she uses AI to explore connections between them; and at the very end, when she’s editing herself, it sometimes helps her land on just the right word.
Asparouhova’s description of writing resonated with me. At Every, we’ve been experimenting with using AI to get a preliminary round of editorial feedback on our drafts—and I’ve found it helpful, even comforting. There’s something reassuring about having a distilled version of my editor’s perspective embedded in a language model that can read through my piece before they do.
Here’s a link to the episode transcript.
You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links are below:
- Watch on X
- Watch on YouTube
- Listen on Spotify (make sure to follow to help us rank!)
- Listen on Apple Podcasts
What do you use AI for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you.
Miss an episode? Catch up on Dan’s recent conversations with founding executive editor of Wired Kevin Kelly, star podcaster Dwarkesh Patel, LinkedIn cofounder Reid Hoffman, a16z Podcast host Steph Smith, economist Tyler Cowen, writer and entrepreneur David Perell, founder and newsletter operator Ben Tossell, and others, and learn how they use AI to think, create, and relate.
If you’re enjoying the podcast, here are a few things I recommend:
- Subscribe to Every
- Follow Dan on X
- Subscribe to Every’s YouTube channel
Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.
We build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex. Deliver yourself from email with Cora.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.
Find Out What Comes Next in Tech.
Start your free trial.
New ideas to help you build the future—in your inbox, every day. Trusted by over 75,000 readers.
What's included?
- Unlimited access to our daily essays by Dan Shipper and a roster of the best tech writers on the internet
- Full access to an archive of hundreds of in-depth articles
- Priority access and subscriber-only discounts to courses, events, and more
- Ad-free experience
- Access to our Discord community