Every illustration/Robin Sloan.

What Do LLMs Tell Us About the Nature of Language—And Ourselves?

An interview with best-selling sci-fi novelist Robin Sloan


TL;DR: Today we’re releasing a new episode of our podcast AI & I. I go in depth with Robin Sloan, the New York Times best-selling novelist. We dive into what LLMs and their internal mechanics tell us about the nature of language and ourselves. Watch on X or YouTube, or listen on Spotify or Apple Podcasts. 


One of my favorite fiction writers, New York Times best-selling author Robin Sloan, just wrote the first novel I’ve seen that’s inspired by LLMs.

The book is called Moonbound, and Robin originally wanted to write it with language models. He tried doing this in 2016 with a rudimentary model he built himself, and more recently with commercially available LLMs. Both times Robin found himself unsatisfied with the creative output generated by the models. AI couldn’t quite generate the fiction he was looking for—the kind that pushes the boundaries of literature.

He did, however, find himself fascinated by the inner workings of LLMs.

Robin was particularly interested in how LLMs map language into math—the notion that every word, or fragment of a word, is represented by a unique series of numbers, allowing the model to work with human language computationally. He thinks of LLMs as language personified, given its first heady dose of autonomy.
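To make that mapping concrete, here is a minimal, purely illustrative Python sketch (not from the episode, and not how any production model is built): a tiny made-up vocabulary in which each token gets its own vector of numbers, loosely mirroring what an LLM's embedding layer does at vastly larger scale.

    # Toy illustration: map each token in a tiny vocabulary to its own vector.
    # Real LLMs learn these vectors during training over a vocabulary of tens
    # of thousands of tokens; here they are just random numbers for show.
    import random

    random.seed(0)
    vocabulary = ["the", "moon", "is", "bound", "to", "rise"]
    embeddings = {
        word: [round(random.uniform(-1, 1), 2) for _ in range(4)]
        for word in vocabulary
    }

    sentence = "the moon is bound to rise"
    for word in sentence.split():
        print(word, embeddings[word])

Feed vectors like these through a model's layers and you get, in effect, arithmetic performed on language, which is the move Robin finds so strange and compelling.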

Robin’s body of work reflects his deep understanding of technology, language, and storytelling. He’s the author of the novels Mr. Penumbra’s 24-Hour Bookstore and Sourdough, and has also written for publications like the New York Times, the Atlantic, and MIT Technology Review. Before going full-time on fiction writing, he worked at Twitter and in traditional media institutions. 

In Moonbound, Robin puts LLMs into perspective as part of a broader human story. I sat down with Robin to unpack his fascination with LLMs, their nearly sentient nature, and what they reveal about language and our own selves. It was a wide-ranging discussion about technology, philosophy, ethics, and biology—and I came away more excited than ever about the possibilities that the future holds.

This is a must-watch for science-fiction enthusiasts, and anyone interested in the deep philosophical questions raised by LLMs and the way they function. Here’s a taste:

  • Cultivate a note-taking habit. Robin’s note-taking habit inspired some of his novels, and he believes that building a repository of things you find interesting unlocks new avenues of creative pursuit. “The great thing about keeping notes and trying to cultivate a sense for that stuff that just appeals to you in a hard-to-describe way is [that] you’re essentially writing the perfect blog for yourself,” he says.
  • Prompt LLMs to mimic your voice. If you can articulate the tone and style of content that you want to create, Robin thinks LLMs would excel at generating output that meets your expectations. The “trick of fitting [LLMs] into a style” by prompting the model to write in a specific genre or format like a “murder mystery” or a “business memo” is “really impressive.”
  • The ingredients of great fiction writing. Through his experiments with AI, Robin discovered that language models couldn't generate exceptional creative writing because they were trained on a “fuzzy cloud” of generic data. He believes that truly great writing lies “way out at the edge of that probability cloud...[it’s] the stuff that expands the frontier of what we thought could be written.”
  • Balancing known inputs and fresh outputs. Robin is unsettled by the fact that the sheer volume of data used to train commercial LLMs makes it impossible to know exactly what information they contain. He wishes “there was a system where I was able to say...I know what's in [the LLM]...and now it's going to operate in this living, organic, unpredictable way,” a concept materializing with ethically trained models.
  • From code to cognition. Robin is intrigued by the idea that LLMs may reason better because they were trained on vast amounts of code, since it raises questions about the linguistic properties of programming languages. “[Code] is a bridge—it’s a way for us to think in a more machine way, and we express that in these linguistic terms,” he muses.

The philosophical underpinnings of how LLMs work 

The narrator in Moonbound frames the central question of the human race as, “What happens next?” These are Robin’s thoughts on the query and its philosophical implications in the context of LLMs:

  • Anticipate the future to thrive in the present. Robin says that he uses the narrator of his book to ask, “What happens next?” because it’s a fundamental query that’s “carved into [his] heart.” He elaborates that the question is essential for all forms of life, from a “little microbe” to a functioning human being, because it “allows [them] to do things like plan and react to possible dangers and all the other things you can imagine life doing.”
  • The philosophical significance of LLMs. Robin believes it’s crucial to consider whether LLMs can be regarded as living beings, given that they also grapple with the central question of what will come next. “If there are not presently, at minimum, dozens of philosophers, cognitive scientists, and ethicists thinking about this stuff...then the academy is derelict in its duty because these are really rich, interesting questions,” he says.
  • LLMs are infusing language with life. While Robin doesn’t think LLMs are “beings,” he is drawn to the idea that these models are the personification of language given its “first dose of autonomy.” “[I]t's like you rip language out of our heads and our society sets it up and turns a crank on the side and it starts walking around slowly and weirdly, like one of those little wind-up toys,” he explains.

Robin reflects on the release of Moonbound

As our conversation draws to a close, Robin shares his feelings about the launch of his new book:

  • Power of the written word. Robin is excited about the book’s publication so he can witness the way readers interpret and engage with it. “I think [books] literally get into people's heads because they have to—you didn't just watch Moonbound on a screen, you enacted and rehydrated the events in your own language model inside your own head,” he says.
  • The interplay between LLMs, books, and dreams. Robin concludes with a pet theory about a fascinating link between books, LLMs, and dreams. “I feel that the mechanism of dreaming is very similar to the mechanism of language models, kind of saying, ‘Okay, well, that's weird, but I'm going to keep it going’...and the reason I bring it up is that I think that's also very similar to the mechanism of a novel—my pitch for novels fundamentally is that they are packaged dreams,” he explains.

You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links and timestamps are below:

Timestamps:
  1. Introduction: 00:00:53
  2. A primer on Robin’s new book, Moonbound: 00:02:47
  3. Robin’s experiments with AI, dating back to 2016: 00:04:05
  4. What Robin finds fascinating about LLMs and their mechanics: 00:08:39
  5. Can LLMs write truly great fiction?: 00:14:09
  6. The stories built into modern LLMs: 00:27:19
  7. What Robin believes to be the central question of the human race: 00:30:50
  8. Are LLMs “beings” of some kind?: 00:36:38
  9. What Robin finds interesting about the concept of “I”: 00:42:26
  10. Robin’s pet theory about the interplay between LLMs, dreams, and books: 00:49:40

What do you use ChatGPT for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you. Reply here to talk to me!

Miss an episode? Catch up on my recent conversations with LinkedIn cofounder Reid Hoffman, a16z Podcast host Steph Smith, economist Tyler Cowen, writer and entrepreneur David Perell, founder and newsletter operator Ben Tossell, and others, and learn how they use AI to think, create, and relate.

If you’re enjoying my work, here are a few things I recommend:

  • The transcript of this episode is for paying subscribers.


Thanks to Rhea Purohit for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.
