Midjourney/Every illustration.

How to Prepare for AGI According to Reid Hoffman

Humans can master AI—instead of losing our agency


TL;DR: Today we’re releasing a new episode of our podcast AI & I. I go in depth with Reid Hoffman, cofounder of LinkedIn, author, and venture capitalist. We get into the psychological patterns of how we adopt new technologies, the notion of human agency, and Reid’s take on the next decade of AI. Watch on X or YouTube, or listen on Spotify or Apple Podcasts. 

Was this newsletter forwarded to you? Sign up to get it in your inbox.


AGI is coming. Reid Hoffman just wrote the book on how to prepare.

According to Reid, every major tech breakthrough (the written word, the printing press, the telephone) triggered mass fear. But, contrary to our worries, new technology tends to enhance human agency—even more so if you know how to use it well.

In Superagency, his book released yesterday, Reid examines how we’ve historically adopted new technologies and focuses on AI’s potential to increase our agency—the ability to make decisions that affect outcomes. He wrote the book for two audiences: anyone who is curious, or even skeptical, about AI; and technologists building in AI, with the hope that they will treat human agency as a design principle for their products. As someone who straddles both worlds, I read the book and really liked it.

Beyond being a prolific author, Reid is the cofounder of LinkedIn, Inflection AI, and Manas AI; a partner at venture capital firm Greylock Partners; an early backer and board member of OpenAI; and an award-winning podcaster—and I was pleased to invite him onto AI & I again, this time in person. Here's a link to the episode transcript.

We recorded an hour-long conversation, going deep on:

  • The notion of human agency, how our sense of agency shapes our response to new technologies, and its interplay with uncertainty
  • Why Reid believes that private commons and equitable access to AI will be beneficial for society at large
  • How the history of AI mirrors a philosophical shift in how we understand intelligence, from trying to program explicit rules about how thinking works, to building systems that learn patterns from data
  • Reid’s take on how the next decade of AI will involve an interplay between rule-based systems and pattern-matching ones

It’s a must-watch for anyone who wants to use AI to increase their agency.

Watch on X or YouTube, or listen on Spotify or Apple Podcasts. 

If you want a quick summary, here’s a taste for paying subscribers:

How the history of the printing press mirrors AI 

If you find yourself leaning ahead, craning your neck, to catch a glimpse of tomorrow, the one where AI is a powerful, prevalent technology, Superagency has a counterintuitive suggestion: Turn around, and look back. Reid argues that a good way to understand how AI will reshape our lives is to study the past; specifically, how humans have historically interacted with a new technology. 

Reid points to the printing press, an example familiar in tech circles: “When the printing press was introduced, a lot of the public dialogue was very similar to the dialogue we have around AI—this will lead to the collapse of our trust in human cognition, it’ll lead to widespread misinformation, it’ll lead to the collapse of the solidity of our knowledge and society.” Six centuries later, we recognize the printing press as a cornerstone of human progress. Being aware of the patterns that emerge in our perception of new technologies can temper any misgivings we might have about AI and help us navigate the tricky transition period we find ourselves in today.

Technology shifts agency, perception decides how

Superagency goes beyond history, drawing from psychology to understand why we’re typically afraid of new technologies. Reid believes that a big part of our sense of self tracks to a “notion of agency.” He argues that new technology has historically been viewed as reducing human agency, leaving us bitter and fearful of it.

While agreeing that a new technology does lead to a change in our agency, Reid points out that agency isn’t just about objective control; it’s also about how we perceive and experience our own sense of control. “Is it a loss of agency to being driven in an Uber, or a gain in agency to being driven in an Uber? If you're like, ‘My hands aren't on the steering wheel, and who knows what this random human being is doing?’—it's an enormous loss of agency. And yet, of course, hundreds of millions of people are doing it because they realize it's a gain of agency—I can not have a car and get somewhere.”

How Reid developed a sense of agency

Reid’s personal sense of agency is grounded in an old catechism: “the strength to change the things I can, the tolerance to live with the things I can't, and the wisdom to know the difference.” He internalized this at a young age through an unlikely source: moving cardboard ships across hexagonal maps. Reid played a lot of games as a child—Star Fleet Battles, board games by Avalon Hill, and Dungeons & Dragons—and they helped him develop a strategy for life: “Figure out what the nature of the game is and what are the things that are within your ability to change,” he says, “and then accept the things that you can't, while changing some really interesting things.” This approach continues to guide Reid’s investment decisions today.

While he did play games like chess and Go, Reid doesn’t think they mimic real life as well as other board games. “By having some randomness with dice rolls, [other board games] actually more closely approximated the kinds of circumstances we encounter in life—because life is not like chess, life is not like Go. It’s not deterministic that way. There’s epistemic uncertainty sometimes that you have to play into.”

Navigating our fraught relationship with uncertainty

A factor that affects whether or not we feel a sense of agency is our attitude toward uncertainty. The desire to eliminate uncertainty is common: Science attempts to formulate fundamental laws so that we can make accurate predictions; Western philosophers have historically tried to make knowledge as explicit as possible.

While these pursuits are undoubtedly useful, according to Reid, we delude ourselves into believing the world is more certain than it is. “We don’t, for example, realize that every time we drive somewhere on the highway, we are taking a certain amount of risk: We’re taking a risk about our own competence, we’re taking a risk on our vehicle, we’re taking a risk on weather conditions, we’re taking a risk on other folks. And of course, once again, just like agency, if you dwell on all that, maybe you’d never get into a car—but then you would never go anywhere, right?” 

Reid believes that we should treat uncertainty as a feature, not a bug, leveraging it to make better decisions. In Superagency, he uses the metaphor of a compass to describe how AI can help us navigate through the space of possibilities. 

How your data can drive your potential

As AI becomes more capable, Reid sees equitable access to the technology as more than a moral imperative. Concentrating AI in the hands of a privileged few isn’t just unfair; it’s inefficient. “When you get talent from as broad a range as possible to do work, to be creative, to create maximum benefit, all of the rest of us in society benefit from that too.”

On a related regulatory point, Reid is in favor of private commons—the idea that an individual’s personal data, like the information accumulated about that individual by Google, can be thought of as a valuable resource. He argues that it’s a matter of framing: You can think of Google Maps as “surveillance capitalism” because it allows people to see the location of your house, or as something that enables you: “Google Maps allows my friends to figure out where my house is and come visit. Yes!”

All the same, Reid believes that personal data should not only belong to the platforms but should also be accessible to individuals in a way that allows them to use it for their benefit, such as analyzing patterns with AI. We talk about an app that I’m building to predict my OCD symptoms based on data collected from my wearable device, and facial and vocal cues from a video I record of myself talking every day. Reid says, “I do think that it's precisely this kind of data that—if you have the ability to shift it around as you need and want as an individual—can create great things. We're at this point decisively moving more and more towards quantified self.”

From rigid rules to flexible thinking

The evolution of Western philosophy—from searching for universal definitions to realizing there’s a limit to our ability to grasp things—is mirrored in the shift from symbolic AI, which relies on clear rules and definitions, to sub-symbolic AI, which learns by recognizing patterns and making guesses from data. Reid says he was part of a movement in cognitive science called connectionism, which emphasized that “symbols are important, but if you only had a symbolic theory, you're probably going to radically underperform your modeling of what your intelligence is.”

How you can evolve with AI

If you’re a knowledge worker watching the progress of AI—with curiosity, skepticism, excitement, or fear—this is what Reid wants you to know:

  • AI is the new standard. “I can't be a professional and say, ‘I don't use computers,’ ‘I don't use smartphones’...AI is just the amplification of that.”
  • Your job isn’t going away, just changing. “I think a lot of jobs won't go away; I think they'll transform—and so the question is, are you adapting and transforming with them?”
  • Don’t forget to experiment, for work and play. “The best way [to learn] is just start engaging with it in some seriousness. It's fine, for example, to go to ChatGPT and say, ‘Give me a sonnet for my friend's birthday.’ Great. Do that. But also use it for things that you are serious and earnest about—and you may find that some of them it's not ready yet, but you'll find that some of them it is.”

You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links and timestamps are below:

Timestamps:
  1. Introduction: 00:01:29
  2. Patterns in how we’ve historically adopted technology: 00:02:50
  3. Why humans have typically been fearful of new technologies: 00:07:02
  4. How Reid developed his own sense of agency: 00:13:25
  5. The way Reid thinks about making investment decisions: 00:20:08
  6. AI as a “techno-humanist” compass: 00:29:40
  7. How to prepare yourself for the way AI will change knowledge work: 00:35:30
  8. Why equitable access to AI is important: 00:41:39
  9. Reid’s take on why private commons will be beneficial for society: 00:45:15
  10. How AI is making Silicon Valley’s conception of the “quantified self” a reality: 00:47:23
  11. The shift from symbolic to sub-symbolic AI mirrors how we understand intelligence: 00:52:14
  12. Reid’s new book, Superagency: 01:03:29

What do you use AI for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you. Reply here to talk to me!

Miss an episode? Catch up on my recent conversations with star podcaster Dwarkesh Patel, LinkedIn cofounder Reid Hoffman, a16z Podcast host Steph Smith, economist Tyler Cowen, writer and entrepreneur David Perell, founder and newsletter operator Ben Tossell, and others, and learn how they use AI to think, create, and relate.

Thanks to Rhea Purohit for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex. Deliver yourself from email with Cora.

Get paid for sharing Every with your friends. Join our referral program.
