Transcript: ‘How to Prepare for AGI According to Reid Hoffman’

‘AI & I’ with the LinkedIn cofounder and early OpenAI board member


The transcript of AI & I with Reid Hoffman is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

Timestamps

  1. Introduction: 00:01:29
  2. Patterns in how we’ve historically adopted technology: 00:02:50
  3. Why humans have typically been fearful of new technologies: 00:07:02
  4. How Reid developed his own sense of agency: 00:13:25
  5. The way Reid thinks about making investment decisions: 00:20:08
  6. AI as a “techno-humanist” compass: 00:29:40
  7. How to prepare yourself for the way AI will change knowledge work: 00:35:30
  8. Why equitable access to AI is important: 00:41:39
  9. Reid’s take on why private commons will be beneficial for society: 00:45:15
  10. How AI is making Silicon Valley’s conception of the “quantified self” a reality: 00:47:23
  11. The shift from symbolic to sub-symbolic AI mirrors how we understand intelligence: 00:52:14
  12. Reid’s new book, Superagency: 01:03:29

Transcript

Dan Shipper (00:01:30)

Reid, welcome to the show.

Reid Hoffman (00:01:31)

It’s great to be here and great to be doing this in person.

Dan Shipper (00:01:35)

Yeah, I love that. It makes it a much different experience to be face-to-face. We’re actually very close to each other.

Reid Hoffman (00:01:40)

Yeah, exactly—and actually even philosophically.

Dan Shipper (00:01:42)

Yeah, definitely. So, I'm excited to have you for a number of reasons, but one of the big ones is you have a new book coming out. By the time this is out, it may be out. It's called Superagency. I read it. I actually really liked it. Professionally, I have to say that, but—

Reid Hoffman (00:01:59)

It happens to be true!

Dan Shipper (00:02:01)

It happens to be true. And the reason I like it is I think you're writing it in response to the— There’s a prevalent fear of AI and maybe AGI socially. And I think you're examining this moment and saying, if we want to understand what to do in this moment and how things might play out, a good way to do that is to look at history and look at other historical technology changes to see how we reacted and whether we were wrong or not. And you go much further than that in the book, but I want to start there because I love that history angle because I think we forget how many of the things that we're familiar with now used to be really, really scary. I wonder if you could go into some of those so we can understand them a little more.

Reid Hoffman (00:02:49)

Well, it goes all the way back to the written word with Socrates, although there are some different scholarly interpretations of the Socratic remarks or the challenges. So why don't we start with the printing press? 

When the printing press was introduced, a lot of the public dialogue was very similar to the dialogue we have around artificial intelligence. This will lead to the collapse of our trust in human cognition. It’ll lead to widespread misinformation. It will lead to the collapse of the solidity of our knowledge and society and what we're doing. And who knows what people are going to do with this technology that could really erode things? 

And on one hand, here, many centuries later, we are obviously deeply indebted to the printing press. We can't have science without the printing press, because you can't get to that spread of information. You can't have widespread literacy and education. You can't have the progress of knowledge in middle-class institutions, universities, etc. All of which— Well, universities existed before the printing press, but getting to the many universities of vigorous strength required the printing press. Now, all of that shows that it's great despite all the fear. Now, that being said, the other thing to track with the printing press is there was nearly a century of religious war. So as human beings, when we get these new technologies, the transition periods can be very challenging. But to some degree, that's an opportunity, not just a cause for fear, because the world is what we make it. So let's make this transition much better than the earlier transitions.

Dan Shipper (00:04:40)

Do you think that— Because one of the overriding points in your book is that in these kinds of transitions, we have some amount of control over them, but we don't have total control. And we can get more into all the nuances of what that means, but I'm curious: Do you think we can prevent— Let's say we went back to the printing press days. Is there something we could have said to Martin Luther, like, hey, this schism thing and the Reformation—it's great to have a personal relationship with God, but maybe cool it a little bit on some of this stuff, because it's going to lead to a lot of war? Or is that sort of inevitable?

Reid Hoffman (00:05:20)

Well, I think it depends on where your society and culture is. At the time, it's unclear that even if Martin Luther had said, let's do this gradually and have a set of discussions, the reigning Catholic Church would have tolerated and allowed it.

Dan Shipper (00:05:40)

I think he did. I think he was trying to go into the system for a long time before he did the door-pounding thing.

Reid Hoffman (00:05:45)

Exactly. So it's unclear. But one of the things we hope, having built great global institutions post-World War II and having learned from the massive amount of tragedy you get in that kind of conflict, is that— And by the way, of course, we have various technologies, not the least of which is printed books, to remind us of our histories and to learn from them, so maybe we can do this one much better. And that's part of the reason why—and you know this from having read Superagency—it's part of the reason for doing the book, for doing podcasts and other things: to say, hey, if we actually steer the right way, we can minimize the transition difficulties. I don't think there'll be zero transition. I think that's a pipe dream and no one should have that expectation. But we can actually get through the transition much more cleanly and much more compassionately and humanely than we have done in earlier transitions and get to the amazing benefits that are always on the other side of these massive new technology leaps.

Dan Shipper (00:07:00)

That makes sense. I think one of the places to go from there is, the book to me is not just a history book. It's also a book about psychology in a certain way, because you have a theory of why we are typically very afraid of new technology.

Reid Hoffman (00:07:20)

So, part of our fundamental concept of our place in the world and our dignity, our meaning, etc., comes down to a notion of agency—and it's agency as individuals, agency as groups, and agency as societies. And so most often—and back to the printing press we referred to earlier—what people experience the new technology as is reducing the agency of key people who are leaders in society: heads of institutions, participants in institutions, the institutions themselves. And so they react because they go, oh, this is going to be a change in agency, therefore a reduction, therefore bad, therefore destructive to society.

And the actual path that happens is, yes, agency changes. So some things that you had agency in before, you no longer have agency in. But when you begin to look at agency, it's not just a set of external factors, it's internal factors. It's how you approach it. And so, for example, let's use a very modern example that people can think about. So is it a loss of agency to be driven in an Uber or a gain in agency to be driven in an Uber? And obviously if you're like, oh my god, my hands aren't on the steering wheel, and who knows what this random human being is doing? Then you're like, oh my god, it's an enormous loss of agency. And yet, of course, hundreds of millions of people are doing it because they realize it's a gain of agency. I can not have a car and get somewhere. I can go, oh, I've had too much to drink, I'm going to get home this way much more safely, etc.

And so it's a question of how we approach it, and how we experience it, and how we choose our own agency. That doesn't mean it's completely separate from external effects, but it's a combination of that internal and external, which is really appropriate to what happens as we first encounter technology. I remember arguing at Davos with people about: Are smartphones these humanity-reducing cybernetic controls of human beings—

Dan Shipper (00:09:40)

Never argue with people at Davos. That’s mistake number one.

Reid Hoffman (00:09:44)

You're making me think about The Princess Bride: “Your first mistake!” 

And then, obviously, part of the reason why we have billions of people with them is because actually, in fact, it's a massive increase in agency. And as a matter of fact, it quickly gets to the thesis of Superagency, which is: What happens when millions of people all get elevated in their agency with a new technology? Then all of a sudden we collectively get superagency, both as individuals and as societies.

Dan Shipper (00:10:20)

Yeah, one of the core parts that I want to pull out is: You talk about agency, but you talk about it as a sense, which I read as an internal, almost aesthetic experience of agency. Most people, when they think about agency, think it's all external. But it's quite clear to me that the aesthetic component is a huge part of it. And I think a lot of the questions about human agency reflect almost a lack of faith in our ability to change and adapt, and also a lack of understanding of how that aesthetic sense arises and how you can bring it to any experience to some degree—the history of literature is full of people in pretty dire circumstances who still have some agency. Which is not to say that external conditions don't matter, but a lot of it is internally driven. I'm curious how you got there. What influenced that? How did you start doing that?

Reid Hoffman (00:11:20)

Well, some of it may be my philosophical background and highlighting the aesthetic is also very interesting. I think maybe when I was writing, I was also thinking of Dan Dennett's intentional stance. It's a stance that we have towards the world, like a mental stance, a worldview. And obviously there's an aesthetic stance too. So I think that's a great highlight. 

And I think part of it is that, using the Uber example and using the smartphone example, I think there are many times where, when you're encountering an external circumstance—including, of course, technologies—if you approach it as, this is taking my agency, then you're essentially throwing yourself under the wheels, right? Whereas if you go, oh, here's how I can use this to transform my agency, to extend and enhance my agency in various ways, then it becomes much better. So for example, think about just driving down the highway. You go, hey, I'm in this car. I have an ability to slow down, speed up, drive, etc. Well, there are a lot of other cars on the road, too. If you say, oh my god, my agency is taken away because these other cars are slowing down or are potential hazards, etc., well, then you're never going to get on the road, right?

Dan Shipper (00:12:45)

And I think that points to: Agency is to some degree a way of looking at the world and where you put your attention. So if you put your attention on, oh my god, all these other cars, of course that sense of agency is going to go down. If you put your attention somewhere else, that sense of agency is going to go up. And again, it's not only your attention. There are external things, but to some degree there's some sense of control. And I'm curious for you, how do you notice your personal sense of agency fluctuating day-to-day and in your life? Have you always felt this connection with a sense of agency? Have there been periods where you haven't? And how has that played out for you?

Reid Hoffman (00:13:25)

Great question. It's interesting because I think one of the things I picked up fairly early in my childhood was that old catechism that's pretty good, which is something like: the strength to change the things I can, the tolerance to live with the things I can't, and the wisdom to know the difference.

Dan Shipper (00:13:45)

Yeah, it’s like the Alcoholics Anonymous prayer thing?

Reid Hoffman (00:13:50)

Yes, yes, exactly. And I think it comes from a Christian Catholic catechism. And I paraphrased it.

Dan Shipper (00:14:00)

Did you grow up Catholic?

Reid Hoffman (00:14:02)

No, no. It wasn't so much that as that I'd adopted it— I came to love that catechism later, because I had gotten to that sense of how you should navigate the world early on.

Dan Shipper (00:14:10)

Wow. How’d you do that?

Reid Hoffman (00:14:12)

I think it's just maybe playing a lot of board games. I mean, it's just kind of the sense of, hey, these things are under your control and you can affect them, and then you can affect the outcome. And these things are not under your control, and overly tearing your hair out about the things that are out of your control is not helpful to you or to anyone else. So take the fact that there's suffering in the world. If you go, oh my god, there's suffering in the world, then of course you're gonna get crushed—there's going to be suffering in the world. You should, of course, always feel for people's suffering. But the fact that there are children dying around the world today, it's really, really sad. We should try to do things about it, but we're not going to stop all of it today. It's an ongoing process. And so there are things out of your control, and then there are things in your control. And of course, that's where your ability to navigate it comes in. And so it's maybe something simple, like who you're friends with when you're in school. You go, oh, do you have friends that you treasure? Then that's really great. And maybe there are other people that you want to be friends with who aren't as interested in you. That's fine too. Navigating that within childhood circumstances is, I think, where I started, and from there, I think that became how I approached each new challenge I encountered.

And part of how I think I got a sense of good strategy in life—strategy for me, strategy for what I did in school, strategy for what I do with companies, strategy for what I do with investing—all came from this: Figure out what the nature of the game is and what the things are that are within your ability to change, and then accept the things that you can't while changing some really interesting things.

Dan Shipper (00:16:17)

It sounds like that came from board games. What board games are we talking about?

Reid Hoffman (00:16:24)

Well, so a whole set. Some of it is less board games than Dungeons and Dragons, so I was doing that, which seems to have had some resurgence, which is cool. But also—what probably most people don't track these days—I did a bunch of these Avalon Hill board games. And then a variety of others. And one of the things about it being multiple games is that you're learning all of them. And actually, one of the things that people frequently ask about: I did play chess and I did play go. I wasn't as attracted to those games, because part of the thing with playing the Avalon Hill board games, or Starfleet Battles, which was another one I did, is that by having some randomness with dice rolls, it more closely approximated the kinds of circumstances we encounter in life. Because life is not like chess. Life is not like go. It is not deterministic that way. There are uncertainty variables that you have to play into, and epistemic uncertainty sometimes that you have to play into, and both go and chess have no epistemic uncertainty. And so adapting your strategies to those was really important. I think those are the initial mindsets.

Dan Shipper (00:17:35)

That makes sense. There's a meme in tech right now about being high-agency, which I think is great. It's good to be high-agency, but I think we tend to think of agency as always good. And generally I think it is good to have agency. But for example, in the example you gave about suffering: If you think of yourself as high-agency and have a high internal locus of control for things that are totally out of your control, it's actually a pretty miserable way to live. And it definitely doesn't make you more effective. And I think what you're saying is there's a certain range of things within which you want to be high-agency and have an internal locus of control. And then there's also a whole set of other experiences in life that are important that are about completely giving up control and recognizing your lack of control, and those are some of the most meaningful experiences that people have—transcendent experiences. I'm curious what that sparks in you.

Reid Hoffman (00:18:40)

Well, I mean, I think the obvious ones are in friendship, romantic relationships, and other things, because part of what you're doing is essentially giving yourself over to not being in complete control of how a relationship plays out. And those are obviously some of the places where we learn and become kind of wiser, more compassionate, more evolved people. That's actually, in fact, I think one of the really central things. It's also, by the way, sometimes people encounter that playing team sports of various sorts. There are team sports themselves. There are also, of course, team sports within companies, in terms of how a company operates. The shared control, the shared agency, becomes, I think, really key. And obviously, some people find that within a religious experience, too.

Dan Shipper (00:19:45)

How does that work for you as an investor? Obviously, because you're investing in companies, you care about the outcome deeply, but also you don't want to be in control in a lot of ways. There are a lot of trade-offs in control there. How does that work for you?

Reid Hoffman (00:20:00)

Well, I think one of the things is that— So, start with something enormously simple, a pragmatic heuristic. One of the questions that I ask myself on an investment is, would I do this investment and walk away and say, call me in five years? Because for, call it, 98 percent of good investments, you will accurately have that sense. Not to say you wouldn't try to help and work with them, but part of it is you think, as an investor, if I'm spending a lot of time on an investment, I might be able to spend four hours a week on it. Because as an investor I have a portfolio of investments, I've got other things I'm doing. And so the person I'm investing in—she or he, the CEO and the founders—they're in this, presumably, 80-plus hours a week, maybe 100-plus hours a week.

And so, if they're not capable of carrying the game themselves, it’s almost certain that you've made a bad investment. Now, sometimes the exception is where you go, oh, what I really need to do is help them get this one deal, or help them hire this one person, or help them with this one strategy element, and then everything else will be fine. Then, that's okay, and I will sometimes do that investment. But, one of the mistakes that investors frequently make is they tell themselves that they're more important than they are in the judgment, and then they'll get too much involved in it. And they'll actually be messing it up because—

Dan Shipper (00:21:40)

Yeah, iatrogenic investing.

Reid Hoffman (00:21:45)

Yes, because truly, if you think your four hours is more important than the person's 80 to 100 hours, you probably invested in the wrong person.

Dan Shipper (00:21:59)

Yeah. Or you have a massive ego problem.

Reid Hoffman (00:22:01)

Yes or both.

Dan Shipper (00:22:05)

Yeah, and one of the things that came to me as I was reading this book is you’re tracking agency as this psychological experience—an intentional stance, an aesthetic stance. One of the things that seems to affect our sense of agency is how we approach uncertainty. And the more we kind of grind against uncertainty to try to eliminate it, the lower our agency is because you can't eliminate it. And the more you work with uncertainty and have that sort of stance of surfing through it, the better things go, and that's the appropriate stance for dealing with new technologies. 

And the thing that, of course, for me, the thing I'm thinking about as well is: I think our default stance toward uncertainty in the West is to try to eliminate it. And you can read that in the history of philosophy, from Socrates and Plato to Descartes, trying to make knowledge that is clear and explicit—episteme. And it's the same thing in science, right? You want to boil things down into fundamental laws, so you can predict things in advance, laws that are explicit. Religion, I think, is also one of the ways we deal with it: Maybe the world of the senses is totally different, but there is a world to come that is totally certain and definitely going to happen, and it'll be amazing. And not to denigrate any of those things, because I think those outlooks are actually quite useful in certain circumstances, but in particular when it comes to technology, that way of thinking about things can lead to trouble. I'm curious if you have the same reading or how you would respond to that.

Reid Hoffman (00:23:50)

Well, I think it's definitely the case that we— And I think there's the generalization, which is a very broad brush, almost Jackson Pollock—quick, quick, quick, quick description of the stuff. But I do think that we as human beings try to delude ourselves into over-degrees of certainty. We don't, for example, realize that every time we drive somewhere on the highway, we are taking a certain amount of risk. We're taking a risk on our own competence, we're taking a risk on our vehicle, we're taking a risk on weather conditions, we're taking a risk on other folks. And of course, once again, just like agency, if you dwell on all that, maybe you'd never get into a car, but then you would never go anywhere and never be able to benefit from significant portions of modern society. And so part of it is to think about how we live with heuristics for managing uncertainty, and managing various forms of uncertainty. There's a form of uncertainty about, well, how does the world actually play out? There's uncertainty about our epistemology and how we know the world.

And then there's uncertainty about how the system comes together. It's one of the things that's really central to entrepreneurship, because any entrepreneur who persuades themselves that they're guaranteed success in what they're doing is massively deluding themselves—which maybe is sometimes necessary in order to work really hard at something—but encountering and navigating a world of risk is actually, in fact, very important. And actually, I think a through line of philosophy—Western philosophy, actually, we should probably isolate this too—is frequently over-trying to get to "human nature is X." Frequently they'll say, human nature is selfishness. And you're like, well, there are a lot of different ways in which human beings are self-oriented, but there's a diversity of them. And sometimes you say, well, when this person's really committed to their family or really committed to their society, well, that's just the way they're expressing their selfishness. Well, but that's not exactly what you mean by the word selfish. So there's a whole bunch of different self-orientations and that spread is actually really important. And the same thing is true for understanding uncertainty well, which of course is part of the reason why we're still struggling to reconcile Einstein's theories of relativity and quantum mechanics. And Einstein himself said quantum mechanics can't be right because God does not play dice with the universe.

Dan Shipper (00:26:45)

Yeah, I mean, I think that's really another example of how aesthetic sensibilities guide what we see and guide our decision-making, even in supposedly rational, truth-centered endeavors. And that's why I'm sort of keen on the aesthetic sense: because I think when we talk about uncertainty, there's a way in which you can try to eliminate it, or you can try to manage it, which is still, sort of, we'd rather we weren't uncertain. And then there's a way of playing with uncertainty—you use the phrase "bloomers"—sort of using it to flourish, which I think in a lot of ways is a way to look at life. And I think that perspective, going back to the aesthetic sense, is often embodied in entrepreneurs and artists, who are often guided by an aesthetic sense. And I think those two outlooks—where you're like, yeah, I want to face the blank page, or I want to change how people see the world or how the world works even though I know that's highly uncertain—seem to me somewhat core to working with technology well.

Reid Hoffman (00:27:55)

I completely agree. And actually, one of the things that's interesting about the arc that you're highlighting in your, call it, philosophical-aesthetic reading of Superagency is the actual ties between the first book, The Start-up of You, and this one, because The Start-up of You, in part, was trying to give the advice you'd give entrepreneurs, but to individuals for their lives. And part of it was— Multiple chapters are about navigating environments of uncertainty. Because that's what entrepreneurs must do. And must do it intelligently.

Dan Shipper (00:28:30)

And preserve your sense of agency while doing it.

Reid Hoffman (00:28:35)

Yes, exactly. And so it's interesting—I hadn't quite realized, between book one and book six, that there was kind of an arc from it. And I do think surfing is a very good metaphor. It's about how you embrace uncertainty in a way where you're taking agency, and you're leveraging it to have a better life, to work better, to make decisions that are better for you, to create better on the artist side—and, well, many things, but artists especially—and then turning that into a feature, not a bug.

Dan Shipper (00:29:15)

Yeah, totally. I think the wave metaphor is really interesting to me because you can try to stand in front of a wave and stop it, or you can surf it. And in surfing, you're working within a dynamic context. You're not totally in control, but you are in control within that sphere, which is sort of what you're going for—which, I think, gets to your point about the way our stance toward uncertainty is what guides whether or not we feel agency. And you have a couple of specific ways of thinking about that I think are really interesting. So, one of the dichotomies you bring up is: Some people try to make a blueprint or a plan or a specific theory about the future, but you use the metaphor of a compass—or you say cognitive GPS—or another term, which I don't think is in the book but I think is really related, is wayfinding. So you have a sense of the direction of where you're going, but you're pretty open to the details. I'd love for you to talk about that.

Reid Hoffman (00:30:25)

So, exactly right. And part of the thing with almost everything is, if you're making too concrete a plan—earlier we were talking a little bit about Sun Tzu, and no battle plan survives contact with the enemy—plans that are rigid break fast and uselessly. And so you can make flexible plans, but one of the easiest ways to make a flexible plan is to have a compass. And the compass might be, for example, well, I need to hike from here to here, and there may be multiple trails, and sometimes one of them was washed out, and everything else. Well, you can get from there to there—and maybe there are no trails—if you have compass and map information as a way of doing it, and you're adjusting your plan as you go and as you discover what the terrain and the conditions look like. And it's a good metaphor for a wide variety of things, because as you think about it, we're making decisions about, like, what jobs we might take, what careers we might pursue, where we might go on a holiday, what we might be doing Friday night—all of those are informational-space questions. And again, a compass is a good way to do that. And so when you think about what an AI chatbot is helping you with, it's almost like a compass helping you with these informational decisions—hence informational GPS. It's resilient because if you suddenly discover something really changes—the world isn't as you expected, the world changes—you change. You're like, well, I thought I was in the mood for watching a movie, but I'd actually rather just have a quiet conversation with my friend. Well, oh, hey, there's a cafe down the street that we discovered from the compass—let's go. And that's part of what, in writing the book, we described as the techno-humanist compass. And the reason is, too often when people think about technology, they think about the thing that is new and therefore not really fully adjusted to us. So they go, oh, AI, that's technology. Whereas our cars aren't technology, our phones aren't technology, our laptops aren't technology, our glasses aren't technology.

Dan Shipper (00:32:45)

Our books aren’t technology.

Reid Hoffman (00:32:50)

They’re all technology. And what happens is we get familiar enough with them that we naturally include them in our existence, including them in our agency.

Dan Shipper (00:33:00)

They build up a patina of culture in our psychology.

Reid Hoffman (00:33:05)

Exactly. And that's what we want to be doing with this in order to embrace it. And that's why we give metaphors, and why the compass is techno- but also humanist.

Dan Shipper (00:33:15)

One of the things that the compass makes me think of is, a counterargument would be: Isn't that a little wishy-washy? One of the big memes or tactics is you've got to go back to first principles, so you're not going to just change your mind based on the latest thing, wherever the wind is blowing. So how do you square those two?

Reid Hoffman (00:33:40)

Well, to some degree, I think a techno-humanist compass can be amongst the first principles, because the question on principles is, what is the set of navigational truths that you're using to make decisions, to incorporate information, and to change belief states, action states, etc.—using a compass as a metaphor vs., for example, saying, well, we can only be on one route. I think the first-principles thing is a very good thing that I myself, and I think you as well, embrace from a viewpoint of clarity in thinking about things. But, as a metaphor, when the earliest GPS navigation systems for cars played out, I had an entertaining observation about the difference between the German and the Japanese systems. I drove a German one, which, when I left the path, was like, return to the path, return to the path, return to the path. And the Japanese one was like, oh, you want to go a different way. Here's your new set of directions for what you're doing. And I think the question is, if you say first principles means only being Germanic, then you're going to have problems.

Dan Shipper (00:35:00)

Yeah. It’s so funny because I was telling you before we started recording that I was reading War and Peace over Christmas, and that's literally exactly how Tolstoy talks about how German generals do strategy. So I’m glad it made it all the way to GPS.

Reid Hoffman (00:35:15)

All the way to the modern GPS systems! Although I think they've now since learned some from the Japanese ones, and they're both like, oh, when you didn't turn there, maybe you had a reason for not turning there. We're going to recalculate a new path.

Dan Shipper (00:35:30)

I want to make this really concrete. We can talk about the compass all we want. What does this mean for a person who's like, okay, maybe I work in tech, maybe I'm an engineer, maybe I'm a product manager, designer, whatever, and I'm seeing all these changes happening. How does this change how I think about and orient to AI, and think about, okay, over the next couple of years—o3 is coming out soon, Sam Altman is saying, hey, we've pretty much got AGI, Google just released something today called Titans that's really cool and seems like a really interesting step in that direction. So arm me with some more practical, concrete ways to instantiate this compass in how I approach my life.

Reid Hoffman (00:36:10)

So another concept in the book is iterative development, which is part of actually how we get broad-based networks of inclusion. What we also refer to as consent is governed by customers, by theorists and commentators, by government, by the press—the whole way of feeding back into the system. And it starts with individuals experimenting with it, engaging. And I think one of the reasons why, when you think that we're in the cognitive industrial revolution, you think, oh, there's a new set of tools—for example, I can't be a professional and say, I don't use computers, I don't use smartphones. Every professional job requires that kind of information connectivity, processing, analysis, consumption, generation, etc. AI is just the amplification of that. So every single profession is going to require: How am I deploying AI? How am I engaging with multiple agents in order to do my work? And you say, well, how is that exactly? Do I read a book right now? It's like, no, no, no. The best way is just to start engaging with it in some seriousness. So it's fine, for example, to go to ChatGPT and say, give me a sonnet for my friend's birthday or my kid's birthday, whatever. That's great. What ingredients do I have that I can make into a meal for my lunch? Great. Do that. But also use it for things that you are serious and earnest about. And you may find that for some of them it's not ready yet. But you'll find that for some of them it is. I'll give a personal example. So when I first started, I sat down with ChatGPT and I asked it, how would Reid Hoffman make money investing in artificial intelligence? And it gave me the business school professor's analysis that was completely wrong for venture capital. It sounded smart, like you identify large total addressable markets, you understand which products and services would actually—

Dan Shipper (00:38:20)

What a relief.

Reid Hoffman (00:38:22)

Yes, and so I was like, nope. I'm still good. I still have a very unique skill set for doing this relative to these things. But on the other hand, if I had then said, oh, it's useless for investing, then I wouldn't have noted the things that it's good for. For example, in the same session I sat down and I put in a business plan and I said, how should I do due diligence on this business plan? And the list of things that it came back with was not necessarily— None of them were a surprise. But items like four and seven, I would have gone, oh yeah, I would have thought of that in a few days. And it's useful to have it now for figuring out what to do. And so engaging with it is really important, because we will see a lot of jobs transform. And the way they'll transform: Some jobs—customer service, other places where you're essentially trying to get human beings to act like robots—will have much higher replacement coefficients. But for a lot of jobs, the human will be replaced by humans with AI. And we want as many of the humans as possible to be the humans with AI. So, be learning it and adjusting with it.

Dan Shipper (00:39:28)

So then let's go deeper into that. What is an agency-preserving way to approach the question, which is, what happens if my job goes away because of AI?

Reid Hoffman (00:39:40)

So, first, I don't think— Well, I'll do the job-goes-away case too, but I think a lot of jobs won't go away. I think they'll transform. And so the question is, are you adapting and transforming with them at sufficient speed and in advance, which is one of the reasons to start playing with AI now? And AI, by the way, can help you with that, because AI could go, oh, well, here is the new way that we can be accelerating how we understand what a good marketing message would be, how we would test it, how we would generate options for it, how you would think about where that goes—and AI already today can help with a bunch of different marketing tasks. And it doesn't mean it will do it decisively, that we just plug it into the machine and press play and, look, that's our marketing—because I've used GPT-4 and o1 for this and it's really helpful, but it's a copilot for these things. Now, in a replacement circumstance—say, for example, customer service—you get companies like Sierra and others who are designing how you have the AI agent be the first customer service agent, and that may reduce a lot of different customer service jobs, and you say, oh, my customer service job went away. Well, AI can still be helpful to go, okay, here's what my skill set is. Here's what I think I'm capable of learning. What job should I now go look for if there are now far fewer customer service jobs? And you might go, well, there are these things in account management, there are these things in sales, there are these things in support desk, there are these things in— And that kind of thing may actually, in fact, go, okay, and then how do I learn those jobs? How do I get those jobs? How do I do those jobs? AI can actually help with all of that.

Dan Shipper (00:41:35)

And that gets to another point that you make in the book, which is that one of the really important agency-preserving approaches to new technology is the idea of equitable access. Can you talk about that?

Reid Hoffman (00:41:50)

So, one of the things that's really key, I think, for answering this question about, look, how do we bring as many people along in society? How do we make sure it's not just beneficial for rich individuals, rich companies, but as broadly as possible? It is to have it in as many people's hands as possible. And when you do that, not only does it obviously create a better chance of fair and just participation in the new jobs, career paths, etc., but it also makes society a whole lot better, because when you get talent from as broad a range as possible being unlocked to do work, to be creative, to create maximum benefit, all of the rest of us in society benefit from that too. And so, equitable access is not just a question of, well, is it fair for everybody—which is important—but also that it's better for all of us.

Dan Shipper (00:42:50)

Another extension of that you talk about is this idea— I'd say the book is generally anti-regulation. Competition will regulate itself—that kind of stuff. But one of the places where you seem a little bit more open to the idea of regulation in some form is the idea of private commons—the way that we treat our data in a world with AI maybe should shift in a certain way. So the personal data that I've accumulated, let's say on Facebook and Google and all that kind of stuff, becomes tremendously more valuable to me as an individual when I have GPT-4 to go through it and be like, here are some patterns I found. And that changes how we might want to think about the private commons and then, therefore, the regulatory landscape or whatever—for example, maybe everyone should be able to download their data, stuff like that.

Reid Hoffman (00:43:50)

So first, just the general thing is I tend to be, call it, more regulatory-cautious than anti-regulation, right? Because it is generally true that when you get into, oh, well, I should act as a regulator, you've got an over-degree of certainty that you can issue "you shall do this and not this," and you actually enshrine the past against the future—and there are all kinds of benefits to getting to the future. That's why regulatory cautiousness. So I tend to say, look, when you start having the impulse that maybe there should be regulation, you should start with, well, how do we measure the things that we're worried about as harms? Because if we can get those measurements, then we can track it and see, do we really have those harms? Are they really increasing? And then maybe, what kinds of innovations might bring down those harms relative to our benefits, and then allow that dynamic progression of innovation and invention and risk-taking, all of which is very important. And a lot of regulators go, well, if I let you take risks and something bad happens, then it's on me. And it's like, well, but life is a risk, and by the way, by taking some risk, we can figure out things like cures for diseases. We need to be figuring out cures for diseases. So anyway, that's the first general stance on regulation.

Next is private commons, which is part of what we're actually seeing in the build-up of these big tech companies—counter to a bunch of surveillance-capitalism and other kinds of critiques—

Dan Shipper (00:45:25)

I like that you include that in the book by the way.

Reid Hoffman (00:45:30)

Yeah, exactly. Because it's like, look, it's almost a framing. If you frame it as, oh, it's surveillance capitalism vs. no, no, actually, it's building up a private commons that I benefit from and that enables me, right? Because you go, well, Google Maps allows other people to figure out where your house is. And by the way, Google Maps allows my friends to figure out where my house is and come visit. So that kind of framing I think is very important. And then these data commons are a kind of micro-instance of how superagency is created, not only because I have a private commons and you have a private commons or private data sources, but because as it learns from these together, you get automatic tagging on your pictures and you can go, oh, this picture I took of Dan, I can share it with him, etc. And all of this adds a lot of important human connectivity and richness to our lives, the ability to make decisions or find things in good ways. And now, that being said, we want to enable that private commons as much as possible. So you go, well, look, in general, most of the incentives for these corporations are very pro-social, pro being good for society. Because it's like, oh, I'm building all these features that engage you, cause you to take pictures, cause you to store them, etc. And it's great. But on the other hand, some of my incentives are, you do them only with me, and you don't do them with anyone else. It's like, well, actually, maybe occasionally, in a very specific way, we intervene and say, no, part of my rights with my data is, when you're storing a whole bunch of my data, I have the right to move that as I like, and you must facilitate that.

Dan Shipper (00:47:20)

I'm about to go off-roading, so you're making me think of something, which is a more specific example of how this might play out that you might be interested in. And I'm curious about your take on it. I don't know if I've told you this, but I have OCD. And with the treatments for OCD, you basically manage it. It's fine, but it's still not great. You don't cure it. And one of the things I've been doing recently is— Have you used Windsurf? Windsurf is like a Cursor competitor, and it has an agent thing, and you can say, build this thing, and it'll build it. And it's the coolest thing ever. And so it just makes weekends a lot more fun. I have an idea. I have this thing that I've been doing, where I just built a little app in Windsurf. And every day I can take a Y-BOCS test, which is the OCD assessment, and then I upload a screenshot of my Whoop data. And then I also take a little video of myself and I do a daily log where I don't talk about how I'm feeling, I just talk about my day. And I upload it, and there are some APIs where it embeds my face and my vocal tone. And what I'm trying to do is see if I can predict my OCD symptoms from my facial, vocal, and Whoop data. And that's something where Whoop, for example—they have an API, but it doesn't really give you all my data, and I need that in order to make this work. And this is the kind of thing where there are thousands of types of diseases or conditions or whatever where, if you enable people to do this, you could make so much more progress on solving them. I don't know what I'm going to find when I figure this out, and I think what I'm likely to find—which gets into my whole soapbox about science, which we've talked about before—is that there's not one variable where it's like, well, this makes your OCD worse. Actually, there are thousands of different things that interact together, and different combinations that I should pay attention to. And yeah, I think that kind of thing is so important to the future. And it makes me feel like— There was this whole quantified-self movement in Silicon Valley for a long time. I feel like the quantified self is finally going to happen. It's like a thing now. What do you think?
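
For readers who want to make this concrete, here is a rough sketch of the kind of daily-log analysis Dan describes—purely illustrative, not drawn from his actual app. The file name, column names, and signals are assumptions; a real Whoop export and a real Y-BOCS log would look different.

```python
# Hypothetical sketch: correlate daily Y-BOCS scores with daily wearable and
# video-derived signals. File and column names are assumed for illustration.
import pandas as pd

# Each row is one day: a Y-BOCS total plus whatever signals were captured.
df = pd.read_csv("daily_log.csv", parse_dates=["date"])
# e.g. columns: date, ybocs_total, sleep_hours, hrv_ms, resting_hr,
#               voice_pitch_var, facial_tension_score

features = [c for c in df.columns if c not in ("date", "ybocs_total")]

# Simple first pass: which signals (today's, or yesterday's) move with symptoms?
for col in features:
    same_day = df["ybocs_total"].corr(df[col])
    lagged = df["ybocs_total"].corr(df[col].shift(1))  # yesterday's signal
    print(f"{col:>22}  same-day r={same_day:+.2f}   lag-1 r={lagged:+.2f}")
```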

Reid Hoffman (00:49:45)

So, I do think that it's precisely this kind of data that, if you have the ability to shift it around as you need and want as an individual, can create great things. So I think Whoop is one version of the quantified self, glucose monitoring is another one, the things that you do with the various watches are more. And I think that we are going to have all that. And actually, by the way, having that stream of data that you can then index to other data that you might have—for example, what did you eat for lunch? Are you traveling? Where did you go? Planes, trains, automobiles, etc.—and being able to query that together really, really matters. And so I think we are, at this point, decisively moving more and more toward the quantified self. But part of it is enabling what I can do with it and therefore how it can really benefit me. And by the way, for example, one of the things that one of my doctors noted is that if he has access to the Whoop data and sees what's happening, he can look at someone's Whoop performance over the last—I forget how many days, some number of days—and he can tell whether or not you're about to get Covid. And so that's the kind of thing that's actually really useful.

Dan Shipper (00:51:30)

It's so important. Maybe there's one thing he looks at, but probably he just sort of squints at it and gets an intuitive sense of what it is, which is a thing that AI is really good at, exactly because predicting or classifying a sequence of, let's say, Whoop data, or predicting a sequence of OCD data—here's what I did today, here's my Y-BOCS score—I think that problem is incredibly similar to predicting the next token in language. And we're just getting to the place where we're starting to apply that in other places. So I think there's this really interesting shift that happened in AI that you can start to see first in philosophy, where we're going from a search for universal truths and universal definitions—starting with Kant, really, you're starting to say, okay, there's a limit to our ability to grasp things. And then there are all these different flowerings of that. So there are the transcendentalists and the American pragmatists, there's the later Wittgenstein. There's a whole continuum of philosophy and postmodernism or whatever, which, regardless of what you think of it, is all in that vein. And I think you can see that same shift in the history of AI, where we started in a very Socratic-Platonic alignment, trying to define the underlying theory of intelligence, defining it as a set of symbols and their relations, where each symbol has some sort of semantic meaning. What we found is that it doesn't really work. It gets very brittle. It works in certain circumstances, but it's really brittle and it suffers from computational explosion. You have to have a frame of reference to start with before you can start manipulating symbols, otherwise it's too expensive. And then we shifted to subsymbolic AI, where we're basically fuzzy-matching patterns and bringing to bear many, many thousands of different rules based on the context, partially fitting them based on the context. And I think it's possible that shift will also occur in other areas of the world where we're doing the equivalent of symbolic AI because there's nothing better, and now we can use data and AI and that kind of stuff to make progress.

Reid Hoffman (00:54:10)

Well, I think you know—and we may have talked about it in the last podcast—that my undergraduate major was symbolic systems. And part of why I was doing that was because the theory was that we are symbol processors, that we reason and think in symbols, we consume them through languages and books, etc., all of which is very interesting, and there are a bunch of different parallels between different symbolic systems. But I was also within that very early movement called connectionism, which was this notion of, well, symbols are important, but if you only had a symbolic theory, you're probably going to radically underperform in your modeling of what our intelligence is, your ability to construct tools or intelligences, etc. And so that's subsymbolic AI—and, for example, what are the things that lead to us getting a mastery of concepts? Or, since we mentioned Wittgenstein, following a rule in language is, I think, really key. Now, if I project out the future for the next 10 years, I think a major part of this is going to be this kind of play between probabilistic models and symbols. And with the current LLM transformers, we have one, but I think we're going to have more, as it were, technologies, mathematical descriptions, skill sets about how probabilistic models and symbols can come together. And I think that's among the things when you say, what's currently not visible within the next generation of AI? I think there's going to be some stuff in there.

Dan Shipper (00:56:00)

Can you give me an example of what that might look like?

Reid Hoffman (00:56:05)

Well, a simple one that's out of the past—and I'm not saying this is the right one—but it's like using symbols, but with Bayesian probabilities applied. So you say, oh, well, I have the belief that we are having philosophical insight in this podcast, but it's at probably 90 percent vs. probably 100 percent. That kind of thing. Well, I'm just giving some range for humility and all the rest. But you might also say, I have the belief that the weather tomorrow will be good for taking a hike. And I believe it at 80 percent.
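
To make Reid's example concrete, here is a small illustrative sketch of a symbolic belief carrying a Bayesian degree of belief and being updated by evidence. The likelihood numbers are invented for illustration; this isn't drawn from any particular system he mentions.

```python
# Illustrative sketch: a symbol ("good hiking weather tomorrow") paired with a
# degree of belief, updated by Bayes' rule. The likelihoods are made-up numbers.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior P(belief | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Symbolic belief held at 80 percent, as in Reid's example.
belief = {"statement": "good hiking weather tomorrow", "probability": 0.80}

# Evidence arrives: the forecast shows rain. Assume forecasts show rain 10% of
# the time when the weather turns out good, and 70% of the time when it doesn't.
belief["probability"] = bayes_update(belief["probability"], 0.10, 0.70)

print(f"{belief['statement']}: now believed at {belief['probability']:.0%}")
# -> roughly 36%. The symbol stays the same; only the degree of belief moves.
```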

Dan Shipper (00:56:50)

Is the o1 paradigm an example of that—training it on math problems where it's based on discrete steps that can be verified?

Reid Hoffman (00:57:10)

I don't think it's unusual in that way relative to a lot of the transformers. But I do think the question is, you do these chains of thought, you have a fitness function on the chain of thought, and you're making a prediction not just on the next token, but: Out of multiple chains of thought, which of these ones are the right ones? And that fitness function itself might have that probabilistic characteristic.
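
A stand-in sketch of the general shape Reid describes—sample several candidate chains of thought, score each with a fitness function, keep the best. The functions below are placeholders for illustration, not any real model's API or OpenAI's actual training setup.

```python
# Placeholder sketch of best-of-n selection over chains of thought.
# `generate_chain` and `fitness` stand in for a real model and a real verifier.
import random

def generate_chain(problem: str) -> list[str]:
    # Placeholder: in a real system this would be sampled from a model.
    steps = random.randint(2, 4)
    return [f"step {i + 1} toward solving: {problem}" for i in range(steps)]

def fitness(chain: list[str]) -> float:
    # Placeholder score; a real fitness function might verify each step
    # (e.g., check the arithmetic) and return how likely the chain is sound.
    return random.random()

def best_of_n(problem: str, n: int = 8) -> list[str]:
    candidates = [generate_chain(problem) for _ in range(n)]
    return max(candidates, key=fitness)

print(best_of_n("27 * 14"))
```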

Dan Shipper (00:57:30)

Because I don’t want to— I like to cast aspersions on symbols.

Reid Hoffman (00:57:35)

With symbols, I might add.

Dan Shipper (00:57:40)

I love symbols. But that's mostly because—and it's a little bit rhetorical, because I think we're so symbol-heavy in how we think about things. At least in my model right now, symbols are important, but they're important when they arise from a subsymbolic architecture. And so all that means is that when you're doing something like, for example, trying to pick apart your thought process, there is a limit to that picking apart, where you're getting down to the symbolic level for yourself, but underneath that, there are thousands and thousands of subsymbolic things that you're not aware of. And realizing how the symbolic level arises out of the subsymbolic level is a really important way to approach the world, especially if you're someone who has a tendency to philosophize, because it's the sort of Wittgensteinian therapeutic approach to philosophy, which is: You need to find a philosophical stance that allows you to give up philosophizing.

Reid Hoffman (00:58:50)

Yes, although I think maybe even he realized that it wasn't—that it never ended—and you gave up, because obviously early Wittgenstein was, I solved it. I'm done. I'm gonna go teach. Which is the reason why he died as a philosophy professor.

Dan Shipper (00:59:15)

Every time I thought I was out, they pulled me back in—that's Wittgenstein.

Reid Hoffman (00:59:20)

Wittgenstein is the godfather.

Dan Shipper (00:59:25)

Let's see what else I have to talk about with you. We've gone a little bit into science vs. engineering, and I do think, when we talk about dynamic tensions, you can probably think of everything post-Newton as the era of science. And I think there's a way of looking at the era of AI that is starting to work as turning science problems into engineering problems, and that it may usher in more of an era of engineering, or lean us more in that direction—which has a bunch of other Jackson Pollock-like knobs you can twist on other dynamic tensions in this sort of engineering direction. Which is like, we're a little bit more pragmatic. We're a little bit less concerned with, maybe, fully explicit causal explanations for things and maybe more okay with lots and lots of little correlations—stuff like that.

Reid Hoffman (01:00:15)

By the way, I obviously love Jackson Pollock. And I would also add, since you've done that already, another metaphor, or framework: thinking about how we're actually moving from thinking about these thoughts less as deduction and more as induction and abduction.

Dan Shipper (01:00:40)

What is abduction? I never remember.

Reid Hoffman (01:00:50)

It's the best theory. It's a kind of theory that models the evidence.

Dan Shipper (01:00:55)

Okay, interesting. So it requires some sort of creative process to come up with the theory, rather than coming up with it from the data itself.

Reid Hoffman (01:01:00)

Yes. It's a little bit like— If you contrast induction and abduction, induction is, I look at all the data points and I model the curve. And abduction is based on some other information about the world, or priors: I go, here's a model, which might actually be different than what you come up with from induction. And it's one of the reasons why I actually think, frankly, of all three—part of being good thinking beings is that we apply all of them.

Dan Shipper (01:01:25)

Yeah, the thing that I love about AI is you don't have to decide which way to approach problems. It has many different approaches, and it just figures out the one that's best, or uses many different ones at the same time, which I think is sort of the failure point of a lot of philosophizing. So, coming up with a moral theory, right? We've never found one that has no holes in it, and when we do think that we've found it, you end up with all these weird things. So, effective altruism is a great idea, but then you take it too far, and then you become SBF. But then it feels really wishy-washy to say, well, I applied the moral rule that made the most sense in this circumstance—but that's how language models work for language or for solving problems. And I think having some sort of diverse, partially explicit, but also mostly implicit set of moral theories, or whatever you want to call it, is probably for the best. It's probably the best way to be in the world, which is fundamentally, sort of philosophically unsatisfying from the perspective of trying to make everything explicit, but it's also kind of beautiful and goes back to the stance toward uncertainty—the sort of artistic, entrepreneurial stance toward uncertainty.

Reid Hoffman (01:02:50)

Yeah. Look, I think it's critical that we're deriving theories of the world, theories of morality, theories of ourselves, theories of what we should be doing with our lives and work. But it's also critical to think about those theories as dynamic and being updated and part of the updating is not just kind of, oh, look, I got some new data. That's important, too. But also the way we think about it, the way we learn from each other. And all of that leads to kind of a best judgment circumstance. And by the way, that's how science progresses, too.

Dan Shipper (01:03:25)

So people, they've listened to this discussion. They're familiar with some of the ideas of the book and they're probably thinking about, should I go get it? And I'm curious, are there any other takeaways that you think are really important for people to know about what your stance is on this topic?

Reid Hoffman (01:03:50)

Well, so, in a sense, I wrote the book for two audiences. One audience is anyone who has any AI curiosity or skepticism, because the thought is, here's a set of lenses to think about the reason why this is very humanist and humanity-positive, and why it is important to have a kind of theory of agency that is: This is what we can accomplish, this is what would be really good, and here's how to work toward it. And so whether you're a skeptic or whether you're curious, all of that I think plays into it. Now, I also wrote it for technologists, because I wanted the technologists who are inventing this to be thinking about human agency as almost a design principle. To think about individual agency, collective superagency, as ways of, ah, if that's one of the fitness functions and goods of what we're trying to do, that may make certain design decisions, deployment decisions, much more effective and much more humanist.

Dan Shipper (01:04:55)

How does that work? Because a lot of the way we've approached agency so far is in an internal, aesthetic sense rather than as something you can build into a product. So how would that change a technologist's fitness function for what they're building?

Reid Hoffman (01:05:10)

Well, that was a little bit of the reason why we highlighted ChatGPT's iterative deployment as an example, because GPT-3.5 had existed for like a year before ChatGPT. And yet you launch ChatGPT and all of a sudden people can access it. They could do stuff with it. They could make it happen. And so making that kind of affordance available to people—so they can engage with it, so they're interested in engaging with it, and so it's easy to do an iterative pattern of engaging—is one of the things that comes out of that. I also think, if I were able to say, hey everyone, make sure you're doing this as well as this: Obviously when we build a lot of technology, we tend to think the easiest, simplest path is a form of human replacement. So you go, okay, we don't need customer service agents, we'll just do this. What I think is also super important is to think about that copilot, that human amplification—making it more, as per the book Impromptu, amplification intelligence. I think that's also something to be thinking about with a kind of superagency design lens.

Dan Shipper (01:06:35)

Yeah, one of the things that I think about as a lens for trying to figure out places to build AI that increases agency is that it reduces the cost of certain intelligence services that people pay humans for right now—not so that they don't use humans, but so that other people who can't afford them can get the same thing. So a really simple example is, I run a media company. I don't write all of the YouTube headlines and all of the descriptions and all that kind of stuff. I have a ghostwriter that I work with, who is super talented. But if you're just starting out, you can't afford that. But you can use ChatGPT or Claude—or we have an internal incubation we do called Spiral that does this. And so it lowers the cost for people to have a lot of the leverage that I get because I have more money and a whole organization. There are lots and lots of areas like that that go beyond creator stuff and will be really useful for people.

Reid Hoffman (01:07:35)

Part of the thing I think with AI and its amplification of intelligence is that it raises the bar, raises the capability of the folks who have less—like, I don't have access to an elite school, but I can still learn. I don't have access to a ghostwriter, but I can still write some marketing copy. But by the way, it also raises the top end: You have a ghostwriter who's really good, that ghostwriter learns to use ChatGPT, and then can also do that in a much stronger and better way. And I think that's good. I think amplifying across the board is actually a good thing.

Dan Shipper (01:08:10)

Yeah. Totally. Let me think about if there’s anything else for us to discuss. 

Reid Hoffman (01:08:12)

Well, I’m sure there will be.

Dan Shipper (01:08:20)

This is great. Thanks so much for doing this. I had a great time. I can't wait for the book to come out. And I can't wait for another one. I would love to have you on again.

Reid Hoffman (01:08:30)

A delight. I look forward to our next conversation.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex.
