
The transcript of How Do You Use ChatGPT? with Reid Hoffman is below for paying subscribers.
Timestamps
- Introduction: 00:01:58
- Why philosophy will make you a better founder: 00:04:35
- The fundamental problem with “trolley problems”: 00:08:22
- How AI is changing the essentialism v. nominalism debate: 00:14:27
- Why embeddings align with nominalism: 00:29:33
- How LLMs are being trained to reason better: 00:34:26
- How technology changes the way we see ourselves and the world around us: 00:44:52
- Why most psychology literature is wrong: 00:46:24
- Why philosophers didn’t come up with AI: 00:52:46
- How to use ChatGPT to be more philosophically inclined: 00:56:30
Transcript
Dan Shipper (00:01:13)
Reid, welcome to the show.
Reid Hoffman (00:01:15)
It’s great to be here.
Dan Shipper (00:01:16)
It’s great to have you. So I'm sure that everyone listening or watching knows this, but you are a renowned entrepreneur. You're a venture capitalist. You are an author. You're best known as the co-founder of LinkedIn, and you're a partner at Greylock. You were a board member and an early backer of OpenAI. And you also have an incredible podcast, Masters of Scale. But perhaps most relevant to this conversation, you also studied philosophy at Stanford and Oxford, and you almost became a philosophy professor, which I didn't know before researching this interview. It's really cool.
Reid Hoffman (00:01:53)
Yeah, no, part of it was I've always been interested in human thought and language. Started at Stanford with a major called Symbolic Systems. I was the eighth person to declare that as a major at Stanford and then kind of thought, hmm, we don't really know what thought and language fully are, maybe philosophers do. And so, took some classes at Stanford, but also trundled off to Oxford to see if philosophers had a better understanding of it.
Dan Shipper (00:02:24)
I love it. It's funny. I feel like since then, Symbolic Systems has become the go-to Stanford major for curious, analytical people who end up doing startups. So that's pretty funny to know that you're one of the first. So, usually on this show, we talk about actionable ways that people use ChatGPT. And that's the big question. That's, I think, what people come here for. But underneath that, I think a more interesting question is how AI in general, and ChatGPT in particular, might change what it means to be human. How might it change how we see ourselves and how we see the world? How might it enhance our creativity, our intelligence, all that kind of stuff? And these are really deep, big philosophical questions. And as someone who rigorously studied philosophy and probably still thinks about those questions, I thought you might have a unique perspective on this intersection. 'Cause I think people tend to be— They're either in the philosophy camp or they're in the language models camp. And people who are sort of in the middle are kind of an interesting group. And what I wanted to start with, because I think there are probably people listening or watching who are like, I just want Reid's actionable tips, is to ask— Tell me more about why you care about philosophy. And I think you got into that a little bit in talking about how you got into it, but tell us: why do you care about philosophy? Why is answering these big questions important?
Reid Hoffman (00:03:50)
So, one of the things that I sometimes will tell MBA schools when I give talks is that a background in philosophy is more important for entrepreneurship than an MBA, which of course is startling and contrarian. And part of that is to get people to think crisply about this stuff. 'Cause part of what you're doing as an entrepreneur is you're thinking about what is the way the world could be, what could it possibly be? What, if you wanted to use analytic philosophical language, is logically possible, or something like that, but it's kind of what is possible. And then, partially because these are human activities, what are your underlying theories of human nature: how human beings are now, how they are quasi-eternally, and how they are as circumstances change, as the environment, the ecosystems we live in, change, which is technology and political power and institutions and a bunch of other things. And philosophy is very important to this stuff because it's understanding how to think very crisply about what the possibilities are, what the theories of human nature are as they are manifest today and as they may be modified by new products and services, new technologies, et cetera. And so obviously people tend to say, oh, that's a philosophical question, because it's an unanswerable question. The nature of truth. Or: while we all speak and understand languages, we don't really know how that works. And that's part of the reason why there was the linguistic turn in philosophy that Wittgenstein and others were so known for, which is, well, maybe these problems in philosophy are problems in language. And if we understand language, we'll understand philosophy. And there's this question around these unanswerable questions, but actually, in fact, science itself is full of a lot of unanswerable questions. And it's the working theory as we dynamically improve, and that's part of what the human condition is. And that's part of what in-depth philosophy actually is. It isn't to say the questions haven't persisted: some of the same questions today in philosophy are the same questions that Plato and Aristotle, and even the pre-Socratics and other folks, were grappling with: truth, knowledge, et cetera. But some of the questions are also new questions, and the questions evolve, and part of how science evolved from philosophy was getting to more specific theories and developing the new questions that grow out of them. And the same thing is true in building technology, in building products and services, in entrepreneurship. And that's why philosophy is actually, in fact, robust and important as applied to serious questions, versus the— One of the things I wrote my thesis on at Oxford was the uses and abuses of thought experiments. And the most classic one is trolley problems. And there are both uses and abuses within the methodology of trolley problems. The most entertaining treatment, if people haven't watched it: there's a TV series called The Good Place, which embodied the trolley problem in a TV episode in an absolutely hilarious way.
Dan Shipper (00:07:28)
That's really interesting. What is the way that people tend to misuse that? Because I feel like trolley problems are so common in EA discourse and people run into that a lot online.
Reid Hoffman (00:07:37)
The fundamental problem is they try to frame it— To get an intuition, to derive an intuition, a principle, et cetera, they try to frame an artificially different environment. So it's like, no, no, it's a trolley, and the trolley will either hit the five criminals or the one human baby. And it's default-set to hit the human baby. And do you throw the switch or not? And then, when you start attacking the problem, you say, well, how do I know that I can't break the trolley? I could just make it not continue to run. It's like, well, you know that— Oh, so you're positing in your thought experiment that I have perfect knowledge that breaking the trolley is impossible. So, to make your thought experiment work, you're positing something we never have. Or, when we encounter it, we generally think people are crazy: you have perfect knowledge. Why, in fact, do I know that I have perfect knowledge that I can't break the trolley? And so maybe the right human response to this trolley problem is: I'm going to try to break the trolley, so it doesn't hit either of them.
And you might even say that the problem is to say, well, you have perfect knowledge that you can't break it. You're like, well, okay: a) I don't have perfect knowledge, and b) even if I did, maybe it's still the right response. You're trying to get me to say, do I do nothing and run over the baby, or do I do something and run over the five criminals? Those are my only two options. And you're like, well, no, I could say: even if I think I can't break the trolley, that's what I'm going to try to do, because that's the moral thing to do.
Dan Shipper (00:09:22)
I've heard a lot of trolley problems, and I've never heard anyone posit the third option. I love that. That's great. And also there's something about that where, yeah, certain thought experiments sort of hijack your instincts and you don't quite get to reason through all these hidden assumptions, which honestly reminds me of certain doomer arguments. And I don't want to go into the full thing, but I think it's a really interesting way to think about it. If I had to summarize what you just said, the value to you of philosophy is thinking crisply about possibilities, thinking about human nature and reality. All of those things are really, really, really important for business people. I want to take it another step, which is about some of those questions that philosophers or philosophy students or philosophy nerds sharpen our skills on. There are some of these big perennial questions, like: What is truth? What is reality? What can we know? All that kind of stuff. I'm kind of curious if you have a sense, as we start to get into talking about AI stuff, what are the questions where AI and large language models are going to give us a little bit of a new lens? Or where will we find new questions to ask that are better than previous ones, even if they maybe don't get answered? Do you have a sense for that?
Reid Hoffman (00:10:52)
Well, I mean, historically, for example, philosophical questions have led to a bunch of various science disciplines, right? It's everything from things in the physical world to things in the biological world, like germ theory and all the rest. I think it's actually even true beyond that. It's one of the reasons why philosophy is the root discipline for many other disciplines, when you get to questions around, how do you think about economics and game theory? Or how do you think about political science and realpolitik and kind of the conflict of nations and interests? And it's also one of the reasons why probably one of my deepest critiques of the non-reinvention of the university is the intensity of disciplinarianism. So it's just the discipline of political science, or just the discipline of even philosophy, as opposed to multidisciplinary. And part of the thing that I tend to think is interesting is how much the academic disciplines tend to be more and more disciplinary, versus the, hey, maybe every 25 years we should think about blowing them all up and reconstituting them in various ways. And that would actually be a better way of thinking, and it's why some of the most interesting people are the people who are actually blending across disciplines within academia. And I think that part of it is extremely important. And part of the question in philosophy is the question of, well, how do we evolve the question of what do we know? And obviously you evolve the question through, for example, instrumentation: a lot of the history of science is instrumentation, new measurement devices that help with the provisioning of theories. And that's one of the reasons why people frequently don't think enough about how technology helps us change what the definition of a human is. Because we have this kind of imagination, like the Cartesian imagination, that we are this pure thinking creature, and you're like, oh, if you've learned anything, that's not really the way it works. That doesn't mean that we don't think that way, to have abstractions, to generate logic and theories of the world and all the rest. But put your philosopher on some LSD and you'll get some different outputs.
Dan Shipper (00:13:37)
That makes sense. So I guess along those lines, if I step back and squint, I can kind of divide the history of philosophy into essentialism and nominalism, for a certain part of philosophy, right? And essentialists believe that there's a fundamental objective reality out there that's knowable, and that there's a way to kind of carve nature at its joints. And nominalists, which would include Wittgenstein, who I know you studied pretty deeply, and pragmatists, think that truth is more or less relative, or it's about social convention, or it's about what works; there are a lot of different formulations of it. And there's this sort of ongoing debate between people who think one thing or the other. Do you think language models change, or add any weight to, either side of that debate?
Reid Hoffman (00:14:31)
I think they add perspective and color. I don't think they resolve the debate. And there's certainly some question, since they function more like later Wittgenstein, or more nominalist: you say, well, does that weigh in on the side of the nominalists because of, actually, in fact, the way they function? And then, actually, in fact, you say, well, if you look at how we're trying to develop the large language models, we're actually trying to get them to embody more essentialist characteristics as they do it. How do you ground in truth, have less hallucination, et cetera? And to gesture at a different, earlier German philosopher, Hegel: one of the things I think is kind of the human condition is that thesis, antithesis, synthesis. You could say, hey, we have an essentialist thesis, we have a nominalist antithesis, and the synthesis is how we're putting them together in various ways. And I don't even think later Wittgenstein would have said that the world is only language, which is kind of where the deconstructionists and Derrida went: it's only the veil of language, and you have no contact with the world, so you're not grounded in the world at all. I think he would think that's kind of absurd, right? But his point was to say that there is also, in how we live as forms of life, a way that it operates that is not simple denoting. And he understood it wasn't just denoting the cat on the mat, or the possibilities (the cat is on the mat, the possibility of the cat being on the mat), but actually possible configurations of the universe. And that kind of notion of logical possibility, described as one language of possibility: he came to say that being essentialist about a language of possibility is actually incorrect to how we discover truth and how we operationalize truth. And you still have a robust theory of truth, which is not essentially what the deconstructionists have. But the robust theory of truth is partially grounded in this notion of language games and a biological form of life by which you do that. And then obviously you go into this deeply by saying, well, okay, how is mathematics a language game, as a classic language of truth, as a way of trying to understand that. And that's part of where you get what philosophers refer to as Kripkenstein, Saul Kripke’s excellent lens on reading part of what Wittgenstein was about. And you then apply all that—everyone's going, where's this going?—to large language models. And you say, well, actually, in fact, language is the playing out of this language game, and large language models are playing out this language game in various ways. But part of what is revealed is we don't just go: truth is what is expressed in language. Truth is a dynamic process, and human discourse, whether thesis, antithesis, synthesis, or other things, is coming out of this dialogic process, this truth discovery, this logical reasoning, whether it's induction, abduction, or deduction. And these reasoning processes get us to what we think are these kinds of theories of truth, which are always to some degree works in progress.
Dan Shipper (00:18:17)
That's really fascinating. I want to try to summarize that in case it was a little bit difficult to follow— to be honest, there's a point in there where I think I missed something. So you tell me what I missed. But I think one of the things that I heard there that I thought was really interesting is: when you think about how we built AI, which is predicting the next token, that's a very late-Wittgenstein-compatible idea, or pragmatism-compatible idea, where it's really about the relationship between different words in a sentence, and we're not finding anything out about the world. There were other AI approaches, I don't know, in the eighties or seventies, where it was literally, let's list out every single object in the world. And those didn't really work. And that would be something along the lines of a more essentialist approach to AI. And the one that works is the more pragmatic, more late-Wittgensteinian one. But what's quite interesting is, now that we have that pragmatic base that we've bootstrapped, we're in this process of trying to make it more grounded in reality, or more reduced down to being able to talk about the essential ground truth. And I think what's really interesting about Wittgenstein is he's sort of famous for saying the limits of my language are the limits of my world. I don't remember if that's late or early, but more or less I think what you're saying is that Wittgenstein doesn't think that there's nothing outside of language, but he does think that the way we talk about the world, or the way that we use language, is part of this sort of social discourse where we're all kind of going back and forth to co-invent language and structures and language games together. And you kind of see that happening with language models, where when you do something like RLHF, that's sort of us playing with a language model, playing a language game, to be like, no, no, you don't talk like that. Is that generally what you're getting at?
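To make the next-token-prediction idea concrete, here is a minimal sketch: a word-level bigram model trained by counting. This is purely illustrative; real LLMs learn a neural distribution over subword tokens, and nothing in the sketch comes from the conversation itself.

```python
# A toy version of next-token prediction: estimate P(next word | previous word)
# from raw text by counting bigrams. Real LLMs replace the counting with a
# transformer, but the objective (predict the next token) is the same.
from collections import Counter, defaultdict

corpus = "the cat is on the mat . the cat sat on the mat .".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_distribution(prev: str) -> dict:
    """Return the estimated probability of each word that follows `prev`."""
    total = sum(follows[prev].values())
    return {word: count / total for word, count in follows[prev].items()}

print(next_token_distribution("the"))  # {'cat': 0.5, 'mat': 0.5}
print(next_token_distribution("cat"))  # {'is': 0.5, 'sat': 0.5}
```

Even in this toy, the Wittgensteinian point holds: everything the model "knows" lives in the statistics of use, not in any definition of what a cat or a mat is.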
Reid Hoffman (00:20:33)
Yes. So, everything you said. But then there's the additional thing, which later Wittgenstein was really trying to explore in various ways, because he wasn't trying to do a kind of completely social construction of truth. You have to be a Wittgenstein scholar to actually understand how both early and late Wittgenstein are actually part of the same project. And late Wittgenstein wasn't saying, early Wittgenstein was an idiot and I've religiously converted to this different point of view. But there is a particular thing, which is: how do you get to the notion of understanding truth? And truth is the dynamic of discovery through language, and it has to have some explicit external conditions; it isn't my truth, your truth. There is only, to some degree, our truth, or the truth, in various ways. And how do you get to that? It's what you're doing in having truth conditions. And for kind of early Wittgenstein, the truth condition was that it cashes out into a state of possibilities and actualities in this logical space of possibilities, which includes physical space but is broader than that. And then later Wittgenstein said, well, actually, in fact, this modeling of logical possibility is not the way this works, right? We're not actually, in fact, grounding it that way. The way that we're grounding it is in the notion of how we play language games, make moves in language, and the way that's grounded is, to some degree, sharing a certain biological form of life by which we recognize: that's a valid move in the language game, this is not a valid move in the language game.
Now, this is what's interesting when it gets to large language models, because you go, well, large language models: are they the same biological form of life as us, or are they different? And how does that play out? And I think Wittgenstein would have found that question utterly fascinating, and really would have gone very deep on it, trying to figure that out. And by the way, the answer might be some and some, not 100 percent yes or 100 percent no. Because the argument in favor is that large language models are trained on the corpus of human knowledge and language and everything else, and they're doing language patterns on that. Some might even argue that some of their patterns are very similar to the patterns of human learning and brains. Others would argue that they're not. But then you'd say, well, it's also not a biological entity, and it actually learns very differently than human beings learn. And so maybe its language game, which looks like it's the human language game, is actually different in significant ways. And so therefore the truth functions are actually very different.
And in a sense, that's what we're trying to do when we're modifying and making progress with how we build these LLMs: make them much more reliable on a truth basis. We love the creativity and the generativity, but for a huge amount of the really useful cases, in terms of amplifying humanity, we want it to have a better truth sense. I mean, the paradoxes in current GPT are when you can kind of tease it out with very simple questions around prime numbers, and you go, well, you got that answer wrong. It's like, oh yeah, I got it wrong. Here's the answer. Well, that answer is wrong too. Oh, I got that one wrong too. Here's the answer. A human being understanding these things would go: I'm just getting these things wrong. I get that I'm wrong. As opposed to: oh, I'm sorry, you're right, I got it wrong, and here's another wrong answer. And we're trying to get that truth sense into it as we go, because we do have some notion of, oh, right, this is what mathematics characteristically gets us in very pure definitions of certain kinds of language games. It's one of the reasons why, centuries ago, people thought math was maybe the language of the universe, or the language of God, et cetera: because, okay, that is where the purest truths that we know, like two plus two equals four, are kind of embedded. And we're still working that out as we play with how we create these language tools, these language devices. And it's part of the reason I think this question is really interesting: you can actually map it to some of the actual, as it were, technological physics that we're trying to create when we're doing the next version. How do we get these things to be good reasoning machines, not just good generativity machines? They have some reasoning from their generativity, but the classic way of showing where they break is showing where their reasoning stops working, in ways that we value and aspire to in terms of what we try to do as human beings, as our best selves.
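As a side note on the prime-number example: the ground truth there is trivially computable, which is what makes the failure mode so stark. A minimal sketch of the check; the claimed answers below are hypothetical, invented to show the shape, not outputs from any actual model.

```python
# Deterministic primality by trial division: the kind of external ground truth
# a model's "truth sense" can be checked against. The `claimed` dict is made
# up for illustration.
def is_prime(n: int) -> bool:
    """Return True iff n is prime, testing divisors up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

claimed = {57: True, 61: True, 91: True}  # hypothetical model answers
for n, claim in claimed.items():
    truth = is_prime(n)
    print(f"{n}: claimed prime={claim}, actually prime={truth}",
          "ok" if claim == truth else "WRONG")
```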
Dan Shipper (00:26:04)
That's really fascinating. You said a lot there. I really want to get into the reasoning thing in a second, but I want to go back to the way that you talked about late Wittgenstein versus early Wittgenstein, because I haven't really heard it said that way. And the usual thing people say is he just disagreed with everything when he was older, or whatever. And what I hear you saying now is, more or less, in both cases he's saying some of the same things, or he has some of the same views, but the real difference is how he cashes out what it means to be true, whether something is true. And in his first period, he's talking about truth in terms of a logical space of possibilities. That can be broken down into what he calls atomic facts. And those are never really defined, but you can kind of build up truth from there, mapping those possibilities into actualities, what's actually in the world. And in later Wittgenstein, it's all about these language games, the social relationships, the use of that word or that phrase in the context of people. And one of the things that I really wanted to ask you about is that first version of Wittgenstein, where it's that logical space of possibilities. What that reminds me of is embeddings; they're one of the key underlying technologies that gave rise to AI. And in traditional NLP, they allow you to represent words or tokens in a high-dimensional space. And then the language model innovation is kind of: not just words, but words in their particular context. Each word in a particular context has its own part of the space. So in a language model, the word king, if it's tokenized that way: there's a king in chess, there's an actual king, there's a king of England, there's King Lear, and there are all kinds of kings, but they're in different parts of the space. And language models are able to represent all of those differently. When we say king, we mean many different things, and they're able to represent all that. And that just actually reminds me a lot of atomic facts, or Wittgenstein's early work. And I'm just kind of curious, 'cause I think you said that language models, because of the next-token prediction, are sort of late Wittgensteinian, but I wonder how you factor in the fact that embeddings work and they're sort of a core part of this.
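Dan's claim about king occupying different parts of the space depending on context can be checked directly. A hedged sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; both are illustrative choices, not anything named in the episode.

```python
# Compare the contextual embedding of the same surface word, "king," in three
# different sentences. If the contextual-embedding picture is right, the three
# vectors differ (cosine similarity noticeably below 1.0).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The king moved one square in chess.",
    "The king of England addressed parliament.",
    "King Lear rages against the storm.",
]

def king_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'king' in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    token_ids = inputs["input_ids"][0].tolist()
    king_id = tokenizer.convert_tokens_to_ids("king")
    return hidden[token_ids.index(king_id)]

vectors = [king_vector(s) for s in sentences]
for i in range(len(vectors)):
    for j in range(i + 1, len(vectors)):
        sim = torch.cosine_similarity(vectors[i], vectors[j], dim=0)
        print(f"king[{i}] vs. king[{j}]: cosine similarity {sim.item():.3f}")
```

One design note: a static embedding (word2vec-style) would assign all three kings the identical vector; the per-context vectors are the "language model innovation" Dan describes.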
Reid Hoffman (00:28:49)
Well, and actually this is part of the fact that late Wittgenstein is not saying early Wittgenstein was an idiot. Because yes, I do think that the notion of, call it as it were, a probabilistic bet for what set of different tokens apply is there. Now, the reason why I would slant current practice more as late Wittgenstein than early Wittgenstein is because early Wittgenstein thought that once you had the grasp on the logic of it, you then almost, by speaking correctly, couldn't make truth mistakes, because the logic was embedded in it. And the token embeddings are part of a very broad quasi-symbolic network. And the reason it's quasi-symbolic is because it's still activations and so forth, and isn't purely the reasoning around a token of king—or 15 different tokens of king or 23 different partial tokens of king—as much as there are conceptual spaces in that tokenization, as mapped from a very large use of language. But part of language isn't just the historical language; it is the reapplication of it. If you say, this is the king of podcasts, right? Or, this is the king of microphones. As instances, that's part of why later Wittgenstein went to, well, it's how we're playing these language games and how we're reapplying them. When we say, for example, on this podcast, this could become the king of podcasts, we all have a sense of what we're doing. Well, what would be the cases where that would be true? And what would be the cases where that would be false? And what prediction is that making? And how is it that that's a useful thing? I'm sure someone has said king of podcasts before, but I've never heard it. And it's a different tokenization, especially as it gets developed and elaborated a lot in discussion. And then actually, if you suddenly had another terabyte of information about discussions of kings and kingdoms and all the rest, all of a sudden that token space that it's learning from would change, right? And then the generalizations off it would change. And that's part of the reason I would say it's more later Wittgenstein, even though not completely disconnected from those early embeddings. And it's one of the reasons why, actually, in fact, later Wittgenstein is not saying truth is just what language says. It's: no, there are ways in which it's embedded in the world by how we navigate as biological beings. And that's part of how the world kind of comes and impacts it, and therefore it's not just language by itself, free-floating, like the Cartesian consciousness; it's embedded in some ways. And part of what he was trying to do is figure out, well, from a philosophy standpoint, how do we understand those embeddings, and how do we drive our truth discourse in language based upon that biological embedding?
Dan Shipper (00:32:18)
That makes sense. So I think what I hear you saying is: despite the fact that embeddings are mapping words into this high-dimensional space, which seems like mapping words into this sort of atomic-facts or logical-possibility space, the way that that space is constructed, and what makes something go into one part of the space or another, is more late Wittgensteinian, because it's very much about how it's used in practice and whether it's useful for humans in the world, rather than about some deep underlying logical ordering where, if you've created that ordering, you can't say anything wrong because you're only using words from that space. Is that kind of on target?
Reid Hoffman (00:33:12)
Yes, exactly. And part of it is, we know that there are cases where the coherent use of language is still a falsity. And what we're trying to figure out is how we get more of those truths and that truth-telling and reasoning—'cause reasoning is about finding truth—into how these LLMs work.
Dan Shipper (00:33:37)
And just to move into that point a little bit: What, to you, is most promising in terms of the ways that we're getting reasoning into these language models? And do you think that there are any ideas from philosophy, whether Wittgenstein or otherwise, that are relevant to that project?
Reid Hoffman (00:33:58)
Well, the answer is certainly yes on the relevant ideas. Currently, I think we're doing a couple of things. So I think we're taking human knowledge and figuring out how to get that as part of what's trained. So the earliest discoveries were actually, in fact, that if you trained on computer code, then these models learn patterns of reasoning much broader than just computer code. And so all of the models that are doing this are now also training on computer code, even if they don't have a target of being a Microsoft Copilot code-generation product, et cetera, because there's a pattern, just like math, of crisp modeling of reasoning. Another one that's currently happening is: Well, what are you doing with textbooks? And the notion is, if you take the same kind of training discipline that we use for human beings, encapsulated in textbooks, you can, for example, build much smaller but still very effective models based on textbooks. So textbooks are another one. There's probably some interesting, as it were, computational philosophy if you begin to say, well, how do we cash out theories, whether it's theories of science, and build those models in (it's kind of like Lakatos as a development on Popper, given thinking about Kuhnian models of scientific paradigms), and how do you make predictions on those kinds of bases? And some of the in-depth work in logic, maybe Bayesian logic, as ways of possibly looking at this. I'm quite certain that there probably are some very useful things to elaborate beyond it. Now, currently, of course, part of the notion of these things is that they're learning machines. So you have to give a fairly substantive corpus of data for them to learn from. Now, of course, there's synthetic data, and there may be philosophy in what patterns we use to create synthetic data that is still useful to learn from, beyond the current data, anyway. So there's a bunch of different kinds of gestural areas, but I'm certain those are there, even though I'm making gestures rather than specific theories as to how that cashes out.
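One concrete shape the code-and-textbooks idea can take is a weighted data mixture at pretraining time. A minimal sketch, where the source names and proportions are illustrative assumptions rather than details Reid gives:

```python
# Sample training documents from a weighted blend of corpora. The weights are
# invented for illustration; choosing them well is exactly the open design
# question discussed above.
import random

MIXTURE = {
    "web_text": 0.70,   # broad natural language
    "code": 0.20,       # crisp, mechanically checkable reasoning patterns
    "textbooks": 0.10,  # curated pedagogical explanations
}

def sample_source(rng: random.Random) -> str:
    """Pick which corpus the next training document is drawn from."""
    r, cumulative = rng.random(), 0.0
    for name, weight in MIXTURE.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # guard against floating-point rounding at the boundary

rng = random.Random(0)
draws = [sample_source(rng) for _ in range(10_000)]
print({name: round(draws.count(name) / len(draws), 3) for name in MIXTURE})
```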
Dan Shipper (00:37:02)
That's really interesting. So it seems, basically, the way that we're trying to get reasoning into models is to find sources of data that just have really crisp reasoning, and so they'll learn the reasoning from that. I'm sort of curious, if that's the case: aren't there only a certain number of moves you can make in logic? You can do induction, you can do deduction. There aren't infinitely many moves. If we have a really crisp set of data that sort of teaches them these moves, what's the thing that's stopping them from being able to apply them more broadly? And maybe that question is not well formed.
Reid Hoffman (00:37:48)
Well, first, yeah, a correction on the question, because actually, in fact, in logic there are infinitely many moves. One of the things that's interesting in various logics is different orders of infinity, as people kind of think through it. So there are various things. Now, what you did actually remind me of is one of the things that I've been recently rereading, because I've been thinking of Gödel's theorem as kind of a classic instance of human meta-thinking. And so Gödel, Escher, Bach, which I read as a high school student, I've been rereading recently, because—
Dan Shipper (00:38:20)
That's great. What do you think?
Reid Hoffman (00:38:22)
Well, it's this tangle of amazing observations that I'm trying to think about from the viewpoint of modern LLMs. So you've got the Gödel self-reflection, which is, roughly speaking: in any sufficiently robust language system, there are truths that cannot be expressed within the language system, right? And that's mind-boggling, right? And what exactly does it mean, and so forth. And it's because of this classic kind of diagonalization proof: if you're enumerating all the truths, there's at least one of them that's not captured in your enumeration of all truths, hence one version of kind of infinity. You get that in the recursion patterns that you see within Escher and within Bach. You say, that's another recursion pattern, because there's a recursion pattern of showing the shadow of at least one truth that's not captured within your enumeration of all the truths. And you go, okay, well, what does this mean for thinking about truth discovery, whether it's human truth discovery or LLM truth discovery, and what are the things that are outside the boundaries of logic? I would have been very curious to have Gödel and Wittgenstein, two folks very focused on logic, talk about Gödel's theorem. I was asked recently, if I had a time machine, would I want to go forward or back. I'd rather go forward. I'm just curious about how you shape the future. But one of the historical back ones that I would love to do is put Gödel and Wittgenstein in a room and say, Gödel's theorem, discuss! And I would do a lot to be able to hear that conversation.
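For the shape of the diagonal move Reid gestures at, here is a compressed sketch of the standard Gödel-sentence construction. Strictly speaking, the theorem concerns what a system can prove rather than what its language can express, and the details are deliberately elided:

```latex
% Sketch, not a proof. Assume a consistent formal system S whose proofs can be
% enumerated, with enough arithmetic that a sentence can refer to its own code.
% The diagonal lemma yields a sentence G such that:
\[
  S \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner)
\]
% If S proved G, then Prov_S("G") would hold, contradicting what G asserts.
% If S proved not-G, then S would claim G is provable when it is not.
% So if S is consistent, G is undecided by S: a truth that the enumeration
% of S's theorems never captures.
```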
Dan Shipper (00:40:28)
We need some GPTs here with Gödel and Wittgenstein. Maybe Gödel doesn't have enough writing to make that happen, but maybe eventually.
Reid Hoffman (00:40:47)
And the twistiness of the thinking is one of the things that made Gödel so spectacular. Another one, by the way, in historical walks: Einstein and Gödel used to take walks together. You wish they'd had digital recorders. Please, record the conversation! We would really like to listen to that.
Dan Shipper (00:41:01)
No, I love that. That's really interesting. I read Gödel, Escher, Bach in college and I loved it. It's such an interdisciplinary book; it's got math and music and art and all this stuff. And you're like, wow, that's the kind of mind that's going to invent new minds. And then you see Hofstadter today, and he's definitely not in the LLM conversation. He's a little bit freaked out by them. And I'm kind of curious: what do you make of that? What did he get right, and what do you think he got wrong?
Reid Hoffman (00:41:37)
Well, I think a central thing that he got right, at least in how I operationalize it (and this was the reason I was gesturing at Hegel, with thesis, antithesis, synthesis), is that it's a dynamic process that's ongoing, and you can't necessarily predict the future synthesis. Even though, obviously, in philosophy you try to articulate the truths. Wittgenstein would say, well, the world actually has to be a certain way for there to be truth statements in a language statement like "I think, therefore I am." And so you can be broader than just the disembodied mind as a way of thinking about that, because you think about what the truth conditions must be in a language: if you're saying, in a way that's coherent to your current self and your future self, "I think, therefore I am," what are the truth conditions in the language? But that's a dynamic process by which we are making new discoveries, and that's the synthesis. And that's part of what I take from the Gödel, Escher, Bach interweaving of these different dynamics, showing the patterns across them. Now, frequently, across a lot of areas, people say: hey, we have this language system, and all we know is through our language. And then they go: so the world is unknowable to us, because the only thing knowable to us is our language. You say, well, that's presuming there's no relationship between how the language engages with the world and how we engage with the world through the language. It's one of the reasons why you get really interested in biologists like Varela and Maturana, and why you get to different patterns of self-referential logic. So it gets very interesting. And I myself don't get freaked out by LLMs on this front. I think: wow, new things that we can discover. How does that make the discourse much richer, much more valuable, much more compelling, and in some ways more on target in its discoveries of truth? I gave a speech in Bologna last year, along with the book I published last year, Impromptu, whose last chapter is "Homo Techne." One of the things we assume is that we as human beings are static. And actually we're not static; we are constituted by the technology that we engage with and bring into our being. So, for example, you and I are looking at each other on this podcast through glasses. Think about the world with glasses and without glasses. The world is a very, very different place in what you can perceive. Most of our theories of truth are fundamentally based on perception; seeing is believing is the classic idiom. Well, if you don't have glasses, how you see is very different. So technology changes our landscape in the perception of truth. That's why microscopes and telescopes and all the rest are changing that landscape. And that's part of what we're doing with technology, and we're doing it in particularly interesting ways with these LLMs in terms of how they operate.
Dan Shipper (00:45:07)
Yeah, that makes a lot of sense. And I love that point about how technology changes us, and really how flexible humans are. It actually reminds me of something, because I read your book to prepare for this, and I read your Atlantic article, and you have some podcasts on this. Have you read the book The WEIRDest People in the World by Joseph Henrich?
Reid Hoffman (00:45:33)
No, I probably should.
Dan Shipper (00:45:34)
It's really great. He's a psychologist at Harvard, and the point of the book is that most of what we take to be the psychology literature is wrong. And it's not wrong because of p-hacking and all that other stuff; it's wrong because the psychology literature is based on studies of Western college students, and Western college students have a completely different psychology than people everywhere else in the world, now and in history. One of the key differences is that Western college students can read, and reading changes your brain in all of these different ways. It enlarges parts of your brain and shrinks other parts, where, for example, if you can read, you're more likely to pick out objects in a landscape rather than see the holistic scene. And there are a bunch of these other significant differences between humans who can read and humans who can't. So reading, as a technology, created all of this stuff. One of the things he argues is that it allowed us to create a society where we had churches that created rules and principles that people would follow even though they weren't being watched: I'm not supposed to steal, or whatever. It's really hard to get a big, organized society without reading; that's basically one big point of the book, and it's because reading changes our actual biology.
And I think that's the thing people sort of miss about language models. That's not to say we should ignore language-model dangers or anything like that; there are a lot of really interesting and really important problems to solve. But when you think about what language models might replace versus augment, I think it's also really important to know that we've been replacing or augmenting ourselves for many, many, many generations. If you took a human from five or 10 generations ago and put them here now, it would be really hard for them to interact in our society. Same thing if you took one of us and pushed us back in time. And that's because we grow and change in response to our environment and our culture, which is this collective memory that gets loaded up so that we're a modern human instead of a pre-modern human or whatever. And the same thing is going to happen with language models. You can put it on this timeline from the invention of language to reading to the printing press. It's all the same kind of cultural transmission technology, as I've heard some researchers call it. I think that's exactly what it is. I'm curious what you think about that.
Reid Hoffman (00:48:34)
Well, I definitely think that about the progress of cultural knowledge. And, I don't know if it's the same author, but The Secret of Our Success is a very good book. It's partially because how we make progress is by updating our cultural knowledge. And it's part of the reason why it's not surprising that when we generate interesting learning algorithms that we can apply to the human corpus of knowledge, we then generate interesting things out of that, because that corpus is essentially a partial index of cultural knowledge. It's not the complete index, because, for example, The Secret of Our Success goes through, well, how do you identify which things to eat and which things not to eat, and when to do that, and all the rest of that; that's part of how you make progress. And I think that's an essential part of how we actually evolve. Everyone tends to think human beings evolve genetically, to be faster, stronger, and so on. But actually, in fact, a major clock of our evolution has shifted. You could say there's geological evolution, which is super slow. Then there's biological evolution, which is slow. And then there's cultural evolution, knowledge, digital, et cetera, which is much, much faster. Part of the secret of our success is that we got into cultural evolution, and that progress is digital now, and part of what we're doing with AI and LLMs is building tools to help accelerate that cultural slash digital evolution. Which can include: why is everyone going to have a personal assistant? Because the personal assistant will say, I've read all the texts, and I can bring them to you as you're talking and trying to solve problems. So, for example, one of the things people should be using ChatGPT for is obviously an immediate, on-demand personal research assistant, one that today hallucinates sometimes, and you have to be aware of that and understand it, but an immediate research assistant is one of the things that is obviously here already today. And if you don't think you need a research assistant, it's because you just haven't thought about it enough.
Dan Shipper (00:51:04)
Yeah, I mean, it's incredible. It takes everything that humanity knows and gives it to you in the right context at the right time, when you ask for it. And that's exactly the bottleneck of cultural evolution: getting the right information out to the people at the edges who need it, instead of having it locked up on the internet or in a library, where you have to expend resources to get it. All of those are better than having to transmit knowledge orally, for example, but language models are a profound next step. So, we're getting close to time. We had a whole final section about science, but we may not be able to get to it. We'll have to maybe do a part two.
Reid Hoffman (00:51:51)
Yep. That'd be great. I'd be up for that. I love these topics.
Dan Shipper (00:51:54)
But I want to ask you a couple more things on the philosophy-and-AI front. So, why do you think philosophers didn't come up with AI? I guess it came out of a computer science tradition, but also really from engineering people who were just making stuff. Talk to me about why that didn't come from philosophers.
Reid Hoffman (00:52:25)
Well, I do think this is a little bit of what I was gesturing at earlier, which is being too disciplinary. Obviously, the people doing this are not idiots; the disciplines have some strengths to note, but also some weaknesses. And I think part of it is to think about, well, how is it that technology is going to change our conceptions of how we use language, and how we discern truth, and how we argue about it, and all the rest of this stuff? That, I think, is pretty central: how is technology important as a way of knowing, or a way of perceiving, or a way of communicating, or a way of reasoning? And philosophers will say, you don't need any of that; I sit down and I cogitate, canonically, like Descartes. And look, I think there's a role for sitting down and cogitating, but I think there's also a role for discourse. And it doesn't necessarily mean you have to be an externalist or a physicalist materialist (I don't know who the current advocates are; the Churchlands and others, back in the days when I was a philosophy student, were among those who were very vocal on that). It is to say that, actually, in fact, this notion of how we engage technology in our work is a very good thing to do.
And if so, then maybe philosophers would have come up with it more, or would have been able to participate more in it, versus the computer scientists, who were like: okay, I'm working on the technology side of it; what can I make with this technology? And obviously, "what can I make with this technology?" goes well earlier than computer science, right? I mean, you go all the way back to Frankenstein, and to imaginations about what could be constructed here, or the Golem, or Talos in Greece. So the notion that things could be constructed, now with silicon and with computer science, is the modern kind of artificial intelligence. And that notion is, I think, one of the reasons why I want philosophy to be broader in its instantiation, not just a question around (this is obviously a bit of a deliberate rhetorical slam) trolley problems.
Dan Shipper (00:55:16)
Yeah, that makes sense. Maybe a way to frame that is: it's better to be asking deep philosophical questions and be a philosopher out in the world, to some degree, than it is to just be a philosopher. I don't know if you'd agree with that, but something like that?
Reid Hoffman (00:55:34)
I chose that with my own feet.
Dan Shipper (00:55:39)
Yeah, there you go. Yeah, I definitely agree with that. So, we have a minute left. The last thing I want to ask you is this: I assume there are a lot of people listening to this who maybe have not been philosophically inclined in the past, and they're either like, wow, I could not follow any of that and I want to figure out what they said, or they're like, oh my god, I want to learn how to think like that. And for the first group of people, I would totally recommend just using ChatGPT: talk to it about this stuff, and it will tell you, for sure.
Reid Hoffman (00:56:11)
Yes.
Dan Shipper (00:56:13)
But I wanted to ask you: if people want to get to that kind of crisp thinking about possibilities that you talked about so well at the beginning, where would they start? What are your favorite philosophers or kinds of books to dive into?
Reid Hoffman (00:56:30)
Well, I think the best way is to get interactive. That's part of the reason to study philosophy, and even, for the second part of the question, some use of ChatGPT is very helpful there, because interactivity is what it does. For example, one of the things I use ChatGPT for, which is part of this, is: I have something I'm arguing for, and I put in my argument and I say, okay, give me more arguments for this. How would you argue for this differently, or further? And then also: how would you argue against it? What would your counterarguments be? And I use that as, again, the kind of thesis and antithesis, trying to get to the synthesis. So I think that dynamic process is really important. And part of the way people traditionally try to get to this is they go through some of the real instances of great human thought, and then try to understand them and how to think that way.
So, one of the things that was too much text prompting to go into Impromptu, but that I think is very useful as another utility of ChatGPT: I'm a non-mathematical college graduate, explain Gödel's theorem to me. I'm a non-physicist, explain Einstein's thought experiments around relativity to me, et cetera. That dynamic process of getting into understanding those things is part of how you learn to think this way. And it's one of the reasons why what has helped us accelerate our cultural evolution, the secret of our success, is having things like books and things like universities, because it's that dynamic process of engaging that's so important. So there's not necessarily one specific book. Although, by the way, if you really want to have your mind boggled, go read or reread Gödel, Escher, Bach. It's great. But find the instances of these canonical, amazing pieces of thinking, and then, in that dynamic engagement process, you're internalizing them.
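(As a concrete version of the argue-for, argue-against, synthesize loop described above, here is a minimal sketch using the OpenAI Python SDK. The model name, prompts, and sample argument are illustrative assumptions, not a prescribed workflow; the same loop works equally well typed directly into ChatGPT.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A sample thesis to stress-test; swap in your own argument.
ARGUMENT = "Studying philosophy prepares founders better than an MBA does."

def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Thesis: strengthen the argument.
for_args = ask(f"My argument: {ARGUMENT}\nGive me more arguments for it.")
# Antithesis: attack the argument.
against = ask(f"My argument: {ARGUMENT}\nWhat are the strongest counterarguments?")
# Synthesis: see what survives.
synthesis = ask(
    f"Argument: {ARGUMENT}\nFor: {for_args}\nAgainst: {against}\n"
    "Synthesize these: what version of the claim still holds up?"
)
print(synthesis)
```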
Dan Shipper (00:59:10)
Yeah. Be curious about great ideas and engage with them. This was a great conversation. I really appreciate you coming on. I feel like I learned a lot. Thank you so much.
Reid Hoffman (00:59:20)
My pleasure. Awesome.
Thanks to Scott Nover for editorial support.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast How Do You Use ChatGPT? You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.