Transcript: ‘Being Human in the Age of Intelligent Machines’

'AI & I' with physicist and novelist Alan Lightman


The transcript of AI & I with Alan Lightman is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

Timestamps

  1. Introduction: 00:01:18
  2. Science can deepen your sense of the spiritual: 00:02:36
  3. The nature of consciousness: 00:11:31
  4. AI might appear to be conscious, but it isn’t: 00:13:11
  5. Why AI can be considered to be “natural”: 00:19:50
  6. AI shifts the focus of science from explanations to predictions: 00:30:40
  7. How modern neural networks simulate thinking: 00:33:48
  8. Lightman’s vision for how humans and machines will merge: 00:39:38 
  9. Does AI know more about love than you?: 00:43:11
  10. How technology is accelerating the pace of our lives: 00:49:18

Transcript

Dan Shipper (00:01:18)

Dr. Lightman, welcome to the show.

Alan Lightman (00:01:22)

Thank you for having me on, Dan.

Dan Shipper (00:01:24)

Thanks for being here. So for people who don't know, you are a physicist and a writer. You’re one of the first people at MIT to hold a joint faculty position in both science and the humanities, which I think is amazing. You're the author of Einstein's Dreams, which I think is an incredible book, and most recently the book The Miraculous and the Material. So thanks for coming on.

Alan Lightman (00:01:45)

Thanks for having me.

Dan Shipper (00:01:47)

The thing I like about The Miraculous and the Material is it reads to me almost like a devotional, but for people who feel awe from science. So each chapter is a little story about the atmosphere or atoms or bubbles—and it's in alphabetical order, and it almost feels like something I could read every day just to think a little bit about how the world works and then feel a little bit of awe or wonder. Tell me about that.

Alan Lightman (00:02:16)

Well, I wanna make a very slight editorial correction. The title of the book is The Miraculous from the Material.

Dan Shipper (00:02:22)

Ah, what did I say? The Miraculous from the Material. I apologize. An important distinction.

Alan Lightman (00:02:33)

No apologies needed. That's an important distinction because the point of the book is that even though we may understand the material basis for a lot of extraordinary natural phenomena like spiderwebs and volcanoes and so on, that doesn't make them any less awe-inspiring.

Dan Shipper (00:02:53)

It feels like a little bit of a counterpoint to that Walt Whitman poem, “… the learn'd astronomer.” Are you familiar with that poem?

Alan Lightman (00:03:05)

Yes.

Dan Shipper (00:03:07)

How do you respond to Whitman? If he was here, what would you say to him?

Alan Lightman (00:03:12)

Well, yes, I do know about that poem. I love that poem. For your listeners who don't know, it's about a person who listens to a lecture on astronomy and goes out on a dark night and looks up at the sky and is just overwhelmed by the beauty of the sky. And the message of that poem, I think, is that explanations of phenomena don't really replace the actual experience of phenomena. And sometimes they take away from it, I think.

Dan Shipper (00:03:42)

And sometimes they take away from it, drain it of its enchantment, right?

Alan Lightman (00:03:50)

Yeah. There's a slight negativity there, and I don't have that point of view. I think when you have a scientific explanation of a phenomenon, it actually enhances your appreciation of the phenomenon. And for me, having an explanation of spiderwebs or volcanoes or lightning or the rings of Saturn doesn't diminish my awe and appreciation and admiration of those phenomena.

Dan Shipper (00:04:24)

I think one of the ways to restate that, which I'm curious how you feel about, is: Since the Enlightenment, as we've gotten more explanations for things, there's this common refrain that we've disenchanted the world, right? When we look at the Sun or whatever, you can just say it's just a collection of gases. It's not a God. And when you have the explanation, it just drains everything else out of it. The explanation is just this inert, dead thing, and people are like, well, that's all it is, so everything is meaningless. So how do you reframe an explanation in your head? Does it have to have the just, or is there something else?

Alan Lightman (00:05:08)

Well, I would just omit the word just. That would be my solution to that dilemma.

Yes, the Sun is a collection of atoms and molecules and it balances gravitational forces inward against thermal pressures outward and all of that. But isn't it amazing? First of all, the Sun and all the stars are amazing spectacles. But isn't it amazing that human beings can understand so much about the cosmos that's far beyond our small planet? We're never gonna be able to do experiments directly with the Sun or with other stars, but just on the basis of our brains and the intellectual power we have, we learn how those things work. We've learned that the universe began 14 billion years ago, which is many, many, many, many, many human lifetimes. We've learned about the vast extent of the universe, which is far beyond our tiny dot of a planet. We've learned the molecule that encodes the instructions for making more human beings. And to me, all of these things we've learned, all of this knowledge, is a testament to the power of the human brain. The fact that we understand many of those phenomena doesn't diminish any of the phenomena in nature.

Dan Shipper (00:06:54)

One of the things that comes out of what you just said, especially going back to omitting the just, is that explanations can exist alongside our own conscious experience of things. You don't have to get rid of your conscious experience because you have the explanation. And they might be mutually enriching. What do you think about the way that viewing reality from those different lenses plays together or should play together?

Alan Lightman (00:07:24)

Well, I think they are mutually enriching, that is, the direct experience of the world and a scientific explanation of it. The scientific explanation does not replace direct experience, and that's why I call myself a spiritual materialist. The materialist part of that is my scientific side, which understands everything in terms of atoms and molecules and the law of conservation of energy and so on. But I'm also open to and embrace spiritual experiences like feeling part of something larger than myself, or the appreciation of a waterfall or a sunset or a communion with a wild animal, or relationships with people. All of those things are vital and part of what I call spirituality, and I embrace all of that. So I call myself a spiritual materialist.

Dan Shipper (00:08:38)

I think it's a beautiful term and there's a lot to unpack there. The first thing I wanna unpack is just: What do you mean by spiritual? Because some of the things you listed, for example, the beauty of a sunset, or human relationships, are not things that people would necessarily categorize as spiritual. Maybe the idea of being part of something bigger than yourself, maybe something that you can't explain. But what is spiritual to you vs. not spiritual?

Alan Lightman (00:09:04)

Well, spiritual for me is the list of experiences that I mentioned: feeling part of things larger than myself, the appreciation of beauty, communion with non-human animals. And all of that is the experience of awe. That's part of my understanding of spirituality. Now, many people include a belief in God or some divine being as part of their understanding of spirituality. And I think that's perfectly fine. So I think that you can be a spiritual person whether you believe in God or not. And you know, I'm not gonna define God right now unless you ask me to. I think most of us have some understanding of that. So it can either include a belief in God or not. It can include a belief in heaven and hell or not. But I do think that we can have my version of spirituality, whether or not we believe in God.

Dan Shipper (00:10:34)

Am I right to assume, because of the word materialist in spiritual materialism, that your personal version doesn't include a belief in God?

Alan Lightman (00:10:49)

Well, I'm agnostic on that. If you draw a line with total belief and faith on one end, agnosticism in the middle, and atheism on the other end, I'm somewhere between an agnostic and an atheist. I'm not an atheist, but I'm in that direction. I don't think that our minds and understandings are big enough and broad enough and deep enough to rule out the possibility of God—that is, an intelligent being that created the universe.

Dan Shipper (00:11:30)

That gets me to my next question, which is: One of the interesting implications of materialism is that everything can be explained in terms of basic physical laws, but because, for example, we don't know how consciousness arises from the activities of neurons, that's still kind of an undecided question. So declaring yourself a materialist, while that's undecided, requires a little bit of faith. What do you think about that?

Alan Lightman (00:12:01)

The question of consciousness is a good one because that's one of the outstanding mysteries at the frontiers of science. So my view about consciousness and things like that is that consciousness, and in fact all mental experiences, are rooted in the material brain, that is, in the electrical and chemical activity of neurons in the brain, which are material things. So I think that all mental sensations are rooted in that, but we don't yet understand how you get from that material basis to the feeling that we call consciousness. Consciousness is a feeling. It's a name that we give to a certain sensation produced by 100 billion neurons exchanging chemical and electrical signals. And we call that consciousness, but it's very hard to know, and maybe impossible to know, how another organism feels. That, of course, is related to the question of whether AI can ever be conscious, and my view there is: You can write down a finite list of manifestations of consciousness, for example, self-awareness and the ability to plan for the future. I think that at some point AI will check all of the boxes of the manifestations of consciousness. But whether that computer is actually conscious is a different question.

Dan Shipper (00:13:54)

I think it plays with one of the themes of a lot of your work, which is that there's a limit to how much we can write down or how much we can explicitly say. And then there are some things that are mysterious, and maybe we can feel into some of them, but even that is maybe too much. So, you know, anything that we can give a target to, AI can do, but there are probably limits to what we can give targets to.

Alan Lightman (00:14:24)

Yes. I think I would agree with that. I mean, of course, as time goes on, AI will be able to do more and more things, but most likely there will always be things that it can't do.

Dan Shipper (00:14:37)

Yeah. I'm sort of curious how you feel at this point. For example, the idea that something reacts to pain is one of the ways that we might tell if something's conscious. When an ant runs away as you try to kill it, we're kind of like, it probably has some amount of consciousness. And that seems like a useful thing to build into AI systems. In fact, we're generally doing those kinds of things with AI right now. We don't think of it as causing them pain, but we reward them for good things and punish them for bad things—that kind of thing. It seems like at some point that sort of basic thing will ladder up into behavior that looks a lot like consciousness to us, and we may just decide to treat it that way because we sort of—

Alan Lightman (00:15:27)

Well, I would disagree a little bit with your initial statement that the reaction to pain represents consciousness, because you can take an ant, and I think most of us would agree that an ant doesn't have anything resembling consciousness, at least not our level of consciousness. I mean, consciousness is a graded phenomenon. It's not an all-or-nothing thing. Dolphins can recognize themselves in the mirror. Crows play games with each other. So there's some level of consciousness there, but as you go down the animal kingdom you eventually get to ants and even single-celled organisms like amoebas. So let's take an amoeba. If you put some chemicals near an amoeba that are dangerous to it, it will react. It will avoid those chemicals. Now, that's a totally automatic response that doesn't involve any higher levels of cognition. And I think that you could do the same with a computer.

Let's say that a computer needs to have a certain temperature range in order to operate. We know that when a room gets too hot, it's bad for a computer. That's why we have fans on our laptops and so on. So if you had a computer sitting in a room and you turned up the temperature and made it hotter and hotter and hotter, eventually the sensors in that computer would try to turn off certain things. It would react. I mean, my MacBook Pro reacts when it gets hot. But that doesn't involve any consciousness in my understanding of consciousness.

Dan Shipper (00:17:38)

Sure. I think maybe there are different levels of complexity of sensitivity to pain. In the amoeba example, or in the laptop example, it's about chemicals—a certain amount of chemicals or a certain temperature of the air, the surrounding environment. Whereas for a human, it's like, my mortgage—I might not be able to pay my mortgage, and that causes me a certain level of pain, and that represents psychological pain. I'm not saying that there's ever gonna be a perfect test, but with some of these things you kind of get glimpses and think, I don't know, it seems like there's something in there.

Alan Lightman (00:18:24)

Yeah. Well, I think even with physical pain, the human reaction is different from the reaction of an amoeba to toxic chemicals, because with humans, when you have pain, you are self-aware of having pain and you're able to name it. You're able to say: This sensation is pain. It's similar to other sensations that I've had in the past that often also cause me these physical reactions. You can categorize it and name it, so your mind is operating at some higher level. It's not only sensing certain things, it's actually aware that it's sensing, and it's able to name and categorize what it's sensing. There's something else happening. So that's the difference between a human putting their hand near a fire and an amoeba moving away from some toxic chemicals.

Dan Shipper (00:19:38)

That's interesting. This leads pretty nicely into our AI discussion, which I'm professionally obligated to bring up. So you wrote an article for The Atlantic called “When the Unnatural Becomes Natural,” which is basically about AI and it's about the problems and the promise of getting used to things that are artificial. Can you talk to us about that article? I'm curious for you to go into it with us.

Alan Lightman (00:20:09)

What we call natural and unnatural is somewhat arbitrary. You could take the view that eyeglasses and hearing aids are unnatural because we weren't born with those abilities. And you could take the point of view that any machine we create is unnatural, because we didn't find that machine lying under a rock one day. We put it together with intelligence. On the other hand, as I argue in the article in The Atlantic, you could take the point of view that we homo sapiens are natural. We have evolved from lower organisms. I don't think there was anything supernatural that created human beings. I think that we started off as single-celled organisms in the ocean and evolved from there due to Darwinian evolution. So we are totally natural and our brains are totally natural. And you could take the point of view that anything our brains invent is natural, because it is an inevitable consequence of something that is natural. So that kind of blurs the distinction between human-made objects and objects that we just find under a rock.

Dan Shipper (00:22:01)

And I think one of the implications of that is that typically, when we use the word natural, we mean things that we grew up with. You start to forget that books or keyboards or windows or whatever were at one point technology that people might have been afraid of—and probably actually were, because it didn't exist before. That's an interesting lens through which to view AI. So how does that change your perspective on AI? What do you bring to AI with that perspective?

Alan Lightman (00:22:48)

Well, I know that my granddaughter can use apps on the smartphone with more agility and familiarity than I can. So everybody growing up in the last 20 years or so is very familiar with technology, with the internet, with smartphones, with apps on the smartphone. And that's their world and, of course, old farts like myself— I grew up a long time ago, so I think it's a good point that you make, that it depends somewhat on what we grew up with.

Dan Shipper (00:26:16)

And I guess, tell us what your current experience with AI is, or what your current feeling is about it. You've been around a while. You've seen everything from computers to smartphones, to the internet, to this. You're someone who deeply understands the technical, mathematical foundations of the universe and is also a humanist. And there's something new here. At least to me it feels like there's something new going on here. I'm curious how it feels to you.

Alan Lightman (00:26:47)

Well, first of all, AI is developing very rapidly, and even the people at the frontiers of research can't predict when certain benchmarks will be reached. I think most people would agree that AI has the potential for great benefits and also great dangers, and it's going to be awfully hard to regulate. I know the European Union a year or so ago issued some policy for regulation. The US is now investigating, most countries are investigating, different kinds of regulation, but it's gonna be very difficult to regulate because a lot of it is being done in private companies, and the profit incentive to be one step ahead in the AI game is huge. There's also national pride: Some countries are just gonna want to be out in front with AI and are going to resist any constraints. So it's gonna be very hard to regulate, and that is a problem. I think having AI that can make autonomous decisions on the battlefield is very dangerous. And to have Darwinian evolution, you need mutations and you need reproduction, and AI is almost capable of that now: When AI can control a factory that's making other AIs, you have those two necessary ingredients for Darwinian evolution. Then it's automatic that the organism, in this case the organism made out of silicon, can develop a sense of self-preservation, which also comes from Darwinian evolution. And so at some point, advanced AI may decide in the interest of self-preservation that we homo sapiens are not good for it.

Dan Shipper (00:29:27)

That makes sense. I think that's a reasonable viewpoint. But one of the things that perspective hasn't grappled with, or one of the reasons why AI hasn't tried to kill us all yet, is that it doesn't quite account for the way in which that self-preservation instinct is being created in the context of a deeply embedded instinct to collaborate and coordinate with humans. I'm not saying it's an either/or thing, but I think sometimes those thought experiments miss that the co-evolution happening highly incentivizes these systems from the beginning to want to be helpful and to be around us.

Alan Lightman (00:30:17)

Well, of course we hope that's gonna be the case, but I agree with you. Those are two forces that'll both be operating on this ability of AI to develop a sense of self-preservation. We're not at that point yet, but we may be at some time in the future.

Dan Shipper (00:30:48)

Yeah, that makes sense. One of the things that I'm kind of curious to talk to you about is that this AI revolution, or this new level of technology, might change what it means to know things, or how we might know things, particularly in science. For example, right now if you're working in psychology or neuroscience and you're asking a question like, what is depression, you have to go find one particular mechanism for depression, which has proven to be really hard to find. But what you can do instead is train a model on a bunch of people who start out without depression and then get it, and have the model predict who's going to get depression and who's not. That feels like a different way of knowing things than finding root explanations. Have you thought about that? What does that bring up for you?

Alan Lightman (00:31:42)

Well, that's a data lookup table. You get lots and lots of data on different people, and you correlate data about the degree of depression with lots of background data on the person: where they grew up, who their parents were, etc. And, of course, a computer can sort through all of that information much faster than a human can. I call that not thinking, but data lookup. Of course, it can still be very beneficial to us, even if we don't call it thinking. And we know in the medical field that AI is already being very useful in developing new drugs, because it can try out lots and lots of different chemical combinations and possibilities and find out which ones have different properties. This is theoretical, it's not actually doing experiments, but it can look at the different shapes of molecules, especially when you have protein folding. It can try out lots and lots of different configurations, and it's a totally mindless operation. But it can be extremely useful to the medical profession in finding new drugs.

Dan Shipper (00:33:30)

Let me push you a little bit on the data lookup thing, because I think that's a common worry that people have about neural networks or the way that AI works, and to some degree I think there's something to what you're saying. But to make a data lookup table work, let's say for depression prediction, you'd have to have a database that's so big and so hard to look through that it probably wouldn't work if it was actually data lookup. The way that a neural network works is it distributes all of that context and all of that background information across the entire network, where it doesn't exist in any one particular place. And it can process in parallel lots and lots and lots of tiny correlations between different pieces of context that might lead to depression, and those can interact nonlinearly. So you might have 1,000 different things that might lead to depression, and three of them might be turned on in any given person. That feels a little bit more like what a neural network is doing than a data lookup.
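To make the contrast concrete, here is a minimal Python sketch of the two ideas Dan describes. It is purely illustrative: the risk factors and dimensions are hypothetical, and the network's weights are random stand-ins for trained ones. It shows why an exact-match lookup table fails on an unseen combination of factors, while a neural network, whose knowledge is spread across distributed weights, still produces a prediction.

```python
# Illustrative sketch only: hypothetical risk factors, random weights
# standing in for trained ones.
import numpy as np

rng = np.random.default_rng(0)

# 1,000 hypothetical binary risk factors; a given person has only a few.
n_factors = 1000
person = np.zeros(n_factors)
person[[3, 417, 902]] = 1.0  # three factors "turned on"

# Data lookup: fails unless this exact 1,000-bit pattern was stored,
# and there are 2**1000 possible patterns to store.
table = {}  # maps tuple(pattern) -> risk score
lookup_prediction = table.get(tuple(person))  # -> None: never seen

# Tiny neural network: the learned correlations live in distributed
# weights, and the nonlinearity lets factors interact rather than
# just add up, so unseen combinations still get a prediction.
w1 = rng.normal(size=(64, n_factors)) * 0.05  # input -> hidden
w2 = rng.normal(size=64) * 0.05               # hidden -> risk score
hidden = np.tanh(w1 @ person)                 # nonlinear interactions
network_prediction = w2 @ hidden              # a number, even for unseen input

print(lookup_prediction, network_prediction)
```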

Alan Lightman (00:34:41)

Yeah, I agree with you. So I accept that clarification and revision of what I said. That is more than a data lookup table. When the different digital neurons of a neural network can exchange information back and forth in a nonlinear way, then that is more than just data lookup. The computer's actually teaching itself.

Dan Shipper (00:35:12)

So then is that, in your mind, thinking? Because that is how a transformer works.

Alan Lightman (00:35:18)

Well, it certainly comes closer to thinking. I mean, I think that how you define thinking is somewhat arbitrary. But I think that's a lot closer to thinking than just a data lookup table. I think when the neural network starts modifying itself due to the back-and-forth communication of the digital neurons, that's certainly approaching thinking.

Dan Shipper (00:35:51)

It's definitely doing that when it's being trained, although it's not modifying itself. Something else is modifying it: The union of the training program and the network is doing that. But once it's trained, it's not being updated, at least for now, although there are architectures that are getting close to that. And one of the things this line of thinking takes me to, which I think is related to your work—and you tell me if I'm wrong—is that even when you're dealing in the most abstract, rational places in human knowledge—so, physics—human intuition and creativity are still incredibly important.

And when we talk about new ways of knowing, which is why I'm interested in AI stuff, one of the things this makes me think of is, for example, the ability to predict who's gonna get depression without an explanation, which is what an AI model might be able to do, and a neural network probably can, in some circumstances, do pretty well. It actually looks to me a lot like human intuition: a skilled clinician being able to tell you what's going on with someone because they've seen thousands and thousands and thousands of examples. But the difference is that intuition is stuck in our heads. And so in order to make that useful to other people, we've had to create theories or mathematical explanations, and models seem to be an alternative way to express things that are very difficult to express in terms of mathematical theories. What do you think about that?
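A minimal sketch of the training-versus-inference distinction Dan draws at the start of that answer, assuming nothing beyond a toy linear model with made-up numbers: during training, an external loop computes gradients and overwrites the weights; at inference time the weights are frozen, and calling the model reads them but never changes them.

```python
# Toy illustration: the "network" is three weights in a linear model.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)                  # the network's weights
x = rng.normal(size=(100, 3))           # toy inputs
y = x @ np.array([1.0, -2.0, 0.5])      # toy targets

# Training: the training loop, not the network, changes w via gradients.
lr = 0.1
for _ in range(200):
    grad = 2 * x.T @ (x @ w - y) / len(x)  # gradient of mean squared error
    w -= lr * grad                         # the update comes from outside

# Inference: w is now fixed; repeated calls read it but never write it.
def predict(inputs):
    return inputs @ w

print(predict(x[:2]))  # close to y[:2] after training
```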

Alan Lightman (00:37:39)

I agree that models, like intuition, can reach conclusions we cannot articulate. Another way to explore the origins of depression, and be able to predict depression, is for a neuroscientist to see what molecular processes are associated with depression in the brain. I know some neuroscientists are doing exactly that, focused on depression, and they're looking for a chemical basis in the brain that leads to depression. So that's another avenue. You're not doing correlations with the background of lots and lots of people and their behavioral characteristics. You are looking at, in this case, maybe one or two brains and trying to understand them in detail.

Dan Shipper (00:39:05)

My guess is that the word depression is actually a bunch of different things.

Alan Lightman (00:39:10)

I'm sure you're right.

Dan Shipper (00:39:17)

There are probably cases where you'll be able to find some specific physiological marker, but it's probably gonna be the combination of a physiological marker and a bunch of other context in a person's life. It won't be localized in just the chemicals.

Alan Lightman (00:39:30)

Yeah. I think you're right about that.

Dan Shipper (00:39:38)

And so you have a documentary series called Searching and you bring up the idea of homo techno. Can you talk about what that is?

Alan Lightman (00:39:48)

Well, human beings, or homo sapiens, have bypassed Darwinian evolution now for a few hundred years or so, or maybe longer. Hearing aids and eyeglasses are an example of how we're no longer subject to selection for characteristics that have survival benefits, because we can create new devices and we can develop medicines that allow people to survive who would've died several hundred years ago. So we're evolving by our own hand, and AI is just one example of that. I think that at some point in the future we will have hybrid organisms, which I call homo techno, that have evolved beyond homo sapiens and are part-human and part-machine. Just for example, we already have the ability to implant computer chips into the brains of paralyzed people that allow them to move robotic arms, and even control the movement of the arm, by pure thought. So it is possible that at some time in the future, we will all have enhanced brains. I mean, already we have enhanced eyesight and enhanced hearing with eyeglasses and hearing aids.

Dan Shipper (00:41:49)

Part of your brain.

Alan Lightman (00:41:44)

So sometime in the future we may have computer chips implanted in our brains that, for example, allow us to be connected instantly to the vast amount of information on the internet, or to communicate with other people through the internet. So I'm thinking a thought, and I'm connected to the internet, and my thought is broadcast to the internet and then to the computer chip in your brain. It would be another form of communication, much faster. At that point we're a new species. I'm just suggesting that we would be a new species then, which I call homo techno.

Dan Shipper (00:42:39)

And why is that interesting to you? Or what do you think about that?

Alan Lightman (00:42:43)

Well, it raises questions of what it means to be human, and I think that AI is doing that already. What can we do that AI can't do? Or what can homo sapiens do that homo techno can't do? Are there experiences that we can have, particularly emotional experiences, that a computer can't have? Let's talk about love. A computer could read every novel that's ever been written about love, all the love affairs throughout the last several thousand years of history. Would it understand love the same way that you and I understand it, who have fallen in love and know that thrill, that feeling, which is very personal? I mean, each love is different. Each person falls in love in a slightly different way. Can that experience ever be replaced by a computer? If you've read all of the novels and all the stories, the romantic stories, that have ever been written, do you understand more about love than by actually experiencing it?

Dan Shipper (00:44:14)

I think it's a great question, and I think my answer is: Well, obviously no, in a certain sense, because assuming these things are not conscious, then no. But there's actually a lot more to it. One of the things that I think is really interesting about language models is they show that there's a lot more to the way that text is put together than what it explicitly says. Language models know a lot about the world beyond just what is explicitly in a particular sentence, just by the way letters are combined. So I do think if it has read all the love stories, it actually knows something. Something that is not the same, but more than you might expect. More than I would've expected, which is interesting.

Alan Lightman (00:45:01)

Well, yeah. It might be a good relationship counselor.

Dan Shipper (00:45:03)

Yeah. I think it is actually. I mean, I use it all the time for that. It's all right.

Alan Lightman (00:45:12)

Because it knows about all the romances and relationships that have gone south. But there are still gonna be things, I hope and believe, that you and I can experience with love that the computer cannot duplicate.

Dan Shipper (00:45:32)

Well, that's the thing you brought up that I think is an interesting one: raising the question of, okay, what does it mean to be human? And then the next move you made, to answer that question, is to ask what humans can do that an AI can't do. Once you make that move, it brings us back to something you said earlier. We were talking about how AI can do anything that we can put our finger on, anything that we can define well. So once you ask that question, you're already setting yourself up for: Well, there's nothing I can say, because anything I can say, eventually it's gonna do. But there's a lot that you can't say. The problem with defining human nature is that the definition is static, and human nature is not static. It's always changing. It's high-dimensional, it's very fluid. And it requires, I think, a little bit of faith that there are a lot of things that we can't put our finger on, and the moment we try to articulate it is the moment we kinda lose it.

Alan Lightman (00:46:41)

Well, that brings us back to Walt Whitman's “... the astronomer” poem. I like to hope that there are things that I can do and can experience that a computer will not be able to do. For some reason, the idea that I could be totally duplicated by a computer bothers me, and maybe I shouldn't be bothered by that, but it's related to the ego. I'm bothered somewhat because I have an ego, and the ego was developed by Darwinian evolution. I mean, it had a survival benefit. I don't know whether Freud ever talked about that aspect of the ego, but I think that most biologists would say that the development of the ego, the sense of a self and the allegiance to that self, had some kind of survival benefit at one point. And so it's that ego and sense of self that bothers me in thinking that I can be replaced by silicon.

Dan Shipper (00:48:10)

I agree, and I think that's healthy. I think there's definitely something healthy there. But there's a question about how that sense will evolve over time, as you talked about getting used to things that are unnatural. So for example, I'm not bothered that you can also fall in love. I think that's great. We can connect about it. But you're someone else, and in the same way, maybe in 20 or 30 years, I might not be bothered that a machine can fall in love. Because even though we kind of have the same brain, I've created this separate identity, because we're separate people. And that, I think, is to some degree inside of us: Once we get used to a certain thing, once we've lived with it for a while, we're not as threatened by it anymore, and we realize, oh no, I'm still my unique self. Even if, you know, Dr. Lightman, you've written way more books than me and you've had way more experiences than me, it doesn't necessarily eliminate my value totally. Does that make sense?

Alan Lightman (00:49:18)

Well, there's the anecdote about the frog that you put into warm water, and you slowly turn up the temperature, and it never notices. There's never any definite point where it knows that it's being killed, but at some point, of course, it is killed. And I wonder, as our technology gets more and more advanced and we get more accustomed to it and more familiar with it, whether there will ever be a sharp line where we realize that we've crossed a boundary. One way in which I think we've crossed a boundary, though it's not a very clear one, is the pace of modern life. The pace of life has always been regulated by the speed of communication, and the speed of communication has gotten faster and faster. In the mid-19th century, the telegraph was the new communication device, and it could communicate three bits per second. Then in the mid-1980s, when the internet got roaring, you could communicate 1,000 bits per second, and now it's billions of bits per second. And you can tell that the pace of life has increased: We look at our smartphones every 5–10 minutes, we rush around from one appointment to the next, we rarely take the time to go out and take a quiet walk in the woods without our smartphones. So something has changed there. And whether we've crossed a sharp boundary or not, I know that the frog has begun to notice the heat.

Dan Shipper (00:51:38)

Obviously I love playing with new technology and all that kind of stuff, but I feel that too. I love books. My favorite thing to do is wake up in the morning, sit on a couch, and just take a physical book and read it. That's how I discovered you. And I'm in my thirties now, so it's the first time where I'm starting to see that there's a new generation of kids and they don't read. They don't read like I do. Some of them do, but most of them don't. And you see them scrolling on TikTok or whatever, and I've had that first feeling of, oh shit, I'm getting a little older and they're doing stuff that I don't know is so good. They should be reading. And one of the things that I've been playing with, and I don't know if this is right, is that it is true and it is uncomfortable for me, but I know, for example, that my brain is different because I grew up with books. It's different from someone who didn't, and their brain, even though it's not totally recognizable to me, has adapted in its own particular way. We haven't figured out what the limits of human brains are, because we've never tested them as far as they can go, because we haven't had the technology. What you grow up with determines the level of technological facility or intelligence or whatever that you have. And that's why IQ scores go up pretty linearly over time. So we could be the frog, but also each generation of frogs is slightly better at adapting to temperature than the generation before. So even though the generation before is a little bit hot, the new generation is maybe okay. That's my more optimistic take.

Alan Lightman (00:53:33)

But at some point, all of those frogs, even the ones that are more adapted to higher temperatures, they're all gonna be dead.

Dan Shipper (00:53:47)

I can't argue with that. I can't argue with that unless the frogs invent technology that—

Alan Lightman (00:53:50) 

They may invent a technology.

Dan Shipper (00:54:02)

This has been a fantastic conversation. Is there anything else that you wanted to talk about that we didn't get to today?

Alan Lightman (00:54:06) 

Well, we didn't say very much about the new book, The Miraculous from the Material, so I would just say a few words about it. It's got about 35 chapters, and each chapter begins with a full-page color photograph of an extraordinary visual phenomenon, like a spiderweb or lightning or soap bubbles. Then there's an essay accompanying the photograph that not only explains the science behind it, but describes my personal experience with that thing. So that's a brief description of the book.

Dan Shipper (00:54:46)

I think it's a wonderful book. If you're listening to this, you should go pick it up. And it was great to get to chat with you. Thank you so much for coming on.

Alan Lightman (00:54:55) 

Very happy to be on your program, Dan Shipper, and you are a real thinker yourself. You’re not an amoeba.

Dan Shipper (00:55:07)

Thank you.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

