ChatGPT and the Future of the Human Mind

AI is a lever that becomes a lens

Illustration by Lucas Crespo. Image generated with AI.

I remember when I first saw GPT-3 produce writing: a line of letters hammered out one by one, rolling horizontally across the screen in its distinctive staccato. It struck both wonder and terror into my heart.

I felt ecstatic that computers could finally talk back to me. But I also felt a heavy sense of dread. I’m a writer—what would happen to me? 

We’ve all had this experience with AI over the last year and a half. It is an emotional rollercoaster. It feels like it threatens our conception of ourselves. 

We’ve long defined the difference between humans and animals by our ability to think. Aristotle wrote: “The life of the intellect is the best and pleasantest for man, for the intellect more than anything else is man; therefore, a life guided by intellect is the best for man.” 

Two thousand years later, the playwright and short story author Anton Chekhov agreed in his novella, Ward No. 6: “Intellect draws a sharp line between the animals and man, [and] suggests the divinity of the latter.”

The primacy of thinking and the intellect as the feature by which we define ourselves has become even more salient as we’ve moved from an economy driven by industrial labor into one driven by knowledge. Indeed, if you’re reading this, you probably put a lot of stock into what you know. After all, that’s what knowledge work is all about. If we define ourselves and our value this way, it’s no wonder LLMs seem scary. 

If AI can now write, and, worse, think, what’s left that makes humans unique? 

I think LLMs will change knowledge work. In doing so, they’ll change how we think of ourselves, and which characteristics we deem uniquely human. But these days I’m not particularly scared. In fact, I’m mostly filled with excitement. 

My sense of self has changed—and that’s a good thing. ChatGPT has made me see my intellect and role in the creative process differently than I did before. It doesn’t replace me; it just changes what I do. It’s possible to achieve this feeling of excitement by 1) getting a clearer conception of what LLMs actually do and don’t do, and 2) expanding your view of what you are and what you are capable of. 

Let’s talk about what that looks like. To start, we have to understand what the intellect is.

What is the intellect?

For the purposes of this article, the intellect is the thing that humans uniquely have that animals don’t. This is a fuzzy definition, by design. It reflects what feels threatened by AI: that which makes us human.

In reality, the intellect is a gigantic combination of brain processes that look like thinking. Thinking, the intellect, the mind—these are all different processes that we lump under a single heading. That’s why it’s easier to define it via negativa, by what it is not—it’s whatever animals don’t do. (We now know that animals do have thinking processes that look a lot like what we might call intellect, but that hasn’t filtered into our popular conceptions of self. Read Frans de Waal’s classic book, Are We Smart Enough to Know How Smart Animals Are?, for an excellent overview of animal intelligence.)

Our fuzzy definition of the intellect is why our first encounter with ChatGPT and its ilk can be so terrifying. It touches a lightning rod within us. For millennia we’ve set ourselves apart by a strange, amorphous, many-dimensional thing called intelligence—and suddenly there is something encroaching on our turf. Because it can do some of the things we associate with the intellect, we feel both excited, because we’re no longer alone, and threatened—because we might be replaceable. 

In order to regain our sense of self and place in the world, we need to redefine what we mean by “intellect”: to create a new sense of separation between what humans do and what AI can do, and to make the concept work in an AI-driven world.

Fortunately, we’ve done this before, and technology can help.

Psychology is a branch of science that is full of concepts that are fuzzy in the same way the “intellect” is. Take depression and bipolar disorder. Neither can be detected by a blood test, and neither has a consistent set of symptoms. Instead, they’re characterized as syndromes: a set of often associated symptoms that can vary from case to case. 

The problem is that there is significant overlap between depression and bipolar disorder: both can manifest as low mood. As late as the 1960s, depression and bipolar disorder were lumped together under a single heading and understood to have a similar underlying cause. But it turns out that technology—specifically, the discovery of the drug lithium—was the key ingredient that we needed to pull them apart.

It all started with guinea pigs. 

Pharmacological dissection of mental illness

In the late 1940s, a doctor named J.F. Cade, who apparently had entirely too much time on his hands, discovered that the urine of manic patients was toxic to guinea pigs. (I can’t believe I just typed that sentence, but, like, we all have our kinks, I guess.)

Do not let this man around your guinea pigs. Source: Wikipedia.

Anyway, Cade set out to find out why. He discovered that manic patients had elevated levels of uric acid in their pee and thought that might be what was causing the toxicity. He decided to perform a controlled experiment on the guinea pigs: He would inject them with varying levels of uric acid to see if increasing levels would cause toxicity.

To do the injections, he dissolved the uric acid into a lithium carbonate solution. When he injected the guinea pigs with lithium he noticed that they became remarkably calm. They just sat around in their cages and didn’t respond to being messed around with—a side effect he had not anticipated.

Then, in a move that revolutionized the field of psychiatry, Cade decided to inject the lithium carbonate into his manic patients. The results were astonishing: Lithium resolved their mania.

This is a remarkable series of events for two reasons: One, it resulted in a drug, lithium, that has saved many thousands of lives over the decades since it was discovered. And two, it acted as what the psychiatrist Peter Kramer calls a “pharmacological dissection” that differentiated manic depression as a separate disease from depression, with its own biological cause.

Kramer argued that manic depression hadn’t previously been thought of as separate from depression. Indeed, psychoanalytic thought, which was the prevailing model of the mind at the time, viewed all psychiatric issues as essentially about internal psychological conflicts resulting from childhood trauma. 

But a drug, lithium—which cured manic depression, and only manic depression—helped carve the disease out as distinct from the rest of mental illness, and one that was fundamentally biological in origin. In other words, lithium was a lever that became a new lens on the way our minds work.

We can use this model to do a similar technological dissection on our intellect. Once we understand what ChatGPT does, it can help us further define and flesh out our previously fuzzy concepts like the intellect—while leaving our sense of what keeps us different intact.

ChatGPT is a summarizer

Technically, ChatGPT does next-token prediction. Given a string of text, it is very good at statistically predicting which words (or word fragments, called tokens) are most likely to come next in the sequence.
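To make “statistically predicting what comes next” concrete, here is a toy sketch: a bigram model that counts which word most often follows the previous one. (A real LLM conditions on the entire context with a neural network and works over sub-word tokens; the tiny corpus, variable names, and model below are purely illustrative, not how ChatGPT is actually built.)

```python
from collections import Counter, defaultdict

# A miniature "training corpus" (hypothetical, for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Chaining `predict_next` calls, word after word, generates text the same way ChatGPT does: by repeatedly appending the likeliest continuation, just with a vastly richer notion of “likely.”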

In practice, that means that ChatGPT and other LLMs are incredible at reformatting, reconstituting, and recombining old knowledge in new and useful ways. The cognitive scientist Alison Gopnik and colleagues refer to LLMs as technologies that enhance “cultural transmission” and are “powerful and efficient imitation engines.” 

In this way, Gopnik argues, LLMs are an extension of previously existing technologies like writing, the printing press, and the internet. They allow existing information to be “passed efficiently from one group of people to another,” which allows for “a new means of cultural production and evolution.”

What this means, in effect, is that LLMs aren’t very good at discovering new things thus far. But they are incredible at bringing human knowledge to bear on any given area of inquiry, by compressing and reformatting it in precisely the best way for it to be consumed. Their powers for increasing the transmission and understanding of knowledge far exceed those of any preceding innovation, from the book to the internet. 

To say it plainly: These things are great at summarizing. Calling ChatGPT a summarizer might sound pejorative, but it’s not. It is powerful and important. Because the sum total of all human knowledge far outstrips any single person’s ability to remember it, we need LLMs if we are to have any hope of putting everything humanity knows to use. 

Thinking about ChatGPT as a summarizer, we can now come back to the word that started us off—“intellect”—and see how it might help us refine our understanding of our intellect, and ourselves.

A technological dissection of ‘intellect’

Here’s what happened when I started to see ChatGPT as a summarizer. 

First, I started to see that summaries are happening everywhere, all the time. Most of the emails I write are summaries, and so is most of the code I write. Even much of this article is a summary. For example, the passage about J. F. Cade, lithium, and pharmacological dissection is a summary of a particularly enlightening section of Peter Kramer’s book, Listening to Prozac.

At this point, it’s easy for me to again panic and start to feel threatened. If summaries are everywhere in my work life, and ChatGPT can do summaries, what role do I have? The answer is obvious: Writing a great essay takes far more than summarizing. The fact that I don’t have to summarize as much is fantastic.

When I really sat down to think about this article, the interesting thing, the hard thing, was not the summary. It was everything else that went into it: the life experience, the diversity of reading sources, the emotional journey of interacting with ChatGPT and considering its implications. Summarizing skills are a kind of creative drudgery. I learned them out of necessity, but they’re no longer as useful as they once were.

Once I start thinking this way, I start to automatically subtract summarizing as a skill that’s core to my sense of identity, humanness, and intellect. It feels much more “okay” that ChatGPT is able to do this—because now I get to direct it to summarize for me. I can summarize many more things in a day now, with far less effort, so it increases my writing productivity.

My sense of self, having suffered the loss of summary, heals pretty quickly as soon as I realize that much richness remains. In fact, the loss of summary highlights the richness that I might not have seen otherwise. ChatGPT is a lever that becomes a new lens on what my intellect is—and what my role in the creative process should be.

A key point is that this doesn’t have to be specifically about summarizing. If ChatGPT had been good at finding out new knowledge and terrible at summarizing, I would instead extol the virtues of my human ability to summarize. This is good and natural: Humans, above all, are adaptable. I define what’s interesting by what I can uniquely do. That’s a human process at work. And our adaptability is the thing we miss when we freak out about AI and ChatGPT.

Sure, it is an important fact that many jobs require a lot of summarizing, and that those jobs will change dramatically, or may no longer exist. Our society will do better if we face that head on, and take care of the people who will need to learn new skills or find new roles in the economy. 

But this has nothing to do with our underlying sense of self or what makes humans unique in the universe. That can emerge slightly changed, but intact, in a world with LLMs. This has happened before.

Technology has been changing our brains for generations

In his book The WEIRDest People in the World, Joseph Henrich tells a story about English bricklayer and convict William Buckley, who had been sent to Australia to serve his sentence. In 1803, Buckley and a few compadres escaped from their penal colony. He got separated from the group, all of whom died in the wilderness, and ended up running into an Aboriginal tribe who saved his life and adopted him.

Henrich tells this story because it teaches us something about how human beings change in response to technology. All of Buckley’s friends died in the wilderness, despite being almost identical genetically to the Aboriginal tribe that adopted Buckley. Why?

Buckley and his friends came from a modern culture, raised with a set of norms, beliefs, and ideas that equipped them to succeed in that environment. When they were thrust into a completely different one, they couldn’t survive. From the outside, Buckley might’ve looked quite similar to the Aboriginal people who could survive, but there was one distinct difference: They had cultural technology for thriving in that environment that he lacked.

Henrich argues that humans have evolved “brains to allow us to most effectively learn the ideas, beliefs, values, motivations, and practices we’ll need to survive and thrive in whatever ecological or social environments we end up in.” The way this happens is through culture, which is a dramatic accelerant on our ability to solve problems and thrive.

But Henrich writes, “[T]hese genetically evolved learning abilities aren’t simply downloading a cultural software package into our innate neurological hardware. Instead, culture rewires our brains and alters our biology—it renovates the firmware.”

In essence, our species—our psychology, biology, brains, and bodies—is shaped by culture. And culture is, to a large extent, a function of technology. Who we are is shaped by the technology we are surrounded by. 

ChatGPT is the latest in a long line of such cultural and technological changes that alter what it means to be human. 

We don’t need to wait for brain-computer interfaces for AI to modify our biology. That’s happening already. The question is: Can we use it to do good instead of evil? Can we use it to create more richness and beauty, instead of scarcity and ugliness?

I think we can. Let’s do it together.
