
Last week I wrote that we’d be publishing a few excerpts from a book I’m writing about the worldview I’ve developed by writing, coding, and living with AI. Here’s the first piece, about the differences between the old (pre-GPT-3) worldview and the new.—Dan Shipper
When I say the word “intelligent,” you probably think of being rational. But language models show us that this assumption is wrong.
To us in the West, smarts are about being able to explicitly lay out what you know and why you know it. For us, the root of intelligence is logic and reason to ward off superstition and groupthink; it is clear and concise definitions to eradicate vague and wooly-headed thinking; it is formal theories that explain the hidden laws of the world around us—simple, falsifiable, and parsimonious yet general enough to tie together everything in the universe from atoms to asteroids.
Our romantic picture of ourselves as “rational animals,” as Aristotle said, has produced everything in the modern world—rockets, trains, medicines, computers, smartphones.
But this picture is incomplete. It contains a huge blind spot: It neglects the fundamental importance of intuition in all of intelligent behavior—intuition which is by nature ineffable; that is to say, not fully describable by rational thought or formal systems.
How to build a thinking machine
The best way to understand what I’m talking about is to imagine trying to build a thinking machine. How would you do it?
Let’s start with a task, something easy and basic that humans do every day. Maybe something like scheduling an appointment.
Let’s say we’re a busy executive who gets the following appointment request:
New request
From: Mona Leibnis
Hey,
I’m available Monday at 3 p.m., Tuesday at 4 p.m., Friday at 6 p.m.
When can you meet?
We want to build a machine to intelligently schedule an appointment. How would we go about it?
We’d probably start by giving our machine a couple of rules to follow:
- First, check available time slots on my calendar.
- Then, compare my open slots to the open slots on the invitee’s calendar.
- If you find one, add the appointment to the calendar.
That all seems pretty reasonable. You could definitely write a computer program to follow those rules (a minimal sketch follows below). But there's a problem: The rules we've specified so far can't handle urgency or importance.
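Here's that program, with hypothetical function and variable names, just the three rules above made literal:

```python
from datetime import datetime

def schedule(my_open_slots: set[datetime],
             their_open_slots: set[datetime],
             calendar: list[tuple[datetime, str]],
             title: str) -> bool:
    """Rule 1: check my open slots. Rule 2: compare with the invitee's.
    Rule 3: if there's a match, add the appointment."""
    matches = sorted(my_open_slots & their_open_slots)  # rules 1 and 2
    if matches:
        calendar.append((matches[0], title))            # rule 3
        return True
    return False
```

Notice that nothing in this program knows which meetings matter.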
For example, consider a case where you desperately want to meet with someone and you’d be willing to move another appointment in order to make the time work. Now we have to introduce a new rule:
- If it is urgent that I meet with the invitee, you can reschedule a less urgent appointment in order to make the meeting happen sooner.
But this rule is incomplete because it introduces the concept of urgency without defining it. How do we know what’s urgent? Well, there must be some rules for that too. So we need to delineate them.
In order to measure urgency, we need some conception of the different people in your life—who your clients and potential clients are, and which of them are important.
Now things are starting to get hairy. In order to determine the relative importance of clients, we have to know about your business aims—and about which clients are likely to close, and which clients are likely to pay a lot of money, and which clients are likely to stay on for a long time. And don’t forget—which clients were introduced by an important friend whom you need to impress, so while they may not be directly responsible for a lot of revenue, they’re still a priority.
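Here's a sketch of where this leads. Everything in it is hypothetical, and every field and weight smuggles in a concept that would itself need rules:

```python
from dataclasses import dataclass

@dataclass
class Invitee:
    is_client: bool
    expected_revenue: float       # requires a model of my business
    likelihood_to_close: float    # requires a model of their intentions
    expected_tenure_years: float  # requires a model of the future
    introduced_by_vip: bool       # requires a model of my social world

def urgency(person: Invitee) -> float:
    """A made-up scoring rule. Each term hides an undefined sub-theory,
    and the weights are yet more rules we'd have to justify."""
    score = 0.0
    if person.is_client:
        score += person.expected_revenue * person.likelihood_to_close
        score += 0.1 * person.expected_tenure_years
    if person.introduced_by_vip:
        score += 100.0  # how much is impressing a friend worth? Another rule.
    return score
```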
This is only a taste of the rules we’d have to define in order to build an adequate automatic scheduling system. And that’s just for dealing with calendars!
The problem that we’re finding is that it’s very hard to lay everything out explicitly—because everything is interconnected. To paraphrase the late astronomer Carl Sagan: If you wish to schedule a meeting from scratch, you must first define the universe.
The old, Western worldview
This approach—which seemed most natural to us—is the exact one that the first generation of artificial intelligence researchers took to try to build AI.
In what is widely considered the founding document of the field of artificial intelligence, a 1955 proposal for a summer research project, John McCarthy and his colleagues outlined their vision for how to build intelligence:
“It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others.”
In other words, human thinking is just arranging words according to rules, and making a new idea requires adding a new word and rules for how it links to others. This approach, called symbolic AI, aimed to create intelligent machines by encoding human knowledge into formal rules and symbols—just like we did in our scheduling example.
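In miniature, the symbolic approach looks something like this sketch: facts and rules written out as explicit symbols, with new knowledge derived by mechanically applying the rules. (The facts and rules here are invented for illustration.)

```python
facts = {("client", "mona"), ("high_revenue", "mona")}

rules = [
    # If X is a client and X is high-revenue, then X is important.
    (lambda f, x: ("client", x) in f and ("high_revenue", x) in f, "important"),
    # If X is important, then meeting X is urgent.
    (lambda f, x: ("important", x) in f, "urgent"),
]

def infer(facts: set) -> set:
    """Forward chaining: apply every rule to every entity until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            for x in {entity for _, entity in derived}:
                if condition(derived, x) and (conclusion, x) not in derived:
                    derived.add((conclusion, x))
                    changed = True
    return derived

print(infer(facts))  # derives ("important", "mona"), then ("urgent", "mona")
```

Run it and the system dutifully derives that meeting Mona is urgent. But every fact and every rule had to be spelled out by hand first.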
But it failed miserably—for the same reasons ours did. Trying to formalize even basic thinking in terms of explicit rules turned into an infinite task, requiring more computing power than would be available in the lifetime of the universe. Yet it remained the dominant paradigm in AI research for 30-odd years.
Why is this approach so appealing to us? Why was it the first approach AI researchers took? And why, even today, does it seem like such an intuitive picture of intelligence?
The answer is that this way of thinking about the world and our own minds is woven into the fabric of our culture. It is a form of thinking invented by the ancient Greek heretic Socrates and reborn in the Enlightenment through the work of Descartes, Galileo, and Newton.
It comes from an underlying set of assumptions so basic that they are practically invisible:
- If we think logically and clearly enough about anything in the world, we can figure it out (like how you can solve any puzzle if you stare at it long enough).
- Once we figure something out, we should be able to write it down in a way that is so clear and explicit that anyone else can understand and test it for themselves (like a good recipe that anyone can follow).
- If something is really true, it should be true no matter where you are or who you are (gravity works the same whether you're in Tokyo or Toronto).
- Real truth works like math and can't contradict itself (A is A and B is B, and something can’t be both A and B at the same time).
- Finally, the truth is always simple, and if it’s not simple, it means it’s not yet the truth.
If you believe that the world is composed of atoms rather than spirit—it is because of this style of thinking. If you’ve ever washed your hands so that you don’t get sick—it’s because of this style of thinking. If you believe in the existence of inalienable human rights—it’s because of this style of thinking. If you’ve ever laughed when the new person you’re dating asks, unabashedly, for your exact birth time, or believed in an efficient market or that humans follow their incentives or that a seizure is a sign of a brain disorder rather than possession by the devil, or woken up to see the sun rising and felt wonder at the many thousands of miles that the Earth traveled through space as you slept—it’s because of this style of thinking.
The drive for simple, general, abstract, universal theories has been a resounding success. It’s responsible for our progress out of darkness and superstition and into modernity. It has brought about the Western world. But it also fails when it becomes totalizing—when we allow our search for these truths to blind us to the real complexity of the world, and to alternative ways of seeing it.
What we can say about the world is not all that we know about it. We know many things that we can’t say explicitly. And the drive to say everything explicitly is doomed to failure.
Seeing like a language model
But for the first time, we have a tool—neural networks—that allows us to know and work with what can’t be defined or said explicitly. Neural nets and language models don’t work by explicit rules. Instead, they absorb tacit patterns in language—statistical regularities across billions of words—and, in doing so, recreate the hidden web of associations that makes our world hang together. Out of those patterns emerges not just syntax but sense. What they give us is a way to capture the tacit, the intuitive, the unsaid parts of intelligence that our old, rule-bound worldview could never reach.
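A toy version of the principle: represent each word by the company it keeps, and words used in similar contexts end up with similar vectors, without anyone ever defining them. (Real models learn vastly richer representations; this sketch, with its made-up corpus, only illustrates the direction.)

```python
from collections import Counter
from math import sqrt

corpus = ("the chef seasoned the soup . the cook seasoned the stew . "
          "the trader watched the market . the investor watched the market .").split()

def context_vector(word: str, window: int = 2) -> Counter:
    """Represent a word by counts of the words that appear near it."""
    vec = Counter()
    for i, token in enumerate(corpus):
        if token == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    vec[corpus[j]] += 1
    return vec

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norms = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norms if norms else 0.0

# "chef" and "cook" keep similar company, so their vectors land closer
# together than "chef" and "trader" do. No definitions required.
print(cosine(context_vector("chef"), context_vector("cook")))    # higher
print(cosine(context_vector("chef"), context_vector("trader")))  # lower
```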
Large language models allow us to see anew how we think and learn: not through the rigid application of universal laws, but through the recognition of patterns across vast landscapes of experience. When a chef seasons a dish, when a mother knows her child is lying, when a trader senses a market shift—these aren't moments of mechanical calculation, but of pattern recognition so subtle and multifaceted that they resist explicit definition. Language models, in their very architecture, embody this more nuanced way of knowing. They remind us that intelligence isn't about following rules, but about dancing with context.
If we learn to see like a language model, it will lead us to an entirely new worldview—the successor to Western rationality. What follows is a sketch of this new worldview: what it is, how it contrasts with the old, and the transformations it enables.
Reality as a web, not a chain
The old Western worldview saw reality as one long sequence of linear causes and effects where A leads to B leads to C. To see reality in this way is to see it as something like a clock or a computer: a mechanical system where each part affects the next in a sequence. By breaking up and isolating each part of the system, we can construct a complete explanation for how it works.
The new worldview sees reality as an interconnected web of relationships. Chains exist—but they are always simplifications of a larger web. Everything is connected to many other things, existing in a complex network of mutual influences. To see reality in this way is to see it as something like a sentence or a song: Each note in a song becomes what it is through its relationship to the notes that came before it, and each of those notes becomes what it is through its own relationships.
Meaning, in other words, lies in differences.
Meaning as contrasts rather than definitions
The old Western worldview sought to define things. It sought to reduce them to their essence—to make them explicit in order to understand them.
But language models see something different: Meaning emerges through contrast and relationship.
When they process text, they don't isolate words like carbon atoms under a microscope. A carbon atom in a leaf is identical to a carbon atom in a trunk, but that tells you nothing about what makes a leaf a leaf, or a trunk a trunk. Instead, language models see how words work together in the larger patterns of meaning. The word "dark" means something different when it's part of "dark chocolate" versus "dark thoughts" versus "dark matter," just as a carbon atom means something different when it's part of chlorophyll versus cellulose. What matters isn't the atomic unit, but its place in the whole: how it connects, what patterns it forms, and what larger structures it helps create.
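A cartoon of the mechanism: a word's vector gets blended with its neighbors', so the same word comes out different in different company. (The vectors and the blending rule here are invented; real attention mechanisms are far more elaborate.)

```python
# Hypothetical three-dimensional vectors; the axes mean nothing in particular.
word_vecs = {
    "dark":      [1.0, 0.0, 0.0],
    "chocolate": [0.0, 1.0, 0.0],
    "thoughts":  [0.0, 0.0, 1.0],
}

def contextualize(word: str, neighbor: str, mix: float = 0.5) -> list[float]:
    """Shift a word's vector toward its neighbor's, a crude stand-in for attention."""
    w, n = word_vecs[word], word_vecs[neighbor]
    return [(1 - mix) * wi + mix * ni for wi, ni in zip(w, n)]

print(contextualize("dark", "chocolate"))  # [0.5, 0.5, 0.0]
print(contextualize("dark", "thoughts"))   # [0.5, 0.0, 0.5]
```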
This isn't a new discovery. Heraclitus saw that opposites define each other: Up and down, hot and cold, living and dead are one reality seen from different angles. Philo wrote of the "dividing Logos" that brings things into existence through contrasts. Just as "hot" only exists in relation to "cold" and "good" through its contrast with "bad," words in a language model's web of meaning create axes of significance through their oppositions.
What's new is that we can now see this principle at work in our most advanced technology and use it to build tools that work with meaning the way the human mind does.
But this raises a crucial question: If meaning emerges from an intricate web of relationships and contrasts, how can we ever explain anything completely?
Explanations as powerful, but always incomplete
When everything is connected to everything else, any attempt to explain one thing requires explaining everything—an impossible task.
The old Western worldview tried to escape this problem through reduction: breaking reality into simpler parts that could be explained independently. Like a mechanic taking apart an engine, we assumed we could understand the whole by understanding each piece in isolation. This approach gave us tremendous power—it helped us build rockets and cure diseases—but it also has fundamental limits when applied to complex systems like markets, brains, and companies.
The new worldview admits that certain things can’t be explained or put into words. Systems that appear to follow clear rules at one level emerge from a deeper level that doesn't work by those rules at all. A gene's function changes based on its cellular environment, an individual's behavior shifts depending on their social context, a word's meaning transforms through its relationships with other words.
The same is true of language, economies, minds, companies, and social systems. These systems are composed of parts, but their behavior is exceedingly challenging to reduce to explicit rules. This doesn't mean we should abandon the search for explanations; rather, we need to recognize that some things can only be understood by other forms of knowing that look more like intuition than rationality.
Correlation over causation
If reality is a web rather than a chain, then our notion of causation breaks down. Instead of simple A-causes-B relationships, we find that everything affects everything else. Does depression cause poor performance at work? Or does poor performance at work cause depression? Rationality looks for a single answer; seeing like a language model, we recognize that the answer is both.
The new worldview recognizes that sometimes we can indeed find a simple chain of causes in the complex web of reality. But often, to understand what comes next requires applying many, many thousands of subtle, interacting correlations—patterns that can't be reduced to simple if-then statements. By embracing correlation over causation, we gain a richer understanding of how complex systems behave and evolve over time.
Where do these correlations come from? Context.
Context as priceless rather than as worthless
The old Western worldview saw context as noise to be filtered out. You sift away extraneous variables to get to truth in the form of singular causes and elegant, universal explanations for phenomena. This is how scientists in laboratories work: A good experiment is precisely controlled, with all extraneous variables eliminated, so that you can see the pure, context-free truth.
The new worldview sees context as essential to meaning and understanding. Just as a word's meaning depends on the sentence it appears in, and a sentence's meaning depends on the paragraph and cultural context, so too does any phenomenon depend on its wider web of relationships. What appears true in isolation may mean something completely different when viewed within its full context. This holistic perspective acknowledges that stripping away context often strips away the very essence of what we're trying to understand.
This has profound implications because we are also part of the context.
Participatory rather than objective knowledge
If context shapes meaning, then who we are—what we believe, how our brains function, what happened in our past, what our ideas and culture are—shapes the world we see.
To put it a different way: Knowledge is always participatory. This isn’t to say that there is no reality or that anything goes. Rather, because reality is so complex, anything that we can say explicitly about it is by nature incomplete. Our understanding is always shaped by our perspective, methods, and the tools we use to observe and measure. This participation isn't a flaw to be eliminated, but rather an essential feature of knowledge itself.
This view doesn't imply that general theories aren't valuable. But it recognizes their limitations in complex domains, in which the deepest knowledge lives in the particular. A chef's understanding of cooking can't be reduced to a recipe book; a doctor's knowledge isn't captured in diagnostic flowcharts. The real expertise lives in their experience with countless particular situations.
Pluralism over monotheism
Western science picked up where Christianity left off and replaced one monotheistic God with a different monotheistic God: the laws of physics. The ultimate aim of Western science and rationality was the grand theory, a theory of everything that could explain the universe in a single, elegant formula.
The new worldview recognizes the limits of abstract, general, parsimonious explanations—and thinks of all explanations as at best incomplete. It trades monotheism for pluralism. This pluralism sees that different frameworks and perspectives can each hold valuable aspects of reality. No single view tells the whole story.
Paradox and tension over resolution and certainty
The old Western worldview’s ideal end state of knowledge was the reduction of the world to a machine. Which has its appeals. A machine implies the elimination of uncertainty—and, therefore, ultimate control.
This quest for certainty worked in many areas of knowledge; it also blinded us to the many different areas of the world where the elimination of uncertainty is impossible.
The new worldview, instead of trying to eliminate uncertainty, embraces it as an essential feature of reality. We learn to work with uncertainty rather than against it, recognizing that some of our most profound insights come from exploring the edges of what we know. This approach allows us to remain open to new possibilities, adapt our understanding as circumstances change, and maintain intellectual humility.
When artists face a blank page or entrepreneurs a dwindling bank account or engineers a complex technical problem, they embrace uncertainty as a creative force, knowing that the path forward emerges through engagement with the world rather than abstract pre-planning. This mindset shift from control to participation, from certainty to possibility, opens up new ways of thinking and problem solving. It allows us to navigate complexity with greater resilience and adaptability, recognizing that our greatest breakthroughs often come when we're willing to dance with the unknown.
A philosophy of tools
If you believe that there is a shift underway in how we see the world, if this new worldview makes sense to you, there is an important question to answer: Why now?
The answer lies in the old adage: To a person with a hammer, everything looks like a nail. Another way to say this is that our tools change how we see the world, and how we see ourselves.
Now we've built new tools—language models—that can work with what previously only our intuition could handle. They reveal the world's inherent web-like nature. To truly see complex systems, we needed tools that were themselves complex systems.
So the question becomes: What do we do with this new worldview?
In the weeks ahead, we’ll look at how science, business, and creativity will change through its lens.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.