A world with thinking machines requires new thinking styles.
Our default thinking style in the West is scientific and rationalist. When was the last time you heard someone talking about a hypothesis or theory in a meeting? When was the last time, when sitting down to solve a problem, you reminded yourself to think from first principles? When was the last time you tried an experiment in your work or personal life?
Even the frameworks we use to understand business are scientific: It’s unlikely that Harvard Business School professor Michael Porter would have looked for or found five “forces” governing business without physics as inspiration; Clay Christensen’s jobs-to-be-done framework is close to an atomic theory of startup ideas.
We romanticize science and rationalism because it has been so successful. Since the Scientific Revolution, when Copernicus, Galileo, Descartes, and Newton began to think in this way, we have used rationalism to generate modernity. It's where rockets and vaccines come from, and how we got computers and smartphones.
But new technologies demand new thinking styles. As the AI age unfolds, we are shifting away from what former Tesla and OpenAI engineer Andrej Karpathy calls Software 1.0—software that consists of instructions written by humans, and which benefits from a scientific, rationalist thinking style.
Instead, we're moving into Software 2.0 (a shift that Michael Taylor recently wrote about), where we describe a goal that we want to achieve and train a model to accomplish it. Rather than having a human write instructions for the computer to follow, training works by searching through a space of possible programs until we find one that works. In Software 2.0, problems of science—which is about formal theories and rules—become problems of engineering, which is about accomplishing an outcome.
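To make "searching through a space of possible programs" concrete, here is a minimal sketch in Python (using PyTorch, with toy data invented purely for illustration). In Software 1.0, a human writes the rule; in Software 2.0, gradient descent searches the model's weight space until it finds a program with the behavior we asked for:

```python
import torch
import torch.nn as nn

# Software 1.0: a human writes the rule explicitly.
def program_1_0(x):
    return 2 * x + 1

# Software 2.0: we specify examples of the goal, not the rule.
# The "space of possible programs" is every setting of the model's weights.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1  # in real life these would be observed outcomes, not a formula

model = nn.Linear(1, 1)  # a tiny space of candidate programs
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Gradient descent "searches" that space until it finds a program that works.
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(model.weight.item(), model.bias.item())  # converges near 2 and 1
```

The same loop, scaled up by many orders of magnitude, is how language models and self-driving systems are trained.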
This shift—from science to engineering—will have a massive impact on how we think about solving problems, and how we understand the world. Here are some of my preliminary notes on how I think this shift will play out.
1. Essences vs. sequences
In a pre-AI world, whether you were building software or teams, or writing books or marketing plans, you needed to strip the problems you were facing down to their bare elements—their essence—and work your way forward from there. In building software, you needed to define your core user and the problem you wanted to solve; in writing books, you needed a thesis and an outline.
In a post-AI world, we are less concerned with essence and more concerned with sequence: the underlying chain of events that leads to a particular outcome. Language models do this when they predict the next word in a string of characters; self-driving cars do it when they predict where to drive next from a sequence of video, depth, and GPS data.
To understand this better, consider the case of a churn prevention feature for a SaaS business in a pre-AI world. In order to automatically prevent a customer from churning, you needed to define what a customer who might churn looked like with explicit rules—for example, if they hadn’t logged into your app in a certain number of months, or if their credit card was expiring soon. This is a search for essences.
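A minimal sketch of what those explicit rules might look like in Python (the customer fields here are hypothetical, purely for illustration):

```python
from datetime import date, timedelta

def might_churn(customer) -> bool:
    """Pre-AI churn detection: explicit, human-written rules.

    Assumes `customer` has `last_login` and `card_expiry` date
    attributes -- hypothetical fields for illustration.
    """
    today = date.today()
    if today - customer.last_login > timedelta(days=90):
        return True  # hasn't logged in for three months
    if customer.card_expiry - today < timedelta(days=30):
        return True  # credit card expires within a month
    return False
```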
In a post-AI world, by contrast, you don’t need to explicitly define what a customer who is about to churn looks like, or which interventions you might use in which circumstances.
All you have to do is identify sequences that lead to churn. For every customer who churns, you can feed their last 100 days of usage data into a classifier, a model that categorizes its inputs. Then you can do the same for customers who haven't churned. The result is a model that can identify who is likely to churn, in all of their many thousands of permutations, without any explicit rules. This is what it means to search for sequences.
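Here's a hedged sketch of that approach using scikit-learn, with synthetic activity sequences standing in for real usage logs:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one customer's last 100 days of activity (e.g. daily logins).
# In reality you'd pull these sequences from your usage logs.
n_customers, n_days = 1000, 100
X = rng.poisson(lam=2.0, size=(n_customers, n_days)).astype(float)

# Synthetic labels: customers whose activity trails off tend to churn.
trend = X[:, -14:].mean(axis=1) - X[:, :14].mean(axis=1)
y = (trend + rng.normal(scale=0.5, size=n_customers) < -0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No hand-written rules: the model finds the patterns that precede churn.
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The model never sees a rule like "inactive for 90 days"; it infers the patterns that precede churn directly from the sequences.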
2. Rules vs. patterns
Another way to think of essences versus sequences, especially when it comes to intellectual and creative tasks, is as a shift from a search for rules to a search for patterns.
In a pre-AI world, you needed to define the rules of the game you were playing—to think from first principles and apply them to your circumstances. In a post-AI world, you need to build and use models that recognize underlying patterns—patterns that can’t be reduced to simple rules.
Consider building software. Pre-AI, you needed to define exactly which tasks you wanted a user to be able to accomplish and exactly how you wanted the system to behave, so that programmers could encode those definitions into rules a computer could follow.
In a post-AI world, however, instead of defining rules, you need to look for examples. You don't need to explicitly lay out what users can and can't do with your app. You can create a mood board of your favorite UI elements, or a high-level list of how you want your application to behave—and AI will identify the patterns in your input and translate them into a ruleset.
Or maybe you’re running a creative team. In a pre-AI world, in order to create consistency, you needed to reduce your work down to principles and systems so that writers and designers could consistently capture your brand's voice and style.
In a post-AI world, you don't necessarily need to reduce your work in such a way. Instead, you can find examples that represent your taste and voice and feed them into models, which can reproduce the patterns they find without rule-following.
This technique arms each member of your creative team with a tool that can replicate your taste—one that can evolve with new examples, and that captures what previously couldn't be put into words.
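One common way to implement this is few-shot prompting: put a handful of representative examples directly into the prompt instead of writing a style guide. A minimal sketch using the OpenAI Python SDK (the brand-voice examples and model choice are placeholders of my own):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Examples of the brand voice -- no rules, just representative samples.
voice_examples = [
    "Ship it scrappy. Polish is what version two is for.",
    "We don't do jargon. If your grandmother can't read it, rewrite it.",
    "Short sentences. Big claims. Receipts to back them up.",
]

prompt = (
    "Here are examples of our brand voice:\n"
    + "\n".join(f"- {ex}" for ex in voice_examples)
    + "\n\nWrite a two-sentence product announcement for a new "
    "analytics dashboard, in the same voice."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```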
The currency in this world is good examples for training your pattern matcher, not explicit rules to follow.
3. Process vs. intuition
If you are searching for sequences instead of essences, it is so that you can build up intuition in place of process.
In a pre-AI world, in order to build an application, you needed to reduce your idea to a process—a set of rules by which your software would operate to accomplish the goal you set. Sometimes this was easy; for example, a customer relationship management tool like Salesforce is naturally reducible to rules.
In a post-AI world, you can build applications for tasks that can't be reduced to rules. Consider, for example, optical character recognition (OCR), the technology that allows computers to recognize text in images. Human-level OCR is not reducible to a set of rules, but deep learning approaches now common in apps like ChatGPT allow you to create software that has a kind of "intuition" for recognizing characters.
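As a sketch of what that learned "intuition" looks like, here is a small convolutional network in PyTorch that learns to recognize handwritten digits (MNIST, via torchvision) from labeled examples rather than from hand-written rules about strokes and shapes:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Examples, not rules: labeled images of handwritten digits.
train_data = datasets.MNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
loader = DataLoader(train_data, batch_size=64, shuffle=True)

# A small CNN -- nowhere in here is a rule like "a 7 has a horizontal bar."
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),  # ten output scores, one per digit
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(1):  # one pass is enough for a rough "intuition"
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```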
There are many other parts of the world where this type of intuitive thought process is important, but that software was previously unable to touch. Consider a venture capitalist evaluating a startup pitch or a doctor evaluating a patient. They may be able to express their thought process in words, but there is something underneath those words that is fundamentally ineffable—and, up until recently, untransferable.
In a post-AI world, this is different. Intuition is transferable and usable. It's not stuck in anyone's head, and it doesn't have to be reduced to a process.
4. Sculpting vs. gardening
If the raw material for creative work becomes sequences and intuition instead of rules, essences, and processes, how you do your work is going to look significantly different. It will look more like gardening than sculpting.
In a pre-AI world, creative work was similar to sculpting. Whether you were coding or writing, you were responsible for taking a block of marble and shaping it, bit by bit, into the form in your head. Every strike of the mallet was yours and yours alone, and every bend in the material was a product of your mind.
In a post-AI world, creative work is similar to gardening. The gardener's job isn't to carve a leaf out of dirt. It's to create the conditions of sun, soil, and water that allow a plant to grow. That is what coding with the AI code editor Cursor, for example, is like: It is much more about creating the conditions for work to happen—by prompting the model with what you want to build—than it is about typing code piece by piece.
5. Explanations vs. predictions
The search for rules and essences, the obsession with process and sculpting, is ultimately a search for explanations. Explanations are the holy grail of the West—they are what we search for in science, business, and life. Consider the questions we tend to ask any successful person: “Can you explain how you got here? What are your secrets?”
Consider the questions we ask of successful companies: “How do we explain their growth? How do we explain their legion of fans?” Our goal is to find explanations, because what you can explain you can control.
But explanations are notoriously hard to come by. The reason smart business people don't read books like Think Like Zuck: The Five Business Secrets of Facebook's Improbably Brilliant CEO Mark Zuckerberg—a real book, by the way—is that we know it's unlikely to contain the actual explanations for Mark Zuckerberg's success. Zuckerberg might be able to explain some of it, but even he would probably say that he has developed an underlying intuition guiding his decision-making that is not fully explainable.
In a post-AI world, we will prioritize predictions over explanations, especially in complex domains. Zuckerberg's intelligence might not be reducible to rules, but it can be encapsulated in a model if it is given the right sequences to train its intuition. This is starting to happen in science, too. The 2024 Nobel Prizes in both physics and chemistry went largely to computer scientists who built architectures for better predictions, not to physicists who had come up with better explanations.
The pursuit of predictions over explanations turns problems of science into problems of engineering. The question becomes not, “What is it?” but instead, “How do I build something that predicts it?”
The shift from science to engineering will be the biggest boost to progress in this century. It will move us beyond Enlightenment rationalism into totally new ways of seeing the world—and ourselves.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.