
Seeing Like a Language Model

AI and the successor to the old, Western worldview


Last week I wrote that we’d be publishing a few excerpts from a book I’m writing about the worldview I’ve developed by writing, coding, and living with AI. Here’s the first piece, about the differences between the old (pre-GPT-3) worldview and the new.—Dan Shipper


When I say the word “intelligent,” you probably think of being rational. But language models show us that this assumption is wrong.

To us in the West, smarts are about being able to explicitly lay out what you know and why you know it. For us, the root of intelligence is logic and reason to ward off superstition and groupthink; it is clear and concise definitions to eradicate vague and woolly-headed thinking; it is formal theories that explain the hidden laws of the world around us—simple, falsifiable, and parsimonious yet general enough to tie together everything in the universe from atoms to asteroids.

Our romantic picture of ourselves as “rational animals,” as Aristotle said, has produced everything in the modern world—rockets, trains, medicines, computers, smartphones.

But this picture is incomplete. It contains a huge blind spot: It neglects the fundamental importance of intuition in all intelligent behavior—intuition that is by nature ineffable; that is to say, not fully describable by rational thought or formal systems.

How to build a thinking machine

The best way to understand what I’m talking about is to imagine trying to build a thinking machine. How would you do it?

Let’s start with a task, something easy and basic that humans do every day. Maybe something like scheduling an appointment.

Let’s say we’re a busy executive who gets the following appointment request:


New request

From: Mona Leibnis

Hey,

I’m available Monday at 3 p.m., Tuesday at 4 p.m., Friday at 6 p.m.

When can you meet?


We want to build a machine to intelligently schedule an appointment. How would we go about it?

We’d probably start by giving our machine a few rules to follow:

  1. First, check available time slots on my calendar.
  2. Then, compare my open slots to the open slots on the invitee’s calendar.
  3. If you find one, add the appointment to the calendar.

That all seems pretty reasonable. You could definitely write a computer program to follow those rules. But there’s a problem: The rules we’ve specified so far can’t handle urgency or importance.
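Before tackling that problem, it helps to see how little code the three rules actually need. Here is a minimal, hypothetical Python sketch; the dates, the calendar structure, and the function name are all invented for illustration:

```python
from datetime import datetime

# Hypothetical sketch of a scheduler that follows the three rules above.
# Calendars are modeled as lists of open datetime slots.

def schedule_meeting(my_open_slots, invitee_open_slots, calendar, invitee_name):
    # Rule 1: check my available time slots.
    # Rule 2: compare them to the invitee's open slots.
    common = sorted(set(my_open_slots) & set(invitee_open_slots))
    # Rule 3: if there's a match, add the appointment to the calendar.
    if common:
        slot = common[0]
        calendar.append({"time": slot, "with": invitee_name})
        return slot
    return None  # The rules say nothing about what to do when there's no overlap.

# Mona offered Monday at 3 p.m., Tuesday at 4 p.m., and Friday at 6 p.m.
monas_slots = [datetime(2025, 6, 2, 15), datetime(2025, 6, 3, 16), datetime(2025, 6, 6, 18)]
my_slots = [datetime(2025, 6, 3, 16), datetime(2025, 6, 4, 10)]

calendar = []
print(schedule_meeting(my_slots, monas_slots, calendar, "Mona Leibnis"))
# -> 2025-06-03 16:00:00 (Tuesday at 4 p.m.)
```

Nothing in this sketch, though, has any notion of urgency or importance.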

For example, consider a case where you desperately want to meet with someone and you’d be willing to move another appointment in order to make the time work. Now we have to introduce a new rule:

  4. If it is urgent that I meet with the invitee, you can reschedule a less urgent appointment in order to make the meeting happen sooner.

But this rule is incomplete because it introduces the concept of urgency without defining it. How do we know what’s urgent? Well, there must be some rules for that too. So we need to delineate them.

In order to measure urgency, we have to have some conception of the different people in your life—who your clients and potential clients are, and which of them are important and which aren’t.

Now things are starting to get hairy. In order to determine the relative importance of clients, we have to know about your business aims—and about which clients are likely to close, and which clients are likely to pay a lot of money, and which clients are likely to stay on for a long time. And don’t forget—which clients were introduced by an important friend whom you need to impress, so while they may not be directly responsible for a lot of revenue, they’re still a priority.
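To give a flavor of where this leads, here is a hypothetical sketch of what an “urgency” rule might start to look like once you try to write it down explicitly. Every attribute, name, and threshold below is invented for illustration; the point is how quickly the special cases multiply:

```python
from dataclasses import dataclass

# Hypothetical sketch: pinning down "urgency" as explicit rules.

@dataclass
class Invitee:
    name: str
    is_client: bool = False
    likelihood_to_close: float = 0.0
    expected_annual_revenue: float = 0.0
    expected_retention_years: float = 0.0
    referred_by: str | None = None

def urgency(invitee: Invitee, important_friends: set[str]) -> int:
    score = 0
    if invitee.is_client:
        score += 2
    if not invitee.is_client and invitee.likelihood_to_close > 0.5:
        score += 3   # a prospect who is likely to close
    if invitee.expected_annual_revenue > 100_000:
        score += 2   # likely to pay a lot of money
    if invitee.expected_retention_years > 2:
        score += 1   # likely to stay on for a long time
    if invitee.referred_by in important_friends:
        score += 3   # introduced by a friend you need to impress
    # ...and we still haven't touched deadlines, travel, time zones, or how
    # any of this should trade off against bumping an existing meeting.
    return score

# Rule 4: reschedule a less urgent appointment to make a more urgent one happen sooner.
def can_bump(existing: Invitee, new: Invitee, important_friends: set[str]) -> bool:
    return urgency(new, important_friends) > urgency(existing, important_friends)

# Example: a prospect referred by an important friend outranks a small long-standing client.
friends = {"Alex"}
prospect = Invitee("Dana", likelihood_to_close=0.7, referred_by="Alex")
small_client = Invitee("Lee", is_client=True, expected_retention_years=5)
print(can_bump(existing=small_client, new=prospect, important_friends=friends))  # True
```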

This is only a taste of the rules we’d have to define in order to build an adequate automatic scheduling system. And that’s just for dealing with calendars!

The problem that we’re finding is that it’s very hard to lay everything out explicitly—because everything is interconnected. To paraphrase the late astronomer Carl Sagan: If you wish to schedule a meeting from scratch, you must first define the universe.

The old, Western worldview

This approach—which seemed most natural to us—is the exact one that the first generation of artificial intelligence researchers took to try to build AI.


Comments

Harald Huebner 2 months ago

Hi Dan,
Amazing reality check. I'm an alumnus of the Capra Course, where I learned about the web of life and its interconnectedness. Reality is visible through the context and experience of each person, right down to the senses. I'll read your text a few more times.

Lorin Ricker 2 months ago

Dan, in many ways, it seems that, in this article and your previous "Five New Thinking Styles for Working with Thinking Machines", you are channeling physicist Prof. David Deutsch's books "The Fabric of Reality" (1997) and, especially, "The Beginning of Infinity" (2011), wherein he asserts (these quotations from his 2009 TED Talk): "The search for hard-to-vary explanations is the origin of all progress." and "That the truth consists of hard-to-vary assertions about reality is the most important fact about the physical world." His core view is that better and hard-to-vary explanations lie at the core of humanity's progress, especially since the Enlightenment.

Your discussion herein (e.g., that "This mindset shift from control to participation, from certainty to possibility, opens up new ways of thinking and problem solving. It allows us to navigate complexity with greater resilience and adaptability, recognizing that our greatest breakthroughs often come when we're willing to dance with the unknown.") seems fully resonant with Deutsch's worldview. Yet in your "Five New Thinking Styles...", you observe that "The search for rules and essences, the obsession with process and sculpting, is ultimately a search for explanations. Explanations are the holy grail of the West—they are what we search for in science, business, and life." Exactly as per Deutsch.

Yet, extending these ideas: "The pursuit of predictions over explanations turns problems of science into problems of engineering. The question becomes not, 'What is it?' but instead, 'How do I build something that predicts it?' The shift from science to engineering will be the biggest boost to progress in this century. It will move us beyond Enlightenment rationalism into totally new ways of seeing the world—and ourselves." Exactly.

Which is why the coming AI revolution -- we're just at its brink -- will be the single most radical, evolutionary, and disruptive world-shaking change in which we'll be privileged (or doomed) to participate. "May we live in interesting times..." will be an understatement. Many thanks.

@michael.fisher 2 months ago

This really resonates, especially the web vs. chain framing. One thing I'm wrestling with: while LLMs can theoretically synthesize across contexts, in practice (today anyway) they often narrow to immediate tasks—becoming 'single stack developers' rather than maintaining the broader systems view. Humans still do (or coordinate) most of the synthesis work.

Regardless, this view maps beautifully to Russell Ackoff's analysis vs. synthesis distinction from systems thinking. I sent you some thoughts on that via email (apologies it's a long message). Would love to hear how you see this evolving.

Nicholas Gruen 2 months ago

You're familiar with Michael Polanyi, I presume. This piece, and this development, would have been music to his ears.

Nicholas Gruen 2 months ago

I wrote something similar - before LLMs - or before they'd achieved what they have now - here: https://clubtroppo.com.au/2020/03/09/the-ghost-of-descartes-economics-purposes-perspectives-and-practical-problem-solving-part-one/

Rolf Schulte Strathaus 2 months ago

This was a great post. It changes the way to do things: from structuring and organizing to providing context and preferences. I realized that I have moved to working this way. You gave the underlying reasoning for this. Thanks.

Jack Cohen about 2 months ago

What a read!

2 quotes came to mind:

"We see things not as they are but as we are." -The Talmud

The part about our yearning for frameworks that explain it all reminded me of a (very) short story by Borges:

Of Exactitude in Science

…In that Empire, the craft of Cartography attained such Perfection that the Map of a Single province covered the space of an entire City, and the Map of the Empire itself an entire Province. In the course of Time, these Extensive maps were found somehow wanting, and so the College of Cartographers evolved a Map of the Empire that was of the same Scale as the Empire and that coincided with it point for point. Less attentive to the Study of Cartography, succeeding Generations came to judge a map of such Magnitude cumbersome, and, not without Irreverence, they abandoned it to the Rigours of sun and Rain. In the western Deserts, tattered Fragments of the Map are still to be found, Sheltering an occasional Beast or beggar; in the whole Nation, no other relic is left of the Discipline of Geography.

—From Travels of Praiseworthy Men (1658) by J. A. Suarez Miranda