Capability Blindness and the Future of Creativity

We used to be sculptors. We're all about to be gardeners.


Humans tend to believe that the world is static—that things will be the same tomorrow as they are today, and as they were yesterday.

We leave no stone unturned in our hunt for opportunity, but we often don’t think to pause before we write something off as useless—and we don’t flip over old stones to see if anything’s changed. For example, the latest Claude model—Claude 3 Opus—is a fantastic writer. With the right prompt, it can write for short bursts in a voice that genuinely sounds 70-80 percent like me, you, or any other writer.

Claude mostly wrote this tweet, for example, though I edited it. I supplied Claude with examples of previous podcast transcripts and tweets, as well as some guidelines about how to adapt one into the other. Claude did the rest, and it did a fantastic job.
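The workflow here — pairing podcast transcript excerpts with the tweets they became, plus guidelines for adapting one into the other — is a few-shot prompt. Here's a minimal sketch of how such a prompt might be assembled. To be clear, this is a hypothetical reconstruction, not the actual prompt: the guidelines, example pair, and function names are invented for illustration.

```python
# Hypothetical sketch: assembling a few-shot prompt for
# transcript-to-tweet adaptation. Guidelines and examples are invented.

GUIDELINES = (
    "Adapt the transcript excerpt into a single tweet. "
    "Keep the speaker's voice, cut filler words, and stay under 280 characters."
)

# Each pair: (podcast transcript excerpt, the tweet it became).
EXAMPLE_PAIRS = [
    (
        "So the thing I keep coming back to is that the model is only as "
        "good as the examples you feed it...",
        "A model is only as good as the examples you feed it. Great examples "
        "in, and it starts to sound like you.",
    ),
]

def build_prompt(new_excerpt: str) -> str:
    """Assemble the prompt: guidelines, worked examples, then the new input."""
    parts = [GUIDELINES, ""]
    for transcript, tweet in EXAMPLE_PAIRS:
        parts.append(f"Transcript: {transcript}")
        parts.append(f"Tweet: {tweet}")
        parts.append("")
    parts.append(f"Transcript: {new_excerpt}")
    parts.append("Tweet:")
    return "\n".join(parts)

prompt = build_prompt(
    "And honestly, consistency beats brilliance over a long enough window..."
)
# The assembled prompt would then be sent to the model (e.g., via
# Anthropic's messages API); no API call is made in this sketch.
```

The point of the structure is that the worked examples and explicit guidelines do most of the work — the model's job is reduced to pattern-matching a well-demonstrated transformation.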

This may seem like a small feat, but GPT-4 can't do it. And you'd be surprised how much writing fits this pattern of summarizing content from one form and adapting it to another. As I wrote in a recent essay on this topic, once you start looking, you see summarizing everywhere.

But it seems like no one has noticed this step-change in capabilities.

That’s understandable. It’s a common mistake we make when evaluating parts of the world that move quickly: I call it capability blindness.

During the first big generative AI wave, which started last year, many of us grappled with the exciting—or scary—reality that chatbots might be able to mimic our unique voices and writing styles. I tried OpenAI’s GPT-3, then GPT-4, and quickly realized they were good but had a particular taint. They could help in the writing process—researching, supplying ideas, editing words—but couldn’t be trusted to write very much on their own. I couldn’t enlist AI as a ghostwriter just yet.

But over the last year, the newest language models have been noticeably better. Unfortunately, we are often capability blind: We don’t notice what’s new because we’re jaded by our old experiences and feel that it’s a waste of time to try again.

This is not a new phenomenon. In May 2012, after Facebook went public, the New York Times opinion columnist Ross Douthat argued that the social network was a bad business:

“It doesn’t make that much money, and doesn’t have an obvious way to make that much more of it, because (like so many online concerns) it hasn’t figured out how to effectively monetize its millions upon millions of users.”

At the time, he was right. In 2012, Facebook generated $5 billion in revenue and only $53 million in net income. A decade later, Douthat is—obviously—wrong. In 2023, Facebook, now known as Meta, generated roughly $134 billion in revenue and $23 billion in net income. What changed? The company figured out how to effectively monetize its massive user base—something that many observers, including Douthat, believed it couldn’t do.

My point here isn’t to dunk on Douthat; I am sure some of the things I’ve written will look silly a decade from now. It’s to remind you that the best way to latch onto the future is to think through affirmative possibilities, to remember that the world is in flux, and to keep turning over old stones.

This mindset is particularly important with AI. Some of the tasks it struggles with today will be child’s play by next year. Take Claude, which has improved by leaps and bounds with each new model, and will likely continue in this direction.

Now what? 

What’s rare in a world with infinite good writing?

If you believe what I wrote above, a good next question is: What becomes rare in a world where it is significantly cheaper to write well? (I’ll admit, I have a dog in this fight.)

First, while Claude 3 Opus can write quite well in my voice, it can’t write anything and everything in my voice. It’s really good for repetitive tasks that I do often, where I am effectively summarizing one piece of writing into another, and I have examples to provide.

But Claude still can’t write a good article as me—yet. It only works well in circumstances where the length isn’t too long, it has lots of examples to guide it, and it’s clearly summarizing from one form of content and adapting it for another. And I think it will be a long time before it can write great pieces that aren’t summaries of existing content. It’ll be longer still until it can write great long-form pieces. (Complexity scales super-linearly with the length of a piece of text, so longer pieces get harder and harder to output.)

For writing, the value sits with the things that are likely to remain hard to do with AI alone:

Original research

As I wrote a year ago, uncovering new facts is still going to be valuable. Writing them will be much more of a commodity. AI is able to handle 90 percent of the latter, which means reporters in the future will mostly have to be intrepid, with a nose for scoops.

Original thinking that is written in long-form

Writing that isn’t a summary of someone else’s work will remain difficult for AI for a while. Anything outside a well-defined format, anything that requires original thinking rather than repackaging already available work, is still going to be hard to produce.

Novel audience acquisition strategies

Audiences—and acquiring them—will be valuable in a world where anyone can write well. This is already true, and it will become even more true with AI. Every’s Evan Armstrong wrote about this last year: “More and more power will accrue in those companies that have novel acquisition methods that do not rely on any gatekeeper.” He’s right, and I’d add that having an owned audience to which you can distribute your content will also be important.

But there’s at least one more thing:

Consistency

Consistently publishing content (whether short- or long-form) is going to be a significant differentiator for creators or brands. If you are consistent, you can take up significant space in the mind of your audience. If you’re not—even if you’re brilliant—no one will remember you. (Think of all of the people who’ve had a few TikToks go viral but have never turned the success into a sustainable career. TikTok is consistent; they are not.)

Mindshare is hard to build, and it’s a slow process—but once you have it, it’s pretty durable. My Every co-founder Nathan Baschez wrote about this in his 2020 piece about why content is king. Mindshare comes with several benefits familiar to anyone who thinks about startups: network effects, brand power, and switching costs. These have always been factors for creators and media brands. But they’ve also been somewhat overlooked.

From sculptor to gardener: the future of creativity

The next question is: What is the future of creative work in this world? Are we looking at a future where no creative work is actually produced by humans?

Previous eras of creativity have mostly looked a bit like sculpting. A sculptor takes a block of material and carves it, slowly but surely, into shape. Nothing happens without her hand. Even when an assistant is involved, the sculptor pores over the project, because her input matters at every point in the process. So too with writing, or programming, or painting.

This era of creativity is going to look more like gardening. A gardener doesn’t grow plants directly. Instead, she sets up the conditions for the garden to grow. She takes care of the soil, the water, and the sunlight—and lets the plants do their thing.

So too with AI. As more of our time is spent being model managers, we won’t be directly making as much creative work. That’s like pulling up a plant to help it grow. Instead, we’ll be creating optimal conditions and letting the models do their work.

There’s one difference, though. A gardener can’t directly modify her plants. She can’t change their DNA by hand. But a skilled model manager can take any output of a model—sentence, code, image, or video—and modify any part of it themselves.

So we won’t have to leave sculpting behind as a creative skill. We’ll just be able to use our chisels and hammers more judiciously—and only when it really matters.


Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast How Do You Use ChatGPT? You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

