DALL-E/Every illustration.

When AI Gets More Capable, What Will Humans Do?

We used to be sculptors. We're all about to be gardeners.



Predicting the future can feel like a fool’s errand. But so is writing it off. As humans peering into the abyss, we tend to assume progress in some ways and stasis in others. In this essay by Dan Shipper, which we’re republishing to cap off a week of prompt engineer (and new Every columnist) Michael Taylor’s tactical AI pieces, he warns against capability blindness: the often false assumption that technology won’t drastically improve over time. He also confronts the problem of what role humans will play when AI creativity improves, a question that led Every to launch our new tool, Spiral, which you can try out. We’ll resume regular publishing on Monday, July 8, after the July 4 holiday in the U.S.—Kate Lee



Humans tend to believe that the world is static—that things will be the same tomorrow as they are today, and as they were yesterday.

We leave no stone unturned in our hunt for opportunity, but we rarely pause before we write something off as useless, and we don’t flip over old stones to see if anything’s changed. For example, Anthropic’s Claude 3 Opus model is a fantastic writer. With the right prompt, it can write for short bursts in a voice that genuinely sounds 70-80 percent like me, you, or any other writer.

Claude mostly wrote this tweet, for example, though I edited it. I supplied Claude with examples of previous podcast transcripts and tweets, as well as some guidelines about how to adapt one into the other. Claude did the rest, and it did a fantastic job. It did so well, in fact, that we built a tool specifically to make this kind of work easier, Spiral.
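Every hasn’t published how Spiral works internally, but the workflow described here (example transcript-and-tweet pairs plus adaptation guidelines) is classic few-shot prompting. As a rough, hypothetical sketch, assembling such a prompt before sending it to a model like Claude 3 Opus might look like this (all function and variable names are illustrative, not from Spiral):

```python
# Hypothetical sketch of few-shot prompt assembly for adapting podcast
# transcripts into tweets. The resulting string would be sent to a model
# such as Claude 3 Opus via its API; the assembly itself needs no network.

def build_transcript_to_tweet_prompt(examples, new_transcript, guidelines):
    """Build a few-shot prompt from (transcript, tweet) example pairs."""
    parts = [
        "You adapt podcast transcripts into tweets in the host's voice.",
        f"Guidelines: {guidelines}",
    ]
    # Each past pair shows the model the mapping it should imitate.
    for transcript, tweet in examples:
        parts.append(f"Transcript:\n{transcript}\nTweet:\n{tweet}")
    # End with the new transcript and an open "Tweet:" slot to complete.
    parts.append(f"Transcript:\n{new_transcript}\nTweet:")
    return "\n\n".join(parts)

prompt = build_transcript_to_tweet_prompt(
    examples=[(
        "We talked about capability blindness and why people stop retesting AI.",
        "AI models keep improving. Retest your old assumptions.",
    )],
    new_transcript="Today's episode covers why the newest models write so well.",
    guidelines="Keep it under 280 characters and match the host's tone.",
)
```

The key design point is that the examples, not explicit instructions, carry most of the signal about voice; the guidelines only constrain format.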

This may seem like a small feat, but GPT-4 can’t do this. And you’d be surprised how much writing fits this pattern: summarizing content in one form and adapting it to another. As I wrote in a recent essay on this topic, once you start looking, you see this kind of summarizing everywhere.

But it seems like no one has noticed this step change in capabilities.

That’s understandable. It’s a common mistake we make when evaluating parts of the world that move quickly: I call it capability blindness.

During the first big generative AI wave, which started last year, many of us grappled with the exciting—or scary—reality that chatbots might be able to mimic our unique voices and writing styles. I tried OpenAI’s GPT-3, then GPT-4, and quickly realized they were good but had a particular taint. They could help in the writing process—researching, supplying ideas, editing words—but couldn’t be trusted to write very much on their own. I couldn’t enlist AI as a ghostwriter just yet.

But over the last year, the newest language models have been noticeably better. Unfortunately, we are often capability blind: We don’t notice what’s new because we’re jaded by our old experiences and feel that it’s a waste of time to try again.

This is not a new phenomenon. In May 2012, after Facebook went public, the New York Times opinion columnist Ross Douthat argued that the social network was a bad business:

“It doesn’t make that much money, and doesn’t have an obvious way to make that much more of it, because (like so many online concerns) it hasn’t figured out how to effectively monetize its millions upon millions of users.”

At the time, he was right. In 2012, Facebook generated $5 billion in revenue and only $53 million in net income. A decade later, Douthat has been proven obviously wrong. In 2023, Facebook, now known as Meta, generated roughly $134 billion in revenue and $39 billion in net income. What changed? The company figured out how to effectively monetize its massive user base, something that many observers, including Douthat, believed it couldn’t do.

My point here isn’t to dunk on Douthat; I am sure some of the things I’ve written will look silly a decade from now. It’s to remind you that the best way to latch onto the future is to think through affirmative possibilities, to remember that the world is in flux, and to keep turning over old stones.
