When AI Gets More Capable, What Will Humans Do?

We used to be sculptors. We're all about to be gardeners.

DALL-E/Every illustration.


Predicting the future can feel like a fool’s errand. But so is writing it off. As humans peering into the abyss, we tend to assume progress in some ways and stasis in others. In this essay by Dan Shipper, which we’re republishing to cap off a week of prompt engineer (and new Every columnist) Michael Taylor’s tactical AI pieces, he warns against capability blindness: the often false assumption that technology won’t drastically improve over time. He also confronts the problem of what role humans will play when AI creativity improves, a question that led Every to launch our new tool, Spiral, which you can try out. We’ll resume regular publishing on Monday, July 8, after the July 4 holiday in the U.S.—Kate Lee



Humans tend to believe that the world is static—that things will be the same tomorrow as they are today, and as they were yesterday.

We leave no stone unturned in our hunt for opportunity, but we rarely pause before we write something off as useless, and we don't flip over old stones to see if anything's changed. For example, Anthropic's Claude 3 Opus is a fantastic writer. With the right prompt, it can write in short bursts in a voice that genuinely sounds 70-80 percent like me, you, or any other writer.

Claude mostly wrote this tweet, for example, though I edited it. I supplied Claude with examples of previous podcast transcripts and tweets, as well as some guidelines about how to adapt one into the other. Claude did the rest, and it did a fantastic job. It did so well, in fact, that we built Spiral, a tool specifically designed to make this kind of work easier.
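
For the technically curious, here's roughly what that workflow looks like in code: a minimal sketch of the few-shot pattern using the Anthropic Python SDK. The system prompt, the example pair, and the transcript excerpt are hypothetical placeholders, not the actual prompts behind my tweets or behind Spiral.

```python
# A sketch of the transcript-to-tweet workflow, assuming the Anthropic
# Python SDK (pip install anthropic). The prompt and examples below are
# hypothetical placeholders, not the actual prompts behind Spiral.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You turn podcast transcript excerpts into tweets in the host's voice. "
    "Match the tone and structure of the example tweets, and keep each "
    "tweet under 280 characters."
)

# Few-shot examples: pairs of (transcript excerpt, tweet the writer published).
EXAMPLES = [
    (
        "we talked about how people dismiss AI tools because of how the "
        "models behaved a year ago...",
        "The biggest mistake in AI right now: judging today's models by "
        "last year's failures.",
    ),
]


def transcript_to_tweet(transcript: str) -> str:
    """Draft a tweet from a new transcript excerpt via few-shot prompting."""
    shots = "\n\n".join(f"Transcript: {t}\nTweet: {tw}" for t, tw in EXAMPLES)
    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=300,
        system=SYSTEM_PROMPT,
        messages=[
            {
                "role": "user",
                "content": f"{shots}\n\nTranscript: {transcript}\nTweet:",
            }
        ],
    )
    return message.content[0].text


print(transcript_to_tweet("today we dug into why consistency beats brilliance..."))
```

The examples do the heavy lifting: the transcript-to-tweet pairs show the model the voice, and the new transcript tells it what to write about.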




This may seem like a small feat, but GPT-4 can't do it. And you'd be surprised how much writing fits this pattern of summarizing content in one form and adapting it into another. As I wrote in a recent essay on this topic, once you start looking, you see this kind of summarizing everywhere.

But it seems like no one has noticed this step change in capabilities.

That’s understandable. It’s a common mistake we make when evaluating parts of the world that move quickly: I call it capability blindness.

During the first big generative AI wave, which started last year, many of us grappled with the exciting—or scary—reality that chatbots might be able to mimic our unique voices and writing styles. I tried OpenAI’s GPT-3, then GPT-4, and quickly realized they were good but had a particular taint. They could help in the writing process—researching, supplying ideas, editing words—but couldn’t be trusted to write very much on their own. I couldn’t enlist AI as a ghostwriter just yet.

But over the last year, the newest language models have gotten noticeably better. Unfortunately, we are often capability blind: We don't notice what's new because we're jaded by our old experiences and feel that it's a waste of time to try again.

This is not a new phenomenon. In May 2012, after Facebook went public, the New York Times opinion columnist Ross Douthat argued that the social network was a bad business:

“It doesn’t make that much money, and doesn’t have an obvious way to make that much more of it, because (like so many online concerns) it hasn’t figured out how to effectively monetize its millions upon millions of users.”

At the time, he was right. In 2012, Facebook generated $5 billion in revenue and only $53 million in net income. A decade later, Douthat is obviously wrong. In 2023, Facebook, now known as Meta, generated roughly $134 billion in revenue and $39 billion in net income. What changed? The company figured out how to effectively monetize its massive user base, something that many observers, including Douthat, believed it couldn't do.

My point here isn’t to dunk on Douthat; I am sure some of the things I’ve written will look silly a decade from now. It’s to remind you that the best way to latch onto the future is to think through affirmative possibilities, to remember that the world is in flux, and to keep turning over old stones.

This mindset is particularly important with AI. Some of the tasks it struggles with today are going to be child's play by next year. Take Claude, which has improved by leaps and bounds since its previous model and will likely continue in that direction.

Now what? 

What’s rare in a world with infinite good writing?

If you believe what I wrote above, a good next question is: What becomes rare in a world where it is significantly cheaper to write well? (I’ll admit, I have a dog in this fight.)

First, while Claude can write quite well in my voice, it can’t write anything and everything in my voice. It’s really good for repetitive tasks that I do often, where I am effectively summarizing one piece of writing into another, and I have examples to provide.

But Claude still can't write a good article as me, not yet. It only works well when the output isn't too long, it has lots of examples to guide it, and it's clearly summarizing content in one form and adapting it into another. And I think it will be a long time before it can write great pieces that aren't summaries of existing content. It'll be longer still until it can write great long-form pieces. (Complexity scales superlinearly with the length of a piece of text, so longer pieces get harder and harder to produce.)

The value in writing will sit with the things that are likely to remain hard to do with AI alone:

Original research

As I wrote a year ago, uncovering new facts is still going to be valuable. Writing them up will be much more of a commodity. AI can handle 90 percent of the latter, which means reporters in the future will mostly have to be intrepid, with a nose for scoops.

Original thinking written in long form

Writing that isn't a summary of someone else's work will still be difficult for AI to do for a while. Anything that requires original thinking outside of a well-defined format is still going to be hard to produce.

Novel audience acquisition strategies

Audiences—and acquiring them—will be valuable in a world where anyone can write well. This is already true, and it will become even more true with AI. Every’s Evan Armstrong wrote about this last year: “More and more power will accrue in those companies that have novel acquisition methods that do not rely on any gatekeeper.” He’s right, and I’d add that having an owned audience to which you can distribute your content will also be important.

But there’s at least one more thing:

Consistency

Consistently publishing content (whether short- or long-form) is going to be a significant differentiator for creators or brands. If you are consistent, you can take up significant space in the mind of your audience. If you’re not—even if you’re brilliant—no one will remember you. (Think of all of the people who’ve had a few TikToks go viral but have never turned the success into a sustainable career. TikTok is consistent; they are not.)

Mindshare is hard to build, and it’s a slow process—but once you have it, it’s pretty durable. My Every co-founder Nathan Baschez wrote about this in his 2020 piece about why content is king. Mindshare comes with several benefits familiar to anyone who thinks about startups: network effects, brand power, and switching costs. These have always been factors for creators and media brands. But they’ve also been somewhat overlooked.

From sculptor to gardener: the future of creativity

The next question is: What is the future of creative work in this world? Are we looking at a future where no creative work is actually produced by humans?

Previous eras of creativity have mostly looked a bit like sculpting. A sculptor takes a block of material and carves it, slowly but surely, into shape. Nothing happens without her hand. Even when an assistant is involved, the sculptor pores over the project, because her input is important at every point of the process. So too with writing, or programming, or painting.

This era of creativity is going to look more like gardening. A gardener doesn’t grow plants directly. Instead, she sets up the conditions for the garden to grow. She takes care of the soil, the water, and the sunlight—and lets the plants do their thing.

So too with AI. As more of our time is spent being model managers, we won’t be directly making as much creative work. That’s like pulling up a plant to help it grow. Instead, we’ll be creating optimal conditions and letting the models do their work.

There’s one difference, though. A gardener can’t directly modify her plants. She can’t change their DNA by hand. But a skilled model manager can take any output of a model—sentence, code, image, or video—and modify any part of it themselves.

So we won’t have to leave sculpting behind as a creative skill. We’ll just be able to use our chisels and hammers more judiciously—and only when it really matters.


Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

He also helps lead a consulting practice at Every focused on helping mid-to-large-sized organizations implement AI and train their workforce to adopt it. Interested? Reach out.



