Generative AIs Narrow the Taste Gap

Musings from the frontier of AI and the written word

Prompt: a human brain sitting on a typewriter bright colors by casey weldon by lee madgewick tesselation

AI that writes like the average English speaker is fine. But AI that writes like you is magical.

When Nathan was building the earliest versions of Lex, Every’s AI-powered writing tool, we wanted it to eventually be able to write in the voice of whoever was using it. That seemed doable, but we assumed it would also require lots of fine-tuning to get it to work. 

As we played with Lex we noticed something strange: it could already write in anyone’s voice—no training required.

To be sure, it’s not perfect. Depending on your writing style it might be better or worse at doing completions. But if you feed a Zen koan into Lex, it will output a koan. And if you feed Twitter hustle porn into Lex, it will output Twitter hustle porn.

This is a big part of Lex’s “wow” moment. It’s a lot like looking in a mirror for the first time. You’re seeing something familiar but also totally unexpected. 

There’s a strangeness to seeing a machine write something that you could have said or could have thought but haven’t yet. Even stranger is the idea that the machine doing it doesn’t know anything about who you are or what you think except a few lines of input text…yet it seems to be able to write as you better than any professional ghostwriter could.

I want to talk about how this is even possible, and also what its implications are. Technology is redrawing the lines around what it means to be a writer—who writers are and what they do. It’s also narrowing the taste gap—the gap between what you hope to make and what your skills allow for—by making it easier for writers to write great stuff without years of trial and error.

The coming shift is going to be profound.

How to sound like me

Here’s a very high-level explanation for how GPT-3 does text completion:

GPT-3 looks at the text that came before the point where you want completion and predicts what words are most likely to come next. In order to do this, it uses a statistical model to learn the associations between words and word sequences. The model is trained on lots and lots of source data (basically everything on the entire internet) to predict what comes next from what comes before.
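The prediction step is easier to grasp with a toy model. GPT-3 is a huge transformer network, but the core idea, predicting the next word from the words that came before it, shows up even in a tiny bigram model built from word counts. (This is a sketch of the principle only, not GPT-3's actual architecture; the corpus here is made up.)

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k words most likely to follow `word`."""
    return [w for w, _ in model[word].most_common(k)]

corpus = (
    "the cat sat on the mat and the cat slept on the mat "
    "the dog sat on the rug"
)
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # the most frequent followers of "the"
```

GPT-3 does the same basic thing, just with a model rich enough to condition on thousands of preceding words instead of one, trained on a corpus the size of the internet instead of two sentences.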

How does this apply to me?

It might seem like an intractable task to figure out how to write sentences that sound like me. After all, after any sentence I write there are infinitely many sentences that could follow it.

But Lex shows that if you think about this problem probabilistically, it’s not as impossible as it seems. 

To make this easier to think about, you might think of the set of possible sentences as an infinite space. Any particular point in the space represents a sentence that could follow the one I just wrote. 

Given that the space is infinite, in principle, it should be hard for a machine to find a sentence in that space that’s close to the one I would’ve written on my own. But in practice, it seems that I (and everyone else) like to play around in a comparatively small corner of all of the possible sentences that we could write. 

For example, the sentences that follow from whatever I’m writing are very likely to follow the rules of English grammar (with some minor exceptions 😉). That narrows down the possibility space by a lot. But technically, its size is still quite large.
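To make the narrowing concrete, here's a toy illustration (a made-up eight-word vocabulary and a minimal determiner-noun-verb grammar, purely hypothetical). Of the 512 possible three-word sequences, only a dozen fit even this single crude constraint:

```python
from itertools import product

vocab = ["the", "a", "cat", "dog", "runs", "sleeps", "on", "mat"]
dets, nouns, verbs = {"the", "a"}, {"cat", "dog", "mat"}, {"runs", "sleeps"}

# Every possible 3-word sequence over the vocabulary.
all_seqs = list(product(vocab, repeat=3))

# Keep only sequences matching a tiny Det-Noun-Verb grammar.
grammatical = [s for s in all_seqs
               if s[0] in dets and s[1] in nouns and s[2] in verbs]

print(len(all_seqs))     # 8**3 = 512 candidate sequences
print(len(grammatical))  # 2 * 3 * 2 = 12 survive the grammar
```

One constraint cut the space by a factor of more than 40. Stack up the further constraints of topic, tone, and tribe, and the space a given writer actually plays in gets small enough to predict.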

There are more constraints, though, than meet the eye. 

If grammar deals with the rules of sentences, Lex and GPT-3 make it clear that when we write we’re following a great number of rules that aren’t limited to things like syntax. In fact, there’s a sort of tribal grammar to what we write—the way we say things, and even what we say and think—that constrains us much more deeply than any of us care to admit. 

Lex has an easy time mimicking me because it has read everything that has influenced the way I write. (It’s probably also read my writing.) It hasn’t only picked up on style and tone—it’s also imbibed the ideas that I spend a lot of time immersed in. It knows I’m more likely to insert a reference to “network effects” into my writing than I am to reference “negative capability” because based on just a few sentences of input text it can tell I think more about startups than I do about Romance-era poets.

I use a tribal grammar that’s built on all of the people and ideas I’ve read and interacted with in my life—and compared to the set of all possible people, it’s a small list. The people who read my writing are part of the same tribe with the same set of fundamental paradigms.

Because of all of this, what I’m likely to say and think next is a statistically tractable problem. We don’t admit this because, culturally, we put a huge premium on freedom of thought and of speech. 

When it comes to what we say and think we imagine ourselves as explorers, free to roam an infinite expanse of space. But in reality we're a lot more like farmers in the 1800s: most of us never leave the towns where we reside. Some of us do move, but it's slow and it takes a while.

Lex, language, and free will

It might seem depressing or scary or infuriating to know that we operate by a sort of tribal grammar. It feels so…deterministic. We thought we had free will, but really we’ve all been NPCs this whole time. 

It reminds me of a time in high school when I recounted the argument against free will to my unsuspecting father, who was making me spaghetti bolognese.

“In a nutshell,” I said, “you are a product of either your genetics or your environment, and so you don’t have any free will and aren’t really responsible for anything that you do.” 

My dad slammed dinner down on the table, splattering meat sauce everywhere. “What about this?” He began waving both hands wildly in front of my face. “Is this a product of my genetics? Or is it my environment?”

“Environment,” I said.

This story resonates with me deeply right now. I feel like banging my hands around randomly on the keyboard every time I’m writing. “What about this? Could you have predicted THIS?”

The reality is that there’s still a lot that Lex can’t predict about what I’m going to write. For example, it’s not very good at metaphor beyond cliches. It struggles with the mapping of one concept onto another—and so it struggles to provide the “click” of helping me understand complex ideas in a new way. It’s only good, so far, at summarizing those ideas in ways that are pretty close to the ways other people have summarized them previously. 

And even though it’s saying things that are in the realm of stuff I’d say, it still requires editing to make it feel totally like me. It’s a little too…bland, or the average of the people who write sort of like me—right now. 

A final thing Lex can’t do is to make you feel less alone. There’s something fundamental to writing that’s about feeling like you are connecting to another human being—as Every writer Nat Eliason points out—and so people are skeptical of writing that comes out of AI. It’s missing that special oomph or sparkle that makes it mean something—even if the words are all right.

I think all of these objections are only temporary. 

There’s nothing to suggest that Lex won’t eventually be an incredible metaphor generator—just that it, so far, hasn’t been trained to be one. (Related: this week I fine-tuned it on similes and it worked well.) Similarly, there’s nothing to suggest that fine-tuning Lex won’t make it more scarily accurate at mimicking me.

Finally, I think there are a lot of reasons to suspect that our feeling of missing the human connection in AI-written prose is more of a passing discomfort than something that will prevent people from using these tools widely.

For one, you don’t have to disclose that you’re using a tool like this—so most readers won’t know the difference. For another, arguing that a new technology is too impersonal and eliminates connection is a common initial objection to advancements like this. For example, take texting—it feels personal to me, but my parents much prefer a phone call. The same arc is likely when it comes to these kinds of tools. 

Lastly, as Every writer Fadeke Adegbuyi points out, people who are immersed in the world of these models are often left feeling like their model is sentient. If you play around with a chatbot like Replika, you’ll feel the same prickles of emotion that you might feel in a conversation with a human. AI is capable of creating that feeling of connection, which means there will be a lot of demand for reading whatever it has to say. 

So where does this leave human writers? In an exciting place.

New technology redraws our categories

If you were a writer 200 years ago, one of the demands of your profession was penmanship. Today, penmanship is unbundled from the task of writing. It’s nice if you’ve got it, but it’s unnecessary.

Lots of things that we previously put in the category of being required for writing are going to go the way of penmanship: a voluminous memory for facts and quotes, a fantastic note-taking system, an ability to summarize complex ideas in simple form, the desire to face and fill a blank page.

So what’s left? I think it’s going to be a little like photography. Fifty years ago you had to have a lot of expensive equipment and darkroom skills to be a great photographer. Today, everyone is taking professional-grade pictures all the time. 

There is still a distinct difference between a professional photographer and someone who takes pictures of their friends at a party. But professional photographers are better than they were 50 years ago, and getting close to the level of a professional photographer is much more attainable for everyone else.

The same thing is true of writing. Sure, you didn’t ever have to purchase expensive equipment to do it. But you did have to spend the equivalent amount of time trying over and over to write great pieces—and pick yourself up again after every failure.

Ira Glass famously talked about the taste gap for aspiring creatives:

“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not.”

These tools will narrow the taste gap in writing for large numbers of people. They will also shift the arena of skill development in writing from things like memorizing the rules of grammar and punctuation to prompt engineering and fine-tuning.

So what will stay the same?

Ultimately, writing is thinking. It’s a process of updating the writer’s—and thereby the reader’s—model of the world. It’s also a process of feeling—letting the flow of sentences change your experience, and knowing whether that change is good.

The tools we use to engage in that process are changing. But the underlying activity is always going to be the same. That makes me excited for the future.


