
We Need to Talk About AI Autopilot

The more reliable AI gets, the less we check its work. Research explains why, and what to do about it.


To read more of Katie Parrott’s writing about how AI is changing work, read the latest articles in her column, Working Overtime. To read more essays like this, subscribe to Every.


Of all the ways I imagined AI might change my career, “forgetting I already did the assignment” was not on the list.

I had already sent my client a finished draft of an article on hiring best practices in South America, when I happened to reread the brief. A familiar phrase made me realize I had read it before. Then there was the statistic I was pretty sure I had already fact-checked. I clicked back through my files, and there it was: same client, same topic, same deliverable, dated four weeks earlier. It was completed, filed, and forgotten so completely that when a clerical error sent the same brief to my inbox again, I sat down and did the whole thing over.

My first thought was that this was probably early-onset something, and I should call my doctor. My second, more rational thought was that I had not lost my mind—but I had outsourced it. I had been moving so fast and delegating so much of the work to AI that my brain hadn’t even bothered to store a memory of completing the assignment.

What scared me most was thinking about all the smaller moments when I had not caught myself.

This kind of outsourcing isn’t new. Plenty of people would admit to feeling lost navigating an unfamiliar city without a phone to rely on, and I for one am lucky to remember my own phone number, let alone someone else’s. But AI does more than take work off your plate; it steps into the judgment calls you used to make yourself.

I am the last person to scold anyone for using AI. I have built AI into nearly every part of my job, and it has helped me write more rigorously, research more thoroughly, and take on projects far beyond what I used to think of as my wheelhouse. But when you accidentally offload the wrong parts—like fully understanding the purpose and intent of the piece, as I did in this case—you run the risk of atrophying the skills that matter most to you. You might even put your name on work you don’t realize you don’t stand behind until someone else starts asking questions. And if you are using AI for any kind of qualitative work—writing, strategy, marketing, communications—I would bet you are doing some version of this too. Understanding why it happens is the first step to deciding which parts of the job you want back.

When trusting your tools becomes a bad thing

One group that would understand this immediately: airline pilots.

In the 1990s, researchers studying automated cockpits started noticing a strange pattern. Pilots with thousands of flight hours and lives on the line sometimes followed incorrect automated recommendations, even when the instruments in front of them suggested something was wrong. The automation had been right often enough that their brains stopped cross-checking it with the same scrutiny.

A 2010 review of decades of automation research described a larger pattern: The more reliable an automated system becomes, the more likely humans are to let it pass unchecked. When a system is usually right, your attention starts treating it as if it will keep being right.

AI is the most fluent automated system most of us interact with in a day. And fluency has its own trick. In 1999, a pair of psychologists showed people identical statements in fonts that were either easy or hard to read. The easy-to-read statements were rated as more true. It was the same words and same claims, but the version that went down smoother was judged more accurate. Your brain takes “that was easy to process” and misfiles it as “that must be correct.”

AI output goes down very smoothly. It’s grammatically polished, the tone is confident, and the clean formatting suggests something that has already been edited. The polish lets your eyes glaze over.

Every model upgrade makes the illusion of right-ness worse. The outputs get cleaner. The formatting gets better. The reasoning looks more plausible. The tool makes fewer obvious mistakes, which means the mistakes that remain are harder to see. You are reading something that looks finished, and your brain—which has been filing “looks finished” as “is correct” since long before AI existed—obliges.

Comments

Pascal Gillan 1 day ago

Please please please fix the 'Listen' functionality, I've tried it in several different places, web and phone, it keeps failing.

It's by far my favorite way of ingesting your content

FYI: I did send a bug report a while back.

Cristian Nicola 1 day ago

As I mentioned in my feedback: in the case of pilots, if the plane went down, they would be the ones considered at fault regardless of the automated system.
An interesting article would be one that breaks down what happens when people send out unverified AI work.
In your case, the obvious consequence would be reputational risk with a client. Fuzzy and easy to ignore.

@antonacci.michael.d 1 day ago

Spot on! When I teach people about AI, skill atrophy is the main lens I have them focus on.

There was a recent study (maybe out of Anthropic?) that found people are less likely to check AI work the more polished the output is. And like you rightly pointed out, every model upgrade makes this problem worse.

"brain first, AI second" is the workflow we implement at my work. I like your other two changes, I'll give them a go and probably teach my coworkers.
