Midjourney/Every illustration.

How I Escaped AI Autopilot

The more reliable AI gets, the less we check its work. Research explains why, and what to do about it.


To read more of Katie Parrott’s writing about how AI is changing work, read the latest articles in her column, Working Overtime. To read more essays like this, subscribe to Every.


Of all the ways I imagined AI might change my career, “forgetting I already did the assignment” was not on the list.

I had already sent my client a finished draft of an article on hiring best practices in South America when I happened to reread the brief. A familiar phrase made me realize I had read it before. Then there was the statistic I was pretty sure I had already fact-checked. I clicked back through my files, and there it was: same client, same topic, same deliverable, dated four weeks earlier. It had been completed, filed, and forgotten so thoroughly that when a clerical error sent the same brief to my inbox again, I sat down and did the whole thing over.

My first thought was that this was probably early-onset something, and I should call my doctor. My second, more rational thought was that I had not lost my mind—but I had outsourced it. I had been moving so fast and delegating so much of the work to AI that my brain hadn’t even bothered to store a memory of completing the assignment.

What scared me most was thinking about all the smaller moments when I had not caught myself.

This kind of outsourcing isn’t new. Plenty of people would admit to feeling lost navigating an unfamiliar city without a phone to rely on, and I, for one, am lucky if I can remember my own phone number, let alone anyone else’s. But AI does more than take work off your plate; it steps into the judgment calls you used to make yourself.

I am the last person to scold anyone for using AI. I have built AI into nearly every part of my job, and it has helped me write more rigorously, research more thoroughly, and take on projects far beyond what I used to think of as my wheelhouse. But when you accidentally offload the wrong parts—like fully understanding the purpose and intent of the piece, as I did in this case—you run the risk of atrophying the skills that matter most to you. You might even put your name on work you don’t realize you don’t stand behind until someone else starts asking questions. And if you are using AI for any kind of qualitative work, such as writing, strategy, marketing, or communications, I would bet you are doing some version of this too. Understanding why it happens is the first step to deciding which parts of the job you want back.


When trusting your tools becomes a bad thing

One group that would understand this immediately: airline pilots.

In the 1990s, researchers studying automated cockpits started noticing a strange pattern. Pilots with thousands of flight hours and lives on the line sometimes followed incorrect automated recommendations, even when the instruments in front of them suggested something was wrong. The automation had been right often enough that their brains stopped cross-checking it with the same scrutiny.

A 2010 review of decades of automation research described a larger pattern: The more reliable an automated system becomes, the more likely humans are to let it pass unchecked. When a system is usually right, your attention starts treating it as if it will keep being right.

AI is the most fluent automated system most of us interact with in a day. And fluency has its own trick. In 1999, a pair of psychologists showed people identical statements in fonts that were either easy or hard to read. The easy-to-read statements were rated as more true. It was the same words and same claims, but the version that went down smoother was judged more accurate. Your brain takes “that was easy to process” and misfiles it as “that must be correct.”

AI output goes down very smoothly. It’s grammatically polished, the tone is confident, and the clean formatting suggests something that has already been edited. The polish lets your eyes glaze over.

Every model upgrade makes the illusion of right-ness worse. The outputs get cleaner. The formatting gets better. The reasoning looks more plausible. The tool makes fewer obvious mistakes, which means the mistakes that remain are harder to see. You are reading something that looks finished, and your brain—which has been filing “looks finished” as “is correct” since long before AI existed—obliges.

Why ‘I’ll review it’ is not a plan

Before the repeat work snafu, I would have told you I was reviewing everything before sending anything. The document passed through my field of vision, I tweaked a phrase, caught one weird sentence, and felt the warm glow of editorial virtue. My brain filed that as reviewed.

The feeling of having reviewed is easy to produce. The act of reviewing is harder. You have to form your own view before the model gives you one, check the claims, and notice where the draft has made an assumption you do not share. You have to ask whether the sentence would still feel true if someone screenshotted it and sent it back to you six months later.

We talk a lot about better prompting, better models, better workflows, and better agents. We talk less about the moments when we should slow down—because that’s uncomfortable and hard. In 2021, researchers tested ways to reduce overreliance on AI. The interventions that worked best were “cognitive forcing functions,” designs that made people form their own judgment before seeing or accepting the AI’s answer.

Those same interventions also got the worst ratings from users. People did not like being made to think first. Of course, they didn’t. The whole appeal of automation is that it reduces effort. A tool that says, “Before I help you, please do the hard part yourself for a minute” feels like a speed bump. But speed bumps are the solution to autopilot.
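To make that concrete, here is a minimal sketch of a cognitive forcing function in code, assuming a hypothetical `ask_model` stand-in for whatever model API you actually use. The wrapper will not reveal the model’s answer until you have typed one of your own.

```python
# A minimal sketch of a cognitive forcing function (hypothetical, not any
# product's real API): the wrapper withholds the model's answer until you
# have committed to your own.

def ask_model(question: str) -> str:
    # Placeholder: swap in a real call to whatever model you use.
    return f"[model's answer to: {question!r}]"

def forced_judgment(question: str) -> tuple[str, str]:
    print(f"Question: {question}")
    own = input("Your answer first (required): ").strip()
    while not own:
        own = input("No skipping. Your answer: ").strip()
    model = ask_model(question)
    print(f"\nYours:  {own}")
    print(f"Model:  {model}")
    return own, model

if __name__ == "__main__":
    forced_judgment("Should this piece lead with the anecdote or the research?")
```

The speed bump is the point: the pause where you produce your own answer is exactly the step the 2021 interventions forced.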

What I am trying instead

My solution to autopilot is not to give up AI and return to some imagined golden age where I nobly suffer in a blank Google Doc. But I am making some changes to how I process and finalize work to curb the tendency to ship now, think later.

Change 1: Think before you look

Before I ask AI for a draft, I try to write down my own rough position. It’s not the polished version or a full argument. Sometimes it is only five bullets—some combination of what I think, what I know, what I am unsure about, what I refuse to say, and what would make the piece useful. Then, when the model gives me an output, I have something to compare it against besides vibes.

The card in my Notion to-do list for this article, with quick notes I sketched out before going into my interview session with the AI. (Image courtesy of Katie Parrott.)


This is irritating. It also works. If I have made my own claims first, I read the AI’s claims differently. I can feel where it is smoothing over a distinction I care about. I can see where it is borrowing authority I have not earned. The draft becomes an object to argue with, not a current to float along.
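If you want the ritual enforced rather than merely intended, it can live in a small gate script. This is a sketch under assumptions: `position_notes.md` and `generate_draft` are names I made up, and the model call is a stub.

```python
# Sketch: refuse to ask for a draft until a notes file with my own position
# exists. The filename and generate_draft are illustrative, not a real API.
import sys
from pathlib import Path

NOTES = Path("position_notes.md")  # hypothetical: five bullets of what I think

def generate_draft(notes: str) -> str:
    # Placeholder for a real model call seeded with the notes.
    return f"[draft grounded in:]\n{notes}"

if __name__ == "__main__":
    if not NOTES.exists() or not NOTES.read_text().strip():
        sys.exit("Write position_notes.md first: what you think, what you know, "
                 "what you're unsure about, what you refuse to say, and what "
                 "would make the piece useful.")
    print(generate_draft(NOTES.read_text()))
```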

Change 2: Build in a gap

If attention decays the longer you sustain it, it’s time to treat attention as the scarce resource it is and stop thinking I can review five AI outputs in a row without consequence. The answer is to introduce friction on purpose—distance between generation and review that gives your attention a chance to reset. Draft on Wednesday, review on Thursday. Write in the morning, come back in the afternoon. Send the model’s output to a different surface—for example, from the chat interface to a document, or from mobile to desktop—and read it outside the chat window your eyes have grown accustomed to.

Incidentally, a lot of this advice comes down to best practices that writing teachers have recommended for decades. A different day gives you a different brain than the one that’s high on AI’s generative excess.
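The gap can be enforced mechanically, too. Here is one sketch, with an arbitrary 12-hour cooldown and a made-up `draft.json` filename; the review step simply refuses to surface the draft until the clock runs out.

```python
# Sketch of a generation/review gap: the draft is timestamped when saved,
# and review refuses to show it until a cooldown passes. The 12-hour window
# and filename are arbitrary choices, not recommendations.
import json
import time
from pathlib import Path

COOLDOWN_HOURS = 12
DRAFT = Path("draft.json")

def save_draft(text: str) -> None:
    DRAFT.write_text(json.dumps({"text": text, "saved_at": time.time()}))

def review_draft() -> str:
    record = json.loads(DRAFT.read_text())
    age_hours = (time.time() - record["saved_at"]) / 3600
    if age_hours < COOLDOWN_HOURS:
        raise RuntimeError(
            f"Too soon: come back in {COOLDOWN_HOURS - age_hours:.1f} hours."
        )
    return record["text"]
```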

Change 3: Make yourself explain why you’re accepting it

A 2026 study on AI-assisted writing found that making users explain their reasoning before accepting AI output cut mistaken acceptances roughly in half. You cannot bullshit a justification you are writing down.

So I’ve started doing it myself. Before I accept a recommendation, a framing, or a paragraph the model drafted, I make myself write one sentence answering a specific question: Why is this right for this client, this argument, this reader? If the best I can produce is “It sounds good,” I go back and look again. I have to be able to defend each sentence in front of an editor.
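Scripted, the gate looks something like the sketch below. The banned-phrase list is a crude, hypothetical stand-in; the real test is whether the sentence you write down would survive an editor’s raised eyebrow.

```python
# Sketch of a justification gate: no accepting a model paragraph without
# writing down why it is right for this client, argument, and reader.
# The VAGUE set is a crude stand-in for real editorial judgment.

VAGUE = {"it sounds good", "looks fine", "seems right", "good enough"}

def accept(paragraph: str) -> bool:
    reason = input("Why is this right for this client/argument/reader? ").strip()
    if not reason or reason.lower() in VAGUE:
        print("That's a vibe, not a reason. Go back and look again.")
        return False
    # Keep the justification next to the text so future-you can audit it.
    with open("accepted.log", "a") as log:
        log.write(f"PARA: {paragraph[:60]}...\nWHY:  {reason}\n\n")
    return True
```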

You still own the output

These practices help. They are also a fragile defense against tools designed to make output feel effortless, and I don’t think the long-term answer is expecting every individual to white-knuckle their way past six cognitive biases before breakfast.

This is also a design problem. The tools themselves should be building friction back in—making provenance visible, separating generation from approval, and treating human judgment as a workflow stage instead of a ceremonial click at the end. It is part of what excites me about Proof, Every’s document editor for AI-human collaboration, which tracks which words are yours and which came from the machine. The cognitive forcing functions that researchers have found keep our brains from giving in to autopilot are design patterns that should be baked into products as well.
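For illustration only (this is the shape of the pattern, not how Proof is actually built): provenance as a tag carried by every span of text, and approval as its own recorded step instead of a side effect of generation.

```python
# Illustrative data model for the pattern described above, not any real
# product's internals: every span carries its origin, and approval is an
# explicit, attributable workflow stage.
from dataclasses import dataclass, field

@dataclass
class Span:
    text: str
    origin: str            # "human" or "machine"
    approved: bool = False

@dataclass
class Document:
    spans: list[Span] = field(default_factory=list)

    def add(self, text: str, origin: str) -> None:
        self.spans.append(Span(text, origin))

    def approve(self, index: int, reviewer: str) -> None:
        # Approval is a named, recorded act, not a ceremonial click.
        self.spans[index].approved = True
        print(f"{reviewer} approved span {index} ({self.spans[index].origin})")

    def unapproved_machine_spans(self) -> list[int]:
        # What still needs a human to actually look at it.
        return [i for i, s in enumerate(self.spans)
                if s.origin == "machine" and not s.approved]
```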

Knowing the mechanism does not exempt you from it. Every bias in this story predates AI by decades. We have always trusted fluent things too quickly, gotten worse at paying attention when nothing seems to be going wrong, and preferred the path that saves effort.

The duplicate assignment still embarrasses me, even if all it cost me in the end was a few sheepish emails back and forth with my client to ensure I wasn’t crazy. I am also grateful for it, in the way you are grateful for a warning that arrives before any real damage could be done. It taught me something the research has sharpened: The central risk of AI-assisted work is not the machine thinking for you. It is the machine making it feel as if you already thought.

I am trying to get better at noticing the difference. With most pieces, I draft on one day and review on another, make myself write down what I think before asking the model what it thinks, and hope the friction is enough to keep me in the work instead of floating above it.


Katie Parrott is a staff writer. You can read more of her work in her newsletter. To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.


Comments

Pascal Gillan 12 days ago

Please, please, please fix the 'Listen' functionality. I've tried it in several different places, web and phone, and it keeps failing.

It's by far my favorite way of ingesting your content.

FYI, I did send a bug report a while back.

Cristian Nicola 12 days ago

As I mentioned in my feedback: in the case of pilots, if the plane went down, they would be the ones considered at fault regardless of the automated system.
An interesting follow-up article would be to break down what happens when people send out unverified AI work.
In your case, the obvious cost would be reputational risk with a client. Fuzzy and easy to ignore.

@antonacci.michael.d 11 days ago

Spot on! When I teach people about AI, skill atrophy is the main lens I have them focus on.

There was a recent study (maybe out of Anthropic?) that found people are less likely to check AI work the more polished the output is. And like you rightly pointed out, every model upgrade makes this problem worse.

“Brain first, AI second” is the workflow we implement at my work. I like your other two changes; I'll give them a go and probably teach my coworkers.
