Why AIs Need to Stop and Think Before They Answer

How chain of thought prompting gets you better responses

DALL-E/Every illustration.

When humans make requests of their AI assistants, what matters isn’t merely what they ask but often how. That’s the central premise behind chain of thought prompting, a method for getting the most out of ChatGPT or another chatbot. In the latest installment of Also True for Humans, Michael Taylor’s column on working with AI tools the way you would work with humans, he dives into how and why this method works, why we’re not all that different from our machine counterparts—and what the number of piano tuners in New York City has to do with any of this.—Kate Lee


ChatGPT writes faster than we can read—but is the output worth reading? 

When I was writing this article, I worked with my editor to plan the outline and make sure I had a compelling pitch. So why do most people expect ChatGPT to “write a blog post on X” without giving it any time to think?

AI does a better job when it’s prompted to make a plan first—just like humans do. Most AI applications build in one or more research and planning steps, a technique called chain of thought (CoT): an order of operations that has the model reason through a problem before answering.
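To make that concrete, here is a minimal sketch of the plan-first pattern using the OpenAI Python SDK. The model name, prompts, and topic are my own placeholders rather than a prescribed recipe: the first call asks the model to think through an outline, and the second call feeds that outline back before asking for the draft.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    topic = "why remote teams should write more and meet less"  # placeholder topic

    # Step 1: ask the model to plan before it writes anything.
    plan = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[{
            "role": "user",
            "content": (
                f"Before writing anything, think step by step: who is the audience, "
                f"what is the key argument, and what is a five-point outline "
                f"for a blog post on {topic}?"
            ),
        }],
    ).choices[0].message.content

    # Step 2: feed the plan back in, and only then ask for the actual draft.
    draft = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"Plan a blog post on {topic}."},
            {"role": "assistant", "content": plan},
            {"role": "user", "content": "Now write the full post, following your outline."},
        ],
    ).choices[0].message.content

    print(draft)

The exact wording matters less than the order of operations: the model commits to a plan before it starts drafting, rather than improvising both at once.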

When you’re getting mediocre results from AI, it’s often because you haven’t given it enough room to plan. Applying the chain of thought technique can result in an immediate boost in performance.

Let’s look into the science behind chain of thought prompting and how to get AIs to think through their answers before responding. It’s one of the easiest ways you can improve your prompts to get more sophisticated results.

Giving the AI time to ‘think’


