Midjourney/Every illustration.

Compound Engineering: How Every Codes With Agents

A four-step engineering process for software teams that don’t write code

Our last coding camp of the year is Codex Camp—a live workshop about building with OpenAI’s coding agent, open to all Every subscribers on Friday, December 12 at 12 p.m. ET. Learn more and reserve your spot.

Was this newsletter forwarded to you? Sign up to get it in your inbox.


What happens to software engineering when 100 percent of your code is written by agents? This is a question we’ve had to confront head-on at Every as AI coding has become so powerful. Nobody here writes code manually anymore. It feels strange to type code into your computer or to stare at a blinking cursor in a code editor.

So much of engineering until now has assumed that coding is hard and engineers are scarce. Removing those bottlenecks makes traditional engineering practices—like manually writing tests, or laboriously typing human-readable code with lots of documentation—feel slow and outdated. To deal with these new powers and changing constraints, we’ve created a new style of engineering at Every that we call compound engineering.

In traditional engineering, you expect each feature to make the next feature harder to build—more code means more edge cases, more interdependencies, and more issues that are hard to anticipate. By contrast, in compound engineering, you expect each feature to make the next feature easier to build. This is because compound engineering creates a learning loop for your agents and members of your team, so that each bug, failed test, or a-ha problem-solving insight gets documented and used by future agents. The complexity of your codebase still grows, but now so does the AI’s knowledge of it, which makes future development work faster.

And it works. We run five software products in-house (and are incubating a few more), each of which is primarily built and run by a single person. These products are used by thousands of people every day for important work—they’re not just nice demos.

This shift has huge implications for how software is built at every company, and for how ambitious and productive every developer can be. Based on our experience at Every, a developer who uses AI well can now do the work that would have taken five developers a few years ago. They just need a good system to harness its power.

The rest of this piece will give you a high-level sense of how we practice compound engineering inside of Every. By the time you’re done, you should be able to start doing the basics yourself—and you’ll be primed to go much deeper.

Compound engineering loop

A compound engineer orchestrates agents running in parallel that plan, write, and evaluate code. This process happens in a loop that looks like this (a rough sketch in code follows the list):

  1. Plan: Agents read issues, research approaches, and synthesize information into detailed implementation plans.
  2. Work: Agents write code and create tests according to those plans.
  3. Assess: The engineer reviews the output itself and the lessons learned from producing it.
  4. Compound: The engineer feeds the results back into the system, where they make the next loop better by helping the whole system learn from successes and failures. This is where the magic happens.
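
To make the loop concrete, here is a minimal sketch, in Python, of how one iteration could be scripted against a headless coding agent. This is not our plugin: the `run_agent` helper, the `claude -p` invocation, and the lessons file path are stand-ins you would adapt to your own setup.

```python
# A toy sketch of one compound-engineering iteration. `run_agent` is a
# hypothetical stand-in for whatever coding agent you drive (Claude Code,
# Codex CLI, Droid); swap in the real invocation for your setup.
from pathlib import Path
import subprocess

LESSONS_FILE = Path("docs/lessons.md")  # hypothetical path for accumulated learnings

def run_agent(prompt: str) -> str:
    """Send a prompt to a headless coding agent and return its reply (placeholder)."""
    result = subprocess.run(
        ["claude", "-p", prompt],  # assumes Claude Code's non-interactive print mode
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def compound_iteration(issue: str) -> None:
    lessons = LESSONS_FILE.read_text() if LESSONS_FILE.exists() else ""

    # 1. Plan: research the codebase and draft an implementation plan.
    plan = run_agent(f"Research the codebase and draft a plan for: {issue}\n"
                     f"Apply these past lessons:\n{lessons}")

    # 2. Work: implement the plan, including tests.
    run_agent(f"Implement this plan step by step, with tests:\n{plan}")

    # 3. Assess: have the agent review its own output; a human reviews it too.
    review = run_agent("Review the changes you just made for bugs, security, "
                       "performance, and overbuilding. List concrete findings.")

    # 4. Compound: distill the findings into reusable lessons for the next loop.
    new_lessons = run_agent(f"Summarize durable lessons from this review:\n{review}")
    LESSONS_FILE.parent.mkdir(parents=True, exist_ok=True)
    with LESSONS_FILE.open("a") as f:
        f.write("\n" + new_lessons)

if __name__ == "__main__":
    compound_iteration("Add CSV export to the reports page")
```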

We primarily use Anthropic’s Claude Code for compound engineering, but the process is tool-agnostic—some members of our team also use Factory’s Droid and OpenAI’s Codex CLI. If you want to get more hands-on with how we do this, we’ve built a compound engineering plugin for Claude Code that lets you run the exact workflow we use internally.

Roughly 80 percent of compound engineering is in the plan and assess steps, while 20 percent is in the work and compound steps.

Let’s dive in.

1. Plan

In a world where agents write all of your code, planning is where most of a developer’s time goes. A good plan starts with research: We ask the agent to look through the current codebase and its commit history to understand its structure, its existing best practices, and how it was built. We also ask it to scour the internet for best practices relevant to the problem we’re trying to solve. That way, when we begin to plan, the agent is primed to do good work.

Once the research is complete, the agent writes a plan. Usually this is a text document that lives either as a file on your computer or as an issue on GitHub. The plan lays out everything: the objective of the feature, the proposed architecture, specific ideas for how the code might be written, a list of sources from its research, and success criteria.
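
If it helps to see the shape of one, here is a hedged sketch of a plan skeleton with those same sections, scaffolded by a short Python script. The file path and section names are illustrative, not a prescribed format.

```python
# Scaffold an empty plan document with the sections described above.
# The path and section headings are illustrative, not a required format.
from pathlib import Path

PLAN_TEMPLATE = """\
# Plan: {title}

## Objective
What the feature should do, and for whom.

## Proposed architecture
How the pieces fit into the existing codebase.

## Implementation notes
Specific ideas for how the code might be written.

## Research sources
Links and references the agent gathered while researching.

## Success criteria
How we'll know the feature works as intended.
"""

def scaffold_plan(title: str, directory: str = "plans") -> Path:
    path = Path(directory) / f"{title.lower().replace(' ', '-')}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(PLAN_TEMPLATE.format(title=title))
    return path

if __name__ == "__main__":
    print(scaffold_plan("CSV export for reports"))
```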

A planning document for a Cora feature. (Image courtesy of Kieran Klaassen.)


Planning helps build a shared mental model between you and the agent for exactly what you’re building, before you build it. Good planning is not pure delegation—it requires the developer to think hard and be creative in order to push the agent down the right paths.

As models get better, especially with small projects, you have to plan less and less—the agent just gets what you want, or maybe does something surprising but good. With complex production projects, though, a good plan is an essential part of building high-quality software that works as you expect.

Once you have a good plan, the hard part is almost over.

2. Work

This is the easiest step because it’s so simple: You just tell the agent to start working. The agent will take your plan, turn it into a to-do list, and build step-by-step.

One of the most important tricks for this step is giving the agent Model Context Protocol (MCP) servers like Playwright MCP or XcodeBuildMCP. These are tools that let the agent use your web app, or drive a phone simulator, as the software is being built, as if it were one of your users. The agent will write some code, walk through the app and notice issues, then modify the code and repeat until it’s done. This is especially good for design tasks and workflows because the agent can iterate on a prototype until it matches the design.
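
As one concrete illustration, here is a rough sketch that registers the Playwright MCP server in a project-level .mcp.json, written as a small Python script so the structure is explicit. Treat the package name and schema as assumptions to check against the current Playwright MCP and Claude Code documentation.

```python
# Write a project-level .mcp.json that registers the Playwright MCP server,
# so the agent can drive a real browser against the app it is building.
# The schema and package name below are assumptions; verify them against
# the current Claude Code and Playwright MCP docs before relying on them.
import json
from pathlib import Path

mcp_config = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
        }
    }
}

Path(".mcp.json").write_text(json.dumps(mcp_config, indent=2) + "\n")
print("Wrote .mcp.json; restart the agent so it picks up the new server.")
```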

With the latest generation of models, like Claude Opus 4.5, what comes out of the work phase is much more likely to be functional and error-free, and it’s often quite close to what you envisioned, especially if your plan was well written.

But even good output needs to be checked, and that’s what we do in the next step.

3. Assess

In the assessment step, we ask the agent to review its own work, and we review the work too. This can take many forms: We use traditional developer tools like linters and unit tests to find basic errors, and we test manually to sanity-check what was built. We also use automatic code review agents like Claude, Codex, Friday, and Charlie to spot-check the code for common issues.

The latest models make the assess step even more powerful. Our compound engineering plugin, for example, reviews code in parallel with 12 subagents, each checking it from a different perspective: One looks for common security issues, another checks for common performance issues, another looks for anything that was overbuilt, so the software doesn’t become bloated or overly complex. All of these perspectives are synthesized and presented so that the developer can decide what needs to be fixed and what can be ignored.
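
Our plugin handles this orchestration for you, but to give a flavor of the idea, here is a toy Python sketch that fans the same diff out to a few reviewer “perspectives” in parallel. The run_agent helper and the prompts are hypothetical, and twelve perspectives work the same way as three.

```python
# Toy illustration of multi-perspective review: the same diff is sent to
# several reviewer prompts concurrently and the findings are collected.
# `run_agent` is a hypothetical helper (see the loop sketch above); the
# perspectives here are a small subset of what a real setup would use.
from concurrent.futures import ThreadPoolExecutor

PERSPECTIVES = {
    "security": "Review this diff for common security issues.",
    "performance": "Review this diff for common performance issues.",
    "simplicity": "Review this diff for overbuilding, bloat, or needless complexity.",
}

def review_in_parallel(diff: str, run_agent) -> dict[str, str]:
    def review(item):
        name, instruction = item
        return name, run_agent(f"{instruction}\n\n{diff}")

    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as pool:
        return dict(pool.map(review, PERSPECTIVES.items()))

# Example: findings = review_in_parallel(diff_text, run_agent)
# The developer then decides which findings to fix and which to ignore.
```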

The real power of compound engineering happens in the next step—where we make sure we never encounter the same problems again.

4. Compound

This is the money step. We take what we learned in any of the previous steps—bugs, potential performance issues, new ways of solving particular problems—and record them so that the agent can use them next time. This is what makes compounding happen in compound engineering.

For example, in the codebase for Every’s AI email assistant Cora, before building anything new, the agent has to ask itself: Where does this belong in the system? Should it be added to something that already exists, or does it need its own new thing? Have we solved a similar problem before that we can reuse? These questions come with specific technical examples from mistakes we’ve made in the past that prime the agent to find the right solution at the right place in the codebase.

These rules are built up in a mostly automated fashion. After we do a code review, for example, we’ll ask the agent to look at the comments, summarize them, and store them for later. The latest models are smart enough to do all of this with very little extra instruction—and they’re also smart enough to actually use it the next time.
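
Here is a minimal sketch of that capture step, again with a hypothetical run_agent helper and an assumed lessons file path: after a review, the agent distills the comments into short rules and appends them to a file that future planning prompts load.

```python
# After a code review, distill the comments into short, reusable rules and
# append them to a lessons file that future planning prompts read from.
# `run_agent`, the prompt wording, and the file path are all illustrative.
from datetime import date
from pathlib import Path

LESSONS_FILE = Path("docs/lessons.md")

def compound_review(review_comments: str, run_agent) -> None:
    summary = run_agent(
        "Summarize these review comments as short, general rules the team "
        "should follow next time. One bullet per rule:\n\n" + review_comments
    )
    LESSONS_FILE.parent.mkdir(parents=True, exist_ok=True)
    with LESSONS_FILE.open("a") as f:
        f.write(f"\n## Review on {date.today():%Y-%m-%d}\n{summary}\n")
```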

The beauty of this is that these learnings are also automatically distributed to the team. Because they get written down as prompts that live inside of your codebase or in plugins like ours, every developer on your team gets them for free. Everyone becomes more productive: A new hire who’s never been in the codebase before is as well-armed to avoid common mistakes as someone who’s been on the team for a long time.

Compound engineering: Looking and learning ahead

This is just a basic overview of compound engineering—each of these steps can be, and often is, its own article. We also haven’t addressed some of the constraints that still exist with this new way of producing software—namely, how fast a developer can decide what they want to build, process and improve a plan, and describe what good looks like.

We are still just scratching the surface of the possibilities of compound engineering and its broader implications. Manually writing tests or writing human-readable code with lots of documentation is now unnecessary. So is asking an engineering candidate to code without access to the internet, or expecting it to take weeks for new hires to commit code. And so is being locked into a particular platform or technology because the legacy code is too hard to understand, and replatforming would be too expensive. We’re excited to write more about what this new way of engineering unlocks.

If you’re interested in this topic, we highly recommend you read some of the other articles we’ve published about it:

  1. “Stop Coding and Start Planning”
  2. “Teach Your AI to Think Like a Senior Engineer”
  3. “My AI Had Already Fixed the Code Before I Saw It”
  4. “How Every Is Harnessing the World-changing Shift of Opus 4.5”

And make sure you come to our Claude Code Camps and other events and courses. We’ll have more writing over the coming days and weeks on each step of this process, and how it’s changed engineering at Every and beyond.


Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

Kieran Klaassen is the general manager of Cora, Every’s email product. Follow him on X at @kieranklaassen or on LinkedIn.

To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.

We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.

We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.

Get paid for sharing Every with your friends. Join our referral program.

For sponsorship opportunities, reach out to [email protected].
