Model Wars
Plus, the AI tell that’s exploded in company documents, a skill for creating product videos, and comics
April 24, 2026
GPT-5.5 is here, and OpenAI’s latest model has it all. It’s fast enough to use constantly, personable enough to collaborate with, and assertive enough to carry a plan through serious engineering work. If you didn’t catch our full review, including benchmark results, Reach Test ratings, pricing, screenshots, and advice on when to reach for GPT-5.5 versus Opus 4.7, read our Vibe Check or rewatch the livestream, where we grilled OpenAI’s Dominik Kundel and Romain Huet on how they’re using the model.
But how will that shift the balance between OpenAI and Anthropic? That may be a product question as much as a model question. Every engineer Nityesh Agarwal and Monologue general manager Naveen Naidu weigh in.—Kate Lee
Inside Every
Codex versus Claude Code
This week, Anthropic tested removing Claude Code from the $20 Claude Pro plan, prompting an outcry from users and drawing jabs from OpenAI executives on X, perhaps feeling emboldened by the big launch they knew was coming.
The exchange kicked off a Slack debate between Nityesh Agarwal, our resident Claude Code devotee, and Naveen Naidu, who rides hard for OpenAI’s coding app Codex.
Nityesh’s take: Anthropic potentially raising prices is “simple market economics”—there is a huge demand for Claude products because they’re the best available, so they can charge more. On the other hand, OpenAI’s response underscores how frustrated the company has become playing catch-up as it scrambles to replicate Claude Code, Cowork, and skills. From a product standpoint, Claude in the browser and the Claude Code command line interface (CLI) are better than ChatGPT and Codex.
Naveen’s response: Anthropic’s models are powerful, but they also burn through way too much compute in production. OpenAI is much stronger on infrastructure, and GPT-5.5 is a token-efficient model. And while it’s true Anthropic is first to market with a lot of products and features, including computer use—which allows AI to operate your computer on your behalf—OpenAI is better at execution. Naveen consistently reaches for ChatGPT and the Codex desktop app, while he finds the Claude Code app too buggy to spend any time in.
Where they agree: The Claude Code app is, indeed, bad—Nityesh concedes he only uses the CLI. And both labs misjudged how much compute they would need, but in opposite directions: Anthropic is struggling to keep up with demand, whereas OpenAI has invested heavily in infrastructure and is now scrambling to get people to use its products.
Data point
It’s not just a grammatical pattern; it’s an AI tell
Four times.
That’s how much usage of the “not just a ___, it’s a ___” sentence construction rose in large U.S. company documents between 2023 and 2025, per Barron’s.
Like the em dash, these correlative constructions are so beloved by LLMs that human writers now avoid them so as not to be accused of writing with AI.
Hot take alert: That’s a bummer. The great profile writer Taffy Brodesser-Akner’s work is teeming with them. Or it was, pre-ChatGPT. Her 2018 New York Times Magazine feature on Goop uses some version of “not X, it’s Y” in almost every other paragraph.
I doubt even a writer as beloved as Taffy could get away with that today. It’s not that her trademark style is any less effective—it’s that no one would believe she wrote it.
Steal this workflow
How to (almost) one-shot a product video
After days of battling open-source video creation tool Remotion and Claude Code, trying to one-shot a video for a product relaunch, Austin Tedesco, Every’s head of growth, figured out how to get a polished clip. Here’s the workflow he runs any time he needs a social video for a product launch or feature demo, like the one he created for the relaunch of Sparkle, our agent-native app that cleans and organizes files on your Mac.
Step 1: Screen-record yourself using the product you’re making the clip about. All you need is raw footage of yourself clicking through features in real time.
Step 2: Send the recording to a model—Austin prefers Opus—and have it draft a storyboard. The recording provides a ground truth for how the UI works and what the copy says. This prevents the most frequent cause of fake-looking launch videos: plausible-but-hallucinated labels and features.
Step 3: Iterate on the storyboard. Go back and forth with the model until the hook, pacing, and beat-by-beat plan feel right.
Step 4: Hand the storyboard to a coding agent and have it build the video in Remotion. With the screen recording and the corresponding storyboard, the first full render is usually publishable. It’s not a true one-shot, but it saves a lot of time.
Now, next, nixed
Prompts are the new installers
Companies and developers are trying a new way to get AI tools onto users’ machines. Instead of pressing a download button, users copy a setup prompt, paste it into Claude Code or Codex, and let the agent install the tool.
Now: Copy prompt, paste, install. This is how we install Every’s agent-native document editor Proof: Paste a prompt into your assistant, and it handles the setup. The prompt is doing the job the download button used to do: It gets the user from “I want to try this” to “It’s running in my workflow.”
Next: Someone designs the standard version of this. The copyable prompt block becomes a normal part of product pages and GitHub READMEs (the instructions for software projects), especially for developer tools. It should work on the web and on a repository homepage, and feel as obvious as a “Sign in with Google” button.
Nixed: The download button as the main way in. The old-school way of installing software—clicking a download link and running a setup file—still makes sense when software requires direct hardware access or needs to work offline, but for AI-native tools, the front door is: Copy this prompt into your agent.—Katie Parrott
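To make the pattern concrete, here’s a sketch of what a copyable setup prompt might look like on a product page or README. The tool name, URL, and commands below are invented for illustration, not Every’s actual Proof prompt:

```
Install DemoTool for me. Clone https://github.com/example/demotool, follow
the setup instructions in its README, and confirm the install worked by
running `demotool --version`. If anything fails, show me the error and
suggest a fix before retrying.
```

The agent reads the prompt, runs the steps in its own shell, and handles environment quirks itself—exactly the work a traditional installer wizard used to do.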
Model happenings
News you might have missed
- Cowork shipped live artifacts. Claude can build dashboards and trackers inside your workspace that pull fresh data from your apps and refresh each time you open them—pouring narrative gasoline on the SaaSpocalypse fire.
- OpenAI gave Codex screen memory. Codex now retains what’s on your screen across tabs and sessions, so you don’t have to re-paste context every time you start a new task.
- OpenAI launched workspace agents in ChatGPT. The Codex-powered feature lets teams create custom shared agents that can pull information from different sources, analyze it, and turn it into a draft or next step. It’s another signal that agents are becoming a shared team resource, rather than purely individual AI assistants.
One last thing
Nityesh has been having a lot of fun with ChatGPT Images 2.0
A couple of his recent creations include a vintage poster to celebrate the release of Monologue Notes, a new feature in our agent-native recording app, and an infographic about securing Claudie, the consulting team’s always-on AI employee.
Laura Entis is a staff writer at Every. You can follow her on LinkedIn. To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
For sponsorship opportunities, reach out to [email protected].