Hello, and happy Sunday! Tomorrow is OpenAI DevDay, and we’ll be hosting a watch party for paid subscribers in our Discord server. For all you free subscribers reading this, you can upgrade to get access. See you there!—Kate Lee
Was this newsletter forwarded to you? Sign up to get it in your inbox.
Knowledge base
"Vibe Check: Claude Sonnet 4.5" by Dan Shipper/Vibe Check: The days of waiting around for AI responses might be over. Anthropic’s new Claude Sonnet 4.5 is blazing through tasks at nearly twice the speed of its predecessors. Every’s team spent last weekend putting it through its paces, and the results are impressive. Read this to learn whether it’s worth switching models for your everyday tasks.
"I Inherited a Broken App—And Made It Mine" by Yash Poojary/Source Code: Every’s AI file organizer Sparkle had been passed around like a hot potato through four different engineers. Then it landed on Yash Poojary’s desk. He declared war on Mac's Spotlight and transformed Sparkle into an AI command center that keeps you in your flow state. Read this for a refreshingly honest take on why ownership is the only true moat in the age of AI—and download Sparkle Search to check out the fruits of Yash’s labor.
"He's Building the Plumbing For AI to Use the Internet" by Rhea Purohit/AI & I: Remember when you had to explain to your grandparents how to use email? Now imagine teaching that to an AI. That's the challenge Alex Rattray is tackling in building model context protocol (MCP) tools at his company, Stainless. As he tells Dan Shipper on this episode of AI & I, he’s essentially developing instruction manuals that help AI models interact with software the way humans do with buttons and menus. 🎧 🖥 Watch the full interview on X or YouTube, or listen on Spotify or Apple Podcasts.
"How to Make AI Write Less Like AI" by Chris Silvestri: Scaling content production without scaling judgment is a recipe for disaster. After an embarrassing email from a client, Chris Silvestri developed a three-phase system that transforms mediocre AI writing into genuinely effective content. Read this for a practical framework that works for everything from marketing copy to board decks, complete with side-by-side examples showing the dramatic difference proper context makes.
"Seeing Like a Language Model" by Dan Shipper/Chain of Thought: We've been thinking about intelligence all wrong. Language models reveal that intelligence emerges not just from explicit rules and logical reasoning, from intricate pattern recognition across vast webs of relationships. Dan explores how this shift transforms everything, from how we understand reality to how we approach knowledge. Read this—the first of four weekly pieces from Dan—if you want to understand the philosophical earthquake happening as AI reshapes our fundamental worldview.
Alignment
The accountability sponge. Last week in San Francisco, I took my first robotaxi ride. The journey was so smooth it was like I was drifting on a still lake. The car anticipated every lane change and merge with beautiful mechanical grace, and, better yet, no driver meant no small talk about the weather, and I could belt out Taylor Swift without shame. It was everything I wanted the future to be.
A couple of days after my ride, I found out that police pulled over a Waymo-driven car for an illegal U-turn just a few miles south of where I’d been. The cops were flummoxed—the law doesn’t say anything about whom to ticket if there is no “whom” driving. The car was free to go.
Starting next year, California law is changing so police can report robotaxi violations to the Department of Motor Vehicles, and ultimately Waymo (that is, Alphabet) will pay. It sounds like problem solved—except an Alphabet employee who's never seen the code will sign a check for a decision made by a neural network they might not be able to explain. And what happens if there’s a major crash or even a death? Who takes accountability?
Because I work in medicine, I’ve seen AI starting to diagnose conditions, and it's already better than humans at reading certain scans. But when it misses something or sees something that isn't there, the ensuing lawsuit won't name the algorithm—it’ll name the doctor who clicked “approve,” because malpractice law needs a fleshy human with insurance.
This could lead to a new sort of job in our AI age: accountability sponges. Definition: humans whose primary job is to absorb legal liability for decisions we don't or can’t comprehend. Liability without ability, the most human job imaginable.
The robotaxis will keep rolling through San Francisco, and I'll keep riding in them. But there's a certain irony that as AI gets more autonomous, we humans may end up being its professional fall guys.—Ashwin Sharma
That’s all for this week! Be sure to follow Every on X at @every and on LinkedIn.
We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.
Comments
We already have a system for civil law (financial) liability without ability. It's called insurance. Its existence doesn't invalidate all of your concerns, but it may put them in perspective.