Mini-Vibe Check: Claude Managed Agents Handle the Infrastructure Work
The case against LLMs, and new words for our AI age
April 15, 2026 · Updated April 21, 2026
‘AI & I’: The case against LLMs
Today, we’re releasing a new episode of our podcast AI & I. Dan Shipper sits down with Eve Bodnia, founder and CEO of Logical Intelligence, which is developing an alternative AI model to LLMs. They discussed a question most people in AI are afraid to ask: What if LLMs aren’t going to be the most powerful form of AI?
Bodnia argues that LLMs have intrinsic weaknesses, notably on non-language tasks such as spatial reasoning, logical verification, and real-time data analysis. Her solution: energy-based models (EBMs), which map possible outcomes onto a mathematical landscape. Likely outcomes sit in valleys, and unlikely ones sit on peaks. Whereas an LLM commits to one token at a time, an EBM scans the full terrain to find the lowest point—the most probable answer. Bodnia argues that it’s this approach, not bigger LLMs, that will lead to the next AI phase shift.
Watch on X or YouTube, or listen on Spotify or Apple Podcasts. You can also read the transcript.
Here’s how LLMs and EBMs are different, according to Bodnia:
- Architecture transparency: You can’t see inside an LLM; you can only evaluate its outputs. EBMs are governed by physics, which means their architecture is legible while they’re running. “Think of it as something that doesn’t play a guessing game, with an architecture that essentially allows it to self-align as it processes information,” she says. “It’s no longer a black box.”
- Language-based versus data-native: LLMs are language-dependent even when the task has nothing to do with language, like data analysis. “If your data is numbers, relationships, and functions, and you try to map those rules into words and then search for the next word, you’re losing a lot of information,” Bodnia says. EBMs work directly with the underlying data structure, including numbers and spatial coordinates.
- Sequential versus panoramic reasoning: An LLM is like navigating San Francisco without a map. Each turn constrains the next, and if you go down the wrong street, you can’t reverse course. An EBM, by contrast, has the bird’s-eye view—it can evaluate multiple routes at once and course-correct before hitting a dead end.
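To make the "valleys and peaks" metaphor concrete, here is a toy sketch in Python. The quadratic energy function and the candidate grid are invented for illustration; they are not Logical Intelligence's actual model. The point is only the shape of the computation: score every candidate outcome at once and keep the global minimum, rather than committing step by step.

```python
# Toy illustration of the "panoramic" idea behind energy-based models.
# Lower energy = more plausible outcome (a valley in the landscape).
# The energy function below is an invented example, not a real EBM.

def energy(x: float) -> float:
    """A simple landscape with one valley at x = 3.0."""
    return (x - 3.0) ** 2 + 1.0

# Scan the whole landscape and keep the lowest point, instead of
# taking one irreversible step at a time.
candidates = [i * 0.1 for i in range(-100, 101)]
best = min(candidates, key=energy)

print(round(best, 1))  # the valley sits at x = 3.0
```

A sequential decoder, by contrast, would pick a direction from its current position and keep walking; if the landscape had a second, deeper valley elsewhere, the scan would find it and the greedy walk might not.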
Miss an episode? Catch up on Dan’s recent conversations with LinkedIn cofounder Reid Hoffman; the team that built Claude Code, Cat Wu and Boris Cherny; Vercel cofounder Guillermo Rauch; podcaster Dwarkesh Patel; and others, and learn how they use AI to think, create, and relate.
Mini-Vibe Check: Claude Managed Agents
Or that feeling when the problem you’ve spent a lot of time solving gets solved for you
We’re all about agents at Every. Which means many of us have devoted a lot of time to building the infrastructure that makes them run.
That work matters a lot less as of earlier this month, when Anthropic launched Claude Managed Agents in public beta: a hosted service that handles sessions, memory, tool use, and credentials. You describe how you want your agent to operate, and Claude makes it happen.
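To give a feel for what "describe how you want your agent to operate" might look like, here is a purely hypothetical configuration sketch. Anthropic has not published a schema for Claude Managed Agents, and every field name below is invented; treat it as an illustration of the declarative idea, not the product's API.

```yaml
# Hypothetical sketch only — no published Claude Managed Agents schema
# exists; all field names here are invented for illustration.
agent:
  name: release-notes-drafter
  instructions: >
    Watch the repository for merged pull requests and draft weekly
    release notes for human review.
  memory: persistent      # the hosted service keeps state across sessions
  tools:
    - github              # credentials held and rotated by the platform
  schedule: weekly
```

The contrast with today's hand-rolled agent stacks is that sessions, memory, tool credentials, and scheduling would all live in the managed service instead of in your own infrastructure code.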
It’s a true “oh shit” moment, says Dan, one that frees up considerable energy to focus on other problems—good!—and commoditizes a skillset you may have spent months developing—destabilizing, maybe!