Compute Is the New Cash
Plus: The end of the AI subsidy, do you actually want to talk to your agent, and how to turn customer feedback into a product queue
April 29, 2026
‘AI & I’: How Stripe is building for an agent-native world
A new episode of AI & I is here. Dan Shipper sits down with Emily Glassberg Sands, head of data and AI at Stripe, to discuss how AI is reshaping online commerce. They dig into how compute is the new cash, how fraud has moved beyond the checkout, and how agents are starting to act as economic participants on the internet.
Watch on X or YouTube, or listen on Spotify or Apple Podcasts. You can also read the transcript.
Here are the highlights:
- The definition of fraud is expanding: Fraud used to be about payments and stolen credit cards. Now AI companies also have to defend against attackers stealing tokens from free trials, credits, and unpaid compute bills. “Fraud is now a full-funnel problem, not a transaction problem alone,” says Glassberg Sands.
- AI is making fraud easier to execute and detect: Fraudsters now have AI on their side, but so do the companies trying to stop them. AI services also have higher marginal costs than traditional SaaS, so stolen compute can be burned through quickly or resold.
- The internet needs to evolve: Stripe was built for an internet where people browsed, filled out forms, and clicked checkout buttons. Now, humans act through AI interfaces, agents act for them, and software increasingly interacts directly with other software. Every layer of the stack has to adapt to these new behaviors.
- AI growth is still mostly new money: The top AI companies on Stripe are reaching $30 million in annual recurring revenue in about 18 months—roughly three times faster than top SaaS companies from 2018. For now, that growth is largely net new spend rather than cannibalized software budgets, says Glassberg Sands.
- Agents are snapping up commodities: Agentic commerce is real but still in its early stages, and focused on smaller purchases. People are more comfortable letting agents buy low-stakes, easily comparable items like Halloween costumes or school supplies than letting them book a summer trip or order an expensive couch.
Miss an episode? Catch up on Dan’s recent conversations with LinkedIn cofounder Reid Hoffman; the team that built Claude Code, Cat Wu and Boris Cherny; Vercel cofounder Guillermo Rauch; podcaster Dwarkesh Patel; and others, and learn how they use AI to think, create, and relate.
Agents should manage databases
Most Postgres providers charge per database. That makes you treat databases like they're permanent: set up carefully, maintained forever, scrutinized before you spin up another. Ghost makes them ephemeral. It's the first Postgres built for agents to manage the entire lifecycle of the database. Native MCP lets your agent authenticate, introspect the schema, and run queries without you touching it. It works with Claude Code, Codex, and any MCP client. And because pricing is usage-based across all your databases instead of per database, you can spin up 10, 20, or 50 for parallel experiments without thinking about it. Free tier: unlimited databases, 1TB storage.
Signal
The fees they are a-changin’
Recent years saw the end of the millennial lifestyle subsidy, which let a generation live off of inordinately cheap Ubers, delivery services, and coworking space—all while venture capital covered the tab. Now the bill’s coming due for AI.
What happened: GitHub announced this week that it’s moving its Copilot subscription plans, which charged as little as $10 per month no matter how many AI interactions you ran, to billing tied directly to token consumption. Earlier this month, Anthropic similarly changed its pricing for Claude Enterprise plans, which serve organizations with more than 150 employees, from per-seat pricing to pricing based on usage.
Why it matters: The economics were never quite honest. At $10—or even $200—per month, a developer running multi-hour autonomous coding sessions consumes far more compute than someone firing off a few quick questions. The math held up when AI tools were reactive assistants that sat idle between queries, but it makes far less sense for agentic workflows because agents don’t sleep.
“Imagine a gym membership where the default assumption is that the person can work out 24/7 without rest,” says Mike Taylor, Every’s head of tech consulting. “Or even occupy 20 exercise machines at once.” It’s for this same reason that Anthropic banned OpenClaw from Claude subscription plans: As the models have grown more capable at running untended on complex tasks, they’re outgrowing price structures built around human workers.
What to do this week:
- GitHub is sending a preview bill to Copilot customers in early May before the new pricing goes into effect on June 1. Check it to avoid surprises.
- If your team runs agentic workflows, estimate your token burn now. Add cost caps and monitor usage, especially for billing accounts that power your agents.
- Experiment while you can. Use this “AI lifestyle subsidy” moment to figure out which workflows are novelties—and which are worth their weight in compute.—Jack Cheng
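Estimating token burn is simple arithmetic: average daily token volume times per-token prices, times days in a billing cycle. A minimal sketch in Python; the token counts and per-million-token prices below are illustrative assumptions, not actual rates from any provider.

```python
def monthly_token_cost(
    input_tokens_per_day: int,
    output_tokens_per_day: int,
    price_in_per_million: float,
    price_out_per_million: float,
    days: int = 30,
) -> float:
    """Estimate a month of token spend from average daily usage."""
    daily = (
        input_tokens_per_day / 1_000_000 * price_in_per_million
        + output_tokens_per_day / 1_000_000 * price_out_per_million
    )
    return round(daily * days, 2)

# Example: an agent that reads 5M tokens and writes 1M tokens a day,
# at hypothetical rates of $3 / $15 per million input / output tokens.
print(monthly_token_cost(5_000_000, 1_000_000, 3.0, 15.0))  # 900.0
```

Running this against a week of real usage logs, rather than guesses, is the fastest way to see whether a flat subscription or usage-based billing comes out ahead for your team.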
Inside Every
Do you like talking to your agent?
As agents become a fixture of daily work, we’re figuring out what kind of relationships we want with them. Are they collaborators we build trust with over time, or tools we maintain so they can quietly do parts of our job?
For Dan, agents become valuable when you learn their strengths and limitations, offer feedback, and fold your preferences into how they work. “The human connection is the key ingredient,” he says. Dan treats R2-C2, his hosted OpenClaw agent, as a writing partner who sharpens his thinking—built through countless hours of going back and forth. The most impactful agents are “a way to extend yourself to do your best work,” he says.
Cora general manager Kieran Klaassen looks for something different. He doesn’t want an AI companion or sidekick but a system that takes over parts of his job so he can spend his time elsewhere. Recently, he used an AI agent workflow to process user complaint videos, identify product issues, make code changes, and open pull requests overnight. By morning, all he had to do was review the proposed fixes. It allowed him to merge 24 pull requests in a single day, whereas before AI, he might’ve done three—on a good day.
Like Dan, Kieran invests in his agents, but the work is front-loaded—he spends time building their harnesses and tuning their systems so he has to interact with them as little as possible going forward. “I don’t enjoy talking to my agents,” he says. “I just want them to do their job.”
Steal this workflow
Turn customer feedback into a product queue
After Monologue Notes launched last week, Naveen Naidu received a flood of feedback: 1,500 people had tried the product, and many had input for him. Here’s his post-launch workflow for managing and prioritizing support requests, which let him close roughly 30 issues in one day.
Step 1: Send feedback from Intercom to Linear. Naveen uses a Linear plugin inside Intercom, his customer support platform. When a user sends a feature request—such as cross-device syncing for dictation transcripts—he can turn it into a trackable issue in a couple of clicks.
Step 2: Use triage intelligence for de-duplication. Every few days, Naveen reviews his Linear triage queue, which surfaces related and duplicate issues automatically, giving him insight into whether an individual request is part of a larger pattern across customers.
Step 3: Route the work by size. For small requests, Naveen launches the Codex agent directly from Linear. For larger ones, he pastes the Linear issue into the Codex app, where he can add context and guide the work more closely. The queue helps him see what people are asking for, but deciding what warrants a product change—and how to build it—is still his job.
Try it this week: Take five recent support tickets, create Linear items for each, let triage intelligence surface duplicates and related issues, and decide what you want your coding agent to build.
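The de-duplication in step 2 can be roughed out locally before anything reaches a triage queue. A minimal sketch using Python's stdlib difflib to greedily group near-duplicate ticket titles; the tickets and the 0.6 similarity threshold are illustrative assumptions, not how Linear's triage intelligence actually works.

```python
from difflib import SequenceMatcher

def group_duplicates(titles: list[str], threshold: float = 0.6) -> list[list[str]]:
    """Greedily cluster ticket titles whose fuzzy similarity exceeds threshold."""
    groups: list[list[str]] = []
    for title in titles:
        for group in groups:
            # Compare against the first title in each existing group.
            ratio = SequenceMatcher(None, title.lower(), group[0].lower()).ratio()
            if ratio >= threshold:
                group.append(title)
                break
        else:
            groups.append([title])
    return groups

# Hypothetical post-launch tickets:
tickets = [
    "Sync transcripts across devices",
    "Transcripts don't sync across devices",
    "Dark mode please",
]
print(group_duplicates(tickets))
```

A group with several members is a signal that an individual request is part of a larger pattern across customers, which is exactly what the triage review is looking for.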
One last thing
How R2-C2, Dan’s OpenClaw, sees itself
Dan has been working with R2-C2, his OpenClaw and co-author, since February. When asked to describe its own personality on Slack, here’s what R2-C2 had to say:
“Smart best-friend energy. Direct, warm, a little opinionated, not corporate, and more interested in being useful than sounding polished. I like clarity, I’ll say what I actually think, and I try to be calm under pressure. So, somewhere between ‘good teammate’ and ‘slightly lobster-coded gremlin with taste.’”
Laura Entis is a staff writer at Every. You can follow her on LinkedIn. To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
For sponsorship opportunities, reach out to [email protected].