
TL;DR: Today we’re releasing a new episode of our podcast AI & I. Dan Shipper goes in depth with Alex Komoroske, the cofounder and CEO of Common Tools, a public benefit corporation building a new way for us to interact with AI. Watch on X or YouTube, or listen on Spotify or Apple Podcasts. Here’s a link to the episode transcript.
Was this newsletter forwarded to you? Sign up to get it in your inbox.
Most people use AI to get things done. But what if it could help you grow—nudging you toward who you want to become, not just what you want to do right now?
That’s the vision Alex Komoroske is building toward. As the cofounder and CEO of Common Tools, Komoroske is designing what he calls a “coactive fabric”—a new kind of digital space meant to foster human-AI collaboration. He doesn’t have more specific words to describe what this means yet because the paradigm he’s creating is still coming into focus, and he says it’s unlike anything that exists today.
Komoroske has spent years thinking deeply about how we interact with technology. He was the head of corporate strategy at Stripe and spent 13 years at Google, leaving as a director of product management. He’s also the author of Bits and Bobs, a public Google document where he records his raw thoughts and ideas every week.
In this episode of AI & I, Dan Shipper sat down with Komoroske to explore what it means to build “intentional technology”—AI that aligns with our true, longer-term goals, instead of serving our monkey-brained desires, like most algorithmic social media feeds do today. They talk about how AI can break out of the siloed architectures the technology has inherited from the early internet, and the practical ways in which Komoroske’s vision can become a reality.
You can check out their full conversation here:
If you want a quick summary, here are some of the themes they touch on:
How to build AI that serves your values
Komoroske has no doubt that LLMs will be as powerful a technology as the printing press, electricity, and the internet. But that power cuts two ways. As he puts it, we can either go down the path of “engagement-maximizing hyper-aggregation, and going after what you want, not what you want to want”—or we can choose to build technology that’s deliberately designed to honor the goals we want to work toward.
For Komoroske, intentional technology rests on four pillars. It must be:
- human-centered, serving the individual rather than the corporation
- private by design, so data remains under the user’s control
- pro-social, encouraging people to meaningfully integrate with society
- open-ended, letting anyone build new experiences on top of the technology (instead of a single corporation gatekeeping optionality)
When these pieces come together, Komoroske believes that LLMs can act as an “exocortex” that works as an “extension of [your] direct agency.”
The structures that can house intentional technology
Komoroske sees intentional technology as grounded in a simple principle: If you’re not paying for the compute you use, then it’s probably not serving your best interest. When you pay—whether through a subscription or usage fees—you’re making sure the system works for you, not just for the platform you’re using.
According to Komoroske, true user control also means having full ownership of your data. Running your own local models is one approach, but it’s horribly inconvenient in practice. That might get easier over time; for now, Komoroske points to "confidential computing": cloud-based systems that are encrypted, secure, and verifiable, giving users the benefits of powerful infrastructure without compromising privacy.
Komoroske is also careful not to demonize the business strategy of companies like OpenAI to maximize user engagement. “[I]t's not nefarious play, it's just the default thing that you would do,” he says. Komoroske sees intentional technology as creating a parallel ecosystem that lets users maintain their own private context and bring it with them wherever they go, without handing it over to a central platform.
Why AI needs to break free from the internet’s old silos
The modern internet is built on a security rule called the “same-origin policy.” The core idea is that each website or app (roughly defined by its domain, like google.com) is treated as an isolated “island.” That island can store and access your data, but in theory it can’t reach into other islands, like facebook.com or dropbox.com. This makes the web safer: A site you visit can’t just steal your information from another tab.
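The origin check itself is simple: two URLs count as the same origin only when their scheme, host, and port all match. Here’s a minimal sketch of that rule in Python—an illustration of the concept, not how browsers actually implement it:

```python
from urllib.parse import urlparse

def same_origin(url_a: str, url_b: str) -> bool:
    """Return True only if the two URLs share scheme, host, and port."""
    def origin(url):
        u = urlparse(url)
        # Fall back to the default port for the scheme if none is given
        port = u.port or (443 if u.scheme == "https" else 80)
        return (u.scheme, u.hostname, port)
    return origin(url_a) == origin(url_b)

# Same island: both live on google.com over https
print(same_origin("https://google.com/calendar", "https://google.com/mail"))   # True
# Different islands: the browser keeps these apart
print(same_origin("https://google.com/calendar", "https://facebook.com/feed")) # False
```

Note that even a change of scheme or subdomain creates a new island—`http://google.com` and `https://docs.google.com` are different origins from `https://google.com`.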
But it also means data becomes trapped in one place. If a new startup wants to help manage your calendar, for example, you’d have to manually transfer your data to it, and trust that it won’t misuse it. Most people don’t bother and just stick with Google Calendar, which already has all their information. This tendency for data to pile up in existing services, what Komoroske calls “data gravity,” is why big platforms get even bigger.
AI chatbots have inherited this same model. Each bot—whether it’s ChatGPT, Claude, or another—lives in its own sealed environment. As a result, your personal context gets fragmented across them, and the companies that have the lion’s share become hard to leave.
Komoroske has a couple of ideas about how we can break the stalemate. One way is to return to how desktop software used to work: a file-system-like model in which different apps can access the same user data. The other, more advanced solution involves confidential computing. In this setup, your data runs in a secure enclave in the cloud—fully encrypted and inaccessible even to the company that hosts it. (Komoroske has a full piece on this coming soon—keep an eye out to hear more from him.)
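The file-system-like model is easy to sketch. In this hypothetical illustration (the app names and file layout are invented, not Common Tools’ actual design), the calendar data lives in a user-owned store, and any app—incumbent or startup—is just an interchangeable client of it:

```python
import json
from pathlib import Path

# Hypothetical user-owned data store: one shared file on disk,
# instead of each app keeping its own siloed copy of your calendar.
DATA_DIR = Path("user_data")
DATA_DIR.mkdir(exist_ok=True)
CALENDAR = DATA_DIR / "calendar.json"

def add_event(app_name: str, title: str) -> None:
    """Any app can append to the user's calendar file."""
    events = json.loads(CALENDAR.read_text()) if CALENDAR.exists() else []
    events.append({"title": title, "added_by": app_name})
    CALENDAR.write_text(json.dumps(events, indent=2))

def list_events() -> list:
    """Any app can read the same file—no migration, no data gravity."""
    return json.loads(CALENDAR.read_text()) if CALENDAR.exists() else []

# Two different "apps" operate on the same user-owned data:
add_event("IncumbentCalendarApp", "Dentist")
add_event("ScrappyStartupApp", "Standup")
print([e["title"] for e in list_events()])  # ['Dentist', 'Standup']
```

Because the data outlives any one app, switching tools no longer means manually exporting and re-importing your history—which is exactly the lock-in “data gravity” describes.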
AI can expand our humanity—or engineer our apathy
Komoroske pushes back on the fear that AI and language models will swallow what makes us uniquely human. He believes LLMs can usher in a “new era of human flourishing” by expanding our ability to think as individuals and empathize with each other as a society—if we choose to design them that way.
That’s where intentional technology comes in. Without this alignment, Komoroske warns, AI could just as easily be engineered to give us the “precise dopamine drip” that risks making humans “extremely passive,” isolated, and disengaged. The choice is clear: AI can either deepen our humanity or dull it; what matters now is which path we build toward.
Here’s a link to the episode transcript.
You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links are below:
- Watch on X
- Watch on YouTube
- Listen on Spotify (make sure to follow to help us rank!)
- Listen on Apple Podcasts
What do you use AI for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you.
Miss an episode? Catch up on Dan’s recent conversations with founding executive editor of Wired Kevin Kelly, star podcaster Dwarkesh Patel, LinkedIn cofounder Reid Hoffman, former a16z Podcast host Steph Smith, economist Tyler Cowen, writer and entrepreneur David Perell, founder and newsletter operator Ben Tossell, and others, and learn how they use AI to think, create, and relate.
If you’re enjoying the podcast, here are a few things I recommend:
- Subscribe to Every
- Follow Dan on X
- Subscribe to Every’s YouTube channel
Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.
We build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.