
Why Aggregators Ate the Internet

The hidden architectural choice that makes big platforms bigger—and how we could change the rules for AI


Comments

Will Johnson 2 days ago

The problem with other parties (e.g. doctors, insurance companies, tech companies, the government) having my personal information is that once they have it, they have it forever. I can't unshare it.

Is this some sort of Inspector Gadget type of encryption where I can send a piece of data (or some sort of blockchain address, to avoid having to maintain the files myself) and only grant the recipient access to the data for a certain period of time? When the period ends, the data can no longer be decrypted and they cannot access it. Enabled by Confidential Compute? If so, cool!

My understanding is that once something is decrypted, the cat is out of the bag and can't be put back in. If something like this were possible, it would do more for user privacy than any number of well-intentioned laws.
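A minimal sketch of the idea being asked about, with all names hypothetical and a toy cipher that is illustrative only, not real cryptography: a key broker (standing in for a Confidential Compute enclave) holds the key and simply refuses to decrypt once the grant expires. Note it cannot revoke plaintext the recipient already copied out, which is exactly the cat-out-of-the-bag caveat above.

```python
import hashlib
import secrets
import time

def _keystream(key: bytes, n: int) -> bytes:
    # Illustrative SHA-256 counter-mode keystream -- NOT production crypto.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

class TimeLimitedGrant:
    """Stands in for a key broker inside a trusted enclave: the recipient
    never sees the key, only decryption results before the expiry time."""
    def __init__(self, key: bytes, expires_at: float):
        self._key = key
        self._expires_at = expires_at

    def reveal(self, ciphertext: bytes) -> bytes:
        if time.time() >= self._expires_at:
            raise PermissionError("grant expired; key is no longer usable")
        return decrypt(self._key, ciphertext)

key = secrets.token_bytes(32)
ct = encrypt(key, b"my medical record")
grant = TimeLimitedGrant(key, expires_at=time.time() + 60)
print(grant.reveal(ct))  # inside the window, the plaintext comes back
```

The enforcement only works if the key genuinely never leaves the enclave; once any plaintext does, the time limit applies to future decryptions, not past copies.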

Backofthenapkin 3 days ago

🚧 The Bug Beneath the Browser
Reading Alex Komoroske’s sharp post on the same-origin paradigm had me nodding—until the pivot to AMD’s secure enclaves. A solid feature, sure. But a narrow fix to a systemic problem.

The flaw isn’t in the chip. It’s in the model.
What’s broken is how we authorize, move, and trust data online.

Here’s why hardware-bound solutions won’t solve the deeper architecture problem:

🌱 Too Local: CPU fixes assume data lives in one secure zone. But like people, data needs to move, combine, and adapt.
🌾 Weeds in the Garden: Healthy systems allow messy overlap. Ecosystem diversity—not lockdown—is what drives innovation.
🌊 The River Always Wins: A rock can block flow. But real resilience comes from trees, soil, and balance. Nature distributes control—and software should too.

Komoroske nails the true villain: the same-origin paradigm. It traps your data in silos, makes permission binary, and rewards companies that aggregate context at scale. The result? Big gets bigger.
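The all-or-nothing nature of that check is easy to see in miniature. A simplified sketch (real browsers layer CORS, cookies, and storage partitioning on top of this, but the core test really is exact tuple equality):

```python
from urllib.parse import urlsplit

def same_origin(a: str, b: str) -> bool:
    # Browsers define an origin as the (scheme, host, port) tuple;
    # access is binary, based on exact equality of that tuple.
    def origin(url: str):
        u = urlsplit(url)
        default_port = {"http": 80, "https": 443}.get(u.scheme)
        return (u.scheme, u.hostname, u.port or default_port)
    return origin(a) == origin(b)

print(same_origin("https://app.example.com/page", "https://app.example.com/api"))  # True
print(same_origin("https://app.example.com", "https://cdn.example.com"))           # False
```

There is no notion of "partial" trust in that function: a different subdomain, scheme, or port is as foreign as a different company, which is the binary-permission problem in a nutshell.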

🧩 What we need isn’t tighter perimeters—it’s a system where trust, access, and privacy aren’t locked in a zero-sum game.

That’s why I’m following researchers like Geoffrey Litt, who’s pushing decomposable documents and malleable software. Instead of rigid apps, we get fluid tools that operate on user-owned, user-governed data. You bring the tool to the context, not the other way around.

🔑 The future of AI isn’t in the model or the chip. It’s in the architecture of agency.

Dan Shipper 3 days ago

@backofthenapkin this seems very AI generated

Backofthenapkin 3 days ago

@danshipper What magic decoder ring are you using? I write with a partner to sharpen for clarity and brevity. Feels like common sense, not a conspiracy. Like your business, clarity is currency. I don’t mind splitting the check with AI.

Oshyan Greene 3 days ago

I don't understand how your thesis that Same Origin is responsible for data silos can really be justified. APIs exist; they can have any rules the developers want, and the auth can work in whatever way is desired or can be developed.

People already trust many big companies with their data, with little or no assurance of genuine privacy or confidentiality. People tell secrets to AI every day without considering who has access to them and what they might do with them. People take nude pics all the time, put them up on the cloud, and get hacked. These problems aren't solved by Confidential Compute.

And as far as permissions and sharing go: some people like myself have photos in both Amazon's and Google's clouds. Our use of them is based on trust in those orgs; the lack of their ability to communicate with one another is *not* a trust issue. Google and Amazon simply have no incentive to provide data interchange in that context. The reason trusting Google with everything works is that they provide services covering a wide range of common use cases, and they (generally) handle the integration between them internally. Again, none of this has to do with Same Origin.

So the reason better data exchange systems haven't been developed before now is not, in my view, a technical one but simply an incentive one in the capitalist market. Companies could have put resources into creating open systems with good permissions models but they have no financial incentive to do so. They still don't, at least not significant ones. Maybe that will change with AI, but I don't see the case being laid out that clearly above.

Federated systems with granular permissions already exist; they prove that tech is not the underlying issue. There's no need for a fancy Confidential Compute implementation, which in any case still relies on trust: I can send my data to a novel third party for some unique service not provided by the larger trusted data silo I use, but I have to have reason to trust that third party, and data security during processing is a small part of that concern at a practical level.

As I noted above, confidentiality of data at runtime is generally not a concern for the mass market, as evidenced by people's use of social media, and now of AI tools. Blockchain implementations likewise exist to solve similar trust issues.

Confidential Compute is a red herring for this problem domain, though it obviously solves some important issues for business customers, and others that could actually matter in the AI age, as we as consumers have more and more incentive to share ever more sensitive information with AI companies. But again, that's a separate issue from Same Origin *and* from data interchange. At least as far as I can see.
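For what it's worth, the kind of granular grant I mean is mundane; an OAuth-style scope check is a few lines (all names here are hypothetical, for illustration only):

```python
# A third party holds a token granting exactly one capability,
# rather than the all-or-nothing access of an origin boundary.
GRANTS = {
    "token-abc": {"photos:read"},  # may read photos, nothing else
}

def authorize(token: str, scope: str) -> bool:
    # Grant access only if this token explicitly carries the scope.
    return scope in GRANTS.get(token, set())

print(authorize("token-abc", "photos:read"))    # True
print(authorize("token-abc", "photos:delete"))  # False
```

The hard part was never writing that check; it's getting anyone with market power to offer it across company lines.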

Have I misunderstood the entire argument here?