The two biggest critiques of web3, analyzed
With a little help from Clay Christensen, strategy legend
New to Divinations? It’s a weekly column on business strategy written by me, Nathan Baschez, co-founder of Every and former first product/engineer at Substack and Gimlet Media. I use classic theories of business strategy to try to understand what will work (and not work) in technology, media, and web3.
If you’re not already on the list, join!
There are two critiques of web3 that I hear more often than any other:
- Pretty much anything that can be done on a blockchain can be done in web2. Sure, you have to trust a centralized provider, but 99% of the time that’s fine. Society requires trust to function.
- The main use case of web3 is speculation and money laundering. We’ve been promised much more, but at this point blockchain technology is over a decade old. It didn’t take web1 nearly this long to flourish after the browser and HTTP were invented. Shouldn’t more have happened by now?
These critiques are so potent because they are at some level true. But that doesn’t mean that web3 is doomed! In fact, there’s a specific type of innovation that always follows this pattern of starting out looking confusing and questionable but ending up indispensable and dominant.
The pattern, first described by Clay Christensen in his massively important book The Innovator’s Solution, goes like this:
When a new technology is invented it tends to be built by one centralized company that has a lot of control, because they need that control to get the system working. But once it’s working, the technology’s architecture becomes stable and gets standardized. Once this happens, an alternate ecosystem can spring up to serve the same job-to-be-done, but in a more open and modular way. Instead of one full-stack company controlling everything from soup to nuts, companies can specialize, focus, build on top of each other, and extend the value in new and unexpected ways.
Web3 is that second type of innovation. It’s not about totally new jobs-to-be-done. It’s mainly about taking ideas we’ve already figured out, and putting them on an open and modular footing, to make them more extensible and to allow anyone to build new value without permission or platform risk. This kickstarts a robust positive feedback loop—a network effect, specifically—that can outweigh even the most powerful moats.
This is an idea I’m honestly pretty excited about. It’s not necessarily new to crypto enthusiasts, but once it clicked for me, it made the hype around web3 make much more sense.
Ready? Let’s go deeper 🔮 :)
The Two Big Critiques
The first critique says there are very few human needs that only blockchains can fulfill. Pretty much all of it can be built within web2. Sure, you’d need to trust a centralized provider, but society is built on trust! It’s good to have trusted entities that have the power to control things and make them work well.
Plus, democratic capitalism is the original decentralization: if one company proves itself to be untrustworthy, that’s an opportunity for another, better company to take its place. If laws prevent this, we can change the laws through the democratic process. Of course this process is neither perfect nor smooth, but it’s good enough to get the job done without blockchains, which are slow, expensive, and inflexible. The benefits of decentralization feel so vague and abstract. Like, ok, censorship-resistant publishing is fine, I guess, but am I really going to leave Twitter for that? Nope. Want proof? Just look at how Parler and Gab worked out.
The second critique appeals to history, and says that blockchain technology has been around for over a decade, and yet all anybody is using it for is financial speculation and money laundering. If web3 is so great, shouldn’t it have been further along by now with real use cases?
I mean, speculation is interesting, but it’s a far cry from what web3 promises: video games with gear you can truly own and take to any other game; opportunities unlocked instantly when you verify your on-chain credentials, like a college degree issued as an NFT that helps you land a job; social networks with digital economies that make your reputation portable to all other networks.
Plus, most of the speculation is happening because people believe those other use cases will one day happen, so the price goes up. If the other use cases besides “currency” never happen, then the vast majority of the web3 speculation and hype will disappear, and it won’t even be good for the “speculation” job-to-be-done anymore. Sure, maybe Bitcoin will remain a quirky digital-gold type thing that retains some value, but if that’s all that happens, then certainly the current atmosphere in tech will come to be seen as rather silly.
Are the critiques true?
I think these critiques are so potent because, yes, they are kinda true.
There isn’t much that blockchains enable that would be impossible to implement in a web2 way. It wouldn’t be trustless and decentralized—but the benefits of decentralization are way too abstract and vague to compel most people. And to be honest, the excuse that crypto just needs “better onboarding” is starting to wear thin. It’s been years! Shouldn’t crypto be simpler and more compelling by now?
But even though I think these critiques are both mostly true, my recent realization is that this doesn’t necessarily mean web3 is doomed. In fact, many hugely impactful and successful technologies have faced similar critiques in the past—and overcome them. Turns out, web3 fits the pattern of a very specific type of innovation almost perfectly. It’s less about serving totally new value propositions, and more about setting a new foundation for better serving existing ones. It’s more re-invention than invention. This process is messier and slower than “zero-to-one” innovation, but in the end it wins every time, because open ecosystems are more flexible and extensible than closed ones.
Here’s where Clay Christensen’s theory of modularity comes in, and helps us zoom out and understand how new technologies spread.
How Technologies Evolve
New technologies evolve in two phases:
First, somebody has to figure out how to make the entire system work, end-to-end. This is “zero to one” type innovation. It requires a centralized, full-stack player who has control over all the important components of the system, so they have the degrees of freedom necessary to solve all the problems with the technology.
This is how computers first came to market: IBM built the hardware (storage, memory, CPU, I/O, and everything else), wrote the programming languages and software, and handled customer support. They needed to control all these layers because the optimal architecture had not been discovered yet, and everything depends on everything else. The constraints you encounter when designing CPUs affect the programming language design, the software depends on the language, and so on. There are tons of interdependencies.
But eventually the full-stack player runs out of room for improvement. They get to a point where customers don’t care so much about the changes they make to the system, and they’re not willing to pay more. At this point the architecture reaches a local maximum, and it stabilizes.
Once stability sets in, the second phase of innovation happens: a new value chain emerges to compete with the full-stack player, but with an open architecture based on well-defined standards. Once the computer industry established the basic requirements of all the components like storage, memory, CPU, and OS, it became possible for independent companies to specialize and just create one component.
This system starts out kind of messy but ends up winning because the open architecture is a much richer environment for market ecosystems to improve performance and extend functionality. For example, once you have a computer industry that has a market for hard drives rather than an internal function at a company like IBM, then some genius can invent new ways to achieve better performance, like SSDs, and as long as it conforms to the standard interfaces expected by the industry then it can be quickly adopted.
In Christensen’s words, from chapter 5 of The Innovator’s Solution:
“Modularity has a profound impact on industry structure because it enables independent, nonintegrated organizations to sell, buy, and assemble components and subsystems. Whereas in the interdependent world you had to make all of the key elements of the system in order to make any of them, in a modular world you can prosper by outsourcing or by supplying just one element. Ultimately, the specifications for modular interfaces will coalesce as industry standards. When that happens, companies can mix and match components from best-of-breed suppliers in order to respond conveniently to the specific needs of individual customers.”
(In crypto, this ability for components to work together in an open way is called “composability.”)
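Christensen’s “mix and match” idea is essentially what programmers mean by coding to an interface. Here’s a minimal, purely illustrative sketch in Python (all class and method names here are made up for this example, not taken from any real hardware standard):

```python
from abc import ABC, abstractmethod

class Drive(ABC):
    """A standard interface: any drive that implements it can slot in."""
    @abstractmethod
    def read(self, block: int) -> bytes: ...
    @abstractmethod
    def write(self, block: int, data: bytes) -> None: ...

class SpinningDisk(Drive):
    """The incumbent component."""
    def __init__(self):
        self.blocks = {}
    def read(self, block):
        return self.blocks.get(block, b"")
    def write(self, block, data):
        self.blocks[block] = data

class SSD(Drive):
    """A later innovation: different internals, same interface."""
    def __init__(self):
        self.cells = {}
    def read(self, block):
        return self.cells.get(block, b"")
    def write(self, block, data):
        self.cells[block] = data

class Computer:
    """The integrator depends only on the interface, not the vendor."""
    def __init__(self, drive: Drive):
        self.drive = drive
    def save(self, block, data):
        self.drive.write(block, data)
    def load(self, block):
        return self.drive.read(block)

# Mix and match: swapping the drive requires no change to Computer.
for drive in (SpinningDisk(), SSD()):
    pc = Computer(drive)
    pc.save(0, b"hello")
    assert pc.load(0) == b"hello"
```

The point is that `Computer` never needs to change when a better drive comes along; the stable interface is what lets an independent innovator’s SSD be “quickly adopted.”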
This pattern isn’t unique to computing. It’s happened over and over again in many industries: cars, oil, railroads, cameras, movies, etc.
It’s perhaps true that IBM could have invented the SSD back in the day when they built all components of the computer, soup to nuts. But innovations like that are far more likely to emerge when anyone can build on top of an open, permissionless architecture.
This does not mean there will be no companies with power and everything becomes totally modular. Some layers of the value chain will still be controlled by a company that maintains a tight integration. And in fact, the most valuable thing about Clay Christensen’s theory is that it helps us predict which layers of the value chain will be the ones where power naturally accretes.
(If you’re interested I wrote a whole post on that here, but the TL;DR is power belongs to the company who controls the integration between the layers of the value chain that determine performance for the things that matter most to end users.)
What this means for web3
Just as the computing industry moved from a totally integrated system controlled by IBM to a more open and modular one, web3 is a movement to rebuild the internet in a more open, modular way.
This explains both of the biggest critiques of web3:
- It seems as though nothing in web3 couldn’t be built in web2 because web3 is in some ways more about re-inventing than inventing. It’s about taking stuff we’ve already figured out and rebuilding it on a more open and modular architecture, so that it can be improved upon and extended in unpredictable ways. Saying web3 ideas can be built in web2 is like saying everything in open source software could have been built in a closed source way. It’s sorta true, but it also misses the point: openness and composability make it possible for technologies to be built on top of each other without permission. It may take a while to get going, and it may seem like closed architectures win at first, because they do! But in the long run open architectures usually win. Which leads us to the second critique...
- It’s taking web3 a long time to gain adoption because there is no single centralized integrator playing the role of IBM. This is why people often complain that crypto is so confusing and hard to get started with. Nobody is in control! But that’s exactly the point. We’re in the middle of the phase where modular, open alternatives are being built. These systems are not good enough yet to shift many of our daily activities to use crypto, and the improvement trajectory is somewhat messy and hard to predict, but if it fits the historical pattern, it’s one that can’t be ignored.
Once these realizations clicked for me, I became a lot less certain that web3 was just a fantasy.
In particular the comparison to open-source software felt extremely compelling. In the 90s there were lots of companies that sold source code that you could use to build applications, but very few of those still exist. If it’s just code, open alternatives win because of network effects. The more people that use a piece of code, the more incentive there is for someone to build tools on top of that code. This creates a positive feedback loop where the most popular projects get more useful, making them more popular, increasing the incentive to make them more useful. This is why open source beat closed source.
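That feedback loop can be sketched as a toy simulation. Everything here is made up (the variables, the growth rate, the linear coupling); the only point is to show how mutual reinforcement between adoption and usefulness compounds:

```python
# Toy model of an open-source network effect. All numbers are
# illustrative assumptions; the point is the loop's shape.
def simulate(steps: int, growth: float = 0.1) -> list:
    """Track adoption when usefulness and adoption reinforce each other."""
    users, usefulness = 1.0, 1.0
    history = []
    for _ in range(steps):
        users += growth * usefulness   # more useful -> more users
        usefulness += growth * users   # more users -> more tools built on top
        history.append(users)
    return history

adoption = simulate(steps=30)
# Growth compounds: each step's gain in adoption exceeds the last.
gains = [b - a for a, b in zip(adoption, adoption[1:])]
assert all(later > earlier for earlier, later in zip(gains, gains[1:]))
```

In the model, every step’s gain is bigger than the previous one because adoption and usefulness feed each other; that compounding is what lets an open project outpace a closed competitor growing at a fixed rate.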
Ask any software developer—the web as it exists today would be impossible without open source. If I had to point to a single cause of the tech boom from the 2010s to today, it would be either open source or cheap cloud computing. Frankly, it’s a tie.
And all this happened with surprisingly little at stake: in the case of open source, the main incentive is just individual software engineers gaining prestige and high-paying jobs. Nadia Eghbal has written extensively about how incentive problems in the open source community hold back progress.
It reminds me of a line I wrote a couple months ago:
“Some days I feel like the holy trinity of NFTs, DAOs, and DeFi might replace the very foundation that society rests on. Other days it feels like 90% vaporware and Ponzi schemes that collectively emit more CO2 than a medium-sized country. The challenge, as I see it, is to hold both of these ideas at once.”
I’m going to keep trying to hold these ideas, and navigate the changes we’re living through with clear eyes.
I’m excited to see what happens next!