When Adam Neumann lost his job running WeWork, the startup then valued at nearly $50 billion, there were two sins that always struck me as particularly grievous. The first was that he had WeWork lease buildings that he owned (i.e., he was profiting from the supply), and the second was that he sold the trademark for “We” to WeWork for $5.9 million (a bit of a stretch, but it could be characterized as him profiting off the platform).
When Sam Altman lost his job running OpenAI, the startup valued at over $50 billion, there were many sins that struck me as particularly grievous. The first was that he was looking to raise billions to start his own chip company that could sell to OpenAI (i.e., profiting from the supply), and the second was that he was separately raising billions to build a hardware device that would run OpenAI’s models (i.e., profiting off the platform). To further muddy the waters, OpenAI’s main partner, Microsoft, buys power from a company in which Altman is a major shareholder. OpenAI’s in-house VC fund has led a Series A round in another company in which Altman is a major shareholder. Altman owns shares in some of the buzziest tech companies in the world, including Stripe, Instacart, and Humane. Many, if not all, of those companies are using the tech that OpenAI is selling.
This is enough conflicts of interest to make a corporate lawyer's brain smush in on itself like a dying sun. It is a ludicrous tangle of misaligned incentives.
Now, the Neumann-Altman comparison isn’t exactly a fair one. WeWork is worth a grand total of zilch, while OpenAI was about to close an $86 billion tender offer before this weekend’s debacle. A bankrupt real estate company can’t really be compared to one that regularly releases products straight out of science fiction. But the fact that my conflict-of-interest parallel is so easy to draw should give everyone pause in their full-throated defense of Altman.
Any board, non-profit or not, could consider all this equity mishmash a fireable offense. At the risk of being heavy-handed with this analogy: before this weekend’s events, Altman was looking to both make the uranium and build the power plant for his nuclear reactor company. It is a level of profiteering that I find disquieting. You can argue that Altman is uniquely qualified to lead all these companies, but he should be doing so in a way where he isn’t the one primarily benefiting. A cleaner structure would be an AI holding company for the chips, models, and hardware, so that all of OpenAI’s investors could benefit. As it is, I feel uncomfortable that the company’s employees and investors have helped jump-start an industry that Altman is attempting to corner.
Actually, maybe the nuclear analogy is right. When Altman helped found OpenAI, the team was explicitly worried about what would happen if an amoral organization were to discover artificial general intelligence. After all, an AGI would be a technology as powerful and impactful as nuclear weapons are today.
To combat this, they explicitly made unusual corporate governance choices. From the founding blog post: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
So they set up a board with the power to fire the CEO if he went off-mission. In June of this year, Altman himself called that a positive quality of the organization! That Altman got canned is a feature, not a bug, of the structure he designed. A fundamental part of a CEO’s job is to manage their own board, and Altman was the one who pushed the professional investors off his board earlier this year. If he didn’t want to get fired by the directors who remained, he should have built that board more carefully and managed it more closely.
Comments
Thank you for giving this topic a try. The noise around this headline is intense, and curious people want a more direct storyline. What comes through is the truth that too much money and too much fame, too fast, does not mean you can break laws, cheat, steal, and live in your own bubble. Eventually, and always, an innocent onlooker says, "The king has no clothes." And then we all see what was in front of us the whole time.
Really appreciate the level-headed approach here. Far too much of the dialogue around this whole situation is polarized into one "camp" or the other. It can be true both that the board handled this badly *and* that there was good cause to do what they did. How things end up should be determined by the validity of the board's concerns, not by their poor initial handling of his dismissal. But that's generally not the world we live in; appearances are often nearly everything, and Altman currently has the best optics, unfortunately. I say unfortunately not because I dislike him, but because I think the wild imbalance in public perception is likely to cloud a fair assessment of the situation. The stakes are high here, even if we set aside potential existential AGI concerns, just given the real-world impact of OpenAI to date and its likely future effect on the worldwide economy. They need to get this right, not just *look* right in the moment.