The Case Against Sam Altman

The board is bad at their job, but they may not be crazy

Midjourney/prompt: "a watercolor of a sad robot"

When Adam Neumann lost his job running WeWork, the startup then valued at nearly $50 billion, two sins always struck me as particularly grievous. The first was that he had WeWork lease out buildings that he owned (i.e., he was profiting from the supply), and the second was that he sold the trademark for “We” to WeWork for $5.9 million (a bit of a stretch, but it could be characterized as him profiting off the platform).

When Sam Altman lost his job running OpenAI, the startup valued at over $50 billion, many sins struck me as particularly grievous. The first was that he was looking to raise billions to start his own chip company that could sell to OpenAI (i.e., profiting from the supply), and the second was that he was separately raising billions to build a hardware device that OpenAI models would be housed on (i.e., profiting off the platform). To further muddy the waters, OpenAI’s main partner, Microsoft, buys power from a company in which Altman is a major shareholder. OpenAI’s in-house VC fund has led a Series A round in a company in which Altman is also a major shareholder. And Altman owns shares in some of the buzziest tech companies in the world, including Stripe, Instacart, and Humane. Many, if not all, of those companies use the tech that OpenAI is selling.

These are enough conflicts of interest to make a corporate lawyer's brain smush in on itself like a dying sun. It is a ludicrous number of misaligned incentives.

Now, the Neumann-Altman comparison isn’t exactly a fair one. WeWork is worth a grand total of zilch, while OpenAI was about to close an $86 billion tender offer before this weekend’s debacle. A bankrupt real estate company can’t really be compared to a company that regularly releases products straight out of science fiction. But the fact that my conflict-of-interest parallel is so easy to make should give everyone pause in their full-throated defense of Altman.

Any board, non-profit or not, could consider all this equity mishmash a fireable offense. At the risk of being heavy-handed with the analogy: before this weekend’s events, Altman was looking to both make the uranium and build the power plant for his nuclear reactor company. It is a level of profiteering that I find disquieting. You can argue that Altman is uniquely qualified to lead all of these companies, but he should be doing so in a way where he isn’t the primary beneficiary. A cleaner structure would have been an AI holding company for the chips, models, and hardware, so that all of OpenAI’s investors could benefit. As it is, I feel uncomfortable that the company’s employees and investors have helped jump-start an industry that Altman is attempting to corner.

Actually, maybe the nuclear analogy is right. When Altman helped found OpenAI, the team was explicitly worried about what would happen if a non-moralistic organization were to discover artificial general intelligence. After all, an AGI would be a technology as powerful and impactful as nuclear weapons are today. 

To combat this, they explicitly made unusual corporate governance choices. From the founding blog post: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

So they set up a board with the power to fire the CEO if he went off-mission. In June of this year, Altman himself called that a positive quality of the organization! That Altman got canned is a feature, not a bug, of the organization he designed. A fundamental part of a CEO’s job is to manage their own board, and Altman himself was the one who pushed the professional investors off his board earlier this year. If he didn’t want to be fired by the directors who remained, he should have constructed the board more carefully and managed it more closely.


Comments
Georgia Patrick 6 months ago

Thank you for giving this topic a try. The noise around this headline is intense, and curious people want a more direct storyline. What comes through is the truth that too much money and too much fame too fast does not mean you can break laws, cheat, steal, and live in your own bubble. Eventually and always, an innocent among the onlookers says, "The king has no clothes." And then we all see what was in front of us the whole time.

Oshyan Greene 6 months ago

Really appreciate the level-headed approach here. Far too much of the dialogue around this whole situation is polarized into one "camp" or the other. It can be true both that the board handled this badly *and* that there was good cause to do what they did. How things end up should be determined by the validity of the concerns the board had, not by their poor initial handling of his dismissal. But that's generally not the world we live in; appearances are often nearly everything, and Altman currently has the best optics, unfortunately. I say unfortunately not because I dislike him, but because I think the wild imbalance in public perception is likely to cloud a fair assessment of the situation. The stakes are high here, even if we set aside potential existential AGI concerns, given the real-world impact of OpenAI to date and its likely future effect on the worldwide economy. They need to get this right, not just *look* right in the moment.
