Sam Altman Is Dead. Long Live Sam Altman.
How to think about the chaos at OpenAI
Launch alert! Dan started a new video podcast where he interviews the world’s experts on how they use ChatGPT. The first episode is with Gumroad founder Sahil Lavingia, who discusses how he uses ChatGPT to think about everything from purchasing real estate to writing tweets.
It was all a dream?
Picture this: you’re sitting on a plane at 30,000 feet. The pilot’s voice comes on. “We’ve reached cruising altitude,” he says in his comforting Waffle House accent. A flight attendant is walking up and down the aisle hawking a snack box for $25. The man next to you is snoring at your shoulder, and he’s kicked off his Nikes so you can vaguely smell his feet. But the Wi-Fi is working. You’re playing with ChatGPT on your laptop. Everything is right with the world.
Suddenly, the fasten seatbelt light pings on. Then the hum of the engines disappears. In the eerie silence, the plane lurches and tilts downward. Your stomach leaps into your throat, and your hands clutch the armrests of your seat. You look out the window and realize you are not cruising any more—you are, instead, descending toward the ground at an angle that could only be described as acute. You look around wildly and watch the flight attendants as they rush in a panic through the curtain to first class, a gleam of terror in their eyes. Oxygen masks flop from the ceiling like dead jellyfish. You hear screams from the front of the plane.
The pilot’s voice comes on again—except this time it’s not the Waffle House voice. It’s someone new. You have a hard time understanding what they’re saying through the screams of the other passengers, but you hear the phrase “not consistently candid” a few times. You don’t totally understand, but you know this: the original pilot is no longer the pilot. There’s a new pilot now.
Meanwhile, the great maw of the ground is rushing toward you. Gravity is doing its inevitable work. You have only seconds left before you become two-dimensional. In your terror you glance at your ChatGPT session. It is repeating, “I’m sorry, I can’t do that” over and over again.
You try to scream, but the oxygen mask on your face prevents you from doing so. This is it. The end.
And then you wake up. You are alive. The plane is fine. The flight attendants are still passing out $25 snack boxes. The guy next to you still smells like provolone that’s been left out of the fridge for a few weeks. ChatGPT is still warmly responding to your chat messages.
It was just a bad dream. Nothing has changed. But you feel like you’ve aged 15 years.
That’s what this weekend felt like to me.
I am talking, of course, about the drama at OpenAI. Sam Altman was fired by the board on Friday afternoon. Then president and co-founder Greg Brockman resigned. The tech world went wild with speculation: did Altman lie? Did he steal? Did he secretly murder someone? What could have possibly led to the CEO of the most important new technology company in the world being ousted so suddenly? My texts from people in and around the company ranged from shock, to fear, to schadenfreude.
By Saturday, it seemed all was lost. Tech luminaries were posting devoted eulogies on Twitter as if he’d died. And then… it all seemed to change again. There were reports that OpenAI staffers were threatening to resign en masse. That Microsoft and other large OpenAI investors were clamoring for Sam to come back. That the board that fired him had offered to resign. Altman posted on Twitter/X, “i love the openai team so much.”
Sam Altman is dead. Long live Sam Altman.
OpenAI’s culture clash
This whole debacle has already turned into a tribal war online between supporters of Altman and those of OpenAI chief scientist Ilya Sutskever, who led the board’s move to fire the former. It’s being portrayed as a fight between accels (people who believe in accelerating the rate of AI and technology progress) and decels (people who favor stopping or slowing down progress for safety reasons).
I think this is a mistake. Sutskever can hardly be described as a decelerationist. He’s OpenAI’s co-founder and chief scientist largely because he’s staked his career on making AI progress, and he was one of the few people who kept betting on deep neural networks long after most researchers had given up hope that they’d produce the kind of advances OpenAI has since built on.
There is no accelerationist versus decelerationist clash at OpenAI. But there is a clash of cultures: between people with a research background, like Sutskever, and those who come from startups, like Altman and Brockman. I alluded to another facet of this schism when I wrote that ChatGPT “was born in sin” in my Dev Day piece.
OpenAI started as a non-profit research organization built to serve all of humanity. Because of ChatGPT’s success, it has begun to evolve into a product organization built to serve customers at scale—a mission for which Altman and Brockman are extraordinarily well-equipped.
I believe that this second mission can serve the first, but Sutskever and the rest of the board apparently decided otherwise. OpenAI’s odd governance structure—which did not include any representation by OpenAI’s biggest investors, and was overseen by a board whose members had relatively little board experience for a role so important—allowed them to proceed by summarily firing Altman.
Aside from the obviously preposterous nature of how this process played out, the question is, why? Why would they do this now? And why so quickly?
One theory is that Sutskever believed that Altman was getting involved in too many other ventures: reportedly an NVIDIA competitor to make AI chips and a hardware company devoted to building AI phones. He might have worried that Altman’s control over other parts of the value chain would give him power beyond the strictures of OpenAI’s founding charter.
Another theory is that OpenAI made a research breakthrough that truly scared Sutskever—but that Altman wanted to proceed with commercializing it too quickly. The day before Altman got fired, he spoke at a panel with other CEOs at the APEC summit, where he said:
“On a personal note, four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we push… the veil of ignorance back and the frontier of discovery forward. And getting to do that is the professional honor of a lifetime.”
That certainly sounds like the company has made some important progress recently. If Sutskever evaluates its risks differently than Altman does, it’s possible that this disagreement is at the root of Altman’s firing.
I have no idea which theory is correct, or whether there is another explanation. We don’t yet know Altman’s side of things (though he’s been quite gracious publicly for someone fired out of the blue).
It’s important to reserve judgment until more of the facts come out. But for now, whatever Sutskever’s goals, he probably did not advance them particularly well, because supporters of AI safety now look foolish. AI is, as far as I know, the only field in which startups have been built with safety research at their core from the beginning. That’s why OpenAI is structured as a non-profit and Altman has no equity position in the company. But, barring some revelations, its focus on safety and resulting corporate structure created a ridiculously bad and chaotic situation. In the long run, it’s likely to make the cause of safety harder to justify, rather than easier.
The next few months are going to be even weirder than any one of us expects. Buckle up. I’ll be here to tell you about it. —Dan Shipper
"Introducing ‘How Do You Use ChatGPT?’" by Dan Shipper: There is a special group of people out there who are using ChatGPT to 10x their productivity. In this new podcast, Dan interviews them and shows how they do it. Watch (or listen to) this if you feel like you aren’t getting enough value from AI tools.
"Does Your Startup Feel Chaotic? Good." by Jean Hsu: Bad chaos makes you Theranos, but good chaos creates billionaires. Read this to figure out whether your company has the good kind—or the bad.
"To Go 0 - 1, First Go -1 to 0" by Ruchi Sanghvi: Speaking of chaos, founders also undergo a messy personal transformation as they decide what company to start. Read this if you’re trying to figure out what to work on next.
"COGS: How I Bankrupted MoviePass" by Evan Armstrong: Evan likes three things—ripping off dummies, free movies, and accounting terminology. Read this to understand why the cost of goods sold is fundamental to your business.
"How to Mourn Omegle" by Meghna Rao: The internet’s original promise has been broken—it turns out that connecting strangers doesn’t make the world a better place. Read this homage to Omegle, an idealistic website killed by the realities of cyberspace.
Chain of links ⛓️
- OpenAI has built one of the most high-velocity cultures in tech. Here’s a deep dive on how they did it with ChatGPT engineering manager Evan Morikawa. (I had dinner with him in San Francisco last week and can attest that he's extremely smart.)
- ChatGPT’s new web-browsing feature has some haters who prefer the au naturel version that just hallucinates the answer to a question.
- Trend alert: There’s a new UI trend in AI that’s blowing people’s minds: allow the user to manipulate a simple version of a finished product, and have the AI build it in real time. There are prototypes in AI art and AI-assisted programming. This solves the steerability problems in chat I covered two weeks ago. Expect it to be a key UI component going forward.
- AI for science: Cardiologists can now use AI to predict heart problems. Who knows if this plays out, but AI is not going to be limited to deepfakes and ChatGPT. (Bonus: this paper summarizes all of the ways GPT-4 can be used to accelerate scientific progress.)
- Rumor mill: ChatGPT might be getting a memory. If so, it will 10x the power of your Custom Instructions. Get prepared. —Dan Shipper
Inside Notion's AI strategy
“One big lesson we learned is that in the real world, with imperfect user-generated data, retrieval-augmented generation is not a trivial problem. It took a lot of tuning to get our search system to surface relevant knowledge base pages for our models to run inference over for different questions.
Another lesson is that it’s extremely important for engineers to spend time looking at real-world inputs to these systems. Many failures are hard to anticipate, and creating a weekly 'bad output triage' session helped us understand the problems deeply enough to get where we are today. We treat every individual case of hallucination, bad reasoning, or other mistake like a bug and try to tweak the system to remedy it.”
Translation: LLM performance is currently bounded by retrieval. Figuring out what knowledge to retrieve for your model to reason with is a hard problem that is specific to the type of question being asked.
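To make the retrieval step concrete, here is a minimal toy sketch of retrieval-augmented generation. The page titles, keyword-overlap scoring, and prompt format are all illustrative assumptions on my part—production systems like Notion’s use embeddings and heavily tuned ranking, which is exactly the hard part the quote describes:

```python
# Toy RAG sketch: score knowledge-base pages by word overlap with the
# question, then stuff the top pages into the prompt the model reasons over.
# Illustrative only -- real retrieval uses embeddings and tuned ranking.

def tokenize(text: str) -> set[str]:
    """Lowercase words with edge punctuation stripped."""
    return {w.strip(".,?!'\"").lower() for w in text.split()}

def retrieve(question: str, pages: dict[str, str], k: int = 2) -> list[str]:
    """Return titles of the k pages sharing the most words with the question."""
    q = tokenize(question)
    ranked = sorted(pages, key=lambda t: len(q & tokenize(pages[t])), reverse=True)
    return ranked[:k]

def build_prompt(question: str, pages: dict[str, str], k: int = 2) -> str:
    """Assemble retrieved context plus the question for the model."""
    context = "\n\n".join(f"# {t}\n{pages[t]}" for t in retrieve(question, pages, k))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {question}"

# Hypothetical knowledge base.
pages = {
    "Vacation policy": "Employees get 20 vacation days per year.",
    "Expense policy": "Submit expenses within 30 days of purchase.",
    "Onboarding": "New hires meet their manager on day one.",
}
print(retrieve("How many vacation days do employees get?", pages, k=1))
```

The failure mode Notion describes lives entirely inside `retrieve`: if the wrong pages come back, even a perfect model answers from the wrong context.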
I’ve been saying for years that companies need librarians, and Notion's chatbot brings us one step closer to an automated librarian inside your knowledge base. —Dan Shipper
The napkin math
M&A is horrible RN
The EU is looking to kill Adobe’s $20 billion acquisition of Figma. I’ve heard rumors that the FTC is also considering a challenge to the deal. With the IPO markets closed and an aggressive regulatory environment, startups need to do something no investor has ever asked of them—make a profit. No one is coming to save you.
And it isn’t just a tech thing! M&A is down globally. Private equity is down 36% year-over-year. Don’t count on a juicy acquisition premium anytime soon.
The AI hardware renaissance is here
The Collison brothers—co-founders of Stripe—love books, science, progress, and… mops? They co-led a $24 million Series A round into a company selling an upscale Roomba with AI computer vision. The souped-up vacuum and mop retails for $1,795. It's the latest in the uber-expensive, AI-adjacent hardware renaissance, following Humane's launch last week.
Services (not software) are hot
If you’re a tech company that makes your customers 20x more productive, you could sell them said tech. Or, you could just keep the tech and go head-to-head with your Luddite-ish competition.
For 20 years, almost all VC-backed companies have picked the first option, but I’ve always thought that more companies should have a go at the second. I am no longer alone in this opinion, as evidenced by a car wash chain “known for its superior technology” raising a $30 million Series B. I’ll have a deeper research report in the weeks to come, but expect this trend to continue. Many VCs I know are looking at similar value capture-type deals, where a company builds the tech and then exclusively deploys it. —Evan Armstrong
The examined life
- You don’t want Elon Musk’s life, according to Elon Musk.
- Julia Child’s culinary notes are awesome.
- The most important trait as a founder? Being honest with yourself. (This pairs well with "Admitting What Is Obvious.")
The Every team is going on a Thinksgiving from Nov. 20-24. Instead of publishing, we’ll be devising new theses for how the world will work next year—from the disruption of venture capital, to AI as a creative tool, to how to build a billion-dollar company without cosplaying as a cowboy for Vogue.
We are grateful to all of you for supporting us. None of what we do would be possible without you. Our team will be back the following week with new ideas and a new AI tool for subscribers to try.
Have a great Thinksgiving! We’ll see you next week.