Transcript: ‘How AI Startups Can Win With Better Strategy’

'AI & I' with investor Mike Maples


The transcript of AI & I with Mike Maples is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

Timestamps

  1. Introduction: 00:02:20
  2. Innovate the business model, not just the product: 00:06:02
  3. How startups can compete against the likes of OpenAI: 00:15:49
  4. Mike’s take on DeepSeek: 00:19:34
  5. Why the future has always belonged to the tinkerers: 00:21:44
  6. How small teams today can make big money: 00:24:03
  7. Find niches that incumbents can’t or don’t want to enter: 00:28:55
  8. The qualities of the truly AI-native: 00:45:50
  9. How AI changes the funding model for software companies: 00:52:28
  10. Knowledge work is moving toward systems-level thinking: 00:57:05

Transcript

Dan Shipper (00:02:20)

Mike, welcome to the show.

Mike Maples (00:02:21)

Thanks for having me. I've been looking forward to this.

Dan Shipper (00:02:23)

Yeah. So for people who don't know, you are a legendary investor at Floodgate, which was one of the first seed firms. You were an early investor in Twitch, Lyft, Okta, and a bunch more. And you're also the author of the book Pattern Breakers, which is an excellent book that I've read.

We've also reviewed it on Every, which I guess I would summarize by: It's sort of a guidebook about how there's no guidebook to building companies. So, it's a little bit Taoist, a little bit Zen, which I love. I think that's so good and so important. But I think you have a lot of emphasis on founders winning by being extraordinarily different, and breaking the established patterns of how you're supposed to run a company. I loved it. And I'm excited to chat with you about that and everything going on in AI on the show.

So one of the things that I'm personally curious about is that you started investing when seed wasn't really a thing, and you helped invent this new way of capitalizing companies for an earlier era of pre-AI startups. I think that's an example of the kind of thing you talk about in your book Pattern Breakers: taking a look at the landscape of what companies need and how companies are funded and saying, it seems to make a lot of sense that there should be a seed-stage funding mechanism—and just going and doing it. My feeling right now is that AI is radically changing the economics of starting a business. Software is orders of magnitude cheaper to make today than it was 10 years ago. And I'm curious, using that same sense of, okay, I'm looking at the environment and looking at how things are changing, and maybe pushing away the established structures for a second: How do you think that might change investing and how companies raise money and all that stuff?

Mike Maples (00:04:40)

Yeah, I've been wondering about this a lot lately. So, as you know, one of the things that I emphasize in startups is the power of harnessing inflections, right? I like to say that business is never a fair fight, and the startup has to have some unfair advantage to win. The way they get that advantage is by harnessing inflections. Inflections allow the startup to wage asymmetric warfare in the present and show up with something radically different. Without inflections, they have to play in the incumbent's sandbox, and so they're limited in their upside. Every now and then, though, you get something that I like to call a sea change. When I was a kid, the sea change was mass computation in the personal computer. Computers used to be really expensive, and then they became asymptotically free and ubiquitous. You had one on every desk in every home, and a whole new set of companies emerged. Software became a real business for the first time. Software used to be what you gave away, because mainframes were expensive and you had to keep them running all the time. So the assumptions got inverted, and you had a bunch of companies built on the software licensing model—Oracle, Microsoft, SAP. Then, in the nineties, you had the era of mass connectivity, which I think was extended with the iPhone. In mass connectivity, rather than processing power becoming free, communications bandwidth starts to become free. Not only do you have computers everywhere, you have everybody in the world and every device in the world connected in these networks, and new business models came out of that: subscription and SaaS and advertising. It's interesting: there wasn't any software in 1990 that really mattered. All those companies got consumed by Microsoft, because Microsoft could put the feature in the OS or outcompete them.

So why do I think the AI sea change matters? What I see happening with sea changes is that some business models become relatively more attractive and some business models become relatively less attractive. There are only nine business models that I know of in human history, and the most recent one is 250 years old: the subscription model. So what I like to do is say, okay, if there have been nine business models so far in humanity, and every time there's a technology sea change there's a migration of attractiveness from one set of models to another, how might that migration occur this time? Because what you want when you're a startup is to be counter-positioned to the incumbents. The whole "incumbents have the advantage" discussion is wrongheaded. Of course the incumbent has the advantage if you play by the rules of the incumbency. What you want to ask is: How does AI make some business models relatively more attractive and others less attractive, and how can I as a startup exploit those new opportunities? Not just an insight in my product, but some type of insight in my business model or go-to-market strategy that disorients incumbents, where they have a disincentive to retaliate or to copy your strategy. That's mostly what I'm looking at these days from an AI point of view.

Dan Shipper (00:06:02)

So, one of the things I'm seeing a lot of from the business model perspective—and right now we're talking about business models for startups; I'd also like to talk about business models for venture, for funding startups, but let's start with startups for a second—is paying per outcome as opposed to paying per month, which I think is a really interesting one. Is that something you have your eye on?

Mike Maples (00:07:31)

Oh, absolutely. So there's a business model called tailored services with long-term contracts. And right now most people think that's unattractive.

Dan Shipper (00:07:36)

What are tailored services with long-term contracts?

Mike Maples (00:07:43)

That could be the defense primes. It could be a contract research organization for a pharma company. It's somebody you offer services to on a contract basis—usually it's labor-intensive, usually it's cost-plus. And the conventional wisdom today is that those are not attractive opportunities for software companies.

Dan Shipper (00:08:05)

Like a law firm or something?

Mike Maples (00:08:06)

A lot like a law firm. Perfect example. So here's an example like a law firm: A legal services AI company I was involved with a few years ago was called Text IQ. They would go to a big corporation and say, when you're in a lawsuit—let's say you're Apple and you're in a lawsuit with Samsung—there's a ton of documents that have to be discovered for the court case. The way that happens in reality is they hire these outsourcer firms of people to pore through those documents, and the firms charge them on a cost-plus basis. So what Text IQ said is: We've got AI. Why don't you just send us all your documents, and we'll send you back the ones that are discoverable, with more accuracy? Well, now you're not competing for software license revenue, or per-seat revenue, or even a subscription price. You're saying, hey, look, I'm a substitute for that labor spend. You used to spend $50 million a year on this contract outsourcer that sorts through these documents; I can do it for a tenth of the price and much better. Now you're competing over that labor cost bucket rather than the software spend bucket and how many seats you can get.

Dan Shipper (00:09:15)

Well, that's interesting, because there's also cost per task done. So it's the cost per document processed or whatever, which is what OpenAI does: When you send them a prompt, they send a response. But even if the response isn't good, you still pay for it. And then there are other companies that are capturing part of the value they generate. So let's say it's an SDR bot: If they increase your sales by some amount—your close rate—they take a percent of that, and only when it's successful. Have you looked at those two?

Mike Maples (00:10:59)

Yeah and so, I do like the outcome-based pricing models a lot. They both have their virtues, right? The thing about OpenAI is you could use DALL-E to generate some art that you don't think looks pretty enough. But OpenAI probably deserves to be compensated for the fact that you did that, right?

Dan Shipper (00:11:19)

Yeah, it's sometimes hard to know if the job was done well or not. It's not so clear. And sometimes it's the customer's fault that the job wasn't done well, right? And so it’s tricky.

Mike Maples (00:11:30)

Back in my ancient days, when I was a founder selling enterprise software, I used to have this expression. I called it: What does it take to ring the bell? You know how at the carnival there's that thing where you have this big mallet and you hit it, and hopefully the weight goes all the way up and rings the bell? But if it doesn't go all the way up, it makes no sound. It has to go all the way up and ring the bell. It's binary. So what I used to say to the folks I worked with is that the customer doesn't care that your software ran according to the specification. That's not what they're buying. They have a job to be done. They're hiring your product to do that job. So we need to understand what it's going to take to ring the bell for that job. If we ring the bell, they're going to say, this is amazing, I want more of this. If we don't ring the bell, they're not going to care that the mechanism of our system works. They're not going to be interested in that. And so for me, the outcome-based models we were just talking about are asking: What is the job to be done, in a Clayton Christensen sort of lens? What does it mean to ring the bell? And can I get paid if I unambiguously succeed at that, over and over again?
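[As an aside for readers: the three pricing models discussed above—per seat, per task, and outcome-based—can be made concrete with a toy sketch. All names and numbers below are hypothetical illustrations, not figures from the conversation.]

```python
# Toy comparison of three pricing models for an AI product.
# All figures are hypothetical, chosen only to illustrate the asymmetry.

def per_seat_revenue(seats: int, annual_price_per_seat: float) -> float:
    """Classic SaaS: paid per seat, regardless of outcomes."""
    return seats * annual_price_per_seat

def per_task_revenue(tasks: int, price_per_task: float) -> float:
    """Paid per task performed (e.g., per document processed), good or bad."""
    return tasks * price_per_task

def outcome_revenue(value_created: float, revenue_share: float) -> float:
    """Paid only on verified success: a share of the value created."""
    return value_created * revenue_share

# Hypothetical SDR bot: 50 seats at $20/month, or 100,000 tasks at 5 cents,
# or a 10% share of $1M in incremental closed revenue.
saas = per_seat_revenue(50, 20 * 12)
per_task = per_task_revenue(100_000, 0.05)
outcome = outcome_revenue(1_000_000, 0.10)
print(saas, per_task, outcome)
```

The asymmetry is the point: the per-seat number is fixed no matter what the product achieves, while the outcome-based number scales with the value delivered and only accrues when the bell is actually rung.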

Dan Shipper (00:12:45)

And the thing that makes that interesting, compared to a SaaS model, is that the incumbents are all going to be SaaS. If you're guaranteed to get $20 a seat or whatever it is, the idea of moving to a pay-for-performance model is very unappealing. So, to your counter-positioning point, that's a thing that startups can do that incumbents can't. Some incumbents already do this in the customer service world, where it's been a thing forever, but in general it's not, and so incumbents are not going to be able to do it very well.

Mike Maples (00:13:19)

Yeah, I think this counter-positioning thing is really important to double-click on. A great example: In the nineties, if you were a startup, the words you dreaded to hear were "Microsoft has decided to compete in your market." Because you're just like, okay, I guess I'm out of business, because even if they start out losing, they're just going to bundle this thing in Windows and I'm hosed, right? That happened to a lot of companies. Netscape basically disappeared because Microsoft decided to bundle the browser in the operating system and go full bore against Netscape. Well, then the internet happens, and some people discover that you can monetize not by selling by the seat or by the desktop, but by selling ads—and that was Google. Microsoft had no answer to that. You can't bundle something in your operating system to deal with the fact that Google is selling ads. It doesn't solve the problem. It doesn't impact Google's business at all. So Google was counter-positioned to Microsoft from a business model perspective. And counter-positioning is one of the most powerful ways a startup can have an insight. Most people think an insight is just about the product, but it can also be about how you deliver the product. The how can have an insight as well. Quite often, the very best, most valuable companies have an insight around a business model that's facilitated by the sea change. Google's business model couldn't work before the internet. The technology wouldn't have provided the empowerment necessary for Google to monetize with ads. But then, all of a sudden, it did. That's what we look for with counter-positioning, and to your point, right now the insight is that you sell the work, not the software. If I'm a SaaS vendor, and I charge a subscription by the seat, and that's all I've ever done, think about how embedded that must be in the culture, right?
Every product manager thinks that way. The CFO thinks that way. There's nobody—

Dan Shipper (00:15:26)

—in the company who knows how to react to your strategy, because the investors think that way too. Everybody does. If you change your business model, everyone's going to lose their mind.

Mike Maples (00:15:32)

Yeah. So how would you even think about changing it midstream, even if you had the insight that perhaps you should consider it? You just wouldn't have the wherewithal to do it, because it's so embedded in your culture. Your entire value delivery system is predicated on a different model.

Dan Shipper (00:15:49)

Yeah. Well, let's keep talking about counter-positioning. And I want to bring up—if I have to pick who Microsoft is in the AI world, the huge tech companies like Microsoft and Google aside—I think the one to think about counter-positioning against right now, or at least the one a lot of startups are afraid of, is OpenAI. OpenAI is moving from being this API developer-tool company to a product company. They're releasing all of these consumer-facing products. ChatGPT is sort of taking over there. So I think a lot of founders are thinking, well, what if OpenAI includes this as part of ChatGPT, or includes this in some new product they release? I'm curious how you would think about counter-positioning against that.

Mike Maples (00:16:43)

Yeah, so there are a couple of things I find really interesting about OpenAI from a counter-positioning perspective. Maybe we start with startups, and then there's some general stuff with DeepSeek and things like that. So let's take an example. I'm involved with a company called Applied Intuition. They create simulation software.

Dan Shipper (00:17:05)

I love that name, by the way. 

Mike Maples (00:17:07)

Yeah, it's pretty good. It creates simulation software for autonomous vehicles and also technology stacks for electric vehicles. And these car companies, other than Tesla, don't really know how to do EVs, don't know how to do AVs. They don't really even know how to do software, right? Their entire business model is predicated on a supply chain that's 100 years old, where they get parts from Bosch and chips from all these people and parts from different tool-and-die shops and everything else. So Applied Intuition says, okay, we've got a bunch of people from Google and Waymo, and now some people from Tesla, all the best autonomous vehicle and EV companies in the world. We can build the entire thing you need to update your strategy and roadmap for the software-defined car, which is where the future is going now. If you're GM, or Porsche, or one of these big companies, that's pretty valuable, but you can't just get it when Sam Altman releases his next demo at a demo day event, right? If you're going to have a software-defined car, there's a whole lot you have to know intimately about how cars are made and manufactured and tested, the whole supply chain, and how the delivery system works. So to succeed as a company, and to really ask for giant contracts from these companies, you have to have not only AI expertise and products, but multidisciplinary expertise. Qasar and Peter grew up in Detroit, but before they got in at Google and Waymo, they were in the car industry at GM. So one way I like to think about it: Everybody disses on these companies that are "just an AI wrapper," right? And I'm like, if the thing you're wrapping on top of involves a process that you really know about and most people don't, that may be a path to a great company. That's what I'm interested in.

Dan Shipper (00:19:23)

The AI wrapper thing was so silly. I see less of that now, which is nice. But it was a very silly thing when it first started.

Mike Maples (00:19:34)

So one other thing about counter-positioning and OpenAI that I think is interesting, and I'd love to get your read on it. One way I've internalized the DeepSeek stuff: In the early days of the internet, all of the researchers from Bell Labs and AT&T, Time Warner, the government, said this internet thing's a toy. It's never going to be good enough. We've tried this before; it doesn't work. These protocols are not going to be robust enough. And in the short term, you would have been right: None of these things looked all that interesting or impressive. But I was talking to Steve Sinofsky about this the other day. He was at Microsoft at the time the internet took off, and he visited Cornell and saw CU-SeeMe. And he goes to Gates: This is going to be a tidal wave. This is going to be a giant new phenomenon that we've got to really pay attention to. DeepSeek reminds me of that. The culture in AI for the hyperscalers right now is that you can solve all problems by throwing money at them. And the DeepSeek guys said, if we're limited by some fundamental constraints, what would we do?

I think there's going to be a cultural shift in AI where many people adopt that mindset, and that's important, because in the early days of mass computation, the IBM PC had a 640K memory limit. The Microsoft programmers had an advantage because they could write small, efficient code. It wasn't about how many thousand lines of code anymore; it was about how efficient your code was. And I think we might see the same phenomenon here, where people come from the bottom up with very frugal, low-cost-by-design solutions. It'll be hard for OpenAI and Anthropic and those guys to respond to that. I have huge respect for what they're doing, but culturally embedded in their operating model is solving everything by throwing money at it: Hire the best people, throw money at it, and just keep going. Keep going faster.

Dan Shipper (00:21:44)

That's so interesting. You said so many things I want to talk about. One is this toy thing, where people and governments and big companies ignored the internet at first because they were like, we tried it and it doesn't work, it doesn't scale, or whatever. I think you have the same history with neural networks. In the beginning of AI, in the fifties, neural networks were around, but they were mostly ignored because the early AI people, particularly Marvin Minsky, proved that single-layer neural networks couldn't do certain types of computations and were not as powerful as other models of computation. And I think academia by and large felt that neural networks were not understandable enough. There was no theory, so it felt like a toy, and it was basically ignored, except by a few neural network researchers in the eighties and nineties. Then industry adopted it and it blew up, because they were like, it just works, who cares? Who cares what the theory is? Which I think happens all the time. I'll stop there. I'm curious if you have anything to add to that.

Mike Maples (00:22:02)

Yeah. And it's funny, because when I was working on the pattern-breakers stuff for the book, one of the examples I used was the Wright brothers and the airplane. All the experts said it was going to take a million years to create a flying contraption that could carry humans. The New York Times ran an editorial called "Flying Machines Which Do Not Fly," which said it was a waste of time to try, with quotes from the head of engineering of the Army and all this stuff. And 69 days later, the Wright brothers flew their first plane at Kitty Hawk. They were a couple of bicycle mechanics. So what you see, time and again, is that the experts are attached to their mental model of how the world works, and it's the tinkerers, the people with permissionless innovation, who just tinker with stuff and make something work before the science even has to change, right? People's understanding of Bernoulli's equation and all that stuff got modified and improved because of the success of the Wright brothers with their planes. People tend to think that abstract science precedes engineering, but quite often engineering and tinkering cause science to evolve to explain the unexplainable. That's what I see happen more often in practice.

Dan Shipper (00:24:33)

100 percent. I think the next point you made is this big-money-vs.-small-team thing, which I think happens all the time too: Constraints breed creativity. In general, being able to throw money at a problem means you don't have to spend time thinking about how to make it more efficient. So your question about whether the OpenAIs and Anthropics of the world are in trouble, I think that's an interesting one. I would bet not, right?

Mike Maples (00:25:00)

I would too.

Dan Shipper (00:25:02)

My feeling about that is— Obviously the cliche thing is, okay, it's going to stimulate demand or whatever, which is fine. I think that is actually true. I think they'll most likely be able to integrate this, have more efficient servers, and serve the demand they currently have—I think this will work. But here's what it seems to me this opens up. I think we have mass AI figured out: How do you scale these models up so that a billion people can use ChatGPT, and how do you make that efficient and smart enough to work and all that stuff? But one thing people don't talk about nearly enough is that the capabilities of models today are in many ways not limited by the intelligence of the technology. They're limited by the risk profiles of the companies serving them. If you're a gigantic company like OpenAI, you have to go give government briefings before you launch anything. You're going to be pretty careful about what you put out. And I think the DeepSeek stuff is interesting because it means (and I mean risk in all sorts of different ways; there are lots of different ways to take risks) that small teams can build little models for problems that look like toys, things an OpenAI would say, we wouldn't do this. I think that's the big thing. I don't think it takes away ChatGPT, but it does mean we get way more AI in different corners of the world than we would have otherwise, which I think is good.

Mike Maples (00:26:55)

You know what, Dan? One of my favorite examples of this actually comes from the field of rocketry, because it's so visceral. Elon Musk will launch a Starship, and if it blows up, he's like, okay, we instrumented it, we got telemetry, we'll make it better next time. NASA's not going to do that. If NASA launches a rocket, they don't sit there and say, easy come, easy go, it blew up. The fact that Elon has a different risk profile, and is not attached to whether it's successful with a capital S, changes the calculus of what he can do and the speed with which he can move. So I like to say that, in many cases, it's not that the big company is dumb compared to the small company. To your point, they have a different risk profile, and there are just certain things they can't do. When I was working with the guys at Justin.tv, which became Twitch, if they launched something and it was insecure, so what? Nobody knew who they were. But Google can't do that. Microsoft can't do that. The big companies can't do that. The Hollywood guys can't do that. Netflix can't do that. So not having to be burdened by what could go wrong is a big factor in trying things that could go right.

Dan Shipper (00:27:14)

That makes total sense. I want to go back to something that you were talking about earlier, talking about this company Applied Intuition, which you said sells to large car manufacturers. And I assume when a large car manufacturer buys them, it goes into a Ford vehicle and a customer is maybe using it and maybe has no idea what it is, but they're using it. Is that sort of how it works?

Mike Maples (00:27:34)

I think so. It's less of an end-user type of thing, although that might change. I need to be careful what I say. But the primary customer is the car company that says, oh my god, the architecture of cars has changed—what do I do?

Dan Shipper (00:28:55)

Yeah, so the strategy question I want to ask you is how you think about those relationships, because I think that's going to be a common thing for a lot of AI companies, especially if you're working on more foundational, model-type things: You're going to be integrated into something else that has a consumer layer. That's where OpenAI started, and then they were like, actually, we want to own the UX layer, because that's how everything took off. They figured out a form factor that worked, and then they have a data flywheel. There's all this stuff, right? My last company was an OEM, and that's a difficult position when you're serving customers. There's an end user, and then there are the customers you need to sell to. It's hard to generate a lot of power or strategic advantage in that situation, and it's hard to make a great product. I'm curious how you think about OEM-type strategies and when they work vs. when they don't. Yeah, it's tricky.

Mike Maples (00:29:45)

And what are some examples where it's worked? I'd say Applied is working really well. Intel has been great; it was a good one for PCs. Another good example would be Qualcomm back in the day, licensing their spread-spectrum technologies and chips. So it can work. Broadcom would be another.

Dan Shipper (00:30:00)

Twilio, I guess?

Mike Maples (00:30:05)

Twilio is an interesting one. In fact, I like thinking of Twilio as a design-win business more than a dev tool. So the term I like to use to describe it is a design-win model, where you want to become viewed by the customer as integral to their product strategy. If they have a slide that shows all these blocks and triangles and arrows and stuff, what you provide needs to be a big square on that slide. Sometimes, like Twilio, you solve a problem that they really have, but they just have no interest in solving it on their own. If you're Uber, do you really want an entire team building a messaging, update, and texting platform that's a substitute for Twilio? Probably none of your best developers inside Uber want to do that. So you're like, hey, I'll just pay Twilio: Every time the earth turns a click or I send a message, I'll send them a tiny fraction of a penny. That's okay. So that can work.

The other way I think it can work is if you solve something existential for the customer. In the case of the car companies, the customer you're selling to is the OEM itself. The problem the car companies have is that the Tesla is just a fundamentally different architecture than ICE vehicles. And it's not just that it's got a battery and they don't. It has to do with what their operating system is like, how many chips they have, and how messages flow through their messaging bus. Tesla is designed the way a car would be designed by Silicon Valley types of thinkers. Whereas the ICE vehicles of today are mostly an amalgam of a bajillion parts suppliers they've done business with for a very long time. It's whatever Bosch has this year is going to be the new windshield-wiper-sensor thingy that I put in the Mercedes, right? That's how they've operated. So they look at it and they're just like, look, it's a completely different paradigm of how you'd build a car, and you need somebody who can be your thought partner in how to build those things. So that can be another design-win model that works.

Dan Shipper (00:32:35)

That's interesting. The thing that makes me think of is that there's this knife's edge in this strategy, which is: You have to be critical to their business, but somehow they don't want to do it themselves. There are very few things like that, and that's really hard. Either you're critical, and they're like, maybe we'll work with you, but then we'll buy you or we'll just replace you. Or you're not critical, and then it's horrible to try to sell that product. No one wants to do that.

Mike Maples (00:33:05)

I love that framing of it. I haven't quite internalized it that way, but you're right. Either they don't want to do it themselves because they just don't want to, or they don't want to do it themselves because they can't conceive of how they would: Even if I wanted to, it's kind of academic, I can't. But in both cases, it's something they actively choose not to do themselves, and there's a persistent reason for that to continue.

Dan Shipper (00:33:32)

Yeah. And I guess the reason an Applied Intuition works is— You mentioned Clayton Christensen. I'm thinking about his "conservation of attractive profits," where, in the early days of new technologies, you want one company to integrate all the different steps of the value chain, basically, because you can iterate much quicker. So Tesla doesn't have this huge web of different suppliers. They probably have a few, but a lot of it they're just doing themselves. Whereas it sounds like GM or whoever has thousands of different modular manufacturers that they swap in and out, because the architecture of the car has been around so long that it's not changing. It doesn't have to be integrated; it can be very modular. Which I guess is an easier OEM sell: As long as Applied knows that architecture, they can sell into it, vs. selling into a more vertical, more integrated company.

Mike Maples (00:34:33)

Yeah. Well, here's how I internalize that, Dan. Just to make sure we're on the same page with the same language, here's what I understood from Clay. I've got a bit of an intellectual crush on Clayton Christensen; I think the guy was amazing, and a great human being. What I understood him to say is that in early markets, the products are never quite good enough. They don't perform well enough. So vendors get rewarded for having the integrated system, because customers will pay incremental dollars for incrementally better performance; they value that enhanced performance. Then what eventually happens is the performance gets mostly good enough, and you get what Clayton Christensen would call "overshot customers." Now I'm trying to cram new features into my product to get customers to keep buying the new things I sell them, but they don't want the new things as badly. That's where you get the modularity argument: Somebody else shows up and says, look, you're being overcharged. You don't have to have one guy be the system integrator anymore. In fact, you can have a whole bunch of different components that you mix and match and swap in and out. And so the conservation of attractive profits shifts to the modular suppliers rather than the integrated supplier, which I think is happening.

Dan Shipper (00:34:55)

That was a much better summary of conservation of attractive profits than I gave you.

Mike Maples (00:35:00)

Well, I don't know, but that's the brilliance of Tesla. Everybody told Elon, you should act like a car company acts. You should have modular components and suppliers and the supply chain. Elon understood nobody can make an electric car that's good enough. I have to control all the critical technologies because I have to have the ability to make something that rises to the level of good enough. Nobody's ever had that before. So that's another reason, right? Architecturally, he's just totally different. His whole paradigm of how to build a car is just different from start to finish.

Dan Shipper (00:36:35)

So is that an argument for AI companies owning the whole stack themselves right now as they're sort of innovating on what the products even look like and customers are willing to pay more for incremental value?

Mike Maples (00:36:55)

Yeah. What I liked about Clayton Christensen is that he really had a bunch of mental models for innovators. And whenever I think of a mental model, I always like to ask: under what conditions? So under what conditions would I want to be the complete integrated solution? I believe that you want to be the complete integrated solution if the customers are desperate for more performance and will pay for that enhanced performance. So before Nvidia, there was Silicon Graphics, and if you wanted to make the dinosaurs in Jurassic Park, you had to buy the most expensive SGI machines, millions of dollars' worth. And if you could make the graphics run twice as fast, Industrial Light & Magic would pay twice as much, because it was mission critical to render those dinosaurs overnight. But now that chips have commoditized, Nvidia has the better model, because they say, hey, I'll just sell you these GPUs off the shelf. So I think that the question always becomes: Under what conditions are you advantaged by being the integrated solution, and under what conditions are you advantaged by being a modular component of the solution?

Dan Shipper (00:37:55)

That's interesting. And I guess, what's your best guess about where we are now in the AI landscape overall? Because I think there is this common thing, and I actually felt this too when o1 came out, where people were like, what do I need o1 or o3 for? Even in the demos— I remember one of the demos was like, list out the 13 Roman emperors, and it's like, that's not really something that I care that much about, generally. And most people are not doing Ph.D.-level research, to be honest. And that was my first feeling. But to be honest, now I just use o1 all the time and I don't really use any other model—or now I use o3. So I'm curious what you feel about where we are, and how much people are willing to pay for performance improvements in terms of intelligence.

Mike Maples (00:38:51)

Well, first of all, I'm really excited, but I'm probably in these tools too much now. I'm probably in these tools three, four, five hours a day. And there are a lot of things that I would benefit from in terms of enhanced performance. And if that's just me, I have to believe there are a lot of other people too. The thing that I think is so interesting about AI is it really rewards the systems thinker. And so I'll give you an example. I have this database of what I call 100 bagger startups. And I try to understand them all. I've got the original pitch deck for Air Bed and Breakfast before it was Airbnb, and I've got it for Dropbox and Pinterest and all these companies. And I track, if you'd bought a share in the seed round, what would have happened. I run the inflection theory against it. I run insights. I try to understand whether our frameworks would have led us to the right decision. Well, now that I have that list, I can do all kinds of things. I can say, okay, please consider this list of 100 bagger startups. Which of Hamilton Helmer's seven powers were harnessed by each of them as their primary power? Which Clayton Christensen job to be done was the primary job that they did to get product-market fit? How long did it take them to get $10 million in revenue? How long did it take them to get $100 million in revenue? Which of them had a first-time founder as CEO? Which of them replaced their CEO? If you're curious, it's like having an unlimited supply of smart people to go do that research for you. It's incredible.

Dan Shipper (00:39:41)

I feel the same way. I can read and think about so many more things than I would have been able to previously, and it makes it such a pleasure to get up every day. It's the best.

Mike Maples (00:39:50)

It's unbelievable. It's a miracle, right? I just wish I was in my early twenties again. I'd be dangerous.

Dan Shipper (00:39:56)

Me too. Well, I guess that just makes me think: Why 100 bagger startups? Why not 100 bagger founders, right? How much is really in the Airbnb deck that's actually that useful?

Mike Maples (00:40:09)

Yeah. So I've been working on that question a lot, and I've been applying our frameworks and backtesting them against prior startups. So I have these things that I call “atomic eggs,” and we'll probably launch them here pretty soon. But what an atomic egg lets you do is upload a pitch, and then it runs a whole bunch of different generative models against it.

So an example would be a Pattern Breakers insight stress test. You could upload the Airbnb pitch deck and it would spit out: This was the fundamental insight with Airbnb. Or: This is the part that was non-consensus. Or: These are the inflections that Airbnb was harnessing. And the AI has gotten really good at that. And then the other thing that it can do: I like the Sequoia arc framework. They talk about this idea of problem types: Is it a “hair on fire” problem type, a known problem type, or a future vision problem type? You can run that against the 100 bagger startups, and then I could say, on a scale of 1–10, how confident are you that that's the right way to classify it? And then, back to your point about founders, you can start to say, okay, there are all these founders: What jumps out at you as anomalies about these founders? What jumps out at you as commonalities about these founders? Okay, now let's group these startups in different clusters and run the same experiment again. And then once you get some patterns, you say, okay, how might those patterns shift in the world of AI? How might they be the same in the world of AI? In the past, you could have just wondered about that as you walked down the street, but now you can act on that, right? You can act on that curiosity in real time. And that's just such a game changer, if you're curious about this stuff.

Dan Shipper (00:41:58)

How much does it—? Because you write a lot about pattern breakers, right? So I guess I'm thinking about business theories or strategy theories as patterns, right? They're always patterns that work under certain conditions. And sometimes they're more general than others, but they're usually not infinitely general. I don't know what the right word is. And I wonder, for example, if we wound back the clock to the eighties and used all the frameworks they had in the eighties and put them into AI, and gave it Cisco or whatever. I don't know, pick whatever company you want. Google. Would it have been able to tell from the Google or Airbnb pitch deck that it was a good company?

Mike Maples (00:42:54)

I don't know that it could have predicted that it was going to have the success it had, and I apply a slightly less stringent standard. What I really want to know is: Should I spend time on this? And so what I need to know when I look at a pitch like Airbnb is, is there something that's wacky and good about this that I might overlook if I'm busy and tired that day? But I can run a whole bunch of different tests against it, like you talked about earlier with these models. Charlie Munger is somebody else who I've always respected. And he had this saying, “the map is not the territory.” And what he meant by that is, if you and I want to go from San Francisco to Cupertino, and we use a flat map (let's say we use Google Maps or whatever), the odds that we will get there if we follow the directions are basically 100 percent—like, 99.9 percent. In fact, I would argue that that map is a better representation of reality than all the complexities of reality itself, because you're trying to compress knowledge down to the decision that matters. But if you want to go to Germany, the flat map is not going to be an accurate portrayal of the territory, because a straight line is not the shortest path on a flat map that represents a globe. The shortest path would look like a curved line. And so what you learn is, it's like we talked about earlier: The question is, under what conditions is this model useful, and under what conditions are the boundary conditions exceeded? And that's why you want to have a whole bunch of them, right? You want to have the right tool for the right situation. And when it exceeds the scope of the boundaries, you want to not use that tool, because you'll get bad decision making.

Dan Shipper (00:44:41)

Are there any new things? Because one of the things you talk about in your book a lot that I like (this is sort of how I work, so it's maybe confirmation bias) is the idea of living in the future, right? The best way to know what's coming is to use these tools all day, every day. And you start to see things that other people maybe won't see, because they're just living in a different reality. And your reality is going to sort of spread everywhere else eventually, is the idea. And I'm curious if there's anything that you're feeling and seeing right now that you're sensitive to, that is new and interesting to you.

Mike Maples (00:45:23)

Yeah, some of these AI companies you'll go to, and there'll be somebody who's a couple of years out of college. And they'll be using Devin or Cursor or these other products, and they're creating these agentic-oriented entities that go out and get a bunch of stuff for them and bring it back. And they just act like that's normal. So they're almost programming these virtual employees to go out and do stuff for them. And you'll sit with them and you'll say, what motivated you to do that and to think about solving the problem that way? And they look at you funny: Well, how else would you do it? You want me to Google it? And so the thing that I find interesting is, this is like how Zuckerberg was with social networking, right? Zuck didn't have to unlearn anything. He grew up at a time when the LAMP stack was coming out, and you could A/B test things, and broadband was everywhere. Before Facebook, in the nineties, you had to have products that were well engineered because they just weren't scalable enough otherwise, right? You had to have experts that would architect and instrument the system so that it would be somewhat performant. Well, by the time Facebook comes around, Zuck's like, hey, we just try it, see what happens by the afternoon, and decide whether we want to keep with this or not. Now, did Zuck say, aha, there's a disruptive trend and I'm going to leapfrog all these companies? No. Zuckerberg didn't know anything about business at the time. It's almost like if you and I were raised in a world of Cartesian coordinates and now it's the world of polar coordinates, and somebody's born in the world of polar coordinates and they don't even have to translate between the two. What else is there? That's the only thing there is. I think that some of these AI natives are like that. And so I really want to spend time with them. I want to spend time with anybody who says, my entire lived experience in business is a world where you're programming some form of AI assistant as a core function of the job.

Dan Shipper (00:47:26)

I love that. I see this all the time. We have a writer who started working with us probably two months ago. He's had a very successful career, not as a professional writer, but working in AI at various tech companies and startups, and he's founded his own startups. But he's working for us mostly as a writer. And he writes our Sunday email where we talk about all the new model releases. He's such a nerd for new stuff that comes out, which is amazing. That's the kind of person you want writing about it. And when a new model comes out, I'll often get early access. So we'll get on the phone together, he'll write a first take of all the things that we saw, and then I'll go through and put my own take on it and whatever. So we sort of co-write things together. And the first one that he did, I got the draft and I was like, ooh, he's smart, he's excited about this stuff, but he's not a professional writer. I can tell. It wasn't something that I could just punch up and publish. I had to rewrite the whole thing. And what was crazy is, after we did that, I said, okay, I want you to take my draft and your draft, and I want you to put them into o1 and pull out what changed. And he did that, and we did that a couple of times, and we just covered the launch of DeepSeek together. And the first draft he did, it was like he made a year's worth of progress in a month. I've worked with so many writers in my career at this point, and I've seen where people were when I first started working with them. It takes them like 1,000 drafts to make the amount of progress that he made in a month. It's crazy.

Mike Maples (00:49:13)

Yes. It’s so interesting, right? And I'm finding the same thing, Dan. So I started working on these mental models for seed investing. And with these generative models, I started to say to myself, what is a good mental model in the first place? Has anybody ever defined what one is? What should it contain? What makes it good vs. bad? Under what conditions is it good or bad? And there wasn't a whole lot about it. There's a couple of books on mental models, but not a whole lot. So I said, before I start just saying, here's a mental model of jobs to be done, I should create a foundational document that's the taxonomy of a good mental model and the questions it should answer and the flow that it should take. So I did that. Now I can just write about jobs to be done for what it is, and then I run it against this framework, and it says, you're missing A, B, and C. And I'm like, hey, can you elaborate on that? And it just adds it, and within 30 minutes, you have something that's just off the hook, right? It's just so good. And you just look at that and you're like, it just feels like magic. It feels like you put on some cape and learned how to fly all of a sudden. And it goes back to rewarding systems-level thinking, right? You had to zoom out and say, wait, if I'm going to someday have 100 mental models, I ought to define a canonical baseline of a good one. And I ought to have a theory about what makes it good. And I ought to apply that theory to every one that I do, because I'm going to get leverage if I do that. But now I'm going to make the AI do the work for me. And it teaches you stuff, right? Now you say, oh, I thought I knew jobs to be done as a mental model, but there are boundary conditions I hadn't thought about before that are interesting. And so, yeah, it's just such a great time to be alive with this stuff.

Dan Shipper (00:51:11)

I agree. I want to go back to the original question I asked you, because it's still on my mind, which is: Software is getting so much cheaper to make. The VC model, even the seed model, which you pioneered, is predicated on a different world, where it was expensive to make software at first and then free to distribute. And I'm curious how you think that might change the VC model, if at all. And I'll preface this by saying this is a selfish question, because I run Every. I don't even really have words for the kind of company we are. We have a newsletter with 100,000 subscribers, and then we have three different software products, and we're 10 people. It's a whole different kind of thing and—

Mike Maples (00:52:01)

I use that Sparkle thing, by the way. It's cool. 

Dan Shipper (00:52:04)

You do? Oh, I love that. That's great. Love to hear that. And like, I want a different funding model. And I'm working through different options, but I'm curious how you think that that might change.

Mike Maples (00:52:20)

Yeah, so I've been thinking about it a lot. There are two different angles, and there's the angle that you're describing. Another person that I respect who thinks a lot about this is a guy named Greg Isenberg on Twitter. So let me see if I can capture what I think it is. You have a situation where what it takes to build a product has collapsed yet again, just like it did with the LAMP stack, and it's profound in a lot of ways. It's not just that it costs you less money to build a product. You had the ChatPRD person on a few episodes ago. ChatPRD lets one person hold the entire idea premise of the product in their own mind, and doesn't require them to have a giant team of other people. So it changes the dynamics of who can build software and what it takes to build it.

And so you start to say, okay, are you going to have these tiny little companies that generate a ton of revenue? And they don't even have to generate that much to be wildly efficient and profitable. Why would you need VC money at all? And I'm pretty sympathetic to that point of view, although I tend to go to the founders and say, look, I'm not under pressure to put a lot of money into you. Our funds are small, and all things being equal, I'd rather have it be one and done and we try a few things. The other thing that I think is really interesting, and I'm trying to find kindred spirits around this, is that the LAMP stack didn't just collapse the costs of startups. It created a new way of building. It created a new model of building, right? You used to have waterfall development, and you had to define everything that's in the release upfront, and then you go on a death march for a year and you ship it, and it either succeeds or it bombs. And that was just how products were.

And then the LAMP stack comes out and you have lean startups and Agile. And what I'm seeing is something happening now that I'm not sure what to call. Right now I'm calling it Darwinian engineering, or digital Darwinism. If you think about an ecosystem, you don't have the individual elements and players in the ecosystem be programmed in a literal way. What you have is a system designer, if you will. And then the system gets to operate autonomously from the designer. And so I sit there and I think, man, that rhymes for me. So I think about it as natural evolution rather than traditional development: You're going to have AI tools that shift from Agile to continuous adaptation, and you're going to build software elements and components that are adaptive by design, that can sense and respond to the inputs they get in the real world, independent of the programmer. So rather than a business model canvas, you have a business model dashboard that's a live status of what's happening. And if you're a gaming company, you're going to shift from iterating on games to creating living worlds, and that stuff. So I'm really interested in what that means for what a product manager is. What does that mean for dashboards of the future? What does it mean for how QA happens? All that stuff.

Dan Shipper (00:55:55)

I've thought about this a lot, because I think we actually met originally because you read my article on the allocation economy. And so I sort of started to think a lot about, what is the role of someone who's working in the allocation economy? And how is that different from someone in a knowledge economy? A way that I've been thinking about it is, in a knowledge economy, or just any previous economy, the work you're doing, especially as an IC (a little bit as a middle manager or an executive too, but mostly as an IC), is you're a sculptor.

Everything that happens, happens because you did it with your hands. You have your hands on every little piece of it. And I think working with AI models is a lot more like being a gardener. You're setting the conditions for the thing to grow, and then it just sort of grows. And the conditions are like hyperparameters: It's the sun and the soil and the water and whatever, and that's going to change what comes out. And when ChatGPT responds to a prompt, no one at OpenAI decided that it was going to say that, which is totally different from Facebook or whatever—someone decided what you were going to see on Facebook (or maybe only a little bit, because there's an algorithm). But let's just say the New York Times: Someone decided what's on the homepage. And it's totally different. And you're right, you can tune stuff, but it's much squishier, because you're tuning the environmental conditions rather than the specific thing that happens. And I think that's such a different way of working. It's such a different way of building products.

If I think about what we're building at Every, I don't think we're quite there yet. Obviously, in building an organization, you are doing that. But for individuals who are building products, one of the things I see is that it's so easy to build a feature. You can just build it in an hour. So sometimes you just build a lot of features, and you're like, oh, now the product is noisy. It's kind of messy, you know? And also, the hard thing is figuring out what to build, not actually building it, which is a different thing. So we're not yet in a world where it's fully adaptive. But I do think you're right. You can see it with ChatGPT canvas or Artifacts or whatever, where it's starting to build its own UI and stuff. And I think that's where we're going.

Mike Maples (00:58:38)

Yeah. And it's just interesting, right? It goes back to systems-level thinking. It's one thing to think of yourself as building components or building tools or building the end thing. It's another thing to say, I'm building an ecosystem, and the elements of the ecosystem operate under certain first principles, but there are a lot of emergent properties that are going to occur in that ecosystem that are a function of the dynamism of the system and how it interacts with people. I think that's just a fundamentally different worldview about how you architect products. And so that's another thing, like we said earlier with the very low-cost, low-end disruptive innovation ideas: The way software ought to be built in the first place is interesting as well.

Dan Shipper (00:59:31)

Yeah. It reminds me of Notion, for example. Notion has a block system, these atomic elements that you can build anything with, rather than a specific feature built to do a specific job. It's a different way of thinking about products. It's like making a language vs. making a hammer.

Mike Maples (00:59:50)

That’s right. And so I think that's going to be really interesting. To use my example from earlier: If I want to have mental models for investing, rather than just jumping straight to it, what I need to do is zoom out a little bit and say, okay, let me think about this in a systems-level way. What makes a good mental model in the first place? How do I make sure that I have a foundation built on something really powerful, so that every subsequent piece of activity or thinking that I do is a multiplier effect on what's come before?

Dan Shipper (01:00:26)

Totally. Well, Mike, this is a pleasure. I feel like I learned a lot.

Mike Maples (01:00:35)

Me too. I'm really glad we got the chance to hang out.

Dan Shipper (01:00:38)

Thanks for coming on the show.

Mike Maples (01:00:39)

Thanks, Dan. It's great to see you.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex. Deliver yourself from email with Cora.

We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.

Get paid for sharing Every with your friends. Join our referral program.
