The transcript of AI & I with Bolton & Watt’s Sam Gerstenzang and Dan Friedman is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.
Timestamps
- Introduction and how Sam and Dan’s paths first crossed: 00:00:00
- What it means to be “the world’s slowest incubator”: 00:01:40
- Why Bolton & Watt runs companies to several million in revenue before handing off to a CEO: 00:04:50
- How specialization across the founding journey creates advantages: 00:07:30
- Building AI-durable businesses versus AI-native ones: 00:10:40
- How an AI agent transformed their customer discovery process: 00:16:10
- Where synthetic customer calls completely fail: 00:19:30
- Deploying AI inside established companies: 00:29:30
- Why newer projects see huge gains from AI while mature companies see 10 percent: 00:32:30
- A preview of what’s next for Bolton & Watt: 00:37:00
Transcript
(00:00:00)
Dan Shipper
Sam and Dan, welcome to the show.
Sam Gerstenzang
Glad to be here. Thanks, Dan.
Dan Shipper
You guys are both good friends of mine. You run the startup incubator Bolton & Watt, which I think is one of the most interesting incubators I’ve ever run into. And coincidentally—or maybe not so coincidentally—you run it down the street from us. I’m in the Every office in Boerum Hill, and your office is a few blocks away, right?
Sam Gerstenzang
Yeah, it is.
Dan Shipper
Dan, we went on a Jhana meditation retreat together a few months ago. There are just a lot of interesting overlaps. I really respect you guys. It’s very easy to say you’re running a startup studio, and it’s very hard to actually do it well. You’re one of the few people that do it well.
Sam Gerstenzang
I was thinking before this—I met both of you about 13 years ago in the New York tech scene. It was around 2012, 2013. We’ve been in the same scene for a long time. And Dan Friedman and our business partner Emma—I reconnected with Emma at a party you hosted maybe 10 years ago.
Dan Shipper
I didn’t realize it was at my party.
Sam Gerstenzang
It was at your party. So you are this interwoven part of the story in these funny ways.
Dan Shipper
I take all credit. I don’t know why I don’t get carry. That’s amazing. That’s really fun. Yeah, I remember you were at a16z. Okay, well, maybe let’s start with—for people who don’t know what you guys do, and what it means to be, I think you call yourselves the slowest incubator in the world. What does that actually mean, and how does your model work?
Sam Gerstenzang
We try to start a new company every couple of years, often in a really niche vertical that somehow combines software, services, and some real-world component. The idea is we come up with the idea, we run it ourselves through five or ten million in revenue, and then we go find a CEO who’s better than us to take it the next 10x. We remain involved as a partner to the company for its life—really involved board members who have spent thousands of hours thinking about the competitive landscape, the company, competitors, all of it.
It’s a really different relationship than a traditional incubator, which may say, here’s a million bucks, here’s an idea, I’ll coach you from the sidelines. We’re actually in the seats. That means we’re really concentrated. We’ve started two companies to date and we’re about to start our third.
Just to give you an example of the types of things we like to do—our first company is called Moxie, and it helps nurses open their own med spas. These are nurses doing aesthetic medical procedures—Botox, filler, lasers, et cetera. We’re their back office that helps them stay compliant, grow their business, and really everything you need to run a med spa. We now have hundreds and hundreds of these med spa clinics across the U.S., all in partnership with these nurses.
Our second one is a contemporary funeral home. We have no physical real estate whatsoever. We arrange everything online and over the phone. When we have in-person funerals, they’re generally at wedding venues that we’ve booked out a year in advance—Saturday night, totally booked, but totally open Tuesday morning at nine. We’re now the largest provider of funeral services in California and about to launch in a bunch of new states.
We have a taste for these weird businesses that are not YC-zeitgeisty, that have real implications, and that reimagine these kinds of bundles.
Dan Shipper
I love it. Where is Moxie stage-wise?
Dan Friedman
Moxie is a Series C company, into the tens of millions in revenue, 600-plus customers. The global team is about 200. It’s a comfortably mid-stage company. Vis-à-vis AI, we launched it some number of months before the release of ChatGPT. It is hilariously a just-before-AI company that had none of that in its conception and has had to adapt, almost as much as a company that was started in 2015, rather than being native to that way of thinking.
Dan Shipper
So your model is: I’m going to do all the really fun stuff—come up with the idea, then do all the hardest work of rolling the boulder up the hill for years until it starts rolling by itself, and then I’m out. How did you come up with that? Why do you do it that way? How is the whole thing structured?
Sam Gerstenzang
I think to some degree it’s an intersection of where we thought it would be the most fun and where the most value would be created. We looked at a bunch of other incubators, and there was sort of this model of someone who wanted to have a bunch of ideas and let those go. We realized that to maximize the success of every shot, we needed to actually go eat the glass, figure it out, push the boulder up the hill, figure out if the boulder wants to go uphill.
We thought that was the place where—you could ask someone else to do that, you could pay McKinsey to come up with startup ideas. But the hard piece is figuring out what’s actually a market signal, how do you make changes when no data quite says what to do. If we could get really good at that, we get permission to do all kinds of other things as well.
I think Dan and I both feel really lucky that it worked so well the first time, because if it didn’t, people would’ve said our model doesn’t make sense. Now we can say, we’ve done this two times and we’ll keep doing it this way.
Dan Friedman
The only thing I’ll add is—and maybe “years” is a bit of a gross heuristic—but my experience, and I think a lot of my founder friends’ experience, is that before this I built one company over 10 years. Years four through ten, there was a constant existential question: am I doing the thing that is most interesting and most useful? Am I spending my time the right way? I’ve learned the physics of the business and now I feel like I’m in purely execution mode.
That feeling of existential dread—one way to avoid it is simply to not be directly responsible for years four through ten, or to be responsible for years four through ten across five companies by the time we get there. I’m sure it will have some other psychological challenges, but I think we’re here optimizing for what’s most enjoyable and exciting for us.
Sam Gerstenzang
We also pretend on the founder journey that it’s the same skillset all the way through. What you do and how you do it really depends on the stage of the business. There’s a lot of value in specialization—we know what it looks like going from zero to one million in revenue over and over again, and here’s what it takes from one to ten, the systems, the people. We’re getting a lot of reps in at that early stage that very few people actually get to do and seeing what success looks like on the other side.
Dan Shipper
You guys also have an interesting model for how you break up the work between yourselves. How does that work?
Sam Gerstenzang
It’s changed a little bit over time. When we started, we both started Moxie together. Then Dan stayed on while I went to start the second business. At least for now, we sort of tag-team. I’ll run one business, then Dan starts the next, then I’ll start the one after.
One of the things I really appreciate about this working model is that Dan sits next to me, we each have full context of what the other person is working on. We can be great thought partners, push each other, and be emotionally supportive when things aren’t working. But we also have our own space to try things our own way. I’ve found it a really useful way to build companies where you have all the upside from co-founders, but also a much larger scope and span.
Dan Friedman
We have other partners inside Bolton & Watt, some of whom work with us within the company. And we have one who’s been with us for four or five months who specializes in the concept development phase—generating ideas, validating ideas. The thesis is that historically, when Sam or I rolled off one company, we rolled into a cold start on the search for the next one. Going forward, we want to be rolling into, “We’ve just validated our next idea and we’re pressing go.”
We’re starting to, bit by bit, not in an overly accelerated way, institutionalize and specialize in the different components that we want to be best in the world at.
Sam Gerstenzang
The two constraints for us are speed—we want to be the world’s slowest, but we could also be a little faster. One every two years is the current pace. The question we asked ourselves last year was, could we take it to 18 months? Maybe take it to a year? The two constraints are: do we have a great idea, and do we have a great one once a year? We are extremely big believers in execution and also believe the idea matters a lot. And then our other biggest constraint is talent. We need to find great early-stage employees, great executives, great CEOs. Those are the two things that limit us.
(00:10:00)
Dan Shipper
Okay, so now it seems like a lot of the opportunities you guys gravitate toward are real-world, unsexy businesses—funeral homes, med spas. At least with Moxie, it’s sort of a business-in-a-box type thing where you give someone the tools they need to start one of these businesses and you partner with them and take a cut of the revenue, or something like that. I’m curious how you’re thinking about that model in an AI world. You started Moxie right as ChatGPT came out for the first time. How are you thinking about how the model evolves now that AI is a thing?
Sam Gerstenzang
We were first attracted to those businesses because we thought they would be really hard to build against, and there was a huge existing opportunity. We’re pretty good at this sort of operational complexity where you almost have to build a services company and a software company at the same time. The competition looks a lot different. There aren’t 10 YC companies starting each of these. We like that space to play.
As ChatGPT came out and the world started to change, one of the things we realized in a kind of accidental way is the types of businesses we’re spending time on are maybe more resilient to AI trends. We should think about AI as an accelerant of the speed at which you can build these businesses. But fundamentally, the business model of a funeral home or a med spa doesn’t change because AI is out there.
The other day I was thinking—we named Bolton & Watt after the company that commercialized the steam engine in 1775. We’re hyper-aware of how technology changes businesses. And at the same time, we’ve chosen maybe a counterposition to say, a lot of things are going to change, let’s continue to be valuable, and how do we sit with the current.
Dan Shipper
How does that work in your mind, Dan? We’ve had some existential conversations about where AI may go. At certain times in our conversations, you were like, nothing’s going to stay the same, it’s all totally going to be different. Where are you currently, and how does that filter into the strategy?
Dan Friedman
I think our view is there are two good companies to start now. There’s the AI-native company that pushes the ball forward inside of some category, or there’s the AI-durable company that effectively uses AI where the core of the machine is not going to change.
If you look at our first two businesses—there’s no such thing as an AI-native crematory. It’s just not going to change dramatically.
Sam Gerstenzang
We’ll put it on the blockchain and use AI.
Dan Friedman
And we’re not expecting a robotic injector anytime in the next seven to ten years. The core work of a med spa will persist. Med spas themselves are actually conveyors of the latest medical technology—when GLP-1s come out, med spas are one of the early adopters of spreading it into their communities. But at the end of the day, what happens inside the walls of a med spa is not deeply impacted by AI.
However, around the edges, there are spots where it can really matter—reaching the right customers, serving them, communicating with them effectively at all hours of the day at an affordable price for the business.
We want to be great deployers of AI inside our operations. We want to help our partners deploy it to the maximum effective degree. We want them on the early edge, not the bleeding edge—there’s no need for them to be there. They can always be a few months behind and not take risk with their customer relationships.
We’ve kind of said, between AI-forward and AI-durable, we really like being great users of it inside that AI-durable category. Every idea we’ve had in the AI-forward category has had multiple formidable-looking competitors doing something almost exactly like our idea. We tend to want to look at a category and say we’ve got something interesting enough that it’s worth our time.
Sam Gerstenzang
All the smart people are listening to Dan and Every, and we’d rather compete against the less smart people.
Dan Shipper
I would rather not compete against you. So I love that.
If you’re concentrating on the things that aren’t going to change and using AI to get more operationally efficient rather than being truly AI-native in a YC way—what are the actual real operational efficiencies you’ve found? What’s worked, and what has not worked as well as you thought it would?
Dan Friedman
Maybe I could speak to the company discovery process, where I think we’ve seen probably the greatest transformation. It roughly maps to the principle that the more greenfield, the better AI can be. Whenever I talk to my founder friends that are seed stage, they’re like, “Oh my God, our engineering is 10x faster.” And then I talk to my Series D friends and they’re like, “We’re 10 percent faster. What is everyone talking about?” Just this classic dichotomy right now.
In the new company discovery process, every stage has been rethought. The first step is finding some verticals to poke around in. This used to be a week of Googling and maybe calling some friends to get the basic facts. Now it’s a mega-prompt to generate a list of categories and a mega-prompt to assess what we think is a good business based on our particular point of view, and start to narrow in on a couple different ones to go talk to real people about.
We did a little exercise this time around where we both did the AI curation, and then I did a human point of view. I actually felt in that moment like—do you remember the children’s story, the myth of the guy competing against the automated tunnel-building machine?
Dan Shipper
That’s a real thing. John Henry, or something?
Dan Friedman
John Henry, yes.
Dan Shipper
I think he actually did try to compete against the automated thing and then died. Is that right?
Dan Friedman
Okay. Well, anyway—I was the John Henry in that story, and fortunately I came out the other end. We did, as a group, select three categories. One of the three was my human point of view. Two of the three were the AI saying, “No, no, no, these are screaming matches.”
(00:20:00)
And then more interestingly, we built an agent identity that we call Matthew Bolton—which is a horrible name, because Matthew Bolton was the Bolton of Bolton & Watt. Matthew Bolton is our assistant for the customer discovery process. He helps us prepare for every call—looks at the persona we’re talking to, looks at our current hypotheses and what our validation focus is, and basically says, here are the areas to dive into. Of course we review and make sure we talk about the right things, but he makes the prep more efficient.
Afterwards, the AI transcript goes directly into a Notion table. We run Matthew Bolton and it regenerates a point of view on each of our core hypotheses—what’s totally validated, what’s totally invalidated, where we need to dive in more. It pulls out the relevant quotes from the people we’ve spoken with.
It’s been a huge hit in that way. Where it’s totally missed for us—which is also kind of interesting—we’ve talked to other people who do synthetic customer calls, effectively making AI into the customer. We just can’t make that work at all. Basically anything that strikes us as a good idea, that passes some basic sniff test, the AI is like, “I’d love to buy this from you.” No matter what we do, it expresses a ten out of ten customer pull. We tried that, then flew to meet a real prospective customer and just fell on our face completely.
We’ve iterated our way past that, but we don’t think it’s actually useful for that. It doesn’t know the nuances of the psychology of someone who’s worked in an industry for 15 years and is deciding what to buy right now.
Dan Shipper
Or it just knows that you want it to say yes, so it does.
Dan Friedman
We desperately tried to have it not be in sycophancy mode and we can’t get it out of that. Maybe someone else can, but our point of view at the moment is it can help us talk to people effectively, but it cannot actually reduce the number of people we need to talk to to get to confidence.
Dan Shipper
This is really interesting. Are you able to show us Matthew Bolton? Show us the goods. I’m also curious about the prompts you’re using to research business ideas and filter them.
Dan Friedman
So we have different Matthew Bolton flavors. There’s an underlying agent identity doc. This was for a specific category dive we were doing into P&C insurance—property and casualty insurance. We break down our own thinking across a few docs. One is what we call a POV—a point of view—on what we think the opportunity is, what’s the problem we’re solving, who are we serving.
We have a hypothesis tracker with a list of what are the core things we believe about the category, what has to ultimately prove true, what we’re looking to discover through potential customer calls. And then we have transcripts and our own notes from every call we’ve had.
When we call Matthew Bolton, we ask it to reread our latest point of view, read the hypothesis tracker, read the most recent ten calls, and then for every hypothesis, re-update: what’s the evidence for, what’s the evidence against, what’s the strength of it. It basically points us in the right direction—where to spend more time, where to say, great, we’re good to go on this.
Dan Shipper
This is kind of sick. I love this.
Dan Friedman
Oh, thanks. That actually feels really meaningful. I’ve got to say, Sam and I this morning looked at each other and were like, are we sure we should be on this podcast?
Sam Gerstenzang
I literally said to Dan, let’s make a list of all the places we’ve tried to use AI and it hasn’t worked. So it’s nice to hear that from you.
Dan Friedman
Ultimately you just say “run P&C analysis” and it runs through all these different steps. What feels personally meaningful about this is we try to be intellectually honest with ourselves. That’s something we hold ourselves to. The nature of starting something new requires a manic energy and a little bit of a suspension of disbelief, because there’s just no reason any new company should succeed. This keeps us really rigorous and fact-based, which is what we aspire to. We can balance that with our own judgment—be honest with ourselves and ask it to remain fact-based and balance these two opinions.
Dan Shipper
Do you find that it actually works for weighing evidence? Because I find if you ask it for reasons for or against, it can come up with anything. Is it actually good at weighing evidence for you, or are you just surfacing it and making your own conclusions?
Dan Friedman
If we try to ask it an opinion on a high-level question, I haven’t trusted it with that. I tend to take those results with skepticism. But I think it’s really good at finding the quotes. At the end of the day, to support a hypothesis, in a perfect world what I want to bring to Sam is: here are the three key things that must be true in this idea, and I’ve got three quotes from different people that directly speak to each one. This just much more efficiently helps me get there.
(00:30:00)
Sam Gerstenzang
That’s the role Dan and I play for each other—keeping each other honest. And there’s a way to get the short version of that with AI where you’re like, make the best counterargument. It can sharpen your thinking even if you’re not looking for it to tell you you’re wrong. You can have it tease out: in what ways could you be wrong? That’s a really useful tool.
Dan Friedman
Sometimes I’ll say that and it makes some good counterargument and I’ll be like, oh, fuck off.
Dan Shipper
When you were talking about feeling outmoded on the research front—at what part of the research process? Because I feel like if I was going to send you versus Claude off to do some research, I’m sure that Claude would cover a wider breadth, but I feel pretty confident that if you gave me a research report on a sector, I would like it better than the one Claude made. It would just take you a lot longer.
Dan Friedman
No, I’m not that confident. I think it does a quite good job on basic market industry reports. It certainly beats the low- and medium-quality reporting that was out there for the one-week desk research upfront. There’s good stuff in public company filings whenever there’s a public company in the category we’re looking at, and sometimes we’re not just asking it to go do something—we’re uploading a bunch of thick PDFs too. It’s a quite good parser of that stuff.
Dan Shipper
I think maybe what I’m saying is I would trust it to give me the consensus opinion about how people think about a particular space, and I would trust you—even if you were using AI to do it—to come back with something that felt new and interesting in a way that I don’t think Claude can get to on its own.
Dan Friedman
I probably agree with that. I don’t think we’ve gotten good ideas from it. I think we’ve uncovered facts, and then we have our own earned point of view. We learned a lot through building Moxie, and for two years have been looking for another category where a Moxie-style business might be a good idea. We have a relatively differentiated point of view on what Moxie actually is and what makes it successful. Between Sam and me, we’ve probably talked to 60 people starting “business in a box” companies in different categories, most of which don’t have the properties we think are essential.
There’s a combination of the earned point of view through years of building with AI’s ability to consume massive amounts of information and fit that to our point of view.
Sam Gerstenzang
If you asked Claude to give you a bunch of business-in-a-box ideas, they probably wouldn’t be ones that felt good to us, because we have certain analogies and ways of looking at it. You can ask it to do the research and figure out the underlying category properties, but the default output isn’t the answer we want.
Dan Shipper
Aside from using AI in your company discovery process, you’ve been doing a lot of thinking about how to bolt on AI to your existing businesses that are not necessarily AI-native—the transformation process. What have you learned from that?
Sam Gerstenzang
It’s really interesting. Inception company, Series A company, Series C company—I think there are two parts to this. One, how do you actually get people to start using AI? And then, what’s actually worked and what hasn’t?
Dan and I were talking about this the other day—should you have an AI initiative? Is that a good idea or a bad idea? My perspective is it’s a bad idea because you don’t want to lead with the hammer. We all remember the time when everyone was putting everything in NoSQL whether it belonged there or not, or everyone was building a Slack bot ten years ago whether one was needed or not. But you do need something to shock the system—a new toolset.
The point of view I’ve come to is you shouldn’t give anyone credit for using AI. But you should make sure the expectation is they’ll deliver the best product and output knowing that AI exists. To do that effectively, you need to seed what the tools are, give a lot of good examples, and start demanding that when you see results from someone on your team, they’ve actually used the best possibilities. But you don’t get any points for generating a bunch of copy that was clearly written by AI and is bad to read. You have to vet the copy. But if you used a prompt to get 70 percent of the way there, or even 100 percent, that’s great.
You have to figure out who are the people on your team to seed these ideas so other people get examples, and then actually make it successful for those folks. One of the places we’re seeing the most action at inception is really on the experimental edge of things.
A few examples: it’s really good at generating landing pages and pushing our thinking there, but then there’s a ton of work to integrate that back into Webflow, have it fit with our system, have a consistent header and footer. There are almost two phases—research and development, then production.
We’ve had a number of people on the team build throwaway apps that have been really useful. For example, one of the challenges we have with the funeral home business is that people call in and mention some town name, and we need to know whether we service that area. A designer spun up an app where anyone can type in a hospital name or a town name and it resolves whether we can serve there. Instead of spending engineering time on figuring out how to deploy that safely and integrate it, it’s just a separate app that’s a link from our main one. Enabling those types of things has worked really well.
Our engineers have gotten more productive, but a lot of the core things an engineer does are still the same. They’re faster coding, but deployment still takes work. Maintenance still takes work. We have two different pieces being enabled by AI, but in parallel paths.
Dan Shipper
It sounds like the greenfield things are quite fast, and then anytime it has to touch something that already exists, it’s a speedup, but it’s not totally changing everything.
Sam Gerstenzang
That’s exactly right. There are some places where it’s made a huge difference. Our talent team was reminding me the other day that they’ve made something called Sam GPT, which they trained on all my blog posts and use to reach out on my behalf to potential candidates on LinkedIn. It’s a model trained on my voice that I forgot existed. That’s worked really well for them. There are these places where it kind of unlocks and enables something within an existing system.
Dan Shipper
I also think it’s interesting—the approach of you don’t get credit for using it, but you do get credit for doing the best possible thing given that AI exists. What that seems to solve is you don’t want people doing Potemkin villages of “I used AI for all this stuff” and it’s all just for show. But my experience—we do a lot of AI transformation work with big companies—is that if you just say that, people will continue doing it the way they already know how. They’re like, “Well, if I want the best quality, I have to do the thing I already know.” How do you deal with that?
Sam Gerstenzang
I think it’s finding a few people to set the example and then start comparing the work. We don’t count how much of your PR was AI-generated. If you commit and push a bad PR because AI wrote it and you’re like, “blame AI,” that’s a terrible outcome. Instead, we can point to an engineer who’s done all this great work, ask them how they do it in a public forum, and continue to raise the floor. That’s what we’ve found to be much more effective. It takes a lot of active work to find these examples across the company, seed those examples, and continue to elevate them.
Dan Shipper
Have you guys seen any change since Opus 4.5 came out? I feel like there’s been a big change, at least for us. Has that filtered into any of your businesses, since they exist in a slightly less tech-forward part of the economy? Or are you still in the regular ChatGPT wave?
Dan Friedman
The early word out of Moxie’s engineering is: yeah, this is better, but not a step-function different experience. We’re doing a lot of work to retool in order to experience more benefit. We’re seeing exactly the pattern of a newer engineer working on a more greenfield project moving much faster than someone working on something that touches multiple parts of the system. It’s a 40-person product engineering team with a more mature codebase. We have not seen the night-and-day transformation that the X sphere is reporting.
Sam Gerstenzang
We saw something similar over the last year, where everyone’s talking about GEO instead of SEO and everything’s going to change in agentic commerce. That’s one place where we’ve also seen much more incremental change. We’re getting more traffic from ChatGPT, but it’s almost just another channel. We have to think about it the same way we do paid search. There’s a cat-and-mouse game to figure out how to get free results, there’s going to be a paid version, but fundamentally, we don’t think people are going to buy a funeral via chat. That may not be true for a lot of products—when I do a flight search, I still prefer to do that myself rather than ask a travel agent. For us, it wasn’t really a shift in how we thought about marketing. It was just, okay, here’s another channel we have to make sure we’re ahead of, but it doesn’t fundamentally change our business.
(00:40:00)
Dan Shipper
Can you guys give me a preview of what your next business might be, or what areas you’re interested in?
Dan Friedman
I think we cannot. And I don’t know if Sam knew I would say that. Not because we’re hiding a secret so much as the level of embarrassment at literal day zero is so extreme that I don’t think I can tolerate it. We’re not 30 days from a launch announcement. We’re a few months out. We do know now what it is—as of the last five days. We’re in that phase where we just have to build more before we feel comfortable talking about it publicly.
Sam Gerstenzang
I think we can maybe go back to the original theme. I think it’s a useful distinction to draw between things that are purely enabled by AI—where the unit of work has changed—and this other category of a hard bundle in the real world. Those types of things we’re really interested in.
With P&C insurance, for example, there are some transformation aspects and some things that are going to work the same. We’re looking for some sort of secular change in the world. It could be the death rate going up. It could be that more people want Botox. AI is this megatrend, but we’re looking for the intersection of AI and some other trend, rather than AI being the primary trend. That helps guide the types of things we’re interested in.
Dan Friedman
When do you start to talk about new products, Dan?
Dan Shipper
All my products start as basically blog posts of “I built this little thing over the weekend.” So pretty much immediately. I think of new products as content first, and then businesses second if they seem like they have legs.
Sam Gerstenzang
It’s been so fun to watch you share all these little toys and experiments, because it feels like the edge of what’s fun and cool and potentially useful. You almost don’t fully ask the last question, which I actually really appreciate because it takes you into all these new places. Our businesses are like, “Will someone pay for this?” That’s the first question we ask. We’re both doing something kind of weird, but in really different directions.
Dan Shipper
Totally. And what’s also fun is—I don’t know if you remember this—but when I started Every, you were also thinking about newsletters, in an almost similar way but very you. I think you were doing cars first—wanting to do vertical-specific newsletters that people would pay for, like cars or trucks. So we’re always kind of on parallel tracks, but with very different personalities that come out in the way the businesses get built.
Sam Gerstenzang
Totally. We were running an automotive professional publication. The idea was to do a vertical stack version of The Information. It ended up not working, or maybe would’ve worked if executed differently. But yeah, we were both in professional newsletters and they were two completely different things.
Dan Shipper
It’s very hard. Cool guys, this is awesome. I love having you. Whenever your new thing launches, I would love to have you back to talk about how AI is involved in that. Just love being on our parallel paths in Brooklyn together.
Dan Friedman
Next time in person.
Dan Shipper
Sounds good.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn.
To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
For sponsorship opportunities, reach out to [email protected].