Transcript: ‘How a Top Podcaster Rides the AI Wave’

‘AI & I’ with Nathaniel Whittemore


The transcript of AI & I with Nathaniel Whittemore is below.

Timestamps

  1. Introduction: 00:00:51
  2. How you can get value out of AI right now: 00:02:15
  3. Nathaniel goes through his X bookmarks: 00:14:07
  4. Why content should have a point of view: 00:20:25
  5. Tools that Nathaniel uses to curate news about AI: 00:23:47
  6. How to use LLMs to structure your thoughts: 00:31:27
  7. Why the history of Excel is a good way to understand AI’s progress: 00:38:40
  8. The AI features in Descript that Nathaniel uses: 00:45:46
  9. AI-powered tools to help you generate content: 00:49:11
  10. Nathaniel’s tips on using Midjourney to generate YouTube thumbnails: 00:58:32

Transcript

Dan Shipper (00:00:52)

Nathaniel, welcome to the show.

Nathaniel Whittemore (00:00:53)

I'm so excited to be here.

Dan Shipper (00:00:55)

I'm psyched to have you. We've been trying to set this up for a long time. I love your podcast. For people who don't know, you are the host of The AI Daily Brief, a daily news and analysis podcast on AI, which is consistently one of the top-ranked—or maybe the top-ranked AI podcast on the technology charts. You do 15,000 or 20,000 downloads an episode, which I am jealous of. And you are also the cofounder and CEO of Superintelligent, which is a fun and fast platform for learning AI. And I checked out Superintelligent before the episode and it looks awesome. So, thanks for joining.

Nathaniel Whittemore (00:01:31)

Yeah, it's great to be here. I think the show is super fun and I'm glad to be hanging out.

Dan Shipper (00:01:37)

Sweet. So what we decided that we're going to do today is go through how you think about making a podcast every day with AI. So go from show concept to actually having it out there in the world and what AI tools and prompts and all that kind of stuff you use to do that. So, maybe just lay out for us the background of how you think about this.

Nathaniel Whittemore (00:02:08)

Sure. Okay, so a couple of things about this. One is a general framework for how to think about getting value out of AI right now. You and I were just talking about this, but one of the things that we found— So, Superintelligent at this point, the platform has been live since April, but we have about 500 tutorials across just a huge range of topics. Each of these is a video tutorial. They each come with a sort of project that goes with them where people can actually go and use the tools. And one of the things that people are regularly surprised by is that although some number of them are these big capacity-changing things like text-to-UI, or write an app in five seconds, where a lot of people are going to get the most value in the short-term is some random thing that's super basic that just saves them 20 minutes at a time on a thing they do every day, right? 20 minutes a day across an average work year is something like two-and-a-half weeks that you could save. And the media coverage of AI is so breathless that people are kind of trained to think they’ll wake up one day and their job's gone and it's not exactly playing out like that.

And so if you think in these terms of where are these small efficiencies that I can win back that in the aggregate make a big difference, then I think it allows you to start to think about all of your different processes and workflows through that lens and ask: is there a way to AI-ify that that's going to make something much easier for me? Or maybe it's not just a time thing. It makes it better because I hate writing copy or whatever it is.
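Nathaniel's back-of-the-envelope figure is easy to check. A minimal sketch of the arithmetic, assuming roughly 250 working days a year, 8-hour workdays, and 5-day weeks (assumptions mine, not stated in the conversation):

```python
# Rough estimate of annual time savings from a 20-minute-a-day efficiency.
# Assumes ~250 working days per year, 8-hour workdays, 5-day work weeks.
minutes_saved_per_day = 20
working_days_per_year = 250

total_minutes = minutes_saved_per_day * working_days_per_year  # 5,000 minutes
total_hours = total_minutes / 60                               # ~83.3 hours
workdays_saved = total_hours / 8                               # ~10.4 workdays
workweeks_saved = workdays_saved / 5                           # ~2.1 work weeks

print(f"{total_hours:.0f} hours, or about {workweeks_saved:.1f} work weeks per year")
```

Under these assumptions the savings come to roughly two work weeks per year, which is in the same ballpark as the two-and-a-half weeks cited above.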

Dan Shipper (00:03:49)

I absolutely love that. I just think of the breathlessness— You're so right about it. On the one hand, there's the AI companies and a lot of them are sort of branding themselves as autonomous agents, or whatever, because it feels so sexy. And on the other hand everyone else is like, it's going to ruin the economy and jobs and it's going to kill us all or whatever. And there's no one in the middle just being like, hey, this is actually really useful. It doesn't do everything on its own yet. And so I think when a lot of people try AI for the first time, they're like, well, this sucks. It's not doing the thing I expect it to. And, actually, no. That's because it's a tool that you need to learn how to use. It's not just a magical cure-all for everything, but if you know how to use it, it's really powerful. And yeah, maybe it doesn't do your entire job for you right now, but those 20-minute things here and there really change what work you can do and what you can get done in a day and in a year. And I think that's super powerful.

Nathaniel Whittemore (00:04:47)

Absolutely. And I think that we're still so at the beginning of— We're at a stage in AI's development where we're basically asking everyone to go figure out how to reinvent their own workflows. And that's so not how things work in the real world. In the real world, the way that things work is that there is some very small number of people who are the sort of super-creative experimenters and tinkerers who spend hours and hours and hours figuring these things out just because they actually like the joy of tinkering. And then they share what works. And I think that part of why this isn't happening right now is that so many people are not sharing what they've figured out, which is why I think shows like yours are so important. And a big part of that is actually policy in the corporate sector, right? So there was a study that recently came out from Microsoft and LinkedIn that found 75 percent of knowledge workers are using AI at work, but 78 percent of them are not disclosing it. They're not talking about it. They're basically smuggling AI into work because they don't want to be told that they're not allowed to do it anymore, because this is the thing that's so magic.

If you've ever had to create YouTube thumbnails before Midjourney or DALL-E or something like that existed, and then you get to use an image generator as part of that process, you are never—and I mean never—going back to the way that you used to do things. It would just be insane. You would have to pry Midjourney out of my cold, dead hands. And so the natural tendency for someone who has figured that out is to keep it quiet, because they don't want to be put in a position where someone's going to tell them that they can't do that. So I think we actually have an artificial barrier right now, even in terms of how these insights are flowing between people, that is really sort of undermining how much benefit this stuff can have. But I think that's breaking down a little bit now. And again, shows like yours, I think, are a big part of that.

Dan Shipper (00:06:44)

I think that's so interesting. I hadn't really thought that people just actually are not sharing—maybe they don't want to get it taken away. Or there's also maybe a little bit of a stigma, depending on what community you're a part of to be using this. That is also definitely a thing. And yeah, that's definitely what I wanted to do with the show. One of my little bits I do is I think ChatGPT is like sex in high school. Everybody is talking about it, but very few people are actually doing it, you know? And maybe the nuance that you're adding to this is more people are doing it than you think, they're just not talking about it because they're ashamed, and I think both actually can be true depending on the community or the group of people you're talking about.

Nathaniel Whittemore (00:07:30)

100 percent. That analogy actually completely holds.

Dan Shipper (00:07:32)

Yeah. I'm sort of curious. You're running Superintelligent. You're teaching people how to use AI in a practical way. Before we get into this specific podcast stuff, where are you seeing the most power-ups, the most level-ups for people with the least amount of effort?

Nathaniel Whittemore (00:07:50)

I think there are categories of roles where, naturally, the utility and use cases are a little bit more apparent right away. Digital marketing is probably the easiest example, where they're already living inside different tool platforms. If you're using something like Facebook's self-serve ad platform or Google's self-serve ad platform, they're integrating AI for you into the asset generation process. So that's an area where it's just such an obvious place to use it for copy and things like that. We'll get into a bunch of this. So that's one area. I think with writing, people are starting to figure out if and where a ChatGPT or an LLM is valuable. Although, I think actually this is another area where there's a little bit of nuance. If someone is primarily a writer, or a primary part of their role is writing, they actually, I think, tend to use these tools less than someone who is not primarily a writer. But still, everyone has to do a meaningful amount of writing, especially in knowledge worker-type jobs, and I think a lot of where the benefit is right now is for people for whom writing was like nails on a chalkboard before—it makes it faster, it makes it better. It's not going to replace someone who's a great writer.

I mean, you guys write amazing essays. How many of your people are using ChatGPT to write those things? I would bet zero. Maybe they're using it for brainstorming or something else, but it's just different. But again, this is another, I think, casualty of the fact that we're not talking about it enough is people think, well, I'm a writer and ChatGPT can't do it as well as me. It's like, no, it can't. But there's a ton of people probably inside your organization who don't like writing for whom this is a huge benefit.

Dan Shipper (00:09:34)

I think you're totally right. I have someone on my team who is incredible and when he joined—it was a couple of years ago—he was much more junior and he speaks incredibly. He speaks great English, but English is a second language and you could kind of tell in his emails and in his writing that it was the second language. And the minute ChatGPT came out, his emails became perfect and it was amazing. And now he's used ChatGPT so much that he can write better emails without having to use it anymore. And I also think you're right—I don't use ChatGPT to write my articles, but I do use it for many micro-tasks in the process of writing my articles, and that I think is super valuable. And honestly, I actually use Claude much more because I think Claude's a better writer, but ChatGPT is good for certain writing tasks.

I think you're right. It's back to that thing where people are expecting too much of it, to do too much all at once. Oh, write an article for me all at once. And, yeah, if you're not a professional writer, having it do that could be really helpful because the quality level that you need to get to is lower. But if you're a professional writer and you're saying, please write an entire article for me, you're going to hate it. You can use it as a tool, though, and that can actually help save those 20 minutes. Instead of going and looking up some complicated topic, you can just have it summarize it for you and then put it into your article, which happens all the time.

Nathaniel Whittemore (00:11:04)

100 percent. I also think that this pattern of filling skill gaps, rather than strictly augmenting things that you're already an expert at, holds in a bunch of other areas as well. I don't know if you've ever experimented with any of the AI website generators or anything like that. But there's literally zero doubt that the results that you're going to get from a 20-second generation from Framer, or something like that, are not going to be as good as sitting down, customizing WordPress templates to be exactly what you want, inserting your own graphics. The difference is that they're instant. Instead of spending all of your time building, you're just changing color schemes and tweaking copy. And so it's the same sort of pattern. If you're not a web builder, and you need to do something fast, they're unbelievable tools. However, they're not changing the fact that the web designer is still sort of super premium if you're looking for something great.

Dan Shipper (00:12:05)

So I feel like we framed up how AI can be valuable in general. I'd really love to go into, in particular for your podcasting workflow, help me think about how a podcast comes together for you, what that process is like, and then let's start getting into the specific parts of how AI can be useful.

Nathaniel Whittemore (00:12:24)

Absolutely. And so the framework that I'll try to bring to this is: so many people now are creating content. And so we'll try to abstract a little from just specifically podcasts to a broader content-creation process because, like I said, a huge number of people are doing that. Let me share my screen. Alright. So for the purpose of this: The AI Daily Brief, as you mentioned, is a daily podcast and video. It's actually two videos on YouTube that come together. There's a headline news section, which is about five minutes of fast, 30-second to one-minute updates on whatever's happened that day or the day before, and then a more analysis-type section that's more like 10 minutes, where we go deeper on a particular topic. So the day that we're recording, Ilya just announced his new Safe Superintelligence company. So that's the main episode. And then the headlines were things like Dell working with Elon on xAI, which raised their stock price, and some Chinese startups infiltrating the U.S. for AI purposes, even though there's controversy there. So that's the show that I do. And because it's two YouTube videos that get turned into a podcast that then has to be promoted and shared everywhere, there's just a lot of work. And actually this is one of two daily podcasts that I do on top of running Superintelligent. So I am very much in the market for ways that make this faster. And so I will say that I'm going to show both things that I actually use as well as things that one could use that, for whatever reason, I happen not to. And so the first area with this sort of a contrast is when I'm trying to figure out what I'm going to cover from an AI news perspective, it's informed by two things. One is I'm living on Twitter, bookmarking things, day in, day out. And a huge part of what I care about is not just the news itself, but the discussion around it, the meta-analysis of how people are responding to the news.
I think that's really what makes the show different from just any sort of reporting type of a thing. And I'm just bookmarking them throughout the day. There's no real way— There probably is a way to make that faster with AI, but for me, it’s just something that's so integrated into my normal experience where I'm just bookmarking, bookmarking, bookmarking left and right vs.—

Dan Shipper (00:14:47)

Can we see your bookmarks? I'm kind of curious. I want to understand a little bit more what's your taste? What's the thing that makes you go, ooh, I need to bookmark that for the show, you know?

Nathaniel Whittemore (00:15:00)

So sometimes it's going to be because it relates to a particular topic. So, for example, Claude 3.5 comes out, Artifacts come out, and I know instantly that is a topic where, one, I'm going to make tutorials about it for Superintelligent, and two, I'm going to cover it on the show. And so I'm bookmarking both just the actual news itself, particularly from key players—so Mike Krieger is the new chief product officer at Anthropic, previously the co-founder of Instagram, and I'll tend to bookmark the announcement itself—but then also people interacting with it, right? So in the first few hours after this came out, everyone was just sharing their generations, so you have a lot of people making clones of games, a lot of people just reacting to where it sits, especially in terms of comparison to GPT-4. And so this is a whole category of things: just, what's all of the discourse and discussion around a particular topic?

So you can see today it's almost all Claude for me. Because I'm keeping track of so many different tools for Superintelligent, I will often just bookmark things that I want to go back to—Galileo's super cool text-to-UI tool just announced a partnership with Replit that makes it easy to go from the code that Galileo is producing to the actual IDE. So I don't know how I'm going to use that or even if I will. But for me, to some extent, bookmarking is also a mental trigger. It's a way to remember one additional thing. What else is on here that might be elucidating? Sometimes it's just big conversation, like Elon clearly talking about Safe Superintelligence when he says, “Any given AI startup is doomed to become the opposite of its name,” which is a pretty clever tweet, I have to say, even if you disagree with the premise.

Dan Shipper (00:16:53)

Is this the kind of thing where you're bookmarking it and then right before you record the show, you're scrolling back through to write a little doc for yourself? How do you come back to it?

Nathaniel Whittemore (00:17:05)

The way that the AI show works is I actually don't script it at all, and that's because— So, I've done these sorts of daily shows semi-scripted, fully scripted, and completely unscripted. When I started the AI show, I decided that I was just going to do it completely unscripted. One, I think it brings a different type of energy that I like, where you're sort of rambling through it a little bit. But also, just from a pure time perspective with so many other things going on, I can't start another show that's going to be two YouTube videos if I don't do it this way. So basically what I'm doing on any given show is I will kind of roughly think it through in my head in terms of what the architecture of a particular show is. And then I will literally put the tabs on a window that I'm going to go through with Descript, in the order roughly that I'm going to go through them. And so for the Ilya episode today, I went back through all of the conversation and discourse and started to bunch the commentary into some different themes. So one theme was contrast with OpenAI and whether there was going to be a talent rush from OpenAI to Safe Superintelligence. A second theme was excitement that Ilya is doing something, and sort of lauding Ilya for his contributions so far. A third theme was basically skepticism that there was any sort of business model there that could justify whatever money was going into this, or how much it was going to cost to do it. And so it's sort of these buckets of themes that I'm going to put there. Sometimes it's articles. I'm really using the linearity of the tabs to talk over it and sort of do my structuring for me.

Dan Shipper (00:18:48)

I love that. I think it's so cool that the person who is doing literally the consistently top-ranked AI podcast is like, I don't script it. I kind of put my tabs in order and I just free-associate for 10 minutes and that's the podcast. It's so good. It's amazing because all these people are out here being like, I've got to, like, script it and it has to be perfect. And it's actually a sort of common pattern. I was hanging out with Ali Abdaal a month or two ago—he's a really big YouTuber, for anyone that doesn't know—and he was talking about his process. He's experimented with every different kind of video, from completely unscripted to completely scripted. And the thing is that, first of all, he has found that there's no correlation he can find between performance and how edited and how scripted it is. But what he has found is that his top video is one that was completely unscripted: He just sat down and reeled it out for five minutes and that was it. And mostly he does basically what you do, which is he has an outline with three points on it—your equivalent would be tabs. And then he just free-associates. And I think that really gives you that really nice balance between, okay, I kind of know the overall structure, but also I feel like I'm talking to you. You feel like you're talking to me and you're not fighting that sense of it being kind of wooden and delivered, which I think is really cool.

Nathaniel Whittemore (00:20:24)

The most important thing that I've found comes down to what the goal of a piece of content is. I think that what elevates it is just having a little bit of a point of view, or a theme that you're trying to convey, or a point that you're trying to make. It doesn't have to be a big, complex, radical point, but it's what people respond to—there are so many sources of news and just raw information. The difference is how content creators contextualize that to help people think about it. So for the Ilya show, just to stay on that example, because we've been talking about it and I think it'll be familiar to a lot of your listeners: part of the perspective that I wanted to bring was just the three or four different ways that people were talking about it. That's almost like I'm replacing my perspective on it with other people's. And that's what I try to do primarily. It's not a bully pulpit show for me. I want to show how the world is reacting to particular news. The one contribution that I made, that I wanted to kind of share as a framework for thinking, was around this question of the business model: What do these investors think they're going to get? And my argument was basically that my guess is that for some number of investors, they believe that the upside potential—the value of actually creating superintelligence—is so enormously incalculable, if it actually is achieved, that wasting any amount of time between now and then on some intermediate business model is actually a huge distraction from the upside potential they're going to get, right?

If someone is basically putting odds on how likely a team is to succeed at achieving a trillion-dollar outcome or a multi-trillion-dollar outcome, it would be a compelling argument, I think, to some that trying to win enterprise business between now and then is actually going to slow you down—and that you don't really care about the $3 billion that OpenAI is making, because the multiple trillions that superintelligence represents is the big prize. I don't know if I even believe this; I just think it's an interesting framework that I hadn't seen people sharing. And so I think that's— We try to do this with our tutorials as well, instead of just having it be, here's the stuff that you can do. We're trying to have thought about it, to have experimented with the tool enough that we can offer a shortcut on some use cases, right? So it's basically: if we've spent an hour or two figuring something out, we're going to give it to you in five minutes. We're saving you that time. That's sort of the promise. And I think that all content to some extent follows this pattern of not just what's the thing that's being shared, but how am I supposed to think about it?

Dan Shipper (00:24:21)

Yeah, that makes a lot of sense. So, I know I completely derailed you from the original AI thing you were going to share. So let's come back to, you're on Google News, and you were sort of talking about, okay, here's how I gather the kinds of topics that I want to share on the show. We went into Twitter bookmarks, but it seemed like there's something that you want to share specifically about Google News.

Nathaniel Whittemore (00:23:46)

Sure. So the funny thing is that this is actually contrary to the entire way that search is evolving in the era of AI. But because I want comprehensiveness, going back through pages and pages and pages of the day's news on Google is actually the most relevant starting point for me outside of those Twitter bookmarks—just making sure that whatever random study someone did, that someone wrote an article about, that's sitting on page 10 of Google results, I see it. And I think it shows a moment where AI will not always be the answer to all problems. A lot of times when people are searching, what they're looking for is the answer to a question, where this sort of AI overview, or what you can get with Perplexity, is exactly what you want. But sometimes you want long-tail information that's just way out there, that you wouldn't have access to otherwise. And actually, summarization is the enemy of that. And that's sort of where I am with this. However, there are AI tools that are valuable as part of this searching process as well. So I will often use Feedly.

Feedly has an AI feed feature that they've been experimenting with for a while now. You can create AI feeds. It's better if you actually customize it and narrow it down. But again, I've got this sort of weird use case where really what I'm looking for is just everything about AI. So I'll do a search for artificial intelligence, and it comes back and basically says, there's way too much there, you shouldn't do it like that. But it's still valuable because they're going to kick up things that are sometimes deeper and more long-tail than even I'm going to get from Google.

Dan Shipper (00:25:26)

So this isn't based on the things that you subscribe to in Feedly? Is it just the entire internet?

Nathaniel Whittemore (00:25:31)

If you used Feedly the way that they imagined it, yes, you can. I mean, you can use this tool in incredibly powerful ways to customize and home in. Again, in this particular use case I am voracious in a way where just a raw, out-of-the-box use of it is actually good for me.

Dan Shipper (00:25:50)

Okay, cool. That's really cool. I didn't know about that. I feel like there's this whole discourse about people being able to create their own algorithms and all that kind of stuff. And Twitter should be replaced with a blockchain where you have your own algorithms. And it feels very nascent and we're all waiting for our algorithms, but this actually feels like you could do that with Feedly, but no one's talking about that for some reason. I really want to check this out.

Nathaniel Whittemore (00:26:14)

Yeah, I think it's likely that there is a power-user type of use case for this that could be extremely valuable for something not dissimilar to what I'm doing, just a little bit more focused or refined, where this type of thing could save you a huge amount of time by surfacing the actual things that you want to cover. So, as part of this sort of research process, I will also shout out Perplexity and AI research tools.

I'm not sure exactly what percentage of the time this would come up for me, but let's say that I'm digging into a technical topic. I'm coming at AI from a broad societal and business level, not a technical level. And like many people who have gotten into AI specifically through the context of generative AI, there are concepts that people have known for decades at this point that get thrown in, bundled in with new terms. Anytime that I'm looking at research papers, I will often go to something like Perplexity to ask for background information on a particular topic, or I will use Perplexity sometimes to remind myself of things.

So, a lot of times, part of the value proposition, I think, of a daily analysis show like mine is that the percentage of people who listen close to every day is actually very high. When people listen, they tend to listen a lot. It tends to be sort of integrated into their workflows. And, because of that, I almost get to weave themes in and out over time that people have heard me talk about over and over in different ways. But I sometimes have to remind myself of those things. For example, a really relevant thing that would come up quite a bit is “Biden AI executive order. Give the key details,” right? And this is something I spent a ton of time on. I talked about it then, but it was more than six months ago now—about seven months ago. And so when I'm going back and reminding myself of all of the different provisions therein, what's great about something like Perplexity—and this is the same promise of an AI overview-type of interface experience—is that if what I need is just that quick summary and reminder, that's what its answer does. But because it's all sourced, it's actually a quite fast way to get to the original source material as well. So Perplexity is a fairly common tool that I'll use in that very small prep period before I just let the thing rip.

Dan Shipper (00:28:50)

That makes a lot of sense. And I think this is actually broader than just podcasting. I think people underestimate how much any kind of content creation, whether it's podcasting or writing or whatever, is actually just summarizing other things in order to get to the point you want to make. Because if you're tracking the theme of how government and AI interact, and this Biden executive order is part of that, you might have a perspective overall about what the government should do or what the government's role should be. But in order to explain that perspective, you have to first explain the Biden executive order, and you have to summarize it in a few sentences and know the intricacies of it. And that actually takes a lot of work to make sure you're right. Even if you basically know it, if you haven't thought about it in a couple of weeks or a couple of months, you have to go back and read all this stuff. And I think AI is so good for doing that. And it happens for me in my writing too, where I'm often dealing with more complicated, either technical topics like how AI works, or more esoteric, philosophical ideas that maybe I studied in school or I read about a long time ago. And in order to frame up the discussion, I have to summarize that stuff. And it always takes a bunch of work. But now AI just does it for me in a format where I can kind of just stick it right in there and put it in my voice or whatever. That is so helpful, and I think people underestimate how valuable it is.

Nathaniel Whittemore (00:30:16)

Totally. And just building off of that, so much of it too is that, even if one isn't going to use precisely what it gives you, as you're trying to figure out how to think about something, it's incredibly valuable to go back and get the gist of arguments, right? So I just asked, as we were talking, as another example, “What is a transformer in the context of AI and what are arguments about whether the transformer architecture can lead to AGI?” It's the type of question that's resurfacing around, again, Ilya's Safe Superintelligence thing. Are we on a path—? Some of the responses were that the path that people are exploring fundamentally is not going to lead to superintelligence. We need something totally different. That's why I'm skeptical. Going back and getting the background on what those arguments are and where they come from and who's on what side is something that previously was enormously time-consuming. You'd have to be a wizard and sift through so much on Google, whereas now it's so hyper-compressed.

Dan Shipper (00:31:14)

That makes sense. Yeah, totally. So I guess, once you've done your research, you have your set of things from Perplexity, from Twitter, from Google. What's your next step?

Nathaniel Whittemore (00:31:27)

So, as I mentioned, for me at that point, I'm just off. I set things up in my tabs and I let it rip. But I think it's valuable here, again, as we're trying to abstract this for a broader set of people: this is the phase at which someone might want help architecting or scripting, right? So pick your LLM, whether it's Claude or ChatGPT, or whatever works best for you. And there's a couple of different ways that I can see people using this. One would be outlining, right? So you don't need it to script everything for you. But say I'm going to create a show about that same transformer architecture. Let's actually do this, where I'll pull Perplexity back up and give it a try. I'll use the same prompt and I'll say, “I'm doing a podcast about this. Outline in four sections. What should I talk about?” Right? Whatever it is, this could be a way that you could figure out roughly how to architect your show. Again, this is not necessarily strict scripting. This is a way to think about: how can I frame everything? How can I sort of bring some structure to this? I think stuff like this could be incredible— You could obviously also go from here to actual scripting, right? If that was the approach for you.

I think scripting can be really valuable especially when you need to be precise. It's easier to not script when you allow yourself to be rambly, because part of the way that humans work is you talk enough to figure out what you were trying to say in the first place and then you eventually get to it. But if you're trying to be precise, right? If you have a limited time window, if you have 30 seconds or 60 seconds to make your point, then compressing what you're doing into something that is more scripted can be really valuable. And so I think LLMs are an obvious tool for that.

I guess one more thing to flag, and I'm interested in your experience with this. I have been spending a bunch of time exploring how good LLMs are at imitating my writing style. And I would say that I give them a D-plus to C-minus so far, right? It is not a thing that I think actually works. I've tried with Gemini loading up a bunch of my stuff. I've tried with Claude. I did a custom GPT. And what I will say is that it is distinctly more like me than the raw, baseline ChatGPT sound, right? So it's not that there's nothing here. It's just that if you're actually looking to replace your own voice, I think at this stage, it's very unlikely that you're going to be satisfied with the results.

Dan Shipper (00:34:21)

Yeah, I have a slightly different perspective, but in a limited way. So I actually have found that Claude—very particularly Claude, not ChatGPT, not Gemini—is good at replicating my voice, like 80 percent of the way for specific kinds of writing tasks. And I think you should be skeptical of that because, like you said, mostly LLMs are not good at this. But I found that it works if you are doing a repeat writing task where you're taking a piece of content and transforming it from one form to another. An example for me is I'm constantly taking podcasts I've done and taking the transcript and trying to figure out how to tweet the episode out. So I need to turn the transcript into a tweet. If I have a bunch of examples of having done that in the past, where I have the transcript and then the tweet that came out of it, and I feed that to Claude along with a little bit of a style guide, it's actually really, really good at getting pretty close to my voice—picking out the interesting thing and ordering it in the way that I would order it and all that kind of stuff, which is really, really hard to do. And so you just teed me up perfectly, because we built this tool called Spiral that we launched two days ago. It's been going wild over the last couple days, which is really, really fun. We have like 2,000 signups in the first two days, something like that, and it's basically based on this insight. I realized this a couple of months ago, built the first version of it, started using it, and shared it with my team. And they were like, wow, this actually works really well. So we just released it. And I think for that narrow-ish use case, it's kind of stunning how good it is.
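The few-shot pattern Dan describes—past transcript-to-tweet pairs plus a short style guide, fed to Claude—can be sketched roughly like this. This is a hypothetical illustration, not Spiral's actual code; the example pairs, the style guide text, and the `build_messages` helper are all made up for the sketch:

```python
# Few-shot "voice cloning" for a narrow, repeatable task: transcript -> tweet.
# Past (transcript, tweet) pairs become alternating user/assistant turns,
# and the style guide rides along as the system prompt.

STYLE_GUIDE = (
    "Lead with the single most interesting idea from the episode. "
    "First person, conversational, no hashtags, one short paragraph."
)

def build_messages(examples, new_transcript):
    """Assemble few-shot chat messages from past transcript->tweet pairs."""
    messages = []
    for transcript, tweet in examples:
        # Each past pair teaches the model the transformation in your voice.
        messages.append({
            "role": "user",
            "content": f"Transcript:\n{transcript}\n\nWrite the announcement tweet.",
        })
        messages.append({"role": "assistant", "content": tweet})
    # The new transcript is the final user turn the model completes.
    messages.append({
        "role": "user",
        "content": f"Transcript:\n{new_transcript}\n\nWrite the announcement tweet.",
    })
    return messages

examples = [
    ("We talked about how Excel got unbundled into SaaS tools...",
     "New episode: why ChatGPT might be the Excel of AI..."),
]
messages = build_messages(examples, "Today's guest makes a daily AI podcast...")
# These messages could then be sent to Claude, e.g. via the Anthropic SDK:
#   anthropic.Anthropic().messages.create(
#       model="claude-3-5-sonnet-20240620", max_tokens=300,
#       system=STYLE_GUIDE, messages=messages)
```

The key design choice is that the examples carry the voice and the style guide carries the constraints, so neither has to be exhaustively specified in prose.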

Nathaniel Whittemore (00:36:12)

Awesome. So this is actually a perfect segue and reinforces a lot of what we talked about at the beginning about the gap between people's expectations and reality. General purpose, hey, just imitate my writing? Underwhelming, right? You found this very specific type of use case where it worked really, really well, and then you institutionalized that, or built it into a specific kind of purpose-built tool that can do that type of thing over and over. And this is actually a pattern that we're seeing at Superintelligent quite a bit. There was this notion that the first wave of startups that came out right after ChatGPT were, very pejoratively, called ChatGPT wrappers and things like that. And what's interesting is that I think what you're seeing now is that some number of those startups weren't ever actually ChatGPT wrappers per se. Instead, what they were was customizations for a specific use case that people have over and over and over again—a use case that needs either a specialization or customization of the model, or a different user experience to surround it, right?

Perplexity, I think, is a great example of this in some ways. I mean, they're pulling from many different models, but really it's a triumph of designing for a specific type of use case and creating a great user experience around it, right? Rather than it just being this random chat window. Artifacts, which just came out, is basically just a different user experience that says, hey, actually, if we separated the output field from the instruction field, it would probably be a lot easier to help people use this tool. And so far—I mean, we're hours in—but I think that's been validated pretty fast. Spiral, to me, is a great example of this, where there's a thing that a lot of people have to do over and over and over again that, in sort of gen zero or gen one of LLMs, you're hacking together with custom GPTs, or you've just got prompts that you save in Notion and copy-paste every time. Now there's a purpose-built tool for that. And I think the reason people are responding so well is that there's a lot of people who have that same use case you did, and it just cuts out all that sort of stuff. And it's likely to become a beloved product because of that.

Dan Shipper (00:38:33)

Thank you. I really appreciate that. I hope so. And I think, yeah, you're totally hitting on this thesis I have, which is, if I think about what the best analog for AI in general and ChatGPT is in terms of the history of software, I think a really good analog is Excel. The overlap is that Excel is really easy to get started with. Anyone can start by just filling in a cell, but you can do things of almost unlimited complexity with it. It just progressively reveals its complexity. You can use it at any level. I think it taught people a new paradigm for how to think about using computers when it came out 30 years ago. And what's interesting is that Excel took over the market and then was sort of progressively unbundled into all of the SaaS tools we see today. And I think when ChatGPT came out, everyone tried to build ChatGPT wrappers—the pejorative ChatGPT wrappers. And I think many of them didn't work because people hadn't had enough time using ChatGPT to even know that they might want something else. And I think we're just at the point where enough people have been using ChatGPT and other tools like it to say, I have this specific workflow, and you can kind of unbundle it a little bit.

I think it's still very early, but yeah, you can unbundle it into things like Spiral because people know what it is and why they'd want it. Whereas if we'd launched it all a year ago, I don't think enough people would know and so that's sort of the opportunity I see for these types of tools as they get more distributed and more and more people learn this paradigm of computing, which is a very different paradigm. There will be more opportunities to kind of spin out the more complicated workflows that people develop for their specific use cases and turn them into new products.

Nathaniel Whittemore (00:40:24)

Yep. At the risk of getting too off-track here, the other thing is that a lot of times when a company or a category of companies is being referred to pejoratively, you have to understand the context of who is being pejorative about it, right? And so the ChatGPT wrapper thing is pejorative specifically in the context of whether it's venture-backable. But it's only a small slice of businesses in the world that are actually venture-backable, because that business model is predicated not on 10x returns, but on 100x and 1,000x returns. And there's just a very small number of companies overall that are that type of thing. What's fascinating about AI is that it's making non-venture-backed models viable for even more sophisticated products. So it's kind of like AI is solving the problem, and that's creating opportunities for builders on top of AI, which is a pretty cool thing.

Dan Shipper (00:41:14)

It is super cool. I mean, yeah, I want to get back to your workflow in a second, but I mean, we built Spiral end-to-end in two months. I built the first version in two days with AI and it's unbelievable how much easier it is, how much faster it is. We probably spent maybe $10,000 total to build it. You can build really amazing products and businesses really quickly with this stuff so it's super fun.

Nathaniel Whittemore (00:41:38)

Yeah. I think too that there is a lot of insight in your point that we are still so early in our experimentation and our explorations of how to use these things. I don't know if you saw this tool that was just announced this morning called Auto, but basically the way it looks like it works is it's a research tool, but it uses tables instead of a chat interface. So let's say you're researching Nvidia's corporate results or something like that—what are patterns in Nvidia's corporate results? It's going to create an agent for each column in the table that then goes and fills out that information, and you can give it sources in the same way that you would give ChatGPT sources. And I actually think it's fascinating. I haven't had a chance to play around with it yet, but it's notable to me that any sort of data analysis stuff is actually, I think, a little lagging right now. People have been so enamored of the chat-based interfaces that the Excel-type interfaces have been left behind a little bit. And as someone who interacts with workplace, enterprise-y type users every day, if people start figuring out stuff that really works in formats like Excel and tables, I think it could take off.

Dan Shipper (00:42:46)

That's amazing. Yeah, I did see that and I need to check it out more because it looks really cool. And I think it's a great example of that new paradigm of computing that we're just sort of peeling the layers of the onion back. So to get back to your workflow, we've basically gone through some of the research phase, we've gone through if you wanted to do some outlining. How would you do that? What's the sort of next step in the process?

Nathaniel Whittemore (00:43:11)

Next step is the big one. It's the recording. And there are, of course, a million different ways to do this. The tool that I use every day is Descript. So Descript is basically a— For my purposes, it's not dissimilar from Loom or something where I'm selecting a part of the screen that I'm recording. That set of tabs that I've set up and my little floating head is going to be down there in the bottom or I can do a direct-on shot. 

So for me, I have a very specific formula. I do a direct-on headshot that starts the thing and then I do the floating-head-over-screen thing for the rest of it. And Descript is sort of a two-part thing. It's a recording tool—that's one part of it—but it's also an editing tool. Descript was one of the first tools to really have text-based video editing. It's going to create a transcript of what you record and allow you to highlight parts of the transcript and delete them, instead of the timeline-based editing that's traditional in video editing. This makes it massively more accessible for people who haven't used Adobe Premiere Pro or things like that before. And actually, when I started The AI Daily Brief, for a little while I was editing it myself, because I just wanted to learn how to use this and how to get fast at it. I also wanted it to be that if, for any reason, an editor I hired in the future crapped out—because it's a daily show—I could just jump in. And so Descript—or there's a bunch of things like it now; this paradigm of text-based editing has found its way into basically every tool at this point—makes the video creation process, capture and editing, a lot faster than it used to be.

Dan Shipper (00:44:48)

I love Descript. It's so smart. I mean, we use Riverside for the show, but I think Descript is also really incredible. I just have to say, “If you can edit text, you can make videos” is such a good headline. It just reminds me of that Dodgeball quote: “If you can dodge a wrench, you can dodge a ball.” I wonder if they're referencing Dodgeball.

Nathaniel Whittemore (00:45:07)

The guy that started Descript is a funny dude. His name is Andrew Mason. He started Groupon and he is a quirky, funny dude. A quirky MFer relative to the world of startup CEOs, I will say.

Dan Shipper (00:45:17)

He truly is. I think he wrote a really funny resignation announcement from Groupon. So if anyone's looking for good corporate humor, definitely go check that out. But yeah, I think Descript is great. I love the AI text editing-to-video pipeline they've built. And also they just have all these little things that make it nice, like they'll just get rid of your ums and ahs and like do a bunch of auto-editing stuff. That's really helpful.

Nathaniel Whittemore (00:45:46)

Two other AI features that I wanted to call out just for completeness. To your point, though, the ums and ahs—the filler words, they call it—is probably the most used of any of them, and it's really good. And I have found that it actually tends not to over-edit those. The one challenge is if “like” is your particular tic—sometimes those are important, because you're actually talking about someone liking something. But in general, the ums, ahs, all those things—it does a really great job. The other one that I've been experimenting with a little bit more is their eye contact tool. So with Superintelligent we have a bunch of different content creators who are creating tutorials. Most of them don't do scripting in general—it's sort of similar to the process that I have, an outline into a video—but they will often script a kickoff line or two, because they know exactly what they want to say. And if they haven't got a teleprompter set up, you can see that someone's reading something, and it's a very distracting thing. So we've played around with eye contact, which is basically a post-production editing tool that Descript has that makes your eyes look at the camera. And what I would say is, when you are the only thing on camera, right? If the full video is you, the eye contact tool can be a little zombie-ish, it can be a little aggressive. However, if you are doing it in the context of a floating head, right, where you're a small little part of the corner, it's gangbusters—it is so good. So if you're doing something that's semi-scripted where you are doing the little floating head thing, I would encourage checking out the eye contact tool as an approach as opposed to a teleprompter-type setup.

Dan Shipper (00:47:39)

That's really interesting because I literally just bought a teleprompter for that exact reason, because I'm now doing a lot more ad reads and I'm doing more scripted videos or whatever. And just even for this podcast, I can look directly at you because you're in my teleprompter right now. And I've been meaning to try some of those eye contact AI tools—Descript has it, and there are a couple of other platforms that have it too. It's good to know that it works, but only when your face is just a small part of the screen. Yeah, that's a really interesting little tip.

Nathaniel Whittemore (00:48:13)

Teleprompter is a great example of all of these little lines that people are figuring out that will evolve over time. It's highly likely that in two years the eye contact tool will be as good as any sort of teleprompting, but right now teleprompting is way better if you want to fully maximize how good it looks.

Dan Shipper (00:48:29)

How does it work though? If you're looking like you're physically looking down, it fixes your eyes, so you're looking at the camera, right?

Nathaniel Whittemore (00:48:36)

The more that you're bopping around, the more that you're looking down, the harder it is, right? So if you're reading something that's down on the ground, where your whole head is tilted, that's going to be comparatively bad. Whereas the way most people do this, they set up their script just off to the side—it's like “resources, sign in, sign up”—and you can just see that they're not looking directly at camera.

Dan Shipper (00:48:56)

Yeah. Okay. That makes sense. Cool. Yeah. Descript is awesome.

Nathaniel Whittemore (00:49:00)

Okay, so Descript: we've taken the video, we've edited the thing, and now we're back doing all of the next set of stuff. And so we're back to LLMs, right? Because we want to rip transcripts out. Maybe we have a transcript from Descript, but you just articulated the perfect thing that you would use a Spiral for—previously a ChatGPT or something like it—to go from that transcript to potential titles to social media tweets. And there are tons of different tools for each different piece of this, right? SEO.ai is one of these things that's powered by all those LLMs that you would work with, but is hyper-focused on just thinking about SEO content. So if you're creating a companion blog post, this is gonna be a really good tool for getting it to rank—suggesting key terms that maybe you didn't use in the podcast itself, but would be good to include. There are tools like Hoppy Copy, which is totally focused on written copy. I think their specialization—basically the thing that they started with—is email newsletters, but they also have social post copy: LinkedIn, Twitter, etc. And again, these are all purpose-built tools that are exploring really, really focused use cases or applications of LLMs. If you're comfortable inside ChatGPT or Claude, you might build a process that does a lot of these things for you. Everyone is now figuring out which of these things work even better for them, you know? And so it may be that an LLM saves you 40 minutes of writing time and something like Spiral saves you an hour, or whatever. I think the writing that happens after any piece of content comes out is one of the best areas for this, even if you are a good writer, just because it's so exhausting, from a sheer human standpoint, to produce something and then have to go back and live inside that world again.

I mean, this is probably a little specific to me, but often, by the time I get the video edited back from my editors, or the final podcast, I'll have to go read the transcript to even remember exactly what I was talking about, because when it's done, I'm on to the next thing. And even though it's only six or seven hours later—whatever, three hours later—literally dragging myself back into the world of that piece of content is painful. And I like writing. I think I'm a good writer. I like thinking about titles and stuff. I still think this is an area where almost everyone is going to find benefits from using some version of these writing tools to help with the copy that comes out.

Dan Shipper (00:51:53)

I 100 percent agree. I mean, you're speaking my language, obviously, because I built Spiral for that exact reason. I record a podcast and then I get an edit back a couple of days later and have to spend all this time watching it and being like, okay, what was the main idea? What do you think is interesting? Or actually, I have a ghostwriter who helps me with this, and she has to do that. And that takes her a lot of time. And then she sends it to me and sometimes I don't like it. So then I have to do it. And the whole thing is a mess. And it's also hard work, and it's skilled work—not anyone can do it. It's simultaneously hard but also a little bit brainless. You're doing rote work that takes a lot of skill to do. That combination is sort of rare, and that's why I wanted to make Spiral. And it definitely seems to be helpful for those kinds of rote but still skilled tasks. I'm curious about Hoppy Copy—we're now doing competitive research for me. I'm really interested in how it actually works. Can we do—I don't know—an email campaign or a social post or something like that? What's going on there?

Nathaniel Whittemore (00:53:08)

Sure, let's see if we can find anything we can do with it that's still free. No, we can't, it looks like. This is sort of an aside, but it's the number-one thing that we're finding. On the one hand, I think that AI tools are training people to pay for software again. There's no, like, we'll-grow-at-all-costs-and-not-charge-you. Every tool is charging. The problem is that there's only so many $20 or $30 a month subscriptions that you can do before you just run out. And we're seeing that fatigue happen with Superintelligent users right now, where one of our most in-demand things—you can see it right on the front page—“Free Tools Only” is our most popular playlist, because people want to know the tools that have enough free usage that they can actually get value out of them.

Dan Shipper (00:54:09)

Yeah, that makes sense. And that's honestly what we're doing with Every. When you subscribe to Every, you get Every, you get Spiral, and you get Lex, our AI document-writer app. So it's not free, but you get a bunch of things bundled together, so you're not shelling out for a bunch of different subscriptions. But, yeah, that makes a lot of sense.

Nathaniel Whittemore (00:54:29)

I think it's hard. Listen, I think it's a better environment net-net for things to charge enough money that they can be sustainable businesses and then to let the chips fall where they may—people have to try them, and they stick with the ones that work and they don't with the ones that don't. I think the biggest barrier is getting people to try things, which is what we're finding. So we're having conversations with lots of folks along the lines of: Hey, we'll make some of the content that we're creating about you free in exchange for discount codes for our users. That's the sort of bargain, or whatever.

Dan Shipper (00:55:00)

That makes sense. That's really cool. Okay. Yeah. So, we unfortunately can't do— Is it called Hoppy Copy? Can't do Hoppy Copy because they have a paywall but, yeah, curious, what's next? If we don't want to pay for Hoppy Copy, where do you go next?

Nathaniel Whittemore (00:55:20)

So next is my favorite part, just from a sheer personal standpoint. Everyone who's into AI now had whatever conversion experience they had, right? There's a moment where they were like, okay, well, that's going to change everything. And it's going to change everything about how I work. And for me, it was image generators. The first place that I started to see this was my brother-in-law had just finished writing a fantasy novel—it had been his goal for a couple years, and he spent a ton of time working on it—and he sent me a bunch of illustrations for it that were off-the-wall cool. Just so cool. This was probably December 2022, and he had been using an implementation of Stable Diffusion—a Discord server called Unstable Diffusion, where it's uncensored and you could do whatever. It's a very chaotic Discord server, for anyone who goes and checks it out, if it's even still there. And I thought the stuff he was creating was amazing. I instantly started pitching him on starting a consultancy to go help other people do that right now. Basically, I was like, you have to do this. And he decided not to—he wanted to keep writing, whatever. So that was step one. And then step two, I started playing with Midjourney, and I found myself—I would be on a plane, and instead of turning on the internet or watching a video, I would just endlessly create. I can probably find them: Hemingway at Les Deux Magots in Paris in the 1920s, and these random nostalgic images of Paris, or California in the 60s, or whatever it was, just for the sheer fun of it. I think that for me image generation is very distinct, because it's the thing that I found most makes you feel like a wizard when you use it—this capacity that you never had before, that you could all of a sudden create things. So I just love using image generators. I spend a ton of time doing it just for fun.
And so when it comes to the creation process of any piece of content, you're always going to have imagery associated with it. You're always going to have thumbnails. And so, if you look back through my Midjourney, huge parts of it are me experimenting with different images that are going to turn into covers for episodes. So, you can see I'm trying to get at a computer reflected in someone's eye. This was for a Superintelligent YouTube cover—like, try this, and it didn't quite work. So I tried tweaking the stylization settings, blah, blah, blah, blah, blah. And if you just scroll back through, I would say probably 90 percent of my Midjourney is me experimenting with covers and thumbnails for episodes.

Dan Shipper (00:58:24)

And what have you learned about what prompts work and how to get a good cover or thumbnail from Midjourney?

Nathaniel Whittemore (00:58:32)

Couple different things. One big, huge thing that I don't think is talked about enough is that the more you are able—or in a position—to let AI wander, the better the result is going to be at this stage. It's still hard to get something as super precise as your imagination can make it, right? So the more you know exactly what you want, the harder it is to get that thing. The more your mind's eye has it figured out, the harder it is. Whereas if you're a little bit more open to a vibe that you're trying to capture, that's what I find Midjourney is really, really good at exploring. And so you start to experiment with words that connect you to vibes. Describing the style—illustration is different than line drawing, is different than cartoon, is different than a specific type of cartoon, like Pixar style or something like that. So using those words that get you into the style is going to be super valuable, especially, again, if you like to be broad. If you look at a lot of my experiments, they're not super long prompts. Now, this is particularly based on the use case here, a thumbnail for a YouTube video, where I have a ton of openness and flexibility for what I'm doing. The brand guidelines for me are much more about the way that elements are brought together on a YouTube thumbnail than exactly what the art looks like behind the thumbnail, so that's a particular thing. If you have a more precise type of brand approach, dialing in and figuring out what's going to work, or what things keep coming back as valuable, is super important.

I guess I can give you an example of something where I did try to dial in a particular style, a different use case. So the only game that I've ever loved is Magic: The Gathering. And my favorite way to play it is a format called Cube, where you basically bring together cards from all the different sets from all time, and you kind of construct your own set that you then go draft with friends. And at some point, a couple of years ago, I started actually designing my own cards that were missing. A lot of it was based on a project I started to create cubes for each season of the year. I really like seasonality, and fall is my favorite season, and so I wanted something that was like an early-Americana-themed set. Magic has a lot of cards that are horror-inspired—gothic horror and things like that—but they don't have a lot of things that are reminiscent of the Pilgrim times and the Salem witch trials. And that's the vibe that I was really trying to capture for these custom cards. There's a million custom card makers on the internet for Magic cards—it's a hugely popular game, and tons of people like making their own custom cards—but you still have to insert art for all of these things. And so, to give us whiplash and jump to some of these things, I wanted to dial in a style for this. And the most important thing when I was really trying to dial in a style is reference points that you can come back to. So I experimented a lot with certain phrases.

Let me see if I can just find— So I experimented with a bunch of different reference points for this. Some of them were stylistic: I tried 1700s paintings—a time—and a style, like oil painting. I tried landscape painting, Hudson River School, basically trying to figure out something that worked. One of the things that works really well is referencing a particular artist—you can really ground a style. So Winslow Homer was one that I used to create a consistent style that looked similar across a lot of this different art. And again, I was trying to create images that worked together so that the feel was all similar. We've got this bucolic image of someone planting, or in harvest time, but I also wanted that to work with some scary image of this weird guy. And I think referencing a particular style, a particular artist, is one of the ways that you can get more consistency—but consistency in image generation, we could have a whole separate conversation on.

Dan Shipper (01:03:20)

Totally. I think that's really interesting. And I just liked that point you made earlier about one of the reasons why you like image generation so much being that it feels like a magic power that you just didn't have before. We think a lot about AI as this thing that speeds up things that you already do, but that ties back to your earlier point: you love this because you couldn't do this before. Same thing for writing, where it's most useful for writers who are not professional writers. And it's so beautiful that there's this flowering of human creativity going on, where now we can make art and we can code and we can do all this stuff, and all we have to do is prompt it, instead of spending years and years trying to get good at it. That's not to say that those skills are not valuable anymore, but it is to say that millions of people don't have to start at square zero anymore. They can start on first base a little bit and get a taste of what it's like to be good at this before they actually have any skill, and I think that's so cool.

Nathaniel Whittemore (01:04:21)

I'm with you. I completely understand why in so many places there's so much backlash to this stuff. But I am ridiculously optimistic about it and I'm optimistic for a couple of different reasons. One is that I think net-net more people being able to create more stuff, having the tools of creation, is just better, right? It just is. Gatekeeping access to creation does not make the people who can create it better. There's always going to be a spectrum of what people can do. That's one thing. 

Second, when it comes to art, I very much understand why people are— We have a whole set of societal conversations to have around the ethics of training and what that looks like. And it's going to be both a societal conversation and a legal conversation. And frankly, different legal systems are going to come to different answers about this. But I think that what I would say to an artist, if they were worried, is that ultimately, it's still going to be human artists who create the styles and approaches that people riff off of and templatize. I don't believe that the relevance of human creatives goes away. I think that they become benchmarks and reference points, where people get to go do fan fiction for their favorite artists, basically. I think that it's going to, if anything, increase that. And the last thing is that I just think in general—to the point that you were just making—it would be absolutely insane to me if the way that all this played out is that— It's pretty binary. Either we can do and create the same amount of stuff we're creating and doing now, just with less time and less money spent on it. That's one possible outcome. Or we can fill in all of that time and resources just creating more stuff. And it seems very obvious to me, if you look at the entire pattern of human existence, that we're just going to create more stuff.

If code takes one tenth of the time, we're not going to have one tenth of the coders. We're going to code 10 times as much stuff. That's just the way markets work. That's the way capitalism works. It's just the way humans work. We make more; our appetites for more stuff are voracious. That doesn't mean all of it will be good. There'll be more good stuff proportionally, but there's going to be more of everything, and that'll create its own challenges. It's hard from where we're sitting now to fully project out into those futures, but I think it's going to be very cool. It's going to be interesting. It's going to be dynamic. It's going to be exploratory. And I think we'll find a dynamic, interesting, more creative world on the other side of it.

Dan Shipper (01:07:06)

I'm 100 percent with you. I have exactly the same viewpoint. We will just make more and want more stuff, and overall, arming people with more creative powers is going to be better. Obviously there are lots of societal conversations to have. I think we need new ethics for this kind of thing, about what's acceptable and what's not, and there are certain people we need to take care of—all that kind of stuff. But generally I think it's going to be really good and really fun. And I am definitely feeling that energy from this conversation. It's been such a pleasure to get to talk to you and get to know you. If people are looking to learn more from you, hear your podcast, or check out your company, where can they find you?

Nathaniel Whittemore (01:07:51)

Superintelligent is at besuper.ai, and that's basically the handle everywhere. So it's besuper_AI on Twitter, besuperai on Instagram and on YouTube. And the name of the podcast is The AI Daily Brief. It's @AIDailyBrief basically everywhere—YouTube and Twitter. And then I am @nlw everywhere. So any of those places is a good place to find me.

Dan Shipper (01:08:19)

Amazing. Thank you so much.

Nathaniel Whittemore (01:08:21)

Yeah, Dan, it's been awesome to be here. Keep building. Keep doing this podcast and I'll be excited to come back in a year and see all the stuff that you've built.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.
