The transcript of AI & I with Evan Armstrong is below.
Timestamps
- Introduction: 00:01:04
- How to develop good taste: 00:04:28
- Dan uses Claude to articulate his taste in books: 00:13:34
- How to use LLMs to explore art across different mediums: 00:21:06
- The way Evan chooses his next essay topic: 00:33:45
- Go from research notes to clear thesis in Claude Projects: 00:38:20
- How Evan uses AI to master new topics quickly: 00:46:51
- Evan leverages AI to power through writer’s block: 00:59:21
- How to use Claude to find good metaphors: 01:04:28
- The role of AI in building an audience: 01:11:44
Transcript
Dan Shipper (00:01:41)
Evan, welcome to the show.
Evan Armstrong (00:01:43)
Thanks for having me. Long time coming.
Dan Shipper (00:01:45)
Yeah, I'm excited to have you. For people who don't know, you are the lead writer for Every. So, you write twice a week for Every. We've been working together for about three years—two years full-time. And of all of the writers that we've ever worked with, you're the one that survived the gauntlet and actually became a real writer. And I'm just psyched to have you on—both because I love hanging out with you and getting to work with you every day, and because I think you just have that sort of rare combination of: you're really smart, you know a ton about business, you're funny as hell, and you actually want to be a writer. And it's just been such a pleasure to watch you go from being, I guess, a professional marketer, SaaS investor—a bunch of different things that you were doing—to now being a pro writer. So, welcome.
Evan Armstrong (00:02:39)
Well, thank you. I'm really excited to do this mostly because I'm going to chop up what you just said and, for my performance review at the end of the year, just play you back to you. That was the only reason. This is just to make sure that I hit my bonus for the year. That's the only reason I'm here today.
Dan Shipper (00:02:53)
I'm costing myself money.
Evan Armstrong (00:02:55)
Yeah, exactly. This is a very expensive podcast.
Dan Shipper (00:02:58)
Great. So, I think what we wanted to do is do a little bit of a non-traditional episode. That's a little bit less like me just interviewing you and more like us together talking about writing because we're both huge writing nerds, we love to write, we're both professional writers. And we're talking about writing, particularly in the context of AI. In each part of the writing process, how are we each using it? And to what effect? Where is it good? Where is it not good? All that kind of stuff. And you're actually teaching a course on this called, “How to Write With AI.” So, it's a good time for us to kind of explore together. And my hope is we both just come out of this, I mean, nerding out about writing, which is probably both of our favorite things to do. But also learning stuff from each other that we can apply to our process.
Evan Armstrong (00:03:46)
Yeah. I think when people picture Every, they envision it as more editorially cohesive than it is. I would say it's more contentious than people would think. We're debating this stuff a lot. We're arguing over sentence structure. And I'm going to guess that today we're going to find out we use AI very, very differently in our writing process. Let's hope. I think that'd make a more interesting episode.
Dan Shipper (00:04:10)
It would. It would make it more interesting.
Evan Armstrong (00:04:12)
I think we're going to find we use it pretty differently.
Dan Shipper (00:04:15)
Okay, cool. Well, basically we've split up the segments—or the things we're going to talk about—into four sorts of distinct areas. One is taste: So, how do you know what good is? Two is the topic: How do you pick what you're going to write about? What do you write about? All that kind of stuff. Three is craft: So, that's actually, how do you do the writing? It's everything from outlining to writing to editing. And then four is the audience: How do you reach people?
And we did it that way because that's how your course is structured, and that sort of allows us to get the whole breadth of all the things you might use AI for in writing as a creative, intellectual pursuit. So why don't we start with taste? Can you tee it up? Talk about why taste, why is it important? And, yeah, we'll start from there.
Evan Armstrong (00:05:05)
Yeah, I think taste is a buzzy word, and it's also a word that means nothing and everything. My line about this is that taste is not whatever people in Brooklyn are doing.
Dan Shipper (00:05:22)
I'm in Brooklyn.
Evan Armstrong (00:05:23)
I don't mean that as an attack against Dan, but more against the Brooklyn-industrial complex. They made our coffee too expensive and it's all their fault. Anyway, no, I think taste is the ability to articulate why something is good. You may say that you loved Dune: Part Two or you loved whatever latest article either Dan or I wrote, but being able to accurately describe why you liked a thing beyond "awesome, loved it"—that's actually really hard. It's a discernible skill. And I couldn't really do it for myself until I started using AI more. And so I think it's an exercise everyone who wants to write something good has to do: You have to be able to say what good is.
Dan Shipper (00:06:06)
And what does that mean to you? What is good for you? How do you articulate it for yourself?
Evan Armstrong (00:06:12)
This is complicated. So, I think when we're talking about taste, you have to ask, taste in what context? In the capitalist context of taste, you're like, is my taste aligning with the problems that my product is solving? So Lenny—who's a friend of ours, he writes Lenny's Newsletter, does a great job—he has these four jobs to be done, and you gotta help me here if I'm missing some of them, but it's like, make me smarter, make me money, entertain me—he describes these jobs. And then there will be taste that goes next to those jobs. I have found that I am very bad as a writer at, I gotta make a product. I'm just not good at it. I get bored, the writing is crappy. So, instead, what I have found is that good taste is basically things that I enjoy. So, I only write things that I would have a good time reading. Anytime I deviate from that, the audience doesn't like it, I don't like it, no one enjoys it. So, good taste is something that makes me smile while reading.
Dan Shipper (00:07:12)
And what do you smile about?
Evan Armstrong (00:07:15)
I think the peak of writing is: The more boring the topic and the more entertaining the piece, the more skill that's on display. So, you and I, Dan, we write about, on one level, very boring things. You mostly do, like, tokens. I'm talking about the next token, right? And I spend a lot of time on accounting, here's how finance works—no one cares. No one likes that. No one enjoys that. It's not fun at all. But being able to crack jokes, to make it accessible and have an energy to it—it's really hard. It's really, really hard to be accurate and enjoyable. And so for me, that's what I shoot for in my own taste. I'm like, is it something boring that I know I should know, but I don't? And do I have a good time while reading about it? A very high bar, but that's what I typically go for. Entertaining reading about boring topics is how I think about it. I'm curious, Dan, for you, how do you articulate your taste? Where do you find your taste being fulfilled?
Dan Shipper (00:08:12)
That's a really good question. And I will say also, for me, this was one of my big unlock moments for AI, when I wrote this piece called “What I Do When I Can’t Sleep,” which is about using AI to discover my taste. And it sort of came at this particular time in Every history where we were kind of going through a little bit of an identity crisis—what are we going to be, what are we going to do? And I think I had to go back and think to myself: What do I want to do? Who am I? And both ChatGPT and Claude were incredibly good for identifying that. And the things that came out of it, for me, came from going through the exercises, using those tools to think about who I am and what I like. I really like writing that is intellectually stimulating, analytical, philosophical, but I really also like it when it's emotionally resonant, when it's psychological or it pulls on your emotions in a certain way. I also like writing that's very, very poetic and lyrical. So, an Annie Dillard-type person. The running joke at Every is that I relate everything back to Annie Dillard.
Evan Armstrong (00:09:31)
I'm laughing because literally, I think for most of your pieces I have to edit, I have to hold you back. You do not need to mention Annie Dillard here. This has nothing to do with her. You need to cut this section.
Dan Shipper (00:09:40)
I'm a broken record on Annie Dillard, but the Annie Dillard will continue until morale improves. So, I like that. I really like writing that is really accessible, even if it's dealing with a hard topic. So, Robert Sapolsky is a really good example of someone who I think does that incredibly well.
I also really, really love writing that is just practical. It's like, you can actually apply this, or it relates to you, even if it's somewhat esoteric. And I think generally the things that I'm drawn to are very interdisciplinary looks at the human experience, the relationship between humans and technology, the relationships between technology and creativity and psychology, with philosophy and business all bundled into that. Those are the kinds of things that really, really get me going. And it's kind of interesting because it's one of those things that, if I look back on my life, I can totally see that as a pattern in stuff I've loved for a long time, but I was never able to say it until Claude and ChatGPT told me like, hey, this is what's going on for you. And like you said, saying it, articulating it, is so powerful, because once it's articulated, it becomes something that you can aim at, you can aim yourself at, you can aim other people at, and you can start to refine how you write and what you're doing based on that. And I think that's sort of this key underlying component to getting better.
Evan Armstrong (00:11:25)
I think it's interesting because for both of our answers there, if I had a stopwatch, I bet we both went on for two minutes: This is what I like, and it's kind of just an amorphous blob of things and emotions. And so when I think about taste, I think particularly about you, because I was thinking about when you wrote that piece. That was June 23, 2023. After you wrote that, it wasn't that you could suddenly be like, I like Annie Dillard because she's lyrical, and so my writing is gonna be more lyrical. Before that, your writing was lyrical. It wasn't like there was some huge shift in your writing, necessarily. It improved, of course, but the thing that I noticed is after you published that piece, you hit a new emotional plane where you're like, ah, I am comfortable writing the way I want to write. Do you think that's a fair characterization of that change after you published that piece?
Dan Shipper (00:12:22)
I love that. I think that's so true. Yeah, I think part of it sort of relates to that piece I wrote, I don't know, maybe three or four months ago called “Admitting What Is Obvious,” which is admitting that I wanted to write as a core thing. And I think you're totally right. Articulating it gives you something to aim at and also allows you to incorporate it as part of your identity, which requires admitting who you are, which is actually very scary to do, because it feels like it's cutting off different other avenues that you can take. I like things that are not any of the things I mentioned. I like dumb, funny movies, or whatever, but that's not in my taste, and so that's scary to do. And it's also scary to feel like maybe you'll be ridiculed, like people won't like you if you say that you like this thing that no one else really likes—I don't know anyone that likes Annie Dillard except for me. They're out there, but I don't associate with them.
Evan Armstrong (00:13:25)
The Dillard heads unite!
Dan Shipper (00:13:28)
And once you say it and you realize how basically no one cares and the people who do care are like, oh, that's kind of cool. It’s fun to watch someone just like something in public, then you're much more comfortable just owning it and being like, this is what I want to do. And I think you're totally right. It's such an important part of doing any kind of good creative work.
Evan Armstrong (00:13:51)
I'm curious, maybe we should talk about what you actually did in the article, for those who haven't—for the Every heads, beyond our deep-cut fans. What was the AI exercise that you did to develop your taste? I'm curious. Can we do it live? This is the AI & I show.
Dan Shipper (00:14:10)
We'll do it live. So, basically the way this worked—and I did this a long time ago, so we'll see how well it does today. I'm sure it'll be good actually, but it'll be interesting to see how it updates. Basically I just had this note in my Notion doc. I was just thinking for a little while: Who do I actually just like as a writer? So, I started adding names, and this was not just a one-time thing. It's a continual process, because there's all these different contexts in which you're like, oh, I really liked that person, even though you'd forgotten. So for a while I was kind of just updating this, and I have Robert Sapolsky, Robert Pirsig. Sapolsky obviously does the really accessible deep science stuff. Pirsig is really accessible philosophy blended with fiction. Ursula K. Le Guin is really interesting psychological fiction. We've got Mary Oliver, that resonant prose. Bill Simmons, who's a very bloggy, funny, clear, simple kind of writer. So I just had all these people, right? And I have words for these people that— I can say all the words right now, but I could never say that before this exercise. So what I did was I just copied this into Claude and I was like, “Hey, here is a list of writers I like. Can you tell me the vibes of these writers in detail?” And then I just pasted my list and I went for it. And I kind of like asking for vibes from a language model—language models tend to do well with vibes—and it sort of gives me this really big list of things. “Robert Sapolsky: scientific, engaging, accessible”—see, that's a word that I used. “Pirsig: philosophical, introspective, Le Guin: imaginative, thought-provoking, feminist, Mary Oliver: nature-focused, contemplative, William James: philosophical, psychological, pragmatic.” And it sort of goes on. And one of the really interesting things here is you can even start to pick out words that resonate with you. I don't know if you see any words here where you're like, ah, that is actually something I hope to be.
Evan Armstrong (00:16:31)
I mean I have the highest of standards for myself. I should be all of these all at once. I don't know if you ever get that where I'm like, I should just be the best of every best writer and do it all in my 1,500-word blog post. I'm curious, one thing we should do is— You actually did not do this in Claude last time, you did this in ChatGPT.
Dan Shipper (00:16:58)
I did it in both.
Evan Armstrong (00:16:59)
Oh, you did it in both?
Dan Shipper (00:17:00)
It was both Claude and ChatGPT.
Evan Armstrong (00:17:01)
So listeners, you should know that Dan is a liar because in his post that I was reading this morning, he says ChatGPT.
Dan Shipper (00:17:09)
No, it's, it says, it says Claude and ChatGPT. You didn't read the post carefully enough.
Evan Armstrong (00:17:12)
Oh, oops. No. Okay. Here's the thing. Okay. No, this is—
Dan Shipper (00:17:20)
Don't come at me unless you've got—
Evan Armstrong (00:17:22)
No, no, no, no, no. Now you're changing the truth. You used Claude to do the notes part of the exercise. You didn't use Claude for the writer part of the exercise. Or maybe you did both and you just didn't include those details.
Dan Shipper (00:17:35)
I did both, but I just used one from ChatGPT and one from Claude.
Evan Armstrong (00:17:39)
The key to internet success is nitpicking needless details and starting a beef over it. So, this is what I'm trying to do here today.
Dan Shipper (00:17:44)
This is where it starts. This is how every beef of 2024 starts.
Evan Armstrong (00:17:56)
Yeah, this is how it begins.
Dan Shipper (00:17:50)
Okay. So, let’s keep going with this. Basically, we've got this big list and you can start to pick out things that you think are interesting, but, to me, it's overwhelming. It's like, wow, how am I going to try to be all those things? It's impossible. So, one of the things you can do, which is really cool— Well, first, before I even get to the things you can do, I would press retry a few times just to see how it does it in different ways. You'll kind of explore the latent space and possibilities here.
And it may come up with different ways of describing things that might be better or worse. I don't think that one was particularly better. Yeah. I think this is slightly better because it's not necessarily citing their most important work, which I think is kind of irrelevant to this exercise. So, I'm going to go with this one and I'll say something like, “Can you synthesize the vibes down into something more compact? I want a summary that can help me express my taste. Do it in five sentences.” And we'll see what it does. What's really interesting is that when ChatGPT does summaries like this, where you ask it to tell you the vibes, it gives you a big long list, and at the end it usually has a summary paragraph that just tells you what it just said. And that paragraph is usually really good. Claude doesn't do that. So, it might be interesting to try this in ChatGPT after this, but ChatGPT—
Okay, it says, “Your literary taste gravitates towards thinkers who blend scientific rigor with philosophical depth, often exploring the intricacies of human nature and consciousness. You appreciate writers who can make complex ideas accessible, whether delving into neuroscience, psychology, or the cosmos. There's a strong current of introspection and mindfulness in your preferences, balanced by a dash of humor and pop culture savvy.” So, aside from the fact that it's like really complimenting me and gassing me up, this is actually really good.
And, for me, right now, because I've seen this before, it's not a mind-blow moment. But when I first did it, it was like looking in the mirror for the first time, and I was like, holy shit, this is what I look like, and I like it. So that's the basic gist of the exercise: Find people you like, throw the list into ChatGPT, and have it synthesize something.
Evan Armstrong (00:20:17)
So you have this list and you've sat on it for a little over a year now. When you're editing a piece or writing a piece, do you ever find yourself mentally going through this checklist of attributes, or is it like— I'm curious how much this actually comes into play during your day-to-day process.
Dan Shipper (00:20:35)
It's not a checklist of attributes, but when someone gives me a piece and they're like, what do you think of this? Or, should we publish this in Every? I am kind of explicitly being like, if I don't like it, I get a vibe that I don't like it. And then I can be like, well, it's just not accessible enough, or it doesn't have that sense of curious optimism, or it doesn't have the depth or the thoughtfulness or whatever. So in that sense, I totally do. And then another way that this works is when I'm editing myself, I do have some of those words in mind, and I'm often either thinking about those words or I'm going back and being like, I feel dry. I don't have the vibe anymore, and I know that Robert Sapolsky's got the vibe I'm going for, and I'm just gonna reread him. So, I'm curious for you, I know that you've done these exercises too. How are you finding your taste? How are you thinking about it, and how are you identifying it with AI?
Evan Armstrong (00:21:39)
So I think I did it because I read this article. Or, you wrote this article last year and I was like, ooh, I should do that. So all credit to you for this exercise for myself. I didn't come up with this idea. I just copied you—the place that I took it differently is, I think I'm more multimedia than you are when it comes to tastemaking. I love cinema. I love exploring different forms of artwork and I find that informs what I write just as much as anything else, and so a lot of times, I will find the taste notes of what I'm looking for by talking— Well, talking first with Morgan, my wife, who's a humanities Ph.D. And so she has a much better articulation of all these things than either of us.
But if she's busy, I'll talk to ChatGPT about it and get a better sense of: If I liked a couple of movies, why did I like these movies? Or, if I liked a couple of posts, what artwork is related to these posts? Because I find that it's able to draw in things that I haven't heard of, or haven't been interested in, and it makes me more well-rounded. I worry when writers are like, I only read, because I think you can get in a little bit of a rut. Personally, I get into a little bit of a rut, and so I think it's important to be multimedia with your tastemaking. And so ChatGPT helps me do that.
Dan Shipper (00:23:15)
I'd be curious how you do that.
Evan Armstrong (00:23:18)
So read out to me five of the writers that you love right now, and make sure they're ones I can spell.
Dan Shipper (00:23:28)
Okay, cool. I'll give you a little bit of a list. And actually, before we do that, can you just introduce the exercise that you're going to do? Tell us what you're going to show me.
Evan Armstrong (00:23:38)
Basically my goal is— Those taste elements that you'd pulled out in your exercise where it was philosophical or lyrical. I think it's really interesting when you say, I like lyrical prose. How does that apply in other mediums? Because not necessarily— Lyrical is obvious in, say, a poem or in a song, but are there lyrical paintings? Is that a thing? And so, my big thesis when it comes to taste is that it's a blob of emotional permission to like what you want to like and it's not constricted to certain types of medium. So, it's not just writing. Being a great writer and having great taste as a writer does not mean you only read the best books.
It means you partake in the best movies, the best music, whatever it may be. So, what I want to do is take the list of writers that you have and then try to convince you to watch a movie, because I've known you for years. I've given you like 20 movie recommendations. I think you've watched zero. So today we're going to fix that. We're going to fix it, Dan. If ChatGPT tells you, you'll do it. If I tell you, you ignore it. So, we're going to have ChatGPT do it for me.
Dan Shipper (00:24:53)
I love that. I also would like to do that with you too, just to see what we get. Because I really want to explore your taste too. So, let's start with what you want to do. So, yeah, let's pull up ChatGPT and we can see what it says.
Evan Armstrong (00:25:08)
Okay. So, Dan, who should I put in? Just give me five.
Dan Shipper (00:25:12)
Okay. So, I mean, we've already talked about Annie Dillard. I have to have her on any list. I would say Robert Pirsig. A more recent one is this guy H.D.F. Kitto. He's a classicist. He writes about Greece. He's amazing.
Evan Armstrong (00:25:28)
Is it kiddo like—?
Dan Shipper (00:25:30)
K-I-T-T-O.
Evan Armstrong (00:25:32)
K-I-T-E-O?
Dan Shipper (00:25:32)
K-I-T-T-O.
Evan Armstrong (00:25:34)
I like kiddo. K-I-D-D-O, but that's not right. Okay.
Dan Shipper (00:25:39)
That's his nickname. Kiddo. Also Iain McGilchrist. I'm sorry. His name is I-A-I-N. He's got an I in the middle.
Evan Armstrong (00:25:49)
Oh man.
Dan Shipper (00:25:50)
And then M-C-G-I-L-C-H-R-I-S-T.
Evan Armstrong (00:25:55)
So, I will say, “Here is a list of my friend's favorite authors. Please pull out the vibes of each of these authors and then recommend five movies that have similar vibes. Be specific in why they are similar.” So the response from ChatGPT is kind of funny because it put vibes in quotation marks—it's not a real word, which it may be right about. Oh, so, it listed out the vibes for each of them—it kind of gave three for each of the five authors—and it gave five movie recommendations. The five movie recommendations were The Tree of Life, My Dinner with Andre, The Seventh Seal, 2001: A Space Odyssey, and Wings of Desire. Have you seen or heard of any of these?
Dan Shipper (00:26:46)
So I've seen The Tree of Life and I hated it. I did not understand it at all.
Evan Armstrong (00:26:50)
Good. We're doing good.
Dan Shipper (00:26:55)
But I know why it's recommending it, and I have been recommended that before by other people. And it could be one of those things where it just wasn't contextualized for me appropriately. The description of it sounds amazing. I wanted to watch it, but the actual reality of it, I was like, this sucks. My Dinner with Andre is another one that I've heard of but never watched, and I know that I probably would like it. So, that's a good recommendation. The Seventh Seal, Ingmar Bergman. I've definitely seen Ingmar Bergman. I think I may have watched that one, but it was a long time ago. Basically, I went through a phase where I really liked Woody Allen movies, which, I really, obviously don't want to say—
Evan Armstrong (00:27:41)
The art is different from the artist. You don't have to like— If you're going to reject a film based on moral actions, you're just going to have to reject the genre.
Dan Shipper (00:27:48)
Okay. Well, I will say, when I was in college, I really liked Annie Hall and Manhattan and Ingmar Bergman's one of his big influences. And so I started watching Ingmar Bergman films. I can't say that I immediately loved him and watched him all the time, but I respect him as an artist and I like his work.
2001: A Space Odyssey. I must have seen that. I don't specifically remember, but—
Evan Armstrong (00:28:12)
If you've seen 2001: A Space Odyssey, you'd remember.
Dan Shipper (00:28:16)
I remember the Dave thing or whatever, but I can't tell you where I was in my life when I watched it. And then I've never seen Wings of Desire. Let me see. It says: “Wings of Desire: This film echoes the mystical and existential quality of both Dillard and McGilchrist's work. It tells the story of angels observing human life, capturing a contemplative spiritual mood that aligns with their philosophical inquiries.” That's interesting. So I would say generally it's on point. It's sort of steering toward more introspective philosophical movies, which I'm down for, but it's probably missing some things about movies that I love. There are movies I love that are not part of this. I don't know where you typically go from here, but it could be—
Evan Armstrong (00:29:09)
Yeah, I think this is actually a really good illustration of the problem with this exercise. You are a professional writer. Your living comes from writing words. And so, of course you're going to have a really fine-tuned, in-depth taste in writing. While, in comparison, movies— I know you just don't watch that many. It's not where you've really dived in deep. And so it's giving you the deep cuts. I don't know, 2001 is not a deep cut per se, but it is very different from most modern films. And you're going to have this problem. My experience with LLMs is they'll either go way too deep or they'll go way too surface level. So, you might say, I like paintings, who should I—? And it's like, have you ever heard of Van Gogh? So with these five movies, I would say, “My friend has only a passing knowledge of cinema. Can you recommend anything after 1985 that is slightly more accessible?” Now I'm curious. The five movies that it has are The Truman Show, The Secret Life of Walter Mitty, Dead Poets Society, A Beautiful Mind, and Into the Wild. Have you seen any of these?
Dan Shipper (00:30:58)
Definitely seen The Truman Show. Great movie. I've not seen The Secret Life of Walter Mitty. That sounds interesting, I've definitely seen Dead Poets Society. I've seen A Beautiful Mind. It was actually filmed in my hometown. And I've read Into the Wild, but I don't think I saw the movie.
Evan Armstrong (00:31:15)
Funny, because I was just in the town where The Truman Show was shot three weeks ago. That's where we had our babymoon. Small world. You would love The Secret Life of Walter Mitty. It's so good. It's very accessible. It's about the collapse of Time magazine. Ben Stiller's character is the backroom photo specialist, and he goes on this journey of self-discovery to Iceland and goes outside. So, I really think you would love this movie. I really think you would dig The Secret Life of Walter Mitty.
Dan Shipper (00:31:47)
Cool. I want to watch it.
Evan Armstrong (00:31:22)
And so I think with this exercise, the next step is the hard step, the one that requires the most emotional activation energy: Now you actually have to go consume this stuff. You're like, okay, now I actually have to go do it. Cinema is a hard version of this, but art is a really easy one. So sometimes what I'll do is, I'm reading something and I'm like, I really like this, and I'll say, give me paintings like this, and it'll help me out as well.
Dan Shipper (00:32:23)
Well, I think one of the reasons I really love this as an extension of the original exercise is that the original exercise I did is sort of a backwards-looking thing, in that I had already found a lot of stuff and I just needed to name it. But there's a whole part of taste which is exploring new things you haven't heard of that are the things you'd like. Because things always exist in context, and they exist as part of a chain of other works. And so if you like one thing, there's usually a whole related chain. Sometimes it's in different mediums, and sometimes it's still a book or it's still a movie, but it's the five other directors that influenced this one director. And so, I think, part of developing taste is traveling those roads of who influences whom and who thinks about what, and I think ChatGPT and Claude are also excellent for that. And I think that's what this exercise is starting to do—you can explore those chains of influences.
Evan Armstrong (00:33:26)
And also I like it because you can bounce so easily from surface to surface. In my own writing, more and more I've been feeling like I've been doing philosophy, skipping along from Foucault to Plato, whatever. And ChatGPT is excellent at helping me get the surface-level stuff and get enough understanding, or at least know where to dig deeper. And I like the idea that you brought up, that there's more chains to discover. I'm just so passionate about content and about consuming good things. The world is just full of good stuff, and I just want people to know how to get into it. And so that's why I love these tools and discovering your taste this way. There's so many good movies. There's so many good books, and your entire life should just be filled with greatness. Every day you should consume something that blows your mind. There's enough out there that you'll never run out. And that's just such a gift. It's overwhelming to find, but this makes it a lot easier.
Dan Shipper (00:34:28)
Totally. I love that. You're talking about good vibes. I don't know exactly how that fits into my vibes list, but I'm picking up the energy and I love that energy. So, I feel like that puts a really nice bow on the taste aspect of what we're talking about here. It'd be great to move on to topic: I would love to hear how you think about topics, and then how you're using AI to help you pick topics and find topics to write about.
Evan Armstrong (00:35:00)
Yeah. I actually think this is maybe the least AI-y, AI-ish, AI-adjacent— What's the right—? I don't know, you're the AI & I guy.
Dan Shipper (00:35:11)
AI-ified.
Evan Armstrong (00:35:16)
—AI-ified part of my process. So, I publish twice a week, every Tuesday and Thursday, and then I write a decent chunk of our Sunday digest, which is now called Context Window.
And so I don't have the luxury of a ton of time to think through all of my ideas. I had to find a way to really get into the zone and pick the right thing. And I found— I apologize, this is so Austin tech bro of me: every morning after I do my lift, I go and sit in the sauna for 15 or 20 minutes. And if it's a Monday, Wednesday, or Friday, I think about what ideas are good and what I'm going to write about. Isn't that a dumb answer? I just sit in the sauna until something appears in my brain, and I'm like, hmm, that's a good idea. And that's literally it. It's a dumb answer, but it's mostly how I do it.
Dan Shipper (00:36:16)
Many incredible thinkers have gone before you, having ideas in the sauna, so I don't think it's dumb at all. If it works for you— I think the thing people miss about creative stuff is that you've got to find what works, and different things work for different people. And if sitting in the sauna until your brain melts out of your ears is the way for you to figure it out, power to you.
Evan Armstrong (00:36:40)
I think it's like The Matrix. I spend all day jacked into technology—through Twitter and emails and talking with people—and every day I'll have dozens of ideas about what to write about. For me as a writer, ideas are never the hard part. It's picking the idea, knowing which idea will be right. And I find that if I try to sit down and write them all out, I just get overwhelmed. But if I listen to my subconscious and just let it bubble up naturally, when I'm not thinking about anything, usually the first idea that comes into my head is the best one. So I just sit and wait—and it doesn't take long, because I'm a wuss and can only last 15 minutes or whatever. But I sit there, take a cold shower, come home, and write down the essay.
Dan Shipper (00:37:33)
That makes sense. As you might guess I have a little bit more of an AI-ified process for this.
Evan Armstrong (00:37:39)
Yeah, tell me about it.
Dan Shipper (00:37:40)
So, what I do a lot is— I feel like I think really well if I'm walking and talking. So what I'll often do is either get up and take a walk and record a voice memo, then transcribe the voice memo with Whisper, feed it into Claude or ChatGPT, and have it basically pull out, okay, what are the interesting things? Or I'll actually have a conversation with ChatGPT Advanced Voice Mode and say, okay, I'm just going to brain dump, and then I want you to reflect back to me what you hear, and then we'll go down a rabbit hole on a particular thing I'm interested in. It's really good at helping you find—out of this morass of things swirling around in my head—what's an interesting thing to start with, or what's a topic I want to dive deeper into. And whenever I find one of those, I just put it in my to-do list—I use Things—as a little entry with a little headline. Usually if I have a piece I want to write—right now I want to write a piece called "Generalists Own the Future"—I just know that's the headline, and I put it in Things. That headline is basically the handle for all the ideas, and when I sit down to write, I put it at the top and just kind of go for it.
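The walk-and-talk workflow Dan describes—transcribe a voice memo, then have an LLM pull out the interesting threads—could be sketched as a small pipeline. This is a hypothetical sketch: the function names are illustrative, and the transcription and LLM steps are passed in as callables, which in practice might wrap Whisper and the Claude or ChatGPT APIs.

```python
# Hypothetical glue for the voice-memo workflow: `transcribe` and
# `summarize` stand in for Whisper and an LLM call, respectively.

def extract_ideas(audio_path: str, transcribe, summarize) -> str:
    """Transcribe a voice memo, then ask an LLM to reflect back the
    interesting threads worth writing about."""
    transcript = transcribe(audio_path)
    prompt = (
        "Here is a transcribed brain dump. Reflect back what you hear and "
        "list the most interesting topics worth diving deeper into:\n\n"
        + transcript
    )
    return summarize(prompt)
```

Keeping the two steps pluggable means you can swap local Whisper for an API, or Claude for ChatGPT, without touching the glue.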
There are also other pieces—and I can actually show you a demo of this. I'm writing a piece right now that's not on my weekly cadence, because I publish once a week and this is a much longer piece that requires a lot of research. I'll show you. I'm writing this longer piece, and it's taken me a month or two. Hopefully I'll finish it this week—or at least a draft. And it's about— What is it even about? I can't even say what it's about. That's how long and difficult and complicated it is, but it's—
Evan Armstrong (00:40:06)
Oh, I'm grimacing because I have to edit this. So, I'm like, Oh no.
Dan Shipper (00:40:11)
It's basically about the underlying architecture of language models and how that relates to some ancient debates in philosophy about appearance vs. reality, which I can sort of get into—but more or less, it's some of the philosophical implications of language models. And I have this note in my notes that I've just been adding to for months at this point.
And it's highlights, it's ideas I wrote. I read this book, The Cave and the Light: Plato Versus Aristotle, and I'm taking a highlight—"figures as archetypes, not figures for profit, he's supposed to have said," which is something about Pythagoras. And I have all these quotes in here. One of the things that's really interesting about this kind of project is that I have an intuition about what I'm trying to say, and I'm running into things all around the world that reflect that intuition, but I'm having a hard time saying it.
So, it's a little bit like— Sorry, it's a little bit like the taste thing from earlier, where I have tasted it already, I just don't know how to say it. And being able to say it is really important to actually being able to do it. And I find that language models are really good for synthesizing a morass of things like this into something much more compact.
So if I go into Claude, I can probably find historical chats where I'm asking it to pull things out. Basically, what I was doing previously is, every day, going into Claude and having it write a thesis statement based on all the notes I was collecting, and then trying to rewrite it and make it better. Every day I was writing a new thesis statement until I refined something where it's like, this is the idea, this is what I'm going to go with. And I found it was really, really helpful for that. Let me see if I— How do you search for my— Oh, here we go.
So, I have this project in Claude called "Seeing Like a Language Model," and I basically take that note file and put it in as the knowledge for that project. So it has access to all of this—this huge, long note I've been collecting. And I'll start a chat and say: given all of my attempts and all of my notes, "write out what you think my thesis for this piece is."
And so, it'll say, "Based on your notes, it appears your thesis for 'Seeing Like a Language Model' is that language models offer us a new lens through which to view intelligence, knowledge, and our relationship with the world. This perspective challenges the 2,500-year-old church of reason that has dominated Western thought since Plato and Socrates by shifting our focus from essences to sequences, from definitions to usage, and from abstract rationality to contextual understanding. This paradigm shift not only resolves longstanding debates about AI's capabilities, but also promises to transform fields from science to creativity."
So I think that that's actually pretty good. It's something that I couldn't quite say, and it's distilling everything down into something that kind of gets me there—but it's not quite there, because there's all this stuff like, what does it mean to shift our focus from essences to sequences, from definitions to usage? That's just philosophical mumbo jumbo, basically. And so I'll say something like, make it better, and it will do it again. I think a really good Claude trick is just always asking it to make it better. And it just will be better. Sometimes you can also ask it, is this any good? What do you think of this? Please critique this. And then once it gives the critique, say, make it better based on the critique, so it gives another one—which I don't want to read out, I don't think it's that necessary. But I'm basically saying, make it better, make it better, make it better. And then I'm saying, make it better—we don't want to define intelligence, we want to dissolve the question. That's me saying, I want you to make it better in this particular way, and seeing what it does. And I'm honestly just going down the list like that.
Now I'm saying, okay, "Please reflect on this thesis and all my notes that led up to it. How do you think it could be improved? Remember, I want it to be a concise, accurate, interesting thesis statement, including a few bullet points or paragraphs of elaboration to unpack it. Just reflect, don't rewrite it."
So I want to split the task into two. Often, in a language model context, it helps to recognize that there are multiple tasks involved in what you're trying to do. And splitting them up—reflect, then rewrite—instead of doing both together helps it.
Evan Armstrong (00:44:50)
Just to clarify—"just reflect, don't rewrite yet" means: I don't want you to rewrite the thesis, I just want you to tell me how I can improve it. And then I'm going to assume, in your next prompt, you tell it to rewrite it? What's the reflection?
Dan Shipper (00:45:02)
Yeah, exactly. If you don't do that, sometimes it will just try to rewrite it. You want it to be very explicit about its thinking process—use its response only to think through what could be better, and then do the rewrite after. So it says, "The thesis effectively captures the paradigm shift from essentialist thinking to a more fluid, context-dependent worldview," blah, blah.
So it's giving me all the strengths. And then it says, “Areas for improvement: It could be more concise. It could have more clarity. It could have a personal angle. It could help talk more about practical implications and create more tension or conflict.” So, there's a lot of stuff I could do better.
And then I'm like, rewrite it. And it rewrites it. I think the interesting thing in all of this is that I'm not taking the thesis and just being like, okay, this is my thesis. It's not going to give me the thesis. But what it is doing is reflecting back to me patterns it sees in what I've been thinking about and distilling them down. And I'll do this literally every day until I have something I'm satisfied with.
Every time I ask, it's a little bit of a kaleidoscope—I get to look at all my notes from a different perspective, because it's sort of stochastic. And it helps me see, okay, there are these phrases it's using that are resonating, or phrases that aren't. Then I'll go and write it myself, until, over a period of days or weeks, I have a really concise version of the thing I'm trying to say. And once I have that, then I can start writing.
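The reflect-then-rewrite split Dan describes can be sketched as two model turns instead of one. This is a minimal sketch, not Dan's actual setup: `ask_model` stands in for a call to Claude within one ongoing conversation (for example, via the anthropic SDK), and the prompt wording is illustrative.

```python
# Two-step prompting pattern: ask only for critique first, so the whole
# response is spent thinking, then request the rewrite in a second turn.

def reflect_prompt(thesis: str) -> str:
    return (
        "Please reflect on this thesis and the notes that led up to it. "
        "How could it be improved? Remember, I want a concise, accurate, "
        "interesting thesis statement. Just reflect, don't rewrite it.\n\n"
        + thesis
    )

def refine(thesis: str, ask_model) -> str:
    """Reflect first, then rewrite against the critique."""
    critique = ask_model(reflect_prompt(thesis))
    return ask_model(
        "Now rewrite the thesis, addressing this critique:\n\n" + critique
    )
```

The point of the split is exactly what the transcript says: if you ask for both at once, the model tends to skip the critique and jump straight to a rewrite.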
Evan Armstrong (00:46:35)
Is the stochastic element that it comes back a little bit different every time, and that's why it's important?
Dan Shipper (00:46:43)
Yeah, exactly.
Evan Armstrong (00:46:45)
What's interesting to me about this process: one, the way you do it is interesting. But also, it's the exact opposite of how I think about my writing. I don't mean that as a judgment—it's neither good nor bad, it's just very, very different. Typically for me, there's no note-taking—it's very rare for me to take a note. It's all in my head, and I can just draw on it when the time comes. So the taste element helps me reach a level of emotional clarity and permission to do it. And then there'll be some initial spark—some news item, some headline, some discussion I have with somebody—that leads it off. And once I have that spark, I want to go deeper on some fundamental thing that the spark indicates. So, have you ever used Consensus before? Do you know what Consensus is?
Dan Shipper (00:47:39)
Like the blockchain thing?
Evan Armstrong (00:47:41)
No, Dan, the blockchain thing? This isn't Scam & I.
Dan Shipper (00:47:48)
That was a big blockchain company, wasn't it?
Evan Armstrong (00:47:51)
Oh, I shouldn't be mean about the blockchain—there are some use cases I believe in, but it's too easy to dunk on. Okay. Well, I'm about to blow your mind. Let me show you this. This is Consensus. I think they just raised $10 million or so in the last month or two. Think of it as an LLM mixed with Google Scholar.
Dan Shipper (00:48:12)
I have seen this. This is really cool. I love this.
Evan Armstrong (00:48:20)
Yeah, so it helps you answer fundamental questions. A lot of my work is about monopoly dynamics—power dynamics within industries. So, if I'm trying to get more scientific and rigorous in my thinking, I'll come to Consensus and start pulling out studies, just to kind of fertilize the intellectual soil, if that makes sense, as I'm thinking about the topic. There might be some spark— What was a news item that caught my interest lately? Oh, the open-source models from Meta. That's a really big deal, and I want to think about it more correctly. So— I haven't done this before, so this may fail spectacularly, but "What does—"
Dan Shipper (00:49:08)
We're doing it live, folks.
Evan Armstrong (00:49:10)
We're doing it live. I find Consensus is better with science questions, but we're going to try this out: "What does open-source do on monopoly dynamics and software?" Basically, there are these syntheses and these copilot questions, where it pulls out the various pieces and then gives the key insights. If you've ever read a bunch of papers, most of the writing is pointless and overly verbose, so I love Consensus because it helps me get right down to the middle bit. The summary will tell you—like, okay, duh—"These studies suggest open-source software changes monopoly dynamics by increasing competition, altering market structures, and influencing software quality, pricing, and innovation," which is, like, fine. It's not necessarily a great answer, but it's useful because it shows the impact of open-source on monopoly dynamics. It pulls out six papers, and I can start going a little more in depth. It's funny, but as a writer you eventually learn that if you read three papers, you've gotten to about 95 percent of the depth of most experts. You can get there quicker than you think, as long as you're reading the right papers. And the issue online, before LLMs, was that it was really hard to know what the right paper was, outside of citations, which is a really flawed metric for reasons we can get into if you're interested.
So I use Consensus. There'll be some spark for me—like, oh, that's a good topic, but I need to better understand the fundamentals. And it'll be a combination of reading Consensus, having questions from the papers, pulling them into ChatGPT, getting a term explained. And then eventually I'm like, okay, I have the idea. It's time for craft.
Dan Shipper (00:51:07)
Yeah. This Consensus thing makes total sense to me. And the place it makes the most sense is: as a writer, in order to write anything interesting, you have to understand the current context—what is the consensus on this topic already? That's why it's such a well-named product. Normally we know that for our own beat—I can tell you the current consensus on something AI-related pretty easily. But if I'm writing something that's not about that, a tool like this is so helpful for getting up to speed really quickly, because otherwise it's hours and hours, or weeks or months, to really get there. And that limits what you can even write. So this expands the number of things you can write about confidently, which I think is really cool.
Evan Armstrong (00:52:10)
Yeah. Eventually what I want is a product where I can drop in all of the books that I have read, all the books that I know are adjacent to what I have read, and all the papers, into one place, and build out my own Consensus data bank. We're not there yet, but we're pretty close—remarkably close. If someone could build that for me, I would introduce you to investors. We could get you funding. It's a really good idea.
Dan Shipper (00:52:38)
I mean, Claude Projects is sort of like this. I'll do that fairly often—I'll drop books or parts of books into Claude Projects and use that as a jumping-off point for distilling down an idea or a thesis. I think one of the problems currently is, if you drop an entire book into Projects, it basically works, but having that much context can confuse the model a little bit, because it doesn't know as well what's important. And I think what's really interesting is doing dynamic selection based on what I'm writing: what are the books, or the sections of books or papers, that would be useful here, and what wouldn't be? Deselecting is just as important as selecting, in some ways. So, I'm really curious to see how that evolves over time. Because I do think, yeah, that's the dream. That's the nerd dream: all my books are here, and when I want to understand something or I'm looking to distill something, it helps me get ideas out of them.
Evan Armstrong (00:53:41)
Do you think it's a question of context window or is it like, oh, you got to fine-tune your own model? Do you think it's like, oh, I have to use a technique like RAG?
Dan Shipper (00:53:50)
It’s just RAG basically.
Evan Armstrong (00:53:51)
You think it should be RAG? It shouldn't just be a 10 billion-word context window or whatever?
Dan Shipper (00:53:57)
I think context windows are great, and in general I would prefer to use a bigger context window over RAG because, yeah, the more context you have, the better. But if you're talking about multiple books, and some of the books are 400 pages, there's just a lot of extraneous stuff. And the more extraneous stuff you have in context, the more likely you are to get off track. The extraneous stuff becomes distracting. If there were an easier way to select the parts that should be in the context vs. not—basically giving the attention mechanism and the model an easier time of knowing what to attend to—I think that would be better.
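The selection Dan is gesturing at can be sketched as a toy retrieval step: score each chunk against what you're writing and keep only the top few, so extraneous text never reaches the context window. A real system would use embeddings or a vector store; plain word overlap stands in here as an assumption, just to show the shape of the idea.

```python
# Toy relevance selection: keep only the chunks most related to the query
# instead of stuffing whole books into the context window.

def overlap(chunk: str, query: str) -> int:
    """Count query words that appear in the chunk (a crude stand-in
    for a real embedding-similarity score)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def select_context(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: overlap(c, query), reverse=True)[:k]
```

The deselection Dan mentions falls out for free: everything below the top k simply never gets sent to the model.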
Evan Armstrong (00:54:45)
Well, if anyone has that product, you should email us. We'd buy it.
Dan Shipper (00:54:54)
Yeah, DMs open.
Evan Armstrong (00:54:55)
DMs open. That's right.
Dan Shipper (00:54:58)
Okay. So, I think that's the idea phase. I'd be really curious to go into craft with you. So, how are you using it to write and edit your pieces?
Evan Armstrong (00:55:09)
Yeah, I think the big thing when you're talking about AI and craft is that it's mostly a tools question, right? How are you allowing the LLM to interact with what you're creating? Is it within a chatbot? Is it a copilot within an existing application? Is it something entirely different? That's the big question. For the chatbots—and this is, I think, level one of when you're trying to get better—the most common thing I'll do is this: as a writer, I suck at conclusions. I suck. I say what I want to say, and then it's like, okay, I said it—now I need to find a way to land it. I always struggle. And so it's very simple, but I'll just copy and paste the draft, drop it into Claude, and say, help me finish this. Give me 10 different subheads and theses we could do. Almost always, all 10 of those are wrong, but by being able to articulate what is wrong, I'm able to say what is right. Is that similar to how you use the chatbots, or—?
Dan Shipper (00:56:35)
That is similar. So, there's a kind of summarizing aspect, or continuing on when I'm stuck—there are a lot of different places where I'll use it. Right now I'm writing this "Seeing Like a Language Model" draft, and I'm writing it in Lex. Lex is sort of like Google Docs with AI baked in. Every incubated it; it's now its own separate company run by Every co-founder Nathan Baschez. And I use it because having access to AI in context just removes a little bit of friction, and that's really nice. And there's a way to see which text is AI-written—let me show you. The text in blue is written by AI, which is really cool. I probably won't be using this actual text. What I'm trying to do with this draft—it's like 5,000 or 6,000, 7,000 words already—is just get it out. And there are a lot of times where I'm summarizing an idea that I already understand, but getting it into words is such a drag and I just don't want to do it. So, in this case I need to talk about Plato's theory of forms, and I just basically have Lex—which is Claude on the back end—write the theory of forms. Where I would have gotten stuck and been like, oh, I need to go read Wikipedia and the Stanford Encyclopedia of Philosophy and whatever, I just get those three or four sentences that are super generic, and they explain exactly the idea. And then I can move on. Later on in my process, I'll probably go back and make those sentences my own and give them the flavor I want them to have. But this is enough for me to keep going. And I think that's a really, really valuable thing.
Another cool thing I like to do— Lex has this feature where you can ask it to complete things in comments. So, I'll highlight something like this. These are some notes I have about a part of the article I'm writing—basically about why science has been stuck, in certain fields like psychology, for a long time. I want it to expand that part of the argument and give me examples of what I'm talking about. And rather than having to go into Claude and be like, here's what I'm writing about, here's all the context, blah, blah, blah, I can just make a comment and say to Lex, can you turn this into something interesting?
And it will give me a bunch of thoughts on examples I can use, or ways I could write a particular paragraph or section. And that's just a really, really easy way to get the specifics in my head, or get the examples I need, so that I can keep going. It's really, really good for that. Those are my Lex things. I have more stuff to show you in Claude, but I'll pause there—I'm curious what this brings up for you.
Evan Armstrong (00:59:56)
Yeah, it's interesting, because I also use Lex, and this is very different from how I use it. So it's kind of fun to see the tooling here. I've found that, over time, I'm trying to shift more and more of my AI labor out of the Claude chatbot and the ChatGPT chatbot, because it's distracting and, I don't know, it's clunky, and I want to work in-line. So I can show you. Do you do any of the custom prompting?
Dan Shipper (01:00:26)
I don't.
Evan Armstrong (01:00:28)
Okay, you want to show— I built one yesterday.
Dan Shipper (01:00:29)
Yeah. I want to see that.
Evan Armstrong (01:00:30)
Okay. So, like I mentioned, one of the things I really struggle with is conclusions. And I run into it so often—wanting to write a conclusion and not really knowing how—that I don't want to keep doing it over and over again: copying, pasting, re-prompting. I just don't want to do the prompting anymore. So, what you can do is go to prompt builders, and this is one I call "What Would Evan Say?" You can see, I name it here. I pick the model—I'm using 3.5 Sonnet here—and I have a system prompt where I tell it, "Hey, I've got a draft I'm in the middle of. Below is going to be a big brain dump of things I've written in the past. Use that to create a list of things I might want to say next."
And below that, I have, I think, one or two articles that I've written. Because when it gives me the conclusion, I don't want it just to do the ideas—I want it to do the tone and the voice. I want it to have the whole thing. So, that's the system prompt. The next part is the first message, which says, "What's up? Here's some ideas."
Dan Shipper (01:01:50)
We'll bleep it. Just say the whole thing.
Evan Armstrong (01:01:46)
We'll bleep it. So it says, “What's up? Here's some ideas.”
Dan Shipper (01:01:51)
It says, “What’s up [bleep].”
Evan Armstrong (01:02:00)
No. And so I also automated the first message back—think of this as, in the ChatGPT or Claude experience, the first message that the AI gives you. And these are the instructions—the system prompt is the thing that makes the output good. So, what I'll do is take this draft that I published last Thursday or Tuesday—I can't remember when I last published; it's been a week—go to ask Claude, and say, "What would Evan say?" And it just gives me 10 ideas on how to finish.
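Under the hood, a custom prompt like "What Would Evan Say?" amounts to a small message template: a system prompt carrying past articles for tone and voice, a canned first assistant message, and then the draft. The sketch below is an assumption about the shape, not Lex's actual internals, and the wording is paraphrased from the transcript.

```python
# Illustrative structure for a "What Would Evan Say?" custom prompt.
# Past articles ride along in the system prompt so the suggestions
# match the writer's tone, not just the ideas.

def build_messages(draft: str, past_articles: list[str]) -> list[dict]:
    system = (
        "I've got a draft I'm in the middle of. Below is a brain dump of "
        "things I've written in the past. Use it to suggest a list of "
        "things I might want to say next, in my own tone and voice.\n\n"
        + "\n\n---\n\n".join(past_articles)
    )
    return [
        {"role": "system", "content": system},
        {"role": "assistant", "content": "What's up? Here's some ideas."},
        {"role": "user", "content": draft},
    ]
```

Because the template is saved once, every new draft reuses the same tone examples without any copy-pasting.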
Dan Shipper (01:02:50)
“Welcome to the content thunderdome.” That's such an Evan thing to say.
Evan Armstrong (01:02:52)
Oh yeah, I was hoping it would do it again, but, yes. Last night when I was playing with this, it said, "Copyright is like a condom. It works, but it's more fun when you don't have it around," which is something I would say—and then you would make me cut it. But you can make these custom prompts for anything. And then what's also useful is the checks—do you use the checks at all?
Dan Shipper (01:03:25)
I do sometimes. Yeah.
Evan Armstrong (01:03:29)
So, checks are basically— I really struggle with passive voice. It's something that Kate, our editor in chief, who is wonderful, is constantly trying to get me to be better at. So, all I do is say, "Check for passive voice for me," and run the checks. And good news—this is an already-edited piece, so there's not too much passive voice in here. It'd be interesting to do it with a new draft, the one I'm working on today. But you can go through and do these edits. And the goal isn't necessarily to get rid of your editor—it's more that the better and cleaner the draft you turn over to your trusted thought partner, your writing partner, the better the feedback they can give you. If they're wasting time with, you can't say that cliché— The better the draft is, the better the end quality will be. And a lot of editing is rote, where you're maybe one GPT-5, or whatever Claude's next model is, away from being able to eviscerate 20 percent of the editing department at The Atlantic. They could just go away, or their jobs could become something different, because the language models can do a good enough job. And so: I come up with the spark in the sauna, write, edit in Lex, and then turn it back over to my editorial team.
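A check like the passive-voice one Evan runs can be approximated with a rough heuristic: flag sentences where a form of "to be" precedes something that looks like a past participle. This is a sketch, not Lex's actual check—and a crude one, since it misses irregular participles and throws false positives, which is part of why the human editor stays in the loop.

```python
import re

# Crude passive-voice flagger: a "to be" verb followed by a word ending
# in -ed or -en. Heuristic only; real checkers use part-of-speech tagging.
PASSIVE = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def passive_sentences(text: str) -> list[str]:
    """Return the sentences that look like they contain passive voice."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PASSIVE.search(s)]
```

Run over a draft, this surfaces candidate sentences to rewrite before the draft ever reaches an editor, which is exactly the "cleaner handoff" Evan describes.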
Dan Shipper (01:05:04)
Yeah. I mean, I think, for us, obviously we have editors, which is great, but some people don't have editors. A lot of people don't have editors.
Evan Armstrong (01:05:17)
Peasants! No, I’m just kidding.
Dan Shipper (01:05:17)
Many people don't have editors, and also sometimes editors are asleep or on vacation or whatever. So having that available all the time helps. And it also just helps to give Kate a draft, for example, that's just better. You get better edits because of that—she's not spending attention on things that something else can catch, which is really cool. Another thing I use Claude for that's sort of in the crafty realm is finding ways of doing metaphors and analogies and similes.
Evan Armstrong (01:05:45)
Nice. I do this too.
Dan Shipper (01:05:47)
And so I have this particular analogy in the piece I'm writing about Socrates and Socrates' search for truth. I'm talking about him splitting the sea of words into words that express truth and words that express opinion. And I wanted to revise that metaphor for a different philosophical outlook called pragmatism. The specifics don't really matter, but I'm just saying to Claude: I'm writing this piece, I talk about the Socratic method using a metaphor of dividing up this sea of words—how do I modify it to show the pragmatic method by contrast? And it gives me something that I think is not that great, and then I'm just pushing it a little. I'm like, hmm, what about something like: in a pragmatic viewpoint, you just find currents that are going to take you where you want to go.
So, basically what's happening here is it suggested something that I didn't like, but that pushed my brain to be like, okay, but here's a metaphor that I actually would like, or that partially expresses what I want, and I'm going back and forth with it on building that metaphor. And it's not doing the best job. So what I do—again, this is in my Claude Project—is ask, what are some quotes from my notes that might help me write this section on pragmatism a little bit better? And what's really interesting is that this makes the stuff that's relevant to pragmatism—the school of philosophy I'm introducing in this part of the piece—more salient to the model, because it's right there in the chat history.
And once I do that, it gives me a bunch of quotes, which is really useful just to see what I'm thinking about in terms of pragmatism. Then I pick a few of the quotes that really express the idea I'm trying to get into this metaphor. And I say, "Okay, weave number eight"—each of these numbers refers to a note I've taken. So: "Weave notes number eight, four, and six into this metaphor." And it rewrites the metaphor into something that incorporates some of those ideas, and I keep going with that. I ask it to reflect on what we've talked about—reflect on what you wrote, how can you make it better?—and then revise. And the thing it came up with is actually really good. It's like: "Okay. In the Socratic view, we dive into the sea of language, hoping to separate the murky waters of opinion from the clear streams of truth. But when you look closely at even the clearest drop, we find no indivisible essence of water, only space, molecules, and relationships. This is the pragmatist revelation. Meaning isn't the substance we extract, but a property that emerges through use. Just as water becomes refreshment when we're thirsty, cleansing when we're dirty, or danger when we're drowning, words shift their meaning based on context and purpose." And like, this is good.
I want to make this better, but it's this dual collaboration. Some of the stuff that it's coming up with I put in there. I said, “When you divide a drop of water, you never find the pure substance.” There it's weaving things that I put in there. It's weaving in notes and it's weaving in its own sense of what good is and what's relevant to create this metaphor that I probably wouldn't have come up with on my own, and I think it's going to be one of the central metaphors of the piece and it's like beautiful and amazing. And I love it. And totally would not be possible in this way without Claude. It's really powerful for this.
Evan Armstrong (01:09:38)
How long did this exercise take you?
Dan Shipper (01:09:41)
I was doing it in the 10 minutes before this show, because I was actually just doing it for myself. I actually just needed it.
Evan Armstrong (01:09:50)
So, say, 10 minutes to come up with a metaphor. So, interesting. It's interesting seeing how much of our relationship with these models comes from our publishing deadlines because Every— No, I'm serious. I don't mean that in a derogatory or a moral judgment way, but more just that so much of your usage is stuff where I'm like, I'd really like to do that, but I don't have time to do that, which is my own fault because at the start of this year, you and I sat down, and I was like, I need to publish twice a week. I need to get to that speed.
Dan Shipper (01:10:20)
I don't know. I don't know, man. You definitely publish more than me for sure. But if you want to talk about who's busier, I might give you a run for your money.
Evan Armstrong (01:10:28)
Oh, listen, I'm not trying to compare it all. Ooh, the podcast. I have to talk— Like, no. Come on. I do have a quick— Anyways, we don't need to talk about the podcast. It's very funny that the marquee post of your year was “Admitting What is Obvious”—I want to be a writer. And then the big thing you've done this year is do a podcast. It's kind of funny.
Dan Shipper (01:10:51)
I mean, that is one of the big things I've done this year. Let's be real here. But yes, the podcast is one of the big things and the writing. It is always a continual struggle for me to figure out how to prioritize both and wanting to prioritize the writing and all that kind of stuff. And I'm constantly finding that balance.
Evan Armstrong (01:11:12)
I think it's less about who is more busy, because who knows? I'm not going to compare that. But it's more about the attitudes that we have or what we want to pull out of the models. I'm not quite as interested in a thought partner, where I'm just like, I got the juice. I just need you to clear the runway for me. You know, I just need you to let— Anytime I get blocked, fix my emotions, give me the next step. And I'll just take it from there. And for you it's a much more collaborative process. So, it's really, really different, but valuable. I want to try the— Maybe I need to take a note, and then once I start taking notes, I'll put them into Claude Projects.
Dan Shipper (01:11:56)
Yeah, no, I think that is very interesting, and I think it reflects how we work in general. I think I just like a more collaborative work process. And I think you like it more like, I'm going to just clear the runway. I got the juice, you know? And so I think that's really interesting. And I think yeah, I think the note-taking thing is just a psychographic. It's a thing that some people do, and some people don't. And I think there are some people who are like, oh, I should take more notes. And I'm just like, no, no, no. If you want to take notes, great. But, sometimes that's just not how your brain works and that's totally fine. You know? So, yeah. So, I think this is a really good summation of some of the ways that we use Claude for the kind of craft part of the writing process. Let's talk about the last one. So, audience. Tell us about audience and how you use it for audience.
Evan Armstrong (01:13:00)
See, I actually think you are much better than me at audience in general. I think this is actually one of your big strengths as a writer and one of my big weaknesses. That's why we're a good team. But I'll do the typical things. So, we have Spiral. Have you talked about Spiral on this podcast before?
Dan Shipper (01:13:20)
I've talked about it a little bit, but we should introduce it for anyone new.
Evan Armstrong (01:13:24)
Okay. So, Spiral is an app that Dan came up with the idea for. And then our internal team of engineers and designers, in partnership with our entrepreneur-in-residence, Brandon, built it together. You can think of it— I don't know how you describe it, Dan. So let me try to describe it and you tell me how close I get.
I kind of think of it like a Mario pipe where—if you've ever played Mario, sometimes he'll go into the pipe and then he comes out in a different shape, or he comes out in a different place. And Spiral is a pipe designer. And so you can take one body of text and stick it in the pipe and it transforms it into something else. So, pragmatically, it means you write an essay and it can make that essay into a good tweet, or you write a good tweet and it can turn that into a good YouTube description, or whatever it may be. Am I close?
Dan Shipper (01:14:23)
That's nice. That's more or less right. I mean, basically the insight is, as a writer, you're constantly like, I feel like I'm constantly— I do the core creative work and then I'm constantly translating that work from one format to another. So, I'm taking an essay and I'm like, I need to make a headline from it. And that is a new format of the essay. It's like a compression of that essay. Or I'm taking an essay and I need to tweet it, or I'm taking a podcast transcript and I need to tweet it or make a LinkedIn post or whatever, or I have a podcast and I want to make show notes. There are so many things where you're transforming it from one form to another as a way to get distribution, as a way to reach people where they are, in the channel that they're in, for the amount of attention that they have, in the format that they expect. And that is a really skilled task. It requires a lot of knowledge and skill to do it well, and it's also really repetitive and kind of sucks. No one wants to do it, mostly, and I realized that Claude is actually good enough, if you do the right prompt, for it to automate a lot of that, where it doesn't do everything for you, but it gets you 80 percent of the way there. And so I made a prompt for myself for a couple of these things, like podcast transcripts to tweets or whatever. And I was using it, but it was hard for me to type. And I just felt like it could be really valuable for everybody, but I felt like it would be hard to get people to use the prompt because it's big and hard to create. And so we have this thing called Think Week at Every, where we don't do meetings and we just sort of reflect and think about what we want to do next, and also just spend a lot of time letting our creativity run, and I just built an app to do it called Spiral.
I did it in a couple of days and you can basically make a Spiral. You give it a bunch of examples of the task you want it to do, like, “podcast transcript to tweet”—you give it a bunch of podcast transcripts and tweets you've done. And then it just gives you a little text box where you can paste in a new transcript and it'll make tweets for you. Simple, but it works really well. And yeah, Brandon and the team took that MVP app and just built it into this beautiful thing that we launched a couple months ago. And I mean, it's doing really well. I use it a ton. I think you use it a ton. Everyone internally uses it. I think it's going to pass 5,000 users in the next couple weeks, which is pretty cool. So, yeah, so that's what Spiral is.
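[Editor's note: The pattern Dan describes here is few-shot prompting: pair each example input with the output you actually published, then append the new input. The sketch below is a minimal illustration of that idea, not Spiral's actual code—the names `Example` and `make_spiral_prompt` are hypothetical.]

```python
# A minimal sketch of the few-shot pattern behind a "Spiral": a task
# description, worked input/output examples, then the fresh input for
# the model to transform. Hypothetical names, not Spiral's real API.
from dataclasses import dataclass

@dataclass
class Example:
    source: str   # e.g. a podcast transcript excerpt
    output: str   # e.g. the tweet you actually published from it

def make_spiral_prompt(task: str, examples: list[Example], new_source: str) -> str:
    """Assemble the few-shot prompt that would be sent to a model."""
    parts = [f"Task: {task}", ""]
    for i, ex in enumerate(examples, 1):
        parts += [f"Example {i} input:", ex.source,
                  f"Example {i} output:", ex.output, ""]
    # The model continues from "New output:", imitating the examples.
    parts += ["New input:", new_source, "New output:"]
    return "\n".join(parts)

prompt = make_spiral_prompt(
    "Convert a podcast transcript into a tweet",
    [Example("...transcript excerpt...", "...published tweet...")],
    "...new transcript to transform...",
)
```

The app part is mostly packaging: storing the examples per task so a user only ever sees the text box for the new input.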
Evan Armstrong (01:17:06)
So I'll use Spiral mostly— Social media is the bane of my existence. I do not enjoy it. I am not good at it. However, we do not make any money if people do not find our essays. And so social media is a really key component. And so I'll use Spiral to take an essay and transform it into a tweet, which is my most common usage. I assume it's the same for you. I'm curious for your Spiral usage, if you move beyond just tweets. Are there other places you're using it?
Dan Shipper (01:17:46)
Yeah. And I'll show you an example. So yeah, I mean, I am using it a lot for tweets and LinkedIn posts and all that kind of stuff. And it works super well. It's kind of crazy how well it works. I'll show you, here we go. So, basically on the right over here we have a tweet, right?
It's got 157 likes, which is not the most viral tweet I've ever done, but it has 30,000 views, and it's about this model that just came out or— Not came out, but that OpenAI has been working on, that The Information reported on, and I wanted to tweet about it. Because I think part of my job is, when something new comes out, I want to tell people about it. But also composing that tweet, even though it's pretty rote, doesn't require a lot of thinking. It's not a new idea. I'm not that interested in it, but I want to get a tweet out so that people see it and people know about it, because I think people rely on me for that kind of thing.
And so what I did was we have this Spiral that is an internal Spiral that converts articles into insightful conversational tweets, and there was someone on LessWrong—LessWrong is basically a forum for rationalists—who summarized all the news about this new model, so I read it, understood it, and then I was like, okay, I want to turn this into something I can tweet. And I just threw it into Spiral. I pasted it in here and I just pressed “generate multiple” and then it generated a bunch of tweets. So, “OpenAI Strawberry is about to change the game. This AI can solve complex problems on the first try without hallucinations.” “OpenAI Strawberry set to revolutionize AI. Here's what we know.” And it gives a bunch of stuff. And these are not things that I want to tweet just wholesale, but a couple of these things would have been hard for me to come up with. “It solves complex problems on the first try without hallucinations. It generates high-quality training data for Orion. It may be integrated into ChatGPT as soon as this fall.” Obviously I could write that, but it would take me a little while to figure out the bullet-point structure and what I want to say, and having this thing like, oh, why does it matter? Right?
And so what I was able to do is I just took that, I pasted it into Twitter, I revised the headline, I revised the top because I didn't like the like “set to revolutionize AI” thing, because it just felt kind of cheesy, I edited a couple of things, I took out some of the emojis, whatever, and I tweeted it and it has 30,000 views.
And that's a really simple thing where, I would have done this ordinarily, but it would have taken me 30 minutes or 40 minutes or something like that. And it would have been a lot of brain work. And for this, I'm just editing, which is much easier than writing wholesale. And it took me five minutes. I was in bed. It was really, really easy, and I do that a lot for news articles, news stuff. I do it for obviously all of our articles, for our podcasts. And it means that I tweet more. I have more interesting stuff to say. And all my engagement and stuff is way up because I'm able to put out more stuff, which is really cool.
Evan Armstrong (01:20:52)
I'm curious as a writer, I know that some people are gonna watch this and have moral horror, right? That you're like, oh, you took someone else's content, you transformed it with an AI, and now you're gonna monetize on top of that. I think that's a case where some people, some writers, have real apprehension like this is the worst-case scenario. Do you think about that at all? Does it bother you? Or have you worked your way through those emotions?
Dan Shipper (01:21:19)
I mean, obviously, I think that's a really important question. And it does bother me if I think there's a case of stealing or plagiarizing or whatever—all that stuff is really bad. And I will say also it's much more common for me to be doing this with my own stuff than someone else's stuff. But I think this is a really interesting case that is important to reason through. My basic thing so far in this case is that I cited The Information in the tweet and linked to it—both linked to The Information and linked to the article that it originally came from. And I think there's a very, very well established practice of reading stuff from another news site and then summarizing it for your audience.
And that, in general, I think is not a problem at all. I think there's a further component, which is like, and you're using AI to do this. And I think that there are cases in which that's okay and cases in which that's not okay. The case where it is not okay is when you are lifting wholesale, without attribution, sentences or ideas that come from someone else. Whereas it is okay if you are summarizing it in a new way for the audience that you have. And I think that that's totally fine. Obviously there are blurry lines there. And one of the problems with AI is it's hard to always tell where the output is coming from. And I think what we need are both new tools to detect, okay, did this sentence come from somewhere or not? Not was it AI-generated, but does it come from somewhere or not? And then there is a new ethic about what is okay and what is not okay. Because when you have new technology, it changes ethics, and we just haven't updated our ethics to account for this new thing, but I think we will.
I don't think that this is going to be like, we never figured that out. I think in the next five to 10 years we will shift what is okay and what's not okay. And my general belief is that, for example, training these tools on publicly available text to create this intelligence layer is very different from training these tools, creating the intelligence layer, and then having the intelligence layer output wholesale copyrighted stuff without attribution. Those are different things. And yeah, I think we'll come up with ethics to differentiate them.
Evan Armstrong (01:24:07)
Yeah, it's interesting because I'll have people do this to my writing. I don't know if you get people that do this, but they'll grab one of my posts. They'll make a thread of all my ideas and then it'll be, if not a word-for-word copy, it's the same thing. They did a thread of my article or whatever. And then sometimes they'll give me attribution at the very end. They'll have a tweet. A shout-out to Evan Armstrong for doing this, or for writing about this. Or they'll get called out in the comments by one of my readers—thank you, readers, for being vigilant Armstrong warriors—who will say, hey, this sounds like a post Evan recently wrote. And I don't know. I listen to your logic—you're right. It makes sense to me. Transformation is totally fine, but when it happens to you, I'm like, no, I don't like this. I spit on you. I don't like it at all. But then I go and do it too, and I'll use Spiral too— I haven't done the case where I'm pulling other news articles or whatever, but like you said, you gave attribution to the people who have the paywall, who did the original reporting. I think it's really important to acknowledge that even for us, a very AI-forward organization, we don't have it figured out. I don't know what's right or wrong, and I don't think anyone does. And that's the hard part.
Dan Shipper (01:25:39)
I think that's totally right. I actually have a very different response to that. Obviously someone talking about an idea I had that they got from me without citing me I don't like. But I do get people all the time summarizing articles or podcasts and being like, this is from Dan Shipper. Dan Shipper had this cool take or whatever.
And I feel like, I think in general, those kinds of things indicate that people value your content and are consuming it enough to do something with it, and I think as a writer, as an internet writer that attention is really, really, really important, and it always sort of bleeds back into subscriptions for Every or people reading more of my stuff or whatever. So, I actually like it a lot.
Evan Armstrong (01:26:33)
Think about the bar of intellectual effort that's required to engage with the content. When you and I first started writing online and AI wasn't a thing, the responses I would get, and the ones I liked the most, were the criticisms. I loved when somebody would take my post, think it was the dumbest thing they'd ever read, and write a Substack about it. I loved that. Because, even though I obviously disagreed, I liked that they took it seriously enough to try to refute it. And with each successive generation of models, so 2 to 3 to 4 to 4.5, whatever, the intellectual activation energy decreases because more of the rote labor can be done by the models, and so I wonder if there's a point eventually, where we're at GPT-6, where we say, okay, this is no longer okay because you're just doing the whole thing without any effort of your own. It's a completely automated process. Maybe it's okay, but right now we're in this really fuzzy middle ground where it does require engagement. It does require enjoyment, but a lot of the work can be outsourced to an AI, you know?
Dan Shipper (01:27:43)
Yeah, I think my general feeling about that is that what these models are good at right now is replicating patterns that have been seen previously. And there are certain formats—for example, this Strawberry tweet or whatever follows a particular type of format for a kind of tweet that currently works, and that changes a lot—and the models are actually very, very bad when things change and a new thing that works needs to be found. They're actually not very good at that, and I think it's unlikely that the language model architecture on its own is going to learn to do that, which means that even at the GPT-6 or -7 stage, until the models get much better at reasoning and learn from experience, they're just not going to do that.
And I guess, who knows what's gonna happen, but I'm actually not particularly worried about that. It will happen. There's going to be a lot more AI-generated content that's shitty. I think we will develop tools and sensibilities to differentiate between them. It's not gonna be perfect; this always happens with new technology. There are always trade-offs. There's always bad shit and there's always good shit, and I think generally we've actually been living in a sea of garbage for a long time, both on the Twittersphere and Facebook, but also just in scientific literature or whatever. There's a deluge. There's way more information available now than there ever has been.
And I think language models are the first tool we've ever built to actually more effectively deal with that information and get you the right information at the right time and sort of wade through it. Obviously they're not perfect, and what it means is the amount of information that you as a writer, I as a writer, can traverse in any given minute of our day is so much higher. And so, yes there's more stuff with less engagement, but what that means is you're bumping up against the things that you want to engage with more frequently because you have access to many more things for a much smaller unit of attention. And those things are filtered for you adequately by the language model, so I think you can look at it in another way, which is it will get you to the things that you want to engage with more deeply, more quickly, and that's sort of how I think about it.
Evan Armstrong (01:30:27)
Interesting. That was really far afield, I think, for “How to Write With AI”—maybe this gets cut in the end, I don't know—but I was like, if I'm having some sort of emotional reaction, I am sure that listeners to the podcast will be as well, and so I thought it was worth bringing up.
Dan Shipper (01:30:45)
Totally. No, I think it's important.
Evan Armstrong (01:30:47)
I think it really is unsolved. I think you're right. There are new problems. I do wonder if we'll build antibodies, if it ends up being like social media, where now, in 2024, I think people are rejecting social media more and more compared to 10 years ago because they're like, oh wait, this stuff is rotting my mind. And I wonder if it will be similar with AI: in the early years, people get sucked in, but over time you build up social antibodies to it.
Dan Shipper (01:31:13)
I think so. And it changes like the way that Gen Z uses social media is just different from the way that we do. And that's just one of the interesting and beautiful things about how technology changes the human experience. We sort of co-evolved together and we've been doing that for a long time. Cool. So, is there anything else you wanted to show about using AI in your audience process or just in any part of the process? Or I'm curious if you have any reflections on the conversation.
Evan Armstrong (01:31:47)
I think the last point I'll make on AI and the audience is that we've spent all of today talking about AI in the context of generation, but a lot more important is AI in distribution, where most of the content that we see online is algorithmically selected. It's AI-selected. They don't use large language models, but they're machine learning algorithms. And as I think about the content I make, I do have to think about it in the context of the algorithm in which it will be distributed, which is why, when you were talking about Spiral, the Spiral that works for a tweet will not necessarily work for a LinkedIn post, and a LinkedIn post will not work for Facebook—and that is an audience thing, but it is an algorithmic thing as well. And so when I think about distributing online, it is like, oh, I want to automate away writing tweets because I don't like writing tweets, unless I'm tweeting about movies, in which case no one will like them and I'll tweet them anyways. But it's more that, as I tweet, I have to think about what's going to engage the algorithm, which is a multiple-hour-long conversation in itself. But I think a really important point that most people miss is that you're serving the algorithm—the AI algorithm—as much as you're serving the audience in the context of distribution.
Dan Shipper (01:33:08)
Totally. And there's a lot of taste involved in knowing how to do that, which is kind of interesting and it can be dystopian or it can be like this is sort of how you get to people, you know?
Evan Armstrong (01:33:20)
Yeah, no, I think it's just the shit sandwich you have to eat if you want to write online. There's just no way around it. You just cannot— You can kick and scream as much as you like, but it's just the way it is and you just have to deal with it.
Dan Shipper (01:33:34)
This is great. I think we're around time. Yeah, I'm really psyched that we got to do this. I had a lot of fun. I feel like I learned a lot. I hope you did too. And I'm also just psyched about the course you're running. So, tell us a little bit about the course before we sign off.
Evan Armstrong (01:33:51)
Yeah. So, I mean, it's very similar to what we have today. You can think of it in two parts. Well, the course's name is “How to Write with AI.” And its general sensibility is around how to write—so the elements that we discussed today of taste, topic, craft, and audience, and the writing principles you need to understand. And then, for each of those principles, the accompanying tools that you can use to automate away the rote work, which are similar to the tools that we discussed today: Claude, ChatGPT, Lex, and Spiral. So that's the lecture format, but the thing I'm really excited about is there are all these student groups that are going to be working together, and we'll have editors that we trained helping each of these groups. And then at the end, everyone will have a chance to share their essay with Every's audience of, I don't know, where are we at, Dan? What did our Monday metrics meeting say? 78,000! 3,000 more than we were last week. I think when I sat down, when you and I were talking about this idea, it was really like, how can we take the very hard lessons that it's taken us many years to learn and just share those with people, so they don't have to go through the many years of suffering that we did.
Dan Shipper (01:35:23)
Totally.
Evan Armstrong (01:35:24)
That's what the class is. It's going great. We launched it last week. People are really excited. It starts September 17, 2024.
Dan Shipper (01:35:31)
Amazing. So, if you're listening to this or watching this and you want to learn more, we'll put the link down in the show notes, and you should follow Evan on Twitter @itsurboyevan. I guess we're calling it X now—on X @itsurboyevan. And obviously subscribe to Every where we both write every week. And Evan, thanks for joining.
Evan Armstrong (01:35:51)
Thanks for having me.
Thanks to Scott Nover for editorial support.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.