Transcript: ‘How to Use AI to Become a Learning Machine’

‘AI & I’ with Shopify’s former director of production engineering Simon Eskildsen


The transcript of AI & I with Simon Eskildsen is below.

Timestamps

  1. Introduction: 00:01:06
  2. How entrepreneurship and parenthood changed Simon’s learning rituals: 00:02:51
  3. How Simon accelerates his learning by using LLMs to find associations: 00:12:59
  4. Simon’s Anki setup and the flashcard template he swears by: 00:18:24
  5. The custom AI commands that Simon uses most often: 00:26:02
  6. How Simon uses LLMs for DIY home projects: 00:37:45
  7. Leveraging LLMs as intuitive translators: 00:40:48
  8. Simon’s take on how AI is reshaping the future of learning: 00:51:38
  9. How to use Notion AI to write: 00:59:10
  10. The AI tools that Simon uses to write, read, and code: 01:08:53

Transcript

Dan Shipper (00:01:07)

Simon, welcome to the show.

Simon Eskildsen (00:01:08)

Thank you so much, Dan. It's good to be here.

Dan Shipper (00:01:11)

It's good to have you. So, for people who don't know, you are the co-founder of Turbopuffer, which is a really cool AI startup doing better vector databases. Is that how you describe it?

Simon Eskildsen (00:01:23)

Yeah, it's essentially a search engine starting with vector search and we're trying to make it much more affordable and easy to run these things at scale, which is a challenge today that a lot of companies are having.

Dan Shipper (00:01:35)

That's awesome. I think you're one of the smartest founders in the space, especially at the layer of the stack that you're working at. We also go back a long way because you are one of the original— I used to do these interviews called Superorganizers interviews on Every, and you were one of the original interviewees. We did an interview together called “How to Make Yourself Into a Learning Machine,” which just went super viral. This is in 2020. And it was one of our first really, really big articles. And it was really incredible. You have this energy about you in that interview. We go through your reading habits and how you find new books and how you take notes on the books you read and how you turn the books you read into flashcards and all this kind of stuff. And it was, I think, super inspiring for any sort of note-taking nerds, of which I am one. And I'm really excited to get to talk to you again and hear how your brain and your mind is doing or adapting in the AI age. Because I think all the stuff that we were nerdy about four years ago, it's completely changed with the level of tooling available. And so I just want to hear what you're up to. I'm sure it's amazing.

Simon Eskildsen (00:02:51)

Yeah, I think AI has certainly changed how I approach my learning. Absolutely. It's an absolute dream come true. But I think my life has also changed dramatically from 2020. I am running a startup, which is more demanding than anything. And I think if you want to make yourself into a learning machine, it's a pretty good path to take.

There's nothing that challenges you more on your breadth and your skills than running a startup and building it from zero. So, that's been absolutely incredible. But it also means that some of my habits and systems have taken a little bit of a loss. But it might also be interesting to hear what the condensed rituals look like now. And then on top of that, I have a four-week-old baby, which means that my schedule is even more ridiculous.

Dan Shipper (00:03:46)

Congratulations.

Simon Eskildsen (00:03:47)

Thank you so much. 

So, I think the reading has definitely condensed. Now, I have perhaps an hour or so to read before I go to bed, oscillating between reading articles on Reader and then reading books. I can't get up to 50 to 70 books a year anymore, so my selection process has gotten much tighter than it used to be. And so that's been a big one. Another thing that I used to spend a lot of time on—and I think we talked about in the article as well—is that I used to spend a lot of time writing about the books that I'm reading and that's also had to go. I just don't have the time. 

But I still create a lot of flashcards. I joke with my wife that I'm going to have a party when I reach 10,000 flashcards. And I'm sure it will be absolutely— My friends will probably come because they like to indulge and make fun of all of my ridiculous rituals. But sometimes when I'm doing the flashcards, people want to follow along and they're like, why do you have a flashcard about whether it's better to have the window down or the A/C on at various vehicle speeds? And it's funny when you've been doing this for 10-plus years, because all of those things also carry memories of when you used that ridiculous fact, or the time you created the flashcard in the first place. So the flashcards have definitely stuck. I review somewhere between 50 and 200 of them every single day.

Dan Shipper (00:05:27)

Can we see your Anki? Can you show it to us?

Simon Eskildsen (00:05:28)

I could show it. I can show it. I think in the original article you wrote, too, I have on my to-do list, “cut toenails.” My life is so completely in these systems that I regularly have nightmares about losing them, because my brain is completely outsourced to them. This is what my Anki looks like. I don't think I've reviewed it. So this is a really easy day. We only did 11 cards. So we can take a look. I mean, this is a pretty ridiculous one, right? This is a restaurant that I used to go to. And the only distinguishing fact about this guy was that he had a really good radio-deep voice. And everyone has this dream of whatever restaurants they frequent that you get to know the waiters and it's back and forth and it's the usual, but it never happens. So this is my weird attempt. This restaurant doesn't exist anymore. I haven't seen this guy in a decade, but again, it brings me joy to see this kind of thing.

Dan Shipper (00:06:36)

That's amazing. I actually do that. I put it in my Notes app rather than in my flashcards. So then I just search whenever I'm back at the restaurant.

Simon Eskildsen (00:06:42)

That's perfect. And I think it's also things like your colleagues' kids' names and ages, these kinds of things where some people might find it ridiculous that you put a note about this and it's like, why can't you just remember? But let's be honest—most people don't remember those things. And if you write it down, then you will ask about it. And then eventually you'll remember it. That note is not really valuable anymore. So I also have in here the names and ages of the kids of so many people I've worked with, significant life events for them, wedding dates, whatever. Because I really do want to remember those things. But my memory is not capable of it. But then at some point I was like, oh yeah, it's July. Isn't this when Scott got married, or whatever? Yeah, this one. How many glasses of wine per bottle? I actually don't remember this one.

Dan Shipper (00:07:33)

I think it's four.

Simon Eskildsen (00:07:35)

There you go.

Dan Shipper (00:07:36)

I didn't need the flashcard.

Simon Eskildsen (00:07:39)

You didn’t need the flashcard. I feel like having a newborn, you forget the joys of a bottle of wine.

Good example, too. If it's something where you don't drink a ton and you're constantly— Then this might be worth it. I think for a lot of people, you don't need a flashcard. Dan, you live in New York. You're not going to need a flashcard for this one. So this one is going to be— And again, I didn't actually remember. I think the number I had in my head was six, but I think for Dan it's probably two, but it depends on how heavy-handed you are on the pour.

Here's another one. So this is also very common. When I peruse technical documentation, I'm constantly adding things into Anki instead of taking notes. To be honest, especially on the schedule I'm on now with the type of work I do, I don't take that many notes. Most things just make it straight into flashcards right away. This is a Postgres column type. Postgres is a type of database. There's JSONB and there's JSON. And I constantly forget when you're supposed to use either. This is kind of a bad flashcard because I think this is just in my standard flashcard template. It shows both sides of the card. So this is not really a valuable flashcard. The best one is: “When should you use JSON vs. JSONB?” Sometimes I just don't get the time to pull this up. I'll show you, actually, while I think of it. I used to have 20 different types of cards that I used. But I always use the same one now, and I think this might be worth it to some people, because at some point I had a different card type for every single thing that I was doing. This is the golden card type. It doesn't matter what app you use. This is the template I like. So you might be like, “How many glasses of wine does Dan think is in a bottle?” So, that's the front of the card. Then you have to think about, okay, can this be reversed? So there's two glasses of—

Dan Shipper (00:09:57)

I mean, I'd say the bottle is the glass for me, obviously.

Simon Eskildsen (00:10:00)

Yeah, there you go. This one doesn't have a good reversal. But let me just see if I can think of— “Six glasses of wine,” right? So I'll just write something like that. Again, the reversal here doesn't really make sense, but it gets you the gist, right? So, if we're talking about an example from before, like, okay, Dan's kids' names are X and Y, then you might have the back of this as “X and Y are whose kids?”

Dan Shipper (00:10:31)

Yeah, you got to go back and forth.

Simon Eskildsen (00:10:35)

You gotta go back and forth. And I just— I don't deal with the—.

Dan Shipper (00:10:38)

This is really making me think of some AI stuff. So, there's this whole debate right now about whether or not language models are actually intelligent, right? And one of the big ones is that they don't understand when things are logically entailed, often. So, if they see all the time in their training data, “How many glasses of wine Dan thinks is in a bottle,” they'll be able to answer six, but they won't be able to answer the reverse. And people are like, oh, that's because they're not actually intelligent. And it's really interesting that, at least in the flashcard example, humans actually have to practice this all the time. What do you think of that?

Simon Eskildsen (00:11:16)

I think I haven't seen a ton of examples or tried a ton of examples of where they can't go in the reverse other than in these benchmarks where people pose them these problems to try to not make them think. I don't really have any big ideas of what a language model is and what it isn't. I just think of a large language model as an average of human knowledge or whatever—public human knowledge or public human knowledge plus what you can easily scrape. And whether they can reason, it's not something that I really use them for, probably because I don't really feel like they can right now. As soon as you get two to three levels of reasoning down, it just doesn't really do the trick for me. So, I'm sure a language model would do very well on something like this and probably even the inverse as well. So, I don't know if I have any direct thoughts on your example other than that's—yeah. No, I don't think they're super intelligent yet, but I think they're an incredible example of the average of the internet.

Dan Shipper (00:12:26)

That makes sense. I guess what this is making me wonder about is how the idea of extending your memory changes for you in a world where language models are available. You have the average of all of human knowledge available at your fingertips, where before you kind of did because you had Google, but Google itself is just a much worse version of a language model where you can get exactly what you need in the context you need when you need it. And I'm curious how that's changed for you, the function of and value of doing flashcards like this.

Simon Eskildsen (00:13:00)

The way that it's changed my learning the most is that what Google is really good at is you roughly know where to find what you're looking for and you can find it immediately. I read this paragraph once, and I think about it every single day because I think the best people that I work with or friends and things like this are where— Yeah. Google is good when you know what you're looking for, but when you're just looking for associations, that's when you have to go out and talk to people, right? If you Google, it’s whatever is SEO plus maybe a level out, right? But if we start to talk about something more interesting and associations for me, it's like, okay, I'm using this data structure here. This is what my data looks like. What might be some other things that I could do here? You just kind of got to talk to someone. But this is what the language models have really changed for me, right? Where I can go to it and be like, hey, I think it could be done like this. I don't know a ton about this domain. Can you just riff on this with me?

And then these models place that somewhere in that latent space. And then they can just find associations around it and pump that back to you. So that is a ping-pong-ing tool of whatever it is when you have a rough idea of the island that you want to land on. It can paint the picture for you really well. I found that it works extremely well for learning. And that wasn't accessible to me at my fingertips before. I couldn't be like, hey— Something I did the other day is, I like this brand but they didn't have a product that I was looking for. What are some other brands like this? It will just tell me that. Again, average of the internet. That I find very valuable. Or a year ago we were having— One of the wonders of living in Canada is that these little cabins in the woods are accessible for not too much money. And we have this retaining wall that we needed to build. And it was near the water and it's this whole complex thing. And there's all this legislation and it was all in French because it's in Quebec. And I was using a language model where someone had told me, oh, you need to build this type of retaining wall. I don't know anything about retaining walls. I don't care about retaining walls. I don't care to read 100 pages of French—and I don't really know how to speak French anyway—about what you're permitted to do near a waterline as it pertains to retaining walls, right? You talk to a language model about this, and then it starts to see, well, this retaining wall is not going to work for this reason. But actually, there is— The name slips me. There's this type of retaining wall, where you put it in a grid and then you put some rocks inside the grid. I'm sure you've seen one of these before. I don't remember the name of it right now, but it's like, oh, this might actually be a really good option that fits with these criteria. But no one had suggested it, right?

These types of associations are just like, yeah, an expert on retaining walls could tell you that, but for anything associated like this, I have found the language models incredibly useful, both in my daily work, because it feels a bit like conversing with a Ph.D. in whatever vertical you're going into, but also as you go through your daily life, where you often have to talk to a contractor or a vendor who is an expert in some vertical, but certainly not an expert in teaching you enough about it. And maybe also you feel a little ripped off because you don't trust them. That's been very, very good.

Dan Shipper (00:16:33)

That's really interesting. And how does that relate to you to the practice of making these flashcards and using them if at all?

Simon Eskildsen (00:16:40)

I think it's mostly— Flashcards to me are a sink, right? I don't really do anything with the intention of like, oh, okay, I'm going to sit down in my chair. I'm going to create some flashcards today. Once I come across a bit of knowledge that I want to retain for whatever reason, I'll put it in here. There might be a flashcard in there on the name of that retaining wall, whose name slips me. But probably not. But that would be the type of thing that would make it in there, but I don't really— I used to open it in the morning and set aside 45 minutes with the intention of, okay, we're going to create some flashcards today. I don't have time for that anymore. Right now it's like, okay, I just encountered this piece of knowledge, so it goes into the sink of flashcards, and I know that it's retrievable-ish, right? And the reason for the flashcards and maintaining this knowledge is that it lets me make these interesting associations in real time too. And suddenly I have high bandwidth through the tool that's right in front of me to do that. Of course, the best person to do that association with technically is my cofounder, Justin, who normally sits in this chair behind me. It's just free-flowing ideas at high bandwidth. I feel like I can get some bandwidth with the language model, and the breadth of verticals of knowledge that I can get to is unencumbered. I don't have the network to know what the best type of heat pump is for these temperatures and what matters, right?

Dan Shipper (00:18:10)

Yeah. That makes a lot of sense. So I want you to finish telling us about these flashcards and then we'll move on to some of the AI stuff.

Simon Eskildsen (00:18:14)

For sure. And yeah, I think the best way to think about flashcards is that they are a sink of your knowledge and a way for these things to resurface. This is the card type that I really like. It's the only one I've used for the past thousands of cards I've created. I haven't used any other one. Then here you do the reverse, whether you want to reverse the card or not. In the case of something like how many glasses of wine does Dan think is in a bottle—six glasses—you don't really need the reverse. So we're just going to leave that blank, and the extra will just be a picture or something like that. You saw it on the other card. This is all you need. And then always put a source in when you create a flashcard. It's nice to know, like, okay, this was in 2017, I talked to this person and they said this thing, because again, there's a little bit of nostalgia with these cards. If you're actually serious about making this a habit, you're like, oh yeah, that was Naj at Carben in 2014 or whatever, right? You might not delete the card because that brings you a little bit of joy.
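[Editor's note: the “golden” template Simon describes (front, back, optional reversed front/back, extra, source) can be sketched as a small data structure. This is a hypothetical illustration, not Anki's actual schema; the field names and sample values are invented.]

```python
def make_cards(note: dict) -> list[tuple[str, str]]:
    """Expand one note into cards using the 'golden' template:
    always a front-to-back card; a reversed card only when the
    reverse fields are filled in. 'extra' (a picture or mnemonic)
    and 'source' ride along as context rather than being quizzed."""
    cards = [(note["front"], note["back"])]
    if note.get("reverse_front") and note.get("reverse_back"):
        cards.append((note["reverse_front"], note["reverse_back"]))
    return cards

note = {
    "front": "How many glasses of wine does Dan think is in a bottle?",
    "back": "Six",
    "reverse_front": "",  # left blank: this card has no useful reversal
    "reverse_back": "",
    "extra": "",          # optional picture or mnemonic
    "source": "AI & I interview",
}
```

A note like “Dan's kids' names are X and Y” would fill both reverse fields (“X and Y are whose kids?”) and so yield two cards.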

Dan Shipper (00:19:20)

Interesting. I haven't actually heard anyone talk about flashcards from that perspective. It's sort of like when people talk about how they get a whiff of someone's perfume and it reminds them of their mother or something like that. And doing flashcards as a sort of an evocative exercise or associative nostalgic exercise for different times in your life in the same way. Like, oh, I hear a song and I think of being in high school or whatever. I kind of love that. There's something romantic about it.

Simon Eskildsen (00:19:49)

I think it's just because I've been doing it. It's a major part of my life, right? I've been doing this since I was like 17, 18 years old, right? So it's been probably 12 years of flashcards. Yeah, there's a lot of history. I think a lot of people do flashcards for a period of their life. But I found it valuable enough to just stick with it, right? It's one of the three or so things that really have stuck with me. Do you want to do a couple more?

Dan Shipper (00:20:22)

I mean, let's do one where you get a good card that you know. I want to get that.

Simon Eskildsen (00:20:29)

Which prestigious university is in Pittsburgh? Oh, you probably know this. 

Dan Shipper (00:20:40)

I do know this.

Simon Eskildsen (00:20:42)

I don't remember this. Okay. Carnegie Mellon.

Dan Shipper (00:20:44)

You gotta give me a chance to answer before you flip the card.

Simon Eskildsen (00:20:52)

So, English is not my mother tongue. I used to do a lot of flashcards with words and definitions. And basically, I had this whole flow where I highlighted on Kindle, it syncs to Readwise, and then I process that as part of my highlights into a flashcard. Then at some point my wife said— It's funny, my wife has two reactions when I tell her one of these new words that I learned proudly. One reaction is, why don't you know that word? And then the other reaction is, Simon, that's a dumb word. No one uses that word.

And I'm like, Jen, it's kind of like there's a missing third thing here. What about, Simon, that's such an amazing word. I can't believe that. I'm so thankful that you taught that to me. So at some point I started scraping Google for the number of results for the word as a proxy for how useful it is. That's something where you could probably use a language model today to ask how common a word is. So “affable,” I mean, that's someone who's an admired or nice person, right? Good-natured. Yeah. That's another one. “What is the main industry of the Jilin province in China?” There's a lot of good tea. This is a really hard day today. I don't know. I mean, this is not a good flashcard because it's too hard to skim. So I'm going to mark this and then I'll look at it at some point. Alright, I'll close this down, but yeah, there was a point in my life where I learned the origin of every vegetable. That mattered to me because then I could map back to what vegetables were endemic to a cuisine, and it's like, before the Columbian exchange of tomatoes, blah, blah, blah, blah.
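[Editor's note: the result-count trick Simon mentions—using how often a word appears as a proxy for how useful it is—can be approximated locally. A toy sketch, with an invented reference corpus for illustration:]

```python
from collections import Counter

def commonness(word: str, corpus: str) -> float:
    """Fraction of corpus tokens matching the word (lowercased,
    punctuation stripped): a rough stand-in for Simon's
    Google-result-count proxy for how common a word is."""
    tokens = [t.strip(".,!?;:").lower() for t in corpus.split()]
    counts = Counter(tokens)
    return counts[word.lower()] / len(tokens) if tokens else 0.0

corpus = "The affable host greeted every guest. The guest thanked the host."
# "host" appears twice in eleven tokens; an obscure word scores zero.
```

In practice you would use a large frequency list rather than a toy corpus, but the ranking idea is the same.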

Dan Shipper (00:22:59)

So what's the answer?

Simon Eskildsen (00:22:59)

The answer is Asia. I don't think it's pinned down much further than that, which makes sense, right? It's not really used outside of that cuisine.

Dan Shipper (00:23:07)

That makes sense. Alright, so we've reviewed the previous interview. We've kind of gone back in time and seen what's stuck and what hasn't. Now I wanna talk to you about AI stuff. So, yeah. How are you using AI? Let's start with your work.

Simon Eskildsen (00:23:27)

Definitely. The biggest thing, which I'm excited to show you because you shared with me before this that you actually don't use this tool. I would say that probably about 80 to 90 percent of my LLM use comes through this tool called Raycast. It's essentially a replacement for Spotlight, on steroids. When you bring it up, it looks like this. It can do my schedule, search my Chrome history, things like that, open applications, search my files, and so on. If you ask it something like— Okay, a question that I actually asked before this interview is that I have this little Yeti mic here. And the audio was really bad as I was doing my testing before our call. And I couldn't remember if you're supposed to speak into the side with the logo or the other side. So I asked, “For good audio quality, do you speak into the side with the logo on a Yeti microphone?” Now you can ask Spotlight this, but if you press tab, it will answer it right there. So you're just doing command-space, type your question, tab, done.

Dan Shipper (00:24:43)

What model is this? GPT-4o. So basically it's GPT-4o accessible with one hotkey command.

Simon Eskildsen (00:24:55)

That's right. You can bring it up into a full chat window and then expand, whatever. There's a history over here. I hid it because there's probably something embarrassing in there. So you can keep asking, is the Yeti a good microphone? Should I use that or buy another one, right? So, this becomes a whole thing. You can upload files and all of that here. I like this a lot for day-to-day use. ChatGPT for Mac and so on is not super interesting to me. That's a whole other workflow, but I was already using Raycast and this was a really, really easy way to do it. So let me just show you, because there's a couple of other things that I do with Raycast. You can configure inside of Raycast which model to use. You can use Claude. If you care a lot about speed, you can use one of the other models. I think these ones are hosted on Groq, right? So this works really, really well. Hotkey to bring up that full chat dialogue. You can do all of that. That works really well, but then you can also define these custom commands. So inside of here I have a couple that I use all the time. So one, for example, that I might use is if I'm cooking something. I don't really have a collection of recipes or whatever. And it might be something that I'm not doing that often. Then I have to cook according to a bunch of dietary restrictions—a hummus recipe. Then, this is a whole prompt that I've written, and it gives me this format that is the most condensed list that I can think of that honors my and my wife's dietary restrictions, things like that. And it's just the simplest condensed version. And then I'll just put this in front of myself or send it to my phone or whatever.

Dan Shipper (00:26:52)

That's really cool. And when you send it to your phone, is there a specific way you do that or you're just copy-pasting it into your Notes app or something like that?

Simon Eskildsen (00:26:57)

Yeah, I just send a text message to myself or something like that. It’s usually nothing more elaborate than that. There might be a quicker way to do it, but it hasn't bothered me.

Dan Shipper (00:27:10)

Okay, cool. So the recipe idea is basically, you can take any common prompt you're running and turn it into a recipe that's available by command. And this recipe is not actually called a recipe, it's called an AI command. And basically you put a recipe command in there so that whenever you want to get a recipe for something, it has the very specific list of requirements you have. That's actually pretty cool. I like that.

Simon Eskildsen (00:27:40)

I use it a lot. That's why I'm trying to share the things that I actually use rather than the things I've experimented with. I use this all the time. I gave a couple of examples. So yeah, you go into Raycast. You search for AI commands. We go here to “recipe.” We can edit the AI command. And then here you can see this is my prompt. I'm like, “Please create a recipe for—,” and then the argument. Use the formatting from this example. So here's an example where I just wrote about how I like to make mashed potatoes. We're listing ingredients, with optional extra ingredients put in parentheses, blah, blah, blah, blah. For optional ingredients, my wife is sensitive to this carb called fructan, please provide a substitute, specify the approximate calories, all of these things. And these models these days are so good that this just works.
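[Editor's note: under the hood, a custom command like this is a prompt template with a placeholder filled in at invocation time. A minimal sketch of the pattern—the template text paraphrases Simon's prompt, and the function is an illustration, not Raycast's actual API:]

```python
RECIPE_TEMPLATE = (
    "Please create a recipe for {argument}. "
    "Use the formatting from this example: list ingredients only, "
    "put optional extras in parentheses, suggest a fructan-free "
    "substitute where needed, and specify the approximate calories."
)

def run_ai_command(template: str, argument: str) -> str:
    """Fill the command's {argument} placeholder with the user's input."""
    return template.replace("{argument}", argument)

prompt = run_ai_command(RECIPE_TEMPLATE, "hummus")
```

The finished prompt is what gets sent to the model, so every invocation carries the same dietary constraints without retyping them.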

Dan Shipper (00:28:30)

That's great. I love that. And you don't have to flip through the chef's story about how their great-grandmother's cousin made this recipe for them or whatever.

Simon Eskildsen (00:28:45)

No, there's nothing about how they found it in a clay tablet in the attic. There's none of that. It's great. So I really liked that. And again, I think of the LLM as an average of the internet, and this is a great way to do that. And especially when you try to just boost its creativity a bit by like— I cook enough that I only really need the list of ingredients, not actually the instructions and give me a bunch of optionals that might be good, interesting. And I can quickly see, oh, that's kind of a fun idea. So you can kind of tune this to how you like to cook. And I really like that. And I just don't use Google for this anymore because this is great. I still have this book called The Flavor Bible, which essentially is a thesaurus of, hey, this goes with this. So you look up butternut squash and it's like, maple syrup goes well with butternut squash. Sage goes well with butternut squash. And ricotta goes well—right? And you just start getting ideas for how to use this in a recipe. Maybe don't combine those things unless it's Canadian Thanksgiving. But some of those things start to create really interesting things. Now, these LLMs— Again, the average of the internet has that thesaurus built in.

Dan Shipper (00:29:56)

What about “improve writing”? What's your prompt for that?

Simon Eskildsen (00:29:58)

I don't use this a lot. I haven't used it a ton. So, this is what it looks like. This is an experiment. So I don't use this a ton because I find that the standard, just improving this or giving suggestions, is good. Typically I'll dump the whole thing into Claude or ChatGPT or whatever it is I use that week. And then I'll ask it, “Here's the full document. Let's talk about this sentence. Give me some feedback.” I do that for my blog posts these days. I don't have any more particular flow.

Dan Shipper (00:30:37)

Okay. Any other commands that you think are worthwhile?

Simon Eskildsen (00:30:40)

Yes. “Define.” This is one that I've spent a fair amount of time on and I use a lot. I still review 20 highlights from books, articles, whatever, every single day in Readwise. Sometimes these will be singular words that I've highlighted that I don't know. I use this prompt to learn these words. The word that I showed earlier in the flashcard, which I think was “affable,” where the card just says “a good-natured kind of human being,” was not created through “define” here. So what I use here to define is, “I'm reading a book and I encountered this word, place or person, and this is the word. Please help me learn what this word is, the place or person or whatever it is this represents.” Then I say, “Give me six example sentences using this word. And please try to use some historical examples, something that's going to teach me something. Give me something with some well-known people from physics, computer science, geography to try to make this example sentence as educational as possible. I want to learn from this example. If it's a word that's always used in different forms, just stem the word. Also, give me some related words, synonyms, concepts, things that are related to this word. Third and finally, if you're capable of it, then generate an image that works for this word. Then I give an example of what this can look like.” Okay, Dan, give me a word.
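[Editor's note: a sketch that assembles a “define” prompt along the lines Simon reads out. The wording is condensed and the function name is hypothetical; this is an illustration of the prompt's structure, not his exact command:]

```python
def build_define_prompt(word: str) -> str:
    """Assemble a 'define' prompt: definition, six educational
    example sentences, related words, and an optional image."""
    return (
        f"I'm reading a book and I encountered this word, place, or person: {word}. "
        "Please help me learn what it represents. "
        "Give me six example sentences using this word; try to use historical "
        "examples and well-known people from physics, computer science, and "
        "geography, so each sentence teaches me something. "
        "If it's a word that's always used in different forms, stem the word. "
        "Also give me related words, synonyms, and concepts. "
        "Finally, if you're capable of it, generate an image for this word."
    )

prompt = build_define_prompt("affable")
```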

Dan Shipper (00:32:12)

I want to do affable.

Simon Eskildsen (00:32:15)

Affable. Let's do it. “Define—” Oh, it's because it does it from my clipboard. So we'll just do it like that. And then “define.” So, “Affable describes someone as friendly, good-natured, easy to talk to.” And then, “Affable leaders like Gandhi often gain widespread respect and admiration due to their approachable and kind nature. Computer science: an affable user interface is one that's easy to navigate. Historical figures like Franklin were known for their affability.” So it's just like, I love this because it starts to— Oh yeah, Franklin. It connects with that chunk of knowledge immediately. And it just makes it much more fun to create a flashcard, right? It's like, oh yeah, Feynman, I haven't thought about him for a second. He seems like an affable guy. So immediately you're making these connections, and you may be learning something. So, this has really improved how fun it is to look up some of these words. When I see a word now while I'm reading, I get all jittery to run this prompt because it just works so well.

Dan Shipper (00:33:16)

I have a couple more words for you. I want to try this out. Let's do lambent. L-A-M-B-E-N-T.

Simon Eskildsen (00:33:23)

Okay, I have no idea what that word means. “Something that glows or flickers softly, often implying a gentle, radiant light.” Oh, you were really indulging in some recent writing, huh?

Dan Shipper (00:33:36)

I love words.

Simon Eskildsen (00:33:37)

“Lambent flames danced on the surface of the water, reminiscent of—” Ooh, I'm drawn in here. “Isaac Newton once observed that the lambent glow of a candle could reveal the nature of light and color, leading to groundbreaking work in optics. The lambent auroras in the polar skies are caused by charged particles from the sun interacting with Earth's magnetic field.” Right? This is pretty good.

Dan Shipper (00:34:06)

That's good. It feels romantic. It also feels like reading the diversity of sentences, I'll remember it better than just the definition, which I think is what you're going for.

Simon Eskildsen (00:34:21)

Exactly. The prompt also tries to make the images, or the sentences, easy to visualize, because that's also a great mnemonic aid. “The lambent auras in the polar skies.” I might remember it from that alone.

Dan Shipper (00:34:37)

Yeah. Wait. Go down to “related.” I want to see the “glowing, flickering, radiant.” Okay, cool. Alright. I have one more word and then we can move on. Are you ready?

Simon Eskildsen (00:34:45)

Yeah.

Dan Shipper (00:34:48)

Eigengrau. E-I-G-E-N-G-R-A-U.

Simon Eskildsen (00:34:49)

God, is this German?

Dan Shipper (00:35:08)

Yeah, it's German.

Simon Eskildsen (00:35:14)

“Eigengrau is a German term that translates to ‘intrinsic gray.’ It refers to the uniform dark gray background that many people report seeing in the absence of light, often described as ‘brain gray.’”

Dan Shipper (00:35:31)

Isn’t that a cool one?

Simon Eskildsen (00:35:34)

Yeah, how have you used this?

Dan Shipper (00:35:35)

I don't. I just have a list of words I like and there are these two on the top of my list.

Simon Eskildsen (00:35:41)

Eigengrau. “While Eigengrau is not a true visual input, it highlights the brain's role in creating our visual world similar to how phantom limb sensations work for amputees.” I think it's just really showing the strength of LLMs where you take the average of human knowledge and then you just go nuts on associations, but draw it in a particular direction in the latent space around things that are educational and connected. I love this prompt.

Dan Shipper (00:36:13)

That's great. That's awesome.

Simon Eskildsen (00:36:20)

Another one comes from growing up in Northern Europe: for North American audiences, the writing is sometimes a little bit too direct. So I have this suggestion that makes it friendlier, adds an emoji, removes profanity. I don't know if that was just me testing things out or if it was a problem at some point. They have a bunch of standard prompts, but frankly, these prompt templates and so on are not something I've used much. I've been very skeptical. It's only “recipe” and “define” that I really like. Other than that, I think the LLMs have gotten good enough that you don't have to worry too much.

Dan Shipper (00:36:58)

Yeah. That makes sense. Cool. I love it. Okay. So I know you also have a bunch of ChatGPT and Claude stuff to show. So let's move on to that.

Simon Eskildsen (00:37:05)

I mean, I'm subscribed to all the tools, right? I feel like being in AI, you have Perplexity, you have Claude, you have ChatGPT, and you pay for all of them. It's just part of the business now: you're spending $100 a month on these various subscriptions, jumping around them, getting inspired. ChatGPT is what I use for the most part. Now I just sort of vacillate between Claude and ChatGPT, and ChatGPT doesn't have a search function, so I couldn't find the logs. But I have some fun examples of the kinds of things that I use so much at our cabin. It came with a really old, big freezer. I have no use for a freezer somewhere that has two nines of uptime of electricity. Rural Quebec is not known for that. Actually, to the point where I have a script that constantly pings it, and then I have a website for the cabin that will show a GitHub-style uptime chart of how often the electricity is out. Regardless, you cannot keep anything in a freezer somewhere where the electricity goes out once every few weeks. So, I wanted to convert it to a fridge. And I was like, ah, maybe this is a fun project, you know? You have so much time before you have a newborn. And so I asked ChatGPT how I might go about this, because I've never done anything like that. I'm not that handy. And ChatGPT says, well, you go on Amazon, buy this device that's mostly used for homebrewing, you plug it into the wall, you plug the freezer into that, and then you put the temperature probe inside the freezer and set it to five degrees Celsius. I don't know what that is in Fahrenheit. And then the freezer will just turn on when it's above five degrees and turn off when it's below five degrees. And now your freezer is a fridge. I thought I was about to buy a fridge for, what, $1,000 or something like that. Now I have this, and, yeah, it creates an enormous amount of condensation because the compressor is going so hard, but that doesn't matter for this fridge because it's just for drinks.
So, fantastic. A $20 device from Amazon and it's converted. I would never have thought to do that.
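The controller logic described here is simple bang-bang thermostat control. A minimal sketch of that logic, assuming a hysteresis band around the setpoint (real homebrew temperature controllers add one so the compressor doesn't rapidly cycle; the 5 °C setpoint comes from the conversation, the band width is an assumption):

```python
def compressor_command(temp_c: float, running: bool,
                       setpoint_c: float = 5.0, band_c: float = 1.0) -> bool:
    """Bang-bang thermostat: return True if the outlet powering the
    freezer's compressor should be on. The hysteresis band keeps the
    compressor from switching on and off every few seconds."""
    if temp_c > setpoint_c + band_c:
        return True   # too warm inside: power the compressor
    if temp_c < setpoint_c - band_c:
        return False  # cold enough: cut power
    return running    # inside the band: keep the current state
```

A controller like this, with the probe inside the freezer, is all it takes to make a freezer hold fridge temperatures.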

Dan Shipper (00:39:30)

That's amazing. Wait, explain it to me again. So you're taking the temperature probe from the device that you just bought or the temperature probe from the original freezer and putting it somewhere else?

Simon Eskildsen (00:39:41)

You have the freezer; you plug the freezer into the wall, and it's a freezer. Now you unplug the freezer and plug in this device, which you can essentially look at as an extension cord with a temperature probe. You plug the freezer into the extension cord and put the probe inside the freezer, right? And so it just turns the freezer on and off. You can even have it cool and also heat, if you have a device capable of it; I think it's used for homebrewing, where you need fermentation temperature ranges.

Dan Shipper (00:40:18)

That's amazing. Is the probe wireless or how do you get it in the freezer without it breaking the seal?

Simon Eskildsen (00:40:23)

It breaks the seal.

Dan Shipper (00:40:25)

Okay.

Simon Eskildsen (00:40:27)

I think especially in rural Quebec, you become very resourceful. So, this is a good hack. This is how you write the best software too.

And then, what else do we use it for? I mean, writing Quebecois French is an art in itself. I don't know French, but my wife uses it all the time to convert something into Quebecois French. The other biggest thing I use it for in business is redlining and stuff like that. When talking to the lawyer, it's easier to just send a draft paragraph to them and be like, hey, something along these lines, and then they'll edit it into the drafts. I use it constantly. I think for most people, it's much easier to edit something, especially if it's something they don't know a ton about, than it is to actually write the first draft. And for something legal, that's certainly the case, right? So you're like, okay, I need to explain the exact algorithm by which we measure the uptime of Turbopuffer, in the way that we think makes sense, because what the lawyer came up with didn't make sense. Let me just send it back in legalese, and you minimize round trips like that. Also, when talking to accounting, I use it constantly, because I don't remember what some term means, right?

And then again, it goes into the sink of the flashcards over time. But often a lot of these professionals will talk to you as if you already know everything about accounting and everything about legal and whatever. So when you have those conversations with vendors in some other vertical, I find it extremely useful, and then it makes it into the sink of the flashcards later.

Dan Shipper (00:42:15)

Yeah. I have that too. For a lawyer, for example, it's like I need to push myself into the latent space of lawyer language. And once I see that language, I can write in it; just having an example close to what I want is enough. And that's one of the things I think about a lot with ChatGPT and Claude: it exposes how many dialects of English there are.

Because now you can do subtle translations between dialects that we didn't even think were dialects. From a tech guy to a lawyer, or a small business owner to a painter, or whatever. We didn't think those were different forms of English, but they really are. And ChatGPT is amazing as a universal translator for those kinds of translations.

Simon Eskildsen (00:43:05)

I couldn't agree more. I think you put it brilliantly: I have no idea how to access the legal latent space unless someone just puts me in it. And then I edit, and it's like, “hitherto,” and yeah, let's go. I think it's also great to iterate on copy, especially if it's very crisp copy, and to just get suggestions. For this kind of stuff, I'm not using any particular tools; depending on the context I'm in, either I'll be iterating inside of ChatGPT or Claude, or just using Raycast, right? Just give me some other examples. It's rarely the thing it spits out that I end up going with, but again, it just comes up with words and things like that that I can use.

The other thing that I found really useful is physio exercises. So, I had this problem with— I had tennis elbow or golfer's elbow; I think it depends on which side it's on. I don't know. Ask ChatGPT. And I was just like, okay, I'm going to do an experiment instead of going to physio. I'm just going to do whatever it tells me to do for a week and see if it disappears. And it did, right? It was just like, oh, you have to do these wrist curls. And I'm like, okay, great. That saves me a round trip and a $100 physio fee. And I found that too with another problem I have, which I think a lot of people who work at a desk have: just tight shoulders and a tight neck. And I always thought, okay, I need to just stand more, and then I have to roll it out and things like that. And at some point I was like, I have had this problem for five years now, and it's not really going anywhere. And again, I was just like, okay, let me just do these exercises that ChatGPT suggests. I'm not even really going to understand them.

I'm just going to do them somewhat blindly for two weeks. And it's been so much better, right? Everyone's just like, okay, soften the tissue, but it's like, no, actually strengthen these muscles. Right? So, I find that it's pretty good for that. I do find for those things, for exercise, stuff like that, you kind of do need to point it in a pretty tight direction. And it's not always amazing at reasoning about how it got there, but again, as an average of the internet, these are the things that work for this condition. It's pretty good. 

Another problem I had that no one really could help me with was this random thing where I just get blurry vision for three or four hours and go half blind. It happens randomly every few months, and I couldn't figure out what it was. And when I talked to the optometrist about it, they were just like, oh, it's just stress. And I'm like, well, so this is just going to be a problem for the rest of my life? There's nothing I can do? Because it's quite debilitating, especially if it hits at a bad time. If we're driving or whatever, I just have to pull over and wait until it's gone. And apparently it's called an ocular migraine, and it can happen. One of the triggers for me is aspartame, and, okay, well, that's kind of concerning. But so that's another one. So I mean, ChatGPT has just become the thing that I ask all the time.

And I think that Raycast on the computer has made a huge difference in just asking everything. And then on my phone, on the home screen, I have the voice chat for ChatGPT. My wife uses that a lot. She’s always asking. She uses it a lot for gardening, but that's been really good. And I'm really looking forward to their next rollout. Do you have access to that yet?

Dan Shipper (00:46:36)

I do. It's really great. Yeah. I think if you're already a big voice mode user, you're going to love this.

Simon Eskildsen (00:46:41)

I think I'm also really excited for that part. For my daughter, I've seen that there are some toys where you can chat with these models and ask them questions. And I feel like she's going to grow up with these tools in a way where it's going to feel incredibly natural that she's just talking to Wally the walrus. And there was that really cute Claude, right? The one tuned to the Golden Gate Bridge. And hopefully we get there. That's what makes me most excited about safety: can I just hand this to my daughter, and she will just ask, and it will just help with the curiosity? That's really interesting to me.

Dan Shipper (00:47:24)

I think you'll really love the new voice mode because— So, I have a video about this; I'll send it to you after this on YouTube. I'm using the new voice mode and it's really good for reading. Because what I do is I just turn it on, and then it's just sitting there listening, and when I'm reading something and I encounter a word I don't know, I'm like, hey, what's this word? Or it's a historical figure, or I need more details on this particular battle in this book, or a philosophical concept I don't quite understand. And it just gives me the answer. And it's actually surprising, when you're reading, how many things there are where you're like, I don't really know what that is, I kind of do, but it's too much effort to ask.

And when you have a ChatGPT kind of voice mode assistant that's listening, it lowers the bar so much that you're just asking all these questions. I think you learn way more. It's really fun.

Simon Eskildsen (00:48:16)

I like that a lot. And especially if, I mean, APIs are going to be built on top of these things. But it's like, if those can make highlights and then make it into my Readwise or whatever down the line, that's pretty exciting to me—what I've always wanted.

And I think it's going to take a little bit to get there. I really don't care for VR and AR as entertainment devices. There are two things that excite me about them. One is, can I use it instead of my monitor setup? When is this good enough that I can wear it all day and code and work inside of this thing, right? For video calls, I don't know when that will stop being awkward, and it's not super exciting to me. But the second thing that I've always been excited about is that the visual stimulus of VR and AR would help you remember so much better. That's a medium for me to learn in, which is quite interesting if I'm already in this environment to work. I don't know how likely it is, but we were talking about these words like Eigengrau, and what was the first one, the one with the light? If I can see that, it's going to be very hard for me to forget, right, if we can generate that kind of imagery.

So that's really, really interesting to me as well. And of course, these things will play together. And that might take a little bit longer. Again, I don't really care about it for any other use cases than that. But those two use cases do excite me. And it seems like that's starting to become the adjacent possible.

Dan Shipper (00:49:50)

That's the thing that I think people miss about AI stuff, and maybe about how technology interacts with human beings in general. A really good example is the ability to read: it actually changes your brain. You take some of the stuff from your visual cortex and reorient it to help you read, and that makes you better at analytical thinking; it makes you more likely to see the particulars of a scene instead of a more universal, holistic perspective. So reading actually changes humans. And I think having language models will do something similar, in a way that I don't think is scary. I think it's actually really cool.

For your daughter, or my nephew, who's a year-and-a-half, almost two now, being in a world where any question that you ask has an answer—an immediate answer—and no one's getting upset at you for asking, is a crazy upgrade to children's brains, you know? Because previously, well, a year-and-a-half or two is still a little bit early, but three- or four-year-olds are asking all these questions, and parents are like, I don't know. I don't know why the sky is blue, or whatever. And now all of those questions are answered, and maybe they're not even just answered; you're stepping into a scene that helps you understand it in a totally new way. And I think people are worried about, oh, AI is replacing us.

You're going to have bionics or whatever. And it's actually like, we don't even have to implant them into our brains. We will be different people. We will sort of flourish in this new human way that was previously impossible because the conditions weren't there. And that makes me really excited.

Simon Eskildsen (00:51:36)

Yeah, it makes me really excited too. I'm sure you were one of those kids too, that drove your parents crazy with questions. At some point I'd ask my parents, and I remember this because I think I was four or five years old, and it's one of my first memories, where I would just ask questions like, Mom, what's the biggest plant in the world?

And it's just like, oh my god. I think it was like, I don't know, Simon, shut up, you know? And then I remember, I just got the Guinness Book of World Records. And it was just, well, this will shut you up for a while, and now he also knows who has the biggest nose ring.

Great. But I think that would be very stimulating for a lot of kids. There's also one of these things— So my mother tongue is Danish, and my grandparents, who I'm fortunate enough to still have alive, only really speak Danish. It's very important to me that my daughter also speaks Danish, but it will be a challenge for me to be the only speaker here, right? I live in Canada, in a tiny community, and no one around speaks the language. I don't really have any friends here who speak it. So other than FaceTiming with her Danish family, there's not going to be a lot of exposure.

So yeah, we'll set all the UI interfaces to Danish. But maybe we can also set Wally the talking walrus, powered by whatever model, to only speak with her in Danish. Or maybe it should speak to her in Mandarin or Thai or whatever. I think one of the things that might be interesting for this generation as well is, and I don't know how true this is, I haven't read the studies on it, but I feel like if all kids did was just learn languages before the age of 10, they could catch up on all the math and whatever they missed when they're 10 or 11 in four weeks. But the language thing, hearing those sounds, is just so special.

And I can't say “th,” and that will probably follow me for the rest of my life, because I just wasn't exposed enough to that sound until the age of eight, or whenever that gets locked in. And I think it would be exciting if Wally the walrus talks one language on Tuesdays and another on Thursdays, but primarily Danish, right?

I don't know how that changes things, but I think it is interesting. One thing about pronunciation that I find funny is that, again, because ChatGPT is such an average of the internet, ChatGPT in Danish actually has an American accent. I don't know how true this is in other languages. I'm sure for French or Spanish or whatever it's actually good, but the Danish one has an American accent, which is just hilarious.

Dan Shipper (00:54:24)

That's wild. I promise we're going to get back to some more AI use cases in a second, but I think this is too good not to share. I think you're going to love this. So I ran into this startup maybe a year ago. You said you can't say th's, and their whole thing was that this actually might be more flexible and plastic than you think.

And the reason why, at least according to the startup, you can't say th's is that it's really hard for you to hear how off you are from what the actual sound should be, because you didn't learn that pattern when you were growing up. And what they did is they had this tool where, if you were practicing th's, for example, it would show you the waveform of the sound, and then you would speak to it.

You would say th, or whatever the word is, and it would show a dot indicating how far away you were, in real time, from that sound. And then you just practice all the time, and you can see these micro-increments as you're getting closer and closer and closer, like tuning a string. And apparently if you do that, even non-native speakers who are past the critical period can learn to speak fluently. And I think that's incredible.
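One way a tool like that might score "how far away" a spoken attempt is, purely as a sketch: the startup's actual method is unknown, and comparing normalized magnitude spectra of audio frames is an assumption chosen for simplicity.

```python
import numpy as np

def spectral_distance(sample: np.ndarray, target: np.ndarray) -> float:
    """Rough distance between two equal-length audio frames:
    Euclidean distance between their normalized magnitude spectra.
    Identical frames score 0; the learner tries to drive this toward 0."""
    def spectrum(x: np.ndarray) -> np.ndarray:
        mag = np.abs(np.fft.rfft(x))       # magnitude spectrum of the frame
        norm = np.linalg.norm(mag)
        return mag / norm if norm > 0 else mag
    return float(np.linalg.norm(spectrum(sample) - spectrum(target)))

# Toy signals standing in for a target phoneme and a learner's attempt.
t = np.linspace(0, 1, 800)
target = np.sin(2 * np.pi * 220 * t)   # "correct" sound
attempt = np.sin(2 * np.pi * 240 * t)  # slightly off pitch

score = spectral_distance(attempt, target)
```

A real system would compare perceptual features rather than raw spectra, but the feedback loop is the same: show the learner the number, let them nudge it down.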

Simon Eskildsen (00:55:45)

That’s a nuts way to approach it. I think at this point there's a little attachment to not getting completely rid of it. Any native English speaker can hear that I was not born in the new world, but that's just part of my own history, so I'm not going to put in the work. Still, it's so interesting that it's possible. And there are sounds where, if you can't say them, it's really problematic. I just sound a little bit like a toddler sometimes when I say a th while speaking fast, and it turns into an F. But I definitely resonate with— The International Phonetic Alphabet is really fascinating reading if you haven't read up on it. It has this chart, and I love a good chart that simplifies something really complicated into a system that actually makes sense, like the periodic table or how the planets are organized around the sun. These are all systems, and a good system not only categorizes existing knowledge but also predicts future things.

The periodic table is like, well, there might be a heavier element, or we're missing one here, or whatever. And the International Phonetic Alphabet is that for sound: the back of your tongue is here, the middle of your tongue is here, the front of your tongue is here, your nose is doing this thing. All these parameters, and now you're making this sound. And so there's this space of permutations, and every language has, you know, I'm just making this up, but I think somewhere between 20 and 40 sounds. And different dialects also have different types of sounds. So, for example, in Danish, we have this sound that's also in my middle name, called oeh, right?

So my middle name is Hørup. This is not a sound in most English dialects. You probably can't say this sound. Maybe if you spoke—yeah, good luck. And you know, one of my friends said, Simon, saying your middle name is a little bit like trying to barf while saying Europe—and that kind of works. Not so much when you have to spell it over the phone, but it works for the pronunciation.

But then, what I discovered when I was looking into the International Phonetic Alphabet is like, oh, actually in New Zealand English, they have the oeh sound. So they will say, I can't imitate it properly, but they will say something along the lines of a bird, with the oe drawn out. So they're able to say the sound, whereas in North American English, you say bird, right? It's more of an I sound. 

And French also has it, so they have an easier time saying it. But every language is just a mishmash of these 20 to 40 sounds, and I was reminded of it again when we were choosing my daughter's name, right? Because you want to choose some syllables that are roughly the same in the two languages and combine them together. You know, Simon is “Simon” in English, but in Danish it's “Simon,” right? And it sounds very different. So these sounds are very finite, and every language is just plopped together from whatever, 20 or 40 of them. Learning them late is hard, but it sounds like it's not irreversible.

Dan Shipper (00:58:52)

Yeah, I love that. I had no idea that they had mapped it out and that it's like a periodic table of all the different sounds. That's really amazing. I'm a nerd for that stuff. So, I want to get into a couple more AI things that you're going to share. I know you said you use Notion AI a lot for writing. Tell us about that.

Simon Eskildsen (00:59:15)

Yeah, this is a bit of a more recent thing. And generally, when talking about tools, I like to not talk about them until I've used them for several months. But I think with AI tooling, it's changing so rapidly that it's worth sharing some of these things earlier. So, writing inside of ChatGPT, like iterating on writing, feels a bit awkward.

I think Claude is trying to change that with the artifacts. But Notion AI, I mean, that's where I do most of my notes and so on these days. And I just have a journal page where I'll just write about whatever's going on, or I'm trying to think through a problem. And they just have a really good integration that pulls in the necessary context, right?

So they're using the kind of thing that Turbopuffer powers, right? They're using some kind of semantic search behind the scenes to pull in context from around your workspace. So when you're writing and iterating on your writing, it can pull all of that in, and that's super, super interesting.

And I found it great: hey, I was having this discussion with someone, and I feel like maybe I didn't represent myself well in it, and it just gives you feedback and completes the writing. That kind of conversation has been really, really valuable to have. And I think Notion AI does it quite well. So, I've found that really valuable to use.

Dan Shipper (01:00:06)

Yeah, I actually use Notion AI a decent amount for writing in particular. For example, preparing for these podcasts, I will often have a Notion doc with the run of the show and then when I had Tyler Cowen on, I was just like, okay, tell me all of Tyler's most recent books, gimme a summary, right?

And then I was like, here are some of the points I'm going to make; which ones relate to these books? And it allowed me to create a document really fast with my ideas and a summary of his books and ideas, so that I could talk to him about it on the podcast. I could have totally done that with ChatGPT or Claude, but it would have been much harder to do. Having it in context is really helpful.

Simon Eskildsen (01:01:23)

Definitely. I mean, that's part of why we created Turbopuffer, right? It's clear that within the context window of what you're operating the model with, we can get a lot done, and I think even that we have not harnessed the full effects of. But I think Notion is showing that, hey, actually, we can pull in a lot more relevant context: from years back, from other people, from things that have happened, and perhaps even from how documents have evolved, and you can get something even more interesting. And I think that's an exciting future. There's a lot of security and safety that we have to get right, but that really can help augment. And I think Notion AI is a really good example and probably one of the tools that is the furthest along on that.

Dan Shipper (01:02:14)

Yeah, that's something I like to do. I'll take a bunch of journal entries and throw them into Claude or ChatGPT and be like, okay, how did I change over these last years? What's my trajectory, or what do you notice about me—all that kind of stuff. Or, if I'm journaling about a decision, going back and forth on what to do, I'll put all my journal entries about that decision into it and be like, write the journal entry as if I decided A, or write the journal entry as if I decided B. And it's so helpful for that kind of thing: to pick up on the little things in how you think and speak and talk, and then project that forward into the future.

Simon Eskildsen (01:02:55)

Absolutely. And there are tools coming out where— Actually a tool that is using Turbopuffer is this app called Dot that's trying to create a journal where you draw in the necessary context from your life to do basically what you're talking about there.

We're also seeing another company that uses Turbopuffer, called Shapes, where you have these shapes that have personality, and they join your Discord, and you talk to them, and they remember what's happened and what dialogue has been going on. And I think that's really interesting.

And I think also, as context windows grow, these tools will get better at pulling in context from billions of documents to do really interesting things, and the tools will grow alongside that. But it will take a while. Even just using completions, or Raycast on your computer, is something that most people are not doing. Most programmers are not even doing much beyond using Copilot to complete, rather than actually having a conversation with their code. So, it's as much an evolution of us as it is of the tools. But I think it's very, very exciting to be a part of.

Dan Shipper (01:04:08)

Totally. So what else? What else should we talk about?

Simon Eskildsen (01:04:18)

I think there's a couple of other places where I use the LLMs on a daily basis. So one is there's this tool called Superwhisper. Again, it's one of these tools that I haven't had the pleasure of using a lot. But it will allow you to just talk and then it will summarize. I'm experimenting with using that a bit more for journaling. So, just tell it like, hey, these are the things that happened today, or these are the things I'm going to do today. I haven't fully incorporated it into my workflow, but I think it is interesting and the tools are constantly getting better. The main problem I'm having is still that it takes a little bit too long for the transcription and for it to pass to the LLM and do the summary.

I want that to be a couple hundred milliseconds, but it takes a couple of seconds at the accuracy that I can tolerate, I guess because of my accent or whatever. So, I think that's interesting, but the tools are still a little bit too slow. But that's fantastic, because it will be fixed within the next six months. Do you use that?

Dan Shipper (01:05:10)

I don't, but I've heard great things about it. I just have not tried it myself.

Simon Eskildsen (01:05:15)

Yeah, I think it's still a bit immature unless you really want it. And I've lived on my keyboard since I was seven years old, so I can type faster than I can speak, so it hasn't been a huge issue for me. But I think for a lot of people that's huge. And probably part of where we're going to see it more is if the dictation gets better in iOS in this upcoming release. I'm not on the beta, so I don't know, but I would assume that it does. And when that makes it into the workflow of how we write messages and things like that, then I think it will spill over more. But it's just on the edge, I think, and it will take a little bit before it gets there for everyone, but it's definitely getting there.

Dan Shipper (01:06:01)

Yeah, what I want is— So, people send voice notes right now, and I think voice notes are really helpful when you want to communicate something that's sort of complicated to someone else and you don't want to sit down and actually structurally write out the thing. But what I want is a voice note where I have a conversation with an AI, and then I'm able to basically send the AI to someone else and have it explain things to them very simply, maybe in text first: this is the summary of what I said. So it's doing all the structuring for me, but it has all of the background context of my whole conversation, so that if people have follow-up questions, they can just ask the AI, and it'll be like, well, in some other part of the conversation that didn't make it into the structured summary, we dealt with that, and here's what Dan said. And I feel like that would really cut down on lots and lots of emails and Slack threads and text messages and back-and-forth, when you have something that's smart enough to answer basic questions. Usually when you communicate something, you're only communicating the tip of the iceberg. But I think AI allows you to send the entire context, present only the tip of the iceberg, and then reveal different parts as necessary.

Simon Eskildsen (01:07:17)

Yeah. So essentially you ship the TLDR and then the little model comes along with the FAQs, right? And it has some confidence threshold on whether it can answer with the context it has or whether this should be escalated. I think that's an interesting idea. Now I've gotten used to Superhuman for email, and I've started using their prompts more and more to reply to emails. And now they're shipping features where you can ask the AI, things like that. I hadn't thought about what the step after that was. But this almost feels a little bit like a support agent, right? Where it has some confidence of when to escalate or not. The challenge is going to be to make it still feel authentic. It needs to be part of the client where you ship along this thing, and it's like, hey, I noticed you're going to ask about this. Actually, this is probably the answer. Do you still want to ask? I think that's super interesting.
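The pattern Simon sketches here could be modeled as a message that ships its summary along with the full conversation context, plus a scoring function that either answers a follow-up question or escalates it back to the sender. This is a minimal illustrative sketch, not any real product's API; the class names, the word-overlap relevance score, and the threshold value are all made up for demonstration:

```python
from dataclasses import dataclass


@dataclass
class ContextualMessage:
    """A message that ships a TLDR plus the full background context."""
    summary: str
    # (question, answer) pairs captured from the original conversation
    context: list[tuple[str, str]]


def answer_or_escalate(msg: ContextualMessage, question: str,
                       threshold: float = 0.5) -> tuple[str, str]:
    """Answer from context if confidence clears the threshold, else escalate.

    Uses a toy relevance score (word overlap between the incoming question
    and each stored context question) as a stand-in for a real model's
    confidence estimate.
    """
    q_words = set(question.lower().split())
    best_score, best_answer = 0.0, ""
    for ctx_q, ctx_a in msg.context:
        ctx_words = set(ctx_q.lower().split())
        overlap = len(q_words & ctx_words) / max(len(q_words), 1)
        if overlap > best_score:
            best_score, best_answer = overlap, ctx_a
    if best_score >= threshold:
        return ("answer", best_answer)
    # Below the confidence threshold: hand the question back to the sender.
    return ("escalate", "Forwarding this question to the sender.")
```

For example, a recipient asking something the original conversation already covered gets an immediate answer, while a question outside the captured context is escalated back to the sender; in a real system the overlap score would be replaced by the model's own confidence in its answer.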

I think we're still so early in how this is going to be used. I guess there are maybe two schools of belief: one of, okay, we get AGI and who cares? And then there's a school of, well, the tools are just going to keep getting better and people are going to learn how to use them. I don't know which timeline we're on. I think it's really fun to be part of timeline number two for as long as that lasts, and that might last a very, very long time. It might not; I don't care. I'm having fun on that timeline. So I think that's really interesting for messages.

The other thing where I use LLMs a lot is just through Readwise Reader. So when reading, there are things that you do in-line: looking things up, defining words, and asking questions of the document. Then of course, inside of my editor, I use a tool called Supermaven to do completions. And I use some plugins for Vim to try to emulate a little bit of what the Cursor editor does. Cursor, I think, is by far the best AI code editor. They actually use Turbopuffer behind the scenes, so they're great friends. And I really want to use Cursor, but I can't take the latency of VS Code. So I'm still in Neovim, and I stitched together some plugins to send context to different models. But hopefully that's going to get solved: either I'll move to Cursor at some point or stick with some of these plugins. When you've been using Vim for 15 years, it's very difficult to go to something else.

Dan Shipper (01:09:57)

That's great. Yeah. I use Cursor. I love it. I'm not as Vim— What's the word for that? I'm not as Vim-pilled as you, maybe. But yeah, I think Cursor is awesome. It gets a little bit weird when it makes suggestions: if you're having it code in multiple files, you have to click into each file and then click to apply the change or whatever. So there's some stuff to be worked out there, but it's sort of clearly the future of these kinds of interfaces.

Simon Eskildsen (01:10:32)

I think they're just showing that— Every few months they make some new release of, hey, this is how we're going to do it. And they really do seem to be paving the way. One of the hot plugins now is just trying to do Cursor-like things in other editors. So I think they're doing really, really interesting things on that timeline of, let's just make the tools better. But I don't buy the VCs tweeting that these companies are 10 times faster because they're using AI. I don't think Justin uses any AI, and he's still doing very, very— I think it helps and it augments, especially for people creating that first draft and having those conversations. But I don't think it's that massive step change yet. These things are getting a few percentage points better every single month. But I don't believe the whole 10-times-better thing yet. That's certainly not the case.

Dan Shipper (01:11:33)

I think it sort of depends on who you're talking about. So if you're already a 10x engineer, maybe not, but if you can't code at all, I think it makes you 10x better than you were.

Simon Eskildsen (01:11:45)

I mean, infinitely, right? Because you were just not going to bother. And so yeah, I think that's true. The same as, okay, now I can become a 10x retaining-wall correspondent with my contractor. So I think it lifts you from novice, or less than that, to being able to converse with an expert incredibly quickly. But for the experts, it becomes mostly about doing things a little faster; basically more about typing faster. It's not substituting for a conversation about which data structures we use for this, or which algorithm here, or how to make these kinds of trade-offs, right? A lot of that comes from things that are just not the average of the internet and are therefore really hard to discover.

Dan Shipper (01:12:31)

Totally. Well, I feel like I could keep talking to you forever, but we are pretty much at time, Simon. This was, as always, a pleasure. We've got to do these interviews more often. Thank you so much for coming by and telling us what you're up to.

Simon Eskildsen (01:12:48)

Absolutely. Dan, it was really fun to nerd out.

Dan Shipper (01:12:50)

Cool. See you next time.

Simon Eskildsen (01:12:51)

See ya.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.
