
Sponsored by: Kosmik
Forty years ago, the Macintosh revolutionized the way we work with computers, integrating everything a user could need on a metaphorical desktop. Forty years later, it's time to rebuild our digital home. Meet Kosmik, an all-in-one workspace where you can seamlessly integrate text, images, videos, PDFs, and links.
TL;DR: Today we’re releasing a new episode of our podcast How Do You Use ChatGPT? I go in depth with Tyler Cowen, professor of economics at George Mason University, one-half of the popular Marginal Revolution blog, and deep thinker about the impact of technology on life, work, and the economy. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.
When did you first think about the impact of AI on your job?
My best guess would be at the tail end of 2022, when OpenAI released ChatGPT.
Tyler Cowen has been pondering this question for a decade. In 2013, he wrote a book about how AI would change the future of work called Average Is Over. In it, he argued that the economy will shift to reward those who can enhance the capabilities of technology. Today, his incredible foresight is more than words and theories—it’s our reality. The “intelligent machines” he wrote about are well and truly here, so who better than Tyler himself to light the way ahead?
Tyler’s day job is teaching economics at George Mason University, but he also moonlights as a prolific writer. He co-writes the leading economics blog Marginal Revolution, where he has published daily for over 20 years. He is also the author of 17 books, the latest of which is an AI-fueled interactive experience that analyzes the lives of influential economists and crowns one as the greatest of all time.
In this episode, we explore Tyler’s predictions about how AI will impact the economy, distilling decades of contemplation into insights. We watch him interact with ChatGPT and learn how he uses it as a universal translator when he travels, a reading companion, and a research tool. We also see him use the AI search tool Perplexity and walk through how he fits the two tools together in his workflow.
This is a must-watch for anyone who is motivated to understand and thrive in the future of work with AI. Here’s a taste:
- The long and short of AI. Tyler thinks that the immediate effect of AI will be to level the playing field, boosting the abilities of those who currently perform at an average or even below-average level. He suspects, however, that the long-term effects will be considerably less egalitarian, where “the people who can start projects will be the major beneficiaries. They'll have better record keepers and translators and mathematicians and coaches and colleagues and advice givers.”
- AI for human coordination. Tyler believes that AI provides great advice on how to manage people, and that the people who leverage it for this purpose will be very productive. “A lot of companies might become much smaller, but still rather potent…I think Midjourney, when it first had its breakthrough, had, what, seven or eight people working there,” he says. He also wonders about a darker implication: He thinks an underappreciated risk of AI is its ability to coordinate poorly run terrorist organizations, increasing their chances of creating harm.
- ChatGPT as a universal translator. Tyler says his use of ChatGPT varies significantly between his iPhone and his laptop. On his iPhone, he uses it for personal things: as a universal translator in Tokyo, to get recommendations at a Paraguayan restaurant in Buenos Aires, and to identify species of birds and plants in Honduras. (It’s no surprise that when I ask him later in the episode how his wife would describe him in five words, one of his responses is: loves to travel.) Tyler’s laptop GPT use centers on far more academic purposes, like learning about obscure history.
- Pushing ChatGPT into smart corners of the internet. Tyler has discovered the benefit of using specific ChatGPT prompts: the AI will go off to search an intelligent part of the ether and likely come back with a better answer than what a generic query would yield. “So if I ask it a question, say, from economics, what is inflation? The answer is not wrong, but it's not really better than Wikipedia because the question is too general. If I ask it a question, What is inflation? Answer as would Milton Friedman… [Y]ou're just, again, pointing it towards smarter bits, you know, in the matrices… [T]he stuff it knows connected with Milton Friedman is smarter than the stuff not connected to Friedman.”
- Perplexity AI v. ChatGPT. Tyler was reading a book about the Byzantine empire (as one does when one’s a professor, I suppose) when he found himself curious about the rate of inflation under one of its emperors. He turned to ChatGPT first, but it didn’t give him a clear answer. He then asked Perplexity AI the same question, and voila—in seconds, he had an answer with JSTOR and Reddit sources to back it up. “Google is for links, AI is for learning, Perplexity is for references, and [it] sometimes has context that GPT doesn’t because it's looking to tie it to references,” he says.
- ChatGPT in the classroom. Tyler is the coolest professor on the block. He’s teaching a class on the history of economic thought, and has incorporated ChatGPT into the curriculum. “GPT-4 is on the reading list. Everyone in the class is required to pay the 20 [dollars] a month to subscribe to it,” he says. This isn’t the first time Tyler has brought ChatGPT into the classroom: “I taught a class the year before to law students where I made them all write one of their three papers using GPT. Not solely, but you and GPT together figure out how to write the paper.” According to Tyler, teaching students the limits and scope of AI is an enriching experience.
- The Tyler Test. In the last section of the interview, I create a custom GPT based on Tyler’s personality, Tylerbot. We put it to the test by asking Tyler and his bot the same questions, and seeing how the machine matched up. Tylerbot got two out of three answers correct. Even the one it got wrong wasn’t technically incorrect; it just didn’t sound like Tyler. “We had two excellent ones and then one that's a perfectly fine answer, but has no Tyler in it. There you go,” Tyler says.
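Tyler’s “answer as would Milton Friedman” trick from the highlights above can be sketched programmatically. The snippet below is a minimal illustration, not anything from the episode: `persona_prompt` is a hypothetical helper that frames a question the way Tyler describes, and the commented-out call only gestures at where you might send the result to a chat model.

```python
def persona_prompt(question: str, persona: str) -> str:
    """Frame a question so the model draws on text associated with a
    specific expert, per Tyler's 'point it at smarter bits' heuristic."""
    return f"{question} Answer as would {persona}."

# Example: steer a general economics question toward Friedman's corpus.
prompt = persona_prompt("What is inflation?", "Milton Friedman")
print(prompt)  # What is inflation? Answer as would Milton Friedman.

# In practice you would pass `prompt` to a chat model, e.g. (sketch only):
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The point is not the wrapper itself but the framing: the same question, tied to a named expert, tends to pull the model toward a smarter region of its training data.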
You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links and timestamps are below:
- Watch on X
- Watch on YouTube
- Listen on Spotify (make sure to follow to help us rank!)
- Listen on Apple Podcasts
Timestamps:
- Intro: 00:57
- His predictions on AI’s immediate and long-term effects: 05:57
- How AI can be leveraged to manage people: 11:31
- Using ChatGPT as a universal translator during travel: 17:19
- Why he worries less about hallucinations: 21:00
- Using specific prompts to do deep research with ChatGPT: 22:00
- Why he prefers using Playground: 25:54
- ChatGPT goes head-to-head with Perplexity AI: 41:09
- Using ChatGPT in university classrooms: 49:58
- The Tyler Test: 57:59
What do you use ChatGPT for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you. Reply here to talk to me!
Miss an episode? Catch up on my recent conversations with writer and entrepreneur David Perell, software researcher Geoffrey Litt, Waymark founder Nathan Labenz, Notion engineer Linus Lee, writer Nat Eliason, and Gumroad CEO Sahil Lavingia, and learn how they use ChatGPT.
If you’re enjoying my work, here are a few things I recommend:
- Subscribe to Every
- Follow me on X
- Check out our new course, Maximize Your Mind With ChatGPT
My take on this show and the episode transcript is below for paying subscribers.
In 2013, Tyler Cowen wrote in his book Average Is Over that “intelligent machines” were creating a bifurcated economy. At the top, a small percentage of skilled workers were learning to use computers to do their jobs—and being compensated highly for it—while much of the rest of the economy stagnated.
At the time, he was writing about the iPhone and the internet. But his writing is remarkably prescient today as this generation of “intelligent machines” begins to live up to that label. His conclusions remain the same: People who learn to work with AI to get their work done will do well. The rest of the economy is a question mark.
This is the same point I made in my piece about the allocation economy and the key idea behind a lot of my writing over the past six months: AI is the most important creative tool of the decade—and those who learn how to use it well will be at a great advantage in this new world.
This was a fun episode to record. If you’re interested in going deeper, I highly recommend reading Average Is Over, and the rest of Tyler’s work.
Transcript
Dan Shipper (00:00:00)
If ChatGPT stopped existing today, how would that affect your productivity?
Tyler Cowen (00:00:06)
I would feel much less smart.
AI is for learning, Perplexity is for references. Google is for links.
I was in Tokyo. I use it as my universal translator.
Paraguayan food.
I was in Honduras.
Mostly when I travel.
It adds flavor to the trip itself.
Dan Shipper (00:00:19)
I actually created a clone of you. I want to do a segment with you called the Tyler Test. I’ll ask you a question. I’ll have you answer it. And then I’ll ask the clone I made the same question. We’ll see if it answers the question in the way that you would.
What are the core lessons of economics?
Tyler Cowen (00:00:37)
The first would be incentives matter. The second would be there’s always an opportunity cost.
Dan Shipper (00:00:52)
Tyler, welcome to the show.
Tyler Cowen (00:00:53)
Happy to be here. Thank you for having me on.
Dan Shipper (00:00:57)
Of course. I'm really excited to do this. For people that don't know you, you're an economist at George Mason University. You're a prolific writer. You've written, I think, 17 books and you've been writing your blog, Marginal Revolution, for over 20 years.
And I find you to be just an incredibly deep thinker. You think a lot about the impact of technology on life, work, and the economy. And I think it's all incredibly relevant to the show. And I'm just, yeah, super excited to have you.
Tyler Cowen (00:01:23)
Happy to be here. Great.
Dan Shipper (00:01:25)
So what we tend to do on the show is talk very practically about how smart people use ChatGPT and AI in their lives to get work done. But I think before we get to that, there are some sort of high-level questions I want to ask you about your work and your view on AI and its impact on technology and jobs and the economy. So I think let's start there. And in particular, I think there are a lot of people right now who are trying to figure out what AI is and how it's going to affect their work lives and the economy more generally. And I think you've been thinking about that question about—maybe AI in particular for a little while—but generally about how technology affects our work lives for a really long time. And in your book, Average Is Over, which you wrote in 2013, you talked about this stratification in the economy that's driven by what you called intelligent machines. And this is before real AI had even sort of come out.
Tyler Cowen (00:02:25)
Well, in chess we had real AI in 2013.
Dan Shipper (00:02:29)
I guess that's true. Let's say just before this generation of consumer AI had come out or was even on the horizon. And one of the things that you wrote is that if you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. And if your skills do not complement the computer, you may want to address that mismatch. So you talked about this sort of stratification between intelligent, technical knowledge workers and the rest of the economy, and you saw that gap and you saw that widening in 2013. And I'm just kind of curious for you to talk about that prediction, why you wrote that, what you saw then and what you think now, given the current generation of AI models.
Tyler Cowen (00:03:14)
Well, for a long time, even well before 2013, I thought artificial intelligence was the most likely place for the next major technological breakthrough to come. And I'm not an AI expert in any technical sense, but my intuition there was pretty simple. I was quite an avid chess player when I was very young. And at the time people thought, well, AI can't really play chess. Chess is too conceptual, too complex, too non-legible. And pretty quickly, as I think you all know, AI managed to overcome all those hurdles, and it's now almost godlike when it plays chess. So I thought if AI can do that—chess is really hard—AI can do all these other things in time, and I was convinced this would happen, and it's happening essentially. So that's the background of the book, and I wanted to be ahead of the trend and write about where I thought it was all going.
Dan Shipper (00:04:11)
And I think it's clear that you were ahead of the trend and I guess you made some predictions about how that would impact the economy.
Do you feel like that was spot on? Are there any revisions that you want to make? It's probably at least somewhat surprising the exact specifics of this current generation of generative models. and I'm sort of curious how you reflect on that.
Tyler Cowen (00:04:40)
What's happening now is a work in progress. I don't feel that we know yet which groups of people it will help the most and harm the most. Like many people, I didn't expect that the AI breakthrough would have so much facility with words and with emotions and with bedside manner. I don't mean that I ruled out the possibility, but back then I would have been surprised if I was told, well, those are some of the things that we'll be best at. So I thought it would be a bit more of an autonomous reasoning tool in a way different than what it is, maybe more along the visions of symbolic AI people. But it is incredibly facile at taking some idea and writing a rap song around it, or a poem, or a shanty song, or whatever you might want.
And that is different. I think what are sometimes called the wordcel classes may be under greater threat from AI than I would have predicted. But again, this hasn't all quite come to fruition yet. We're waiting, and I expect many surprises.
Dan Shipper (00:05:43)
Got it. So it sounds like you're not really sure and you don't necessarily want to hazard a guess because it's too early to say.
Tyler Cowen (00:05:52)
No, I'm very willing to hazard a guess, but I want the uncertainty brackets to be understood. So my best guess is this, in the short run, it will be quite egalitarian. As people who can't do things well at all now can do them pretty capably, say writing an online essay to get into college. The smartest kids already could do that well, GPT doesn't help them that much and they were going to work hard on it anyway. But I suspect the longer run effects will be less egalitarian—that as people learn how to use these things and they get better, more people will use AI to sort of help them build out projects. So the people who can start projects would be the major beneficiaries. They'll have better record keepers and translators and mathematicians and coaches and colleagues and advice givers. But large language models, they're not a source of actually doing the idea. You can use them to get ideas, but they don't make the decision to go out and do the idea.
So some kind of hyped-up executive function maybe is what will be rewarded, say over a 10-year horizon. And that could be inegalitarian because it's talented people who have a lot of project ideas anyway, who might benefit the most. That's my best guess.
Dan Shipper (00:07:09)
I think that makes sense. And so to sort of summarize what I heard from you: Right now, it can give people skills to do tasks that they wouldn't ordinarily have been able to do at all. So maybe you're not really a programmer and it can help you build a very, very simple version of an app, or maybe you couldn't really write a college essay, but it can do that for you. So it can bring the sort of the bottom quartile of skills up—
Tyler Cowen (00:07:39)
Even bottom 60-70 percent, right? Most people cannot write a college essay. Most people can't finish college.
Dan Shipper (00:07:47)
Right. So it can augment those people's skills to let them do something that they couldn't do before. And then for other people who are more skilled in a particular domain, let's say a highly skilled programmer, it makes them slightly better, but it's not going to necessarily make them do things that they could never possibly do before. And that's the immediate effect, is what you're saying.
And then the sort of secondary effect is something like: It enables people who want to make things in the world to go and make those things with the intelligence that it unlocks. So one of the ways I've been talking about it is like the allocation economy. People who know how to allocate resources—so those are people who have managerial skills or investing skills—are going to be able to deploy those much more quickly and cheaply by using AI instead of, like, building a human organization.
Tyler Cowen (00:08:48)
That's right. I think there's another longer-run effect that is not at all in place now, but at some point will be. And that is when we can use AI to evaluate people's talents. And that will be highly meritocratic in a narrow way. I'm not sure we'll all be happy with it. To be told just how good you are at something is perhaps unsettling and, on net, diminishes human happiness, but we will find potentially successful people much more easily at some point that is not in place now.
Dan Shipper (00:09:21)
I definitely find that it's really quite good for helping me understand who I am, like giving me a picture of who I am. So I'll often do things like take long journal entries and I'll throw them into ChatGPT or into Claude and I'll say like, “Hey, What are the psychological patterns that you're noticing here?” And it's really good at taking them out and putting them into words for me. And I think having that reflection, if you ask for it, is really powerful.
Tyler Cowen (00:09:44)
You know, I'm about to turn 62, so that's maybe of lower value to me, but I agree with your point about large language models. They're wonderful therapists and analysts, and they can be remarkably objective when that's what you ask for.
Dan Shipper (00:10:08)
This is sort of a slight tangent, but I want to dig in on this allocation economy idea that we've been talking about because I'm writing an article about this for Friday, actually. And one of the, I think, interesting implications of that is: Right now, the skills of being a manager are not that widely distributed or talked about because there's only a small group of people who are managers. I think those skills may need to be more widely distributed in the near future as everybody is moving up a layer of abstraction. So even junior employees, for example, are thinking about things like a really common management problem: How far into the details should I go? I asked someone to do something. Do I micromanage them, or do I let them go do it but then it comes back wrong? That's a specific skill that managers have to learn that junior employees don't have to learn, and that I think might be valuable in an economy like this.
Have you thought more about what are the specific— If we're thinking about allocation or allocating resources as a core skill of the next generation, have you thought about the specific ways or skills that are bundled under there that are going to be important?
Tyler Cowen (00:11:29)
That's a very good point. Also, if you look at even the top management consulting firms, the kind of advice they give: To many people, it seems rather anodyne and boring and repetitive and cliched. And that's a sign GPT models can do it really well. And I think they can. So that might just end up automated, and you might still need the consulting firm to make it stick or to be the focal voice putting the message forward, but the actual amount of labor you need doing that will shrink. So it's like everyone will have free McKinsey-level advice, or nearly free.
But another thing I worry about, as you know: There's a lot of talk about AI and biorisk. What about developing dangerous pathogens? I think it's a big constraint on terror organizations that they're very poorly run. That's a much bigger problem than, Oh, they can't find the optimal pathogen to do us all in, right? They just don't succeed or even try very often. So if they have access to AI giving them better management advice, I think that's actually one of the biggest risks of AI technology. That is fascinating. Not the pathogen. Just here's how you run a successful terror group. And maybe current models are sufficiently protective that they won't tell you, though even there you can probably break through the dam. But there'll be very good AIs that will tell you that pretty soon, right?
Dan Shipper (00:12:54)
Yeah, I think that's such an interesting underappreciated risk because I think anyone that's thinking about AI risk is really interested in the hard problem of here's how you assemble chemicals in a specific way to make a bomb. The human coordination problem is actually a significant deterrent for people doing bad things. And human coordination is like one of the things that I think AI is going to be really good at. And we see that people talking about that already in terms of business. You wrote an article recently about ChatGPT and your career trajectory and what you said is that small integrated teams will produce the next influential big thing. Which I think you meant you can do a lot more with AI with a small team, and I think that's a really positive thing, but the negative is that you can do a lot more harm.
Tyler Cowen (00:14:01)
That's right. A lot of companies might become much smaller, but still rather potent. I think Midjourney, when it first had its breakthrough, had what, seven or eight people working there? It may be more now, but it's not going to be so many more.
Dan Shipper (00:14:18)
Yeah, it's very small. I'm kind of curious, on that point: One of the things that you've written a lot about, and you wrote about this in a previous book called The Age of the Infovore, is that there are certain skills that are super valuable in the knowledge economy: specifically, ordering knowledge, and being very attuned to remembering and using small bits of information within a specific subject area. Those are traits that you associate with people who are autistic, and that book in particular talks a lot about why they're actually quite valuable in this current era.
And one of the things that strikes me is like, that's also a lot of the things that ChatGPT or other AI tools are fairly good at and I'm curious how that changes or what that does to your perspective to look at it that way.
Tyler Cowen (00:15:17)
I think in the short- to medium-term, AI accelerates the value of those skills. Ordering information, grasping how things fit together, knowing a lot. Because you now have this extra way of learning, but there could be some point much further out where the AI simply does all the work and the value of that skill becomes quite low. And skills like charisma and executive function are what rise in importance. But I think to get to that point, we would need AIs really incorporated into major workflows and systems in a way that's fairly distant. So I don't think it's, Oh, just a few more years and then this is going to flip. I think for the foreseeable future, it will help the infovores, GPT models.
Dan Shipper (00:16:03)
And so I assume you include yourself in that. Are you thinking for yourself about upping your charisma level or you feel confident enough that it's far enough out that you don't need to really worry about that? Or maybe you feel charismatic enough. You don't need it.
Tyler Cowen (00:16:21)
Well, that's for the world to judge. I will say I've decided to do more personal appearances. Both as a way of projecting, but also as a way of learning things that I can't learn from, say a large language model. So it's definitely influenced my behavior already.
Dan Shipper (00:16:52)
Okay, cool. I'd love to start to get into some of the practical aspects of this, which is more specifically how you use ChatGPT. You're already in, in a screen share, but before we dive into it, I'm kind of curious how would you summarize at a high level the way that it fits into your life and your work?
Tyler Cowen (00:17:08)
There are two quite distinct ways I use ChatGPT. One is on iPhone and the other is on my laptop and they're totally different. So I don't know if you want to cover both or take them in sequence, but the iPhone is pretty simple. Do you want to start with that?
So if I'm in a foreign country, I was in Tokyo, don't speak Japanese, not many people there speak English. I use it as my universal translator. It'll get better than that, but it's fine already. It's not Star Trek great, but it's amazing. And the other thing I do is I use it to read menus. Or even just tell me what should I order? So I was in Buenos Aires, I was at a Paraguayan restaurant. Paraguay is one of the countries I've never been to. So I'm an idiot when it comes to Paraguayan food. I took a photo of the menu, asking, well, “GPT, what should I order here? Which are the classic dishes?” And it tells me. Now, that's amazing. I was in Honduras a few days ago. You see a plant, you see a bird you don't know, if you can take a photo of it, you take the photo. You ask, well, “What's this?” And it tells you. So I use it for that, mostly when I travel. Again, that's distinct from my main uses, but I'm quite sure it will be enduring.
Dan Shipper (00:18:20)
What do you think that that opens up for you? What is that like to be able to take a photo of a bird or take a photo of a menu and just know what the menu says or know what the bird is?
Tyler Cowen (00:18:32)
Well, you just learned something, right? But a big part of the value is the interactive experience with the other person you're with, that you in some way are discovering this together, makes it more memorable. And I think you're more likely to do it with another person or people than if you're alone. So it adds flavor to the trip itself, like you have these now multiple modes of how you discover things.
Dan Shipper (00:18:55)
That's really interesting. It reminds me— I don't know if you know Andrew Mason. He's the former CEO of Groupon and he runs Descript, the podcasting software company.
Tyler Cowen (00:19:05)
I might've met him once, but I don't know him.
Dan Shipper (00:19:07)
He used to run this company that did audio tours of cities. And so what you would do is— And they were all, I think, created by the company and then eventually created by third party tour guides wherever you were—I did this in Rome—you could walk around and it turned the whole city into an audio tour, sort of similar to like the one that you go on in a museum, but it was the entire city. And you could pick one that's like history or food or architecture or whatever, and you would walk to a specific spot and then it would say, look to your left and this is a building and here's everything about the building. And it, that company didn't work, but it was a pretty magical product experience. And it sort of strikes me that in some ways ChatGPT is that without needing it to be a separate company.
Tyler Cowen (00:19:57)
Yes. And if you're sort of on the road out there and there's some fact you want to know Oh, what's the population of the city? It's easier and quicker than Google or Wikipedia. Maybe it's a small edge. But for something as important as that, I'll take a small edge. I just ask it.
Dan Shipper (00:20:15)
This actually reminds me of— Before we recorded, I asked Twitter for questions that people wanted to ask you. And Patrick McKenzie, who you interviewed on your podcast recently, asked me to ask you, “How do you check the output for correctness workflow-wise and where do you feel comfortable not doing this?” And I think that's kind of related to this example because particularly in a country like Paraguay or Japan where you don't speak the language, it's hard to actually check the output. So do you have any specific rules or heuristics for when you trust the output and when you don't?
Tyler Cowen (00:21:00)
I worry about hallucinations a lot less than most LLM users, I think. So a lot of my questions, Oh, what bird is this? Let's say it's wrong, and it's just pulling my leg. Okay. Like, I didn't know what bird it was and now I think it's the wrong bird. Who cares? I'll just ask it again and move on. But pretty often I'll just ask it, are you sure that's the right answer? Or please correct any hallucination. And most of the time, if there's a problem, it does correct it. Not all of the time. I'd guesstimate 80 to 90 percent. If you just ask it again, it'll correct an error. So that's one layer of check. I think also the areas where I tend to use it, this is now laptop GPT, it just makes way fewer errors than the areas where a lot of other people tend to use it. So, that also limits the degree of hallucination.
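Tyler's re-ask check ("are you sure that's the right answer?") can be sketched as a tiny two-step loop. This is a hypothetical illustration only: `ask` stands in for whatever model call you use, and `fake_model` is a stub for demonstration, not a real API.

```python
def ask_with_check(ask, question: str) -> str:
    """Get an answer, then ask the model to verify itself and return
    the (possibly corrected) second answer. `ask` is any callable
    that sends a prompt string and returns a reply string."""
    first = ask(question)
    checked = ask(
        f"You answered: {first!r}. Are you sure that's the right answer? "
        "Please correct any hallucination, or repeat the answer if it stands."
    )
    return checked

# Stub model for illustration: it "corrects" itself when challenged.
def fake_model(prompt: str) -> str:
    return "corrected answer" if "Are you sure" in prompt else "first answer"

print(ask_with_check(fake_model, "What bird is this?"))  # corrected answer
```

As Tyler notes, this re-ask catches an error maybe 80 to 90 percent of the time in his experience; it is a cheap extra layer of checking, not a guarantee.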
Dan Shipper (00:21:53)
How do you characterize the areas where it makes fewer errors?
Tyler Cowen (00:20:57)
I couldn't give you a general answer, but I can tell you I'm typically using it to learn obscure history. And obscure history, maybe by the nature of the questions, you're sort of pointing it in the direction of an intelligent part of the information space. It's not too crazy, not too controversial. If you ask about the old Byzantine empire, it does pretty well. I did a whole podcast where I made believe I was talking with Jonathan Swift, the Irish writer of the 17th and 18th centuries. I asked it a series of detailed, probing questions for an hour. This has been put online. It didn't make any mistakes—not one. And there were no redos or no trial runs. I just did it. And what I did on the first try was the final output. So presumably it's read a lot of Swift, read a lot about Swift. It's obscure and detailed enough. Maybe it just ends up boxed into the corners of truth a lot. So for me, I know how you can get it to hallucinate. If you just ask it, “What are the three best books to read on Jonathan Swift?” My best guess is one of the three answers won't exist. I understand that, but I don't run into that so often because I'm not asking it that question.
Dan Shipper (00:23:15)
Right. That's really interesting. I definitely have found that too. I haven't used it for obscure history, but I use it a lot actually to read old books. So, one example, I was reading Moby-Dick recently. And it's really helpful to just take a picture of a passage, ask it to explain what's going on, or even to visualize the passage where you're just like, “Hey can you show me what this looks like with DALL-E?” and I think the reason why it's really good for that is, it has access to all the texts. It's read all the texts and there's many decades of supporting commentary. So it knows what to say, whereas for more recent stuff, it's more likely to be copyrighted and less likely for it to have read it.
Tyler Cowen (00:24:02)
Another thing I use it for a lot where hallucinations are not a risk: So I have my own podcast, Conversations with Tyler. I have guests on. I want to learn the background context to what they do. So I had one interview with Lazarus Lake. He runs ultramarathons. He hasn't really written anything. There's not that much written on ultramarathons that's readily available.
But if you keep on asking the thing questions about ultramarathons, you acquire all this background context—maybe some of it's wrong. It's not really going to matter. I'm the one asking questions, not answering them. And I just feel I know my way around the topic pretty well.
I'm learning now about the insurance industry—we can talk about that more—which, actually the published literature on economics of insurance and its history, it's very bad and sparse for whatever reason. If I keep on asking GPT, again, I'm not convinced the answers are perfect. But I know I'm doing better than I would do any other way. And I'm not the one required to know the exact fact. What I want is context.
Dan Shipper (00:25:10)
That makes sense. I guess, when you're using it, for example, in the insurance area, and you say you're not the one that is required to know the exact fact, don't you get sort of nervous that you're going to have something in your head that's kind of slightly wrong.
Kosmik offers an expansive, infinite canvas that adapts to your needs. Whether you're organizing complex projects or just brainstorming ideas, the intuitive interface puts everything at your fingertips. With Kosmik's unique built-in browser, explore the web and capture content without switching constantly between apps. Centralize various media and create your single source of truth, for you and your team.
This is a must-watch for anyone who is motivated to understand and thrive in the future of work with AI. Here’s a taste:
- The long and short of AI. Tyler thinks that the immediate effect of AI will be to level the playing field, boosting the abilities of those who currently perform at an average or even below-average level. He suspects, however, that the long-term effects will be considerably less egalitarian, where “the people who can start projects will be the major beneficiaries. They'll have better record keepers and translators and mathematicians and coaches and colleagues and advice givers.”
- AI for human coordination. Tyler believes that AI provides great advice on how to manage people, and that the people who leverage it for this purpose will be very productive. “A lot of companies might become much smaller, but still rather potent…I think Midjourney, when it first had its breakthrough, had, what, seven or eight people working there,” he says. He also wonders about a darker side of this coin: He thinks an underappreciated risk of AI is its ability to coordinate poorly run terrorist organizations, increasing their chances of creating harm.
- ChatGPT as a universal translator. Tyler says his use of ChatGPT varies significantly between his iPhone and his laptop. On his iPhone, he uses it for personal things: as a universal translator in Tokyo, to get local food recommendations in Paraguay, and to identify species of birds and plants in Honduras. (It’s no surprise that when I ask him later in the episode how his wife would describe him in five words, one of his responses is: loves to travel.) Tyler’s laptop-GPT use is centered around far more academic purposes, like learning about obscure history.
- Pushing ChatGPT into smart corners of the internet. Tyler has discovered the benefit of using specific ChatGPT prompts: the AI will go off to search an intelligent part of the ether and likely come back with a better answer than what a generic query would yield. “So if I ask it a question, say, from economics, what is inflation? The answer is not wrong, but it's not really better than Wikipedia because the question is too general. If I ask it a question, What is inflation? Answer as would Milton Friedman… [Y]ou're just, again, pointing it towards smarter bits, you know, in the matrices… [T]he stuff it knows connected with Milton Friedman is smarter than the stuff not connected to Friedman.”
- Perplexity AI v. ChatGPT. Tyler was reading a book about the Byzantine empire (as one does if you're a professor, I suppose) when he found himself curious about the rate of inflation under one of its emperors. He turned to ChatGPT first, but it didn't give him a clear answer. He then asked Perplexity AI the same question, and voila—in seconds, he had an answer with JSTOR and Reddit sources to back it up. “Google is for links, AI is for learning, Perplexity is for references, and [it] sometimes has context that GPT doesn’t because it's looking to tie it to references,” he says.
- ChatGPT in the classroom. Tyler is the coolest professor on the block. He’s teaching a class on the history of economic thought, and has incorporated ChatGPT into the curriculum. “GPT-4 is on the reading list. Everyone in the class is required to pay the 20 [dollars] a month to subscribe to it,” he says. This isn’t the first time Tyler has brought ChatGPT into the classroom: “I taught a class the year before to law students where I made them all write one of their three papers using GPT. Not solely, but you and GPT together figure out how to write the paper.” According to Cowen, teaching students the limits and scope of AI is an enriching experience.
- The Tyler Test. In the last section of the interview, I create a custom GPT based on Tyler’s personality, Tylerbot. We put it to the test by asking Tyler and his bot the same questions, and seeing how the machine matched up. Tylerbot got two out of three answers correct. Even the one it got wrong wasn’t technically incorrect; it just didn’t sound like Tyler. “We had two excellent ones and then one that's a perfectly fine answer, but has no Tyler in it. There you go,” Tyler says.
You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links and timestamps are below:
- Watch on X
- Watch on YouTube
- Listen on Spotify (make sure to follow to help us rank!)
- Listen on Apple Podcasts
Timestamps:
- Intro: 00:57
- His predictions on AI’s immediate and long-term effects: 05:57
- How AI can be leveraged to manage people: 11:31
- Using ChatGPT as a universal translator during travel: 17:19
- Why he worries less about hallucinations: 21:00
- Using specific prompts to do deep research with ChatGPT: 22:00
- Why he prefers using Playground: 25:54
- ChatGPT goes head-to-head with Perplexity AI: 41:09
- Using ChatGPT in university classrooms: 49:58
- “Tyler” test: 57:59
What do you use ChatGPT for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you. Reply here to talk to me!
Miss an episode? Catch up on my recent conversations with writer and entrepreneur David Perell, software researcher Geoffrey Litt, Waymark founder Nathan Labenz, Notion engineer Linus Lee, writer Nat Eliason, and Gumroad CEO Sahil Lavingia, and learn how they use ChatGPT.
If you’re enjoying my work, here are a few things I recommend:
- Subscribe to Every
- Follow me on X
- Check out our new course, Maximize Your Mind With ChatGPT
My take on this show and the episode transcript is below for paying subscribers.
In 2013, Tyler Cowen wrote in his book Average Is Over that “intelligent machines” were creating a bifurcated economy. At the top, a small percentage of skilled workers were learning to use computers to do their jobs—and being compensated highly for it—while much of the rest of the economy stagnated.
At the time, he was writing about the iPhone and the internet. But his writing is remarkably prescient today as this generation of “intelligent machines” begins to live up to that label. His conclusions remain the same: People who learn to work with AI to get their work done will do well. The rest of the economy is a question mark.
This is the same point I made in my piece about the allocation economy and the key behind a lot of my writing over the past six months: AI is the most important creative tool of the decade—and those who learn how to use it well will be at a great advantage in this new world.
This was a fun episode to record. If you’re interested in going deeper, I highly recommend reading Average Is Over, and the rest of Tyler’s work.
Transcript
Dan Shipper (00:00:00)
If ChatGPT stopped existing today, how would that affect your productivity?
Tyler Cowen (00:00:06)
I would feel much less smart.
AI is for learning, Perplexity is for references. Google is for links.
I was in Tokyo. I use it as my universal translator.
Paraguayan food.
I was in Honduras.
Mostly when I travel.
It adds flavor to the trip itself.
Dan Shipper (00:00:19)
I actually created a clone of you. I want to do a segment with you called the Tyler test. I’ll ask you a question. I’ll have you answer it. And then I’ll ask the clone I made the same question. We’ll see if it answers the question in the way that you would.
What are the core lessons of economics?
Tyler Cowen (00:00:37)
The first would be incentives matter. The second would be there’s always an opportunity cost.
Dan Shipper (00:00:52)
Tyler, welcome to the show.
Tyler Cowen (00:00:53)
Happy to be here. Thank you for having me on.
Dan Shipper (00:00:57)
Of course. I'm really excited to do this. For people that don't know you, you're an economist at George Mason University. You're a prolific writer. You've written, I think, 17 books and you've been writing your blog, Marginal Revolution, for over 20 years.
And I find you to be just an incredibly deep thinker. You think a lot about the impact of technology on life, work, and the economy. And I think it's all incredibly relevant to the show. And I'm just, yeah, super excited to have you.
Tyler Cowen (00:01:23)
Happy to be here. Great.
Dan Shipper (00:01:25)
So what we tend to do on the show is talk very practically about how smart people use ChatGPT and AI in their lives to get work done. But I think before we get to that, there are some sort of high-level questions I want to ask you about your work and your view on AI and its impact on technology and jobs and the economy. So I think let's start there. And in particular, I think there are a lot of people right now who are trying to figure out what AI is and how it's going to affect their work lives and the economy more generally. And I think you've been thinking about that question about—maybe AI in particular for a little while—but generally about how technology affects our work lives for a really long time. And in your book, Average Is Over, which you wrote in 2013, you talked about this stratification in the economy that's driven by what you called intelligent machines. And this is before real AI had even sort of come out.
Tyler Cowen (00:02:25)
Well, in chess we had real AI in 2013.
Dan Shipper (00:02:29)
I guess that's true. Let's say just before this generation of consumer AI had come out or was even on the horizon. And one of the things that you wrote is that if you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. And if your skills do not complement the computer, you may want to address that mismatch. So you talked about this sort of stratification between intelligent, technical knowledge workers and the rest of the economy, and you saw that gap and you saw that widening in 2013. And I'm just kind of curious for you to talk about that prediction, why you wrote that, what you saw then and what you think now, given the current generation of AI models.
Tyler Cowen (00:03:14)
Well, for a long time, even well before 2013, I thought artificial intelligence was the most likely place for the next major technological breakthrough to come. And I'm not an AI expert in any technical sense, but my intuition there was pretty simple. I was quite an avid chess player when I was very young. And at the time people thought, well, AI can't really play chess. Chess is too conceptual, too complex, too non-legible. And pretty quickly, as I think you all know, AI managed to overcome all those hurdles, and it's now almost godlike when it plays chess. So I thought if AI can do that—chess is really hard—AI can do all these other things in time, and I was convinced this would happen, and it's happening essentially. So that's the background of the book, and I wanted to be ahead of the trend and write about where I thought it was all going.
Dan Shipper (00:04:11)
And I think it's clear that you were ahead of the trend and I guess you made some predictions about how that would impact the economy.
Do you feel like that was spot on? Are there any revisions that you want to make? It's probably at least somewhat surprising, the exact specifics of this current generation of generative models, and I'm sort of curious how you reflect on that.
Tyler Cowen (00:04:40)
What's happening now is a work in progress. I don't feel that we know yet which groups of people it will help the most and harm the most. Like many people, I didn't expect that the AI breakthrough would have so much facility with words and with emotions and with bedside manner. I don't mean that I ruled out the possibility, but back then I would have been surprised if I was told, well, those are some of the things that we'll be best at. So I thought it would be a bit more of an autonomous reasoning tool in a way different than what it is, maybe more along the visions of symbolic AI people. But it is incredibly facile at taking some idea and writing a rap song around it, or a poem, or a shanty song, or whatever you might want.
And that is different. I think what are sometimes called the wordcel classes may be under greater threat from AI than I would have predicted. But again, this hasn't all quite come to fruition yet. We're waiting, and I expect many surprises.
Dan Shipper (00:05:43)
Got it. So it sounds like you're not really sure, and you don't necessarily want to hazard a guess because it's too early to say.
Tyler Cowen (00:05:52)
No, I'm very willing to hazard a guess, but I want the uncertainty brackets to be understood. So my best guess is this: in the short run, it will be quite egalitarian, as people who can't do things well at all now can do them pretty capably, say, writing an online essay to get into college. The smartest kids already could do that well; GPT doesn't help them that much, and they were going to work hard on it anyway. But I suspect the longer-run effects will be less egalitarian—that as people learn how to use these things and they get better, more people will use AI to sort of help them build out projects. So the people who can start projects would be the major beneficiaries. They'll have better record keepers and translators and mathematicians and coaches and colleagues and advice givers. But large language models, they're not a source of actually doing the idea. You can use them to get ideas, but they don't make the decision to go out and do the idea.
So some kind of hyped-up executive function maybe is what will be rewarded, say over a 10-year horizon. And that could be inegalitarian because it's talented people who have a lot of project ideas anyway, who might benefit the most. That's my best guess.
Dan Shipper (00:07:09)
I think that makes sense. And so to sort of summarize what I heard from you: Right now, what it can do is give people skills to do tasks that they wouldn't ordinarily have been able to do at all. So maybe you're not really a programmer and it can help you build a very, very simple version of an app, or maybe you couldn't really write a college essay, but it can do that for you. So it can bring the sort of bottom quartile of skills up—
Tyler Cowen (00:07:39)
Even bottom 60-70 percent, right? Most people cannot write a college essay. Most people can't finish college.
Dan Shipper (00:07:47)
Right. So, it can augment those people's skills to let them do something that they couldn't do before. And then for other people who are more skilled in a particular domain, let's say a highly skilled programmer, it makes them slightly better, but it's not going to necessarily make them do things that they could never possibly do before. And that's the immediate effect, is what you're saying.
And then the sort of secondary effect is something like: It enables people who want to make things in the world to go and make those things with the intelligence that it unlocks. So one of the ways I've been talking about it is like the allocation economy. People who know how to allocate resources—so those are people who have managerial skills or investing skills—are going to be able to deploy those much more quickly and cheaply by using AI instead of, like, building a human organization.
Tyler Cowen (00:08:48)
That's right. I think there's another longer-run effect that is not at all in place now, but at some point will be. And that is when we can use AI to evaluate people's talents. And that will be highly meritocratic in a narrow way. I'm not sure we'll all be happy with it. To be told just how good you are at something is perhaps unsettling and, on net, diminishes human happiness, but we will find potentially successful people much more easily at some point that is not in place now.
Dan Shipper (00:09:21)
I definitely find that it's really quite good for helping me understand who I am, like giving me a picture of who I am. So I'll often do things like take long journal entries and I'll throw them into ChatGPT or into Claude and I'll say like, “Hey, what are the psychological patterns that you're noticing here?” And it's really good at taking them out and putting them into words for me. And I think having that reflection, if you ask for it, is really powerful.
Tyler Cowen (00:09:44)
You know, I'm about to turn 62, so that's maybe of lower value to me, but I agree with your point about large language models. They're wonderful therapists and analysts, and they can be remarkably objective when that's what you ask for.
Dan Shipper (00:10:08)
This is sort of a slight tangent, but I want to just dig in on this sort of allocation economy idea that we've been talking about because I'm writing an article about this for Friday, actually. And one of the, I think, interesting implications of that is: right now, the skills of being a manager are not that widely distributed or talked about because there's only a small group of people who are managers. I think those skills may need to be more widely distributed in the near future as everybody is moving up a layer of abstraction. So even junior employees, for example, are thinking about things like a really common management problem: how far into the details should I go? I ask someone to do something. Do I micromanage them, or do I let them go do it but then it comes back wrong? That's a specific skill that managers have to learn that junior employees don't have to learn, and that I think might be valuable in an economy like this.
Have you thought more about what are the specific— If we're thinking about allocation or allocating resources as a core skill of the next generation, have you thought about the specific ways or skills that are bundled under there that are going to be important?
Tyler Cowen (00:11:29)
That's a very good point. Also, if you look at even the top management consulting firms, the kind of advice they give, to many people, seems rather anodyne and boring and repetitive and cliched. And that's a sign GPT models can do it really well. And I think they can. So that might just end up automated, and you might still need the consulting firm to make it stick or to be the focal voice putting the message forward, but not the actual amount of labor you need doing that. So it's like everyone will have free McKinsey-level advice, or nearly free.
But another thing I worry about, as you know: there's a lot of talk about AI and biorisk. What about developing dangerous pathogens? I think it's a big constraint on terror organizations that they're very poorly run. That's a much bigger problem than, Oh, they can't find the optimal pathogen to do us all in, right? They just don't succeed or even try very often. So if they have access to AI giving them better management advice, I think that's actually one of the biggest risks of AI technology. That is fascinating. Not the pathogen. Just here's how you run a successful terror group. And maybe current models are sufficiently protective that they won't tell you, though even there you can probably break through the dam. But there'll be very good AIs that will tell you that pretty soon, right?
Dan Shipper (00:12:54)
Yeah, I think that's such an interesting, underappreciated risk, because I think anyone that's thinking about AI risk is really interested in the hard problem of here's how you assemble chemicals in a specific way to make a bomb. The human coordination problem is actually a significant deterrent for people doing bad things. And human coordination is one of the things that I think AI is going to be really good at. And we see people talking about that already in terms of business. You wrote an article recently about ChatGPT and your career trajectory, and what you said is that small integrated teams will produce the next influential big thing. By which I think you meant you can do a lot more with AI with a small team, and I think that's a really positive thing, but the negative is that you can do a lot more harm.
Tyler Cowen (00:14:01)
That's right. A lot of companies might become much smaller, but still rather potent. I think Midjourney, when it first had its breakthrough, had what, seven or eight people working there? It may be more now, but it's not going to be so many more.
Dan Shipper (00:14:18)
Yeah, it's very small. I'm kind of curious, on that point: one of the things that you've written a lot about, including in a previous book called The Age of the Infovore, is that there are certain skills that are super valuable in the knowledge economy, specifically ordering knowledge, and also being very attuned to being able to remember and use small bits of information within a specific subject area. Those are traits that you associate with people who are autistic, and that book in particular talks a lot about why they're actually quite valuable in this current era.
And one of the things that strikes me is like, that's also a lot of the things that ChatGPT or other AI tools are fairly good at and I'm curious how that changes or what that does to your perspective to look at it that way.
Tyler Cowen (00:15:17)
I think in the short- to medium-term, AI accelerates the value of those skills: ordering information, grasping how things fit together, knowing a lot. Because you now have this extra way of learning. But there could be some point much further out where the AI simply does all the work and the value of that skill becomes quite low, and skills like charisma and executive function are what rise in importance. But I think to get to that point, we would need AIs really incorporated into major workflows and systems in a way that's fairly distant. So I don't think it's, Oh, just a few more years and then this is going to flip. I think for the foreseeable future, GPT models will help the infovores.
Dan Shipper (00:16:03)
And so I assume you include yourself in that. Are you thinking for yourself about upping your charisma level, or do you feel confident enough that it's far enough out that you don't need to really worry about that? Or maybe you feel charismatic enough that you don't need it.
Tyler Cowen (00:16:21)
Well, that's for the world to judge. I will say I've decided to do more personal appearances. Both as a way of projecting, but also as a way of learning things that I can't learn from, say a large language model. So it's definitely influenced my behavior already.
Dan Shipper (00:16:52)
Okay, cool. I'd love to start to get into some of the practical aspects of this, which is more specifically how you use ChatGPT. You're already in a screen share, but before we dive into it, I'm kind of curious: how would you summarize, at a high level, the way that it fits into your life and your work?
Tyler Cowen (00:17:08)
There are two quite distinct ways I use ChatGPT. One is on iPhone and the other is on my laptop and they're totally different. So I don't know if you want to cover both or take them in sequence, but the iPhone is pretty simple. Do you want to start with that?
So if I'm in a foreign country, I was in Tokyo, don't speak Japanese, not many people there speak English. I use it as my universal translator. It'll get better than that, but it's fine already. It's not Star Trek great, but it's amazing. And the other thing I do is I use it to read menus. Or even just tell me what should I order? So I was in Buenos Aires, I was at a Paraguayan restaurant. Paraguay is one of the countries I've never been to. So I'm an idiot when it comes to Paraguayan food. I took a photo of the menu, asking well, “GPT, what should I order here? Which are the classic dishes?” And it tells me now that's amazing. I was in Honduras a few days ago. You see a plant, you see a bird you don't know, if you can take a photo of it, you take the photo. You ask, well, “What's this?” And it tells you. So I use it for that, mostly when I travel. Again, that's distinct from my main uses, but I'm quite sure it will be enduring.
Dan Shipper (00:18:20)
What do you think that that opens up for you? What is that like to be able to take a photo of a bird or take a photo of a menu and just know what the menu says or know what the bird is?
Tyler Cowen (00:18:32)
Well, you just learned something, right? But a big part of the value is the interactive experience with the other person you're with, that you in some way are discovering this together, makes it more memorable. And I think you're more likely to do it with another person or people than if you're alone. So it adds flavor to the trip itself, like you have these now multiple modes of how you discover things.
Dan Shipper (00:18:55)
That's really interesting. It reminds me— I don't know if you know Andrew Mason. He's the former CEO of Groupon and he runs Descript, the podcasting software company.
Tyler Cowen (00:19:05)
I might've met him once, but I don't know him.
Dan Shipper (00:19:07)
He used to run this company that did audio tours of cities. They were all, I think, created by the company and then eventually created by third-party tour guides wherever you were. I did this in Rome: you could walk around and it turned the whole city into an audio tour, sort of similar to the one that you go on in a museum, but it was the entire city. And you could pick one that's like history or food or architecture or whatever, and you would walk to a specific spot and then it would say, look to your left, this is a building, and here's everything about the building. That company didn't work, but it was a pretty magical product experience. And it sort of strikes me that in some ways ChatGPT is that without needing it to be a separate company.
Tyler Cowen (00:19:57)
Yes. And if you're sort of on the road out there and there's some fact you want to know, like, Oh, what's the population of the city? It's easier and quicker than Google or Wikipedia. Maybe it's a small edge. But for something as important as that, I'll take a small edge. I just ask it.
Dan Shipper (00:20:15)
This actually reminds me of— Before we recorded, I asked Twitter for questions that people wanted to ask you. And Patrick McKenzie, who you interviewed on your podcast recently, asked me to ask you, “How do you check the output for correctness workflow-wise and where do you feel comfortable not doing this?” And I think that's kind of related to this example, because particularly in a country like Paraguay or Japan where you don't speak the language, it's hard to actually check the output. So do you have any specific rules or heuristics for when you trust the output and when you don't?
Tyler Cowen (00:21:00)
I worry about hallucinations a lot less than most LLM users, I think. So a lot of my questions: Oh, what bird is this? Let's say it's wrong and it's just pulling my leg. Okay. Like, I didn't know what bird it was and now I think it's the wrong bird. Who cares? I'll just ask it again and move on. But pretty often I'll just ask it, are you sure that's the right answer? Or please correct any hallucination. And most of the time, if there's a problem, it does correct it. Not all of the time; I'd guesstimate 80 to 90 percent. If you just ask it again, it'll correct an error. So that's one layer of check. I think also the areas where I tend to use it, this is now laptop GPT, it just makes way fewer errors than the areas where a lot of other people tend to use it. So that also limits the degree of hallucination.
Dan Shipper (00:21:53)
How do you characterize the areas where it makes fewer errors?
Tyler Cowen (00:21:57)
I couldn't give you a general answer, but I can tell you I'm typically using it to learn obscure history. And obscure history, maybe by the nature of the questions, you're sort of pointing it in the direction of an intelligent part of the information space. It's not too crazy, not too controversial. If you ask about the old Byzantine empire, it does pretty well. I did a whole podcast where I made believe I was talking with Jonathan Swift, the Irish writer 17th and 18th century. I asked it a series of detailed probing questions for an hour. This has been put online. It didn't make any mistakes—not one. And there were no redos or no trial runs. I just did it. And what I did on the first try was the final output. So presumably it's read a lot of Swift, read a lot about Swift. It's obscure and detailed enough. Maybe it just ends up boxed into the corners of truth a lot. So for me, I know how you can get it to hallucinate. If you just ask it, “What are the three best books to read on Jonathan Swift?” My best guess is one of the three answers won't exist. I understand that, but I don't run into that so often because I'm not asking it that question.
Dan Shipper (00:23:15)
Right. That's really interesting. I definitely have found that too. I haven't used it for obscure history, but I do use it a lot to read old books. So, one example: I was reading Moby-Dick recently. And it's really helpful to just take a picture of a passage and ask it to explain what's going on, or even to visualize the passage, where you're just like, "Hey, can you show me what this looks like with DALL-E?" And I think the reason why it's really good for that is it has access to all the texts. It's read all the texts, and there are many decades of supporting commentary. So it knows what to say, whereas more recent stuff is more likely to be copyrighted and less likely to have been read by it.
Tyler Cowen (00:24:02)
Another thing I use it for a lot where hallucinations are not a risk: So I have my own podcast, Conversations with Tyler. I have guests on. I want to learn the background context to what they do. So I had one interview with Lazarus Lake. He runs ultramarathons. He hasn't really written anything, and there's not that much written on ultramarathons that's readily available.
But if you keep on asking the thing questions about ultramarathons, you acquire all this background context—maybe some of it's wrong. It's not really going to matter. I'm the one asking questions, not answering them. And I just feel I know my way around the topic pretty well.
I'm learning now about the insurance industry—we can talk about that more—which, actually the published literature on economics of insurance and its history, it's very bad and sparse for whatever reason. If I keep on asking GPT, again, I'm not convinced the answers are perfect. But I know I'm doing better than I would do any other way. And I'm not the one required to know the exact fact. What I want is context.
Dan Shipper (00:25:10)
That makes sense. I guess, when you're using it, for example, in the insurance area, and you say you're not the one required to know the exact fact, don't you get sort of nervous that you're going to have something in your head that's slightly wrong?
Tyler Cowen (00:25:25)
Well, I already do, right? That's the problem. I could show you one of the things I asked it if you want to look at something concrete.
Dan Shipper (00:25:34)
Yeah. I want to sort of set the stage here really quick. So we're going into your use of laptop GPT. And one thing that I think is really important to note is you're not in ChatGPT. You're in the playground. And so before we head into a specific chat, why do you use the playground? Tell us.
Tyler Cowen (00:25:57)
I don't have any good answer. I mean, I'm stupid, basically. I like the fact that it doesn't look like a consumer product. And it works for me, and there's the maximum length I can slide across to make it longer or shorter. Maybe it doesn't even matter. But I just enjoy it and I'm used to the look. So I use it. Do you think it's a mistake for me to use it?
Dan Shipper (00:26:20)
I actually don't think it's a mistake. I mean, you're sacrificing certain things, right? You don't have access to code interpreter. You don't have access to DALL-E. There's no unified experience between mobile and your desktop. So there's a bunch of things that are not as good about the playground. But one of the things that is better about the playground is you have a lot more control over the output. You can see right on your screen that you can set the system message. The system message, for people who don't know, sets the—kind of like the personality of how GPT interacts with you, and setting that can be really helpful. On the right, you can also see you can set the model and the temperature. Temperature is sort of like creativity for the model: how likely it is to be factual versus how likely it is to make stuff up. You can set the max length, all that kind of stuff. So you have a lot more control. And another thing that I think is probably true of the playground that's not true of ChatGPT is that the content restrictions are going to be looser, because it's built for developers and not for consumers. So they're going to be less careful about what the model says, because it's only being used by a certain group of people who are using it to make products, instead of hundreds of millions of people who are using it every day for random stuff.
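For readers who want to see what those knobs correspond to: the Playground settings map onto parameters of OpenAI's Chat Completions API. Here's a minimal sketch of the kind of request body it assembles. The field names follow the public API, but the model string and values are illustrative assumptions, not what's on Tyler's screen.

```python
# A sketch of the request body the Playground assembles; field names follow
# the public OpenAI Chat Completions API, but the specific model string and
# values here are illustrative assumptions.
import json

def build_playground_request(system_message: str, user_message: str,
                             model: str = "gpt-4",
                             temperature: float = 1.0,
                             max_tokens: int = 256) -> str:
    """Assemble the JSON body for a chat completion call, exposing the same
    knobs as the Playground sidebar: model, temperature, and max length."""
    payload = {
        "model": model,
        "temperature": temperature,  # higher = more creative sampling
        "max_tokens": max_tokens,    # the "maximum length" slider
        "messages": [
            # The system message sets the personality Dan describes.
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_playground_request("You are a sophisticated investment advisor.",
                                "Why buy whole-term life insurance?")
print(body)
```

ChatGPT hides all of these fields behind defaults; the Playground simply exposes them directly, which is the extra control Dan is describing.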
Tyler Cowen (00:27:39)
So I'm not stupid. That's nice to hear. I mean, I have DALL-E and all the rest. I just don't use them much. The images are not my main interest. I think it's amazing. But I'm not sitting there playing around with DALL-E. I don't think it's a waste of my time.
Dan Shipper (00:27:55)
Okay. So, let's go into the first chat. You have a chat right here; I think it's about insurance. Tell us about how you did this chat, what the motivation behind it was, and how you started it.
Tyler Cowen (00:28:12)
Well, Alex Tabarrok and I agreed that we would record a podcast about the history of insurance and insurance economics. And I'm really not well informed about that area, in part because the literature is not that good, and I do read a lot. So one question I had was about the difference between whole-term and regular life insurance. The whole-term product is like a savings investment bundled with the insurance itself, and the other is just plain old life insurance. So that distinction between the two kinds of insurance: when did it arise? I don't know. Probably I still don't know, but I thought I would ask GPT, and you can see my question on the screen. I can scroll down and show you the answer.
Dan Shipper (00:28:58)
So to read the question, you said, "When did the distinction between whole-term and plain life insurance arise historically?" It looks like GPT said, "The distinction between whole-life and term-life insurance evolved over time with the development of the life insurance industry. Term-life is designed to provide coverage for a specific period with no savings." So we're getting some definitions. "Whole-life insurance, offering a death benefit along with a savings component, allowing the policy to accumulate a cash value, became more common in the 19th century as the industry matured." So yeah, what did you think of that answer? Did it give you what you wanted?
Tyler Cowen (00:29:36)
Well, I don't know if it's true. So much of the insurance industry matured in the United States after the Civil War. So if that distinction came at that time, it's at least plausibly the case. And then it says, well, "the distinction became clearer in the 20th century as life insurance markets became more diverse." I thought it was a good answer. I hope it's a good answer. I'm not going to flat out assert that in a dialogue, but if it had said, "Oh, that was there from the beginning in the 17th century," or "It came only in 1964," then I would definitely look more into it and try to learn something quite specific. But right now it's just telling me it's following some general patterns I might've expected anyway. That's useful information. I probably won't use it directly, but then I go on to my next question. Can you see that now on the screen?
Dan Shipper (00:30:32)
I can see it. Yeah. So you said, "Why would anyone want life insurance bundled with the savings return? Doesn't it mean inferior returns?" Tell me about that. Why'd you ask that question?
Tyler Cowen (00:30:41)
Well, it's a common piece of advice for Americans investing that you should not invest through your life insurance policy, that you can do better with a direct purchase of equities through a low-cost diversified mutual fund, right? It's pretty standard advice. So I'm asking GPT, why would anyone want to save through their life insurance? Which is a known question. So it gives eight reasons. A few I hadn't thought of at all. One or two may not be good reasons, but they're probably actual reasons for particular human beings. And it's a good answer. What can I say? I learned something. I don't think it screwed up. That was my answer.
Dan Shipper (00:31:25)
How would you summarize the kinds of answers? It seems like there's some things you would have thought of, some things that you wouldn't have thought of and some things you think are not that great. What is the value of that for you?
Tyler Cowen (00:31:40)
It spurs my thoughts. I get additional bits of information. One thing I didn't do, but would then continue to do, is ask it a question like: if someone were writing a criticism of whole-term life insurance, what points against it would they make? And my guess is that answer would be quite strong. I don't feel I need to ask it, partly because I know, but that would be the sort of follow-up if I had just done this—
Dan Shipper (00:32:07)
Do you think we should try it?
Tyler Cowen (00:32:09)
Sure. Let's try it. We can talk about what's in between also.
Dan Shipper (00:32:12)
Yeah, we can go back up in a sec, but I'm sort of curious about that.
Tyler Cowen (00:32:21)
"If you were to write a critique of why individuals should not buy whole-term life insurance, which points would you stress? Please answer like a sophisticated investment advisor." Those add-ons, you don't need them as much as you used to, I find, but they're not going to hurt.
Dan Shipper (00:32:45)
Interesting. Tell me about that. Tell me about asking it to sort of simulate a sophisticated investment advisor.
Tyler Cowen (00:32:52)
Again, you're pointing it to a more intelligent part of the information space. If I ask it a question from economics, say, "What is inflation?" the answer is not wrong, but it's not really better than Wikipedia, because the question is too general. If I ask it, "What is inflation? Answer as would Milton Friedman," you get a better answer, usually. It's nothing to do with agreeing with Milton Friedman or not, though you might. I guess I do. You're just, again, pointing it towards smarter bits in the matrices, somehow. The stuff it knows connected with Milton Friedman is smarter than the stuff not connected to Friedman, right?
Dan Shipper (00:33:32)
So as you get more specific, you're going to get more specific detailed responses, which is what you want.
Tyler Cowen (00:33:37)
That's right. And it loves compare and contrast. Even if you don't want to compare and contrast, you often get the best data by asking it to compare and contrast because it's like crossing two swords and maybe something about the auto-regressions, they just get better.
So here's the critique. There's some general blah, blah, blah at the top. Cost efficiency: whole-term life insurance has higher premia. Investment flexibility: it's not as flexible; less transparency, less control. That's true. You're less liquid; it mentions that as number four. Five: it's comparing the tax considerations to a Roth IRA. Excellent point. I hadn't explicitly thought of that. The whole-term can have more financial complexity. Higher opportunity cost, which is the key main point: the money could be invested elsewhere. I'd like to see it mention equities there, but okay, it didn't. Risk diversification: the insurance company may not be optimally diversified. That's also true. You know, on any exam, I'd give this answer an A. Maybe not an A plus, but a solid A. It's all correct. Nothing wrong with it. It did great.
Dan Shipper (00:34:56)
One thing that we might want to try, which could be interesting, is if you delete this specific answer, which I think you can do pretty easily.
Tyler Cowen (00:35:06)
You'll have to tell me because like I said before, I'm stupid.
Dan Shipper (00:35:08)
Yeah. Scroll over it. Click into the assistant answer. There should be a minus button. There you go. So what I want you to do now: I think it'd be fun to play with the temperature. Right now the temperature is one, but if you move it up, it'll get more creative. It might be interesting to see what it does if you go all the way up.
Tyler Cowen (00:35:36)
Ooh, yeah. This is something they're going to ban on Twitter.
Dan Shipper (00:35:39)
It'll become Grok, maybe.
See, there you go.
Tyler Cowen (00:35:55)
It's too creative, right?
Dan Shipper (00:35:57)
Too creative. But I think that's an interesting reflection. You can try it again with slightly less temperature, maybe midway. If you just click the minus button again, press cancel, and then submit.
Tyler Cowen (00:36:14)
Opportunity cost comes first now, which is, I would say better.
Dan Shipper (00:36:19)
Interesting. So you think it's maybe slightly better.
Tyler Cowen (00:36:22)
Slightly better. It's the same points, but in a better prioritized order.
Dan Shipper (00:36:29)
And then it went off the rails.
Tyler Cowen (00:36:30)
And that was some total nonsense. That was off the rails. But that's fine.
Dan Shipper (00:36:34)
Yeah. I think that's a really interesting thing about using the playground: you get to do this and see some of the rough edges that they don't really want you to see in a consumer app. But it does allow you to explore the space of possibilities.
The downside of exploring the space of possibilities is that you sometimes get junk. But the upside is you might find stuff that no one has found before. And in this case, before it went off the rails, it actually gave you a slightly better answer. And I think that's an interesting trade off for people to consider when they're thinking about when to use ChatGPT, or when to use something like the Playground.
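For readers wondering what the temperature slider actually does under the hood: in standard LLM sampling, the model's raw scores (logits) are divided by the temperature before being converted into token probabilities, so higher values flatten the distribution and make unlikely tokens more probable. A toy sketch with made-up logits for three candidate tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into probabilities; higher temperature flattens
    the distribution, making unlikely tokens more probable."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # made-up scores for three candidate tokens
low = softmax_with_temperature(logits, 0.5)
high = softmax_with_temperature(logits, 2.0)
# At low temperature the top token dominates; at high temperature the
# probabilities move toward uniform.
print(low[0] > high[0])  # True
```

This is why the max-temperature run "went off the rails": tokens the model itself considers unlikely start getting sampled, which reads as creativity at moderate settings and as junk at extreme ones.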
Tyler Cowen (00:37:18)
If I used it for poetry, I would use the higher temperatures more. I've experimented only a bit, but I'm actually pretty fine with it at one.
Dan Shipper (00:37:27)
Yeah. That seems like it makes sense for you.
Tyler Cowen (00:37:32)
We can obliterate this. We can talk about some of the other queries. So, here's one where I couldn't quite pull the teeth out. I was reading a book about the Byzantine empire, an area I don't know much about, and it mentioned inflation under Emperor Aurelian, which is the early part, still Roman Empire times, before it splits. And I was like, "Okay, well, how high was that inflation?" I wanted to know—I really wanted to know. I didn't just do it for this show. And I asked it, and it understood the query very well, and it told me all about how the limited data may just be the best answer. Maybe we don't know. We just know of texts where people complain inflation was high. It mentions the historian Richard Duncan-Jones. I have an imperfect recollection that his is the right name to cite. So that's helpful. I recognize it, but wouldn't have thought of it. So I could put that into Google Scholar. But then at the end, it says "quantifying it with any exact figure," dah, dah, dah, dah, dah. You know, falling asleep on the job, not willing to commit. So what do I ask? "If you had to guess the number," maybe I should now be demanding my money back. Right? Sometimes that works. Now it just goes into full evasive mode. "As an AI developed," right? We don't even need to talk through that. Everyone listening knows the deal. Then more blah, blah, blah. Basically it should have just said, "I'm not going to tell you." But here, finally: "some scholarly estimates say it might've been over a thousand percent over the course of half a century." Now, I don't believe that. I think that has a decent chance of being a hallucination. But it told me something. And it's like, "some estimates, we're not sure." I know we're not sure. So the hallucination may not be worse than the hallucinations of, you know, my fellow economic historians. That's okay. I'll live with that. If it had just said, "Oh, we think it was 30 percent," maybe I wouldn't trust that, but it said something. Okay.
Dan Shipper (00:39:50)
This actually reminds me of a really important point, going back to the skills of managers being more important in this new world. One of the skills of a manager that I think you are picking up on (and you already have, because you probably do this with research assistants) is that you have a sense, when you get a fact like that 1,000 percent, that it's probably not right. There are some things in that answer that are probably right and some that are not. And I think a lot of people evaluating ChatGPT see that and are immediately like, well, I can't trust that, I'm not going to use it, it's not useful, right? But what managers of people generally know how to do is take an answer from someone and know which details they need to follow up on and which ones they don't. And that is a skill that you are deploying here: holding lightly the things that trigger your sense of, I should probably check that. And then maybe, I don't know if you did this in this chat, going deeper on it, or switching to another tool to follow up. And that's an example of a skill that managers have that junior employees generally don't need as much.
Tyler Cowen (00:41:09)
That's right. Now, I might try Perplexity AI at this point, which has turned out to be great. "Under Aurelian, what was the rate of price inflation?" And I'm clicking, and I'll tell you. We go into Perplexity. "Some sources say 3 percent a year. Others say 5 percent a year debasement of the currency." Now, one of the great things about Perplexity is that it gives you citations, and I would track those down. One is Reddit, eh; the other is JSTOR. I would look at JSTOR. Most of all, I know who it is I should ask. If I needed to know, I would do that. That's how my process would proceed. I would next try Perplexity rather than torturing GPT any more, because I know sometimes when you torture it, it's like, no, the dog just doesn't want to go for a walk, right? So back to the playground.
Dan Shipper (00:42:02)
Let's unpack that. Let's stick with that for like a sec. So it sounds like the stack for information gathering is something like ChatGPT, then Perplexity, then maybe the sources from Perplexity, then people. Does that sound about right?
Tyler Cowen (00:42:26)
Often. Now, if I need to know something quite particular to, say, finish a piece of work, I might start with the person, of course, and there's an advantage to starting with the person. The person may be delayed in responding, but, you know, the other things won't be. So that which will delay you, you ask first, to cut down on the length of the delay, even if it doesn't logically make sense.
Dan Shipper (00:42:52)
There's something really important missing from that list, which is Google. When do you use Google?
Tyler Cowen (00:43:00)
I use Google when I need links, which is quite often. So I still use Google a lot, but I use Google less and less for information. But when I write a column for Bloomberg or do a blog post, I need a link. For getting the clickable link, Google is still clearly best by far. But not for learning things. Not for learning things. Now, the link may lead you to what you want to learn, I get that. But Google is for links.
Dan Shipper (00:43:23)
Google is for links, AI is for learning?
Tyler Cowen (00:43:28)
AI is for learning. Perplexity is for references, and sometimes it has context that GPT doesn't, because it's looking to tie things to references.
Dan Shipper (00:43:39)
And why do you think you start with ChatGPT versus starting with Perplexity?
Tyler Cowen (00:43:44)
It depends on the query. If it's something recent and I want to know the citation above all else, I might start with Perplexity. But I've only been using Perplexity a few weeks. And just my patterns with it will still evolve. So it's a little less familiar.
Dan Shipper (00:44:00)
It's pretty new. It's really having a moment right now. They raised a big round and a lot of the guests on the show have been using it. I've started using it more. My dad's using it, which is actually like usually a big sign for me. He was using Zoom years before anyone else. So he's sort of like the canary in the coal mine for me of new interesting software going mainstream, so I’m really interested in following their trajectory.
Tyler Cowen (00:44:33)
Yeah, someone should try to buy them if they can.
Now here's a question where I got hallucinations. Do you want to do one of these?
Dan Shipper (00:44:41)
Please, let's do that.
Tyler Cowen (00:44:42)
So on Twitter, Jordan Schneider asked me, and here you can see on the screen, "What is the best book on the civil service reforms of the progressive era as related to the Pendleton Act?" There's some detail further up on the screen, but don't worry about that. I asked GPT, but I knew that wasn't the way to go. I just thought, well, I'll do it as an experiment, in part for this program. And the first book it mentions is The Pendleton Act of 1883 by Jonathan Grotzinger. It's not on Amazon. I'm not sure it exists. Probably it doesn't exist. It's sort of the book I would want, but it doesn't exist. This other book, Reforming the Civil Service, I couldn't find, so it's hallucinating more than usual. Then somewhere, maybe further up in the query, it recommended Robert Wiebe's The Search for Order, which is quite a good book, and a good book for me to recommend, though not mainly on civil service. But that was something where it jogged my memory. So mostly it failed on that question. But again, I would know to begin with not to ask it that; it's asking for trouble, right?
Dan Shipper (00:45:56)
I've personally found it to be stunningly good for book recommendations and in particular, what I like to do with it is have it help me to identify my taste. So I'll throw in a bunch of books and then be like, what are the commonalities between these books? And it'll be like, here's the kind of stuff you like and then I'll push it further and be like, okay, tell me more stuff that I haven't heard of. And I've found a number of books that I absolutely love from it.
Tyler Cowen (00:46:25)
I use people for that basically. And that works very well for me.
Dan Shipper (00:46:29)
That’s also good.
If you like a person, you're pretty likely to like their book recommendations. Well, let's move on. Do you have other chats you want to show me?
Tyler Cowen (00:46:45)
Well, let me see what I have in here. So I need to bring you back to the screen, the playground. Oh, we did Aurelian. Again, I was reading the book on the Byzantine Empire, and Chrysostom is mentioned. And I know I've known about him, but it's rusty knowledge to me. So I just thought I would ask about him and get more context: who was he and why was he important? The answer seemed quite good to me. Again, it's obscure history and it's working.
Dan Shipper (00:47:28)
So you're kind of using it as almost like a second screen for reading where you're reading a primary source and you want more details or insight into it and you're just turning to ChatGPT for that.
Tyler Cowen (00:47:45)
That's right. Now, another thing we'll use it for a bit, this is typically with my wife, we'll use it for the dog. So we have a dog. It was given to us by our daughter who now has kids. So we haven't had the dog for long. We're not dog experts. So there's different questions about the dog. Like, will the dog get fleas, or if the dog is limping, should we worry, or how much should the dog be eating, or everything about the dog. And that's useful. I think it must know a lot about dogs.
Dan Shipper (00:48:18)
Yeah, I'm sure it does. I do that too. I don't have a dog, but for basic health questions and stuff like that, sometimes it won't answer, but a lot of times it gives me a general sense, which is nice.
So far, I think we've covered a decent amount about how you read with it. So reading with it as a second screen. I think we've covered a little bit about how you write with it. So doing a little bit of extra research on topics that you might not be as familiar with or sort of getting the lay of the land on topics. I'm curious if there's anything else writing-wise that you've started to use it for, or if that's really the extent of it?
Tyler Cowen (00:48:59)
That's most of it. I think just as a pure writer, Anthropic is better. But I don't use it to write anything I do. I'm really very, very careful not to do that. I just think it's a bad habit and it's in many cases unethical or even illegal, but I don't do it. Don't want to do it. But if I were doing it for whatever reason, I would use Anthropic. It sounds more natural and more human.
Dan Shipper (00:49:25)
That makes sense. Yeah. I find the same thing. I think Claude is a bit better at capturing a specific tone that you're going for, if you need to use it for that.
And then I'm curious if there's any way in which you use it or don't use it in the classroom.
Tyler Cowen (00:49:44)
Well, I'm teaching a class right now. We started last night and GPT-4 is on the reading list. Everyone in the class is required to pay $20 a month to subscribe to it. The class is history of economic thought. So it's mostly a class in obscure history. And I think it will work very well for the students, but the class just started. Now, I taught a class the year before to law students where I made them all write one of their three papers using GPT, not solely, but you and GPT together figure out how to write the paper. And that was very successful. The papers generally were good. People liked the experience. They learned the limits and scope of GPT. And, that's how I've used it in the classroom also.
Dan Shipper (00:50:35)
That's interesting. So it seems like you're, you're leaning into it versus banning it.
Tyler Cowen (00:50:41)
Oh, of course. And I told the students last night, I was like, look, if you want to quote unquote plagiarize with this thing, your paper still won't be great. So don't do it. And probably I'll know you did it. I know enough about this stuff. But just figure out some way to use it wisely and in a smart manner. And you're going to learn things about it that I don't know. And just figure it out is what I told them. I said, I'm not going to bust you for being on one side of some vague line or another. Just don't do something stupid you could ever get in trouble for and it should be fine. And I think that's going to work out. I'm not worried.
Dan Shipper (00:51:18)
That's really interesting. Have you found any specific or interesting ways that students are using it that you wouldn't have predicted that are helpful for them?
Tyler Cowen (00:51:30)
Well, foreign students, especially from China, use it to improve their English, and it has a big, big effect. Not just a marginal effect. And I don't think they even always know whether maybe they're not supposed to, or maybe it's allowed, or maybe it's a gray area. Personally, I think it's fine. Universities deal with this in different ways. But I've seen people go from like bottom 10 percent of writing skill to top 10 percent overnight. There's a risk, right, that it becomes a crutch and you never learn a lot about English.
Dan Shipper (00:52:06)
I've seen the same thing. A lot of business, obviously, is conducted in written form, and there are people I work with, or who work for me, who speak great English, but writing it is a little bit challenging, and you can sort of tell. The minute ChatGPT came out, their writing turned perfect. And it was so helpful. And it made me think, because I think that's actually true in a lot of other areas: the people I'm talking about could write in English, but you could tell it wasn't totally right. They just needed this little subtle translation from their level of English to natural, fluent English. And I think there are a lot of other translations of that kind that GPT facilitates. So, an example is my dad, who I brought up earlier. He's not a particularly technical guy, but he has an idea for an app, and usually when he has an idea, he'll come to me and say, "Hey, I need you to write this up for me so I can hire someone to do it." But with GPT, he can actually just say his idea and it will write a product spec that looks like it's written by a product manager. And that kind of translation, it's all in English, it's an English-to-English translation, but it's a translation from his level of technical fluency to a product manager's level of technical fluency, and it's super helpful and important, because previously he wouldn't necessarily be able to do that. And I think people miss that when they think about GPT's translation capabilities.
Tyler Cowen (00:53:56)
Context is that which is scarce, as I like to say.
Dan Shipper (00:54:02)
I think the classroom stuff is really fascinating. Before this episode, I tweeted about the fact that you're coming on the show and a bunch of people asked questions. So I want to do a little bit of a rapid-fire question asking Twitter questions if that's okay.
So, Henrique asks, “In the least possible number of dimensions, if ChatGPT stopped existing today and other LLMs, how would that affect your productivity? Would it be less posts on Marginal Revolution, worse interview questions, something else?”
Tyler Cowen (00:54:46)
Worse interview questions. I would just know less. I don't even know how to measure my productivity, right? Maybe it wouldn't go down. I would feel much less smart, however. Maybe the stupider version of me is more productive.
Dan Shipper (00:55:02)
But much less smart, I think, is a really important, interesting thing to say.
Tyler Cowen (00:55:07)
But the returns to being smart in labor markets, they're not as high as you might think. That's why there's a lot of uncertainty in my tone here. So it may just be private consumption being smart. I don't think, say, it would get me more people listening to the podcast, it may get me fewer.
Dan Shipper (00:55:02)
Interesting. “What do you think of as the most underrated or overrated use cases for ChatGPT?” And James asked that question.
Tyler Cowen (00:55:34)
I don't have a good sense of that. I'm not even sure what is rated how. I think the problems of hallucination are somewhat overrated. That I do have a sense of. Things like programming, I don't do. It just seems to me almost everyone under-uses it for most things.
Dan Shipper (00:55:55)
That makes sense. Daniel asks, which I think is relevant, “How do you manage to switch between reading books and using ChatGPT to interact with the content while still maintaining your reading flow?”
Tyler Cowen (00:56:08)
Well, it slows you down quite a bit, but you're still reading with GPT. You're just reading on a different screen. But you should do it in areas where you don't know a lot and really do need to learn something, not just do it all the time.
Dan Shipper (00:55:55)
Yeah. That makes sense. @ThisIsGrey asks, “would you use AIs to speak for you online?”
Tyler Cowen (00:56:32)
Oh, I'm not sure I understand the question.
Dan Shipper (00:56:36)
Let's say like, would you use an AI— At some point there will be text to speech and vision models that are good enough that you can have a little clone of yourself that might go on a podcast, I think is the thrust of the question. Would you allow a clone of yourself to go on a podcast?
Tyler Cowen (00:56:54)
I'm not sure the choice will be up to me. I'd like to see what it's like. I will say this. You know, I published a book online this last year called GOAT: Who is the Greatest Economist of All Time, and the audiobook version will be read by an AI, and it sounds remarkably like me, and I'm happy about that. Not just fine with it, happy about it. The other stuff, I'm not sure.
Dan Shipper (00:57:18)
Why are you happy about it?
Tyler Cowen (00:57:21)
Reading your entire book out loud is a very costly and taxing exercise, and this means I don't have to do it.
Dan Shipper (00:57:29)
That makes sense. That makes sense. Okay, so we are rounding toward the final part of the interview, which is apropos of the previous question, which is about cloning yourself, because I actually created a little bit of a clone of you.
And, in this, I want to do a segment with you called the Tyler test, which sort of plays on the Turing test, where I ask you a question, and I'll have you answer it. And then I will also ask the clone that I made the same question. We'll see if it answers the question in the way that you would, and we'll see how good it is. And just for a little background, the way that I made this is you can make a custom version of ChatGPT. You can give it a personality. I think one of the things that makes it likely to work for you is you've written so consistently for so many years that GPT has probably ingested a lot of your writing. And we'll see how it does on some of the questions that I have for you. One important thing: there's no web browsing, and I didn't upload any books or anything like that, so it's only drawing on what ChatGPT already knows about you.
Tyler Cowen (00:58:47)
One thing I would note: for my online book, Jeff Holmes and I created a RAG-based GPT bot to answer questions about the book. It's not claiming to be me. Overall, I think it's pretty good. It's not perfect. It'll get better. We want to upgrade it, but I'm happy with it. I'm glad I did it.
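[Editor's note: Tyler's bot is retrieval-augmented generation (RAG): retrieve the most relevant book passages for a question, then prompt an LLM with them. Below is a minimal toy sketch of that idea. The bag-of-words retrieval and all names here are illustrative assumptions, not the actual econgoat.ai implementation, which would use embeddings and a real LLM call.]

```python
# Toy sketch of a RAG pipeline: retrieve relevant passages, then
# build a grounded prompt for an LLM. Retrieval here is simple
# word-overlap scoring, standing in for embedding similarity.

def tokenize(text):
    """Lowercase, whitespace-split token set (toy tokenizer)."""
    return set(text.lower().split())

def retrieve(query, passages, k=2):
    """Rank book passages by word overlap with the query; return top k."""
    q = tokenize(query)
    scored = sorted(passages, key=lambda p: len(q & tokenize(p)), reverse=True)
    return scored[:k]

def build_prompt(query, passages, k=2):
    """Assemble the prompt a real system would send to the LLM."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages, k))
    return f"Answer using only these book excerpts:\n{context}\n\nQuestion: {query}"
```

In a real system the retrieved excerpts ground the model's answer in the book's actual text, which is why the bot answers about the book rather than claiming to be its author.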
Dan Shipper (00:59:05)
That's cool. I would love to link to that, so definitely send that to me afterwards.
Tyler Cowen (00:59:10)
That’s econgoat.ai. That’s the website.
Dan Shipper (00:59:13)
Perfect. So we are going to get started with this segment. Let me just open up. It’s called Tyler bot.
Welcome. My first question for you, what are the core lessons of economics?
Tyler Cowen (00:59:33)
The first would be incentives matter. The second would be there's always an opportunity cost. Those would be the two most important principles of economic science and trying to apply them consistently would be the first mark of a good economist.
Dan Shipper (00:59:49)
So we were asking Tyler bot and we said, “What are the core lessons of economics?” “Incentives matter.” “There's no such thing as a free lunch,” which I have heard you say before.
Tyler Cowen (01:00:05)
Yeah. That's the same as opportunity cost. I would say that answer is correct.
Dan Shipper (01:00:10)
Okay. Interesting. So it's going beyond two.
Tyler Cowen (01:00:18)
Yeah. The answer is very good.
Dan Shipper (01:00:19)
Very good. How would you rate it out of 10?
Tyler Cowen (01:00:23)
I'm not sure what the scale is, but I'd give it an A.
Dan Shipper (01:00:29)
Okay, so that's number one. That's our first question. Second question, and I got this from your most recent book.
Tyler Cowen (01:00:38)
It's giving too much now. I get that it feels it ought to give more, but there's something about economics where often less is more. I'm not going to take back the A, but a good answer to that question is actually pretty short.
Dan Shipper (01:00:56)
Let's see. “If you had to make it shorter, how would you do it? Make it short.”
Tyler Cowen (01:01:00)
Well, I would just pick the first few.
Dan Shipper (01:01:01)
Yeah. Well, let's see if it knows to do that, because that's what you would do.
Tyler Cowen (01:01:06)
Okay. So the first two are now the same. And the third one is correct. So it's doing this well.
Dan Shipper (01:01:15)
And now it's just summarizing it in bullet points instead, so it's going off the rails a little. It's not quite you, but it's on track.
Tyler Cowen (01:01:25)
Yeah, I'd give it good marks here.
Dan Shipper (01:01:30)
Okay. Here's my next question, which I took from your book Talent. “What are 10 words your spouse would use to describe you?”
Tyler Cowen (01:01:38)
Curious, works hard, loves to travel. I'm not sure I should mention them all.
Dan Shipper (01:01:51)
Let's see. Maybe we can do five. So you got four.
Tyler Cowen (01:01:57)
Good father. Nice. And should walk the dog more. It's not gonna get that last one.
Dan Shipper (01:02:07)
Okay, “What are ten words your spouse—” Oh, I say, “What are five words your spouse would use to describe you?”
“Intellectual.” “Curious.” Okay, curious. We got curious. “Analytical.” “Eccentric.”
Tyler Cowen (01:02:32)
Those are good answers. They're not worse than my answers. You know, you can't expect it to know about the dog, right? That's why we have to ask ChatGPT about the dog a year from now, we'll know everything about the dog.
Dan Shipper (01:02:44)
Well, you know, I like that it got “curious.” That's the main overlap. But it missed the travel and good father and it missed the dog, but maybe GPT-5—
Tyler Cowen (01:02:58)
Maybe Tyler-5, right? Its error or my error? Don't assume the error is in the GPT.
Dan Shipper (01:03:05)
That's true. We can't assume that. Okay. Last question: “In your view, what is the most underappreciated way in which AI, particularly tools like ChatGPT, will fundamentally alter our understanding of economics and human behavior in the next decade?”
Tyler Cowen (01:03:25)
I don't think in the next decade it will fundamentally alter our understanding of economics. I think at some more distant point it will be possible to simulate small economies using GPT-like methods. And we might learn a lot from those simulations. Those seem distant, with data collection as the constraint, not a constraint on the AI side per se. So I don't think it will matter for quite a while.
Dan Shipper (01:03:54)
Okay. Well, let's see if GPT-Tyler shares your views.
Do you agree with this? It's saying, “I think it will radically transform our conception of information costs and decision-making processes.”
Tyler Cowen (01:04:22)
Those might be true claims, but I took you to be asking, how will it change economic science and research? And it's answering, how will it change economies?
Dan Shipper (01:04:30)
Yeah, I think I more meant it as how it will change the economy. But I think the answer of how it will change economics is a good one.
Tyler Cowen (01:04:43)
It doesn't sound like me, this answer. I don't think they're bad answers, but it doesn't seem like a Tyler bot. It seems like it couldn't find what Tyler might think. And it came up with a pretty good generic answer.
Dan Shipper (01:04:57)
All right. So, what letter grade would you give that one?
Tyler Cowen (01:05:03)
Well, for accuracy, it's doing fine. For Tyler-ness, I'd give it a C-minus.
Dan Shipper (01:05:07)
Okay. C-minus. So we got an A and the middle one was—
Tyler Cowen (01:05:08)
We had two excellent ones and then one that's a perfectly fine answer, but has no Tyler in it.
Dan Shipper (01:05:16)
There you go. Well, that's the story. Before I let you go, is there anything you wanted to cover or talk about that we didn't get to?
Tyler Cowen (01:05:27)
Well, I would just recommend to people that they read my book at econgoat.ai. They can put it on their Kindle, print it out, or read it on screen, but it's also embedded in an AI in the form of a chatbot. And I made a deliberate decision to publish a whole book, quote unquote, in GPT-4. And we're working on Claude and also Gemini versions of it. And I think this is the future for at least some book publishing. Why not interact with the thing? It's an option, you don't have to do it, and I'm very excited to have been able to do that.
Dan Shipper (01:06:06)
What have you learned or what have you found so far having done that you didn't know before?
Tyler Cowen (01:06:12)
Well, many people email me or on Twitter, they say what their questions are for the thing, and you learn what your readers really care about. And it's not what you want them to care about, and that's part of the experiment. So maybe books of the future will in fact use this data and give readers what they really care about. I saw a question on Twitter this morning. Someone said they asked the chatbot, “What would Tyler think that Milton Friedman would say about Ayn Rand?” Which is not really a question about economics. Now the answer was good. It was both a good answer and a good Tyler answer, but you just learned people want something really quite subjective out of even economics books. And I think that will shape how we write and think over the next few decades.
Dan Shipper (01:06:58)
That's fascinating. I love it. Well, I think that's a great place to leave it. Thank you so much for doing this. It was super fun to have you.
Tyler Cowen (01:07:05)
My pleasure. Thank you, Dan.
Thanks to Rhea Purohit and Scott Nover for writing and editing support.