
The transcript of AI & I with Aaron Levie is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.
Timestamps
- Introduction: 00:01:30
- Why AI won’t take your job: 00:02:36
- Jevons paradox and the future of work: 00:06:42
- How Aaron’s experience with the cloud era shapes his view of AI: 00:10:40
- Why every knowledge worker is becoming a manager of AI agents: 00:19:44
- What Aaron’s learned from bringing AI into every corner of Box: 00:25:21
- What’s overhyped in AI today: 00:33:57
- How Aaron balances everyday execution with innovation: 00:43:31
Transcript
(00:00:00)
Dan Shipper
Aaron, welcome to the show.
Aaron Levie
Hey, thanks for having me.
Dan Shipper
Thanks for coming on. So for people who don't know, you are the CEO of Box. You are a longtime X poster — P-O-A-S-T-E-R. And you have turned yourself into, I think, a really interesting thinker on AI, which is not surprising, but I think for someone running a big company, it seems like you've really gotten your hands in it and really understand it in a deep way, which is awesome.
Aaron Levie
Yeah, no. I think my POAST days are a little bit behind me—I'm handing that off to the next generation. And now I'm just going to tweet about AI agents until the whole thing's over. But yeah, it's been a super exciting journey for all of us that have been deep in the space. It's probably the most exciting technology of at least my lifetime, and so I'm just having a lot of fun with it.
Dan Shipper
So one of the places I want to start is I feel like you have a pretty particular perspective on why jobs aren't going away because of AI agents. Do you want to talk about that?
Aaron Levie
Yeah, so I still leave open a 5 percent chance that I'm totally, obscenely wrong—as you should. So this is a high-confidence view, with room for debate and some internal doubts here and there. But for the most part, and I think it's somewhat empirical when you use these tools, I'm a big believer in—and thousands of people have talked about this at this point—the idea that jobs are not tasks. Jobs are a collection of tasks. And AI is very good at automating tasks. Obviously, the definition of a task is expanding dramatically based on what agents can now do. But at the end of the day, we still need a human to incorporate whatever task was executed into a broader workflow, into a broader business process, into some form of actual value creation. And because you can't ever get rid of that person, that means we'll eventually still have some degree of specialization in what people end up owning—the completion of all the relevant tasks in their domain.
So let's just go through the list. Even an engineer who needs to develop some form of software—they're going to go to an AI agent and have it work on part of the code base, but then they're going to have to make a decision: Do I ship that feature? Do I like that code? Do I have to go talk to a product manager to make sure it's the right thing? I have to incorporate it into a broader project or a broader system. No matter what, you're still going to need people for all of that. So that's sort of why jobs, as a general matter, don't really go away.
Then the question is, what do you do if you can output 20 or 50 or 100 percent more in any given job? A lawyer can review double the amount of legal briefings. An engineer can generate two times the amount of code, or five times the amount of code. Well, shouldn't that reduce jobs in some commensurate way?
My view on that one is, I think it turns out that we're doing way less work than what is actually economically useful. We are merely constrained by how much time we have in the day and the cost of labor to be able to go and do that work. I look at examples throughout our own company, and if I could have lawyers go and review legal documents and contracts two times faster, I would not have half the number of lawyers. Actually, the throughput at that particular bottleneck in the organization would just go way up, and the bottleneck would be reduced. So we would employ the same number of lawyers, but we would be reviewing all of our contracts at two times the speed, which would probably, in most cases, mean that there's some feedback loop where we're growing sales incrementally faster and we're getting back to customers more quickly. So we have higher customer satisfaction rates that would actually, in some cases, lead to more revenue or better business results, which would then ironically lead to hiring more people in those functions that could drive more growth.
Similarly, in product and engineering, if we could ship two times the amount of code, what's likely going to happen is we're just going to expand the footprint of our features. We're going to build even more software that will then create new bottlenecks in the organization that cause us to hire more people in the areas that are now involved in that. So in most cases, I'm finding that we are just not at capacity and we have not reached the point of supply-demand equilibrium where we are doing just the perfect amount of work in the economy, in any given function. And if we can make it more efficient, we'll actually do more of it.
That kind of ties to the Jevons paradox idea. The Jevons paradox, I think, is mostly applied to inanimate resources and energy systems—steam engines and coal consumption. But you could actually apply it to people too. The idea would be, if you could have a lawyer do 30 percent or 50 percent more output of legal reviews, the demand would actually go up for legal work, because it becomes incrementally cheaper to go and do that legal work—which means you open up a new tranche of use cases for that kind of work to get executed.
So I just think all of the evidence right now is pointing more toward: we're just going to do more. We're going to ship more, we're going to better serve customers, we're going to have more marketing campaigns, we're going to build more software, we're going to get better healthcare, we're going to have more tutoring and education. But all of those things still then drive jobs in the economy.
Dan Shipper
I agree with you. I think the lawyer example is such a good one. I have caught hundreds of thousands of dollars worth of legal mistakes just by putting my contracts into GPT-4. And I couldn't have hired a lawyer for that before because it would have taken too much time. I already had a lawyer draft the contracts, and they're the ones that made the mistake. So the second level of legal review is a thing I couldn't have paid for before, but now is a job that either a lawyer assisted with an AI, or a legal firm could offer via ChatGPT or whatever—or ChatGPT just offers that now in a way that I couldn't have bought before.
Aaron Levie
Yeah, I think if you think about it, what are all the things today that you're not doing because the price of entry is that I have to hire a person, or go pay for and procure an external service? The minimum amount of money that you can spend just to even start to talk to a lawyer is thousands of dollars. The minimum amount of money you can spend to prototype an idea with an ad agency is tens of thousands of dollars. So what happens in a world where you can go and do that for $5 or $100 or $1,000? You're just going to do way more of those types of activities.
And then again, kind of ironically, by doing more of those activities, you might actually find scenarios where you want to now bring an expert back into that workflow that you wouldn't have had before. So there are areas within our company where we start to do something purely as a test case or a prototype, or just kind of an ideation with an AI system. And it works just enough that we say, now let's actually go do this in the real way and let's go pay somebody or hire somebody or put somebody on that project. But we wouldn't have even gotten it started before if AI didn't exist, because we would not have even thought that we should go light up that project if we couldn't have prototyped it in the first place.
And so this is sort of the part that, again, no economist has any way to estimate how much of the economy is going to grow as a result of that. It's impossible for our brains to kind of get around: well, how many new things get lit up because AI lowered the barrier to entry to cause then more people to get involved in that particular task? But that's actually going to be probably a substantial amount of work in the future.
Dan Shipper
How do you think about the future? So you're someone running a big company, you're running a company that came up in the cloud era, seeing a new technology wave come through. There's probably maybe a little bit of a sickening moment, like, oh my God, am I going to have to change everything? Or how does this affect my business? Right? And your job is to figure out where the future's going and to start to understand, okay, is this going to be good for us? Is this going to be bad for us? How is it going to change jobs? All that kind of stuff. And you're approaching it from a perspective that seems—you seem to have a pretty informed perspective on where it's going. And I'm curious, how do you form that yourself? How do you go through all the possibilities to understand what's coming next?
Aaron Levie
Yeah, I mean, I think to some extent I'm kind of working through analogy and working through the fact that I've seen a couple of these major shifts, and so you have that as a background experience that informs a lot of what's going to happen next. And everyone can kind of debate what are the most relevant platform shifts that we've experienced that AI relates to. But at a minimum, you can kind of think about it as: okay, we went from the mainframe to the PC. We went from PC to mobile. We went from on-prem to cloud. These are platform-level shifts. We kind of know how they work. You have the early adopters start to play around with new technologies. They adopt these tools. And then there are breakthrough use cases that cause more of the mainstream, pragmatic buyers of technology to adopt those, then that kind of accelerates, and then you eventually have the laggards.
(00:10:00)
We see this pattern every single time a new technology emerges, and it happens at the big macro technologies like cloud and mobile. And then it happens at the micro technologies—like any sub-service in cloud or mobile experiences the same thing.
So AI actually has a very similar curve. It's going through the exact same typical bell curve of adoption patterns. One thing that's different is it's happening in a compressed fashion. So where cloud may have taken 10 years to reach complete mainstream adoption by every relevant company, AI is probably doing that in two years. But each individual technology within AI still has, again, a very similar curve. So obviously, those of us spending too much time on X, we're seeing AI agents in coding before the rest of the world. But you kind of know exactly what's going to happen next. Over the next two years, everybody's going to adopt AI coding agents. It's just a guarantee because the efficiency and the productivity gain is so massive that this will ripple through the economy.
And so I think by having a lens into both what the prior trends have been and just by being a very active user of these technologies myself, I can kind of see where things are such a breakthrough that they will most likely, again, ripple through the economy versus which things are maybe more incremental and won't be that impactful. And that ends up helping inform this.
And then I think maybe the third factor is, because we were a startup at the early days of cloud, I felt that shift deeply inside of a company that had to go through executing on a large technology shift that was happening. So to some extent, I'm kind of pulling from those memories as much as possible and saying, let's—I kind of have to do that again. Now, there's obviously more people, we have new risks, we have new opportunities. But it's very much hardcore startup mode of, I often just ask myself quite literally, what would we do if we were starting the company from scratch and it was just 10 people? How would we operate? How would we execute? What would we be building? What features would we be creating? So if we were starting the company over in an AI-first world, what would that look like? And so again, I'm benefiting from the fact that I saw that front-row seat on the cloud wave, and we're trying to—again, that's informing the company, I think, on what this should look like.
Dan Shipper
So what are some specific things that you remember from that cloud wave? Because I think the term "digital transformation" was the big thing like 10 years ago, and everyone needed a digital transformation strategy, a cloud strategy. And some people probably did it well, but a lot of people, I think, they just knew they needed to say those words. And there's a big difference between companies that say the words and are like, yeah, we have a cloud strategy, and companies that actually ended up effectively doing it. And I'm curious what your memories are of how that works and who does it well and who doesn't, and then how you think that applies in this era.
Aaron Levie
Yeah. You know what's interesting is, this is going to be much bigger than that. Because the cloud transformation and the digital transformation were always a little bit of an abstract concept—and I think you're kind of getting at that in the question. When United Airlines does digital transformation, what does that really mean? It means they should probably have a really good mobile app, a really good website, and customer support that's intelligent and relevant to you. But at the end of the day, and with all respect to United Airlines, if you look at your flying experience from 15 years ago to today, basically nothing changed. So pre- and post-digital—
Dan Shipper
It's probably a little worse.
Aaron Levie
It might be worse. It might be worse. So the reality is, for as much work as went into that digital transformation process—and I'm sure that behind the scenes, lots of really interesting technology got used, lots of new ways that they're operating in their data centers changed—the actual day-to-day experience as a customer did not meaningfully change or meaningfully jump. They probably have a better-designed website, et cetera. And I think that's probably felt across a significant portion of the economy, pre- and post-digital transformation.
Now, there are more extreme examples. So if you look at Disney as just a random example, I think they would probably say, well, we had to become a digital company in the form of our product has to be now fully digital. We can't rely on the movie theaters. We're going to go direct, and that's a more significant business model disruption that had to occur. Probably some banks maybe land in that category. But you were kind of on this continuum.
In the AI transformation, the reason why this is going to be so different and so much more impactful is it changes how every single employee in your company operates. Again, the daily experience of that United Airlines employee 15 years ago to today—their tools are a little bit more modern, they're probably getting real-time data feeds where it used to be a little bit more asynchronous. That was sort of a contained level of transformation in your daily experience as that employee. With AI, there's not going to be any going back to the way things used to be and how we work. It's just not possible because the efficiency gain between the company that uses AI versus the one that doesn't is just too insurmountable to try and make up for if you're not using these technologies. And the way that we will work at the end of that transition will be so different that you'll just fundamentally feel it again in your daily work, in your daily tasks.
And so maybe that would be the biggest difference between digital transformation and this sort of AI era that we're entering—the way that we work is going to be so fundamentally altered that you'll, again, just experience this in your daily life as an employee in any one of these companies. Some jobs will entirely change and be entirely shifted. And then other jobs, again, the daily activities will just be so different.
So let's take the engineering space, obviously, that we're following. If you talk to a very clued-in, online engineer right now and you say, how are you developing software today versus even one year ago? It is probably the biggest shift in any period in the history of almost any knowledge-worker job. One year ago, you were typing into an IDE. Maybe you had some autocomplete technology like GitHub Copilot. Maybe you were asking a question of an AI system and getting some suggestion back. That was one year ago. Obviously, three years ago, none of that even existed.
Today, you're prompting an agent that's going to go off and do a large amount of work. It's going to come back with that work product, and you're going to go and review it. That is a completely different job. And within a one-year period, if not every job changes as much as that, if you kind of look at how that's going to ripple through knowledge work and you apply that to almost every form of knowledge work, it will mean that all of our daily workflows—if you're generating marketing assets and building marketing campaigns, if you are in sales and you're supporting a customer, if you're in research and life sciences—every single one of our jobs is going to look completely different in the next, let's say, five, plus or minus years. And that will be why it's so different than, let's say, even digital transformation was.
Dan Shipper
So do you think then the better metaphor, a more apt analogy, is like the shift to using computers at all? When we first started using VisiCalc and spreadsheets or something like that?
Aaron Levie
I think it would rank at that level in terms of the amount of change. So the paper-to-digital process was a fundamental form factor change in how you worked, right? Everything about the workflow of a company—well, it's probably even more significant. It's probably from paper to digital plus the internet. Because I think what we sort of did was the skeuomorphic thing in the first phase, where we just kind of took the paper-based desk worker's set of tools and we put them into a digital screen. That was a shift, but it was then an even bigger shift once you could connect those systems and you could collaborate in real time. So we're kind of compressing that level of shift in a, again, one- or two-year period. But very much akin to that.
Even the shift from kind of on-prem to cloud—it sort of impacted the aesthetics of software, and it impacted the fact that when I chatted with you, you got the response faster and it was queued up differently, and the way we collaborated, we didn't use version control. We just worked together in real time. That was a very big deal. But we already kind of understood the general structure of how we would work together and how we would communicate together, kind of pre- and post-cloud.
Now, there are more extreme examples. So if you look at Disney as just a random example, I think they would probably say, well, we had to become a digital company in the form of our product has to be now fully digital. We can't rely on the movie theaters. We're going to go direct, and that's a more significant business model disruption that had to occur. Probably some banks maybe land in that category. But you were kind of on this continuum.
In the AI transformation, the reason why this is going to be so different and so much more impactful is it changes how every single employee in your company operates. Again, the daily experience of that United Airlines employee 15 years ago to today—their tools are a little bit more modern, they're probably getting real-time data feeds where it used to be a little bit more asynchronous. That was sort of a contained level of transformation in your daily experience as that employee. With AI, there's not going to be any going back to the way things used to be and how we work. It's just not possible because the efficiency gain between the company that uses AI versus the one that doesn't is just too insurmountable to try and make up for if you're not using these technologies. And the way that we will work at the end of that transition will be so different that you'll just fundamentally feel it again in your daily work, in your daily tasks.
And so maybe that would be the biggest difference between digital transformation and this sort of AI era that we're entering—the way that we work is going to be so fundamentally altered that you'll, again, just experience this in your daily life as an employee in any one of these companies. Some jobs will entirely change and be entirely shifted. And then other jobs, again, the daily activities will just be so different.
So let's take the engineering space, obviously, that we're following. If you talk to a very clued-in, online engineer right now and ask how they're developing software today versus even one year ago, it is probably the biggest shift in almost any knowledge-worker job in history. One year ago, you were typing into an IDE. Maybe you had some autocomplete technology like GitHub Copilot. Maybe you were asking a question of an AI system and getting some suggestion back. That was one year ago. Obviously, three years ago, none of that even existed.
Today, you're prompting an agent that's going to go off and do a large amount of work. It's going to come back with that work product, and you're going to review it. That is a completely different job, and it changed within a one-year period. Even if not every job changes as much as that, if you look at how this is going to ripple through almost every form of knowledge work—if you're generating marketing assets and building marketing campaigns, if you're in sales and supporting a customer, if you're in research and life sciences—every single one of our jobs is going to look completely different in the next, let's say, five years, plus or minus. And that's why it's so different from what even digital transformation was.
Dan Shipper
So do you think then the better metaphor, a more apt analogy, is like the shift to using computers at all? When we first started using VisiCalc and spreadsheets or something like that?
Aaron Levie
I think it would rank at that level in terms of the amount of change. The paper-to-digital process was a fundamental form factor change in how you worked; it changed everything about the workflow of a company. Actually, this is probably even more significant: it's probably paper-to-digital plus the internet. Because what we did in the first phase was the skeuomorphic thing, where we took the paper-based desk worker's set of tools and put them onto a digital screen. That was a shift, but it was an even bigger shift once you could connect those systems and collaborate in real time. So we're compressing that level of shift into, again, a one- or two-year period. But very much akin to that.
Even the shift from kind of on-prem to cloud—it sort of impacted the aesthetics of software, and it impacted the fact that when I chatted with you, you got the response faster and it was queued up differently, and the way we collaborated, we didn't use version control. We just worked together in real time. That was a very big deal. But we already kind of understood the general structure of how we would work together and how we would communicate together, kind of pre- and post-cloud.
The shift from pre- and post-AI is, again, fundamentally different, because what I think we're seeing is that the job of an individual contributor really begins to change: you are now a manager of agents, and that is a completely different kind of work. It's a much bigger step-function shift than what we've seen previously.
(00:20:00)
Dan Shipper
100 percent. Yeah, the way I've been writing about it for a couple years is thinking about us moving from a knowledge economy to an allocation economy, where your job is to allocate intelligence, even as an IC.
Aaron Levie
But that's a key point. So managers' jobs are really about prioritization. They're about allocation, they're about using judgment across a set of tasks and projects that are happening. And that effectively becomes the new IC job in the future.
Dan Shipper
Totally. One thing I want to get into, though, is you said you're in startup mode. And you've been running Box for a while. And I'm curious what that has been like for you. You seem energetic, but were you like, oh fuck, I can't believe I have to go back to startup mode? Or were you like, this is great, finally I get to feel like I'm on the ground floor again? Or some mix? And you can be honest, this is a safe space, but I'm really curious.
Aaron Levie
Yeah. Okay, I'll try and be as honest as possible in this one. So I'm going to have to be introspective for five seconds. So I'd say it's 80 to 90 percent very excited, 10 to 20 percent anxiety.
So on the 80 to 90 percent—I just love technology so much. And you sort of have to, to be in this industry and to be trying to build a company for as long as we've been working on this. So I would say this is me in my happy place, which is: some major technology event is happening, you can get your hands on it, and ideally it impacts you in some way so that you have to respond and do something. That kind of appeals to my ADD instincts.
There was probably a period three or four years ago where I was like, huh, maybe I should start to pick up some hobbies. The industry was kind of settling down. We knew the whole landscape. We knew cloud, we knew mobile, we knew how everything was going to work. And so this is way better, because at 10:00 PM, instead of doing some arbitrary hobby, I'm on a Zoom call with somebody about AI or playing around with a new feature or building something. And that is so much better for me from an emotional excitement standpoint.
Dan Shipper
I don't know. I mean, I feel like the world needs more of your pottery or woodworking.
Aaron Levie
Yeah. I—at the very tail end of Covid, I started picking up guitar just because I had some free time. And that's clearly a sign that the industry had crested and there wasn't a lot of change going on if I have time to learn guitar. And now it's just, yeah, we're in kind of full crank mode, and it's incredibly exciting because something changes every single day that you have to respond to. And that's definitely exactly where I like to be.
Dan Shipper
We are so back. We're so back.
Aaron Levie
Yeah. We're so back.
Dan Shipper
Yes, but one thing that I think you've done really impressively is you were one of the first CEOs to really start the AI-first wave. You sent the memo and you were like, we're transitioning to being a company that takes this really, really seriously. And if you're not in, you're out. And I'm curious—I think a lot of companies are thinking about doing this right now or trying to do it with varying levels of success, and I'm curious how that has gone for you and what you've learned about the way to do this well and to really be able to start from ground zero inside of an existing company. Like, that's so fucking hard. So yeah, I'm curious what you've learned.
Aaron Levie
Yeah, it's extremely hard. And I would say we're still totally on that journey, and we're not yet—I can't put the mission accomplished flag in the ground and say that we are the case study. We are cranking on this every single day.
A few quick lessons. One thing that we tried to do was just be very clear that this is not about replacing jobs or spending less money as a company. This is purely about how do we get output to increase, how do we move faster as a company, how do we do more, and how do we better serve customers? So the first thing that I wanted to do is just make sure that this was not some threatening technology, but this is something actually that we should be on the forefront of because it's going to let us work better. It's going to let us actually do more as an organization. That was kind of, I think, important to lay out upfront.
The next is just making sure that everybody's using it every single day in some capacity, and then constantly showing each other how we're all using it. So we do this thing every single Friday—it's our internal all-hands. We have somebody demo how they're using, in our case, Box AI for different use cases. So they created an agent to automate some sales workflow, they created an agent to automate a compliance workflow. And so we want to constantly just have everybody learn from each other on what this technology is, how it works, how it helps in your daily work, and then how you can go off and do it yourself.
We haven't quite systematized this, but I think increasingly I'm at least asking, and I'm hearing more people ask: Hey, why can't we do that faster? You look at a project timeline that comes back and it's three weeks or four weeks, and you say, well, I don't know why we can't do that in two days if you really just thought about it from a first-principles standpoint. You're really just trying to create this thing or build that. Why can't we dramatically compress that timeline? And that's causing people to say, okay, maybe actually I should re-look at this. Maybe there is a technology out there that we can go and leverage to make that happen. And so that's starting to kind of create a flywheel. But I think, again, the end result is companies are just going to move far faster. They're going to get way more done. They're going to be able to better serve their customers as a result of this. And I'm seeing plenty of examples of all of that happening.
The one asterisk I'll add is the one thing we can't yet do—and this is maybe me being defensive; maybe we actually could if I were willing to burn the bridges. Right now, when I talk to five- or 10- or 20-person startups with no existing process, with a complete clean sheet of paper on how they operate, I'm seeing them be so differently wired than you can be once you have existing workflows and processes that it's causing me to think: do I have to go find areas where, from a completely fresh start, we re-engineer something? It's because these startups start with nothing that they can think about their engineering workflows in this modern way, where the workflow is actually: you're prompting an agent, it's operating in the background, you're reviewing the code of that agent. You're very documentation-driven. You're very spec-driven. You're prompt-driven. And then you're letting the agent go off and do a lot of the work, and you're reviewing all of that. You can afford to do that when you're not putting AI into an existing workflow but instead reinventing the workflow from scratch. And I do think there are areas where I want to do that much more in the organization.
Dan Shipper
Yeah, I mean, we have that. We run four software products internally and we're 15 people, and I commit code to those, which I should not be able to. And yeah, it would be totally impossible without Claude Code and Cursor CLI and all those kinds of things. It just totally changes the engineering process, because it's about the plan, and whether the plan's up to date, and who's reviewing the plan, and what work has been done. And the actual code doesn't matter as much.
Aaron Levie
Yeah. And I think when you have years and years of highly tuned tribal knowledge on how to build things and in-person code reviews and all of the internal workflows, it's sort of harder to do the "let's just start, let's invent this whole thing from scratch." But we are going through that journey. The way we build software already looks very different than it did two years ago, and I think it'll look vastly more different in a year from now than the combination of the past couple years.
Dan Shipper
Yeah. I'm totally with you. One thing that's been on my mind is, and I agree that there's way more demand than can be served and you're generally just going to want to do more work as a company—we've been in a non-recessionary environment for a long, long, long time. I'm curious, do you think that changes in a recession?
Aaron Levie
I think it changes, but again, we don't know the counterfactual. Usually in a recession, you unfortunately have job reduction regardless. So you would probably still have job reduction in a recession, but companies would still be able to drive more output because they get more leverage from AI. So maybe, optimistically—I'm totally making this up—maybe you get out of the recession faster because you haven't totally decimated your productivity levels as an organization.
But I could totally contemplate a scenario where you have a very bad economic environment that leads to job reductions, as it unfortunately always has. Right now, that would probably be blamed on AI, because you'd still see people doing work with AI. But again, it's one of these counterfactuals: in a recession prior to AI, you also had, unfortunately, job cuts that you had to make in those situations.
(00:30:00)
So you can't really know what would've happened in the non-AI scenario. So I do think we have to watch out for that. But I'm not seeing any evidence to the contrary: I think about this as just the next era of knowledge work technology that we have always had, these boosts in capability and productivity. When you're in the moment of that transformation, it's easy to look at it myopically and say, oh my gosh, this is going to totally reduce the number of jobs in this area or impact us in this area.
And then you look at it 10, 20, 30 years later and you realize, wow, actually, it turned out the demand for that type of work was way bigger than we had imagined. And if we had never made it more efficient, we would never have actually gone and been able to capitalize on that. I mean, if you just look at—I'm totally making this up, this is fan fiction—but if you look at probably what a graphic designer 30 or 40 years ago would've said when they saw Photoshop. And you're like, wait a second. Right now this project takes a week to go in and design this poster for this client, and you're going to make it take five hours. Well, how is it not going to reduce graphic design jobs by 10x? And today we have vastly more graphic designers in the world than we did 30 or 40 years ago.
Or you look at all of the stories of accounting when we went from any kind of paper-based methods to the PC and Excel and VisiCalc and Intuit. And we have way more accountants today than ever before. So what is it about digitization that actually causes increases in these jobs? It's because we finally make the function efficient enough that way more people can actually go acquire those services. And AI is, again, for everything that I'm seeing, is just going to do the same thing again for a number of fields.
Dan Shipper
So the one negative impact of technology is more accountants than ever before. Hoping that doesn't continue. Love my accountant.
Aaron Levie
But one thing for sure, you can imagine, again, your example of AI reviewing your contract. Now imagine when you start doing that to kind of everything in your organization from a legal standpoint—you're going to be hiring way more lawyers as a result because you're going to say, oh, I found this thing, now I have to go talk to somebody. And it is still going to be bottlenecked by that human. So I don't see a lot of these large job categories getting reduced.
Dan Shipper
What do you think is overhyped in AI right now?
Aaron Levie
This will certainly show my bias on this topic. I don't know if I can think of something. I think if we look at where we are—and this is again, this is maybe one of the examples of having been through the cloud wave, and again, I might extrapolate too much—but I remember 15 years ago being like, gosh, I can't imagine this X SaaS category getting 10 times larger or five times larger. And you zoom out and you're like, oh my God, we were actually just at the very small period of the curve at that point. And I was in shock that AWS was still doubling 10 years ago, and it's probably two orders of magnitude bigger today than it was.
And so I think I don't know of a category where, if I look at it, I say it's not going to be 10 times larger in five years from now. I don't think I can find that. What would you argue?
Dan Shipper
What do I think is the most overhyped thing in AI right now? I mean, I think people tend to oscillate between, like, it's utopia, everything's going to be solved, it's free room service and teleportation for everybody, and, oh yeah, we're all going to die. And that's why I like talking to people like you, because I think you have a more grounded perspective: it's going to change a lot of the way that we work, and also the world is going to continue more or less. We're still going to have problems; the bottlenecks just move somewhere else. And I think that's actually a much more interesting perspective and a much better way to talk about the future.
Aaron Levie
I'm totally fine with the utopian people, assuming they're driving positive progress in the technology on that. I don't think you end up in this scenario that people probably imagine on that front, simply because every step along the way, some problem emerges that the AI is not good enough to handle, that humans have to play this kind of stopgap on. And I think that is a rolling process as far as the eye can see. And I think that each new technology breakthrough just leads to a new bottleneck somewhere else that people have to—we play the role of the duct tape on. And I think that's probably actually been the story of technological progress for 150 years. We thought we were going to automate X function, and we kind of did by 80 percent, and then people have to do the rest. And I don't, again, I don't see AI meaningfully changing that.
There's just so much—and this is sort of back to this thing of why does the engineering job still exist in the future, et cetera. There's so much signal and so much context that you get by still operating in the outside world that is necessary for these AI systems to know about, but they can't glean on their own. And there's no breakthrough that we have any example of that replaces my ability to talk to a person down the hall that gives me an idea because they just talked to a customer. We can't replicate that in an AI system right now. We don't know how. And there's no technological breakthrough that we know of that will replicate that. So as long as there's still a three-dimensional world out there that we have to go and participate in, we are going to have much more signal, much more context than the AI will. And that's going to just keep us in jobs, keep us doing things, again, for as long as we can look out right now.
Dan Shipper
Yeah, I agree with that. Even if we get there, and people are working on robotics and continual learning and all that kind of stuff, it's easy to forget that people collect experiences and learn from them. If you've spent the last 15 years in an industry, for example, you have a lot of tacit knowledge in your neural networks that you can't articulate—you just get feelings about things. Even if we have an AGI, if it hasn't been there, it's not going to have the same perspective. Maybe it has an interesting perspective that's useful, but your own personal experience, you versus other humans, still matters.
Aaron Levie
Yes. And I don't think it'd be that fun if you had to just prompt engineer everything in your life. If you had to prompt me when we're getting on this podcast, like, okay, you are doing a podcast right now and here is what you're going to do, it would be so draining that we would completely halt all of our interactions. But people don't have to be prompted in that way constantly. We can pick up on the cue that lets us say, okay, I can put this in this part of my memory, and I'm not going to accidentally talk to you about healthcare right now because that's not what we're talking about. People are just going to be better at that. If you watch the Richard Sutton podcast, it's kind of exactly that: these systems do not have any context. They don't have any ability to truly predict the future. Their next-token prediction is insanely valuable, and there's an incredible amount of economic value in that. But they don't replace people, because we can go around the world and build a tremendous amount of context that the AI will never be able to get. And no amount of humanoid robots in the world will be able to replicate that. We will deploy these as utilities for us, as we always have. We continue as a species to deploy technology as a utility so we can get better things in life: better healthcare, better life expectancy, better food production, better entertainment. And I think this is, again, one of those technologies.
Dan Shipper
Why do you think that more AI CEOs don't talk like this?
Aaron Levie
When you say AI CEOs, are you saying like—
Dan Shipper
Dario or—I think a lot of the CEOs of the big labs don't talk publicly with this perspective.
Aaron Levie
I think, well, there's a reason I chose B2B software. I'm probably on the more boring end of this ecosystem.
Dan Shipper
I've read your Twitter. You're not on the boring end.
(00:40:00)
Aaron Levie
Okay. But there's a reason that I landed in enterprise software and not building an AI lab. So I live in reality with the practical implications and limitations of these tools. I actually don't mind that Sam Altman or Dario or others talk the way they do or have the ambition that they do. I believe that by the time the technology hits the real world, it will manifest a bit differently, and thus the implications are a bit different. But I think it's kind of fun that we have different approaches to this. It would actually be very boring if everybody was hyper-pragmatic and practical. On the margin, I'd rather have the more "this is going to be a crazy utopian future" angle than the dystopian angle that some take. But again, I think it's cool. We have a marketplace of ideas. There are lots of different opinions.
But when you see things like, let's say, Dario says we're going to see this massive job dislocation or 50 percent of jobs—or, I forget the exact stat, I don't want to misattribute what he said—I think the thing that just ends up being a little bit different than that point of view is, again, these jobs are a collection of these tasks, and we have figured out how to automate tasks, and the tasks are getting bigger. But there still is an endpoint where a person has to come intervene, review something, and execute something.
And even in the cases—and again, this is where I'm informed by having, now Box has a few thousand employees, so I'm informed by this as a result of that—but even in the places where we say, well, we probably could have a smaller total number of folks in the company doing password reset emails because we can just automate that, the way as a resource allocator that I respond to that, though, is we put more resources in a different area of customer success that has always been underfunded chronically because we didn't have enough budget to go apply there. And so if I can make one job function more efficient, I gladly will take those dollars and reapply them into an area that is more strategic that we have not been able to automate, and that I see no plans to be able to automate. And that's the much more dynamic nature of these organizations and of the economy generally that I think sometimes doesn't get brought into the big economic dislocation conversation.
Dan Shipper
How are you splitting your time and your focus between—I have these two buckets that I try to think about. So one is just removing the current bottleneck for any product I'm working on, any company I'm working on. It's like we have a funnel, where's the biggest bottleneck in the funnel, and how do we make that better? How do you split your time between that and magic moments? It's like, wow, we have—there's this new technology that does this crazy wizard shit and we can do something fundamentally new here. And how do you split your time and attention and your company's time and attention between those two things?
Aaron Levie
And to clarify, do you mean internally, operationally, or for the product experiences we're building?
Dan Shipper
I think both, because I assume the internal operations lead to building magical product stuff.
Aaron Levie
Yeah. Sometimes they can be decoupled, but I think, again, as a pragmatist, probably 80 percent of the time is going into the incremental bottleneck that we can de-block, even from a product roadmap standpoint. If you look at the AI that we are delivering, a lot of it is just hyper-practical. You have a lot of contracts, you have a lot of invoices, you have a lot of research papers. I want to extract data from that so I can put it into a database to automate a workflow. That's where we're putting a lot of our energy because that's just a major pain point in the economy. It turns out there are trillions of documents that all have important data in them that you'd want to be able to extract automatically. And we could then power workflows more efficiently as a result of that. That's where we spend a lot of our time.
Equally, though, we have a small number of initiatives that are much more like, okay, how do I go and automate the entire workflow of generating a loan agreement or of doing due diligence on a transaction? And that's a long-running agent. It's going to do hours' worth of work. It's going to read lots of documents, it's going to collate them, it's going to generate a report. And that's much more like a 10x step-function change in how that workflow works today. But again, we're going to pay the bills because we're going to do a very practical automation along the way.
And so I think internally, that's probably how we look as well, which is 80 percent of our time is, let's reduce that bottleneck, let's make that more efficient, let's improve how we respond to customers there. And then 20 percent is experimenting, saying, what would this workflow look like if we had Claude Code go and just run it as a background agent and do a lot more of the work, as opposed to a more basic way of interacting with the agent in an IDE?
Dan Shipper
Tell me about this automated due diligence research agent. Is it working? What have you learned from building it? Do you think it's going to work?
Aaron Levie
Still very early, and it will work, simply because, as we're finding, it's all a trade-off on how much compute you want to apply to the problem. We have a lot of internal debates like: man, we can make that thing happen in 10 seconds, but we know the hit rate is 50 percent, or 70-30. Or we can make it happen in one minute and know it'll be 93 percent. I'm making up all the numbers. And then at some point the customer's going to be like, wow, that wasn't very magical because it took a minute. But you're like, but you got the right answer. The customer obviously would be way happier with the 70 percent success rate in 10 seconds, but they're not going to be happy if they're in the 30 percent.
So all of these things are just product trade-offs: how long is the customer willing to wait? Do you give them the knobs to tune those decisions? If you give them the knobs, can a regular person outside of Silicon Valley understand what those knobs even mean? Or are they just super confused now and we're doing techno-speak? But it's so much fun, because I can guarantee we have spent more time on completely unprecedented UX patterns in the past year and a half than in probably the past 20 years of building a company.
Because for the most part, over the past 20 years, software didn't really change. Again, pre- and post-cloud, we had buttons, we had tabs, we named the things, you could have a sidebar. All of the UX patterns of software were exactly the same. We just got better at designing pixels over the past 10 or 15 years. Figma made it so we could actually make everything look modern and have a good design system. But nobody really reinvented the fundamentals: there were drop-down menus, there were windows inside of each other, et cetera.
In AI, man, every single couple weeks we're like, shit, how should we expose the idea that this is going to be a longer-running agent to the person? And how much should be anthropomorphized as what you would do when you're interacting with a human versus, no, this should actually be kind of behind the scenes and it's just software doing stuff? And that actually obviously makes it so much fun because we're designing a new form of software. The software is actually labor that you're interacting with in some form, and that doesn't have the classic patterns of software. And so we get to invent a whole new style of how we build these tools.
Dan Shipper
One of the other interesting trade-offs there on the product end is which model to use. So newer models, more expensive, usually faster, but you get much better results. Older models, less expensive, maybe worse results. But there's always that trade-off there where we could serve this to everyone and it'd be amazing, but it would bankrupt us. So how do you think about that?
Aaron Levie
In general, right now I'm in the "you should just always be using the best that there is." And in our case, we're fortunate enough where we can afford to sort of say, okay, we'll spend a little bit less in that area to subsidize some of the compute on this area. We can move things around. I'm certainly sympathetic to smaller startups, maybe that don't have venture funding—they can't make those decisions as easily. But generally speaking, right now, I think we're in a part of the curve where you kind of just want to always be betting on the better technology. And mostly because you will have a competitor that does, and you will not be able to be the company that has an inferior product right now.
And then equally, any company that is doing work to mitigate the quality issues of an inferior model relative to what you could be getting from a better model—that work is totally wasted relative to, again, real productive value creation. So you kind of—and this is why, fortunately, a lot of people are kind of building in public, so you get to kind of see these examples—but this is why I think on a regular basis, every three or six months, you're kind of moving up the stack from a scaffolding standpoint because you built scaffolding two years ago that mitigated context window length. And now that's not an issue.
For instance, we built a lot of features—this is just to give you an example. We built a couple services internally because GPT-3.5 had a context window of, I don't know, 6,000 tokens or 8,000 tokens or whatever. Well, that scaffolding is irrelevant in a world of 100,000 tokens or 200,000 tokens, effectively. And so we could have been like, no, let's really keep betting on that thing and the old model. But obviously you're like, no, you should just instantly upgrade. Screw the software that you built. Now let's just benefit from the model's capabilities.
(00:50:00)
Similarly, as you have better reasoning capabilities, better multimodal experiences, more of these features will be compressed into one single model. And so I think you kind of have to just bet on the best-in-class models that are out there. Unfortunately, cost be damned. If you can make it more efficient with intent routing and whatnot on your own, great. But you cannot afford to have an inferior experience on any dimension right now and be competitive based on how fast the space is moving.
Dan Shipper
That's great. I totally agree. Aaron, this is a pleasure. Thank you so much for joining.
Aaron Levie
Yeah, thanks for having me.
Thanks to Scott Nover for editorial support.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.
We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.