Transcript: ‘His GPT Wrapper Has Half a Million Users—And Keeps Growing’

‘AI & I’ with Vicente Silveira


The transcript of AI & I with Vicente Silveira is below.

Timestamps

  1. Introduction: 00:00:35
  2. AI PDF’s story begins with an email to OpenAI’s Greg Brockman: 00:02:58
  3. Why users choose AI PDF over ChatGPT: 00:05:41
  4. How to compete—and thrive—as a GPT wrapper: 00:06:58
  5. Why building with early adopters is key: 00:20:49
  6. Being small and specialized is your biggest advantage: 00:27:53
  7. When should AI startups raise capital: 00:31:47
  8. The emerging role of humans who will manage AI agents: 00:34:53
  9. Why AI is different from other tech revolutions: 00:45:25
  10. A live demo of an agent integrated into AI PDF: 00:54:01

Transcript

Dan Shipper (00:00:35)

Vicente, welcome to the show.

Vicente Silveira (00:00:36)

Hey, Dan. Thank you for having me here.

Dan Shipper (00:00:39)

Thanks for coming on. So, for people who don't know, you are the CEO of AI PDF. It’s one of the biggest AI PDF readers in the world. You have about 500,000 registered users and it's only been live since the end of last year. You've done over 2 million conversations in the GPT Store and you just started monetizing and you have almost 3,000 paying subscribers. So, it's a really, really cool business, and what I think is most interesting is there is this narrative, especially in the sort of earlier days of AI, but I think it's still happening now, which is like, oh yeah, all of these AI PDF companies are fucked and they're not going to do well and whatever. And I think you're actually building a really interesting business and you're also part of this broader wave of people who are making things in this new AI economy with really small lean teams, who run the company break-even without raising a ton of money. I know you raised a friends-and-family round, but you haven't raised a ton of VC. I think there's a lot of overlap between the kinds of things that you're building and the kinds of things we're building internally at Every. And I just want to learn about how you're thinking about it. We can swap stories. We've done a lot of thinking too about fundraising and all that kind of stuff and how to build this. Because I think people assume, oh, you can't build a good business like this, but I think you can actually build a sneaky, really, really great business this way with small teams, especially with AI. So yeah, tell us about that. Tell us about your business and how you're thinking about running it.

Vicente Silveira (00:02:22)

Yeah, it's funny you mentioned the whole wrapper thing. We almost have a wrapper death countdown clock, which is this many days since we died the last time. From time to time, we're pronounced dead. In reality, the business keeps growing. It's kind of interesting. At some point, when OpenAI first allowed you to upload PDFs to ChatGPT, it was, oh, all this stuff is dead. No. Some of our competitors actually just gave up at that time, but we kept plugging away. And I think at some level this is an interesting industry because everyone is pointing at each other saying, you are a wrapper. It's like, I don't know, is NVIDIA a wrapper around math? Everything goes from there.

Sponsored by: Every



Tools for a new generation of builders

When you write a lot about AI like we do, it’s hard not to see opportunities. We build tools for our team to become faster and better. When they work well, we bring them to our readers, too. We have a hunch: If you like reading Every, you’ll like what we’ve made.

  • Automate repeat writing with Spiral.
  • Organize files with Sparkle.
  • Write something new—and great—with Lex.

But yeah, we started on this because we tried different things actually. When we first started, ChatGPT was just coming out, and I was basically trying to do prompt injection against ChatGPT and Sydney back then. And I found some stuff. I sent an email to Greg Brockman at OpenAI and he replied, oh, that's kind of interesting. You should talk to this guy here. And at the time, I had just gotten out of Meta and I was angling for a job at OpenAI. I talked to this guy, and he's like, well, you have an interesting background. Right now we're not hiring for this, maybe in a couple of months, so we’ll keep you in mind. And I'm like, well, maybe you can give me an API key or let me into this new developer program, and that's how we started. And we tried different ideas, but the PDF one really took off immediately. And in reality it is because this is one of the first things that people are trying to figure out with AI. It's a lot of pain dealing with lots of documents, and PDF is the main kind of document across platforms. So people just gravitated to that.

Dan Shipper (00:03:21)

That's really interesting. And so people gravitated to that. And you've said people have declared your death multiple times. What do you think—? I mean, if a PDF reader is built into ChatGPT, why are people using you?

Vicente Silveira (00:04:39)

Great question. So the thing is, when we looked at this and we started building it we— Actually at first, we didn't even have a place for people to upload PDFs. We were just like, okay, you can just give us a link and our server would go there and fetch the content. And because we were growing so fast at the time, people were giving us Google Drive links and Dropbox links, and Google and Dropbox started rate-limiting our IPs because they saw us as an aggressive bot. And this was a bad experience for our users because they would get an error. And we were like, well, what are we going to do? I guess we could try to play the cat-and-mouse game with those guys, but we actually know how that works and that would be pretty bad. So we're like, well, what if we just let them upload their files? And we thought no one would do it. And when my cofounder, Karthik, saw the first version of our website, he was like, it looks scary. And to our surprise, in a week, the domain for our website became the number one domain for the links, passing the Google domain and the Dropbox domain.

So that was a lesson for us because it told us that the users that were gravitating to ChatGPT, and going to all the trouble to enable plugins, were actually risk takers and early adopters. So we built that for them. And then, going back to your question, why do they keep using us even after this was created in ChatGPT proper? It's because they don't want to just upload one file. When it works, they basically want to upload their whole collection of files. And even to this date, I think ChatGPT is limited to 20 files. For us, we have people with more than 150,000 files in one account. We have people with multi-level folders. No one else supports that kind of stuff. But it's important, because people need that low friction, and they even need the intelligence they put into creating their folder structure to be respected as part of this onboarding. So this is part of the product experience.

Dan Shipper (00:06:42)

That's interesting, but I want to push you a little more. So it sounds like one of the reasons people are still using it is because you're sort of staying one step ahead of what ChatGPT will do. Do you have a theory about why they won't eventually do a multi-folder upload? An example might be NotebookLM, which does let you open a lot of files. Are you worried about that at all? Or is there some strategic reason why you think you're going to go deeper than a ChatGPT or a Gemini or whatever?

Vicente Silveira (00:07:20)

I mean, this is interesting because I feel like these guys, especially if you look at ChatGPT. Their focus is to kind of race towards AGI and create a product that's good enough to have enough usability there so that lots of people use it, they can collect the training data, and then feed their machine. I don't think they're going in one particular specific direction. They're mostly kind of touching on what is the minimum for core use cases. So I think, is there a possibility that something like ChatGPT or Claude can actually compete with us and basically there's no need for this kind of platform? Yes. But that is always the question for startups when things start.

So think about a startup like Loom. Why does Loom exist? And it sold, I think, for almost $1 billion to Atlassian. Loom is just recording video. But they did that use case so well that even though YouTube had all the technology to do it, they didn't do it. Vimeo had the technology to do it, they didn't do it. All of the major providers had the technology. They didn't do it, but Loom actually nailed the use case. So, for us, we're nailing the use case of getting a collection of documents that you have and being able to do end-to-end workflows with those documents.

Dan Shipper (00:08:46)

That's really interesting. I think it dovetails with some of the things that I think about. One of the things that I have in my mind when we build things internally at Every is that ChatGPT and Claude, I think, like you said, rightly, they're building for the most broad use cases possible. They just want anyone to go on and be able to do whatever they want, basically. And in that way, it's sort of a little bit like Excel, where anyone can go into Excel and you have a blank page, a new sheet full of lots of cells and you can just start typing numbers and anyone can do that. And then people are going to discover as they're using ChatGPT and Claude that they have more specific— They're going to discover use cases for themselves that they didn't know existed.

So for example, we have a product called Spiral that lets you automate a lot of creative work. It helps you do headlines and come up with tweets and all this kind of stuff and that's the thing you can do with Claude. But Claude is not purpose-built for it. So our thesis is people will discover use cases for AI and discover problems to solve with AI by using these more general-purpose tools. And then that will create demand for other players to peel off some of those use cases for particular kinds of people for particular kinds of workflows. For us, it's marketers and creators who have a very specific need for that kind of workflow. And having a product that's purpose-built is going to serve those people better. 

And I'm sort of curious, for you, because a key part of my thesis is you need to have a particular persona in mind in order to build something powerful enough for that kind of workflow. But it sounds like you're kind of going a level up, which is more general. So do you have a particular persona in mind, or how do you think about it?

Vicente Silveira (00:10:46)

We get this question quite a bit and it's interesting because our persona is an early adopter of technology, a risk taker, that has an actual job to be done involving lots of documents. So all of those things are important. One is they're not just an early adopter, because a lot of ChatGPT and Claude users are people that sign up just to see what's possible. They don't have an actual job they're going to do there; they just want to get familiar with the technology, be able to talk to someone about it, those kinds of things. So that early-adopter piece is one component of our user base. The other component is that they have an actual job that they need to do today. And they have a lot of documents. So this is the combination of things that creates our—so, who are these people?

We have a law firm. The partner is an 80-year-old lawyer, and he found us. He's like, I'm using this every day and we're going to have it within our firm, and I’m a decision maker—we're going to have it adopted here. And we have people that are researchers, we have accountants, we have writers. So there's a range of these kinds of different profiles. But what they have in common is that they bring a lot of documents to the platform and they are basically trying to get some job done today. And they want to do this in a new way, which is the AI-first way. So that's one thing about the persona. The other thing that I want to highlight is that we also differentiate from a platform like Claude or ChatGPT in a direction I think they're not really going, which is: We're building this from the perspective that we're giving a cloud drive to an AI agent.

That's very different from allowing an AI agent to access some files. So what do I mean by that? And we can show you a little bit of that as well. The agent that we have in AI Drive is capable of doing things like creating new files, updating metadata in files, and going through the file structure. So it's effectively driving that cloud drive to accomplish a job for the user. Because a lot of the job involves manipulating a lot of documents. So that's another very important point, which I think is not really the focus for these other platforms.
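To make that concrete, here is a minimal sketch of what a set of drive tools like the ones Vicente describes might look like. The function names, file layout, and metadata convention are hypothetical illustrations, not AI Drive's actual API.

```python
# A rough sketch (hypothetical, not AI Drive's actual API) of tools an agent could
# be given to "drive" a cloud drive: create files, update metadata, walk folders.
import json
from pathlib import Path

DRIVE_ROOT = Path("./drive")  # stand-in for the user's cloud drive

def create_file(path: str, content: str) -> str:
    """Create a new file in the drive, making parent folders as needed."""
    target = DRIVE_ROOT / path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return f"created {path}"

def update_metadata(path: str, metadata: dict) -> str:
    """Attach or update metadata (tags, summaries, etc.) stored alongside a file."""
    meta_path = DRIVE_ROOT / (path + ".meta.json")
    existing = json.loads(meta_path.read_text()) if meta_path.exists() else {}
    existing.update(metadata)
    meta_path.write_text(json.dumps(existing, indent=2))
    return f"updated metadata for {path}"

def list_folder(path: str = ".") -> list[str]:
    """List a folder's contents so the agent can navigate multi-level structures."""
    return sorted(p.name + ("/" if p.is_dir() else "") for p in (DRIVE_ROOT / path).iterdir())

# The app would expose these as tool definitions and let the model decide which
# one to call at each step, rather than hard-coding a single upload flow.
```

The design choice being described is that the model, not the app, decides when to create, annotate, or traverse files.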

Dan Shipper (00:13:29)

Why do you think it's not the focus?

Vicente Silveira (00:13:32)

Because I think it's not necessary to accomplish what they're trying to do. And it also introduces other types of considerations and risks that they may not be interested in dealing with. So this opens up our platform for a tinkerer-type user to be able to do things like— So think about this: You have chat history in something like ChatGPT, and you can go look at that chat history. In our product, chat history is actually made of files. The agent can access those files on your behalf and use the same tools, like a search tool, to go into those files. So these are all the same primitives that we're building on.
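As a rough illustration of the "chats are just files" idea, here is a small sketch: a chat session gets saved as an ordinary file in the drive, so the same search primitive that works on uploaded documents also works on past conversations. The folder and function names are made up for the example.

```python
# A small sketch of chat history stored as plain files, searchable with the same
# keyword-search tool used for documents. Names are hypothetical.
import json
from datetime import datetime
from pathlib import Path

DRIVE_ROOT = Path("./drive")
CHAT_FOLDER = DRIVE_ROOT / "system" / "chat-history"

def save_chat(messages: list[dict]) -> Path:
    """Persist a chat session as a plain file in the chat-history folder."""
    CHAT_FOLDER.mkdir(parents=True, exist_ok=True)
    path = CHAT_FOLDER / f"chat-{datetime.now():%Y%m%d-%H%M%S}.json"
    path.write_text(json.dumps(messages, indent=2))
    return path

def search_files(query: str, folder: Path = DRIVE_ROOT) -> list[str]:
    """Keyword search over every file in the drive, chat history included."""
    hits = []
    for f in folder.rglob("*"):
        if f.is_file() and query.lower() in f.read_text(errors="ignore").lower():
            hits.append(str(f.relative_to(DRIVE_ROOT)))
    return hits
```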

Dan Shipper (00:13:42)

So one of your core users, you said, is early adopters, but I mean, I'm thinking of myself, information nerds. I was like, ooh, your chats are files. That's amazing. That's so cool. It's like feeding back into itself, but that's a certain kind of nerd that cares about that.

Vicente Silveira (00:14:35)

I was just going to illustrate what you're saying. So you see that we have this little system folder here.

Dan Shipper (00:14:40)

So basically you're sharing AI PDF and the product is AI Drive. And basically you're showing me on the left, it looks like you have a list of files. So that's your drive. That's like a Google Drive. And then you have a chat window which I assume allows you to sort of chat with those files, sort of like a NotebookLM. And then on the right, it looks like there's a reader view where, for any particular file that you're talking to, you can also see the PDF—that's basically what I'm looking at?

Vicente Silveira (00:15:09)

Yes, that's right. And you can see on the left side here, we have shortcuts. So if I click on this little clock, you can see a history. So this is very similar to what you have in ChatGPT. There's a history of the chats that you had today, and that's all you get in a typical platform. That's not really open for people that are tech enthusiasts like you and many others. But what that means for us is that we're building this differently. So if I go back to files, you can go and find these as files under the chat history folder. We're building this as basically an open platform where everything, from your chats to the memory we're going to be adding, will basically just be a file in the system.

Dan Shipper (00:15:54)

That's interesting. I want to go back to what we were talking about earlier. So one persona is information nerds like me, and maybe that cuts across lots of different industries. There are probably information nerds who are lawyers or accountants. It doesn't seem like any of your marketing is specifically about that. Your marketing is a lot more general AI PDF stuff. How do you feel about that?

Vicente Silveira (00:16:16)

Yeah, and we're flipping that because our trajectory is going through this transformation. So if you go back to when we built this, and I give this advice to people when they're thinking about what they're going to do. It’s like what we did with the plugin. That was the least effort thing that we could have done at the time. It's just an API. At the time I had a server running on Replit. Replit’s amazing. And that was everything. And with that, we discovered the market. But then at that point, we're just a plugin. So we couldn't operate independently.

Dan Shipper (00:16:57)

What do you mean by it was just a plugin?

Vicente Silveira (00:16:58)

What I mean is there was no web app, no place for you to create an account. We had no direct relationship with the user. So you would go to ChatGPT, you would enable plugins, you would find us.

Dan Shipper (00:17:05)

I see. It was a ChatGPT plugin. I just forgot about that whole era of ChatGPT.

Vicente Silveira (00:17:11)

It wasn't that long ago, but it feels like it. Yeah, so at that point, we were a product that only made sense as an add-on to ChatGPT. But what we realized is that people needed an actual environment where they could do the work end-to-end with the AI and the files. You're able to verify the work against the files, which is another thing: As you can see on the right side of our AI Drive screen, you have the actual files that are the source material. But beyond that, the other thing that we realized, and I think it will continue to be true for the foreseeable future, is that there's this arms race between the main providers. The ones that can play the game right now are probably Google and Anthropic and OpenAI, and I guess one more is coming up as a potential one as well. So you have maybe four or five providers there that can actually provide unique capabilities when it comes to the models. And our users are like you. I heard you say on your podcast, you tried this on Claude or this on o1. They want to be able to have the latest and greatest. Now, if you upload your files to ChatGPT, and then tomorrow Claude has a better reasoning model, you have to upload your files over there as well. So we bring that to one place where you can use all the models combined. So that's the other aspect of that.

Dan Shipper (00:18:42)

That makes a lot of sense. I mean, it's funny. We do a lot of work with big companies where we help them figure out what to use and sometimes train their employees and that kind of thing. And I think mostly they're not in this category. They were just excited to see that ChatGPT had a new feature, but none of them had heard of Claude. They're like, what? We're like, it's the best. And so, yeah, I think that is sort of a different market. The early adopter market is a bit different: people who really want to use the latest and greatest. And, like I said earlier, that sort of cuts across industries, and there's room for that kind of nerd as a customer.

And what's interesting to me about how you're doing this is you're just taking the first step first. You're like, okay, cool. I'm going to make a plugin, and then that starts working, and then you're like, okay, now our customers really need a place that's different from the chat form factor. So I'm going to go build this other thing. And now you're adding more features onto it and that kind of thing. There's this other approach, which is, instead of getting to market super quick, you kind of come up with an idea. You make a deck, you raise money, you start to recruit a team, and you take a year to put this thing that's your vision into the world. That's an alternative, and I think that works too. And there are always trade-offs. A trade-off that I'm seeing here in the way that you've built this is that the product that you eventually ended up building is sufficiently different from where you started. There's always this difference between what the public marketing is about and how you talk about it, and where the product is now. And you need to constantly catch up. I've experienced that a lot before. So that's one trade-off, whereas for someone who started out by just putting the vision into the world, that's not their problem. Their problem is, does anyone even want this? I'm just curious how you came to that, or why that's your methodology.

Vicente Silveira (00:20:53)

I'm glad you asked this because you're right, there are different ways, and I have other friends—founders—who took these other paths. I think one is what works for you. This whole thing started as a side project with me and Karthik. We were just hacking over the weekend because we just love the stuff. That's one. But the other thing that I think about now, and I think it's still a reality for us, is that even though we may feel like, oh, AI already sort of happened, we are so early in this AI cycle on the ground—and I'm sure you feel that way as well. The steady state of this technology, the productivity state of this technology, it's still not very clear what that's going to look like. So, just in this short time that we've been doing this stuff: You start with these specialized models. People had built a bunch of companies on those. And I just recently talked to a company where they had a model for PDF extraction that they built on Watson and all of that. Then they tried it with GPT-4 and it just blew that thing that they had worked on for years out of the water. So that was a big phase shift, where a lot of people had a rude awakening. There's a great paper by Microsoft where they did a bake-off between the Microsoft PII model, the private information detection model that was built over years with Microsoft resources, and GPT-4, and GPT-4 destroyed it, right?

So that was the first phase shift, but the reality is that it continues to happen. So we built this thing, and at first Chat barely worked. Now Chat works pretty well, and we have multimodal chat and all those kinds of things. And now if you look at what Sam Altman and a bunch of the people building the foundations are saying, they're pointing to, oh, the AIs are going to become more and more capable so they can take on more of the task and become these agents. Everyone's talking about, okay, we're going to move to agents. So the ground is shifting as we go along. We have something like Computer Use now. Claude did a little demo of that. So what I mean by that is, by building this for an early adopter crowd with an actual problem to solve (they have day jobs, that's very important), we are actually capable of tracking the evolution of this market, so that we don't get stuck on a kind of early-internet-type thing. There was ICQ. There were a bunch of different things that eventually became irrelevant once you got into the productivity state. So in a way, for us, doing this is both strategic and also defensive as well.

Dan Shipper (00:24:10)

Well, let me unpack a little bit of what you said. Because the question I'd ask is sort of that tinker mentality where you're going out and you're just building the thing and you're getting to market super quick. And it sounds like what you're saying is that by serving an early adopter market, you'll be able to, and you're incentivized to kind of keep up with the latest and greatest so that you don't get left behind. How do you bridge the gap between the two? What's the bridge from how you got to market to not being left behind because you're serving early adopters?

Vicente Silveira (00:24:43)

I think what early adopters give you is some leeway to experiment more. So for example, we introduced the agent into our product, but if you look at our product, the regular chat is still there, and that's the main way. You can see this menu. Basically I clicked on this menu in the chat prompt and you have the models from the main families here: Anthropic, OpenAI, and Google Gemini. So you can just go into this and do a regular chat, which is what most users are used to. But then we have our kind of users pushing us forward. They want to go into the agent and be able to do more of that task. So we're trying to work with the core of our early adopter users and also listen to the ones already moving forward to the next thing. And what these early-adopter users give you is more tolerance for the experimentation, because of course something like an agent today still doesn't work great. You may have a moment where it's amazing, and the next run that you do, it may get stuck. But they actually want to see where that thing is going. So that's why these types of users help us bring both the main use case and the leading use case forward at the same time.

Dan Shipper (00:26:11)

I think this is an important related point to this competitive thing that we've been unpacking together. How do you compete if you're a two-person team or, I don't know how big you are, but we're eight people. So how do you compete against OpenAI or Google or Notion? And I think the thing that comes to mind for me, which I think you're saying and which I think is true for us as well, is people forget that when you're a big company, you have to serve a lot of users. It's really hard to take risks. And for a while there was this feeling about AI where it was, well, the AI is going to be smart enough that it's never going to make mistakes, so big companies are going to be able to do anything that startups can do. And I was always just like, no, big companies always find a way to fuck things up. It's not because they're not smart. It's just innovator’s dilemma stuff. It's basic stuff that you just can't take for granted. And I think that's why when you look at a lot of the AI stuff, for example, and I won't name names because I don't like shitting on people directly, but I got a fitness tracker app recently and it's really great. I actually love the way it works. The app is whatever, but they have an AI feature in it. And the AI is just so milquetoast. It just doesn't say anything useful, basically.

And the reason is they have to make it work for the lowest common denominator user. They have to make it not confusing, and they don't want to take any risks, because they're a big company and it would be bad if it said something risky. Which makes a lot of sense, but it means that the experiences you're able to build as a bigger company are less good, in a lot of ways, than the experiences that you can build as a small company, where you can just decide, okay, yeah, we're going to serve these users that don't care if there are rough edges, and we're going to explore the boundaries of what's possible, and our users are going to understand if we return a result that's not so great. That allows us to experiment, and they understand because they want the greater power and they understand that there's a trade-off there. I think that's really important. I think people miss that all the time.

Vicente Silveira (00:28:20)

Yeah, I think that's totally true. The only reason why startups have a shot at anything is because there's, I guess, a core vulnerability to established businesses. And usually that is their customers. The thing that makes them powerful is that they have lots of customers, and it's the same thing that makes it hard for them to take risks with those customers. So if you think about the mainstream products, take a spreadsheet product. For a product like that, there's a lot of investment that was done on a ton of features and training of users. I mean, they actually have certification courses for their user base, who go there day in and day out expecting a certain experience. They also want to know that there's some AI on the side—and you can put some AI on the side, they do sprinkle AI everywhere. But to radically change that experience to be something like, oh, it's going to be AI-first, you're not going to go click the buttons, you're going to tell an agent to go do that for you, and that's going to be the core of the experience—changes like that are typically just too radical for the incumbents to be able to make. So that's why you have the opportunity for startups like yours and ours to come in and introduce a new way of doing things, which is part of what we're trying to figure out: What is going to be the new way of doing things? And we feel like it's a lot more— We have this analogy here. Just think about how a very wealthy person operates: They operate through intelligent agents. The people they hire are very smart. They learn everything about them and they handle all the complexity behind them. And that's kind of where we think things are going with AI. Of course, right now it's far from that. But it will approach and get very close to that. So this opportunity to build for the new experience, I think leaning into that is very important.

Dan Shipper (00:30:32)

I totally agree. I think that metaphor is so powerful. It's something that I've written about a lot and I've thought about a lot. If you want to know where things are going with agents, people have been hiring agents for a long time. And they are solving problems with these agents where there's a lot of overlap with what AI is going to be able to do. And maybe you can do new things with AI that you couldn't do with things like hiring a personal assistant or whatever. But there's a lot there that you can just kind of carry over—even for me. I run a media company and I employ a lot of people, editors and writers and designers and all this kind of stuff, that help me do things at a high level all the time. This YouTube video is going to be edited and it's going to have an intro sequence and it's going to have a thumbnail and all that kind of stuff. And I can do that because the company is successful enough that I can hire those people. But it took me a long time to get to a scale where I could do that. And I think, if you want to understand, for example, where the future of media is going, it's not that teams of creatives are going away. It's just that an individual creator is going to be able to do, on day one, a lot of the things I have to hire people for. And I'll still have lots of people doing stuff. I'll just be doing it at a higher scale, because the people that are editing my videos can edit twice as many videos or whatever. But I think that's a really good place to look for ideas. If you want to understand how people are going to run their calendars or their emails or whatever, just look at how people with assistants do it. So that's definitely a metaphor that we use a lot internally.

I'm curious, for you, you only had a friends-and-family round, I assume with the kind of traction you have, you could have gone and raised a venture round. Why didn't you do it?

Vicente Silveira (00:32:29)

First, our experience raising was kind of very interesting. The beginning was, oh, this is amazing, we're just going to be raising a ton of money now, and maybe we should raise more. The beginning was fast, and then the process later started dragging along. And I felt like I was back working at Meta, doing PowerPoints and tweaking PowerPoints and prepping with a friendly VC to talk to another one. And meanwhile we have users who are basically asking us to do stuff. And at the time it was just like, right, I'm hating this, and if we can just go and monetize, let's just do that and then we'll come back. So I think that was the main reason. And it was a time as well where, with AI, the whole hangover of the first wave of investments was setting in. So people were really worried about this. It was also a time, product-wise, when the product was very much dependent on OpenAI, on ChatGPT specifically, which is not the case anymore. So for those reasons, we were like, yeah, let's just focus on the product, which I think was the right thing to do.

Dan Shipper (00:33:40)

That's interesting. Do you think you might raise again in the future? What's the path look like for you?

Vicente Silveira (00:33:42)

Yeah, I think so. I think the reason is we want to be very careful and, I guess, diligent about when we do it, why we're doing it, and how we're going to use the capital that gets invested. And this is one of those things where, going back to how many people and how much software—that's the other question—a startup is going to need moving forward, we think it's actually less than what it has traditionally been over the last 5–10 years. So we want to make sure we do it right so that we are not— Sometimes, and I think Bill Gurley famously said this, companies raise money, and because they have that money, they end up becoming more complacent. So that's one thing that, for now, we want to make sure we don't do. But yes, I think once we know, okay, this is going to allow us to grow in this direction, it will make sense. The other thing that I want to maybe touch on, between this point and the previous one, is how the experience is different but also how you end up saving money if you just lean in. Think about a job like onboarding in our product. When you sign up for our product, our onboarding sucks. Your onboarding is beautiful. I did the Spiral one. It's beautiful.

Dan Shipper (00:35:16)

Thank you. We worked really hard on it.

Vicente Silveira (00:35:18)

Yeah, I love it. Ours sucks. And we're like, okay, we need to make our onboarding better. And I do think that the way I want to do that is basically to give an AI the job of onboarding that user. So what does that mean for us? Instead of having, okay, either a product that we attach to our app that gets configured for that onboarding in some way, or building that ourselves, we are going to basically hand you to an agent that has knowledge about the product and knows where you came from. Oh, we have a landing page for lawyers. You came from the landing page for lawyers, so you're likely a lawyer. Hey, this is what the product does. Do you want to upload one of your files and we can show you what it can do for you? It basically gives that AI a job. And we have a person that we just hired who is going to be responsible for this area. So that person's job, the actual human that we hired, is to basically manage that little agent and be responsible for what that little agent delivers. And of course, as the agent gets better, there will be more and more that it will be able to do. So it's kind of an interesting way to think about these things, where we feel that as we hire people, they will end up being responsible for certain agents in the product. They have specific jobs, both for the user and for the company as well.
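As a rough sketch of that onboarding idea, here is what handing context to an onboarding agent could look like. The landing-page paths, personas, and prompt copy are hypothetical, made up for the example rather than taken from AI PDF's product.

```python
# A rough sketch of an onboarding agent that tailors the first-run experience
# based on where the user came from. All names and copy here are hypothetical.

PERSONA_BY_LANDING_PAGE = {
    "/landing/lawyers": "lawyer",
    "/landing/researchers": "researcher",
    "/landing/accountants": "accountant",
}

def build_onboarding_prompt(landing_page: str) -> str:
    """Compose the system prompt for an onboarding agent based on the likely persona."""
    persona = PERSONA_BY_LANDING_PAGE.get(landing_page, "knowledge worker")
    return (
        f"You are onboarding a new user who is likely a {persona}. "
        "Briefly explain what the product can do with their documents, "
        "then invite them to upload one of their own files and walk them "
        "through one concrete task with it."
    )

# The human who owns onboarding reviews what this agent does and iterates on the
# prompt and the tools it is given.
print(build_onboarding_prompt("/landing/lawyers"))
```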

Dan Shipper (00:37:01)

I love that. I think that makes a lot of sense. I've been writing a lot about what I've been calling the allocation economy, and I think this is right on that train: In an allocation economy, instead of doing a lot of the IC work, you're doing a lot more management work, where you're managing the allocation of intelligence, managing agents. And in that world, the skills of managers become more important than they are now, and they need to be more widely distributed. So I think that makes a lot of sense. That's really interesting. I'm curious, for you: As you said earlier, you can get a lot more done now with a smaller team and less capital. Do you have any quantifiable sense of what that is vs. 10–15 years ago?

Vicente Silveira (00:37:48)

Oh, for sure. And you can see this everywhere. I can give an example from working at Meta. Of course, Meta has a ton of money, and now they're deploying GenAI very aggressively internally, as I hear from the outside. But pre-GenAI, you would have things like a product manager—and I was a product manager there—who would want to know, okay, what's going on with this particular feature? What are the top issues that our customers are having in this particular area? And the PM would basically talk to a person in the support area whose job was only to collate all this feedback and create this report. And it would take maybe a day, or if they were busy, maybe a little bit more, depending on how much of a priority it was. All of that you can now get done directly with AI. So that's just one example of how, with the tools that we have now, we should be able to be a lot more efficient and do things that would only be available to larger companies.

Dan Shipper (00:39:02)

I think that's true. I mean, we see some of the apps that we work on internally—I can see someone take something from zero to a fully finished product that would have taken a year, and they can do it in two or three months if they're good and they have a whole general skill set and we have a bunch of support for them and all that kind of stuff. It's kind of wild. And I probably have a particular anti-VC bias, but I've been historically hesitant to raise. We raised a little bit in 2020, we raised like $700,000, and we raised a little bit more recently, but it was like $100,000—amounts where a VC would laugh and be like, what are you going to do with that? And for me, I'm like, we have raised less than $1 million. And if you look at the number of products we've built, there's a couple of different products and companies that have come out of just that one raise. And I think in a year we'll have a bunch more. And I think about raising capital and why you do that. It's: If you spend a couple months raising, theoretically you can hire more people and spend more on growth so that you can pull forward the progress that you would have made over the next year and maybe make that progress in like three months or whatever. And I think that equation is still there, but it's different, because of the amount that you can get done if you basically have gotten a lot of that pull-forward effect from just properly using AI, which is really interesting. And of course everyone else has that. So, to some degree, having extra capital can help, but capital has always been pretty available for technical teams and whatever.

And I think it's rarer to actually be using AI well than it is to get capital in certain areas. Some people have a very difficult time getting capital, and that's a whole different problem. But for a technical Silicon Valley-ish team, and I think a lot of those people are like this, I just feel like they don't use AI that much in their daily work because they're like, oh, I'm better than the AI or whatever. I think that's changing a lot, and people that have really gone headfirst into it are quite a bit more productive.

Vicente Silveira (00:41:16)

And for us it's productivity at all levels. So I have a software engineering background, but I coded early in my career and then I stopped and pretty much went to the business side. I thought I was never going to be able to code again. And when this came back with GenAI, it was kind of like, you couldn't mountain bike anymore because you couldn't go up the mountain, and now you have an electric bike and up you go. It was incredible. And I say I have two mentors. One is AI, the other one's my cofounder. And I see that even for him, a world-class engineer, former Google AI, it makes him so much more productive. So everyone goes up from whatever level you are at. You become a lot more capable. So that's absolutely true. And we think that there is a huge lack of awareness overall in the population. I think some of this is, and I like your show because it helps spread the word, that AI is actually for everyone. If you just follow the more mainstream media, you think that, well, one, AI is going to kill you, but if it doesn't kill you, then it's going to take your job. And by the way, the rich will get richer. So you end up like, what kind of message is that? What does that do to the population? Everyone gets demotivated. You take agency away from people. In reality, you have the opposite. You talked about this, learning to be a manager. Well, if you have access to an AI, you're practicing being a manager. If you have a phone, you have access to AI, so you can start specifying a task. The AI doesn't do what you want, and you're like, hmm, well, actually my question wasn't good enough. That's a lot of what being a manager is. So you make your prompt better, your question better. Now you have to look at the result, what the AI brought back. Is this quality good? Am I willing to put my name on this thing, on the idea I came up with? So, yeah.

Dan Shipper (00:43:23)

I think that makes a lot of sense. I mean, obviously there are a lot of difficult issues—difficult social and economic issues—with broad AI rollouts. But I do think that your point is totally right. People tend to miss how powerful this is as an immediate upskill for lots of people. Even people we have internally, for example: If English is their second language, maybe they speak fluent English, but their written English was not as good, and you could tell. And that limits their ability to get hired or get promoted or do certain jobs. And the minute ChatGPT came out, it was a total shift. They could immediately write fluent English. And that level of opportunity just opens up, opportunity that was not available before, and they didn't have to do anything. And I do think that's one of the things I would love to do with the show—show people the easy ways that they can get started, and also what the most interesting or smartest or furthest-ahead people are doing, so that we can bring everyone else forward to use it. Because, yeah, I think hopefully raising the floor creates a lot more economic opportunities for people. And I love that you have this tutor in your pocket. I would have just talked to ChatGPT all day if I was 11 or whatever. There's a lot of concern about AI companies racing ahead and releasing it to the public before it's ready and all that kind of stuff, but on the other end of the scale, I think we're kind of lucky that we live in a world where all these companies are trying to make it as cheap as possible for everyone to use. There's an alternative timeline where IBM invented this and only the DOD gets access for the first 15 years, and that would suck. I don't know. I could just imagine a lot of versions of this written the other way, where only rich, big companies get access to this crazy intelligence. And I'd rather just have everyone have it, if we have to pick. I'd rather this world, I think, to some degree. I don't know. At this point, I'm totally off what we usually talk about on this show, but I think it's really interesting and important.

Vicente Silveira (00:45:44)

Yeah, I think you're totally right. And if you're in Europe, they're still struggling. I hope that a lot of people get to see this. Because it's the first time that I see—and I've seen these other tech revolutions—where most people are already equipped to be able to use it. Before, the microcomputer was super expensive. You know, I actually grew up in Brazil. We couldn't get it, and we had to basically smuggle parts and kind of build Frankenstein computers there. And even with cell phones, when they came out, it took time until people had access, and then the networks were not good. And now we get this thing where, if you can get whatever social media on your phone, you can get AI. AI actually works better, because most of it is lower bandwidth. So that works well. And I think for us, what we see as we go in this direction is moving from just conversations with the AIs (I ask a question, it gives me an answer, which is the initial chat) to agents, where I give it a task and it executes that task for me. That's the evolution of possibilities that you can have. One of the things we spend a lot of time doing now is building tools for the agent. It's kind of super interesting, because you start to think about it: Is my tool good? Is the tool self-explanatory? Does it do what it's expected to do? So you have that second level there. And all of this is opportunity. Because if you think about what people say, well, when the AIs do everything, then there's nothing to be done. Again, if you use the rich, wealthy person metaphor, you can be very rich and you can hire a bunch of people to create a company to do something. But most of the time that doesn't work, because it takes leadership and talent and some vision and some grit to be able to organize these human agents into a company that actually succeeds. And I think the same thing will play out with the agents.
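To illustrate what "building tools for the agent" can look like in practice, here is a small sketch of a tool description in the style of common function-calling schemas. The tool name, fields, and wording are hypothetical, not AI Drive's actual spec; the point is that the description has to be self-explanatory enough for the model to know when and how to call it.

```python
# A hypothetical tool description. The schema mirrors common function-calling
# formats: a name, a when-to-use description, and typed, documented parameters.
fetch_url_tool = {
    "name": "fetch_url_content",
    "description": (
        "Fetch the text content of a web page or of a file in the user's drive. "
        "Use this when the user points you at a URL or a document you have not read yet."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "url": {
                "type": "string",
                "description": "Absolute URL or drive path to fetch.",
            },
            "max_chars": {
                "type": "integer",
                "description": "Truncate the result to this many characters to stay within the context window.",
            },
        },
        "required": ["url"],
    },
}
```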

Dan Shipper (00:48:11)

You're so right. I'm literally writing an article about this today, but there's this weird fallacy where people are like, well, agents are going to be doing it, so you won't have to think about it. And I'm like, have you ever managed a person? A person is literally a general intelligence. People are very smart. It's really hard. And it for sure is a different kind of thing than doing it yourself. But even if someone else is doing it, there's the skill of delegating. There's this thing that early managers have to figure out: Okay, how much do I delegate and how much do I micromanage? Because if I micromanage, it'll get done the way I want it to get done, but then I have no leverage—I'm just basically doing their job for them, so why did I hire them? But if I delegate, then I have more time to do other things or think at a higher level, but it might come back wrong. And that's literally the problem that a lot of people are having with AI right now. They're like, oh, it sucks, I can just do it quicker myself. And I'm like, that's exactly what managers face. And so in a world where we have these— let's say it's AGI—really cool agents to do all this stuff, I still think you're skipping a lot of invisible things that have to be done: scoping the task, picking the right resource, having the taste and vision to say, this is what I want done in the world. I don't think we'll get to a point where they're doing even that. There's a lot of stuff that is sort of invisible in an AGI-type scenario, or even before AGI, in just an intelligent-agent scenario, where there's a lot of skill and talent that needs to be directing them. And I think you're pointing to that.

Vicente Silveira (00:50:06)

We actually think a lot about delegation. I can show you something cool. 

Dan Shipper (00:50:08)

I would love to see something cool, because I think that there's a lot of talk about agents and there's a lot of stuff happening, but I haven't seen anything really compelling yet. We had Yohei on last week and I think he's got a lot of cool stuff with BabyAGI, but he's like, I think a lot of that is still pretty prototype-y and experimental. So, yeah, I'm really curious to see what you guys are working on.

Vicente Silveira (00:50:40)

Yeah, let's just kind of do this live demo here. Okay, so I did download a few of your posts in the Context Window series of articles.

Dan Shipper (00:50:45)

So I just want to set the scene for people. So basically we're back in AI Drive. We've got the same kind of three-column layout, which is like we've got a folder structure on the left. We've got a chat in the middle and then we've got an open PDF on the right and you've downloaded every article from Context Window, which is our digest that goes out every Sunday. And you're typing into the chat. And I'll read what you type after you're done typing it.

Vicente Silveira (00:51:13)

So I'm just here on AI Drive and I'm just typing this prompt: Hi, I'm talking to Dan Shipper. So I'm writing this to the AI, which is our agent in AI Drive. He has a series of blogs—Context Window—in a folder, and I'm pointing it to the folder where I've downloaded them. Can you read them all and suggest some interesting talking points or conversation starters for us? And I'm going to use this thing here as well, which is to use the expert to plan. So this is one delegation, and you'll see the reason. And I'm going to fire this up as I explained. So now it's processing, and if you click, you can see what it's actually doing. So the first thing is getting a tool plan from the expert model. The main agent you're talking to here is GPT-4o, which is great, but it's actually not the smartest at planning. So what we do is we allow this agent to delegate to an expert in planning, which can be o1, which is great, or it can be the latest Claude, which is also very good. And you can see this here, right? It says, oh, you know, this is the task. I need to read all the blogs here. Give me a plan. And then you get a plan back. And yeah, so now it is outputting the conversation starters. So as you can see— Let me just collapse here so we can look at all the tools that were used.

So the first thing that it did was get the plan from the expert. Then with the plan, he went about executing it, which is, okay, let me take a look in drive at what's in the folders and files. So it went into the Context Window folder just to see which files were there.

And this is one of the things that you have to deal with in AI: Each AI has a certain context window, or working memory, that you can work with, and you have to work around that. So here, this particular model, GPT-4, that family doesn't have that big of a context window. So in this case, we basically tell it, okay, you're going to delegate the task of reading these files. So it went ahead and said, okay, go into this folder and extract the key arguments, unique insights, personal anecdotes, blah, blah, blah, and organize this as conversation starters. And then you have the output here, which it brought back into the chat. And then we have here—I don't know, app integration to ChatGPT. So there are some interesting conversation starters here about the app integration. So this is just to show how, in one chat now, with a smart agent, you can actually have two delegations happening to get that task done.
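To make the two delegations concrete, here is a minimal sketch of the pattern being described: a stronger "expert" model writes the plan, and the file reading is broken into per-file calls so no single prompt has to hold the whole folder in its context window. The model names, prompts, and the call_model helper are placeholders, not AI Drive's actual implementation.

```python
# A minimal sketch of plan-then-delegate, assuming a generic LLM API.
from pathlib import Path

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def plan_with_expert(task: str) -> str:
    # Delegation 1: a reasoning-strong model writes the plan the main agent follows.
    return call_model("expert-planner", f"Write a short tool-by-tool plan for this task:\n{task}")

def summarize_file(path: Path) -> str:
    # Delegation 2: each article is read and condensed in its own call,
    # keeping every prompt within the smaller model's context window.
    return call_model(
        "main-agent",
        "Extract key arguments, unique insights, and personal anecdotes from this "
        "article, as conversation starters:\n" + path.read_text(errors="ignore"),
    )

def run(task: str, folder: Path) -> str:
    plan = plan_with_expert(task)
    notes = [summarize_file(f) for f in sorted(folder.glob("*")) if f.is_file()]
    # The main agent combines the per-file notes into the final answer.
    return call_model(
        "main-agent",
        f"Plan:\n{plan}\n\nNotes:\n" + "\n\n".join(notes) + "\n\nWrite the conversation starters.",
    )
```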

Dan Shipper (00:53:40)

I get it. I think that's interesting. So which articles have you downloaded?

Vicente Silveira (00:53:42)

We can go take a look. So here I'm going to dive into the folder for the Context Window—Apple Took the Stage for WWDC, Blue-sky Thinking, Creator-led Businesses, Generative Thinking, Spiraling Out of Control, and Your AI Research Assistant.

Dan Shipper (00:54:00)

We can try it on this. I don't know if it'll work, but one of the tasks I really want to do, and I don't think that these models are capable of it yet with just putting everything in the context window is: I want to create a list of all of the thesis statements or all of the ideas. For example, I have this allocation economy idea, and that’s an ongoing thread that I write about, and I really want to create a list of all those threads just to have, so I can refer back to them, and this seems like a good thing to do that with. Do you think we could try and see if it can pull out things like that?

Vicente Silveira (00:54:37)

Let's give it a shot. So where do you start from? Where is the information?

Dan Shipper (00:54:40)

I would just start, I mean, if it's not too hard, I would just go into Chain of Thought, which is my specific column on Every. It would be every.to/chain-of-thought and download.

Vicente Silveira (00:55:15)

Let's try one thing real quick here. Can you go to every.to, Chain of Thought, and find that? Just to make sure, this is the website here, and the Chain of Thought column URL. So we're going on the web. We're using a tool that fetches URL content. So it's going to every.to, Dan's website, and let's see if it's going to find the right URL. Well, apparently it found something here. Oh, it's actually going into AI Drive, because we may have downloaded something. So this is one of the interesting things you see with the tool, right? It's a tool to go out and fetch URLs, but even files here on AI Drive are URLs themselves. So it went there. Let's see what it did. Oh, okay. There's some issue here. Let me try another approach. That's one of the cool things about the agents: They kind of work around issues, and the more tools you give them, if there's some overlap between the tools, they can actually find workarounds. So let's see what it did. It went to fetch the URL again. Oh, it found it.

Dan Shipper (00:55:16)

That's awesome. So what I want to do: There's a way to sort it by newest. And so I would like it to just sort by newest and take all the articles that are on the first page of newest and tell me, okay, what are the main ideas of each article as bullet points?

Vicente Silveira (00:56:50)

Okay. So we can give it a quick shot here. So you want the latest articles there?

Dan Shipper (00:56:52)

Yeah, so basically it's a new URL, to make it— Okay, let's do that. I'm putting it in Chat. It's just a parameter where you just sort by newest.

So basically I wanted to go to each article in the URL and pull out the main idea of the article and express it as a sentence. And then give me a bulleted list of the main ideas.

Can you go to this URL, which is the Chain of Thought URL, and download the articles and then create a document with the main ideas in those articles and express those main ideas as sentences. Use the expert to plan your tasks. So it's basically going to the expert model, which is, I guess, o1.

Vicente Silveira (00:57:37)

Yeah, right now it's Claude.

Dan Shipper (00:57:38)

And Claude's basically creating a plan. So it's writing a plan for the other AI to follow and it's giving it some things, like it's going to fetch the URL content and write to the file and, you know, all that kind of stuff. It's telling it, okay, here are some things to watch out for. And now it's starting to take action. So we're starting to see it, I think, go to the site and then write the text of the site into files. And then, wow, okay, so now we're getting some output.

So the first one: The article title is "How to figure out what people want," and the main idea is that understanding customer needs involves thinking in sequences. That's right. "OpenAI launches a document code editor," and the main idea is Canvas enhances human-AI collaboration by allowing real-time document and code editing. This is cool. Yeah, this is actually quite helpful. I think you may have converted me. This is definitely different than, I think, the experience— For example, one of my problems with NotebookLM is I have to go do all this manually—uploading all the stuff manually. And it's really nice to have it just be able to fetch it for me. And I think the other thing that's nice is, yeah, there are some more complex workflows you can build here that are helpful for text processing. I like this. This is cool.

Vicente Silveira (00:59:06)

Yeah, what we find is kind of interesting if you think about the evolution of these things. You can't do something like this in Google Search, because usually Google searches are optimized for maybe two words. I think even in Perplexity— Do you want to try this in Perplexity?

Dan Shipper (00:59:24)

Sure. 

Vicente Silveira (00:59:26)

Yeah, let's try that. So, let's do a quick bake-off here. I'm going to get the exact same prompt to be fair. So we can go to Perplexity here and share this tab. Okay. So can you see Perplexity here?

Dan Shipper (00:59:49)

Yeah, I guess we're eliminating any chance Perplexity ever sponsors the show.

Vicente Silveira (00:59:50)

They do great work. We like them a lot.

Dan Shipper (00:59:52)

You should do Pro. Make sure it's Pro. Alright. That's fine. Okay. Okay. So it's doing some of the things. It's saying, okay, we're going to navigate to the URL. It did that. It's going to extract the main ideas from each article. And then it says, I apologize. I cannot directly download content from external websites. See, this is exactly what I'm talking about with the risk thing. Perplexity is a huge startup. Now all eyes are on them. They have to make sure that it works for everyone and that they don't get sued. And that means that they have to limit— The edge of the product that they can build is not the edge of the technology. It's the edge of what they're legally allowed to do and what they think can work for the greatest number of people. And because you're— I don't know, how many people are you?

Vicente Silveira (01:00:50)

We are a very small team. So it's Karthik and me and another three engineers. And we have a few contractors for some front-end type work.

Dan Shipper (01:00:55)

So, you're five people, so you can do it, which is great. And what you can do with five people is just so much different and it would not get easier if you had 100, it would get harder. I think that's a really important thing that people miss about this type of product.

Vicente Silveira (01:01:13)

And that's also because I think Perplexity’s focus is really kind of replacing Google. It's the answer engine vs. the search engine, which I think is a great angle. Our focus is not that. Our focus is what you have, which is: I have all these different kinds of documents here and I need to manipulate them and create new documents from them. And usually what that means is you have to be really good: You have to have tools that are really good at fetching, parsing, manipulating, and creating documents. So it's a kind of different focus as well.

Dan Shipper (01:01:48)

Yeah, totally. Well, this is really, really cool. I'm really glad we got a chance to chat. Before we go, if people want to check out what you're working on and just check out your stuff personally, where can they find you on the internet?

Vicente Silveira (01:02:03)

Yeah, so our website is myaidrive.com. That's basically the app that we've been using here during the show. If you have ChatGPT Plus or Teams, you can see our GPT, which is in the productivity category. You can find me on X as well. And I think my handle is @vicentes. That's my handle on X. And, it's awesome to be able to discuss this kind of stuff with you Dan. 

Dan Shipper (01:02:30)

Thank you so much. A true pleasure.

Vicente Silveira (01:02:31)

Likewise.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex.
