The transcript of AI & I with Linear cofounder and CEO Karri Saarinen is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.
Timestamps
- Introduction and how Every first discovered Linear: 00:00:39
- Why Linear waited to ship AI features instead of rushing to chatbots: 00:02:00
- Linear’s agent platform and becoming the system that guides AI agents: 00:05:06
- Why “SaaS is dead” is a simplistic narrative: 00:07:42
- How Linear adopted AI coding tools internally: 00:12:18
- AI’s impact on product building workflows—speed versus thoughtfulness: 00:17:45
- The value of conceptual work and thinking before shipping: 00:22:18
- How AI is reshaping Linear’s product strategy: 00:29:30
- Demo: Linear’s agent skills, shared context, and code review workflow: 00:37:18
Transcript
Dan Shipper
Everyone will have many agents and companies will build their own agents. Linear becomes kind of a system for guiding the agents and building this context.
This is the perfect business for this era because it’s still SaaS. You’re the one who has this sort of sticky interface, because it’s where everyone is kicking things off from and where they’re recording all the information, but you don’t have to pay for any of the actual tokens.
Karri, welcome to the show.
Karri Saarinen
Thanks. Thanks for having me.
Dan Shipper
Really great to finally meet you. You are the co-founder and CEO of Linear. Little known fact, the first time I ran into Linear was because we were using it in 2020 at the very beginning of Every to act as our content management system for the newsletter.
At the time it was this very hush-hush thing. You couldn’t get access to it, but if you knew, you knew—Linear was amazing. We used it for a while and really loved it, but then we realized it was made for software, not publishing articles. So we moved off of it. But it was really cool while we did it.
I’ve always admired the level of taste and craft that you bring to what you build, and also the level of thoughtfulness and patience that you build it with. I think one really interesting thing is the way you built the company originally: keeping it closed for a while, not raising too much money, not putting crazy expectations on the company, and being patient and willing to build something of quality over the long term.
I think that also has something to do with how you approached AI. You guys are really all in on AI right now. When I think about the companies that are successfully transitioning into this moment that were started in the pre-AI era, Linear is definitely on that list. OpenAI came out with Symphony the other day, and the main thing that it hooks into is Linear.
You’ve successfully transitioned the product to be really agent-native. But when GPT-3 first came out, I didn’t see anything about that on Linear. So I’m curious about that transition for you. What was that like emotionally—to have built this product for a particular way of working and a particular way of building software, and then see the world change, but maybe not be totally sure if this was going to be the thing? And then eventually realize, this is the thing, we need to rebuild the product or change how the product works in a significant way. Talk to me about that.
Karri Saarinen
First of all, thanks for being an early user. I think the thinking has always been the same—we just want Linear to be the best product in this category, helping companies move work forward and build software products. In some ways, this new AI stuff doesn’t really change that mission.
It kind of improves it. Our goal was always: can Linear take more of the burden of running these product teams, figuring out things to do, figuring out when to do them, and let the product teams or the individuals actually build the things? And now it’s also like they build it with AI, or the AI builds it.
So the mission for us didn’t change. The AI is actually making it better because now we can automate more and take more of that burden and let people use their craft, their taste, their thinking.
But I personally always have this way of addressing problems. I come from a design background, so a lot of times the way I approach things is first I’m trying to understand them. That sounds kind of obvious, but what happens in the tech world a lot of times is people don’t try to understand things. They often jump into, “Oh, I can do this, so I’ll do it now.” But did you think about whether you should do it? Does it actually help you?
So that was our thinking with the early AI and the chatbots. Every company was rushing into this moment—“Hey, we are now an AI company because we have this chatbot integrated.” We tried that too internally, and then we just realized this is not really that useful. What is the workflow where you would actually need this or use this? We’ve spent a couple of years now trying to understand these workflows—how do people actually want to use these things?
We did a couple of things though. We released this agent platform—it’s kind of an open platform. It has very good docs and the agents can build the integration themselves using the docs. Because of that, we now have most of the coding agents out there integrated with Linear. OpenAI brought their Codex in because we just had this available.
We kind of saw this world where I don’t think there’s going to be one agent. Everyone will have many agents and companies will build their own agents, which we are now seeing with Coinbase and Ramp, who are our customers. They built their own homegrown coding agents, which then integrate with Linear.
So Linear becomes kind of a system for guiding the agents and building this context. But we don’t try to own everything in this market. We can play with other companies too. The approach was much more about understanding the workflows—what is actually valuable, what people could use these tools for—versus just jumping into, “Well, everyone else is doing this thing, so we should do it too.”
By the way, now we are adding a chat interface into Linear, but it’s a lot more considered. There’s tools, there’s skills, and there’s more understanding of how you should use it. You can use it to synthesize customer requests, because Linear is a place for customer problems, requests, and other things. Now the Linear agent can natively work through those and see patterns. We’re trying to bring clarity and context to the organization, which they can then use as part of their AI building workflows.
Because once the AI builds more and executes more, the problem really becomes: how do you productively harness this in a good way? You can task a million agents to do something, but what are the things they should be working on? Probably not all of those. If you don’t think about it, a lot of that work is not necessarily that useful. You need some kind of decision-making process—is this actually important? Should we do this? Linear is a way to build that intent, build that context, and then go build it with the agents.
(00:10:00)
Dan Shipper
There’s an interesting meme or mind virus going around right now—the stock market thinks that SaaS is dead. I think you’re pointing to something really interesting, which is this dynamic of a couple years ago, a lot of companies, including a lot of SaaS companies, rushing to chatbots. A big part of that is, “We know this thing is happening, so we have to at least show that we’re doing something.”
The market is starting to look at that and require that. I imagine when the AI stuff was coming out and you guys were maybe testing AI features but weren’t releasing them, there was some pressure—maybe from investors, or from yourself, or internally—to do something.
It seems like you waited until you had the fat pitch. If that is true, what was that like, and what do you think it means for all the public market SaaS companies that are down right now, whose CEOs are like, “Well, I guess we really need to launch an agent platform”?
Karri Saarinen
We don’t really have pressure from the investors. That’s one benefit of picking the right investors—they trust us to make the right calls. We obviously talk about this, but we also have that discussion where it’s like, we just don’t see the value right now in doing it this way. We need to find the actual, real value that helps these companies.
So it wasn’t that bad. There was definitely internal pressure. And now the speed of the market has picked up a lot. Every month or every couple weeks, there’s something changing and we are tracking those same changes, trying to see where all of this is going.
But it also creates a lot of noise in the market. One week someone is doing loops, and then a couple weeks later people are like, “No, loops are a bad idea.” Those things are signals that you should read and understand, but you also need to know that a lot of this stuff is not tested. A lot of times the people testing these things are not testing it in some large organizational context where things actually matter—whether they work or not. We haven’t tested all these things, so we can’t make these predictions of exactly how things are going to change.
On the SaaS narrative—I do think it’s probably directionally correct that with SaaS companies, as an investor, there’s more uncertainty about future cash flows, because if the landscape is changing, you can’t expect that everything will stay the same.
But the narrative is kind of simplistic. “Oh, people will vibe-code their own CRM tools.” I don’t think that’s exactly going to happen. But what might happen is new companies come out, or a lot of the public companies are not the most flexible or robust solutions out there. They are the big solutions that big companies use, and there’s a certain kind of inertia in there. I would say the public companies probably get hit the hardest because their moats are kind of disappearing.
Even for us, we consider that we need to live in this day-one world again, where we can’t rely on our previous decisions anymore. We have to look at these problems in a fresh way—what happens when agents come into this product development process? What are the new problems that come out of it? How do we help with that? We shouldn’t be tied to the past product we have, but see what the future product should be. This is harder for large companies and companies that have existed for decades. Growth companies or startups can do it a lot better.
(00:20:00)
Dan Shipper
How big is the team now?
Karri Saarinen
About 120 total. About half of them—60 people—are on the product team.
Dan Shipper
What has that transition been like? I assume that over the last couple years there have been a lot of divided opinions on—is AI coding really a thing? Is it just glorified autocomplete? Is it going to eliminate programming as a job? How has that change cycle been, to actually go change your workflow, figure out what the new programming workflow is like? How did you get the team in shape to do that, and what did you learn in that process?
Karri Saarinen
There was definitely a time in the company where we had to encourage people to use these tools more. There’s always that—habits where you’ve always done stuff this way, so you’re less interested in trying new tools. But now I’d say probably all of the engineering, and sometimes our design and PMs also, are using agent coding or coding tools.
We don’t track any kind of specific metrics. I joke about this sometimes on Twitter—the biggest vanity metric now is how much of your code is agent-written, or how many PRs are you merging. I think that’s not the right metric. It measures output, but what does that output do? Does it actually generate value? Is it improving the product? If you’re measuring these kinds of metrics, you need some kind of counterbalance—what is actually the quality of this work and is it actually meaningful?
That’s also what’s playing out in the market. We have large companies that are token sellers, and when your business model incentivizes spending more tokens—your revenue will be higher, your market share will be higher—there are a lot of incentives saying you should spend more tokens. Not saying, “Think about things and spend them well.”
I think people are looking at it too simplistically—like if we just spend more tokens, things will be better. But I don’t think that’s ever been the case in building products. There’s some value in speed and making changes, but you should also understand that any change or addition you make can also have a negative impact. Activity is not always positive—sometimes it can be negative too.
Dan Shipper
What do you think is a more nuanced metric? If you’re judging how well you’re doing at figuring out these new workflows and adapting to them—tokens, number of PRs submitted, percentage of agent-generated code—those are not necessarily the right metrics, especially in isolation. What do you look at, or how do you think about it?
Karri Saarinen
I think it’s still the classic metrics—profits, revenue, user love, some of these things—that’s what you should be aiming for.
Dan Shipper
Those seem like lagging indicators.
Karri Saarinen
Yeah, they are. But there isn’t a perfect answer. You should still measure some of these things like token usage per person or by different teams, but you shouldn’t take it to the extreme of saying this is the only metric that matters. You should use it as a signal—are we doing something?
And then think: is our product actually improving? Do we have any indication of this? Do we get comments on the new features? Are there fewer bugs? Bugs are actually a very measurable metric if you run an honest bug tracking process.
Now I almost feel like, with agents and AI, why do you even have bugs in your product? There’s no excuse for it anymore. Internally, we have a zero-bugs policy. We have a Linear team triage and any bugs go there. There’s a one-week SLA: every bug needs to be fixed. Now with coding agents, the coding agents can actually do the first pass on it. Once the fix is done, it will tag the engineer. The engineer might not like it, or there are some changes they want to make—they can do that now inside Linear and review the code in Linear.
So there’s a very good workflow for that. But it still starts from the fact that we care about whether our product is buggy or not. We’ve made the choice that bugs are bad things—mistakes—and we should fix them as quickly as we can. That’s a priority for everyone. It’s still a choice: do you care about the quality of the output, or do you just want more output?
(00:30:00)
Dan Shipper
What are the ways that these tools have changed your product building workflow, both personally and as an org? What are the most effective ones that might be surprising?
Karri Saarinen
On the product side, it’s definitely a lot better. I have this skill in Linear where I fed some of our internal docs and blog posts about how we think about product development and made it into a “Linear way” skill. It writes as soon as I tell it—“Look at this, help me understand this feature request.”
We collect feature requests inside Linear, and for example, there’s a request like multiple assignees per issue that hundreds of people want. I can tell it to synthesize and help me understand what are the different reasons people want it. It starts with explaining the problem, trying to understand the core problem, which is usually what I want to know.
When I see a new request, I might go into Linear and say, “Do we have this kind of request already?” And then, “Help me understand it.” It helps me give an understanding of whether we should tackle this now, or is this something we could do later, or maybe never.
Before we start building anything, it’s helping me understand the problem in a very quick way. I don’t have to go ask around or find people to do it for me.
On the design front, I actually don’t personally use it much. I actually like the manual design process. I still have Figma open, and when I have a problem or idea, I just draw it in there. My work is often more like exploring things, so I actually don’t think the speed really helps there. I actually like the slowness of the manual thing. Every time you draw something, you have to check on yourself—why am I drawing it this way? Should I draw it a different way?
But the broader design team, when they work on problems, they’re building a lot more prototypes now. We have a quite robust build system. You can make a PR, it’ll run the build, you get the preview link, and then you can use it live in the product. It helps at the prototyping stage.
But I still tell the designers to explore more freely in Figma first, and try to think about how you approach the problem—not just jump into doing it. There are projects where it’s very clear what needs to be done, but for bigger projects, they should still spend that time.
On the engineering side, it’s probably similar to a lot of other companies. We can fix problems a lot faster once we identify them and decide to do it. We use Slack a lot and with our Slack agent, we have a discussion and then eventually decide, “Yeah, we should do this.” And then we just tag Linear in there saying, “Hey, can you create issues out of this conversation?” And then we’ll do it. It helps us track it, come back to it later, and actually make it actionable right away versus needing to have a meeting, start a project, and start assigning people.
The pattern across all of those things is it’s shortening some kind of loop and making it faster. You can do the thing right away versus waiting for next week or some other time. It’s very little effort to do it right away.
Dan Shipper
Which is interesting, because sometimes it seems like this is the exact opposite of your preferred outlook. You know—actually, we shouldn’t do things faster; actually, we should take things a little bit slower. How does having tools that make you go much faster interact with that outlook?
Karri Saarinen
That’s a good point. I think we shouldn’t go fast in deciding things, or speed-running the decisions, or not even making decisions. Some people do it now where they just have an idea, then they build it, and now we’re all looking at this idea that no one really knows why it exists—should we even do it?
Every new prototype or idea can seem useful. Then you don’t have a good way of framing how useful this is versus other things. Should we spend the time actually committing to this idea, when we already have other ideas we’ve decided on?
There’s this danger where, if you don’t have some kind of decision-making framework—we don’t have a lot of processes in Linear, but it’s more like: we want to commit to something. Once we commit to the thing, the fix, or the project, then I want it to move fast. I want the loop to be fast to actually work on the problem.
But I don’t want the problem-finding to be fast. You should take the time to find the right problem and the right approach for the problem. Then once you decide that, you can go faster on it.
(00:40:00)
Dan Shipper
One thing that comes up for me as you say that—I totally get that approach. But for myself as a product builder, I often don’t know what I’m doing until I do it. I can’t think it through until I’ve done five different things that I can’t explain, and then I’m like, “Okay, here’s the thing. I understand it.” Is what you’re saying different from that, or is it the same just said differently?
Karri Saarinen
Maybe it’s different. But I can see that workflow. I feel like that workflow is kind of like understanding—you’re trying to understand what you’re doing.
Dan Shipper
Yeah. It’s building as understanding—making things in order to understand them.
Karri Saarinen
And I think that’s fine. The problem there just becomes—sometimes in design, I consider this conceptual work where the output is a concept. It’s not something we should necessarily deliver. It’s like, I went through this process of understanding this problem and I have a concept for it.
Dan Shipper
What’s an example? I would assume that the output of a design process would be a Figma you could export. So what’s an example of a concept that comes out of a design process?
Karri Saarinen
In large companies, I’ve used the term “concept” to not scare people. Usually it’s rethinking some area completely. It’s like a concept car—this car won’t go into production, but here are some ideas that could influence the next car.
Sometimes people, once they see something very different, their fears start coming up. “Well, if we change this, what else is going to happen? What’s going to break?” But the point is not to decide that right now. We just decide: does this concept, this new idea, have merit? Do we think it’s important enough to take it further, and then deal with the problems later?
You’re trying to divide which decisions you’re making now. I’ve done it in our company—I just completely rework a surface and say, “Hey, I think the project should look like this,” which is completely different from what it currently is. Then people are like, “Oh, that’s actually interesting,” or they’re like, “Well, it won’t work for this and that.” And I’m like, “Okay, that’s fine.” It might be a Figma design or a prototype.
Even with all this tooling, the output shouldn’t always be that we ship something. Sometimes the output can be something internal—“Hey, now we have a better understanding of this problem, we can tackle it better, and we can actually make it into a shippable thing.” But we first try to think about it before doing it.
Dan Shipper
Right. And to you, thinking about it can include building stuff. It’s just that the reason you’re building stuff is not to ship it the next day—it’s to understand it better. But thinking can be designing, it can be writing, it can be talking about it.
Karri Saarinen
Yeah. And something I did have to share with the company recently was—we always care about the quality bar a lot, and that thinking process of “are we doing the right thing?” is what we’re trying to decide sometimes.
Now with AI, it’s actually hard to tell. The tooling changes all the time, the LLMs are not deterministic anyway. You don’t always know how useful something could be, and then there’s a moment you just have to decide—yeah, I think we should try this internally, but we also need to try it with customers. You kind of put it into some kind of beta.
There’s definitely nuance to this right now. There are situations where—and it’s always with product building—there’s a limit to how much you can think about it inside your company until you need to actually put it somewhere for someone else to use. Then you learn from that use case. But every stage, you have some kind of goal in mind. When you put something in beta, the goal should be to understand the workflows and how people use it and how they want it to be better—not to try to ship it as fast as you can. You should be honest about what the actual goal is for that stage.
(00:50:00)
Dan Shipper
So we’ve talked about how AI has changed your internal workflow. I’m also curious how it has changed your product strategy—not the actual work of building products, but what kind of product to build. Should you let AI agents connect into your product, which I know you’ve done, versus build your own AI into the core feature? Should you have both? What should they be able to do? How does it affect your product strategy and your vision for what a good product is?
Karri Saarinen
We are now adding a Linear agent that has the context of your work, your organization, and the products you build, and that you can use in different ways. It works for the PM workflows, and you can also use it as a designer to understand problems. And then we will also do a coding agent, where you can start writing code with the agent.
Dan Shipper
Interesting.
Karri Saarinen
You can see the diffs inline. It’s kind of like a cloud coding environment where you can see the changes and guide it.
The strategy has definitely changed. We’re trying to understand what the problem set of today is. One of the things that’s changing is that historically people thought issue tracking was like a ticketing system for the kitchen. An order comes in, someone orders fish, so now that fish goes into the kitchen and there’s a ticket—“make fish.”
That’s kind of how people think about issue tracking, but we never thought about it that way. For us, Linear is more like the backbone—collecting signals, collecting problems, collecting decisions like, “We should do this thing.”
There’s definitely a shift where we have to teach people that this product is really meant to improve your team’s workflow, not be a weird ticketing system for different parts of your organization. That’s probably going away with the agents. You don’t need that anymore—the agent can do those tickets and complete them.
But we think there’s still value in collecting that context, shaping work into something actionable, and providing agents good context from the environment.
One lesson we learned with the agents is that it’s tough when we’re not ourselves in control of it. We want to support all companies and all agents as much as we can, but if we have ideas for it, we can’t do it—it’s on them. So now one of the reasons we’re doing this coding agent is we actually see a much smoother end-to-end workflow where you start your task in Linear. You can ask the agent, “Hey, does this thing exist already?” Or if not, make an issue, make a work stream out of it, and then start working on it. Start writing the code, see the diffs coming in, review it, merge it, see the prototypes.
One of the problems I see when I use Claude or ChatGPT or Codex is that I have to really explicitly tell the agent what context to bring. The value with Linear is the context lives there, and if we inject it smartly as part of the work stream, it’s much more natural. We can design the flow that makes sense and we don’t spam the context windows.
We see this as: you probably have Linear as the multiplayer or organizational context of what’s happening in the product and what the potential future state of it is. You might still run local agents, but there are situations where you should just automate some of the bug fixes or small tasks—just do it in Linear and let it run in the background in a sandbox while you run your own work on your own computer.
Dan Shipper
That’s really interesting. From a product strategy perspective, I’m curious about the decision to integrate your own agents. Before we did this interview, I didn’t know about the Linear agent, and I was sort of sitting here thinking—wow, this is the perfect business for this era because it’s still SaaS. There’s no AI token costs, but it is the place where you control all of the AI. All the other coding agents and companies have to deal with all the token costs—OpenAI, Anthropic, whatever. But you’re the one who has this sort of sticky interface because it’s where everyone is kicking things off from and where they’re recording all the information.
But you don’t have to pay for any of the actual tokens. And it sounds like you’re adding a layer where you will have to pay for the tokens, and you may prefer that. The reason, you’re saying, is that a tighter integration between the two means you can do more interesting, more powerful things.
How did you think about that from a business perspective? Changing your margin profile that much—I assume there’s a lot of interesting discussion about how adding token costs changes the business model.
Karri Saarinen
Honestly, it’s something we’ll have to see more in the future. We’ve definitely thought about it and have some calculations. On the coding agents, we do have to offer usage-based billing because it can get very expensive. On the basic Linear agent functionality—answering questions for you—that should be more included in the system.
Linear is still going to be a fairly focused platform for certain kinds of things. You shouldn’t be running random things here. It should be pretty clear what you should be doing inside Linear and what kind of workflows you’re running there.
We’re not trying to build a very generic agent platform. It’s more like a product context or product memory platform where you can integrate those agents and use Linear agents from other tools, or bring other tools into Linear. It’s a way to work around your product—it’s kind of an API into the product thinking, versus using normal tools where you always have to tell it to go fetch this thing, go find this thing, because it doesn’t have any understanding of what you generally do or what context might already exist.
Dan Shipper
Can we see a demo? I’d love to see it.
Karri Saarinen
Yeah. So what we have coming up—this is actually my real Linear instance. Now if you open a new tab inside Linear, you get the classic box of “what do you want to do?” There’s also another interface where, if you are inside some context—like a project—you can do the work there.
For example, we will have skills. We’ll have organizational guidance and personal guidance. You can have personal skills or organizational skills. So for example, what I was mentioning earlier—sometimes I want to understand problems.
I want to understand this problem of multiple assignees. So I make the skill, which is essentially—I fed some materials from our blog, and then it’s like, “Act like a Linear product teammate.” It has this format where it starts with the underlying need and has a way it goes through the problem.
I made this to help my workflow—I’m quickly trying to understand feature requests so I can act on them. Let’s do multiple workspaces. So we have this collection of stuff about multiple workspaces, and the agent can go through there—there are probably many different requests and it will try to think through it. It looks into customer activities, looks through different things.
Dan Shipper
What model is it under the hood?
Karri Saarinen
I think we’ll eventually allow multiple models, but now we use Claude.
So it starts going through—it’s like, okay, there’s a real need, but it’s more complicated than it sounds. Companies want multiple workspaces for different reasons. My understanding generally is that they want one place for billing and governance, but they might have multiple different divisions in the company. So it’s not just that they want to divide the workspace—they still want some kind of overarching control.
It goes through, trying to explain what is missing and what is good. It also makes a few recommendations on product direction—do this or that. It helps turn something that’s quite complicated into something actionable—something we can talk about as a team.
Similarly, for a more micro example—if I want to make a new theme, like a new dark theme—I can just say, “Make it just black.”
Dan Shipper
What’s a theme?
Karri Saarinen
Themes are just like in our app—you can have different themes.
Dan Shipper
Oh, like the way it looks. Yeah, okay.
Karri Saarinen
So maybe I want to create a new version of a dark theme. I can now task a coding agent on it. It should start looking into it—look into the codebase and try to understand it. First it turns the request into an issue and then delegates it. So I created this issue—it’s in progress, delegated to Linear—and now Linear starts working on it. It’s spinning up the sandbox for it.
One of the benefits of an issue is that now people know I’m doing this. The team knows, and I can say, “Hey, FYI, I’m doing this.” They can also come here to look at what is happening. The agent session is visible to everyone—visible to me and to them. Once it’s done, we can both jump into this chat and tweak it together if we want to, or just see what happened.
It’s similar to what you do on your computer, but now it’s happening in a shared context and there’s more understanding of where this came from. It could come from me, or it could come from a customer discussion.
Dan Shipper
The shared context is interesting—so two people can be in the same chat?
Karri Saarinen
Yeah. I don’t have someone ready to demo this, but we did have this instance almost accidentally. We noticed this was actually useful. Anon, who was our head of product, and Connor, who was our head of design, were both working on some tweaks on the inbox. They could go back and forth—a PM and a designer going back and forth. “No, it’s not quite right, let’s fix this thing.” And they could both see the preview link.
If I have something here—like my previous pull request—we will have pull requests here. You can see the activity, and you can also see the code diffs. If you want to comment on them or work on the code with the agents—“No, this is not right”—you can do that here. It’s a similar workflow for code reviews, where an engineer might come in and say, “This is not right.” They could just task the agent to fix it versus telling the other engineer to fix it.
It kind of collapses the collaboration loop a lot and allows multiple people to use the agents to work on one thing.
Dan Shipper
I’m curious about this. One interesting thing is it seems to increase the surface area of the product a lot. It touches a lot of different things that already exist to some degree somewhere else. Obviously there are things you can do differently—you can have multiple people in a chat, it’s more plugged into Linear generally. But you kind of have to recreate a lot of stuff that’s already being built by a lot of other companies.
How do you think about the trade-offs of that, especially entering something like AI coding where all the big companies are going as hard and as fast as they can to build AI coding agents?
Karri Saarinen
There’s definitely the question we need to keep asking ourselves—what is our unique advantage here?
Honestly, I don’t think we will solve all the different coding needs. But we don’t necessarily have to. Where we see the value is sitting upstream where the work is coming from. There’s really good leverage there that we can offer to companies—work comes in, bugs come in, they automatically get delegated to agents. Engineers never even see them. Or if they see them, they see them once there’s a fix already being built.
It doesn’t work for all kinds of situations. It’s not the agent you go to and say, “Build me a new product.” We don’t think that’s where we should be working. It’s more like—you have a large company, you have a lot of things requested from you, a lot of bugs filed in. How do we reduce that workflow for you automatically? And then you can use other coding agents to do other kinds of work—but this is where we focus.
But generally, we’ve been thinking about the problem as: we don’t want to be a kitchen-sink product that does everything for everyone. Sometimes companies end up in that state because you have enterprise buyers with checklists and you just need to get the check mark in the right spot.
We don’t think those things create a good product experience. The way we’ve always thought about building products is: what is the natural next step in this workflow? If we go from an issue—the natural next step is someone needs to fix it. How do we help people fix it faster? One option is our cloud agents can fix it. But now the agent does this stuff—how do you know it’s good? So then you need to see the code, see the diffs, run the builds.
We’re always focused on the workflow and how we improve it—how we help companies output better and faster—versus trying to own every surface. We don’t have to own every piece, but we’re trying to find this optimized workflow for people to do certain kinds of product things.
Dan Shipper
So we’re almost out of time. My last question for you is: if you had to project how product development will change over the next five years, what will be different and what will be the same?
Karri Saarinen
I think there’s going to be more of this self-driving aspect where you can set up some kind of rules or guidance. We’re even building something around project memory. You could have a common workflow—we have projects going on, and a project is often a feature, part of the interface, or part of the product. There’s a lot of feedback and requests coming in.
I think there are opportunities to turn that into something more like—the product or feature is kind of an agent itself. It tries to make decisions based on the input it receives. It can still ask for a certain amount of human input, but it could run automatically. It’s like, “Hey, I’m seeing these patterns, and these patterns point to this solution, and this solution seems like it potentially works for people. I built it, I sent it to some customers, and the feedback is good.”
It gives you things on its own based on some kind of context and a rule-based system or some guidance.
But the thing I still think about—people should still think, even in this world where agents do some of the thinking and run automatically to some degree. It makes people have to be a lot more explicit about what they want, what is worth doing, and what areas they should focus on.
I think a lot of this—humans having meetings, discussions, writing issues, writing documents, reading documents—there’s still going to be a place where humans need to understand this stuff too. You can’t just outsource the thinking purely to AI agents.
The more you can clarify your own thinking and your strategy, the better it is for your team, but it’s also better for the agents, because then you can codify some of those strategies and thinking into actual autonomous things.
I personally don’t see the future as one where we’re replacing humans. I don’t quite believe in it—maybe I don’t want to believe in it. But I think things will change. The roles will change, maybe there’s some movement around exactly what engineering does, how many engineers we’ll need, and what the job looks like in the future.
But I still don’t see how agents or AI actually do all the thinking—the choices, the decisions. I think product building is still a craft or an art. We talk about intuition a lot—we just design things based on how we understand the problem. We hardly use any data as part of decision-making. Sometimes we use it to look at something, but it’s more like a signal.
I’ve never personally believed in A/B testing and data-driven product development, which I think could work well for agents. But I don’t think it works for all kinds of products. The best products are not necessarily being built that way. You still need that human touch—what is interesting, what would make this good.
Dan Shipper
I love it. Karri, thanks so much for joining.
Karri Saarinen
Yeah, thanks Dan for having me. This was great.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn.