
How to Build a Truly Useful AI Product

Generative AI breaks the old startup playbook


I used to take notes during Zoom meetings, toggling between the call screen and a Notion document that housed my notes—always hoping that whatever I jotted down actually made sense when I reviewed it later. Now I use an AI-powered meeting notes tool called Granola that automatically captures what’s happening in my call, so I can stay focused on the conversation. Some of my Every colleagues do as well, so we’re thrilled to publish this piece by Granola cofounder Chris Pedregal in today’s Thesis. In a landscape where the underlying AI models improve faster than developers can build applications for them, Chris argues that building AI products requires an entirely new playbook, and he shares four essential principles drawn from his own experience. If you’re interested in learning more from Chris’s experience, tune in to this week's episode of AI & I, where he talks with Dan Shipper about building Granola and what he’s learned.—Kate Lee



If building a startup is like playing a tough video game, building a startup in generative AI is like playing that video game at 2x speed.

When you’re building at the application layer—that is, your startup builds on AI models provided by companies like OpenAI and Anthropic—you're relying on technology that is improving at an unpredictable and unprecedented rate, with major model releases happening at least twice a year. If you're not careful, you might spend weeks on a feature, only to find that the next AI model release automates it. And because everyone has access to the same APIs and frontier large language models, your incredible product idea can be built by anyone.

Many opportunities are being unlocked—LLMs have opened up product capabilities like code generation and research assistance that were impossible before—but you need to make sure you are surfing the wave of AI progress, not getting tumbled by it.

That’s why we need a new playbook.

Having spent the last two years building Granola, a notepad that takes your meeting notes and enhances them using transcription and AI, I’ve come to believe that generative AI is a unique space. The traditional laws of “startup physics”—like solving the biggest pain points first, or the idea that supporting users gets cheaper at scale—don’t fully apply here. If your intuitions were trained on regular startup physics, you’ll need to develop new ones for AI. Here are the four principles I’ve arrived at that I believe every app-layer founder needs to know.

1. Don't solve problems that won't be problems soon

LLMs are undergoing one of the fastest technical developments in history. Two years ago, ChatGPT couldn’t process images, handle complex math, or generate sophisticated code—tasks that are easy for today’s LLMs. And two years from now, this picture will look very different. 

If you’re building at the app layer, it’s easy to spend time on the wrong problems—the ones that will go away when the next version of GPT comes out. Don’t spend any time on them. It sounds simple, but it’s hard to do because it feels wrong.


Predicting the future is now part of your job (uncomfortable, right?). To know which problems will stick around, you’ll need to predict what GPT-X-plus-one will be capable of, and that can feel like staring into a crystal ball. And once you have your predictions, you have to build your product roadmap and strategy around them.

For example, the first version of Granola didn’t work for meetings longer than 30 minutes. The best model at the time, OpenAI’s DaVinci, only had a 4,000-token context window, which limited how long meetings could be. 
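For a rough sense of why that limit landed around half an hour, here’s a back-of-the-envelope sketch. The speaking rate, tokens-per-word ratio, and prompt overhead are assumptions for illustration, not Granola’s actual numbers.

```python
# Back-of-the-envelope: how long a meeting fits in a 4,000-token context window.
# Speaking rate, tokens-per-word, and prompt overhead are assumed values.

CONTEXT_WINDOW_TOKENS = 4_000
PROMPT_OVERHEAD_TOKENS = 500   # assumed budget for instructions and output
WORDS_PER_MINUTE = 130         # typical conversational speaking rate (assumed)
TOKENS_PER_WORD = 1.3          # common rule of thumb for English text

def max_meeting_minutes(window_tokens: int = CONTEXT_WINDOW_TOKENS) -> float:
    """Estimate the longest meeting whose transcript still fits in the window."""
    usable_tokens = window_tokens - PROMPT_OVERHEAD_TOKENS
    usable_words = usable_tokens / TOKENS_PER_WORD
    return usable_words / WORDS_PER_MINUTE

print(f"~{max_meeting_minutes():.0f} minutes")  # roughly 20 minutes under these assumptions
```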

Normally, lengthening this time frame would have been our top priority. How can you expect people to use a notetaker that only works for short meetings? But we had a hypothesis that LLMs were going to be much better: They’d get smarter, faster, cheaper, and have longer context windows. We decided not to spend any time fixing the context window limitation. Instead, we spent our time improving note quality.

For a while, we had to actively ignore users who complained about the duration limit. But our hypothesis was right: After a couple of months, context windows got big enough to handle longer meetings. Any work we would have done on that would have been wasted. Meanwhile, the work we did on note quality is one of the main reasons users say they love Granola today. 

2. Your marginal cost is my opportunity

Historically, a defining characteristic of software was that the marginal cost of supporting an additional user was close to zero. If you had a product that worked for 10,000 users, it wouldn't cost that much more to support 1 million users. 

This is not true when it comes to AI. Each additional user costs roughly as much to serve as the last, and cutting-edge AI models are really expensive to run. For example, sending the audio of a half-hour meeting to OpenAI’s flagship GPT-4o audio model costs about $4. Imagine that cost scaled across thousands of users, every day. There’s also a limit to the number of users your startup can onboard. Even if you had all the money in the world, OpenAI and Anthropic (which makes Claude) don’t have enough compute to support cutting-edge models for millions of users.
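To make that concrete, here’s a rough cost sketch using the ~$4-per-meeting figure above. The meetings-per-user and workdays-per-month numbers are assumptions for illustration.

```python
# Rough monthly inference cost at the ~$4-per-half-hour-meeting price point.
# Meetings per user and workdays per month are assumed values.

COST_PER_MEETING = 4.00          # USD, half-hour of audio to a flagship model
MEETINGS_PER_USER_PER_DAY = 3    # assumption
WORKDAYS_PER_MONTH = 22          # assumption

def monthly_inference_cost(users: int) -> float:
    """Estimated monthly spend if every meeting goes through the flagship model."""
    return users * MEETINGS_PER_USER_PER_DAY * WORKDAYS_PER_MONTH * COST_PER_MEETING

for users in (1_000, 10_000, 1_000_000):
    print(f"{users:>9,} users -> ${monthly_inference_cost(users):>13,.0f}/month")
```

Under these assumptions, a thousand users already cost about $264,000 a month, and a million users would cost hundreds of millions.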

For the first time, it’s possible to provide a better product experience for a small number of users than for millions of users. But this isn’t an obstacle—it’s a big opportunity for startups. Big companies with millions of users literally can’t compete with you because there isn’t enough compute available in the world to provide a cutting-edge experience at scale. 

As a startup, you can give each of your users a Ferrari-level product experience. Use the most expensive, cutting-edge models. Don’t worry about optimizing for cost. If doing five additional API calls (server requests to your LLM provider of choice) makes the product experience better, go for it. It might be expensive on a per-user basis, but you probably won’t have many users at first. And remember: At best, companies like Google can provide their users with a Honda-level product experience. 
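As one example of spending extra calls on quality, here’s a minimal draft-critique-revise loop. It’s a generic pattern, not Granola’s actual pipeline; the model name and prompts are placeholders.

```python
# Sketch: trade extra API calls for quality by drafting notes, critiquing them,
# and revising. Generic pattern for illustration, not Granola's pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def _complete(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def notes_with_refinement(transcript: str) -> str:
    draft = _complete(f"Write concise meeting notes for this transcript:\n\n{transcript}")
    critique = _complete(f"List anything missing, unclear, or wrong in these notes:\n\n{draft}")
    return _complete(
        "Rewrite the notes below, fixing the listed issues.\n\n"
        f"Notes:\n{draft}\n\nIssues:\n{critique}\n\nTranscript:\n{transcript}"
    )
```

Three calls instead of one roughly triples the per-meeting cost—exactly the kind of trade a startup can make while a company with a billion users cannot.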

You might be wondering what happens when users come flocking to your Ferrari product experience. Won’t you end up in the same position as the big tech companies of today, unable to provide high-quality, cutting-edge services to your users?

The beauty is that even if your user base is growing exponentially, the cost of AI inference is decreasing exponentially. Today’s cutting-edge models will be affordable commodities in a year or two. Today’s Ferraris are tomorrow’s Hondas. Be a Ferrari while you can.

3. Context is king

When we first started writing prompts for Granola to generate meeting notes, we quickly realized that providing a set of step-by-step instructions doesn't work well in practice. The real world is messy, and it’s nearly impossible to anticipate and write rules for every situation an LLM might encounter. Even if you could cover every scenario, you'd inevitably have conflicting guidance.

We had an insight: Instead of treating AI models as something that just follows instructions, we should treat them like interns on their first day. An intern is smart but lacks context on what to do and how to do it. The key to an intern's success is to give them the context they need to think like you.

That's how we approach prompting at Granola now. We provide the model with curated context to guide its thinking. For Granola, the use case is writing great notes from a meeting. The context is understanding who is in the meeting and why it’s happening. Our work is to find that information—from the web and other sources—and then get the model to think like you (What are you trying to get out of this meeting? What are your long-term goals, and how does this meeting serve them?) and put only the relevant information in the notes. The art is in selecting which context to provide and how to frame it—because no matter how good models get, the context you give them will always matter.
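Here’s a minimal sketch of what “context over instructions” can look like in code. The fields and prompt wording are illustrative, not Granola’s actual schema.

```python
# Sketch: assemble the context an intern would need before writing the notes,
# rather than a long list of step-by-step rules. Fields are illustrative.
from dataclasses import dataclass

@dataclass
class MeetingContext:
    participants: list[str]   # who was in the meeting
    user_goal: str            # what the note-taker wants out of it
    background: str           # e.g., company or person info pulled from the web
    transcript: str           # what was actually said

def build_notes_prompt(ctx: MeetingContext) -> str:
    return (
        "You are helping write meeting notes for the person described below.\n"
        f"Participants: {', '.join(ctx.participants)}\n"
        f"What they want from this meeting: {ctx.user_goal}\n"
        f"Background: {ctx.background}\n\n"
        "Using the transcript, write notes that include only what is relevant "
        "to their goal.\n\n"
        f"Transcript:\n{ctx.transcript}"
    )
```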

I believe "context window selection" will be one of the defining ideas of our time, with implications far beyond AI. During the Industrial Revolution, the brain was described in terms of mechanical machines—blowing off steam, for example. When computers emerged, we started to use terms like “bandwidth” and “storage capacity.” I think we will start describing how the brain works in terms of "context window selection.” This idea will permeate well beyond tech. 

4. Go narrow, go deep

One fascinating challenge with building AI products today is that you're competing with general-purpose AI assistants like ChatGPT and Claude. They’re pretty good at most things. How do you build something good enough that users will choose you over these Swiss Army knives?

The only answer is to go narrow—really narrow. Pick a very specific use case and become exceptional at it. The cardinal rule of startups—build something people want—still holds in AI, but the bar is higher.

But here's the plot twist: Exceptional experiences for narrow use cases often have little to do with AI. We spend endless hours on note quality at Granola, but we spend just as much time on features like seamless meeting notifications and great echo cancellation (so our tool works whether you're using headphones or not). The "wrapper" around the AI is often the difference between a delightful experience and a great demo that is disappointing to actually use.

Going narrow also makes it easier to improve the AI part of your product. When AI gets a response right, it’s magical. But when it gets it wrong, it does so in ways that can feel weird and disconcerting. It becomes obvious that you’re not talking with a human, but with an algorithm. Product experiences that fall into the uncanny valley can push users away from your product for good. When you go narrow, it’s much easier to identify the most common AI failure cases, and either mitigate them or try to fail more gracefully. 
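As a small example of failing gracefully, here’s a sketch that catches one common failure case—too little transcript to work with—before it ever reaches the model. The threshold and fallback copy are hypothetical.

```python
# Sketch: handle a known failure case (not enough captured audio) gracefully
# instead of letting the model produce confident-sounding junk. Threshold is assumed.

MIN_TRANSCRIPT_WORDS = 50  # assumed floor below which generated notes tend to be junk

def notes_or_fallback(transcript: str, generate_notes) -> str:
    """Only call the model when the input is good enough to produce real notes."""
    if len(transcript.split()) < MIN_TRANSCRIPT_WORDS:
        return ("We couldn't capture enough of this meeting to write useful notes. "
                "Here's what we did catch:\n\n" + transcript)
    return generate_notes(transcript)
```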

The fundamentals are the same

Building in generative AI is like running on a treadmill while traditional tech moves at walking speed. This speed affects everything from the technical problems you tackle to your timeline for reaching scale. While this acceleration should change your strategy, it doesn’t change the fundamentals of building a good product. You need to build something people want. There are no shortcuts. You still have to sweat the details. And the most clarifying question remains deceptively simple: How does this product make me feel when I use it?


Chris Pedregal is the cofounder and CEO of Granola. He previously cofounded the education tech company Socratic, which was acquired by Google.

To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.

We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex.
