Was this newsletter forwarded to you? Sign up to get it in your inbox.
If you prefer to listen to rather than read our essays, we’re live on ElevenLabs’s ElevenReader app. Download the app and subscribe to our feed to listen to audio versions voiced by AI.
For a long time, I felt like an imposter inside Every.
I’ve been a writer here for over a year—and I thought everyone on the team was “good” at AI…while I wasn’t.
What do I mean by “good”? It goes beyond technical knowledge of LLMs, like chain-of-thought or few-shot prompting. Instead, they used AI with a natural fluency: an intuitive sense of its uses and limitations. I kept up with the zeitgeist and tried out new models, but I still fumbled to find that quiet competence.
The fog finally lifted when I spent a Saturday in November getting familiar with Excel functions. I realized that you can learn the “right” way to use conventional software like Excel, but generative AI is inherently different: it’s less about searching for the “right” way, and more about defining what that means for you.
The difference between conventional software and AI
I live in Spain, and perhaps the only boring thing about that is dealing with bureaucratic immigration processes. In November, one of these regulations required me to calculate the exact number of days I’d spent outside the country in the last four years. Tedious stuff, indeed. As I thumbed through my passport for entry stamps and scoured my email for flight tickets, I logged the information in an Excel sheet. I used simple functions to calculate the number of days in each trip I’d taken and add them up at the end.
I wouldn’t call myself an expert in Excel, but I got comfortable with the basics of the software quickly. It was easy. I figured out which formula I needed, typed it in, and if I made a mistake, Excel would helpfully throw up an error in that cell; I saw #NULL! or #NAME? and knew I’d gotten something wrong. (I was dealing with a relatively simple task, and there are certainly far more complicated functions within Excel that I haven’t begun to learn about.)
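For illustration, here is the same kind of calculation sketched in Python rather than Excel. The trip dates are invented, not my actual travel history; the point is that each step is deterministic, like a per-row formula followed by a SUM:

```python
from datetime import date

# Hypothetical (departure, return) dates reconstructed from passport stamps
trips = [
    (date(2021, 3, 2), date(2021, 3, 18)),
    (date(2022, 7, 10), date(2022, 8, 1)),
    (date(2024, 1, 5), date(2024, 1, 12)),
]

# Days per trip, like a per-row Excel formula (=end-start), then a SUM
days_per_trip = [(end - start).days for start, end in trips]
total_days_abroad = sum(days_per_trip)
print(days_per_trip, total_days_abroad)
```

Run it twice, ten times, a hundred times: the same dates always produce the same totals.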
When you input a formula correctly in Excel, you get the right answer. If there's an error, Excel points it out. This clear distinction between right and wrong made me feel confident. I was certain I was using the application correctly. The direct feedback loop goes beyond just Excel to most types of conventional software—and it struck me how different this was from using an LLM.
I use Claude while I write sometimes, to brainstorm ideas for a lede or get feedback on a piece. The chatbot always returns coherent, polished answers, even to prompts riddled with typos and garbled context. There’s no clear way to know if I’m using AI the “right” way. The beauty of LLMs is also their curse: there is no one true way to get the most out of the technology. Add to that the possibility that the answers are objectively wrong, because the models are prone to hallucinations. That nagging feeling about not being “good” at AI came from not yet understanding the shades of gray the technology exists in.
Sponsored by: Every
Tools for a new generation of builders
When you write a lot about AI like we do, it’s hard not to see opportunities. We build tools for our team to become faster and better. When they work well, we bring them to our readers, too. We have a hunch: If you like reading Every, you’ll like what we’ve made.
AI’s stochastic dimension
A big part of conventional software’s value proposition lies in its predictability. The software we use to send emails, listen to podcasts, and read ebooks is so reliable that we don’t give it a second thought anymore; we just assume it’ll work. Identical input, identical output. After years of being exposed to apps like that, I had internalized that digital interactions are deterministic. There’s always a “right” way to use software, and you learn how to use it by figuring out what that is.
Generative AI turns this on its head.
LLMs have a stochastic element: Output can vary with similar, or even identical, inputs. Give ChatGPT the same prompt twice and you can get different answers. Broadly, while conventional software follows fixed rules, LLMs operate probabilistically, producing output shaped by likelihood. They also generate responses dynamically: Each time you prompt a model, it can “decide” on a slightly different approach to processing your input.
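A toy sketch of why identical input can yield different output: at each step, a model samples the next token from a probability distribution rather than always picking the single most likely one. The vocabulary and probabilities below are invented for illustration; real models work over tens of thousands of tokens:

```python
import random

# Invented next-token distribution for the prompt "The sky is"
next_token_probs = {"blue": 0.6, "clear": 0.25, "falling": 0.15}

def sample_next_token(rng):
    # Sample proportionally to probability, rather than always
    # taking the argmax -- this is where the variability comes from
    tokens = list(next_token_probs)
    weights = [next_token_probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same "prompt", five runs: the continuation can differ each time
rng = random.Random()
outputs = [sample_next_token(rng) for _ in range(5)]
print(outputs)
```

Always taking the highest-probability token would make the sketch deterministic again; sampling is what trades predictability for variety.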
The stochastic nature of these models is part of what makes AI so powerful, enabling the technology to solve open-ended problems in creative ways. It’s also why using AI can feel…unsettling at times. Unlike conventional software, which directs users down specific, learnable paths, AI is a wide open field. The onus is on users to define their own approach to using the technology. While this can be more rewarding, it’s also a fundamentally more challenging task. It requires a deep comfort with ambiguity and a commitment to relentless iteration.
To muddy the waters further, using AI is often assumed to be “easy” because the interface through which we interact with LLMs—chat—is very intuitive. As writer Simon Willison says in a 2024 review of LLMs that’s been doing the rounds, “A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives.” It isn’t hard to talk to a chatbot, but integrating it meaningfully into your life, both personal and professional, is a different matter.
Tilt the odds in your favor
I no longer feel like an imposter at Every, and it’s because I’ve changed the way I approach AI.
I stopped wondering if I’m using AI the “right” way; after all, given the stochastic nature of the technology, I’m not sure if one exists.
Instead, I’ve been thinking about how to tilt the odds in my favor with the LLMs I use. The low-hanging fruit for me has been personalizing the models by adding Custom Instructions in ChatGPT (yes, I own up to being one of the shamefully uninitiated) and Styles in Claude. This ensures that the AI’s responses match my needs and preferences more closely, without my having to repeat the same instructions in every prompt. I’ve also gone from feeling overwhelmed and mildly nihilistic about prompt engineering techniques to actually trying them out.
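The idea behind features like Custom Instructions and Styles can be sketched in API terms: a standing instruction that rides along with every conversation, so you never retype it. The instruction text and function below are illustrative, not any vendor’s actual implementation:

```python
# A standing instruction, analogous to ChatGPT's Custom Instructions
# or Claude's Styles (wording here is made up for illustration)
custom_instructions = (
    "You are helping a writer. Prefer concrete examples, "
    "flag uncertain claims, and keep feedback under 200 words."
)

def build_messages(user_prompt):
    # The same system message is prepended to every request,
    # so the preference travels with each prompt automatically
    return [
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Critique this lede: ...")
print(messages[0]["role"])
```

The payoff is consistency: every exchange starts from the same preferences, which narrows the model’s probabilistic range toward output you actually want.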
This isn’t revolutionary advice—I’m far from the first person to suggest that customizing an LLM or using prompt engineering will improve its output—but I use these examples to illustrate the shift in my mindset. I went from expecting to learn how to use AI to defining my own relationship with the technology.
The distinction matters. When you're defining a relationship with technology, you're asserting agency. You're acknowledging that AI's potential isn't fixed or predetermined, but rather something you actively shape. You don’t have to find the “right” way, because it’s up to you to create it.
Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.
We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex. Deliver yourself from email with Cora.
Get paid for sharing Every with your friends. Join our referral program.