It’s the late 1800s and a brand new horse-drawn carriage has just hit America’s dirt roads. Horses are still the most popular mode of transport, so this isn’t out of the ordinary—but the carriage is causing quite the stir. The horse is a bit…strange. It doesn’t whinny, neigh, or stomp—it doesn’t even move its hooves.
This contraption was one of America’s earliest experiments with a car. Its creator, a blacksmith from New York, wanted people to feel comfortable in his strange mechanical contraption—so he stuffed all the nuts, bolts, and gears that make a car work into an artificial horse-shaped frame. The driver could even steer it with reins attached to the “horse’s mouth.”
A sketch of the design was published in an upstart magazine, The Horseless Age, and came not long after Thomas Edison was quoted as saying that “the horse is doomed” as a means of transport.
We know the rest—Edison was right, and modern car designs have evolved a long way from their equine-imitating ancestor. But the urge to couch early motor vehicles in forms that were familiar to people at the time is an example of a design principle known as skeuomorphism. And it persists in cars to this day: the original Tesla Model S shipped with a faux front grille—a feature gas-powered cars need to keep their engines from overheating—even though electric cars don’t need one.
It’s also a big part of how we interact with AI systems. The way we think, talk, and use AI today is modeled on our understanding of humans. That’s natural, and even useful, in the initial stages of adopting a new technology—but you could say we’re living in the “horseless carriage” era of AI. What comes next, the enduring applications of AI, will look different. Let’s call them “AI-native.” These capabilities can only emerge when we stop trying to replicate human intelligence, and start exploring what makes AI unique.
New technology absorbs the shape and form of what came before it
Skeuomorphic thinking goes deeper than surface design; it extends to how we actually use new technology. The early internet, for example, was initially used to take existing behaviors online: reading the news, sending mail, shopping from catalogues. Internet-native use cases—ones that had no clear parallel in the pre-internet world, like crowd-sourced stores of knowledge and the rise of social media—emerged much later.
Like Web 1.0 in the late 20th century, our mental models of what we can use AI for are currently shaped by skeuomorphism. We’re predisposed to map AI’s capabilities to human roles because generative AI comes closer to mimicking human behavior than any technology we’ve created yet: It can understand natural language, adapt its communication style to match your level of expertise in a subject, and make do with bad instructions even as it thrives with context—much like a human being.
We assume that AI will do what humans do. That’s why the impact of AI is often spoken of in terms of how much of the labor force will be automated, its intelligence is measured in terms of human IQ, and its uses are modeled around human roles like the personal assistant, copywriter, and developer. While many of these use cases are admittedly helpful, this way of thinking is limiting. If we assume that AI is like us, we risk failing to explore what AI is uniquely suited to do.
How do we find the AI-native?
I’ll be honest: It’s hard to define what “AI-native” will look like, and these are my best guesses about how this will play out. With that caveat, we can start to paint a clearer picture by thinking about a fundamental quality of AI: It allows us to interact with large amounts of data in natural language. Until now, doing this required technical expertise—like knowing how to write a SQL query. The ability to have a conversation with a database this easily is unprecedented, and a few fledgling AI-native trends already leverage it.
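To make the contrast concrete, here’s a minimal sketch of what that expertise barrier looks like. The `sales` table and the question are invented for illustration—the point is that answering even a simple question about data has, until now, meant translating it into a query like this one, which is exactly the translation an AI layer can now do from plain English:

```python
import sqlite3

# A tiny in-memory table standing in for a real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 120.0), ("West", 80.0), ("East", 50.0)],
)

# The plain-English question "Which region sold the most?" has
# traditionally required knowing enough SQL to write this:
query = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
    LIMIT 1
"""
top_region, total = conn.execute(query).fetchone()
print(top_region, total)  # East 170.0
```

An AI-native interface hides the query entirely: You ask the question in natural language, and the model generates and runs something like the SQL above on your behalf.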
When you have a question about a field you’re not an expert in, have you ever wished for a smart friend with domain expertise—someone you trusted to cut through the complexity and give you a straight answer? AI applications like Lenny Bot make me feel like that’s becoming possible. Lenny Bot is an AI trained on the body of work about product management created by Lenny Rachitsky, who has one of the largest Substack newsletter audiences. You can call or text Lenny Bot, ask it a question—including all the context about your specific situation—and receive a personalized response. Lenny Bot scales Lenny’s expertise: It was never practically possible for someone as experienced as him to read and thoughtfully respond to every question posed to him. Similarly, the research tool Consensus analyzes academic papers to answer your questions and, through a “consensus meter,” shows you the degree of scientific agreement on the topic (it also surfaces the relevant papers, in case you want to read them).
Speaking of new domains: Lately I also feel less inhibited about reaching beyond my circle of expertise. AI has made it possible to do things I’ve never done—like build software (with no-code tools like Lovable and Bolt) or make music (with Suno)—without investing significant time or money in the pursuit. I feel like I have nothing to lose by trying.
Another growing AI-native trend I’ve noticed is social AI apps like Friend or Character.AI. I’ve experimented with a few of these tools, but I haven’t been sold on the premise. Friendship is obviously an age-old concept, but before AI, we never had the “opportunity” to mold the personalities of our friends to our liking. In friendships, I’ve realized, I value people with opinions and distinct likes and dislikes—and, perhaps more importantly, people who don’t always agree with me. Most AI personas don’t yet pass that test.
Instead of asking how AI can do human tasks better, ask what AI can do that humans never could.
Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.