
In the 1470s, Venetian scribe Filippo de Strata penned a meditation on the practice of writing. “Writing,” he said, “should be respected and held nobler than all goods.” As a member of the quill-wielding guild, I tend to enthusiastically agree—but Filippo didn’t stop there. He added: “[U]nless she has suffered degradation in the brothel of the printing presses. She is a maiden with a pen, a harlot in print.”
Ooof.
Putting aside Filippo’s 15th-century ideas about female virtue, his disdain for the printing press was pretty common at the time. Many believed that this new machine didn’t democratize knowledge but debased it—by, say, prioritizing quick production over scholarly integrity.
This view didn’t age well, to say the least. It didn’t take long for the printing press to revolutionize the way knowledge was preserved and spread. By the 1600s, its impact was already legendary. In 1620, British philosopher Francis Bacon wrote that the printing press was one of three innovations “unknown to the ancients” that “changed the appearance and state of the whole world.” (The other two were gunpowder and the compass.)
Flash-forward to the AI era of today. Almost daily we’re forced to reckon with fresh evidence that the powers of large language models and generative AI will upend industries and automate away human labor. It’s no wonder, then, that much of the conversation around the disruption AI is causing is laced with fear.
We can do better. Some worry is certainly warranted: We still have few answers about what will become of workers displaced by automation, and there are credible arguments about the harms of blindly adopting AI in various domains. But knee-jerk “new tech is bad” thinking is just as unhelpful as uncritical boosterism. Finding a middle ground means examining the changes brought about by AI, and our reactions to them, as they occur—in order to make more thoughtful decisions about a technology that is undeniably changing the world.
When tools form part of our identities
The printing press mimicked writing and copying books in longhand, a skill that Filippo and other scribes held close to their hearts. That skill helped them make a living, defined their role in society, and arguably formed part of their self-worth. So it’s only natural that their first reaction to the machine that displaced it was rejection. The feeling dissipated over time; subsequent generations of writers learned how to use printing presses to spread their work far and wide, vastly amplifying its impact. The skills didn’t disappear—they shifted.
New technologies often subsume human skills, and when those skills are things we take pride in, or rely on for our livelihood, we get scared. We tell stories about why technology is bad. In the 19th century, people lamented the emergence of new ailments like “telephone ear,” “typewriter’s cramp,” and “bicycle back.” More recently, an article in The Atlantic contemplated whether the internet is making us stupid. In these narratives we tend to pit ourselves against technology; it’s “us” versus an ambiguous, non-living “it.” That is, until we evolve, adapt, and develop a new set of skills to take pride in.
We’ve seen this shift happen before. Take the effect of GPS on our wayfinding skills: The first commercially available GPS device launched in 1989, and by the early 2000s the technology was proliferating rapidly. Paper maps display information as it relates to other features and landmarks in the environment. When you use a physical map, you’re tracing the relationships between those landmarks and drawing a cognitive map inside your head. A GPS map, on the other hand, displays information relative to your position and gives you a set of instructions to follow to get from point A to point B. Our wayfinding skills didn’t disappear; they evolved. (Of course, this wasn’t the first time our wayfinding skills had evolved—the paper map was a revolution in itself. Before maps existed, our ancestors navigated using sensory cues like the position of the stars, ocean currents, and the calls of migrating birds.)
AI can mimic the abilities that define knowledge work—creativity, critical thinking, reasoning—remarkably well. If you are a knowledge worker, that’s pretty scary! But one way to move beyond the fear is to ask: If AI can do these things, what new skills will the technology unlock?
A new cognitive frontier is here
A recent paper by researchers from Microsoft and Carnegie Mellon University surveyed 319 knowledge workers to understand how using AI affects their critical thinking. The researchers found that these tools shift what “thinking critically” means in three main ways:
- From gathering information to verifying it. AI tools can gather and curate large amounts of information in response to a prompt. They’re far more efficient at this than we are—but it falls to us to check their output for accuracy, and doing that well requires a degree of subject-matter expertise. Deep research, OpenAI’s agentic research assistant, is indicative of this shift: It can independently carry out complex research tasks that would previously have required human effort.
- From solving problems to integrating AI output. As AI becomes more proficient at providing solutions to prompts with clear answers, humans will be able to spend more time deciding whether models’ responses fit the context of the request. Exercising good judgment—knowing what makes something not just correct but also meaningful—will take on increased importance.
- From executing tasks to overseeing them. 2025 has been dubbed “the year of AI agents” with good reason: As agents take over more of the execution, humans will focus on allocating resources and managing different AI models. The new paradigm of service-as-software—a product model in which software provides an end-to-end service (instead of being just a tool) by taking on tasks that would otherwise require significant human effort—is perhaps an extreme version of this shift.
The uncomfortable reality of progress
We may be able to adapt our cognitive skill set to thrive in the age of AI, but it would be naive to think the transition will be easy. There are very real trade-offs we’ll have to grapple with.
Filippo, our not-so-happy Venetian scribe, had a point when he worried that the printing press would lower the quality of the books being produced. There was, indeed, an influx of faulty editions of classical works, bawdy low-brow texts, and straight-up book piracy in the years after the printing press was introduced. But the overall effect was a world flooded with books. Even though not all of them were great—and objectively bad stuff has proliferated ever since—we are undoubtedly better off for the abundance of wonderful work that otherwise wouldn’t exist.
GPS involves similar trade-offs: Research suggests that heavy reliance on it may lead to an overall decline in our spatial memory—the ability to remember the locations of objects and places, and the relationships between them. While spatial memory is a key part of navigation, it also plays a role in cognitive tasks beyond it, like packing a bag for a trip by mentally mapping the places and situations you’re likely to find yourself in. Other critics make a more abstract argument: GPS reduces our level of engagement with our environment. Finding your way around a new city, a creased paper map in hand, is a fundamentally different experience from the GPS equivalent: “Turn left in 500 meters and your destination will be on the right.” Even if you agree with these arguments, it doesn’t change the fact that GPS is a hugely useful practical application of technology.
When it comes to AI, I believe that prioritizing adaptability over resistance will serve us better in the long run. We didn’t abandon writing after the printing press, and we won’t stop thinking because of AI. Our skills will settle around overseeing and evaluating rather than raw creation. The shift will be uncomfortable—we’ll likely lose some depth in areas AI handles well—but in exchange we’ll gain capabilities we’re only just beginning to imagine. The only way to discover them is to lean into using the new technology.
I’m reminded of a comic from a 1960 edition of the satirical magazine MAD that poked fun at the effects of cars and other motor vehicles. It quipped: “In time our legs will become vestigial organs, and we’ll end up soft and fat, looking like round-bottom toy dolls.” From electric scooters to motorcycles and Segways, motorized transport is more popular today than ever. But our legs haven’t withered away. Now we go to the gym, or for long strolls or hikes, just for the fun of it. We didn’t lose our ability to walk; we gained the freedom to choose when and how to use our physical capabilities. And so it will be with AI. We won’t lose our minds, but rather gain the agency to direct our cognitive energies intentionally.
Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.