TL;DR: Why does AI writing still sound like AI writing, even as the models get smarter? In his first piece since joining Every as Spiral’s general manager, Marcus Moretti explains why the answer is more complicated than you’d think. The most reliable fingerprints of your personal style come from the words you write subconsciously: articles, pronouns, and function words that emerge in a distinctive pattern as you focus on the meaning of a sentence. His piece explores what new research in machine learning and stylometry—the study of style—means for the future of writing tools like Spiral. If you want to go deeper, Spiral has several updates, including creating a writing style from your website or X account (even taking post engagement into account) and a cleaner, faster editor.—Kate Lee
Was this newsletter forwarded to you? Sign up to get it in your inbox.
OpenAI models demonstrate Ph.D.-level knowledge across physics, biology, and chemistry. Anthropic staff have claimed that the company's Opus 4.5 model has “largely solved coding.”
Yet AI writing remains stubbornly detectable: “It’s not an idea. It’s a breakthrough.” “Delve.” Lists of threes with no “and.”
If you’re a regular Every reader, you may already know why this is. LLMs are trained on an unfathomable number of words and learn, in general terms, how to speak. Post-training, which refines a model after its initial training on large datasets, makes models friendlier and safer, so they end up speaking with a kind of generic politeness. Ted Chiang’s description from a few years ago remains apt: “ChatGPT is a blurry JPEG of the web,” a tool that approximates human insight without ever landing on the mark.
I’m interested in the relationship between LLMs and writing style because I’m the general manager of Spiral, Every’s AI co-writer. Writing sessions in Spiral begin as a chat: You describe what you intend to write, and Spiral helps you hone your message and gather relevant research. Then it produces one or more drafts, offering several approaches for your piece.
Our aim is for Spiral’s written output to reflect your personal writing style, not the generic politeness of the foundation model. To this end, I’ve been reading papers on natural language processing, linguistic forensics, and stylometry, the study of writing styles. It wasn’t until I started working on Spiral that I became aware of the century-plus history of stylometry, or of the fastidiousness with which researchers have catalogued the elements of style. In recent years, researchers in these fields have flocked to LLMs, finding new ways to expand our understanding of human writing. Here are a few findings that struck me as interesting, even counterintuitive, and that hint at where AI writing might be headed.
Subconscious decisions define writing styles
Stylometry has had a few moments of glory. In the 1800s, stylometrists gave sold-out lectures about whether William Shakespeare wrote the plays attributed to him. In the 1960s, two stylometrists isolated Alexander Hamilton’s contributions to The Federalist Papers based largely on the presence of the word “upon.”
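The “upon” signal comes down to a simple frequency count: Hamilton reached for the word far more often than his co-authors did. Here is a minimal Python sketch of that idea, counting a marker word’s rate per 1,000 words. This is an illustration, not the original researchers’ actual statistical analysis, and the two snippets are invented stand-ins rather than real Federalist text:

```python
from collections import Counter
import re

def function_word_rates(text, markers=("upon", "while", "whilst")):
    """Rate per 1,000 words for each marker word (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {m: 1000 * counts[m] / total for m in markers}

# Toy snippets standing in for two authors' prose (invented, not real quotes)
author_a = "Upon reflection, the energy of the executive depends upon unity."
author_b = "While the departments remain separate, ambition counteracts ambition."

print(function_word_rates(author_a))  # "upon" rate is high
print(function_word_rates(author_b))  # "upon" rate is zero
```

Because function words like “upon” carry little meaning, writers rarely monitor them, which is exactly what makes their rates a durable fingerprint.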
In the 2020s, LLMs have introduced new ways of studying style. Last year, two Cornell University researchers systematically manipulated text snippets to see how the changes affected LLMs’ ability to guess their authors. They removed attributes of the text one at a time, such as proper nouns or capitalization, and measured the effect on attribution accuracy.
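The paper’s exact pipeline isn’t reproduced here, but the ablation idea can be sketched in a few lines of Python: strip one attribute at a time, then feed each ablated version to the attribution model and compare accuracy. The attribute names and heuristics below (especially the crude proper-noun filter) are illustrative assumptions, not the researchers’ implementation:

```python
import re

def ablate(text, attribute):
    """Return the text with one stylistic attribute removed (illustrative)."""
    if attribute == "capitalization":
        return text.lower()
    if attribute == "punctuation":
        return re.sub(r"[^\w\s]", "", text)
    if attribute == "proper_nouns":
        # Crude heuristic: drop capitalized words that are not sentence-initial
        tokens = text.split()
        kept = [t for i, t in enumerate(tokens)
                if i == 0 or not t[:1].isupper()]
        return " ".join(kept)
    raise ValueError(f"unknown attribute: {attribute}")

sample = "Marcus met Ada in Rome, and they argued about style."
for attr in ("capitalization", "punctuation", "proper_nouns"):
    print(attr, "->", ablate(sample, attr))
```

If attribution accuracy barely drops when an attribute is removed, that attribute wasn’t carrying much of the author’s fingerprint; a large drop suggests it was.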