Too much AI‑assisted writing lands correct but hollow. Copywriter Chris Silvestri turns a hard lesson into a three‑phase workflow for great writing with AI and shows, in a side‑by‑side prompt test, how a bit of context shifts the output. You’ll see how you can apply the same method to any kind of writing—copy, board decks, novels—and where judgment does the heavy lifting.—Kate Lee
The email that changed my career was four words long: “These articles are crap.”
It was 2015. I was trying to grow my freelance writing practice, but as a non-native English speaker from Italy, I was deeply insecure about my language skills. When a new client asked for 100 blog posts in a month (this was the heyday of SEO), I leaned on my software engineering background instead: I tried to systematize and scale, passing off the bulk of the work to an offshore firm, hoping to look like a legitimate agency. And deep down, I knew that if the writing turned out bad, I wouldn’t be the one to blame.
On delivery, the client thanked me and paid in full. For a week after, I dreamed about my online writing empire. It had been so easy to get this project across the finish line: Sign, write one or two example articles, delegate the rest, repeat. Watch the money roll in.
Then the email hit. The client wanted a full refund.
I promised them I’d rework the posts, even though I wasn’t sure how. Somehow I managed, but it took more than double the time we scoped. Needless to say, I lost that client.
The failure forced a choice: Quit or learn to write from the ground up. That shame—the guilt and feeling of not being good enough—pushed me to spend the next few years studying and deconstructing persuasive writing. Without the intuition of a native speaker, I took the methodical approach of an engineer. I built arguments from first principles because I had no other choice.
That painful experience taught me a lesson that’s more relevant today than ever: Scaling production without scaling human judgment is a recipe for disaster. This is the trap most people are falling into with AI. The system I was forced to build to overcome my own shortcomings is the same system that can help you create great work with AI. Here’s what I’ve learned.
Why your prompts aren’t working (and what to do instead)
AI has raised the quality floor, making truly terrible writing rare. In doing so, it has also flooded the world with a sea of prose that’s grammatically correct, tonally plausible—and strategically and creatively empty.
Scroll through LinkedIn and you’ll find a landscape of posts with absolutist hooks (“most companies do X wrong”), empty buzzwords (“unlock the power of…”), and inhumanly uniform paragraphs that all sound vaguely the same. It isn’t wrong; it’s just… bland.
The initial response to this wave of mediocrity was to get better at prompting. Many writers focused on prompt engineering, the technical skill of crafting the perfect command to get a better output, a topic Every columnist Michael Taylor has written about. A prompt engineer might write: “You are a B2B SaaS copywriter. Write three headlines for a new financial software targeting VPs of finance. The tone should be confident but not arrogant. Return as a numbered list.”
While this is more structured than a simple request, the input is still shallow. Even if you provide examples of the kind of writing you want, the AI is still just pattern-matching based on the representation, in its training data, of how a “B2B SaaS copywriter” might write.
If you want good writing from AI at scale, you also need to provide it with the strategic raw materials first (what AI researcher Andrej Karpathy and others call context engineering). Instead of simply telling the AI the audience is “VPs of finance,” you supply it with customer interview transcripts that reveal their pains and priorities. Instead of describing the voice as “confident,” you feed it a brand voice guide with clear examples of what to do and what to avoid. You’re building a rich, data-informed world for the AI to operate within, not just giving it a better list of instructions.
Prompt engineering focuses on crafting specific but rigid commands. Word something the wrong way and you’re back to square one, tweaking phrases and hoping for a better result. Context engineering is more flexible. Once you provide the necessary context and foundation, you can steer the AI with more human, conversational instructions. Your job is to guide the output, evaluate it against your goals, and make the final editorial choices that turn technically correct prose into strategically sound and emotionally resonant words.
A system for AI‑assisted writing
That’s the theory. Here’s the system I use.
Comments
Question: What have you found to be the best way to provide context to the “environment”? Your screenshot shows the same prompt. Are there earlier messages where you’ve uploaded the documents and told the LLM how to use them?
@chad.j.royer Yes, the difference between those two screenshots in Gemini was the context and knowledge I’d shared previously. The first was a fresh chat; the second was my working chat, where I’d been sharing documents and holding a conversation since the start of the project. For more complex projects, I might use something like a Claude project to keep context organized.