Feasting at the Trough of AI Slop

AI imagery is more popular, more powerful, and less harmful than you think



In a recent interview, Mark Zuckerberg said something remarkable: “Feeds [were] friend content and now it’s largely creators, and in the future a lot of it is going to be AI-generated.” [Emphasis added]

Consider what he is saying. Not only will Meta’s curation on feeds like Instagram and Facebook be entirely algorithmic, but the content will largely be produced by algorithms, too. No longer will you be responding to middlemen—your aunt, a friend from high school, or a Taylor Swift fan account. Instead, you’ll be interacting directly with machines. This is a remarkable idea offered up by the best-positioned person in the world to have it. Curation and creation, all algorithmically controlled.

However, Zuckerberg got one thing wrong: It isn’t the future—it's already happening. A cottage industry of AI creators on social media is flooding the internet with their creations. My research indicates this phenomenon is already far larger than you think. 

Across X, Facebook, and other social media platforms, AI content is generating tens of millions of impressions every week. A Europol study estimated that 90 percent of online content will be “synthetic”—a.k.a. AI-generated—by the end of 2026. These estimates strike me as accurate, if not a little conservative. The cost to generate text and images is already so low that human creators can’t compete. Whether you like it or not, AI products are vying for our attention.

In this column, we’ve discussed ad nauseam that attention is a competitive marketplace, meaning that AI content is only a threat if it is better at capturing attention than other content, or if the producers of AI content have a structural cost advantage over traditional media companies.

This article is not a doom-and-gloom outlook on AI-created material. Crucially, AI-generated content only wins online if it is more competitive—if its consumers derive superior utility from it. It only takes market share if people like it. While it is tempting to dismiss this content out of hand, those impressions are earned, not given. Attention aggregators like the social media platforms are not in the business of losing users, so they aren’t going to surface something that doesn’t perform.

Why are we talking about this now? AI content—in particular, AI-generated images—is, as of last week, indistinguishable from real photos. Welcome to the season of AI slop.

Checking in on AI progress

Let’s put content—a horribly vague word, but the only one broad enough to encompass my argument—on a timeline of improvement. On the left-hand side is the cave art of 40,000-plus years ago. It is creation at its most primitive, with humans using simplistic tools to express themselves. Each subsequent technological improvement, like the Gutenberg printing press, decreased the costs of creation or gave individuals the power to make more sophisticated art. Progressing from the harpsichord to the digital piano allowed far more people to play the keys. Progressing from the Gutenberg printing press to Google Docs and Amazon’s Kindle Direct Publishing allowed far more people to publish books.

Source: Every Illustration.

AI-generated content is the latest step on that improvement curve. Type a hastily scrawled prompt and, voilà, you can produce not just one object but thousands. More people can create more content, more cheaply. This is why AI generation startups like Runway describe themselves as a “new type of camera.” Not only can you make videos, you can make new kinds of films (for a lot less money), too.

The key to being a substitutable good is, well, being able to substitute for what came before it. For AI-generated content, that means it has to look like the content produced by the previous generation of tools.

As of last week, we have officially passed the point at which it is possible to tell the difference between AI images and real photos without close examination.

Source: Reddit.

This was made possible by the release of open-source models—known as Flux—from Black Forest Labs, a startup founded by some of the original team behind Stable Diffusion (it has raised $31 million). From there, people fine-tuned the models to be more realistic using a technique called LoRA, or low-rank adaptation. Because the Flux models and the LoRA fine-tunes are open-source, you can’t put the cat back in the bag. Anyone can do this now! And it will only get better as other developers build on it.

Here is an image, pre-LoRA, using Flux out of the box.

Source: Reddit.

Here is an image after applying the LoRA fine-tune.

Source: Reddit.

I don’t know about you, but unless someone told me to look, I couldn’t tell that the second image was AI-generated. The skin shading, the hair, the out-of-focus logos in the background, the lighting—all of it is incredibly realistic. There are some flaws (look at the writing on the lanyard), but unless you feel like staring at it for a few minutes, you wouldn’t be able to tell. Even you, sitting at home, can do this with anyone you have a picture of. (Here are instructions if you’d like to do it yourself.)
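For the technically curious, the whole do-it-yourself pipeline fits in a few lines of Python with Hugging Face’s diffusers library. This is a minimal sketch, assuming you have a sufficiently large GPU and have accepted the FLUX.1-dev license; the LoRA file path, prompt, and settings below are placeholders, not a canonical recipe:

```python
# Minimal sketch: Flux base model plus a realism LoRA via Hugging Face diffusers.
# Assumes a CUDA GPU with enough memory; the LoRA path is a placeholder for
# whichever community realism fine-tune you download.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # Black Forest Labs' open-weights release
    torch_dtype=torch.bfloat16,
).to("cuda")

# Layer the realism fine-tune on top of the base model (hypothetical file).
pipe.load_lora_weights("path/to/realism-lora.safetensors")

image = pipe(
    prompt="candid photo of a man at a tech conference, lanyard, soft bokeh",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("realistic.png")
```

The LoRA step is the whole trick: Rather than retraining billions of weights, it layers a small set of low-rank adapter weights on top of the base model, which is why hobbyists can produce these fine-tunes on consumer hardware.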

Because the models are open-source, any company can also implement them. For example, as of August 14, X’s AI assistant has been running a Flux model for its image generation. I used it to generate a picture of Trump dabbing up Adolf Hitler.

Source: X. 

This image is just all right in terms of realism. While it may look obviously fake to this audience, you would be surprised by how many people would engage with it. X’s implementation of these tools pairs creation with curation: Anyone in the world can now access algorithmic distribution and image-generation tools in a single application, a new precedent. Combined with the fact that Musk’s X makes financial payouts for views, it represents direct competition for new and old media formats. These images are already flooding the timeline, though their staying power remains to be seen.

While other mediums, such as voice and video, lag behind images, they’re roughly where image generation was two years ago. Music apps like Suno can produce a song that usually sounds human-made. It isn’t perfect, but it is pretty close. Runway, Google’s Veo, OpenAI’s Sora, and multiple other startups can generate beautiful videos that last more than 20 seconds. There are constraints around format, style, and timing, but these models are making rapid progress.

The form factor of AI content is now so developed that a casual observer can’t tell AI-generated photos or text from the human-made kind. This is not a hypothetical. For two years, I have been warning that this day would soon be upon us. Now that day is here. Deepfake images are totally believable, easily made by anyone, and cost less than 10 cents apiece. Video, music, and other audio mediums are not far behind. These are cost structures that traditional media companies simply can’t match.

Even if these images can be made cheaply, what evidence do we have that they make as much of an impact?

Feeling sloppy

Content exists in the context in which it is consumed: We have to evaluate the emotional impact of a piece both on its own and within the feed where it appears.

In a 2022 interview with the New York Times, David Levine, the chief content officer at Moonbug, discussed the company’s content strategy. For the unfamiliar, Moonbug produces the hit TV show CoComelon—known as CoCocrack among my friends whose young kids have become addicted to watching it. Levine explained how the company uses YouTube analytics to see what type of content increases watch time. Here’s their formula for success:

“Infants are also enamored with objects covered in a little dirt, like they’ve been rolling around on the ground. And they’re fascinated by minor injuries. Not broken legs or gruesome wounds. More like small cuts that require Band-Aids.
‘The trifecta for a kid would be a dirty yellow bus that has a boo-boo,’ Levine said. ‘Broken fender, broken wheel, little grimace on its face.’”

Before our tiniest human beings are even fully capable of stringing together sentences, they can form emotional reactions to the content they view. Our lizard brains all like the same thing—and apparently that thing is dirty yellow buses.

Lest you think this is only true for human beings still in potty training, there are indications that adults are just as lizard-brained. I don’t mean this in a hand-wavy way. I mean it in the cold, hard cash sort of way. There is a thriving media ecosystem of AI images that go viral on Facebook and make money for their creators. It works something like this (sketched in code after the list):

  1. An individual comes up with a combination of an image prompt, a caption, and hashtags.
  2. They register as members of Meta’s "Creator Bonus Program," which pays them on the basis of views or clicks.
  3. They post dozens of images a day hoping they go viral.
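
To make the loop concrete, here is a rough sketch of that workflow in Python. The generate_image helper is hypothetical (a stand-in for a local Flux pipeline or a hosted image API), the Page ID and token are placeholders, and the posting call is modeled on Meta’s Graph API photo publishing, with the version string as an assumption; the bonus payouts themselves happen on Meta’s side, not in code:

```python
# Rough sketch of the content-farm loop described above. generate_image() is
# hypothetical; the Graph API call is modeled on Meta's Page photo publishing
# endpoint, with version and parameters as assumptions.
import time
import requests

PAGE_ID = "YOUR_PAGE_ID"          # placeholder
ACCESS_TOKEN = "YOUR_PAGE_TOKEN"  # placeholder; needs Page publishing permissions

def generate_image(prompt: str) -> str:
    """Hypothetical helper: render the prompt and return a public image URL."""
    raise NotImplementedError

# Step 1: prompt, caption, and hashtags, tuned to whatever is working this week.
POSTS = [
    ("soldier reunited with his dog, golden hour", "He never gave up. #blessed"),
    ("elderly woodcarver beside a giant sculpture", "Nobody congratulates him. #art"),
]

# Steps 2-3: as an enrolled bonus-program creator, post in volume and hope
# something goes viral.
for prompt, caption in POSTS:
    image_url = generate_image(prompt)
    requests.post(
        f"https://graph.facebook.com/v19.0/{PAGE_ID}/photos",
        data={"url": image_url, "caption": caption, "access_token": ACCESS_TOKEN},
    )
    time.sleep(600)  # dozens of posts a day, spaced out
```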

According to reporting from the excellent team at the news startup 404 Media, someone talented at making this kind of content can earn a few thousand dollars a month. That’s not a ton to a Westerner, but in parts of Southeast Asia, where instructional videos about the practice are popular, it is a large sum of money. What they post shifts with whatever the algorithm is rewarding, but there are some commonalities: Jesus, poor people, soldiers, sexy women—things that arouse strong emotional reactions.

This image generated 100,000 likes on Facebook:

Source: X.

Lest you think it’s restricted to vaguely sinister and derogatory posts about child birth rates, here is a candidate for the U.S. Senate using AI imagery to attack a sitting senator.

Source: X.

These may appear to be obviously fake images to you or me, but in the context of the news feed, with 100,000 likes as endorsement, you might feel intrigued, get mad, and keep scrolling. And these images don’t hold up to close scrutiny at all! The new Flux model with LoRA was only released on August 7. The level of sophistication is going to skyrocket from here.

That the subject matter consistently performs suggests that AI content is a substitutable good in terms of the reactions it elicits from consumers. Additionally, the workflow for these AI creators is identical to that of an employee at any other media company: Make something, distribute it through your favorite channels, get paid, and repeat. The fact that an object was created with AI doesn’t change its position in the feed; only consumer interest does.

However, this is only bad if no one can tell the images are fake and, crucially, if the platforms’ curation algorithms continue to reward engagement bait. While Meta and X should be concerned about truth, it is so hard to define and enforce that they end up focusing on the much easier and more legible thing—profit. Because the primary use of these platforms is distraction and entertainment, truth is secondary.

Thus, the problems of AI-generated content are the same as those of creator-generated content. The issue lies with the platforms.

Today, AI slop manifests mostly through images and some audio. If improvement curves mimic what happened with images, deepfake videos will be indistinguishable from reality in less than three years. Images, video, and audio are perhaps the most dangerous forms of misinformation because they are the most convincing. It is easy to dismiss prose, but it’s harder to dismiss what you see with your own eyes. My napkin math says that images cost less than a cent to produce at sufficient volume, so the spammer’s method outlined earlier yields a net positive return. Video generation currently runs about $5 a minute, so that format is not yet at the break-even point.
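
Here is that napkin math spelled out, using the figures from this piece. The posting cadence and monthly revenue are illustrative assumptions drawn from the ranges reported above:

```python
# Napkin math for image-spam economics. Per-image and per-minute video costs
# come from the article; posts per day and monthly revenue are illustrative
# assumptions within the reported ranges.
COST_PER_IMAGE = 0.01      # dollars; "less than a cent" at volume, rounded up
POSTS_PER_DAY = 50         # "dozens of images a day"
MONTHLY_REVENUE = 2000.0   # "a few thousand dollars a month," low end

images = POSTS_PER_DAY * 30
image_cost = images * COST_PER_IMAGE
print(f"{images} images cost ${image_cost:.2f}; profit ${MONTHLY_REVENUE - image_cost:,.2f}")
# -> 1500 images cost $15.00; profit $1,985.00

# The same cadence in one-minute videos at ~$5 a minute swamps the revenue.
video_cost = images * 5.0
print(f"Same cadence in video: ${video_cost:,.2f}/month in generation costs")
# -> $7,500.00/month, well past break-even
```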

Spotify currently has more than 100 million tracks and 350,000 audiobooks. It does not matter if Suno or some other AI music app increases the song volume on the platform by 10x, because what determines success is the curation, not the creation. You can scroll endlessly on X or Instagram Reels and never run out of video. More AI videos don’t change that.

Cautious, not dismissive

The problems raised by AI content are those that come with putting a bunch of humans on one website. All of us love content that confirms our priors—funny memes, dumb jokes, things that turn us on. AI’s only material change is that it makes the images, captions, and videos that make us feel that way cheaper and more convincing. The lies that could be told with Microsoft Word or Adobe Photoshop are just easier to tell with AI tools.

That all of us are reveling in content slop is an uncomfortable truth. It may be intellectually easier to blame AI when the issue resides in people and platforms. My double-bind theory argues that attention aggregators have no choice but to allow content that pushes the bounds of social acceptability; otherwise, engagement moves to another platform. But it is a true double bind: The more permissive your community content guidelines, the more reluctant advertisers will be to advertise on your platform (as Elon Musk is learning with X). Any feed-based platform will have to allow AI-generated content in some form.

Existing media companies will either need to adopt AI so they aren’t undercut by AI-first companies, or derive their value from things AI can’t replicate: appointment viewing, live events, exclusive IP, or counter-positioning as exclusively “human-made.” This is part of the reason why sports rights continue to grow more expensive. Creators will still be creators! AI just expands the scope and ambition of what they can create. 

In some ways, this phenomenon is the same as the one I wrote about last week, but instead of horny chatbots, it’s images. It is tempting to blame the tools of AI for the negative externalities, but they aren’t media—they are mirrors, reflecting back the ugly, messy truth of humanity. 

The real challenge isn’t the technology itself but how we, as consumers and creators, adapt to a world where creativity is being redefined by the minute. This is the content revolution, and whether we like it or not, we’re already living in it.


Evan Armstrong is the lead writer for Every, where he writes the Napkin Math column. You can follow him on X at @itsurboyevan and on LinkedIn, and Every on X at @every and on LinkedIn.


