Can YouTube Survive the Adpocalypse?

Illustration: Konstantin Sergeyev. Source images: YouTube; Runeer/Getty Images

Last winter, in what would become the second of three major advertiser exoduses from YouTube in recent years, companies like Coca-Cola and Amazon pulled ads from the platform after finding them paired with hate speech and violent extremist content. YouTube — which places ads using a theme-guided algorithm, and at that point relied largely on user flagging to catch inappropriate content — scrambled to appease the advertisers after mass outrage over the breach of trust.

In response to the uproar, popularly called the “Adpocalypse,” the platform introduced a vague new policy of automated demonetization, but it drew remarkably wide lines for the AI carrying out these changes.

Guidelines for content that might be deemed “Not Advertiser-Friendly” (a term now so ubiquitous in YouTube creator discussions that it’s mostly said in shorthand, “NAF”) include categories like “sensitive social issues” or “tragedy and conflict.” Ostensibly built to combat a sliver of YouTube content with genuine potential to provoke extremism, the policy struck a major blow to any creator who even tangentially touched on controversy. Fairly mainstream commentary, unbiased reportage, comedy, PSAs, and world history were all demonetized with little attention paid to nuance or context.

Both the boycott and the policy shift provoked outrage and panic from YouTube creators large and small, many of whom reported losing upwards of half of their income in April and May. The comparatively detailed current NAF guidelines had not yet been published, and no one understood why their videos were being flagged or how to avoid it.

“YouTube doesn’t release anything until way later, and then all of a sudden we’re like, ‘Oh, so that’s what this means,’” said Arianna Pflederer, who reached out to me after her suicide-awareness video was demonetized. “I don’t feel like they ever really explain things well. Even after they explain things, I don’t think their explanation makes total sense.”

Pflederer, who authors a vlog focused on mental health and motherhood, had mostly avoided the rash of demonetizations, but she was shocked in July when her video “Do You Want to Commit Suicide?” was immediately marked NAF. The vlog, despite its provocative title, was an empathetic effort, and she immediately initiated an appeal — an anonymous button press that triggers a human review, the results of which may or may not be shared with the creator, and which might be completed in a matter of hours, in months, or not at all. Within a week, she received notice that the video had been further flagged and was now completely demonetized.

“I understand from YouTube’s point of view why, if someone had written suicide, they may mark that as not advertiser-friendly,” Pflederer conceded. “Because, you know, some people upload videos of people actually committing suicide, those kind of things, shootings, things like that.” Like everyone I spoke to, she made a point of appreciating the challenges the platform faces in managing so much content; algorithms are hard. Still, she was shocked that after watching the video, a YouTube reviewer had decided against it.

She combed the guidelines and double-checked with her network — a company that manages many YouTube channels and usually has slightly better intel on policy updates — but couldn’t find anything that justified the action against her.

Romina, another vlogger who creates travel videos and prefers to be identified by her first name and channel moniker (@RedRomina) for privacy reasons, had been waiting months for reviews of several of her NAF videos, one of which featured her exploration of Murphy’s Ranch, an abandoned camp built by Nazi sympathizers in Los Angeles. “I sort of understand why that would get demonetized,” she said diplomatically, “but, of course, the video isn’t pro-Nazi or anything like that; it’s just an educational piece about a place.”

What she couldn’t understand was the NAF flag that appeared on her September 29 video about a cat café in Atlanta that employs the homeless as part of a skills-building program. After submitting a request for review, she complained about the incident on Twitter the next day, tagging @YouTube in the tweet. By the time we spoke on October 2, the video had been quietly remonetized.

“I just woke up this morning and I happened to see my video was no longer demonetized,” she said of the incident. “For my other two videos, I never publicized that they were demonetized. But for this video, I think that possibly doing that and tagging YouTube, and the fact that the video was gaining traffic pretty quickly, helped with the appeal process.”

The platform openly acknowledges that the algorithm is imperfect and that channels and videos need to pass minimum view thresholds before they’re eligible for NAF appeals (10,000 and 1,000 views, respectively). But the policy’s wide umbrella means that channels large and small have been losing revenue and traffic for content the platform has never before considered objectionable — “I have not talked to a YouTuber that hasn’t been affected,” one active member of the family-vlogging community said.

“I don’t think there is, like, some perfect explanation or perfect fix to this,” Pflederer said. “Either way, there’s going to be some give or take on either side, but for us as creators right now, it feels like creators are the only ones giving right now, and that everyone else is just taking.”

“Literally almost everyone across the board has seen their views cut in half,” the family-vlogger told me. “So we’re trying to fight the Not Advertiser-Friendly system as well as fighting the new algorithm, and it’s, like, how are people supposed to live off this anymore, you know?”

This saga has run parallel to another, smaller YouTube controversy that sprang up after months of ignored flags and complaints, revolving around Mike and Heather Martin, parents who used the platform to make a living off of videos of emotionally and verbally abusive “pranks” they performed on their young children. The family came under scrutiny in April after popular YouTube commentator Philip DeFranco called out their channel, DaddyOFive, and the resulting outcry from YouTubers and international media outlets led YouTube to demonetize the channel and remove some videos. (Determining how many is difficult, as the Martins privatized their entire archive later that day, and YouTube won’t comment on individual videos.)

This was six months after Rose Hall, biological mother and now-restored legal guardian of the two youngest children, began attempting in vain to get YouTube’s attention on the channel. She and at least six others allege that they had been regularly flagging the channel’s videos since at least October 2016, but despite YouTube’s claim that even a single flag triggers a human review, they saw no action.

“Before it even hit the news, they were getting flags,” the prominent family-vlogger said of the incident. “YouTube should have demonetized that channel a long time ago, taken away that incentive to continue to put videos up like this. If you’re being paid to put up shocking content with your children, people are gonna continue doing it.”

The couple, who were charged with child neglect in August and accepted five years’ probation in September, still post daily to their two other YouTube channels, FamilyOFive (formerly MommyOFive) and DaddyOFive Gaming, both of which are still monetized and continue to grow. FamilyOFive now boasts nearly 30 percent of the original channel’s following. In the months since the plea deal, the channel has again often featured Heather Martin’s three boys pranking one another (though, for the moment, without the scare quotes).

In April, DaddyOFive had over 750,000 subscribers and an estimated annual income (automatically accrued through YouTube ad revenue) of $200,000 to $350,000. YouTube, which takes a 45 percent revenue share, made nearly as much. Both parties continue to profit off of a following gained through behavior that the Martins themselves admitted could be labeled as child neglect. The platform has made no clear effort, internally or externally, to revise the policies and procedures that allowed it.

“I have no confidence in their flagging system, honestly, especially after the DaddyOFive stuff,” the prominent family-vlogger said. “I don’t flag videos anymore. It’s worthless.”

The Martins are only a microcosm of the gargantuan family-vlogging business on YouTube, an insanely popular, profitable, and unregulated industry that revolves around sponsored videos of young children advertising name-brand toys and games to audiences made up primarily of other young children. Despite its massive scale — over 1 billion views each week across the vertical — the corrupting influence of this practice is comparatively benign when weighed against the dangers of hate speech and terrorist propaganda. Still, the children on both sides of the screen are incredibly vulnerable, and even a cursory glance will expose dozens of stories of children abused or exploited by their parents, on camera or off, in pursuit of ad dollars.

There are popular channels that fetishize their young children, exploit their injuries, and compensate them unevenly or unfairly for what often becomes a full-time job. Some of this behavior is largely invisible — taking place behind the scenes — and more of it goes unnoticed by the young children who, by the estimate of most creators, make up the majority of the vertical’s audience.

“When you think about what kids are doing to produce content, to create revenue for their families, it is the same kind of job that would be under child-labor laws, except for it’s not protected,” said Ana Homayoun, social-media expert and author of Social Media Wellness: Helping Teens and Tweens Thrive in an Unbalanced Digital World.

Family vlogging is one of YouTube’s most popular and profitable verticals, and though gaping holes in both YouTube guidelines and child-labor laws leave enormous potential for offscreen abuse, neither viewers nor advertisers have taken much issue with the content, and YouTube has made no move to take responsibility for regulating, managing, or tracking these incidents. For the most part, the final products are still “advertiser friendly,” and drawing attention to the tens of millions of children under 13 who legally shouldn’t be on YouTube in the first place would likely prove financially damaging to the company.

“If Facebook or YouTube know that they’ve collected information from a child under the age of 13, [COPPA law requires] they have to take steps to discard that information,” said Linnette Attai, a compliance expert in kids’ technology. “It’s a knowledge standard.” And the best defense against any knowledge standard is to maintain your own ignorance. Easy to manage for a company with a well-established reputation for silence.

In the past month, YouTube has again dealt with a massive advertiser boycott, this time thanks to commotion over mass-produced, often disturbing kids’ content that has been disseminated over YouTube Kids and the main site with little to no oversight. After a viral Medium post drew attention to this long-standing and pervasive issue, YouTube took action, proudly announcing on November 27 that it had “terminated more than 270 accounts and removed over 150,000 videos from our platform in the last week … we removed ads from nearly 2 million videos and over 50,000 channels masquerading as family-friendly content.”

At the same time, such a swift and sweeping purge illustrates once again that the platform’s indifferent response to reported abuses has less to do with capability than with financial incentive. The “growing trend” of inappropriate kids’ content that YouTube noticed “in recent months,” according to its official statements, rings a bit false when at least one account at the center of the scandal, Toy Freaks, spent large portions of the last few years as one of the ten most-viewed channels worldwide and earned the platform tens of millions of dollars in revenue.

YouTube has hired a swath of new moderators, which may or may not save the platform from degradation, and it’s promised stricter enforcement of community guidelines around children’s content. At the same time, it has made no move toward an apology, no acknowledgement of the profits it’s made off of the “nearly 2 million videos” before demonetizing them two weeks ago, and no indication of concern for the well-being of kids in front of the camera, only for the ones watching at home. The crackdown is sure to improve the YouTube experience for tens of millions of unsupervised kid viewers, but there is no evidence that the company is interested in reform or regulation past what its advertisers can see.

Social-media platforms have tended to frame themselves as primarily content distributors (think Comcast or DirecTV in a traditional television marketplace), too far removed from the actual content creation or even curation to be responsible for how it’s made or what it says, but a single critical glance at the industry will reveal the inadequacy of such a comparison. Though YouTube does act as host and distributor, its algorithmic control over featured and suggested content, its Creator Academy, Partner Channels, and direct-payment scheme all point to the company’s heavy involvement in material production, from concept through marketing. And “advertiser-friendly” content requirements seem more like an exceedingly oblique form of network notes than anything else.

Before this most recent upset, advertisers had been returning to YouTube, especially to Google Preferred — a gated community of high-level advertisers guaranteed placement on curated, polished, popular content — though complaints about the 9,000 channels behind those gates are already surfacing. YouTube has largely recovered from each of its “brand safety” crises and likely will again; the sheer reach of the platform is too appealing to keep advertisers away for long. And the company has proven quite adept at making quick and effective pivots on long-standing policy in response to mass outrage. This ability seems likely to make the site a friendlier, more palatable place for visitors and viewers, but also one in which the smallest and quietest voices are further drowned out by the amplification of the most pleasant ones, and appearances are valued above truth.

For over ten years, YouTube has participated (at least rhetorically) in a massive project to democratize the world’s creative landscape. Truly, billions of people, anyone with Wi-Fi and a camera, can add their voice to the flood. But with this proliferation, the company has been forced into a clumsily enacted role as gatekeeper. As it grows in this position, YouTube looks increasingly like any traditional network prioritizing and rewarding its own vision of good content (despite CEO Susan Wojcicki’s oft-repeated claim that “We’re not TV and never will be”). This has its benefits — the platform is a safer, cleaner place than it was a year ago, and viewers as well as advertisers are the beneficiaries of that — but with many creators already moving to alternative platforms (Vid.me) and revenue streams (Patreon), YouTube’s age of exploration seems over in more ways than one.