Tragic YouTube shooting casts new light on creators’ “adpocalypse” complaints [Updated]

https://ift.tt/2H9sr9U

An image taken from alleged shooter Nasim Aghdam’s personal website.

As news unfolded about Tuesday’s YouTube shooting, a chilling motive emerged. Ahead of the incident, the alleged shooter had posted videos maligning the service—doing so as a former money-making user of the site.

“I’m being discriminated [against] and filtered on YouTube, and I’m not the only one,” alleged shooter Nasim Aghdam said in a video that was shared after her identity as the shooting’s current, sole fatality was revealed. “My workout video gets age-restricted. Vegan activists and other people who try to point out healthy, humane, and smart living, people like me, are not good for big business. That’s why they are discriminating [against] and censoring us.”

The shooting has put a massive spotlight on this topic, which, until now, has been more likely to appear in angry YouTube videos than in major newspaper headlines. But well before this shooting, Aghdam was just one of many voices on the site crying foul about YouTube's policies.

Thus, it’s time to put some perspective on a topic that has become quite inflamed in the past 24 hours.

Adpocalypse now

We can start by rewinding to roughly one year ago, when YouTube and its parent company, Google, began facing a public backlash from advertisers. This "adpocalypse" arguably became most inflamed when the UK government froze all ad spending on YouTube in March 2017 after finding that its ads had been slapped onto "extremist videos" without the government's consent. Other major advertisers, in the US and abroad, followed suit. More alarmingly for Google, advertisers kept pulling out even after YouTube offered assurances of changes and overhauls.

The proposition for advertisers was simple: so long as their ads might appear on questionable videos of any kind, bailing was the smart business move.

YouTube’s response was equally extreme: a sweeping expansion of its “age-restricted” video designation and, thus, a “demonetization” slap on any videos that fell on the wrong side of that label. To review, that designation includes, but is not limited to, the following:

  • Vulgar language
  • Violence and disturbing imagery
  • Nudity and sexually suggestive content
  • Portrayal of harmful or dangerous activities

Should YouTube decide that a video runs afoul of those age-restriction guidelines, the video immediately becomes ineligible for pre-roll ads, which are YouTube's primary way of paying income to its creators.

On its face, this was meant to look like a win-win-win. Ads would be narrowly targeted to the giant site’s “safest” content, and YouTube creators could still post whatever they wanted, even if one-off videos pushed the service’s Terms of Service to their limits. (Those four descriptors in the age-restricted list do not technically violate YouTube’s TOS.) YouTube could continue hosting the bazillions of videos being uploaded every minute and keep advertisers in a protective silo.

Crushing potential revenue?

But, then, how is YouTube supposed to keep all of that content straight while protecting advertisers? By hiring hundreds of thousands of moderators to individually review every upload? The best gauge we have of YouTube's actual manpower comes from the company's promise of "10,000" moderators, made in the wake of a potentially offensive video posted by Logan Paul in January. Of course, that incident was mostly a reminder that even thousands of moderators can let questionable content slip through.


via Ars Technica https://arstechnica.com

April 4, 2018 at 04:06PM