
YouTube let extremist content flourish despite warnings

Source: Axios
April 3, 2019 at 18:39
Photo: Carsten Rehder/picture alliance via Getty Images

A damning report from Bloomberg Tuesday revealed that top YouTube executives debated for years whether extremist viral videos on its platform were really a problem — often rejecting solutions to manage the situation — in an effort to maximize growth and profits.

Why it matters: Tech companies have long been criticized for harboring hate, but as the consequences of their inaction play out more visibly in the real world, companies like YouTube are facing mounting pressure to answer whether their ignorance was actually malpractice.

Driving the news: The most striking aspect of the Bloomberg report is a narrative similar to one that's been reported about Facebook's handling of Russian misinformation: top executives were repeatedly briefed on the problem and chose to downplay it for the sake of business outcomes.

  • The Bloomberg story alleges that the company prioritized platform "engagement" above all other goals, which deterred corporate leadership from acting on internal alarms about the ways hate content was flourishing on the platform.
  • It details ways YouTube's "neural network" AI system acted like an "addiction engine," pushing users to consume more videos, regardless of the fringe nature of their content.
  • The report says that after the 2016 election, YouTube, with CEO Susan Wojcicki at the helm, attempted to mitigate the problem by adding a little-known "social responsibility" measure to its recommendation algorithm.
  • It explains that YouTube discouraged employees from searching for bad videos, because knowing about such content could expose the company to greater legal liability.

The big picture: The report comes as Facebook scrambles to manage hateful content and misinformation on its platforms ahead of elections in India and the next round of U.S. presidential primaries.

  • The tech giant extended its ban on hate speech to speech that promoted or supported white nationalism and white separatism.
  • It announced Tuesday that it added a misinformation tip line for its popular messaging app WhatsApp, since the app's encryption makes false content nearly impossible to track.

Be smart: Calls for change have picked up in the wake of real-world harm caused by people radicalized by hateful or conspiracy-minded content. As Axios has previously noted:

  • Anti-vaccination content that's long appeared in search results and on social media is now being regulated by social platforms after the U.S. government attributed recent measles outbreaks in part to reduced vaccination levels in some areas.
  • Terrorist attacks and mass shootings, like the recent New Zealand mosque attack, highlight ways that extremists are using social media channels to inspire hate and spread horrifying footage of mass killings.

Bottom line: Two years after the 2016 election, it has become increasingly apparent that Google and Facebook, despite warnings about ways their platforms' algorithms allowed bad content to flourish, shied away from doing much about it for business reasons. Now, facing elections and misinformation crises around the world, they are being forced to reckon with those decisions.
