Social Networks

Graphic war videos go viral, testing social media’s rules

Author: Editors Desk | Source: The Washington Post
October 11, 2023 at 13:11
The logo of social media platform X, formerly Twitter, is seen alongside the former logo. (Dado Ruvic/Illustration/Reuters)
Facebook, YouTube and TikTok ban support for Hamas. Telegram allows it. And X struggles to enforce its own policies.
A bulldozer plowing through the fence separating Israel from Gaza. A young woman being carried off from an outdoor concert by Hamas militants. Rockets exploding in the night sky as Israeli missiles intercept them. An apartment building in Gaza imploding into rubble.

Videos uploaded to social media — some by Israelis, some by Palestinians — have helped to shape the world’s understanding of the violence in Israel and Gaza, even as a torrent of fake and misleading posts clouds the picture.

But as the war unfolds, who can post such videos and what people can say about them will be determined in part by content moderation policies that vary widely from one social network to the next.

Those policies can mean the difference between a given video going viral and being scrubbed from the site.

On Google’s YouTube and Meta’s Facebook and Instagram, you can stand with Israel, call for peace or lament the plight of Palestinians. But expressions of support for Hamas are forbidden. Both companies consider Hamas an extremist organization, meaning that no one affiliated with the group is allowed to use their platforms, and no videos or images created by Hamas can be posted there.

TikTok, which has in the past declined to comment on which groups it designates as extremist organizations, confirmed to The Washington Post that Hamas is banned from its platform as well.

Still, videos that appear to have been taken by Hamas members have surfaced on all three platforms, in some cases because they are allowed by exceptions for newsworthiness or “counter-speech,” in which people post objectionable content to denounce it. Some have shown Israeli hostages or even the bodies of victims.

In contrast, the influential messaging platform Telegram does very little content moderation. It hosts a Hamas channel that has been openly broadcasting grisly footage and images of dead Israelis to more than 100,000 subscribers. And some of those posts have been rebroadcast on Elon Musk’s X, formerly Twitter, which nominally prohibits Hamas content but appears to be doing relatively little to police it after Musk laid off the majority of the company’s employees.

Experts say X in particular has become a hub for posts and videos taken down by other platforms for violating their rules against graphic violence or hate speech. On Tuesday, E.U. Commissioner Thierry Breton posted a letter to Musk warning him that regulators have “indications” that the site may be in violation of European rules on violent and terrorist content, as well as disinformation.

In Israel, some authorities are suggesting that parents keep their kids off social media altogether to prevent them from being exposed to violent content, after a Hamas leader said the organization would broadcast executions of Israeli hostages.

In deciding what posts to take down during a war, social media companies have to weigh their interest in shielding users from violent, hateful and misleading content against the goals of allowing free expression, including newsworthy material and potential evidence of war crimes, said Evelyn Douek, an assistant professor at Stanford Law School. And they often have to make those calls under time pressure, without full information.

“There are no good options for a platform trying to do responsible content moderation in the middle of an escalating conflict and humanitarian atrocities,” Douek said. “Even for a platform that is fully resourced and really genuinely trying to act in good faith, this is a really hard problem both technically and normatively.”

In the case of the Hamas-Israel war, those calls are complicated by a desire to avoid being seen as abetting a terrorist organization by enabling it to broadcast propaganda, threats, hostage videos or even executions. Facebook has been sued in the past by the families of people killed by Hamas. And earlier this year, Google, Twitter and Meta defended themselves at the Supreme Court against charges that they had materially aided the Islamic State terrorist group by hosting or recommending its content, such as recruiting videos. (In each of those cases, the tech firms prevailed.)

But defining what counts as an extremist group isn’t always straightforward, and social media platforms over the years have faced scrutiny over which government actors, political movements, military operations, and violent regimes get a voice and which don’t. After the United States withdrew its forces from Afghanistan in 2021, social media companies had to make a high-stakes decision about whether to continue to ban the Taliban, since it had taken over the country’s government.

In the end, Facebook opted to prohibit the Taliban, while Twitter allowed the organization to maintain an official presence as the de facto government.

“Platforms have been notoriously opaque about what organizations they designate as dangerous organizations, or terrorist organizations,” Douek said. “It’s also an area where platforms tend to err on the side of caution because of the fear of legal liability.”

In the case of content that supports Hamas, erring on the side of caution could mean taking down videos that show atrocities. But it could also mean suppressing arguably legitimate expression by people who support Palestinian liberation.

“Within social media companies, the category that you’re placed in determines how your speech is going to be treated,” said Anika Collier Navaroli, a former Twitter content policy official. “The speech of a political party is going to be treated extremely different than the speech that comes from a terrorist. The speech from a legitimate nation-state is also going to be treated different than somebody who is not recognized as that.”

Last year, the consultancy Business for Social Responsibility released a report commissioned by Meta that found the social media giant had unfairly suppressed the freedom of expression of Palestinian users in 2021 during a two-week war between Israel and Hamas.

Tech companies were lauded for allowing users to share firsthand accounts about the bloody conflict. But the report chronicled how Meta had erroneously removed some users’ content and was more likely to take action against content written in Arabic than Hebrew.

Earlier this year, Meta loosened its rules against praising dangerous groups and people, allowing more posts about extremist entities as long as they appear in the context of discussions of politics or social issues, such as news reports or academic conversations about current events.

Still, Ameer Al-Khatahtbeh, who runs an Instagram account with the handle @Muslim that has nearly 5 million followers, said he worries similar dynamics are playing out in this war. “There are a lot of people that had their posts taken down” or were restricted from using Instagram’s live video feature for posts supporting Palestinians, he said.

On TikTok, both the #Israel and #Palestine hashtags have attracted tens of billions of views as young people turn to the platform for news and perspectives on the conflict. But at least one prominent account that covers news from a Palestinian perspective received a notice Monday that it had been permanently banned. TikTok spokesperson Jamie Favazza said Tuesday the ban was a mistake and the account was reinstated.

Since Hamas’ invasion began, TikTok has shifted more content moderators to focus on posts about the conflict, including posts in Arabic and Hebrew, Favazza said. It has also been blocking some hashtags associated with graphic violence or terrorist propaganda, including footage of hostages or executions. And it is working with fact-checkers to identify misinformation, though a quick browse through popular searches such as “Israel” and “Gaza” on Tuesday turned up numerous videos from previous, unrelated conflicts that were being presented as though they were news. Other videos racked up views with graphic footage of Israeli victims, likely produced originally by Hamas, with a thin veneer of commentary decrying the acts.

As for YouTube, spokesperson Jack Malon said the platform is working to connect users who search for terms related to the war with reliable news sources. He added that YouTube takes down hate speech targeting both the Jewish and Palestinian communities.

In the first hours of Hamas’ invasion, graphic footage surfaced on smaller platforms with permissive content rules, including Gab and Telegram, said Yael Eisenstat, vice president of the Anti-Defamation League and a former senior Facebook policy official. Such footage is then inevitably reposted to mainstream platforms, where it can either flourish or wither depending on their policies and enforcement. Much of it has found a home on X.

“It is harder right now to find clearly violative, especially the more antisemitic stuff, on YouTube and even Meta right now,” Eisenstat said. “It is totally easy to find on X.”

On Telegram, an apparently official Hamas account with close to 120,000 subscribers has routinely posted grisly video of the attacks on Israel. One clip, with more than 77,000 views, showed an unidentified militant stomping on a dead soldier’s face. Many of the videos have been reposted to X. At least one of the videos was also posted to YouTube by the media outlet Al-Jazeera Arabic, whose channel has 12.9 million subscribers, but with some of the gore blurred.

Telegram did not respond to requests for comment.

On Monday, X’s “Safety” account tweeted a policy change that allows more posts that would normally violate its rules to remain on the platform under an exception for newsworthiness. “In these situations, X believes that, while difficult, it’s in the public’s interest to understand what’s happening in real time,” the company said in the tweet.

Meta and TikTok have partnered with fact-checking organizations to label false and misleading information. X enlists its own users in a crowdsourced fact-checking project called Community Notes.

On Tuesday, participants in the project debated whether to apply a fact-checking label to a gruesome video posted by Donald Trump Jr., the former president’s son. The video appeared to show militants firing guns at dead and wounded bodies on a concrete floor, but its provenance was unclear. The video remained up on Wednesday.

Drew Harwell and Cat Zakrzewski contributed to this report.
