When atrocities happen—in Mariupol, Gaza, Kabul, or Christchurch—users and social media companies face a difficult question: how do we handle online content that shows those atrocities? Can and should we differentiate between pro-violence content containing atrocities and documentation by journalists or human rights activists? In a conflict, should platforms take sides as to whose violent content is allowed?

The past decade has demonstrated that social media platforms play an important role in the documentation and preservation of war crimes evidence. While social media is not the ideal venue for sharing such content, the fact is that for those living in conflict zones, these platforms are often the easiest place to upload it quickly.

Most platforms have increasingly strict policies on extremism and graphic violence. As a result, documentation of human rights violations—as well as counterspeech, news, art, and protest—often gets caught in the net. Platforms are taking down content that may be valuable to the public and that could even be used as evidence in future war crimes trials. This has been an ongoing issue for years and continues amidst Russia's invasion of Ukraine.

YouTube proudly advertised that it removed over 15,000 videos related to Ukraine in just 10 days in March. YouTube, Facebook, Twitter, and a number of other platforms also rely on automated scanning for the vast majority of their content removals in these categories. But the speed that automation provides also leads to mistakes. For example, in early April, Facebook temporarily blocked hashtags used to comment on and document the killings of civilians in the northern Ukrainian town of Bucha. Meta, Facebook's owner, said this happened because its systems automatically scan for and take down violent content.

We have criticized platforms for their overbroad removal of "violent" or "extremist" content for many years. These removals disproportionately target marginalized users. For example, under the guise of stopping terrorism, platforms often selectively remove content from Kurds and their advocates. Facebook has repeatedly removed content criticizing the Turkish government for its repression of Kurdish people.

Facebook has at various times admitted its mistake or defended itself by linking the removed content to the Kurdistan Workers' Party (PKK), which the US State Department designates as a terrorist organization. Whether or not this justification is genuine (Facebook allegedly left up photos of Hamas, another US-designated terrorist organization, posted by Turkey's ruling party), it effectively means the platform aligned with the government against political dissenters.

When a platform removes “violent” content, it may effectively censor journalists documenting conflicts and hamper human rights activists that may need the content as evidence. At the beginning of the Syrian uprising, without access to receptive media channels, activists quickly turned to YouTube and other platforms to organize and document their experiences.

They were met with effective censorship, as YouTube took down and refused to restore hundreds of thousands of videos documenting atrocities like chemical attacks, attacks on hospitals and medical facilities, and destruction of civilian infrastructure. Beyond censorship, this hampers human rights cases, which increasingly rely on social media content as evidence. As one war crimes investigator told Human Rights Watch, "I am constantly being confronted with possible crucial evidence that is not accessible to me anymore."

During the Ukraine invasion, online platforms added some promising nuances to their content moderation policies that were absent from previous conflicts. For example, Facebook began allowing users in Ukraine and a few other countries to use violent speech against Russian soldiers, such as "death to the Russian invaders," calling this a form of political expression. Twitter stopped amplifying and recommending government accounts that limit information access and engage in "armed interstate conflict." This seems to be a nod to concerns about Russian disinformation, but it remains to be seen whether Twitter will apply its new policy to US allies that arguably behave similarly, such as Saudi Arabia. Of course, some of this "nuance" invites disagreement, such as Facebook's reversal of its ban on the Azov Battalion, a Ukrainian militia with neo-Nazi origins.

Ultimately, online platforms have much more nuance to add to their content moderation practices and, just as important, much more transparency to offer their users. For example, Facebook did not inform users about its reversal on Azov; the Intercept learned of it from internal materials. Users are often left in the dark about why their dissenting content is removed or why their government's propaganda stays up, and this can seriously harm them. Platforms must work with journalists, human rights activists, and their users to establish clear content moderation policies that respect freedom of expression and the right to access information.