How Social Media Sites Failed to Avoid Censorship, Stop Hate Speech and Disinformation During the Gaza War

LONDON: Tech giant Meta recently announced it will start removing social media posts that use the term “Zionist” in contexts where it refers to Jews and Israelis, rather than supporters of a political movement, in a bid to curb anti-Semitism on its platforms.

The parent company of Facebook and Instagram previously said it would lift a blanket ban on the single most moderated term across all Meta platforms — the Arabic word “shaheed,” or “martyr” in English — after a year-long review by its Oversight Board found the approach “excessive.”

Similarly, TikTok, X and Telegram have long pledged to step up efforts to curb hate speech and the spread of misinformation on their platforms amid the ongoing war in Gaza.

Activists accuse the social media giants of censoring posts, including those that present evidence of human rights abuses in Gaza. (Getty Images)

These initiatives aim to create a safer and less toxic online environment. However, as experts consistently point out, these efforts often fail, leading to empty promises and a worrying trend towards censorship.

“In short, social media platforms have not been very good at avoiding censorship or curbing hate speech and misinformation about the war in Gaza,” Nadeem Nashif, founder and director of 7amleh, a digital and human rights advocacy group for Palestinians, told Arab News.

“Throughout the conflict, censorship and account deletions have also compromised efforts to document human rights abuses on the ground.”

Nashif says hate speech and incitement to violence remain “rampant”, particularly on the Meta and X platforms, where anti-Semitic and Islamophobic content continues to be “widespread”.

Since the Hamas-led attack on October 7 that sparked the conflict in Gaza, social media has been flooded with war-related content. In many cases, it served as an important window into the dramatic events unfolding in the region and became an important source of real-time news and accountability for Israel's actions.

Profiles supporting the actions of both Hamas and the Israeli government have been accused of spreading misleading and hateful content.

FASTFACT

1,050

Instances of removal and other suppression of content posted on Instagram and Facebook by Palestinians and their supporters, documented by Human Rights Watch between October and November 2023.

Despite this, none of the social media platforms — including Meta, YouTube, X, TikTok, or messaging apps such as Telegram — has publicly announced policies aimed at mitigating hate speech and incitement to violence related to the conflict.

Instead, these platforms remain filled with war propaganda, dehumanizing language, genocidal rhetoric, open calls for violence, and racist hate speech. At the same time, the platforms have removed pro-Palestinian content, blocked accounts, and in some cases shadow-banned users who express support for the people of Gaza.

On Friday, Turkey's communications authorities blocked access to social media platform Instagram, owned by Meta. Local media reported that access was blocked in response to Instagram removing posts by Turkish users expressing condolences over the recent assassination of Hamas political chief Ismail Haniyeh in Tehran.

The day before, Malaysian Prime Minister Anwar Ibrahim accused Meta of cowardice after his Facebook post about Haniyeh's murder was deleted. “Let this serve as a clear and unequivocal message to Meta: stop this display of cowardice,” Anwar, who has repeatedly condemned Israel's war in Gaza and its actions in the occupied West Bank, wrote on his Facebook page.

A screenshot of a post by Malaysian Prime Minister Anwar Ibrahim condemning Meta's censorship of his post critical of Israel's assassination policy.

Meanwhile, footage allegedly showing Israeli soldiers blowing up mosques and homes, burning copies of the Koran, torturing and humiliating blindfolded Palestinian prisoners, strapping detainees to the hoods of military vehicles, and glorifying war crimes remains freely available on cellphone screens.

“Historically, platforms have done a very poor job of moderating content about Israel and Palestine,” Nashif said. “Throughout the war in Gaza and the ongoing likely genocide, it just got worse.”

A Human Rights Watch report titled “Meta's Broken Promises,” published in December, accused the firm of “systematic online censorship” and “inconsistent and opaque application of its policies,” as well as practices that silence voices in support of Palestine and Palestinian human rights on Instagram and Facebook.

The report added that Meta's conduct “does not meet human rights due diligence obligations” due to years of broken promises to end “excessive repression.”

Jacob Mukherjee, head of the Masters Program in Political Communication at Goldsmiths, University of London, told Arab News: “I'm not sure to what extent you can really call them efforts to stop censorship.

“After October 7 of last year, Meta promised to conduct various reviews, something it had, by the way, already been promising for several years, since the last flare-up of the Israeli-Palestinian conflict in 2021.

“But as far as I can see, it hasn't changed significantly. Of course, they had to respond to suggestions that they were engaged in censorship, but in my opinion it was mostly PR.”

Between October and November 2023, Human Rights Watch documented more than 1,050 removals and other suppressions of content on Instagram and Facebook posted by Palestinians and their supporters, including material about human rights abuses.

Of these, 1,049 involved peaceful pro-Palestinian content being censored or otherwise unjustifiably suppressed, while one case involved the removal of pro-Israel content.

However, censorship seems to be only part of the problem.

The 7amleh Violence Indicator, which tracks real-time data on violent content in Hebrew and Arabic on social media, has recorded more than 8.6 million pieces of such content since the start of the conflict.

Nashif says that the proliferation of violent and harmful content, mostly in Hebrew, is largely due to the platforms' lack of investment in moderating Hebrew-language content.

This content, which primarily targeted Palestinians on platforms such as Facebook and Instagram, was used by South Africa as evidence in a case against Israel at the UN International Court of Justice.

Meta, however, is not the only platform accused of playing a part in what South African lawyers have called the first genocide to be broadcast on cellphones, computers and television screens.

X has also faced accusations from supporters of both Palestine and Israel that it has allowed the spread of misinformation and falsified images, often shared by prominent political figures and media outlets.

“One of the main problems with today's content moderation systems is the lack of transparency,” Nashif said.

“When it comes to artificial intelligence, platforms do not provide clear and transparent information about when and how artificial intelligence systems are implemented in the content moderation process. Policies are often opaque and allow the platforms to do as they see fit.”

For Mukherjee, the challenge of moderation, carried out behind a smokescreen of murky policies, is highly political, requiring these companies to adopt a “balanced” approach between political pressure and “managing the expectations and desires of the user base.”

He said: “These AI tools can sort of be used to insulate the real owners, meaning the people who run the platforms, from criticism and accountability, which is the real problem.

“These platforms are private monopolies that are essentially responsible for regulating an important part of the political public sphere.

“In other words, they help shape and regulate the arena in which conversations happen, in which people form their opinions, in which politicians feel the pressure of public opinion, and they are completely unaccountable.”

While there have been examples of pro-Palestinian content being censored or removed, as Arab News revealed in October, these platforms made it clear long before the Gaza conflict that it was ultimately not in their best interest to remove content from their platforms.

“These platforms are not created for reasons of public interest or to ensure an informed and educated population that has diverse perspectives and is able to make sound decisions and form opinions,” Mukherjee said.

“The thing is, business models actually want a lot of content, and if it's pro-Palestinian content, so be it. At the end of the day, it still drives attention and engagement on the platform, and content that evokes strong sentiment, to use industry terms, drives engagement, and that means data and money.”
