Documents reveal Meta’s bias in moderating Arabic content over Hebrew
Meta, Facebook's parent company, does not have the same processes in place to moderate Hebrew- and Arabic-language content, leading to possible bias in enforcement, with rules applied more aggressively to Arabic-language content while Hebrew content is penalised less.
Newly revealed documents, along with a former Meta employee who spoke to the Guardian, show that internal policy guidelines governing hate speech related to Palestine and Israel are neither equal nor standardised, something activists and digital rights campaigners have long argued.
Meta, which owns Facebook, WhatsApp and Instagram, has long been scrutinised for its approach to language used in relation to Israel’s ongoing war on Gaza.
Internal guidance documents issued after the start of the war on 7 October showed a disparity in how Meta was moderating content.
One such example is the policy requiring the removal of statements such as "boycott Jewish shops" and "boycott Muslim shops" but allowing the statement "boycott Arab stores".
The former employee and the documents also revealed that while Meta has a system, staffed by human experts, to track the precision of content enforcement in many languages, scoring a portion of Hebrew-language content decisions was deemed "unfeasible" because of an absence of translation.
The former employee, who was not named by the Guardian due to fears of reprisal, said Hebrew was reviewed on an "ad hoc" basis, unlike Arabic, because it was not onboarded onto the system.
This discrepancy suggests there was a "bias on how they are enforcing content", as Meta was reviewing Hebrew content less systematically than Arabic, the former employee said.
Activists have long argued that Hebrew content on the company’s platforms needs closer attention, after a 2022 independent analysis commissioned by Meta showed that Arabic speakers were penalised more often than Hebrew speakers during Israel’s 2021 assault on Gaza.
At the time, Meta’s system was automatically flagging Arabic-language content at a higher rate than Hebrew content as a result of the company’s inconsistent policies, which "may have resulted in unintentional bias", the report stated.
This was because Meta had deployed an Arabic "hostile speech classifier" that automatically detected hate speech, but had no equivalent classifier for Hebrew-language content. As a result, Arabic content was removed more frequently than Hebrew content.
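To illustrate how a per-language classifier setup can produce this kind of uneven enforcement, here is a minimal sketch in Python. It is not Meta’s code: the classifier, vocabulary and threshold are all hypothetical stand-ins for a trained model. The point is structural: content in a language with no registered classifier is never scored, and so is never auto-flagged.

```python
# Illustrative sketch only, not Meta's system. All names and thresholds
# here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    language: str  # e.g. "ar" (Arabic), "he" (Hebrew)
    text: str

def toy_arabic_classifier(text: str) -> float:
    """Stand-in for a trained model: returns a 'hostility' score in [0, 1]."""
    flagged_terms = {"example_slur"}  # placeholder vocabulary
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.9)

# Only Arabic has a classifier registered; Hebrew content is never scored,
# mirroring the disparity the 2022 report described.
CLASSIFIERS = {"ar": toy_arabic_classifier}
FLAG_THRESHOLD = 0.8

def auto_moderate(post: Post) -> str:
    classifier = CLASSIFIERS.get(post.language)
    if classifier is None:
        return "skipped"  # no model for this language: no automated flagging
    score = classifier(post.text)
    return "flagged" if score >= FLAG_THRESHOLD else "allowed"

print(auto_moderate(Post("ar", "example_slur ...")))  # flagged
print(auto_moderate(Post("he", "example_slur ...")))  # skipped
```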
Image banks and algorithms
Images, phrases and videos uploaded by Meta to allow its machine-learning tools to flag and remove posts that violate policy have also raised concerns.
Material deemed to break Meta’s rules is uploaded to these banks of content; algorithmic moderators then match newly posted material against the banks and remove whatever matches.
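A minimal sketch of how such bank-based matching works, under simplified assumptions: production systems typically use perceptual hashes (such as Meta’s open-source PDQ algorithm) so that near-duplicates also match, whereas this toy version uses exact SHA-256 fingerprints, and the class and method names are hypothetical. It also makes the dispute below concrete: once an item is banked, every future match is removed, so a wrongly banked image keeps triggering takedowns until it can be un-banked.

```python
# Illustrative sketch only, not Meta's implementation.

import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Exact hash as a simplified stand-in for a perceptual hash."""
    return hashlib.sha256(media_bytes).hexdigest()

class ContentBank:
    def __init__(self) -> None:
        self._banned: set[str] = set()

    def bank(self, media_bytes: bytes) -> None:
        """Add violating material to the bank."""
        self._banned.add(fingerprint(media_bytes))

    def unbank(self, media_bytes: bytes) -> None:
        """Remove an item banked in error -- the 'missing process' at issue."""
        self._banned.discard(fingerprint(media_bytes))

    def check(self, media_bytes: bytes) -> str:
        """Decision for a newly posted item: matched content is removed."""
        return "remove" if fingerprint(media_bytes) in self._banned else "allow"

bank = ContentBank()
bank.bank(b"violating-image-bytes")
print(bank.check(b"violating-image-bytes"))  # remove
print(bank.check(b"benign-image-bytes"))     # allow
```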
However, the former employee said staff had erroneously uploaded some images to the bank after 7 October, and that there was "no process" to remove them, which could also be contributing to the over-enforcement of content related to the war on Gaza.
Meta has since disputed this, saying it is "relatively easy to remove an item from a bank if added in error", even though documents show that at the time there was no process to remove "non-violating clusters after policy calls are made that render previously banked content benign".
The increased scrutiny of Meta’s decisions and policies since the start of the war has left some employees afraid of retaliation, or of being viewed as "antisemitic", if they raise the over-enforcement of Arabic and pro-Palestine content, the former employee said.
The disparity in how language and content is moderated by Meta has garnered a great deal of criticism in recent months, as the war on the besieged enclave rages on and the Gaza death toll tops 40,000.
"When Palestinian voices are silenced on Meta platforms, it has a very direct consequence on Palestinian lives," said Cat Knarr, who works for the US Campaign for Palestinian Rights.
"People don’t hear about what’s happening in Palestine, but they do hear propaganda that dehumanises Palestinians. The consequences are very dangerous and very real,” Cat added.
In December, Human Rights Watch said Meta’s policies and practices "have been silencing voices in support of Palestine and Palestinian human rights", citing "systematic online censorship which has risen against the backdrop of unprecedented violence…".
The rights group added that between October and November 2023 it documented more than 1,050 takedowns and other suppressions of content posted by Palestinians and their supporters on Instagram and Facebook, including content about human rights abuses.