Facebook and Twitter failed to remove nearly 90% of Islamophobic posts flagged to them – report
Facebook, Twitter, Instagram, YouTube and TikTok failed to act on nearly 90% of anti-Muslim and Islamophobic content reported to them, according to a new report.
Research from the Center for Countering Digital Hate, released Thursday, found 530 posts, viewed 25 million times, containing content dehumanizing Muslims through racist caricatures, conspiracies and misrepresentations.
These included Instagram posts that portrayed Muslims as pigs and called for their expulsion from Europe; a post comparing Islam to a cancer that should be “treated with radiation”, overlaid on a photo of an atomic explosion; and tweets claiming that Muslim migration was part of a plot to change the politics of other countries, among many others.
Many of the posts carried offensive hashtags such as #deathtoislam, #islamiscancer and #raghead, which the CCDH used to identify which posts to report.
The CCDH reported 125 posts to Facebook, of which only seven were acted on; 227 to Instagram, with only 32 acted on; 50 to TikTok, of which 18 were acted on; 105 to Twitter, with just three acted on; and 23 videos to YouTube, none of which were acted on.
Facebook has also hosted numerous groups dedicated to Islamophobia, with names such as “ISLAM stands for terrorism”, “Stop the Islamization of America” and “Boycott Halal Certification in Australia”. Many of these groups have memberships in the thousands, totalling 361,922 members, mostly in the UK, US and Australia. As of this writing, all of these groups remain online despite being reported to Facebook.
Researchers also identified 20 posts featuring the Christchurch terrorist, of which only six were acted on, despite Facebook, Instagram and Twitter having publicly pledged to remove terrorist and extremist content.
The shooter also released a 74-page manifesto that denounced Muslims and immigrants, which quickly spread online.
At the time, Facebook said it had deleted 1.5 million videos showing attacks on New Zealand mosques in the first 24 hours after the mass shootings.
The original video, posted on Facebook, was viewed 4,000 times, and social media sites struggled to remove re-uploaded footage.
Many uploaders made small changes to the video, such as adding watermarks or logos or altering the size of the clips, to evade YouTube’s detection and removal.
Facebook’s Community Standards prohibit “direct attack[s]” on people on the basis of “…race [or] ethnicity”, as do Instagram’s. Twitter states that users “may not promote violence against or directly attack or threaten others based on race, ethnicity [and] national origin”. YouTube states that “hate speech is not allowed on YouTube”, and TikTok says it “do[es] not allow content that contains hate speech or involves hateful behavior, and we remove it from our platform.”
The Independent contacted all social media companies for comment.
“We welcome this report, which highlights the unacceptable abuse many Muslims face online every day. Social media companies must do more to take meaningful action against all forms of hate and abuse their users experience online,” Kemi Badenoch, Minister for Communities and Equality, said in a statement.
Anti-Muslim racism is not the only hate speech to have slipped through social media moderation. In an October 2020 report, The Independent found that anti-Semitic conspiracy theories were still receiving millions of views, despite the platforms banning such misinformation.
“We’ve always been open that we won’t catch every instance of inappropriate content or account activity, and we recognize that we have more to do to meet the standards we set for ourselves. That is why we continue to invest at scale in our trust and safety operations, which include both technology and a team of thousands of people around the world,” TikTok said at the time.
In the same year, researchers found that Facebook posts and pages spreading fascism were “actively recommended” by its algorithm. In response, Facebook said it was updating its hate speech policies.