Euro 2020: Why is it so hard to track down racist trolls and remove hate messages from social media? | UK News
Sky News analyzed 99 accounts implicated in racist abuse against England players Bukayo Saka, Jadon Sancho and Marcus Rashford on social media.
The accounts, which were identified by the Center for Countering Digital Hate (CCDH) campaign group, were behind more than 100 racist comments on players’ Instagram posts.
A total of 106 accounts were reported to the platform by the CCDH, but three days later only seven had been deleted. This was despite the comments violating Instagram's community guidelines.
We found that only three of the accounts still active appeared to be UK-based, with one apparently managed by someone of primary school age.
More than a quarter of the comments were sent from anonymous private accounts with no posts of their own.
But identifying the perpetrators of hate online is only part of the problem.
Ensuring hateful content is removed from platforms presents its own unique challenges, according to Professor Matthew Williams, author of The Science of Hate and director of HateLab at Cardiff University.
And activists say the government must work with social media companies on both issues to stop platforms from “giving racism, abuse and hate a megaphone.”
One of the first things we wanted to establish was which of the affected accounts were managed by people in the UK.
Even that is difficult: it is hard to know whether a user is who they say they are. Open-source techniques alone cannot prove that an account is operated from a particular country.
There are, however, a few clues that can help.
A native English speaker, for example, may structure sentences differently from someone who speaks English as a second language, or use particular spellings or slang.
Other markers are the type of accounts the user follows and interacts with, and the times the user posts.
Based on this, we believe that only three of the accounts still active at the time of our analysis were managed by people living in the UK.
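One of the clues mentioned above, the times at which an account posts, can be checked systematically. The sketch below is a minimal illustration of that idea, using made-up timestamps: counting posts per hour of the day suggests a likely waking window, which can be consistent with (though never proof of) a particular time zone.

```python
from collections import Counter
from datetime import datetime

# Hypothetical UTC timestamps of one account's posts (illustrative only).
post_times = [
    "2021-07-11 21:45", "2021-07-12 08:10", "2021-07-12 19:30",
    "2021-07-13 07:55", "2021-07-13 20:15", "2021-07-14 08:40",
]

def active_hours(timestamps):
    """Count posts per hour of day (UTC) to reveal a likely waking window."""
    return Counter(
        datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in timestamps
    )

hours = active_hours(post_times)
# Activity clustered between roughly 07:00 and 22:00 UTC is consistent
# with, but does not prove, a user in the UK time zone.
print(sorted(hours.items()))
```

In practice analysts weigh this signal alongside the others (language, follows, interactions), since a single clue is easy to misread.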
Two of them appeared to be run by men in their twenties. The other appeared to be run by a person of primary school age.
A closer look at one of the deleted accounts that appeared to be managed from the UK revealed that, despite being deleted, a second private account using the same photo and a nearly identical username is still active.
The second profile, which has been active since at least April 2021, even posted racist comments on a post from another of the profiles reported by the CCDH in the days following the match.
The comments were left under an image watermarked with the logo of a European white supremacist organization.
This user regularly shares content promoting the White Genocide conspiracy theory known as “The Great Replacement”.
But the profile picture on this account is of a young girl.
A reverse image search returns a link to a man's account on the Russian social media site VK, although the connection between the image and the account is unclear.
These two examples illustrate how easy it is for users to bypass the restrictions put in place by social media platforms to spread hate.
Other users can anonymize their existing accounts so that the comments they post cannot be traced back to them in the offline world.
According to their profile picture metadata, three of the accounts we looked at last changed their photo within hours of the final.
This can mean either that the profile image was changed at that time or that the account itself was created then. It is not known whether the profiles were created specifically to post hate messages, were altered to conceal the identity of the person spreading the abuse, or whether the timing was simply a coincidence.
But it does reveal how easy it is to become anonymous online, and how hard it can be to find out who is really behind these profiles.
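The timestamp check described above reduces to a simple comparison. The following sketch assumes the final ended around 23:00 UK time on 11 July 2021 (the match went to penalties) and uses a hypothetical metadata timestamp; the 12-hour window is an arbitrary choice for illustration.

```python
from datetime import datetime, timedelta

# Approximate end of the Euro 2020 final, UK time (assumption for illustration).
FINAL_END = datetime(2021, 7, 11, 23, 0)

def changed_near_final(photo_changed_at: datetime, window_hours: int = 12) -> bool:
    """Return True if a profile photo's last-modified timestamp falls
    within `window_hours` of the end of the match."""
    return abs(photo_changed_at - FINAL_END) <= timedelta(hours=window_hours)

# Hypothetical metadata timestamp for one account's profile image.
print(changed_near_final(datetime(2021, 7, 12, 1, 30)))
```

A hit flags the account for closer manual inspection; as the article notes, the timing alone cannot distinguish a deliberately anonymized account from a coincidence.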
We found that more than a quarter of the comments reported by the CCDH came from private profiles with no posts of their own. Many of them also had no followers, which is common for accounts used specifically for trolling.
Some accounts, however, appear to be the personal profiles of users who have made no attempt to conceal their identities.
These include that of a Russian filmmaker, who regularly shares professional snapshots from various film shoots around Moscow.
The CCDH flagged the account after it left racist and homophobic slurs in the comments on one of Jadon Sancho's posts.
Other comments came from the personal account of a bodyguard to Azeri lawmakers. Another came from the corporate account of a Tehran-based welding company.
Most of the accounts we analyzed appeared to be run by fans of English football teams.
One of the users who posted abusive comments on a post from Jadon Sancho has a separate social media account with "Everton" in the username.
We also found profiles that identify themselves as Manchester United and Liverpool FC fans among the 106 reported by the CCDH.
Most of them appeared to be people living outside the UK.
Professor Williams of HateLab says that while social media companies and the police can identify perpetrators, this doesn't often happen.
“The problem is that the platforms have been reluctant to help with investigations and largely refuse to cooperate when the content does not reach a criminal threshold, which includes numerous messages sent to football players in recent months.”
“But even when the account holders have been identified, there is little that the UK authorities can do to punish the offender if they are based abroad.”
When it comes to detecting and removing hateful content, it’s even more complicated.
“The AI used to automatically detect harmful content is currently not up to the task, which means it still produces large numbers of false positives (content classified as hateful when it is not) and false negatives (content not identified as hateful when it is),” he told Sky News.
“This means that some non-hateful content can be censored, which creates risks for freedom of expression and the rights of those who post, and that some hateful content is missed, which can harm the victim or the community.”
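The trade-off Professor Williams describes is usually measured with precision (how much removed content was actually hateful) and recall (how much hateful content was caught). The toy example below, with made-up labels, shows how false positives drag down precision and false negatives drag down recall.

```python
def precision_recall(true_labels, predicted):
    """Precision and recall for a binary 'hateful' classifier.
    False positives lower precision (non-hateful content removed);
    false negatives lower recall (hateful content missed)."""
    tp = sum(1 for t, p in zip(true_labels, predicted) if t and p)
    fp = sum(1 for t, p in zip(true_labels, predicted) if not t and p)
    fn = sum(1 for t, p in zip(true_labels, predicted) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy data: 1 = hateful, 0 = not hateful.
truth     = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 0, 0, 0]
print(precision_recall(truth, predicted))  # precision 2/3, recall 1/2
```

A moderation system tuned for high recall censors more legitimate speech; one tuned for high precision lets more abuse through — exactly the tension the article describes.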
Professor Williams also highlighted the unique challenge emojis present for social media platforms, as they can be used in both positive and hateful ways, depending on the context.
Sky News found that only 5% of the posts we analyzed that used racist emojis were linked to accounts that were later deleted by Instagram.
By comparison, 17% of the posts that used racist language were linked to accounts that were later deleted.
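The comparison between the two groups is a simple rate calculation. The sketch below uses synthetic records constructed to reproduce the 5% and 17% figures; the field name `account_deleted` is hypothetical.

```python
def deletion_rate(posts):
    """Share of posts whose source account was later deleted."""
    deleted = sum(1 for p in posts if p["account_deleted"])
    return deleted / len(posts)

# Synthetic records mirroring the two groups in the analysis.
emoji_posts = [{"account_deleted": i < 1} for i in range(20)]    # 1 of 20  = 5%
slur_posts  = [{"account_deleted": i < 17} for i in range(100)]  # 17 of 100 = 17%
print(deletion_rate(emoji_posts), deletion_rate(slur_posts))
```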
A petition to make identity verification a requirement for opening a social media account has garnered more than 500,000 signatures since the Euro 2020 final.
The petition was originally created in May by reality TV star Katie Price in response to the perceived limitations of the government's Online Safety Bill in protecting users from hate speech.
It gained momentum this week after a wave of collective anger over racism directed at Saka, Sancho and Rashford swept the country.
Prime Minister Boris Johnson said on Wednesday the government was working to ensure that football ban orders were extended to include racism online.
This follows a meeting between the government and social media giants Facebook, Twitter and TikTok on Tuesday in which the prime minister said he told them the Online Safety Bill would legislate to address this issue.
The bill proposes that social media companies face fines of up to 10% of their global revenues if they fail to remove racism and other hate speech from their platforms.
But some point out that the bill will not prevent hateful and racist messages like those seen after Sunday’s game from appearing online.
“The bill will not require social media platforms to prevent the posting of hateful content; it will only require them to remove it in a timely manner,” said Professor Williams.
“So if a hate message is directed at a person, chances are they'll see it before it's deleted, and the damage will be done.
“Much of what the bill proposes in terms of rapid removal of illegal content is already covered by the EU Code of Conduct on Countering Illegal Hate Speech Online, to which most major social media companies have already signed up, so in reality it is doubtful that much will change.”
The Data and Forensics team is a multi-purpose unit dedicated to delivering transparent Sky News journalism. We collect, analyze and visualize data to tell data-driven stories. We combine traditional reporting skills with advanced analysis of satellite images, social media and other open source information. Through multimedia storytelling, we aim to better explain the world while showing how our journalism is done.
Why data journalism matters to Sky News