Designation and moderation of terrorist content online

Anne Craanen, Tech Against Terrorism Research Manager

On May 14, 2022, an attacker entered a supermarket in Buffalo, New York, killing 10 people and injuring three. The attacker, an 18-year-old white man, livestreamed his attack on the streaming platform Twitch and, ahead of the attack, posted a link to a manifesto on Google Docs detailing his motivations. Afterwards, the attacker's online diary was discovered on the messaging platform Discord. Content produced by the attacker was then quickly re-uploaded and shared by other users across a range of platforms, including several smaller ones. Most of the tech companies on whose services the material appeared responded quickly by removing it. Additionally, tech-sector initiatives such as the Global Internet Forum to Counter Terrorism (GIFCT) activated the Content Incident Protocol (CIP), a mechanism to facilitate cross-platform sharing and removal of material during crisis situations.

While we should be careful about drawing conclusions based on his manifesto alone, it seems clear from this document, and from other facts about the attacker's life, that the attack was racially motivated. The attack was declared an act of domestic terrorism – the term often used in the United States to describe what elsewhere is called far-right or white supremacist terrorism. Commenting on the attack, US President Joe Biden said that "[h]ate must not have a safe harbor. We must do everything in our power to end hate-fueled domestic terrorism." However, in this article, I will argue that the United States and other states are neglecting a crucial tool for countering terrorist use of the internet, including by racially and ethnically motivated violent extremists: designation.

Removal of terrorist content online

Terrorists' effective use of the internet has placed the fight against terrorist content online at the forefront of several global and national counter-terrorism initiatives. Tech companies, large and small, are overwhelmingly willing to counter terrorist use of their platforms. Tech Against Terrorism, a UN-backed initiative, supports these companies in tackling terrorist exploitation of their services while respecting human rights. In our experience, tech platforms are much more likely to remove terrorist content if it is produced by a designated terrorist group, as designation removes a level of uncertainty in platforms' decision-making and provides a clear legal mandate for removing such material. For example, the content we have reported to tech companies through our Terrorist Content Analysis Platform (TCAP), which flags content produced by designated terrorist entities, has a removal rate of 94%. We assess this to be the case because designation establishes a clear link between a listed entity and the legal status of the content it produces online.

Beyond its tangible positive impact on the fight against terrorist content online, designation is also a matter of principle. It is essential that counter-terrorism measures, whether online or offline, are grounded in the rule of law. As such, it should be up to governments to decide – with adequate human rights safeguards in place – what constitutes illegal terrorist content. Currently, democratic governments are struggling to do so, which means that tech platforms have to make such decisions themselves. While large platforms, with their in-house teams of counter-terrorism experts, may be able to make such decisions, smaller platforms struggle to do so. Designation is therefore a crucial instrument available to governments to facilitate better action against terrorist use of the internet in a manner that upholds the rule of law.

Over the past year, Tech Against Terrorism has analyzed 13 different designation systems worldwide, examining the designation processes used, the implications of designating a terrorist entity for online content, and the existing human rights safeguards, in order to provide guidance on how designation processes can be improved and thereby strengthen online counter-terrorism efforts.

Lack of consensus

First, we found that there is often little or no consensus across countries' designation lists, which creates confusion about illegality across jurisdictions. Designation, banning, disbanding, proscription, and political proscription are all terms used to describe governments' listing of terrorist organizations, each with different implications. This poses a challenge for tech companies, especially smaller ones that may lack the expertise to interpret the legal systems of different jurisdictions. We therefore recommend a separate listing process for designating terrorist groups, one that does not conflate these lists with groups banned on unconstitutional, political, or other grounds. This would ensure that counter-terrorism measures, whether online or offline, are not applied to groups that are not terrorists.

Different legal designations

Second, we found that countries currently differ in the extent to which online content produced by, or in support of, a designated terrorist group is illegal. The UK's interim regime, for example, suggests that content produced by a designated terrorist entity which leads to a terrorist offense should be considered illegal; however, this is currently only advisory. This represents a missed opportunity to use designation as a tool against terrorist use of the internet, given its demonstrated positive impact on online counter-terrorism. It also creates a significant gray area in which tech companies must decide for themselves what content should be classified as terrorist.

Furthermore, it carries a high risk that individuals and groups will suffer unjust restrictions on their freedom of expression, while those who engage in terrorism may still be able to spread their messages online. Therefore, one of our main conclusions is that democratic nation-states and supranational institutions have much to gain by clearly stipulating that official content produced by a designated terrorist entity (whether a group or an individual) that leads to a terrorist offense in their jurisdiction must be classified as illegal. This would anchor online content moderation in the rule of law and give tech companies clarity on what constitutes terrorist content, making it easier for them to moderate such material and significantly disrupting terrorist use of the internet.

Asymmetric Designation Lists

Third, most of the designation lists we reviewed are heavily skewed towards Islamist terrorist groups, with few or no far-right terrorist groups listed. Canada and the UK have, to date, designated the most far-right groups as terrorist organizations, with 9 and 5 respectively (plus 4 aliases). Worryingly, the United States lacks the legal mechanisms to designate entities or lone actors under the country's definition of domestic terrorism. The only way a domestic terrorist, like the perpetrator of the Buffalo attack, could be designated would be through affiliation with a Foreign Terrorist Organization (FTO) on the US Department of State's list. Owing to this lack of designation, far-right violent extremists can operate more freely than many designated Islamist groups. It is therefore imperative that nation-states designate more far-right terrorist groups to accurately reflect, and respond to, the threat emanating from domestic and transnational far-right terrorism. Only then can designation be used to counter online content produced by far-right terrorist entities.

Obsolete Lists

Fourth, we found that lists contain inaccurate entries: disbanded or renamed organizations are often still listed under their previous names. This is likely due to a lack of regular review processes, which hampers counter-terrorism efforts both offline and online. We recommend that governments establish regular review processes to ensure the lists are accurate in scope and kept up to date, and that these reviews take place with input from civil society representatives, counter-terrorism experts, and human rights lawyers.

Non-transparent appeal mechanisms

Fifth, non-transparent appeal mechanisms pose significant human rights risks. Where persons cannot appeal their inclusion on a designation list, they may be subject to severe restrictions of their rights. This is especially problematic given known cases of individuals being mistakenly listed. Governments should put in place accessible appeal mechanisms so that groups and individuals can challenge their inclusion and erroneous additions can be corrected.


It is essential that online content produced by actors such as the perpetrator of the Buffalo attack is countered in a way that respects human rights, and a clear designation policy provides the means to do so. However, as things currently stand, designation systems are arguably outdated: they do not clarify the implications of an entity's designation for the online content that entity produces, limiting the positive impact designation can have in combating terrorist content online. I therefore recommend that policymakers bring criminal justice, and designation in particular, into the 21st century so as to counter terrorist use of the internet more effectively, responsibly, and in compliance with human rights. To echo President Biden's words, we must do everything we can to end hate-fueled domestic terrorism, or any form of terrorism for that matter, and I argue that designation should be one of the priorities for achieving it.

European Eye on Radicalization aims to publish a diversity of views and as such does not endorse the views expressed by contributors. The opinions expressed in this article represent the author alone.
