Is a technology company always neutral? Cloudflare’s latest controversy shows why the answer is no.
Jenna Ruddock is a Research Fellow with the Technology and Social Change Project and April Glaser is a Senior Internet Policy Research Fellow at Harvard Kennedy School’s Shorenstein Center.
It’s time to take a hard look at Internet infrastructure policy
Infrastructure rarely makes headlines – until it fails. Internet infrastructure is no exception. But last month, Cloudflare – a popular internet infrastructure company providing a range of services from domain name support to cybersecurity and content delivery – was reluctantly thrust (again) into the spotlight. The problem was not a broken pipe or a cyberattack targeting its network or its customers, but the fact that Cloudflare continued to protect one of its customer websites despite overwhelming evidence of persistent harassment and abuse, both online and offline, perpetrated by the site’s community of users. (This article will not name the website in question, to avoid amplifying harassment or directing readers to its content.)
Flagging. Blocking. Suspending. Demonetizing. Most of us are familiar with the range of tactics major social media platforms use to moderate online content, and with the confusion and challenges that have resulted from erratic efforts to arbitrate user-generated content at scale. Even the internet’s Facebooks and YouTubes have proven ineffective at preventing the online communities they host from engaging in harmful behavior, including incitement to violence. The prospect of internet infrastructure companies that aren’t directly in the social media business making decisions about what stays online and what doesn’t is even more fraught.
But the stakes are just as high: consider Cloudflare’s decision in 2019 to stop providing services to 8chan, a website well known for its violent extremism and explicitly white supremacist content. That year, three mass shooters posted their hateful manifestos on 8chan before opening fire. Seventy-five people were killed in those attacks, with 141 casualties in total. Even immediately after the third attack – in El Paso, Texas – Cloudflare initially said it would not stop providing services to 8chan. Only hours later, following public outrage and bad press, did Cloudflare end its technical support for the site.
So how should we think about online infrastructure companies and their responsibilities to combat damage caused by websites using their services?
Social media sites that invite people to post content have more targeted tools for moderating that content, such as flagging or deleting a problematic post or banning an individual’s page. But companies that provide internet infrastructure services like web hosting or domain name registration typically have far less granular options available to them. They are often limited to blunt actions such as taking down entire websites or blocking entire domains. It’s no coincidence that governments increasingly turn to infrastructure providers such as ISPs when seeking to disrupt internet access for entire regions in times of unrest.
For those who would rather see a company like Cloudflare stay out of the content moderation game entirely, that ship has sailed. Up and down the “stack”, internet infrastructure services have repeatedly made unilateral decisions to take down entire websites – Cloudflare is not alone. When Cloudflare dropped the neo-Nazi website the Daily Stormer in 2017, so did Google, which was the site’s domain registrar, and GoDaddy, the site’s web host. Largely hidden from public view, these decisions rarely make headlines unless they are prompted by sustained public outcry. And it’s rare for internet infrastructure companies to proactively cite clear, pre-existing guidelines or policies when taking action in these cases. The result: a record of ad hoc, reactive decision-making so opaque and inconsistent that it’s hard for anyone outside these companies to imagine better solutions to these thorny policy questions.
In a recent blog post, Cloudflare’s leadership offered what some have found to be a compelling analogy in defense of the company’s stubborn reluctance, and sometimes outright refusal, to part ways with websites with long histories of harm. In its role as a website security provider, the company argues, Cloudflare is a lot like a fire department. Refusing to provide services to a website based on its content, then, would be tantamount to refusing to respond to a fire because the home belonged to someone of “insufficient character”.
Without dwelling too long on this specific analogy, there are two glaring problems with comparing most internet infrastructure providers to a fire department or any other utility rooted in the community it serves. The first and most obvious is that the vast majority of internet infrastructure providers are for-profit corporations with no comparable regime of public oversight and accountability. While these companies may rightly position themselves and their services as valuable, even integral, to the internet as a whole, their most concrete obligations are ultimately to their paying customers and, above all, to their owners or shareholders.
But the second, more nuanced distinction concerns how we identify the rights and harms at stake. Too often, the provision of infrastructure services is positioned as a neutral default, while only the denial of those services is framed as a political choice. In other words: refusing services to websites or forums that promote or have been directly linked to violence is readily framed as a potential denial of rights, and therefore an affront to the “free and open internet”. But when a company chooses to continue providing services even in the face of hard evidence that a site is being used to promote hate and abuse, that choice is generally not treated as a threat to the overall health of the internet in the same way. As legal scholar Danielle Citron has noted, however, online abuse itself “jeopardizes freedom of expression” – particularly by silencing “women, minorities and political dissidents”, who are disproportionately targeted online.
Infrastructure companies themselves have championed this idea of neutrality, and in the absence of support from law enforcement or the courts, calls to action from targeted individuals and communities are too often dismissed as subjective content or policy disagreements. Cloudflare’s analogy offers just one example: declining to provide services to a website is cast as refusing to administer potentially life-saving emergency aid, while the harms of persistent, targeted harassment are reduced to a judgment about “moral character”. And while companies may point to their willingness to act in accordance with legal process, shifting the burden entirely onto the court system ignores the fact that law enforcement agencies and courts have an abysmal record of not only dismissing harms reported by communities that are frequent targets of online abuse, but also of causing further harm in the process.
A frequently voiced concern is that denying services to one bad actor is a “slippery slope” leading to the denial of services to anyone, including the marginalized communities often targeted by forums like 8chan. So far, that has not been the case. While Cloudflare claims its high-profile decisions to end services for 8chan and the Daily Stormer have led to “a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations”, it is unclear whether any of these demands are reflected in the company’s transparency reports. Greater transparency is needed throughout the stack for a well-informed public conversation to be possible. But it is equally important to consider how and when “slippery slope” arguments are applied. Cloudflare says its most recent decision to withdraw services was made because an escalation of threats – in just forty-eight hours – led the company to conclude there was an “unprecedented emergency and immediate threat to human life”. The slope from “revolting content” to the harassment, swatting, and mass shootings encouraged by online hate communities also seems awfully slippery.
There are two things that those who care about building a safe and thriving digital world have learned from watching the long, drawn-out conversation about social media content moderation. First, there are few, if any, easy answers. This is just as true for internet infrastructure services as it is for major social media platforms. And second, problems don’t solve themselves or simply go away – tech companies tend to act only in response to public outcry and investigative journalism that makes them look bad. Trying to untangle complex policy issues in moments of crisis is untenable – but so is continuing to insist on the existence of neutral actors.
There is no doubt that these horrible corners of the internet will persist – in some form or forum. There will always be places on the web where those determined to cause harm and perpetuate abuse can band together and build new outposts. Combating these harms clearly requires a whole-of-society approach, but internet infrastructure providers are as much a part of society and the online ecosystem as the rest of us. An honest, rigorous conversation about the real consequences of allowing hate communities to grow online, and about how internet infrastructure companies enable them to do so, is the only path to an internet where diverse communities can create and thrive safely.