Supreme Court to hear content policing cases

The past week has seen two ominous events for the tech industry in general and online speech in particular: Elon Musk said he would buy Twitter for $44 billion after all, a deal expected to close by October 28; and the Supreme Court agreed to hear a case called Gonzalez v. Google LLC. Together, these stories could define the trajectory of hate speech online, which has been linked globally to increased violence against minorities, mass shootings, lynchings, ethnic cleansings and the decline of democracy.

If the Twitter deal goes through, Musk is expected to lift Donald Trump’s ban from the platform, which since January 8, 2021 has confined the former president to his comparatively weak Truth Social platform. Trump had 88,936,841 Twitter followers and posted a total of 59,553 tweets and retweets. About 60% of his tweets after the November 2020 presidential election – an average of 14 a day – challenged and undermined the legitimacy of the results, including his now infamous tweet from December 19, 2020: “Big protest in DC on January 6. Be there, it’s going to be wild!”

In Gonzalez, the justices will consider whether Section 230 of the Communications Decency Act shields internet platforms from legal liability for spreading false or violent content. (In a related case, Twitter, Inc. v. Taamneh, the Court will consider whether internet service providers can be held liable for aiding terrorists under an antiterrorism statute.) Criticisms of Section 230’s unfettered immunity run the gamut – from Trump himself, who has complained about the censorship of conservative voices, to more left-leaning sources who argue for legal inducements on providers to screen users and content for lies and extremists. From the providers’ point of view, if the Court limits the immunity in Gonzalez, the legal risks and complexities of content management would be daunting. For his part, Joe Biden said during the 2020 campaign that Meta CEO Mark Zuckerberg “should be subject to civil liability and his company should be subject to civil liability.”

A quick refresher on Section 230: Congress passed it in 1996 in response to a court ruling holding an Internet service provider liable for a defamatory statement posted on a website’s message board. The law provides that Internet service providers cannot be held responsible for information provided by a third-party user. The theory was that providers don’t generate content; they simply perform the equivalent of a publisher’s traditional editorial functions, such as deciding whether or not to publish content, when to release it, and whether to edit it in any way before publication. Section 230(c)(1) therefore specifically states that a provider may not “be treated as the publisher or speaker” of information simply because it hosts it.

In 1996, only 20 million Americans had access to the Internet, and they spent an average of less than thirty minutes a month surfing the net. There was no Google, Twitter, Facebook, Instagram, Yelp, YouTube, Snapchat, TikTok or Wikipedia. Only a handful of national newspapers published articles online. Computers took about 30 seconds to load each page over a phone line, and users paid for Internet service by the hour. The first commercial ISP was only six years old, and by far the biggest was AOL. The first webpage was created in 1991. The first popular graphical web browser, Mosaic, was released in 1993. Amazon started selling (only) books in 1995. The first webmail services, Hotmail and Rocketmail, were launched in 1996, the same year Section 230 became law. The term “blogging” was not coined until 1999.

Much has changed since then. Today, there are more than 307 million American Internet users – 97% of American adults – and 15% of them rely on smartphones alone for access. Rather than posting content to a site so that every user sees the same thing, social media platforms today typically use computer algorithms to sort and prioritize content based on the likelihood that an individual will engage with it. Once a user shows interest, the algorithm directs the user to similar items, assuming the content will align with the user’s pre-existing tastes. An algorithm may also send the user posts from another user with a similar profile, regardless of factual accuracy or journalistic quality. Social media companies also make money from fees paid to promote certain content.
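To make the distinction concrete, here is a minimal, hypothetical sketch in Python of engagement-based ranking. The posts, the scoring weights, and the function names are invented for illustration only and do not describe any platform’s actual system:

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    text: str

# A shared pool of third-party posts; on a traditional bulletin board,
# every visitor would see the same list in the same order.
POSTS = [
    Post("alice", "cooking", "Sourdough starter tips"),
    Post("bob", "politics", "Election hot take"),
    Post("carol", "politics", "Another election hot take"),
    Post("dave", "sports", "Match highlights"),
]

def predicted_engagement(post, interests):
    # Toy stand-in for an engagement model: score a post by how closely
    # its topic matches topics the user has engaged with before.
    return interests.get(post.topic, 0.0)

def ranked_feed(posts, interests):
    # Personalized feed: the same third-party content, reordered so the items
    # the user is most likely to engage with appear first.
    return sorted(posts, key=lambda p: predicted_engagement(p, interests), reverse=True)

# A user who has mostly clicked on political content is shown more of it.
political_user = {"politics": 0.9, "sports": 0.2}
for post in ranked_feed(POSTS, political_user):
    print(post.author, "-", post.topic)

The point of the sketch is that the third-party content itself never changes; what the platform contributes is the ordering and targeting, which is precisely the activity the Gonzalez plaintiffs say goes beyond “traditional editorial functions.”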

Because the algorithms work with personally identifiable information, including an individual’s geographic location and associations with other online contacts, the privacy implications of today’s Internet reach far beyond those of a quarter-century ago. Algorithms also allow non-objective, polarizing, and false information to go “viral,” spreading across the social media space in seconds and becoming a tool of influence and propaganda that Congress likely had not contemplated in 1996.

The question before the Supreme Court in Gonzalez is whether social media companies’ use of algorithms to target users and push someone else’s content (rather than just performing strictly traditional editorial functions) is fully protected from legal liability. The case arose from the death in November 2015 of Nohemi Gonzalez, a 23-year-old American student, after three Islamic State terrorists fired on a crowd of diners in a Parisian bistro as part of coordinated attacks that killed 129 people. Her relatives sued Google, which owns YouTube, alleging that it aided ISIS by knowingly allowing it to post hundreds of radicalizing videos inciting violence and by targeting potential subscribers whose characteristics matched the profile of an Islamic State sympathizer. The complaint alleged that Google knew from media coverage, complaints, and congressional investigations that its services were assisting ISIS, but refused to actively police its platform for ISIS accounts. Google moved to dismiss the lawsuit based on absolute Section 230 immunity and won in the lower court because the videos were produced by ISIS, not Google. The liberal Ninth Circuit Court of Appeals agreed.

Plaintiffs’ argument on appeal to the Supreme Court is that the selective promotion of content – often for profit – is significantly different from posting and moderating third-party messages on a virtual bulletin board. In declining to hear a similar case in 2020, Justice Clarence Thomas expressed concern about what he perceived to be an overbroad reading of the law, noting that courts too often “filter[] their decisions through the policy argument that ‘Section 230(c)(1) must be interpreted broadly’” and that extending Section 230 immunity “beyond the natural reading of the text can have serious consequences.” But so far, no court has denied immunity under Section 230 on the basis of algorithmic “matchmaking.” The Ninth Circuit has found that websites “have always decided . . . where on their sites” particular third-party content should appear.

Although the current Supreme Court majority has come under fire for its ideological (or apparently ideological) rulings in controversial cases, Section 230 is not obviously susceptible to a predetermined “conservative” outcome. Arguably, the strongest constitutional case to make in Google’s favor is that it is Congress’s job to pass laws and update them, and Section 230 is no exception. Yet respect for the prerogatives of Congress has not been this Court’s guiding mantra, as evidenced by its rulings under the Voting Rights Act and the Clean Air Act. When it has seen fit to usurp Congress on a matter of general public interest, this Court has acted without restraint. With nearly 200 election deniers on the ballot for congressional races next month, the ability to spread propaganda online could be precisely the type of issue that would benefit from such judicial intervention.

The Court has not yet set a date for oral argument, but Gonzalez is now expected to be decided during this term.
