FTC report warns against using artificial intelligence to tackle online problems

Today, the Federal Trade Commission released a report to Congress warning against the use of artificial intelligence (AI) to combat online problems and urging policymakers to exercise “a great deal of caution” before relying on it as a policy solution. The use of AI, especially by large tech platforms and other companies, comes with its own unique limitations and challenges. The report raises significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can encourage increasingly invasive forms of commercial surveillance.

“Our report underscores that no one should view AI as the solution to the spread of harmful online content,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Combating online harm requires a broad societal effort, not an overly optimistic belief that new technologies, which can be both helpful and dangerous, will rid us of these problems.”

In legislation enacted in 2021, Congress tasked the Commission with examining the means by which AI “may be used to identify, remove, or take any other appropriate action necessary to address” several specified “online harms.” Harms of particular concern to Congress include online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, the sale of illegal drugs and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberbullying, and disinformation campaigns aimed at influencing elections.

The report warns against relying on AI as a policy solution to these online problems and notes that its adoption could itself introduce a series of additional harms. Indeed, the report describes several issues related to the use of AI tools, including:

  • Inherent design flaws and inaccuracy: AI detection tools are blunt instruments with built-in imprecision. Their ability to detect online harms is significantly limited by inherent design flaws, such as unrepresentative training datasets, misclassification, an inability to identify new phenomena, and a lack of context and meaning.
  • Bias and discrimination: In addition to inherent design flaws, AI tools can reflect the biases of their developers, leading to erroneous and potentially illegal outcomes. The report analyzes why AI tools produce unfair or biased results, and includes examples of cases in which AI tools have discriminated against protected classes of people or blocked content in ways that diminish freedom of expression.
  • Incentives for commercial surveillance: AI tools can incentivize and enable invasive commercial surveillance and data extraction practices, because these technologies require large amounts of data to develop, train, and use. Moreover, improving the accuracy and performance of AI tools may call for even more invasive forms of surveillance.

Congress also tasked the Commission with recommending legislation that could advance the use of AI to combat online harms. The report concludes, however, that because major tech platforms and others are already using AI tools to address online harms, lawmakers should instead consider focusing on legal frameworks that would ensure AI tools do not cause additional harm.

The Commission voted 4-1 in a public meeting to send the report to Congress. Chair Lina M. Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya released separate statements. Commissioner Christine S. Wilson issued a concurring statement, and Commissioner Noah Joshua Phillips issued a dissenting statement.
