Facebook drove test account to gore and fake news in just 21 days



In three weeks, a test account Facebook set up to determine how its own algorithms affect what people see turned into a maelstrom of fake news and inflammatory images.

The test was revealing because it was designed to focus exclusively on Facebook’s role in recommending content. (Image source: Reuters)

In February 2019, Facebook Inc. set up a test account in India to determine how its own algorithms affect what people see in one of its most dynamic and important overseas markets. The results stunned the company’s own staff.

In three weeks, the new user’s feed had turned into a whirlwind of fake news and inflammatory imagery. There were graphic photos of beheadings, doctored footage of Indian airstrikes against Pakistan and scenes of jingoistic violence. A group for “things that make you laugh” included fake news of 300 terrorists who died in a bombing in Pakistan.

“I have seen more images of deceased people in the past 3 weeks than I have seen in my entire life,” wrote a staff member, according to a 46-page research note that is part of the trove of documents disclosed by Facebook whistleblower Frances Haugen.

The test was revealing because it was designed to focus exclusively on Facebook’s role in recommending content. The trial account used the profile of a 21-year-old woman living in Jaipur, in western India, and originally from Hyderabad. The user followed only pages or groups recommended by Facebook or encountered through those recommendations. The author of the research note called the experience an “integrity nightmare.”

While Haugen’s disclosures painted a damning picture of Facebook’s role in spreading harmful content in the United States, the Indian experiment suggests that the company’s influence globally could be even worse. Most of the money Facebook spends on content moderation is focused on English-language media in countries like the United States.

But much of the company’s growth has come from countries like India, Indonesia and Brazil, where it has struggled to hire people with the language skills to provide even basic oversight. The challenge is particularly acute in India, a country of 1.3 billion people and 22 official languages. Facebook has tended to outsource the monitoring of its platform’s content to contractors from companies like Accenture.

“We have invested heavily in technology to detect hate speech in various languages, including Hindi and Bengali,” a Facebook spokesperson said. “As a result, we have halved the amount of hate speech people see this year; it is now down to 0.05%. Hate speech against marginalized groups, including Muslims, is on the rise around the world, so we are improving enforcement and are committed to updating our policies as hate speech evolves online.”

The new user test account was created on February 4, 2019 during a research team trip to India, according to the report. Facebook is a “pretty empty place” without friends, the researchers wrote, with only the company’s Watch and Live tabs suggesting things to watch.

“The quality of this content is… not ideal,” the report says. When the Watch video service doesn’t know what a user wants, “it seems to recommend a bunch of softcore porn,” followed by a frowning emoticon.

The experience began to turn bleak on February 11, when the test user started exploring content recommended by Facebook, including popular posts on the social network. She started with benign pages, including the official page of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party and BBC News India.

Then, on February 14, a terrorist attack in Pulwama, in the politically sensitive state of Kashmir, killed 40 Indian security personnel and injured dozens more. The Indian government attributed the strike to a Pakistani terrorist group. Soon the test user’s feed turned into a barrage of anti-Pakistan hate speech, including footage of a beheading and an image showing preparations to cremate a group of Pakistanis.

There were also nationalist messages, exaggerated claims about India’s airstrikes in Pakistan, fake photos of bomb blasts, and a doctored photo purporting to show a newly married serviceman, killed in the attack, who had been preparing to return to his family.

Many of the hate-filled posts were in Hindi, the country’s national language, escaping the platform’s regular content moderation checks. In India, people use a dozen or more regional variations of Hindi alone, and many mix English with Indian languages, making it nearly impossible for an algorithm to sift through the colloquial jumble. A human content moderator would need to speak multiple languages to filter out toxic content.

“After 12 days, 12 planes attacked Pakistan,” one post exclaimed. Another, also in Hindi, claimed as “Hot News” the death of 300 terrorists in a bomb explosion in Pakistan. The group sharing the news was named “Laughs and Things That Make You Laugh.” Posts circulated containing fake photos of a napalm bomb, claimed to show India’s air attack on Pakistan: “300 dogs died. Now say long live India, death to Pakistan.”

The report – titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalist Messages” – clearly shows just how little control Facebook has over one of its most important markets. The Menlo Park, California-based tech giant has identified India as a key growth market and used it as a test bed for new products. Last year, Facebook spent nearly $6 billion on a partnership with Mukesh Ambani, Asia’s richest man, who heads the conglomerate Reliance.

“This exploratory effort of one hypothetical test account inspired a deeper, more rigorous analysis of our recommendation systems and contributed to product changes to improve them,” the Facebook spokesperson said. “Our work to combat hate speech continues, and we have further strengthened our hate classifiers to include four Indian languages.”

But the company has also repeatedly clashed with the Indian government over its practices there. New regulations require Facebook and other social media companies to identify those responsible for their online content, making them accountable to the government; Facebook and Twitter Inc. have fought the rules. On Facebook’s WhatsApp platform, viral fake messages about child-kidnapping gangs led to dozens of lynchings across the country beginning in the summer of 2017, further angering users, courts and the government.

Facebook’s report ends by acknowledging that its own recommendations led the test account to become “filled with polarizing and graphic content, hate speech and misinformation.” It struck a hopeful note that the experience “can serve as a starting point for conversations around understanding and mitigating integrity harms” from its recommendations in markets beyond the United States.

“Could we, as a company, have an extra responsibility for preventing integrity harms that result from recommended content?” the tester asked.


