Bumble introduces a policy to combat hate speech

Bumble announced that it has introduced a new policy that explicitly prohibits identity-based hate, a move that reinforces the company's previous stance in banning racist, transphobic, ableist and body-shaming language.

The company also announced today that it will take action against those who intentionally submit false reports based on someone’s identity, including removing repeat offenders from its platform.

The company defines identity-based hate as content, images or behavior that promotes or condones hatred, dehumanization, degradation or contempt for marginalized or minoritized communities on the basis of the following protected attributes: race, ethnicity, national origin/nationality, immigration status, caste, sex, gender, gender identity or expression, sexual orientation, disability, serious medical condition, or religion/creed.

“As a platform rooted in kindness and respect, we want our members to connect safely and without hate that targets them simply for who they are,” said Azmina Dhrodia, safety policy lead at Bumble. “We want this policy to set the gold standard for how dating apps should think about and enforce rules around hateful content and behavior. We were very intent on tackling this complex societal issue with principles that celebrate diversity and understand how those with overlapping marginalized identities are disproportionately targeted with hate.”

Dhrodia, an expert in gender, technology and human rights, joined Bumble in 2021. She previously worked on online violence and abuse against women at the World Wide Web Foundation and Amnesty International, and has worked with various tech companies to create safer online experiences for women and marginalized communities.

“Our moderation team will review each report and take appropriate action. Part of rolling out this policy included implicit-bias training and discussion sessions with all safety moderators to explain how bias can come into play when moderating content,” Dhrodia said. “We always want to lead with education and give our community a chance to learn and improve when their behavior goes against our policies or guidelines.”

Identity-based hate is an issue that negatively affects many communities, and one that gender non-conforming people, such as trans and non-binary people, increasingly face in online dating.

A recent analysis by Bumble revealed that up to 90% of the user reports it received about gender non-conforming people were ultimately rejected by its moderators because no violation of Bumble’s rules had been observed. These reports frequently contained language about the reported user’s gender and speculation that the profile might be fake. Under the new rules, Bumble can now take action against those who intentionally submit false or baseless reports solely because of someone’s identity.

The app uses automated safeguards to detect comments and images that violate its guidelines and terms and conditions, which can then be forwarded to a human moderator for review. Up to 80% of Community Guidelines violations on Bumble are now proactively detected before anyone reports them, part of the company’s commitment to reducing and preventing harm before it arises.
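As a rough illustration of the kind of two-stage flow described above, the minimal sketch below shows an automated screen that scores incoming content and queues likely violations for human review. It is purely hypothetical: the function names, keyword list and threshold are stand-ins and do not reflect Bumble's actual systems.

```python
# Illustrative sketch only: a generic two-stage moderation flow
# (automated screening that escalates to human review), not Bumble's
# actual system. All names, terms and thresholds are hypothetical.
from dataclasses import dataclass
from queue import Queue


@dataclass
class FlaggedItem:
    content: str
    score: float  # hypothetical classifier confidence in [0, 1]


def automated_screen(content: str) -> float:
    """Stand-in for an automated detector; returns a toy score."""
    placeholder_terms = {"hateful-term"}  # placeholder keyword list
    return 1.0 if any(t in content.lower() for t in placeholder_terms) else 0.0


def triage(content: str, review_queue: Queue, threshold: float = 0.5) -> None:
    """Proactively flag likely violations and queue them for a human moderator."""
    score = automated_screen(content)
    if score >= threshold:
        review_queue.put(FlaggedItem(content=content, score=score))


review_queue: Queue = Queue()
triage("example message containing hateful-term", review_queue)
print(f"items awaiting human review: {review_queue.qsize()}")
```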

Members of the Bumble community can also report someone for identity-based hate through the app’s Block + Report tool, either directly from the person’s profile or from a chat conversation with them.
