In July 1993, a new radio station, Radio Télévision Libre des Mille Collines (RTLM), began broadcasting in Rwanda.

The station was loud and used street language: there were disc jockeys, pop music, and phone-ins. Sometimes the presenters were drunk. It was designed to appeal to the unemployed, to delinquents, and to gangs of militia thugs. In a largely illiterate population, the radio station soon had a very large following who found it immensely entertaining. That entertainment helped lead to the massacre of some 800,000 people, many with machetes, in about 100 days.

Quite quickly, this radio station systematically repeated messages designed to “troll” — to incite, to arouse an infamous will to destroy, and to provoke the other side into angry retaliation. In fishing terminology, in fact, trolling means drawing a baited line slowly through the water to lure a fish into biting.

Trolling has become the order of the day on social media and across the internet. Targets are baited into reacting so that they can be denigrated and slandered, and the practice has snowballed into hate speech. Indeed, hate speech is a major component of the troll medley, carrying with it a great deal of incitement, instigation and arrogant blackmail.

Hatred and beef take over. Statistical analysis has shown that women are the group most affected by trolling: The Guardian in the UK claims that more than 40 percent of comments on articles written by women are abusive. The figures, while not rigorously empirical, are even more dire in Nigeria, where surveyed regulators say they have collected more than 40 million reports of online hate speech in the past 10 years.

The recent political imbroglio in Nigeria has further intensified trolling among various groups, and hate speech has become the order of the day. Even the leader of the free world is no slouch, with his recent support for white supremacists and his slander of the liberal media as fake news; it is said that over 40 percent of his tweets are defamatory or amount to hate speech.

The shocking part of this debate is that there are no local laws that adequately define what constitutes hate speech or trolling, or how those affected could seek redress. The UK recently sent out a stern warning, but that might not be enough while the very definition of hate speech remains unsettled.

To crack down on hate speech, however, the internet giants first need to be able to define it effectively. So let’s see how the giants define it.

Facebook defines “hate speech” as “direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease”.
Twitter does not provide its own definition, but simply prohibits “publishing or posting direct, specific threats of violence against others”.
The YouTube website makes it clear that it does not allow hate speech, which it defines as “speech that attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status and sexual orientation/gender identity”.
Google makes special mention of hate speech in its content and user behaviour policy: “Do not distribute content that incites hatred or violence towards groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status or sexual orientation/gender identity.”

Those, so far, have been the internet giants’ definitions. I was therefore encouraged when, in May 2016, Facebook, Google and Twitter signed a code of conduct, announcing a set of standards to combat hate speech, including:

• A promise to review the majority of reports of illegal hate speech and remove the offending content within 24 hours;

• making users aware of what each company prohibits;

• training staff to better identify and respond to hate speech online.

In addition, in March 2017 the German Justice Minister, Heiko Maas, proposed fining social media companies up to €50 million for failing to respond quickly enough to reports of illegal content or hate speech.

• The law would require social media platforms to make it easy for users to report hateful content. Companies would have 24 hours to respond to “clearly criminal content”, or a week for more ambiguous cases.

These are the measures taken so far, but more needs to be done, as the proliferation of hate speech and trolling could damage the moral fiber of the world.

Rufai Oseni,
