
Media Against Hate


Photo: LOIC VENANCE / AFP

Hate speech online: could the EC Code of Conduct limit freedom of expression?

Published on 2016-12-07

In May 2016, the European Commission published a Code of Conduct aimed at countering illegal hate speech online, following recent terrorist attacks in Europe that prompted discussion of how social media might be used by terrorist groups.

The idea behind the initiative is that IT companies such as Facebook, Twitter, YouTube and Microsoft, having signed the Code of Conduct, would be responsible for putting in place rules or community guidelines making clear that the “promotion of incitement to violence and hateful conduct” is prohibited.

According to EU law, illegal hate speech is defined as “the public incitement to violence or hatred on the basis of certain characteristics, including race, colour, religion, descent and national or ethnic origin”. In the majority of Member States, this definition has been extended to cover hate speech on the grounds of sexual orientation, gender identity and disability.

Both the European Commission and the IT companies pointed out the importance of freedom of expression. However, in a comprehensive review of the Code, the free-expression organisation ARTICLE 19 concluded that such measures could still negatively affect freedom of expression: despite its non-binding character, the Code of Conduct could encourage censorship by private companies on their online platforms.

Shortly after the launch, the European Commission decided to assess how quickly these IT companies would react to notifications of illegal hate speech. Over six weeks, organisations from Austria, Belgium, Denmark, France, Germany, Italy, Spain, the Netherlands and the United Kingdom sent notifications to the IT companies. Overall, 600 notifications were issued: 270 to Facebook, 163 to Twitter and 123 to YouTube. No notifications were sent to Microsoft. A large number of cases concerned anti-Muslim hatred or hatred on the grounds of ethnic origin or race.

The results, announced on December 6, showed that the notified content was removed in 28% of cases. Twitter and YouTube were more inclined to remove content reported by “trusted flaggers”, whereas Facebook made little distinction between them and “normal users”. Of the content that was removed, only 40% was taken down within 24 hours of notification. A second monitoring exercise will take place in 2017.

#MediaAgainstHate