By Campaign Agent Charlotte Spencer-Smith
Hate speech is a distressing everyday reality of social media, and governments want social media platforms to do more to counter it. The question remains: is it possible to do away with abuse on social media?
January has been an interesting month for social media. On New Year’s Day, Germany began enforcing the new Network Enforcement Law (“NetzDG”), one of the most far-reaching pieces of legislation against online hate speech in Europe. Social media companies must delete or block “evidently unlawful content” within 24 hours, or within a week for more complicated cases. With this law, Germany has taken a different approach from countries such as the UK, which try to combat hate speech by focussing on users rather than on the social media companies themselves.
Keen to avoid German fines of up to 50 million euros, Twitter has deleted not just offensive tweets by far-right politicians, but also tweets satirising them, including posts by the German satire magazine Titanic, whose account was also suspended for 48 hours. Critics worry that this new level of zeal poses a threat to free speech in Germany. Social media companies have to judge what might fall under the new law and what might not, and can be reported by users to the authorities if they fail to act. If Twitter seems to be overcompensating, governments have long complained that social media platforms have been too slow and too passive in tackling online abuse.
Policing their own platforms
To a large extent, social media platforms police what is and is not acceptable to post online. Twitter, Facebook and YouTube have user policies on hate speech, but provide little transparency about where exactly they draw the line. Training documents leaked from Facebook in 2017 offer a glimpse into rules for content deletion that can produce counterintuitive results: for example, offensive content about white men could be deleted, but not offensive content about black children.
Moderating is a touchy subject for platforms like Twitter, Facebook, and YouTube because, traditionally, they have seen themselves as technology platforms, rather than media companies with editorial responsibility for the content they host. This becomes more complex when commercial interests and ad revenue become involved. Although YouTube has refused to take down extremist content that does not violate its terms of service, it has demonetised videos and made them more difficult for users to find.
Hate speech or freedom of expression?
With well over a billion daily active users on Facebook alone, monitoring the web for hate speech is a colossal task. As well as relying on users to flag abuse, social media platforms use algorithms, machine learning, and artificial intelligence to search for potential hate speech; one example is DeepText, the Facebook-developed system that Instagram uses to filter comments. However, in many cases hate speech can only be identified in context, so a human has to make the final judgement. This has led to growth in the number of content reviewers, a role dubbed “the worst job in technology” because workers are exposed to distressing content. Meanwhile, users’ ability to flag content has itself been abused, as in recent attempts to shut down the Facebook activities of Egyptian democracy activists.
These controversies raise questions about how social media companies can judge what constitutes hate speech and what constitutes freedom of expression - a difficult task even for governments and judiciaries. While the Crown Prosecution Service has pledged to push for tougher sentences in online hate speech cases, investigating and prosecuting individual users is costly. A balanced approach to making social media providers more responsible could be the way forward. The Committee on Standards in Public Life has recently recommended that UK law should be changed to make social media companies liable in certain cases.
Ultimately, social media platforms are owned and run by private companies, but treated by users as public spaces. Governments are beginning to sit up and pay attention to the evolution of social media and its role in our lives. The challenge is to find the right legislative balance to protect citizens from hate speech.
Sources and Further Reading:
- Derek Scally, “Germany’s social media hate speech ban faces wide backlash”, The Irish Times, (9 January 2018)
- “Germany is silencing ‘hate speech’, but cannot define it”, The Economist, (13 January 2018)
- Rick Noack, “Can social media become less hateful by law? Germany is trying it - and failing, critics say”, Washington Post, (13 January 2018)
- Henning Hübert, “Wie das Bundesamt für Justiz über das NetzDG wacht”, Deutschlandfunk, (24 January 2018)
- “Code of Conduct on countering illegal hate speech online: Results of the 3rd monitoring exercise”, European Commission, (January 2018) - PDF
- “Hateful Conduct Policy”, Twitter
- “Encouraging respectful behaviour: Hate speech”, Facebook
- “Hate speech”, YouTube
- Julia Angwin & Hannes Grassegger, “Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children”, ProPublica, (28 June 2017)
- Ali Breland, “YouTube cracking down on hate speech”, The Hill, (24 August 2017)
- “Facebook Reports Third Quarter 2017 Results”, Facebook Investor Relations, (1 November 2017)
- Nicholas Thompson, “Instagram unleashes an AI system to blast away nasty comments”, Wired, (29 June 2017)
- Lauren Weber & Deepa Seetharaman, “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook”, Wall Street Journal, (27 December 2017)
- Dania Akkad, “Revealed: Seven years later, how Facebook shuts down free speech in Egypt”, Middle East Eye, (26 January 2018)
- Vikram Dodd, “CPS to crack down on social media hate crime, says Alison Saunders”, The Guardian, (21 August 2017)
- “Intimidation in Public Life”, Committee on Standards in Public Life, (19 December 2017) - PDF
- Jordan Erica Webber, Iain Chambers & Max Sanderson, “Digital dystopia: tech slavery and the death of privacy – podcast”, The Guardian, (12 January 2018)