Can We Control Hate Speech on Social Media?

By Campaign Agent Charlotte Spencer-Smith

Hate speech is a distressing everyday reality of social media, and governments want social media platforms to do more to counter it. The question remains: is it possible to do away with abuse on social media?

January has been an interesting month for social media. On New Year’s Day, Germany started to enforce the new Network Enforcement Law (“NetzDG”), one of the most advanced pieces of legislation against online hate speech in Europe. Social media companies must delete or block “evidently unlawful content” within 24 hours, or within a week for more complicated cases. The new law signals that Germany has chosen a different approach: whilst countries including the UK try to combat hate speech by focussing on users, Germany places responsibility on the social media companies themselves.

Keen to avoid German fines of up to 50 million euros, Twitter has deleted not just offensive tweets by far-right politicians, but also tweets satirising them, including some by the German satire magazine Titanic, whose account was also suspended for 48 hours. Critics worry that this new level of zeal poses a threat to free speech in Germany. Social media companies have to judge what might fall under the new law and what might not, and can be reported by users to the authorities if they fail to act. If Twitter seems to be overcompensating, governments have long complained that social media platforms have been too slow and inactive in tackling online abuse.

Policing their own platforms

To a large extent, social media platforms police what is and is not acceptable to post online. Twitter, Facebook and YouTube have user policies on hate speech, but provide little transparency about where exactly they draw the line. Training documents leaked from Facebook in 2017 offer a glimpse into rules for content deletion that seem to produce counterintuitive results (for example, offensive content about white men can be deleted, but not about black children).

Moderation is a touchy subject for platforms like Twitter, Facebook, and YouTube because, traditionally, they have seen themselves as technology platforms rather than media companies with editorial responsibility for the content they host. This becomes more complex when commercial interests and ad revenue are involved. Although YouTube has refused to take down extremist content that does not violate its terms of service, it has demonetised such videos and made them more difficult for users to find.

Hate speech or freedom of expression?

With 1.7 billion daily active users on Facebook alone, monitoring the web for hate speech is a colossal task. As well as relying on users to flag abuse, social media platforms use algorithms, machine learning, and artificial intelligence to search for potential hate speech, such as Instagram’s use of DeepText. However, in many cases, hate speech can only be identified in context, so a human has to make the final judgement. This has led to growth in the number of content reviewers, whose role has been dubbed “the worst job in technology” because workers are exposed to distressing content. Meanwhile, users’ ability to flag content has itself been abused, as in recent attempts to shut down the Facebook activities of Egyptian democracy activists.

These controversies raise questions about how social media companies can judge what constitutes hate speech and what constitutes freedom of expression - a difficult task even for governments and judiciaries. While the Crown Prosecution Service has pledged to push for tougher sentences in online hate speech cases, investigating and prosecuting individual users is costly. A balanced approach to making social media providers more responsible could be the way forward. The Committee on Standards in Public Life has recently recommended that UK law be changed to make social media companies liable in certain cases.

Ultimately, social media platforms are owned and run by private companies, but treated by users as public spaces. Governments are beginning to sit up and pay attention to the evolution of social media and its role in our lives. The challenge is to find the right legislative balance to protect citizens from hate speech.
