How Fake News Took Over Social Media - And What’s Being Done

By Campaign Agent Charlotte Spencer-Smith

What's your favourite fake news story? My favourite comes from the US election of 2016. A hitman in the pay of the Clinton family claims to have procured women for sexual trysts with Hillary Clinton. A photograph of said operative shows him standing with the Clintons in the seating area of a stadium. His face is marked in the photo with a white circle. It's a picture of Ed Miliband.

If you're British, this is a brilliant piece of 21st century ridiculousness. Anyone with a passing knowledge of UK politics can see through it in seconds. But in certain American states, or anywhere else where Ed Miliband is unknown, for the right target audience, the story is further proof that Hillary Clinton is the craven head of a criminal campaign to destroy America. Some people are so willing to believe this that a man is currently serving four years in a US jail after storming a pizza restaurant he believed to be the headquarters of a paedophile ring linked to the Clintons.

Since the turn of the millennium, the meaning of the term “fake news” has undergone an evolution. Originally, it was applied to satire on current events, produced by the likes of Jon Stewart, The Onion or, in the UK, the Daily Mash. However, the explosion of disinformation on the internet during the 2016 US presidential election cemented the use of the phrase to mean false news stories ranging from outright lies to propaganda and dishonest reporting about real-life events. While disinformation has been deployed as a political and military tactic throughout history, a collision of technology and an unstable moment in global politics has driven the recent boom.

Connecting people, sharing news

The internet gives us the means to self-publish as never before. There are almost 1.9 billion websites online as of April 2018. Of these, YouTube is the second most visited site in the world, Facebook the third, and Twitter twelfth. While Facebook and Twitter are social networks, content delivery has become a defining role for all three websites, and all three have been heavily affected by the fake news phenomenon. Facebook CEO Mark Zuckerberg is keen to stress Facebook’s primary function of “connecting people”, but social media platforms have also become a key mechanism for posting and disseminating information, and increasingly, disinformation.

The fake news boom of 2016 can be partly attributed to events that started ten years prior. In 2006, two things happened: Facebook introduced a share button and Twitter launched its platform. These two events enabled fast, viral sharing of information, accelerating the ability of content to travel from user to user. Over the next decade, Facebook competed enthusiastically with Twitter, unfortunately acquiring some of Twitter’s problems in the process. The competition intensified in 2009, when Twitter introduced the retweet button, making it even easier and faster to share tweets. By 2012, Twitter had become the fastest way to spread news online. During the 2011 London riots, Twitter was the place to go for minute-by-minute coverage, from phone snaps of burning cars to rumours about where the rioters might go next, with the BBC lagging well behind.

Facebook moved to improve its offering by adjusting News Feed to favour news stories and offered new services to publishers. At the same time, Facebook, much like Twitter, drew the line at policing the content of these news stories. This would have turned into an editorial mission that neither Facebook nor Twitter wanted. But as Wired’s major investigation into Facebook explains, “neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper”.

News pirates of the internet

If the mechanisms for spreading information fast had already started to develop in the late 2000s, it was only a matter of time before people began to take advantage of them. This first exploded into public consciousness during the 2016 US elections with a number of fake news sites mostly run out of the Macedonian town of Veles. The sites were a natural evolution of the clickbait phenomenon, with the sole objective of churning out outrageous stories to generate clicks and therefore ad revenue. The US elections simply provided an economic opportunity, as the sensationalist mood during the run-up guaranteed higher click rates. However, fake news websites also popped up closer to home, including eccentric publishers with an almost ideological commitment to producing fake news for the sake of media diversity. A famous example is Your News Wire, run by the British US resident Sean Adl-Tabatabai, who sees what he does as “alternative news”.

Recent hearings in US Congress and UK Parliament have focused on slicker, better-funded operations that have specifically aimed to disrupt democracy. In March 2018, whistleblower Christopher Wylie explained to the Digital, Culture, Media and Sport Committee how data company Cambridge Analytica used data analysis and ad targeting on Facebook to show fake news to the people most likely to believe it. According to Wylie, CA eroded trust in mainstream media by manipulating users: “they start seeing all of these ideas or all of these stories around them in their digital environment. They do not see it when they watch CNN or NBC or BBC, and they start to go, ‘Well, why is it that everyone is talking about this online? Why is it that I’m seeing everything here but the mainstream media isn’t talking about how Obama has moved a battalion into Texas because he’s planning on staying for a third term?’”. 

Less than two weeks later, Mark Zuckerberg answered questions in US Congress on both the Cambridge Analytica scandal and the activities of the Internet Research Agency. The Internet Research Agency is a Moscow-based company alleged to use blogs, fake social media accounts and bots to spread Kremlin-sponsored propaganda both in Russia and abroad. Although investigated by the New York Times in 2015, the Agency rose to prominence during the 2016 US elections by running hundreds of fake Facebook and Twitter accounts to sow confusion and manipulate voters during the election campaign. It is now at the centre of an indictment led by US Special Counsel Robert Mueller, as part of an investigation into foreign interference in the elections.

“Not the arbiters of truth”

Facebook has been fighting its fake news problem since around 2016. As well as trying to make fake news less lucrative for commercially-driven sites, the platform has made it easier for users to report content and also started working with third-party fact-checkers. However, some of its initiatives have collapsed into controversy. The hiring of a team of 25 editors with journalism backgrounds to prevent the “Trending Topics” feature from being taken over by fake news and manipulation failed when the company was accused of suppressing conservative news outlets in the US. The introduction of a “disputed” flag to mark controversial content was similarly ill-fated, as the company discovered that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs”. At the end of last year, it introduced a tool for US users to check if they had unwittingly followed an account belonging to the Internet Research Agency and it continues to weed out and delete Agency accounts.

The recent Cambridge Analytica controversy has put Facebook under the spotlight, but Twitter and YouTube have also been heavily affected by fake news, Twitter especially. In many ways, Twitter is a fundamentally different beast to Facebook. It does not insist on authenticity or that users log in with their real names, and it has a more relaxed policy on bots, provided they serve a benign purpose. This relative freedom also means that it is frequently used by the Internet Research Agency and has a significant fake news problem, with a recent MIT study finding that fake news spreads further and faster on Twitter than real news. Twitter primarily relies on its own users to debunk fake news, with senior management commenting in February that, “we are not the arbiters of truth”. YouTube is also affected - in the week after the Parkland school shootings in Florida in February, videos appeared on the platform accusing pupils at the school of being professional “crisis actors”. YouTube has announced plans to employ 10,000 content moderators by the end of the year.

Firefighting fake news

Politicians in India, Malaysia and France have all touted the idea of banning fake news, but so far, it remains unclear how to implement a ban without serious threat to freedom of speech and a free press. Exactly who decides what is true and what is false is a tricky question for democracy. Meanwhile, Zuckerberg puts his faith in the future of artificial intelligence. In a hearing in the US, the CEO told congressmen that Facebook had used AI tools to identify fake accounts in German and French national elections and the Alabama special election last year, and hopes to develop more. Going after fake accounts and bots is a safe approach for the company, as this does not require the human news judgement needed to differentiate fake content from real content, especially after its experiences with the Trending Topics backlash. It also fits comfortably with Facebook’s insistence on authenticity, which, while well-intentioned, also has implications for privacy and the freedom of the individual to choose how they want to present themselves online.

However, the roots of the problem run deeper than fake accounts, stemming instead from a period of change and conflict in global politics, as well as social media design that amplifies people’s fears. Civil society initiatives have chosen to focus on the users by promoting awareness and media literacy. NGOs like the EU Disinfo Lab monitor fake news in national election cycles, while other projects, such as online fake news games, attempt to educate the public in how fake news works. While user awareness is indispensable, individuals cannot be expected to become digital media experts overnight. Rather, governments, schools and public broadcasting services should invest in educating the current and future electorate not just in media literacy, but also in how to navigate the new digital landscape. This might help prepare users for what awaits them on the internet.
