The deadly attack on two mosques in Christchurch, New Zealand, in which 50 people were killed and many others critically injured, was streamed live on Facebook by the man accused of carrying it out. It was then quickly shared across social media platforms.
Versions of the livestreamed attack video stayed online for a worryingly long time. A report by the Guardian found that one video stayed on Facebook for six hours and another on YouTube for three. For many, the quick and seemingly unstoppable spread of this video typifies everything that is wrong with social media: toxic, hate-filled content which goes viral and is seen by millions.
But we should avoid scapegoating the big platforms. All of them (Twitter, Facebook, YouTube, Google, Snapchat) are signed up to the European Commission's #NoPlace4Hate programme. They are committed to removing illegal hateful content within 24 hours, a time period which is likely to come down to just one hour.
Aside from anything else, they are aware of the reputational risks of being associated with terrorism and other harmful content (such as pornography, suicide, paedophilia) and are increasingly devoting considerable resources to removing it. Within 24 hours of the Christchurch attack, Facebook had banned 1.5m versions of the attack video, 1.2m of which it blocked at the point of upload.
Monitoring hateful content is always difficult and even the most advanced systems miss some of it. But during terrorist attacks the big platforms face particularly acute challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate which overrun platforms' reporting systems. Many of the people who upload and share this content also know how to deceive the platforms and get around their existing checks.
So what can platforms do to take down extremist and hateful content immediately after terrorist attacks? I propose four special measures designed specifically to target this short-term influx of hate.
1. Adjust the sensitivity of the hate detection tools
All tools for hate detection have a margin of error. Their designers have to decide how many false negatives and false positives they are happy with: false negatives are pieces of content which are allowed online even though they are hateful, and false positives are pieces of content which are blocked even though they are non-hateful. There is always a trade-off between the two when implementing any hate detection system.
The only way to truly ensure that no hateful content goes online is to ban all content from being uploaded, but this would be a mistake. Far better to adjust the sensitivity of the algorithms so that people are still allowed to share content but platforms catch a lot more of the hateful stuff.
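To make the trade-off concrete, here is a minimal sketch of how a platform might temporarily lower its blocking threshold during an attack. The scores, threshold values and function names are illustrative assumptions, not any platform's real system:

```python
# Hypothetical sketch of crisis-mode threshold adjustment.
# A hate-detection model is assumed to return a score between 0 and 1,
# where higher means more likely to be hateful.

NORMAL_THRESHOLD = 0.90  # block only when the model is very confident
CRISIS_THRESHOLD = 0.60  # during an attack, tolerate more false positives

def should_block(hate_score: float, crisis_mode: bool) -> bool:
    """Decide whether an upload is blocked, given the model's score."""
    threshold = CRISIS_THRESHOLD if crisis_mode else NORMAL_THRESHOLD
    return hate_score >= threshold

# Lowering the threshold catches more hateful content (fewer false
# negatives) at the cost of wrongly blocking some benign posts (more
# false positives) -- exactly the trade-off described above.
```

Switching on crisis mode for a fixed window after an attack would catch far more of the spike while leaving everyday moderation untouched.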
2. Enable easier takedowns
Hateful content which does get onto the big platforms, such as Twitter and Facebook, can be flagged by users. It is then sent for manual review by a content moderator, who checks it against predefined guidelines. Content moderation is a fundamentally difficult business, and the platforms aim to minimise inaccurate reviews, often by using the stick: according to some investigative journalists, moderators working on behalf of Facebook risk losing their jobs unless they maintain high moderation accuracy scores.
During attacks, platforms could introduce special procedures so that staff can work through content quickly without fear of poor performance evaluations. They could also introduce temporary quarantines, so that content is flagged for immediate removal but re-examined at a later date.
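A temporary quarantine could be as simple as a takedown that carries a scheduled re-review date. The following sketch is a hypothetical illustration; the seven-day window, names and in-memory data structures are assumptions, not any platform's actual workflow:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REVIEW_DELAY = timedelta(days=7)  # assumed cooling-off period

@dataclass
class QuarantinedItem:
    content_id: str
    removed_at: datetime
    review_after: datetime = field(init=False)

    def __post_init__(self) -> None:
        # Content comes down immediately; human review happens later.
        self.review_after = self.removed_at + REVIEW_DELAY

quarantine: list[QuarantinedItem] = []

def quarantine_content(content_id: str) -> None:
    """Remove content from view now and queue it for later re-review."""
    quarantine.append(QuarantinedItem(content_id, datetime.now()))

def due_for_review(now: datetime) -> list[QuarantinedItem]:
    """Items whose quarantine period has ended and need a human decision."""
    return [item for item in quarantine if now >= item.review_after]
```

Because removal happens first and review later, moderators are not forced to make fine judgments under crisis pressure, and wrongly removed content can be reinstated once the spike has passed.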
3. Limit the ability of users to share
Sharing is a fundamental part of social media, and platforms actively encourage sharing both on their own sites (which is crucial to their business models) and between them, as it means that none of them misses out when anything goes viral. But easy sharing also brings risks: research shows that extreme and hateful content is imported from niche far-right sites and dumped into the mainstream, where it can quickly spread to large audiences. And during attacks it means that anything which gets past one platform's hate detection software can be quickly shared across all of the platforms.
Platforms should limit the number of times that content can be shared within their sites and potentially ban shares between sites. This tactic has already been adopted by WhatsApp, which now limits the number of times content can be shared to just five.
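Mechanically, a share cap is a small piece of bookkeeping. The sketch below uses a deliberately simplified global per-item counter (WhatsApp's actual limit applies per user, per forwarded message); the names and in-memory store are illustrative assumptions:

```python
MAX_SHARES = 5  # the WhatsApp-style cap mentioned above

share_counts: dict[str, int] = {}  # content_id -> shares so far

def try_share(content_id: str) -> bool:
    """Permit a share only while the content is under the cap."""
    count = share_counts.get(content_id, 0)
    if count >= MAX_SHARES:
        return False  # cap reached: block further spread
    share_counts[content_id] = count + 1
    return True
```

During an attack, the cap could be lowered further still, throttling virality without banning sharing outright.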
4. Create shared databases of content
All of the big platforms have very similar guidelines on what constitutes hate and will be trying to take down largely the same content following attacks. Creating a shared database of hateful content would ensure that content removed from one site is automatically banned from the others. This would not only avoid needless duplication but let the platforms quickly devote resources to the really challenging content that is hard to detect.
Removing hateful content should be seen as an industry-wide effort, not a problem each platform faces individually. Shared databases like this do already exist in a limited way, but efforts need to be hugely stepped up and their scope broadened.
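In practice, such databases typically store fingerprints (hashes) of banned content rather than the content itself. The sketch below is a hypothetical minimal version; real systems use perceptual hashes that survive re-encoding and cropping, whereas the plain SHA-256 used here only matches byte-identical copies:

```python
import hashlib

# Assumed shared ledger of fingerprints of banned content.
shared_banned_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint; real systems would use perceptual hashing."""
    return hashlib.sha256(content).hexdigest()

def publish_takedown(content: bytes) -> None:
    """One platform removes content and shares its fingerprint with the rest."""
    shared_banned_hashes.add(fingerprint(content))

def already_banned(content: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(content) in shared_banned_hashes
```

Once one platform has identified a video, every other platform can reject re-uploads at the door instead of rediscovering the same content independently.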
In the long term, platforms need to keep investing in content moderation and developing advanced systems which integrate human checks with machine learning. But there is also a pressing need for special measures to handle the short-term influx of hate following terrorist attacks.
(Bertie Vidgen, PhD Candidate, Alan Turing Institute, University of Oxford)