This month Facebook banned Britain First from its platform, saying the group had “repeatedly posted content designed to incite animosity and hatred against minority groups”. This follows Twitter’s December 2017 ban of the organisation’s leaders, Paul Golding and Jayda Fransen. Similarly, at the end of March 2018 it was announced that Twitter had also permanently banned Tommy Robinson, former leader of the English Defence League, who is understood to have fallen foul of its rules governing “hateful conduct”.
But is the removal of hateful content or the closing down of an organisation’s or an individual’s internet presence enough?
Social media companies are coming under increased pressure to combat extremism appearing on their platforms. Prime Minister Theresa May has been highly critical of the big tech companies and, like the Home Affairs Select Committee, wants to see more and quicker action to remove extreme content automatically. May has said: “These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities.” The issue is engaging other governments too. For example, representatives of Facebook, Google and Twitter were summoned before the U.S. Congress in January to give evidence about the steps they are taking to combat the spread of extremist propaganda over the internet. The hearing was revealingly titled: “Terrorism and Social Media: Is Big Tech Doing Enough?”
Clearly the big tech companies have not done nearly enough to counter not just extremism – which is difficult to define, with interpretations varying from country to country – but also the facilitation of child abuse and modern slavery taking place in their backyards. That these companies have a wider social responsibility, one that goes beyond promoting connectivity and selling our data to advertisers, is no longer in doubt. The focus of international political pressure thus far has been on taking down extreme content from social media platforms. There is also an increasing demand that the big tech companies become faster and more systematic in doing so.
Britain’s Counter Terrorism Internet Referral Unit has reportedly removed more than 300,000 pieces of terror propaganda from the internet since its inception in February 2010. Its success in working with social media companies to remove this content has led many other countries to establish similar units to protect their citizenry. There will now be an increased emphasis on Facebook, Twitter and Google getting their own houses in order without having to be reminded to do so.
While the ability to remove harmful content is a much-needed part of the response to hateful material online, it is impossible to win the battle of ideas through take-downs alone. If we do not become more holistic in our approach to online extremism, we risk being locked into a perpetual game of whack-a-mole, identifying and removing harmful content. Meanwhile the algorithms get better and the machine learning smarter, and tomorrow, rest assured, will bring a fresh plethora of extreme content to deal with.
Therefore, big tech companies’ banning of groups and individuals from propagating hateful content on their platforms is only part of the answer. This must form part of a broader strategy which encompasses engagement with the very subjects that groups like Britain First use as lightning rods to galvanise support both on and offline.
And yet the British government cannot abdicate its responsibility for formulating this wider strategic response by simply foisting it onto multinational tech companies. In any event, Facebook currently has bigger fish to fry, namely data protection concerns that could precipitate changes to its very business model. We cannot wait for private firms to do this for us; the solution starts closer to home.
My recent research on the applicability of the UK’s counter-terrorism “Prevent” strategy to right-wing extremism (as set out explicitly in the 2011 review) identified many instances where local authorities are failing to engage with the concerns of their constituents. These concerns range from immigration and integration to the impact of globalisation and child sexual exploitation investigations. They are difficult subjects to discuss, and to date too many local authorities have been unwilling to create the spaces to talk about them. Exploiting this vacuum, groups such as Britain First have, until recently, been given licence to seize upon the narratives surrounding these highly emotive issues – while also demonstrating their resonance with extensive networks of followers both on and offline (at the time of its ban Britain First had more than 2 million Facebook ‘likes’).
Where is the challenge from lawmakers and government officials? If policy makers and local authority representatives persist in their failure to recognise the need for mainstream discourse on issues that have become synonymous with groups such as Britain First, they risk swelling the ranks of radical right groups and bestowing legitimacy on individuals presenting themselves as the spokespeople of those who feel left behind by a political elite that no longer represents them. If people find that their locally elected officials refuse to discuss the issues of greatest importance to them, they will invariably seek out and find people who will.
The point is aptly highlighted in the 2016 Casey Review, “A review into opportunity and integration”, which set out that “a failure to talk about all this leaves the ground open for the far right on the one side and Islamist extremists on the other”. Accordingly, this is not merely about countering extremism but about promoting integration. That’s right, integration: possibly the biggest policy deficit we currently have in contemporary Britain. Even the Prevent strategy asserts that “an effective strategy must be based upon an effective integration strategy”. And yet, because so little work is being done to proactively integrate communities and counter the misinformation being propagated on the streets or via Facebook and Twitter, we risk problematising various communities as “extreme” rather than acknowledging that they are, at their very root, poorly integrated.
Strategies to build integrated communities must, in response, traverse both on- and offline social spaces. Local authorities must get better at identifying the narratives that could be seized upon by extreme groups, and must have something to say about them. You cannot influence hearts and minds by refusing to talk about the issues that can draw people into extremism, and this applies both on and offline. Banning extremist groups’ social media profiles is a welcome start, but it can only ever be a partial response to the radical right and other forms of extremism.
Dr Craig McCann is a Policy and Practitioner Fellow at CARR, and is Principal at Moonshot CVE, a boutique start-up specialising in countering violent extremism.
© Craig McCann. Views expressed on this website are individual contributors’ own and do not necessarily reflect those of the Centre for Analysis of the Radical Right (CARR). We are pleased to share previously unpublished materials with the community under Creative Commons licence 4.0 (Attribution-NoDerivatives).