
The effects of censoring the far-right online

  • Ofra Klein
  • February 17, 2020

Berlin.- Adolf Hitler speaks at the Reichstag session in the Kroll Opera after the end of the Balkan campaign (Bundesarchiv, Bild 101I-808-1238-05/Creative Commons)

Since 2016, censorship of far-right groups and individuals on social media platforms has been the subject of much public discussion. With the implementation of measures to counter hateful speech, such as the German Network Enforcement Act (NetzDG) and the EU Code of Conduct on countering illegal hate speech online, social media companies now bear far more responsibility for regulating the content that users post online.

Censorship of such hateful content can take different forms. Entire pages or accounts – or, in less extreme circumstances, individual posts – can be removed, a practice known as de-platforming or no-platforming. Examples of this were the deletion of the Facebook pages of several British organizations and individuals, including the British National Party, Britain First, and the English Defence League (EDL). This happened in April 2019, two months after the EDL's former leader, Tommy Robinson, was banned from the platform. More recently, in September 2019, the pages of the Italian parties Forza Nuova and CasaPound were made inactive. Besides entirely removing pages or posts, content can also be ranked lower algorithmically, so that users see it appear less prominently in their Facebook or Twitter feeds. Former UKIP leader Nigel Farage argued in 2018 that this practice led to much lower engagement from his followers on Facebook.
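To make the down-ranking mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not any platform's actual algorithm: the Post fields, the moderation flag, and the demotion factor are all assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        engagement_score: float   # baseline ranking signal (likes, shares, etc.)
        flagged_borderline: bool  # output of a hypothetical moderation classifier

    DEMOTION_FACTOR = 0.2  # assumed value, chosen only for illustration

    def feed_rank(post: Post) -> float:
        """Score used to order a post in a user's feed (illustrative only)."""
        score = post.engagement_score
        if post.flagged_borderline:
            # The content stays online but its score is multiplied down,
            # so it surfaces less prominently in followers' feeds.
            score *= DEMOTION_FACTOR
        return score

    posts = [
        Post("ordinary update", 100.0, False),
        Post("borderline political post", 100.0, True),
    ]
    for p in sorted(posts, key=feed_rank, reverse=True):
        print(f"{feed_rank(p):6.1f}  {p.text}")

The point of the sketch is that, unlike outright removal, down-ranking leaves the content visible while quietly reducing its reach, which is why affected users tend to notice it only as a drop in engagement.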

Anecdotal evidence suggests that censorship can make it more difficult for far-right individuals or groups to reach a mainstream audience. Perhaps the best example of this is Milo Yiannopoulos. The former alt-right influencer allegedly went bankrupt after being removed from Twitter: Yiannopoulos has had a hard time selling his merchandise via alternative platforms such as Telegram and Gab, which have much smaller and more specific user bases. It has also been argued that the removal of conspiracy theorist Alex Jones from YouTube and of Yiannopoulos from Twitter led to drop-offs in their audiences. At the same time, fears exist that censorship forces far-right actors to migrate to more fringe, unmonitored online environments, where more extremist content can continue to circulate. This fear is legitimate, as platforms such as Gab and 8kun have been linked to shootings.

Removing pages might also strengthen the far right's perception of injustice. As CARR's William Allchorn shows, censorship has become a key focus in the far right's victimization discourse. The removal of Robinson and Yiannopoulos from online platforms, as well as the court case against Geert Wilders, leader of the Dutch Party for Freedom (PVV), have fueled this discourse: the far right cites these examples as proof that it is being silenced and restricted in its freedom of speech. When censorship seems to remove not just content that is extremist in nature, but also content with which we merely disagree, this rightly spurs feelings of injustice and increases opposition. For example, in a recent case in Italy a court agreed that Facebook's censorship of CasaPound was unjust and that it had excluded the party from the Italian political debate. Consequently, Facebook had to reactivate CasaPound's page and pay the group 800 euros for each day that it had been shut down.

Far-right actors can also adapt their strategies online in order to avoid being blocked. Prashanth Bhat and I show in a forthcoming book chapter (in the edited volume #TalkingPoints: Twitter, the public sphere, and the nature of online deliberation) how far-right users apply coded verbal or visual language online to steer clear of censorship. These so-called dog whistles are used to express more covert forms of hate speech online. A well-known example was Operation Google in 2016, during which users adopted code words in retaliation against Google's implementation of AI-based automated hate speech detection. Far-right users started to refer to black people as “Googles” and to Jews as “Skypes”. These coded forms of hate speech were meant to turn the brand name itself into a racist slur, forcing companies to censor their own brand names.
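To see why such substitutions work, consider a minimal, purely illustrative sketch of a keyword-based filter. The blocklist entries and example posts are placeholders rather than real moderation data, and real detection systems are considerably more sophisticated.

    # Placeholder terms standing in for actual slurs on a moderation blocklist.
    BLOCKLIST = {"slur_a", "slur_b"}

    def keyword_filter(text: str) -> bool:
        """Return True (i.e. take the post down) if it contains a blocklisted term."""
        return any(token in BLOCKLIST for token in text.lower().split())

    print(keyword_filter("post containing slur_a"))   # True  -> taken down
    # The same message recoded with an innocuous brand name passes through:
    print(keyword_filter("post containing googles"))  # False -> survives

Because such a filter matches surface tokens, any agreed-upon recoding – a brand name, a deliberate misspelling, an image – slips through until moderators learn the code, at which point the community simply coins a new one.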

Similarly, CARR Fellow Bharath Ganesh addresses how terms such as “white genocide” or “rapefugees” were invented by Twitter users precisely because such words are not immediately recognizable as extreme under platforms' community guidelines. This strategy of dog whistling is also used for content that is not necessarily hateful but harmful in other ways: recently, for example, online audiences have started to employ hashtags such as #maga2020 on Instagram to avoid having their anti-vaccination content blocked. Far-right users, as early adopters of the internet, have repeatedly devised creative strategies to get around censorship.

Shifting to more extreme platforms and using covert forms of hateful speech are tactics that underline the difficulties platforms face in dealing with extreme content online. Debates on the effectiveness of online censorship often look at the number of takedowns, and therefore tend to neglect how censorship might in fact also spur support for the far right.

Ms Ofra Klein is a Doctoral Fellow at CARR and a doctoral candidate in the Department of Political and Social Sciences, European University Institute. See her profile here.

© Ofra Klein. Views expressed on this website are those of individual contributors and do not necessarily reflect those of the Centre for Analysis of the Radical Right (CARR). We are pleased to share previously unpublished materials with the community under Creative Commons license 4.0 (Attribution-NoDerivatives).


Related Topics
  • censorship
  • covert speech
  • de-platforming
  • Far-Right
  • Free Speech
  • German Network Enforcement Act
  • Milo Yiannopoulos
  • Online platform
  • rapefugees
  • Tommy Robinson
  • white genocide