Right-wing terrorism is a growing threat to democratic societies worldwide. With rising death tolls and far-right narratives creeping into the mainstream, the extreme right has assumed a new face through the digital dissemination of extremist propaganda and the trivialisation of violence – thereby attracting new audiences. Greater cooperation between the state, researchers, and tech companies is required to address the manipulative strategies these groups employ.
This month, two right-wing extremists stand trial in Frankfurt, Germany, accused of assassinating the CDU politician Walter Lübcke at his home in June 2019. Lübcke, who openly supported Chancellor Angela Merkel’s liberal border policy at the height of the “refugee crisis” in 2015, had become a hate figure within far-right circles and was vilified as a “traitor against the people” (Volksverräter). As the first assassination of a politician at the hands of right-wing extremists in post-war Germany, this case brings together two key elements of transnational far-right narratives: namely, that (1) the state has fallen into the hands of “enemies” who are facilitating (2) apocalyptic scenarios of the “death of the people” (Volkstod) by welcoming migrants into the country. A series of non-fatal attacks against politicians and arson attacks on refugee centres in recent years has prompted German authorities to take the right-wing threat more seriously; this includes measures such as the Network Enforcement Act (NetzDG) and its proposed amendment through the draft bill to “combat right-wing extremism and hate crime.” However, these measures do not adequately address the global character of the recent wave of right-wing terrorism. Hence, while the suspects on trial for Lübcke’s murder have long neo-Nazi pasts, there is a risk that future attacks will come from a new type of perpetrator rooted in radical online milieus – as seen in the cases of Halle and Hanau. Based on a recently published chapter for the German Peace Report, below we summarise our findings on the transnational threat of right-wing terrorism and its digital underpinnings, and conclude with some recommendations.
The narratives of right-wing terrorism
International right-wing terrorism has a long history, with apparent peaks in the 1970s and 1980s. In recent years, the number of right-wing terrorist attacks has increased significantly in Europe, North America, Australia and New Zealand. Figures from 2019 show that in Western Europe, Germany continues to be the country with the highest rate of fatal and non-fatal right-wing terrorist attacks. Today, the majority of fatal attacks are carried out by lone actors; however, militant groups still gather for violent actions, often coordinated via messenger apps and social media. While the right-wing terrorist groups of the 1970s and 1980s can only be understood in the context of the Cold War, violence emerging from today’s extreme right is focussed on migration, driven by the belief that the native population is supposedly being “replaced.”
Online, such conspiracies are articulated in the form of “white genocide” and “the great replacement,” as well as “ethnic conversion” (Umvolkung) and the above-mentioned “Volkstod.” These ideas rest not only on the narrative that society faces a threat from “violent” foreigners, but also on the claim that mainstream “cultural Marxist” politicians are responsible for “inviting” in refugees. These narratives have their origins in fascism, and they are propagated not only by militant right-wing extremists, but also by radical right-wing parties. It is precisely the combination of a perceived existential threat and the conspiratorial narrative of a culpable establishment that serves to legitimise violence and empower individuals to act upon this “threat.” Thus, even if such acts are committed by lone actors, they take place in a context of open hostility towards minorities, “elites,” “old parties” and democratic procedures.
From radical milieus to terrorist subcultures
Right-wing terrorists are often linked to radical milieus from which they receive not only ideological, but also infrastructural and organisational support. These support structures are integral to traditional forms of (right-wing) terrorism. However, some of the most recent perpetrators of right-wing terrorist attacks did not rely on group logistics or specific networks to carry out their actions. Although these individuals were not integrated into far-right organisational structures and could therefore expect no support from them, they still conformed to extremist subcultures with their own norms and values. For example, the perpetrators of the Christchurch and Halle attacks, in March and October 2019 respectively, uploaded so-called manifestos online, which contained many references to relevant Chan communities – demonstrating that they saw themselves as part of a transnational virtual subculture of white supremacists.
With these performative online actions, perpetrators consciously address a transnational audience, formulating their ideological set pieces in a way that transcends national borders and languages and can be understood by a far-right audience “in the know.” This digitally mediated form of right-wing terrorism can therefore no longer be understood in isolation from its wider transnational context. In particular, image-based forums such as 4Chan and 8kun (previously 8Chan) are used by the extreme right not only as a space for inspiring new lone-actor terrorists, but also as a means of expressing their ideology through humorous and ironic discourses, thereby making it increasingly difficult to distinguish between organised action and individual acts of provocation.
Trivialising mass violence
Far-right actors deliberately call on their followers to post content in ironic formats in order to forge new virtual alliances and to inject far-right themes into public debate. Within this context, meme culture is an important tool used by the extreme right to imply that what they say should not be taken seriously and that their rhetoric is “harmless” or “just for laughs,” despite the often overtly racist content of the memes. Memes are used to appeal to a younger audience and to trivialise or even glorify right-wing violence.
One key tactic here is to reach out to the wider spectrum of far-right sympathisers; that is, individuals who do not necessarily fall at the extreme end of the spectrum, but who may hold anti-immigration sentiments, support radical right-wing populist parties and politicians, and/or believe that freedom of expression is threatened by a culture of “political correctness.” Amongst the extreme right, such individuals are referred to as “normies”; the far-right views them as potential target audiences that can be radicalised by exploiting the above grievances.
Grasping the intangible
In the long term, it is important to pay closer attention to the cultural practices of the radical online milieus from which right-wing extremists emerge. Above all, this requires familiarity with the transnational online cultures of the far-right; namely, their codes, ironic references, language, and frames. Corresponding clues could then be matched with more conspicuous features that point to the planning of violent acts. Such nuances and subtext can be difficult to identify, and stronger analytical capacity within and across tech companies is therefore important. Whilst image hashes may help to identify certain content, an additional layer of human content moderation is vital to decipher these codes.
Although action against incitement and threatening behaviour is important in order to prevent potential acts of violence, it is also worth emphasising that extremist violence is not a consequence of hate speech alone. Whilst myths concerning the perceived threat of foreigners and the culpable establishment do not explicitly call for hatred and violence, they do play a significant role in legitimising violence. Espousing these narratives is not a criminal offence; however, extreme right-wing actors are fully adept at spreading and amplifying such myths and conspiracy theories across social media. The problem lies not in individual pieces of content and posts, but rather in the complex extremist networks that disseminate such narratives and infiltrate mainstream discussions and platforms with swarm tactics. Thus, knowledge-sharing and collaboration between larger and smaller tech firms is required, as the latter have fewer resources at their disposal and extremists are more likely to exploit their platforms.
Mr Maik Fielitz is a Doctoral Fellow at CARR and a Doctoral Candidate in the Department of Political Science at Goethe University Frankfurt.
© Maik Fielitz and Reem Ahmed. Views expressed on this website are those of individual contributors and do not necessarily reflect those of the Centre for Analysis of the Radical Right (CARR). We are pleased to share previously unpublished materials with the community under a Creative Commons 4.0 licence (Attribution-NoDerivatives).
This article was originally published at the Global Network on Extremism and Technology.