Even after hosting companies refuse to keep extremist sites like 8chan online, they find other ways to communicate on the dark web. How do we combat this?
In the wake of the El Paso shooting, attention turned once again to imageboards such as 8chan and their role in radicalization and the spread of terrorist propaganda. The El Paso shooter’s short manifesto had been uploaded to 8chan’s “politically incorrect” /pol/ board (apparently by someone other than the killer himself). This was only the latest in a string of far-right terrorist manifestos uploaded to the site in recent months. Perhaps most famously, the Christchurch mosque attacker posted his own manifesto to 8chan, along with links to a live stream of the shooting. Later, a shooter targeting a synagogue in Poway, California, also announced his impending attack via an 8chan post.
This time, the public outcry was overwhelming. Under pressure from the public, the press, and government, Cloudflare, the company that provided crucial infrastructural support to 8chan, severed business ties. 8chan quickly went offline, only to resurface some days later on the distributed ZeroNet network. The site now required far less in the way of intermediary services to remain online: its users’ shared computing power did the work for which 8chan had previously relied on third-party vendors.
Think of this shift as the difference between downloading a movie from iTunes and torrenting it. Instead of a single company hosting 8chan’s website data, serving its domain name, or keeping its traffic flowing, innumerable users now cooperated to achieve the same ends. Of course, there were unforeseen consequences. Users’ IP addresses became visible to interested observers, in contradiction to the anonymity for which the chan boards are famous. Furthermore, by helping to host the site, these new “0chan” users were likely assisting in the storage and spread of illegal content, including child pornography.
Despite these obstacles, it is likely that 8chan’s distributed strategy represents a Rubicon of sorts for extremist communications. In recent years, many right-wing extremists have increasingly been banned from social media and user-generated content platforms such as Twitter, Reddit, and YouTube, and dropped by key infrastructural intermediaries such as web hosts, DNS registrars, and payment processors. This has led to the emergence of the “alt-tech” movement, an attempt by the far-right to develop its own digital communication platforms, ever-lower down “the stack”—the technical megastructure of the Internet. But as sites like 8chan lose their relationships with key infrastructural intermediaries, we might now be seeing an inevitable migration toward distributed, encrypted, and dark web platforms.
Pressures on Far-Right Media
It is hard to observe this trend without a sense of inevitability—fate, even. The combined legal, technological, economic, and social pressures on extremist media are larger and more complex than any one company, or even government. At times they seem inexorable. To name only a few:
1. Public and private regimes of governance
One key legal factor in extremist media’s “down-stack” momentum is Section 230 of the U.S. Communications Decency Act. Under Section 230, interactive computer services enjoy the freedom and protection to divest from extremist content. Section 230 indemnifies the providers of interactive computer services such as social media platforms, user-generated content sites and services, and infrastructural intermediaries such as web hosts and DNS hosting services from liability “on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” (47 U.S.C. § 230 (1996)). Social media platforms and infrastructural service vendors can thus ban distasteful content and divest from controversial clients at will, leaving extremists with no legal recourse.
2. Technical affordances of web platforms and internet infrastructure
Extremists’ alternative communications platforms rely on digital service vendors: web hosts, payment processors, and domain name servers. And as the far-right continues to be associated with hate speech, harassment, assault, and murder, these vendors continue to drop away, one by one. The vendors that remain to support alt-tech platform clones are themselves borderline criminal enterprises whose long-term reliability is extremely questionable. Over time, any extremist platform will find that the difficulty of retaining these vendors outweighs the difficulty of pursuing distributed, encrypted, or dark web solutions.
3. Financial norms structuring digital business
Digital startups typically make money in a combination of several ways. First, a startup can be acquired by an established company (like Facebook buying Instagram). Second, very successful startups can go public via an IPO (like Facebook itself). Third, some startups opt to remain “steady-state,” turning a profit for their founders via fees, advertising, or selling data (like Kickstarter). The first two of these are, of course, out of the question. And extremist associations make advertising revenue difficult to obtain in pursuit of a sustainable “mom and pop” model. Even 8chan’s relatively good advertising revenue did not preserve its presence on the surface web. A data mining and sales model would likewise almost certainly decimate these platforms’ user base. Long-term extremist platforms are economically unviable businesses on the surface web.
4. Public pressure
Public pressure, concerns over brand image, the risk of boycott, and indeed the moral conscience of intermediaries’ executives have motivated them to divest from extremist platforms. Given the wide berth afforded to tech companies by laws such as Section 230, and the strong libertarian “Californian Ideology” that guides so many Silicon Valley execs, public pressure has often been the decisive force pushing extremist media further down the stack. Shooters such as those in Christchurch and El Paso often believe that their violent spectacles will provoke mass civil conflict. This belief is almost always misplaced. In fact, attacks on civilians strongly correlate with diminished support for extremist groups. From a communications perspective, it appears that violent spectacle has actually led to the further marginalization of extremist media, via public outrage.
Going Distributed, Going Dark
Already, there is talk on the extreme right of developing self-financed communication networks combining the best of distributed, dark web, and encryption technology affordances. Such a system would be based on already technically viable engineering principles. The distributed hosting experiment of “0chan” could be combined with Tor’s onion routing techniques. And in order to circumvent the challenge of maintaining payment processors for extremist purposes, this network of devices could collaborate to mine cryptocurrency. Pre-existing encryption technologies could easily be incorporated into such a design.
While hosting, DNS, and payment processors would no longer act as points of failure, these dark, distributed networks would not be immune to intervention from those intent on their removal. It is theoretically possible that internet service providers (ISPs) might be able to recognize and disrupt such a network via its protocols—that is, the combination of rules and languages that machines in the network would use to send data to one another. This practice is already used by ISPs to interfere with file sharing. If it were inclined to likewise interfere with this dark, distributed far-right network, an ISP might develop methods to recognize the combination of protocols characteristic of the network. Then, the ISP could theoretically trace this “signature” back to its source and block the user.
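The signature-matching approach described above can be sketched in miniature. The snippet below is an illustrative toy, not a real deep packet inspection engine: it flags traffic by matching fixed byte patterns that open a protocol’s handshake, which is one common way ISPs identify file-sharing traffic. The BitTorrent handshake prefix shown is real; the “distnet” signature and all names here are hypothetical placeholders for the kind of fingerprint an ISP might derive for a distributed network.

```python
from typing import Optional

# Known protocol "signatures": fixed byte patterns at the start of a handshake.
SIGNATURES = {
    # Real example: BitTorrent handshakes begin with a length byte (19)
    # followed by the string "BitTorrent protocol".
    "bittorrent": b"\x13BitTorrent protocol",
    # Hypothetical signature for an imagined distributed hosting protocol.
    "hypothetical-distnet": b"DNET/1.0 HELLO",
}

def classify_payload(payload: bytes) -> Optional[str]:
    """Return the name of the protocol whose signature prefixes the payload,
    or None if no known signature matches."""
    for name, signature in SIGNATURES.items():
        if payload.startswith(signature):
            return name
    return None
```

An ISP-scale system would of course inspect live flows rather than single payloads, and modern protocols counter exactly this technique by encrypting or randomizing their handshakes, which is why such blocking is an arms race rather than a permanent fix.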
But before society entrusts the governance of controversial speech to a handful of ISPs, better involvement by law enforcement and civil society should be pursued. A more desirable solution might come in the form of increased oversight and cooperation between public and private entities. Online monitoring of the extreme right by law enforcement has to date been neglected, to the detriment of public order. Private watchdog groups have provided important undercover monitoring, which proved useful in the case against accused Charlottesville murderer James Alex Fields, Jr., for example. But monitoring alt-tech should be seen as a critical issue of preventative public safety, not merely an ex post facto resource for prosecution and civil suits.
As extremist platforms crumble under myriad legal, public, and financial pressures, debates as to the desirability of this state of affairs seem almost beside the point. The momentum of these forces, and the relative speed with which they are effecting this shift, seem to overwhelm and outpace attempts to shift norms. Every chapter in the history of extreme right communication has brought with it some tragedy that might have been avoided with adequate monitoring and intervention. Surely, a new wave of extremist media embedded in the widespread criminality of the dark web would be no exception. Rather than asking whether this future of distributed, encrypted, and dark web extremist platforms is preferable to the (perhaps) now-dwindling surface web extremist ecosystem, we should prepare now for its inevitable arrival.
Mr Brian Hughes is a Doctoral Fellow at CARR and a doctoral candidate in the School of Communication at American University. See his profile here.
© Brian Hughes. Views expressed on this website are those of individual contributors and do not necessarily reflect those of the Centre for Analysis of the Radical Right (CARR). We are pleased to share previously unpublished materials with the community under Creative Commons license 4.0 (Attribution-NoDerivatives).
This post was also hosted by our media partner, Rantt Media. See the original post here.