74. The Jurisprudence of Instructional Violence, pt. 2
From the Hit Man Manual to Christchurch
Rice v. Paladin told us that when speech is “purely functional” it can be treated as conduct rather than protected advocacy. The problem now is scale and medium.
After Oklahoma City—the deadliest act of homegrown terrorism in U.S. history, accomplished with a massive homemade ammonium-nitrate truck bomb—observers found online forums and documents that republished, and in some cases refined, the technical details of the device used in that attack. Over the years the web has hosted troves of “recipes” and tactical guides, from Usenet posts to compiled “terrorist handbooks,” that explain how to make explosives and evade detection.
Transnational extremist outlets have long amplified this dynamic. Publications like Inspire and other jihadi magazines repeatedly published operational recipes that were then rebroadcast across encrypted channels, accompanied by photos and step-by-step instructions. Those manuals are explicitly designed to turn remote sympathizers into capable attackers, lowering technical barriers to violence.
The Christchurch mosque massacre in 2019 showed how instruction, ideology, and platform dynamics can combine. The shooter posted a manifesto online and livestreamed the attack, using the internet both to announce his intentions and to seed material meant to inspire copycats; the event underscored how modern attackers exploit online infrastructures to recruit, instruct, and broadcast violence.
Contemporary counter-terror reports echo the concern: authorities in Europe and beyond now flag instructional material (operational guides, DIY bomb instructions, encrypted “how-to” posts) as a core vector for radicalization and home-grown plotting, not merely rhetorical exhortation. The digital environment makes facilitation both cheaper and more diffuse: a single widely circulated manual can reach thousands, furnishing would-be attackers with the technical know-how they previously lacked.
Rice drew a line between persuading and performing. The internet has blurred that line by combining belief with detailed instructions for action. The challenge for courts and lawmakers now is practical and urgent: should this kind of content be treated as criminal conduct or as protected speech? And if it’s treated as conduct, how do we keep that power from eroding the core freedoms the First Amendment was meant to protect?
The internet has made it possible for a single piece of speech to instruct thousands. Should our understanding of “imminence” or “intent” evolve when technology makes harm both instant and global?


What’s changed today is that the internet massively amplifies both reach and immediacy. A single post or video can instantly equip thousands with the tools to cause harm, collapsing the time and distance that once separated advocacy from action. Still, expanding the definition of conduct too far poses real dangers: governments could overreach, labeling controversial or unpopular speech as “dangerous.” The challenge, then, is to craft a narrow standard that targets operational facilitation, that is, speech whose sole and intended purpose is to enable violence, without chilling legitimate political or academic discussion.

Perhaps the evolution we need is not the abandonment of “imminence” or “intent” but a more nuanced interpretation of both. When someone deliberately publishes instructions designed for a dispersed audience of potential attackers, that may satisfy intent even without a specific listener. Imminence, meanwhile, may need to account for technological immediacy, since harm can follow almost instantly once the information circulates. Ultimately, the First Amendment must continue to shield ideas and debate, but not what are essentially blueprints for harm disguised as ideology.