Volunteers unfurl a giant banner printed with the Preamble to the U.S. Constitution during a demonstration against the Supreme Court’s Citizens United ruling at the Lincoln Memorial on the National Mall Oct. 20, 2010, in Washington, DC. Photo by Chip Somodevilla / Getty Images
With the news that one of the shooters in the Dec. 10 rampage in Jersey City, N.J., had published anti-Semitic posts online prior to the attacks, there is more interest than ever in trying to curb hate speech.
The carnage at a kosher grocery store, together with an earlier incident that same day, left six people dead, one of them a police officer. The attacks come in the wake of prior episodes of violence, such as the synagogue shooting in Poway, Calif., and the mosque shootings in Christchurch, New Zealand, in which the shooters also published online diatribes.
Both sides of the aisle have been trying to do something about hate speech.
The Trump administration has recently drafted regulations that would curb anti-Semitic and anti-Israel speech on college campuses that receive federal support, on the grounds that such speech creates a hostile environment for Jewish students.
Meanwhile, progressives have spent the last several years pressuring tech companies such as Facebook and Twitter to terminate the accounts of groups who post material attacking racial and religious minorities and immigrants.
In the midst of this, law professors and other free speech advocates have been warning against the dangers of censorship and the erosion of First Amendment protections of free speech.
This leads to the question: What, if anything, can or should be done to curb hate speech? It turns out that the First Amendment is by no means the absolute barrier to curbing hate speech that many people assume it is.
The actual American law of free speech makes important distinctions among what speech is protected, to what extent, and from what kinds of interference.
The current American understanding of free speech law is actually relatively new. According to Justice Oliver Wendell Holmes, widely considered the progenitor of modern free speech protection, the First Amendment as originally understood only prevented censorship—that is, prior restraint of communication. It did not protect against subsequent punishment.
But in a series of opinions and decisions in the aftermath of World War I, the modern approach of protecting speakers who express unpopular, and even inflammatory, ideas began to take shape. Today, the First Amendment is understood as protecting any idea, no matter how obviously false or shocking.
Nevertheless, not all speech is protected by the First Amendment. And the limits do point to actions that can legally be taken against hate speech.
The most important limit on constitutional free speech protection is that it only applies against interference by the government.
Non-government actors, such as private companies, are generally free to censor the speech of employees and others associated with the company.
This is especially true of Internet companies, which, under Section 230 of the federal Communications Decency Act, are expressly shielded from liability for restricting objectionable material online. In other words, Facebook and Twitter are free to ban offensive speech, if they, and society at large, consider that to be a good idea.
A second limit on constitutional protection is that a speaker can be held liable for defamation of another individual.
That line of cases permitting defamation actions by individuals actually led the Supreme Court in 1952 to uphold the punishment of a speaker for disparagement of African-Americans in general, in Beauharnais v. Illinois, the so-called group libel case.
Beauharnais has never been overruled. But a 2017 case, Matal v. Tam, which prohibited the government from denying registration of trademarks it considered disparaging to racial groups, suggests that it is no longer good law. Nevertheless, individual defamation actions remain viable.
But the most important limit on the reach of the First Amendment in terms of hate speech is that solicitation of, and participation in, criminal activity is not protected. Thus, even though expressed in spoken words, the offer of money for a gangland hit is punishable.
This limit on free speech was upheld by the Supreme Court in Holder v. Humanitarian Law Project in 2010, allowing the government to ban the giving of material support to a terrorist group, even though that support consisted of otherwise protected speech.
The high court has been careful not to go too far down this road in contexts that can be considered political. In the leading 1969 case, Brandenburg v. Ohio, the court held that advocacy of violence or law violation can be punished only where it is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”
This is now known as the Brandenburg test.
Taken as a whole, the above content of free speech law offers a path to early intervention by the government to prevent violent attacks, while maintaining a high level of protection of the expression of offensive ideas.
A new response to hate speech would begin with the realization that the American approach of protecting offensive speech, even when it is false and outrageous, has worked well.
So, it is legal in the United States to deny the Holocaust, whereas it is illegal to do so in much of Europe; but there is certainly not any more Holocaust denial here than there. It turns out that the best remedy for false speech really is truthful speech, rather than arrest and prosecution.
In addition, the strict application of the Brandenburg test to peaceful but illegal protest, such as draft resistance and sit-ins, has also worked well. It has created a healthy arena of protest that has substituted for more disruptive political action.
But murder is not an idea, and violence is not something we should have to live with. In the realm of violence, speech protections should change with larger changes in the social and technological context.
Just as the Boston Marathon Bombers learned how to build bombs from an al-Qaeda website, though they had no other contact with the group, so terrorists of all kinds now associate loosely in dispersed networks that are nevertheless very effective at promoting terrorist acts.
This reality should allow Congress to designate such loose associations, and their accompanying websites, as terrorist cells. Such a designation would allow law enforcement to intercept even speech that would otherwise be protected, as was done in the Humanitarian Law Project case.
In addition, given the very real support that terrorists have found online, in the form of manifestos, videos and other encouragements, the courts should tighten the imminence requirement in the Brandenburg test in the context of speech encouraging violence.
Now that we know that these shooters really do intend to kill their victims as soon as they can, the police should be allowed to intervene sooner and more vigorously, when persons express support for the idea of killing.
These proposals are not major changes in the law. They retain robust protection for speech. But they do promise more effective law enforcement in dealing with these new and more violent threats.
Capital-Star Opinion contributor Bruce Ledewitz teaches constitutional law at Duquesne University Law School in Pittsburgh. His work appears biweekly on the Capital-Star’s Commentary Page. Listen to his podcast, “Bends Toward Justice” here.