When Limiting Online Speech to Curb Violence, We Should Be Careful

Opinion: Silencing forums that spread mass violence can also silence the marginalized
Protestors in New York rallied against gun violence on August 4, after a mass shooting in El Paso, Texas. Atilgan Ozdil/Getty Images

America's ongoing problem with mass violence—and the difficulty we are having in quelling it—is causing many to call for the elimination of online forums used by the perpetrators. Long-term congressional gridlock on other solutions has many looking for new prevention strategies and new people to hold responsible.

The hope—for some it may be a belief—is that eliminating online speech forums will help prevent future violence. This is understandable. Everyone wants to live in a country where they are safe in their local stores, at festivals, and in other public places. The vile ideas driving shooters whose actions have caused unspeakable pain and loss are in plain view on 8chan, and the thought that we could just make them go away has strong appeal.

But this is also a critical moment to look closely at what is being proposed and pay attention to the potential consequences for us all. We all rely on our internet communities for connection, information, and organizing against violence. The same mechanisms used to eliminate online forums hosting objectionable speech are all too often used to silence marginalized voices of people who have important things to say, including those drawing attention to hate and misogyny. Rules prohibiting violence have already taken down Syrian YouTube channels documenting human rights violations, while Facebook discussions by black Americans about racism they have experienced have been removed as hate speech.

Two key strategies have emerged to hold online forums responsible for violence: deplatforming and increasing the liability imposed on internet intermediaries by changing Section 230 of the Communications Decency Act (CDA). Both strategies are notable because they are not directly aimed at the perpetrators of violence, or even at others who are participating in the hateful forums. They are instead aimed at the chain of companies or nonprofits that host the speech of others. For either approach, there is reason to tread carefully.

Deplatforming is a nonlegal strategy that involves pressuring companies to stop hosting or servicing certain individuals or forums, thus removing them from the internet entirely or making them harder to find. This strategy recognizes that everyone who speaks online is dependent on a series of intermediaries, including direct ones like Facebook or YouTube and ISPs like Comcast or Verizon. They also include indirect intermediaries further upstream from the user, such as website hosting services, domain name registrars and domain hosts, and DDoS protection services like Cloudflare, which is currently in the news for cutting off services to 8chan.

The second strategy is a legal one that would open all of the above intermediaries to potential lawsuits by modifying CDA 230. Paradoxically, this law was passed to ensure that hosts could moderate content on their sites—protecting them from liability for both taking speech down and leaving it up. CDA 230 is rightly regarded as the law that allows all of us to participate and speak out online, since few hosts could survive if they had to face potential lawsuits every time someone criticized a company (Yelp) or said something that turns out to be wrong (Reddit or Wikipedia). Regardless of its primary benefits to us as speakers, CDA 230 has become the convenient—and often mistaken—scapegoat for those angry at technology companies for any number of reasons.

Both strategies have surface appeal in response to hateful speech. It can feel viscerally good to try to shut down these forums or chase them from host to host, or to hold someone accountable even if it is for what is said by others. But once the power to silence people has been turned on, whether through pressure or threats of lawsuits, it doesn’t just go in one direction. The power to stop someone you hate from speaking can be used to stop speech by someone you love, or your own speech. That power will be used by those who wish to silence their political enemies, including governments and big companies around the world. In our nearly 30 years of helping people make their voices heard online at the Electronic Frontier Foundation, we have seen how censorship reinforces power far more than it protects the powerless.

After the 2009 shootings at Fort Hood in Texas, we saw calls to ban forums where Muslims gathered to speak. We’ve seen hate speech prohibitions in companies’ terms of service used to silence conversations among women of color about their experiences being harassed. We’ve seen regulations on violent content result in the erasure of vital documentation of human rights abuses in Egypt and Kashmir, and of domestic law enforcement brutality here in the United States. We’ve seen efforts to convince upstream providers to block information about problems with electronic voting machines and actions to protect the environment.

Both strategies also assume that we want to double down on the idea that representatives from private companies—generally underpaid and traumatized content moderation contractors, but also the creators of unmoderated forums like 8chan—should be the primary deciders about what gets to be on the internet. They also assume that there is global agreement about what should be allowed and what should be banned.

Online hosts that do decide to deplatform a speaker or forum must do so only after careful consideration, applying predetermined and clear standards. Companies should strive for as much transparency as possible in their decisionmaking, for both those impacted by their decisions and the general public. They should have robust and ready processes to correct errors and guard against those who will inevitably try to game the rules to silence their enemies. In response to ongoing issues with the major hosts of user-generated content, EFF helped write and promulgate the Santa Clara Principles in May 2018 to try to outline some basic transparency and due process standards that those companies should implement when they directly host user-generated content.

Deplatforming and eliminating Section 230 both satisfy a craving to do something, to hold someone or something responsible. But make no mistake: Both carry great risks if we want the internet to remain a place where powerful entities cannot easily silence their less powerful critics.

Correction 8-9-2019, 1:55 pm EDT: An earlier version of this story incorrectly stated the year the Santa Clara Principles were written.



