Facebook and Twitter are getting rich by building a culture of snitching

Big Brother is watching; Big Brother is us.
Image: AP Photo/Jeff Chiu

Governments are experts in getting people to police one another. Communist East Germany maintained a vast network of citizen informants during the second half of the 20th century. Northern Ireland offers large payouts to informants who hand over information on drug smuggling. In the US, the FBI has pressured Muslims to become informants. But late-capitalist society has entered a new era: one in which corporations—whose interests are primarily financial—encourage consumers to rat each other out.

Facebook, Twitter, and other social-media sites have given rise to an online snitching culture, in which users are expected to monitor one another’s actions and report problematic activity to the company. To encourage users to police each other, for example, Facebook has made “flagging” an essential component of its content moderation system. And so we have put each other under mass surveillance—and we are barely beginning to consider the implications.

From snitches to riches

Much has been written about how social-media users are actually the product that companies like Facebook and Twitter sell. These sites are free to use because they capitalize on user data, a practice that Harvard Business School professor Shoshana Zuboff calls “surveillance capitalism.” She writes: “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.”

Given that our data is valuable to companies, they have a financial interest in keeping their platforms clean, so to speak. Videos of beheadings and shootings, blatant racism, sexual harassment, and even nudity simply aren’t good for the bottom line, particularly when you’re a multinational company with a global user base, like Facebook, Google, and Twitter.

In order to keep the cost of policing their platforms low, these companies outsource at least some of their content moderation and rely on users to do the rest. In a 2014 paper published in the journal New Media & Society, researchers Kate Crawford and Tarleton Gillespie write that many companies understand user input to be necessary for “maintaining user-friendly spaces and learning from their community.” Flagging is “a mechanism to elicit and distribute user labor—users as a volunteer corps of regulators.”

But flags are “by no means a direct or uncomplicated representation of community sentiment,” according to the scholars. In other words, relying on users to police one another results in uneven, flawed content moderation. “Community policing” is intended to be a trust-based system. But users are human, and bring their own biases and interpretations of community guidelines to their decisions about who and what to report.

For example, in 2014, as rockets rained down on Gaza, a Hebrew-language page on Facebook called for attacks against Palestinians. Presumably because the page used the word “terrorists” as a substitute for “Palestinians,” it wasn’t removed—even after numerous reports—until it received media attention. Similarly, I’ve heard countless complaints about liberal and atheist Arabic-language content being removed. The assumption is that users’ anti-Muslim bias is at work behind the scenes.

Snitching by design

In an informal (and certainly un-empirical) survey, I asked my friends if they ever flag other users on Facebook or other social media. Some said no. Many others said that they report things like hate speech, terrorism, impersonation, spam, and pages that encourage violence. Several women said that they report sexual harassment, often to no avail.

But when I asked my friends if they ever report someone who is simply annoying them, many admitted to doing so at least once.

If a Facebook user stumbles upon content they find alarming in some way, they can click on “Report post” from a drop-down menu to flag the offending post to the company. In the reporting interface for most types of content, the user will be presented with the following options: “It’s annoying or not interesting,” “I think it shouldn’t be on Facebook,” or “It’s spam.” Selecting the middle option presents the user with a range of other options—from “it’s sexually explicit” to “it describes buying or selling drugs”—that draw from the company’s community standards.

The first option—“it’s annoying or not interesting”—indicates that Facebook has thought about how users might abuse its flagging tool. Users who select this option are presented with other choices. They can block the offending person, or send them a message telling them why they were wrong to, say, post a mean comment about Hillary Clinton’s appearance. In theory, this mechanism would cause users to think twice before submitting false reports about other users. But in practice, users have learned how to circumvent this check.
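To make the flow described above concrete, here is a minimal sketch of how such a flag-routing step might look in code. The option labels are taken from the article; everything else—the Flag structure, the route_flag function, and the queue names—is a hypothetical illustration, not Facebook’s actual system.

```python
# Illustrative sketch of a flag-routing flow like the one described above.
# The option labels mirror the article; the data structures and triage logic
# are hypothetical and do not reflect Facebook's real implementation.

from dataclasses import dataclass
from typing import Optional

# Top-level choices shown to the reporting user.
TOP_LEVEL_OPTIONS = (
    "It's annoying or not interesting",
    "I think it shouldn't be on Facebook",
    "It's spam",
)

# Sub-reasons drawn from community standards, shown only when the
# user picks the middle option (hypothetical subset).
POLICY_REASONS = (
    "It's sexually explicit",
    "It's hate speech",
    "It describes buying or selling drugs",
)


@dataclass
class Flag:
    reporter_id: str
    post_id: str
    option: str
    reason: Optional[str] = None  # only set for policy-based reports


def route_flag(flag: Flag) -> str:
    """Decide what happens next for a single flag (illustrative only)."""
    if flag.option == "It's annoying or not interesting":
        # Friction step: steer the reporter toward blocking or messaging
        # the poster instead of filing a policy report.
        return "offer_block_or_message"
    if flag.option == "I think it shouldn't be on Facebook":
        # Policy reports go to a (hypothetical) human-review queue.
        return f"queue_for_review:{flag.reason}"
    if flag.option == "It's spam":
        return "queue_for_spam_review"
    return "ignore"


# Example: a report that claims a policy violation.
print(route_flag(Flag("u1", "p42", TOP_LEVEL_OPTIONS[1], POLICY_REASONS[2])))
# -> queue_for_review:It describes buying or selling drugs
```

In a sketch like this, the “annoying” branch never reaches a review queue at all—which is exactly the check that, as described below, users have learned to route around by choosing a policy reason instead.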

Crawford and Gillespie have found that users tend to adjust their expression to adhere to the “dominant vocabulary of complaint.” They concluded: “[As] long as these large-scale platforms, which host so much of our contested, global public discourse, continue to rely on flags and understand their obligation to respond only in terms of flagging, users wishing to object will find they have little alternative.”

Mechanisms of complaint

In 2014, a Verge headline called Facebook’s “report abuse” button a “tool of global oppression.” A number of Vietnamese activists had been kicked off the platform after being reported en masse for violating the community standards. The Verge called the phenomenon “Facebook raids”—“a large group of people all pressing the Report Abuse button at once.”

While social-media companies generally deny that the volume of reports on a given item influences whether it will be taken down, many people I spoke with believe volume makes a difference. Take Ásta Guðrún Helgadóttir, an Icelandic politician who says she sometimes reports profiles that are impersonating her or misrepresenting her work. “The trick is to get more people to report the same profiles,” she says. “One report doesn’t do the trick but if few other friends report it too, then they normally close it within few hours.”

Whether the volume of reports matters or not, it’s fairly easy to flag another user. Reporting people for violating Facebook’s real-name policy or for being underage will typically trigger the company to ask the user to upload a government-issued ID or other form of identification. But many users who might be at risk of being reported (for example, political activists in authoritarian countries) would avoid putting a copy of their ID on an insecure mobile phone, so they are effectively cut off from the platform.

There are also countless websites and Facebook pages devoted to crowd-sourcing reports in order to contribute to a given cause. Most of these crowd-sourcing sites (I’ve chosen not to link to any) run along a specific theme. Some are dedicated to anti-Semitic or anti-Muslim content; some link to sites run by terrorist organizations or drug cartels. A lot of the content appears to violate Facebook’s community standards. Other such sites are clearly intended to target individuals: academics, journalists, other public figures, and very often, women. These pages sometimes explicitly encourage users to make false reports—a clear acknowledgement that they know they’re gaming the system.

“User-generated censorship is the strategic manipulation of social media to suppress speech,” writes Chris Peterson, a researcher at MIT’s Center for Civic Media who wrote his master’s thesis on the topic. In a brief summary of his research, Peterson presents several cases in which users banded together to act against other users.

In 2011, for example, progressive British activists organizing a strike created a website called J30Strike.org. They relied largely on Facebook to spread the word of the strike. Then, 10 days before the strike, Facebook started blocking people from posting links to the J30Strike website. Opponents had apparently flagged those links as “abusive or spammy.” In another example, social-media users coordinated a campaign to flag Sarah Palin’s controversial note about a “Ground Zero Mosque” as hate speech, successfully getting Facebook to take the note down.

Clearly, there are real concerns about how people can take advantage of Facebook’s dependence on community policing in order to censor others. One oft-suggested alternative would be to use algorithms to moderate content instead. Algorithms already shape what users see and identify certain types of banned content. But they have flaws of their own: they struggle to detect tone and sentiment in more complex, text-based messages. And so community policing is likely to remain the primary way that companies monitor social platforms.

In the meantime, companies can lessen the effects of community policing by adjusting their content moderation policies to be more in line with the spirit of freedom of expression. That might involve, for example, more leniency toward non-sexual nudity and anonymity while ensuring that users can still report violent or harassing content. More importantly, we all need to do some serious reflection about how community policing—or, for that matter, handing over so much data to companies—benefits and harms us. For while social-media companies can bring us together, they can just as easily turn us against each other, too.