Future Tense

How Facebook Can Prove It Doesn’t Discriminate Against Conservatives

More transparent moderation policies would help everyone.


During last week’s congressional hearings into Cambridge Analytica’s harvesting of data from (at least) 87 million Facebook users, several Republican lawmakers worried that companies like Facebook might have a “liberal bias” in the way they enforce their rules and moderate content posted by users. In 10 separate instances, Republicans brought up cases in which content posted by conservative-leaning Facebook users had been removed in error by the company. The most frequently cited case was the alleged censorship of the conservative video bloggers Diamond and Silk, who have asserted since September that Facebook has purposely limited the reach of their brand page. On April 5, they received a message from Facebook’s policy team saying the company had determined their content was “unsafe to the community.”

Zuckerberg and Facebook have said that the “unsafe to the community” message was sent in error and have apologized. But the deeper claims of censorship were somewhat misleading: Research by ThinkProgress suggests that video content from across the political spectrum was made less visible after recent changes to Facebook’s algorithms. By ThinkProgress’ analysis, Diamond and Silk apparently suffered less in these changes than comparable liberal-leaning outlets. Representatives from the company told the Washington Post that they had reached out to Diamond and Silk to provide more context about the changes.

Still, lawmakers had other anecdotes to highlight, alleging political bias was responsible for the rejection of an ad for a conservative candidate running for Michigan state Senate, the removal of an ad from a Catholic university featuring a cross, and the takedown of a Chick-fil-A Appreciation Day page.

We can debate the merits of individual cases. But whatever your political stripe, these are important concerns. Facebook, like many other social media platforms, is facing increasing scrutiny over potential bias in the way it makes decisions about what people are allowed to post online and what content becomes visible. The rules are often opaque, and there is almost no public accountability in the system. Without more transparency about how exactly decisions are being made and reviewed, it is no wonder that there are growing concerns about potential bias. Despite claims to the contrary by Sen. Ted Cruz and others, however, there is ample evidence that content-moderation problems do affect users across the political spectrum. Indeed, errors in moderation are much more likely to impact already marginalized groups.

Content-moderation systems are incredibly complex, involving a complicated tangle of human and machine enforcement, and they can be manipulated by organized flagging and spamming campaigns. There’s little good data to track whether these systems are operating fairly, and platforms have historically worked to keep their internal processes secret. That’s why we have been working over the past several years to collect reports from users whose content has been taken down by social media platforms. We want to evaluate the scope and impact of content moderation, and we’re using this data to develop a set of independent standards for content moderation. Our data showed that during the 2016 elections, supporters of Donald Trump, Hillary Clinton, Bernie Sanders, and Jill Stein alike reported the removal of their content, as did users of many religious backgrounds.

Typically, content is first flagged for removal by other users, which triggers an evaluation by commercial moderators, often low-paid contract workers who have to assess hundreds of pieces of disturbing content per hour. They check the content against a version of the company’s highly detailed internal policies, which are not made available to the public and which spell out, for example, what specific phrasing targeting which specific groups would constitute a violation. The interface they use may not give them much context to evaluate the content—and, again, they are expected to move quickly.

Such a complex system is bound to result in errors from time to time. In 2016, for instance, the company removed a Pulitzer Prize–winning photo, widely known as “Napalm Girl,” of a young girl running naked through the streets of Vietnam after a napalm attack. The photo was removed for violating Facebook’s community standards and had in fact been used internally as a training example of content to take down. Following a public outcry, the company acknowledged that some content that technically violates its community standards may nevertheless serve the public interest.

But as Rep. Bill Johnson, a Republican from Ohio, put it, “[E]very time a mistake like that is made, it’s a little bit of a chip away from the trust and the responsibility factors. How do you hold people accountable in Facebook, when they make those kind of mistakes of taking stuff down that shouldn’t be taken down, or leaving stuff up that should not be left up?” Indeed, it is often the workers who are blamed when they inevitably make mistakes, but as the Napalm Girl incident shows, the problem is a much more difficult, systemic one.

Zuckerberg has acknowledged that this is an area the company needs to pay more attention to, no doubt anticipating that content moderation will only become a bigger problem as the company increasingly turns to machine learning techniques to automate the process. Toward the end of his testimony, Zuckerberg said that he thinks the U.S. needs to figure out and create the set of principles that we want American companies to operate under.

Whatever those principles turn out to be, the first step must be more public accountability. The concerns users have about potential bias all arise because moderation systems are so impenetrable. Users often don’t have the information they need to understand why their content was removed or their account suspended, and independent researchers don’t have access to the data they need to investigate claims of systemic bias.

For one thing, platforms should clearly inform users which of their posts were removed or triggered an account suspension, explain which policy the content was found to violate, and offer reliable systems to deal with and correct the inevitable mistakes. Most major U.S.-based social media platforms offer their users some form of appeal, but it typically doesn’t work well. For Facebook, the option to appeal is currently available only when an account or page is removed. If your posts, photos, or videos are removed, or if you receive a temporary ban of 12 hours to 30 days, you will not be able to appeal. And the appeal process that does exist is limited: Many of those who submitted reports to us said that they were not aware they had the option or, if they did, that their appeal was unsuccessful. The major platforms have not yet invested the resources to work out how to provide a reliable, independent review of decisions in a way that is both trustworthy and workable at the scale they need.

Many platforms already produce transparency reports that provide aggregate figures about government demands to remove content. Companies should issue similar reports to help people understand the number of posts removed and accounts deactivated for terms-of-service violations (after redacting personal information that could raise privacy concerns, of course). Ideally, these figures would be broken down by the policy violated, how the content was identified for moderation, the number of appeals, and their success rates. As moderation teams continue to grow, platforms should also begin to provide better information about the demographics, working conditions, training, and quality-assurance processes for their moderators, who are doing a job that can be traumatizing.

These are easy first steps. The harder challenge will be to find ways to help independent researchers and civil society verify, track, and report on the performance of moderation processes over time. Platforms have historically been reluctant to open up their processes to external scrutiny. Ultimately, though, more transparency would be a win-win: It would enable users to better navigate these systems, allow the companies to shield themselves from claims of bias, and, in the long term, improve content-moderation processes overall. That is something that everyone—even Diamond and Silk—could like.
