Facebook must stop pretending to be innocently neutral and start acting more like a media company

Time to face reality.

Facebook wants to be seen as neutral territory. With close to 1.8 billion monthly active users, the social media giant has a vested interest in positioning itself as an impartial platform—the better to avoid alienating any of its users or advertisers.

“We are a tech company, not a media company,” CEO Mark Zuckerberg said this year in response to a student’s question about whether Facebook might take a more editorial role in the future. Because Facebook builds tools but does not produce original content, he argued, its role is to serve as a simple conduit for information. This argument also effectively seeks to absolve Facebook of responsibility for what happens on its platform. Perhaps people are sharing and commenting on more fake political stories than real ones on the site, but hey, don’t shoot the messenger.

Still, several recent high-profile events, including the 2016 US presidential election, suggest that it’s time for Facebook to face reality. Its professions of neutrality are at odds with how the platform really works, as well as how people use it. Like it or not, Facebook is more than a publishing platform. In order to operate ethically in the 21st century, it needs to embrace that fact—and start developing a clear editorial strategy.

Technology is not neutral

A lot of Americans are getting their news from Facebook. A May 2016 Pew survey found that a majority of US adults—62%—get news on social media, with 66% of Facebook users surveyed saying that they get news from the platform. In another recent Pew survey on social media use, two-thirds of respondents said that “a lot” or “some” of the content they saw was related to politics.

But Facebook’s relationship with politics is a complicated one. On the one hand, it has pursued an active role in US politics, taking credit for helping more than two million people register to vote. The platform stands to make an estimated $300 million from political advertising in 2016, and announced a partnership with ABC News to host special coverage on Election Day. On November 2, the company announced record online advertising revenue, fueled in part by US political spending.

However, Facebook also appears to have been applying an editorial strategy to its coverage of political news without admitting it. Earlier this year, a report emerged in which former Facebook editors said they had routinely suppressed conservative sources in the platform’s Trending Topics widget. The editors, journalists by trade, had been tasked with curating the stories that appeared in the widget and drafting short descriptions to match. The news sparked a backlash from conservatives, and the remaining editors were laid off soon after.

The editors were replaced with a proprietary algorithm, a move that quickly backfired. Without human editors, Facebook was unable to appropriately regulate which news stories were promoted—and in some cases conspiracy sites, fake news stories and even fabricated publications were allowed to trend.

Underlying its decision to replace humans with code is Facebook’s apparent belief that technology is neutral. In fact, algorithms are developed by human engineers and can perpetuate their biases; these algorithms in turn have a profound influence on what shows up in a user’s timeline.

Yet despite the overall degradation of Trending Topics, Facebook has failed to make meaningful improvements. Users complain that the algorithm misses important news stories, with topics like the ongoing anti-pipeline protests at the Standing Rock Sioux reservation remaining largely absent from the trending list. In effect, by attempting to remove human bias from the curation of news, Facebook has left users susceptible to machine bias.

In other instances, it is human bias that has affected content. During 2016’s heated election season, Republican nominee Donald Trump used social media to fan the flames of racism and sexism, with some of his posts arguably conflicting with Facebook’s policy on hate speech.

Despite an internal push to moderate Trump’s posts, the company decided not to take the demagogue’s posts down, with Zuckerberg explaining that it would be inappropriate to censor a major political candidate’s posts. This was, at its core, an editorial decision.

We see this conflict at work in Facebook’s Community Standards as well. The Standards set relatively clear rules for what’s allowed, but the onus of reporting prohibited content falls largely on users. Located in cities throughout the world, paid content moderators make split-second decisions about whether flagged content goes or stays, reviewing everything from text posts to beheading videos. Other content might be removed at the behest of foreign governments or law enforcement. Clearly, no matter what Facebook might argue, its rules don’t necessarily apply equally to all users.

Diversity and Donald Trump

This fall, Facebook faced further controversy when it was revealed that board member Peter Thiel had donated $1.25 million in support of Donald Trump’s presidential campaign. (Thiel is now a member of Trump’s transition team.) In response, Zuckerberg wrote in a memo:

We care deeply about diversity. That’s easy to say when it means standing up for ideas you agree with. It’s a lot harder when it means standing up for the rights of people with different viewpoints to say what they care about. That’s even more important.

Zuckerberg is Thiel’s protégé, and his permissiveness toward Trump’s hateful political rhetoric seems to stem at least in part from his beliefs about ideological diversity. Thiel wrote in his 1995 book, The Diversity Myth: “Real diversity requires a diversity of ideas, not simply a bunch of like-minded activists who resemble the bar scene from Star Wars.”

But Thiel’s large political contributions to a candidate who seeks to exacerbate racial and religious divisions are antithetical to the idea of an open and connected world. And frankly, ideological diversity is not Facebook’s main diversity problem.

Globally, Facebook’s senior leadership is 73% male. In the United States, representation in senior leadership roles is only 3% black and 3% Hispanic. Though the company recently touted improvements in its hiring process, in 2015 head of diversity Maxine Williams chalked up Facebook’s corporate homogeneity to a public education system that was not able to deliver a diverse talent “pipeline.” Williams’s comments were swiftly criticized. The lack of diversity among Facebook’s leadership is particularly frustrating given the demographics of its users: a recent Pew study found no notable differences in social media use by racial or ethnic group.

This lack of diversity also seems to manifest in Facebook’s more controversial policies and decisions. For example, ProPublica reported in October that Facebook’s advertising system allowed businesses to exclude users by “ethnic affinity”—a tool that, when applied to housing ads, would violate the Fair Housing Act, which bars discrimination based on race as well as gender, religion, disability, and other protected categories.

Facebook defended the feature, saying that it was meant to help “people see messages that are both relevant to the cultural communities they are interested in and have content that reflects or represents their communities.” But as ProPublica wrote: “Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers. That’s basically what Facebook is doing nowadays.” Had Facebook’s advertising team been more diverse, it seems unlikely that such a tool would have been made available.

In this context, it’s easy to see why Zuckerberg’s defense of Thiel under the guise of “diversity” has infuriated so many people. Diversity of ideas is not the same as racial and gender diversity. The latter ensures technology companies are able to innovate and perform better, as well as meet the needs of a diverse user base.

Facebook’s responsibilities

Given the amount of editorial control that Facebook exerts across its platform, the company’s continued professions of neutrality suggest it is in deep denial. Its stated mission is to make the world more open and connected, and until now it has acted as if the best way to do that is to behave like a pure tech company. But it’s time for the company to admit that it wields so much influence over public opinion that it is, at its core, also a media company.

Facebook can be a force for good, shedding light on new viewpoints and life experiences. But the company must recognize and internalize its significance in the world. Its meteoric rise to global prominence is unprecedented, and the social network is still influenced by an insular culture.

There is now a compelling public interest in the evolution of Facebook’s corporate culture. The company needs to bring in new, diverse perspectives. It needs to be accountable in its corporate governance. And above all, it must be transparent about how its algorithms and moderation policies censor, curate, and promote content. Only with that information can the rest of us make informed decisions about whether Facebook is a channel to the world that we really want to be a part of.