To judge by the headlines being passed around on social media during her campaign, Hillary Clinton was often in serious trouble.

“Experts” believed the Democratic presidential candidate had suffered brain damage. Or maybe she was trying to hide alcohol and drug addiction. She was also facing imminent indictment, now that the Federal Bureau of Investigation had finally found evidence of criminality in her use of a private email server — though the New York Police Department, after uncovering shocking evidence of her links to money laundering and sex crimes involving children, might pounce first.

In the heat of the most bitter US presidential election in memory, the internet was having a field day. A barrage of fake news — much of it aimed at undermining Mrs Clinton or boosting her opponent — was just part of the diet. It included conspiracy theories, misdirection, prejudice, harassment and hate speech, specially created to circulate on the digital networks that are now central to mass communication and media consumption.

According to critics, the digital platforms contributed to a virulent tribalism, as long-simmering partisan divisions boiled over. And they served to further undermine faith in established media outlets, as many members of a polarised electorate found ready support for the prejudices and unfounded suspicions they already harboured.

The aftermath has brought criticism of some of the biggest internet companies — in particular Facebook and Twitter — and an admission that change is needed. “We have a problem in the tech industry we have to deal with,” says John Borthwick, a New York tech investor. “These platforms are central to our democracy. Something has started to go wildly wrong.”

Finger pointing

The backlash in the wake of the election of Donald Trump, the Republican candidate, has centred on fake news: false reports that are dressed up to look like genuine news articles, sometimes from news sources invented solely for the purpose. Passed around widely on Facebook, retweeted on Twitter or promoted by Google’s search algorithms, some of these stories managed to infiltrate the popular political discourse.

A site's fake news story about the popular vote in the US election

They included the “report” from the Denver Guardian, a non-existent publication, that an FBI agent suspected of leaking emails from Mrs Clinton’s private server had been found dead in an apparent murder-suicide. With the velocity that characterises the flow of news on Facebook, that report was shared up to 100 times a minute on the network.

Not all the misinformation favoured the Republican candidate, though most of it did. Of the 20 fakes that generated the most audience engagement on Facebook in the final three months of the campaign, all but three were either pro-Trump or anti-Clinton, according to an analysis by news website BuzzFeed. The made-up stories also touched a nerve: Facebook’s users engaged more deeply with the fakes than with the top 20 stories produced by a selection of established media companies.

The viral success of fake news on Facebook and the role the sharing of these articles might have had in tilting the election for Mr Trump have caused considerable angst inside the company.

“There is anxiety around the election and there are questions around the role that Facebook and other organisations may have played,” says one person with knowledge of the internal discussions at the social networking site.

President Barack Obama said last week that, when it is no longer possible to tell “what’s true and what’s not, and particularly in an age of social media . . . then we have problems”.

In Silicon Valley, where a dominant liberal culture is in shock over the election result, there has been finger pointing over the part the world’s most powerful tech companies may have played.

“People need to step up and call out the platforms for being effectively propaganda machines for either side,” says Justin Kan, a successful entrepreneur and now an investor at Y Combinator, which funds internet start-ups. “The leadership in Silicon Valley needs to call out Facebook to do the right thing.”

Mark Zuckerberg, Facebook’s chief executive, has rejected much of the criticism, while conceding that there is “more to be done” to stop the spread of fake news. In the days after the election, he claimed it was a “pretty crazy idea” that false stories had somehow had a bearing on the result. But growing pressure led him at the end of last week to outline a number of measures the site was taking to try to tackle the problem.

How a fake news item about Hillary Clinton appeared on Twitter

The steps taken by the big internet companies in the wake of the election highlight the pressure they have been under to act. Google and Facebook moved last week to prevent their advertising from appearing on sites that carry fake news, a belated attempt to make lying less profitable. Twitter suspended a batch of alt-right accounts — linked to US rightwing extremist groups — for hate speech.

Evidence that has surfaced since the election shows that the digital platforms will have to do more to weed out misinformation and harassment from their systems, taking in not just fake news but a wider range of abusive behaviour.

Bots — automated systems designed to imitate people — did much to circulate fake news headlines on Twitter, according to Philip Howard, a professor at the Oxford Internet Institute. About a fifth of all tweets about the election were generated by accounts that produce high volumes of posts, he says, a clear indication that they were bots rather than real users.
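
To make that heuristic concrete, here is a minimal sketch of the kind of volume-based test Mr Howard describes, flagging accounts whose posting rate is implausibly high for a human. The threshold and data layout are illustrative assumptions, not details from his study.

```python
from collections import Counter

# Assumed threshold: sustained high-volume posting (for example,
# 50 or more tweets a day) is treated as a bot signal.
POSTS_PER_DAY_THRESHOLD = 50

def flag_likely_bots(tweets, days_observed):
    """Flag accounts whose average daily post volume exceeds the threshold.

    `tweets` is an iterable of (account_id, text) pairs collected over
    `days_observed` days; this data shape is assumed for illustration.
    """
    counts = Counter(account for account, _ in tweets)
    return {
        account
        for account, total in counts.items()
        if total / days_observed >= POSTS_PER_DAY_THRESHOLD
    }

# An account posting 700 times in a seven-day window gets flagged.
sample = [("@newsbot42", "...")] * 700 + [("@human", "...")] * 20
print(flag_likely_bots(sample, days_observed=7))  # {'@newsbot42'}
```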

‘Digital denial’

Google’s algorithms have also been shown to be vulnerable. For instance, it is a week since a false report emerged that George Soros — often a target of rightwing attacks — had died.

But a search for “George Soros” on November 20 still returned this headline at the top of the In the News section of Google’s first page of results: “Breaking Intel: George Soros DEAD”. The story was from a site called the Event Chronicle.

The Event Chronicle's fake news article about George Soros

Facebook’s Mr Zuckerberg says only 1 per cent of the news circulating on the site is fake. But with the company now used as a news source by nearly half of Americans, that represents a vast amount of traffic. Also, Mr Howard says the location of bot groups operating on Facebook suggests much of the fake information was targeted at Facebook users in swing states like Ohio and Florida, potentially increasing its impact.

The failure to block the tide of misinformation has reawakened complaints from the traditional media world that the digital companies have deliberately turned a blind eye, much as they were accused of doing in the case of copyrighted content.

Google and Facebook have resisted describing themselves as media companies or publishers that are responsible for the content they distribute.

“These companies are in digital denial,” says Robert Thomson, chief executive of News Corp. “Of course they are publishers and being a publisher comes with the responsibility to protect and project the provenance of news. The great papers have grappled with that sacred burden over decades and centuries, and you can’t absolve yourself from that burden or the costs of compliance by saying, ‘We are a technology company’.”

A Facebook post linking to a fake news item on Obamacare

Along with other recent failures, such as errors in measurement that led Facebook to inflate the number of times its video ads were seen, the furore over fake news has intensified calls for internet companies to think of themselves as media concerns.

“The measurement, fake news and extremist content issues highlight that new media or social media companies are not technology companies; they’re media companies,” says Sir Martin Sorrell, chief executive of WPP, the world’s largest advertising group. “They are responsible for the content in their digital pipes.”

Yet the business imperatives the internet platforms follow may have given them too little incentive to exercise this type of responsibility. Weeding out false information “hasn’t been a priority”, says Mr Borthwick. “Content, more often than not, has been a means to an end — and the end is more sharing, more connectivity.”

One former Facebook staffer also says the way the company is run may have exacerbated the distribution of fake news. Its engineers focus on improving engagement — clicks, likes, comments, shares — as the primary measures of success of any new feature. New projects are typically released after six-month “sprints”, during which pressure to increase those metrics is intense.

“Engagement is a dangerous drug,” says a former Facebook manager. “Nobody is incentivised to think critically about unintended, often long-term consequences.”

Another Facebook post linking to a fake news item critical of President Obama

That may also contribute to a “filter bubble” problem that leaves users inside an echo chamber of similar views.

Worse, the pursuit of engagement for its own sake may exacerbate the problem and add to the flow of angry, hateful and inaccurate information. “There’s a lot of evidence that what people share is not necessarily what they researched but what gives them an emotional response,” says Mr Kan.

The posts that bring the strongest reactions, adds Mr Borthwick, are “catnip to the news feed”. So the Facebook engineers have an incentive to feature these items most prominently, aiding the spread of information that deepens political divisions.
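
A toy ranking function illustrates the incentive Mr Borthwick describes: order items purely by engagement and emotionally charged posts rise to the top regardless of accuracy. The weights and post fields below are invented for illustration; they are not Facebook’s actual ranking signals.

```python
# Invented weights: shares are assumed to signal more engagement
# than comments, and comments more than likes.
WEIGHTS = {"shares": 3.0, "comments": 2.0, "likes": 1.0}

def engagement_score(post):
    """Score a post by weighted engagement alone; accuracy plays no part."""
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

posts = [
    {"title": "Sober policy analysis", "likes": 120, "comments": 10, "shares": 5},
    {"title": "Outrageous fake claim", "likes": 90, "comments": 300, "shares": 400},
]

# Pure engagement ranking puts the inflammatory item first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"], engagement_score(post))
```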

Room for improvement

It is unclear how far the internet companies will move to address these issues. Early attention has turned to the algorithms used to weed out fake news, an area in which many experts believe there is scope for improvement.

Mr Zuckerberg has made no mention of another issue raised by critics: whether the sites should hire human editors. Employing people to carry out detailed filtering of content is impractical given the scale of the networks, say critics such as Mr Borthwick.

But he, and others, argue that internet companies should still hire “public editors” who can help to establish guidelines and shape their thinking in product design and other issues that will affect the way their services are used.

Such calls may continue to fall on deaf ears. “The culture they’ve built and the people they’ve hired” mean that internet companies like Facebook simply do not recognise any need for editorial sensibility, says Ben Edelman, an assistant professor at Harvard Business School.

The cultural chasm runs even deeper. On Twitter, a commitment to free speech has long contributed to the site’s hesitation to stamp out harassment, a flaw it belatedly tried to fix last week with new controls to combat bullying, racism and misogyny.

Mr Zuckerberg, casting Facebook more as a communication platform than a media site, takes a similar stance. “We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible,” he wrote last week.

But old authorities become muted in a world where users’ voices are pre-eminent. In an interview last week with The New Yorker, Mr Obama complained that, on a Facebook page, an explanation of global warming by a Nobel Prize winner looks no different from one by a paid climate change denier.

Mark Zuckerberg, Facebook’s chief executive, speaking at the Techonomy 2016 conference in Half Moon Bay, California © Bloomberg

He added: “The capacity to disseminate misinformation, wild conspiracy theories, to paint the opposition in wildly negative light without any rebuttal — that has accelerated in ways that much more sharply polarise the electorate and make it very difficult to have a common conversation.”

In the wake of a bitterly divisive US election, Facebook users are retreating deeper into their “filter bubbles”. The bitterness of the loss, says Mr Howard, means that many on the losing side have been systematically “unfriending people who voted for the other candidate”.

The result is likely to be even deeper tribal divisions. That can only add to an environment in which many people are all too ready to believe the most biased or inaccurate information about the opposing camp — and to shout it out to anyone who will listen.

Blocking tools that help machines tell good stories from bad

How hard would it be to train a computer to spot fake news? Not very, according to critics who say companies such as Facebook and Twitter have not tried hard enough to block misinformation from their sites.

They compare the problem to previous online scourges, like the spam that once threatened to overwhelm email systems, as well as so-called “content farms” — sites that produce large amounts of low-quality articles designed to trick search engines and generate advertising. In these and similar cases, algorithms were adapted to identify and either expunge or downplay the offending material.
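
As a rough illustration of that spam-filter analogy, the sketch below trains a simple text classifier on labelled headlines. The training examples are invented, and a production system would need far larger corpora and many more signals than headline text alone.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented, toy-sized training set; real systems need large labelled corpora.
headlines = [
    "FBI agent in Clinton email case found dead",            # fake
    "BREAKING: candidate secretly indicted, media silent",   # fake
    "Senate passes appropriations bill after long debate",   # real
    "Federal Reserve leaves interest rates unchanged",       # real
]
labels = ["fake", "fake", "real", "real"]

# The same recipe used against spam: turn text into features, fit a classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(headlines, labels)

print(model.predict(["Shocking: agent found dead after leaking emails"]))
```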

With their huge resources, the internet companies should find this an easy problem to crack, says Ben Edelman, an assistant professor at Harvard Business School. “Facebook has more money than God,” he adds.

It took four students 36 hours last week to come up with a rudimentary tool for blocking fake news. Developed during a coding competition at Princeton University, the browser plug-in adds warnings to information that cannot be verified and suggests alternative material pulled from more reliable sites, says Nabanita De, one of the developers.
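
A plug-in of that sort might work along the lines sketched below, attaching a warning when a claim cannot be verified. The domain list and verification stub are hypothetical, not details of Ms De’s tool.

```python
from typing import Optional

# Hypothetical whitelist of more reliable sites.
RELIABLE_DOMAINS = {"reuters.com", "apnews.com", "bbc.co.uk"}

def annotate(link_domain: str, claim_verified: Optional[bool]) -> str:
    """Return the label a plug-in might attach to a shared link.

    `claim_verified` is True or False when a fact-check exists,
    and None when the claim cannot be verified either way.
    """
    if link_domain in RELIABLE_DOMAINS:
        return "no warning"
    if claim_verified is None:
        return "warning: this claim could not be verified"
    return "no warning" if claim_verified else "warning: disputed claim"

print(annotate("eventchronicle.example", claim_verified=None))
```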

A number of media companies are working on a project to help machines tell good information from bad, developing new types of metadata that help an algorithm judge a story’s reliability. Such signals might include how a news site is funded or information about the reporter, says Sally Lehrman, director of a journalism ethics programme at Santa Clara University, who leads the project. Google and LinkedIn have backed the idea.
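
That metadata could look something like the sketch below, with each story carrying machine-readable trust signals that an algorithm combines into a score. The field names and equal weighting are assumptions for illustration, not the project’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class StoryMetadata:
    outlet_funding_disclosed: bool  # how the news site is funded
    reporter_identified: bool       # byline links to a named reporter
    corrections_policy: bool        # outlet publishes corrections

def reliability_score(meta: StoryMetadata) -> float:
    """Crude 0-1 score: the share of trust signals a story carries."""
    signals = [
        meta.outlet_funding_disclosed,
        meta.reporter_identified,
        meta.corrections_policy,
    ]
    return sum(signals) / len(signals)

print(reliability_score(StoryMetadata(True, True, False)))  # about 0.67
```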

Writing last week, Mark Zuckerberg said Facebook was working on better ways to detect misinformation, but the plague of fake news during the US election has left many doubtful.
