
Why #NeverWarren should make you nervous about 2020

How Twitter made the Elizabeth Warren-Bernie Sanders dustup worse.

Elizabeth Warren and Bernie Sanders onstage after the Democratic debate held on January 14, 2020, in Des Moines, Iowa.
Scott Olson/Getty Images
Emily Stewart

The brewing feud between 2020 contenders Elizabeth Warren and Bernie Sanders came to a head this week on the Democratic debate stage and, inevitably, on Twitter. But is the online brawl as widespread and vitriolic as it seems? That’s actually a hard question to answer, because when it comes to online political discourse, it can be very difficult to distinguish manipulative disinformation from authentic, organically shared content.

Here’s the situation: Tensions between Warren and Sanders, longtime progressive allies, have been rising in recent days as 2020 primary voting approaches. Sanders’s campaign reportedly gave supporters a script encouraging them to go negative on Warren when talking to voters, and CNN subsequently reported that, at a private meeting between the pair in 2018, Sanders told Warren he believed a woman couldn’t win in 2020. Sanders has vehemently denied the account, while Warren has stated that it happened. During Tuesday’s debate, both Sanders and Warren repeated their accounts of the conversation, and the dynamic between them grew visibly tense. After the debate, CNN captured video of a possibly frosty encounter between the pair.

The conflict spilled over onto Twitter, where it appeared to be magnified in a big way. The hashtag #NeverWarren began to trend, and a wave of users flocked to Warren’s Twitter account to flood her replies with snake emojis. As has been the case with so many viral hashtags and discussions on Twitter, the incident showed once again that when it comes to what’s gaining traction on the internet, we still have a hard time telling what’s real, what’s fake, and what’s being spread by whom. How much of the activity around #NeverWarren is generated by bots? How much of it comes from the so-called Bernie Bros, the online army behind the Vermont senator? And how much of it comes from Warren supporters trying to combat the #NeverWarren hashtag, or from reporters tweeting about it, both of whom may be inadvertently causing it to trend higher on Twitter?

“It certainly harkens back to what we saw in 2016, and what we know happened in 2016. ... And there’s no reason for us to think that the same disinformation efforts that happened in 2016 aren’t happening right now,” said Whitney Phillips, a Syracuse University professor who studies media literacy and online ethics. “And so it creates this low level of paranoia with what you’re even looking at.”

Given how early it is in the 2020 presidential race — the Iowa caucuses are still about three weeks away, and we’re months away from having a Democratic nominee — this doesn’t bode well for the social media conversations to come, including potential disinformation, manipulation, and lingering questions about what online activity is and isn’t real. “Every single event of this nature is going to be proving ground for something worse the following day,” Phillips said.

What we know — and what we don’t know — about why #NeverWarren started trending

We’ll never get an exact picture of where the #NeverWarren hashtag started, how it took off, and who spread it. Because of the opacity of Twitter’s inner workings, we don’t know what exactly causes a topic to gain traction on the platform. Usually, it’s a mix of both bot-generated and organic engagement.


Often, a news story or hashtag will originate with a specific website or person, and the bots then serve almost as middlemen, helping it take off and making it look like a lot of people are talking about it right away, explained Filippo Menczer, a professor of informatics and computer science at Indiana University. So, for example, a hashtag starts with a specific user, then the bots start to spread it, and then more actual people pick up on it. Twitter’s trending algorithm then detects that activity and spreads it even further.

“The bots work as amplifiers,” said Menczer, who is also the creator of Hoaxy, a tool that tracks how information spreads on social media. “They’re used to manipulate the platform so that more humans will talk about [a topic]. By the time something goes viral or goes trending, a lot of humans are probably talking about it.”

And in the case of #NeverWarren, it’s not just the people promoting the hashtag who are making it spread, but also those trying to combat it. As NBC News reporter Ben Collins noted on Wednesday, many of the top tweets about the #NeverWarren hashtag came from people denouncing it. In other words, Warren’s supporters are accidentally making the situation worse.

The issue is that Twitter’s algorithm doesn’t distinguish sentiment when it identifies what’s trending — it’s only looking at engagement. This makes it difficult to parse the motivations of the people who are posting a hashtag and helping it trend.
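To make that concrete, here is a minimal, hypothetical sketch in Python of what an engagement-only trending count looks like. This is an illustration of the general idea, not Twitter’s actual code, whose details are not public; the point is simply that a tweet denouncing a hashtag boosts it exactly as much as a tweet promoting it, because sentiment never enters the calculation.

```python
from collections import Counter

# Hypothetical, heavily simplified trending count: tally how often each hashtag
# appears in a window of tweets. Sentiment is never consulted, so a tweet
# attacking a hashtag boosts it exactly as much as a tweet promoting it.
def trending_counts(tweets):
    counts = Counter()
    for tweet in tweets:
        for word in tweet.split():
            if word.startswith("#"):
                counts[word.strip(".,!?:;").lower()] += 1
    return counts

window = [
    "Proud to support her. Please stop spreading #NeverWarren",
    "#NeverWarren",
    "Can everyone quit tweeting #NeverWarren? You're only helping it trend",
]

print(trending_counts(window).most_common(1))
# [('#neverwarren', 3)] -- all three tweets count, including the two objecting to it
```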

Online battles like these have ramifications in real life: In this case, it makes both Warren and Sanders supporters feel like their conflict is worse than it may actually be. “They’re being told both implicitly and explicitly that they’re in a fight with each other,” Phillips said. “When you’re told that you’re in a fight, and you’re told that you’re mad at the other side, it’s really easy to fall into that. It’s life imitating the hashtag, basically.”

This is hardly the first time this has happened this election cycle. After the second round of Democratic debates in July, the #KamalaHarrisDestroyed hashtag caused a similar dustup between supporters of Kamala Harris and of Tulsi Gabbard. Conservative commentator Terrence K. Williams started the hashtag, and as the Wall Street Journal reported, a lot of accounts with “questionable characteristics” — probably bots — shared it. People on Twitter started to see it spreading, and then they started to share it because it struck a nerve with some of them. The bots are used to inject, feed, and amplify topics, narratives, and hashtags, but they wouldn’t work if they weren’t evoking a reaction in real people on Twitter.

“It’s the combination of the abuse and the biases of the algorithm and the biases of humans on the platform,” Menczer said.

We’re still struggling to deal with disinformation

The confusion surrounding #NeverWarren is just the latest instance in an ongoing problem: We’re still really confused about social media manipulation, and we don’t know how to deal with it responsibly.

Disinformation in and of itself sows division. People don’t have a clear idea of what social media manipulation is or how it works, and they struggle to identify it when they encounter it. In the wake of revelations that Russians used Facebook, Twitter, and other platforms to deepen partisan discord and spread polarized political messages during the 2016 election, people are hyper-suspicious about whether what they’re seeing is real or fake.


It also depends on what people want to believe. So with the #KamalaHarrisDestroyed hashtag, if you were in the California senator’s corner, you had reason to argue it was trending because of bots and manipulation. If you weren’t, you had a reason to say it was all organic. A similar thing happened around the death of financier and convicted sex offender Jeffrey Epstein. As conspiracy theories about what happened floated around Twitter, both #ClintonBodyCount and #TrumpBodyCount trended — with conservatives and liberals each tweeting the hashtag that reflected their politics. Donald Trump Jr. suggested that the latter was trending because of manipulation by Twitter itself. There’s no evidence to support that claim.

“It’s not only that we don’t know what Twitter’s algorithm is doing — we don’t know what people who are participating in the hashtags are doing, or why they’re doing it. So that’s why it becomes really easy to project an explanation that fits your worldview,” Phillips said.

Because of the confusion, people then fill in the gaps on their own and create narratives around what’s happening online according to what they want to believe. You like what you’re seeing? It’s organic. You don’t? It’s a bot. You’re ready for a fight? You got one.

“It’s really important not to fall into singular explanations. What is true is that you don’t know what is happening,” Phillips said. “A hashtag is only not true or real if nobody engages with it.”

Amid questions over the #NeverWarren hashtag on Wednesday, former Facebook executive Alex Stamos laid out some advice on how to approach similar situations on Twitter. “1) Don’t use a hashtag to criticize that hashtag. 2) Stop quote-tweeting small-follower accounts as criticism. 3) Don’t believe that the population of ‘people’ on Twitter is reflective of anything, including ‘candidate X’s followers.’”

In an email to Recode, Twitter said it had not found evidence of bot activity amplifying the #NeverWarren hashtag. The company noted that when people tweet a hashtag they disagree with, its trends feature still counts that activity toward the topic, and that this is not the same thing as inauthentic activity.

Part of the issue is that we don’t really know how Twitter’s algorithm works

There’s no single solution to this complex problem. Social media companies probably aren’t going to start telling us how their algorithms work anytime soon, and part of their argument for keeping them opaque is that if they did, their platforms would be even easier to manipulate. And as much as there’s a tendency to blame bots for everything, it’s basic human nature that’s the bigger culprit.

As with a lot of tech platforms, Twitter’s algorithm is largely a black box. The company publicly gives some information about what makes certain topics and hashtags trend and why individual people see certain content in their feeds more than others, but it won’t say much more. The explanation on its website leaves a lot to be desired:

Trends are determined by an algorithm and, by default, are tailored for you based on who you follow, your interests, and your location. This algorithm identifies topics that are popular now, rather than topics that have been popular for a while or on a daily basis, to help you discover the hottest emerging topics of discussion on Twitter.

Basically, that means topics and hashtags start to trend when they become more popular than they have been in the past, and that what you see depends on what Twitter thinks you might be interested in. (Which it’s, um, not always great at.) The rest is up to the mysterious algorithm.
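As a rough illustration of that “more popular than usual” idea, here is a hypothetical spike detector in Python. It is an assumption about the general shape of such a system, not Twitter’s published method: a topic counts as trending when its volume in the current window far outpaces its recent baseline, which is why a perennially busy topic doesn’t trend but a sudden hashtag does.

```python
# Hypothetical spike detector: a topic "trends" when its volume in the current
# window is far above its recent baseline, not when it is merely popular.
def is_trending(recent_counts, current_count, ratio=3.0, min_volume=50):
    if current_count < min_volume:
        return False  # too little activity to matter at all
    if not recent_counts or sum(recent_counts) == 0:
        return True   # a brand-new topic with real volume spikes by definition
    baseline = sum(recent_counts) / len(recent_counts)
    return current_count / baseline >= ratio

# A perennially busy topic having a busy hour is not a spike...
print(is_trending(recent_counts=[900, 950, 1000], current_count=1200))  # False
# ...but a hashtag jumping from near-zero to thousands is.
print(is_trending(recent_counts=[2, 1, 0], current_count=4000))         # True
```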

That’s why when there are allegations flying about social media manipulation by bots, or when conservatives make unfounded claims about social media bias, they’re so hard to definitively respond to. “You can’t show the receipts because these companies don’t want to show their receipts. We don’t really know how they work,” said Phillips.


Update: Story updated with comment from Twitter.
