How the Rise of Artificial Intelligence Could Rock the Vote

From deepfakes to disinformation: You'd better buckle up. This year's presidential election is going to be a wild ride.

Bree Fowler Senior Writer
Where you get your political news has never been so important.

Getty

In a lot of ways, this year's presidential election is a redo of the last one. It's the same candidates talking about a lot of the same issues, and there are a lot of very emotional people on both sides.

And candidates being candidates, they're still bending and spinning the truth to fit their needs, as are their campaigns. You could argue that some of them take it a step further by outright lying and falsely claiming that the American election system is rigged, leaving it to their opponents, the media and the other fact checkers of the world to call them out when they do. None of that is anything new. 

So what's different this time, besides the fact that we're all four years older? 

We've added artificial intelligence to the mix. AI-powered deepfakes, both audio and video, continue to become more realistic, as well as cheaper and easier to make. At the same time, AI is making it easier to target voters with disinformation about the election process, casting doubt on its legitimacy and potentially throwing elections into chaos.

"It's going to be a wild ride because there's just so much at stake and there's a lot of animosity going on between the two sides, unfortunately," Jon Clay, vice president of threat intelligence for the cybersecurity company Trend Micro, said during an interview at this year's RSA Conference in San Francisco.

Clay said that while experts once fretted that election systems could be hacked by the likes of Russia or another enemy of the US, the technical infrastructure holding up elections has proven itself to be tough. That makes it far more likely for the US to be hit with a disinformation operation than with some kind of cyberattack.

And generative AI is going to make it a lot easier for adversaries to feed those operations with convincing fake emails, photos, voice recordings and videos, he said.

"How are you going to be able to tell what's real and what's not real, what's truth and what's not truth?" Clay said. "It's going to be a really difficult thing for anybody to start figuring out."

Deepfakes are getting better but are still far from perfect

Deepfake audio and video recordings designed to mimic a particular person predate the popularity of generative AI, but most were easy to spot, even by untrained eyes. Humans and all of their quirks are tough for computers to replicate. But generative AI tools have started to change that.

"I think we're almost fortunate that the election is happening right now and not a year later, because the next cycle will be worse," said Chester Wisniewski, global field chief technology officer for Sophos.

Most of the current deepfake software, at least what's available to the average American, just isn't that great, he said. The vast majority of people who run into them will notice that something is off. Maybe it's the way the speaker breathes or takes pauses at odd times.

But all bets are off if there's a well-resourced nation-state, like Russia, behind the deepfake.

"Russia is always a risk and an interesting factor," Wisniewski said. "If they want to make a fake video that's really believable, they probably could right now."

Worries about potentially sophisticated deepfakes have companies like Pindrop working on software designed to detect them. Pindrop's software scans audio and video clips, then rates the likelihood of whether they're real or something generated by AI.

Vijay Balasubramaniyan, Pindrop's co-founder and CEO, said the hope is that the software could eventually be used by media organizations, government officials and others to verify the authenticity of content they may receive. 

But Wisniewski is skeptical about how helpful this kind of software could be, especially as it stands right now. The human ear is already very good at picking out deepfakes, so any kind of software designed to do the same would have to be extremely accurate and offer almost complete certainty to be useful.   

Of course, that could all change very soon, given the pace at which deepfake technology is developing, he said. Everyone may have the capability to create top-quality deepfakes by the time the 2028 election rolls around.

"We're probably going to be at a point where your crazy Uncle Bob can make them for a few dollars," Wisniewski said.

AI changes the disinformation game

Election-related disinformation is more than just deepfakes, and experts acknowledge it's coming from both home and abroad.

Going back to the 2016 election, Russia has used troll farms -- state-sponsored networks of fake social media accounts -- to create and amplify disinformation related to US elections on social media. At the same time, some political candidates and other proponents of the "big lie" used the same platforms, as well as traditional media, to make false claims that the 2020 election, which Donald Trump lost to Joe Biden, was somehow rigged.

The unrest spurred by those statements helped incite the Jan. 6, 2021, storming of the US Capitol, and to this day, Trump and numerous Republican politicians continue to falsely deny the legitimacy of that election.

Speaking to a roundtable of reporters at the RSA Conference, Secretary of Homeland Security Alejandro Mayorkas said that he can't legally speak about the actions, or potential endorsements of disinformation activities, by specific candidates.

"But what I will say is the threat of disinformation is real and something that the government in partnership with state and local election officials is very focused on," Mayorkas said.   

Clay, of Trend Micro, says that with very little in the way of government regulation, it's largely been up to the social media companies to keep a handle on the misinformation and disinformation that plagues their platforms, including countless posts designed to look like stories created by the legitimate media.

For the companies that are well staffed and truly do want to keep their platforms free of disinformation, this is a Herculean task, Clay said. And not all of the platforms actually care whether they do.

Meanwhile, the ranks of the mainstream media continue to dwindle and many media organizations have chosen to leave platforms like X, formerly known as Twitter, because of its lack of moderation and overall hesitancy to remove even the most hateful posts. 

The future of democracy

It remains unclear just how many undecided voters remain. For the most part, America has split into two very divided camps that have sided with Biden or Trump, Wisniewski says. A faked video that shows one of the two in an unflattering light isn't going to sway a lot of people.

And with the current controversy over whether Biden should remain in the race after his disastrous performance in the recent presidential debate, unflattering deepfakes may no longer seem worth the effort.

Regardless, it remains in the best interest of countries like Russia for the US to stay bitterly split, and what better way to do that than to sow distrust in the election system, says Jim Coyle, US public sector chief technology officer for the cybersecurity firm Lookout.

He noted that it doesn't take much for misinformation and disinformation to go viral, pointing to the example of a website that posted lies about corruption in Ukraine. It went viral after members of Congress, including Rep. Marjorie Taylor Greene, posted links to it on social media and treated its claims as fact, vowing to pull financial support for Ukraine.

"If they can amplify these borderline crazy messages and turn them into something, could you turn the minds of a few?" Coyle asked. "Sure, but is that enough to change a campaign?"

The US needs to get serious now about fighting back against disinformation, says Adam Marrè, chief information security officer for Arctic Wolf and a former FBI agent.

That means pushing the Metas and Xs of the world to do more to get it off their platforms but also using existing regulations and civil litigation against those who post lies on those platforms, as well as passing new laws to govern them. 

"Our election systems are pretty well protected in terms of cybersecurity," Marrè said. "But what's going to protect them against social media mis- and disinformation?"