
How Scammers Used Deepfake Video to Dupe a Company Out of Millions

 


 

It’s the video call that cost $25 million.

According to reports from Hong Kong police in February, a finance worker at a multinational company joined a video conference call with the company’s chief financial officer. On the call, the CFO directed the finance worker to transfer more than $25 million in funds to several bank accounts.

The finance worker reportedly had reservations about the request, thinking that the CFO looked “a little off.” The finance worker then reportedly turned to the other participants on the call for confirmation. They all agreed to the request. With that, the transfers went through. More than $25 million in funds were moved out of the company. Right into the hands of fraudsters.

As it turns out, the CFO on the worker’s call was a video deepfake. Along with everyone else.

Hong Kong’s public broadcaster, RTHK, quoted senior police superintendent Baron Chan as saying that AI deepfake technology was used to dupe the worker.

“[The fraudster] invited the informant [worker] to a video conference that would have many participants. Because the people in the video conference looked like the real people, the informant … made 15 transactions as instructed to five local bank accounts, which came to a total of HK$200 million,” he said.

Fraudsters now use AI deepfakes to pull off corporate scams.

Businesses now face an altogether new security threat: video deepfakes. In real time, scammers can pose as company officers, vendors, partners, what have you. Put plainly, we live in a time where the person on the other end of that video call might be a fake.

Scammers face several challenges before they can pull off a deepfake attack. The primary challenge they have is obtaining source material. To create a deepfake, they need images, video, and audio of the person they want to impersonate. Consider, though, that some company officials have relatively high profiles. They speak at conferences, hold webinars, and participate in earnings calls. Throw in a few photos and videos lifted from the target’s social media accounts, and scammers have the source material they need to create a deepfake.

The next challenge: scammers need a good story, one with emotional levers they can pull to coerce a victim into acting. In the case of the Hong Kong scam, the deepfakes plied their victim with a mix of urgency and authority. The “CFO” wanted to move money and move that money immediately. With the other deepfakes on the call concurring with the CFO, the victim did as asked. In all, it was a hand-picked victim subjected to a classic piece of social engineering.

Understandably, this story drew major coverage, given the use of deepfakes and the haul they brought in. The fact that the fraudsters orchestrated not just one but a whole host of deepfakes makes it that much more newsworthy. In light of this, companies and their employees have a new threat to look out for, and, better yet, to prepare for: deepfakes.

Preventing corporate AI deepfake scams.

While AI deepfakes hopping onto video conference calls certainly marks new territory in security, several long-standing measures for preventing corporate fraud remain the same. Some new preventative measures are called for as well.

Look for the signs of AI deepfakes.

Earlier, we noted that the victim in the Hong Kong attack said the CFO looked “a little off” on the video call. AI deepfakes, while convincing, sometimes have the tell-tale markers of a fake.

However, that’s changing. Quickly. As the tools for creating deepfakes continually improve, the deepfakes become increasingly difficult to spot.

Earlier generations of deepfake tools had difficulty tracking excessive head movement, like when the deepfake turned for a profile shot. Further, earlier tools required users to keep their hands off their faces. Placing a hand on the chin or over the mouth would break up the face of the deepfake. One more marker of earlier deepfake tools could be found in the eyes. They often had a glassy look, like they weren’t catching the light right. The same went for skin tones and lighting.

So yes, a deepfake might look “a little off.” Consider that a huge red flag. Yet don’t count entirely on this method of detection. As AI deepfake tools evolve, they’ll remove such blemishes from the video.

Confirm, confirm, and confirm.

Any time that sensitive info or sums of money are involved, get confirmation of the request. Place a phone call to the person after receiving the request to ensure it’s indeed legitimate. Better yet, meet the individual in person if possible. In all, contact them outside the email, message, or call that initially made the request to ensure you’re not dealing with an imposter.
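For teams that automate parts of their payment or approval workflow, the out-of-band rule can be written down directly. The sketch below, in Python, is purely illustrative: the Request class, COMPANY_DIRECTORY, and place_confirmation_call() are hypothetical placeholders rather than any real system, and the key point is that the confirmation contact comes from a trusted internal directory, never from the request itself.

# Minimal sketch of out-of-band confirmation for sensitive requests.
# Everything here is a placeholder -- adapt it to your own directory and process.

from dataclasses import dataclass

@dataclass
class Request:
    requester_name: str      # who the request claims to come from
    callback_contact: str    # contact details supplied *in* the request
    amount_usd: float

# Trusted internal directory -- the only source of contact details.
COMPANY_DIRECTORY = {
    "Jane Doe (CFO)": "+1-555-0100",
}

def place_confirmation_call(number: str) -> bool:
    """Placeholder: a human calls the known number and confirms verbally."""
    answer = input(f"Did {number} confirm the request? (y/n) ")
    return answer.strip().lower() == "y"

def verify_out_of_band(request: Request) -> bool:
    known_number = COMPANY_DIRECTORY.get(request.requester_name)
    if known_number is None:
        return False  # unknown requester: escalate rather than proceed
    # Always confirm over the directory number, even if the request supplied
    # its own "callback" contact -- an imposter controls that one.
    return place_confirmation_call(known_number)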

In the wake of targeted attacks on key stakeholders, some organizations have restructured the way they handle requests for data, funds, and other sensitive info. They require two or three people to fulfill such a request. This makes it tougher for scammers to run their cons. For starters, they have the burden of targeting two or more people. Then they face the further burden of convincing them all. This oversight gives companies a chance to fully validate requests and potentially catch “urgent” bogus requests from scammers.
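The same thinking applies to the multi-person rule. The following sketch is another hypothetical illustration, assuming a policy threshold and approver names that aren’t drawn from any real payment system: a large transfer only proceeds once two distinct approvers have signed off.

# Minimal sketch of a two-person approval rule for fund transfers.
# The threshold and the example approver names are assumptions for illustration.

REQUIRED_APPROVALS = 2
APPROVAL_THRESHOLD_USD = 10_000  # assumed policy threshold

def transfer_allowed(amount_usd: float, approvers: set[str]) -> bool:
    """Allow the transfer only with enough *distinct* approvers on record."""
    if amount_usd < APPROVAL_THRESHOLD_USD:
        return len(approvers) >= 1
    return len(approvers) >= REQUIRED_APPROVALS

# An "urgent" multimillion-dollar request approved by one person alone is refused.
print(transfer_allowed(25_000_000, {"finance_worker"}))                 # False
print(transfer_allowed(25_000_000, {"finance_worker", "controller"}))   # True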

Fraudsters do their research — keep your guard up.

Fraudsters select their victims carefully in these targeted attacks. They hunt down employees with access to info and funds and then do their research on them. Using public records, data broker sites, “people finder” sites, and info from social media, fraudsters collect intel on their marks. Armed with that, they can pepper their conversations with references that sound more informed, more personal, and thus more convincing. Just because what’s being said feels or sounds somewhat familiar doesn’t always mean it’s coming from a trustworthy source.

Clean up your online presence.

With that in mind, employees can reduce the amount of personal info others can find online. Features like McAfee Personal Data Cleanup can help remove personal info from some of the riskiest data broker sites out there. It also keeps tabs on those sites in case more personal info appears on them later. Additionally, employees can set their social media profiles to private by limiting access to “friends and family only,” which denies fraudsters another avenue of info gathering. Using our Social Privacy Manager can make that even easier. With just a few clicks, it can adjust more than 100 privacy settings across their social media accounts, making them more private as a result.

Defense against AI deepfake attacks.

Moving forward, we can expect to see more of these corporate AI deepfake attacks. On all manner of scales. The availability and power of AI tools makes it likely. However, as with many forms of targeted attacks, there’s something both fishy and uncanny about them. As we’ve seen, the employee targeted in the Hong Kong attack held suspicions … something was wrong about that call. Yet, who would expect a video conference call full of AI deepfakes? With this attack, companies should consider that such calls fall within the realm of possibility today.

As AI detection technologies evolve, companies will have additional tools to prevent these attacks. Yet the human factor remains an essential element of defense. These are scams, pure and simple. And scams have signs. Fraudsters use all kinds of social engineering tricks to get their victims to act. They’ll pose as authority figures. They’ll add elements of urgency to their requests. And they’ll use people’s personal info in ways that make them appear familiar and trustworthy.

This is where we stand today: a basic understanding of AI deepfake technology, what it’s capable of, and the tricks that fraudsters can play with it can bolster a company’s defense against AI deepfake attacks. And a prepared workforce can help stop them in their tracks before they do any harm.

 

 


Ready to Try AI-powered Protection?

Stay more secure and private with McAfee.