A growing wave of deepfake scams has robbed companies around the world of millions of dollars, and cybersecurity experts warn the situation is likely to get worse as criminals harness generative artificial intelligence to perpetrate fraud.
Deepfakes are videos, voices or images of real people that have been digitally altered and manipulated, often through artificial intelligence, to convincingly misrepresent them.
In one of the largest known cases this year, a finance worker in Hong Kong was tricked into transferring more than $25 million to fraudsters who used deepfake technology to pose as colleagues on a video call, authorities told local media in February.
Last week, British engineering firm Arup confirmed to CNBC that it was the company involved, but could not disclose details of the matter as the investigation is ongoing.
David Fairman, chief information and security officer at cybersecurity company Netskope, said such threats have been increasing since the launch of OpenAI's ChatGPT in 2022, which quickly pushed generative AI technology into the mainstream.
“The public accessibility of these services lowers the barrier to entry for cybercriminals—they no longer need to possess special technical skills,” Fairman said.
He added that as artificial intelligence technology continues to develop, the volume and sophistication of such scams will continue to expand.
Upward trend
Various generative AI services can produce human-like text, image and video content, and can thus become powerful tools for illicit actors seeking to digitally manipulate and recreate certain individuals.
“Like many other businesses around the world, our business is regularly subject to attacks including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes,” an Arup spokesperson told CNBC.
The finance staff member reportedly joined a video call with people he believed to be the company's chief financial officer and other colleagues, who asked him to make the transfer. In reality, the other attendees on the call were digitally recreated deepfakes.
Arup confirmed that “fake voices and images” were used in the incident, adding that “the number and sophistication of these attacks have increased dramatically in recent months”.
Chinese state media reported a similar case this year in Shanxi province, in which a female finance employee was tricked into transferring 1.86 million yuan ($262,000) to a scammer's account after a video call with a deepfake of her boss.
Wider impact
Cybersecurity experts say that in addition to direct attacks, companies are increasingly concerned that deepfake photos, videos or speeches of senior executives could be used for malicious purposes.
Jason Hogg, a cybersecurity expert and executive-in-residence at Great Hill Partners, said deepfakes of a company's senior members can be used to spread fake news, manipulate stock prices, damage a company's brand and sales, and spread other harmful disinformation.
“That's just scratching the surface,” said Hogg, a former FBI agent.
He emphasized that generative artificial intelligence can create deepfakes based on the large volume of digital information people leave behind, such as public content hosted on social media and other platforms.
In 2022, Binance chief communications officer Patrick Hillmann said in a blog post that scammers had created a deepfake of him based on his previous press interviews and TV appearances, and used it to trick clients and contacts into attending meetings.
Netskope's Fairman said such risks have led some executives to begin scrubbing or limiting their online presence out of fear it could be used as ammunition by cybercriminals.
Deepfake technology has spread beyond the corporate world.
From fake pornographic images to manipulated videos promoting cookware, Taylor Swift and other celebrities have fallen victim to deepfake technology. Deepfakes of politicians are also rampant.
At the same time, some scammers have created deepfakes of people's family members and friends in attempts to scam them out of their money.
Hogg said the broader problem will accelerate and worsen over time, because preventing cybercrime requires careful analysis to develop the systems, practices and controls needed to defend against new technologies.
However, cybersecurity experts told CNBC that companies can strengthen their defenses against AI-powered threats through improved employee education, cybersecurity testing, and requiring code words and multiple layers of approval for all transactions — measures that could have prevented cases like the Arup incident.