2024 will be the biggest global election year in history, and it coincides with the rapid rise of deepfakes. According to a Sumsub report, deepfakes surged by 1,530% from 2022 to 2023 in the Asia-Pacific region alone.
Cybersecurity experts worry that artificial intelligence-generated content has the potential to distort our view of reality — a concern that is even more troubling in a year filled with crucial elections.
But one leading expert thinks otherwise, arguing that the threat to democracy from deepfakes may be “exaggerated.”
Martin Lee, technical lead for Cisco’s Talos security intelligence and research group, told CNBC that he believes deepfakes, while a powerful technology in their own right, are not as impactful as fake news.
However, new generative AI tools do “have the potential to make the generation of false content easier,” he added.
Material produced by artificial intelligence often contains telltale indicators that it was not created by a real person.
Visual content is particularly prone to such defects. For example, AI-generated images may contain anomalies such as a person with more than two hands, or limbs that merge into the background of the image.
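Beyond glitches visible to the naked eye, some research has probed statistical tells in generated imagery, such as unusual energy patterns in the high-frequency band of an image's Fourier spectrum. The Python sketch below is a toy illustration of that idea only: the file name and threshold are hypothetical placeholders, and a real detector would be a trained classifier rather than this single heuristic.

```python
# A minimal heuristic sketch, not a production deepfake detector. It computes
# the share of an image's spectral energy outside the low-frequency center,
# a crude proxy for the high-frequency artifacts some studies associate with
# generated images. The threshold below is an arbitrary placeholder.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Return the fraction of spectral energy outside the central low-frequency band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    ch, cw = h // 2, w // 2
    # Central square covering one quarter of each axis holds the low frequencies.
    low = spectrum[ch - h // 8 : ch + h // 8, cw - w // 8 : cw + w // 8].sum()
    total = spectrum.sum()
    return (total - low) / total

if __name__ == "__main__":
    ratio = high_freq_energy_ratio("sample.jpg")  # hypothetical input file
    # 0.5 is a placeholder cutoff; real systems learn decision boundaries from data.
    print(f"high-frequency energy ratio: {ratio:.3f}",
          "(flag for review)" if ratio > 0.5 else "")
```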
Distinguishing between synthetically generated speech audio and snippets of real human speech can be more difficult. But experts say artificial intelligence is still only as good as its training materials.
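The audio case is harder in part because there are no obvious visual glitches to spot, so detection typically relies on classifiers trained on acoustic features. As a hedged illustration, the sketch below extracts the kinds of spectral features (spectral flatness, MFCCs) such classifiers commonly consume; the file name is a hypothetical placeholder, and the snippet only prints features rather than classifying anything.

```python
# A hedged sketch: extract spectral features often used as inputs to trained
# audio deepfake classifiers. It prints the features; it does NOT classify.
import numpy as np
import librosa

# "speech.wav" is a hypothetical input file.
y, sr = librosa.load("speech.wav", sr=16000)
flatness = librosa.feature.spectral_flatness(y=y)   # tonal vs. noise-like character
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # compact timbre summary
print(f"mean spectral flatness: {flatness.mean():.4f}")
print(f"MFCC means: {np.round(mfcc.mean(axis=1), 2)}")
# In practice these features feed a classifier trained on paired real and
# synthetic speech; no single statistic reliably separates the two.
```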
“Nonetheless, when viewed objectively, machine-generated content can often be detected. In any case, generating content is unlikely to be what limits an attacker,” Lee said.
Experts have previously told CNBC they expect disinformation generated by artificial intelligence to be a major risk in upcoming elections around the world.
“Limited usefulness”
Matt Calkins, chief executive of Appian, an enterprise software company whose tools help businesses build applications more easily, said artificial intelligence has “limited usefulness” so far.
He added that many of today’s generative AI tools can be “boring.” “Once it gets to know you, it can go from amazing to useful, but it can’t cross that line right now,” he said.
“Once we’re willing to trust artificial intelligence to understand ourselves, it’s going to be truly incredible,” Calkins told CNBC this week.
Calkins warned that this could make it a more effective and dangerous disinformation tool in the future, adding that he was dissatisfied with the progress of U.S. efforts to regulate the technology.
He added that it may take artificial intelligence producing something extremely “offensive” before U.S. lawmakers act. “Give us a year. Wait until artificial intelligence offends us. Then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”
Still, Cisco’s Lee said that no matter how advanced artificial intelligence becomes, there are tried and tested ways to spot misinformation — whether it’s created by a machine or a human.
“People need to be aware that these attacks are happening and be familiar with the techniques that may be used. When encountering content that triggers our emotions, we should stop, pause, and ask ourselves whether the information itself is trustworthy,” Lee suggested.
“Was it published by a reputable media outlet? Did other reputable media outlets report the same thing?” he said. “If not, this may be a scam or disinformation campaign and should be ignored or reported.”