December 24, 2024

2024 is the biggest global election year in history, and it coincides with the rapid rise of deepfakes. According to a Sumsub report, deepfakes in the Asia-Pacific region alone surged by 1,530% from 2022 to 2023.

Getty Images

Before the Indonesian general election on February 14, a video of late Indonesian President Suharto advocating for the political party he once chaired went viral.

The AI-generated deepfake video, which cloned his face and voice, racked up 4.7 million views on X alone.

This is not an isolated incident.

In Pakistan, a deepfake of former Prime Minister Imran Khan emerged during national elections, announcing that his party was boycotting them. Meanwhile, in the United States, voters in New Hampshire heard a deepfake robocall of President Joe Biden asking them not to vote in the presidential primary.

Deepfakes of politicians are becoming increasingly common, especially as 2024 is shaping up to be the biggest global election year in history.

At least 60 countries, home to more than four billion people, are set to vote for their leaders and representatives this year, making deepfakes a matter of serious concern.

The risk of election deepfakes rises

According to a Sumsub report published in November, the number of deepfakes worldwide increased tenfold from 2022 to 2023. In the Asia-Pacific region alone, deepfakes surged by 1,530% during the same period.

Between 2021 and 2023, online media, including social platforms and digital advertising, saw the largest increase in identity fraud rates, at 274%. Professional services, healthcare, transportation and video games were also among the industries affected by identity fraud.

Simon Chesterman, senior director of AI governance at AI Singapore, said Asia is not yet ready to tackle the issue of deepfakes in elections on the regulatory, technical and educational fronts.

In its 2024 Global Threat Report, cybersecurity firm CrowdStrike reported that with the number of elections scheduled this year, nation-state actors including China, Russia and Iran are highly likely to sow chaos through misinformation and disinformation campaigns.

“A more serious interference would be if a major power decided to disrupt a country’s elections, which could be more influential than a political party playing on the fringes,” Chesterman said.

While some governments have the tools to prevent disinformation online, the fear is that the genie will be out of the bottle before there is time to push it back into the bottle.

Simon Chesterman

Senior Director of AI Governance, AI Singapore

However, he said the majority of deepfakes will still be produced by actors within their respective countries.

Domestic actors could include opposition parties and political rivals, or far-right and far-left elements, said Carol Soon, principal research fellow and head of the society and culture department at the Institute of Policy Studies in Singapore.

The dangers of deepfakes

At the very least, Soon said, deepfakes contaminate the information ecosystem, making it harder for people to find accurate information or form informed opinions about political parties or candidates.

Voters may also be turned off by a candidate if they see content about a scandalous issue go viral before being revealed to be false, Chesterman said. “While some governments have the tools (to prevent disinformation online), the concern is that the genie will be out of the bottle before there is time to push it back into the bottle.”

“We saw how quickly deepfake pornography involving Taylor Swift spread — these things spread very quickly,” he said, adding that regulation is often inadequate and extremely difficult to enforce. “It’s often too little, too late.”


Adam Meyers, who leads counter adversary operations at CrowdStrike, said deepfakes can also trigger confirmation bias in people: “Even if they know in their heart it’s not true, if it’s information they want and something they want to believe, they won’t let it go.”

Chesterman also said false videos showing misconduct during elections, such as ballot stuffing, could lead to a loss of confidence in the validity of the election.

On the other hand, Soon said, candidates may deny true content about themselves that is negative or unflattering by attributing it to a deepfake.


Who is responsible?

As deepfakes proliferate, Facebook, Twitter and Google are working to detect and prevent them.

“We should not rely solely on the good intentions of these companies,” Chesterman added. “That’s why there needs to be regulations and expectations set for these companies.”

To that end, the non-profit Coalition for Content Provenance and Authenticity (C2PA) has launched Content Credentials, digital metadata that shows viewers verified information such as the creator’s identity, where and when the content was created, and whether generative AI was used to produce the material.

C2PA member companies include Adobe, Microsoft, Google and Intel.

Earlier this year, OpenAI announced it would implement C2PA Content Credentials in images created with DALL·E 3.

In an interview with Bloomberg at the World Economic Forum in January, OpenAI founder and CEO Sam Altman said the company was “very focused” on ensuring that its technology could not be used to manipulate elections.

“I think it would be terrible if I said, ‘Oh, yeah, I’m not worried. I feel great.’ We’re going to have to watch this relatively closely this year, (with) super tight monitoring (and) super tight feedback.”

“I think our role is very different from the role of a distribution platform, like a social media site or news publisher,” he said. “We have to work with them, so it’s like you generate here and distribute here. And there needs to be a good conversation between them.”

Meyers proposed creating a bipartisan, nonprofit technology entity whose sole mission would be to analyze and identify deepfakes.

“The public can then send them content they suspect is manipulated,” he said. “It’s not foolproof, but at least there’s some mechanism that people can rely on.”

But ultimately, while technology is part of the solution, a lot depends on consumers, and they’re not ready yet, Chesterman said.

Soon also emphasized the importance of educating the public.

“We need to continue our outreach and engagement efforts to increase public vigilance and awareness when exposed to messages,” she said.

The public needs to be more vigilant: in addition to fact-checking highly questionable content, users also need to verify key pieces of information, especially before sharing them with others, she said.

“Everyone has a part to play,” Soon said. “It’s all hands on deck.”

—CNBC’s MacKenzie Sigalos and Ryan Browne contributed to this report.
