Disinformation is expected to be one of the biggest cyber risks in the 2024 election.
Cyber experts interviewed by CNBC said the United Kingdom is likely to face a wave of state-backed cyberattacks and disinformation campaigns ahead of the 2024 election, with artificial intelligence posing a key risk.
Britons will vote in local elections on May 2, and a general election is expected in the second half of this year, although Prime Minister Rishi Sunak has yet to commit to a date.
The vote comes as the country faces a host of issues, including a cost-of-living crisis and deep divisions over immigration and asylum.
Todd McKinnon, CEO of identity security company Okta, told CNBC via email: “With the majority of British citizens voting at the polls on election day, I expect most cybersecurity risks will arise in the months leading up to election day.”
This isn’t the first time.
In 2016, both the US presidential election and the UK’s Brexit referendum were found to have been disrupted by disinformation shared on social media platforms, purportedly posted by Russian state-affiliated groups, though Moscow has denied the claims.
Since then, state actors have routinely mounted attacks in multiple countries in attempts to manipulate election outcomes, according to cyber experts.
Meanwhile, the UK last week claimed that a Chinese government-affiliated hacker group, APT 31, attempted to access the email accounts of British MPs, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a Wuhan technology company believed to be a front for APT 31.
The United States, Australia and New Zealand also imposed sanctions. China denies accusations of state-sponsored hacking, calling them “baseless”.
Cybercriminals using artificial intelligence
Cybersecurity experts expect malicious actors to interfere with the upcoming election in a variety of ways, above all through disinformation, and warn the problem will be worse this year because of the widespread availability of artificial intelligence.
Experts say synthetic images, video and audio generated using computer graphics, simulation methods and artificial intelligence – often referred to as “deepfakes” – will become commonplace as it becomes easier for people to create them.
Okta’s McKinnon added: “Nation-state actors and cybercriminals may use AI-based identity attacks such as phishing, social engineering, ransomware and supply chain compromise to target politicians, campaign staff and election-related agencies.”
“We will also certainly see an influx of AI- and bot-driven content generated by threat actors spreading misinformation at a greater scale than we have seen in previous election cycles.”
The cybersecurity community is calling for greater awareness of such AI-generated misinformation and for international cooperation to mitigate the risk of such malicious activity.
Biggest election risk
Adam Meyers, head of counter-adversary operations at cybersecurity firm CrowdStrike, said AI-driven disinformation is the biggest risk in the 2024 election.
“Right now, generative AI can be used to do bad things and it can be used to do good things, so we’re seeing increased adoption of both applications every day,” Meyers told CNBC.
According to CrowdStrike’s latest annual threat report, China, Russia and Iran are highly likely to use tools such as generative AI to mount misinformation and disinformation operations against elections around the world.
“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile nation-states like Russia or China or Iran are using generative artificial intelligence and some of the newer technologies to craft messaging, and using deepfakes to create a compelling story or narrative that people will buy into, it becomes extremely dangerous, especially when people already have this kind of confirmation bias.”
A key issue is that artificial intelligence is lowering the barrier to entry for criminals who want to exploit people online. This is already happening in the form of scam emails crafted with easily accessible AI tools such as ChatGPT.
Dan Holmes, a fraud prevention expert at regulatory technology company Feedzai, said hackers are also crafting more sophisticated, personalized attacks by training AI models on the data we post on social media.
“You can train these voice AI models very easily through exposure to social media,” Holmes told CNBC. “It’s about getting the emotional investment and really coming up with something creative.”
In an election context, a fake AI-generated audio clip of opposition Labour leader Keir Starmer abusing party staffers was posted to social media platform X in October 2023. The post was viewed 1.5 million times, according to fact-checking charity Full Fact.
That is just one of the many deepfakes cybersecurity experts fear will surface in the run-up to the UK election later this year.
Election a test for tech giants
Deepfake technology is becoming more advanced, however. And for many technology companies, the race to beat it now comes down to fighting fire with fire.
“Deepfakes went from theory to real life,” Onfido CEO Mike Tuchen said in an interview with CNBC last year.
“There’s a cat-and-mouse game of ‘AI vs. AI’ – using AI to detect deepfakes and mitigate the impact on our customers is a big battle right now.”
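To make that “AI vs. AI” cat-and-mouse idea concrete, here is a minimal sketch of what automated frame-level deepfake screening can look like. It is illustrative only: the ResNet-18 backbone, the real-versus-synthetic labeling and the weights file name are assumptions for this example, not a description of Onfido’s actual system.

```python
# A minimal, hypothetical sketch of "AI vs. AI" deepfake screening:
# a binary image classifier scores how likely a video frame is to be
# synthetic. The backbone, labels and weights file are illustrative
# assumptions, not any vendor's real detector.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 with a two-class head: index 0 = real, index 1 = synthetic.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical fine-tuned weights
model.eval()

def synthetic_score(frame_path: str) -> float:
    """Return the model's estimated probability that a frame is AI-generated."""
    x = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()

# Usage: flag high-scoring frames for human review rather than auto-labeling.
# if synthetic_score("frame_0042.png") > 0.9:
#     print("Possible deepfake: escalate to an analyst")
```

In practice, a score like this would be just one signal, combined with liveness checks and provenance data, rather than a verdict on its own.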
Internet experts say it is becoming increasingly difficult to tell what is real, but there can be telltale signs that content has been digitally manipulated.
Artificial intelligence generates text, images and video from prompts, but it doesn’t always get the details right. If you’re watching an AI-generated video of a dinner scene and a spoon suddenly disappears, that kind of glitch is a giveaway.
Okta’s McKinnon added: “We will certainly see more deepfakes throughout the election, but one simple step we can take is to verify the authenticity of something before sharing it.”
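McKinnon’s advice to verify before sharing can be partly automated, at least as a first pass. The sketch below scans an image’s EXIF metadata for traces of known AI generators; note that the watchlist of generator names is a hypothetical stand-in, and because metadata is trivially stripped or forged, a clean result proves nothing.

```python
# A weak first-pass authenticity check: scan an image's EXIF metadata
# for traces of known AI generators. Metadata is trivially stripped or
# forged, so a clean result proves nothing; this is screening, not proof.
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical watchlist; real tooling would lean on provenance standards
# such as C2PA Content Credentials rather than string matching.
GENERATOR_HINTS = ("stable diffusion", "midjourney", "dall-e", "firefly")

def metadata_flags(path: str) -> list[str]:
    """Return EXIF fields whose values mention a known AI generator."""
    exif = Image.open(path).getexif()
    flags = []
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if any(hint in str(value).lower() for hint in GENERATOR_HINTS):
            flags.append(f"{name}: {value}")
    return flags

# Usage:
# for field in metadata_flags("shared_photo.jpg"):
#     print("Suspicious metadata:", field)
```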