December 27, 2024


James Mirfin, global head of risk and identity solutions at payments giant Visa, told CNBC that the company is using artificial intelligence and machine learning to fight fraud.

The company blocked $40 billion in fraudulent activity between October 2022 and September 2023, nearly double the figure from a year earlier.

Visa’s Mirfin said scammers’ tactics include using artificial intelligence to generate primary account numbers (PANs) and test them continuously. The PAN is the card identifier on a payment card, usually 16 digits but in some cases up to 19 digits.
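Guessed PANs are constrained by more than length: card numbers carry a Luhn check digit (per the ISO/IEC 7812 numbering scheme), so most random digit strings fail validation before any authorization attempt. A minimal sketch of the standard Luhn check:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 when doubling produces a two-digit value.
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

Only about one in ten candidate numbers passes this check, which is part of why attackers automate the guessing at scale rather than trying numbers by hand.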

Using AI bots, criminals repeatedly attempt online transactions with combinations of primary account number, card verification value (CVV) and expiration date until they receive an approval response.

This method, known as an enumeration attack, causes $1.1 billion in fraud losses annually and accounts for a significant share of global losses, according to Visa.
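The article does not describe Visa's detection logic, but enumeration attacks have a telltale signature: a burst of authorization attempts against the same card in a short window. A hypothetical issuer-side velocity check, with invented thresholds, might look like this:

```python
from collections import defaultdict, deque

# Illustrative values only -- real issuers tune these per portfolio.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

# pan -> timestamps of recent authorization attempts
attempts = defaultdict(deque)

def record_attempt(pan: str, now: float) -> bool:
    """Record an authorization attempt; return True if the PAN
    has exceeded the velocity threshold and should be flagged."""
    q = attempts[pan]
    q.append(now)
    # Drop attempts that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_ATTEMPTS
```

A human cardholder rarely retries a payment six times in a minute; a bot cycling through CVV and expiry combinations does so constantly, which makes this simple counter surprisingly effective as a first-line signal.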

“We look at over 500 different attributes of (each) transaction, score it and create a score, and this is an AI model that really does that,” Mirfin told CNBC. “We do about 300 billion transactions a year.”

Each transaction is assigned a real-time risk score, helping to detect and prevent enumeration attacks in card-not-present transactions, where purchases are processed remotely without a physical card passing through a reader or terminal.

“Every one of these (transactions) is being processed by artificial intelligence. It’s looking at a bunch of different attributes, and we’re evaluating every single transaction,” Mirfin said.

“So if you see a new type of fraud happening, our models will find it and catch it and rate those transactions as high risk, and then our customers can decide not to approve those transactions.”
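Visa does not disclose its model internals, but the idea of scoring each transaction against many attributes and letting the issuer set an approval cutoff can be sketched with a toy linear model. The feature names, weights and threshold below are invented for illustration; a production system would use a trained model over hundreds of signals:

```python
# Invented weights for a handful of illustrative risk signals.
WEIGHTS = {
    "card_not_present": 0.3,
    "recent_declines": 0.1,   # per declined attempt in the last hour
    "new_merchant": 0.2,
    "large_amount": 0.25,     # amount over 1,000 USD
}
THRESHOLD = 0.5  # issuer-chosen cutoff for "high risk"

def risk_score(txn: dict) -> float:
    """Combine weighted transaction attributes into a single score."""
    score = 0.0
    score += WEIGHTS["card_not_present"] * txn.get("card_not_present", 0)
    score += WEIGHTS["recent_declines"] * txn.get("recent_declines", 0)
    score += WEIGHTS["new_merchant"] * txn.get("new_merchant", 0)
    score += WEIGHTS["large_amount"] * (txn.get("amount", 0) > 1000)
    return score

def is_high_risk(txn: dict) -> bool:
    return risk_score(txn) >= THRESHOLD
```

The key design point the quote describes is that the network scores the transaction but the issuing bank decides what to do with the score, so each customer can set its own decline threshold.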

Visa also uses AI to assess the likelihood of fraud in token provisioning requests, combating fraudsters who use social engineering and other deceptive tactics to illegally provision tokens and execute fraudulent transactions.

Over the past five years, the company has invested $10 billion in technology to help reduce fraud and improve online security.

Generative AI-driven fraud

Cybercriminals are turning to generative AI and other emerging technologies, including voice cloning and deepfakes, to deceive people, Mirfin warned.

“Romance scams, investment scams, pig butchering — they’re all using artificial intelligence,” he said.

Pig butchering is a scam tactic in which criminals build relationships with victims and then convince them to put their money into fake cryptocurrency trading or investment platforms.

“If you think about what they’re doing, it’s not criminals sitting in a market picking up a phone and calling someone. They’re using some level of artificial intelligence, whether it’s voice cloning, whether it’s deepfakes, or whether they’re using AI to perform different types of social engineering,” Mirfin said.

Generative AI tools like ChatGPT enable scammers to produce more convincing phishing messages to trick people.

Cybercriminals using generative AI need less than three seconds of audio to clone a voice, according to US identity and access management company Okta, which added that the clone could be used to trick family members into thinking a loved one is in trouble, or to trick bank employees into transferring funds out of a victim’s account.

Generative AI tools are also being used to create celebrity deepfakes to deceive fans, Okta said.

“With the use of generative artificial intelligence and other emerging technologies, scams are more convincing than ever before, causing unprecedented losses for consumers,” Paul Fabara, Visa’s chief risk and client services officer, said in the company’s biannual threats report.

Artificial intelligence could make financial fraud a 'growth industry'

Cybercriminals who use generative AI to commit fraud can target multiple victims at once with the same or fewer resources, and at lower cost, the Deloitte Center for Financial Services said in a report.

“Such incidents are likely to surge in the coming years as bad actors discover and deploy increasingly sophisticated and affordable generative AI to defraud banks and their customers,” the report states, estimating that generative AI could drive fraud losses to $40 billion.

Earlier this year, an employee at a Hong Kong company wired $25 million to fraudsters after a deepfake impersonating the company’s chief financial officer directed the transfer.

Chinese state media reported a similar case this year in Shanxi province, where an employee was tricked into transferring 1.86 million yuan ($262,000) to a scammer who used a deepfake of her boss during a video call.
