AI’s Impact on Digital Fraud and Financial Crime

Survey of 600 Fraud-Fighters in 11 Countries on Four Continents Shows Paradox of Financial Institutions Already Using AI Tools to Defend Themselves as Criminals Launch AI-Super-Charged Attacks

Nearly 70% of the 600 fraud-management, anti-money-laundering, and risk-and-compliance officials surveyed in BioCatch’s first-ever AI-focused fraud and financial crime report say criminals are more adept at using artificial intelligence to commit financial crime than banks are at using the technology to stop it. Equally concerning, around half of those same fraud fighters report an increase in financial crime activity over the last year and/or expect it to increase in 2024.

The report describes a troubling and growing trend: criminals with minimal technical expertise or financial-crime experience are using this new technology to improve the quality, reach, and success of their digital-banking scams and financial crime schemes.

“Artificial intelligence can supercharge every scam on the planet,” BioCatch Director of Global Fraud Intelligence Tom Peacock said, “flawlessly localising the language, slang, and proper nouns used and personalising for every individual victim the scam type, images, audio, and/or video involved. AI gives us scams without borders and will require financial institutions to adopt new strategies and technologies to protect their customers.”

A staggering 91% of respondents report their organisation is now rethinking the use of voice verification for big customers due to AI’s voice-cloning abilities. More than 70% of those surveyed say their company identified the use of synthetic identities while onboarding new clients last year. The Federal Reserve believes traditional fraud models fail to flag as many as 95% of synthetic identities used to apply for new accounts. It regards synthetic identity fraud as the fastest-growing type of financial crime in the U.S., costing companies billions of dollars every year.

“We can no longer trust our eyes and ears to verify digital identities,” BioCatch CMO Jonathan Daly said. “The AI era requires new senses for authentication. Our customers have proven that behavioural intent signals are those new senses, allowing financial institutions to sniff out deepfakes and voice clones in real time and keep people’s hard-earned money safe.”
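As a rough illustration of the kind of behavioural signal such systems might draw on (a hypothetical sketch only, not BioCatch’s actual method; every feature name and threshold here is invented), the fragment below scores a session’s typing cadence against a per-user baseline and flags sessions that deviate sharply, as automated or scripted input often does:

```python
# Hypothetical sketch of a behavioural-signal check: compare a session's
# typing cadence against a per-user baseline and flag outliers.
# Feature names and thresholds are illustrative, not BioCatch's method.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Baseline:
    mean_ms: float  # user's typical inter-keystroke interval (ms)
    std_ms: float   # typical variation in that interval


def keystroke_intervals(timestamps_ms: list[float]) -> list[float]:
    """Inter-keystroke intervals from raw key-down timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]


def anomaly_score(session_ts: list[float], baseline: Baseline) -> float:
    """Z-score of the session's mean typing interval vs. the baseline.

    Automated input (bots, scripted account takeovers) often types with
    unnaturally fast or uniform cadence, pushing this score up.
    """
    session_mean = mean(keystroke_intervals(session_ts))
    return abs(session_mean - baseline.mean_ms) / baseline.std_ms


if __name__ == "__main__":
    # Baseline learned from the genuine user's historical sessions.
    user = Baseline(mean_ms=180.0, std_ms=40.0)
    # A suspiciously fast, uniform session (e.g., automated input).
    session = [0.0, 35.0, 70.0, 105.0, 140.0, 175.0, 210.0, 245.0]
    score = anomaly_score(session, user)
    print(f"anomaly score: {score:.2f}")  # ~3.63, well outside baseline
    if score > 3.0:  # illustrative threshold
        print("flag session for step-up verification")
```

A production system would combine many such signals (mouse dynamics, navigation patterns, device posture) in a trained model rather than a single z-score; the point is only that behaviour, unlike a cloned voice or face, is difficult for an attacker to reproduce.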

Other Key Survey Findings:

  • AI (Already) an Expensive Threat: More than half of the organisations represented in the survey say they lost between $5 million and $25 million to AI-powered attacks in 2023.
  • Financial Institutions Also Using AI: Nearly three-quarters of those surveyed say their employer used AI to detect fraud and/or financial crime, while 87% say AI has increased the speed with which their organisation responds to potential threats.
  • We Need to Talk: More than 40% of respondents say their company handles fraud and financial crime in separate departments that do not collaborate. Nearly 90% of those surveyed say financial institutions and government authorities need to share more information to combat fraud and financial crime.
  • AI to Help with Intelligence-Sharing: Nearly every respondent says they anticipate leveraging AI in the next 12 months to promote information-sharing about high-risk individuals across different banks.

“Today’s fraudsters are organised and savvy,” BioCatch CEO Gadi Mazor said. “They collaborate and share information instantly. Fraud fighters – including technology-solution providers like us, along with banks, regulators, and law enforcement – must do the same if we expect to reverse the growing fraud numbers across the globe. We believe our recent partnership with The Knoble will advance this discussion and remove the perceived barriers to better, more meaningful collaboration and fraud prevention.”
