AI Identity Attacks - Financial Groups Unite to Combat Threats

Financial groups are uniting to tackle the rise of AI-driven identity attacks as deepfake incidents skyrocket, and they are calling for urgent action from policymakers to protect financial institutions and consumers alike.
What Happened
Several major financial organizations have come together to address the alarming rise of AI-driven identity attacks. The American Bankers Association, the Better Identity Coalition, and the Financial Services Sector Coordinating Council released a joint paper highlighting the dramatic increase in deepfake incidents, which surged by 700% in 2023 compared to the previous year. The paper attributes this surge to the affordability of generative AI tools, which makes it easier for criminals and state-sponsored actors to exploit vulnerabilities in financial institutions.
The report warns that AI-enabled fraud losses in the United States could escalate to $40 billion by 2027, a staggering rise from $12.3 billion in 2023. This rapid growth is driven by various attack vectors, including deepfakes used for identity verification, AI-generated phishing campaigns, and synthetic identity creation. The financial sector is facing a multifaceted threat, with ten distinct attack categories currently targeting institutions.
Who's Being Targeted
The primary targets of these AI identity attacks are financial institutions, which are increasingly falling victim to sophisticated fraud schemes. The rise of deepfake technology has made it easier for attackers to bypass traditional security measures. For instance, the report cites findings that 60% of individuals tested fell for AI-automated phishing attacks, showcasing the effectiveness of these tactics.
The vulnerabilities in legacy authentication methods have been exacerbated by AI capabilities. SMS-based one-time passcodes and even passwords are now susceptible to phishing, as adversaries exploit these weaknesses at unprecedented scales. This situation poses significant risks not only to financial organizations but also to consumers, who may unknowingly provide sensitive information to fraudsters.
What Policymakers Are Being Asked to Do
The joint paper outlines several initiatives aimed at combating AI identity threats. The first initiative focuses on enhancing identity proofing and verification processes. A proposed task force led by the Treasury Department would work towards bridging the gap between physical and digital credentials, potentially utilizing mobile driver’s licenses that employ asymmetric public key cryptography. Such measures could make it significantly harder for deepfakes to succeed.
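To make the mechanism concrete, the sketch below shows the core idea behind cryptographically verifiable credentials like mobile driver's licenses: an issuer signs the credential with a private key, and any relying party can check it with the matching public key, so a deepfaked or edited credential fails verification. This is a toy illustration with textbook-sized RSA numbers, not the actual mDL protocol; real deployments (e.g. per ISO/IEC 18013-5) use standard-strength keys and certified issuer infrastructure.

```python
import hashlib

# Toy sketch of asymmetric-signature credential verification.
# The demo-sized primes below are insecure and for illustration only.

P, Q = 1009, 1013                # demo primes (far too small for real use)
N = P * Q                        # public modulus
PHI = (P - 1) * (Q - 1)
E = 17                           # public exponent, known to everyone
D = pow(E, -1, PHI)              # private exponent, kept secret by the issuer

def _digest(fields: dict) -> int:
    """Deterministic integer digest of the credential fields."""
    blob = repr(sorted(fields.items())).encode()
    return int(hashlib.sha256(blob).hexdigest(), 16) % N

def issue_credential(fields: dict) -> int:
    """Issuer signs the digest of the fields with its private key."""
    return pow(_digest(fields), D, N)

def verify_credential(fields: dict, signature: int) -> bool:
    """Any relying party can verify using only the public key (N, E)."""
    return pow(signature, E, N) == _digest(fields)

cred = {"name": "A. Example", "dob": "1990-01-01"}
sig = issue_credential(cred)
print(verify_credential(cred, sig))      # True: untampered credential
print(verify_credential(cred, sig + 1))  # False: an altered signature fails
```

The point is that forging a credential requires the issuer's private key; merely producing a convincing image or video of a document, as a deepfake does, is not enough.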
Additionally, the report emphasizes the need for stronger authentication methods. Policymakers are encouraged to promote the adoption of phishing-resistant technologies like FIDO security keys and passkeys. Furthermore, there is a call for public awareness campaigns to educate consumers about the importance of these technologies and the dangers of outdated security practices.
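The reason passkeys and FIDO security keys resist phishing is that the authenticator binds its response to the origin the browser is actually talking to, so a response captured on a lookalike site is useless against the real one. The sketch below illustrates only that origin-binding idea; HMAC stands in for the device's signature here, whereas real FIDO2/WebAuthn passkeys use a per-site asymmetric keypair, and the function names are illustrative, not part of any real API.

```python
import hashlib
import hmac
import secrets

# Minimal sketch of origin-bound (phishing-resistant) authentication.
# HMAC is a stand-in for the authenticator's signature; real passkeys
# use per-site asymmetric keypairs under the WebAuthn protocol.

DEVICE_KEY = secrets.token_bytes(32)   # never leaves the user's device

def device_sign(challenge: bytes, origin: str) -> bytes:
    """Authenticator signs the challenge together with the reported origin."""
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes,
                  expected_origin: str = "https://bank.example") -> bool:
    """Server only accepts responses bound to its own origin."""
    expected = hmac.new(DEVICE_KEY, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser reports the real origin.
print(server_verify(challenge, device_sign(challenge, "https://bank.example")))  # True

# Phishing relay: the attacker forwards the bank's challenge from a
# lookalike site, but the response is bound to the wrong origin.
stolen = device_sign(challenge, "https://bank-login.example")
print(server_verify(challenge, stolen))  # False
```

By contrast, an SMS one-time passcode carries no origin information at all, which is why a victim can be tricked into typing it into an attacker's page.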
What to Watch
As the financial sector grapples with these challenges, the proposed initiatives aim to foster collaboration among federal, state, and local agencies. The report suggests that successful implementation of these strategies could take two to three years. However, the urgency of the situation cannot be overstated, as deepfakes represent a national problem that extends beyond the financial sector, impacting various industries.
The recommendations also highlight the need for international coordination on digital identity standards, as adversaries are actively involved in shaping these standards. As financial institutions prepare for a future where AI-driven identity threats become more prevalent, the collaborative efforts outlined in this report could be pivotal in safeguarding against these emerging risks.