FinCEN, FINRA Offer Steps to Manage AML and Fraud Risks Posed by GenAI
Many banks have embraced the potentially transformative applications of generative artificial intelligence (GenAI). These range from modernizing legacy systems and streamlining compliance assessments to personalizing customer experiences. A SAS report from October 2024 found that 60% of banking respondents use GenAI and another 38% intend to use it in the next two years. Ninety percent reported they have a dedicated GenAI budget for 2025.
Not surprisingly, however, the opportunities created by GenAI come with new challenges, because malign actors and criminal enterprises wield the same powerful tools. A study released earlier this year by American Banker found that, in response to rising fraud incidents, banks made enhanced security and fraud mitigation their top spending priorities for 2025. Recent reports from the Financial Crimes Enforcement Network (FinCEN) and the Financial Industry Regulatory Authority (FINRA) highlighted the emerging fraud and money laundering risks facilitated by GenAI and outlined steps financial institutions can take to mitigate them.
GenAI’s Capabilities
GenAI has significantly reduced the resources and sophistication required to produce high-quality fake, or "synthetic," content across a range of media. Synthetic content refers to media created through digital or artificial means or otherwise manipulated with technology. FINRA's 2025 Annual Regulatory Oversight Report highlights the ability of threat actors to leverage GenAI to create imposter websites, domains and social media profiles that impersonate financial firms and registered representatives.
In December 2024, for example, the Securities and Exchange Commission (SEC) charged three defendants with impersonating legitimate securities brokers and investment advisors in an online scheme that stole more than $2.9 million from 28 investors. According to the SEC, the defendants created websites impersonating nearly two dozen actual securities brokers and directed investors to fake online platforms designed to make them believe their portfolios were growing in value. They even purchased voice-changing software to mimic the female investment advisors they were impersonating.
The scheme’s complexity reflects GenAI’s capacity to generate realistic-looking content, or “deepfakes.” Deepfakes manufacture what appear to be real events, such as a person doing or saying something they did not actually do or say. The expansive (and unnerving) threats associated with deepfakes have surfaced across several facets of society. They create particular risks for financial institutions.
In May 2024, fraudsters tricked an employee at a multinational firm into paying out $25 million. They set up a video call with what appeared to be the firm's CFO and several staff members the employee recognized, all of whom were actually deepfakes.
GenAI-rendered deepfakes also allow criminals to avoid identity verification and due diligence controls in less sensational ways. According to FinCEN's November 2024 report, criminals use GenAI to create falsified documents, including government-issued driver's licenses and passports, either by editing an authentic source image or creating a synthetic image. Criminals also create synthetic identities by combining GenAI images with stolen personally identifiable information (PII) or entirely fake PII. Once accounts are opened using fraudulent identities, criminals use them to receive and launder the proceeds of related fraud schemes or as funnel accounts.
Mitigating the Fraud and AML Risks Created by GenAI
The advances in GenAI present new challenges for banks implementing policies and procedures for Customer Identification Programs (CIP) and Customer Due Diligence (CDD), which must be reasonably designed to achieve compliance with regulations implemented under the Bank Secrecy Act. These challenges are especially acute for institutions that allow customers to open accounts entirely online. In response, FinCEN and FINRA outlined best practices to manage the risks of GenAI-enabled fraud, both during validation of a customer’s identity (i.e., confirming the identity exists and is unique and that the presented identification documents are authentic) and verification of the identity (i.e., confirming that the validated identity belongs to the customer).
To validate a customer’s identity, best practices include:
- Requiring both documentary (e.g., government-issued IDs) and non-documentary identifying information, or multiple forms of documentary information, and flagging inconsistencies among identity documents or between an identity document and other aspects of the customer's profile.
- Contracting third-party vendors to verify the legitimacy of suspicious information or images, including by examining an image’s metadata or using software to detect deepfakes or specific manipulations.
- Reviewing account applications for common identifiers (e.g., email addresses and phone numbers) present in other applications or existing accounts, especially if the accounts otherwise appear unrelated (a minimal sketch of this check follows the list).
- Reviewing account applications for use of temporary or fictitious email addresses.
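Of these checks, screening for shared identifiers is among the easiest to automate. Below is a minimal Python sketch of that idea; the record shape (dicts with app_id, email and phone fields) and the normalization rule are illustrative assumptions, not a format specified by FinCEN or FINRA.

```python
from collections import defaultdict

def flag_shared_identifiers(applications):
    """Flag account applications that share an email address or phone
    number with other, otherwise unrelated applications.

    `applications` is assumed to be an iterable of dicts with 'app_id',
    'email' and 'phone' keys; a real system would pull these records
    from an application-intake database.
    """
    by_identifier = defaultdict(set)
    for app in applications:
        for field in ("email", "phone"):
            value = app.get(field)
            if value:
                # Normalize so trivial variations don't hide reuse.
                by_identifier[(field, value.strip().lower())].add(app["app_id"])

    # Any identifier tied to more than one application is worth review.
    return {
        identifier: app_ids
        for identifier, app_ids in by_identifier.items()
        if len(app_ids) > 1
    }

if __name__ == "__main__":
    apps = [
        {"app_id": "A-1", "email": "jdoe@example.com", "phone": "555-0100"},
        {"app_id": "A-2", "email": "jdoe@example.com", "phone": "555-0199"},
        {"app_id": "A-3", "email": "other@example.com", "phone": "555-0100"},
    ]
    for (field, value), ids in flag_shared_identifiers(apps).items():
        print(f"{field} '{value}' appears on applications: {sorted(ids)}")
```

In production, the same cross-referencing would run against the bank's full application and account history rather than an in-memory list.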
Once a bank validates the customer’s identity, best practices to verify that the identity belongs to the customer opening the account include:
- Asking follow-up questions or requesting additional documents based on information from credit bureaus, credit reporting agencies or digital identity intelligence.
- Reviewing IP addresses or other available geolocation data associated with new online account applications for consistency with the customer's home address (see the sketch following this list).
- Limiting automated approval of multiple accounts for a single customer.
- Using multifactor authentication (MFA), including phishing-resistant MFA.
- Requiring live verification checks that prompt a customer to confirm their identity through audio or video.
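The IP-geolocation review in the second item above reduces, at its core, to a distance comparison. The following sketch assumes the bank already has latitude/longitude pairs from an IP-geolocation service and from geocoding the customer's stated address; the 500 km threshold is a hypothetical policy choice for illustration, not a regulatory value.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def geolocation_consistent(ip_coords, home_coords, threshold_km=500.0):
    """Return True if the application's IP-derived location falls within
    `threshold_km` of the customer's stated home address.

    Both arguments are (latitude, longitude) tuples. In practice,
    `ip_coords` would come from a commercial IP-geolocation service and
    `home_coords` from geocoding the address on the application; the
    threshold is a tunable policy choice.
    """
    return haversine_km(*ip_coords, *home_coords) <= threshold_km

# Example: application IP resolving near Chicago, home address in Chicago.
print(geolocation_consistent((41.88, -87.63), (41.85, -87.65)))  # True
```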
These strategies are not foolproof. FinCEN notes that illicit actors may be able to respond to live verification prompts or access tools that create synthetic audio and video responses on their behalf (see the SEC action described above). However, FinCEN points out that because live checks can expose inconsistencies in a deepfake identity, criminals often avoid them or try to bypass them, for example by claiming technical glitches or asking to change communication methods.
Once the account has been opened, banks may be able to detect deepfake identity documents by triggering additional customer due diligence when an account exhibits some or all of the following signs of suspicious activity (a simple escalation sketch follows the list):
- Access to an account from an IP address that is inconsistent with the customer’s profile.
- Patterns of apparent coordinated activity among multiple similar accounts.
- High payment volumes to potentially higher-risk payees, such as gambling websites or digital asset exchanges.
- High volumes of chargebacks or rejected payments.
- Patterns of rapid transactions by a newly opened account or an account with little prior transaction history.
- Patterns of withdrawing funds immediately after deposit and in manners that make payments difficult to reverse in cases of suspected fraud, such as through international bank transfers or payments to offshore digital asset exchanges and gambling sites.
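Taken together, these indicators lend themselves to a simple rules-based escalation: count the red flags an account exhibits and trigger enhanced due diligence once a threshold is met. The sketch below uses hypothetical field names and thresholds chosen purely for illustration; neither FinCEN nor FINRA prescribes specific values.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Illustrative fields only; names and thresholds below are
    # assumptions for this sketch, not values from FinCEN or FINRA.
    ip_inconsistent_with_profile: bool
    coordinated_with_similar_accounts: bool
    high_risk_payee_volume: float   # e.g., dollars to gambling/crypto payees
    chargeback_count: int
    rapid_transactions_new_account: bool
    immediate_hard_to_reverse_withdrawals: bool

def needs_enhanced_due_diligence(activity: AccountActivity,
                                 min_indicators: int = 2) -> bool:
    """Count how many red-flag indicators an account exhibits and
    escalate when the count reaches `min_indicators`."""
    indicators = [
        activity.ip_inconsistent_with_profile,
        activity.coordinated_with_similar_accounts,
        activity.high_risk_payee_volume > 10_000,  # assumed threshold
        activity.chargeback_count > 5,             # assumed threshold
        activity.rapid_transactions_new_account,
        activity.immediate_hard_to_reverse_withdrawals,
    ]
    return sum(indicators) >= min_indicators

activity = AccountActivity(
    ip_inconsistent_with_profile=True,
    coordinated_with_similar_accounts=False,
    high_risk_payee_volume=15_000.0,
    chargeback_count=1,
    rapid_transactions_new_account=False,
    immediate_hard_to_reverse_withdrawals=False,
)
print(needs_enhanced_due_diligence(activity))  # True: two indicators present
```

In practice, banks would tune the indicators and thresholds to their own risk appetite and feed them from transaction-monitoring systems; the point of the sketch is the escalation structure, not the numbers.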
Takeaways for Banks
GenAI presents promising opportunities for innovation, but it also poses significant challenges for banks trying to counter increasingly sophisticated fraud and money laundering threats. Recent publications from FinCEN and FINRA highlight these threats, unpacking how illicit actors create synthetic content and deepfakes that can bypass traditional identity verification processes. While brazen fraud and money laundering schemes will continue to generate sensational headlines, banks must prioritize the unglamorous legwork of designing identity verification and customer due diligence processes that can counter the rapidly evolving threats posed by GenAI.
Please contact Chris Couch, Andrew Meaders or any member of the Phelps banking and financial services team with questions or for advice and guidance.