
FinCEN Alert on Fraud Schemes with Deepfake Media

What Happened?

On November 13, 2024, the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (“FinCEN”) issued an alert to help financial institutions identify fraud schemes associated with the use of deepfake media (“deepfakes”) created with generative artificial intelligence (“GenAI”). The alert explains typologies associated with these schemes, provides red flag indicators to assist with identifying and reporting related suspicious activity, and reminds financial institutions of their reporting requirements under the Bank Secrecy Act (“BSA”).

Starting in 2023 and continuing into 2024, FinCEN saw an increase in suspicious activity associated with deepfake media. FinCEN defines deepfakes as “a type of synthetic content that uses artificial intelligence/machine learning to create realistic but inauthentic videos, pictures, audio, and text,” and they represent yet another AI-related risk of which financial institutions need to be aware.

These schemes often involve criminals altering or creating fraudulent identity documents to circumvent identity verification and authentication methods.

GenAI tools can create synthetic content that is difficult to distinguish from human-generated output, and bad actors are taking advantage of this capability. For example, bad actors have used GenAI to alter images used for identification documents, such as driver’s licenses or passport cards and books. These images can be combined with stolen personally identifiable information (“PII”) or fake PII to create synthetic identities.

Additionally, bad actors may use GenAI-enabled social engineering to deceive financial institution employees into providing PII or granting access to an account or system. For example, a bad actor may use GenAI tools to impersonate an executive and instruct employees to transfer large amounts of money or complete illegitimate payment requests.

What Does This Mean for Me?

The good news is that there are steps financial institutions can take to mitigate the risks associated with deepfake media. FinCEN lists two best practices financial institutions can follow to reduce their vulnerability to deepfake media: implementing multifactor authentication (“MFA”), including phishing-resistant MFA, and conducting live verification checks that require customers to confirm their identity through audio or video.
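To make the layering of these controls concrete, the sketch below shows one way an onboarding flow might chain an MFA challenge and a live verification check. Everything in it is illustrative: the function names, the VerificationResult type, and the pass/fail logic are our own assumptions, not part of FinCEN’s alert or any particular vendor’s API.

```python
# Minimal sketch of a layered identity-verification flow.
# All helper functions and types here are hypothetical illustrations,
# not FinCEN requirements or a real vendor API.

from dataclasses import dataclass


@dataclass
class VerificationResult:
    passed: bool
    reason: str


def mfa_challenge_passed(customer_id: str) -> bool:
    """Hypothetical: delegate to a phishing-resistant MFA provider
    (e.g., a hardware-key or passkey challenge)."""
    return True  # stubbed for illustration


def live_check_passed(customer_id: str) -> bool:
    """Hypothetical: live audio/video session in which the customer
    confirms their identity in real time."""
    return True  # stubbed for illustration


def verify_customer(customer_id: str) -> VerificationResult:
    # Layered controls: failure at either step blocks the flow and
    # would be escalated for manual review.
    if not mfa_challenge_passed(customer_id):
        return VerificationResult(False, "MFA challenge failed")
    if not live_check_passed(customer_id):
        return VerificationResult(False, "live verification failed")
    return VerificationResult(True, "verified")


print(verify_customer("cust-001"))  # VerificationResult(passed=True, reason='verified')
```

The point of the layering is that a deepfake good enough to beat one control (say, a synthetic ID photo) still has to beat an independent one (a live audio or video confirmation) before the customer is verified.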

FinCEN also identified the following red flags financial institutions can look for to identify suspicious activity associated with the use of GenAI tools:

  1. A customer’s photo exhibits visual signs of being altered or is inconsistent with other identifying information provided.
  2. A customer submits multiple different identity documents.
  3. A customer uses a third-party webcam plugin during a live verification check or attempts to change communication methods during a live verification check.
  4. A customer declines to use MFA to verify their identity.
  5. A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces. Firms should exercise caution when conducting reverse-image lookups to avoid inadvertently sharing a client’s legitimate photo.
  6. A customer’s geographic or device data is inconsistent with their identity documents.
  7. A newly opened account, or an account with little prior transaction history, that has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments (several of these patterns lend themselves to automated monitoring, as sketched below).
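Red flags 6 and 7 involve data points many institutions already capture, so they are natural candidates for a simple automated screen. The sketch below is a minimal, hypothetical example of such a rule-based check; the Account fields, thresholds, and flag wording are illustrative assumptions rather than FinCEN-prescribed criteria, and any hit should prompt manual review rather than automatic action.

```python
# Hypothetical rule-based screen for red flags 6 and 7 above.
# Field names, thresholds, and categories are illustrative assumptions,
# not FinCEN-prescribed criteria.

from dataclasses import dataclass


@dataclass
class Account:
    doc_country: str           # country on the identity document
    device_country: str        # country inferred from device/IP data
    txn_count_24h: int         # transactions in the past 24 hours
    risky_payee_volume: float  # payments to gambling sites, digital asset exchanges, etc.
    chargeback_count: int      # chargebacks or rejected payments


def red_flags(acct: Account) -> list[str]:
    flags = []
    if acct.doc_country != acct.device_country:
        flags.append("geographic/device data inconsistent with identity documents")
    if acct.txn_count_24h > 50:           # illustrative threshold
        flags.append("pattern of rapid transactions")
    if acct.risky_payee_volume > 10_000:  # illustrative threshold
        flags.append("high payment volume to potentially risky payees")
    if acct.chargeback_count > 5:         # illustrative threshold
        flags.append("high volume of chargebacks or rejected payments")
    return flags


acct = Account("US", "RO", txn_count_24h=120, risky_payee_volume=25_000.0, chargeback_count=8)
print(red_flags(acct))  # all four flags fire; this would prompt manual review
```

In practice, the thresholds would be tuned to the institution’s own customer base and risk appetite, and flagged activity would feed the institution’s existing BSA suspicious activity reporting process.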

Financial institutions should be proactive in staying up to date on the risks posed by GenAI. Training and user education are also important: if your firm uses (or plans to use) GenAI tools for business-related tasks, you should implement a strong training program. You may also want to review our previous flash reports regarding AI.

If you have any questions about AI or related issues, let us know, and one of our regulatory experts will contact you soon.