December 5, 2024
What Happened?
On November 13, 2024, the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (“FinCEN”) issued an alert to help financial institutions identify fraud schemes associated with the use of deepfake media (“deepfakes”) created with generative artificial intelligence (“GenAI”). The alert explains typologies associated with these schemes, provides red flag indicators to assist with identifying and reporting related suspicious activity, and reminds financial institutions of their reporting requirements under the Bank Secrecy Act (BSA). It also highlights another form of AI that poses a threat to financial institutions.
Starting in 2023 and continuing into 2024, FinCEN saw an increase in suspicious activity associated with deepfake media. FinCEN defines deepfakes as “a type of synthetic content that uses artificial intelligence/machine learning to create realistic but inauthentic videos, pictures, audio, and text,” and they represent another risk associated with AI that financial institutions need to be aware of.
These schemes often involve criminals altering or creating fraudulent identity documents to circumvent identity verification and authentication methods.
GenAI tools can create synthetic content that is hard to distinguish from human-generated outputs, and bad actors are taking advantage of this technology. For example, bad actors have used GenAI to alter images used for identification documents, such as driver’s licenses or passport cards and books. These images can be combined with stolen personally identifiable information (“PII”) or fake PII to create synthetic identities.
Additionally, bad actors may utilize GenAI-enabled social engineering attempts to coerce financial institutions into providing PII or giving access to an account or system. For example, a bad actor may use GenAI tools to impersonate an executive and instruct employees to transfer large amounts of money or complete illegitimate payment requests.
What Does This Mean for Me?
The good news is that there are steps financial institutions can take to mitigate the risks associated with deepfake media. FinCEN highlights two best practices in particular: implementing multifactor authentication (“MFA”), including phishing-resistant MFA, and using live verification checks that require customers to confirm their identity through audio or video.
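As an illustration of the layered approach FinCEN describes, the sketch below shows a hypothetical onboarding gate that approves a customer only when a document check, a live audio/video confirmation, and a phishing-resistant MFA challenge all pass. The names (`VerificationResult`, `approve_onboarding`) are our own for illustration and do not come from the alert.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_check_passed: bool  # ID document screened for signs of GenAI alteration
    live_check_passed: bool      # real-time audio/video identity confirmation
    mfa_passed: bool             # phishing-resistant MFA challenge completed

def approve_onboarding(result: VerificationResult) -> bool:
    """Approve only when every independent layer passes (defense in depth)."""
    return (result.document_check_passed
            and result.live_check_passed
            and result.mfa_passed)

# A failure at any single layer blocks approval and can route the
# application to manual review instead.
print(approve_onboarding(VerificationResult(True, True, True)))   # True
print(approve_onboarding(VerificationResult(True, False, True)))  # False
```

The design point is that no one check is trusted on its own: a deepfaked ID image that defeats the document check still has to defeat the live verification and the MFA challenge.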
FinCEN also identified a number of red flags financial institutions can look for to identify suspicious activity associated with the use of GenAI tools.
Financial institutions should be proactive in staying up to date on the risks posed by GenAI. Training and user education are also important. If your firm uses (or plans to use) GenAI tools for business-related tasks, then you should implement a strong training program. You may also want to review our previous flash reports regarding AI:
If you have any questions about AI or related issues, let us know, and one of our regulatory experts will contact you soon.