Artificial Intelligence (AI) has transformed industries, boosting efficiency while simultaneously presenting substantial risks—a true double-edged sword.

Recognizing these challenges, several of the federal banking agencies have communicated via supervisory reports that when it comes to AI, "With great power comes great responsibility." (Oh wait, that was for Spider-Man.)

In all seriousness, these agencies have emphasized that AI can be both beneficial and harmful, highlighting the need for careful use. Let's look at some of the specific things that regulators have said about AI.

Regulators Say to Manage AI Models

In August 2021, the OCC released its Model Risk Management exam program. In the program, the OCC stated that a model is a tool that turns data into quantitative estimates. A model consists of three components:

  1. Information input component that delivers data to the model.
  2. Processing component that transforms inputs into estimates.
  3. Reporting component that translates the estimates into information.

In other words, a model is a method, system, or approach that applies theories, techniques, or assumptions to make predictions and then inform decisions based on those predictions. Sounds a lot like AI, doesn't it? The program clarified that AI might qualify as a model, but it depends on the context. For example, a credit risk scoring algorithm that analyzes borrower data to predict default likelihood would likely be classified as a model. However, a customer service chatbot may not, unless it directly impacts financial decisions.
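To make those three components concrete, here is a minimal sketch in Python, using the credit risk scoring example above. Every name, weight, and threshold in it is a hypothetical assumption for illustration; it is not drawn from the OCC program or any real scoring model.

    # Minimal sketch of the three model components, using a hypothetical
    # credit risk score. All names, weights, and thresholds are
    # illustrative assumptions, not from any regulatory guidance.
    from dataclasses import dataclass

    @dataclass
    class BorrowerData:  # 1. Information input component: delivers data to the model
        income: float
        debt: float
        missed_payments: int

    def estimate_default_risk(b: BorrowerData) -> float:
        """2. Processing component: transforms inputs into a quantitative estimate."""
        dti = b.debt / max(b.income, 1.0)            # debt-to-income ratio
        score = 0.6 * dti + 0.1 * b.missed_payments  # toy weights, purely illustrative
        return min(score, 1.0)

    def report(risk: float) -> str:
        """3. Reporting component: translates the estimate into usable information."""
        return "Refer for manual review" if risk > 0.5 else "Within automated approval range"

    borrower = BorrowerData(income=60_000, debt=45_000, missed_payments=1)
    print(report(estimate_default_risk(borrower)))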

Learn more about model risk management in our blog: Model Risk Management FAQs for Community Banks and Credit Unions.

Regulators Say There Is No Clear Definition for AI (Yet)

Long story short, it is challenging to define "AI." Some definitions focus on the algorithms behind AI, while others focus on the outputs AI produces. In a recent speech, Governor Michelle W. Bowman explained that while each definition of AI has its purpose depending on its context, a narrow definition risks criticism, whereas a broad definition could lead to an overly generalized policy response. At a minimum, though, whatever definition the regulators adopt must establish clear parameters around which types of activities and tools are covered.

Regulators Say Who's Responsible for Compliance

Issued in June 2024, the Interagency Final Rule on Quality Control Standards for Automated Valuation Models (AVMs) addresses the use of AVMs in property valuations. AVMs are tools that reduce turnaround time and costs. While they are valuable, they also carry risks. The rule requires policies, procedures, and control systems designed to ensure a high level of confidence in the estimates produced, protect against the manipulation of data, seek to avoid conflicts of interest, require random sample testing and reviews, and comply with applicable nondiscrimination laws, regulations, and guidance.
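As a rough illustration of what the testing expectation could look like in practice, here is a minimal back-testing sketch in Python. The rule does not prescribe any particular method; the sample data, the 10% tolerance, and all names below are assumptions made purely for illustration.

    # Hypothetical back-test: compare AVM estimates against later sale
    # prices on a random sample. Illustrative only; the rule does not
    # prescribe a method, and this data and tolerance are made up.
    import random
    from statistics import median

    records = [  # (avm_estimate, actual_sale_price)
        (250_000, 240_000), (310_000, 335_000), (182_000, 179_500),
        (499_000, 455_000), (275_000, 276_000), (620_000, 610_000),
    ]

    sample = random.sample(records, k=4)  # random sample testing
    errors = [abs(est - actual) / actual for est, actual in sample]

    print(f"Median absolute error: {median(errors):.1%}")
    if median(errors) > 0.10:  # assumed tolerance for this sketch
        print("Flag the AVM for review and document the findings.")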

While it may be tempting to pass responsibility for compliance back to the vendors who build these AI systems, the final rule makes clear that ultimate responsibility rests with the institutions that use them.

In short: Financial institutions need to perform thorough due diligence to make sure the AI systems they use (vendor-created or otherwise) are secure, fair, and compliant.

Check out Tandem's Artificial Intelligence (AI) Vendor Review Checklist to help manage your vendor relationships.

Regulators Say AI Creates New Threats

Several regulators have highlighted the connection between the emerging technology of AI and emerging threats.

  • The Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions identify fraud schemes associated with deepfake media created using generative AI tools. The alert summarized typologies, explained red flags, and reminded financial institutions of their reporting requirements under the Bank Secrecy Act.
  • The Federal Deposit Insurance Corporation (FDIC) published guidance in its 2024 Risk Review, which specifically raised concerns about AI potentially being used to circumvent authentication.
  • The Federal Bureau of Investigation (FBI) released a public service announcement describing common tactics criminals use with generative AI. These include creating fake social media profiles, targeted messages, fraudulent websites, realistic images for phishing or counterfeit promotions, and audio or video deepfakes for impersonation and scams, as well as using cloned voices for financial fraud, generating fake IDs, and producing misleading videos or real-time deepfake chats.

In short, as AI continues to open new doors for bad actors, financial institutions need to stay aware of these emerging schemes and tactics.

To learn more about the threats posed by AI, check out our blog on The Implications of Artificial Intelligence on Cybersecurity.

Regulators Say What They Are Doing About AI

The U.S. Department of the Treasury has started a broader effort to provide financial institutions with information on the effects of using AI. As part of this effort, it has published reports on its work to address AI use, along with a list of use cases. These resources can help financial institutions prepare for and manage the threats that come with AI.

But Wait...There's More!

The U.S. Department of the Treasury's report on Artificial Intelligence in Financial Services noted that government agencies have already developed several frameworks relevant to the use of AI. Each of these frameworks can help manage the risks associated with AI.

For example, see the NIST Artificial Intelligence Risk Management Framework (AI RMF), along with the other resources in the Further Reading list below.

Conclusion

In summary, as we start 2025, the regulators say that AI can be a helpful tool for financial institutions. With this said, responsibility for security, fairness, and compliance still rests in the hands of the institutions using these AI systems. And just like with any other tool, new threats and risks will continue to arise.

To learn about how to manage AI third-party relationships, check out Tandem Vendor Management. Tandem offers a simplified and streamlined interface, designed to organize your vendor management program. Learn more at Tandem.App/Vendor-Management-Software.

Further Reading

  • CISA: Joint Cyber Defense Collaborative (JCDC) AI Collaboration Playbook
  • FBI: PSA: Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud
  • FDIC: 2024 Risk Review
  • FHFA: Advisory Bulletin on Artificial Intelligence/Machine Learning Risk Management
  • FinCEN: Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions
  • FRB: Cybersecurity and Financial System Resilience Report (July 2024)
  • FRB: Speech on Artificial Intelligence in the Financial System
  • Interagency: Joint Statement on Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems
  • Interagency: Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems
  • Interagency: Guidance on Third-Party Relationships: Risk Management
  • Interagency: Final Rule on Quality Control Standards for Automated Valuation Models
  • Nacha: Payments and Artificial Intelligence: Protecting Yourself Against AI-Based Scams
  • NAIC: Model Bulletin: Use of Artificial Intelligence Systems by Insurers
  • NCUA: Written Statement: Oversight of U.S. Financial Regulators: Accountability and Financial Stability
  • NIST: Artificial Intelligence Risk Management Framework (AI RMF)
  • NYDFS: Industry Letter on Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks
  • OCC: Comptroller's Handbook: Safety and Soundness: Model Risk Management
  • OCC: Semiannual Risk Perspective (Spring 2024)
  • U.S. Treasury Department: Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector
  • U.S. Treasury Department: Report on the Uses, Opportunities, and Risks of Artificial Intelligence in Financial Services