Federal banking regulators are increasingly approaching AI with cautious optimism. Here's what community banks and credit unions need to know in 2026. 


Introduction 

In January 2025, we published a blog titled What are the Regulators Saying about Artificial Intelligence (AI)? At the time, the federal banking regulators were clearly cautious. There was a lot of uncertainty, paired with a heavy emphasis on managing risk. Since then, the tone has evolved. The agencies are sounding more optimistic, encouraging adoption while still keeping a close eye on the risks. 

So, where do things stand today? Let's take a look. 

Regulators Partner with Private Sector to Secure AI 

In February 2026, the U.S. Department of the Treasury announced the conclusion of a Public-Private Initiative to Strengthen Cybersecurity and Risk Management for AI. 

This partnership was called the Artificial Intelligence Executive Oversight Group (AIEOG). The AIEOG was formed through a partnership between the Financial Services Sector Coordinating Council (FSSCC) and the Finance and Banking Information Infrastructure Committee (FBIIC). The AIEOG effort produced several resources, including a Financial Services Artificial Intelligence Risk Management Framework (FS AI RMF).

In addition, the Cybersecurity and Infrastructure Security Agency (CISA) has joined international and U.S. partners to release guidance on Engaging with Artificial Intelligence and Careful Adoption of Agentic AI Services. 

Why This Matters: The financial industry, cybersecurity industry, and regulators are working together to ensure that guidance coming out isn't just top-down mandates, but practical resources designed to help you manage the risks that matter most. 

Regulators Adopt Formal AI Definitions 

Another resource published by the AIEOG was an AI Lexicon. The lexicon is designed to promote a shared understanding of key AI terms. 

For example, the lexicon defines artificial intelligence (AI) using the definition from 15 U.S.C. 9401: 

"A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to 

(A) perceive real and virtual environments; 

(B) abstract such perceptions into models through analysis in an automated manner; and 

(C) use model inference to formulate options for information or action." 

In other words, AI is a system that uses human-set goals and inputs to understand its environment, create models, and make predictions or decisions. Shared terminology like this may help clarify and standardize regulatory guidance in the future. 

Several other key terms defined in the lexicon include AI model, AI system, agentic AI, and generative AI. 

Why This Matters: Shared definitions matter for your institution because when examiners, vendors, and employees use terms like "agentic AI," everyone is working from the same playbook. 

Regulators See Financial Institutions Adopting AI 

In testimony before the U.S. House Committee on Financial Services, FDIC Director Ryan Billingsley shared a variety of ways the federal banking agencies are seeing financial institutions implement AI, including: 

  • Fraud detection
  • Anti-money laundering (AML/CFT) processes
  • Credit underwriting and lending (e.g., summarizing loan applicant information)
  • Customer service (e.g., answering questions, summarizing calls)
  • Code development 

Why This Matters: If your institution is already using AI in any of these areas (or if you are considering using it), you're in good company, and the regulators are paying attention to how it's being done. 

Regulators Support Awareness and Thoughtful Innovation 

In its 2025 Annual Report to Congress, the Financial Stability Oversight Council (FSOC) said that AI is becoming a key part of the financial industry's infrastructure. Ultimately, the FSOC recommends that regulatory agencies: 

"[E]xplore opportunities for AI to promote the resilience of the financial system, while also monitoring for potential risks to financial stability that might be posed by the adoption of AI both within and outside the financial services sector." 

The agencies continue to encourage awareness and thoughtful innovation in several ways: 

  • The Treasury Department launched an Artificial Intelligence (AI) Innovation Series. The purpose of the series is to "explore the highest-value AI use cases and identify practical approaches to scaling innovation while preserving safety and soundness."
  • The FFIEC published its Annual Report to Congress. In the report, the FFIEC indicated that it had provided multiple training opportunities to educate examiners on artificial intelligence, including at its annual FFIEC IT Examiners Conference. 
  • The OCC Comptroller testified on the agency's priorities, stating that the OCC will "work with OCC-supervised banks to clarify new ways for banks to conduct the very old business of banking and embrace new technologies like AI, ensuring these opportunities are available to all banks that wish to take advantage of them rather than a privileged few."
  • Additionally, the OCC's Semiannual Risk Perspective (Spring 2026) states, "The OCC supports responsible innovation, such as through genAI and agentic AI, as a means of modernizing the financial system and ensuring that banks of all sizes remain relevant and competitive. The OCC supports banks' efforts to integrate AI into core functions, while managing the risk in a safe and sound manner and in compliance with applicable laws and regulations." 
  • The NCUA published its 2026–2030 Strategic Plan. Strategic Objective 2.1 states that the NCUA plans to "foster an environment where federally insured credit unions can responsibly adopt financial technology, digital assets, and other innovations." The plan goes on to explain how AI fits into this. 
  • FRB Vice Chair for Supervision Michelle Bowman gave a speech on Artificial Intelligence in the Financial System. In the speech, she stated, "Innovation is a necessary component of financial services, and supervisory guidance should not be a barrier for banks to engage with new and evolving tools and technologies. Supervisors must take a balanced approach to new and emerging risks and the expected benefits while preserving the safety of the financial system."

Why This Matters: The message from regulators is coming through loud and clear: AI adoption is not only tolerated, but encouraged, as long as risk management keeps pace. 

Regulators Continue to Monitor AI-Powered Threats 

Some recent AI-powered threat trends highlighted by the federal banking agencies include: 

  • Deepfake fraud and impersonation
  • AI-generated phishing and social engineering
  • Circumvention of authentication controls
  • Synthetic identities and fraudulent content 

Why This Matters: As financial institutions continue to use AI in new ways, threat actors continue to use it in new ways, too. It is important for financial institutions to keep tabs on what's happening and how to protect against it. 

Regulators Clarify Model Risk Management Guidance for AI 

In April 2026, the FDIC, FRB, and OCC published revised Model Risk Management guidance. This guidance focuses on financial models and clarifies that generative and agentic AI models are excluded from its scope because they are evolving rapidly. However, financial institutions are still expected to apply appropriate governance and risk management practices for any AI systems used. 

Learn more in our blog: Model Risk Management FAQs for Community Banks & Credit Unions. 

Why This Matters: Even if generative and agentic AI fall outside the formal guidance for now, regulators still expect sound governance. So, don't treat "out of scope" as "off the hook." 

What This Means for Community Financial Institutions 

In mid-2026, federal banking regulators continue to focus on how artificial intelligence is being used across the financial industry. They are encouraging financial institutions to adopt AI in ways that fit their size and complexity, while staying focused on cybersecurity and ensuring that risk management keeps pace with the technology. 

Not sure where to start? Here are five steps to help your institution build a strong foundation for AI risk management: 

  1. Write your organization's AI policy and communicate it to all employees.
  2. Determine what AI systems are allowed to be (or are currently being) used at your institution.
  3. Perform an AI risk assessment to identify reasonably foreseeable threats to your systems and data.
  4. Include AI vendors in your vendor management program.
  5. Keep your Board of Directors and senior management informed on your institution's AI use and associated risks. 
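To make the five steps above concrete, here is a minimal sketch of what an AI system inventory might look like in practice. The record fields, tool names, and the reporting rule are all hypothetical illustrations, not a regulatory requirement; the idea is simply that steps 2 through 5 become much easier once each AI tool is tracked as a structured record.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    # Hypothetical inventory record for one AI tool at the institution.
    name: str
    vendor: str                    # step 4: feeds vendor management
    approved: bool                 # step 2: is this tool allowed to be used?
    handles_sensitive_data: bool   # step 3: input to the risk assessment
    customer_facing: bool
    threats: list = field(default_factory=list)  # step 3: foreseeable threats

def needs_board_report(system: AISystem) -> bool:
    """Step 5 (illustrative rule): flag systems the Board should hear about --
    unapproved tools, or tools that touch sensitive data or customers."""
    return (not system.approved
            or system.handles_sensitive_data
            or system.customer_facing)

# Example inventory (names and vendors are made up).
inventory = [
    AISystem("ChatAssist", "ExampleVendor", approved=True,
             handles_sensitive_data=False, customer_facing=True,
             threats=["prompt injection"]),
    AISystem("CodeHelper", "ExampleVendor", approved=True,
             handles_sensitive_data=False, customer_facing=False),
]

flagged = [s.name for s in inventory if needs_board_report(s)]
```

A spreadsheet can serve the same purpose; the point is having one authoritative list of AI tools with enough attributes attached to drive approval, risk assessment, and Board reporting.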

To learn more about what it means to manage the risk associated with AI, download our Artificial Intelligence Risk Management Workbook. This resource is a practical guide, written specifically for community financial institutions, to help you identify and control the risks associated with AI.  

Get your free copy now at Tandem.App/AI-Workbook. 


Further Reading 

| Agency | Guidance |
| --- | --- |
| AIEOG | Financial Sector Artificial Intelligence Executive Oversight Group (AIEOG) Deliverables |
| FBI | PSA: Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud |
| FBI | 2025 Internet Crime Report |
| FDIC | Artificial Intelligence Compliance Plan |
| FDIC | Innovation at the Speed of Markets: How Regulators Keep Pace with Technology |
| FHFA | Advisory Bulletin on Artificial Intelligence/Machine Learning Risk Management |
| FinCEN | Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions |
| FRB | Cybersecurity and Financial System Resilience Report (July 2025) |
| FRB | Speech on Artificial Intelligence in the Financial System |
| FRB | Speech on AI and Central Banking |
| FSOC | 2025 Annual Report |
| FSSCC | Financial Sector Artificial Intelligence Executive Oversight Group Deliverables |
| Interagency | Joint Statement on Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems |
| Interagency | Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems |
| Interagency | Interagency Guidance on Third-Party Relationships: Risk Management |
| Interagency | Interagency Final Rule on Quality Control Standards for Automated Valuation Models |
| Interagency | Model Risk Management – Revised Guidance |
| Nacha | Payments and Artificial Intelligence: Protecting Yourself Against AI-Based Scams |
| NAIC | Model Bulletin: Use of Artificial Intelligence Systems by Insurers |
| NCUA | Strategic Plan 2026–2030 |
| NIST | Artificial Intelligence Risk Management Framework (AI RMF) |
| NIST | Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile) Draft |
| NYDFS | Industry Letter on Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks |
| OCC | Semiannual Risk Perspective (Fall 2025) |
| OCC | Comptroller Gould Testifies on Agency Priorities |
| Treasury Department | Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector |
| Treasury Department | Report on the Uses, Opportunities, and Risks of Artificial Intelligence in Financial Services |
| Treasury Department | Public-Private Initiative to Strengthen Cybersecurity and Risk Management for AI |
| Treasury Department | Artificial Intelligence (AI) Innovation Series |
| White House | America's AI Action Plan |
| White House | National Policy Framework for Artificial Intelligence |

Frequently Asked Questions (FAQs) 

Q: Are financial institutions prohibited from using AI by the regulators? 

A: No, the regulators consistently acknowledge that AI can improve efficiency and decision-making. The expectation is not to avoid AI, but to use it responsibly with appropriate controls, oversight, and risk management. 

Q: How do organizations know if an AI tool or system is "high risk"? 

A: Focus on impact. AI is higher risk if it: 

  • Influences financial decisions (e.g., credit, lending, fraud detection)
  • Uses or processes sensitive customer or business data
  • Impacts compliance obligations (e.g., fair lending, privacy)
  • Is customer-facing or could affect customer outcomes 

The more impact it has, the more governance and oversight it requires. 
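A simple way to operationalize this guidance is to count how many of the four impact factors apply and map the count to a risk tier. The function and thresholds below are illustrative assumptions for a sketch, not regulatory criteria; your institution should calibrate tiers to its own risk appetite.

```python
def ai_risk_tier(influences_decisions: bool,
                 sensitive_data: bool,
                 compliance_impact: bool,
                 customer_facing: bool) -> str:
    """Map the four impact factors above to an illustrative risk tier.
    Thresholds are hypothetical: 3+ factors = high, 1-2 = moderate, 0 = low."""
    score = sum([influences_decisions, sensitive_data,
                 compliance_impact, customer_facing])
    if score >= 3:
        return "high"
    if score >= 1:
        return "moderate"
    return "low"

# Example: a credit-underwriting assistant that uses customer data
# and touches fair lending obligations, but is not customer-facing.
tier = ai_risk_tier(influences_decisions=True, sensitive_data=True,
                    compliance_impact=True, customer_facing=False)
```

Even a rough tiering like this helps prioritize where to spend limited governance and oversight effort first.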

Q: Do organizations need a formal AI policy? 

A: While an AI-specific policy is not legally required, most organizations should at least have an AI acceptable use policy, defined approval processes for AI tools, and guidance on handling sensitive data in AI systems. 

Q: How are regulators approaching AI right now? 

A: Regulators are continuing to evolve their approach. Rather than issuing highly prescriptive rules, they are reinforcing that existing risk management expectations still apply, including when institutions use emerging technologies like AI. 

Q: What's the biggest compliance risk with AI right now? 

A: The biggest risk is limited visibility. Many organizations do not have a complete picture of where or how AI is being used, particularly with shadow AI. This can create gaps in governance, oversight, and data protection. 

Q: Are there specific threats regulators are concerned about? 

A: Yes, regulators and agencies have highlighted risks such as deepfake fraud and impersonation, AI-generated phishing and social engineering, circumvention of authentication controls, and synthetic identities and fraudulent content. Awareness and employee training are important controls in these areas. 

Q: Should organizations rely on AI vendors to handle compliance and risk management? 

A: No, regulators are clear that responsibility for AI use remains with the organization. This includes conducting vendor due diligence, maintaining ongoing monitoring, and understanding how the AI is being used and how it operates. 

Q: If generative AI falls outside traditional model risk management scope, what's the right approach? 

A: Apply a combination of existing risk management practices, including vendor management, data governance, and security controls. Model risk management can still be used where applicable, but it shouldn't be the only approach.