Model risk management is an evolving topic for community banks and credit unions. While models have been around for a long time, their prevalence in artificial intelligence (AI), machine learning (ML), and other software applications makes understanding and managing them more important than ever. Let's dive into some frequently asked questions about model risk management.

What is a "model?"

According to the interagency Supervisory Guidance on Model Risk Management, a "model" is:

"A quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates."

A model works like this:

  • Raw data goes into the model.
  • The model processes the data and translates it into meaningful information.
  • The model outputs the results in an easy-to-understand format.
  • You then use the results to make informed business decisions.

For example, take a mortgage lending application. You input various data points about a property into the system, such as its location, size, and recent market trends. The application processes this information using statistical models and algorithms to generate a valuation report. This report then helps your institution determine whether offering a mortgage loan on the property is a sound decision.
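
To make that flow concrete, here is a minimal sketch in Python of the input-process-output pattern. The model, field names, and coefficients are hypothetical stand-ins, not how any real lending application works under the hood.

```python
# Hypothetical property-valuation "model": raw data in, estimate out.
# The coefficients below are made up for illustration only.

def estimate_value(square_feet: float, location_factor: float,
                   market_trend: float) -> float:
    """Process raw property data into a quantitative estimate."""
    base_price_per_sqft = 150.0  # assumed baseline, not real market data
    return square_feet * base_price_per_sqft * location_factor * market_trend

# Raw data goes into the model...
valuation = estimate_value(square_feet=1800, location_factor=1.2, market_trend=1.05)

# ...and an easy-to-understand result comes out, informing a decision.
print(f"Estimated value: ${valuation:,.0f}")
print("Proceed to loan review" if valuation >= 250_000 else "Refer to underwriting")
```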

What is "model risk?"

No model is perfect. Because of this, there's always some risk in using one.

The biggest risk of using a model is inaccurate output. Inaccurate outputs can occur for several reasons, like unintended misconfigurations, intentional tampering, or flawed input data. Poor model training and validation may also lead to issues like model drift or biased results.

While models are intended to help, an incorrect model, or a correct output used incorrectly, can increase risk across the board (e.g., strategic, financial, reputational, legal, operational, etc.).

Why is model risk management important?

Model risk management is important because it gives you confidence in a model's outputs. Whatever you use a model to do, it helps ensure you can trust the results.

How can I manage model risk?

Here are six steps you can follow to start managing model risk.

  • Step 1: Create an inventory of the models you use. Consider both the models you developed internally and those developed by third parties. A good place to start is anything that provides scores, suggests values, and/or helps you make decisions (e.g., lending software applications). A minimal inventory sketch follows this list.

  • Step 2: Prioritize the models by criticality. Put them in order based on how important they are to your business functions and decision-making processes. Start your risk management process with the most critical models first.

  • Step 3: Determine how the model is trained. Evaluate the training approach and the quality of the data the model is trained on (e.g., pre-trained on large datasets, fine-tuned on datasets relevant to the organization, trained on user inputs, etc.).

  • Step 4: Identify what controls are in place to protect against biased, malicious, and/or unauthorized input. Models are susceptible to a variety of threats (e.g., scripting, prompt injection, training data manipulation, targeted poisoning, backdooring, etc.). Because of this, the system needs to have proper controls in place (e.g., data sanitization, input validation, anomaly detection, quality assurance, authentication and access controls, secure development, etc.). A simple input-validation sketch follows this list.

  • Step 5: Validate the model's accuracy. Perform model validation to ensure the outputs are what you would expect them to be. Validation can take many forms (e.g., professional review, historical comparison, benchmarking, confidence scores, outlier detection, random sampling, etc.). A historical-comparison sketch follows this list.

  • Step 6: Perform ongoing model validation. For many applications, performing model validation on a regular schedule (e.g., at least annually) is sufficient. If the underlying data changes frequently (e.g., real-time data streams, rapidly evolving user behavior, etc.) or the model supports critical operations, it may need more frequent validation. Additionally, it is important to validate models after significant updates to make sure the changes did not break anything.
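
To make Steps 1 and 2 concrete, here is a minimal sketch of a model inventory in Python. The model names, sources, and three-point criticality scale are assumptions for illustration; in practice the inventory might live in a spreadsheet or a GRC tool, but the structure is the same: what the model is, where it came from, and how much rides on it.

```python
# Hypothetical model inventory (Steps 1 and 2). Names, sources, and
# criticality ratings (3 = high, 1 = low) are illustrative only.

models = [
    {"name": "Loan pricing engine",    "source": "third party", "criticality": 3},
    {"name": "Property valuation",     "source": "third party", "criticality": 3},
    {"name": "Deposit forecasting",    "source": "internal",    "criticality": 2},
    {"name": "Marketing segmentation", "source": "internal",    "criticality": 1},
]

# Step 2: address the most critical models first.
for model in sorted(models, key=lambda m: m["criticality"], reverse=True):
    print(f"{model['criticality']} - {model['name']} ({model['source']})")
```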
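
For Step 4, input validation is one of the simpler controls to picture: reject malformed or out-of-range data before the model ever sees it. In the sketch below, the field names and allowed ranges are assumptions for illustration.

```python
# Hypothetical input-validation control (Step 4): reject missing,
# non-numeric, or out-of-range data before the model sees it.
# Field names and allowed ranges are illustrative assumptions.

ALLOWED_RANGES = {
    "square_feet": (100, 50_000),
    "location_factor": (0.5, 2.0),
    "market_trend": (0.5, 2.0),
}

def validate_input(record: dict) -> dict:
    for field, (low, high) in ALLOWED_RANGES.items():
        value = record.get(field)
        if not isinstance(value, (int, float)):
            raise ValueError(f"{field}: missing or non-numeric")
        if not low <= value <= high:
            raise ValueError(f"{field}: {value} is outside [{low}, {high}]")
    return record

# A clean record passes; a tampered one raises an error instead.
validate_input({"square_feet": 1800, "location_factor": 1.2, "market_trend": 1.05})
```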
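
For Steps 5 and 6, one validation form mentioned above is historical comparison: run the model on cases where the outcome is already known and measure the error. The sketch below establishes a baseline error from a back-test, then flags possible drift when recent error grows beyond a tolerance, the kind of check that could run on a schedule. The data, estimates, and tolerance are all hypothetical.

```python
# Hypothetical historical comparison (Step 5) and ongoing drift check
# (Step 6). Data, estimates, and tolerance are illustrative only.

from statistics import mean

def percent_error(predicted: float, actual: float) -> float:
    return abs(predicted - actual) / actual

# Step 5: back-test model estimates against known historical outcomes.
history = [(340_200, 335_000), (298_000, 310_000), (415_000, 402_500)]
baseline_error = mean(percent_error(p, a) for p, a in history)

# Step 6: re-run on a schedule and compare recent error to the baseline.
recent = [(352_000, 310_000), (275_000, 240_000)]
recent_error = mean(percent_error(p, a) for p, a in recent)

TOLERANCE = 0.05  # assumed: investigate if average error grows 5+ points
if recent_error - baseline_error > TOLERANCE:
    print(f"Possible drift: error rose from {baseline_error:.1%} to {recent_error:.1%}")
```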

What if I only use third-party models?

Even if the model you use is developed by a third party, you are still ultimately responsible for the outcome. That being the case, you should ask your vendors about the areas above. Determine how they train and validate their models. Based on the criticality of the model, consider requesting a model validation report and/or other proof of testing. In short, if you're depending on a vendor's model to make decisions, you need to be aware of how those decisions are made (and protected).

Conclusion

If your business is basing key decisions on model outputs, it is important to ensure the models are configured and validated correctly. For additional information about managing the risk of vendors who use AI models, download our AI Review Checklist or use the Artificial Intelligence (AI) review template in the Tandem Vendor Management product. Learn more about how Tandem can help you at Tandem.App/Vendor-Management-Software.

Further Reading