The 10 Attributes of Highly-Effective Model Risk Managers

Updated: Sep 13

What Does It Take to Truly Differentiate Yourself for Long-Term Success?


Twelve years ago this month, the Federal Reserve and the Office of the Comptroller of the Currency ("OCC") released their seminal Supervisory Guidance on Model Risk Management (SR 11-7 and OCC 2011-12) - an evolutionary update to the OCC's prior Bulletin 2000-16 on model validation - which now forms the foundation for virtually all U.S. banks' current-day, three-lines-of-defense model risk management programs.


This month, as I look back on my twenty-plus years of model risk management ("MRM") services across the banking industry, I've thought about the range of MRM programs I have observed - those that really got it right, and those that - despite a well-designed program - floundered. And the difference, I believe, is frequently driven by the mindset of the model risk managers executing the program. You see, many people view model risk management as a highly quantitative domain - and they are right. However, it is also - fundamentally - a risk management function and, as such, requires an additional distinct set of skills focused on rigorous and transparent processes, dealing with uncertainty, communicating in a new "language", making judgments, dealing with conflict, establishing limits, and appreciating the importance of both short-term risk mitigations and long-term solutions.


What this means is that highly-effective model risk ("MR") managers bring a multidimensional mindset to their work - looking through lenses that are both technical and strategic, quantitative and judgmental - and being technically proficient but also interpersonally savvy.


In recognition of this twelfth anniversary of our modern-day model risk management function, I'd like to drill down into this multidimensional mindset and share what I view as the attributes of a highly-effective MR manager. As you will see, the examples I use to describe these attributes lean toward the model validation area of the MRM function - which makes up the bulk of work for many MR managers; nevertheless, several of these attributes are equally important in other MRM areas such as annual reviews, on-going monitoring, etc. Finally, I note that this list is not exhaustive, and I am sure others would identify additional important attributes - many with which I would agree.


Let's dive in.



Attribute 1: Identifies Unique Model Risks


A core activity of an MR manager is performing model validation testing to identify and measure the potential risks and limitations associated with a model's specific use(s). The scope of such validation testing should be grounded in a model-specific risk assessment that spans the areas of model development, implementation, and on-going use - and focuses on the identification of risks within these areas that impact the safe, sound, and compliant use of the model.


Many MRM programs maintain standardized model risk templates for specific types of models - ensuring that MR managers base their validation testing on a consistent set of potential model risks. While such templates are quite useful, they are - by design - targeted to "standardized" model types and, accordingly, likely need to be modified to appropriately address models - and model features - that deviate from such standards.


Such modifications require the MR manager to have the knowledge, experience, and business acumen to perform a model-specific risk assessment - identifying the unique risks of the model that are not covered by the template and - where relevant - removing standard risks that do not apply. To do this successfully, MR managers need to be proficient in the risk assessment process and to possess broad knowledge of the specific types of risks that may be present for various model methodologies, estimation approaches, data types, and implementation architectures - both during initial model development and, importantly, during on-going model use, when key features of the model's operating environment may change - thereby creating emerging risks to the model's reliability.



Attribute 2: Creates a Well-Designed, Customized Model Validation Testing Plan


Once the model risk assessment has been completed, the MR manager creates the model validation testing plan. For each relevant risk identified - whether it pertains to model development data, model methodology, model estimation, outcome analysis, etc. - the MR manager should design an appropriate set of model validation testing activities to effectively evaluate the presence and, where necessary, the magnitude of that specific risk. For example, if the model developers linked multiple datasets together for use in the model estimation process, the MR manager should ensure that: (1) the developer's failure to completely and accurately link these datasets is identified as a potential risk, and (2) his/her model validation testing procedures are adequately designed to assess whether this potential risk is present.
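A linkage test of this kind can be sketched in a few lines. This is a hypothetical illustration - the datasets, key name, and pass/fail logic are assumptions, not taken from any actual validation program - but it shows the idea of checking both directions of a join for unmatched records:

```python
# Hypothetical sketch: checking whether two development datasets were
# completely linked on a shared key. Names and data are illustrative.

def check_linkage(left, right, key):
    """Return the key values that fail to match in either direction of the join."""
    left_keys = {row[key] for row in left}
    right_keys = {row[key] for row in right}
    return {
        "left_only": left_keys - right_keys,   # records that would drop out of the merge
        "right_only": right_keys - left_keys,  # records that would never be used
    }

# Illustrative loan-level and bureau-level extracts with one gap on each side.
loans = [{"loan_id": 1}, {"loan_id": 2}, {"loan_id": 3}]
bureau = [{"loan_id": 2}, {"loan_id": 3}, {"loan_id": 4}]

gaps = check_linkage(loans, bureau, "loan_id")
print(gaps)  # any non-empty set signals a potentially incomplete linkage
```

In practice the same check would run against record counts and key uniqueness as well, but even this minimal version surfaces the silent record loss that a plain inner join would hide.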


Similar to (1) above, the MRM program may also maintain model validation testing plan templates for specific types of models that MR managers are expected to use consistently. However, as these templates are typically based on "standardized" model types and features, they may not sufficiently cover unique model risks or features that deviate from this standard. Accordingly, for the model validation testing plans to be effective, the MR manager should have the good judgment and confidence to recognize when to modify the work plan template to address such situations, and to design appropriate, targeted testing procedures based on their accumulated knowledge and experience.



Attribute 3: Strategically Interacts With Model Developers


Nothing can impede an MR manager's work more than an uncooperative model developer. This is mainly because many MRM activities require information and materials from the model developer - some in written form, but many from on-going meetings and other ad hoc personal interactions, such as:

  • Performing joint walkthroughs of the model development, implementation, and monitoring processes.


  • Performing joint walkthroughs of certain model-related computer code or datasets.


  • Asking clarifying questions on model documentation and other model development and/or implementation materials.


  • Communicating potential model validation testing findings and requesting responses.


In my experience, highly-effective MR managers engage in these interactions with a mindset that is both strategic and respectful. What this means foremost is that the MR manager should approach these interactions with empathy. In nearly all cases, developers have the best of intentions: they are proud of their work, and they care about how their peers and supervisors view the quality of their efforts. Having another party come in and "audit" this work creates anxiety and can easily trigger defensiveness.


Highly-effective MR managers understand this context and engage with the model developer(s) accordingly - with the strategy of maintaining cooperation and effective communication, and avoiding triggering defensiveness, in order to complete the project in a timely and professional manner. What does this look like in practice? Here are a few examples:

  • During the information collection phase of the project, the MR manager should keep all interactions focused on the facts and avoid communicating premature opinions. Raising potential issues before they have been fully researched and vetted can quickly and adversely impact the dynamic of these interactions - triggering defensiveness in the developer(s) and potentially compromising the MR manager's credibility. In such cases, there will likely be a marked decrease in responsiveness and cooperation from the developer(s), future interactions may become adversarial, and the project may stall. As I used to counsel my team members during critical information collection phases when cooperation from the model developer is vitally important: always "maintain a poker face" - even when you believe something may be very wrong. Communicating potential issues should be done strategically and after sufficient evidence has been collected and vetted - not prematurely or informally.


  • The MR manager should choose their words carefully when asking questions. Again, focus on the facts and avoid language or tone that implies skepticism, could be perceived as accusatory, or suggests that something is wrong. For example, rather than asking "Why didn't you test for X?", rephrase it as "My understanding is that you performed the following tests. Is this list complete? Am I missing anything?". As another example, rather than saying "I think there is an error in this code.", rephrase it as "My understanding is that this code is supposed to do X. However, it appears to be doing Y. Could you please help me reconcile this?". This phrasing has a neutral tone, is not accusatory, and leaves open the possibility that the difference could be your own misunderstanding.


  • The MR manager should choose their words carefully when describing potential validation findings or issues. Similar to the above, keep the descriptions factual, professional in tone, and focused on the model, not the developer(s). For example, rather than stating "Although X is a typical diagnostic test for this model, the developer didn't perform it", restate it as "The results for diagnostic test X were not available. Based on our own testing, we note that the results of this test indicate ....".




Attribute 4: Consistently Asks Themself "So What?"


Many of the attributes I am discussing involve the MR manager adopting the mindset of a risk manager. While having the skills to perform the technical work associated with model validation testing is clearly one of the most important qualifications for the role, being able to interpret the results of your validation testing within a larger risk management context is what truly differentiates a technical analyst from an MR manager.


To develop this risk management mindset, it is important to ask yourself consistently "So What?" when you identify a model validation finding or issue. For example, ask yourself whether the following model validation findings are helpful from a risk management standpoint:

  • "Based on our testing, we identified the presence of significant multicollinearity between variables X4 and X10."


  • "We note that the use of a dynamic panel dataset creates dependency across the development data observations - which is a violation of the estimation methodology's assumptions."


  • "We note that 10% of variable X2's values are missing and were replaced with a default value that lacks sufficient support."


While each of these findings is important, they are all simply statements of fact. Multicollinearity is present. There is a violation of the estimation methodology's assumptions. The default value lacks sufficient support. While the MR manager has identified a risk, he/she is missing an equally important component of the risk management process - measurement - that is, the "So What?". Why should the model user care about it?

To effectively mitigate identified model risk issues, model users need to understand their impacts - i.e., what specifically could go wrong with the use of this model in the presence of this issue? What is the potential magnitude of this effect? How likely is it to occur? And these impacts need to be more than speculative. As risk managers, one of our primary goals is to assist model users in understanding risks and limitations in terms - or language - that they understand, and to work cooperatively (although independently) with them in considering appropriate risk mitigants.
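To make the multicollinearity example above concrete, one common way to attach a "So What?" is the variance inflation factor. For the two-predictor case, VIF = 1 / (1 - r²), where r is the correlation between the predictors - so a stated correlation translates directly into how much the coefficient's variance is inflated. The correlation value below is an assumption chosen for illustration:

```python
# Illustrative sketch: converting a multicollinearity finding into a
# quantified impact via the variance inflation factor (VIF).
# The correlation between X4 and X10 is an assumed example value.

def vif_two_vars(r):
    """VIF for one of two predictors whose pairwise correlation is r."""
    return 1.0 / (1.0 - r ** 2)

r_x4_x10 = 0.95                 # hypothetical correlation between X4 and X10
vif = vif_two_vars(r_x4_x10)
print(f"VIF = {vif:.1f}")
# A VIF near 10 means the coefficient's variance is ~10x what it would be
# with uncorrelated predictors (standard error roughly 3x larger) - a
# practical statement a model user can act on.
```

The point is not the formula itself but the translation: "multicollinearity is present" becomes "this coefficient's standard error is roughly tripled, so its estimated effect is unreliable."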



Attribute 5: Thinks Creatively in Measuring Model Risk Impacts


Before my retirement, a key part of my practice involved assisting audit teams in testing client models used to support material financial statement estimates - work that was closely aligned with standard model validation testing. As discussed in (4) above, whenever my team identified a finding, it was important that we could communicate to senior audit team members the "So What?". Why was this finding relevant? What was its practical impact on the accuracy and reliability of the financial estimate?


For technical errors, measuring the practical impact is straightforward - the error can be fixed and one can directly calculate the difference in model outputs. However, for other types of findings that are not pure technical errors, quantifying the potential impacts can be more challenging. This is where creativity is important (and let me be clear, when I say "creative" I mean innovative - thinking outside the box). For example, if you cannot precisely estimate the potential impact of a finding, can you estimate a reliable range of potential impacts? Or can you reliably estimate the upper bound of that range? Or can you demonstrate that the finding only has an impact under certain scenarios or conditions?
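One simple way to bound an impact that cannot be computed exactly is to re-score the model under extreme but plausible assumptions. The sketch below is purely illustrative - the toy model, the questioned default value, and the observed values are all assumptions - but it shows the "range of potential impacts" idea applied to the earlier missing-data finding:

```python
# Hedged sketch: bounding the impact of an unsupported default value used
# to fill missing X2 observations. The model and all values are hypothetical.

def score(x2, beta=0.4, intercept=1.0):
    """A toy one-variable linear model standing in for the real model."""
    return intercept + beta * x2

observed_x2 = [2.0, 3.0, 5.0]   # non-missing X2 values in the development data
default_x2 = 0.0                # the questioned default used for missings

# Re-score with the smallest and largest observed values in place of the
# default to bracket how far the imputation choice could move the output.
baseline = score(default_x2)
low, high = score(min(observed_x2)), score(max(observed_x2))
impact_range = (low - baseline, high - baseline)
print(impact_range)  # bounds on the output shift attributable to the default
```

Even when the true impact is unknowable, a defensible bracket like this lets the model user judge materiality - which is the whole purpose of the exercise.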


The key here is that you can't effectively manage model risk if you cannot effectively measure model risk - and thinking creatively about different ways to measure potential impacts is a key differentiating attribute of a highly-effective MR manager.



Attribute 6: Considers Interdependencies of Model Risks and Impacts


Upon completion of model validation testing, it is common for model developers and users to be presented with a long list of validation findings that are described and assessed separately. While this accurately reflects the complete inventory of issues discovered during testing, it fails to express these issues in a manner that facilitates an effective understanding of the aggregate risks that are present - making it unnecessarily difficult for model developers and users to respond with proposed corrective actions and risk mitigants. To be fair, I am not suggesting that this practice is wrong; what I am saying is that the communication of findings can be made more effective for the key stakeholders.

In my view, effective MR managers recognize the interdependency of model risks and adopt a risk-focused communication approach whereby similar findings are grouped together - whether such similarity arises from the model element or feature to which they pertain, or from related impacts. Effective MR managers also consider these interdependencies when assessing impact. For example, let's return to one of the findings in (4) above: "We note that the use of a dynamic panel dataset creates dependency across the development data observations - which is a violation of the estimation methodology's assumptions." Unfortunately, it is relatively difficult to measure the direct impact of this finding, as it is theoretically based. However, being creative ((5) above!), and understanding the interdependencies of many model risks, we might postulate that if this finding did have a material impact, we would likely observe problems with the model's predictive performance.

Accordingly, the MR manager could assess whether this theoretical issue is a "fatal flaw" / Tier 1 / High-Risk finding through reference to the outcomes analysis - a validation testing activity designed to directly address other model risks. For example, if sufficient in-sample, out-of-sample, and out-of-time back-testing results have been produced, and there is no evidence of material or systematic model biases or other performance issues, then this may represent sufficient evidence that the impact of this theoretical issue is not high. In this case, knowledge of the interdependency of risk impacts helps the MR manager indirectly measure an impact for which direct measurement is much more difficult.
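A minimal version of the back-testing check described above might look as follows. The tolerance, data, and bias definition are assumptions for illustration - real outcomes analysis would use the program's own performance metrics and thresholds:

```python
# Illustrative sketch: checking out-of-sample predictions for systematic
# bias (a consistent over- or under-prediction). Data and the 5% relative
# tolerance are assumed example values, not a prescribed standard.

def backtest(predicted, actual, bias_tol=0.05):
    """Flag systematic bias when the mean error exceeds bias_tol of the mean actual."""
    errors = [p - a for p, a in zip(predicted, actual)]
    mean_error = sum(errors) / len(errors)
    mean_actual = sum(actual) / len(actual)
    return {
        "mean_error": mean_error,
        "systematic_bias": abs(mean_error / mean_actual) > bias_tol,
    }

predicted = [10.2, 9.8, 10.5, 10.1]   # hypothetical out-of-time predictions
actual    = [10.0, 10.0, 10.3, 10.2]  # corresponding realized outcomes

result = backtest(predicted, actual)
print(result)
```

If a check like this shows no material or systematic bias, that absence of downstream symptoms is the indirect evidence supporting a lower risk-tier for the theoretical finding.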



Attribute 7: Tells a Clear and Concise Story


Building on the previous section, there is an important distinction between effective technical documentation and effective risk management information. The former is what one uses to comply with company and regulatory expectations regarding model validation testing work and supporting evidence/workpapers. The latter, however, is something entirely different. It is a document - whether a written narrative or a set of slides - that distills key validation findings into a digestible "story" for key stakeholders - such as model users, risk management and financial executives, oversight committees, etc.


More specifically, it takes the 20 or so individual model validation findings and maps them into a different domain (risk management) - frequently by reducing their dimensionality, adding the appropriate "So What?"s, and facilitating at a more fundamental level: (1) the stakeholder's understanding of the model's risks and limitations, and (2) discussions of potential risk mitigants that would permit continued model use.


This can be a real struggle for some MR managers due to their primarily technical mindset. However, through interactions with these key stakeholders, and consistent self-questioning - e.g., "So what?" and "What could mitigate this risk?" - they can steadily develop the risk management mindset to serve this important link between the technical side and the risk management side of model risk management.



Attribute 8: Is Organizationally Multilingual


Key stakeholders for model risk management findings are organizationally diverse. They encompass model developers, model users, senior management, company executives, auditors, and regulators, and span nearly all organizational functions - such as finance/accounting, business lines, human resources, risk management, operations, etc. What this means, practically, is that MR managers must learn to communicate effectively with many different audiences - a large majority of which have limited technical backgrounds, will likely struggle to interpret technical model validation findings, and whose questions may be very different from those of the model developers.


Typically, these stakeholders will want to understand:

  • The most significant / higher impact model validation findings.

  • What each of these findings means practically - not technically.

  • What the quantitative impacts of the findings are.

  • What the potential options are to either "correct" the finding or mitigate its impact (e.g., an on-top adjustment to the model's output, or a correction of a technical error).

Engaging in effective communications of this type requires many of the attributes discussed previously - knowing the "So What?"s, thinking creatively about estimating impacts, considering model risk interdependencies and impacts, and telling a cohesive and concise story. In fact, this may be one of the most important differentiating attributes for long-term advancement within a risk management function.



Attribute 9: Manages Disputes and Defuses Conflict


Despite the best of intentions, identifying and communicating issues to key model stakeholders - particularly for high-impact models - can create tension and conflict. Deadlines for model implementation can be placed in jeopardy, people may be concerned about the impacts on their performance evaluations and compensation, people may be in a state of denial when seeing the number of validation findings, financial reporting deadlines may be put at risk, and concerns may arise over the collateral impacts to internal control effectiveness assessments - to name just a few.


Such situations require careful management lest such tensions escalate - drawing in more senior executives, setting off organizational alarm bells, and impeding effective resolution of the findings. In my experience, most of these situations arise due to ineffective communication between the parties - along with frequent misunderstandings of what exactly the specific problems are, and a lack of clarity on what it might take to resolve such problems. Highly-effective MR managers will actively seek to defuse and de-escalate these situations - maintaining a calm demeanor, making sure the stakeholders feel (and are) heard, directly addressing their concerns (which likely requires a shift in communication approach (see previous sections)), and working collaboratively (but with appropriate independence) on determining the appropriate path forward.


To be clear, it is never appropriate to take a softer approach to the model validation findings and their associated risks in response to push-back, pressure, or escalation. As an independent risk management function, it is critically important to maintain that independence even in the face of significant resistance. What I am referring to here is lowering the temperature of the interactions, making sure that you accurately understand the stakeholders' concerns, ensuring that you are responding substantively to those concerns, maintaining composure, and re-framing your communications if misunderstandings persist. And, importantly, acknowledging and addressing mistakes if they truly are present.



Attribute 10: Offers Advice on Potential Risk Mitigants


Over time, MR managers learn the types of risk mitigants that are appropriate to address certain common model validation findings. For example, for models already in use, breaches of model performance monitoring thresholds can typically be addressed temporarily through the use of on-top adjustments to the model's estimates. Risks associated with certain development data issues or model estimation concerns can typically be mitigated through robust evidence of the model's acceptable predictive performance under expected and stressed scenarios.
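The on-top adjustment mentioned above can be illustrated with a small sketch. Everything here is an assumption for illustration - the threshold, the additive form of the overlay, and the bias estimate would all be set by the actual monitoring framework and governance process:

```python
# Hedged sketch of a temporary "on-top" (overlay) adjustment: when recent
# monitoring shows the model under-predicting beyond an agreed threshold,
# an additive overlay re-centers the output until a permanent fix lands.
# The 2% threshold and additive logic are illustrative assumptions.

def apply_overlay(model_output, recent_bias, threshold=0.02):
    """Add back the observed bias (actual minus predicted) only when it
    breaches the monitoring threshold; otherwise leave the output alone."""
    if abs(recent_bias) > threshold:
        return model_output + recent_bias, True   # adjusted, breach flagged
    return model_output, False                    # within tolerance, unchanged

# Hypothetical example: monitoring shows a +0.05 average under-prediction.
adjusted, flagged = apply_overlay(model_output=1.00, recent_bias=0.05)
print(adjusted, flagged)
```

The important governance point is the flag: an overlay is a tracked, temporary mitigant with a documented trigger - not a silent correction baked into the model.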

The key here is learning how to mitigate model risk in addition to how to identify model risk. While model developers and/or users are ultimately responsible for responding to the model validation findings, the MR manager's breadth and depth of experience resolving such findings can be an important source of value-add (and, frankly, comfort) to other key stakeholders who lack such insights. Clearly, appropriate independence must be maintained, but offering some alternative paths to resolution for their consideration is entirely reasonable, demonstrates good-faith cooperation, and likely helps to avoid conflict.


* * *


© Pace Analytics Consulting LLC, 2023.
