PRA regulation changes in PS9/24 

December 2024
6 min read

The PRA’s near-final Rulebook PS9/24 introduces critical updates to credit risk regulations, balancing Basel 3.1 alignment with industry competitiveness, and Zanders offers expert support to navigate these changes efficiently.


The near-final PRA Rulebook PS9/24, published on 12 September 2024, includes substantial changes to credit risk regulation compared to Consultation Paper CP16/22. While these amendments enhance the clarity of the Basel 3.1 implementation, institutions should conduct an in-depth impact analysis to manage their capital requirements efficiently. 

The PRA's draft proposal CP16/22 aligned closely with the Basel 3.1 reforms. In response to industry feedback, the PRA has made material adjustments in PS9/24 aimed at better balancing alignment with international standards against the competitiveness of UK-regulated institutions.  

Key takeaways 

1- Scope for a 'backstop' revaluation every five years for the valuation of real estate exposures 

2- The effect of the SME and infrastructure support factors is preserved through firm-specific adjustments introduced in Pillar 2A, despite their removal from Pillar 1. 

3- Despite industry concerns about international competitiveness, the risk-sensitive approach for unrated corporate exposures is maintained. 

The implementation timeline has been extended to 1 January 2026 with a four-year transitional period, a one-year delay from the implementation date of 1 January 2025 proposed in CP16/22.

Real Estate Exposures  

According to the final regulations, the risk weights associated with regulatory real estate exposure will be calculated based on the type of property, the loan-to-value (LTV) ratio, and whether the repayments rely significantly on the cash flows produced by the property. In place of the potentially complex analysis proposed in CP16/22, the rules for determining whether a real estate exposure is materially dependent on cash flows have been significantly simplified and there is now a straightforward requirement for the classification of real estate exposures.  

One major change in the proposals relates to loans that are secured by commercial properties. The PRA has dropped the 100% risk weight floor for exposures backed by commercial real estate (CRE), provided that the repayment is not 'materially dependent on cash flows from the property' and that the exposure fits the 'regulatory real estate (RRE)' definition. Consequently, under the new rules, firms may, in some instances, benefit from low risk weights for commercial real estate, depending on the loan's loan-to-value (LTV) ratio and the type of counterparty involved. 

Additionally, the final rules regarding the revaluation of real estate have become more risk-sensitive. Although firms are still required to use the original valuation to calculate LTV, there is now a provision allowing for a 'backstop' revaluation after five years. Going forward, the PRA has eliminated the need for firms to adjust valuations to reflect the 'prudent value' sustainable throughout the loan's duration. The requirements for downward revaluation have been simplified, requiring firms to revalue properties when they estimate a market value decline of more than 10%. Furthermore, the PRA has specified that valuations can be conducted using a robust statistical model, such as an automated valuation model (AVM). 
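As a rough illustration of how these rules interact, the sketch below encodes the 10% downward-revaluation trigger described above together with a purely hypothetical LTV-banded risk-weight lookup; the bands and weights are placeholders for illustration, not the PRA's actual PS9/24 calibration.

```python
# Illustrative sketch: the LTV bands and weights are hypothetical
# placeholders, not the PRA's PS9/24 calibration; only the 10%
# downward-revaluation trigger comes from the text above.

def needs_revaluation(original_value: float, current_estimate: float) -> bool:
    """Flag a property for revaluation when the estimated market
    value has declined by more than 10% from the original valuation."""
    return current_estimate < original_value * 0.90

def illustrative_risk_weight(ltv: float, cash_flow_dependent: bool) -> float:
    """Hypothetical LTV-banded risk weight for a regulatory real
    estate exposure; bands and weights are invented for illustration."""
    if cash_flow_dependent:
        return 1.05   # placeholder conservative weight
    if ltv <= 0.50:
        return 0.20
    if ltv <= 0.80:
        return 0.30
    return 0.50
```

In practice the risk weight would also depend on the property type and counterparty, per the final rules.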

SME support factor 

The PRA has maintained the draft proposal to remove the SME support factor under SA and IRB (Pillar 1), but has applied a firm-specific structural adjustment to Pillar 2A (the 'SME lending adjustment'). The 'SME lending adjustment' aims to absorb the impact of removing the SME support factor on the overall capital requirement.  

The PRA plans to communicate the adjusted Pillar 2 requirements to firms, ahead of the implementation date of the Basel 3.1 standards on 1 January 2026 (‘day 1’), so that firm-specific requirements will be updated at the same time as the Basel 3.1 standards are implemented. 

Infrastructure support factor 

The PRA has maintained the draft proposal to remove the infrastructure support factor under the SA and IRB approaches, but has made two material changes that will diminish the impact on the overall capital requirement:  

i. apply a firm-specific structural adjustment to Pillar 2A, which will minimize disruption to the overall capital requirement. 

ii. introduce a new 'substantially stronger' category in the IRB slotting approach. The PRA proposed a lower risk weight for 'substantially stronger' IPRE exposures in CP16/22; the new definition is expected to cover a broader scope of IPRE exposures, thus lowering the overall capital requirement. 

Unrated corporate exposures 

The PRA has maintained the draft proposal to introduce a dual approach for unrated corporate exposures: a risk-sensitive and a risk-neutral approach. Since the new approach does not apply in other jurisdictions, additional operational challenges are expected. Moreover, the 135% risk weight on non-investment-grade (Non-IG) unrated corporate exposures is higher than the 100% applied in other jurisdictions, implying higher lending costs for UK-regulated banks compared to their internationally regulated peers.

i. Risk-sensitive approach: The PRA has proposed a risk-sensitive approach in addition to the Basel III reforms. Exposures assessed by firms as IG are risk-weighted at 65%, while exposures assessed as Non-IG are risk-weighted at 135%. This more risk-sensitive approach aims to maintain an aggregate level of RWAs broadly consistent with the Basel III reforms.  

ii. Risk-neutral approach: A 100% risk weight is applied where the risk-sensitive approach would be too costly or complex.  
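The dual approach can be captured in a few lines. The 65%, 135%, and 100% weights come from the rules described above; the function itself is only an illustrative sketch.

```python
from typing import Optional

def unrated_corporate_rw(investment_grade: Optional[bool]) -> float:
    """Risk weight under the PRA's dual approach for unrated
    corporates: 65% for exposures a firm assesses as investment
    grade (IG), 135% for Non-IG, and a flat 100% under the
    risk-neutral approach (no IG assessment performed)."""
    if investment_grade is None:      # risk-neutral approach
        return 1.00
    return 0.65 if investment_grade else 1.35
```

A firm would apply the risk-neutral branch where performing the IG assessment is too costly or complex.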

In conclusion, the PRA's near-final Rulebook (PS9/24) reflects significant revisions to credit risk regulation that enhance clarity and alignment with Basel 3.1, while addressing industry feedback. The introduction of a five-year 'backstop' revaluation for real estate exposures, firm-specific adjustments for SMEs and infrastructure support factors, and the maintenance of a risk-sensitive approach for unrated corporate exposures underscore the PRA's commitment to balancing regulatory requirements with maintaining the competitiveness of UK institutions.  

The extended implementation timeline to 1 January 2026, along with the transitional period, allows firms adequate time for adjustment. Overall, these changes aim to foster a more robust and competitive banking environment, while also navigating the complexities introduced by differing international standards.  

How can Zanders help?  

We have extensive experience in the implementation and validation of Pillar 1 and Pillar 2 models, which allows us to effectively support our clients in managing the change process towards full compliance with the latest regulations. 

In our experience, the following are the key areas where our services can add most value: 

1- Carry out a thorough self-assessment against the new requirements, including an impact analysis of the new regulations on capital requirements. 

2- Support model development activities to align models with the new rules; this can be done either on an advisory basis or via the direct supply of additional resources. 

3- Support amendments to Pillar 2 models (which will have to reflect changes to Pillar 1 models) 

4- Support Internal Validation activities across Pillar 1 & 2 

5- Carry out quality assurance on final models & documentation before final submission to the PRA

6- Support adoption of solutions for prudential valuation of (real estate) collateral while integrating climate risk information. 

Please reach out to Paolo Vareschi or Suneet Dutta Roy to find out more about how we could support you on your journey to Basel 3.1 compliance. 

References

[1] Bank of England (2024), PS9/24 – Implementation of the Basel 3.1 standards near-final part 2.

[2] Bank of England (2022), CP16/22 – Implementation of the Basel 3.1 standards.

How BCBS 239’s RDARR Principles Can Strengthen Risk Data Aggregation and Reporting in Financial Institutions

December 2024
2 min read

This blog explores how financial institutions can enhance their risk data aggregation and reporting by aligning with BCBS 239’s RDARR principles and the ECB’s supervisory expectations.


The ECB Banking Supervision has identified deficiencies in effective risk data aggregation and risk reporting (RDARR) as a key vulnerability in its planning of supervisory priorities for the 2023-25 cycle, and has developed a comprehensive, targeted supervisory strategy for the coming years.

Banks are expected to step up their efforts and improve their capabilities in risk data aggregation and risk reporting, as risk data architectures and supporting IT infrastructures are insufficient at most financial institutions. Hence, RDARR principles are expected to become increasingly important during internal model investigations and on-site inspections by the ECB.

In May 2024, the ECB published the Guide on effective risk data aggregation and risk reporting to ensure effective processes are in place to identify, manage, monitor and report the risks the institutions are or might be exposed to. With it, the ECB details its minimum supervisory expectations for a set of priority topics that have been identified as necessary preconditions for effective RDARR.

The ECB identifies seven priority areas, considered important prerequisites for robust governance arrangements and effective processes for identification, monitoring and reporting tasks. The scope of application of these principles covers reporting, key internal models and other important models (e.g., IFRS 9).

Relevance of BCBS 239 

RDARR represents the implementation of the principles outlined in BCBS 239, which was published by the Basel Committee on Banking Supervision (BCBS) in 2013. BCBS 239 is essential to maintain regulatory compliance, mitigate risks, and drive data-driven decision-making. Non-compliance can result in significant financial penalties, reputational damage, and increased scrutiny from regulatory bodies. Therefore, BCBS 239 is a crucial framework that enhances financial stability by setting robust standards for risk data aggregation and reporting. Its principles encourage institutions to embrace data-driven practices, ensuring resilience, transparency, and efficiency. While challenges such as legacy infrastructure, data quality, and evolving risks persist, banks can overcome these hurdles through strategic investment in governance, technology, and data-driven culture to build end-to-end data transparency. 

Zanders’ view on supervisory planning 

We believe five topics within the RDARR principles are of major importance for financial institutions. 

Establishing an effective program to review and address these topics, considering the nature, size, scale and complexity of each financial institution, will facilitate alignment with the ECB’s expectations. 

Zanders' experience with RDARR implementation 

Data extends beyond being merely a technical database; it is a fundamental component of an organization’s strategic framework. Data-driven organizations are not defined solely by their technological solutions, but by the data culture across the entire organization. At Zanders, we have assisted clients in developing data strategies aligned with RDARR principles and supported the implementation of future-proof data utilization, including the integration of advanced tools such as AI. 

One critical observation is that organizations must urgently address key questions regarding their data: What governance structures are currently in place? Are roles and responsibilities within this governance framework clearly defined? Is the governance being effectively implemented as planned? What training, guidance, and support do employees require? Are data definitions and requirements consistently aligned across all stakeholders? When undertaking such an extensive program, institutions must carefully consider whether a top-down or bottom-up approach will be most effective. In the case of RDARR, success necessitates a comprehensive, dual-directional approach that fosters change across all levels. 

If you are unsure about your compliance with BCBS 239 and RDARR requirements, contact us today to ensure alignment with best practices.

Redefining Credit Portfolio Strategies: Balancing Risk & Reward in a Volatile Economy

December 2024
6 min read

This article delves into a three-step approach to portfolio optimization by harnessing the power of advanced data analytics and state-of-the-art quantitative models and tools.


In today's dynamic economic landscape, optimizing portfolio composition to fortify against challenges such as inflation, slower growth, and geopolitical tensions is ever more paramount. These factors can significantly influence consumer behavior and impact loan performance. Navigating this uncertain environment demands banks adeptly strike a delicate balance between managing credit risk and profitability.

Why does managing your risk reward matter?

Quantitative techniques are an essential tool for optimizing your portfolio's risk-reward profile, an area where banks often still rely on inefficient approaches.

Existing models and procedures across the credit lifecycle, especially those relating to loan origination and account management, may not be optimized to accommodate current macro-economic challenges.

Figure 1: Credit lifecycle.

Current challenges facing banks

Banks face several key challenges when balancing credit risk and profitability.

Our approach to optimizing your risk reward profile

Our optimization approach consists of a holistic three-step diagnosis of your current practices, supporting your strategy and encouraging alignment across business units and processes.

The initial step of the process involves understanding your current portfolio(s) by using a variety of segmentation methodologies and metrics. The second step implements the necessary changes once your primary target populations have been identified. This may include reassessing your models and strategies across the loan origination and account management processes. Finally, a new state-of-the-art Early Warning System (EWS) can be deployed to identify emerging risks and take pro-active action where necessary.

A closer look at redefining your target populations

With the proliferation of advanced data analytics, banks are now better positioned to identify profitable, low-risk segments. Machine Learning (ML) methodologies such as k-means clustering, neural networks, and Natural Language Processing (NLP) enable effective customer grouping, behavior forecasting, and market sentiment analysis.
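To make the clustering step concrete, here is a minimal, dependency-free k-means sketch of the kind of customer grouping mentioned above. In practice this would be done with a library implementation (e.g., scikit-learn) on many scaled features; the two-feature setup here is purely illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means for customer segmentation. `points` is a
    list of (x, y) feature pairs, e.g. scaled utilisation and
    income. Illustrative sketch only."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:   # keep the old centroid if a cluster empties
                centroids[i] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, clusters

# Two visually separated customer groups (hypothetical data):
pts = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (0.9, 1.1)]
centroids, clusters = kmeans(pts, k=2)
```

Each resulting cluster can then be profiled for risk and profitability before redefining target populations.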

Risk-based pricing remains critical for acquisition strategies, assessing segment sensitivity to different pricing strategies, to maximize revenue and reduce credit losses.

Figure 2: In the illustration above, we can visually see the impact on earnings throughout the credit lifecycle driven by redefining the target populations and application of different pricing strategies.

In our simplified example, based on the RAROC metric applied to an unsecured loans portfolio, we take a 2-step approach:

1- Identify target populations by comparing RAROC across different combinations of credit scores and debt-to-income (DTI) ratios. This helps identify the most capital efficient segments to target.

2- Assess the sensitivity of RAROC to different pricing strategies to find the optimal price points that maximize profit over a selected period; in this scenario we use a five-year time horizon.
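The first step above can be sketched with the common definition of RAROC as risk-adjusted net income divided by economic capital. All segment figures below are invented for illustration; real inputs would come from the portfolio data and pricing engines.

```python
def raroc(income: float, expected_loss: float,
          operating_cost: float, economic_capital: float) -> float:
    """Risk-Adjusted Return on Capital: risk-adjusted net income
    divided by the economic capital the segment consumes."""
    return (income - expected_loss - operating_cost) / economic_capital

# Hypothetical segments keyed by (credit score band, DTI band);
# all monetary figures are invented for illustration.
segments = {
    ("high_score", "low_dti"): raroc(120, 10, 30, 400),   # 0.20
    ("low_score", "high_dti"): raroc(200, 90, 40, 500),   # 0.14
}
best = max(segments, key=segments.get)   # most capital-efficient segment
```

Comparing RAROC across score/DTI combinations in this way identifies the segments to which capital should be reallocated first.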

Figure 3: The top table showcases the current portfolio mix and performance, while the bottom table illustrates the effects of adjusting the pricing and acquisition strategy. By redefining the target populations and changing the pricing strategy, it is possible to reallocate capital to the most profitable segments whilst remaining within credit risk appetite. For example, 60% of current lending goes to a mix of low to high RAROC segments, but under the new proposed strategy, 70% of total capital is allocated to the highest RAROC segments.

Uncovering risks and seizing opportunities

The current state of Early Warning Systems

Many organizations rely on regulatory models and standard risk triggers (e.g., number of customers 30 days past due, NPL ratio, etc.) to set their EWS thresholds. Whilst this may be a good starting point, traditional models and tools often miss timely deteriorations and valuable opportunities, as they typically use limited and/or outdated data features.
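A minimal rule-based trigger check of the kind described above might look as follows; the thresholds and account fields are illustrative, not regulatory values.

```python
def breaches_early_warning(account: dict,
                           dpd_limit: int = 30,
                           util_limit: float = 0.90) -> list:
    """Naive rule-based early-warning check; thresholds are
    illustrative placeholders, not regulatory values."""
    flags = []
    if account["days_past_due"] >= dpd_limit:
        flags.append("arrears")
    if account["utilisation"] >= util_limit:
        flags.append("high_utilisation")
    return flags
```

A target-state EWS would replace these static rules with models fed by timely behavioural and external data, as discussed below.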

Target state of Early Warning Systems

Leveraging timely and relevant data, combined with next-generation AI and machine learning techniques, enables early identification of customer deterioration, resulting in prompt intervention and significantly lower impairment costs and NPL ratios.

Furthermore, an effective EWS framework empowers your organization to spot new growth areas, capitalize on cross-selling opportunities, and enhance existing strategies, driving significant benefits to your P&L.

Figure 4: By updating the early warning triggers using new timely data and advanced techniques, detection of customer deterioration can be greatly improved enabling firms to proactively support clients and enhance the firm’s financial position.

Discover the benefits of optimizing your portfolios

Discover the benefits of optimizing your portfolios' risk-reward profile using our comprehensive approach as we turn today's challenges into tomorrow's advantages.  

Conclusion

In today's rapidly evolving market, the need for sophisticated credit risk portfolio management is ever more critical. With our comprehensive approach, banks are empowered to not merely weather economic uncertainties, but to thrive within them by striking the optimal risk-reward balance. Through leveraging advanced data analytics and deploying quantitative tools and models, we help institutions strategically position themselves for sustainable growth, and comply with increasing regulatory demands especially with the advent of Basel IV. Contact us to turn today’s challenges into tomorrow’s opportunities.

For more information on this topic, contact Martijn de Groot (Partner) or Paolo Vareschi (Director).

Converging on resilience: Integrating CCR, XVA, and real-time risk management

November 2024
2 min read

In a world where the Fundamental Review of the Trading Book (FRTB) commands much attention, it’s easy for counterparty credit risk (CCR) to slip under the radar.


However, CCR remains an essential element in banking risk management, particularly as it converges with valuation adjustments. These changes reflect growing regulatory expectations, which were further amplified by recent cases such as Archegos. Furthermore, regulatory focus seems to be shifting, particularly in the U.S., away from the Internal Model Method (IMM) and toward standardised approaches. This article provides strategic insights for senior executives navigating the evolving CCR framework and its regulatory landscape.

Evolving trends in CCR and XVA

Counterparty credit risk (CCR) has evolved significantly, with banks now adopting a closely integrated approach with valuation adjustments (XVA), particularly Credit Valuation Adjustment (CVA), Funding Valuation Adjustment (FVA), and Capital Valuation Adjustment (KVA), to fully account for risk and costs in trade pricing. This blending of XVA into CCR has been driven by the desire for more accurate pricing and capital decisions that reflect the true risk profile of the underlying instruments and positions. 
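For intuition on the CVA component, the textbook unilateral approximation sums discounted expected exposure weighted by marginal default probabilities. The sketch below uses a flat discount curve and hand-picked inputs; production XVA engines use simulated exposure profiles and market-implied default curves.

```python
import math

def cva(expected_exposures, cumulative_pds, lgd=0.6, rate=0.02, dt=1.0):
    """Textbook unilateral CVA approximation:
    CVA ~ LGD * sum_t DF(t) * EE(t) * (PD(t) - PD(t-1)).
    Inputs are illustrative; real engines use simulated exposure
    profiles and market-implied default curves."""
    total, prev_pd = 0.0, 0.0
    for i, (ee, pd_cum) in enumerate(zip(expected_exposures, cumulative_pds), 1):
        df = math.exp(-rate * i * dt)          # flat discount curve
        total += lgd * df * ee * (pd_cum - prev_pd)
        prev_pd = pd_cum
    return total
```

FVA and KVA follow the same pattern of integrating a cost measure over the exposure profile, which is why banks increasingly compute them on shared infrastructure.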

In addition, recent years have seen a marked increase in the use of collateral and initial margin as mitigants for CCR. While this approach is essential for managing credit exposures, it simultaneously shifts a portion of the risk profile into contingent market and liquidity risks, which, in turn, introduces requirements for real-time monitoring and enhanced data capabilities to capture both the credit and liquidity dimensions of CCR. Ultimately, this introduces additional risks and modelling challenges with respect to wrong way risk and clearing counterparty risk.

As banks continue to invest in advanced XVA models and supporting technologies, senior executives must ensure that systems are equipped to adapt to these new risk characteristics, as well as to meet growing regulatory scrutiny around collateral management and liquidity resilience.

The Internal Model Method (IMM) vs. SA-CCR

In terms of calculating CCR, approaches based on IMM and SA-CCR provide divergent paths. On one hand, IMM allows banks to tailor models to specific risks, potentially leading to capital efficiencies. SA-CCR, on the other hand, offers a standardised approach that’s straightforward yet conservative. Regulatory trends indicate a shift toward SA-CCR, especially in the U.S., where reliance on IMM is diminishing.
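At the top level, SA-CCR's exposure-at-default aggregation reduces to EAD = alpha * (RC + PFE), with the supervisory alpha fixed at 1.4. The replacement cost (RC) and potential future exposure (PFE) come from the standardised netting-set calculations, which are not reproduced here.

```python
def sa_ccr_ead(replacement_cost: float, pfe: float, alpha: float = 1.4) -> float:
    """SA-CCR exposure at default: EAD = alpha * (RC + PFE),
    with the supervisory alpha fixed at 1.4. RC and PFE are
    outputs of the standardised netting-set calculations."""
    return alpha * (replacement_cost + pfe)
```

The fixed alpha is one reason the approach is conservative relative to a well-calibrated IMM for many portfolios.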

As banks shift towards SA-CCR for regulatory capital while IMM is increasingly used for internal purposes, senior leaders may need to re-evaluate whether separate calibrations for CVA and IMM are warranted, or whether CVA data can also inform IMM processes. 

Regulatory focus on CCR: Real-time monitoring, stress testing, and resilience

Real-time monitoring and stress testing are taking centre stage following increased regulatory focus on resilience. Evolving guidelines, such as those from the Bank for International Settlements (BIS), emphasise a need for efficiency and convergence between trading and risk management systems. This means that banks must incorporate real-time risk data and dynamic monitoring to proactively manage CCR exposures and respond to changes in a timely manner.

CVA hedging and regulatory treatment under IMM

CVA hedging aims to mitigate counterparty credit spread volatility, which affects portfolio credit risk. However, current regulations limit offsetting CVA hedges against CCR exposures under IMM. This regulatory separation of capital for CVA and CCR leads to some inefficiencies, as institutions can’t fully leverage hedges to reduce overall exposure.

Ongoing BIS discussions suggest potential reforms for recognising CVA hedges within CCR frameworks, offering a chance for more dynamic risk management. Additionally, banks are exploring CCR capital management through LGD reductions using third-party financial guarantees, potentially allowing for more efficient capital use. For executives, tracking these regulatory developments could reveal opportunities for more comprehensive and capital-efficient approaches to CCR.

Leveraging advanced analytics and data integration for CCR

Emerging technologies in data analytics, artificial intelligence (AI), and scenario analysis are revolutionising CCR. Real-time data analytics provide insights into counterparty exposures but typically come at significant computational costs: high-performance computing can help mitigate this, and, if coupled with AI, enable predictive modelling and early warning systems. For senior leaders, integrating data from risk, finance, and treasury can optimise CCR insights and streamline decision-making, making risk management more responsive and aligned with compliance.

By leveraging advanced analytics, banks can respond proactively to potential CCR threats, particularly in scenarios where early intervention is critical. These technologies equip executives with the tools to not only mitigate CCR but also enhance overall risk and capital management strategies.

Strategic considerations for senior executives: Capital efficiency and resilience

Balancing capital efficiency with resilience requires careful alignment of CCR and XVA frameworks with governance and strategy. To meet both regulatory requirements and competitive pressures, executives should foster collaboration across risk, finance, and treasury functions. This alignment will enhance capital allocation, pricing strategies, and overall governance structures.

For banks facing capital constraints, third-party optimisation can be a viable strategy to manage the demands of SA-CCR. Executives should also consider refining data integration and analytics capabilities to support efficient, resilient risk management that is adaptable to regulatory shifts.

Conclusion

As counterparty credit risk re-emerges as a focal point for financial institutions, its integration with XVA, and the shifting emphasis from IMM to SA-CCR, underscore the need for proactive CCR management. For senior risk executives, adapting to this complex landscape requires striking a balance between resilience and efficiency. Embracing real-time monitoring, advanced analytics, and strategic cross-functional collaboration is crucial to building CCR frameworks that withstand regulatory scrutiny and position banks competitively.

In a financial landscape that is increasingly interconnected and volatile, an agile and resilient approach to CCR will serve as a foundation for long-term stability. At Zanders, we have significant experience implementing advanced analytics for CCR. By investing in robust CCR frameworks and staying attuned to evolving regulatory expectations, senior executives can prepare their institutions for the future of CCR and avoid being left behind.

Insights into cracking model risk for prepayment models

October 2024
7 min read

This article examines different methods for quantifying and forecasting model risk in prepayment models, highlighting their respective strengths and weaknesses.


Within the field of financial risk management, professionals strive to develop models to tackle the complexities in the financial domain. However, due to the ever-changing nature of financial variables, models only capture reality to a certain extent. Therefore, model risk - the potential loss a business could suffer due to an inaccurate model or incorrect use of a model - is a pressing concern. This article explores model risk in prepayment models, analyzing various approaches to quantify and forecast this risk. 

There are numerous examples where model risk was not properly accounted for, resulting in significant losses. For example, Long-Term Capital Management was a hedge fund that collapsed in the late 1990s because its models were never stress-tested for extreme market conditions. Similarly, in 2012, JP Morgan suffered a $6 billion loss and $920 million in fines after flaws in its new value-at-risk model contributed to the episode known as the 'London Whale'.  

Despite these prominent failures, and the requirement of CRD IV Article 85 for institutions to develop policies and processes for managing model risk, the quantification and forecasting of model risk has not been extensively covered in academic literature. This leaves a significant gap in the general understanding and ability to manage this risk. Adequate model risk management allows for optimized capital allocation, reduced risk-related losses, and a strengthened risk culture.  

This article delves into model risk in prepayment models, examining different methods to quantify and predict this risk. The objective is to compare different approaches, highlighting their strengths and weaknesses.  

Definition of Model Risk

Generally, model risk can be assessed using a bottom-up approach by analyzing individual model components, assumptions, and inputs for errors, or by using a top-down approach by evaluating the overall impact of model inaccuracies on broader financial outcomes. In the context of prepayments, this article adopts a bottom-up approach by using model error as a proxy for model risk, allowing for a quantifiable measure of this risk. Model error is the difference between the modelled prepayment rate and the actual prepayment rate. Model error occurs at an individual level when a prepayment model predicts a prepayment that does not happen, and vice versa. However, banks are more interested in model error at the portfolio level. A statistic often used by banks is the Single Monthly Mortality (SMM). The SMM is the monthly percentage of prepayments and can be calculated by dividing the amount of prepayments for a given month by the total amount of mortgages outstanding. 

Using the SMM, we can define and calculate the model error as the difference between the predicted SMM and the actual SMM:

Model error(t) = SMM_predicted(t) - SMM_actual(t)

The European Banking Authority (EBA) requires financial institutions, when calculating valuation model risk, to set aside enough funds to be 90% confident that they can exit a position at the time of the assessment. Consequently, banks are concerned with the top 5% and bottom 5% of the model risk distribution (EBA, 2016, 2015). Thus, banks are interested in the distribution of the model error as defined above, aiming to allocate capital optimally for model risk in prepayment models.  
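These quantities can be sketched directly from a monthly error history. The SMM and model-error definitions follow the text above; the data and the list-based setup are hypothetical, and the 5th/95th percentiles correspond to the 90% confidence band just described.

```python
import statistics

def smm(prepayments: float, outstanding: float) -> float:
    """Single Monthly Mortality: the month's prepayments divided
    by the outstanding mortgage balance."""
    return prepayments / outstanding

def error_quantiles(predicted, actual, q=0.05):
    """5th and 95th percentiles of the model errors (predicted
    minus actual SMM): the band implied by a 90% confidence
    requirement on the model risk distribution."""
    errors = sorted(p - a for p, a in zip(predicted, actual))
    cuts = statistics.quantiles(errors, n=100)   # 99 percentile cut points
    return cuts[round(q * 100) - 1], cuts[round((1 - q) * 100) - 1]
```

With a sufficiently long error history, the resulting band is a simple empirical estimate of the capital range needed for model risk.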

Approaches to Forecasting Model Risk 

By using model error as a proxy for model risk, we can leverage historical model errors to forecast future errors through time-series modelling. In this article, we explore three methods: the simple approach, the auto-regressive approach, and the machine learning challenger model approach.

Simple Approach

The first method proposed to forecast the expected value and the variance of the model errors is the simple approach. It is the most straightforward way to quantify and predict model risk, analyzing the mean and standard deviation of the historical model errors. The model itself introduces minimal uncertainty, as only two parameters have to be estimated: the intercept and the standard deviation.

The disadvantage of the simple approach is that it is time-invariant. Consequently, even in extreme conditions, the expected value and the variance of model errors remain constant over time.

Auto-Regressive Approach

The second approach to forecast the model errors of a prepayment model is the auto-regressive approach. Specifically, this approach utilizes an AR(1) model, which forecasts the model errors by leveraging their lagged values. The advantage of the auto-regressive approach is that it takes into account the dynamics of the historical model errors when forecasting them, making it more advanced than the simple approach.

The disadvantage of the auto-regressive approach is that it always lags and does not take into account the current state of the economy. For example, an increase in interest rates of 200 basis points is expected to lead to a higher model error, but the auto-regressive approach will likely forecast this increase one month late.
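The AR(1) step can be sketched with a plain least-squares fit of the current error on its lagged value; this is an illustrative stdlib-only version, where a production setup would use a statistics package.

```python
def fit_ar1(errors):
    """Ordinary least squares fit of e_t = c + phi * e_{t-1} + noise,
    the AR(1) specification described above."""
    x, y = errors[:-1], errors[1:]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    phi = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
           / sum((a - mean_x) ** 2 for a in x))
    c = mean_y - phi * mean_x
    return c, phi

def forecast_next(errors):
    """One-step-ahead model-error forecast from the fitted AR(1)."""
    c, phi = fit_ar1(errors)
    return c + phi * errors[-1]
```

The one-month lag criticised above is visible in the structure: the forecast for next month depends only on this month's error, not on current market variables.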

Machine Learning Challenger Model Approach                           

The third approach to forecast the model errors involves incorporating a Machine Learning (ML) challenger model. In this article, we use an Artificial Neural Network (ANN). This ML challenger model can be more sophisticated than the production model, as its primary focus is on predictive accuracy rather than interpretability. This approach uses risk measures to compare the production model with a more advanced challenger model. A new variable is defined as the difference between the production model and the challenger model.

Similar to the above approaches, the expected value of the model errors is forecasted by estimating the intercept, the parameter of the new variable, and the standard deviation. A forecast can be made and the difference between the production model and ML challenger model can be used as a proxy for future model risk.

The advantage of using the ML challenger model approach is that it is forward looking. This forward-looking method allows for reasonable estimates under both normal and extreme conditions, making it a reliable proxy for future model risk. In addition, when there are complex non-linear relationships between an independent variable and the prepayment rate, an ML challenger can be more accurate. Its complexity allows it to predict significant impacts better than a simpler, more interpretable production model. Consequently, employing an ML challenger model approach could effectively estimate model risk during substantial market changes.

A disadvantage of the machine learning approach is its complexity and lack of interpretability. Additionally, developing and maintaining these models often requires significant time, computational resources, and specialized expertise.

Conclusion 

The various methods to estimate model risk are compared in a simulation study. The ML challenger model approach stands out as the most effective method for predicting model errors, offering increased accuracy in both normal and extreme conditions. Both the simple and the challenger model approach effectively predict the variability of model errors, but the challenger model approach achieves a smaller standard deviation. In scenarios involving extreme interest rate changes, only the challenger model approach delivers reasonable estimates, highlighting its robustness. Therefore, the challenger model approach is the preferred choice for predicting model errors under both normal and extreme conditions.

Ultimately, the optimal approach should align with the bank’s risk appetite, operational capabilities, and overall risk management framework. Zanders, with its extensive expertise in financial risk management, including multiple high-profile projects related to prepayments at G-SIBs as well as mid-size banks, can provide comprehensive support in navigating these challenges. See our expertise here.


Ready to take your IRRBB strategy to the next level?

Zanders is an expert on IRRBB-related topics. We enable banks to achieve both regulatory compliance and strategic risk goals by offering support from strategy to implementation. This includes risk identification, formulating a risk strategy, setting up an IRRBB governance and framework, and policy or risk appetite statements. Moreover, we have an extensive track record in IRRBB and behavioral models such as prepayment models, hedging strategies, and calculating risk metrics, both from model development and model validation perspectives.

Contact our experts today to discover how Zanders can help you transform risk management into a competitive advantage. Reach out to: Jaap Karelse, Erik Vijlbrief, Petra van Meel, or Martijn Wycisk to start your journey toward financial resilience.

  1. https://www.eba.europa.eu/regulation-and-policy/single-rulebook/interactive-single-rulebook/11665
    CRD IV Article 85: Competent authorities shall ensure that institutions implement policies and processes to evaluate and manage the exposures to operational risk, including model risk and risks resulting from outsourcing, and to cover low-frequency high-severity events. Institutions shall articulate what constitutes operational risk for the purposes of those policies and procedures. ↩︎
  2. https://extranet.eba.europa.eu/sites/default/documents/files/documents/10180/642449/1d93ef17-d7c5-47a6-bdbc-cfdb2cf1d072/EBA-RTS-2014-06%20RTS%20on%20Prudent%20Valuation.pdf?retry=1
    Where possible, institutions shall calculate the model risk AVA by determining a range of plausible valuations produced from alternative appropriate modelling and calibration approaches. In this case, institutions shall estimate a point within the resulting range of valuations where they are 90% confident they could exit the valuation exposure at that price or better. In this article, we generalize valuation model risk to model risk. ↩︎

Budget at Risk: Empowering a global non-profit client with a clearer steer on FX risk

How can a non-profit organization operating on a global stage safeguard itself from foreign currency fluctuations? Here, we share how our ‘Budget at Risk’ model helped a non-profit client more accurately quantify the currency risk in its operations.


Charities and non-profit organizations face distinct challenges when processing donations and payments across multiple countries. In this sector, the impact of currency exchange losses is not simply about the effect on an organization’s financial performance; there’s also the potential disruption to projects to consider when budgets are at risk. Zanders developed a ‘Budget at Risk’ model to help a non-profit client with worldwide operations to better forecast the potential impact of currency fluctuations on their operating budget. In this article, we explain the key features of this model and how it's helping our client to forecast the budget impact of currency fluctuations with confidence.

The client in question is a global non-profit financed primarily through individual contributions from donors all over the world. While monthly inflows and outflows are in 16 currencies, the organization’s global reserves are quantified in EUR. Consequently, their annual operating budget is highly impacted by foreign exchange rate changes. To manage this proactively demands an accurate forecasting and assessment of: 

  • The offsetting effect of the inflows and outflows.  
  • The diversification effect coming from the level of correlation between the currencies.  

Lacking the in-house expertise to quantify these risk factors, the organization sought Zanders’ help to develop and implement a model that would allow them to regularly monitor and assess the potential budget impact of FX movements.

Developing the BaR method

Having already advised the organization on several advisory and risk management projects over the past decade, Zanders was well versed in the organization’s operations and the unique nature of the FX risk it faces. The objective behind developing Budget at Risk (BaR) was to create a model that could quantify the potential risk to the organization’s operating budget posed by fluctuations in foreign exchange rates.  

The BaR model uses the Monte Carlo method to simulate FX rates over a 12-month period. Simulations are based on the monthly returns on the FX rates, modelled by drawings from a multivariate normal distribution. This enables the quantification of the maximum expected negative FX impact on the company’s budget over the one-year period at a defined confidence level (e.g., 95%). The model outcomes are presented as a EUR amount to enable direct comparison with the level of FX risk in the company’s global reserves (which provides the company’s ‘risk absorbing capacity’). When the BaR outcome falls outside the defined bandwidth of the FX risk reserve, it alerts the company to consider selective FX hedging decisions to bring the BaR back within the desired FX risk reserve level.
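The core of such a simulation can be sketched in a few lines; the currencies, cash flows, volatilities and correlations below are hypothetical, not the client’s figures:

```python
import numpy as np

# Minimal sketch of a Budget-at-Risk simulation. All inputs are made up.
rng = np.random.default_rng(1)

currencies = ["USD", "GBP", "JPY"]
net_flows_eur = np.array([12e6, -4e6, 7e6])    # budgeted net flows, EUR terms
monthly_vol = np.array([0.025, 0.020, 0.030])  # monthly FX return volatilities
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.2],
                 [0.3, 0.2, 1.0]])
cov = np.outer(monthly_vol, monthly_vol) * corr

# Simulate 12 months of joint FX returns and compound them over the year.
n_sims, horizon = 100_000, 12
monthly = rng.multivariate_normal(np.zeros(3), cov, size=(n_sims, horizon))
annual = np.prod(1.0 + monthly, axis=1) - 1.0   # shape (n_sims, 3)

# EUR impact of FX moves on the budget; BaR is the loss at the chosen
# confidence level (the 5% or 1% worst-case scenario).
impact = annual @ net_flows_eur
bar_95 = -np.percentile(impact, 5)
bar_99 = -np.percentile(impact, 1)
print(f"BaR 95%: EUR {bar_95:,.0f}; BaR 99%: EUR {bar_99:,.0f}")
```

The offsetting effect of inflows and outflows and the diversification across correlated currencies are both captured automatically, because the EUR impact is computed per scenario across all currencies jointly before the quantile is taken.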

The nature of the model 

The purpose of the BaR model isn’t to specify the maximum or guaranteed amount that will be lost. Instead, it provides an indication of the amount that could be lost in relation to the budgeted cash flows within a given period, at the specified confidence interval. To achieve this, the sensitivity of the model is calibrated by: 

  • Modifying the confidence levels. This changes the sensitivity of the model to extreme scenarios. For example, the figure below illustrates the BaR for a 95% level of confidence and provides the 5% worst-case scenario. If a 99% confidence level was applied, it would provide the 1% worst (most extreme) case scenario.  
  • Selecting different lengths of sample data. This allows the calculation of the correlation and volatility of currency pairs. The period length of the sample data helps to assess the sensitivity to current events that may affect the FX market. For example, a sample period of 6 months is much more sensitive to current events than a sample of 5 years.  

Figure 1 – BaR for a 95% level of confidence 

Adjusting these parameters makes it possible to calculate the decomposition of the BaR per currency for a specified confidence level and length of data sample. The visual outcome makes the currency generating the most risk quick and easy to identify. Finally, the diversification effect on the BaR is calculated to quantify the offsetting effect of inflows and outflows and the correlation between the currencies. 

Table 1 – Example BaR output per confidence level and length of data sample 

Pushing parameters 

The challenge with the simulation and the results generated is that many parameters influence the outcomes – such as changes in cash flows, volatility, or correlation. To provide as much clarity as possible on the underlying assumptions, the impact of each parameter on the results must be considered. Zanders achieves this firstly by decomposing the impact by: 

  • Changing FX data to trigger a difference in the market volatility and correlation. 
  • Altering the cash flows between the two assessment periods. 

Then, we look at each individual currency to better understand its impact on the total result. Finally, additional background checks are performed to ensure the accuracy of the results. 

This multi-layered modelling technique provides base cases that generate realistic predictions of the impact of specific rate changes on the business’ operating budget for the year ahead. Armed with this knowledge, we then work with the non-profit client to develop suitable hedging strategies to protect their funding. 

Leveraging Zanders’ expertise 

FX scenario modelling is a complex process requiring expertise in currency movements and risk – a combination of niche skills that are uncommon in the finance teams of most non-profit organizations. But for these organizations, where there can be significant currency exposure, taking a proactive, data-driven approach to managing FX risk is critical. Zanders brings extensive experience in supporting NGO, charity and non-profit clients with modelling currency risk in a multi-currency exposure environment and with quantifying the potential reduction in hedging costs from shifting from currency-by-currency hedging to portfolio hedging.  

For more information, visit our NGOs & Charities page here, or contact the authors of this case study, Pierre Wernert and Jaap Stolp.

Customer successes

View all Insights

Biodiversity risks scoring: a quantitative approach

October 2024
9 min read

Explore how Zanders’ scoring methodology quantifies biodiversity risks, enabling financial institutions to safeguard portfolios from environmental and transition impacts.


Addressing biodiversity (loss) is not only relevant from an impact perspective; it is also quickly becoming a necessity for financial institutions to safeguard their portfolios against financial risks stemming from habitat destruction, deforestation, invasive species and/or diseases. 

In a previous article, published in November 2023, Zanders introduced the concept of biodiversity risks, explained how it can pose a risk for financial institutions, and discussed the expectations from regulators.1 In addition, we touched upon our initial ideas to introduce biodiversity risks in the risk management framework. One of the suggestions was for financial institutions to start assessing the materiality of biodiversity risk, for example by classifying exposures based on sector or location. In this article, we describe Zanders’ approach for classifying biodiversity risks in more detail. More specifically, we explore the concepts behind the assessment of biodiversity risks, and we present key insights into methodologies for classifying the impact of biodiversity risks; including a use case. 

Understanding biodiversity risks 

Biodiversity risks can be related to physical risk and/or transition risk events. Biodiversity physical risks result from environmental decay, either event-driven or resulting from longer-term patterns. Biodiversity transition risks result from developments aimed at preventing or restoring damage to nature. These risks are driven by the impacts and dependencies that an undertaking has on natural resources and ecosystem services. The definition of impacts and dependencies and their relation to physical and transition risks is explained below:

  • Companies impact natural assets through their business operations and output. For example, the production process of an oil company in a biodiversity-sensitive area could lead to biodiversity loss. Impacts are mainly related to transition risk, as sectors and economic activities that have a strong negative impact on environmental factors are likely to be the first affected by changes in policies, legal requirements, or market conditions related to preventing or restoring damage to nature. 
  • On the other hand, companies are dependent on certain ecosystem services. For example, agricultural companies depend on ecosystem services such as water and pollination. Dependencies are mainly related to physical risk, as companies with a high dependency will take the biggest hit from a disruption or decay of the ecosystem service caused by, for example, an oil spill or pests. 

For banks, the impacts and dependencies of their own operations and of their counterparties can impact traditional financial (credit, liquidity, and market) and non-financial (operational and business) risks. In our biodiversity classification methodology, we assess both impacts and dependencies as indicators for physical and transition risk. This is further described in the next section.

Zanders’ biodiversity classification methodology

An important starting point for climate-related and environmental (C&E) risk management is the risk identification and materiality assessment. For C&E risks, and biodiversity in particular, obtaining data is a challenge, so a quantitative assessment of materiality is difficult to achieve. To address this, Zanders has developed a data-driven classification methodology. By classifying the biodiversity impacts and dependencies of exposures based on the sector and location of the counterparty, scores are calculated that quantify the portfolio’s physical and transition risks related to biodiversity. These scores are based on the databases of Exploring Natural Capital Opportunities, Risks and Exposure (ENCORE) and the World Wide Fund for Nature (WWF). 

Sector classification 

The sector classification methodology is developed based on the ENCORE database. ENCORE is a public database that is recognized by global initiatives such as Taskforce on Nature-related Financial Disclosures (TNFD) and Partnership for Biodiversity Accounting Financials (PBAF). ENCORE is a key tool for the “Evaluate” phase of the TNFD LEAP approach (Locate, Evaluate, Assess and Prepare).  

ENCORE was developed specifically for financial institutions, with the goal of assisting them in performing a high-level but data-driven scan of their exposures’ impacts and dependencies. The scan covers multiple dimensions of the ecosystem, including biodiversity-related environmental drivers. ENCORE evaluates the potential reliance on ecosystem services2 and the effect of impact drivers3 on natural capital assets4. It does so by assigning scores to different levels of a sector classification (sector, sub-industry and production process). These scores are assigned for 11 impact drivers and 21 ecosystem services, and range from Very Low to Very High for a broad range of production processes, sub-industries and sectors. 

ENCORE does not offer a methodology for aggregating the impact-driver and ecosystem-service scores, and therefore does not provide an overall dependency and impact score per sector, sub-industry, or production process. However, Zanders has created a methodology to calculate a final aggregated impact and dependency score: a single impact score and a single dependency score for each ENCORE sector, sub-industry or production process. In addition, overall impact and dependency scores are computed for the portfolio, based on its sector distribution. In both cases, scores range from 0 (no impact/dependency) to 5 (very high impact or dependency).
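The portfolio-level step is an exposure-weighted average over the sector scores; a minimal sketch with hypothetical scores and weights (these are not actual ENCORE values):

```python
import numpy as np

# Hedged sketch of the portfolio-level aggregation: per-sector impact and
# dependency scores (0-5) are combined into portfolio scores using exposure
# weights. All scores and weights below are hypothetical.
sectors = ["Real estate", "Oil & Gas", "Soft commodities", "Luxury goods"]
weights = np.array([0.45, 0.25, 0.20, 0.10])        # sector distribution
impact_scores = np.array([4.0, 4.5, 4.0, 1.5])      # per-sector impact
dependency_scores = np.array([1.5, 2.5, 4.5, 1.0])  # per-sector dependency

portfolio_impact = weights @ impact_scores
portfolio_dependency = weights @ dependency_scores
print(f"Portfolio impact: {portfolio_impact:.1f}, "
      f"dependency: {portfolio_dependency:.1f}")
```

The same weighted-average construction is used for the location scores, with the geographical distribution of the portfolio as weights.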

Location classification

The location scoring methodology is developed based on the WWF Biodiversity Risk Filter (hereafter called WWF BRF).5 The WWF BRF is a public tool that supports a location-specific analysis of physical- and transition-related biodiversity risks. 

The WWF BRF consists of a set of 33 biodiversity indicators: 20 related to physical risks and 13 related to reputational risks, which are provided at country level but also at a more granular regional level. These indicators are aggregated by the tool itself, which ultimately provides a single scape physical risk score and a single scape reputational risk score per location.

The WWF BRF does not offer a methodology for aggregating the country scores into an overall transition risk score (based on the scape reputational risk scores) and an overall physical risk score (based on the scape physical risk scores). However, Zanders has created a methodology to calculate a final aggregated transition and physical risk score for the portfolio, based on its geographical distribution. The result of this aggregation is a single transition and a single physical risk score for the portfolio, ranging from 0 (no risk) to 5 (very high risk). 

Use case: RI&MA for biodiversity risks in a bank portfolio 

In this section, we present a use case of classifying biodiversity risks for the portfolio of a fictional financial institution, using the sector and location scoring methodologies developed by Zanders. 

The exposures of this financial institution are concentrated in four sectors: Real estate, Oil & Gas, Soft commodities and Luxury goods. Moreover, the operations of these sectors are located across four different countries: the Netherlands, Switzerland, Morocco and China. The following matrix shows the percentage of exposures of the financial institution for each combination of sector and country: 

ENCORE provides scores for 11 impact drivers and 21 ecosystem services. Those related to biodiversity risks are transformed to a range from 0 to 5. After that, the biodiversity ecosystem services and biodiversity impact drivers are aggregated into overall biodiversity dependency and impact scores, respectively. The following table shows the mapping between the sectors in the portfolio and the corresponding sub-industries in the ENCORE database, including the aggregated biodiversity impact and dependency scores computed for those sub-industries. The mapping is done at sub-industry level, since this is the level of granularity of the ENCORE sector classification that best fits the sectors defined in the fictional portfolio. In addition, the overall impact and dependency scores are computed by taking the weighted average based on the sector distribution of the portfolio. This leads to scores of 3.8 and 2.4 for the impact and dependency scores, respectively. 

The WWF BRF provides biodiversity indicators at country level. It already provides an aggregated score for physical risk (namely, the scape physical risk score) and for transition risk (namely, the scape reputational risk score), so no further aggregation is needed. The corresponding scores for the four countries in the bank’s portfolio are therefore selected. As the last step, the location scores are transformed to a range similar to the sector scores, i.e., from 0 (no physical/transition risk) to 5 (very high physical/transition risk). The results are shown in the following table. In addition, the overall physical and transition risk scores are computed by taking the weighted average based on the geographical distribution of the portfolio. This leads to scores of 3.9 and 3.3 for the physical and transition risk scores, respectively. 

The sector and location scores can be visualized for better understanding and to enable comparison between sectors and countries. Bubble charts, such as the ones shown below, present the sector and location scores together with the size of the exposures in the portfolio (represented by the size of each bubble). 

Combined with the size of the exposures, the results suggest that biodiversity-related physical and transition risks could result in financial risks for Soft commodities and Oil & Gas. This is due to their high impacts and dependencies and their relevant size in the portfolio. Moreover, despite a low dependency score, biodiversity risks could also impact the Real estate sector due to a combination of its high impact score and the high sector concentration (45% of the portfolio). From a location perspective, exposures located in China could face high biodiversity transition risks, while exposures located in Morocco are the most vulnerable to biodiversity physical risks. In addition, the relatively high physical and transition risk scores for the Netherlands, combined with the large size of these exposures in the portfolio, could also lead to additional financial risk. 

These results, combined with other information such as loan maturities, identified transmission channels, or expert inputs, can be used to inform the materiality of biodiversity risks. 

Conclusion 

Assessing the materiality of biodiversity risks is crucial for financial institutions in order to understand the risks and opportunities in their loan portfolios. In this article, Zanders has presented its approach for an initial quantification of biodiversity risks. Curious to learn how Zanders can support your financial institution with the identification and quantification of biodiversity risks and their integration into the risk framework? Please reach out to Marije Wiersma, Iryna Fedenko or Miguel Manzanares. 

  1. https://zandersgroup.com/en/insights/blog/biodiversity-risks-and-opportunities-for-financial-institutions-explained ↩︎
  2. In accordance with ENCORE, ecosystem services are the links between nature and business. Each of these services represent a benefit that nature provides to enable or facilitate business production processes.  ↩︎
  3. In accordance with ENCORE and the Natural Capital Protocol (2016), an impact driver is a measurable quantity of a natural resource that is used as an input to production or a measurable non-product output of business activity. ↩︎
  4. In accordance with ENCORE, natural capital assets are specific elements within nature that provide the goods and services that the economy depends on. ↩︎
  5. The WWF also provides a similar tool, the WWF Water Risk Filter, which could be used to assess specific water-related environmental risks. ↩︎

Unlocking the Hidden Gems of the SAP Credit Risk Analyzer 

June 2024
4 min read

Are you leveraging the SAP Credit Risk Analyzer to its full potential?


While many business and SAP users are familiar with its core functionalities, such as limit management with different limit types and attributable amount determination, several lesser-known SAP standard features can enhance your credit risk management processes.


In this article, we will explore these hidden gems, such as Group Business Partners and the ways to manage the limit utilizations using manual reservations and collateral. 

Group Business Partner Use

One of the powerful yet often overlooked features of the SAP Credit Risk Analyzer is the ability to use Group Business Partners (BP). This functionality allows you to manage credit and settlement risk at a bank group level rather than at an individual transactional BP level. By consolidating credit and settlement exposure for related entities under a single group business partner, you can gain a holistic view of the risks associated with an entire banking group. This is particularly beneficial for organizations dealing with banking corporations globally and allocating a certain amount of credit/settlement exposure to banking groups. It is important to note that credit ratings are often reflected at the group bank level. Therefore, the use of Group BPs can be extended even further with the inclusion of credit ratings, such as S&P, Fitch, etc. 

Configuration: Define the business partner relationship by selecting the proper relationship category (e.g., Subsidiary of) and setting the Attribute Direction to "Also count transactions from Partner 1 towards Partner 2," where Partner 2 is the group BP. 

Master Data: Group BPs can be defined in the SAP Business Partner master data (t-code BP). Ensure that all related local transactional BPs are added in a relationship to the appropriate group business partner, and that the validity period of the BP relationship covers the relevant dates. Risk limits are created using the group BP instead of the transactional BP. 

Reporting: Limit utilization (t-code TBLB) is consolidated at the group BP level. Detailed utilization lines show the transactional BP, which can be used to build multiple report variants to break down the limit utilization by transactional BP (per country, region, etc.). 

Having explored the benefits of using Group Business Partners, another feature that offers significant flexibility in managing credit risk is the use of manual reservations and collateral contracts. 

Use of Manual Reservations 

Manual reservations in the SAP Credit Risk Analyzer provide an additional layer of flexibility in managing limit utilization. This feature allows risk managers to manually add a portion of the credit/settlement utilization for specific purposes or transactions, ensuring that critical operations are not hindered by unexpected credit or settlement exposure. It is often used as a workaround for issues such as market data problems, when SAP is not able to calculate the NPV, or for complex financial instruments not yet supported in the Treasury Risk Management (TRM) or Credit Risk Analyzer (CRA) settings. 

Configuration: Apart from basic settings in the limit management, no extra settings are required in SAP standard, making the use of reservations simpler. 

Master data: Use transaction codes such as TLR1 to TLR3 to create, change, and display the reservations, and TLR4 to collectively process them. Define the reservation amount, specify the validity period, and assign it to the relevant business partner, transaction, limit product group, portfolio, etc. Prior to saving the reservation, check in which limits your reservation will be reflected to avoid having any idle or misused reservations in SAP. 

While manual reservations provide a significant boost to flexibility in limit management, another critical aspect of credit risk management is the handling of collateral. 

Collateral 

Collateral agreements are a fundamental aspect of credit risk management, providing security against potential defaults. The SAP Credit Risk Analyzer offers functionality for managing collateral agreements, enabling corporates to track and value collateral effectively. This ensures that the collateral provided is sufficient to cover the exposure, thus reducing the risk of loss.  

SAP TRM supports two levels of collateral agreements:  

  1. Single-transaction-related collateral 
  2. Collateral agreements.  

Both levels are used to reduce the risk at the level of attributable amounts, thereby reducing the utilization of limits. 

Single-transaction-related collateral: SAP distinguishes three types of collateral value categories: 

  • Percentual collateralization 
  • Collateralization using a collateral amount 
  • Collateralization using securities 

Configuration: Configure collateral types and collateral priorities, define collateral valuation rules, and set up the netting group. 

Master Data: Use t-code KLSI01_CFM to create collateral provisions at the appropriate level and value. Then, this provision ID can be added to the financial object. 

Reporting: Both manual reservations and collateral agreements are visible in the limit utilization report as stand-alone utilization items. 

By leveraging these advanced features, businesses can significantly enhance their risk management processes. 

Conclusion

The SAP Credit Risk Analyzer is a comprehensive tool that offers much more than meets the eye. By leveraging its hidden functionalities, such as Group Business Partner use, manual reservations, and collateral agreements, businesses can significantly enhance their credit risk management processes. These features not only provide greater flexibility and control but also ensure a more holistic and robust approach to managing credit risk. As organizations continue to navigate the complexities of the financial landscape, unlocking the full potential of the SAP Credit Risk Analyzer can be a game-changer in achieving effective risk management. 

If you have questions or are keen to see the functionality in our Zanders SAP Demo system, please feel free to contact Aleksei Abakumov or any Zanders SAP consultant. 

Default modelling in an age of agility

June 2024
4 min read



In brief:

  • Prevailing uncertainty in geopolitical, economic and regulatory environments demands a more dynamic approach to default modelling.
  • Traditional methods such as logistic regression fail to address the non-linear characteristics of credit risk.
  • Score-based models are cumbersome to calibrate, and embedding expert opinion in them is time-consuming.
  • Machine learning lacks the interpretability expected in a world where transparency is paramount.
  • Using the Bayesian Gaussian Process Classifier defines lending parameters in a more holistic way, sharpening a bank’s ability to approve creditworthy borrowers and reject proposals from counterparties that are at a high risk of default.

Historically high levels of economic volatility, persistent geopolitical unrest, a fast-evolving regulatory environment – a perpetual stream of disruption is highlighting the limitations and vulnerabilities in many credit risk approaches. In an era where uncertainty persists, predicting risk of default is becoming increasingly complex, and banks are increasingly seeking a modelling approach that incorporates more flexibility, interpretability, and efficiency.

While logistic regression remains the market standard, the evolution of the digital treasury is arming risk managers with a more varied toolkit of methodologies, including those powered by machine learning. This article focuses on the Bayesian Gaussian Process Classifier (GPC) and the merits it offers compared to machine learning, score-based models, and logistic regression.

A non-parametric alternative to logistic regression

The days of approaching credit risk in a linear, one-dimensional fashion are numbered. In today’s fast-paced and uncertain world, to remain resilient to rising credit risk, banks have no choice but to consider all directions at once. With the GPC approach, the linear combination of explanatory variables is replaced by a function, which is iteratively updated by applying Bayes’ rule (see Bayesian Classification With Gaussian Processes for further detail).

For default modelling, a multivariate Gaussian distribution is used, hence forsaking linearity. This allows the GPC to parallel machine learning (ML) methodologies, specifically in terms of flexibility to incorporate a variety of data types and variables and capability to capture complex patterns hidden within financial datasets.
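To illustrate the idea, the sketch below fits a Gaussian Process Classifier to synthetic default data using scikit-learn; note that scikit-learn's implementation approximates the posterior with a Laplace approximation rather than the full Bayesian treatment described here, and all features, labels and thresholds are made up for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hedged sketch: a GPC for default prediction on synthetic data.
rng = np.random.default_rng(7)
n = 300
leverage = rng.uniform(0.0, 1.0, n)   # hypothetical debt / assets ratio
coverage = rng.uniform(0.5, 8.0, n)   # hypothetical interest coverage ratio
X = np.column_stack([leverage, coverage])

# Non-linear default mechanism: high leverage and low coverage default more.
p_default = 1.0 / (1.0 + np.exp(-(6.0 * leverage - 1.2 * coverage)))
y = rng.uniform(size=n) < p_default   # boolean default labels

# An RBF kernel lets the decision surface bend with the data; the fitted
# per-feature length-scales hint at each variable's relevance, echoing the
# hyperparameter interpretability discussed below.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF([1.0, 1.0]), random_state=0)
gpc.fit(X, y)

# Probability of default for a risky and a safe counterparty.
pd_risky, pd_safe = gpc.predict_proba([[0.9, 1.0], [0.1, 7.0]])[:, 1]
print(f"PD risky: {pd_risky:.2f}, PD safe: {pd_safe:.2f}")
```

Unlike a point-estimate score, the predicted probabilities come from a posterior over latent functions, which is what supports the uncertainty statements described in the next sections.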

A model enriched by expert wisdom

Another way GPC shows similar characteristics to machine learning is in how it loosens the rigid assumptions that are characteristic of many traditional approaches, including logistic regression and score-based models. To explain, one example is the score-based Corporate Rating Model (CRM) developed by Zanders. This is the go-to model of Zanders to assess the creditworthiness of corporate counterparties. However, calibrating this model and embedding the opinion of Zanders’ corporate rating experts is a time-consuming task. The GPC approach streamlines this process significantly, delivering both greater cost- and time-efficiencies. The incorporation of prior beliefs via Bayesian inference permits the integration of expert knowledge into the model, allowing it to reflect predetermined views on the importance of certain variables. As a result, the efficiency gains achieved through the GPC approach don’t come at the cost of expert wisdom.

Enabling explainable lending decisions

As well as our go-to CRM, Zanders also houses machine learning approaches to default modelling. Although these generate successful outcomes, the rationale behind a credit decision is not made explicit. In today’s volatile environment, an unexplainable solution can fall short of stakeholder and regulator expectations – they increasingly want to understand the reasoning behind lending decisions at a forensic level.

Unlike the often ‘black-box’ nature of ML models, with GPC the path to a decision is both transparent and explainable. Firstly, the GPC model’s hyperparameters provide insights into the relevance of the explanatory variables and their interplay with the predicted outcome. In addition, the Bayesian framework sheds light on the uncertainty surrounding each hyperparameter, offering a posterior distribution that quantifies confidence in these parameter estimates. This adds substantial risk-assessment value compared with the typical point-estimate outputs of score-based models or deterministic ML predictions. In short, an essential advantage of the GPC over other approaches is its ability to generate outcomes that withstand the scrutiny of stakeholders and regulators.

A more holistic approach to probability of default modelling

In summary, if risk managers are to tackle the mounting complexity of evaluating probability of default, they need to approach it non-linearly and in a way that’s explainable at every level of the process. This is throwing the spotlight onto more holistic approaches, such as the Gaussian Process Classifier. Using this methodology allows for the incorporation of expert intuition as an additional layer to empirical evidence. It is transparent and accelerates calibration without forsaking performance. This presents an approach that not only incorporates the full complexity of credit risk but also adheres to the demands for model interpretability within the financial sector.

Are you interested in how you could use GPC to enhance your approach to default modelling? Contact Kyle Gartner for more information.

BASEL IV & Real Estate Exposures 

May 2024
4 min read


The Basel IV reforms published in 2017 will enter into force on January 1, 2025, with a phase-in period of 5 years. These are probably the most important reforms banks will go through since the introduction of Basel II. The reforms introduce changes in many areas. In the area of credit risk, the key elements of the banking package are the revision of the standardized approach (SA) and the introduction of the output floor.

In this article, we will analyse in detail the recent updates made to real estate exposures and their impact on capital requirements and internal processes, with a particular focus on collateral valuation methods. 

Real Estate Exposures 

Lending for house purchases is an important business for banks. More than one-third of bank loans in the EU are collateralised with residential immovable property. The Basel IV reforms introduce a more risk-sensitive framework, featuring a more granular classification system. 

Standardized Approach 

The new reforms aim to diminish the advantages banks gain from using Internal Ratings-Based (IRB) models. All financial institutions that calculate capital requirements with the IRB approach are now also required to calculate them under the standardized approach, which serves as the basis for the output floor. Under the standardized approach, financial institutions can choose between two methods for assigning risk weights to real estate exposures: the whole-loan approach and the split-loan approach.
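To make the two methods concrete, the sketch below contrasts them for a residential exposure that is not materially dependent on cash flows. The LTV buckets, the 55% secured threshold, and the 20%/75% risk weights follow the illustrative figures of the Basel framework (CRE20); the weights actually applicable depend on the exposure class and each jurisdiction’s final rules, so treat the numbers as assumptions.

```python
def whole_loan_rw(ltv: float) -> float:
    """Whole-loan approach: one risk weight, driven by the LTV ratio,
    applied to the entire exposure (illustrative Basel CRE20 buckets)."""
    buckets = [(0.50, 0.20), (0.60, 0.25), (0.80, 0.30),
               (0.90, 0.40), (1.00, 0.50)]
    for upper_ltv, rw in buckets:
        if ltv <= upper_ltv:
            return rw
    return 0.70  # LTV above 100%

def split_loan_rwa(loan: float, property_value: float,
                   counterparty_rw: float = 0.75) -> float:
    """Split-loan approach: the portion up to 55% of the property value is
    treated as fully secured (20% risk weight); the remainder receives the
    counterparty's risk weight (75% assumed here for a retail obligor)."""
    secured = min(loan, 0.55 * property_value)
    unsecured = loan - secured
    return secured * 0.20 + unsecured * counterparty_rw

# A loan of 800,000 against a property worth 1,000,000 (LTV 80%):
rwa_whole = 800_000 * whole_loan_rw(0.80)      # 800,000 * 30% = 240,000
rwa_split = split_loan_rwa(800_000, 1_000_000)  # 550,000*20% + 250,000*75%
```

For this example the whole-loan approach produces the lower risk-weighted assets, but the ranking flips with the LTV and the counterparty risk weight, which is why institutions should test both methods on their own portfolios.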

Collateral Valuation  

A significant change introduced by the reforms concerns collateral valuation. Previously, the framework allowed banks to determine the value of their real estate collateral based on either the market value (MV) concept or the mortgage lending value (MLV) concept. The revised framework no longer differentiates between these two concepts and introduces new requirements for valuing real estate for lending purposes by establishing a new definition of value. This aims to mitigate the impact of cyclical effects on the valuation of property securing a loan and to maintain more stable capital requirements for mortgages. Implementing an independent valuation that adheres to prudent and conservative criteria can be challenging and may result in significant and disruptive changes in valuation practices.  

Conclusion  

To reduce the impact of cyclical effects on the valuation of property securing a loan, and to keep capital requirements for mortgages more stable, the regulator has capped the value of the property: it cannot be higher than its value at origination, unless modifications to the property unequivocally increase its value. Regulators also have high expectations for accounting for environmental and climate risks, which can influence property valuations in two ways. On the one hand, these risks can trigger a decrease in property value. On the other hand, modifications that improve a property's energy performance or its resilience to physical risks – such as protection and adaptation measures for buildings and housing units – may be considered value-increasing factors.
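The capping rule can be expressed as a simple function. The sketch below is a minimal interpretation of the rule, assuming that value-increasing modifications can be quantified as a monetary uplift; the function name and signature are invented for illustration.

```python
def prudent_property_value(value_at_origination: float,
                           current_valuation: float,
                           value_increasing_modifications: float = 0.0) -> float:
    """Cap the collateral value at its value at origination, plus any
    modifications that unequivocally increase the property's value
    (e.g. energy-performance upgrades)."""
    cap = value_at_origination + value_increasing_modifications
    return min(current_valuation, cap)
```

For example, a property financed at a value of 300,000 and now appraised at 350,000 is still carried at 300,000 for capital purposes, unless, say, a 40,000 energy-efficiency renovation lifts the cap to 340,000.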

Where Zanders can help 

We specialize in assisting financial institutions with various aspects of the Basel IV reforms, from addressing limited data availability and implementing new modelling approaches to providing guidance on interpreting regulatory requirements.

For further information, please contact Marco Zamboni. 
