Converging on resilience: Integrating CCR, XVA, and real-time risk management
November 2024
2 min read
Author:
Sylvain Cheroutre
In a world where the Fundamental Review of the Trading Book (FRTB) commands much attention, it’s easy for counterparty credit risk (CCR) to slip under the radar.
However, CCR remains an essential element in banking risk management, particularly as it converges with valuation adjustments. This convergence reflects growing regulatory expectations, further amplified by recent cases such as Archegos. Furthermore, regulatory focus seems to be shifting, particularly in the U.S., away from the Internal Model Method (IMM) and toward standardised approaches. This article provides strategic insights for senior executives navigating the evolving CCR framework and its regulatory landscape.
Evolving trends in CCR and XVA
Counterparty credit risk (CCR) has evolved significantly, with banks now adopting a closely integrated approach with valuation adjustments (XVA) — particularly Credit Valuation Adjustment (CVA), Funding Valuation Adjustment (FVA), and Capital Valuation Adjustment (KVA) — to fully account for risk and costs in trade pricing. This trend towards blending XVA into CCR has been driven by the desire for more accurate pricing and capital decisions that reflect the true risk profile of the underlying instruments and positions.
In addition, recent years have seen a marked increase in the use of collateral and initial margin as mitigants for CCR. While this approach is essential for managing credit exposures, it simultaneously shifts a portion of the risk profile into contingent market and liquidity risks. This, in turn, creates requirements for real-time monitoring and enhanced data capabilities to capture both the credit and liquidity dimensions of CCR. Ultimately, it also introduces additional risks and modelling challenges with respect to wrong-way risk and clearing counterparty risk.
As banks continue to invest in advanced XVA models and supporting technologies, senior executives must ensure that systems are equipped to adapt to these new risk characteristics, as well as to meet growing regulatory scrutiny around collateral management and liquidity resilience.
The Internal Model Method (IMM) vs. SA-CCR
In terms of calculating CCR, approaches based on IMM and SA-CCR provide divergent paths. On one hand, IMM allows banks to tailor models to specific risks, potentially leading to capital efficiencies. SA-CCR, on the other hand, offers a standardised approach that’s straightforward yet conservative. Regulatory trends indicate a shift toward SA-CCR, especially in the U.S., where reliance on IMM is diminishing.
As banks shift towards SA-CCR for regulatory capital and IMM is used increasingly for internal purposes, senior leaders might need to re-evaluate whether separate calibrations for CVA and IMM are warranted or whether CVA data can inform IMM processes as well.
Regulatory focus on CCR: Real-time monitoring, stress testing, and resilience
Real-time monitoring and stress testing are taking centre stage following increased regulatory focus on resilience. Evolving guidelines, such as those from the Bank for International Settlements (BIS), emphasise a need for efficiency and convergence between trading and risk management systems. This means that banks must incorporate real-time risk data and dynamic monitoring to proactively manage CCR exposures and respond to changes in a timely manner.
CVA hedging and regulatory treatment under IMM
CVA hedging aims to mitigate counterparty credit spread volatility, which affects portfolio credit risk. However, current regulations limit offsetting CVA hedges against CCR exposures under IMM. This regulatory separation of capital for CVA and CCR leads to some inefficiencies, as institutions can’t fully leverage hedges to reduce overall exposure.
Ongoing BIS discussions suggest potential reforms for recognising CVA hedges within CCR frameworks, offering a chance for more dynamic risk management. Additionally, banks are exploring CCR capital management through LGD reductions using third-party financial guarantees, potentially allowing for more efficient capital use. For executives, tracking these regulatory developments could reveal opportunities for more comprehensive and capital-efficient approaches to CCR.
Leveraging advanced analytics and data integration for CCR
Emerging technologies in data analytics, artificial intelligence (AI), and scenario analysis are revolutionising CCR. Real-time data analytics provide insights into counterparty exposures but typically come at significant computational costs: high-performance computing can help mitigate this, and, if coupled with AI, enable predictive modelling and early warning systems. For senior leaders, integrating data from risk, finance, and treasury can optimise CCR insights and streamline decision-making, making risk management more responsive and aligned with compliance.
By leveraging advanced analytics, banks can respond proactively to potential CCR threats, particularly in scenarios where early intervention is critical. These technologies equip executives with the tools to not only mitigate CCR but also enhance overall risk and capital management strategies.
Strategic considerations for senior executives: Capital efficiency and resilience
Balancing capital efficiency with resilience requires careful alignment of CCR and XVA frameworks with governance and strategy. To meet both regulatory requirements and competitive pressures, executives should foster collaboration across risk, finance, and treasury functions. This alignment will enhance capital allocation, pricing strategies, and overall governance structures.
For banks facing capital constraints, third-party optimisation can be a viable strategy to manage the demands of SA-CCR. Executives should also consider refining data integration and analytics capabilities to support efficient, resilient risk management that is adaptable to regulatory shifts.
Conclusion
As counterparty credit risk re-emerges as a focal point for financial institutions, its integration with XVA, and the shifting emphasis from IMM to SA-CCR, underscore the need for proactive CCR management. For senior risk executives, adapting to this complex landscape requires striking a balance between resilience and efficiency. Embracing real-time monitoring, advanced analytics, and strategic cross-functional collaboration is crucial to building CCR frameworks that withstand regulatory scrutiny and position banks competitively.
In a financial landscape that is increasingly interconnected and volatile, an agile and resilient approach to CCR will serve as a foundation for long-term stability. At Zanders, we have significant experience implementing advanced analytics for CCR. By investing in robust CCR frameworks and staying attuned to evolving regulatory expectations, senior executives can prepare their institutions for the future of CCR and avoid being left behind.
Insights into cracking model risk for prepayment models
October 2024
7 min read
Authors:
Jimmy Tang, Joost Hommes
This article examines different methods for quantifying and forecasting model risk in prepayment models, highlighting their respective strengths and weaknesses.
Within the field of financial risk management, professionals strive to develop models to tackle the complexities in the financial domain. However, due to the ever-changing nature of financial variables, models only capture reality to a certain extent. Therefore, model risk - the potential loss a business could suffer due to an inaccurate model or incorrect use of a model - is a pressing concern. This article explores model risk in prepayment models, analyzing various approaches to quantify and forecast this risk.
There are numerous examples where model risk has not been properly accounted for, resulting in significant losses. For example, Long-Term Capital Management was a hedge fund that went bankrupt in the late 1990s because its model was never stress-tested for extreme market conditions. Similarly, in 2012, JP Morgan suffered a $6 billion loss and $920 million in fines on the ‘London Whale’ trades, losses that were compounded by flaws in its new value-at-risk model.
Despite these prominent failures, and the requirements of CRD IV Article 85 for institutions to develop policies and processes for managing model risk,1 the quantification and forecasting of model risk has not been extensively covered in academic literature. This leaves a significant gap in the general understanding and ability to manage this risk. Adequate model risk management allows for optimized capital allocation, reduced risk-related losses, and a strengthened risk culture.
This article delves into model risk in prepayment models, examining different methods to quantify and predict this risk. The objective is to compare different approaches, highlighting their strengths and weaknesses.
Definition of Model Risk
Generally, model risk can be assessed using a bottom-up approach by analyzing individual model components, assumptions, and inputs for errors, or by using a top-down approach by evaluating the overall impact of model inaccuracies on broader financial outcomes. In the context of prepayments, this article adopts a bottom-up approach by using model error as a proxy for model risk, allowing for a quantifiable measure of this risk. Model error is the difference between the modelled prepayment rate and the actual prepayment rate. Model error occurs at an individual level when a prepayment model predicts a prepayment that does not happen, and vice versa. However, banks are more interested in model error at the portfolio level. A statistic often used by banks is the Single Monthly Mortality (SMM). The SMM is the monthly percentage of prepayments and can be calculated by dividing the amount of prepayments for a given month by the total amount of mortgages outstanding.
Using the SMM, we can define and calculate the model error as the difference between the predicted SMM and the actual SMM:

Model error = Predicted SMM − Actual SMM
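As a minimal illustration, the SMM and the resulting model error can be computed as follows; the portfolio figures are hypothetical, not taken from the article:

```python
def smm(prepaid_amount: float, outstanding_amount: float) -> float:
    """Single Monthly Mortality: monthly prepayments divided by the
    total outstanding mortgage amount."""
    return prepaid_amount / outstanding_amount

# Hypothetical portfolio figures for one month
predicted_smm = smm(1.2e6, 100e6)   # model-predicted prepayments
actual_smm = smm(1.5e6, 100e6)      # observed prepayments

# Model error at portfolio level: predicted SMM minus actual SMM
model_error = predicted_smm - actual_smm
print(round(model_error, 6))  # -0.003: the model under-predicted prepayments
```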
The European Banking Authority (EBA) requires financial institutions, when calculating valuation model risk, to set aside enough funds to be 90% confident that they can exit a position at the time of the assessment. Consequently, banks are concerned with the top 5% and bottom 5% of the model risk distribution (EBA, 2015, 2016).2 Thus, banks are interested in the distribution of the model error as defined above, aiming to ensure they allocate capital optimally for model risk in prepayment models.
Approaches to Forecasting Model Risk
By using model error as a proxy for model risk, we can leverage historical model errors to forecast future errors through time-series modelling. In this article, we explore three methods: the simple approach, the auto-regressive approach, and the machine learning challenger model approach.
Simple Approach
The first method proposed to forecast the expected value and the variance of the model errors is the simple approach. It is the most straightforward way to quantify and predict model risk: analyzing the mean and standard deviation of the model errors. The model itself causes minimal uncertainty, as only two parameters have to be estimated, namely the intercept and the standard deviation.
The disadvantage of the simple approach is that it is time-invariant. Consequently, even in extreme conditions, the expected value and the variance of model errors remain constant over time.
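A minimal sketch of the simple approach, using an illustrative error series and assuming the model errors are normally distributed:

```python
from statistics import NormalDist, mean, stdev

# Historical monthly model errors (predicted minus actual SMM), illustrative
errors = [0.001, -0.002, 0.0005, 0.003, -0.001, 0.0015]

# Simple approach: estimate only an intercept (the mean) and a standard
# deviation; the forecast distribution is the same for every future month
mu = mean(errors)
sigma = stdev(errors)

# 5th and 95th percentiles of the assumed normal error distribution,
# i.e. the tails targeted by the EBA's 90% confidence requirement
lower = NormalDist(mu, sigma).inv_cdf(0.05)
upper = NormalDist(mu, sigma).inv_cdf(0.95)
```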
Auto-Regressive Approach
The second approach to forecast the model errors of a prepayment model is the auto-regressive approach. Specifically, this approach utilizes an AR(1) model, which forecasts the model errors by leveraging their lagged values. The advantage of the auto-regressive approach is that it takes into account the dynamics of the historical model errors when forecasting them, making it more advanced than the simple approach.
The disadvantage of the auto-regressive approach is that it always lags and that it does not take into account the current status of the economy. For example, an increase in the interest rate by 200 basis points is expected to lead to a higher model error, while the auto-regressive approach is likely to forecast this increase in model error one month later.
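The AR(1) fit can be sketched as follows; the error series is illustrative, and the ordinary-least-squares estimation below is just one of several ways to fit an AR(1):

```python
from statistics import mean

# Historical monthly model errors, illustrative values
errors = [0.001, -0.002, 0.0005, 0.003, -0.001, 0.0015]

# Fit AR(1), e_t = c + phi * e_{t-1} + noise, by ordinary least squares
y = errors[1:]    # e_t
x = errors[:-1]   # e_{t-1}
xm, ym = mean(x), mean(y)
phi = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)
c = ym - phi * xm

# One-step-ahead forecast from the most recent observed error; note the lag:
# a shock this month only shows up in next month's forecast
next_error = c + phi * errors[-1]
```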
Machine Learning Challenger Model Approach
The third approach to forecast the model errors involves incorporating a Machine Learning (ML) challenger model. In this article, we use an Artificial Neural Network (ANN). This ML challenger model can be more sophisticated than the production model, as its primary focus is on predictive accuracy rather than interpretability. This approach uses risk measures to compare the production model with the more advanced challenger model. A new variable is defined as the difference between the predictions of the production model and those of the challenger model.
Similar to the above approaches, the expected value of the model errors is forecasted by estimating the intercept, the parameter of the new variable, and the standard deviation. A forecast can be made and the difference between the production model and ML challenger model can be used as a proxy for future model risk.
The advantage of using the ML challenger model approach is that it is forward-looking. This forward-looking method allows for reasonable estimates under both normal and extreme conditions, making it a reliable proxy for future model risk. In addition, when there are complex non-linear relationships between an independent variable and the prepayment rate, an ML challenger can be more accurate. Its complexity allows it to predict significant impacts better than a simpler, more interpretable production model. Consequently, employing an ML challenger model approach could effectively estimate model risk during substantial market changes.
A disadvantage of the machine learning approach is its complexity and lack of interpretability. Additionally, developing and maintaining these models often requires significant time, computational resources, and specialized expertise.
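A simplified sketch of the regression step in this approach: we assume a trained challenger's predictions are already available (the numbers below are placeholders, not outputs of an actual ANN) and regress historical model errors on the production-versus-challenger difference:

```python
from statistics import mean

# Monthly SMM figures: production model, ML challenger, and actuals
# (placeholder values for illustration only)
production = [0.010, 0.012, 0.011, 0.015, 0.013]
challenger = [0.012, 0.011, 0.013, 0.018, 0.012]
actual = [0.013, 0.010, 0.014, 0.019, 0.011]

# New explanatory variable: production-model minus challenger prediction
diff = [p - c for p, c in zip(production, challenger)]
# Observed model errors of the production model
err = [p - a for p, a in zip(production, actual)]

# Regress model error on the difference variable: err_t = a + b * diff_t
dm, em = mean(diff), mean(err)
b = sum((d - dm) * (e - em) for d, e in zip(diff, err)) / sum((d - dm) ** 2 for d in diff)
a = em - b * dm

# Forward-looking forecast of next month's model error, using next month's
# production-vs-challenger gap (hypothetical predictions)
next_gap = 0.011 - 0.014
forecast_error = a + b * next_gap
```

Because the gap between the two models is observable before the actual prepayments are, this proxy reacts to market changes immediately rather than one month later.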
Conclusion
The various methods to estimate model risk are compared in a simulation study. The ML challenger model approach stands out as the most effective method for predicting model errors, offering increased accuracy in both normal and extreme conditions. Both the simple and the challenger model approaches effectively predict the variability of model errors, but the challenger model approach achieves a smaller standard deviation. In scenarios involving extreme interest rate changes, only the challenger model approach delivers reasonable estimates, highlighting its robustness. Therefore, the challenger model approach is the preferred choice for predicting model error under both normal and extreme conditions.
Ultimately, the optimal approach should align with the bank’s risk appetite, operational capabilities, and overall risk management framework. Zanders, with its extensive expertise in financial risk management, including multiple high-profile projects related to prepayments at G-SIBs as well as mid-size banks, can provide comprehensive support in navigating these challenges. See our expertise here.
Ready to take your IRRBB strategy to the next level?
Zanders is an expert on IRRBB-related topics. We enable banks to achieve both regulatory compliance and strategic risk goals by offering support from strategy to implementation. This includes risk identification, formulating a risk strategy, setting up an IRRBB governance and framework, and policy or risk appetite statements. Moreover, we have an extensive track record in IRRBB and behavioral models such as prepayment models, hedging strategies, and calculating risk metrics, both from model development and model validation perspectives.
Contact our experts today to discover how Zanders can help you transform risk management into a competitive advantage. Reach out to: Jaap Karelse, Erik Vijlbrief, Petra van Meel, or Martijn Wycisk to start your journey toward financial resilience.
1 https://www.eba.europa.eu/regulation-and-policy/single-rulebook/interactive-single-rulebook/11665 CRD IV Article 85: Competent authorities shall ensure that institutions implement policies and processes to evaluate and manage the exposures to operational risk, including model risk and risks resulting from outsourcing, and to cover low-frequency high-severity events. Institutions shall articulate what constitutes operational risk for the purposes of those policies and procedures.
Budget at Risk: Empowering a global non-profit client with a clearer steer on FX risk
How can a non-profit organization operating on a global stage safeguard itself from foreign currency fluctuations? Here, we share how our ‘Budget at Risk’ model helped a non-profit client more accurately quantify the currency risk in its operations.
Charities and non-profit organizations face distinct challenges when processing donations and payments across multiple countries. In this sector, the impact of currency exchange losses is not simply about the effect on an organization’s financial performance; there’s also the potential disruption to projects to consider when budgets are at risk. Zanders developed a ‘Budget at Risk’ model to help a non-profit client with worldwide operations to better forecast the potential impact of currency fluctuations on their operating budget. In this article, we explain the key features of this model and how it's helping our client to forecast the budget impact of currency fluctuations with confidence.
The client in question is a global non-profit financed primarily through individual contributions from donors all over the world. While monthly inflows and outflows are in 16 currencies, the organization’s global reserves are quantified in EUR. Consequently, their annual operating budget is highly impacted by foreign exchange rate changes. To manage this proactively demands an accurate forecasting and assessment of:
The offsetting effect of the inflows and outflows.
The diversification effect coming from the level of correlation between the currencies.
With the business lacking in-house expertise to quantify these risk factors, they sought Zanders’ help to develop and implement a model that would allow them to regularly monitor and assess the potential budget impact of potential FX movements.
Developing the BaR method
Having already advised the organization on several advisory and risk management projects over the past decade, Zanders was well versed in the organization’s operations and the unique nature of the FX risk it faces. The objective behind developing Budget at Risk (BaR) was to create a model that could quantify the potential risk to the organization’s operating budget posed by fluctuations in foreign exchange rates.
The BaR model uses the Monte Carlo method to simulate FX rates over a 12-month period. Simulations are based on the monthly returns on the FX rates, modelled by drawings from a multivariate normal distribution. This enables the quantification of the maximum expected negative FX impact on the company’s budget over the coming year at a defined level of confidence (e.g., 95%). The model outcomes are presented as a EUR amount to enable direct comparison with the level of FX risk in the company’s global reserves (which provides the company’s ‘risk absorbing capacity’). When the BaR outcome falls outside the defined bandwidth of the FX risk reserve, it alerts the company to consider selective FX hedging decisions to bring the BaR back within the desired FX risk reserve level.
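The simulation can be sketched as follows; the currencies, net exposures, volatilities, and correlations are illustrative assumptions, not the client's actual figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three currencies, net annual flows in EUR equivalents
currencies = ["USD", "GBP", "JPY"]
net_exposure = np.array([10e6, -4e6, 6e6])   # inflows minus outflows

# Assumed monthly FX return parameters (estimated from history in practice)
mu = np.array([0.000, 0.001, -0.001])
vols = np.array([0.025, 0.020, 0.030])
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
cov = np.outer(vols, vols) * corr

# Simulate 12 months of correlated returns and compound to an annual return
n_sims = 100_000
monthly = rng.multivariate_normal(mu, cov, size=(n_sims, 12))
annual = np.prod(1.0 + monthly, axis=1) - 1.0     # shape (n_sims, 3)

# P&L impact on the EUR budget per scenario; the 95% Budget at Risk is the
# loss that is not exceeded in 95% of scenarios
pnl = annual @ net_exposure
bar_95 = -np.percentile(pnl, 5)
print(f"95% Budget at Risk: EUR {bar_95:,.0f}")
```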
The nature of the model
The purpose of the BaR model isn’t to specify the maximum or guaranteed amount that will be lost. Instead, it provides an indication of the amount that could be lost in relation to the budgeted cash flows within a given period, at the specified confidence interval. To achieve this, the sensitivity of the model is calibrated by:
Modifying the confidence levels. This changes the sensitivity of the model to extreme scenarios. For example, the figure below illustrates the BaR for a 95% level of confidence and provides the 5% worst-case scenario. If a 99% confidence level were applied, it would provide the 1% worst-case (most extreme) scenario.
Selecting different lengths of sample data. This allows the calculation of the correlation and volatility of currency pairs. The period length of the sample data helps to assess the sensitivity to current events that may affect the FX market. For example, a sample period of 6 months is much more sensitive to current events than a sample of 5 years.
Figure 1 – BaR for a 95% level of confidence
Adjusting these parameters makes it possible to calculate the decomposition of the BaR per currency for a specified confidence level and length of data sample. The visual outcome makes the currency that’s generating most risk quick and easy to identify. Finally, the diversification effect on the BaR is calculated to quantify the offsetting effect of inflows and outflows and the correlation between the currencies.
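The per-currency decomposition and the diversification effect can be sketched as follows (the exposures and annualized covariance figures are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical net exposures and annualized covariance of FX returns
net_exposure = np.array([10e6, -4e6, 6e6])
cov = np.array([[0.0075, 0.0036, 0.0027],
                [0.0036, 0.0048, 0.0029],
                [0.0027, 0.0029, 0.0108]])
annual = rng.multivariate_normal(np.zeros(3), cov, size=100_000)

pnl_by_ccy = annual * net_exposure   # per-scenario P&L per currency
total_pnl = pnl_by_ccy.sum(axis=1)

# Standalone BaR per currency vs total BaR, both at 95% confidence
bar_ccy = -np.percentile(pnl_by_ccy, 5, axis=0)
bar_total = -np.percentile(total_pnl, 5)

# Diversification effect: offsetting flows and imperfect correlation make
# the total BaR smaller than the sum of the standalone per-currency BaRs
diversification = bar_ccy.sum() - bar_total
```

Sorting `bar_ccy` immediately shows which currency generates the most risk, which is the decomposition view described above.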
Table 1 – Example BaR output per confidence level and length of data sample
Pushing parameters
The challenge with the simulation and the results generated is that many parameters influence the outcomes – such as changes in cash flows, volatility, or correlation. To provide as much clarity as possible on the underlying assumptions, the impact of each parameter on the results must be considered. Zanders achieves this firstly by decomposing the impact by:
Changing FX data to trigger a difference in the market volatility and correlation.
Altering the cash flows between the two assessment periods.
Then, we look at each individual currency to better understand its impact on the total result. Finally, additional background checks are performed to ensure the accuracy of the results.
This multi-layered modelling technique provides base cases that generate realistic predictions of the impact of specific rate changes on the business’ operating budget for the year ahead. Armed with this knowledge, we then work with the non-profit client to develop suitable hedging strategies to protect their funding.
Leveraging Zanders’ expertise
FX scenario modelling is a complex process requiring expertise in currency movements and risk – a combination of niche skills that are uncommon in the finance teams of most non-profit businesses. But for these organizations, where there can be significant currency exposure, taking a proactive, data-driven approach to managing FX risk is critical. Zanders brings extensive experience in supporting NGO, charity and non-profit clients with modelling currency risk in a multiple currency exposure environment and quantifying potential hedge cost reduction by shifting from currency hedge to portfolio hedge.
Biodiversity risks scoring: a quantitative approach
October 2024
9 min read
Authors:
Marije Wiersma, Sjoerd Blijlevens, Miguel Manzanares
Explore how Zanders’ scoring methodology quantifies biodiversity risks, enabling financial institutions to safeguard portfolios from environmental and transition impacts.
Addressing biodiversity (loss) is not only relevant from an impact perspective; it is also quickly becoming a necessity for financial institutions to safeguard their portfolios against financial risks stemming from habitat destruction, deforestation, invasive species and/or diseases.
In a previous article, published in November 2023, Zanders introduced the concept of biodiversity risks, explained how it can pose a risk for financial institutions, and discussed the expectations from regulators.1 In addition, we touched upon our initial ideas to introduce biodiversity risks in the risk management framework. One of the suggestions was for financial institutions to start assessing the materiality of biodiversity risk, for example by classifying exposures based on sector or location. In this article, we describe Zanders’ approach for classifying biodiversity risks in more detail. More specifically, we explore the concepts behind the assessment of biodiversity risks, and we present key insights into methodologies for classifying the impact of biodiversity risks; including a use case.
Understanding biodiversity risks
Biodiversity risks can be related to physical risk and/or transition risk events. Biodiversity physical risk results from environmental decay, either event-driven or resulting from longer-term patterns. Biodiversity transition risk results from developments aimed at preventing or restoring damage to nature. These risks are driven by the impacts and dependencies that an undertaking has on natural resources and ecosystem services. The definition of impacts and dependencies and their relation to physical and transition risks is explained below:
Companies impact natural assets through their business operations and output. For example, the production process of an oil company in a biodiversity sensitive area could lead to biodiversity loss. Impacts are mainly related to transition risk as sectors and economic activities that have a strong negative impact on environmental factors are likely to be the first affected by a change in policies, legal charges, or market changes related to preventing or restoring damage to nature.
On the other hand, companies are dependent on certain ecosystem services. For example, agricultural companies are dependent on ecosystem services such as water and pollination. Dependencies are mainly related to physical risk as companies with a high dependency will take the biggest hit from a disruption or decay of the ecosystem service caused by e.g. an oil spill or pests.
For banks, the impacts and dependencies of their own operations and of their counterparties can impact traditional financial (credit, liquidity, and market) and non-financial (operational and business) risks. In our biodiversity classification methodology, we assess both impacts and dependencies as indicators for physical and transition risk. This is further described in the next section.
Zanders’ biodiversity classification methodology
An important starting point for climate-related and environmental (C&E) risk management is the risk identification and materiality assessment. For C&E risks, and biodiversity in particular, obtaining data is a challenge, so a quantitative assessment of materiality is difficult to achieve. To address this, Zanders has developed a data-driven classification methodology. By classifying the biodiversity impacts and dependencies of exposures based on the sector and location of the counterparty, scores are calculated that quantify the portfolio's physical and transition risks related to biodiversity. These scores are based on the databases of Exploring Natural Capital Opportunities, Risks and Exposure (ENCORE) and the World Wide Fund for Nature (WWF).
Sector classification
The sector classification methodology is developed based on the ENCORE database. ENCORE is a public database that is recognized by global initiatives such as Taskforce on Nature-related Financial Disclosures (TNFD) and Partnership for Biodiversity Accounting Financials (PBAF). ENCORE is a key tool for the “Evaluate” phase of the TNFD LEAP approach (Locate, Evaluate, Assess and Prepare).
ENCORE was developed specifically for financial institutions, with the goal of assisting them in performing a high-level but data-driven scan of their exposures' impacts and dependencies. The scan covers multiple dimensions of the ecosystem, including biodiversity-related environmental drivers. ENCORE evaluates the potential reliance on ecosystem services2 and the effect of impact drivers3 on natural capital assets4. It does so by assigning scores at different levels of a sector classification (sector, sub-industry, and production process). These scores are assigned for 11 impact drivers and 21 ecosystem services. ENCORE provides a score ranging from Very Low to Very High for a broad range of production processes, sub-industries, and sectors.
ENCORE does not offer a methodology for aggregating the scores of individual impact drivers and ecosystem services, and therefore does not provide an overall dependency or impact score per sector, sub-industry, or production process. Zanders has therefore created a methodology to calculate final aggregated impact and dependency scores. The result of this aggregation is a single impact score and a single dependency score for each ENCORE sector, sub-industry, or production process. In addition, overall impact and dependency scores are computed for the portfolio, based on its sector distribution. In both cases, scores range from 0 (no impact/dependency) to 5 (very high impact/dependency).
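Since ENCORE's ratings are categorical, any aggregation first requires a numeric mapping. The sketch below illustrates one possible approach: a rating-to-score mapping, a per-sector mean, and an exposure-weighted portfolio average. The mapping and the use of a plain mean are assumptions for illustration only, not Zanders' actual methodology.

```python
# Illustrative aggregation of ENCORE-style ratings into a single 0-5 score
# per sector, plus an exposure-weighted portfolio-level average.
# The rating-to-number mapping and the plain mean are assumptions.

RATING_TO_SCORE = {
    "None": 0, "Very Low": 1, "Low": 2,
    "Medium": 3, "High": 4, "Very High": 5,
}

def sector_score(ratings):
    """Aggregate the per-driver (or per-service) ratings of one sector."""
    numeric = [RATING_TO_SCORE[r] for r in ratings]
    return sum(numeric) / len(numeric)

def portfolio_score(sector_scores, sector_weights):
    """Exposure-weighted average across sectors; weights must sum to 1."""
    return sum(sector_scores[s] * w for s, w in sector_weights.items())

# Hypothetical ratings for two sectors across three impact drivers
impact = {
    "Oil & Gas": sector_score(["Very High", "High", "Medium"]),   # 4.0
    "Real estate": sector_score(["High", "Medium", "Low"]),       # 3.0
}
# Hypothetical exposure weights for the two sectors
overall = portfolio_score(impact, {"Oil & Gas": 0.4, "Real estate": 0.6})
```

The same pattern extends to the dependency scores and, with country weights instead of sector weights, to the location scores described below.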
Location classification
The location scoring methodology is developed based on the WWF Biodiversity Risk Filter (hereafter called WWF BRF).5 The WWF BRF is a public tool that supports a location-specific analysis of physical- and transition-related biodiversity risks.
The WWF BRF consists of a set of 33 biodiversity indicators: 20 related to physical risks and 13 related to reputational risks. These are provided at country level, but also at a more granular regional level. The tool itself aggregates these indicators, ultimately providing a single scape physical risk score and a single scape reputational risk score per location.
The WWF BRF does not offer a methodology for aggregating country scores into an overall transition risk score (based on the scape reputational risk scores) or an overall physical risk score (based on the scape physical risk scores). Zanders has therefore created a methodology to calculate final aggregated transition and physical risk scores for the portfolio, based on its geographical distribution. The result of this aggregation is a single transition risk score and a single physical risk score for the portfolio, each ranging from 0 (no risk) to 5 (very high risk).
Use case: Risk identification and materiality assessment for biodiversity risks in a bank portfolio
In this section, we present a use case of classifying biodiversity risks for the portfolio of a fictional financial institution, using the sector and location scoring methodologies developed by Zanders.
The exposures of this financial institution are concentrated in four sectors: Real estate, Oil & Gas, Soft commodities and Luxury goods. Moreover, the operations of these sectors are located across four different countries: the Netherlands, Switzerland, Morocco and China. The following matrix shows the percentage of exposures of the financial institution for each combination of sector and country:
ENCORE provides scores for 21 ecosystem services and 11 impact drivers. Those related to biodiversity risks are transformed to a range from 0 to 5. After that, the biodiversity-related ecosystem services and impact drivers are aggregated into overall biodiversity dependency and impact scores, respectively. The following table shows the mapping between the sectors in the portfolio and the corresponding sub-industries in the ENCORE database, including the aggregated biodiversity impact and dependency scores computed for those sub-industries. The mapping is done at sub-industry level, since this is the level of granularity in the ENCORE sector classification that best fits the sectors defined in the fictional portfolio. In addition, the overall impact and dependency scores for the portfolio are computed as a weighted average based on the portfolio's sector distribution. This leads to scores of 3.8 for impacts and 2.4 for dependencies.
The WWF BRF provides biodiversity indicators at country level. It already provides an aggregated score for physical risk (the scape physical risk score) and for transition risk (the scape reputational risk score), so no further aggregation per country is needed. The corresponding scores for the four countries in the bank's portfolio are therefore selected. As the last step, the location scores are transformed to the same range as the sector scores, i.e., from 0 (no physical/transition risk) to 5 (very high physical/transition risk). The results are shown in the following table. In addition, the overall physical and transition risk scores for the portfolio are computed as a weighted average based on the portfolio's geographical distribution. This leads to scores of 3.9 for physical risk and 3.3 for transition risk.
The sector and location scores can be visualized for a better understanding and to enable comparison between sectors and countries. Bubble charts, such as the ones shown below, present the sector and location scores together with the size of the exposures in the portfolio (indicated by the size of each bubble).
Combined with the size of the exposures, the results suggest that biodiversity-related physical and transition risks could result in financial risks for Soft commodities and Oil & Gas. This is due to high impacts and dependencies and their relevant size in the portfolio. Moreover, despite a low dependency score, biodiversity risks could also impact the Real estate sector due to a combination of its high impact score and the high sector concentration (45% of the portfolio). From a location perspective, exposures located in China could face high biodiversity transition risks, while exposures located in Morocco are the most vulnerable to biodiversity physical risks. In addition, the relatively high scores for both physical and transition risk for the Netherlands, combined with the large size of these exposures in the portfolio, could also lead to additional financial risk.
These results, combined with other information such as loan maturities, identified transmission channels, or expert inputs, can be used to inform the materiality of biodiversity risks.
Conclusion
Assessing the materiality of biodiversity risks is crucial for financial institutions in order to understand the risks and opportunities in their loan portfolios. In this article, Zanders has presented its approach for an initial quantification of biodiversity risks. Curious to learn how Zanders can support your financial institution with the identification and quantification of biodiversity risks and their integration into the risk frameworks? Please reach out to Marije Wiersma, Iryna Fedenko or Miguel Manzanares.
In accordance with ENCORE, ecosystem services are the links between nature and business. Each of these services represents a benefit that nature provides to enable or facilitate business production processes. ↩︎
In accordance with ENCORE and the Natural Capital Protocol (2016), an impact driver is a measurable quantity of a natural resource that is used as an input to production, or a measurable non-product output of business activity. ↩︎
In accordance with ENCORE, natural capital assets are specific elements within nature that provide the goods and services that the economy depends on. ↩︎
The WWF also provides a similar tool, the WWF Water Risk Filter, which could be used to assess specific water-related environmental risks. ↩︎
Unlocking the Hidden Gems of the SAP Credit Risk Analyzer
June 2024
4 min read
Author:
Aleksei Abakumov
Are you leveraging the SAP Credit Risk Analyzer to its full potential?
While many business and SAP users are familiar with its core functionalities, such as limit management with different limit types and attributable amount determination, several lesser-known SAP standard features can further enhance your credit risk management processes.
In this article, we will explore these hidden gems, such as Group Business Partners and the ways to manage the limit utilizations using manual reservations and collateral.
Group Business Partner Use
One of the powerful yet often overlooked features of the SAP Credit Risk Analyzer is the ability to use Group Business Partners (BP). This functionality allows you to manage credit and settlement risk at a bank group level rather than at an individual transactional BP level. By consolidating credit and settlement exposure for related entities under a single group business partner, you can gain a holistic view of the risks associated with an entire banking group. This is particularly beneficial for organizations dealing with banking corporations globally and allocating a certain amount of credit/settlement exposure to banking groups. It is important to note that credit ratings are often reflected at the group bank level. Therefore, the use of Group BPs can be extended even further with the inclusion of credit ratings, such as S&P, Fitch, etc.
Configuration: Define the business partner relationship by selecting the proper relationship category (e.g., Subsidiary of) and setting the Attribute Direction to "Also count transactions from Partner 1 towards Partner 2," where Partner 2 is the group BP.
Master Data: Group BPs can be defined in the SAP Business Partner master data (t-code BP). Ensure that all related local transactional BPs are linked in a relationship to the appropriate group business partner, and that the validity period of the BP relationship covers the relevant dates. Risk limits are then created using the group BP instead of the transactional BP.
Reporting: Limit utilization (t-code TBLB) is consolidated at the group BP level. Detailed utilization lines show the transactional BP, which can be used to build multiple report variants to break down the limit utilization by transactional BP (per country, region, etc.).
Having explored the benefits of using Group Business Partners, another feature that offers significant flexibility in managing credit risk is the use of manual reservations and collateral contracts.
Use of Manual Reservations
Manual reservations in the SAP Credit Risk Analyzer provide an additional layer of flexibility in managing limit utilization. This feature allows risk managers to manually add a portion of the credit/settlement utilization for specific purposes or transactions, ensuring that critical operations are not hindered by unexpected credit or settlement exposure. It is often used as a workaround for issues such as market data problems, when SAP is not able to calculate the NPV, or for complex financial instruments not yet supported in the Treasury Risk Management (TRM) or Credit Risk Analyzer (CRA) settings.
Configuration: Apart from basic settings in limit management, no extra settings are required in the SAP standard, which makes reservations simple to set up.
Master Data: Use transaction codes TLR1 to TLR3 to create, change, and display reservations, and TLR4 to process them collectively. Define the reservation amount, specify the validity period, and assign it to the relevant business partner, transaction, limit product group, portfolio, etc. Before saving the reservation, check in which limits it will be reflected, to avoid idle or misused reservations in SAP.
While manual reservations provide a significant boost to flexibility in limit management, another critical aspect of credit risk management is the handling of collateral.
Collateral
Collateral agreements are a fundamental aspect of credit risk management, providing security against potential defaults. The SAP Credit Risk Analyzer offers functionality for managing collateral agreements, enabling corporates to track and value collateral effectively. This ensures that the collateral provided is sufficient to cover the exposure, thus reducing the risk of loss.
SAP TRM supports two levels of collateral agreements:
Single-transaction-related collateral
Collateral agreements.
Both levels are used to reduce the risk at the level of attributable amounts, thereby reducing the utilization of limits.
Single-transaction-related collateral: SAP distinguishes three types of collateral value categories:
Percentual collateralization
Collateralization using a collateral amount
Collateralization using securities
Configuration: Configure collateral types and collateral priorities, define collateral valuation rules, and set up the netting group.
Master Data: Use t-code KLSI01_CFM to create collateral provisions at the appropriate level and value. Then, this provision ID can be added to the financial object.
Reporting: Both manual reservations and collateral agreements are visible in the limit utilization report as stand-alone utilization items.
By leveraging these advanced features, businesses can significantly enhance their risk management processes.
Conclusion
The SAP Credit Risk Analyzer is a comprehensive tool that offers much more than meets the eye. By leveraging its hidden functionalities, such as Group Business Partner use, manual reservations, and collateral agreements, businesses can significantly enhance their credit risk management processes. These features not only provide greater flexibility and control but also ensure a more holistic and robust approach to managing credit risk. As organizations continue to navigate the complexities of the financial landscape, unlocking the full potential of the SAP Credit Risk Analyzer can be a game-changer in achieving effective risk management.
If you have questions or are keen to see the functionality in our Zanders SAP Demo system, please feel free to contact Aleksei Abakumov or any Zanders SAP consultant.
Default modelling in an age of agility
June 2024
4 min read
Author:
Kyle Gartner
In brief:
Prevailing uncertainty in geopolitical, economic and regulatory environments demands a more dynamic approach to default modelling.
Traditional methods such as logistic regression fail to address the non-linear characteristics of credit risk.
Score-based models can be cumbersome to calibrate, and embedding expert judgment in them is time-consuming.
Machine learning lacks the interpretability expected in a world where transparency is paramount.
Using the Bayesian Gaussian Process Classifier defines lending parameters in a more holistic way, sharpening a bank’s ability to approve creditworthy borrowers and reject proposals from counterparties that are at a high risk of default.
Historically high levels of economic volatility, persistent geopolitical unrest, a fast-evolving regulatory environment: a perpetual stream of disruption is highlighting the limitations and vulnerabilities in many credit risk approaches. In an era where uncertainty persists, predicting risk of default is becoming more complex, and banks are increasingly seeking a modelling approach that offers more flexibility, interpretability, and efficiency.
While logistic regression remains the market standard, the evolution of the digital treasury is arming risk managers with a more varied toolkit of methodologies, including those powered by machine learning. This article focuses on the Bayesian Gaussian Process Classifier (GPC) and the merits it offers compared to machine learning, score-based models, and logistic regression.
A non-parametric alternative to logistic regression
The days of approaching credit risk in a linear, one-dimensional fashion are numbered. In today's fast-paced and uncertain world, to remain resilient to rising credit risk, banks have no choice but to consider all directions at once. With the GPC approach, the linear combination of explanatory variables is replaced by a function, which is iteratively updated by applying Bayes' rule (see Bayesian Classification With Gaussian Processes for further detail).
For default modelling, a multivariate Gaussian distribution is used, hence forsaking linearity. This allows the GPC to parallel machine learning (ML) methodologies, specifically in terms of flexibility to incorporate a variety of data types and variables and capability to capture complex patterns hidden within financial datasets.
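As an illustration of this kind of model, the sketch below fits a Gaussian Process Classifier to synthetic data with scikit-learn. Note the caveats: scikit-learn approximates the posterior with a Laplace approximation and tunes hyperparameters by maximising the marginal likelihood rather than via explicit priors, so this is a simplified stand-in for the full Bayesian treatment described in the article, and the features and default labels are synthetic placeholders.

```python
# Simplified GPC default-model sketch (scikit-learn, Laplace approximation).
# Features, labels, and the default-generating rule are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Two standardised synthetic features, e.g. leverage and interest coverage
X = rng.normal(size=(200, 2))
# A non-linear default rule, purely for illustration
y = (X[:, 0] ** 2 + 0.5 * X[:, 1] > 1.0).astype(int)

# Anisotropic RBF kernel: one length scale per explanatory variable
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
gpc = GaussianProcessClassifier(kernel=kernel, random_state=0).fit(X, y)

# Probabilistic output: estimated probability of default per counterparty
pd_estimates = gpc.predict_proba(X[:5])[:, 1]

# Fitted length scales hint at each feature's relevance: a short length
# scale means the predicted default probability is sensitive to that feature
length_scales = gpc.kernel_.get_params()["k2__length_scale"]
```

The probabilistic outputs and the interpretable fitted hyperparameters are the two properties the article contrasts with point-estimate score-based models and black-box ML predictions.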
A model enriched by expert wisdom
Another way GPC shows similar characteristics to machine learning is in how it loosens the rigid assumptions that are characteristic of many traditional approaches, including logistic regression and score-based models. To explain, one example is the score-based Corporate Rating Model (CRM) developed by Zanders. This is the go-to model of Zanders to assess the creditworthiness of corporate counterparties. However, calibrating this model and embedding the opinion of Zanders’ corporate rating experts is a time-consuming task. The GPC approach streamlines this process significantly, delivering both greater cost- and time-efficiencies. The incorporation of prior beliefs via Bayesian inference permits the integration of expert knowledge into the model, allowing it to reflect predetermined views on the importance of certain variables. As a result, the efficiency gains achieved through the GPC approach don’t come at the cost of expert wisdom.
Enabling explainable lending decisions
As well as our go-to CRM, Zanders also houses machine learning approaches to default modelling. Although these generate successful outcomes, the rationale behind a machine learning credit decision is not explicitly explained. In today's volatile environment, an unexplainable solution can fall short of stakeholder and regulator expectations, as they increasingly want to understand the reasoning behind lending decisions at a forensic level.
Unlike the often ‘black-box’ nature of ML models, with GPC, the path to a decision or solution is both transparent and explainable. Firstly, the GPC model’s hyperparameters provide insights into the relevance and interplay of explanatory variables with the predicted outcome. In addition, the Bayesian framework sheds light on the uncertainty surrounding each hyperparameter. This offers a posterior distribution that quantifies confidence in these parameter estimates. This aspect adds substantial risk assessment value, contrary to the typical point estimate outputs from score-based models or deterministic ML predictions. In short, an essential advantage of the GPC over other approaches is its ability to generate outcomes that withstand the scrutiny of stakeholders and regulators.
A more holistic approach to probability of default modelling
In summary, if risk managers are to tackle the mounting complexity of evaluating probability of default, they need to approach it non-linearly and in a way that’s explainable at every level of the process. This is throwing the spotlight onto more holistic approaches, such as the Gaussian Process Classifier. Using this methodology allows for the incorporation of expert intuition as an additional layer to empirical evidence. It is transparent and accelerates calibration without forsaking performance. This presents an approach that not only incorporates the full complexity of credit risk but also adheres to the demands for model interpretability within the financial sector.
Are you interested in how you could use GPC to enhance your approach to default modelling? Contact Kyle Gartner for more information.
BASEL IV & Real Estate Exposures
May 2024
4 min read
Author:
Marco Zamboni
The Basel IV reforms published in 2017 will enter into force on January 1, 2025, with a phase-in period of 5 years. These are probably the most important reforms banks will go through since the introduction of Basel II. The reforms introduce changes in many areas. In the area of credit risk, the key elements of the banking package include the revision of the standardized approach (SA) and the introduction of the output floor.
In this article, we will analyse in detail the recent updates made to real estate exposures and their impact on capital requirements and internal processes, with a particular focus on collateral valuation methods.
Real Estate Exposures
Lending for house purchases is an important business for banks. More than one-third of bank loans in the EU are collateralised with residential immovable property. The Basel IV reforms introduce a more risk-sensitive framework, featuring a more granular classification system.
Standardized Approach
The new reforms aim to diminish the advantages banks gain from using the Internal Ratings-Based (IRB) approach. All financial institutions that calculate capital requirements with the IRB approach are now required to concurrently use the standardized approach. Under the standardized approach, financial institutions have the option to choose between two methods for assigning risk weights: the whole-loan approach and the split-loan approach.
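To make the difference between the two methods concrete, the sketch below computes risk-weighted assets for a residential mortgage under both. The LTV bands, the 55% secured threshold, and the 75% counterparty risk weight follow our reading of the Basel framework's residential real estate tables for exposures not materially dependent on property cash flows; exact figures depend on the jurisdictional implementation (e.g., CRR3), so treat the numbers as illustrative.

```python
# Illustrative comparison of the whole-loan and split-loan approaches for a
# general residential real estate exposure. Band thresholds and risk weights
# are indicative of the Basel framework, not a jurisdiction-specific rule.

def whole_loan_rw(loan, property_value):
    """Single risk weight applied to the entire loan, based on its LTV band."""
    ltv = loan / property_value
    bands = [(0.50, 0.20), (0.60, 0.25), (0.80, 0.30),
             (0.90, 0.40), (1.00, 0.50)]
    for upper_ltv, rw in bands:
        if ltv <= upper_ltv:
            return rw
    return 0.70  # LTV above 100%

def split_loan_rwa(loan, property_value, counterparty_rw=0.75):
    """Preferential risk weight on the part of the loan secured up to 55% of
    the property value; the counterparty risk weight (e.g., 75% for retail)
    applies to the unsecured remainder."""
    secured = min(loan, 0.55 * property_value)
    return 0.20 * secured + counterparty_rw * (loan - secured)

loan, value = 800_000, 1_000_000                 # an 80% LTV mortgage
rwa_whole = whole_loan_rw(loan, value) * loan    # 0.30 * 800k = 240k
rwa_split = split_loan_rwa(loan, value)          # 0.20*550k + 0.75*250k = 297.5k
```

For this hypothetical 80% LTV loan, the whole-loan approach gives the lower capital requirement; at lower LTVs the ranking can reverse, which is why the choice of method matters.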
Collateral Valuation
A significant change introduced by the reforms concerns collateral valuation. Previously, the framework allowed banks to determine the value of their real estate collateral based on either the market value (MV) concept or the mortgage lending value (MLV) concept. The revised framework no longer differentiates between these two concepts and introduces new requirements for valuing real estate for lending purposes by establishing a new definition of value. This aims to mitigate the impact of cyclical effects on the valuation of property securing a loan and to maintain more stable capital requirements for mortgages. Implementing an independent valuation that adheres to prudent and conservative criteria can be challenging and may result in significant and disruptive changes in valuation practices.
Conclusion
To reduce the impact of cyclical effects on the valuation of property securing a loan and to keep capital requirements for mortgages more stable, the regulator has capped the valuation of the property: it cannot be higher than the value at origination, unless modifications to that property unequivocally increase its value. Regulators have high expectations for accounting for environmental and climate risks, which can influence property valuations in two ways. On the one hand, these risks can trigger a decrease in property value. On the other hand, they can enhance value, as modifications that improve a property's energy performance or resilience to physical risks - such as protection and adaptation measures for buildings and housing units - may be considered value-increasing factors.
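The valuation cap can be expressed compactly. The sketch below is an illustrative reading of the rule, with a hypothetical function name and signature; it is not regulatory text.

```python
# Minimal sketch of the origination-value cap: the property value used for
# capital purposes cannot exceed the value at origination, unless documented
# modifications unequivocally increase the property's value.
# Function name and parameters are illustrative assumptions.

def prudent_property_value(current_value, origination_value,
                           value_increasing_mods=0.0):
    """Cap the valuation at the origination value plus recognised
    value-increasing modifications (e.g., energy-efficiency renovations)."""
    cap = origination_value + value_increasing_mods
    return min(current_value, cap)

# A market rally alone does not lift the cap
capped = prudent_property_value(1_200_000, 1_000_000)
# A documented renovation can raise the recognised value
with_mods = prudent_property_value(1_200_000, 1_000_000,
                                   value_increasing_mods=150_000)
```

In the first call the recognised value stays at the origination value despite the higher market value; in the second, the documented modification lifts the cap by its recognised amount.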
Where Zanders can help
Based on our experience, we specialize in assisting financial institutions with various aspects of Basel IV reforms, including addressing challenges such as limited data availability, implementing new modelling approaches, and providing guidance on interpreting regulatory requirements.
Zanders supercharges the growth of its US risk advisory practice with the appointment of managing director, Dan Delean
May 2024
4 min read
Author:
Laura Jane Johnson
Dan Delean recently joined Zanders as Managing Director of our newly formed US risk advisory practice. With a treasury career spanning more than 30 years, including 15 years specializing in risk advisory, he comes to us with an impressive track record of building high performing Big 4 practices. As Dan will be spearheading the growth of Zanders’ risk advisory capabilities in the US, we asked him to share his vision for our future in the region.
Q. What excites you the most about leading Zanders' entry into the US market for risk advisory services?
Dan: This is a chance to build a world-class risk advisory practice in the US. Under the leadership of Paul DeCrane, the quality of Zanders reputation in the US has already been firmly established and I’m excited to build on this. I love to build – teams, solutions, physical building – and I am unnaturally passionate about treasury. Treasury is a small universe here, so getting traction is a key challenge – but once we do, it will catch fire.
Q. What do you see as the unique challenges (or opportunities) for Zanders in the US market?
Dan: A key concern for financial institutions in the US right now is the low availability of highly competent treasury professionals. Rising interest rates, combined with economic and political uncertainty, are driving up demand for deeper treasury insights in the US. In particular, the regulatory regime here is increasing its focus on liquidity and funding challenges, with a number of banking organizations on the ‘list’ for closing. But while the need for deep treasury competencies is growing fast, the pool of talent can’t keep up with this demand. This is an expertise gap Zanders is perfectly placed to address.
Q. How do you plan to tailor Zanders' risk advisory services to meet the specific needs and expectations of US clients?
Dan: My plan is to attract the best talent available, building a team with the capability to work with clients to tackle the hardest problems in the market. I want to build a recognized risk advisory team, that’s trusted by clients with difficult challenges. My intention is to focus on building these competencies through a highly focused approach to teaming.
Q. In what ways do you believe Zanders' approach to risk advisory services sets it apart from other firms in the US market?
Dan: Focus on competencies and effective teaming will make Zanders stand out among advisory businesses in the US. Zanders is an expert-driven, competency focused practice, with a large team of seasoned treasury and risk professionals and a willingness to team up with other industry players. This approach is not common in the US. Most firms here deploy leverage models or are highly technical.
Q. What kind of culture or working environment do you aim to foster within the US branch of Zanders?
Dan: I’m committed to recruiting well, training even better, and being a key supporter of my team. I believe culture starts at the top, so all team members that join or work with us need to buy into the expert model and Zanders’ approach to advisory. Within this culture, trust and accountability will always be core tenets – these will be central to my approach to teaming.
With his value-driven, competency-led approach to teaming and practice development, there’s no-one better qualified than Dan to lead the growth of our US risk advisory. To learn more about Zanders and what makes us different, please visit our About Zanders page.
Finding resilience amid chaos: The 5 observations defining the treasury function in 2024
March 2024
4 min read
Author:
Sander van Tol
Economic instability, a pandemic, geopolitical turbulence, rising urgency to get to net zero – a continuous stream of demands and disruption has pushed businesses to their limits in recent years. What this has proven without doubt is that treasury can no longer continue to be an invisible part of the finance function. After all, accurate cash flow forecasting, working capital and liquidity management are all critical C-suite issues. So, with the case for a more strategic treasury accepted, CFOs are now looking to their corporate treasurer more than ever for help with building financial resilience and steering the business towards success.
The future form of corporate treasury is evolving at pace to meet the demands, so to bring you up to speed, we discuss in this article five key observations we believe will have the most significant impact on the treasury function in the coming year(s).
1. A sharper focus on productivity and performance
Except for some headcount reductions, treasury has remained fairly protected from the harsh cost-cutting measures of recent years. However, with many OPEX and CAPEX budgets for corporate functions under pressure, corporate treasurers need to be prepared to justify or quantify the added value of their function and demonstrate how treasury technology is contributing to operational efficiencies and cost savings. This requires a sharper focus on improving productivity and enhancing performance.
To deliver maximum performance in 2024, treasury must focus on optimizing structures, processes, and implementation methods. Further digitalization (guided by the blueprint provided by Treasury 4.0) will naturally have an influential role in process optimization and workflow efficiency. But to maintain treasury budgets and escape an endless spiral of cost-cutting programs will take a more holistic approach to improving productivity. This needs to incorporate developments in three factors of production – personnel, capital, and data (in this context, knowledge).
In addition, a stronger emphasis on the contribution of treasury to financial performance is also required. Creating this direct link between treasury output and company financial performance strengthens the function’s position in budget discussions and reinforces its role both in finance transformation processes and throughout the financial supply chain.
2. Treasury resilience, geopolitical risk and glocalization
Elevated levels of geopolitical risk are triggering heightened caution around operational and financial resilience within multinationals. As a result, many corporations are rethinking their geographical footprint and seeking ways to tackle overdependence on certain geographical markets and core suppliers. This has led to the rise of ‘glocalization’ strategies, which typically involve moving away from the traditional approach of offshoring operations to low-cost destinations to a more regional approach that’s closer to the end market.
The rise of glocalization is forcing treasury to recalibrate its target operating model to adopt a more regionalized approach. This typically involves changing from a ‘hub and spoke’ model to multiple hubs. But the impact on treasury is not only structural. Operating in many emerging and frontier markets creates heightened risks around currency restrictions, lack of local funding and the inability to repatriate cash. Geopolitical tensions can also have spillover effects to the financial markets in these countries. This necessitates the application of more financial resilience thinking from treasury.
3. Cash is king, data is queen
Cash flow forecasting remains a top priority for corporate treasurers. This is driving the rise of technology capable of producing more accurate cash flow predictions, faster and more efficiently. Predictive and prescriptive analytics and AI-based forecasting provide more precise and detailed outcomes compared to human forecasting, while interfaces and APIs can be applied to accelerate information gathering, facilitating faster and automated decision-making. But to leverage the benefits of these advanced applications of technology requires robust data foundations. In other words, while technology plays a role in improving the cash flow forecasting process, it relies on an accurate and timely source of real-time data. As such, one can say that cash may still be king, but data is queen.
In addition, a 2023 Zanders survey underscored the critical importance of high-quality data in financial risk management. In particular, it highlighted the criticality of accurate exposure data and the difficulties multinational corporations face in consolidating and interpreting information. These findings stress the need to underpin financial risk management with sound organizational data design, whether by leveraging existing ERP or TMS technology or by establishing a data lake for processing unstructured data.
4. The third wave of treasury digitalization
We’ve taken the three waves of digitalization coined by Steve Case (former CEO of US internet giant AOL) and applied them to the treasury function. The first wave was the development of stand-alone treasury and finance solutions, followed by a second wave that brought internal interfaces and external connectivity between treasury systems. The third wave is about leveraging all the data coming from this connected treasury ecosystem. With generative AI predicted to play an influential role in this third phase, corporate treasurers need to factor its opportunities and challenges into their organizations' digital transformation journeys, and into decisions on related technologies such as TMS, ERP, and banking tools.
We also predict the impact and success of this third wave of treasury digitalization will depend on having the right regulatory frameworks to support its implementation and operation. The reality is that although we all aspire to work in a digital, connected world, we must be prepared to encounter many analogue frictions – like regulatory requirements for paper-based proof, sometimes in combination with ‘wet’ signatures and stamped documents. This makes the adoption of frameworks such as the MLETR (Model Law on Electronic Transferable Records) a priority.
5. Fragmentation and interoperability of the payment landscape
A side effect of the increasing momentum around digital transformation is fragmentation across the payments ecosystem. This is largely triggered by a rapid acceleration in the use of digital payments in various forms. We’ve now seen successful trials of Central Bank Digital Currencies, Distributed Ledger Technology enabling cross-border payments, a rise in the use of digital wallets that don’t require a bank account, and the rollout of cross-border instant payments. These developments lead us to believe that international banking via SWIFT will be challenged in the future, and treasurers should prepare for a more fragmented international payment ecosystem that supports a multitude of different payment types. To benefit from this development, interoperability will be crucial.
Conclusion: A turning point for treasury
A succession of black swan events in recent years has exposed a deep need for greater financial resilience, and the treasury function plays a vital role in helping the CFO build it. This is accelerating both the scale and pace of transformation across the treasury function, with wide-ranging effects on its role in the C-suite, its position in finance, the priorities and structure of the function, and the investment required to support much-needed digitalization.
For more information on the five observations outlined here, you can read the extended version of this article.
European Commission accepts NII SOT while EBA publishes its roadmap for IRRBB
March 2024
4 min read
Share:
The European Commission (EC) has approved the regulatory technical standards (RTS) that include the specification of the Net Interest Income (NII) Supervisory Outlier Test (SOT). The SOT limit for the decrease in NII is set at 5% of Tier 1 capital. As the three-month scrutiny period has ended, the final RTS is expected to be published soon and will enter into force 20 days after publication. The acceptance of the NII SOT took longer than expected, partly due to heavy pushback from the banking sector. The SOT, and the fact that some banks rely heavily on it for their internal limit framework, is also one of the key topics on the IRRBB heatmap published by the European Banking Authority (EBA), which details its scrutiny plans for implementing interest rate risk in the banking book (IRRBB) standards across the EU. In the short to medium term (2024 to mid-2025), the focus is on:
The EBA has noted that some banks use the SOT as an internal limit without identifying other internal limits. The EBA will explore the development of complementary indicators useful for SREP purposes and supervisory stress testing.
The different practices on behavioral modelling of NMDs reported by the institutions.
The variety of hedging strategies that institutions have implemented.
The contribution to the Dynamic Risk Management project of the International Accounting Standards Board (IASB), which will replace the macro hedge accounting standard.
Among its medium- to long-term objectives (beyond mid-2025), the EBA mentions it will monitor the five-year cap on NMDs and the CSRBB definitions used by banks. The EBA makes no mention of the consultation started by the Basel Committee on Banking Supervision on the newly calibrated interest rate scenario methodology and levels. In the coming weeks, Zanders will publish a series of articles on the Dynamic Risk Management project of the IASB and its implications for banks. Please contact us if you have any questions on this topic or others, such as NMD modelling or the internal limit framework and risk appetite statements.
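The NII SOT threshold described above reduces to a simple comparison: an institution is flagged as an outlier when the decline in NII under the supervisory scenarios exceeds 5% of Tier 1 capital. The sketch below illustrates only that arithmetic; the figures are hypothetical, and actual SOT results depend on the EBA's prescribed shock scenarios and calculation conventions.

```python
def nii_sot_breached(nii_decline: float, tier1_capital: float,
                     threshold: float = 0.05) -> bool:
    """Return True if the decline in NII exceeds the SOT limit
    (5% of Tier 1 capital under the final RTS)."""
    return nii_decline > threshold * tier1_capital

# Hypothetical example: EUR 10bn Tier 1 capital gives a limit of
# EUR 500m; a worst-case NII decline of EUR 600m breaches the SOT.
print(nii_sot_breached(600e6, 10e9))  # → True
```

Banks that use the SOT as their primary internal limit, as the EBA has observed, effectively hard-code this single threshold into their risk appetite, which is why the EBA is exploring complementary indicators.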