The Benefits of Exposure Attribution in Counterparty Credit Risk 

November 2024
3 min read

In an increasingly complex regulatory landscape, effective management of counterparty credit risk is crucial for maintaining financial stability and regulatory compliance.


Accurately attributing changes in counterparty credit exposures is essential for understanding risk profiles and making informed decisions. However, traditional approaches for exposure attribution often pose significant challenges, including labor-intensive manual processes, calculation uncertainties, and incomplete analyses.  

In this article, we discuss the issues with existing exposure attribution techniques and explore Zanders’ automated approach, which reduces workloads and enhances the accuracy and comprehensiveness of the attribution. 

Our approach to attributing changes in counterparty credit exposures 

The attribution of daily exposure changes in counterparty credit risk often presents challenges that strain the resources of credit risk managers and quantitative analysts. To tackle this issue, Zanders has developed a methodology that automates the attribution process, improving the efficiency, reactivity, and coverage of exposure attribution. 

Challenges in Exposure Attribution 

Credit risk managers monitor the evolution of exposures over time to manage counterparty credit risk exposures against the bank’s risk appetite and limits. This frequently requires rapid analysis to attribute changes in exposures, which presents several challenges. 

Zanders’ approach: an automated approach to exposure attribution 

Our methodology resolves these problems with an analytics layer that interfaces with the risk engine to accelerate and automate the daily exposure attribution process. The results can also be accessed and explored via an interactive web portal, providing risk managers and senior management with the tools they need to rapidly analyze and understand their risk. 

Key features and benefits of our approach 

Zanders’ approach provides multiple improvements to the exposure attribution process. This reduces the workloads of key risk teams and increases risk coverage without additional overheads. Below, we describe the benefits of each of the main features of our approach. 

Zanders Recommends 

An automated attribution of exposures empowers bank teams to better understand and handle their counterparty credit risk. To make the best use of automated attribution techniques, Zanders recommends that banks: 

  • Increase risk scope: The increased efficiency of attribution should be used to provide a more comprehensive and granular coverage of the exposures of counterparties, sectors and regions. 
  • Reduce quant utilization: Risk managers should use automated dashboards and analytics to perform their own exposure investigations, reducing the workload of quantitative risk teams. 
  • Augment decision making: Risk managers should utilize dashboards and analytics to ensure they make more timely and informed decisions. 
  • Monitor proactively: Automated reports and monitoring should be reviewed regularly to ensure risks are tackled in a proactive manner. 
  • Increase information transfer: Dashboards should be made available across teams to ensure that information is shared in a transparent, consistent and more timely manner. 

Conclusion

The effective management of counterparty credit risk is a critical task for banks and financial institutions. However, the traditional approach of manual exposure attribution often results in inefficient processes, calculation uncertainties, and incomplete analyses. Zanders' innovative methodology for automating exposure attribution offers a comprehensive solution to these challenges and provides banks with a robust framework to navigate the complexities of exposure attribution. The approach is highly effective at improving the speed, coverage, and accuracy of exposure attribution, supporting risk managers and senior management to make informed and timely decisions. 

For more information about how Zanders can support you with exposure attribution, please contact Dilbagh Kalsi (Partner) or Mark Baber (Senior Manager).

Converging on resilience: Integrating CCR, XVA, and real-time risk management

November 2024
2 min read

In a world where the Fundamental Review of the Trading Book (FRTB) commands much attention, it’s easy for counterparty credit risk (CCR) to slip under the radar.


However, CCR remains an essential element in banking risk management, particularly as it converges with valuation adjustments. These changes reflect growing regulatory expectations, which were further amplified by recent cases such as Archegos. Furthermore, regulatory focus seems to be shifting, particularly in the U.S., away from the Internal Model Method (IMM) and toward standardised approaches. This article provides strategic insights for senior executives navigating the evolving CCR framework and its regulatory landscape.

Evolving trends in CCR and XVA

Counterparty credit risk (CCR) has evolved significantly, with banks now adopting a closely integrated approach with valuation adjustments (XVA) — particularly Credit Valuation Adjustment (CVA), Funding Valuation Adjustment (FVA), and Capital Valuation Adjustment (KVA) — to fully account for risk and costs in trade pricing. This trend towards blending XVA into CCR has been driven by the desire for more accurate pricing and capital decisions that reflect the true risk profile of the underlying instruments/ positions.

In addition, recent years have seen a marked increase in the use of collateral and initial margin as mitigants for CCR. While this approach is essential for managing credit exposures, it simultaneously shifts a portion of the risk profile into contingent market and liquidity risks, which, in turn, introduces requirements for real-time monitoring and enhanced data capabilities to capture both the credit and liquidity dimensions of CCR. Ultimately, this introduces additional risks and modelling challenges with respect to wrong way risk and clearing counterparty risk.

As banks continue to invest in advanced XVA models and supporting technologies, senior executives must ensure that systems are equipped to adapt to these new risk characteristics, as well as to meet growing regulatory scrutiny around collateral management and liquidity resilience.

The Internal Model Method (IMM) vs. SA-CCR

In terms of calculating CCR, approaches based on IMM and SA-CCR provide divergent paths. On one hand, IMM allows banks to tailor models to specific risks, potentially leading to capital efficiencies. SA-CCR, on the other hand, offers a standardised approach that’s straightforward yet conservative. Regulatory trends indicate a shift toward SA-CCR, especially in the U.S., where reliance on IMM is diminishing.

As banks shift towards SA-CCR for regulatory capital while IMM is increasingly used for internal purposes, senior leaders might need to re-evaluate whether separate calibrations for CVA and IMM are warranted or if CVA data can inform IMM processes as well.

Regulatory focus on CCR: Real-time monitoring, stress testing, and resilience

Real-time monitoring and stress testing are taking centre stage following increased regulatory focus on resilience. Evolving guidelines, such as those from the Bank for International Settlements (BIS), emphasise a need for efficiency and convergence between trading and risk management systems. This means that banks must incorporate real-time risk data and dynamic monitoring to proactively manage CCR exposures and respond to changes in a timely manner.

CVA hedging and regulatory treatment under IMM

CVA hedging aims to mitigate counterparty credit spread volatility, which affects portfolio credit risk. However, current regulations limit offsetting CVA hedges against CCR exposures under IMM. This regulatory separation of capital for CVA and CCR leads to some inefficiencies, as institutions can’t fully leverage hedges to reduce overall exposure.

Ongoing BIS discussions suggest potential reforms for recognising CVA hedges within CCR frameworks, offering a chance for more dynamic risk management. Additionally, banks are exploring CCR capital management through LGD reductions using third-party financial guarantees, potentially allowing for more efficient capital use. For executives, tracking these regulatory developments could reveal opportunities for more comprehensive and capital-efficient approaches to CCR.

Leveraging advanced analytics and data integration for CCR

Emerging technologies in data analytics, artificial intelligence (AI), and scenario analysis are revolutionising CCR. Real-time data analytics provide insights into counterparty exposures but typically come at significant computational costs: high-performance computing can help mitigate this, and, if coupled with AI, enable predictive modelling and early warning systems. For senior leaders, integrating data from risk, finance, and treasury can optimise CCR insights and streamline decision-making, making risk management more responsive and aligned with compliance.

By leveraging advanced analytics, banks can respond proactively to potential CCR threats, particularly in scenarios where early intervention is critical. These technologies equip executives with the tools to not only mitigate CCR but also enhance overall risk and capital management strategies.

Strategic considerations for senior executives: Capital efficiency and resilience

Balancing capital efficiency with resilience requires careful alignment of CCR and XVA frameworks with governance and strategy. To meet both regulatory requirements and competitive pressures, executives should foster collaboration across risk, finance, and treasury functions. This alignment will enhance capital allocation, pricing strategies, and overall governance structures.

For banks facing capital constraints, third-party optimisation can be a viable strategy to manage the demands of SA-CCR. Executives should also consider refining data integration and analytics capabilities to support efficient, resilient risk management that is adaptable to regulatory shifts.

Conclusion

As counterparty credit risk re-emerges as a focal point for financial institutions, its integration with XVA, and the shifting emphasis from IMM to SA-CCR, underscore the need for proactive CCR management. For senior risk executives, adapting to this complex landscape requires striking a balance between resilience and efficiency. Embracing real-time monitoring, advanced analytics, and strategic cross-functional collaboration is crucial to building CCR frameworks that withstand regulatory scrutiny and position banks competitively.

In a financial landscape that is increasingly interconnected and volatile, an agile and resilient approach to CCR will serve as a foundation for long-term stability. At Zanders, we have significant experience implementing advanced analytics for CCR. By investing in robust CCR frameworks and staying attuned to evolving regulatory expectations, senior executives can prepare their institutions for the future of CCR and beyond, rather than being left behind.

Confirmed Methodology for Credit Risk in EBA 2025 Stress Test 

November 2024
2 min read

On November 12, 2024, the confirmed methodology for the EBA 2025 stress testing exercise was published on the EBA website. This is the final version of the draft published earlier for initial consultation.


The timelines for the entire exercise have been extended to accommodate the changes in scope:
  • Launch of exercise (macro scenarios): Second half of January 2025
  • First submission of results to the EBA: End of April 2025
  • Second submission to the EBA: Early June 2025
  • Final submission to the EBA: Early July 2025
  • Publication of results: Beginning of August 2025

Below we share the most significant aspects for Credit Risk and related challenges. In the coming weeks we will share separate articles to cover areas related to Market Risk, Net Interest Income & Expenses and Operational Risk. 

The final methodology, along with the requirements introduced by CRR3, poses significant challenges for the execution of the Credit Risk stress test. We previously provided details on this topic and the possible impacts on stress testing results in our article “Implications of CRR3 for the 2025 EU-wide stress test”. Regarding the EBA 2025 stress test, we view the following five points as key areas of concern: 

1. The EBA stress test requires different starting points: actual and restated CRR3 figures. This raises requirements in data management, reporting, and the implementation of related processes. 

2. The EBA stress test requires banks to report both transitional and fully loaded results under CRR3; this requires additional calculations and the implementation of supporting data processes. 

3. The changes in the classification of assets require targeted effort on the modelling side, the stress test approach, and the related data structures. 

4. The Standardized Approach output floor must be implemented as part of the stress test logic. 

5. Additional effort is needed to correctly align Pillar 1 and Pillar 2 models in terms of development, implementation, and validation. 

At Zanders, we specialize in risk advisory, and our consultants have participated in every EU-wide stress testing exercise, as well as several others, going back to the initial stress tests in 2009 following the Great Financial Crisis. We can support you throughout all key stages of the stress testing exercise across all areas to ensure a successful submission of the final templates. 

Based on the expertise in Stress Testing we have gained over the last 15 years, our clients benefit the most from our services in these areas: 

  • Full gap analysis against latest set of requirements 
  • Review, design and implementation of data processes & relevant data quality controls 
  • Alignment of Pillar 2 models to Pillar 1 (including CRR3 requirements) 
  • Design, implementation and execution of stress testing models 
  • Full automation of populating EBA templates including reconciliation and data quality checks. 

Contact us for more information about how we can help make this your most successful run yet. Reach out to Martijn de Groot, Partner at Zanders.

Insights into cracking model risk for prepayment models

October 2024
7 min read

This article examines different methods for quantifying and forecasting model risk in prepayment models, highlighting their respective strengths and weaknesses.


Within the field of financial risk management, professionals strive to develop models to tackle the complexities in the financial domain. However, due to the ever-changing nature of financial variables, models only capture reality to a certain extent. Therefore, model risk - the potential loss a business could suffer due to an inaccurate model or incorrect use of a model - is a pressing concern. This article explores model risk in prepayment models, analyzing various approaches to quantify and forecast this risk. 

There are numerous examples where model risk has not been properly accounted for, resulting in significant losses. For example, Long-Term Capital Management was a hedge fund that went bankrupt in the late 1990s because its model was never stress-tested for extreme market conditions. Similarly, in 2012, JP Morgan suffered a $6 billion loss and $920 million in fines in the ‘London Whale’ incident, due in part to flaws in its new value-at-risk model. 

Despite these prominent failures, and the requirements of CRD IV Article 85 for institutions to develop policies and processes for managing model risk,1 the quantification and forecasting of model risk has not been extensively covered in academic literature. This leaves a significant gap in the general understanding and ability to manage this risk. Adequate model risk management allows for optimized capital allocation, reduced risk-related losses, and a strengthened risk culture.  

This article delves into model risk in prepayment models, examining different methods to quantify and predict this risk. The objective is to compare different approaches, highlighting their strengths and weaknesses.  

Definition of Model Risk

Generally, model risk can be assessed using a bottom-up approach by analyzing individual model components, assumptions, and inputs for errors, or by using a top-down approach by evaluating the overall impact of model inaccuracies on broader financial outcomes. In the context of prepayments, this article adopts a bottom-up approach by using model error as a proxy for model risk, allowing for a quantifiable measure of this risk. Model error is the difference between the modelled prepayment rate and the actual prepayment rate. Model error occurs at an individual level when a prepayment model predicts a prepayment that does not happen, and vice versa. However, banks are more interested in model error at the portfolio level. A statistic often used by banks is the Single Monthly Mortality (SMM). The SMM is the monthly percentage of prepayments and can be calculated by dividing the amount of prepayments for a given month by the total amount of mortgages outstanding. 

Using the SMM, we can define and calculate the model error as the difference between the predicted SMM and the actual SMM: 
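In symbols (a reconstruction of the formula implied by the surrounding text, with $t$ indexing months):

$$
\mathrm{SMM}_t = \frac{\text{Prepayments}_t}{\text{Outstanding balance}_t}, \qquad
e_t = \widehat{\mathrm{SMM}}_t - \mathrm{SMM}_t,
$$

where $\widehat{\mathrm{SMM}}_t$ is the SMM predicted by the prepayment model and $e_t$ is the model error at portfolio level.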

The European Banking Authority (EBA) requires financial institutions, when calculating valuation model risk, to set aside enough funds to be 90% confident that they can exit a position at the time of the assessment. Consequently, banks are concerned with the top 5% and bottom 5% of the model risk distribution (EBA, 2015, 2016).2 Thus, banks are interested in the distribution of the model error as defined above, aiming to ensure they allocate capital optimally for model risk in prepayment models. 

Approaches to Forecasting Model Risk 

By using model error as a proxy for model risk, we can leverage historical model errors to forecast future errors through time-series modelling. In this article, we explore three methods: the simple approach, the auto-regressive approach, and the machine learning challenger model approach.

Simple Approach

The first method proposed to forecast the expected value and the variance of the model errors is the simple approach. It is the most straightforward way to quantify and predict model risk: the model errors are characterized by their historical mean and standard deviation. The model itself introduces minimal uncertainty, as only two parameters have to be estimated, namely the intercept and the standard deviation.

The disadvantage of the simple approach is that it is time-invariant. Consequently, even in extreme conditions, the expected value and the variance of model errors remain constant over time.
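To make the simple approach concrete, the sketch below estimates the two parameters from a history of model errors and derives the 5th/95th percentiles relevant under the 90% confidence requirement. It is a minimal sketch with simulated inputs; the data and the normality assumption are illustrative, not part of the article's study.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical history of monthly model errors e_t = predicted SMM - actual SMM
rng = np.random.default_rng(42)
model_errors = rng.normal(loc=0.0005, scale=0.002, size=60)  # 5 years of months

# Simple approach: the forecast is time-invariant, defined by two parameters
mu = model_errors.mean()          # intercept (expected model error)
sigma = model_errors.std(ddof=1)  # standard deviation of model errors

# 5th/95th percentiles of the model error distribution (assuming normality),
# i.e. the tails relevant for the 90% confidence requirement
lower, upper = norm.ppf([0.05, 0.95], loc=mu, scale=sigma)
print(f"Expected model error: {mu:.4%}, std: {sigma:.4%}")
print(f"90% interval for next month's model error: [{lower:.4%}, {upper:.4%}]")
```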

Auto-Regressive Approach

The second approach to forecast the model errors of a prepayment model is the auto-regressive approach. Specifically, this approach utilizes an AR(1) model, which forecasts the model errors by leveraging their lagged values. The advantage of the auto-regressive approach is that it takes into account the dynamics of the historical model errors when forecasting them, making it more advanced than the simple approach.

The disadvantage of the auto-regressive approach is that it always lags and does not take into account the current state of the economy. For example, an increase in interest rates of 200 basis points is expected to lead to a higher model error, but the auto-regressive approach will only reflect this increase in its forecast one month later.
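A minimal sketch of the auto-regressive approach on the same kind of simulated error history, fitting an AR(1) model by ordinary least squares; the data and the estimation shortcut are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
model_errors = rng.normal(loc=0.0005, scale=0.002, size=60)  # hypothetical history

# AR(1): e_t = c + phi * e_{t-1} + eps_t, estimated via least squares
y, x = model_errors[1:], model_errors[:-1]
X = np.column_stack([np.ones_like(x), x])
(c, phi), *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - (c + phi * x)
sigma = residuals.std(ddof=2)  # two estimated parameters

# One-step-ahead forecast depends only on the last observed error (hence the lag)
forecast = c + phi * model_errors[-1]
print(f"phi = {phi:.3f}, expected next error = {forecast:.4%} (+/- {1.645 * sigma:.4%})")
```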

Machine Learning Challenger Model Approach                           

The third approach to forecast the model errors involves incorporating a Machine Learning (ML) challenger model. In this article, we use an Artificial Neural Network (ANN). This ML challenger model can be more sophisticated than the production model, as its primary focus is on predictive accuracy rather than interpretability. This approach uses risk measures to compare the production model with a more advanced challenger model. A new variable is defined as the difference between the predictions of the production model and the challenger model.

As in the approaches above, the expected value of the model errors is forecast by estimating an intercept, the coefficient of this new variable, and the standard deviation. The difference between the production model and the ML challenger model thus serves as a proxy for future model risk.

The advantage of using the ML challenger model approach is that it is forward looking. This forward-looking method allows for reasonable estimates under both normal and extreme conditions, making it a reliable proxy for future model risk. In addition, when there are complex non-linear relationships between an independent variable and the prepayment rate, an ML challenger can be more accurate. Its complexity allows it to predict significant impacts better than a simpler, more interpretable production model. Consequently, employing an ML challenger model approach could effectively estimate model risk during substantial market changes.

A disadvantage of the machine learning approach is its complexity and lack of interpretability. Additionally, developing and maintaining these models often requires significant time, computational resources, and specialized expertise.
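The sketch below illustrates the challenger idea: train a hypothetical neural-network challenger, define the new variable as the gap between the production and challenger predictions, and regress observed model errors on that gap. All data, features, and network settings are invented for illustration and are not the article's actual models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 240
X = rng.normal(size=(n, 3))                  # e.g. rate incentive, loan age, seasonality
actual_smm = 0.01 + 0.004 * np.tanh(X[:, 0]) + 0.001 * X[:, 1] + rng.normal(0, 0.001, n)
prod_smm = 0.01 + 0.003 * X[:, 0]            # simple (interpretable) production model

# Challenger: more flexible model focused on predictive accuracy
challenger = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
challenger.fit(X, actual_smm)
chal_smm = challenger.predict(X)

model_error = prod_smm - actual_smm          # e_t
gap = prod_smm - chal_smm                    # new explanatory variable D_t

# Regress e_t on D_t: e_t = alpha + beta * D_t + eps_t
A = np.column_stack([np.ones(n), gap])
(alpha, beta), *_ = np.linalg.lstsq(A, model_error, rcond=None)
sigma = (model_error - (alpha + beta * gap)).std(ddof=2)
print(f"alpha={alpha:.5f}, beta={beta:.2f}, residual std={sigma:.5f}")
```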

Conclusion 

The various methods to estimate model risk are compared in a simulation study. The ML challenger model approach stands out as the most effective method for predicting model errors, offering increased accuracy in both normal and extreme conditions. Both the simple and the challenger model approaches effectively predict the variability of model errors, but the challenger model approach achieves a smaller standard deviation. In scenarios involving extreme interest rate changes, only the challenger model approach delivers reasonable estimates, highlighting its robustness. Therefore, the challenger model approach is the preferred choice for predicting model error under both normal and extreme conditions.

Ultimately, the optimal approach should align with the bank’s risk appetite, operational capabilities, and overall risk management framework. Zanders, with its extensive expertise in financial risk management, including multiple high-profile projects related to prepayments at G-SIBs as well as mid-size banks, can provide comprehensive support in navigating these challenges. See our expertise here.


Ready to take your IRRBB strategy to the next level?

Zanders is an expert on IRRBB-related topics. We enable banks to achieve both regulatory compliance and strategic risk goals by offering support from strategy to implementation. This includes risk identification, formulating a risk strategy, setting up an IRRBB governance and framework, and policy or risk appetite statements. Moreover, we have an extensive track record in IRRBB and behavioral models such as prepayment models, hedging strategies, and calculating risk metrics, both from model development and model validation perspectives.

Contact our experts today to discover how Zanders can help you transform risk management into a competitive advantage. Reach out to: Jaap Karelse, Erik Vijlbrief, Petra van Meel, or Martijn Wycisk to start your journey toward financial resilience.

  1. https://www.eba.europa.eu/regulation-and-policy/single-rulebook/interactive-single-rulebook/11665
    CRD IV Article 85: Competent authorities shall ensure that institutions implement policies and processes to evaluate and manage the exposures to operational risk, including model risk and risks resulting from outsourcing, and to cover low-frequency high-severity events. Institutions shall articulate what constitutes operational risk for the purposes of those policies and procedures. ↩︎
  2. https://extranet.eba.europa.eu/sites/default/documents/files/documents/10180/642449/1d93ef17-d7c5-47a6-bdbc-cfdb2cf1d072/EBA-RTS-2014-06%20RTS%20on%20Prudent%20Valuation.pdf?retry=1
    Where possible, institutions shall calculate the model risk AVA by determining a range of plausible valuations produced from alternative appropriate modelling and calibration approaches. In this case, institutions shall estimate a point within the resulting range of valuations where they are 90% confident they could exit the valuation exposure at that price or better. In this article, we generalize valuation model risk to model risk. ↩︎

Biodiversity risks scoring: a quantitative approach

October 2024
9 min read

Explore how Zanders’ scoring methodology quantifies biodiversity risks, enabling financial institutions to safeguard portfolios from environmental and transition impacts.


Addressing biodiversity (loss) is not only relevant from an impact perspective; it is also quickly becoming a necessity for financial institutions to safeguard their portfolios against financial risks stemming from habitat destruction, deforestation, invasive species and/or diseases. 

In a previous article, published in November 2023, Zanders introduced the concept of biodiversity risks, explained how it can pose a risk for financial institutions, and discussed the expectations from regulators.1 In addition, we touched upon our initial ideas to introduce biodiversity risks in the risk management framework. One of the suggestions was for financial institutions to start assessing the materiality of biodiversity risk, for example by classifying exposures based on sector or location. In this article, we describe Zanders’ approach for classifying biodiversity risks in more detail. More specifically, we explore the concepts behind the assessment of biodiversity risks, and we present key insights into methodologies for classifying the impact of biodiversity risks, including a use case. 

Understanding biodiversity risks 

Biodiversity risks can be related to physical risk and/or transition risk events. Biodiversity physical risks result from environmental decay, either event-driven or resulting from longer-term patterns. Biodiversity transition risks result from developments aimed at preventing or restoring damage to nature. These risks are driven by impacts and dependencies that an undertaking has on natural resources and ecosystem services. The definitions of impacts and dependencies and their relation to physical and transition risks are explained below:

  • Companies impact natural assets through their business operations and output. For example, the production process of an oil company in a biodiversity sensitive area could lead to biodiversity loss. Impacts are mainly related to transition risk as sectors and economic activities that have a strong negative impact on environmental factors are likely to be the first affected by a change in policies, legal charges, or market changes related to preventing or restoring damage to nature. 
  • On the other hand, companies are dependent on certain ecosystem services. For example, agricultural companies are dependent on ecosystem services such as water and pollination. Dependencies are mainly related to physical risk as companies with a high dependency will take the biggest hit from a disruption or decay of the ecosystem service caused by e.g. an oil spill or pests. 

For banks, the impacts and dependencies of their own operations and of their counterparties can impact traditional financial (credit, liquidity, and market) and non-financial (operational and business) risks. In our biodiversity classification methodology, we assess both impacts and dependencies as indicators for physical and transition risk. This is further described in the next section.

Zanders’ biodiversity classification methodology

An important starting point for climate-related and environmental (C&E) risk management is the risk identification and materiality assessment. For C&E risks, and biodiversity in particular, obtaining data is a challenge. A quantitative assessment of materiality is therefore difficult to achieve. To address this, Zanders has developed a data driven classification methodology. By classifying the biodiversity impact and dependencies of exposures based on the sector and location of the counterparty, scores that quantify the portfolio’s physical and transition risks related to biodiversity are calculated. These scores are based on the databases of Exploring Natural Capital Opportunities, Risks and Exposure (ENCORE) and the World Wide Fund for Nature (WWF). 

Sector classification 

The sector classification methodology is developed based on the ENCORE database. ENCORE is a public database that is recognized by global initiatives such as Taskforce on Nature-related Financial Disclosures (TNFD) and Partnership for Biodiversity Accounting Financials (PBAF). ENCORE is a key tool for the “Evaluate” phase of the TNFD LEAP approach (Locate, Evaluate, Assess and Prepare).  

ENCORE was developed specifically for financial institutions with the goal of assisting them in performing a high-level but data-driven scan of their exposures’ impacts and dependencies. The scan covers multiple dimensions of the ecosystem, including biodiversity-related environmental drivers. ENCORE evaluates the potential reliance on ecosystem services2 and the effect of impact drivers3 on natural capital assets4. It does so by assigning scores to different levels of a sector classification (sector, sub-industry and production process). These scores are assigned for 11 impact drivers and 21 ecosystem services. ENCORE provides a score ranging from Very Low to Very High for a broad range of production processes, sub-sectors and sectors. 

ENCORE does not offer a methodology for aggregating the scores of impact drivers and ecosystem services into sector scores, and therefore does not provide an overall dependency and impact score per sector, sub-industry, or production process. Zanders has therefore created a methodology to calculate a final aggregated impact and dependency score. The result of this aggregation is a single impact score and a single dependency score for each ENCORE sector, sub-industry or production process. In addition, overall impact and dependency scores are computed for the portfolio, based on its sector distribution. In both cases, scores range from 0 (no impact/dependency) to 5 (very high impact or dependency).
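As a simplified sketch of this two-step aggregation (the sub-industries, ratings, and exposure weights below are invented, and the mean is used as the aggregation rule purely as an assumption, since the article does not disclose the exact rule):

```python
import numpy as np

# Map qualitative ENCORE-style ratings (Very Low..Very High) onto a 0-5 scale
score_map = {"Very Low": 1, "Low": 2, "Medium": 3, "High": 4, "Very High": 5}

# Hypothetical ratings per sub-industry (NOT actual ENCORE data)
sub_industry_ratings = {
    "Oil & Gas drilling": {"impact": ["Very High", "High"], "dependency": ["Medium"]},
    "Real estate":        {"impact": ["High"],              "dependency": ["Low"]},
}

# Step 1: aggregate driver/service ratings into one impact and one dependency
# score per sub-industry (the mean is an illustrative aggregation choice)
sector_scores = {
    name: {kind: float(np.mean([score_map[r] for r in ratings]))
           for kind, ratings in kinds.items()}
    for name, kinds in sub_industry_ratings.items()
}

# Step 2: portfolio-level scores as exposure-weighted averages of the sector scores
exposure_weights = {"Oil & Gas drilling": 0.3, "Real estate": 0.7}  # hypothetical shares
portfolio_scores = {
    kind: sum(exposure_weights[s] * sector_scores[s][kind] for s in exposure_weights)
    for kind in ("impact", "dependency")
}
print(portfolio_scores)
```

The location scores described below can be aggregated to portfolio level in the same exposure-weighted way, using the geographical rather than the sector distribution.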

Location classification

The location scoring methodology is developed based on the WWF Biodiversity Risk Filter (hereafter called WWF BRF).5 The WWF BRF is a public tool that supports a location-specific analysis of physical- and transition-related biodiversity risks. 

The WWF BRF consists of a set of 33 biodiversity indicators: 20 related to physical risks and 13 related to reputational risks, which are provided at country level but also at a more granular regional level. These indicators are aggregated by the tool itself, which ultimately provides a single scape physical risk score and a single scape reputational risk score per location.

The WWF BRF does not, however, offer a methodology for aggregating country scores into overall portfolio scores for transition risk (based on the scape reputational risk scores) and physical risk (based on the scape physical risk scores). Zanders has therefore created a methodology to calculate final aggregated transition and physical risk scores for the portfolio, based on its geographical distribution. The result of this aggregation is a single transition risk score and a single physical risk score for the portfolio, each ranging from 0 (no risk) to 5 (very high risk). 

Use case: RI&MA for biodiversity risks in a bank portfolio 

In this section, we present a use case of classifying biodiversity risks for the portfolio of a fictional financial institution, using the sector and location scoring methodologies developed by Zanders. 

The exposures of this financial institution are concentrated in four sectors: Real estate, Oil & Gas, Soft commodities and Luxury goods. Moreover, the operations of these sectors are located across four different countries: the Netherlands, Switzerland, Morocco and China. The following matrix shows the percentage of exposures of the financial institution for each combination of sector and country: 

ENCORE provides scores for 21 ecosystem services and 11 impact drivers. Those related to biodiversity risks are transformed to a range from 0 to 5. After that, the biodiversity ecosystem services and biodiversity impact drivers are aggregated into overall biodiversity dependency and impact scores, respectively. The following table shows the mapping between the sectors in the portfolio and the corresponding sub-industry in the ENCORE database, including the aggregated biodiversity impact and dependency scores computed for those sub-industries. The mapping is done at sub-industry level, since this is the level of granularity of the ENCORE sector classification that best fits the sectors defined in the fictional portfolio. In addition, the overall impact and dependency scores are computed by taking the average weighted by the sector distribution of the portfolio. This leads to scores of 3.8 and 2.4 for the impact and dependency scores, respectively. 

The WWF BRF provides biodiversity indicators at country level. It already provides an aggregated score for physical risk (namely, the scape physical risk score) and for transition risk (namely, the scape reputational risk score), so no further aggregation is needed. Therefore, the corresponding scores for the four countries within the bank portfolio are selected. As the last step, the location scores are transformed to a range similar to the sector scores, i.e., from 0 (no physical/transition risk) to 5 (very high physical/transition risk). The results are shown in the following table. In addition, the overall physical and transition risk scores are computed by taking the average weighted by the geographical distribution of the portfolio. This leads to scores of 3.9 and 3.3 for the physical and transition risk scores, respectively. 

Results of the sector and location scores can be displayed for a better understanding and to enable comparison between sectors and countries. Bubble charts, such as the ones shown below, present the sector and location scores together with the size of the exposures in the portfolio (represented by the size of each bubble). 

Combined with the size of the exposures, the results suggest that biodiversity-related physical and transition risks could result in financial risks for Soft commodities and Oil & Gas. This is due to high impacts and dependencies and their relevant size in the portfolio. Moreover, despite a low dependency score, biodiversity risks could also impact the Real estate sector due to a combination of its high impact score and the high sector concentration (45% of the portfolio). From a location perspective, exposures located in China could face high biodiversity transition risks, while exposures located in Morocco are the most vulnerable to biodiversity physical risks. In addition, the relatively high physical and transition risk scores for the Netherlands, combined with the large size of these exposures in the portfolio, could also lead to additional financial risk. 

These results, combined with other information such as loan maturities, identified transmission channels, or expert inputs, can be used to inform the materiality of biodiversity risks. 

Conclusion 

Assessing the materiality of biodiversity risks is crucial for financial institutions in order to understand the risks and opportunities in their loan portfolios. In this article, Zanders has presented its approach for an initial quantification of biodiversity risks. Curious to learn how Zanders can support your financial institution with the identification and quantification of biodiversity risks and their integration into the risk frameworks? Please reach out to Marije Wiersma, Iryna Fedenko or Miguel Manzanares.

  1. https://zandersgroup.com/en/insights/blog/biodiversity-risks-and-opportunities-for-financial-institutions-explained ↩︎
  2. In accordance with ENCORE, ecosystem services are the links between nature and business. Each of these services represents a benefit that nature provides to enable or facilitate business production processes. ↩︎
  3. In accordance with ENCORE and the Natural Capital Protocol (2016), an impact driver is a measurable quantity of a natural resource that is used as an input to production or a measurable non-product output of business activity. ↩︎
  4. In accordance with ENCORE, natural capital assets are specific elements within nature that provide the goods and services that the economy depends on. ↩︎
  5. The WWF also provides a similar tool, the WWF Water Risk Filter, which could be used to assess specific water-related environmental risks. ↩︎

Navigating Treasury Transformation: Key Insights from TAC’s SAP Conference in Brussels

September 2024
7 min read

At TAC’s recent SAP for Treasury and Working Capital Management conference in Brussels, SAP, alongside some of their clients, presented several topics that rank highly on the treasurer’s agenda.


SAP highlighted their public vs. private cloud offerings, RISE and GROW products, new AI chatbot applications, and their SAP Analytics Cloud solution. In addition to SAP's insights, several clients showcased their treasury transformation journeys with a focus on in-house banking, FX hedge management, and payment factory implementation. This article provides a brief overview of SAP's RISE and GROW offerings, with a larger focus on SAP’s public vs. private cloud offerings and their new AI virtual assistant, Joule.

SAP RISE and GROW

The SAP RISE solution seeks to help companies transition to cloud-based services. It is designed as a comprehensive offering that combines software, services, and support into a single package, including the core components of SAP S/4HANA Cloud, Business Process Intelligence (BPI), SAP Business Network, and SAP Business Technology Platform (BTP). On the other hand, SAP GROW is a program that facilitates the implementation and organization of SAP solutions. This offering is more tailored towards optimizing, rather than transitioning, company processes. SAP GROW still includes the S/4HANA public cloud solution, enabling growing companies to manage their operations without requiring extensive on-site infrastructure.

Ultimately, companies experiencing significant growth and seeking scalable, efficient solutions would benefit most from the SAP GROW offering, while SAP RISE is more suited for companies looking to accelerate their digital transformation with a focus on agility, rapid innovation, and business resilience.

Public Cloud vs. Private Cloud

SAP systems can be hosted both on the public and private cloud. The public cloud delivers greater scalability, whereas the private cloud provides enhanced security and complete control of data and governance. Often the choice between SAP public or private cloud is driven by business requirements, budget, compliance needs, and desired levels of customization. These variables, along with other important factors, are compared in Figure 1.

Figure 1: SAP Public Cloud vs. SAP Private Cloud

In summary, organizations considering SAP should carefully weigh these differences when choosing between public and private cloud. SAP is actively developing the functionality within its public cloud offering, making it an increasingly suitable option for both small-to-medium enterprises seeking rapid deployment and cost efficiency, as well as larger enterprises that require powerful solutions with limited customization needs. On the other hand, SAP private cloud remains a preferred choice for larger enterprises with complex, unique process requirements, extensive customization needs, and strict data compliance regulations.

Joule: SAP's Virtual Assistant

SAP's Business AI solutions initiative is introducing its newest member, the Joule Copilot. Similar to OpenAI's ChatGPT, the Joule virtual assistant is available at the user's command. Users simply need to ask the copilot questions or explain a particular issue, and Joule will provide intelligent answers drawn from the vast amount of business data stored across the SAP systems and third-party sources.

Joule Key Features
Contextual Recommendations
Provides personalized, context-specific suggestions based on the user's role and activities. Joule can help users by suggesting possible next steps, identifying potential issues, and offering insights that users can act upon.

Enhanced User Experience
Offers an intuitive, interactive interface designed to simplify user interaction with SAP applications. Joule aims to reduce complexity and streamline workflows, allowing users to simplify their daily processes.

Real-Time Insights
Artificial Intelligence and Machine Learning capabilities enable Joule to analyze vast amounts of data in real-time, providing predictive insights and analytics to support the user's decision-making process.

Integration with SAP Ecosystem
Joule is fully integrated with SAP’s existing products, such as S/4HANA and SAP Business Technology Platform (BTP), ensuring seamless data flow and interconnectivity across various SAP solutions.

Customization and Extensibility
Joule can be tailored to the specific needs of different industries and business processes. It also accounts for the specific role of the user when providing recommendations and can be customized to align with a company’s organizational requirements and workflows within their system.

Applications of Joule in Finance
SAP Joule can significantly enhance financial operations by leveraging AI-driven insights, automation, and predictive analytics. Joule has many applications within finance, the most important being:

Automated Financial Reporting
SAP Joule can automatically generate and distribute financial reports, offering insights based on real-time data. Joule uses its AI and ML capabilities to identify trends, flag anomalies, and provide explanations for variances, ultimately helping finance teams to make informed decisions quickly. Not only does Joule provide insight, but it also increases operational efficiency, allowing finance professionals to focus on strategic activities rather than report gathering and distribution.

Predictive Analytics and Forecasting
SAP Joule's embedded ML capabilities enable the prediction of future financial outcomes based on historical data and current trends. Whether you are forecasting revenues, cash flows, or expenses, Joule provides the ideal tools for an accurate forecast. Alongside the forecasting capabilities, Joule can also assess financial risks by analyzing market conditions, historical data, and other relevant factors, which allows risk management to take a proactive approach to risk mitigation.

Accounts Receivable and Payable Management
Joule can predict payment behaviors, which can help organizations optimize their cash flows by forecasting when payments are likely to be received or when outgoing payments will occur. In addition to this, Joule has automatic invoice processing capabilities, which can reduce errors and speed up the accounts payable process.

Investment Analysis
For organizations managing investments, Joule can analyze portfolio performance and suggest adjustments to maximize return while still complying with risk limits. Embedded scenario analysis capabilities help finance teams assess the potential impact of various investment decisions on their portfolio.

Real-Time Financial Monitoring
Finance teams can use Joule to create real-time dashboards that provide an overview of key financial metrics, enabling quick responses to emerging issues or opportunities. Joule can set up alerts for critical financial thresholds, such as reserves dropping below a certain level, to ensure timely intervention.

All in all, SAP Joule represents a significant step forward in SAP’s strategy to embed AI and ML into its core products, empowering business users with smarter, data-driven capabilities.

Conclusion

This conference summary briefly highlights SAP’s RISE and GROW offerings, with RISE driving cloud-based digital transformation and GROW striving to optimize operations. It contrasts the scalability and cost-efficiency of the public cloud with the control and customization offered by the private cloud. Lastly, it introduces SAP’s new virtual assistant seeking to enhance financial operations through AI-driven insights, automation, and scalability to improve productivity while still maintaining user control over decisions and data security. If you have any further questions regarding the SAP conference or any information in this article, please contact j.vinson@zandersgroup.com.

In-House Banking vs. In-House Cash: Should You Make the Change?

September 2024
7 min read

An introduction to IHB for companies planning a new implementation, along with key considerations for those transitioning from IHC.


SAP In-House Cash (IHC) has enabled corporates to centralize cash, streamline payment processes, and record intercompany positions via the deployment of an internal bank. S/4HANA In-House Banking (IHB), released in 2022, in combination with Advanced Payment Management (APM), is SAP’s revamped internal banking solution.

This article will introduce IHB for corporates planning a new implementation and highlight some key considerations for those looking to transition from IHC.

IHB is embedded in APM and included in the same license. It leverages APM’s payment engine functionality and benefits from direct integration for end-to-end processing, monitoring/reporting, and exception handling.

Figure 1: Solution architecture / Integration of In-House Banking (SAP, 2023)

IHC and IHB share several core functionalities, including a focus on managing intercompany financial transactions and balances effectively and ensuring compliance with regulatory requirements. Both solutions also integrate seamlessly with the broader SAP ecosystem and offer robust reporting capabilities.

However, there are significant differences between the two. While IHC relies on the traditional SAP GUI interface, IHB runs on the more modern and intuitive SAP Fiori interface, offering a better user experience. IHB overcomes limitations of IHC, namely in areas such as cut-off times and payment approval workflows, and provides native support for withholding tax. Moreover, it also offers tools for managing master data, including the mass download and upload of IHB accounts, features that are otherwise missing in IHC.

Two key distinctions exist in payment routing flexibility and the closing process. IHB, when deployed with APM, manages payment routing entirely as master data, enabling organizations to more easily adapt to evolving business requirements, whereas IT involvement for configuration changes is required for those running IHC exclusively. Lastly, IHB supports multiple updates throughout the day, such as cash concentration, statement reporting, and transfers to FI, and is hence more in tune with the move towards real-time information, whereas IHC is restricted to a rigid end-of-day closing process.

Intrigued? Continue reading to delve deeper into how IHB compares with IHC.

Master data

2.1 Business Partners

The Business Partner (BP) continues to be a pre-requisite for the opening of IHB accounts, but new roles have been introduced.

Tax codes for withholding tax applicable to credit or debit interest can now be maintained at the BP level and feed into the standard account balancing process for IHB. The withholding tax setup under FI is leveraged, which avoids the need for the custom development currently required for IHC.

2.2 In-House Bank Accounts

Relative to IHC, the process of maintaining accounts in IHB is simplified and more intuitive.

Statements can be sent to various recipients and in different formats (e.g., CAMT.53, PDF) based on settings maintained at account level. Intraday statement reporting functionality is included, as well as PDF notifications for balances on accounts and interest calculated as part of the account balancing process.

Figure 2: Maintaining IHB Account Correspondence

IHB offers native functionality for mass account download/upload, a feature that is missing in IHC. The mass download option allows data to be exported to Excel, adjusted offline, and subsequently loaded back into Bank Account Management (BAM).

In the upcoming release, the bank account subledger concept will also be supported for IHB accounts managed in BAM.

2.3 Conditions

The underlying setup has been simplified and can now be performed entirely as master data, unlike in IHC, where some customizing is required as part of the implementation.

IHC technically offers slightly more interest conditions (e.g., commitment interest), but IHB covers the fundamentals for account balancing. More importantly, average/linear compound interest calculation methodology is available with IHB to support risk-free rates.

2.4 Workflows

Unlike IHC, which only offers the option of activating “dual control” for some processes (e.g., closure of IHC accounts), IHB introduces flexible workflows for all core master data attributes (e.g., accounts, conditions, limits, etc.).

IHB's flexible workflows allow for multiple approval steps and dynamic workflow recipient determination based on predefined conditions.

Transactional Data

3.1 Scenarios & payment integration/routing

The following set of scenarios are in scope for IHB:

  • Intercompany payments
  • Payments On-Behalf-Of (POBO)
  • Central Incoming
  • Cash Pooling

Payment integration is achieved via APM and supports several options, namely IDocs, connectors for Multi-Bank Connectivity (MBC), file uploads, etc. Moreover, the connector for MBC can be used to support more elaborate integration scenarios, such as connecting decentralized AP systems or a public cloud instance to APM.

More noteworthy is that the flexible payment routing in APM is used to handle the routing of payments and is managed entirely according to business needs as master data. This is particularly relevant for corporates running IHC as a “payment factory” who are considering the adoption of APM & IHB, as routing is entirely configuration-based when using IHC exclusively. There are additional advantages of using APM as a payment factory, especially in terms of payment cut-offs and approval workflows. However, these benefits can be obtained by using APM in conjunction with IHB or IHC.

3.2 Foreign Currency Payments

In IHC, a distinct set of bid/offer rates can be assigned per transaction type and used to convert between the payment currency and the IHC account currency at the provisional and final posting stages. In contrast, in IHB, a single exchange rate type is maintained at the IHB Bank Area level and drives the FX conversion.

Applying different rates depending on the payment scenario will therefore require a different design in IHB, and special consideration is needed for corporates running complex multilateral netting processes in IHC that are planning to transition to IHB.

Intraday/End of Day Processing

4.1 End of Day Sequence

The end-of-day closing concept applies to IHB as well. Unlike IHC, IHB allows many of the related steps, such as intraday statement reporting, cash concentration, and transfers to FI, to be triggered throughout the day.

A dedicated app further streamlines processing by enabling the scheduling and management of jobs via pre-delivered templates.

4.2 Bank Statements

APM converters are leveraged to produce messages in the desired format (MT940, CAMT.53, or PDF). Unlike IHC, FINSTA IDocs are no longer supported, which is an important factor to consider when migrating participants that are still on legacy ERP systems.

The settings maintained under the bank statement section of the IHB account drive the format and distribution method (e.g., delivery via MBC or email) to the participants.

4.3 General Ledger Transfer

The new Accounting Business Transaction Interface (ABTI) supports general ledger transfers from IHB to FI several times a day, unlike IHC, which is triggered only once at the end of the day.

Overall, the accounting schemas are more straightforward, which is reflected in the underlying setup required to support IHB. However, relative to IHC, there is technically less flexibility in determining the relevant G/Ls for end-of-day transfers to FI. Due diligence is recommended for corporates moving from IHC to ensure that existing processes are adapted to the new ways of working.

Conclusion

There is no official end-of-life support date for IHC, so corporates can still implement it with or without APM, though this approach presents challenges. Key considerations include the lack of ongoing development for IHC, SAP’s focus on ensuring IHB matches IHC’s capabilities, and the fact that IHB is already included in the APM license, while IHC requires a separate license.

Initial issues with IHB are expected but will likely be resolved as more companies adopt the functionality and additional features are rolled out. For corporates with moderately complex requirements or those willing to align their processes with standard functionality, IHB is ultimately easier to implement and manage operationally.

To ensure a smooth transition to or adoption of IHB, Zanders offers expert implementation services. If your organization is contemplating IHB or transitioning from IHC, contact Zanders for guidance and support with any questions you may have.

References

SAP, 2023. Solution architecture - Integration of In-House Banking [Online] SAP. Available from: https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/e200555127f24878bed8d1481c9d5a0b/3dbe688b4c8840da8567f811be2bc1b4.html?locale=en-US&version=2023.001

Enhancing Centralized Payment Processing in SAP: Innovations in Advanced Payment Management

September 2024
4 min read

Are you aware of the advancements in centralized processing of custom payment formats within SAP systems?


Historically, SAP faced limitations in this area, but recent innovations have addressed these challenges. This article explores how the XML framework within SAP’s Advanced Payment Management (APM) now effectively handles complex payment formats, streamlining and optimizing treasury functions.

SAP Bank Communication Management (BCM) has been SAP’s solution for integrating a corporate’s SAP system with its banks. It offers a seamless and secure connection either to a payment network or directly to the bank’s host-to-host solution. Payment and collection files generated from the Payment Medium Workbench can be transferred directly to the external systems, and status messages and bank statements can be received and processed into the SAP application.

The BCM solution has proven to be successful, and corporates also wanted to leverage it for transferring messages that do not originate in the SAP system itself. For example, payment files coming from an HR system or from a legacy ERP system need to be transferred via the same connection that has been established with the BCM setup. For this requirement, SAP introduced the SAP Bank Communication Management option for multisystem payment consolidation, more commonly referred to as the BCM Connector. This add-on made it possible to process payment files generated in external modules or systems into SAP BCM.

However, the BCM Connector has an important limitation: it can only forward the exact format received to external parties. This means that if a legacy system provides a payment file in a proprietary format, it cannot be converted to a more commonly accepted (e.g., XML) format. As a result, compliance with bank requirements regarding payment formats lies with the originating application, limiting the ability to manage payment formats centrally from within the SAP system.

SAP has recognized this limitation and has been focusing on developing a new module to support organizations with a need for centralized payment processing. This solution is called SAP Advanced Payment Management (APM) and offers support for several scenarios for managing payments and payment formats in a centralized environment.

One of the main features of the APM solution is the file handling, which is implemented through an XML Framework. In short, this means that all payment files that need to be processed are handled as XML messages in a canonical data model. This allows for standardized payment processing across various incoming formats.

The main advantages of the XML framework are:

  • Parsing into simplified, generic structures
    • Complex message structures are mapped into generic XML structures that are aligned with the most widely used message standard (ISO 20022). These structures also largely match the internal data structures of the business objects within SAP APM.
  • Embedded XML schema validation
    • The XSD schema files for the input structures can be loaded into APM and used to validate incoming messages. This takes away the burden of defining and implementing custom-built validations for these files.
  • Interface for easy implementation
    • Transferring the data elements from incoming messages into the canonical data structures can be achieved via predefined BAdIs, using several easy-to-use methods for retrieving and analyzing the source message.
  • Parallel processing for large messages
    • The XML Framework offers functionality to divide large messages into smaller building blocks and process them in parallel. This makes APM a powerful solution that can process large numbers of payments in a timely manner.

The solution can also be used for non-XML messages. This requires a preprocessing step in which the source file is converted to an XML representation. After this step, the message can be processed like any other XML file, including the validation and parallel processing options.
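
To make the preprocessing idea more concrete, the sketch below shows a minimal, purely illustrative conversion of a fictitious flat-file payment record into an XML representation. The record layout, field names and XML tags are assumptions for this example and do not reflect SAP APM’s actual canonical data model or APIs.

```python
# Illustrative only: convert a fictitious semicolon-delimited payment record
# into a simple XML representation. The field names and XML tags are
# assumptions and do not reflect the actual SAP APM canonical data model.
import xml.etree.ElementTree as ET

def flat_record_to_xml(record: str) -> str:
    # Assumed record layout: debtor;creditor;currency;amount;reference
    debtor, creditor, currency, amount, reference = record.split(";")

    payment = ET.Element("Payment")
    ET.SubElement(payment, "Debtor").text = debtor
    ET.SubElement(payment, "Creditor").text = creditor
    ET.SubElement(payment, "Amount", Ccy=currency).text = amount
    ET.SubElement(payment, "Reference").text = reference

    return ET.tostring(payment, encoding="unicode")

print(flat_record_to_xml("ACME GmbH;Supplier Ltd;EUR;1250.00;INV-2024-001"))
```

Once in XML form, the message can be validated against a schema and split into building blocks for parallel processing, as described above.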

Implementing this solution for XML and non-XML formats is a technical exercise that requires ABAP skills and knowledge of the SAP enhancement framework. However, SAP has made it easier to implement custom formats with the framework while fully utilizing the capabilities of the APM module.

For organizations seeking to enhance their payment processing capabilities through centralized management and innovative solutions like SAP Advanced Payment Management, our team is equipped to provide expert guidance. To explore how APM can support your treasury operations and ensure seamless financial integration, please reach out to us at r.claassen@zandersgroup.com.

PLA and the RFET: A Perfect FRTB Storm 

September 2024
6 min read

Banks face challenges with PLA and RFET under FRTB; a unified approach can reduce capital requirements and improve outcomes by addressing shared risk factors.


Despite several global delays to the FRTB go-live, many banks are still struggling to prepare for the implementation of profit and loss attribution (PLA) and the risk factor eligibility test (RFET). As both tests have the potential to considerably increase capital requirements, they are high on the agenda for most banks attempting to use the internal models approach (IMA).

In this article, we explore the difficulties with both tests and also highlight some underlying similarities. By leveraging these similarities to develop a unified PLA and RFET system, we describe how PLA and RFET failures can be avoided to reduce the potential capital requirements for IMA banks.

Difficulties with PLA

Since its introduction into the FRTB framework by the Basel Committee on Banking Supervision (BCBS), the PLA test has been a consistent cause for concern for banks attempting to use the IMA. The test is designed to ensure that Front Office (FO) and Risk P&Ls are sufficiently aligned. As such, it ensures that banks’ internal models for market risk accurately reflect the risk they are exposed to. To assess this alignment, the PLA test compares the Hypothetical P&L (HPL) from the FO with the risk-theoretical P&L (RTPL) from Risk using two statistical tests - the Spearman correlation and the Kolmogorov-Smirnov (KS) test. 
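
As a minimal illustration of how the two test statistics can be computed, the Python sketch below applies the Spearman correlation and the two-sample KS test to a pair of simulated daily HPL and RTPL series and maps the results to the PLA zones. The P&L series here are randomly generated, and the zone thresholds quoted in the comments are those set out in the Basel FRTB standard (MAR32).

```python
# Minimal illustration of the PLA test statistics on simulated P&L series.
# Zone thresholds per the Basel FRTB standard (MAR32): the green zone requires
# a Spearman correlation above 0.80 and a KS statistic below 0.09; the red
# zone is triggered by a correlation below 0.70 or a KS statistic above 0.12;
# anything in between is amber.
import numpy as np
from scipy.stats import spearmanr, ks_2samp

rng = np.random.default_rng(42)
hpl = rng.normal(0.0, 1.0, 250)            # hypothetical P&L (Front Office)
rtpl = hpl + rng.normal(0.0, 0.2, 250)     # risk-theoretical P&L (Risk model)

rho, _ = spearmanr(hpl, rtpl)
ks_stat, _ = ks_2samp(hpl, rtpl)

if rho >= 0.80 and ks_stat <= 0.09:
    zone = "green"
elif rho < 0.70 or ks_stat > 0.12:
    zone = "red"
else:
    zone = "amber"

print(f"Spearman: {rho:.3f}, KS: {ks_stat:.3f}, PLA zone: {zone}")
```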

There are potentially significant consequences of trading desks not passing the test. At best, the desk will incur capital add-ons. At worst, the desk will be forced to use the more punitive standardised approach (SA), which may increase capital requirements even more. 

There are several difficulties with PLA: 

  • No existing systems: As the test has never before been a regulatory requirement, many banks do not have suitable existing systems and processes which can be leveraged to identify the causes of PLA failures. Although the KS and Spearman tests are easy to implement, isolating the causes of PLA failures can be difficult. 
  • Risk factor mapping: Banks often do not have accurate and reliable mapping between the risk factors in the FO and Risk models. Remediation of the inaccurate mapping can often be a slow and manual process, making it extremely difficult to identify the risk factors which are causing the PLA failure. 
  • Data inconsistency: As the data feeds between Risk and FO models can be different, there can be a large number of potential causes of P&L differences. Even small differences in data granularity, convexity capture or even holiday calendars can cause misalignments which may result in PLA failures. 
  • Hedged portfolios: Well-hedged portfolios often find it more challenging to pass the PLA test. When portfolios are hedged, the total P&L of the portfolio is reduced, leading to a larger relative error than that of an unhedged portfolio, potentially causing PLA failures. You can read more about this topic in our blog post ‘To Hedge or Not to Hedge: Navigating the Catch-22 of FRTB’s PLA Test’.

Issues with the RFET

The RFET ensures that all risk factors in the internal model have a minimum level of liquidity and enough market data to be modelled accurately. Liquidity is measured by the number of real price observations recorded over the past 12 months. Any risk factors that do not meet the minimum liquidity standards outlined in FRTB are known as non-modellable risk factors (NMRFs). Similar to the consequences of failing the PLA test and having to use the SA, NMRFs must be capitalised using the more conservative stressed expected shortfall (SES) calculation, leading to higher capital requirements. Research shows that NMRFs can account for over 30% of capital requirements, making them one of the most punitive drivers of increased capital within the IMA. The impact of NMRFs is often considered to be disproportionately large and also unpredictable.
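
As a simplified sketch of how modellability might be checked for a single risk factor, the function below counts real price observations against the eligibility criteria in the Basel FRTB standard (at least 24 observations over the previous 12 months with no 90-day period containing fewer than four, or at least 100 observations over the previous 12 months). In practice the test also involves rules on what qualifies as a real price observation, which are not captured here.

```python
# Simplified RFET modellability check for a single risk factor, based on the
# Basel FRTB criteria: at least 24 real price observations over the previous
# 12 months with no 90-day period containing fewer than four observations,
# or at least 100 observations over the previous 12 months.
from datetime import date, timedelta

def is_modellable(observation_dates: list[date], as_of: date) -> bool:
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)

    if len(obs) >= 100:
        return True
    if len(obs) < 24:
        return False

    # Check every rolling 90-day period within the 12-month window
    day = window_start
    while day + timedelta(days=90) <= as_of:
        in_period = sum(1 for d in obs if day <= d < day + timedelta(days=90))
        if in_period < 4:
            return False
        day += timedelta(days=1)
    return True

# Example: roughly weekly observations comfortably pass the test
weekly = [date(2024, 9, 2) - timedelta(weeks=i) for i in range(52)]
print(is_modellable(weekly, as_of=date(2024, 9, 2)))  # True
```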

There are several difficulties with the RFET: 

  • Wide scope: The RFET requires all risk factors to be collected across multiple desks and systems. Mapping instruments to risk factors can be a complicated and lengthy process. Consequently, implementing and operationalizing the RFET can be difficult. 
  • Diversification benefit: Modellable risk factors are capitalised using the expected shortfall (ES), which allows for diversification benefits. However, NMRFs are capitalised using the stressed expected shortfall (SES), which does not provide the same benefits, resulting in larger capital requirements.
  • Proxy development: Although proxies can be used to overcome a lack of data, developing them can be time-consuming and require considerable effort. Determining proxies requires exploratory work which often has uncertain outcomes. Furthermore, all proxies need to be validated and justified to the regulator.
  • Vendor data: It can be difficult for banks to weigh the cost of purchasing external data to increase the number of real price observations against the capital cost of additional NMRFs. Ultimately, the result of the RFET depends on a bank’s access to real price observation data. Although two banks may have identical exposures and risk, they may have completely different capital requirements depending on their access to the relevant data.

The interconnectedness of both tests 

Despite their individual difficulties, there are a number of similarities between PLA and the RFET which can be leveraged to ensure efficient implementation of the IMA: 

  • Although PLA is performed at the desk level, the underlying risk factors are the same as those used for the RFET.
  • Both tests potentially impact the ES model, as the PLA/RFET outcomes may instigate modifications to the model in order to improve the results. For example, any change in data source to increase the liquidity of NMRFs (a common way to overcome RFET issues) would require PLA to be rerun.
  • Ultimately, if any changes are made to the underlying risk factors, both tests must be performed again.
  • Hence, although they are relatively simple tests (the Spearman correlation and KS test for PLA, and a count of real price observations for the RFET), banks must develop a reliable architecture to dynamically change risk factors and efficiently rerun the PLA and RFET tests.

Zanders’ recommendation 

As the two tests greatly impact one another, a unified system allows both components to be run together. Due to their interdependencies, a unified PLA-RFET system makes it easier for banks to dynamically modify risk factors and improve results for both tests.

  • In order to have a truly unified PLA-RFET system, the PLA results must also be brought down to the risk factor level. This is done by understanding and quantifying which risk factors are causing the discrepancies between RTPL and HPL and driving poor PLA statistics. More information about this can be found in our blog post ‘FRTB: Profit and Loss Attribution (PLA) Analytics’.
  • Once the risk factors causing PLA failures have been identified, a unified approach can prioritise those risk factors which, if remediated, improve PLA statistics and also efficiently reduce NMRF SES capitalisation, as sketched below.
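
The snippet below is a purely hypothetical sketch of this prioritisation idea: each risk factor is scored by its estimated contribution to the HPL-RTPL discrepancy and, where it is non-modellable, its SES capital charge, so that remediation effort is directed where it improves both tests. The risk factors, figures, field names and scoring rule are all assumptions for illustration.

```python
# Hypothetical prioritisation of risk factor remediation. The risk factors,
# figures and scoring weights below are assumptions for illustration only.
risk_factors = [
    {"name": "EURUSD 5Y vol",     "pla_discrepancy": 0.35, "ses_capital": 12.0, "nmrf": True},
    {"name": "GBP 10Y swap rate", "pla_discrepancy": 0.10, "ses_capital": 0.0,  "nmrf": False},
    {"name": "XYZ credit spread", "pla_discrepancy": 0.25, "ses_capital": 30.0, "nmrf": True},
]

def remediation_score(rf: dict) -> float:
    # Combine the contribution to PLA discrepancies with, for NMRFs, the SES
    # capital charge; the equal weighting here is an arbitrary choice.
    return rf["pla_discrepancy"] * 100 + (rf["ses_capital"] if rf["nmrf"] else 0.0)

for rf in sorted(risk_factors, key=remediation_score, reverse=True):
    print(f"{rf['name']}: remediation score {remediation_score(rf):.1f}")
```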

Conclusion 

While PLA is crucial for IMA approval, it presents numerous operational and technical challenges. Similarly, the RFET introduces additional complexities by enforcing strict liquidity and data standards for risk factors, with failing risk factors subject to harsher capital treatments. The interconnected nature of both tests highlights the need for a cohesive strategy, where adjustments to one test can directly influence outcomes in the other. Ultimately, banks need to invest in robust systems that allow for dynamic adjustments to risk factors and efficient reruns of both tests. A unified PLA-RFET approach can streamline processes, reduce capital penalties, and improve test results by focusing on the underlying risk factors common to both assessments.  

For more information about this topic and how Zanders can help you design and implement a unified PLA and RFET system, please contact Dilbagh Kalsi (Partner) or Hardial Kalsi (Manager).

Insights into FX Risk in Business Planning and Analysis

September 2024
4 min read

Strengthen strategic decision-making by bridging the FX impact gap. Empower Treasury as a proactive partner in predicting and minimizing global and local FX risks through advanced analytics.


In a world of persistent market and economic volatility, the Corporate Treasury function is increasingly taking on a more strategic role in navigating the uncertainties and driving corporate success.

Even in the most mature organizations, the involvement of the Treasury center in FX risk management often begins with collecting forecasted exposures from subsidiaries. However, to fundamentally enhance the performance of the FX risk management process, it is crucial to understand the nature of these FX exposures and their impacts on the upstream business processes where they originate.

Enabling this requires the optimization of the end-to-end FX hedging lifecycle, from the subsidiary financial planning and analysis (FP&A) that identifies the exposure through to Treasury hedging. Improvements in the exposure identification process and FX impact analytics necessitate the use of intelligent systems and closer cooperation between Treasury and business functions.

Traditional models

While the primary goal of local business units is to enhance the performance of their respective operations, fluctuating FX rates will always directly impact the overall financial results and, in many cases, obscure the true business performance of the entity. A common strategy to separate business performance from FX impacts is to use constant budgeting and planning rates for management reporting, where the FX impact is nullified. These budgeting and planning rates typically reflect the most likely hedged rates achieved by Treasury, considering the hedging policies and forecasted hedging horizons. However, this strategy can lead to unexpected shocks in financial reporting and obscure the impacts of FX exposure forecasting and hedging performance.

When these shocks occur, conclusions about their causes, such as over or under-hedging or unrealistic planning rates, can only be drawn through retrospective analysis of the results. Unfortunately, this analysis often comes too late to address the underlying issues.

The most common Treasury tools used to measure the accuracy of business forecasting are Forecast vs. Forecast and Actual vs. Forecast accuracy reporting. These tools help identify recurring trouble areas that may need improvement. However, while these metrics indicate where forecasting accuracy can be improved, they do not easily translate into a quantification of the predicted or actual financial impact required for business planning purposes.

End-to-End FX risk management in a Treasury 4.x environment

Finance transformation projects, paired with system centralization and standardization, may offer an opportunity to create better integration between Treasury and its business partners, bridging the information gap and providing better insight and early analysis of future FX results. Treasury systems data related to hedging performance, together with improved up-to-date exposure forecasting, can paint a clearer picture of the up-to-date performance against the plan.

While some principles may remain the same, such as using planning and budgeting rates to isolate the business performance for analysis, the expected FX impacts at a business level can equally be analyzed and accounted for as part of the regular FP&A processes, answering questions such as the following (a simple quantification sketch follows the list):

  • What is the expected impact of over- or under-hedging on the P&L?
  • What is the expected impact from late hedging of exposures?
  • What is the expected impact from misaligned budgeting and planning rates compared to the achieved hedging rates?
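
As a simple, hypothetical illustration of how the first and third questions above might be quantified, the sketch below decomposes the expected FX impact for a single forecast exposure into an under-hedging component and a planning-rate misalignment component. All amounts, rates and the decomposition itself are assumptions chosen for illustration rather than a prescribed methodology.

```python
# Hypothetical decomposition of expected FX impacts for a single exposure.
# All amounts, rates and the decomposition are assumptions for illustration.
forecast_exposure  = 1_000_000   # forecast receivable in foreign currency
actual_exposure    = 1_200_000   # actual receivable (forecast was 200k short)
hedged_amount      = forecast_exposure   # hedge placed on the forecast amount
planning_rate      = 1.10        # constant budgeting/planning rate
hedge_rate         = 1.08        # rate achieved by Treasury on the hedge
spot_at_settlement = 1.05        # spot rate when the exposure settles

# Under-hedging: the unhedged portion settles at spot instead of the plan rate
under_hedged = actual_exposure - hedged_amount
underhedge_impact = under_hedged * (spot_at_settlement - planning_rate)

# Planning-rate misalignment: the hedged portion locks in the hedge rate,
# which differs from the rate assumed in the plan
misalignment_impact = hedged_amount * (hedge_rate - planning_rate)

print(f"Under-hedging impact:       {underhedge_impact:,.0f}")
print(f"Planning-rate misalignment: {misalignment_impact:,.0f}")
```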

The Zanders Whitepaper, "Treasury 4.x – The Age of Productivity, Performance, and Steering," outlines the enablers for Treasury to fulfill its strategic potential, identifying Productivity, Performance, and Steering as key areas of focus.

In the area of Performance, the benefits of enhanced insights and up-to-date metrics for forecasting the P&L impacts of FX are clear. Early identification of expected FX impacts in the FP&A processes provides both time and opportunity to respond to risks sooner. Improved insights into the causes of FX impacts offer direction on where issues should be addressed. The outcome should be enhanced predictability of the overall financial results.

In addition to increased Performance, there are additional benefits in clearer accountability for the results. In the three questions above, the first two address timely forecasting accuracy, while the third pertains to the Treasury team's ability to achieve the rates set by the organization. With transparent accountability for the FX impact, Treasury gains an additional tool to steer the organization toward improved budgeting processes and create KPIs to ensure effective strategy implementation. This provides a valuable addition to the commonly used forecast vs. forecast exposure analysis, as the FX impacts resulting from that performance can be easily identified.

Conclusion

Although FP&A processes are crucial for clear strategic decision-making around business operations and financial planning, the FX impact—potentially a significant driver of financial results—is not commonly monitored to the same extent and level of detail as business operations metrics.

Improving the FX analytics of these processes can largely bridge the information gap between business performance and financial performance. It also allows Treasury to act as a more engaged business partner to the rest of the organization in predicting and explaining FX impacts, while providing strategic direction on how these impacts can be minimized, both globally and at the local operations level.

Implementing such an end-to-end process may be intimidating, but data and technology improvements embraced in the context of finance transformation projects may open the door to exploring these ideas. With cooperation between Treasury and the business, a true end-to-end FX risk management process may be within reach.
