The usage of proxies under FRTB

November 2021
4 min read

Learn how banks can reduce capital charges under FRTB by using proxies, external data, and customized risk factor bucketing to minimize non-modellable risk factors (NMRFs).


Non-modellable risk factors (NMRFs) have been shown to be one of the largest contributors to capital charges under FRTB. The use of proxies is one of the methods that banks can employ to increase the modellability of risk factors and reduce the number of NMRFs. Other potential methods for improving the modellability of risk factors include using external data sources and modifying risk factor bucketing approaches.

Proxies and FRTB

A proxy is used when there is insufficient historical data for a risk factor. A lack of historical data increases the likelihood of the risk factor failing the Risk Factor Eligibility Test (RFET). Consequently, using proxies helps reduce the number of NMRFs and keep capital charges to a minimum. Although the use of proxies is allowed, regulation states that their usage must be limited, and they must have sufficiently similar characteristics to the risk factors which they represent.

Banks must be ready to provide evidence to regulators that their chosen proxies are conceptually and empirically sound. Despite the potential reduction in capital, developing proxy methodologies can be time-consuming and require considerable ongoing monitoring. There are two main approaches which are used to develop proxies: rules-based and statistical.

Proxy decomposition

FRTB regulation allows NMRFs to be decomposed into modellable components and a residual basis, which must be capitalised as non-modellable. For example, credit spreads for small issuers which are not highly liquid can be decomposed into a liquid credit spread index component, which is classed as modellable, and a non-modellable basis or spread.  

To test modellability using the RFET, 12 months of data is required for the proxy and basis components. If the basis between the proxy and the risk factor has not been identified and properly capitalised, only the proxy representation of the risk factor can be used in the Risk Theoretical P&L (RTPL). However, if the capital requirement for a basis is determined, either: (i) the proxy risk factor and the basis; or (ii) the original risk factor itself can be included in the RTPL.
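As a minimal illustration of this decomposition (a sketch in Python, with hypothetical series names rather than any prescribed methodology), an illiquid issuer spread can be split into a liquid index component and a residual basis, with only the basis left to capitalise as non-modellable:

```python
import pandas as pd

def decompose_spread(issuer_spread: pd.Series, index_spread: pd.Series) -> pd.DataFrame:
    """Split an illiquid issuer credit spread into a modellable index
    component (the proxy) and a residual, non-modellable basis."""
    aligned = pd.concat({"issuer": issuer_spread, "index": index_spread}, axis=1).dropna()
    # basis = original risk factor minus its proxy representation
    aligned["basis"] = aligned["issuer"] - aligned["index"]
    return aligned

# Hypothetical usage: the "index" column would feed the expected shortfall model,
# while the "basis" column would be capitalised via a stressed scenario.
```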

Banks should aim to produce a preliminary cost-benefit analysis of proxy development – does the cost and effort of developing proxies outweigh the capital that could be saved by increasing risk factor modellability? For example, proxies which are highly volatile may even increase NMRF capital charges.

Approaches for the development of proxies

Both rules-based and statistical approaches to developing proxies require considerable effort. Banks should aim to develop statistical approaches, as these have been shown to be more accurate and more effective at reducing capital requirements.

Rules-based approach

Rules-based approaches are simpler but less accurate than statistical approaches. They find the “closest fit” modellable risk factor using more qualitative methods: for example, picking the closest tenor on a yield curve (see below), using relevant indices or ETFs, or limiting the search for proxies to the same sector as the underlying risk factor.

Similarly, longer tenor points (which may not be traded as frequently) can be decomposed into shorter-tenor points and a cross-tenor basis spread.
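As a simple sketch of such a rules-based rule (Python; the tenor grid and modellability flags below are illustrative assumptions, not prescribed values), the proxy for a non-modellable tenor can be taken as the nearest modellable point on the curve:

```python
def closest_modellable_tenor(target_tenor: float, modellable_tenors: list[float]) -> float:
    """Rules-based proxy: return the modellable curve tenor (in years)
    closest to the non-modellable target tenor."""
    if not modellable_tenors:
        raise ValueError("no modellable tenors available to act as a proxy")
    return min(modellable_tenors, key=lambda t: abs(t - target_tenor))

# Illustrative example: the 12y point fails the RFET while 2y, 5y, 10y and 20y pass,
# so the 10y point is used as its proxy.
proxy_tenor = closest_modellable_tenor(12.0, [2.0, 5.0, 10.0, 20.0])  # -> 10.0
```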

Statistical approach

Statistical approaches are more quantitative and more accurate than rules-based approaches. However, this inevitably comes at a computational expense. A large number of candidates are tested using the chosen statistical methodology and the closest fit is picked (see below).

For example, a regression approach could be used to identify which of the candidates are most correlated with the underlying risk factor. Studies have shown that statistical approaches not only produce more accurate proxies, but can also reduce capital charges by almost twice as much as simpler rules-based approaches.
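A minimal sketch of such a statistical search is shown below (Python; the univariate regression and the use of R-squared as the selection criterion are illustrative choices, not a prescribed methodology):

```python
import numpy as np
import pandas as pd

def select_proxy(nmrf_returns: pd.Series, candidates: pd.DataFrame):
    """Pick the candidate modellable risk factor whose returns best explain
    the NMRF returns in a univariate linear regression.

    Returns (candidate name, regression beta, R-squared) for the best fit.
    """
    best_name, best_beta, best_r2 = None, 0.0, -np.inf
    for name in candidates.columns:
        joined = pd.concat({"y": nmrf_returns, "x": candidates[name]}, axis=1).dropna()
        x, y = joined["x"].to_numpy(), joined["y"].to_numpy()
        beta, alpha = np.polyfit(x, y, 1)              # univariate OLS: slope, intercept
        residuals = y - (alpha + beta * x)
        r2 = 1.0 - residuals.var() / y.var()           # goodness of fit
        if r2 > best_r2:
            best_name, best_beta, best_r2 = name, beta, r2
    return best_name, best_beta, best_r2
```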

Conclusion

Risk factor modellability is a considerable concern for banks as it has a direct impact on the size of their capital charges. Inevitably, reducing the number of NMRFs is a key aim for all IMA banks. In this article, we show that developing proxies is one of the strategies that banks can use to minimise the number of NMRFs in their models. Furthermore, we describe the two main approaches for developing proxies: rules-based and statistical. Although rules-based approaches are less complicated to develop, statistical approaches show much better accuracy and hence have greater potential to reduce capital charges.

FRTB: Improving the Modellability of Risk Factors

June 2021
4 min read

Learn how banks can reduce capital charges under FRTB by using proxies, external data, and customized risk factor bucketing to minimize non-modellable risk factors (NMRFs).


Under the FRTB internal models approach (IMA), the capital calculation of risk factors is dependent on whether the risk factor is modellable. Insufficient data will result in more non-modellable risk factors (NMRFs), significantly increasing associated capital charges.

NMRFs

Risk factor modellability and NMRFs

The modellability of risk factors is a new concept which was introduced under FRTB and is based on the liquidity of each risk factor. Modellability is measured using the number of ‘real prices’ which are available for each risk factor. Real prices are transaction prices from the institution itself, verifiable prices for transactions between arm’s-length parties, prices from committed quotes, and prices from third-party vendors.

For a risk factor to be classed as modellable, it must have either a minimum of 24 real prices per year with no 90-day period containing fewer than four, or a minimum of 100 real prices in the last 12 months (with a maximum of one real price counted per day). The Risk Factor Eligibility Test (RFET), outlined in FRTB, is the process which determines modellability and is performed quarterly. The results of the RFET determine, for each risk factor, whether the capital requirements are calculated by expected shortfall or stressed scenarios.
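A minimal sketch of this counting logic (Python; the daily sliding-window check and the observation-date input are implementation assumptions) could look as follows:

```python
import pandas as pd

def passes_rfet(real_price_dates, as_of) -> bool:
    """Check the RFET counting criteria for a single risk factor.

    real_price_dates : dates on which real price observations were identified
                       (at most one observation per day is counted)
    as_of            : assessment date; the test covers the preceding 12 months
    """
    as_of = pd.Timestamp(as_of)
    start = as_of - pd.DateOffset(months=12)
    dates = (
        pd.to_datetime(pd.Series(list(real_price_dates)))
        .dt.normalize()
        .drop_duplicates()
        .sort_values()
    )
    dates = dates[(dates > start) & (dates <= as_of)]

    if len(dates) >= 100:      # criterion (ii): at least 100 real prices in 12 months
        return True
    if len(dates) < 24:        # criterion (i): at least 24 real prices ...
        return False
    # ... and no 90-day period within the 12 months with fewer than 4 observations
    day = start
    while day + pd.Timedelta(days=90) <= as_of:
        in_window = ((dates >= day) & (dates < day + pd.Timedelta(days=90))).sum()
        if in_window < 4:
            return False
        day += pd.Timedelta(days=1)
    return True
```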

Consequences of NMRFs for banks

Modellable risk factors are capitalised via expected shortfall calculations which allow for diversification benefits. Conversely, capital for NMRFs is calculated via stressed scenarios which result in larger capital charges. This is due to longer liquidity horizons and more prudent assumptions used for aggregation. Although it is expected that a low proportion of risk factors will be classified as non-modellable, research shows that they can account for over 30% of total capital requirements. 

There are multiple techniques that banks can use to reduce the number and impact of NMRFs, including the use of external data, developing proxies, and modifying the parameterisation of risk factor curves and surfaces. As well as focusing on reducing the number of NMRFs, banks will also need to develop early warning systems and automated reporting infrastructures to monitor the modellability of risk factors. These tools help to track and predict modellability issues, reducing the likelihood that risk factors will fail the RFET and increase capital requirements.

Methods for reducing the number of NMRFs

Banks should focus on reducing their NMRFs as they are associated with significantly higher capital charges. There are multiple approaches which can be taken to increase the likelihood that a risk factor passes the RFET and is classed as modellable.

Enhancing internal data

The simplest way for banks to reduce NMRFs is by increasing the amount of data available to them. Augmenting internal data with external data increases the number of real prices available for the RFET and reduces the likelihood of NMRFs. Banks can purchase additional data from external data vendors and data pooling services to increase the size and quality of datasets.

It is important for banks to initially investigate their internal data and understand where the gaps are. As data providers vary in the services and information they provide, banks should not focus solely on the types and quantity of data available; they should also consider data integrity, user interfaces, governance, and security. Many data providers also offer FRTB-specific metadata, such as flags for RFET liquidity passes or fails.

Finally, once a data provider has been chosen, additional effort will be required to resolve discrepancies between internal and external data and ensure that the external data follows the same internal standards.

Creating risk factor proxies

Proxies can be developed to reduce the number or magnitude of NMRFs; however, regulation states that their use must be limited. Proxies are developed using either statistical or rules-based approaches.

Rules-based approaches are simplistic, yet generally less accurate. They find the “closest fit” modellable risk factor using more qualitative methods, e.g. using the closest tenor on the interest rate curve. Alternatively, more accurate approaches model the relationship between the NMRF and modellable risk factors using statistical methods. Once a proxy is determined, it is classified as modellable and only the basis between it and the NMRF is required to be capitalised using stressed scenarios.

Determining proxies can be time-consuming as it requires exploratory work with uncertain outcomes. Additional ongoing effort will also be required by validation and monitoring units to ensure the relationship holds and the regulator is satisfied.

Developing own bucketing approach

Instead of using the prescribed bucketing approach, banks can use their own approach to maximise the number of real price observations for each risk factor.

For example, if a risk model requires a volatility surface for pricing, there are multiple ways this surface can be parametrised. One method could be to split the surface into a 5x5 grid, creating 25 buckets that would each require sufficient real price observations to be classified as modellable. Alternatively, the bank could split the surface into a 2x2 grid, resulting in only four buckets. The same number of real price observations would then need to be allocated between significantly fewer buckets, decreasing the chances of a risk factor being an NMRF.
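The impact of this choice can be sketched as follows (Python; the grid boundaries and the randomly generated observations are purely illustrative):

```python
import numpy as np

def observations_per_bucket(moneyness, expiry, m_edges, t_edges):
    """Count real price observations falling into each bucket of a volatility
    surface, given bucket edges along the moneyness and expiry dimensions."""
    counts, _, _ = np.histogram2d(moneyness, expiry, bins=[m_edges, t_edges])
    return counts

# Illustrative observations on a volatility surface (moneyness, expiry in years)
rng = np.random.default_rng(0)
moneyness = rng.uniform(0.5, 1.5, size=60)
expiry = rng.uniform(0.1, 5.0, size=60)

fine = observations_per_bucket(moneyness, expiry,
                               np.linspace(0.5, 1.5, 6), np.linspace(0.1, 5.0, 6))    # 5x5 = 25 buckets
coarse = observations_per_bucket(moneyness, expiry,
                                 np.linspace(0.5, 1.5, 3), np.linspace(0.1, 5.0, 3))  # 2x2 = 4 buckets

# The same 60 observations are spread thinly over 25 fine buckets but pooled
# into 4 coarse buckets, making each coarse bucket far more likely to pass the RFET.
print(fine.min(), coarse.min())
```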

It should be noted that the choice of bucketing approach affects other aspects of FRTB. Profit and Loss Attribution (PLA) uses the same buckets of risk factors as chosen for the RFET. Increasing the number of buckets may increase the chances of passing PLA, however, also increases the likelihood of risk factors failing the RFET and being classed as NMRFs.

Conclusion

In this article, we have described several potential methods for reducing the number of NMRFs. Although some of the suggested methods may be more cost effective or easier to implement than others, banks will most likely, in practice, need to implement a combination of these strategies in parallel. The modellability of risk factors is clearly an important part of the FRTB regulation for banks as it has a direct impact on required capital. Banks should begin to develop strategies for reducing the number of NMRFs as early as possible if they are to minimise the required capital when FRTB goes live.

Targeted Review of Internal Models (TRIM): Review of observations and findings for Traded Risk

May 2021
4 min read


The ECB has recently published the findings and observations from its TRIM on-site inspections. A significant number of deficiencies were identified, which institutions are required to remediate in a timely fashion.

Since the 2007-09 Global Financial Crisis, concerns have been raised regarding the complexity and variability of the models used by institutions to calculate their regulatory capital requirements. The lack of transparency behind the modelling approaches made it increasingly difficult for regulators to assess whether all risks had been appropriately and consistently captured.

The TRIM project was a large-scale multi-year supervisory initiative launched by the ECB at the beginning of 2016. The project aimed to confirm the adequacy and appropriateness of approved Pillar I internal models used by Significant Institutions (SIs) in euro area countries. This ensured their compliance with regulatory requirements and aimed to harmonise supervisory practices relating to internal models.

TRIM executed 200 on-site internal model investigations across 65 SIs from over 10 different countries. Over 5,800 deficiencies were identified. Findings were defined as deficiencies which required immediate supervisory attention. They were categorised depending on the actual or potential impact on the institution’s financial situation, the levels of own funds and own funds requirements, internal governance, risk control, and management.

The findings have been followed up with 253 binding supervisory decisions which request that the SIs mitigate these shortcomings in a timely fashion. Immediate action was required for findings that were expected to take a significant time to address.

Assessment of Market Risk

TRIM assessed the VaR/sVaR models of 31 institutions. The majority of severe findings concerned the general features of the VaR and sVaR modelling methodology, such as data quality and risk factor modelling.

19 out of 31 institutions used historical simulation, seven used Monte Carlo, and the remainder used either a parametric or mixed approach. 17 of the historical simulation institutions, and five using Monte Carlo, used full revaluation for most instruments. Most other institutions used a sensitivities-based pricing approach.

VaR/sVaR Methodology

Data: Issues with data cleansing, processing and validation were seen in many institutions and, on many occasions, data processes were poorly documented.

Risk Factors: In many cases, risk factors were missing or inadequately modelled. There was also insufficient justification or assessment of assumptions related to risk factor modelling.

Pricing: Institutions frequently had inadequate pricing methods for particular products, leading to a failure for the internal model to adequately capture all material price risks. In several cases, validation activities regarding the adequacy of pricing methods in the VaR model were insufficient or missing.

RNIME: Approximately two-thirds of the institutions had an identification process for risks not in model engines (RNIMEs). For ten of these institutions, this directly led to an RNIME add-on to the VaR or to the capital requirements.

Regulatory Backtesting

Period and Business Days: There was a lack of clear definitions of business and non-business days at most institutions. In many cases, this meant that institutions were trading on local holidays without adequate risk monitoring and without considering those days in the P&L and/or the VaR.

APL: Many institutions had no clear definition of fees, commissions or net interest income (NII), which must be excluded from the actual P&L (APL). Several institutions had issues with the treatment of fair value or other adjustments, which were either not documented, not determined correctly, or were not properly considered in the APL. Incorrect treatment of CVAs and DVAs and inconsistent treatment of the passage of time (theta) effect were also seen.

HPL: An insufficient alignment of pricing functions, market data, and parametrisation between the economic P&L (EPL) and the hypothetical P&L (HPL), as well as the inconsistent treatment of the theta effect in the HPL and the VaR, was seen in many institutions.

Internal Validation and Internal Backtesting

Methodology: In several cases, the internal backtesting methodology was considered inadequate or the levels of backtesting were not sufficient.

Hypothetical Backtesting: The required backtesting on hypothetical portfolios was either not carried out or was only carried out to a very limited extent.

IRC Methodology

TRIM assessed the IRC models of 17 institutions, reviewing a total of 19 IRC models. A total of 120 findings were identified, and over 80% of institutions that used IRC models received at least one high-severity finding in relation to their IRC model. All institutions used a Monte Carlo simulation method, with 82% applying a weekly calculation. Most institutions obtained rates from external rating agency data; others estimated rates from IRB models or directly from their front office function. As IRC lacks a prescriptive approach, modelling assumptions varied considerably between institutions, as illustrated below.

Recovery rates: The use of unjustified or inaccurate Recovery Rate (RR) and Probability of Default (PD) values was the cause of most findings. PDs close to or equal to zero without justification were a common issue, which typically arose for the modelling of sovereign obligors with high credit quality. 58% of models assumed PDs lower than one basis point, typically for sovereigns with very good ratings but sometimes also for corporates. The inconsistent assignment of PDs and RRs, or cases of manual assignment without a fully documented process, also contributed to common findings.

Modelling approach: The lack of adequate justification for modelling choices, including copula assumptions, risk factor choice, and correlation assumptions, gave rise to many findings. Poor-quality data and the lack of sufficient validation also raised many findings regarding the correlation calibration.

Assessment of Counterparty Credit Risk

Eight banks faced on-site inspections under TRIM for counterparty credit risk. Whilst the majority of investigations resulted in findings of low materiality, there were severe weaknesses identified within validation units and overall governance frameworks.

Conclusion

Based on the findings and responses, it is clear that TRIM has successfully highlighted several shortcomings across the banks. As is often the case, many issues appear to be systemic, occurring in a large number of the institutions. The issues and findings range from fundamental problems, such as missing risk factors, to more complicated problems related to inadequate modelling methodologies. As such, the remediation of these findings will also range from low to high effort. The SIs will need to mitigate the shortcomings in a timely fashion, with some of the more complicated or impactful findings potentially taking considerable time to remediate.

FRTB: Harnessing Synergies Between Regulations

March 2021
4 min read


Regulatory Landscape

Despite a delay of one year, many banks are struggling to be ready for FRTB in January 2023. Alongside the FRTB timeline, banks are also preparing for other important regulatory requirements and deadlines which share commonalities in implementation. We introduce several of these below.

SIMM

Initial Margin (IM) is the value of collateral required to open a position with a bank, exchange or broker.  The Standard Initial Margin Model (SIMM), published by ISDA, sets a market standard for calculating IMs. SIMM provides margin requirements for financial firms when trading non-centrally cleared derivatives.

BCBS 239

BCBS 239, published by the Basel Committee on Banking Supervision, aims to enhance banks’ risk data aggregation capabilities and internal risk reporting practices. It focuses on areas such as data governance, accuracy, completeness and timeliness. The standard outlines 14 principles, although their high-level nature means that they are open to interpretation.

SA-CVA

Credit Valuation Adjustment (CVA) is a type of value adjustment and represents the market value of the counterparty credit risk for a transaction. FRTB splits CVA into two main approaches: BA-CVA, for smaller banks with less sophisticated trading activities, and SA-CVA, for larger banks with designated CVA risk management desks.

IBOR

Interbank Offered Rates (IBORs) are benchmark reference interest rates. As they have been subject to manipulation and due to a lack of liquidity, IBORs are being replaced by Alternative Reference Rates (ARRs). Unlike IBORs, ARRs are based on real transactions on liquid markets rather than subjective estimates.

Synergies With Current Regulation

Existing SIMM and BCBS 239 frameworks and processes can be readily leveraged to reduce efforts in implementing FRTB frameworks.

SIMM

The overarching process of SIMM is very similar to the FRTB Sensitivities-based Method (SbM), including the identification of risk factors, calculation of sensitivities and aggregation of results. The outputs of SbM and SIMM are both based on delta, vega and curvature sensitivities. SIMM and FRTB both share four risk classes (IR, FX, EQ, and CM). However, in SIMM, credit is split across two risk classes (qualifying and non-qualifying), whereas it is split across three in FRTB (non-securitisation, securitisation and correlation trading). For both SbM and SIMM, banks should be able to decompose indices into their individual constituents. 

We recommend that banks leverage the existing sensitivities infrastructure from SIMM for SbM calculations, use a shared risk factor mapping methodology between SIMM and FRTB when there is considerable alignment in risk classes, and utilise a common index look-through procedure for both SIMM and SbM index decompositions.
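One way to realise such a shared mapping is a single lookup from the bank’s internal risk factor taxonomy to both frameworks (Python; the class names and mapping entries below are illustrative assumptions, not a complete or prescribed mapping):

```python
# Illustrative shared risk class mapping feeding both SIMM and FRTB SbM.
# The four aligned classes map one-to-one; credit is split differently in each framework.
RISK_CLASS_MAP = {
    "interest_rate": {"simm": "Interest Rate",        "sbm": "GIRR"},
    "fx":            {"simm": "FX",                   "sbm": "FX"},
    "equity":        {"simm": "Equity",               "sbm": "Equity"},
    "commodity":     {"simm": "Commodity",            "sbm": "Commodity"},
    "credit_cash":   {"simm": "Credit Qualifying",    "sbm": "CSR non-securitisation"},
    "credit_sec":    {"simm": "Credit Non-Qualifying","sbm": "CSR securitisation"},
}

def map_risk_class(internal_class: str, framework: str) -> str:
    """Resolve an internal risk factor class to a SIMM or SbM risk class."""
    return RISK_CLASS_MAP[internal_class][framework]
```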

BCBS 239

BCBS 239 requires banks to review IT infrastructure, governance, data quality, aggregation policies and procedures. A similar review will be required in order to comply with the data standards of FRTB. The BCBS 239 principles are now in “Annex D” of the FRTB document, clearly showing the synergy between the two regulations. The quality, transparency, volume and consistency of data are important for both BCBS 239 and FRTB. Improving these factors allows banks to more easily follow the BCBS 239 principles and decrease the capital charges of non-modellable risk factors. BCBS 239 principles, such as data completeness and timeliness, are also necessary for passing P&L attribution (PLA) under FRTB.

We recommend that banks use BCBS 239 principles when designing the necessary data frameworks for the FRTB Risk Factor Eligibility Test (RFET), support FRTB traceability requirements and supervisory approvals with existing BCBS 239 data lineage documentation, and produce market risk reporting for FRTB using the risk reporting infrastructure detailed in BCBS 239.

Synergies With Future Regulation

The IBOR transition and SA-CVA will become effective from 2023. Aligning the timelines and exploiting the similarities between FRTB, SA-CVA and the IBOR transition will support banks to be ready for all three regulatory deadlines.

SA-CVA

Four of the six risk classes in SA-CVA (IR, FX, EQ, and CM) are identical to those in SbM. SA-CVA, however, uses a reduced granularity for risk factors compared to SbM. The SA-CVA capital calculation uses a similar methodology to SbM by combining sensitivities with risk weights. SA-CVA also incorporates the same trade population and metadata as SbM. SA-CVA capital requirements must be calculated and reported to the supervisor at the same monthly frequency as for the market risk standardised approach.

We recommend that banks combine SA-CVA and SbM risk factor bucketing tasks in a common methodology to reduce overall effort, isolate common components of both models as a feeder model, allowing a single stream for model development and validation, and develop a single system architecture which can be configured for either SbM or SA-CVA.

IBOR Transition

Although not a direct synergy, the transition from IBORs will have a direct impact on the Internal Models Approach (IMA) for FRTB and the eligibility of risk factors. As the use of IBORs is discontinued, banks may observe a reduction in the number of real-price observations for associated risk factors due to a reduction in market liquidity. It is not certain whether these liquidity issues fall under the RFET exemptions for systemic circumstances, which apply to modellable risk factors that can no longer pass the test. It may be difficult for banks to obtain stress-period data for ARRs, which could lead to substantial efforts to produce and justify proxies. The transition may also cause modifications to trading desk structure, the integration of external data providers, and enhanced operational requirements, all of which can affect FRTB.

We recommend that banks investigate how much data is available for ARRs, for both stress-period calculations and real-price observations; develop, as soon as possible, any proxies needed to overcome data availability issues; and calculate the capital consequences of the IBOR transition through the existing FRTB engine.

Conclusion

FRTB implementation is proving to be a considerable workload for banks, especially those considering opting for the IMA. Several FRTB requirements, such as PLA and RFET, are completely new requirements for banks. As we have shown in this article, there are several other important regulatory requirements which banks are currently working towards. As such, we recommend that banks should leverage the synergies which are seen across this regulatory landscape to reduce the complexity and workload of FRTB.

Strengthening Model Risk Management at ABN AMRO – Insights from Martijn Habing

Martijn Habing, head of Model Risk Management (MoRM) at ABN AMRO bank, spoke at the Zanders Risk Management Seminar about the extent to which a model can predict the impact of an event.


The MoRM division of ABN AMRO comprises around 45 people. What are the crucial conditions to run the department efficiently?

Habing: “Since the beginning of 2019, we have been divided into teams with clear responsibilities, enabling us to work more efficiently as a model risk management component. Previously, all questions from the ECB or other regulators were taken care of by the experts of credit risk, but now we have a separate team ready to focus on all non-quantitative matters. This reduces the workload on the experts who really need to deal with the mathematical models. The second thing we have done is to make a stronger distinction between the existing models and the new projects that we need to run. Major projects include the Definition of default and the introduction of IFRS 9. In the past, these kinds of projects were carried out by people who actually had to do the credit models. By having separate teams for this, we can scale more easily to the new projects – that works well.”

What exactly is the definition of a model within your department? Are they only risk models, or are hedge accounting or pricing models in scope too?

“We aim to identify the widest range of models possible, both in size and type. From an administrative point of view, we can easily identify 600 to 700 models. But with such a number, we can't validate them all to the same degree of depth. We therefore try to get everything in the picture, but what we look at varies per model.”

To what extent does the business determine whether a model is presented for validation?

“We want to have all models in view. Then the question is: how do you get a complete overview? How do you know what models there are if you don't see them all? We try to set this up in two ways. On the one hand, we do this by connecting to the change risk assessment process. We have an operational risk department that looks at the entire bank in cycles of approximately three years. We work with operational risk and explain to them what they need to look out for, what ‘a model’ is according to us and what risks it can contain. On the other hand, we take a top-down approach, setting the model owner at the highest possible level. For example, the director of mortgages must confirm for all processes in his business that the models have been well developed, and the documentation is in order and validated. So, we're trying to get a view on that from the top of the organization. We do have the vast majority of all models in the picture.”

Does this ever lead to discussion?

“Yes, that definitely happens. In the bank's policy, we’ve explained that we make the final judgment on whether something is a model. If we believe that a risk is being taken with a model, we indicate that something needs to be changed.”

Some of the models will likely be implemented through vendor systems. How do you deal with that in terms of validation?

“The regulations are clear about this: as a bank, you need to fully understand all your models. We have developed a vast majority of the models internally. In addition, we have market systems for which large platforms have been created by external parties. So, we are certainly also looking at these vendor systems, but they require a different approach. With these models you look at how you parametrize – which test should be done with it exactly? The control capabilities of these systems are very different. We're therefore looking at them, but they have other points of interest. For example, we perform shadow calculations to validate the results.”

How do you include the more qualitative elements in the validation of a risk model?

“There are models that include a large component from an expert who makes a certain assessment of his expertise based on one or more assumptions. That input comes from the business itself; we don't have it in the models and we can't control it mathematically. At MoRM, we try to capture which assumptions have been made by which experts. Since there is more risk in this, we are making more demands on the process by which the assumptions are made. In addition, the model outcome is generally input for the bank's decision. So, when the model concludes something, the risk associated with the assumptions will always be considered and assessed in a meeting to decide what we actually do as a bank. But there is still a risk in that.”

How do you ensure that the output from models is applied correctly?

“We try to overcome this by the obligation to include the use of the model in the documentation. For example, we have a model for IFRS 9 where we have to indicate that we also use it for stress testing. We know the internal route of the model in the decision-making of the bank. And that's a dynamic process; there are models that are developed and used for other purposes three years later. Validation is therefore much more than a mathematical exercise to see how the numbers fall apart.”

Typically, the approach is to develop first, then validate. Not every model will get a ‘validation stamp’. This can mean that a model is rejected after a large amount of work has been done. How can you prevent this?

“That is indeed a concrete problem. There are cases where a lot of work has been put into the development of a new model that was rejected at the last minute. That's a shame as a company. On the one hand, as a validation department, you have to remain independent. On the other hand, you have to be able to work efficiently in a chain. These points can be contradictory, so we try to live up to both by looking at the assumptions of modeling at an early stage. In our Model Life Cycle we have described that when developing models, the modeler or owner has to report to the committee that determines whether something can or can’t. They study both the technical and the business side. Validation can therefore play a purer role in determining whether or not something is technically good.”

To be able to better determine the impact of risks, models are becoming increasingly complex. Machine learning seems to be a solution to manage this, to what extent can it?

“As a human being, we can’t judge datasets of a certain size – you then need statistical models and summaries. We talk a lot about machine learning and its regulatory requirements, particularly with our operational risk department. We then also look at situations in which the algorithm decides. The requirements are clearly formulated, but implementation is more difficult – after all, a decision must always be explainable. So, in the end it is people who make the decisions and therefore control the buttons.”

To what extent does the use of machine learning models lead to validation issues?

“Seventy to eighty percent of what we model and validate within the bank is bound by regulation – you can't apply machine learning to that. The kind of machine learning that is emerging now is much more on the business side – how do you find better customers, how do you get cross-selling? You need a framework for that; if you have a new machine learning model, what risks do you see in it and what can you do about it? How do you make sure your model follows the rules? For example, there is a rule that you can't refuse mortgages based on someone's zip code, and in the traditional models that’s well in sight. However, with machine learning, you don't really see what's going on ‘under the hood’. That's a new risk type that we need to include in our frameworks. Another application is that we use our own machine learning models as challenger models for those we get delivered from modeling. This way we can see whether it results in the same or other drivers, or we get more information from the data than the modelers can extract.”

How important is documentation in this?

“Very important. From a validation point of view, it’s always action point number one for all models. It’s part of the checklist, even before a model can be validated by us at all. We have to check on it and be strict about it. But particularly with the bigger models and in lending, the usefulness of and need for documentation is well understood.”

Finally, what makes it so much fun to work in the field of model risk management?

“The role of data and models in the financial industry is increasing. It's not always rewarding; we need to point out where things go wrong – in that sense we are the dentist of the company. There is a risk that we’re driven too much by statistics and data. That's why we challenge our people to talk to the business and to think strategically. At the same time, many risks are still managed insufficiently – it requires more structure than we have now. For model risk management, I have a clear idea of what we need to do to make it stronger in the future. And that's a great challenge.”


Standardizing Financial Risk Management – ING’s Accelerating Think Forward Strategy and IRRBB Framework Transformation

In 2014, with its Think Forward strategy, ING set the goal to further standardize and streamline its organization. At the time, changes in international regulations were also in full swing. But what did all this mean for risk management at the bank? We asked ING’s Constant Thoolen and Gilbert van Iersel.


According to Constant Thoolen, global head of financial risk at ING, the Accelerating Think Forward strategy, an updated version of the Think Forward strategy that they just call ATF, comprises several different elements.

"Standardization is a very important one. And from standardization comes scalability and comparability. To facilitate this standardization within the financial risk management team, and thus achieve the required level of efficiency, as a bank we first had to make substantial investments so we could reap greater cost savings further down the road."

And how exactly did ING translate this into financial risk management?

Thoolen: "Obviously, there are different facets to that risk, which permeates through all business lines. The interest rate risk in the banking book, or IRRBB, is a very important part of this. Alongside the interest rate risk in trading activities, the IRRBB represents an important risk for all business lines. Given the importance of this type of risk, and the changing regulatory complexion, we decided to start up an internal IRRBB program."

So the challenge facing the bank was how to develop a consistent framework for benchmarking and reporting interest rate risk?

"The ATF strategy has set requirements for the consistency and standardization of tooling," explains Gilbert van Iersel, head of financial risk analysis. "On the one hand, our in-house QRM program ties in with this. We are currently rolling out a central system for our ALM activities, such as analyses and risk measurements—not only from a risk perspective but from a finance one too. Within the context of the IRRBB program, we also started to apply this level of standardization and consistency throughout the risk-management framework and the policy around it. We’re doing so by tackling standardization in terms of definitions, such as: what do we understand by interest rate risk, and what do benchmarks like earnings-at-risk or NII-at-risk actually mean? It’s all about how we measure and what assumptions we should make."

What role did international regulations play in all this?

Van Iersel: "An important one. The whole thing was strengthened by new IRRBB guidelines published by the EBA in 2015. It reconciled the ATF strategy with external guidelines, which prompted us to start up the IRRBB program."

So regulations served as a catalyst?

Thoolen: "Yes indeed. But in addition to serving as a foothold, the regulations, along with many changes and additional requirements in this area, also posed a challenge. Above all, it remains in a state of flux, thanks to Basel, the EBA, and supervision by the ECB. On the one hand, it’s true that we had expected the changes, because IRRBB discussions had been going on for some time. On the other hand, developments in the regulatory landscape surrounding IRRBB followed one another quite quickly. This is also different from the implementation of Basel II or III, which typically require a preparation and phasing-in period of a few years. That doesn’t apply here because we have to quickly comply with the new guidelines."

Did the European regulations help deliver the standardization that ING sought as an international bank?

Thoolen: "The shift from local to European supervision probably increased our need for standardization and consistency. We had national supervisors in the relevant countries, each supervising in their own way, with their own requirements and methodologies. The ECB checked out all these methodologies and created best practices on what they found. Now we have to deal with regulations that take in all Eurozone countries, which are also countries in which ING is active. Consequently, we are perfectly capable of making comparisons between the implementation of the ALM policy in the different countries. Above all, the associated risks are high on the agenda of policymakers and supervisors."

Van Iersel: "We have also used these standards in setting up a central treasury organization, for example, which is also complementary to the consistency and standardization process."

Thoolen: "But we’d already set the further integration of the various business units in motion, before the new regulations came into force. What’s more, we still have to deal with local legislation in the countries in which we operate outside Europe, such as Australia, Singapore, and the US. Our ideal world would be one in which we have one standard for our calculations everywhere."

What changed in the bank’s risk appetite as a result of this changing environment and the new strategy?

Van Iersel: "Based on newly defined benchmarks, we’ve redefined and shaped our risk appetite as a component part of the strategic program. In the risk appetite process we’ve clarified the difference between how ING wants to manage the IRRBB internally and how the regulator views the type of risk. As a bank, you have to comply with the so-called standard outlier test when it comes to the IRRBB. The benchmark commonly employed for this is the economic value of equity, which is value-based. Within the IRRBB, you can look at the interest rate risk from a value or an income perspective. Both are important, but they occasionally work against one another too. As a bank, we’ve made a choice between them. For us, a constant stream of income was the most important benchmark in defining our interest rate risk strategy, because that’s what is translated to the bottom line of the results that we post. Alongside our internal decision to focus more closely on income and stabilize it, the regulator opted to take a mainly value-based approach. We have explicitly incorporated this distinction in our risk appetite statements. It’s all based on our new strategy; in other words, what we are striving for as a bank and what will be the repercussions for our interest rate risk management. It’s from there that we define the different risk benchmarks."

Which other types of risk does the bank look at and how do they relate to the interest rate risk?

Van Iersel: “From the financial risk perspective, you also have to take into account aspects like credit spreads, changes in the creditworthiness of counterparties, as well as market-related risks in share prices and foreign exchange rates. Given that all these collectively influence our profitability and solvency position, they are also reflected in the Core Tier I ratio. There is a clear link to be seen there between the risk appetite for IRRBB and the overall risk appetite that we as a bank have defined. IRRBB is a component part of the whole, so there’s a certain amount of interaction between them to be considered; in other words, how does the interest rate risk measure up to the credit risk? On top of that, you have to decide where to deploy your valuable capacity. All this has been made clearer in this program.”

Does this mean that every change in the market can be accommodated by adjusting the risk appetite?

Thoolen: “Changing behavior can indeed influence risks and change the risk appetite, although not necessarily. But it can certainly lead to a different use of risk. Moreover, IFRS 9 has changed the accounting standards. Because the Core Tier 1 ratio is based on the accounting standard, these IFRS 9 changes determine the available capital too. If IFRS 9 changes the playing field, it also exerts an influence on certain risk benchmarks.”

In addition to setting up a consistent framework, the standardization of the models used by the different parts of ING was also important. How does ING approach the selection and development of these models?

Thoolen: “With this in mind, we’ve set up a structure with the various business units that we collaborate with from a financial risk perspective. We pay close attention to whether a model is applicable in the environment in which it’s used. In other words, is it a good fit with what’s happening in the market, does it cover all the risks as you see them, and does it have the necessary harmony with the ALM system? In this way, we want to establish optimum modeling for savings or the repayment risk of mortgages, for example.”

But does that also work for an international bank with substantial portfolios in very different countries?

Thoolen: “While there is model standardization, there is no market standardization. Different countries have their own product combinations and, outside the context of IRRBB, have to comply with regulations that differ from other countries. A savings product in the Netherlands will differ from a savings product in Belgium, for example. It’s difficult to define a one-size-fits-all model because the working of one market can be much more specific than another—particularly when it comes to regulations governing retail and wholesale. This sometimes makes standardization more difficult to apply. The challenge lies in the fact that every country and every market is specific, and the differences have to be reconciled in the model.”

Van Iersel: “The model was designed to measure risks as well as possible and to support the business to make good decisions. Having a consistent risk appetite framework can also make certain differences between countries or activities more visible. In Australia, for example, many more floating-rate mortgages are sold than here in the Netherlands, and this alters the sensitivity of the bank’s net interest income when the interest rate changes. Risk appetite statements must facilitate such differences.”

Thoolen: “But opting for a single ALM system imposes this model standardization on you and ensures that, once it’s integrated, it will immediately comply with many conditions. The process is still ongoing, but it’s a good fit with the standardization and consistency that we’re aiming for.”


In conjunction with the changing regulatory environment, the Accelerating Think Forward Strategy formed the backdrop for a major collaboration with Zanders: the IRRBB project. In the context of this project, Zanders researched the extent to which the bank’s interest rate risk framework complied with the changing regulations. The review also assessed ING’s new interest rate risk benchmarks and best practices. Based on the choices made by the bank, Zanders helped improve and implement the new framework and standardized models in a central risk management system.
