Strengthening Model Risk Management at ABN AMRO – Insights from Martijn Habing

Martijn Habing, head of Model Risk Management (MoRM) at ABN AMRO bank, spoke at the Zanders Risk Management Seminar about the extent to which a model can predict the impact of an event.


The MoRM division of ABN AMRO comprises around 45 people. What are the crucial conditions to run the department efficiently?

Habing: “Since the beginning of 2019, we have been divided into teams with clear responsibilities, enabling us to work more efficiently as a model risk management function. Previously, all questions from the ECB or other regulators were handled by the credit risk experts, but now we have a separate team ready to focus on all non-quantitative matters. This reduces the workload on the experts who really need to deal with the mathematical models. The second thing we have done is to make a stronger distinction between the existing models and the new projects that we need to run. Major projects include the definition of default and the introduction of IFRS 9. In the past, these kinds of projects were carried out by people who actually had to work on the credit models. By having separate teams for this, we can scale more easily to the new projects – that works well.”

What exactly is the definition of a model within your department? Are they only risk models, or are hedge accounting or pricing models in scope too?

“We aim to identify the widest possible range of models, both in size and type. From an administrative point of view, we can easily register 600 to 700 models. But with such a number, we can't validate them all to the same depth. We therefore try to get everything in the picture, but what we look at varies per model.”

To what extent does the business determine whether a model is presented for validation?

“We want to have all models in view. Then the question is: how do you get a complete overview? How do you know what models there are if you don't see them all? We try to set this up in two ways. On the one hand, we do this by connecting to the change risk assessment process. We have an operational risk department that looks at the entire bank in cycles of approximately three years. We work with operational risk and explain to them what they need to look out for, what ‘a model’ is according to us and what risks it can contain. On the other hand, we take a top-down approach, setting the model owner at the highest possible level. For example, the director of mortgages must confirm for all processes in his business that the models have been well developed, and the documentation is in order and validated. So, we're trying to get a view on that from the top of the organization. We do have the vast majority of all models in the picture.”

Does this ever lead to discussion?

“Yes, that definitely happens. In the bank's policy, we’ve explained that we make the final judgment on whether something is a model. If we believe that a risk is being taken with a model, we indicate that something needs to be changed.”

Some of the models will likely be implemented through vendor systems. How do you deal with that in terms of validation?

“The regulations are clear about this: as a bank, you need to fully understand all your models. We have developed the vast majority of the models internally. In addition, we have market systems for which large platforms have been created by external parties. So, we are certainly also looking at these vendor systems, but they require a different approach. With these models you look at how you parametrize them – which tests exactly should be done? The control capabilities of these systems are very different. We're therefore looking at them, but they have other points of interest. For example, we perform shadow calculations to validate the results.”

How do you include the more qualitative elements in the validation of a risk model?

“There are models that include a large component from an expert who, based on his expertise, makes a certain assessment of one or more assumptions. That input comes from the business itself; we don't have it in the models and we can't control it mathematically. At MoRM, we try to capture which assumptions have been made by which experts. Since there is more risk in this, we make more demands on the process by which the assumptions are made. In addition, the model outcome is generally input for the bank's decisions. So, when the model concludes something, the risk associated with the assumptions will always be considered and assessed in a meeting to decide what we actually do as a bank. But there is still a risk in that.”

How do you ensure that the output from models is applied correctly?

“We try to overcome this through the obligation to include the use of the model in the documentation. For example, we have a model for IFRS 9 where we have to indicate that we also use it for stress testing. We know the internal route of the model in the decision-making of the bank. And that's a dynamic process; there are models that are developed and then used for other purposes three years later. Validation is therefore much more than a mathematical exercise to see how the numbers turn out.”

Typically, the approach is to develop first, then validate. Not every model will get a ‘validation stamp’. This can mean that a model is rejected after a large amount of work has been done. How can you prevent this?

“That is indeed a concrete problem. There are cases where a lot of work has been put into the development of a new model that was rejected at the last minute. That's a shame for the company. On the one hand, as a validation department, you have to remain independent. On the other hand, you have to be able to work efficiently in a chain. These points can be contradictory, so we try to live up to both by looking at the modeling assumptions at an early stage. In our Model Life Cycle we have described that, when developing models, the modeler or owner has to report to the committee that determines whether something can go ahead or not. They study both the technical and the business side. Validation can therefore play a purer role in determining whether or not something is technically sound.”

To be able to better determine the impact of risks, models are becoming increasingly complex. Machine learning seems to be a solution to manage this – to what extent can it help?

“As human beings, we can’t judge datasets beyond a certain size – you then need statistical models and summaries. We talk a lot about machine learning and its regulatory requirements, particularly with our operational risk department. We then also look at situations in which the algorithm decides. The requirements are clearly formulated, but implementation is more difficult – after all, a decision must always be explainable. So, in the end it is people who make the decisions and therefore control the buttons.”

To what extent does the use of machine learning models lead to validation issues?

“Seventy to eighty percent of what we model and validate within the bank is bound by regulation – you can't apply machine learning to that. The kind of machine learning that is emerging now is much more on the business side – how do you find better customers, how do you generate cross-selling? You need a framework for that: if you have a new machine learning model, what risks do you see in it and what can you do about them? How do you make sure your model follows the rules? For example, there is a rule that you can't refuse mortgages based on someone's zip code, and in the traditional models that’s easy to check. With machine learning, however, you don't really see what's going on ‘under the hood’. That's a new risk type that we need to include in our frameworks. Another application is that we use our own machine learning models as challenger models for those delivered by the modeling teams. This way we can see whether they identify the same or different drivers, or whether more information can be extracted from the data than the modelers have found.”

How important is documentation in this?

“Very important. From a validation point of view, it’s always action point number one for all models. It’s part of the checklist, even before a model can be validated by us at all. We have to check on it and be strict about it. And particularly with the bigger models and in lending, the usefulness and necessity of documentation has sunk in.”

Finally, what makes it so much fun to work in the field of model risk management?

“The role of data and models in the financial industry is increasing. It's not always rewarding; we need to point out where things go wrong – in that sense we are the dentist of the company. There is a risk that we’re driven too much by statistics and data. That's why we challenge our people to talk to the business and to think strategically. At the same time, many risks are still managed insufficiently – it requires more structure than we have now. For model risk management, I have a clear idea of what we need to do to make it stronger in the future. And that's a great challenge.”


Standardizing Financial Risk Management – ING’s Accelerating Think Forward Strategy and IRRBB Framework Transformation

In 2014, with its Think Forward strategy, ING set the goal to further standardize and streamline its organization. At the time, changes in international regulations were also in full swing. But what did all this mean for risk management at the bank? We asked ING’s Constant Thoolen and Gilbert van Iersel.


According to Constant Thoolen, global head of financial risk at ING, the Accelerating Think Forward strategy, an updated version of the Think Forward strategy that they just call ATF, comprises several different elements.

"Standardization is a very important one. And from standardization comes scalability and comparability. To facilitate this standardization within the financial risk management team, and thus achieve the required level of efficiency, as a bank we first had to make substantial investments so we could reap greater cost savings further down the road."

And how exactly did ING translate this into financial risk management?

Thoolen: "Obviously, there are different facets to that risk, which permeates through all business lines. The interest rate risk in the banking book, or IRRBB, is a very important part of this. Alongside the interest rate risk in trading activities, the IRRBB represents an important risk for all business lines. Given the importance of this type of risk, and the changing regulatory complexion, we decided to start up an internal IRRBB program."

So the challenge facing the bank was how to develop a consistent framework for benchmarking and reporting the interest rate risk?

"The ATF strategy has set requirements for the consistency and standardization of tooling," explains Gilbert van Iersel, head of financial risk analysis. "On the one hand, our in-house QRM program ties in with this. We are currently rolling out a central system for our ALM activities, such as analyses and risk measurements—not only from a risk perspective but from a finance one too. Within the context of the IRRBB program, we also started to apply this level of standardization and consistency throughout the risk-management framework and the policy around it. We’re doing so by tackling standardization in terms of definitions, such as: what do we understand by interest rate risk, and what do benchmarks like earnings-at-risk or NII-at-risk actually mean? It’s all about how we measure and what assumptions we should make."

What role did international regulations play in all this?

Van Iersel: "An important one. The whole thing was strengthened by the new IRRBB guidelines published by the EBA in 2015. These reconciled the ATF strategy with external guidelines, which prompted us to start up the IRRBB program."

So regulations served as a catalyst?

Thoolen: "Yes indeed. But in addition to serving as a foothold, the regulations, along with many changes and additional requirements in this area, also posed a challenge. Above all, it remains in a state of flux, thanks to Basel, the EBA, and supervision by the ECB. On the one hand, it’s true that we had expected the changes, because IRRBB discussions had been going on for some time. On the other hand, developments in the regulatory landscape surrounding IRRBB followed one another quite quickly. This is also different from the implementation of Basel II or III, which typically require a preparation and phasing-in period of a few years. That doesn’t apply here because we have to quickly comply with the new guidelines."

Did the European regulations help deliver the standardization that ING sought as an international bank?

Thoolen: "The shift from local to European supervision probably increased our need for standardization and consistency. We had national supervisors in the relevant countries, each supervising in their own way, with their own requirements and methodologies. The ECB examined all these methodologies and distilled best practices from what it found. Now we have to deal with regulations that take in all Eurozone countries, which are also countries in which ING is active. Consequently, we are perfectly capable of making comparisons between the implementation of the ALM policy in the different countries. Above all, the associated risks are high on the agenda of policymakers and supervisors."

Van Iersel: "We have also used these standards in setting up a central treasury organization, for example, which is also complementary to the consistency and standardization process."

Thoolen: "But we’d already set the further integration of the various business units in motion, before the new regulations came into force. What’s more, we still have to deal with local legislation in the countries in which we operate outside Europe, such as Australia, Singapore, and the US. Our ideal world would be one in which we have one standard for our calculations everywhere."

What changed in the bank’s risk appetite as a result of this changing environment and the new strategy?

Van Iersel: "Based on newly defined benchmarks, we’ve redefined and shaped our risk appetite as a component part of the strategic program. In the risk appetite process we’ve clarified the difference between how ING wants to manage the IRRBB internally and how the regulator views the type of risk. As a bank, you have to comply with the so-called standard outlier test when it comes to the IRRBB. The benchmark commonly employed for this is the economic value of equity, which is value-based. Within the IRRBB, you can look at the interest rate risk from a value or an income perspective. Both are important, but they occasionally work against one another too. As a bank, we’ve made a choice between them. For us, a constant stream of income was the most important benchmark in defining our interest rate risk strategy, because that’s what is translated to the bottom line of the results that we post. Alongside our internal decision to focus more closely on income and stabilize it, the regulator opted to take a mainly value-based approach. We have explicitly incorporated this distinction in our risk appetite statements. It’s all based on our new strategy; in other words, what we are striving for as a bank and what will be the repercussions for our interest rate risk management. It’s from there that we define the different risk benchmarks."

Which other types of risk does the bank look at and how do they relate to the interest rate risk?

Van Iersel: “From the financial risk perspective, you also have to take into account aspects like credit spreads, changes in the creditworthiness of counterparties, as well as market-related risks in share prices and foreign exchange rates. Given that all these collectively influence our profitability and solvency position, they are also reflected in the Core Tier I ratio. There is a clear link to be seen there between the risk appetite for IRRBB and the overall risk appetite that we as a bank have defined. IRRBB is a component part of the whole, so there’s a certain amount of interaction between them to be considered; in other words, how does the interest rate risk measure up to the credit risk? On top of that, you have to decide where to deploy your valuable capacity. All this has been made clearer in this program.”

Does this mean that every change in the market can be accommodated by adjusting the risk appetite?

Thoolen: “Changing behavior can indeed influence risks and change the risk appetite, although not necessarily. But it can certainly lead to a different use of risk. Moreover, IFRS 9 has changed the accounting standards. Because the Core Tier 1 ratio is based on the accounting standard, these IFRS 9 changes determine the available capital too. If IFRS 9 changes the playing field, it also exerts an influence on certain risk benchmarks.”

In addition to setting up a consistent framework, the standardization of the models used by the different parts of ING was also important. How does ING approach the selection and development of these models?

Thoolen: “With this in mind, we’ve set up a structure with the various business units that we collaborate with from a financial risk perspective. We pay close attention to whether a model is applicable in the environment in which it’s used. In other words, is it a good fit with what’s happening in the market, does it cover all the risks as you see them, and does it have the necessary harmony with the ALM system? In this way, we want to establish optimum modeling for savings or the repayment risk of mortgages, for example.”

But does that also work for an international bank with substantial portfolios in very different countries?

Thoolen: “While there is model standardization, there is no market standardization. Different countries have their own product combinations and, outside the context of IRRBB, have to comply with regulations that differ from other countries. A savings product in the Netherlands will differ from a savings product in Belgium, for example. It’s difficult to define a one-size-fits-all model because the working of one market can be much more specific than another—particularly when it comes to regulations governing retail and wholesale. This sometimes makes standardization more difficult to apply. The challenge lies in the fact that every country and every market is specific, and the differences have to be reconciled in the model.”

Van Iersel: “The model was designed to measure risks as well as possible and to support the business to make good decisions. Having a consistent risk appetite framework can also make certain differences between countries or activities more visible. In Australia, for example, many more floating-rate mortgages are sold than here in the Netherlands, and this alters the sensitivity of the bank’s net interest income when the interest rate changes. Risk appetite statements must facilitate such differences.”


Thoolen: “But opting for a single ALM system imposes this model standardization on you and ensures that, once it’s integrated, it will immediately comply with many conditions. The process is still ongoing, but it’s a good fit with the standardization and consistency that we’re aiming for.”


In conjunction with the changing regulatory environment, the Accelerating Think Forward strategy formed the backdrop for a major collaboration with Zanders: the IRRBB project. In the context of this project, Zanders researched the extent to which the bank’s interest rate risk framework complied with the changing regulations, and assessed ING’s new interest rate risk benchmarks against best practices. Based on the choices made by the bank, Zanders helped improve and implement the new framework and standardized models in a central risk management system.


Mortgage valuation, a discounted cash flow method

August 2017
3 min read

In the current low interest rate environment, and with increased regulatory requirements, modeling mortgages for valuation purposes has become more complex. Additionally, the applicable valuation method depends on the purpose of the valuation.


The most common valuation method for mortgage funds is known as the ‘fair value’ method, consisting of two building blocks: the cash flows and a discount curve. The first prerequisite to apply the fair value method is to determine future cash flows, based on the contractual components and behavioral modelling. The other prerequisite is to derive the appropriate rate for discounting via a top-down or bottom-up approach.

Two building blocks

The appropriate approach and level of complexity in the mortgage valuation depend on the underlying purpose. Examples of valuation purposes are: regulatory, accounting, risk, or sale of the mortgage portfolio. For example, BCBS, IRRBB, Solvency II, IFRS, and EBA requirements all call for (specific) mortgage valuation methods. The two building blocks for a ‘fair value’ calculation of mortgages are the expected cash flows and a discount curve.

The market value is the sum of the future expected cash flows discounted to the moment of valuation with an appropriate curve. For both building blocks, model choices have to be made, resulting in a tradeoff between accuracy and computational effort.

Figure 1: Constructing the expected cash flows from the contractual cash flows for a loan with an annuity repayment type.

Cash flow schedule

The contractual cash flows are projected cash flows, including repayments. These can be derived based on the contractually agreed loan components, such as the interest rate, the contractual maturity and the redemption type.

The three most commonly used redemption types in the mortgage market are:

  • Bullet: interest only payments, no contractual repayment cash flows except at maturity
  • Linear: interest (decreasing monthly) and constant contractual repayment cash flows
  • Annuity: fixed cash flows, consisting of an interest and contractual repayment part
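As an illustration, the contractual cash flows for these three redemption types can be sketched in a few lines of Python (a simplified sketch with annual periods and a constant rate; actual mortgage schedules are typically monthly):

```python
# Contractual cash flow schedules for the three redemption types.
# Simplified sketch: annual periods, constant interest rate per period.

def bullet(notional, rate, n):
    # Interest-only payments; the full notional is repaid at maturity.
    return [notional * rate + (notional if t == n else 0) for t in range(1, n + 1)]

def linear(notional, rate, n):
    # Constant repayment; interest on the decreasing outstanding balance.
    repay = notional / n
    return [(notional - (t - 1) * repay) * rate + repay for t in range(1, n + 1)]

def annuity(notional, rate, n):
    # Fixed total payment, consisting of an interest and a repayment part.
    payment = notional * rate / (1 - (1 + rate) ** -n)
    return [payment] * n
```

For example, a EUR 200,000 loan at 3% with 20 annual payments gives an annuity of roughly EUR 13,443 per year.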

However, the expected cash flows will most likely differ from this contractually agreed pattern due to additional prepayments. Especially in the current low interest rate environment, borrowers frequently make prepayments on top of the scheduled repayments.

Figure 1 shows how to calculate an expected cash flow schedule by adding the prepayment cash flows to the contractual cash flows. There are two methods to derive the prepayments: client behavior dependent on interest rates and client behavior independent of interest rates. The independent method uses a historical analysis, implying a backward-looking element. This historical analysis can include a dependency on certain contract characteristics.

On the other hand, interest rate dependent behavior is forward looking and depends on the expected level of interest rates. Monte Carlo simulations can be used to model interest-dependent behavior.
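A minimal sketch of the interest-rate-independent approach: apply a constant annual prepayment rate (CPR), as estimated from a historical analysis, on top of an annuity schedule. The CPR value and the fixed-annuity simplification are illustrative assumptions:

```python
def expected_cash_flows(notional, rate, n, cpr):
    # Expected cash flows = contractual annuity schedule plus prepayments.
    # Each period, a fraction `cpr` of the remaining balance is prepaid;
    # the original annuity payment is kept fixed for simplicity, so the
    # loan simply pays off early when prepayments occur.
    payment = notional * rate / (1 - (1 + rate) ** -n)
    outstanding, flows = notional, []
    for _ in range(n):
        if outstanding <= 0:
            flows.append(0.0)
            continue
        interest = outstanding * rate
        scheduled = min(payment - interest, outstanding)
        prepayment = (outstanding - scheduled) * cpr
        flows.append(interest + scheduled + prepayment)
        outstanding -= scheduled + prepayment
    return flows
```

With `cpr = 0` this reproduces the contractual annuity; a positive CPR shifts cash flows forward in time, which is exactly the effect shown in Figure 1.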

Another important factor in client behavior is the penalty paid in case of a prepayment above a contractually agreed threshold. These costs are country and product specific. In Italy, for example, these extra costs do not exist, which can currently result in high prepayment rates.

Discount curve

The curve used for cash flow discounting is always a zero curve. The zero curve is constructed from observed interest rates, mapped onto zero-coupon bonds across maturities. There are three approaches to derive the rates of this discount curve: the top-down approach, the bottom-up approach, or the negotiation approach. The first two are the most relevant and common.

In theory, an all-in discount curve consists of a risk-free rate and several spread components. The ‘base’ interest rate curve is the risk-free interest rate term structure in the market at the valuation date, in the applicable currency and interest fixing frequency (or adjusted using currency and basis spreads). The spreads included depend on the purpose of the valuation. For a fair value calculation, the following spreads are added: liquidity spread, credit spread, operational cost, option cost, cost of capital and profit margin. Examples of spreads included for other valuation purposes are offering costs and origination fees.
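As a simple illustration of how the spreads enter the discounting, assuming annual expected cash flows and a single flat total spread on top of the zero rates (in practice each spread component may itself be maturity-dependent):

```python
def fair_value(cash_flows, zero_rates, spread):
    # Discount each annual expected cash flow on the all-in curve:
    # the risk-free zero rate for that maturity plus the total spread
    # (liquidity, credit, operational cost, option cost, ...).
    return sum(cf / (1 + z + spread) ** t
               for t, (cf, z) in enumerate(zip(cash_flows, zero_rates), start=1))
```

For instance, a single cash flow of 105 in one year, discounted at a 5% zero rate and zero spread, is worth exactly 100; any positive spread lowers the value.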

Top-down versus Bottom-up

The chosen calculation approach depends on the available data, the ability to determine spread components, preferences and the purpose of the valuation.

A top-down method derives the applied rates of the discount curve from all-in mortgage rates at portfolio level. Different rates should be used to construct a discount curve per mortgage type and LTV level, taking into account any national guarantee scheme (NHG in the Netherlands). From the all-in mortgage rates, subtract the spreads that should not be part of the discount curve, such as the offering costs. Use this top-down approach when limited knowledge or tools are available to derive all the individual spread components. The all-in rates can be obtained from the following sources: mortgage rates in the market, own mortgage rates, or a mortgage pricing model.

Figure 2

The bottom-up approach constructs the applied discount curve by adding all applicable spreads on top of the zero curve at contract level. This method requires that several spread components can be calculated separately. The top-down approach is quicker but less precise; the bottom-up approach is more accurate but computationally heavier. Additionally, the bottom-up method is only possible if the appropriate spreads are known or can be derived. One example is the credit spread, which can be determined from expected losses based on a historical analysis and current market conditions.
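The two approaches can be summarized in a small sketch (the rates and spreads below are illustrative assumptions, not market data):

```python
def top_down_rate(all_in_mortgage_rate, excluded_spreads):
    # Top-down: start from the observed all-in mortgage rate and subtract
    # the spreads that should not be part of the discount curve,
    # such as offering costs.
    return all_in_mortgage_rate - sum(excluded_spreads)

def bottom_up_rate(zero_rate, spreads):
    # Bottom-up: start from the risk-free zero rate and add each
    # separately derived spread (liquidity, credit, option cost, ...).
    return zero_rate + sum(spreads)
```

When the spread decomposition is consistent, both routes yield the same discount rate: for example, a 3.0% all-in rate minus 0.2% offering costs equals a 1.0% zero rate plus a 1.8% total spread.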

In short

A fair value calculation performed by a discounted cash flow method consists of two building blocks: the expected cash flows and a discount curve. This requires several model choices before calculating a fair value of a mortgage (portfolio).

The expected cash flow model is based on the contractual cash flows and any additional prepayments. The mortgage prepayments can be modeled by assuming interest dependent or interest independent client behavior.

To construct the discount curve, the relevant spreads should be added to the risk-free curve. The decision for a top-down or bottom-up approach depends on the available data, the ability to determine spread components, preferences and the purpose of the valuation.

These important choices do not only apply to fair value calculations, but are relevant to many other mortgage valuation purposes as well.

 Zanders Valuation Desk

Independence, high quality, market practice, and compliance with accounting standards are the main drivers of our Valuation Desk. For example, we ensure high quality and professionalism with a strict, complete, and automated daily check on the market data from our market data provider. Furthermore, we have increased our independence by implementing the F3 solution from FINCAD in our valuation models. This permits us to value a larger range of financial instruments, with greater complexity, at a high level of quality and accuracy.

For more information or questions concerning valuation issues, please contact Pierre Wernert: p.wernert@zanders.eu.

IFRS 17: the impact of the building blocks approach

August 2017
3 min read



The new standards will have a significant impact on the measurement and presentation of insurance contracts in the financial statements and require significant operational changes. This article takes a closer look at the new standards, and illustrates the impact with a case study.

The standard model, as defined by IFRS 17, of measuring the value of insurance contracts is the ‘building blocks approach’. In this approach, the value of the contract is measured as the sum of the following components:

  • Block 1: Sum of the future cash flows that relate directly to the fulfilment of the contractual obligations.
  • Block 2: Time value of the future cash flows. The discount rates used to determine the time value reflect the characteristics of the insurance contract.
  • Block 3: Risk adjustment, representing the compensation that the insurer requires for bearing the uncertainty in the amount and timing of the cash flows.
  • Block 4: Contractual service margin (CSM), representing the amount available for overhead and profit on the insurance contract. The purpose of the CSM is to prevent a gain at initiation of the contract.

Risk adjustment vs risk margin

IFRS 17 does not provide full guidance on how the risk adjustment should be calculated. In theory, the compensation required by the insurer for bearing the risk of the contract would be equal to the cost of the required capital. As most insurers within the IFRS jurisdiction capitalize based on Solvency II (SII) standards, it is likely that they will leverage their past experience. In fact, there are many similarities between the risk adjustment and the SII risk margin.

The risk margin represents the compensation required for non-hedgeable risks by a third party that would take over the insurance liabilities. However, in practice, this is calculated using the capital models of the insurer itself. Therefore, it seems likely that the risk margin and risk adjustment will align. Differences can be expected though. For example, SII allows insurers to include operational risk in the risk margin, while this is not allowed under IFRS 17.

Liability adequacy test

Determining the impact of IFRS 17 is not straightforward: the current IFRS accounting standard leaves a lot of flexibility to determine the reserve value for insurance liabilities (one of the reasons for introducing IFRS 17). The reserve value reported under current IFRS is usually grandfathered from earlier accounting standards, such as Dutch GAAP. In general, these reserves can be defined as the present value of future benefits, where the technical interest rate and the assumptions for mortality are locked-in at pricing.

However, insurers are required to perform liability adequacy testing (LAT), where they compare the reserve values with the future cash flows calculated with ‘market consistent’ assumptions. As part of the market consistent valuation, insurers are allowed to include a compensation for bearing risk, such as the risk adjustment. Therefore, the biggest impact on the reserve value is expected from the introduction of the CSM.

The IASB has defined a hierarchy for the approach to measure the CSM at transition date. The preferred method is the ‘full retrospective application’. Under this approach, the insurer is required to measure the insurance contract as if the standard had always applied. Hence, the value of the insurance contract needs to be determined at the date of initial recognition and consecutive changes need to be determined all the way to transition date. This process is outlined in the following case study.

A case study

The impact of the new IFRS standards is analyzed for the following policy:

  • The policy covers the risk that the mortgage owner dies before the maturity of the loan. If this event occurs, the policy pays out the remaining notional of the loan.
  • The mortgage is issued on 31 December 2015 and has an initial notional of € 200,000, amortized over 20 years. The interest rate is set at 3 per cent.
  • The policy pays an annual premium of € 150. The annual estimated costs of the policy are equal to 10 per cent of the premium.

In the case of this policy, an insurer needs to capitalize for the risk that the policy holder’s life expectancy decreases and the risk that expenses will increase (e.g. due to higher-than-expected inflation). We assume that the insurer applies the SII standard formula, where the total capital is the sum of the capital amounts for the individual risk types, each based on a 99.5 per cent VaR approach, taking diversification into account.

The cost of capital would then be calculated as follows:

  • Capital for mortality risk is based on an increase of 15 per cent of the mortality rates.
  • Capital for expense risk is based on an increase of 10 per cent in expense amount combined with an increase of 1 per cent in the inflation.
  • The diversification between these risk types is assumed to be 25 per cent.
  • Future capital levels are assumed to be equal to the current capital levels, scaled for the decrease in outstanding policies and insurance coverage.
  • The cost-of-capital rate equals 6 per cent.
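The cost-of-capital calculation above can be sketched in a few lines of Python. The stand-alone capital amounts are hypothetical, and the flat 25 per cent diversification benefit and 2 per cent discount rate are simplifying assumptions, not the case study's actual model:

```python
# Hedged sketch of a cost-of-capital risk adjustment using the parameters
# above; the stand-alone capital amounts (400 and 200) are illustrative.

def aggregate_capital(cap_mortality, cap_expense, diversification=0.25):
    """Sum the stand-alone capitals and apply a flat diversification benefit."""
    return (cap_mortality + cap_expense) * (1 - diversification)

def cost_of_capital(capital_per_year, discount_rate=0.02, coc_rate=0.06):
    """Discounted 6 per cent charge on the projected capital levels."""
    return sum(
        coc_rate * cap / (1 + discount_rate) ** (t + 1)
        for t, cap in enumerate(capital_per_year)
    )

# Capital is scaled down linearly with the outstanding coverage over 20 years.
caps = [aggregate_capital(400, 200) * (1 - t / 20) for t in range(20)]
risk_adjustment = cost_of_capital(caps)
```

Note that the real SII standard formula aggregates capital via a correlation matrix; the flat benefit here only mirrors the 25 per cent assumption stated above.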

At initiation (i.e. 2015 Q4), the value of the contract under the new standards equals the sum of:

  • Block 1: € 482
  • Block 2: minus € 81
  • Block 3: minus € 147
  • Block 4: minus € 254
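At initiation the CSM absorbs the expected profit, so the four blocks sum to zero. A minimal sketch of that relation, where the block labels in the comments are our reading of the case study figures:

```python
# Building blocks at initial recognition (2015 Q4); the CSM (block 4) is
# calibrated so that no profit is recognized at inception.

block_1 = 482    # expected value of future cash flows
block_2 = -81    # effect of discounting
block_3 = -147   # risk adjustment

fulfilment_cash_flows = block_1 + block_2 + block_3  # 254
csm = -fulfilment_cash_flows                         # minus 254

assert fulfilment_cash_flows + csm == 0
```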

Consecutive changes

The insurer will measure the sum of blocks 1, 2 and 3 (which we refer to as the fulfilment cash flows) and the remaining amount of the CSM at each reporting date. The amounts typically change over time, in particular when expectations about future mortality and interest rates are updated. We distinguish four different factors that will lead to a change in the building blocks:

Step 1. Time effect
Over time, both the fulfilment cash flows and the CSM are fully amortized. The amortization profile of both components can be different, leading to a difference in the reserve value.
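As a sketch of the time effect, the CSM could be released in proportion to the insurance coverage provided (here the outstanding mortgage notional); the release pattern below is an illustrative assumption, not the prescribed profile:

```python
# Hypothetical coverage-unit amortization of the CSM over the 20-year term.

def amortize_csm(csm, coverage_units):
    """Release the CSM in proportion to the coverage provided each year."""
    total = sum(coverage_units)
    return [csm * units / total for units in coverage_units]

# Coverage falls with the linearly amortizing notional of the mortgage.
coverage = [200_000 * (1 - t / 20) for t in range(20)]
releases = amortize_csm(254, coverage)  # yearly release of the CSM
```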

Step 2. Realized mortality is lower than expected
In our case study, the realized mortality is about 10 per cent lower than expected. This difference is recognized in P&L, leading to a higher profit in the first year. The effect on the fulfilment cash flows and CSM is limited. Consequently, the reserve value will remain roughly the same.

Step 3. Update of mortality assumptions
Updates of the mortality assumptions affect the fulfilment cash flows, which is simultaneously recognized in the CSM. The offset between the fulfilment cash flows and the CSM will lead to a very limited impact on the reserve value. In this case study, the update of the life table results in higher expected mortality and increased future cash outflows.

Step 4. Decrease in interest rates
Updates of the interest rate curve result in a change in the fulfilment cash flows. This change is not offset in the CSM but is recognized in other comprehensive income. Therefore, a decrease in the discount curve results in a significant change in the insurance liability. Our case study assumes a decrease in interest rates from 2 per cent to 1 per cent. As a result, the fulfilment cash flows increase, which is immediately reflected in an increase in the reserve value.

The impact of each step on the reserve value and underlying blocks is illustrated below.

Onwards

From here on, the policy evolves as expected, meaning that mortality is realized as projected and discount rates do not change anymore. The reserve value and P&L over time will evolve as illustrated below.

The profit gradually decreases over time in line with the insurance coverage (i.e. outstanding notional of the mortgage). The relatively high profit in 2016 is (mainly) the result of the realized mortality that was lower than expected (step 2 described above).

As described before, under the full retrospective application the insurer is required to go all the way back to initial recognition to measure the CSM and all consecutive changes, which means a deep dive into its policy administration systems. The IASB has acknowledged this by allowing insurers to implement the standards three years after final publication. The operational effort involved is huge, and insurers have already started their impact analyses. In particular, the risk adjustment seems a challenging topic, as it requires an understanding of the insurer’s capital models.

Zanders can support these analyses and can rely on its past experience with the implementation of Solvency II.

The additional insights of stress scenarios: Delta Lloyd Bank

In order to assess their risk management practices, the Dutch Central Bank (DNB) requires all banks to complete an annual Supervisory Review and Evaluation Process (SREP), including capital and liquidity management self-assessments. To calculate the effect of specific stress test scenarios on the balance sheet and profitability, Delta Lloyd Bank asked Zanders to build a stress test model.


Delta Lloyd Bank is the only bank within the Delta Lloyd Group; its business model is to offer mortgages and attract savings. With a balance sheet of approximately € 5 billion, the bank is a relatively small player in the Dutch banking arena. Delta Lloyd Bank operates in an ever-changing legal and regulatory environment, so there is a clear interest in consistently demonstrating how the bank can maintain compliance over the next few years.

Balance sheet projection tool

Delta Lloyd Bank has an asset liability management (ALM) tool which maps out expected mortgage and savings flows. The bank sees how much interest income mortgages generate over a certain period and when they will be paid back. “We can forecast this for years ahead,” says Andries Broekhuijsen, team leader Financial Risk at Delta Lloyd Bank. “Mortgages are calculated at contract level, and using our ALM tool we can also decide whether to grant new mortgages. You get a projection of how the balance sheet will develop, and in conjunction with the Business Control department you can calculate a P&L (profit and loss account) for the next five years.” Broekhuijsen adds that the bank then goes a step further. “We have capital ratios, liquidity ratios and several other requirements stemming from the regulator. On the basis of the P&L and balance sheet developments we can plot these over time. We have developed an environment where you can see which assumption or development satisfies which regulatory requirement, where you don’t comply, and how you can do something about it. For us, within the company, this has become a well-structured process which we use every quarter to forecast one or more years ahead – standard balance sheet forecasting. In the balance sheet projection tool, however, it was not possible to work out different macro-economic scenarios.”

Macro-economic developments

Delta Lloyd Bank wanted to add stress tests to the tool and asked Zanders to help. “The balance sheet projection tool formed the basis for the stress test model that Zanders developed,” explains Koen Vogels, Actuarial Analyst at Delta Lloyd Bank. “A sort of extra layer is added to the existing tool, so when we input certain developments, the impact of different scenarios is presented clearly and comprehensibly.” Macro-economic developments, like interest rate increases, a drop in house prices or a rise in unemployment, after all, affect the value of the bank’s investments. Vogels: “We needed the insight afforded by the stress tests; what happens with this projection and what are the sensitive issues? Which ratios, for example, change within a certain scenario?”

The balance sheet forecast made by the bank assumes a stable economic situation. “We don’t have an economic office which takes a structural view of economic developments,” Broekhuijsen says. “For our ALM we assume that most economic variables remain constant. In some cases that is not realistic. Using the report Zanders produced, we have been able to develop a number of scenarios based on various economic developments. Unemployment and house prices are very important for us as a mortgage lender. House prices determine how much security we have, while high unemployment can increase the chance of people not being able to pay back their mortgage. Picturing such developments gives us greater insight into our risk profile. We have a relatively high number of NHG mortgages (mortgages which fall under the National Mortgage Guarantee, ed.) and it appears that even if house prices drop substantially, we run relatively low risk.”

A more dynamic risk situation

Even though Delta Lloyd Bank has several years of experience carrying out stress tests, it saw room to improve accuracy and efficiency. “The stress test model sets out how certain macro-economic variables impact the relevant risk factors and, in turn, the balance sheet. In that way an estimate can be made of the capital and liquidity ratios in specific market circumstances,” says consultant Steyn Verhoeven, who helped develop the model on behalf of Zanders. “The model translates certain developments, in unemployment figures for example, into an effect on the probability of payment default by clients. The balance sheet projection tool previously only highlighted one base scenario, while the stress test model can cover various macro-economic scenarios. This provides the bank with a much wider and more dynamic risk picture.” Stress tests not only quantify a minimum capital buffer, they also instigate discussion on how to deal with negative developments, Verhoeven thinks: “Results from a stress test give management valuable insight into the risk profile of the bank. Which conditions should they be paying the most attention to, and what means do they have to turn the tide?”

Scenarios and new assumptions

How do you determine exactly which scenarios you want to understand? Broekhuijsen: “Our basis was the stress test from the EBA (European Banking Authority, ed.) and from that we refined the number of scenarios. There is, for example, a ‘baseline scenario’, which is a positive scenario that assumes an increase in house prices. We don’t just look at how bad it can get, but also at improvement.” The challenge with developing scenarios is that they have to contain enough stress but also tell a useful story, Verhoeven says. “You can make a scenario as extreme as you like, but that does not necessarily furnish the most valuable insights. When developing the stress test model we deliberately opted to work out several scenarios with different stress levels.” A second challenge is the so-called second-order effect of a scenario, Broekhuijsen adds. “Take rising interest rates. These result in the repricing of mortgages; after a certain time the fixed-interest period comes to an end and the mortgage rate goes up. But this could also mean that people will want to pay back their mortgage more quickly, because otherwise their costs will increase too much. We have not taken that sort of interaction effect into account, and this is a point that needs improvement.”

Reverse stress tests

Over the past few years, regulators have put more focus on stress tests. “Stress tests identify the circumstances in which business as usual is no longer enough to keep your organization out of dangerous territory,” Broekhuijsen explains. “But if all goes well, this only happens in very extreme circumstances.” As well as sensitivity analyses and scenario analyses, many banks carry out reverse stress testing. “You use this to make a recovery plan for a near default, in which you evaluate whether you have taken enough measures to be able to recover. You reason backwards: you determine the capital ratio from which recovery is unlikely and then investigate which development could cause this to happen. It could be that the credit risk when house prices drop is much lower than the interest rate risk resulting from a drop in interest rates. Each risk has a different impact,” according to Broekhuijsen.

Complex material

With the aid of the stress test model, Delta Lloyd Bank produces a comprehensive stress test report in a short period of time. Broekhuijsen: “It comprises 15 pages with 5 scenarios and sometimes 20 sensitivity analyses. That is a complete package which we run as soon as we have the quarterly update of our strategic plan. We can show all the issues. The Asset and Liability Committee (ALCO) uses the information to determine whether the planned ratios are not too low or too high. That in turn has an impact on our strategy.” The stress test model also enables the bank to anticipate new regulations. “It is a complex subject,” says Broekhuijsen. “Because so many demands are made on banks by legal and regulatory bodies, it is difficult to develop a long-term strategy which fulfills all of them. It is therefore very important that we have this tool. We can add new regulations to the tool and change our strategy as a result; scenarios are thus restricted to everything which is actually possible, and on that basis we can make our selection. For example, under IFRS 9, prognoses become more relevant. Elements from the stress test environment are also requested by the regulatory bodies.”

Further integration

Broekhuijsen is happy with the result and the teamwork. “Even the user interface which Zanders built was an eye opener; it is extremely user friendly. We had very little insight and now we have a great starting point. You can do a sensitivity analysis very quickly by using one single variable from the various scenarios in the stress test. We also have other points we can develop, but our emphasis is now on further integration of the stress tests. At the same time we are trying to make the risk picture more dynamic and more interactive.”


Hedge accounting changes under IFRS 9

October 2016
3 min read

With the advance of the current low interest rate environment and increased regulatory requirements, modeling mortgages for valuation purposes is more complex. Additionally, the applicable valuation method depends on the purpose of the valuation.


Cross-currency interest rate swaps (CC-IRS), options, FX forwards and commodity trades are just a few examples of financial instruments that will be affected by the upcoming changes. The time value, forward points and cross-currency basis spread will receive a different accounting treatment under IFRS 9. At Zanders, we feel the need to clarify these key changes, which deserve as much awareness as possible.

1. Accounting for the forward element in foreign currency forwards

Each FX forward contract possesses a spot and a forward element. The forward element represents the interest rate differential between the two currencies. Under IFRS 9 (as under IAS 39), it is allowed to designate either the entire contract or just the spot component as the hedging instrument. When designating the spot component only, the change in fair value of the forward element is recognised in OCI and accumulated in a separate component of equity. Simultaneously, the fair value of the forward points at initial recognition is amortised, most likely linearly, over the life of the hedge.

This accounting treatment is only allowed if the critical terms are aligned (similar). If at inception the actual value of the forward element exceeds the aligned value, changes in the fair value based on the aligned item go through OCI, and the difference between the fair value of the actual and aligned forward elements is recognized in P&L. If the value of the aligned forward element exceeds the actual value at inception, changes in fair value are based on the lower of aligned versus actual and go to OCI, while the remaining change of the actual value is recognized in P&L.

Please refer to the example below:

In this example, we consider an entity X which is hedging a future receivable with an FX forward contract.

MtM change of the forward = 105,000 (spot element) + 15,000 (forward element) = 120,000.
MtM change of the hedged item = 105,000 (spot element) + 5,000 (forward element) = 110,000.

We look at alternatives under IAS 39 and IFRS 9 that show different accounting treatments depending on whether the spot and forward elements are separated.

Under IAS 39 and without a spot/forward separation, the hedging instrument represents the sum of the spot and forward elements (105,000 spot + 15,000 forward = 120,000). With the hedged item consisting of a 105,000 spot element and a 5,000 forward element, and the hedge ratio being within the boundaries, the lower of the hedging instrument and hedged item movements is recognized in OCI, and the difference between the hedging instrument and the hedged item goes to P&L.

However, with the spot/forward separation under IAS 39, the forward component is not included in the hedging relationship and is therefore taken straight to P&L. Everything that exceeds the movement of the hedged item is considered an “over hedge” and is also booked in P&L.

Lines 3 and 4 under IFRS 9 show accounting treatments comparable to those under IAS 39. The changes come in when we examine line 5, where the forward element of 5,000 can be recognized in OCI. In this case, a test on both the spot and the forward element is performed, compared to the previous line, where only one test takes place.
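The ‘lower of’ mechanics described above can be sketched as follows; the function is our simplified reading of the effectiveness split, applied to the example’s numbers:

```python
# Split an MtM movement into OCI (effective part) and P&L (over hedge),
# following the lower-of logic described in the text.

def split_oci_pnl(mtm_instrument, mtm_hedged_item):
    """Return (OCI, P&L): OCI takes the lower movement, the excess hits P&L."""
    oci = min(abs(mtm_instrument), abs(mtm_hedged_item))
    pnl = abs(mtm_instrument) - oci
    return oci, pnl

# No spot/forward separation: instrument 120,000 versus hedged item 110,000.
oci, pnl = split_oci_pnl(120_000, 110_000)  # 110,000 to OCI, 10,000 to P&L
```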

2. Rebalancing in a commodity hedge relation

Under the influence of changing economic circumstances, it can be necessary to change the hedge ratio, i.e. the ratio between the amount of hedged item and the amount of hedging instruments. Under IAS 39, changes to a hedge ratio require the entity to discontinue hedge accounting and restart with a new hedging relationship that captures the desired changes. The IFRS 9 hedge accounting model allows you to refine your hedge ratio without having to discontinue the hedge relationship. This can be achieved by rebalancing.

Rebalancing is possible if there is a situation where the change in the relationship of the hedging instrument and the hedged item can be compensated by adjusting the hedge ratio. The hedge ratio can be adjusted by increasing or decreasing either the number of designated hedging instruments or hedged items.

When rebalancing a hedging relationship, an entity must update its documentation of the analysis of the sources of hedge ineffectiveness that are expected to affect the hedging relationship during its remaining term.
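Rebalancing itself is only an adjustment of designated quantities. A minimal sketch, with hypothetical volumes, taking the hedge ratio as hedged item over hedging instruments, as defined above:

```python
# Adjust the designated hedging volume to reach a target hedge ratio
# without de-designating the relationship; all quantities are hypothetical.

def rebalanced_instrument_qty(hedged_qty, target_ratio):
    """Hedging-instrument quantity implied by hedged quantity / ratio."""
    return hedged_qty / target_ratio

# 100 units of hedged item; move to a ratio of 100:105 (i.e. 100/105).
new_qty = rebalanced_instrument_qty(100, target_ratio=100 / 105)
```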

3. Accounting for the time value of options

Please refer to the example below:

Entity X is hedging a forecast receivable with an FX call.

MtM change of the option = 100,000 (intrinsic value) + 40,000 (time value) = 140,000.
MtM change of the hedged item = 100,000 (intrinsic value) + 30,000 (time value) = 130,000.

In example 3, we consider entity X hedging a forecast receivable via an FX call. Note that under IAS 39 the hedged item cannot contain an optionality if this optionality is not present in the underlying exposure. Hence, in this example, the hedged item cannot contain any time value. The time value of 30,000 can be used under IFRS 9, but only by means of a separate test (see line 5).

In line 1, we can see that without a time-intrinsic separation, the hedge relationship is no longer within the 80-125% boundary; it therefore needs to be discontinued and the full MtM has to be booked in P&L. In line 2, there is a time-intrinsic separation, and the 40,000 representing the time value of the option is not included in the hedge relationship, meaning that it goes straight to P&L.

Under IFRS 9 with no time-intrinsic separation (line 3), the hedging relationship is accounted for in the usual manner, as the ineffectiveness boundary is not applicable: 100,000 is recognized in OCI, and the over-hedged 40,000 goes to P&L.

However, the time-intrinsic separation under IFRS 9 in line 4 is similar to line 2 under IAS 39, in which we choose to immediately remove the time value of the option from the hedging relationship. We therefore have to account for the 40,000 of time value in P&L.

In the last line, we separate the time and intrinsic values, but the time value of the option is intended to be booked in OCI. In this case, a test on both the intrinsic and the time element is performed. We can therefore recognize 100,000 in the intrinsic OCI, 30,000 in the time OCI, and 10,000 as an over hedge in P&L.
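The separate tests in the last line can be sketched with the same lower-of logic applied per component; this is our interpretation of how the aligned values cap the OCI amounts:

```python
# Per-component lower-of test: OCI is capped at the aligned value,
# any excess of the actual movement is an over hedge in P&L.

def component_split(actual, aligned):
    """Return (OCI, P&L) for one component of the hedging instrument."""
    oci = min(actual, aligned)
    return oci, actual - oci

intrinsic_oci, intrinsic_pnl = component_split(100_000, 100_000)
time_oci, time_pnl = component_split(40_000, 30_000)
# intrinsic OCI 100,000; time OCI 30,000; over hedge of 10,000 in P&L
```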

4. Cross-currency basis spread is considered a cost of hedging

The cross-currency basis spread can be defined as the liquidity premium of one currency over the other. This premium applies to exchanges of currencies in the future, e.g. via a hedging instrument like an FX forward contract. If a cross-currency interest rate swap is used in combination with a single-currency hedged item, for which this spread is not relevant, hedge ineffectiveness can arise.

In order to cope with this mismatch, the IASB decided to expand the requirements regarding the costs of hedging. Hedging costs can be seen as costs incurred to protect against unfavourable changes. Similar to the accounting for the forward element of the forward rate, an entity can exclude the cross-currency basis spread and account for it separately when designating a hedging instrument. In case a hypothetical derivative is used, the same principle applies: IFRS 9 states that the hypothetical derivative cannot include features that do not exist in the hedged item. Consequently, the cross-currency basis spread cannot be part of the hypothetical derivative in the previously mentioned case, which means that hedge ineffectiveness will exist.

Please refer to the example below:

In example 4, we consider an entity X hedging a USD loan with a CCIRS.

MtM change of CCIRS = 215,000 – 95,000 (cross-currency basis) = 120,000.
MtM change of the hedged item = 195,000 – 90,000 (cross-currency basis) = 105,000.

Under IAS 39, there is only one way to account for the CCIRS. The full amount of 120,000 (including the –95,000 cross-currency basis) is considered the hedging instrument, meaning that 105,000 can be recognized in OCI and 15,000 of over hedge has to go to P&L.

Under IFRS 9, there is the option to exclude the cross-currency basis and account for it separately.

In line 2, we can see the treatment under IFRS 9 when the cross-currency basis is included: the cross-currency basis cannot be included in the hedged item, so there is an under hedge of 75,000.

In line 3, we exclude the cross-currency basis from the test for the hedging instrument. After recognizing the MtM movement of 195,000 in OCI, we account for the 95,000 of cross-currency basis, as well as -/- 20,000 of over hedge, in P&L. In line 4, the cross-currency basis is included in a separate hedge relationship; we therefore perform an extra test on the cross-currency basis (aligned versus actual values). From the first test, -/- 195,000 is recognized in OCI and -/- 20,000 (the “over hedge” part) in P&L; from the cross-currency basis test, 90,000 goes to OCI and 5,000 has to be included in P&L.
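Line 4 can be sketched as two independent lower-of tests, one on the swap excluding the cross-currency basis and one on the basis element; signs are ignored here for simplicity, and the amounts are taken from the example:

```python
# Two lower-of tests for line 4 of the CCIRS example; amounts are treated
# as absolute movements, with the excess over the hedged item going to P&L.

def lower_of(mtm_instrument, mtm_hedged_item):
    """Return (OCI, P&L) for one hedge relationship."""
    oci = min(abs(mtm_instrument), abs(mtm_hedged_item))
    return oci, abs(mtm_instrument) - oci

swap_oci, swap_pnl = lower_of(215_000, 195_000)   # 195,000 OCI, 20,000 P&L
basis_oci, basis_pnl = lower_of(95_000, 90_000)   # 90,000 OCI, 5,000 P&L
```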

The forward-looking provisions of IFRS 9

August 2016
3 min read



Most banks are struggling to work out how to implement the new impairment rules. Uncertainty over how to deal with expected credit losses that take into account future macroeconomic scenarios, as required by IFRS 9, means credit risk modeling experts, quants and finance experts are in uncharted waters, and different firms hold different opinions on the matter. The primary objective of accounting standards is to provide financial information that stakeholders find useful when making decisions, and the new rules regarding provisions will make reserves more timely and sufficient. With the new standard, however, banks are squeezed between P&L volatility, model risk, macroeconomic forecasting and compliance with accounting standards.

Impact

IFRS 9 will, among other things, rock the balance sheet and affect business models, risk awareness, processes, analytics, data and systems across several dimensions.

We will name a few related to the financials:

  • Transition from IAS 39 to IFRS 9 will lead to a change in the level of provisions for credit losses. The transition from the current provisions, which are based only on actual losses and incurred but not reported (IBNR) losses, to an expected loss is likely to have a significant impact on shareholder equity, net income and capital ratios.
  • P&L volatility is expected to increase after transition, since deterioration in credit quality or changes in expected credit loss will have a direct impact on P&L. The P&L volatility will, however, differ significantly per type of credit portfolio, depending also on counterparty ratings and remaining maturity. Portfolios with loans rated below investment grade will move faster from ‘stage 1’ to ‘stage 2’ (see box), since a move within investment grade ratings is not seen as a credit quality deterioration. Portfolios with long maturities will face large P&L volatility when moving from stage 1 to stage 2.
  • Capital levels and deal pricing will be affected by the expected provisions.

Total P&L over the lifetime will not change, since the expected credit loss provision is booked against the actual credit losses during the lifetime. If there is no actual credit loss, all provisions will be released to profit towards maturity.

Forward-looking

IFRS 9 requires financial institutions to change the current backward-looking, incurred-loss-based credit provision into a forward-looking expected credit loss. This sounds logical for an accounting provision, and it suggests that existing relevant models within risk management may be applied. However, there are some difficulties to overcome.

Incorporating forward-looking information means moving away from the through-the-cycle approach towards an estimation of the ‘business cycle’ of potential credit losses. A forward-looking expected credit loss calculation should be based on an accurate estimation of the current and future probability of default (PD), exposure at default (EAD), loss given default (LGD), and discount factors. Discount factors according to IFRS 9 are based on the effective interest rate; this subject is not further addressed here. The EAD can mainly be derived from the current exposure, contractual cash flows, an estimate of unscheduled repayments and an expectation of the use of undrawn credit limits. Both unscheduled repayments and undrawn amounts are known to be business cycle dependent, and forecasts for these items can be derived from historical observations.
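The components named above combine into a lifetime expected credit loss along these lines; the term structures below are purely illustrative inputs, not calibrated parameters:

```python
# Minimal lifetime ECL sketch: marginal PD x LGD x EAD per period,
# discounted at the effective interest rate (EIR). Inputs are hypothetical.

def lifetime_ecl(pd_curve, lgd_curve, ead_curve, eir):
    """Sum of discounted expected losses over the remaining lifetime."""
    return sum(
        pd_t * lgd_t * ead_t / (1 + eir) ** (t + 1)
        for t, (pd_t, lgd_t, ead_t)
        in enumerate(zip(pd_curve, lgd_curve, ead_curve))
    )

ecl = lifetime_ecl(
    pd_curve=[0.010, 0.012, 0.015],      # marginal default probabilities
    lgd_curve=[0.25, 0.25, 0.30],        # loss given default per period
    ead_curve=[100_000, 90_000, 80_000], # exposure at default per period
    eir=0.03,
)
```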

Of course, the best calibration is on defaulted data, since we determine the exposure at default. If insufficient data is available, cycle-dependent unscheduled repayments and drawings of credit limits can be derived from the entire credit portfolio, preferably corrected with some expert judgement to reflect the situation at default.

Banks have internal rating models in place to assign a PD to a counterparty and to segment the portfolio into different levels, each with a specific PD. From a capital point of view, these ratings are mostly calibrated to a through-the-cycle level of observed defaults. Using all the bank’s forward-looking information may improve estimates if business cycles can be identified, if potential scenarios for the development of the cycle can be forecasted, and if it is known how the cycle affects the bank’s PD term structure. This would be a macroeconomic and econometric heaven if sufficient data were available to derive accurate and statistically significant models. Otherwise, banks need to rely more on expert judgement and external macroeconomic reports.

Next to the PD term structures, LGD term structures are required to calculate a lifetime expected loss. Deriving an accurate LGD term structure from realized defaults requires a large default database; deriving a business-cycle-dependent LGD term structure requires an even bigger database of accurately and timely documented losses. The level of business cycle dependency of the LGD differs significantly per type of counterparty, industry and collateral. Subordination is not very cycle dependent, while loans covered with collateral, such as mortgage loans, may show large movements in LGDs over time. Hence, different LGD term structures are required for different LGD types and levels.

Economic scenarios

Incorporating forward-looking information means modeling business cycle dependency in your PD and LGD. For significant drivers, future scenarios are required to calculate the expected credit loss. At most banks, these forward-looking scenarios are the domain of economic research departments. Macroeconomic forecasting concentrates mainly on country-specific variables: gross domestic product growth, unemployment rates, inflation indices and interest rates are typically projected variables.

Usually, only large international banks with an economic research department are able to project consistent economic outlooks and scenarios. Next to macro scenarios, industry specific forecasts are important. Industry risk models enable a bank to make forecasts for a certain industry segment, e.g. chemicals, automotive or oil & gas. Industry models are often based on variables such as market conditions, barriers to entry and default data. At some banks, industries are analyzed and scored by economic researchers. At others, usually smaller banks, industries are ranked by sector business specialists.

Industry scores often form input for rating models and are important factors for portfolio management purposes. Therefore, caution is required regarding correlation between the drivers of ratings and the drivers of the PD term structure.

Credit portfolios

For homogeneous retail exposures, forward-looking elements can be considered at portfolio level by modeling the dependency of realized default and loss percentages (PD and LGD) on the business cycle; in essence, a bottom-up approach. For mortgage portfolios, cycle dependency relates, for example, to unemployment and house price indices, among other factors. However, statistically significant parameters and models for default relations are difficult to obtain, since there is a common time gap in observing and administrating both defaults and the business cycle.

Model significance can be improved by adding more variables, at an increasing risk of overfitting. Even if there is statistical proof of macroeconomic dependencies in PD and LGD rates, caution is advised, since using them also requires designing credible macroeconomic scenarios. As business cycles are difficult to predict, this can lead to extra P&L volatility and an increase in the complexity and ‘explainability’ of figures. Therefore, regular back-testing and continuous monitoring are important for an accurate and robust provisioning mechanism, especially in the first years after the model is introduced.
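One common way to condition a through-the-cycle PD on the business cycle is a shift on the logit scale driven by macro deviations from trend. The sketch below is a hedged illustration: the coefficients b_unemp and b_hpi are hypothetical sensitivities, not estimates:

```python
import math

# Hedged sketch: make a through-the-cycle (TTC) PD point-in-time by
# shifting it on the logit scale; all coefficients are illustrative.

def point_in_time_pd(ttc_pd, unemp_dev, hpi_dev, b_unemp=8.0, b_hpi=-3.0):
    """Shift the TTC PD using deviations of unemployment and house prices."""
    logit = math.log(ttc_pd / (1 - ttc_pd))
    logit += b_unemp * unemp_dev + b_hpi * hpi_dev
    return 1 / (1 + math.exp(-logit))

# Higher unemployment and falling house prices push the PD up.
stressed_pd = point_in_time_pd(0.01, unemp_dev=0.02, hpi_dev=-0.10)
```

In practice such coefficients would be estimated on historical default rates, which is exactly where the overfitting risk discussed above arises.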

For non-retail exposures, country and industry risk are, if embedded in the credit rating models, already part of the annual individual credit review and rating assignment processes. In the monthly financial reporting, additional country and industry risk factors can be taken into account on a portfolio basis, making provisions more forward-looking; in essence, a top-down approach. If necessary, risk management can make adjustments on an individual basis for wholesale counterparties and facilities. A forward-looking overlay should improve the accuracy of provisions and ensure a timely and adequate recognition of credit risk, instead of the “too little, too late” of the existing rules.

Governance

Because of the forward-looking character of IFRS 9, and the increasing role of risk models, a transparent and robust governance framework will become more important. Coordination and communication are required across risk, finance, business units, audit and IT.

Risk management typically delivers the expected credit loss parameters and calculations to finance on a monthly basis. Proposals for the retail and non-retail adjustments briefly described above must be discussed and agreed upon, after which the final proposal is submitted to the approval authority.

The governance framework should be documented and reviewed on an annual basis, and highlight key functions, stakeholders, definitions, data management, model (re)development, model implementation, portfolio monitoring and validation. In addition, all parties involved should speak the same credit risk language, have access to the detailed data underlying the calculation of the provision, and have a good understanding of the model and the implications of decisions and parametrization. Only then can the finance department obtain an accurate understanding of the level and change of the provision and clearly inform the board and other stakeholders.

Zanders recommends preparing early for IFRS 9 and developing a deep and thorough understanding of its impact, with robust tooling and processes in place. Don’t just wait and ‘watch the hare running’: start early, and at least run a shadow period to allow sufficient time.


A new interest-rate risk framework for BNG bank

March 2016

BNG Bank, established to offer low-rate loans to the Dutch government and public interest institutions, helps lower the cost of public amenities, but its balance sheet’s sensitivity to financial market fluctuations highlights the need for a robust interest rate risk framework.


BNG Bank was founded more than 100 years ago – firstly under the name Gemeentelijke Credietbank – as a purchasing association with the main task of bundling the financing requirements of Dutch local authorities so that purchasing benefits could be obtained on capital markets. In 1922, the name was changed to Bank voor Nederlandsche Gemeenten and even today the main aim is, in essence, the same. What has changed is the role of local authorities, says John Reichardt, a member of the Board of BNG Bank. He explains: “Over the past few years they have diversified. Many of their responsibilities are now independent or even privatized. Hospitals, electricity boards and housing companies, for example, were in the hands of local authorities but now operate independently. They are, however, still our clients because they provide public services.”

Different to Other Banks

To satisfy the financing requirements of its clients, BNG Bank collects money on the international capital markets to realize ‘bundled’ purchasing benefits. “And we pass these benefits on to our customers,” says Reichardt. “While our customers have become more diverse over time, our product portfolio has widened. Some thirty years ago we became a bank, with a comprehensive banking license, and this meant we could take up short-term loans, make investments, and handle our customers’ payments. We try to be a full-service bank, but then only for services our customers need.”

The state holds half of the shares and the remainder belongs to local authorities and provinces/counties. “Because of this we always have the dilemma: should we go for more profit and more dividend, or should our strong purchasing position be reflected immediately in our prices by means of a moderate pricing strategy? Our goal is to be big in our market – we think we should keep 35 to 50 percent of the total outstanding debt on our balance sheet. We are not striving for maximum profit, and that differentiates us from many other banks. Although we are a private company, we do also feel we are a part of the government,” says Reichardt.

Changed Worlds

BNG Bank has only one branch, in The Hague, with 300 employees. The bank has grown considerably, mainly over the past few years. Since the start of the financial crisis, a number of services offered by other parties have disappeared, and BNG Bank was often called upon to step in. Now, partly as a result of this, it has become one of the systemically important Dutch banks. “From a character point of view, we are more of a middle-sized company, but as far as the balance sheet is concerned, we are a large bank. We earn our money by buying cheaply, but also by trying to pass this on as cheaply as possible to our customers – with a small commission. This brings with it a strong focus on risk management, including managing our own assets and the associated risks. These are partly credit risks, but we have fewer risks than other banks – because, thanks to the government, our customers are usually very creditworthy.”

BNG Bank also runs certain interest rate risks that have to be controlled on a day-to-day basis. “We have done this in a certain way for a long time, but in the meantime the world has changed,” says Hans Noordam, head of risk management at BNG Bank. “So we thought it was time to give the method a face-lift: are we doing it right, with the right instruments, and are we looking at the right things? We also wanted someone else to take a good look at it.”

So BNG Bank concluded that the interest rate risk framework had to be revised. “Our approach once was state of the art but, as always with the dialectics of progress, we didn’t do enough ourselves to keep up with changes in that respect,” Reichardt explains. “When we looked at the whole management of interest rate risk, on the one hand it was about the departments involved, and on the other hand the measurement system – the instruments we used and everything associated with them to produce the information that enabled decision-making on our position strategy. That is a big project.”

Project Harry

Over the past few years various developments have taken place in the area of market risk. When BNG Bank changed its products and methods, various changes also took place in the areas of risk management and valuation, including extra requirements from the regulator. “So we started a preliminary investigation and formed one unit within risk management,” says Reichardt. At the end of 2012, BNG Bank appointed Petra Danisevska as head of risk management/ALM (RM/ALM). “We agreed not to reinvent the wheel ourselves, but mainly to look closely at best market practices,” she says.

“Zanders helped us with this. In May 2013 we started an investigation to find out which interest rate risks were present in the bank and where improvements could be made.”

Petra Danisevska, Head of risk management/ALM (RM/ALM) at BNG Bank


Noordam explains that they agreed on suggested steps with the Asset Liability Committee (ALCO), which also provided input and expressed preferences. A plan was then made and the outlines sketched. To convert that into concrete actions, Noordam says that a project was initiated at the beginning of 2014: Project Harry. “This gets its name from BNG Bank’s location, also the home of a Dutch cartoon character, called Haagse Harry. He was the symbol of the whirlwind which was to whip through the bank,” says Noordam.

Within ALCO Limits

“During the (economic) crisis, all sorts of things happened which influenced the valuation of our balance sheet,” Reichardt explains. “They also had many effects on the measurement of our interest rate risk. We had to apply totally different curves – sometimes with very strange results. Our company is set up in a way that with our economic hedging and our hedge accounting, we can buy for X and pass it on to our customers for X plus a couple of basis points, which during the period of the loan reverts to us. We retain a small amount and on the basis of this pay out a dividend – our model is that simple. However, since the valuations were influenced by market changes, we were more or less obliged to take measures in order to stay within our ALCO limits. These measures, with respect to managing our interest position, would not have been realizable under our current philosophy; simply because they weren’t necessary. We knew we had to find a solution for that phenomenon in the project. After much discussion we were able to find a solution: to be more reliable within the technical framework of anticipating market movements which strongly influence valuation of financial instruments. In other words: the spread risk and the interest rate risk had to be measured and managed separately from one another. The world had changed, and our interest rate risk management, as well as the reporting and calculations based upon it, had to change as well.”

After the revision of the interest rate risk framework, as of the second half of 2015, all interest-rate risk measurements, their drivers and reporting were changed. The market risks resulting from changes in interest rate curves were then measured and reported on a daily basis by the RM/ALM department. “There is definitely better management of the interest rate risk; we generate more background data and create more possibilities to carry out analyses,” Danisevska explains. “We now have detailed figures that we couldn’t get before, with which we can show ALCO the risk and the accompanying, assumed return.”

More proactive

Noordam knew that Project Harry would involve a considerable effort. “The risk framework would inevitably suffer quite a lot. It had to be innovated on the basis of calculated conditions, while the implementation required a lot of internal resources and specific knowledge. Technical points had to be solved, while relationships had to be safeguarded; many elements with all sorts of expertise had to be integrated. The European Central Bank was stringent – that took up a lot of time and work. We had an asset quality review (AQR) and a stress test – that was completely new to us. Sometimes we were tempted to stay on known ground, but even during those periods we were able to carry on with the project. We rolled up our shirtsleeves and together we gained from the experience.”

Reichardt says: “It was a tough project for us, with complex subject matter and lots of different opinions. In total it took us seven quarters to complete. However, I think we have accomplished more than we expected at the beginning. With a combination of our own people and external expertise, we have managed to make up for lost ground. We have exchanged the rags for riches and we have been successful. Where do we stand now? As well as the required numbers, we have a clear view of what our thoughts are on ‘what is interest rate risk and what isn’t’. The only thing we still have to do is to fine-tune the roles: what can you expect from risk managers and risk takers, and how will they react to this? We will continue to monitor it. RM/ALM as a department is in any case a lot more proactive – that was an important goal for us. We can be more successful, but the department is really earning its spurs within the bank and that means profit for everyone.”


Replicating investment portfolios

February 2016
3 min read

Many banks use a framework of replicating investment portfolios to measure and manage the interest rate risk of variable savings deposits. There are two commonly used methodologies, known as the marginal investment strategy and the portfolio investment strategy. While these have the same objective, their effects on margin and interest-rate maturity may vary. We review these strategies on the basis of a quantitative and a qualitative analysis.


A replicating investment portfolio is a collection of fixed-income investments based on an investment strategy that aims to reflect the typical interest rate maturity of the savings deposits (also referred to as ‘non-maturing deposits’). The investment strategy is formulated so that the margin between the portfolio return and the savings interest rate is as stable as possible, given various scenarios.

A replicating framework enables a bank to base its interest rate risk measurement and management on investments with a fixed maturity and price – while the deposits have no contractual maturity or price. In addition, a bank can use the framework to transfer the interest rate risk from the business lines to the central treasury, by turning the investments into contractual obligations. There are two commonly used methodologies for constructing the replicating portfolios: the marginal investment strategy and the portfolio investment strategy. These strategies have the same objective, but have different effects on margin and interest-rate term, given certain scenarios.

Strategies defined

An investment strategy determines the monthly allocation of the investable volume across the various maturities. The investable volume in month t, I_t, consists of two parts:

The first part is equal to the decrease or increase in the volume of savings deposits compared to the previous month. The second part is equal to the total principal of all investments in the investment portfolio maturing in the current month (end date m = t), Σ_{i,m=t} v_{i,m}.

By investing or re-investing the volume of these two parts, the total principal of the investment portfolio will equal the savings volume outstanding at that moment. When an investment is generated, it receives the market interest rate relating to the maturity at that time. The portfolio investment return is determined as the principal weighted average interest rate.
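The two parts of the investable volume can be sketched in a few lines. The function name and numbers below are illustrative, not from the article:

```python
# Minimal sketch of the investable volume in month t: the change in
# savings volume versus the previous month, plus the principal of all
# investments maturing this month.

def investable_volume(savings_now, savings_prev, maturing_principal):
    return (savings_now - savings_prev) + maturing_principal

# Savings grow from 100 to 110 while 20 of principal matures:
print(investable_volume(110.0, 100.0, 20.0))  # 30.0
# Savings decline from 100 to 95 while 20 matures:
print(investable_volume(95.0, 100.0, 20.0))   # 15.0
```

Reinvesting this amount each month keeps the total principal of the portfolio equal to the outstanding savings volume, as described above.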

The difference between a marginal investment strategy and a portfolio investment strategy is that in a marginal investment strategy, the volume is invested with a fixed allocation across fixed maturities. In a portfolio strategy, these parameters are flexible; instead, investments are generated in such a way that the resulting portfolio has the same (target) proportional maturity profile each month. The maturity profile provides the total monthly principal of the currently outstanding investments that will mature in the future.

In the savings modelling framework, the interest rate risk profile of the savings portfolio is estimated and defined as a (proportional) maturity profile. For the portfolio investment strategy, the target maturity profile is set equal to this estimated profile. For the marginal investment strategy, the ‘investment rule’ is derived from the estimated profile using a formula. Under a long-lasting constant or stable volume of savings deposits, the investment portfolio generated by this rule converges to the estimated profile.
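A minimal sketch of one month's investments under both strategies, assuming a three-bucket maturity profile (principal maturing in 1, 2 and 3 months); all names and numbers are illustrative:

```python
# One month of investing under both strategies, with a three-bucket
# maturity profile. Illustrative only.

def marginal_investments(investable, weights):
    # Fixed allocation of this month's investable volume across maturities.
    return [investable * w for w in weights]

def portfolio_investments(aged_portfolio, savings_volume, target_profile):
    # New investments chosen so that the resulting portfolio matches the
    # target profile; entries can be negative if savings volume declines.
    return [savings_volume * t - p
            for t, p in zip(target_profile, aged_portfolio)]

target = [0.5, 0.25, 0.25]   # target proportional maturity profile
aged   = [30.0, 20.0, 0.0]   # existing principals after one month's aging

print(marginal_investments(60.0, target))          # [30.0, 15.0, 15.0]
print(portfolio_investments(aged, 110.0, target))  # [25.0, 7.5, 27.5]
```

Note that the portfolio strategy generates an investment in every bucket each month, whereas the marginal strategy only spreads the new investable volume; this is the source of the "3 versus 36 investments" difference discussed below.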

Strategies illustrated

In Figure 1, the difference between the two strategies is illustrated graphically with an example. The example shows the development of the replicating portfolios of the two strategies in two consecutive months with increasing savings volume. The replicating portfolios initially consist of the same investments, with original maturities of one month, 12 months and 36 months. In both cases, the same investments and corresponding principals mature. The total maturing principal is reinvested and the increase in savings volume is invested.

Figure 1: Maturity profiles for the marginal (top figure) and portfolio (bottom figure) investment strategies given increasing volume.

Note that if the savings volume would have remained constant, both strategies would have generated the same investments. However, with changing savings volume, the strategies will generate different investments and a different number of investments (3 under the marginal strategy, and 36 under the portfolio strategy).

The interest rate typical maturities and investment returns will therefore differ, even if market interest rates do not change. For the quantitative properties of the strategies, the decision will therefore focus mainly on margin stability and the interest rate typical maturity given changes in volume (and potential simultaneous movements in market interest rates).

Scenario analysis

The quantitative properties of the investment strategies are explained by means of a scenario analysis. The analysis compares the development of the duration, margin and margin stability of both strategies under various savings volume and market interest rate scenarios.

Client interest rate
As part of the simulation of a margin, a client interest rate is modeled. The model consists of a set of sensitivities to market interest rates (M1,t) and moving averages of market interest rates (MA12,t). The sensitivities to the variables show the degree to which the bank has to reflect market movements in its client interest rate, given the profile of its savings clients.

The model chosen for the interest rate for the point in time t (CRt) is as follows:


Up to a certain degree, the model is representative of the savings interest rates offered by (retail) banks.
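The calibrated equation itself is not reproduced above. Purely as an illustration of the described structure (a sensitivity to the current market rate M1,t and to a 12-month moving average MA12,t), a client-rate model could be sketched as follows; the coefficients are assumptions for the sketch, not the article's values:

```python
# Illustrative client-rate model: a weighted combination of the current
# market rate and a 12-month moving average of market rates. The
# coefficients b0, b1, b2 are assumed values, not calibrated ones.

def client_rate(market_rates, b0=0.001, b1=0.4, b2=0.5):
    """market_rates: at least the last 12 monthly market rates, newest last."""
    m1 = market_rates[-1]                 # current market rate, M1,t
    ma12 = sum(market_rates[-12:]) / 12   # 12-month moving average, MA12,t
    return b0 + b1 * m1 + b2 * ma12

flat = [0.02] * 12                        # flat 2% market-rate history
print(round(client_rate(flat), 6))        # 0.001 + 0.9 * 0.02 = 0.019
```

The moving-average term is what makes the client rate lag market movements, which is why sudden rate shocks open up a (temporary) gap between investment return and client rate in the scenarios below.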

Investment strategies
The investment rules are formulated so that the target maturity profiles of the two strategies are identical. This maturity profile is then determined so that the same sensitivities to the variables apply as for the client rate model. An overview of the investment strategies is given in Table 1.

The replication process is simulated for 200 successive months in each scenario. The starting point for the investment portfolio under both strategies is the target maturity profile, whereby all investments are priced using a constant historical (normal) yield curve. In each scenario, upward and downward shocks lasting 12 months are applied to the savings volume and the yield curve after 24 months.

Example scenario

The results of an example scenario are presented in order to show the dynamics of both investment strategies. This example scenario is shown in Figure 2. The results in terms of duration and margin are shown in Figure 3.

As one would expect, the duration for the portfolio investment strategy remains the same over the entire simulation. For the marginal investment strategy, we see a sharp decline in the duration during the ‘shock period’ for volume, after which a double wave motion develops on the duration. In short, this is caused by the initial (marginal) allocation during the ‘stress’ and subsequent cycles of reinvesting it.

With an upward volume shock, the margin for the portfolio strategy declines because the increase in savings volume is invested at downward shocked market interest rates. After the shock period, the declining investment return and client rate converge. For the marginal strategy this effect also applies and in addition the duration effects feed into the margin development.

Scenario spectrum
In the scenario analysis the standard deviation of the margin series, also known as the margin volatility, serves as a proxy for margin stability. The results in terms of margin stability for the full range of market interest rate and volume scenarios are summarized in Figure 4.

Figure 4: Margin volatility of marginal (left-hand figure) and portfolio strategy (right-hand figure) for upward (above) and downward (below) volume shocks.

From the figures, it can be seen that the margin of the marginal investment strategy has greater sensitivity to volume and interest rate shocks. Under these scenarios the margin volatility is on average 2.3 times higher, with the factor ranging between 1.5 and 4.5. In general, for both strategies, the margin volatility is greatest under negative interest-rate shocks combined with upward or downward volume shocks.
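The margin-stability proxy used above is simply the standard deviation of the monthly margin series. A sketch with fictitious margin series for the two strategies:

```python
# Margin stability proxied by the standard deviation of the monthly
# margin series; the two series below are fictitious illustrations.
from statistics import pstdev

margin_portfolio = [0.010, 0.011, 0.010, 0.009, 0.010]
margin_marginal  = [0.010, 0.014, 0.006, 0.013, 0.007]

vol_p = pstdev(margin_portfolio)
vol_m = pstdev(margin_marginal)
print(vol_m > vol_p)  # True: the marginal strategy's margin is less stable
```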

Replication in practice

The scenario analysis shows that the portfolio strategy has a number of advantages over the marginal strategy. First of all, the maturity profile remains constant at all times and equal to the modeled maturity of the savings deposits. Under the marginal strategy, the interest rate typical maturity can deviate from it for long periods, even when there are no changes in the market interest rate environment or in the behavior of the savings portfolio.

Secondly, the development of the margin is more stable under volume and interest rate shocks. The margin volatility under the marginal investment strategy is actually at least one and a half times higher under the chosen scenarios.

An intuitive process
These benefits might, however, come at the expense of a number of qualitative aspects that may form an important consideration when it comes to implementation. Firstly, the advantage of a constant interest-rate profile for the portfolio strategy, comes at the expense of intuitive combinations of investments. This may be important if these investments form contractual obligations for the transfer of the interest rate risk.

Specifically, the portfolio strategy requires generating a large number of investments, which can even have negative principals in the case of a (small) decline in savings volume. Secondly, the shocks in the duration under a marginal strategy might actually be desirable and in line with savings portfolio developments. For example, if due to market or idiosyncratic circumstances there is a high inflow of deposit volume, this additional volume may be relatively more interest rate sensitive, justifying a shorter duration.

Nevertheless, the example scenario shows that after such a temporary decline a temporary increase will follow for which this justification no longer applies.

The choice

A combination of the two strategies may also be chosen as a compromise solution. This involves the use of a marginal strategy whereby interventions trigger a portfolio strategy at certain times. An intervention policy could be established by means of limits or triggers in the risk governance. Limits can be set for (unjustifiable) deviations from the target duration, whereas interventions can be triggered by material developments in the market or the savings portfolio.

In its choice for the strategy, the bank is well-advised to identify the quantitative and qualitative effects of the strategies. Ultimately, the choice has to be in line with the character of the bank, its savings portfolio and the resulting objective of the process.

  1. The profile shown is a summary of the whole maturity profile. In the whole profile, 5.97% of the replicating volume matures in the first month, 2.69% per month in the second to the 12th month, etc.
  2. Note that this is a proxy for the duration based on the weighted average maturity of the target maturity profile.

An extended version of this article is published in our Savings Special. Would you like to read it? Please send an e-mail to marketing@zanders.eu.


The Matching Adjustment versus the Volatility Adjustment

September 2015
3 min read

With the advance of the current low interest rate environment and increased regulatory requirements, modeling mortgages for valuation purposes is more complex. Additionally, the applicable valuation method depends on the purpose of the valuation.


On April 30th 2014, the European Insurance and Occupational Pensions Authority (EIOPA) published the technical specifications for the preparatory phase towards Solvency II. The technical specifications on the long-term guarantee package essentially offer insurers two options to mitigate ‘artificial’ fluctuations in their own funds: the Volatility Adjustment and the Matching Adjustment. What is their impact and what are the main differences between these two measures?

Solvency II aims to unify the EU insurance market and will come into effect on January 1st 2016. The technical specifications published by EIOPA will be used for interim reporting during 2015.

Although the specifications are not yet finalized, it is unlikely that they will change extensively. The technical specifications consist of two parts: part one focuses on the valuation and calculation of the capital requirements, and part two focuses on the long-term guarantee (LTG) package. The LTG package was agreed upon in November 2013 and has been one of the key areas of debate in the Solvency II legislation.

Artificial volatility

The LTG package consists of regulatory measures to ensure that short-term market movements are appropriately treated with regards to the long-term nature of the insurance business. It aims to prevent ‘artificial’ volatility in the ‘own funds’ of insurers, while still reflecting the market consistent approach of Solvency II. When insurance companies invest long-term in fixed income markets, they are exposed to credit spread fluctuations not related to an increased probability of default of the counterparty.

These fluctuations impact the market value of the assets and own funds, but not the return of the investments itself as they are held to maturity. The LTG package consists of three options for insurers to deal with this so-called ‘artificial’ volatility: the Volatility Adjustment, the Matching Adjustment and transitional measures.

Figure 1

The transitional measures allow insurers to move smoothly from Solvency I to Solvency II and apply to the risk-free curve and technical provisions. However, the most interesting measures are the Volatility Adjustment and the Matching Adjustment. The impact of both measures is difficult to assess and it is a strategic choice which measure should be applied.

Both try to prevent fluctuations in the own funds due to artificial volatility, yet their requirements and use are rather different. To find out more about these differences, we immersed ourselves into the impact of the Volatility Adjustment and the Matching Adjustment.

The Volatility Adjustment

The Volatility Adjustment (VA) is a constant addition to the risk-free curve that is used to discount the liabilities. It is designed to protect insurers with long-term liabilities from the impact of volatility on the insurers’ solvency position. The VA is based on a risk-corrected spread on the assets in a reference portfolio. It is defined as the spread between the interest rate of the assets in the reference portfolio and the corresponding risk-free rate, minus the fundamental spread (which represents default and downgrade risk).

The VA is provided and updated by EIOPA and can differ for each major currency and country. The VA is added to the liquid part of the risk-free zero-coupon rates, i.e. until the so-called Last Liquid Point (LLP). After the LLP, the curve converges to the UFR. The resulting rates are used to produce the relevant risk-free curve.
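The mechanics can be sketched as follows. Note that the actual Solvency II extrapolation beyond the LLP uses the Smith-Wilson method on forward rates; the linear convergence below is a simplification for illustration only, and all numbers are fictitious:

```python
# Simplified sketch: add the VA to the liquid part of the zero curve
# (up to the LLP), then converge linearly to the UFR. Real Solvency II
# extrapolation uses Smith-Wilson; this is illustration only.

def adjusted_curve(zero_rates, va, llp, ufr, convergence=40):
    out = []
    for t, r in enumerate(zero_rates, start=1):
        if t <= llp:
            out.append(r + va)                    # liquid part: add the VA
        else:
            w = min((t - llp) / convergence, 1.0) # linear weight toward UFR
            out.append((1 - w) * (zero_rates[llp - 1] + va) + w * ufr)
    return out

curve = [0.01] * 30                               # flat 1% zero curve
adj = adjusted_curve(curve, va=0.002, llp=20, ufr=0.042)
# Short end carries the VA; beyond the LLP the curve drifts toward the UFR.
print(adj[0], adj[29])
```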

The Matching Adjustment

The Matching Adjustment (MA) is a parallel shift applied to the entire basic risk-free term structure and serves the same purpose as the VA. The MA is calculated based on the match between the insurers’ assets and the liabilities. The MA is corrected for the fundamental spread. Note that, although the MA is usually higher than the VA, the MA can possibly become negative. The MA can only be applied to a portfolio of life insurance obligations with an assigned portfolio of assets that covers the best estimate of the liabilities.

The mismatch between the cash flows of the assets and the cash flows of the liabilities must not be a material risk in relation to the risks inherent to the insurance business. These portfolios need to be identified, organized and managed separately from other activities of the insurers. Furthermore, the assigned portfolio of assets cannot be used to cover losses arising from other activities of the insurers.

The more of these portfolios are created for an insurance company, the fewer diversification benefits are possible. Therefore, the MA does not necessarily lead to an overall benefit.

Differences between VA and MA

The main difference between the VA and the MA is that the VA is provided by EIOPA and based on a reference portfolio, while the MA is based on a portfolio of the insurance company.

Other differences include:

  • The VA is applied until the LLP, after which the curve converges to the UFR, while the MA is a parallel shift of the whole risk-free curve;
  • The MA can only be applied to specifically identified portfolios;
  • The VA can be used together with the transitional measures in the preparatory phase, the MA cannot;
  • The MA has to be taken into account for the calculation of the Solvency Capital Requirement (SCR) for spread risk. The VA does not respond to SCR shocks for spread risks.

Figure 2: Graphical representations of balance sheets. The blue box represents the assets, the red box the liabilities, and the green box the available capital.

The impact of the VA and MA is twofold. Both adjustments have a direct impact on the available capital and next to this, the MA impacts the SCR. As a result, the level of free capital is affected as well. While the exact impact of the adjustments depends on firm-specific aspects (e.g. cash flows, the asset mix), an indication of the effects on available capital as well as the SCR is given in Figure 2. Please note that this is an example in which all numbers are fictitious and used merely for illustrative purposes.

Impact on available capital

Both the VA and the MA are an addition to the curve used to discount the liabilities, and will therefore lead to an increase in the available capital. The left chart in Figure 2 shows the Base scenario, without adjustment to the risk-free curve. Implementing the VA reduces the market value of the liabilities, but has no effect on the assets. As a result, the available capital increases, which can be seen in the middle chart.

A similar but larger effect can be seen in the right chart, which displays the outcome of the MA. The larger effect on the available capital after the MA compared to the VA is due to two components.

  1. The MA is usually higher than the VA, and
  2. the MA is applied to the whole curve.
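Both effects can be sketched by discounting a fixed liability cash flow under the three curves. The numbers are fictitious, with the MA taken as larger than the VA, as noted above to be the usual case:

```python
# Sketch of the balance-sheet effect: present value of liabilities under
# the base curve, the curve plus VA (liquid part only, up to the LLP)
# and the curve plus MA (parallel shift). All numbers are fictitious.

def pv(cashflows, rates):
    return sum(cf / (1 + r) ** t
               for t, (cf, r) in enumerate(zip(cashflows, rates), start=1))

liabilities = [10.0] * 30        # 30 annual payments of 10
base = [0.01] * 30               # flat 1% risk-free curve
va_curve = [r + (0.002 if t <= 20 else 0.0)     # VA only up to the LLP
            for t, r in enumerate(base, start=1)]
ma_curve = [r + 0.004 for r in base]            # MA shifts the whole curve

assets = 270.0
for name, curve in [("base", base), ("VA", va_curve), ("MA", ma_curve)]:
    print(name, round(assets - pv(liabilities, curve), 2))
# Available capital is highest under the MA, then the VA, then the base case.
```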
Impact on the SCR

The calculation of the total SCR, using the Standard Formula, depends on several marginal SCRs. These marginal SCRs all represent a change in an associated risk factor (e.g. spread shocks, curve shifts), and can be seen as the decrease in available capital after an adverse scenario occurs. The risk factors can have an impact on assets, liabilities and available capital, and therefore on the required capital.

Take for example the marginal SCR for spread risk. A spread shock will have a direct, and equal, negative impact on the assets for each scenario. However, since a change in the assets has an impact on the level of the MA, the liabilities are impacted too when the MA is applied. The two left charts in Figure 3 show the results of an increase in the spread, where, by applying the spread shock, the available capital decreases by the same amount (denoted by the striped boxes).

Figure 3: Graphical representations of balance sheets after a positive spread shock. The lined boxes represent a decrease of the corresponding balance sheet item. Note that, in the MA case, the liabilities decrease (striped red box) due to an increase of the MA.

Hence, the marginal SCR for the spread shock will be equal for the Base case and the VA case. The right chart displays an equal effect on the assets. However, the decrease of the assets results in an increase of the MA. Therefore, the liabilities decrease in value too. Consequently, the available capital is reduced to a lesser extent compared to the Base or VA case.

The marginal SCR example for a spread shock clearly shows the difference in impact on the marginal SCR between the MA on the one hand, and the VA and Base case on the other hand. When looking at marginal SCRs driven by other risk factors, a similar effect will occur. Note that the total SCR is based on the marginal SCRs, including diversification effects. Therefore, the impact on the total SCR differs from the sum of the impacts on the marginal SCRs.

Impact on free capital

The impact on the level of free capital also becomes clear in Figure 3. Note that the level of free capital is calculated as available capital minus required capital. It follows directly that the application of either the VA or the MA will result in a higher level of free capital compared to the Base case. Both adjustments initially result in a higher level of available capital.

In addition, the MA may lead to a decrease in the SCR which has an extra positive impact on the free capital. The level of free capital is represented by the solid green boxes in Figure 3. This figure shows that the highest level of free capital is obtained for the MA, followed by the VA and the Base case respectively.

Conclusion

Our example shows that both the VA and the MA have a positive effect on the available capital. Despite its restrictions and implementation difficulties, the MA leads to the greatest benefits in terms of available and free capital.

In addition, applying the MA could lead to a reduction of the SCR. However, the specific portfolio requirements, practical difficulties, lower diversification effects and the possibility of having a negative MA, could offset these benefits.

Besides this, the MA cannot be used in combination with the transitional measures. In order to assess the impact of both measures on the regulatory solvency position for an insurance company, an in-depth investigation is required where all firm specific characteristics are taken into account.
