Low interest rates, decreasing margins and regulatory pressure: banks are faced with a variety of challenges regarding non-maturing deposits. Accurate and robust models for non-maturing deposits are more important than ever. These complex models depend largely on a number of modeling choices. In the savings modeling series, Zanders lays out the main modeling choices, based on our experience at a variety of Tier 1, 2 and 3 banks.
WHAT ARE HIDDEN SAVINGS?
Because the low or zero rates offered by banks provide little motivation to move money to savings accounts, many banking customers use their current accounts as savings accounts. It is very likely that customers will move part of this money to savings accounts when rates increase again. This ‘hidden savings’ or ‘savings substitution’ volume has the same interest rate sensitivity as savings accounts volume, including the asymmetric ‘flooring’ effect.
SO, HOW DO I DEAL WITH THEM?
Given the existence of these hidden savings, it might be justified to model this volume with a shorter maturity, reflecting its lower funding stability. Because hidden savings prove very difficult to quantify and substantiate in practice, modeling them is still not general practice among Risk and ALM managers. The banks that do include the hidden savings effect typically use approaches based on historical data, combined with expert-based guidelines on the measurement approach and significance thresholds. Significance thresholds can be relative (a fixed percentage of total current accounts volume) or absolute (for example, EUR 100 million of volume).
"Because the low or zero rates offered by banks provide little motivation to move money to savings accounts, many banking customers use their current accounts as savings accounts."
USING HISTORICAL DATA
Some banks use historical portfolio data to estimate the hidden savings portion of current accounts. Hidden savings is then defined as the volume remaining after subtracting the volatile and the long-term volume. The volatile (non-stable) volume is estimated from intra-month (daily) volume fluctuations. The long-term, non-repricing volume (core volume) can be estimated from historical minimum volume levels.
Another measurement approach is to use account-level data to estimate the hidden savings volume. The development of the average current account balance over time is used to identify a trend of accelerating balance levels. Hidden savings is derived as the portion of current account volume above historically identified trends. To identify these historical trends, sufficient historical data is required on time periods with a significant difference between savings and current account rates.
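As a rough sketch of the first, portfolio-level approach: assume we have daily portfolio balances grouped per month, proxy the volatile portion by the average intra-month range, and proxy the core portion by the historical minimum. Both proxies and the window choices are simplifying assumptions for illustration, not a prescribed methodology.

```python
# Illustrative decomposition of a current account portfolio into volatile,
# core, and hidden savings volume. The intra-month range as a volatility
# proxy and the historical minimum as a core proxy are assumptions.

def decompose_volume(daily_balances, months):
    """daily_balances: chronological list of daily portfolio volumes;
    months: the same balances grouped into per-month lists."""
    current = daily_balances[-1]
    # Core (long-term, non-repricing) volume: historical minimum level.
    core = min(daily_balances)
    # Volatile volume: average intra-month range of the portfolio balance.
    volatile = sum(max(m) - min(m) for m in months) / len(months)
    # Hidden savings: remainder after subtracting volatile and core volume.
    hidden = max(0.0, current - volatile - core)
    return {"core": core, "volatile": volatile, "hidden_savings": hidden}
```

In practice the minimum would be taken over a rolling window and the volatile portion estimated at a chosen confidence level rather than as a plain average.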
SAVINGS MODELING SERIES
This short article is part of the Savings Modeling Series, a series of articles covering five hot topics in NMD for banking risk management.
Are you interested in a more in-depth comparison of deposit modeling concepts? Click here.
For banks with significant non-maturing deposit portfolios, the Risk Management function needs a robust behavioural risk model. This model is required for Interest Rate Risk in the Banking Book reporting, hedging, stress testing, risk transfer, and ad-hoc analyses. Although specific modeling assumptions vary per bank, the market-practice model concepts are cashflow-based models, replicating portfolio models, and hybrid models. The choice for one of these concepts is strongly linked to model purpose and use; each concept has its benefits and drawbacks for different purposes and uses.
CASHFLOW-BASED MODELS
Cashflow-based models consist of two sub-models for the deposit rate and volume that forecast coupon and notional cashflows, respectively. Both sub-models measure the relationship between behavioural risk and underlying explanatory factors. Cashflow-based models are suited to include asymmetric pricing effects (such as flooring of rates) in resulting risk metrics. Since the approach captures rate and volume dynamics well, it is also often used for ad-hoc behavioural risk analysis and stress testing.
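A stylized example of the two sub-models described above: a deposit-rate sub-model with a pass-through parameter and a zero floor, and a volume sub-model whose outflow grows with the gap between market and client rate. All parameter values are purely illustrative assumptions, not calibrated figures.

```python
# Minimal sketch of a cashflow-based deposit model with asymmetric pricing.

def client_rate(market_rate, beta=0.5, margin=-0.005, floor=0.0):
    # Pass-through of the market rate, floored at zero (the 'flooring' effect).
    return max(floor, beta * market_rate + margin)

def project_cashflows(volume, market_rates, decay=0.02, sensitivity=0.5):
    """Forecast coupon and notional cashflows per period."""
    cashflows = []
    for r in market_rates:
        cr = client_rate(r)
        # Outflow rises when the client rate lags the market rate.
        outflow = volume * (decay + sensitivity * max(0.0, r - cr))
        coupon = volume * cr
        volume -= outflow
        cashflows.append({"coupon": coupon, "notional": outflow})
    return cashflows
```

Because the floor makes the client rate a non-linear function of the market rate, risk metrics derived from these cashflows pick up the asymmetric pricing effects directly.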
"The choice for one of these models is strongly linked to model purpose and use."
REPLICATING PORTFOLIO MODELS
Replicating Portfolio models replicate a deposit portfolio into simple financial instruments (e.g., bonds) such that its risk profile matches the risk profile of the underlying deposits. The advantage is that it converts a complex product into tangible financial instruments with a coupon and maturity. This simplified portfolio is well-suited to transfer risk from business units to treasury departments. A disadvantage of the model is that it does not fully capture non-linear deposit behaviour, for example the asymmetric pricing effects resulting from the floor. This makes the approach less suited for stress testing or ad-hoc behavioural risk analysis for senior management.
Read our extensive analysis of replicating portfolio models here.
HYBRID MODELS
Hybrid models, consisting of both a cashflow model and a replicating portfolio model, combine the benefits of the other approaches, but at the cost of increased complexity. These models are often used by banks that want to use the model for a wide range of purposes: risk transfer to treasury departments, risk reporting, ad-hoc behavioural risk analysis, and stress testing. To prevent a large mismatch between the two models, most banks ensure that the risk profiles (duration or DV01) of both models align.
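The DV01 alignment check can be sketched with a simple bump-and-revalue calculation: discount each model's projected cashflows on a curve, bump the curve one basis point, and compare the price changes. The flat discount curve and the relative tolerance below are assumptions for illustration only.

```python
# Sketch of a DV01 alignment check between a cashflow model and a
# replicating portfolio, both represented as (time, cashflow) pairs.

def pv(cashflows, rate):
    # Present value on a flat zero curve (an illustrative simplification).
    return sum(cf / (1 + rate) ** t for t, cf in cashflows)

def dv01(cashflows, rate, bump=0.0001):
    # Price change for a one-basis-point upward shift of the curve.
    return pv(cashflows, rate) - pv(cashflows, rate + bump)

def aligned(cf_model, replicating, rate, tolerance=0.05):
    """True if the two DV01s differ by less than `tolerance` (relative)."""
    a, b = dv01(cf_model, rate), dv01(replicating, rate)
    return abs(a - b) / abs(a) < tolerance
```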
Identifying the core of non-maturing deposits has become increasingly important for European banking Risk and ALM managers. This is especially true for retail banks whose funding mostly comprises deposits. In recent years, the concept of core deposits has been formalized by the Basel Committee and included in various regulatory standards. European regulators are considering a requirement to disclose the core NMD portion to supervisors and possibly to public stakeholders. Despite these developments, many banks still wonder: what are core deposits and how do we identify them?
FINDING FUNDING STABILITY: CORE PORTION OF DEPOSITS
Behavioural risk profiles for client deposits can differ substantially per bank and portfolio. A portion of deposits can be stable in volume and price, while other portions are volatile and sensitive to market rate changes. Before banks determine the behavioural (investment) profile for these funds, they should analyse which deposits are suitable for long-term investment. This portion is often labelled core deposits.
Basel standards define core deposits as balances that are highly likely to remain stable in terms of volume and are unlikely to reprice after interest rate changes. Behaviour models can vary a lot between (or even within) banks and are hard to compare. A simple metric such as the proportion of core deposits should make a comparison easier. The core breakdown alone should be sufficient to substantiate differences in the investment and risk profiles of deposits.
"A good definition of core deposit volume is tailored to banks’ deposit behavioural risk model."
Regulatory guidelines do not define the exact confidence level and horizon to be used for the core analysis. Banks therefore need to formulate an interpretation of the regulatory guidance and set the assumptions on which their analysis is based. A good definition of core deposit volume is tailored to banks’ deposit behavioural risk model. Ideally, the core percentage can be calculated directly from behavioural model parameters. ALM and Risk managers should start with a review of internal behavioural models: how are volume and pricing stability modelled, and how are they translated into investment restrictions?
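As a stylized illustration of calculating the core percentage directly from model parameters: if deposit volume is assumed to follow a driftless lognormal process, the core portion is the fraction of today's volume still present, at a chosen confidence level, at the end of the horizon. The process choice, volatility, confidence level, and horizon are exactly the interpretation choices the bank has to set and document.

```python
# Closed-form core percentage under an assumed lognormal volume model.
import math
from statistics import NormalDist

def core_percentage(annual_vol, horizon_years, confidence=0.99):
    # Lower quantile of the volume distribution after `horizon_years`:
    # the share of volume that remains with probability `confidence`.
    z = NormalDist().inv_cdf(1 - confidence)   # e.g. about -2.33 at 99%
    return math.exp(z * annual_vol * math.sqrt(horizon_years))
```

A longer horizon or higher volume volatility lowers the core percentage, which matches the intuition that less volume can be relied upon far into the future.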
One of the puzzles for Risk and ALM managers at banks in recent years has been determining the interest rate risk profile of non-maturing deposits. Banks need to substantiate the modeling choices and parametrization of their deposit models to internal and external validators and to regulatory bodies. Traditionally, banks based the parametrization on historically observed relationships between behavioural deposit components and their drivers. Because of the low interest rate environment and outlook, historical data has lost (part of) its forecasting power. ALM and Risk functions are considering alternatives such as forward-looking scenario analysis, but what are the important focus points of this approach?
THE PROBLEM WITH USING HISTORICAL OBSERVATIONS
In traditional deposit models, it is difficult to capture the complex nature of deposit client rate and volume dynamics. On the one hand, Risk and ALM managers believe that historical observations are not necessarily representative of the coming years. On the other hand, it is hard to ignore observed behaviour, especially if future interest rates return to historical levels. To overcome these issues, model forecasts should be challenged by proper logical reasoning.
In many European markets, the degree to which customer deposit rates track market rates (repricing) has decreased over the last decade. Repricing decreased because many banks hesitate to lower rates below zero. Risk and ALM managers should analyse to what extent the historically decreasing repricing pattern is representative for the coming years and align with the banks’ pricing strategy. This discussion often involves the approval of senior management given the strategic relevance of the topic.
"Common sense and understanding deposit model dynamics are an integral part of the modeling process."
IMPROVING MODELS THROUGH FORWARD LOOKING INFORMATION
Common sense and understanding deposit model dynamics are an integral part of the modeling process (read our interview with ING experts here). Best practice deposit modeling includes forming a comprehensive set of interest rate scenarios that can be translated into a business strategy. To capture all possible future market developments, both downward and upward scenarios should be included. The slope of the interest rate scenarios can be adjusted to reflect gradual changes over time, or sudden steepening or flattening of the curve. Pricing experts should be consulted to determine the expected deposit rate developments over time for each of the interest rate scenarios. Deposit model parameters should be chosen such that the model's estimates provide, on average, the best fit across the scenario analysis.
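The final calibration step can be sketched as follows: choose the pass-through parameter of a floored deposit-rate model so that it best fits the deposit rates that pricing experts expect in each interest rate scenario. The scenarios and expert rates here are hypothetical, and a simple grid search stands in for a proper optimizer.

```python
# Calibrating a pass-through (beta) to expert-provided scenario rates.

def model_rate(market_rate, beta):
    # Floored deposit-rate model: pass-through of the market rate.
    return max(0.0, beta * market_rate)

def calibrate_beta(scenario_rates, expert_rates, grid_steps=101):
    """scenario_rates: market rate per scenario; expert_rates: the deposit
    rate pricing experts expect in that scenario. Returns the beta in
    [0, 1] with the smallest total squared error."""
    best_beta, best_err = 0.0, float("inf")
    for i in range(grid_steps):
        beta = i / (grid_steps - 1)
        err = sum((model_rate(m, beta) - e) ** 2
                  for m, e in zip(scenario_rates, expert_rates))
        if err < best_err:
            best_beta, best_err = beta, err
    return best_beta
```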
When going through this process in your own organisation, be aware that the effects of consulting pricing experts go both ways: Risk and ALM managers improve deposit models by using forward-looking business opinion, and the business improves its understanding of the market through model forecasts.
The low interest rate environment has confronted banks with structural changes in customer behavior and with converging products such as savings and current accounts. ING, one of Europe’s largest players in the savings market and a long-term client of Zanders, has positioned itself as one of the frontrunners in this environment. We sat down with Tom Tschirner (head of market risk at ING Germany) and Maarten Hummel (financial risk officer at ING Group) to gather their views on modeling and balance sheet management after these structural shifts.
In some European countries, savings rates appear to have hit a limit where they have stayed at a low level for a few years, despite interest rates moving down. This would suggest a structural shift where the relation between interest rates and savings rates has broken down. How can banks model savings in this unprecedented situation?
Tom Tschirner: “The situation is different everywhere. Within the countries where we are active, the legal and regulatory frameworks are very different. For example, in countries like Italy or Belgium, the law prohibits further decreases in specific interest rates. In Germany, this regulatory restriction is not in place. From a modeling perspective, this introduces a very different dynamic.”
Maarten Hummel: “It seems all banks are struggling with the impact of these low interest rates on the behavior of their customers. There is no real history on these low rates to use in our modeling. To develop forward-looking scenarios and to know how to model these scenarios we therefore work even more closely with the business.”
How do you weigh these expert opinions in unprecedented scenarios versus historic observations?
Tschirner: “The political wind is towards using historic data. It is challenging to substantiate what you have based your expert opinion on with a regulator. Using data-based model decisions is more straightforward from that point of view, as the model is then objectively determined. However, there are situations like the one we have now, when you just have no or very limited data. And then you must use expert judgement.
The question is then: how good can the experts be? We neither have data nor experience with the current situation. What becomes important in that situation is not to do stupid things. It’s important to know what competitors are doing. For example, if you find out that on average their deposits are modeled for the duration of three years and your own model indicates you should use seven years, you should take a break and reconsider. Particularly when you don’t have enough data and experience.”

Maarten Hummel - Financial Risk Officer - ING Group
What is your role in this as a market risk manager?
Tschirner: “Our role is always to make sure that common sense is around the table and that everyone who is somehow affected by the model knows how much it depends on expert opinion, data, competition and common sense.”
Hummel: “We always have to be sure that we understand and can explain the dynamics in the forward-looking scenarios; how the bank reacts, how the clients react, what would happen in the wider savings market and other relevant factors. There needs to be a logic to explain the scenario outcomes, both on the savings portfolio and the overall balance sheet. We always look at what it means for the bank as a whole, for example: how would we manage the total bank in such a situation? It is not just a simple exercise of running a savings model based on historic data to get the answers – more important is that you assess the overall plausibility. Therefore, when calibrating our savings models, we now spend more time discussing the scenarios in-depth with the various stakeholders in the bank.”
Does that mean that both quantitative and qualitative elements are discussed?
Hummel: “The business strategy is leading. We use a global framework for our business strategy to look at how it would play out in a certain environment. Then you need to have discussions on whether that strategy will really hold in the more severe scenarios. We do take scenarios into account in a more qualitative strategy discussion. We have to look at the market, our own balance sheet and how we are positioned. It is an interesting discussion.”
To what extent do you look at the restrictions on the lending side in discussions on savings modeling?
Hummel: “The starting point is to look at the saving portfolio independently, but at some stage you cannot escape the rest of the balance sheet. For example, if I have a 50-year liability, where am I going to invest it and what is my funding value? There needs to be a check to see if the value attached to it exists.”
Tschirner: “At the end of the day, when it comes to modeling savings, the question that we are trying to answer is: how should we invest the money that we get from our clients? And can you do that totally independently of the asset situation? Most likely not. If the model tells you to invest the money for fifty years, but there are no such assets in the economy, the model is not very helpful. I would not say it is the individual situation of the bank that matters, but more the economy or the country. How easy is it to find long-term assets in Germany, Poland, or Belgium? That certainly plays an important role for the modeling of savings. One year ago, I may not have subscribed to this view, but now I’m quite convinced about that.”
Do the low savings rates impact the relation between the balances on payment accounts and the saving accounts?
Hummel: “Before, the idea was that these have different functions; one for the transactions and one to earn interest. The incentive on the savings side has now largely disappeared. Inevitably, we see many more funds staying on the current account. The question is then: how can you separate the two parts? The client does not bother to put it on the savings account, because the interest is the same. But since we have to be prepared for a scenario where interest rates will go up significantly again, we keep identifying that money as savings. You need data to identify the amount of transactional account money and separate that from the savings amount. Rates have been low for a long period already, so for a newly started bank estimating that will be very hard.”

Tom Tschirner - Head of Market Risk - ING Germany
A large portion of German ING clients is relatively new. Is it therefore harder to get the right data?
Tschirner: “There are different ways to look at it, but what we clearly observe is that the average balance of the current accounts is increasing quite significantly. You can relook at history and try to find a trend, to see what the average balances would be if it were not for the low-rate environment. Or you can look at intra-monthly patterns, driven for example by salaries and rents. If there is a threshold above which you do not find a pattern anymore, then it looks more like a savings account. These are two approaches to determine which part should be modeled as true current account money and which part as savings. There is no standard yet, but given the regulatory attention, we will find an industry standard in the coming year.”
Do you think it is a common blind spot that the segmentation between those two is often not explicitly modeled?
Tschirner: “It’s not the biggest issue that we have. But yes, you need a model. If you want a real good model though, you need all legs of the cycle; you would also need an observation from a point in time that rates increase – and you don’t have that.”
Hummel: “I agree, you need a full cycle. The challenge is that for each solution you put on this, you need an exit strategy, so once savings rates go up again and market rates are high, you gradually build down the savings on your current account. In the meantime, every client is different. We have different sets of clients and you need to have data on how your client composition is changing over time.”
Tschirner: “In Germany, ING is growing, and the number of accounts has been increasing a lot. We also know that the average age of our clients has gone up. You could argue that older clients tend to keep higher balances on their accounts and that they do not shift the money when rates are around zero. But if you look at the data, you will not be able to tell the difference; there is no data-based way of telling this apart. That makes it challenging to model.”
The recent rises in global interest rates mark the first increases in a long time, as the loose monetary policies and quantitative easing (QE) introduced after the 2008 crash and the Covid-19 pandemic abate.
There is now a clear trend break that is likely to significantly impact financial markets. Rate hikes have already caused rises in the mortgage rates offered by banks, but rates on variable savings are still negligible in the eurozone. However, looking further east, the first glimpses of positive compensation for client deposits are evident. What can we learn from Poland in this new and recently uncharted market territory?
Since the beginning of this year, interest rates have been increasing at a fast pace after a long period of low rates. The Bank of England and the US Federal Reserve have already hiked their rates in an effort to tame high inflation, while the European Central Bank (ECB) has just announced plans to raise rates after 11 years of historically low or even negative interest rates. The consensus in financial markets is that positive rates will return in the eurozone towards the end of this year.
Looking towards Eastern Europe might offer a glimpse into the future for banks and their clients, as they are already ahead of the curve in terms of rising interest rates.
THE POLISH EXEMPLAR
Where interest rate hikes have only just been announced within the 19-nation eurozone, the markets in Hungary, Romania, Poland and other parts of Eastern Europe that remain outside the single currency are already ahead of the trend. In Poland, for example, interest rates decreased to near-zero after the 2020 Covid-19 pandemic, driving mortgage and savings rates down to historically low levels. Due to high inflation, however, the Polish central bank has increased rates sharply since October of last year. As a result, short-term rates in Poland have risen by almost 7 percentage points since the end of 2021, while eurozone rates are only expected to increase in the coming months (see Figure 1).

Figure 1: Three-month interest rate in Poland v the eurozone, including implied future rates for the eurozone (dashed line)
Polish consumers hoping for a similarly fast increase in their savings rate were left disillusioned. Since interest rates started to rise nine months ago in the country, savings rates have remained at a constant level of 0.5%, resulting in an extreme increase in margins for Polish banks. Since the majority of Polish mortgage owners pay a variable mortgage rate, rising interest rates have put a squeeze on many households.
As a reaction, the Polish government publicly urged banks to further increase the savings rate paid to consumers. Indeed, the National Bank of Poland recently began offering its own savings bonds directly to consumers. Retail clients are able to invest their savings for a fixed term against a coupon which tracks the central bank’s rate. As hoped, this has encouraged a response from the Polish banks. They are now providing similar fixed term deposits to clients.
Upward pricing pressure on savings rates is now evident. Recently, multiple banks announced a small rise in the general savings rate, towards 1%, slowly passing on some of the additional margin to clients. However, savings rates on offer in Poland still significantly lag the short-term interest rates in the market.
ARE POLISH TRENDS APPLICABLE TO EURO MARKETS?
Although Eastern European markets provide interesting insights into interest rate developments, they do not necessarily provide a clear roadmap for Western European markets. Eastern markets on the continent have experienced a relatively low interest rate environment for a long time, but historically their interest rates have been significantly higher than those in the eurozone. Since the introduction of the euro, interbank offered rates have hardly ever risen above 5% (see Figure 2). It remains to be seen, therefore, whether euro yields will rise to the same extremes currently observed in Eastern Europe.

Figure 2: Historical interbank rates for the eurozone
Banks in the euro area face more competition, making it challenging to maintain a savings margin similar to that of the Polish banks. Eurozone banks face more competition from peers within their own country and from foreign banks that can operate more easily in the single currency area; banks in countries with their own domestic currencies face less of this displacement risk. In addition, eurozone banks face competition from newer fintech-enabled banks that spy an opportunity to win market share by offering higher savings rates. Waiting too long to raise the compensation of depositors could lead to a large exodus of retail clients from traditional institutions.
It is unlikely that the ECB will take a similarly active role to the National Bank of Poland in pressuring banks to increase savings rates. ECB policies must be appropriate for all 19 national markets within the eurozone, which collectively exhibit far less uniformity than the Polish market.
For example, the intervention of the National Bank of Poland resulted from the large portion of variable rate mortgages in Poland, but the eurozone market is much more diversified in this respect. It is therefore not expected that the ECB will start offering retail products to increase savings rates.
Although the ECB plans to hike its interest rates in common with its Eastern European neighbors, a continuous series of significant rate hikes is less likely: financial markets tend to react more strongly to expectations and announcements from the ECB, which necessitates a more gradual approach. The point is illustrated by the significant increase in the spread between Italian and German government bonds following the recent announcement that the ECB will raise interest rates for the first time in 11 years; the foreshadowed change immediately decreased the value of Italian bonds. Some divergence from the trend observed in Poland is therefore inevitable, but the overarching pattern of rising global rates is evident, and over time this will of course feed into savings rates, with some local variation.
WHAT CAN WE LEARN FROM SAVINGS MARKETS IN OTHER COUNTRIES?
Despite the differences between savings markets in Eastern Europe and the eurozone, there are plenty of lessons we can still learn from the Polish situation. Interest rate hikes in the market will likely predate increases in deposit rates, although the lag between the two will vary with differences in the competitive environment.
In Poland, the savings rates offered by banks are slowly rising after more than six months of high short-term interest rates. This makes it unlikely that we will see large increases in deposit rates in the eurozone before the end of the year if we map that trend across the currency border.
While the approach of the ECB to interest rate hikes is less hawkish compared to the Eastern European central banks, there will still be multiple rate increases over the coming year. In the Polish market, the pressure to increase rates on savings deposits mostly came from a competitive price on fixed term deposits – in this case offered by the central bank itself. Although the ECB is unlikely to adopt such an active approach, the pricing pressure in the eurozone is likely to come from term deposits as well. Once the difference between short term rates, which are typically reflected in fixed term deposits, and rates on savings becomes large enough, banks are likely to increase their compensation on savings – or face a declining customer base.
From the banks' point of view, it is critical to accurately capture the pricing dynamic between fixed-term deposits and savings rates. This dynamic could be modeled explicitly when forecasting deposit rates, to capture the risk in variable rate savings.
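One hypothetical way to make that pricing dynamic explicit: let the pass-through into the savings rate switch on only once the spread versus fixed-term deposits exceeds a threshold, mimicking banks that reprice to defend their deposit base. The threshold and pass-through values below are illustrative assumptions, not calibrated to any market.

```python
# Sketch of a threshold-driven savings-rate update rule.

def next_savings_rate(savings_rate, term_deposit_rate,
                      threshold=0.01, pass_through=0.5):
    spread = term_deposit_rate - savings_rate
    if spread <= threshold:
        return savings_rate          # little competitive pressure yet
    # Close part of the gap once the spread becomes too visible to clients.
    return savings_rate + pass_through * (spread - threshold)
```

Iterating this rule along a market-rate scenario reproduces the Polish pattern qualitatively: savings rates stay flat while the spread builds, then start to follow with a lag.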
One approach is to consider the forward-looking behavior of savings when calibrating the models, by formulating specific scenarios and the expected pricing strategy in those scenarios. Lessons from Poland and other parts of Eastern Europe offer an interesting case study to challenge the way a bank approaches rising interest rates.
Credit Risk Suite – Expected Credit Losses Methodology
INTRODUCTION
The IFRS 9 accounting standard has been effective since 2018 and affects both financial institutions and corporates. Although the IFRS 9 standards are principle-based and appear simple, their design and implementation can be challenging. In particular, the difficulties introduced by incorporating forward-looking information into the loss estimate should not be underestimated. Using our hands-on experience and the more than two decades of credit risk expertise of our consultants, Zanders developed the Credit Risk Suite (CRS), a calculation engine that determines transaction-level, IFRS 9-compliant provisions for credit losses. The CRS was designed specifically to overcome the difficulties that our clients face in their IFRS 9 provisioning. In this article, we elaborate on the methodology of the ECL calculations that take place in the CRS.
An industry best-practice approach for ECL calculations requires four main ingredients:
- Probability of Default (PD): The probability that a counterparty will default at a certain point in time. This can be a one-year PD, i.e. the probability of defaulting between now and one year, or a lifetime PD, i.e. the probability of defaulting before the maturity of the contract. A lifetime PD can be split into marginal PDs which represent the probability of default in a certain period.
- Exposure at Default (EAD): The exposure remaining until the maturity of the contract, based on current exposure, contractual and expected redemptions, and future drawings on remaining commitments.
- Loss Given Default (LGD): The percentage of EAD that is expected to be lost in case of default. The LGD differs with the level of collateral, guarantees and subordination associated with the financial instrument.
- Discount Factor (DF): The expected loss per period is discounted to present value terms using discount factors. Discount factors according to IFRS 9 are based on the effective interest rate.
The overall ECL calculation is performed as follows and illustrated by the diagram below:

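Combining the four ingredients, a minimal per-period ECL calculation looks as follows. This is a sketch of the general industry formula, not the CRS implementation itself; monthly periods and a flat effective interest rate are assumptions.

```python
# ECL = sum over periods of marginal PD x EAD x LGD, discounted at the EIR.

def expected_credit_loss(marginal_pds, eads, lgds, eir, periods_per_year=12):
    """marginal_pds/eads/lgds: per-period lists; eir: annual effective
    interest rate used for discounting."""
    ecl = 0.0
    for t, (pd, ead, lgd) in enumerate(zip(marginal_pds, eads, lgds), start=1):
        df = (1 + eir) ** (-t / periods_per_year)   # discount factor
        ecl += pd * ead * lgd * df
    return ecl
```

Truncating the lists to twelve months versus the full contract lifetime gives the one-year versus lifetime ECL that the staging logic switches between.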
MODEL COMPONENTS
The CRS consists of multiple components and underlying models that calculate each of these ingredients separately. The separate components are then combined into ECL provisions that can be used for IFRS 9 accounting purposes. In addition, the CRS contains a customizable module for scenario-based Forward-Looking Information (FLI) and allocates assets to one of the three IFRS 9 stages. In the component approach, projections of PDs, EADs and LGDs are constructed separately. This component-based setup allows for a customizable and easy-to-implement approach. The methodology applied for each of the components is described below.
PROBABILITY OF DEFAULT
For each projected month, the PD is derived from the PD term structure that is relevant for the portfolio as well as the economic scenario. This is done using the PD module. The purpose of this module is to determine forward-looking Point-in-Time (PIT) PDs for all counterparties. This is done by transforming Through-the-Cycle (TTC) rating migration matrices into PIT rating migration matrices. The TTC rating migration matrices represent the long-term average annual transition PDs, while the PIT rating migration matrices are annual transition PDs adjusted to the current (expected) state of the economy. The PIT PDs are determined in the following steps:
- Determine TTC rating transition matrices: To be able to calculate PDs for all possible maturities, an approach based on rating transition matrices is applied. A transition matrix specifies the probability of going from a specified rating to another rating in one year's time. The TTC rating transition matrices can be constructed using, for example, historical default data provided by the client or by external rating agencies.
- Apply forward-looking methodology: IFRS 9 requires the state of the economy to be reflected in the ECL. In the CRS, the state of the economy is incorporated in the PD by applying a forward-looking methodology. The forward-looking methodology in the CRS is based on a ‘Z-factor approach’, where the Z-factor represents the state of the macroeconomic environment. Essentially, a relationship is determined between historical default rates and specific macroeconomic variables. The approach consists of the following sub-steps:
- Derive historical Z-factors from (global or local) historical default rates.
- Regress historical Z-factors on (global or local) macro-economic variables.
- Obtain Z-factor forecasts using macro-economic projections.
- Convert rating transition matrices from TTC to PIT: In this step, the forward-looking information is used to convert TTC rating transition matrices to point-in-time (PIT) rating transition matrices. The PIT transition matrices can be used to determine rating transitions in various states of the economy.
- Determine PD term structure: In the final step of the process, the rating transition matrices are iteratively applied to obtain a PD term structure in a specific scenario. The PD term structure defines the PD for various points in time.
The result of this is a forward-looking PIT PD term structure for all transactions which can be used in the ECL calculations.
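The iterative application of transition matrices can be sketched with a small illustrative matrix; the ratings, probabilities and horizon below are assumptions for the example, not CRS data:

```python
import numpy as np

# Illustrative annual transition matrix with states A, B and Default;
# the default state is absorbing.
T = np.array([
    [0.90, 0.08, 0.02],  # from rating A
    [0.10, 0.80, 0.10],  # from rating B
    [0.00, 0.00, 1.00],  # default is absorbing
])

def pd_term_structure(transitions, start_rating, years):
    """Cumulative PDs per horizon via repeated matrix multiplication."""
    state = np.eye(transitions.shape[0])
    pds = []
    for _ in range(years):
        state = state @ transitions
        pds.append(state[start_rating, -1])  # probability of being in default
    return pds

print([round(p, 4) for p in pd_term_structure(T, start_rating=0, years=3)])
# -> [0.02, 0.046, 0.076]
```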

EXPOSURE AT DEFAULT
For any given transaction, the EAD consists of the outstanding principal of the transaction plus accrued interest as of the calculation date. For each projected month, the EAD is determined using cash flow data if available. If not available, data from a portfolio snapshot from the reporting date is used to determine the EAD.
LOSS GIVEN DEFAULT
For each projected month, the LGD is determined using the LGD module. This module estimates the LGD for individual credit facilities based on the characteristics of the facility and availability and quality of pledged collateral. The process for determining the LGD consists of the following steps:
- Seniority of transaction: A minimum recovery rate is determined based on the seniority of the transaction.
- Collateral coverage: For the part of the loan that is not covered by the minimum recovery rate, the collateral coverage of the facility is determined in order to estimate the total recovery rate.
- Mapping to LGD class: The total recovery rate is mapped to an LGD class using an LGD scale.

SCENARIO-WEIGHTED AVERAGE EXPECTED CREDIT LOSS
Once all expected losses have been calculated for all scenarios, the scenario-weighted average one-year and lifetime loss are calculated for each transaction i:

ECL_i = Σ_s w_s × EL_i,s

For each scenario s, the weight w_s is predetermined. For each transaction i, the scenario losses are weighted according to the formula above, where EL_i,s is either the lifetime or the one-year expected scenario loss. An example of applied scenarios and corresponding weights is as follows:
- Optimistic scenario: 25%
- Neutral scenario: 50%
- Pessimistic scenario: 25%
This results in a one-year and a lifetime scenario-weighted average ECL estimate for each transaction.
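Using the example weights above, the weighting can be sketched as follows (the scenario losses are illustrative):

```python
# Scenario-weighted average ECL for one transaction (illustrative losses).
weights = {"optimistic": 0.25, "neutral": 0.50, "pessimistic": 0.25}
lifetime_ecl = {"optimistic": 80.0, "neutral": 100.0, "pessimistic": 160.0}

weighted_ecl = sum(weights[s] * lifetime_ecl[s] for s in weights)
print(weighted_ecl)  # -> 110.0
```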
STAGE ALLOCATION
Lastly, using a stage allocation rule, the applicable (i.e., one-year or lifetime) scenario-weighted ECL estimate for each transaction is chosen. The stage allocation logic consists of a customisable quantitative assessment to determine whether an exposure is assigned to Stage 1, 2 or 3. One example could be to use a relative and absolute PD threshold:
- Relative PD threshold: +300% increase in PD (with an absolute minimum of 25 bps)
- Absolute PD threshold: +3%-point increase in PD
The PD thresholds are applied to one-year best estimate PIT PDs.
If either of the criteria are met, Stage 2 is assigned. Otherwise, the transaction is assigned Stage 1.
The provision per transaction is determined using the stage of the transaction. If the transaction is in Stage 1, the provision is equal to the one-year expected loss. For Stage 2, the provision is equal to the lifetime expected loss. Stage 3 provision calculation methods are often transaction-specific and based on expert judgement.
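The example stage allocation rule above can be sketched as follows (Stage 3 handling, which is transaction-specific, is omitted):

```python
# Stage 2 if either the relative or the absolute PD threshold is breached.
def assign_stage(pd_origination, pd_current):
    increase = pd_current - pd_origination
    relative_breach = (pd_current >= 4 * pd_origination  # +300% increase
                       and increase >= 0.0025)           # absolute minimum of 25 bps
    absolute_breach = increase >= 0.03                   # +3%-point increase
    return 2 if (relative_breach or absolute_breach) else 1

print(assign_stage(pd_origination=0.005, pd_current=0.025))  # -> 2
print(assign_stage(pd_origination=0.010, pd_current=0.015))  # -> 1
```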
At Zanders we have developed several Credit Rating models. These models are already in use at over 400 companies and have been tested both in practice and against empirical data. If you want to know more about our Credit Rating models, keep reading.
During the development of these models, an important step is the calibration of the parameters to ensure good model performance. To maintain these models, a regular re-calibration is performed. For our Credit Rating models, we strive to rely on a quantitative calibration approach that is combined and strengthened with expert opinion. This article explains the calibration process for one of our Credit Risk models, the Corporate Rating Model.
In short, the Corporate Rating Model assigns a credit rating to a company based on its performance on quantitative and qualitative variables. The quantitative part consists of five financial pillars: Operations, Liquidity, Capital Structure, Debt Service and Size. The qualitative part consists of two pillars: the Business Analysis pillar and the Behavioural Analysis pillar. See A comprehensive guide to Credit Rating Modeling for more details on the methodology behind this model.
The model calibration process for the Corporate Rating Model can be summarized as follows:

Figure 1: Overview of the model calibration process
In steps (2) through (7), input from the Zanders expert group is taken into consideration. This especially holds for input parameters that cannot be directly derived by a quantitative analysis. For these parameters, first an expert-based baseline value is determined and second a model performance optimization is performed to set the final model parameters.
In most steps, the model performance is assessed by looking at the AUC (area under the ROC curve). The AUC is one of the most popular metrics for quantifying the model fit (note that this is not necessarily the same as the model quality, just as correlation does not equal causation). Put very simply, the ROC curve plots correct against incorrect predictions across all possible cut-offs; the area under that curve indicates the explanatory power of the model.
DATA
The first step covers the selection of data from an extensive database containing the financial information and default history of millions of companies. Not all data points can be used in the calibration and/or during the performance testing of the model; therefore, data filters are applied. Furthermore, the dataset is categorized into 3 size classes and 18 industry sectors, each of which is calibrated independently using the same methodology.
This results in the master dataset. In addition, data statistics are created that show the data availability, data relations and data quality. The master dataset also contains fields derived from the financials in the database; these fields are based on a long list of quantitative risk drivers (financial ratios). The long list of risk drivers is created based on expert opinion. As a last step, the master dataset is split into a calibration dataset (2/3 of the master dataset) and a test dataset (1/3 of the master dataset).
RISK DRIVER SELECTION
The risk driver selection for the qualitative variables is different from the risk driver selection for the quantitative variables. The final list of quantitative risk drivers is selected by means of different statistical analyses calculated for the long list of quantitative risk drivers. For the qualitative variables, a set of variables is selected based on expert opinion and industry practices.
SCORING APPROACH
Scoring functions are calibrated for the quantitative part of the model. These scoring functions translate the value and trend value of each quantitative risk driver, per size class and industry, into a (uniform) score between 0 and 100. For this exercise, several possible types of scoring functions are considered. The best-performing scoring function for the value and trend of each risk driver is determined by performing a regression and comparing the performance. The coefficients in the scoring functions are estimated by fitting the function to the ratio values of the companies in the calibration dataset. For the qualitative variables, the translation from a value to a score is based on expert opinion.
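As an illustration, a logistic curve is one possible scoring function type; the functional form and coefficients below are assumptions for the sketch, since in the model they are fitted per risk driver, size class and industry:

```python
import math

# Map a financial ratio to a (uniform) score between 0 and 100
# with an illustrative logistic scoring function.
def score(ratio, midpoint, steepness):
    return 100.0 / (1.0 + math.exp(-steepness * (ratio - midpoint)))

# E.g. an EBIT margin of 12% against a hypothetical fitted midpoint of 8%:
print(round(score(0.12, midpoint=0.08, steepness=25.0), 1))  # -> 73.1
```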
WEIGHTING APPROACH
The overall score of the quantitative part of the model is obtained by summing the value and trend scores using weights. As a starting point, expert opinion-based weights are applied, after which the performance of the model is further optimized by iteratively adjusting the weights until an optimal set of weights is reached. The weights of the qualitative variables are based on expert opinion.
MAPPING TO CENTRAL TENDENCY
To estimate the mapping from final scores to rating classes, a standardized methodology is applied. The buckets are constructed from a scoring-distribution perspective, to ensure a smooth distribution over the rating classes. As input, the final scores (based on the quantitative risk drivers only) of each company in the calibration dataset are used, together with expert opinion input parameters. The estimation is performed per size class. An optimization towards a central tendency is performed by adjusting the expert opinion input parameters. To this end, a target average PD range per size class, and on total level, is derived from default data published by the European Banking Authority (EBA).
The qualitative variables are included by performing an external benchmark on a selected set of companies, where proxies are used to derive the score on the qualitative variables.
The final input parameters for the mapping are set such that the average PD per size class from the Corporate Rating Model is in line with the target average PD ranges, and such that a good performance on the external benchmark is achieved.
OVERRIDE FRAMEWORK
The override framework consists of two sections, Level A and Level B. Level A takes country, industry and company-specific risks into account. Level B considers the possibility of guarantor support and other (final) overriding factors. By applying Level A overrides, the Interim Credit Risk Rating (CRR) is obtained. By applying Level B overrides, the Final CRR is obtained. For the calibration only the country risk is taken into account, as this is the only override that is based on data and not a user input. The country risk is set based on OECD country risk classifications.
TESTING AND BENCHMARKING
For the testing and benchmarking, the performance of the model is analysed on both the calibration and the test dataset (excluding the qualitative assessment but including the country risk adjustment). For each dataset, the discriminatory power is determined by looking at the AUC. The calibration quality is reviewed by performing a Binomial Test on individual rating classes, to check whether the observed default rate lies within the boundaries of the PD of the rating class, and a Traffic Lights Approach, to compare the observed default rates with the PD of the rating class.
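The Binomial Test on an individual rating class can be sketched as follows; the portfolio size, default count and class PD are illustrative:

```python
from math import comb

# One-sided binomial test: probability of observing at least n_defaults
# defaults if the class PD is correct. A small p-value suggests the
# PD of the rating class is underestimated.
def binomial_p_value(n_obligors, n_defaults, class_pd):
    return sum(
        comb(n_obligors, k) * class_pd**k * (1 - class_pd) ** (n_obligors - k)
        for k in range(n_defaults, n_obligors + 1)
    )

# 8 observed defaults among 500 obligors in a class with a 1% PD:
p = binomial_p_value(n_obligors=500, n_defaults=8, class_pd=0.01)
print(p > 0.05)  # -> True: the observed default rate is consistent with the PD
```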
In conclusion, the methodology applied for the (re-)calibration of the Corporate Rating Model is based on an extensive dataset with financial and default information, complemented with expert opinion. The methodology ensures that the final model performs in line with the central tendency and performs well on an external benchmark.
Typical retail banks often use short-term funding such as customer deposits to fund long-term loans. The profitability of this business activity is highly dependent on the pricing of the deposits and loans.
External client rates can be split up in an interest-rate component, a liquidity spread and a margin covering, for example, operational and credit risk. To limit the risk of a decline in profitability, banks often hedge the interest-rate risk as part of their risk management framework. Since the global financial crisis of 2007-2008, it has become clearer that the liquidity spread also has a significant impact on profitability. However, the measurement and hedging of liquidity spread risk is still at an early stage in the banking sector. In this article we use a stylized example to illustrate the impact of liquidity spread risk on banks’ earnings. Furthermore, we discuss which challenges banks face regarding the management of liquidity spread risk.
WHAT ARE LIQUIDITY SPREADS?
The global financial crisis of 2007-2008 was a major turning point in terms of liquidity in the financial system. In preceding years, funding was available on a large scale and at low rates, especially for creditworthy and large banks. These banks often only paid a small spread above the swap rates for attracting funding. This enabled banks to earn significant profits. Meanwhile, there was a wide belief in the sustainability of the attractive funding conditions. During the global financial crisis, liquidity declined as banks were less willing to lend to each other because of their uncertainty about the exposure on structured products. To account for declined liquidity, banks charged each other a higher spread on top of the swap rates when lending funds. The liquidity spread can therefore be described as the spread banks pay and receive on top of the swap rate to account for liquidity.
Liquidity spreads exhibit procyclical behavior as liquidity spreads typically decrease during economic expansion when there is plenty of liquidity, while liquidity spreads increase during economic contraction when liquidity is declining or limited. This procyclical behavior also has an impact on the pricing of short-term deposits and long-term loans. As a result, the profitability of a bank is affected by changes in liquidity spreads. The embedded risk of the procyclical behavior of liquidity spreads in banks’ profitability is called liquidity spread risk.
CHALLENGES IN THE MEASUREMENT AND MANAGEMENT OF LIQUIDITY SPREAD RISK
Banks face a number of challenges in the measurement and management of liquidity spread risk. The first one is the non-trivial estimation of the liquidity spread repricing speed for variable rate products like on-demand savings. The second one is the measurement of liquidity spread risk in a Funds Transfer Pricing (FTP) context. The final challenge is the hedging of liquidity spread risk.
ESTIMATION OF LIQUIDITY SPREAD PASS-THROUGH RATE
As shown in the example in this article, measurement of the liquidity spread pass-through rate is crucial for determining the impact of liquidity spread risk on earnings. Measuring the liquidity spread pass-through rate is quite straightforward for maturing products like mortgages and other loans, as the liquidity spread is often fixed until a pre-specified horizon. For non-maturing products like on-demand savings, this estimate is often harder to make. Analysis of the historical relationship between liquidity spreads and external client rates is a possible approach.
MEASUREMENT OF LIQUIDITY SPREAD RISK IN A FUNDS TRANSFER PRICING CONTEXT
The hypothetical bank taken as a starting point in this article is a stylized example, where short-term customer deposits are directly used to fund long-term mortgages. In practice, many banks have a Funds Transfer Pricing (FTP) framework in place which attributes liquidity costs and benefits to each line of business activity. Interest rate risk and liquidity risk are managed by the central treasury department of the bank. Such a framework is also in line with Internal Liquidity Adequacy Assessment Process regulations from, for example, the Dutch central bank (DNB).
An FTP framework also serves as a monitoring and management tool for the bank, as certain products can be priced more or less attractively, thereby impacting the balance sheet structure. This makes it hard to measure, or even identify, liquidity spread risk for business lines. Liquidity spreads are often not set externally; bank management can adjust the externally observed liquidity spreads to steer the balance sheet. In this way, bank management can influence the liquidity spreads (FTP) that business lines pay or receive on their products.
HEDGING OF LIQUIDITY SPREAD RISK
Banks typically use swaps to hedge interest rate risk, aiming to steer towards a target duration of equity as set in the Risk Appetite Statement. Without interest rate swaps, banks bear interest rate risk, as shocks in interest rates affect the bank’s value or earnings. From a liquidity spread risk perspective, the situation is identical: shocks in the liquidity spread also affect the bank’s value or earnings. In practice, however, in contrast to interest rate risk, for which plenty of derivatives are available for hedging purposes, it is quite a challenge to find suitable derivative contracts.
STYLIZED EXAMPLE
Consider a bank that uses retail customer deposits (on-demand savings) to fund retail mortgages, with a balance sheet as shown on the left in Figure 1. For the analysis in this article, a static balance sheet is assumed. The retail mortgages all have a contractual maturity of 20 years, with a fixed liquidity spread and coupon until maturity. The retail on-demand savings do not have a contractual maturity. The client rates the bank receives on the mortgages and pays on the on-demand savings are shown on the right in Figure 1. The figure shows the contribution of the interest rate component, a liquidity spread and a margin, to the client rates. Cash and cash equivalents are assumed to be non-interest bearing.
From Figure 1 it is clear that future movements in each of the components of the client rate have an impact on the bank’s earnings. The impact of these movements depends on the degree of sensitivity of the client rates on mortgages and on-demand savings towards its components. The degree of sensitivity can be measured by the pass-through rate of each of the components. The pass-through rate measures what percentage of a certain change in the market interest rate or liquidity spread is reflected in the client rates on mortgages and on-demand savings.
The pass-through rate for fixed-rate contracts is purely driven by the repricing date of the contract, because a contract generally reprices fully at the repricing date, for both interest rates and liquidity spreads. For savings this is more difficult, as there is no clear repricing date. First, changes in market rates and liquidity spreads are passed through to the client rate gradually over time. Second, if a bank chose not to pass through these changes, clients would switch to competing banks that do increase their client rates.
IMPACT OF LIQUIDITY SPREAD RISK ON EARNINGS
To show the impact on earnings for the first year, we consider three market scenarios. Next to the base scenario with no changes in the interest rate and liquidity spread, instantaneous parallel 100 bps increases in the market interest rate and liquidity spread are considered. It is assumed that the pass-through rate for the interest rate and liquidity spread for savings equals 50% in the first year (see Figure 2). This means that 50% of the changes in market interest rate and liquidity spread are passed through to the client rates in the first year. For mortgages, the pass-through rate is set at 0% in the first year as these fully reprice after 20 years and no repricing takes place in the first year.
"Liquidity spreads exhibit procyclical behavior as liquidity spreads typically decrease during economic expansion when there is plenty of liquidity, while liquidity spreads increase during economic contraction when liquidity is declining or limited."

Figure 3 shows earnings for the three scenarios. In the scenario with an upward shock of 100 bps to the interest rate, the on-demand savings client rate increases by 50 bps. Given that the mortgage client rate remains the same, earnings drop from 200 bps to 150 bps. The impact of an upward shock of 100 bps to the liquidity spread is identical: earnings drop from 200 bps to 150 bps because the on-demand savings client rate increases by 50 bps while the mortgage client rate remains the same. This example shows that both market interest rate and liquidity spread movements have an impact on the bank’s earnings.
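The first-year earnings impact in this stylized example can be reproduced in a few lines of code; the client rates of 300 bps on mortgages and 100 bps on savings are assumptions chosen to be consistent with the 200 bps base margin:

```python
# First-year earnings (in bps) under a parallel shock, given the
# pass-through rates of the mortgage and savings client rates.
def earnings_bps(mortgage_rate, savings_rate, shock_bps,
                 pass_through_mortgage, pass_through_savings):
    new_mortgage = mortgage_rate + shock_bps * pass_through_mortgage
    new_savings = savings_rate + shock_bps * pass_through_savings
    return new_mortgage - new_savings

base = earnings_bps(300, 100, shock_bps=0, pass_through_mortgage=0.0,
                    pass_through_savings=0.5)
shocked = earnings_bps(300, 100, shock_bps=100, pass_through_mortgage=0.0,
                       pass_through_savings=0.5)
print(base, shocked)  # -> 200.0 150.0
```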
HEDGING OF INTEREST RATE AND LIQUIDITY SPREAD RISK
Under normal market conditions, the interest-rate risk on earnings can be hedged using interest-rate swaps. This is illustrated in Figure 4 for the hypothetical bank in our example. The bank buys a (for example) 10-year interest-rate swap with notional equal to half of the total savings volume. If the market interest rate increases by 100 bps, the client rate increases by 50 bps. However, this increase is offset by the interest-rate swap payoff, which also equals 50 bps. As a result, earnings remain stable on 200 bps.
CONCLUSION
Liquidity spreads have an impact on banks’ earnings. The degree of impact depends on the sensitivity of the client rates on deposits and loans towards the liquidity spread movements. For a typical retail bank using short-term funding for long-term loans, earnings can drop when liquidity spreads increase.
Banks can improve their risk measurement and management systems by incorporating liquidity spread risk. A first step might be Earnings-at-Risk scenarios for liquidity spread risk, independent of scenarios for interest rates. This enables banks to gain insight into the impact that liquidity spreads have on a bank’s earnings, and to set up relevant liquidity spread risk management systems. Doing so is non-trivial as banks face several additional challenges compared to interest rate risk management. The main challenges are the measurement of the sensitivity of non-maturing product client rates towards liquidity spreads, liquidity spread risk measurement for business lines in an FTP context, and the hedging of liquidity spread risk.
Many banks use a framework of replicating investment portfolios to measure and manage the interest rate risk of variable savings deposits. There are two commonly used methodologies, known as the marginal investment strategy and the portfolio investment strategy. While these have the same objective, the effects for margin and interest maturity may vary. We review these strategies on the basis of a quantitative and a qualitative analysis.
A replicating investment portfolio is a collection of fixed income investments based on an investment strategy that aims to reflect the typical interest rate maturity of the savings deposits (also referred to as ‘non-maturing deposits’). The investment strategy is formulated so that the margin between the portfolio return and the savings interest rate is as stable as possible, given various scenarios. A replicating framework enables a bank to base its interest rate risk measurement and management on investments with a fixed maturity and price – while the deposits have no contractual maturity or price. In addition, a bank can use the framework to transfer the interest rate risk from the business lines to the central treasury, by turning the investments into contractual obligations. There are two commonly used methodologies for constructing the replicating portfolios: the marginal investment strategy and the portfolio investment strategy. These strategies have the same objective, but have different effects on margin and interest-rate term, given certain scenarios.
STRATEGIES DEFINED
An investment strategy determines the monthly allocation of the investable volume across various maturities. The investable volume in month t (I_t) consists of two parts:

I_t = (S_t − S_(t−1)) + Σ_(i,m=t) v_(i,m)

The first part is equal to the decrease or increase in the volume of savings deposits S_t compared to the previous month. The second part is equal to the total principal of all investments i in the investment portfolio maturing in the current month (end date m = t), Σ_(i,m=t) v_(i,m). By investing or re-investing these two parts, the total principal of the investment portfolio equals the savings volume outstanding at that moment. When an investment is generated, it receives the market interest rate corresponding to its maturity at that time. The portfolio investment return is determined as the principal-weighted average interest rate.

The difference between the two strategies is that under a marginal investment strategy the volume is invested with a fixed allocation across fixed maturities, whereas under a portfolio strategy these parameters are flexible: investments are generated such that the resulting portfolio has the same (target) proportional maturity profile each month. The maturity profile gives the total monthly principal of the currently outstanding investments that will mature in the future.

In the savings modeling framework, the interest rate risk profile of the savings portfolio is estimated and defined as a (proportional) maturity profile. For the portfolio investment strategy, the target maturity profile is set equal to this estimated profile. For the marginal investment strategy, the ‘investment rule’ is derived from the estimated profile using a formula. Under a long-lasting constant or stable savings volume, the investment portfolio resulting from this rule converges to the estimated profile.
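The investable volume can be sketched as follows (all numbers are illustrative):

```python
# Investable volume in month t: the change in savings volume plus the
# total principal of all investments maturing in the current month.
def investable_volume(savings_now, savings_prev, maturing_principals):
    return (savings_now - savings_prev) + sum(maturing_principals)

# Savings grew by 50 and two investments with principals 30 and 20 mature:
print(investable_volume(savings_now=1050, savings_prev=1000,
                        maturing_principals=[30, 20]))  # -> 100
```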
STRATEGIES ILLUSTRATED
In Figure 1, the difference between the two strategies is illustrated graphically with an example. The example shows the development of the replicating portfolios of the two strategies over two consecutive months with increasing savings volume. The replicating portfolios initially consist of the same investments, with original maturities of one month, 12 months and 36 months. Consequently, the same investments with the same principals mature under both strategies. The total maturing principal is reinvested and the increase in savings volume is invested.

Figure 1: Two replication portfolio strategies
Note that if the savings volume would have remained constant, both strategies would have generated the same investments. However, with changing savings volume, the strategies will generate different investments and a different number of investments (3 under the marginal strategy, and 36 under the portfolio strategy). The interest rate typical maturities and investment returns will therefore differ, even if market interest rates do not change. For the quantitative properties of the strategies, the decision will therefore focus mainly on margin stability and the interest rate typical maturity given changes in volume (and potential simultaneous movements in market interest rates).
SCENARIO ANALYSIS
The quantitative properties of the investment strategies are explained by means of a scenario analysis. The analysis compares the development of the duration, margin and margin stability of both strategies under various savings volume and market interest rate scenarios.
CLIENT INTEREST RATE
As part of the simulation of a margin, a client interest rate is modeled. The model consists of a set of sensitivities to market interest rates (M1,t) and to moving averages of market interest rates (MA12,t). These sensitivities show the degree to which the bank has to reflect market movements in its client interest rate, given the profile of its savings clients. The model chosen for the client interest rate at point in time t (CRt) is as follows:

Up to a certain degree, the model is representative of the savings interest rates offered by (retail) banks.
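A sketch of such a client rate model is shown below. The sensitivities (0.3 to the one-month rate, 0.5 to the 12-month moving average) and the zero floor are illustrative assumptions, not the calibrated values used in the analysis:

```python
# Client rate as a weighted combination of the 1-month market rate (m1)
# and a 12-month moving average of market rates (ma12), floored at zero.
def client_rate(m1, ma12, beta_m1=0.3, beta_ma12=0.5, floor=0.0):
    return max(beta_m1 * m1 + beta_ma12 * ma12, floor)

rates = [0.02] * 12            # flat 2% market rate history
ma12 = sum(rates) / len(rates)
print(round(client_rate(m1=0.02, ma12=ma12), 4))  # -> 0.016
```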
INVESTMENT STRATEGIES
The investment rules are formulated so that the target maturity profiles of the two strategies are identical. This maturity profile is then determined so that the same sensitivities to the variables apply as for the client rate model. An overview of the investment strategies is given in Table 1.

The replication process is simulated for 200 successive months in each scenario. The starting point for the investment portfolio under both strategies is the target maturity profile, whereby all investments are priced using a constant historical (normal) yield curve. In each scenario, upward and downward shocks lasting 12 months are applied to the savings volume and the yield curve after 24 months.
EXAMPLE SCENARIO
The results of an example scenario are presented in order to show the dynamics of both investment strategies. This example scenario is shown in Figure 2. The results in terms of duration and margin are shown in Figure 3.

Figure 2: Example scenario in volume and market interest rate development

Figure 3: Impact on duration and margin from interest up and down scenarios
As one would expect, the duration for the portfolio investment strategy remains the same over the entire simulation. For the marginal investment strategy, we see a sharp decline in the duration during the volume ‘shock period’, after which a double wave motion develops in the duration. In short, this is caused by the initial (marginal) allocation during the ‘stress’ period and the subsequent cycles of reinvesting it. With an upward volume shock, the margin for the portfolio strategy declines because the increase in savings volume is invested at downward-shocked market interest rates. After the shock period, the declining investment return and client rate converge. For the marginal strategy this effect also applies, and in addition the duration effects feed into the margin development.
SCENARIO SPECTRUM
In the scenario analysis the standard deviation of the margin series, also known as the margin volatility, serves as a proxy for margin stability. The results in terms of margin stability for the full range of market interest rate and volume scenarios are summarized in Figure 4. From the figures, it can be seen that the margin of the marginal investment strategy has greater sensitivity to volume and interest rate shocks. Under these scenarios the margin volatility is on average 2.3 times higher, with the factor ranging between 1.5 and 4.5. In general, for both strategies, the margin volatility is greatest under negative interest-rate shocks combined with upward or downward volume shocks.
REPLICATION IN PRACTICE
The scenario analysis shows that the portfolio strategy has a number of advantages over the marginal strategy. First of all, the maturity profile remains constant at all times and equal to the modeled maturity of the savings deposits. Under the marginal strategy, the interest rate typical maturity can deviate from it for long periods, even when there are no changes in the market interest rate environment or in the behavior of the savings portfolio. Secondly, the development of the margin is more stable under volume and interest rate shocks. The margin volatility under the marginal investment strategy is at least one and a half times higher under the chosen scenarios.
AN INTUITIVE PROCESS
These benefits might, however, come at the expense of a number of qualitative aspects that may form an important consideration when it comes to implementation. Firstly, the advantage of a constant interest-rate profile under the portfolio strategy comes at the expense of intuitive combinations of investments. This may be important if these investments form contractual obligations for the transfer of the interest rate risk.

Figure 4: Margin volatility of marginal (left-hand) and portfolio strategy (right-hand) for upward (above) and downward shocks (below)
Specifically, the portfolio strategy requires generating a large number of investments, which can even have negative principals in the case of a (small) decline in savings volume. Secondly, the shocks in the duration under a marginal strategy might actually be desirable and in line with savings portfolio developments. For example, if due to market or idiosyncratic circumstances there is a high inflow of deposit volume, this additional volume may be relatively more interest rate sensitive, justifying a shorter duration. Nevertheless, the example scenario shows that such a temporary decline is followed by a temporary increase for which this justification no longer applies.
THE CHOICE
A combination of the two strategies may also be chosen as a compromise solution. This involves the use of a marginal strategy whereby interventions trigger a portfolio strategy at certain times. An intervention policy could be established by means of limits or triggers in the risk governance. Limits can be set for (unjustifiable) deviations from the target duration, whereas interventions can be triggered by material developments in the market or the savings portfolio.
In its choice for the strategy, the bank is well-advised to identify the quantitative and qualitative effects of the strategies. Ultimately, the choice has to be in line with the character of the bank, its savings portfolio and the resulting objective of the process.