Update of liquidity forecasts
Within an organization, important choices are made based on the current and expected liquidity position. It is therefore of great importance that the liquidity position is accurately portrayed and regularly updated.
During uncertain times, making a forecast requires extra effort and poses a greater challenge. For example, due to uncertainty, it may be decided to update the liquidity forecast(s) more frequently. The question raised then is what frequency is appropriate and how to deal with the assumptions underlying forecasts made during an economically sound period. In this article we discuss the information, processes and systems that are important for getting (and keeping) a good grip on the development of cash flows. Ultimately, as an organization you want to have a reliable picture at all times of the expected development of the liquidity in the short, medium and long term.
AVAILABILITY OF THE RIGHT INFORMATION
In order to obtain the best possible picture of the liquidity position, the quality of the underlying information is of great importance. The realized and projected profit and loss account, balance sheet, investment plans and transactions form an important part of the input. From a theoretical point of view, there are two methods to translate this input into (expected) cash flows: the direct and the indirect method.
- Under the direct method, cash flows are based on individual incoming and outgoing transactions, such as accounts receivable receipts, accounts payable payments, investments and interest payments.
- Cash flows under the indirect method arise from the P&L account and the balance sheet. Thus, under this method, the cash effects of balance sheet changes are also included.
Which method is appropriate depends in part on the length of the specific forecast. Actual bank movements (or an accurate estimate of these), on which the direct method is based, are often available for a relatively short period only. This makes the direct method suitable for a short-term forecast. With the indirect method, the cash flows are derived from the projected P&L account and balance sheet. Because these projections typically extend further into the future, the indirect method is well suited for medium- and long-term forecasts.
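To make the difference concrete, here is a minimal sketch of both methods for a single period; all line items and figures are hypothetical, and the toy example is constructed so that both methods arrive at the same net cash flow.

```python
# Direct method: sum the expected individual receipts and payments.
week_transactions = {
    "receivables_collected": 120_000,
    "payables_paid": -85_000,
    "interest_paid": -5_000,
    "capex_paid": -10_000,
}
direct_net_cash_flow = sum(week_transactions.values())

# Indirect method: start from projected profit and adjust for non-cash
# items and balance sheet changes.
projected_net_income = 30_000
depreciation = 8_000             # non-cash expense, added back
change_in_receivables = -15_000  # receivables grew: cash tied up
change_in_payables = 7_000       # payables grew: cash retained
capex_paid = -10_000
indirect_net_cash_flow = (projected_net_income + depreciation
                          + change_in_receivables + change_in_payables
                          + capex_paid)

print(direct_net_cash_flow, indirect_net_cash_flow)  # 20000 20000
```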
In order to obtain a good picture of the liquidity position in the short, medium and long term, various forecasts must be prepared. These forecasts differ in terms of time units (week, month, quarter and year) and length (quarter, year and > 1 year). The matrix below shows these time lines, with the different forecasts shown vertically and the appropriate method for each forecast shown to its right.
Here, it is often the case that the longer the forecast horizon, the less accurate the forecast. The choices regarding time units and length of the forecast are related to the phase in which the organization finds itself and the type of sector in which it operates. For example, in times of economic uncertainty (such as the current pandemic) or in the case of a weak financial position, organizations often choose, sometimes at the insistence of external financial stakeholders (e.g. a bank's special credits department), to produce a short-term forecast on a weekly basis. This ensures that the financial position is reviewed weekly, thereby increasing the grip on cash flows.
"In order to get a good picture of the liquidity position in the short, medium and long term, various forecasts need to be made."
However, this does not mean that the financial position is portrayed more accurately by creating more forecasts of different lengths. Indeed, maintaining too many different forecasts demands a constant time investment that quickly becomes too great to keep them all up to date.
When an organization is in a "quiet" period, a monthly forecast for 12 months rolling, combined with a multi-year (annual) forecast for 5 years rolling, may be sufficient. However, consistency between forecasts is crucial. The inputs provided should be consistent across the different forecasts, and the time units of the different forecasts should overlap. For example, a 13-week (one-quarter) forecast should logically align with the first quarter of a rolling 12-month forecast prepared on a quarterly basis.
EMBEDDING IN SYSTEMS
One of the tools needed to process and update all information is a system that brings together all cash flow information. In practice, a treasury module in the existing ERP system or an Excel file is regularly used. If Excel is used without a clear format, it often turns out to be too complex, confusing and prone to errors. With a clear format, Excel can certainly be a suitable tool. In addition, one can choose to largely automate the forecasting process by means of an application.
The format of the chosen system will act as a means of creating consensus among internal stakeholders on the approach and principles of forecasting. In addition, it will create clarity towards external stakeholders. In order to maintain an overview, it is advisable to subdivide the cash flows into a limited number of items. The following three types of cash flows provide the basis for this (a short sketch follows the list):
- Operating cash flow: all cash flows resulting from operations.
- Investing cash flow: consisting of investments in fixed assets and investments in the form of acquisitions or divestments.
- Financing cash flow: all expenditures and income from financing activities (a different choice can be made with respect to interest).
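As an illustration, a forecast format built on this subdivision can be as simple as the sketch below; all items and figures are hypothetical.

```python
# One forecast period, cash flows grouped into the three categories above.
forecast = {
    "operating": {"customer_receipts": 500, "supplier_payments": -320, "salaries": -90},
    "investing": {"capex": -60, "asset_disposals": 15},
    "financing": {"loan_drawdown": 100, "loan_repayment": -40, "interest": -8},
}

subtotals = {category: sum(items.values()) for category, items in forecast.items()}
net_cash_flow = sum(subtotals.values())
print(subtotals, net_cash_flow)  # operating 90, investing -45, financing 52 -> 97
```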
TREASURYnxt provides organizations with a flexible way to create liquidity forecasts for the above cash flows.
GETTING A GRIP ON CASH THROUGH THE RIGHT PROCESSES
Besides information and systems, processes will need to be established within the organization to really get a grip on the cash position. Fixed processes bring structure to the preparation, comparison and updating of forecasts. It must always be possible to answer the question of why a particular cash flow differs from a previously prepared forecast.
It is important to be able to explain the difference between two liquidity forecasts prepared at different times. A clear format can help with this. The challenge lies in constantly updating and reconciling these forecasts. If, for example, an investment is postponed, this will have to be reflected immediately in the investment cash flows of the forecasts. Here, clear communication is crucial. This starts with internal communication, by means of regular meetings or calls (so-called cash calls). By scheduling regular cash calls, during which the cash position and expectations are discussed and analyzed, the forecasts remain up to date. It is preferable to align the frequency of these cash calls with the frequency of the relevant forecast. In communication with external stakeholders, it is particularly important to provide insight into the risks and opportunities of the forecasts that have been prepared.
Finally, it is essential to compare the actual realization of the cash flows with the forecast prepared for them. The deviations and insights arising from this must be taken into account in the forecast for the subsequent period. After the actuals have been processed, the forecast horizon is extended so that its length remains unchanged: a so-called rolling forecast.
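A minimal sketch of one rolling-forecast update step, with hypothetical figures: the realized period is compared against what was forecast, and a new furthest period is appended so the horizon keeps its length.

```python
from collections import deque

forecast = deque([20, 25, 15, 30], maxlen=4)  # next four periods (hypothetical)
actual_this_period = 18

expected = forecast.popleft()                 # period that has now been realized
variance = actual_this_period - expected      # -2: to be explained in the cash call
forecast.append(28)                           # new furthest period: horizon stays four

print(f"variance: {variance}, rolling forecast: {list(forecast)}")
```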
Ultimately, grip on the cash position can only be realized through the combination of information, systems and processes. A clear vision on this helps to structure this interplay.
Savings modelling series: The impact of savings rate floors on balance sheet management
Low interest rates, decreasing margins and regulatory pressure: banks are faced with a variety of challenges regarding non-maturing deposits. Accurate and robust models for non-maturing deposits are more important than ever. These complex models depend largely on a number of modelling choices. In the savings modelling series, Zanders lays out the main modelling choices, based on our experience at a variety of Tier 1, 2 and 3 banks.
The low or even negative market rates in many Western European countries significantly affect banks’ pricing and funding strategy. Many banks hesitate to offer negative rates on non-maturing deposits (NMD) to retail customers. In some markets, like in Belgium, regulatory restrictions impose a lower limit on the savings rate that a bank can offer. The adverse impact of these developments is that current funding margins for many banks are under pressure.
The flooring effect on variable rate deposits is a hot topic for banks' Risk Management functions due to its impact on pricing dynamics and customer behaviour. Although banks may offer negative deposit rates if interest rates continue to decrease ("soft flooring"), they depend on the pricing strategy of competing banks. Next to that, offering negative rates can cause serious reputational damage, leading to deposit volume outflows. The next paragraphs outline the key focus points for Risk and ALM managers regarding risk reporting, economic hedges, and risk models.
BREACHING THE SUPERVISORY OUTLIER TEST
Banks are likely to breach the Supervisory Outlier Test (SOT) thresholds because of the asymmetric sensitivity of the economic value to interest rate shocks. Banks must inform their supervisor when the change in Economic Value of Equity resulting from specific interest rate scenarios exceeds certain thresholds. Asymmetric pricing effects on NMD can have a substantial impact on economic value and earnings: when NMD rates are close to the floor, the interest rate sensitivity decreases, which effectively makes NMD similar to fixed-rate instruments like bonds.
"Banks are likely to hit the Supervisory Outlier Test (SOT) because of the asymmetric sensitivity of the economic value to interest rate shocks."
Risk Management functions need to adjust the economic hedge to mitigate the repricing gap between assets and liabilities. While NMD are traditionally variable-rate products, they behave more like interest rate insensitive instruments in a low interest rate environment. Risk managers need to reflect this impact in the economic hedge. It is important to realize that the non-linearity of NMD resulting from the floor is difficult to capture with linear financial instruments such as interest rate swaps. Although some banks are adjusting the hedge on a best-estimate (duration or DV01) basis, the asymmetric pricing effects will largely be left unhedged. Banks can choose to accept and monitor this risk, or capitalize for it.
Risk models need to be adjusted to reflect flooring effects on NMD. For most Western European markets, historical data is dominated by higher interest rate levels and does not yield representative behavioural risk estimations.
Savings modelling series – How ‘hidden savings’ impact the risk profile for banks
WHAT ARE HIDDEN SAVINGS?
Because the low or zero rates offered by banks provide little motivation to move money to savings accounts, many banking customers use their current accounts as savings accounts. It is very likely that customers will move part of this money to savings accounts when rates increase again. This 'hidden savings' or 'savings substitution' volume has the same interest rate sensitivity as savings account volume, including the asymmetric 'flooring' effect.
SO, HOW DO I DEAL WITH THEM?
Given the existence of these hidden savings, it might be justified to model this volume with a shorter maturity than regular current account balances, reflecting its lower stability. Because hidden savings prove very difficult to quantify and substantiate in practice, modeling them is still not general practice among Risk and ALM managers. The banks that do include the hidden savings effect typically use historical data-based approaches, combined with expert-based guidelines on the measurement approach and significance thresholds. Significance thresholds can be relative (a fixed percentage of total current accounts volume) or absolute (for example, 100 million euro of volume).
"Because the low or zero rates offered by banks provide little motivation to move money to savings accounts, many banking customers use their current accounts as savings account."
USING HISTORICAL DATA
Some banks use historical portfolio data to estimate the hidden savings portion of current accounts. Hidden savings is defined as the portion of volume after subtracting the volatile and long-term volume. The volatile (non-stable) volume is estimated based on intra-month (daily) volume fluctuations. The long-term, non-repricing, volume (core volume) can be estimated based on historical minimum volume levels.
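The portfolio-level decomposition can be sketched with crude proxies, as below; the daily balances are hypothetical, and the proxies (historical minimum for core volume, dispersion for volatile volume) are only one of several possible choices.

```python
import statistics

daily_balances = [100, 102, 101, 103, 105, 104, 106, 108, 107, 110]  # EUR millions

core = min(daily_balances)                    # historical minimum as core proxy
volatile = statistics.pstdev(daily_balances)  # intra-month fluctuation proxy
current = daily_balances[-1]
hidden_savings = max(0.0, current - core - volatile)

print(f"core: {core}, volatile: {volatile:.1f}, hidden savings: {hidden_savings:.1f}")
```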
Another measurement approach is to use account-level data to estimate the hidden savings volume. The average current account balance development over time is used to identify a trend of accelerating balance levels. Hidden savings is derived as the portion of current account volume above historically identified trends. To identify these historical trends, sufficient historical data on time periods with a significant difference between savings and current accounts rates are required.
Savings modelling series: Non-maturing deposits model concepts
For banks with significant non-maturing deposit portfolios, Risk Management functions need to have a robust behavioural risk model. This model is required for Interest Rate Risk in the Banking Book reporting, hedging, stress testing, risk transfer, and ad-hoc analyses. Although specific modelling assumptions vary per bank, the market practice model concepts are cashflow-based models, replicating portfolio models, and hybrid models. The choice for one of these models is strongly linked to model purpose and use. Each concept has its benefits and drawbacks for different purposes and uses.
CASHFLOW-BASED MODELS
Cashflow-based models consist of two sub-models for the deposit rate and volume that forecast coupon and notional cashflows, respectively. Both sub-models measure the relationship between behavioural risk and underlying explanatory factors. Cashflow-based models are suited to include asymmetric pricing effects (such as flooring of rates) in resulting risk metrics. Since the approach captures rate and volume dynamics well, it is also often used for ad-hoc behavioural risk analysis and stress testing.
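A minimal sketch of the two sub-models working together is shown below; the pass-through, floor and outflow sensitivity are hypothetical parameters, not an estimated model.

```python
# Pricing sub-model: floored partial pass-through of the market rate (in %).
def deposit_rate(market_rate, beta=0.4, floor=0.0):
    return max(floor, beta * market_rate)

# Volume sub-model: outflow grows with the gap between market and own rate.
def next_volume(volume, market_rate, own_rate, outflow_sensitivity=0.05):
    return volume * (1 - outflow_sensitivity * max(0.0, market_rate - own_rate))

vol, cashflows = 1000.0, []
for market_rate in (0.5, 1.0, 1.5, 2.0):  # a rising-rate scenario
    rate = deposit_rate(market_rate)
    coupon = vol * rate / 100              # coupon cashflow for the period
    new_vol = next_volume(vol, market_rate, rate)
    cashflows.append((round(coupon, 1), round(vol - new_vol, 1)))  # (coupon, notional outflow)
    vol = new_vol

print(cashflows)
```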
"The choice for one of these models is strongly linked to model purpose and use."
REPLICATING PORTFOLIO MODELS
Replicating Portfolio models replicate a deposit portfolio into simple financial instruments (e.g., bonds) such that its risk profile matches the risk profile of the underlying deposits. The advantage is that it converts a complex product into tangible financial instruments with a coupon and maturity. This simplified portfolio is well-suited to transfer risk from business units to treasury departments. A disadvantage of the model is that it does not fully capture non-linear deposit behaviour, for example the asymmetric pricing effects resulting from the floor. This makes the approach less suited for stress testing or ad-hoc behavioural risk analysis for senior management.
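A minimal sketch of the idea, with hypothetical tranche weights; an actual replicating portfolio would be calibrated so that its repricing profile matches the modelled deposit behaviour.

```python
# Replicate deposit volume with fixed-maturity tranches (maturity in months).
tranche_weights = {1: 0.30, 12: 0.25, 36: 0.25, 60: 0.20}

volume = 1000.0
portfolio = {maturity: volume * weight for maturity, weight in tranche_weights.items()}
# Crude duration proxy: weighted average tranche maturity.
duration_years = sum(m * w for m, w in tranche_weights.items()) / 12

print(portfolio, f"approx. duration: {duration_years:.1f} years")
```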
HYBRID MODELS
Hybrid models, consisting of both a cashflow model and a replicating portfolio model, combine the benefits of the other approaches, but at the cost of increased complexity. These models are often used by banks that want to use the model for a wide range of purposes: risk transfer to treasury departments, risk reporting, ad-hoc behavioural risk analysis, and stress testing. To prevent a mismatch between the two models, most banks ensure that the risk profiles (duration or DV01) of both models align.
Savings modelling series – How to determine core non-maturing deposit volume?
Identifying the core of non-maturing deposits has become increasingly important for European banking Risk and ALM managers. This is especially true for retail banks whose funding mostly comprises deposits. In recent years, the concept of core deposits was formalized by the Basel Committee and included in various regulatory standards. European regulators are considering a requirement to disclose the core NMD portion to supervisors and possibly to public stakeholders. Despite these developments, many banks still wonder: what are core deposits and how do I identify them?
FINDING FUNDING STABILITY: CORE PORTION OF DEPOSITS
Behavioural risk profiles for client deposits can differ considerably per bank and portfolio. A portion of deposits can be stable in volume and price, while other portions are volatile and sensitive to market rate changes. Before banks determine the behavioural (investment) profile for these funds, they should analyse which deposits are suitable for long-term investment. This portion is often labelled core deposits.
Basel standards define core deposits as balances that are highly likely to remain stable in terms of volume and are unlikely to reprice after interest rate changes. Behaviour models can vary a lot between (or even within) banks and are hard to compare. A simple metric such as the proportion of core deposits should make a comparison easier. The core breakdown alone should be sufficient to substantiate differences in the investment and risk profiles of deposits.
"A good definition of core deposit volume is tailored to banks’ deposit behavioural risk model."
Regulatory guidelines do not define the exact confidence level and horizon used for core analysis. Therefore banks need to formulate an interpretation of the regulatory guidance and set the assumptions on which their analysis is based. A good definition of core deposit volume is tailored to banks’ deposit behavioural risk model. Ideally, the core percentage can be calculated directly from behavioural model parameters. ALM and Risk managers should start with the review of internal behavioural models: how are volume and pricing stability modelled and how are they translated into investment restrictions?
Savings modelling series – Calibrating models: historical data or scenario analysis?
One of the puzzles for Risk and ALM managers at banks in recent years has been determining the interest rate risk profile of non-maturing deposits. Banks need to substantiate the modelling choices and parametrization of their deposit models to both internal and external validation and regulatory bodies. Traditionally, banks used historically observed relationships between behavioural deposit components and their drivers for the parametrization. Because of the low interest rate environment and outlook, historical data has lost (part of) its forecasting power. Alternatives such as forward-looking scenario analysis are considered by ALM and Risk functions, but what are the important focus points when using this approach?
THE PROBLEM WITH USING HISTORICAL OBSERVATIONS
In traditional deposit models, it is difficult to capture the complex nature of deposit client rate and volume dynamics. On the one hand, Risk and ALM managers believe that historical observations are not necessarily representative for the coming years. On the other hand, it is hard to ignore observed behaviour, especially if future interest rates return to historical levels. To overcome these issues, model forecasts should be challenged by proper logical reasoning.
In many European markets, the degree to which customer deposit rates track market rates (repricing) has decreased over the last decade. Repricing decreased because many banks hesitate to lower rates below zero. Risk and ALM managers should analyse to what extent the historically decreasing repricing pattern is representative for the coming years and align with the banks’ pricing strategy. This discussion often involves the approval of senior management given the strategic relevance of the topic.
"Common sense and understanding deposit model dynamics are an integral part of the modelling process."
IMPROVING MODELS THROUGH FORWARD LOOKING INFORMATION
Common sense and understanding deposit model dynamics are an integral part of the modelling process (see the interview with ING experts below). Best practice deposit modelling includes forming a comprehensive set of interest rate scenarios that can be translated into a business strategy. To capture all possible future market developments, both downward and upward scenarios should be included. The slope of the interest rate scenarios can be adjusted to reflect gradual changes over time, or a sudden steepening or flattening of the curve. Pricing experts should be consulted to determine the expected deposit rate developments over time for each of the interest rate scenarios. Deposit model parameters should then be chosen such that the model's estimates on average provide the best fit to the scenario analysis.
When going through this process in your own organisation, be aware that the effects of consulting pricing experts go both ways. Risk and ALM managers will improve deposit models by using forward-looking business opinion and the business’ understanding of the market will improve through model forecasts.
ING’s perspective on deposit modelling: expert opinions, data, and common sense
The low interest rate environment has confronted banks with structural changes in customer behavior and with converging products, such as savings and current accounts. ING, one of Europe's largest players in the savings market and a long-term client of Zanders, has positioned itself as one of the frontrunners in this environment. We sat down with Tom Tschirner (head of market risk at ING Germany) and Maarten Hummel (financial risk officer at ING Group) to gather their view on modelling and balance sheet management after these structural shifts.
In some European countries, savings rates appear to have hit a limit where they have stayed at a low level for a few years, despite interest rates moving down. This would suggest a structural shift where the relation between interest rates and savings rates has broken down. How can banks model savings in this unprecedented situation?
Tom Tschirner: “The situation is different everywhere. Within the countries where we are active, the legal and regulatory frameworks are very different. For example, in countries like Italy or Belgium, the law prohibits further decreases in specific interest rates. In Germany, this regulatory restriction is not in place. From a modeling perspective, this introduces a very different dynamic.”
Maarten Hummel: “It seems all banks are struggling with the impact of these low interest rates on the behavior of their customers. There is no real history on these low rates to use in our modeling. To develop forward-looking scenarios and to know how to model these scenarios we therefore work even more closely with the business.”
How do you weigh these expert opinions in unprecedented scenarios versus historic observations?
Tschirner: “The political wind is towards using historic data. It is challenging to substantiate what you have based your expert opinion on with a regulator. Using data-based model decisions is more straightforward from that point of view, as the model is then objectively determined. However, there are situations like the one we have now, when you just have no or very limited data. And then you must use expert judgement.
The question is then: how good can the experts be? We neither have data nor experience with the current situation. What becomes important in that situation is not to do stupid things. It’s important to know what competitors are doing. For example, if you find out that on average their deposits are modeled for the duration of three years and your own model indicates you should use seven years, you should take a break and reconsider. Particularly when you don’t have enough data and experience.”
Maarten Hummel - Financial Risk Officer - ING Group
What is your role in this as a market risk manager?
Tschirner: “Our role is always to make sure that common sense is around the table and that everyone who is somehow affected by the model knows how much it depends on expert opinion, data, competition and common sense.”
Hummel: “We always have to be sure that we understand and can explain the dynamics in the forward-looking scenarios; how the bank reacts, how the clients react, what would happen in the wider savings market and other relevant factors. There needs to be a logic to explain the scenario outcomes, both on the savings portfolio and the overall balance sheet. We always look at what it means for the bank as a whole, for example: how would we manage the total bank in such a situation? It is not just a simple exercise of running a savings model based on historic data to get the answers – more important is that you assess the overall plausibility. Therefore, when calibrating our savings models, we now spend more time discussing the scenarios in-depth with the various stakeholders in the bank.”
Does that mean that both quantitative and qualitative elements are discussed?
Hummel: “The business strategy is leading. We use a global framework for our business strategy to look at how it would play out in a certain environment. Then you need to have discussions on whether that strategy will really hold in the more severe scenarios. We do take scenarios into account in a more qualitative strategy discussion. We have to look at the market, our own balance sheet and how we are positioned. It is an interesting discussion.”
To what extent do you look at the restrictions on the lending side in discussions on savings modeling?
Hummel: “The starting point is to look at the saving portfolio independently, but at some stage you cannot escape the rest of the balance sheet. For example, if I have a 50-year liability, where am I going to invest it and what is my funding value? There needs to be a check to see if the value attached to it exists.”
Tschirner: “At the end of the day, when it comes to modeling savings, the question that we are trying to answer is: how should we invest the money that we get from our clients? And can you do that totally independently of the asset situation? Most likely not. If the model tells you to invest the money for fifty years, but there are no such assets in the economy, the model is not very helpful. I would not say it is the individual situation of the bank that matters, but more the economy or the country. How easy is it to find long-term assets in Germany, Poland, or Belgium? That certainly plays an important role for the modeling of savings. One year ago, I may not have subscribed to this view, but now I’m quite convinced about that.”
Do the low savings rates impact the relation between the balances on payment accounts and the saving accounts?
Hummel: “Before, the idea was that these have different functions; one for the transactions and one to earn interest. The incentive on the savings side has now largely disappeared. Inevitably, we see many more funds staying on the current account. The question is then: how can you separate the two parts? The client does not bother to put it on the savings account, because the interest is the same. But since we have to be prepared for a scenario where interest rates will go up significantly again, we keep identifying that money as savings. You need data to identify the amount of transactional account money and separate that from the savings amount. Rates have been low for a long period already, so for a newly started bank estimating that will be very hard.”
Tom Tschirner - Head of Market Risk - ING Germany
A large portion of German ING clients is relatively new. Is it therefore harder to get the right data?
Tschirner: “There are different ways to look at it, but what we clearly observe is that the average balance of the current accounts is increasing quite significantly. You can relook at history and try to find a trend, to see what the average balances would be if it were not for the low-rate environment. Or you can look at intra-monthly patterns, driven for example by salaries and rents. If there is a threshold above which you do not find a pattern anymore, then it looks more like a savings account. These are two approaches to determine which part should be modeled as true current account money and which part as savings. There is no standard yet, but given the regulatory attention, we will find an industry standard in the coming year.”
Do you think it is a common blind spot that the segmentation between those two is often not explicitly modeled?
Tschirner: “It’s not the biggest issue that we have. But yes, you need a model. If you want a real good model though, you need all legs of the cycle; you would also need an observation from a point in time that rates increase – and you don’t have that.”
Hummel: “I agree, you need a full cycle. The challenge is that for each solution you put on this, you need an exit strategy, so once savings rates go up again and market rates are high, you gradually build down the savings on your current account. In the meantime, every client is different. We have different sets of clients and you need to have data on how your client composition is changing over time.”
Tschirner: “In Germany, ING is growing, and the number of accounts has been increasing a lot. We also know that the average age of our clients has gone up. You could argue that older clients tend to have higher balances on their accounts and that they do not shift it when rates are around zero. But if you look at the data, you will not be able to tell the difference. And there is no data-based way of telling this apart. That makes it challenging to model.”
When will savings rates go up to match global interest rate rises?
The recent rises in global interest rates mark the first increase in a long time, as the loose monetary policies and quantitative easing (QE) introduced after the 2008 crash and the Covid-19 pandemic abate.
There is now a clear trend break that is likely to significantly impact financial markets. Rate hikes have already caused rises in the mortgage rates offered by banks, but rates on variable rate savings are still negligible in the eurozone. However, when you look further east, the first glimpses of positive compensation for client deposits are evident. What can we learn from Poland in this new and, until recently, uncharted market territory?
Since the beginning of this year, interest rates have been increasing at a fast pace after a long period of low rates. The Bank of England and the US Federal Reserve have already hiked their rates in an effort to tame high inflation, while the European Central Bank (ECB) has just announced that it plans to raise rates after 11 years of historically low or even negative interest rates. The consensus on financial markets is that positive rates will return in the eurozone towards the end of this year.
Looking towards Eastern Europe might offer a glimpse into the future for banks and their clients, as they are already ahead of the curve in terms of rising interest rates.
THE POLISH EXEMPLAR
Where interest rate hikes have only just been announced within the 19-nation eurozone, the markets in Hungary, Romania, Poland and other parts of Eastern Europe that remain outside the single currency are already in front of the trend. In Poland, for example, interest rates decreased to near zero after the Covid-19 outbreak in 2020, driving down mortgage and savings rates to historically low levels. Due to high inflation, however, the Polish central bank has increased rates sharply since October of last year. As a result, short-term rates in Poland have risen by almost 7 percentage points since the end of 2021, while eurozone rates are only expected to increase in the coming months (see Figure 1).
Figure 1: Three-month interest rate in Poland vs the eurozone, including implied future rates for the eurozone (dashed line)
Polish consumers hoping for a similarly fast increase in their savings rate were left disillusioned. Since interest rates started to rise nine months ago in the country, savings rates have remained at a constant level of 0.5%, resulting in an extreme increase in margins for Polish banks. Since the majority of Polish mortgage owners pay a variable mortgage rate, rising interest rates have put a squeeze on many households.
As a reaction, the Polish government publicly urged banks to further increase the savings rate paid to consumers. Indeed, the National Bank of Poland recently began offering its own savings bonds directly to consumers. Retail clients are able to invest their savings for a fixed term against a coupon which tracks the central bank’s rate. As hoped, this has encouraged a response from the Polish banks. They are now providing similar fixed term deposits to clients.
Upward pricing pressure on savings rates is now evident. Recently, multiple banks announced a small raise of the general savings rate, towards 1%, slowly passing on some of the additional margin to clients. However, savings rates on offer in Poland still significantly lag the short-term interest rates in the market.
ARE POLISH TRENDS APPLICABLE TO EURO MARKETS?
Although Eastern European markets provide interesting insights into interest rate developments, they don't necessarily provide a clear roadmap for Western European markets. Eastern markets on the continent have experienced a relatively low interest rate environment for a long time, but historically their interest rates have been significantly higher compared to the eurozone. Since the introduction of the euro, interbank offered rates have hardly ever risen above 5% (see Figure 2). It remains to be seen, therefore, whether euro yields will rise to the same extremes currently observed in Eastern Europe.
Figure 2: Historical interbank rates for the eurozone
Banks in the euro area face more competition, making it challenging to maintain a savings margin similar to that of the Polish banks. Eurozone banks compete with peers within their own country and with foreign banks that can operate more easily in the single currency area; banks in countries with their own domestic currency face less of this displacement risk. Next to that, eurozone banks face more competition from newer fintech-enabled banks that spy an opportunity to conquer market share by offering higher savings rates. Waiting too long to raise the compensation of depositors could lead to a large exodus of retail clients from traditional institutions.
It is unlikely that the ECB will take a similarly active role to the National Bank of Poland in pressuring banks to increase savings rates. ECB policies must be appropriate for all 19 national markets within the eurozone, which generally exhibit less uniformity than the Polish market.
For example, the intervention of the National Bank of Poland resulted from the large portion of variable rate mortgages in Poland, but the eurozone market is much more diversified in this respect. It is therefore not expected that the ECB will start offering retail products to increase savings rates.
Although the ECB is planning to hike its interest rates in common with its Eastern European neighbors, a continuous series of significant rate hikes is less likely, because financial markets tend to react more strongly to expectations or announcements from the ECB, which necessitates a more gradual approach. The point is illustrated by the significant increase in the spread between Italian and German government bonds seen following the recent announcement that the ECB will raise interest rates for the first time in 11 years. The foreshadowed change immediately decreased the value of Italian government bonds. Some divergence from the trend observed in Poland is therefore inevitable, but the overarching pattern of rising global rates is evident, and over time this will of course feed into savings rates, with some local variations.
WHAT CAN WE LEARN FROM SAVINGS MARKETS IN OTHER COUNTRIES?
Despite the differences between savings markets in Eastern Europe and the eurozone, there are plenty of lessons we can still learn from the Polish situation. Interest rate hikes in the market will likely precede increases in deposit rates, although the lag between the two is likely to vary due to differences in the competitive environment.
In Poland, the savings rates offered by banks are slowly rising after more than six months of high short-term interest rates. This makes it unlikely that we will see large increases in deposit rates in the eurozone before the end of the year if we map that trend across the currency border.
While the approach of the ECB to interest rate hikes is less hawkish compared to the Eastern European central banks, there will still be multiple rate increases over the coming year. In the Polish market, the pressure to increase rates on savings deposits mostly came from a competitive price on fixed term deposits – in this case offered by the central bank itself. Although the ECB is unlikely to adopt such an active approach, the pricing pressure in the eurozone is likely to come from term deposits as well. Once the difference between short term rates, which are typically reflected in fixed term deposits, and rates on savings becomes large enough, banks are likely to increase their compensation on savings – or face a declining customer base.
From the banks' point of view, it is critical to accurately capture the pricing dynamic between fixed term deposits and savings rates. This dynamic could be modeled explicitly when forecasting deposit rates to capture the risk in variable rate savings.
One approach is to consider the forward-looking behavior of savings while calibrating the models by formulating specific scenarios and the expected pricing strategy in these scenarios. Lessons from Poland and other parts of Eastern Europe offer an interesting case study to challenge the way the bank approaches increasing interest rates.
ECL calculation methodology
Credit Risk Suite – Expected Credit Losses Methodology article
INTRODUCTION
The IFRS 9 accounting standard has been effective since 2018 and affects both financial institutions and corporates. Although the IFRS 9 standards are principle-based and simple, the design and implementation can be challenging. In particular, the difficulties introduced by incorporating forward-looking information in the loss estimate should not be underestimated. Drawing on hands-on experience and the more than two decades of credit risk expertise of our consultants, Zanders developed the Credit Risk Suite. The Credit Risk Suite is a calculation engine that determines transaction-level IFRS 9 compliant provisions for credit losses. The CRS was designed specifically to overcome the difficulties that our clients face in their IFRS 9 provisioning. In this article, we will elaborate on the methodology of the ECL calculations that take place in the CRS.
An industry best-practice approach for ECL calculations requires four main ingredients:
- Probability of Default (PD): The probability that a counterparty will default at a certain point in time. This can be a one-year PD, i.e. the probability of defaulting between now and one year, or a lifetime PD, i.e. the probability of defaulting before the maturity of the contract. A lifetime PD can be split into marginal PDs which represent the probability of default in a certain period.
- Exposure at Default (EAD): The exposure remaining until maturity of the contract, based on the current exposure, contractual and expected redemptions, and future drawings on remaining commitments.
- Loss Given Default (LGD): The percentage of EAD that is expected to be lost in case of default. The LGD differs with the level of collateral, guarantees and subordination associated with the financial instrument.
- Discount Factor (DF): The expected loss per period is discounted to present value terms using discount factors. Discount factors according to IFRS 9 are based on the effective interest rate.
The overall ECL calculation combines these four ingredients per period over the relevant horizon.
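A standard formulation consistent with the four ingredients above (a sketch of the calculation, not necessarily the CRS's literal implementation) multiplies them per period and sums over the horizon:

$$\mathrm{ECL} = \sum_{t=1}^{T} \mathrm{PD}^{\mathrm{marginal}}_{t} \cdot \mathrm{EAD}_{t} \cdot \mathrm{LGD}_{t} \cdot \mathrm{DF}_{t}$$

where $\mathrm{PD}^{\mathrm{marginal}}_{t}$ is the marginal default probability in period $t$, and $T$ is 12 months for the one-year ECL or the remaining lifetime for the lifetime ECL.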
MODEL COMPONENTS
The CRS consists of multiple components and underlying models that are able to calculate each of these ingredients separately. The separate components are then combined into ECL provisions which can be utilized for IFRS 9 accounting purposes. Besides this, the CRS contains a customizable module for scenario-based Forward-Looking Information (FLI). Moreover, the solution allocates assets to one of the three IFRS 9 stages. In the component approach, projections of PDs, EADs and LGDs are constructed separately. This component-based setup of the CRS allows for a customizable and easy-to-implement approach. The methodology that is applied for each of the components is described below.
PROBABILITY OF DEFAULT
For each projected month, the PD is derived from the PD term structure that is relevant for the portfolio as well as the economic scenario. This is done using the PD module. The purpose of this module is to determine forward-looking Point-in-Time (PIT) PDs for all counterparties. This is done by transforming Through-the-Cycle (TTC) rating migration matrices into PIT rating migration matrices. The TTC rating migration matrices represent the long-term average annual transition PDs, while the PIT rating migration matrices are annual transition PDs adjusted to the current (expected) state of the economy. The PIT PDs are determined in the following steps:
- Determine TTC rating transition matrices: To be able to calculate PDs for all possible maturities, an approach based on rating transition matrices is applied. A transition matrix specifies the probability of going from a specified rating to another rating in one year's time. The TTC rating transition matrices can be constructed using, for example, historical default data provided by the client or external rating agencies.
- Apply forward-looking methodology: IFRS 9 requires the state of the economy to be reflected in the ECL. In the CRS, the state of the economy is incorporated in the PD by applying a forward-looking methodology. The forward-looking methodology in the CRS is based on a ‘Z-factor approach’, where the Z-factor represents the state of the macroeconomic environment. Essentially, a relationship is determined between historical default rates and specific macroeconomic variables. The approach consists of the following sub-steps:
- Derive historical Z-factors from (global or local) historical default rates.
- Regress historical Z-factors on (global or local) macro-economic variables.
- Obtain Z-factor forecasts using macro-economic projections.
- Convert rating transition matrices from TTC to PIT: In this step, the forward-looking information is used to convert TTC rating transition matrices to point-in-time (PIT) rating transition matrices. The PIT transition matrices can be used to determine rating transitions in various states of the economy.
- Determine PD term structure: In the final step of the process, the rating transition matrices are iteratively applied to obtain a PD term structure in a specific scenario. The PD term structure defines the PD for various points in time.
The result of this is a forward-looking PIT PD term structure for all transactions which can be used in the ECL calculations.
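As an illustration of how a Z-factor shifts PDs, the sketch below applies a one-factor (Vasicek-style) TTC-to-PIT adjustment, a common way to implement such an approach; the asset correlation `rho` and the Z values are hypothetical, and this is not necessarily the exact transformation used in the CRS.

```python
import math
from statistics import NormalDist

N = NormalDist()

def pit_pd(ttc_pd, z, rho=0.12):
    # z is the standardized state of the economy: z < 0 means stress, which
    # pushes the point-in-time PD above its long-term (TTC) average.
    return N.cdf((N.inv_cdf(ttc_pd) - math.sqrt(rho) * z) / math.sqrt(1 - rho))

ttc = 0.02  # long-term average one-year PD of a rating grade
for z in (1.0, 0.0, -1.0, -2.0):
    print(f"Z = {z:+.1f}: PIT PD = {pit_pd(ttc, z):.2%}")
```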
EXPOSURE AT DEFAULT
For any given transaction, the EAD consists of the outstanding principal of the transaction plus accrued interest as of the calculation date. For each projected month, the EAD is determined using cash flow data if available. If not available, data from a portfolio snapshot from the reporting date is used to determine the EAD.
LOSS GIVEN DEFAULT
For each projected month, the LGD is determined using the LGD module. This module estimates the LGD for individual credit facilities based on the characteristics of the facility and the availability and quality of pledged collateral. The process for determining the LGD consists of the following steps (a short sketch follows the list):
- Seniority of transaction: A minimum recovery rate is determined based on the seniority of the transaction.
- Collateral coverage: For the part of the loan that is not covered by the minimum recovery rate, the collateral coverage of the facility is determined in order to estimate the total recovery rate.
- Mapping to LGD class: The total recovery rate is mapped to an LGD class using an LGD scale.
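A minimal sketch of these three steps; the seniority floors, collateral treatment and LGD scale below are hypothetical, not the CRS's calibrated values.

```python
MIN_RECOVERY = {"senior_secured": 0.40, "senior_unsecured": 0.25, "subordinated": 0.10}
LGD_SCALE = [(0.10, "A"), (0.30, "B"), (0.60, "C"), (1.00, "D")]

def lgd_class(seniority, collateral_value, ead):
    min_recovery = MIN_RECOVERY[seniority]                       # step 1: seniority
    residual = ead * (1 - min_recovery)                          # part not yet covered
    collateral_recovery = min(collateral_value, residual) / ead  # step 2: collateral
    lgd = 1 - (min_recovery + collateral_recovery)
    for upper_bound, label in LGD_SCALE:                         # step 3: map to class
        if lgd <= upper_bound:
            return label, round(lgd, 2)

print(lgd_class("senior_unsecured", collateral_value=50, ead=100))  # ('B', 0.25)
```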
SCENARIO-WEIGHTED AVERAGE EXPECTED CREDIT LOSS
Once all expected losses have been calculated for all scenarios, the weighted average loss is calculated for each transaction $i$, for both the one-year and the lifetime scenario losses:

$$\mathrm{ECL}_i = \sum_{s} w_s \cdot \mathrm{EL}_{i,s}$$

For each scenario $s$, the weight $w_s$ is predetermined. For each transaction $i$, the scenario losses are weighted according to the formula above, where $\mathrm{EL}_{i,s}$ is either the lifetime or the one-year expected loss under scenario $s$. An example of applied scenarios and corresponding weights is as follows:
- Optimistic scenario: 25%
- Neutral scenario: 50%
- Pessimistic scenario: 25%
This results in a one-year and a lifetime scenario-weighted average ECL estimate for each transaction.
STAGE ALLOCATION
Lastly, using a stage allocation rule, the applicable (i.e., one-year or lifetime) scenario-weighted ECL estimate for each transaction is chosen. The stage allocation logic consists of a customisable quantitative assessment to determine whether an exposure is assigned to Stage 1, 2 or 3. One example could be to use a relative and absolute PD threshold:
- Relative PD threshold: +300% increase in PD (with an absolute minimum of 25 bps)
- Absolute PD threshold: +3%-point increase in PD

The PD thresholds will be applied to one-year best estimate PIT PDs.
If either of the criteria is met, Stage 2 is assigned. Otherwise, the transaction is assigned Stage 1.
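A minimal sketch of this example rule; reading the 25 bps minimum as a floor on the required PD increase is an assumption.

```python
def assign_stage(pd_origination, pd_current):
    increase = pd_current - pd_origination
    # Relative threshold: +300% (i.e. a quadrupling), with a 25 bps minimum increase.
    relative_breach = increase >= max(3 * pd_origination, 0.0025)
    # Absolute threshold: +3%-point increase.
    absolute_breach = increase >= 0.03
    return 2 if (relative_breach or absolute_breach) else 1

print(assign_stage(0.010, 0.020))  # 1: +100% and +1%-point, below both thresholds
print(assign_stage(0.010, 0.045))  # 2: +350% relative increase
```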
The provision per transaction is determined using the stage of the transaction. If the transaction is in Stage 1, the provision is equal to the one-year expected loss. For Stage 2, the provision is equal to the lifetime expected loss. Stage 3 provision calculation methods are often transaction-specific and based on expert judgement.
Rating model calibration methodology
At Zanders we have developed several Credit Rating models. These models are already being used at over 400 companies and have been tested both in practice and against empirical data. If you want to know more about our Credit Rating models, keep reading.
During the development of these models, an important step is the calibration of the parameters to ensure good model performance. In order to maintain these models, a regular re-calibration is performed. For our Credit Rating models we strive to rely on a quantitative calibration approach that is combined with and strengthened by expert opinion. This article explains the calibration process for one of our Credit Risk models, the Corporate Rating Model.
In short, the Corporate Rating Model assigns a credit rating to a company based on its performance on quantitative and qualitative variables. The quantitative part consists of five financial pillars: Operations, Liquidity, Capital Structure, Debt Service and Size. The qualitative part consists of two pillars: Business Analysis and Behavioural Analysis. See A comprehensive guide to Credit Rating Modelling for more details on the methodology behind this model.
The model calibration process for the Corporate Rating Model can be summarized as follows:
Figure 1: Overview of the model calibration process
In steps (2) through (7), input from the Zanders expert group is taken into consideration. This especially holds for input parameters that cannot be directly derived by a quantitative analysis. For these parameters, first an expert-based baseline value is determined and second a model performance optimization is performed to set the final model parameters.
In most steps the model performance is assessed by looking at the AUC (area under the ROC curve). The AUC metric is one of the most popular metrics to quantify the model fit (note this is not necessarily the same as the model quality, just as correlation does not equal causation). Very simply put, the ROC curve plots the proportion of correctly identified defaulters against the proportion of incorrectly flagged non-defaulters at every possible cut-off. The area under that curve then indicates the discriminatory power of the model.
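As an illustration, the AUC can be computed directly from its rank interpretation: the probability that a randomly chosen defaulter received a higher risk score than a randomly chosen non-defaulter. The scores below are hypothetical.

```python
defaulter_scores = [0.8, 0.6, 0.9]      # model risk scores of defaulted companies
survivor_scores = [0.2, 0.4, 0.7, 0.3]  # scores of non-defaulted companies

pairs = [(d, s) for d in defaulter_scores for s in survivor_scores]
auc = sum(1.0 if d > s else 0.5 if d == s else 0.0 for d, s in pairs) / len(pairs)
print(f"AUC = {auc:.2f}")  # 0.92 here; 1.0 = perfect ranking, 0.5 = no discrimination
```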
DATA
The first step covers the selection of data from an extensive database containing the financial information and default history of millions of companies. Not all data points can be used in the calibration and/or during the performance testing of the model; therefore, data filters are applied. Furthermore, the data set is categorized into 3 different size classes and 18 different industry sectors, each of which is calibrated independently, using the same methodology.
This results in the master dataset; in addition, data statistics are created that show the data availability, data relations and data quality. The master dataset also contains derived fields based on financials from the database; these fields are based on a long list of quantitative risk drivers (financial ratios). The long list of risk drivers is created based on expert opinion. As a last step, the master dataset is split into a calibration dataset (2/3 of the master dataset) and a test dataset (1/3 of the master dataset).
RISK DRIVER SELECTION
The risk driver selection for the qualitative variables is different from the risk driver selection for the quantitative variables. The final list of quantitative risk drivers is selected by means of different statistical analyses calculated for the long list of quantitative risk drivers. For the qualitative variables, a set of variables is selected based on expert opinion and industry practices.
SCORING APPROACH
Scoring functions are calibrated for the quantitative part of the model. These scoring functions translate the value and trend value of each quantitative risk driver, per size class and industry, to a (uniform) score between 0 and 100. For this exercise, different possible types of scoring functions are used. The best-performing scoring function for the value and trend of each risk driver is determined by performing a regression and comparing the performance. The coefficients in the scoring functions are estimated by fitting the function to the ratio values for companies in the calibration dataset. For the qualitative variables, the translation from a value to a score is based on expert opinion.
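As an illustration, one possible scoring function type is a logistic curve; the coefficients below are hypothetical, whereas in the actual calibration they are fitted per size class and industry.

```python
import math

def logistic_score(ratio, a=1.5, b=1.0):
    # a controls the steepness, b the ratio value that maps to a score of 50.
    return 100 / (1 + math.exp(-a * (ratio - b)))

for current_ratio in (0.5, 1.0, 1.5, 2.5):
    print(f"ratio {current_ratio:.1f} -> score {logistic_score(current_ratio):.0f}")
```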
WEIGHTING APPROACH
The overall score of the quantitative part of the model is computed by summing the value and trend scores using weights. As a starting point, expert opinion-based weights are applied, after which the performance of the model is further optimized by iteratively adjusting the weights, arriving at an optimal set of weights. The weights of the qualitative variables are based on expert opinion.
MAPPING TO CENTRAL TENDENCY
To estimate the mapping from final scores to a rating class, a standardized methodology is created. The buckets are constructed from a scoring distribution perspective, to ensure an eventually smooth distribution over the rating classes. As input, the final scores (based on the quantitative risk drivers only) of each company in the calibration dataset are used, together with expert opinion input parameters. The estimation is performed per size class. An optimization towards a central tendency is performed by adjusting the expert opinion input parameters. This is done by deriving a target average PD range per size class and on total level, based on default data from the European Banking Authority (EBA).
The qualitative variables are included by performing an external benchmark on a selected set of companies, where proxies are used to derive the score on the qualitative variables.
The final input parameters for the mapping are set such that the average PD per size class from the Corporate Rating Model is in line with the target average PD ranges and a good performance on the external benchmark is achieved.
OVERRIDE FRAMEWORK
The override framework consists of two sections, Level A and Level B. Level A takes country, industry and company-specific risks into account. Level B considers the possibility of guarantor support and other (final) overriding factors. By applying Level A overrides, the Interim Credit Risk Rating (CRR) is obtained. By applying Level B overrides, the Final CRR is obtained. For the calibration only the country risk is taken into account, as this is the only override that is based on data and not a user input. The country risk is set based on OECD country risk classifications.
TESTING AND BENCHMARKING
For the testing and benchmarking, the performance of the model is analysed based on the calibration and test datasets (excluding the qualitative assessment but including the country risk adjustment). For each dataset the discriminatory power is determined by looking at the AUC. The calibration quality is reviewed by performing a Binomial Test on individual rating classes, to check whether the observed default rate lies within the boundaries of the PD of the rating class, and a Traffic Lights Approach, to compare the observed default rates with the PD of the rating class.
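A minimal sketch of the binomial test for a single rating class, with hypothetical portfolio figures; the one-sided test asks whether the observed number of defaults is consistent with the PD assigned to the class.

```python
from math import comb

def binomial_pvalue(n, defaults, pd_class):
    # Probability of observing at least `defaults` defaults among n obligors.
    return sum(comb(n, k) * pd_class**k * (1 - pd_class)**(n - k)
               for k in range(defaults, n + 1))

p = binomial_pvalue(n=500, defaults=9, pd_class=0.01)  # expected ~5 defaults
print(f"p-value = {p:.3f}")  # a small p-value (e.g. < 0.05) would flag the class
```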
In conclusion, the methodology applied for the (re-)calibration of the Corporate Rating Model is based on an extensive dataset with financial and default information, complemented with expert opinion. The methodology ensures that the final model performs in line with the central tendency and performs well on an external benchmark.