Treasury Roundtable Event for PE-Owned Companies: Treasury’s Role in Value Creation 

June 2024
3 min read

Explore the crucial role of treasury in value creation and financial performance in private equity.


The evolving economic landscape has placed a spotlight on the critical role of treasury in value creation. Our latest roundtable, themed ‘Treasury’s Role in Value Creation,’ delved into the challenges and strategies private equity firms must navigate to enhance financial performance and prepare for successful exits. This event gathered industry leaders to discuss the expectations from treasury functions, the integration of post-merger processes, and the use of innovative technologies to drive growth. Read more as we explore the insights and key takeaways from this engaging and timely discussion, offering a roadmap for treasurers to elevate their impact within portfolio companies.

Roundtable theme: Treasury’s Role in Value Creation 

The roundtable’s theme, ‘Treasury’s Role in Value Creation,’ was chosen to address the pressing economic and operational challenges that resulted in longer holding periods and slowed exits in 2023. In this context, private equity firms are increasingly focusing on growth and optimization strategies to drive long-term financial performance improvements, positioning their portfolio companies for successful exits once deal markets rebound. Key questions explored included: What is expected from the treasury function? How can treasurers navigate priorities and challenges to deliver productivity, financial performance, and value-added analysis to their company and PE sponsor? How can successful treasury post-merger integration be achieved in a buy & build scenario? And how should one prepare for an exit? 

Key Insights and Strategic Directions 

One of the significant discussion points was the value of cash management as a directly measurable lever of value creation. The panel emphasized the importance of focusing on free cash flow, EBITDA, and debt levels, which form the backbone of a successful investment. These metrics are crucial during due diligence, as they are scrutinized by Limited Partners (LPs). The consensus advocated for a focus on organic growth and business transformation over multiple expansion, which can signal stability and long-term value to LPs, and therefore add significant value to PE firms. 

Moreover, it was discussed that LPs intensely evaluate the financial models of portfolio companies, focusing on recurring revenue, Capex, margins, and debt levels. These factors often determine the soundness of an investment. The robustness of financial operations and the sophistication of the technologies employed are crucial in investment decisions, underscoring the important role of treasury in due diligence. 

Enhancing ‘Buy and Build’ Strategies 

Effective cash management was highlighted as a key factor influencing the success of ‘buy and build’ strategies, which involve acquiring companies and then integrating and growing them to enhance value. It ensures the necessary liquidity and financial oversight during the integration and growth phases. An attendee noted that firms often "buy but forget to build." Quantifying the impact of effective treasury management is essential to addressing this gap. 

A way of realizing operational improvements is through increased automation. Despite some pushback from PE firms on automating treasury functions, there are instances where sponsors are willing to invest in technologies to support the treasury function. For instance, an attendee mentioned receiving a sponsor’s support to invest in technology that will improve cash flow forecasting. Additionally, the approach to value creation at the portfolio company level depends on the sponsor's type and level of commitment. 

The use of Artificial Intelligence (AI) in search of value creation was also discussed. Notably, various tangible use cases for AI in Treasury are envisaged. One example highlighted was ASML’s use of AI for forecasting optimization; although the semiconductor equipment manufacturer is not PE-owned, its approach served as a prime example in the discussion. In 2023, ASML implemented an AI-powered material intake forecast model to enhance the effectiveness and efficiency of its purchase FX hedging program. This sharpened focus on FX risk management is a visible trend across private market firms. Deploying more sophisticated tools to increase FX hedging effectiveness at the PE fund or portfolio company level is an area worth exploring. 

Looking Ahead 

We reflect on a successful inaugural edition of the Private Equity Roundtable. We learned that effective cash management is crucial for value creation, focusing on free cash flow, EBITDA, and debt levels to ensure liquidity and financial oversight, particularly in ‘buy and build’ strategies. Moreover, automation and technology investments in treasury functions, such as improved cash flow forecasting, are essential for operational improvements and enhancing value creation in portfolio companies. Afterwards, participants shared that the session added significant value to their roles as treasurers of PE-owned companies. The positive feedback energizes us to organize similar sessions in other countries. 

Is your company owned, or about to be owned, by private equity? We can share our experiences regarding the added complexities of being a treasurer for a PE-owned company. For further information, you can reach out to Pieter Kraak.

Blockchain-based Tokenization for decentralized Issuance and Exchange of Carbon Offsets

November 2023
3 min read

Explore how blockchain-based tokenization can enable decentralised issuance and exchange of carbon offsets.


Carbon offset processes are currently dominated by private actors providing legitimacy for the market. The two largest of these, Verra and Gold Standard, provide auditing services, carbon registries and a marketplace to sell carbon offsets, making them ubiquitous in the whole process. Because of this centralisation and the resulting opacity, the business models of the incumbent companies have been criticised regarding their validity and the actual benefit for climate action. By buying an offset in the traditional manner, the buyer must place trust in these players and their business models. Alternative solutions that would enhance the transparency of the process as well as provide decentralised marketplaces are thus called for.

The conventional process

Carbon offsets are certificates or credits that represent a reduction or removal of greenhouse gas emissions from the atmosphere. Offset markets work by having companies and organizations voluntarily pay for carbon offsetting projects. Reasons for partaking in voluntary carbon markets vary from increased awareness of corporate responsibility to a belief that emissions legislation is inevitable, and it is thus better to partake earlier.

Some industries also face prohibitively expensive barriers to lowering their emissions, or simply can’t reduce them because of the nature of their business. These industries can instead benefit from carbon offsets, as they manage to lower overall carbon emissions while still staying in business. Environmental organisations run climate-friendly projects and offer certificate-based investments for companies or individuals, who can thereby reduce their own carbon footprint. By purchasing such certificates, they invest in these projects and their actual or future reduction of emissions. However, on a global scale, it is not enough to simply lower our carbon footprint to negate the effects of climate change. Emissions would in practice have to be negative for even a target of 1.5 degrees Celsius of warming to be met. This is also addressed by carbon credits, as they offer us a chance of removing carbon from the atmosphere. In the current process, companies looking to take part in the offsetting market will at some point run into the aforementioned behemoths and therefore an opaque form of purchasing carbon offsets.

The blockchain approach

A blockchain is a secure and decentralised database or ledger which is shared among the nodes of a computer network. This technology can therefore offer a valid contribution in addressing the opacity and centralisation of the traditional procedure. The intention of the first blockchain approaches was the distribution of digital information in a shared ledger that is agreed on jointly and updated in a transparent manner. The information is recorded in blocks and added to the chain irreversibly, thus preventing the alteration, deletion and irregular inclusion of data.

In recent years, the tokenization of (physical) assets, i.e. the creation of a digital version that is stored on the blockchain, has gained more interest. By utilizing blockchain technology, asset ownership can be tokenized, which enables fractional ownership, reduces intermediaries, and provides a secure and transparent ledger. This not only increases liquidity but also expands access to previously illiquid assets (like carbon offsets). The blockchain ledger allows for real-time settlement of transactions, increasing efficiency and reducing the risk of fraud. Additionally, tokens can be programmed to include certain rules and restrictions, such as limiting the number of tokens that can be issued or specifying how they can be traded, which can provide greater transparency and control over the asset.

Blockchain-based carbon offset process

The tokenisation process for carbon credits begins with the identification of a project that either captures or helps to avoid carbon creation. In this example, the focus is on carbon avoidance through solar panels. The generation of solar electricity is considered an offset, as alternative energy use would emit carbon dioxide, whereas solar power does not.

The solar panels provide information regarding their electricity generation, from which a figure is derived that represents the amount of carbon avoided and is fed into a smart contract. A smart contract is a self-executing application that exists on the blockchain and performs actions based on its underlying code. In the blockchain-based carbon offset process, smart contracts convert the different tokens and send them to the owner’s wallet. The tokens used within the process are compliant with the ERC-721 Non-Fungible Token (NFT) standard, which represents a unique token that is distinguishable from others and cannot be exchanged for other units of the same asset. A practical example is a work of art that, even if replicated, is always slightly different.

In the first stage of the process, the owner claims a carbon receipt, based on the amount of carbon avoided by the solar panel. To do so, the aggregated amount of carbon avoided (also stored in a database purely for replication purposes) is sent to the smart contract, which issues a carbon receipt of the corresponding figure to the owner. Carbon receipts can then be exchanged for a uniform amount of carbon credits (e.g. 5 kg, 10 kg, 15 kg) by interacting with the second smart contract. Carbon credits are designed to be traded on the decentralised marketplace, where the price is determined by the supply and demand of its participants. Ultimately, carbon credits can be exchanged for carbon certificates indicating the certificate owner and the amount of carbon offset. Comparable to a university diploma, carbon certificates are tied to the address of the owner that initiated the exchange and are therefore non-tradable. Figure 1 illustrates the process of the described blockchain-based carbon offset solution:

Figure 1: Process flow of a blockchain-based carbon offset solution
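
To make the flow above more concrete, the following Python sketch models the conversion rules off-chain. It is purely illustrative: the names (Wallet, claim_receipt, exchange_for_credits, redeem_certificate) and the 5 kg credit denomination are assumptions for this example and are not the actual smart contracts of the proof of concept.

```python
from dataclasses import dataclass, field

CREDIT_UNIT_KG = 5  # assumed uniform credit denomination (kg CO2 per credit)

@dataclass
class Wallet:
    address: str
    receipts_kg: float = 0.0                           # carbon receipts in kg CO2 avoided
    credits: int = 0                                    # uniform carbon credits
    certificates: list = field(default_factory=list)   # non-tradable certificates

def claim_receipt(wallet: Wallet, kg_avoided: float) -> None:
    """First contract: issue a carbon receipt for the metered carbon avoidance."""
    wallet.receipts_kg += kg_avoided

def exchange_for_credits(wallet: Wallet) -> int:
    """Second contract: convert receipts into uniform, tradable carbon credits."""
    new_credits = int(wallet.receipts_kg // CREDIT_UNIT_KG)
    wallet.receipts_kg -= new_credits * CREDIT_UNIT_KG
    wallet.credits += new_credits
    return new_credits

def redeem_certificate(wallet: Wallet, credits: int) -> dict:
    """Final step: burn credits and mint a certificate tied to the owner's address."""
    if credits > wallet.credits:
        raise ValueError("insufficient credits")
    wallet.credits -= credits
    certificate = {"owner": wallet.address, "kg_offset": credits * CREDIT_UNIT_KG}
    wallet.certificates.append(certificate)
    return certificate

# Example: 12 kg avoided -> receipt -> 2 credits of 5 kg -> certificate for 10 kg
owner = Wallet(address="0xABC...")
claim_receipt(owner, 12.0)
exchange_for_credits(owner)
print(redeem_certificate(owner, owner.credits))
```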

Conclusion

The outlined blockchain-based carbon offset process was developed by Zanders’ blockchain team as a proof of concept. It was designed as an approach to reduce dependence on central players and as a transparent method of issuing carbon credits. The smart contracts that the platform interacts with are implemented on the Mumbai test network of the public Polygon blockchain, which allows for fast transaction processing and minimal fees. The PoC is up and running, tokenizing the carbon savings generated by one of our colleagues’ photovoltaic systems, and can be showcased in a demo. However, there are some clear optimisations to the process that should be considered for a larger-scale (commercial) setup.

If you're interested in exploring the concept and benefits of a blockchain-based carbon offset process involving decentralised issuance and exchange of digital assets, or if you would like to see a demo, you can contact Robert Richter or Justus Schleicher.

FRTB: Profit and Loss Attribution (PLA) Analytics

June 2023
3 min read

Explore how banks can identify the risk factors behind Profit and Loss Attribution (PLA) failures under FRTB.


Under FRTB regulation, PLA requires banks to assess the similarity between Front Office (FO) and Risk P&L (HPL and RTPL) on a quarterly basis. Desks which do not pass PLA incur capital surcharges or may, in more severe cases, be required to use the more conservative FRTB standardised approach (SA).​

What is the purpose of PLA?​

PLA ensures that the FO and Risk P&Ls are sufficiently aligned with one another at the desk level.​ The FO HPL is compared with the Risk RTPL using two statistical tests.​ The tests measure the materiality of any simplifications in a bank’s Risk model compared with the FO systems.​ In order to use the Internal Models Approach (IMA), FRTB requires each trading desk to pass the PLA statistical tests.​ Although the implementation of PLA begins on the date that the IMA capital requirement becomes effective, banks must provide a one-year PLA test report to confirm the quality of the model.

Which statistical measures are used?​

PLA is performed using the Spearman correlation and the Kolmogorov-Smirnov (KS) test on the most recent 250 days of historical RTPL and HPL.​ Depending on the results, each desk is assigned a traffic light test (TLT) zone (see below), where amber desks are those allocated to neither the red nor the green zone.​
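
As a rough illustration, the sketch below computes both test statistics over a 250-day window and assigns a zone. The thresholds used (green: correlation above 0.80 and KS below 0.09; red: correlation below 0.70 or KS above 0.12) reflect our reading of the Basel FRTB text and should be verified against the applicable rules before use.

```python
import numpy as np
from scipy.stats import spearmanr, ks_2samp

def pla_traffic_light(hpl: np.ndarray, rtpl: np.ndarray) -> tuple[float, float, str]:
    """Assign a PLA traffic-light zone from 250 days of HPL and RTPL."""
    rho, _ = spearmanr(hpl, rtpl)   # rank correlation between the two P&L series
    ks, _ = ks_2samp(hpl, rtpl)     # distance between the two empirical distributions
    if rho > 0.80 and ks < 0.09:
        zone = "green"
    elif rho < 0.70 or ks > 0.12:
        zone = "red"
    else:
        zone = "amber"
    return rho, ks, zone

# Example with simulated desk-level P&Ls: RTPL as a noisy version of HPL
rng = np.random.default_rng(0)
hpl = rng.normal(0.0, 1.0, 250)
rtpl = hpl + rng.normal(0.0, 0.2, 250)   # small model simplification error
print(pla_traffic_light(hpl, rtpl))
```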

What are the consequences of failing PLA?

Capital increase: Desks in the red zone are not permitted to use the IMA and must instead use the more conservative SA, which has higher capital requirements. ​Amber desks can use the IMA but must pay a capital surcharge until the issues are remediated.

Difficulty with returning to IMA: Desks which are in the amber or red zone must satisfy statistical green zone requirements and 12-month backtesting requirements before they can be eligible to use the IMA again.​

What are some of the key reasons for PLA failure?

Data issues: Data proxies are often used within Risk if there is a lack of data available for FO risk factors. Poor or outdated proxies can decrease the accuracy of RTPL produced by the Risk model.​ The source, timing and granularity also often differ between FO and Risk data.

Missing risk factors: Missing risk factors in the Risk model are a common cause of PLA failures. Inaccurate RTPL values caused by missing risk factors can cause discrepancies between FO and Risk P&Ls and lead to PLA failures.

Roadblocks to finding the sources of PLA failures

FO and Risk mapping: Many banks face difficulties due to a lack of accurate mapping between risk factors in FO and those in Risk. ​For example, multiple risk factors in the FO systems may map to a single risk factor in the Risk model. More simply, different naming conventions can also cause issues.​ The poor mapping can make it difficult to develop an efficient and rapid process to identify the sources of P&L differences.

Lack of existing processes: PLA is a new requirement which means there is a lack of existing infrastructure to identify causes of P&L failures. ​Although they may be monitored at the desk level, P&L differences are not commonly monitored at the risk factor level on an ongoing basis.​ A lack of ongoing monitoring of risk factors makes it difficult to pre-empt issues which may cause PLA failures and increase capital requirements.

Our approach: Identifying risk factors that are causing PLA failures

Zanders’ approach overcomes the above issues by producing analytics despite any underlying mapping issues between FO and Risk P&L data. ​Using our algorithm, risk factors are ranked depending upon how statistically likely they are to be causing differences between HPL and RTPL.​ Our metric, known as risk factor ‘alpha’, can be tracked on an ongoing basis, helping banks to remediate underlying issues with risk factors before potential PLA failures.

Zanders’ P&L attribution solution has been implemented at a Tier-1 bank, providing the necessary infrastructure to identify problematic risk factors and improve PLA desk statuses. The solution provided multiple benefits to increase efficiency and transparency of workstreams at the bank.

Conclusion

As it is a new regulatory requirement, passing the PLA test has been a key concern for many banks. Although the test itself is not particularly difficult to implement, identifying why a desk may be failing can be complicated. In this article, we present a PLA tool which has already been successfully implemented at one of our large clients. By helping banks to identify the underlying risk factors which are causing desks to fail, remediation becomes much more efficient. Efficient remediation of desks which are failing PLA, in turn, reduces the amount of capital charges which banks may incur.

VaR Backtesting in Turbulent Market Conditions​: Enhancing the Historical Simulation Model with Volatility Scaling​

March 2023
3 min read

Explore how volatility scaling can enhance historical simulation VaR models and their backtesting performance in turbulent market conditions.


Challenges with VaR models in a turbulent market

With recent periods of market stress, including COVID-19 and the Russia-Ukraine conflict, banks are finding their VaR models under strain. A failure to adhere to VaR backtesting requirements can lead to pressure on balance sheets through higher capital requirements and interventions from the regulator.

VaR backtesting

VaR is integral to the capital requirements calculation and in ensuring a sufficient capital buffer to cover losses from adverse market conditions.​ The accuracy of VaR models is therefore tested stringently with VaR backtesting, comparing the model VaR to the observed hypothetical P&Ls. ​A VaR model with poor backtesting performance is penalised with the application of a capital multiplier, ensuring a conservative capital charge.​ The capital multiplier increases with the number of exceptions during the preceding 250 business days, as described in Table 1 below.​

Table 1: Capital multipliers based on the number of backtesting exceptions.

The capital multiplier is applied to both the VaR and stressed VaR, as shown in equation 1 below, which can result in a significant impact on the market risk capital requirement when failures in VaR backtesting occur.​
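
For reference, the standard Basel formulation of the IMA market risk charge, which we take to be the Equation 1 referred to above, is:

$$ c_t = \max\left(\mathrm{VaR}_{t-1},\ m_c \cdot \overline{\mathrm{VaR}}_{60}\right) + \max\left(\mathrm{sVaR}_{t-1},\ m_s \cdot \overline{\mathrm{sVaR}}_{60}\right) + \mathit{Addons} $$

where $m_c$ and $m_s$ are the capital multipliers (including the backtesting add-on from Table 1), and $\overline{\mathrm{VaR}}_{60}$ and $\overline{\mathrm{sVaR}}_{60}$ denote the averages of the daily VaR and stressed VaR over the preceding 60 business days.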

Pro-cyclicality of the backtesting framework​

A known issue of VaR backtesting is pro-cyclicality in market risk. ​This problem was underscored at the beginning of the COVID-19 outbreak, when multiple banks registered several VaR backtesting exceptions. ​This had a double impact on market risk capital requirements, with higher capital multipliers and an increase in VaR from higher market volatility.​ Consequently, regulators intervened to remove additional pressure on banks’ capital positions that would only exacerbate market volatility. The Federal Reserve excluded all backtesting exceptions between 6 and 27 March 2020, while the PRA allowed a proportional reduction in the risks-not-in-VaR (RNIV) capital charge to offset the VaR increase.​ More recent market volatility, however, has not been excluded, putting pressure on banks’ VaR models during backtesting.​

Historical simulation VaR model challenges​

Banks typically use a historical simulation approach (HS VaR) for modelling VaR, due to its computational simplicity, non-normality assumption of returns and enhanced interpretability. ​Despite these advantages, the HS VaR model can be slow to react to changing market conditions and can be limited by the scenario breadth. ​This means that the HS VaR model can fail to adequately cover risk from black swan events or rapid shifts in market regimes.​ These issues were highlighted by recent market events, including COVID-19, the Russia-Ukraine conflict, and the global surge in inflation in 2022.​ Due to this, many banks are looking at enriching their VaR models to better model dramatic changes in the market.

Enriching HS VaR models​

Alternative VaR modelling approaches can be used to enrich HS VaR models, improving their response to changes in market volatility. Volatility scaling is a computationally efficient methodology which can resolve many of the shortcomings of the HS VaR model, reducing backtesting failures.​

Enhancing HS VaR with volatility scaling​

The volatility scaling methodology is an extension of the HS VaR model that addresses the issue of inertia to market moves.​ Volatility scaling adjusts the returns for each time t by the volatility ratio σT/σt, where σt is the return volatility at time t and σT is the return volatility at the VaR calculation date.​ Volatility is calculated using a 30-day window, which reacts more rapidly to market moves than a typical 1Y VaR window, as illustrated in Figure 1.​ As the cost of underestimating VaR is higher than that of overestimating it, a lower bound of 1 is applied to the volatility ratio.​ Volatility scaling is simple to implement and can enrich existing models with minimal additional computational overhead.​
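
The following Python sketch shows one way to implement this scaling on a historical return series, under the assumptions stated in the text (30-day volatility window, 260-day VaR window, 99% 1-day VaR, ratio floored at 1). The function name and inputs are illustrative, not a production implementation.

```python
import numpy as np

def vol_scaled_var(returns: np.ndarray, alpha: float = 0.99,
                   var_window: int = 260, vol_window: int = 30) -> float:
    """Historical-simulation VaR with volatility scaling (rough sketch).

    Each return r_t in the VaR window is rescaled by sigma_T / sigma_t, where
    sigma_t is the 30-day rolling volatility at time t and sigma_T the volatility
    at the calculation date. The ratio is floored at 1 so VaR is never scaled down.
    """
    window = returns[-var_window:]
    # rolling 30-day volatility for each date in the VaR window
    sigmas = np.array([returns[max(0, t - vol_window):t].std(ddof=1)
                       for t in range(len(returns) - var_window + 1, len(returns) + 1)])
    sigma_T = sigmas[-1]
    ratios = np.maximum(sigma_T / sigmas, 1.0)   # lower bound of 1 on the scaling
    scaled = window * ratios
    return -np.quantile(scaled, 1 - alpha)       # 99% 1-day VaR as a positive number

# Example with simulated returns exhibiting a volatility spike at the end
rng = np.random.default_rng(1)
rets = np.concatenate([rng.normal(0, 0.01, 400), rng.normal(0, 0.03, 100)])
print(round(vol_scaled_var(rets), 4))
```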

Figure 1: The 30-day and 1Y rolling volatilities of the 1-day scaled diversified portfolio returns. This illustrates recent market stresses, with short regions of extreme volatility (COVID-19) and longer systemic trends (Russia-Ukraine conflict and inflation). 

Comparison with alternative VaR models​

To benchmark the Volatility Scaling approach, we compare the VaR performance with the HS and the GARCH(1,1) parametric VaR models.​ The GARCH(1,1) model is configured for daily data and parameter calibration to increase sensitivity to market volatility.​ All models use the 99th percentile 1-day VaR scaled by a square root of 10. ​The effective calibration time horizon is one year, approximated by a VaR window of 260 business days.​ A one-week lag is included to account for operational issues that banks may have to load the most up-to-date market data into their risk models.​

VaR benchmarking portfolios​

To benchmark the VaR Models, their performance is evaluated on several portfolios that are sensitive to the equity, rates and credit asset classes. ​These portfolios include sensitivities to: S&P 500 (Equity), US Treasury Bonds (Treasury), USD Investment Grade Corporate Bonds (IG Bonds) and a diversified portfolio of all three asset classes (Diversified).​ This provides a measure of the VaR model performance for both diversified and a range of concentrated portfolios.​ The performance of the VaR models is measured on these portfolios in both periods of stability and periods of extreme market volatility. ​This test period includes COVID-19, the Russia-Ukraine conflict and the recent high inflationary period.​

VaR model benchmarking

The performance of the models is evaluated with VaR backtesting. The results show that volatility scaling provides significantly improved performance over both the HS and GARCH VaR models, providing a faster response to market moves and a lower instance of VaR exceptions.​

Model benchmarking with VaR backtesting​

A key metric for measuring the performance of VaR models is a comparison of the frequency of VaR exceptions with the limits set by the Basel Committee’s Traffic Light Test (TLT). ​Excessive exceptions will incur an increased capital multiplier for an Amber result (5 – 9 exceptions) and an intervention from the regulator in the case of a Red result (ten or more exceptions).​ Exceptions often indicate a slow reaction to market moves or a lack of accuracy in modelling risk.​
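
A minimal sketch of this exception count and zone assignment is shown below, using the TLT boundaries quoted above (amber: 5-9 exceptions, red: 10 or more over 250 business days). The function name and inputs are illustrative.

```python
import numpy as np

def backtest_exceptions(pnl: np.ndarray, var: np.ndarray) -> tuple[int, str]:
    """Count 1-day VaR exceptions and assign a Basel traffic light zone.

    An exception occurs when the realised loss exceeds the reported VaR
    (series aligned by date, VaR expressed as a positive number).
    """
    exceptions = int(np.sum(pnl < -var))
    if exceptions >= 10:
        zone = "red"
    elif exceptions >= 5:
        zone = "amber"
    else:
        zone = "green"
    return exceptions, zone

# Example: 250 days of P&L tested against a flat 99% VaR estimate
rng = np.random.default_rng(2)
pnl = rng.normal(0.0, 1.0, 250)
var = np.full(250, 2.33)   # parametric 99% VaR for a unit-variance portfolio
print(backtest_exceptions(pnl, var))
```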

VaR measure coverage​

The coverage and adaptability of the VaR models can be observed from the comparison of the realised returns and VaR time series shown in Figure 2.​ This shows that although the GARCH model is faster to react to market changes than HS VaR, it underestimates the tail risk in stable markets, resulting in a higher instance of exceptions.​ Volatility scaling retains the conservatism of the HS VaR model whilst improving its reactivity to turbulent market conditions. This results in a significant reduction in exceptions throughout 2022.​

Figure 2: Comparison of realised returns with the model VaR measures for a diversified portfolio.

VaR backtesting results​

The VaR model performance is illustrated by the percentage of backtest days with Red, Amber and Green TLT results in Figure 3.​ Over this period HS VaR shows a reasonable coverage of the hypothetical P&Ls, however there are instances of Red results due to the failure to adapt to changes in market conditions.​ The GARCH model shows a significant reduction in performance, with 32% of test dates falling in the Red zone as a consequence of VaR underestimation in calm markets.​ The adaptability of volatility scaling ensures it can adequately cover the tail risk, increasing the percentage of Green TLT results and completely eliminating Red results.​ In this benchmarking scenario, only volatility scaling would pass regulatory scrutiny, with HS VaR and GARCH being classified as flawed models, requiring remediation plans.

Figure 3: Percentage of days with a Red, Amber and Green Traffic Light Test result for a diversified portfolio over the window 29/01/21 - 31/01/23.

VaR model capital requirements​

Capital requirements are an important determinant in banks’ ability to act as market intermediaries. The volatility scaling method can be used to increase the HS capital deployment efficiency without compromising VaR backtesting results.​

Capital requirements minimisation​

A robust VaR model produces risk measures that ensure an ample capital buffer to absorb portfolio losses. When selecting between robust VaR models, the preferred approach generates a smaller capital charge throughout the market cycle. Figure 4 shows capital requirements for the VaR models for a diversified portfolio calculated using Equation 1, with Add-ons set to zero. Volatility scaling outperforms both models during extreme market volatility (the Russia-Ukraine conflict) and the HS model in periods of stability (2021), as a result of setting the lower scaling constraint. The GARCH model underestimates capital requirements in 2021, which would have forced a bank to move to a standardised approach.

Figure 4: Capital charge for the VaR models measured on a diversified portfolio over the window 29/01/21 - 31/01/23.

Capital management efficiency

Pro-cyclicality of capital requirements is a common concern among regulators and practitioners. More stable requirements can improve banks’ capital management and planning. To measure models’ pro-cyclicality and efficiency, average capital charges and capital volatilities are compared for three concentrated asset class portfolios and a diversified market portfolio, as shown in Table 2. Volatility scaling results are better than the HS model across all portfolios, leading to lower capital charges, volatility and more efficient capital allocation. The GARCH model tends to underestimate high volatility and overestimate low volatility, as seen by the behaviour for the lowest volatility portfolio (Treasury).

Table 2: Average capital requirement and capital volatility for each VaR model across a range of portfolios during the test period, 29/01/21 - 31/01/23.

Conclusions on VaR backtesting

Recent periods of market stress highlighted the need to challenge banks’ existing VaR models. Volatility scaling is an efficient method to enrich existing VaR methodologies, making them robust across a range of portfolios and volatility regimes.

VaR backtesting in a volatile market

Ensuring VaR models conform to VaR backtesting will be challenging with the recent period of stressed market conditions and rapid changes in market volatility. Banks will need to ensure that their VaR models are responsive to volatility clustering and tail events or enhance their existing methodology to cope. Failure to do so will result in additional overheads, with increased capital charges and excessive exceptions that can lead to additional regulatory scrutiny.

Enriching VaR Models with volatility scaling

Volatility scaling provides a simple extension of HS VaR that is robust and responsive to changes in market volatility. The model shows improved backtesting performance over both the HS and parametric (GARCH) VaR models. It is also robust for highly concentrated equity, treasury and bond portfolios, as seen in Table 3. Volatility scaling dampens pro-cyclicality of HS capital requirements, ensuring more efficient capital planning. The additional computational overhead is minimal and the implementation to enrich existing models is simple. Performance can be further improved with the use of hybrid models which incorporate volatility scaling approaches. These can utilise outlier detection to increase conservatism dynamically with increasingly volatile market conditions.

Table 3: Percentage of Green, Amber and Red Traffic Light Test results for each VaR model across a range of portfolios for dates in the range: 13/02/19 - 31/01/23.

Zanders recommends

Banks should invest in making their VaR models more robust and reactive to ensure capital costs and the probability of exceptions are minimised. VaR models enriched with a volatility scaling approach should be considered among a suite of models to challenge existing VaR model methodologies. Methods similar to volatility scaling can also be applied to parametric and semi-parametric models. Outlier detection models can be used to identify changes in market regime, as either feeder models or early warning signals for risk managers.

ECL calculation methodology

January 2023
5 min read

Credit Risk Suite – Expected Credit Losses Methodology article


INTRODUCTION

The IFRS 9 accounting standard has been effective since 2018 and affects both financial institutions and corporates. Although the IFRS 9 standards are principle-based and simple, the design and implementation can be challenging. Specifically, the difficulties that the incorporation of forward-looking information in the loss estimate introduces should not be underestimated. Drawing on hands-on experience and our consultants’ more than two decades of credit risk expertise, Zanders developed the Credit Risk Suite (CRS), a calculation engine that determines transaction-level IFRS 9 compliant provisions for credit losses. The CRS was designed specifically to overcome the difficulties that our clients face in their IFRS 9 provisioning. In this article, we will elaborate on the methodology of the ECL calculations that take place in the CRS.

An industry best-practice approach for ECL calculations requires four main ingredients:

  • Probability of Default (PD): The probability that a counterparty will default at a certain point in time. This can be a one-year PD, i.e. the probability of defaulting between now and one year, or a lifetime PD, i.e. the probability of defaulting before the maturity of the contract. A lifetime PD can be split into marginal PDs which represent the probability of default in a certain period.
  • Exposure at Default (EAD): The exposure remaining until maturity of the contract, based on the current exposure, contractual and expected redemptions, and future drawings on remaining commitments.
  • Loss Given Default (LGD): The percentage of EAD that is expected to be lost in case of default. The LGD differs with the level of collateral, guarantees and subordination associated with the financial instrument.
  • Discount Factor (DF): The expected loss per period is discounted to present value terms using discount factors. Discount factors according to IFRS 9 are based on the effective interest rate.

The overall ECL calculation is performed as follows and illustrated by the diagram below:
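
As a minimal sketch of how the four ingredients combine, the function below sums the discounted period losses (marginal PD × EAD × LGD × discount factor). The function and the example inputs are illustrative only and do not represent the CRS implementation.

```python
import numpy as np

def expected_credit_loss(marginal_pd: np.ndarray, ead: np.ndarray,
                         lgd: np.ndarray, eir: float) -> float:
    """Combine the four ECL ingredients per projected monthly period.

    marginal_pd : probability of default in each period (not cumulative)
    ead         : exposure at default projected for each period
    lgd         : loss given default (fraction of EAD) for each period
    eir         : effective interest rate used to build the discount factors
    """
    periods = np.arange(1, len(marginal_pd) + 1)        # period index in months
    discount = 1.0 / (1.0 + eir) ** (periods / 12.0)    # IFRS 9 discounting at the EIR
    return float(np.sum(marginal_pd * ead * lgd * discount))

# Example: 12 monthly periods, flat 0.1% monthly PD, amortising exposure, 45% LGD
pd_m = np.full(12, 0.001)
ead = np.linspace(1_000_000, 900_000, 12)
lgd = np.full(12, 0.45)
print(round(expected_credit_loss(pd_m, ead, lgd, eir=0.05), 2))
```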

MODEL COMPONENTS

The CRS consists of multiple components and underlying models that are able to calculate each of these ingredients separately. The separate components are then combined into ECL provisions which can be utilized for IFRS 9 accounting purposes. Besides this, the CRS contains a customizable module for scenario-based Forward-Looking Information (FLI). Moreover, the solution allocates assets to one of the three IFRS 9 stages. In the component approach, projections of PDs, EADs and LGDs are constructed separately. This component-based setup of the CRS allows for a customizable and easy-to-implement approach. The methodology that is applied for each of the components is described below.

PROBABILITY OF DEFAULT

For each projected month, the PD is derived from the PD term structure that is relevant for the portfolio as well as the economic scenario. This is done using the PD module. The purpose of this module is to determine forward-looking Point-in-Time (PIT) PDs for all counterparties. This is done by transforming Through-the-Cycle (TTC) rating migration matrices into PIT rating migration matrices. The TTC rating migration matrices represent the long-term average annual transition PDs, while the PIT rating migration matrices are annual transition PDs adjusted to the current (expected) state of the economy. The PIT PDs are determined in the following steps:

  1. Determine TTC rating transition matrices: To be able to calculate PDs for all possible maturities, an approach based on rating transition matrices is applied. A transition matrix specifies the probability of going from a specified rating to another rating in one year’s time. The TTC rating transition matrices can be constructed using e.g., historical default data provided by the client or external rating agencies.
  2. Apply forward-looking methodology: IFRS 9 requires the state of the economy to be reflected in the ECL. In the CRS, the state of the economy is incorporated in the PD by applying a forward-looking methodology. The forward-looking methodology in the CRS is based on a ‘Z-factor approach’, where the Z-factor represents the state of the macroeconomic environment. Essentially, a relationship is determined between historical default rates and specific macroeconomic variables. The approach consists of the following sub-steps:
    1. Derive historical Z-factors from (global or local) historical default rates.
    2. Regress historical Z-factors on (global or local) macro-economic variables.
    3. Obtain Z-factor forecasts using macro-economic projections.
  3. Convert rating transition matrices from TTC to PIT: In this step, the forward-looking information is used to convert TTC rating transition matrices to point-in-time (PIT) rating transition matrices. The PIT transition matrices can be used to determine rating transitions in various states of the economy.
  4. Determine PD term structure: In the final step of the process, the rating transition matrices are iteratively applied to obtain a PD term structure in a specific scenario. The PD term structure defines the PD for various points in time.

The result of this is a forward-looking PIT PD term structure for all transactions which can be used in the ECL calculations.
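
A common way to implement the TTC-to-PIT conversion is a single-factor (Vasicek-style) shift of the transition thresholds; the sketch below uses that approach purely as an illustration and may differ from the CRS Z-factor methodology. All names and parameter values (e.g. rho = 0.10, the 3-state matrix) are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm

def ttc_to_pit(ttc: np.ndarray, z: float, rho: float = 0.10) -> np.ndarray:
    """Shift a TTC transition matrix to PIT via a single-factor adjustment.

    Cumulative transition probabilities (from the worst state upwards) are mapped
    to normal thresholds, shifted by the systematic factor z, and mapped back.
    z > 0 represents a benign economy (fewer defaults), z < 0 a downturn.
    """
    cum = np.clip(np.cumsum(ttc[:, ::-1], axis=1), 1e-12, 1 - 1e-12)
    thresholds = norm.ppf(cum)
    shifted = norm.cdf((thresholds - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
    return np.diff(shifted, axis=1, prepend=0.0)[:, ::-1]

def pd_term_structure(pit: np.ndarray, rating: int, horizon_years: int) -> np.ndarray:
    """Cumulative PDs per year from iteratively applying the PIT matrix.

    The last rating class is assumed to be the absorbing default state. For
    simplicity the same PIT matrix is reused each year; in practice a separate
    Z-factor forecast per projection year would be used.
    """
    state = np.zeros(pit.shape[0]); state[rating] = 1.0
    pds = []
    for _ in range(horizon_years):
        state = state @ pit
        pds.append(state[-1])   # probability mass in the default column
    return np.array(pds)

# Example: 3 ratings (good, weak, default) under a mild downturn (z = -1)
ttc = np.array([[0.90, 0.08, 0.02],
                [0.10, 0.75, 0.15],
                [0.00, 0.00, 1.00]])
pit = ttc_to_pit(ttc, z=-1.0)
print(pd_term_structure(pit, rating=0, horizon_years=5).round(4))
```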

EXPOSURE AT DEFAULT

For any given transaction, the EAD consists of the outstanding principal of the transaction plus accrued interest as of the calculation date. For each projected month, the EAD is determined using cash flow data if available. If not available, data from a portfolio snapshot from the reporting date is used to determine the EAD.

LOSS GIVEN DEFAULT

For each projected month, the LGD is determined using the LGD module. This module estimates the LGD for individual credit facilities based on the characteristics of the facility and availability and quality of pledged collateral. The process for determining the LGD consists of the following steps:

  1. Seniority of transaction: A minimum recovery rate is determined based on the seniority of the transaction.
  2. Collateral coverage: For the part of the loan that is not covered by the minimum recovery rate, the collateral coverage of the facility is determined in order to estimate the total recovery rate.
  3. Mapping to LGD class: The total recovery rate is mapped to an LGD class using an LGD scale.

SCENARIO-WEIGHTED AVERAGE EXPECTED CREDIT LOSS

Once all expected losses have been calculated for all scenarios, the scenario-weighted average one-year and lifetime losses are calculated for each transaction $i$ as:

$$ \mathrm{ECL}_i = \sum_{s} w_s \cdot \mathrm{ECL}_{i,s} $$

For each scenario $s$, the weights $w_s$ are predetermined. For each transaction $i$, the scenario losses are weighted according to the formula above, where $\mathrm{ECL}_{i,s}$ is either the lifetime or the one-year expected scenario loss. An example of applied scenarios and corresponding weights is as follows:

  • Optimistic scenario: 25%
  • Neutral scenario: 50%
  • Pessimistic scenario: 25%

This results in a one-year and a lifetime scenario-weighted average ECL estimate for each transaction.

STAGE ALLOCATION

Lastly, using a stage allocation rule, the applicable (i.e., one-year or lifetime) scenario-weighted ECL estimate for each transaction is chosen. The stage allocation logic consists of a customisable quantitative assessment to determine whether an exposure is assigned to Stage 1, 2 or 3. One example could be to use a relative and absolute PD threshold:

  • Relative PD threshold: +300% increase in PD (with an absolute minimum of 25 bps)
  • Absolute PD threshold: +3%-point increase in PD

The PD thresholds are applied to one-year best estimate PIT PDs.

If either criterion is met, Stage 2 is assigned. Otherwise, the transaction is assigned Stage 1.

The provision per transaction is determined using the stage of the transaction. If the transaction stage is Stage 1, the provision is equal to the one-year expected loss. For Stage 2, the provision is equal to the lifetime expected loss. Stage 3 provision calculation methods are often transaction-specific and based on expert judgement.
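
The sketch below illustrates this staging and provision selection using the example thresholds quoted above (a 300% relative PD increase with a 25 bps floor, or a 3%-point absolute increase). It assumes the increase is measured against the PD at initial recognition; the function names and inputs are illustrative, not the CRS logic.

```python
def assign_stage(pd_origination: float, pd_current: float, in_default: bool = False) -> int:
    """Stage allocation with the example thresholds from the text (illustrative)."""
    if in_default:
        return 3
    increase = pd_current - pd_origination
    relative_breach = pd_current > 4 * pd_origination and increase > 0.0025  # +300%, min 25 bps
    absolute_breach = increase > 0.03                                        # +3%-point
    return 2 if (relative_breach or absolute_breach) else 1

def provision(stage: int, ecl_one_year: float, ecl_lifetime: float) -> float:
    """Pick the applicable scenario-weighted ECL based on the allocated stage."""
    return ecl_one_year if stage == 1 else ecl_lifetime

# Example: PD moved from 0.5% to 2.5%, i.e. a +400% increase -> Stage 2, lifetime ECL
stage = assign_stage(pd_origination=0.005, pd_current=0.025)
print(stage, provision(stage, ecl_one_year=1_200.0, ecl_lifetime=4_800.0))
```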

Rating model calibration methodology

January 2023
5 min read

At Zanders, we have developed several Credit Rating models. These models are already being used at over 400 companies and have been tested both in practice and against empirical data. Do you want to know more about our Credit Rating models? Keep reading.


During the development of these models, an important step is the calibration of the parameters to ensure a good model performance. To maintain these models, a regular re-calibration is performed. For our Credit Rating models, we strive to rely on a quantitative calibration approach that is combined and strengthened with expert opinion. This article explains the calibration process for one of our Credit Risk models, the Corporate Rating Model.

In short, the Corporate Rating Model assigns a credit rating to a company based on its performance on quantitative and qualitative variables. The quantitative part consists of 5 financial pillars: Operations, Liquidity, Capital Structure, Debt Service and Size. The qualitative part consists of 2 pillars: the Business Analysis pillar and the Behavioural Analysis pillar. See A comprehensive guide to Credit Rating Modelling for more details on the methodology behind this model.

The model calibration process for the Corporate Rating Model can be summarized as follows:

Figure 1: Overview of the model calibration process

In steps (2) through (7), input from the Zanders expert group is taken into consideration. This especially holds for input parameters that cannot be directly derived by a quantitative analysis. For these parameters, first an expert-based baseline value is determined and second a model performance optimization is performed to set the final model parameters.

In most steps, the model performance is assessed by looking at the AUC (area under the ROC curve). The AUC is one of the most popular metrics to quantify the model fit (note this is not necessarily the same as the model quality, just as correlation does not equal causation). Simply put, the ROC curve plots the fraction of correctly identified defaults against the fraction of non-defaults incorrectly flagged, across all possible score cut-offs; the area under this curve indicates the discriminatory power of the model.
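
As a small illustration of the metric, the snippet below computes the AUC for ten hypothetical companies with model risk scores and observed default flags (it assumes scikit-learn is available; the data are made up for the example).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Model scores (higher = riskier) for ten companies, and their observed default flags
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2, 0.1])
defaulted = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 0])

# AUC = probability that a randomly chosen defaulter is scored riskier than a
# randomly chosen non-defaulter (0.5 = no discrimination, 1.0 = perfect ranking)
print(roc_auc_score(defaulted, scores))   # ~0.95 for this example
```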

DATA

The first step covers the selection of data from an extensive database containing the financial information and default history of millions of companies. Not all data points can be used in the calibration and/or during the performance testing of the model, therefore data filters are applied. Furthermore, the data set is categorized in 3 different size classes and 18 different industry sectors, each of which will be calibrated independently, using the same methodology.

This results in the master dataset. In addition, data statistics are created that show the data availability, data relations and data quality. The master dataset also contains fields derived from the financials in the database; these fields are based on a long list of quantitative risk drivers (financial ratios). The long list of risk drivers is created based on expert opinion. As a last step, the master dataset is split into a calibration dataset (2/3 of the master dataset) and a test dataset (1/3 of the master dataset).

RISK DRIVER SELECTION

The risk driver selection for the qualitative variables is different from the risk driver selection for the quantitative variables. The final list of quantitative risk drivers is selected by means of different statistical analyses calculated for the long list of quantitative risk drivers. For the qualitative variables, a set of variables is selected based on expert opinion and industry practices.

SCORING APPROACH

Scoring functions are calibrated for the quantitative part of the model. These scoring functions translate the value and trend value of each quantitative risk driver per size and industry to a (uniform) score between 0-100. For this exercise, different possible types of scoring functions are used. The best-performing scoring function for the value and trend of each risk driver is determined by performing a regression and comparing the performance. The coefficients in the scoring functions are estimated by fitting the function to the ratio values for companies in the calibration dataset. For the qualitative variables, the translation from a value to a score is based on expert opinion.

WEIGHTING APPROACH

The overall score of the quantitative part of the model is obtained by summing the value and trend scores using weights. As a starting point, expert opinion-based weights are applied, after which the performance of the model is further optimized by iteratively adjusting the weights until an optimal set of weights is reached. The weights of the qualitative variables are based on expert opinion.

MAPPING TO CENTRAL TENDENCY

To estimate the mapping from final scores to a rating class, a standardized methodology is created. The buckets are constructed from a scoring distribution perspective. This is done to ensure an eventual smooth distribution over the rating classes. As an input, the final scores (based on the quantitative risk drivers only) of each company in the calibration dataset are used together with expert opinion input parameters. The estimation is performed per size class. An optimization is performed towards a central tendency by adjusting the expert opinion input parameters. This is done by deriving a target average PD range per size class and on total level, based on default data from the European Banking Authority (EBA).

The qualitative variables are included by performing an external benchmark on a selected set of companies, where proxies are used to derive the score on the qualitative variables.

The final input parameters for the mapping are set such that the average PD per size class from the Corporate Rating Model is in line with the target average PD ranges and a good performance on the external benchmark is achieved.

OVERRIDE FRAMEWORK

The override framework consists of two sections, Level A and Level B. Level A takes country, industry and company-specific risks into account. Level B considers the possibility of guarantor support and other (final) overriding factors. By applying Level A overrides, the Interim Credit Risk Rating (CRR) is obtained. By applying Level B overrides, the Final CRR is obtained. For the calibration only the country risk is taken into account, as this is the only override that is based on data and not a user input. The country risk is set based on OECD country risk classifications.

TESTING AND BENCHMARKING

For the testing and benchmarking, the performance of the model is analysed based on the calibration and test datasets (excluding the qualitative assessment but including the country risk adjustment). For each dataset, the discriminatory power is determined by looking at the AUC. The calibration quality is reviewed by performing a Binomial Test on individual rating classes, to check whether the observed default rate lies within the boundaries of the PD rating class, and a Traffic Lights Approach, to compare the observed default rates with the PD of the rating class.

Concluding, the methodology applied for the (re-)calibration of the Corporate Rating Model is based on an extensive dataset with financial and default information, complemented with expert opinion. The methodology ensures that the final model performs in line with the central tendency and performs well on an external benchmark.

A comprehensive guide to Credit Rating Modelling

January 2023
10 min read

Credit rating agencies and the credit ratings they publish have been the subject of a lot of debate over the years. While they provide valuable insight into the creditworthiness of companies, they have been criticized for assigning high ratings to packaged sub-prime mortgages, for not being representative when a sudden crisis hits, and for the effect they have on creating ‘self-fulfilling prophecies’ in times of economic downturn.


For all the criticism that rating models and credit rating agencies have received through the years, they are still the most pragmatic and realistic approach for assessing default risk for your counterparties. Of course, the quality of the assessment depends to a large extent on the quality of the model used to determine the credit rating, capturing both the quantitative and qualitative factors determining counterparty credit risk. A sound credit rating model strikes a balance between these two aspects. Relying too much on quantitative outcomes ignores valuable ‘unstructured’ information, whereas an expert-judgement-based approach ignores the value of empirical data and its explanatory power.

In this white paper we will outline some best practice approaches to assessing default risk of a company through a credit rating. We will explore the ratios that are crucial factors in the model and provide guidance for the expert judgement aspects of the model.

Zanders has applied these best practices while designing several Credit Rating models over many years. These models are already being used at over 400 companies and have been tested both in practice and against empirical data. Do you want to know more about our Credit Rating models? Click here.

Credit ratings and their applications

Credit ratings are widely used throughout the financial industry, for a variety of applications. This includes the corporate finance, risk and treasury domains and beyond. While it is hardly ever the sole factor driving management decisions, the availability of a point estimate to describe something as complex as counterparty credit risk has proven a very useful piece of information for making informed decisions, without the need for a full due diligence into the books of the counterparty.

Some of the specific use cases are:

  • Internal assessment of the creditworthiness of counterparties
  • Transparency of the creditworthiness of counterparties
  • Monitoring trends in the quality of credit portfolios
  • Monitoring concentration risk
  • Performance measurement
  • Determination of risk-adjusted credit approval levels and frequency of credit reviews
  • Formulation of credit policies, risk appetite, collateral policies, etc.
  • Loan pricing based on Risk Adjusted Return on Capital (RAROC) and Economic Profit (EP)
  • Arm’s length pricing of intercompany transactions, in line with OECD guidelines
  • Regulatory Capital (RC) and Economic Capital (EC) calculations
  • Expected Credit Loss (ECL) IFRS 9 calculations
  • Active Credit Portfolio Management on both portfolio and (individual) counterparty level

Credit rating philosophy

A fundamental starting point when applying credit ratings, is the credit rating philosophy that is followed. In general, two distinct approaches are recognized:

  • Through-the-Cycle (TtC) rating systems measure default risk of a counterparty by taking permanent factors, like a full economic cycle, into account based on a worst-case scenario. TtC ratings change only if there is a fundamental change in the counterparty’s situation and outlook. The models employed for the public ratings published by e.g. S&P, Fitch and Moody’s are generally more TtC focused. They tend to assign more weight to qualitative features and incorporate longer trends in the financial ratios, both of which increase stability over time.
  • Point-in-Time (PiT) rating systems measure default risk of a counterparty taking current, temporary factors into account. PiT ratings tend to adjust quickly to changes in the (financial) conditions of a counterparty and/or its economic environment. PiT models are more suited for shorter term risk assessments, like Expected Credit Losses. They are more focused on financial ratios, thereby capturing the more dynamic variables. Furthermore, they incorporate a shorter trend which adjusts faster over time. Most models incorporate a blend between the two approaches, acknowledging that both short term and long term effects may impact creditworthiness.

Rating methodology

Modelling credit ratings is very complex, due to the wide variety of events and exposures that companies are exposed to. Operational risk, liquidity risk, poor management, a perishing business model, an external negative event, failing governments and technological innovation can all have a very significant influence on the creditworthiness of companies in the short and long run. Most credit rating models therefore distinguish a range of different factors that are modelled separately and then combined into a single credit rating. The exact factors will differ per rating model. The overview below presents the factors included in the Corporate Rating Model, which is used in some of Zanders’ cloud-based solutions.

The remainder of this article will detail the different factors, explaining the rationale behind including them.

Quantitative factors

Quantitative risk factors are crucial to credit rating models, as they are ‘objective’ and therefore generate a large degree of comparability between different companies. Their objective nature also makes them easier to incorporate in a model on a large scale. While financials alone do not tell the whole story about a company, accounting standards have developed over time to provide an increasingly comparable view of the financial state of a company, making them an increasingly trustworthy source for determining creditworthiness. To better enable comparisons of companies of different sizes, financials are often represented as ratios.

Financial Ratios

Financial ratios are being used for credit risk analyses throughout the financial industry and present the basic characteristics of companies. A number of these ratios represent (directly or indirectly) creditworthiness. Zanders’ Corporate Credit Rating model uses the most common of these financial ratios, which can be categorised in five pillars:

Pillar 1 - Operations

The Operations pillar consists of variables that consider the profitability and ability of a company to influence its profitability. Earnings power is a main determinant of the success or failure of a company. It measures the ability of a company to create economic value and the ability to give risk protection to its creditors. Recurrent profitability is a main line of defense against debtor-, market-, operational- and business risk losses. 

Turnover Growth

Turnover growth is defined as the annual percentage change in Turnover, expressed as a percentage. It indicates the growth rate of a company. Both very low and very high values tend to indicate low credit quality. For low turnover growth this is clear. High turnover growth can be an indication for a risky business strategy or a start-up company with a business model that has not been tested over time.

Gross Margin

Gross margin is defined as Gross profit divided by Turnover, expressed as a percentage. The gross margin indicates the profitability of a company. It measures how much a company earns, taking into consideration the costs that it incurs for producing its products and/or services. A higher Gross margin implies a lower default probability.

Operating Margin

Operating margin is defined as Earnings before Interest and Taxes (EBIT) divided by Turnover, expressed as a percentage. This ratio indicates the profitability of the company. Operating margin is a measurement of what proportion of a company's revenue is left over after paying for variable costs of production such as wages, raw materials, etc. A healthy Operating margin is required for a company to be able to pay for its fixed costs, such as interest on debt. A higher Operating margin implies a lower default probability.

Return on Sales

Return on sales is defined as P/L for the period (Net income) divided by Turnover, expressed as a percentage. Return on sales indicates how much profit, net of all expenses, is being produced per pound of sales. Return on sales is also known as net profit margin. A higher Return on sales implies a lower default probability.

Return on Capital Employed

Return on capital employed (ROCE) is defined as Earnings before Interest and Taxes (EBIT) divided by Total assets minus Current liabilities, expressed as a percentage. This ratio indicates how successful management has been in generating profits (before Financing costs) with all of the cash resources provided to them which carry a cost, i.e. equity plus debt. It is a basic measure of the overall performance, combining margins and efficiency in asset utilization. A higher ROCE implies a lower default probability.
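
The snippet below works through the Operations-pillar ratios defined above on a set of made-up financials. The function and the figures are illustrative only; they are not part of the Corporate Rating Model.

```python
def operations_ratios(turnover: float, turnover_prev: float, gross_profit: float,
                      ebit: float, net_income: float, total_assets: float,
                      current_liabilities: float) -> dict:
    """Compute the Operations-pillar ratios defined above (values in percent)."""
    return {
        "turnover_growth": (turnover / turnover_prev - 1) * 100,
        "gross_margin": gross_profit / turnover * 100,
        "operating_margin": ebit / turnover * 100,
        "return_on_sales": net_income / turnover * 100,
        "roce": ebit / (total_assets - current_liabilities) * 100,
    }

# Illustrative financials (in EUR millions)
print(operations_ratios(turnover=120, turnover_prev=100, gross_profit=48,
                        ebit=18, net_income=10, total_assets=150,
                        current_liabilities=40))
```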

Pillar 2 - Liquidity

The Liquidity pillar assesses the ability of a company to become liquid in the short-term. Illiquidity is almost always a direct cause of a failure, while a strong liquidity helps a company to remain sufficiently funded in times of distress. The liquidity pillar consists of variables that consider the ability of a company to convert an asset into cash quickly and without any price discount to meet its obligations. 

Current Ratio

Current ratio is defined as Current assets, including Cash and Cash equivalents, divided by Current liabilities, expressed as a number. This ratio is a rough indication of a firm's ability to service its current obligations. Generally, the higher the Current ratio, the greater the cushion between current obligations and a firm's ability to pay them. A stronger ratio reflects a numerical superiority of Current assets over Current liabilities. However, the composition and quality of Current assets are a critical factor in the analysis of an individual firm's liquidity, which is why the current ratio assessment should be considered in conjunction with the overall liquidity assessment. A higher Current ratio implies a lower default probability. 

Quick Ratio

The Quick ratio (also known as the Acid test ratio) is defined as Current assets, including Cash and Cash equivalents, minus Stock divided by Current liabilities, expressed as a number. The ratio indicates the degree to which a company's Current liabilities are covered by the most liquid Current assets. It is a refinement of the Current ratio and is a more conservative measure of liquidity. Generally, any value of less than 1 to 1 implies a reciprocal dependency on inventory or other current assets to liquidate short-term debt. A higher Quick ratio implies a lower default probability.

Stock Days 

Stock days is defined as the average Stock during the year times the number of days in a year divided by the Cost of goods sold, expressed as a number. This ratio indicates the average length of time that units are in stock. A low ratio is a sign of good liquidity or superior merchandising. A high ratio can be a sign of poor liquidity, possible overstocking, obsolescence, or, in contrast to these negative interpretations, a planned stock build-up in the case of material shortages. A higher Stock days ratio implies a higher default probability.

Debtor Days

Debtor days is defined as the average Debtors during the year times the number of days in a year divided by Turnover. Debtor days indicates the average number of days that trade debtors are outstanding. Generally, the greater the number of days outstanding, the greater the probability of delinquencies in trade debtors and the more cash resources are absorbed. If a company's debtors appear to be turning more slowly than the industry, further research is needed and the quality of the debtors should be examined closely. A higher Debtor days ratio implies a higher default probability.

Creditor Days

Creditor days is defined as the average Creditors during the year as a fraction of the Cost of goods sold times the number of days in a year. It indicates the average length of time the company's trade debt is outstanding. If a company's Creditor days appear to be turning more slowly than the industry, then the company may be experiencing cash shortages, disputing invoices with suppliers, enjoying extended terms, or deliberately expanding its trade credit. The ratio comparison of company to industry suggests the existence of these or other causes. A higher Creditor days ratio implies a higher default probability.
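In the same spirit, the liquidity ratios above can be sketched as follows; the average positions would in practice be derived from opening and closing balance sheets, and all names and figures are illustrative only.

```python
# Illustrative only: hypothetical inputs, not part of any specific rating model.
def liquidity_ratios(current_assets, stock, current_liabilities,
                     avg_stock, avg_debtors, avg_creditors,
                     cost_of_goods_sold, turnover, days_in_year=365):
    return {
        "current_ratio": current_assets / current_liabilities,
        "quick_ratio": (current_assets - stock) / current_liabilities,
        "stock_days": avg_stock * days_in_year / cost_of_goods_sold,
        "debtor_days": avg_debtors * days_in_year / turnover,
        "creditor_days": avg_creditors * days_in_year / cost_of_goods_sold,
    }
```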

Pillar 3 - Capital Structure

The Capital pillar considers how a company is financed. Capital should be sufficient to cover expected and unexpected losses. Strong capital levels provide management with financial flexibility to take advantage of certain acquisition opportunities or allow discontinuation of business lines with associated write offs.  

Gearing

Gearing is defined as Total debt divided by Tangible net worth, expressed as a percentage. It indicates the company’s reliance on (often expensive) interest bearing debt. In smaller companies, it also highlights the owners' stake in the business relative to the banks. A higher Gearing ratio implies a higher default probability.

Solvency

Solvency is defined as Tangible net worth (Shareholder funds – Intangibles) divided by Total assets – Intangibles, expressed as a percentage. It indicates the financial leverage of a company, i.e. it measures how much a company is relying on creditors to fund assets. The lower the ratio, the greater the financial risk. The amount of risk considered acceptable for a company depends on the nature of the business and the skills of its management, the liquidity of the assets and speed of the asset conversion cycle, and the stability of revenues and cash flows. A higher Solvency ratio implies a lower default probability.
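A minimal sketch of the two capital structure ratios, again with hypothetical inputs:

```python
# Illustrative only: Tangible net worth = Shareholder funds minus Intangibles.
def capital_structure_ratios(total_debt, shareholder_funds, intangibles, total_assets):
    tangible_net_worth = shareholder_funds - intangibles
    return {
        "gearing_%": 100.0 * total_debt / tangible_net_worth,
        "solvency_%": 100.0 * tangible_net_worth / (total_assets - intangibles),
    }
```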

Pillar 4 - Debt Service

The debt service pillar considers the capability of a company to meet its financial obligations in the form of debt. It ties the debt obligation a company has to its earnings potential. 

Total Debt / EBITDA

This ratio is defined as Total debt divided by Earnings before Interest, Taxes, Depreciation, and Amortization (EBITDA), where Total debt comprises Loans plus Noncurrent liabilities. It indicates the total debt run-off period by showing the number of years it would take to repay all of the company's interest-bearing debt from operating profit adjusted for Depreciation and Amortization. EBITDA should not, of course, be considered as cash available to pay off debt. A higher Total debt / EBITDA ratio implies a higher default probability.

Interest Coverage Ratio

Interest coverage ratio is defined as Earnings before interest and taxes (EBIT) divided by interest expenses (Gross and Capitalized). It indicates the firm's ability to meet interest payments from earnings. A high ratio indicates that the borrower should have little difficulty in meeting the interest obligations of loans. This ratio also serves as an indicator of a firm's ability to service current debt and its capacity for taking on additional debt. A higher Interest coverage ratio implies a lower default probability.
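Completing the ratio pillars, a hypothetical sketch of the two debt service ratios:

```python
# Illustrative only: Total debt = Loans plus Noncurrent liabilities.
def debt_service_ratios(loans, noncurrent_liabilities, ebitda, ebit, interest_expense):
    total_debt = loans + noncurrent_liabilities
    return {
        "total_debt_to_ebitda": total_debt / ebitda,    # debt run-off period in years
        "interest_coverage": ebit / interest_expense,   # gross plus capitalised interest
    }
```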

Pillar 5 - Size

In general, the larger a company is, the less vulnerable it is, as its turnover is usually more diversified. Turnover is considered the best indicator of size: the higher the turnover, the less vulnerable a company generally is.

Ratio Scoring and Mapping

While these financial ratios provide some very useful information regarding the current state of a company, it is difficult to assess them on a stand-alone basis. They are only useful in a credit rating determination if we can compare them to the same ratios for a group of peers. Ratio scoring deals with the process of translating the financials to a score that gives an indication of the relative creditworthiness of a company against its peers.

The ratios are assessed against a peer group of companies. This provides more discriminatory power during the calibration process and hence a better estimation of the risk that a company will default. Research has shown that there are two factors that are most fundamental when determining a comparable peer group. These two factors are industry type and size. The financial ratios tend to behave ‘most alike’ within these segmentations. The industry type is a good way to separate, for example, companies with a lot of tangible assets on their balance sheet (e.g. retail) versus companies with very few tangible assets (e.g. service based industries). The size reflects that larger companies are generally more robust and less likely to default in the short to medium term, as compared to smaller, less mature companies.

Since ratios tend to behave differently over different industries and sizes, the ratio value score has to be calibrated for each peer group segment.

When scoring a ratio, both the latest value and the long-term trend should be taken into account. The trend reflects whether a company’s financials are improving or deteriorating over time, which may be an indication of their long-term perspective. Hence, trends are also taken into account as a separate factor in the scoring function.

To arrive at a total score, a set of weights needs to be determined, which indicates the relative importance of the different components. This total score is then mapped to an ordinal rating scale, which usually runs from AAA (excellent creditworthiness) to D (defaulted), to indicate the creditworthiness. Note that at this stage, the rating only incorporates the quantitative factors. It will serve as a starting point to include the qualitative factors and the overrides.
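As an illustration of this scoring and mapping step, the sketch below combines hypothetical pillar scores with purely illustrative weights and thresholds; a real model would calibrate both against empirical default data.

```python
# Hypothetical weights, scores and thresholds, purely for illustration.
PILLAR_WEIGHTS = {"profitability": 0.25, "liquidity": 0.25,
                  "capital": 0.20, "debt_service": 0.20, "size": 0.10}

# Thresholds on a 1-10 total score, mapped to an ordinal rating scale.
RATING_SCALE = [(9.0, "AAA"), (8.0, "AA"), (7.0, "A"), (6.0, "BBB"),
                (5.0, "BB"), (4.0, "B"), (3.0, "CCC"), (0.0, "CC")]

def quantitative_rating(pillar_scores):
    total = sum(PILLAR_WEIGHTS[p] * s for p, s in pillar_scores.items())
    for threshold, rating in RATING_SCALE:
        if total >= threshold:
            return rating
    return "D"

# Fictional company scoring 1-10 on each pillar -> "BBB"
print(quantitative_rating({"profitability": 7, "liquidity": 6, "capital": 8,
                           "debt_service": 5, "size": 4}))
```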

"A sound credit rating model strikes a balance between quantitative and qualitative aspects. Relying too much on quantitative outcomes ignores valuable ‘unstructured’ information, whereas an expert judgement based approach ignores the value of empirical data, and their explanatory power."

Qualitative Factors

Qualitative factors are crucial to include in the model. They capture the ‘softer’ criteria underlying creditworthiness. They relate, among other things, to the track record, management capabilities, accounting standards, and access to capital of a company. These can be hard to capture in concrete criteria, and they will differ between credit rating models.

Note that due to their qualitative nature, these factors will rely more on expert opinion and industry insights. Furthermore, some of these factors will affect larger companies more than smaller companies and vice versa. In larger companies, management structures are far more complex, track records will tend to be more extensive and access to capital is a more prominent consideration.

All factors are generally assigned an ordinal scoring scale and relative weights, to arrive at a total score for the qualitative part of the assessment.

A categorisation can be made between business analysis and behavioural analysis.

Business Analysis

Business analysis deals with all aspects of a company that relate to the way they operate in the markets. Some of the factors that can be included in a credit rating model are the following:

Years in Same Business

Companies that have operated in the same line of business for a prolonged period of time are more likely to remain in business for the foreseeable future: their business model has proven sound enough to generate stable financials.

Customer Risk

Customer risk is an assessment of the extent to which a company depends on one customer, or a small group of customers, for its turnover. A large customer taking its business to a competitor can have a significant impact on such a company.

Accounting Risk

A company's internal accounting standards are generally a good indicator of the quality of management and internal controls. Recent or frequent incidents, delayed annual reports, and a lack of detail are clear red flags.

Track record with Corporate

This is mostly relevant for counterparties with whom a standing relationship exists. The track record of previous years is useful first-hand experience to take into account when assessing creditworthiness.

Continuity of Management

A company that has been under the same management for an extended period of time tends to reflect a stable company, with few internal struggles. Furthermore, this reflects a positive assessment of management by the shareholders.

Operating Activities Area

Companies operating on a global scale are generally more diversified and therefore less affected by most political and regulatory risks. This reflects well in their credit rating. Additionally, companies that serve a large market have a solid base that provides some security against adverse events.

Access to Capital

Access to capital is a crucial element of the qualitative assessment. Companies with good access to the capital markets can raise debt and equity as needed. An actively traded stock, a public rating, and frequent and recent debt issuances are all signals that a company has access to capital.

Behavioral Analysis

Behavioural analysis aims to incorporate the prior behaviour of a company in the credit rating. A separation can be made between external and internal indicators.

External indicators

External indicators are all information that can be acquired from external parties, relating to the behaviour of a company when it comes to honouring obligations. This could be a credit report from a credit rating agency, payment details from a bank, public news items, etc.

Internal Indicators

Internal indicators concern all prior interactions you have had with a company. This includes payment delays, litigation, breaches of financial covenants, etc.

Override Framework

Many models allow for an override of the credit rating resulting from the prior analysis. This is a more discretionary step, which should be properly substantiated and documented. Overrides generally only allow for adjusting the credit rating by one notch upward, while downward adjustments can be more sizable.

Overrides can be made due to a variety of reasons, which is generally carefully separated in the model. Reasons for overrides generally include adjusting for country risk, industry adjustments, company specific risk and group support.

It should be noted that some overrides are mandated by governing bodies. As an example, the OECD prescribes the overrides to be applied based on a country risk mapping table, for the purpose of arm’s length pricing of intercompany contracts.

By combining all the factors and considerations mentioned in this article, applying weights and scoring functions, and applying overrides, a final credit rating is obtained.

Model Quality and Fit

The model quality determines whether the model is appropriate to be used in a practical setting. From a statistical modelling perspective, many considerations can be made with regard to model quality; these are outside the scope of this article, so we will stick to a high-level discussion here.

The AUC (area under the ROC curve) metric is one of the most popular metrics to quantify model fit (note this is not necessarily the same as model quality, just as correlation does not equal causation). Simply put, the ROC curve plots the proportion of correctly identified defaults against the proportion of false alarms across all possible cut-off points; the area under that curve indicates the discriminatory power of the model.
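For readers who want to see the metric in action, the short example below computes the AUC for a fictional set of default outcomes and model scores using scikit-learn; the data is invented purely for illustration.

```python
# Fictional default outcomes (1 = defaulted) and model default probabilities.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_score = [0.02, 0.05, 0.40, 0.10, 0.65, 0.03, 0.35, 0.30]

# 0.5 = no discriminatory power, 1.0 = perfect rank-ordering of defaulters
print(f"AUC: {roc_auc_score(y_true, y_score):.2f}")
```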

Alternative Modelling Approaches

The model structure described above is one specific way to model credit ratings. While models may widely vary, most of these components would typically be included. During recent years, there has been an increase in the use of payment data, which can be accessed under the PSD2 regulation. This can provide a more up-to-date overview of the state of the company and can definitely be considered as an additional factor in the analysis. However, the main disadvantage of this approach is that it requires explicit approval from the counterparty to use the data, which makes it more challenging to apply on a portfolio basis.

Another approach is a purely machine learning based modelling approach. If applied well, this will give the best model in terms of the AUC (area under the curve) metric, which measures the explanatory power of the model. One major disadvantage of this approach, however, is that the interpretability of the resulting model is very limited. This is something that is generally not preferred by auditors and regulatory bodies as the primary model for creditworthiness. In practice, we see these models most often as challenger models, to benchmark the explanatory power of models based on economic rationale. They can serve to spot deficiencies in the explanatory power of existing models and trigger a re-assessment of the factors included in these models. In some cases, they may also be used to create additional comfort regarding the inclusion of some factors.

Furthermore, the degree to which the model depends on expert opinion is to a large extent dependent on the data available to the model developer. Most notably, the financials and historical default data of a representative group of companies are needed to properly fit the model to the empirical data. Since this data can be hard to come by, many credit rating models are based more on expert opinion than on actual quantitative data. Our Corporate Credit Rating model was calibrated on a database containing the financials and default data of an extensive set of companies. This provides a solid quantitative basis for the model outcomes.

Closing Remarks

Modelling credit risk and credit ratings is a complex affair. Zanders provides advice, standardized and customizable models, and software solutions to tackle these challenges. Do you want to learn more about credit rating modelling? Reach out for a free consultation. Looking for a tailor-made and flexible solution to become IFRS 9 compliant? Find out about our Condor Credit Risk Suite, our IFRS 9 compliance solution.

The usage of proxies under FRTB

November 2021
3 min read

Explore how proxies can reduce the number of non-modellable risk factors and help limit capital charges under FRTB.


Non-modellable risk factors (NMRFs) have been shown to be one of the largest contributors to capital charges under FRTB. The use of proxies is one of the methods that banks can employ to increase the modellability of risk factors and reduce the number of NMRFs. Other potential methods for improving the modellability of risk factors are using external data sources and modifying risk factor bucketing approaches.

Proxies and FRTB

A proxy is utilised when there is insufficient historical data for a risk factor. A lack of historical data increases the likelihood of the risk factor failing the Risk Factor Eligibility Test (RFET). Consequently, using proxies ensures that the number of NMRFs is reduced and capital charges are kept to a minimum. Although the use of proxies is allowed, regulation states that their usage must be limited, and they must have sufficiently similar characteristics to the risk factors which they represent.

Banks must be ready to provide evidence to regulators that their chosen proxies are conceptually and empirically sound. Despite the potential reduction in capital, developing proxy methodologies can be time-consuming and require considerable ongoing monitoring. There are two main approaches which are used to develop proxies: rules-based and statistical.

Proxy decomposition

FRTB regulation allows NMRFs to be decomposed into modellable components and a residual basis, which must be capitalised as non-modellable. For example, credit spreads for small issuers which are not highly liquid can be decomposed into a liquid credit spread index component, which is classed as modellable, and a non-modellable basis or spread.  
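As a stylised illustration of such a decomposition, the sketch below splits a fictional issuer credit spread into a liquid index proxy and a residual basis; the series are invented and the split is purely mechanical.

```python
# Fictional spread levels (in %) for an illiquid issuer and a liquid index proxy.
import numpy as np

issuer_spread = np.array([1.50, 1.55, 1.62, 1.58, 1.70])   # non-modellable risk factor
index_spread  = np.array([1.20, 1.24, 1.30, 1.27, 1.36])   # modellable proxy component

# Residual basis: stays non-modellable and is capitalised via stressed scenarios.
basis = issuer_spread - index_spread
print(basis)
```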

To test modellability using the RFET, 12 months of data are required for the proxy and basis components. If the basis between the proxy and the risk factor has not been identified and properly capitalised, only the proxy representation of the risk factor can be used in the Risk Theoretical P&L (RTPL). However, if the capital requirement for a basis is determined, either: (i) the proxy risk factor and the basis; or (ii) the original risk factor itself can be included in the RTPL.

Banks should aim to produce a preliminary cost-benefit analysis of proxy development: does the cost and effort of developing proxies outweigh the capital that could be saved by increasing risk factor modellability? For example, proxies which are highly volatile may actually increase NMRF capital charges.

Approaches for the development of proxies

Both rules-based and statistical approaches to developing proxies require considerable effort. Banks should aim to develop statistical approaches as they have been shown to be more accurate and also more efficient in reducing capital requirements for banks.

Rules-based approach

Rules-based approaches are simpler, but less accurate than statistical approaches. They find the “closest fit” modellable risk factor using more qualitative methods: for example, picking the closest tenor on a yield curve, using relevant indices or ETFs, or limiting the search for proxies to the same sector as the underlying risk factor.

Similarly, longer tenor points (which may not be traded as frequently) can be decomposed into shorter-tenor points and a cross-tenor basis spread.
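A minimal sketch of such a closest-tenor rule, with a hypothetical set of modellable tenors:

```python
# Hypothetical example: proxy a non-modellable 12y point with the nearest
# modellable tenor on the same curve.
def closest_modellable_tenor(tenor, modellable_tenors):
    return min(modellable_tenors, key=lambda t: abs(t - tenor))

print(closest_modellable_tenor(12.0, [0.5, 1, 2, 5, 10, 30]))  # -> 10
```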

Statistical approach

Statistical approaches are more quantitative and more accurate than rules-based approaches. However, this inevitably comes with computational expense. A large number of candidates are tested using the chosen statistical methodology and the closest fit is picked.

For example, a regression approach could be used to identify which of the candidates is most correlated with the underlying risk factor. Studies have shown that statistical approaches not only produce more accurate proxies, but can also reduce capital charges by almost twice as much as simpler rules-based approaches.
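As a stylised example of such a search, the sketch below ranks a set of fictional candidate risk factors by their correlation with the NMRF and picks the closest fit; a production implementation would use real return series and a more formal regression and validation framework.

```python
# Fictional return series: the candidates are generated with different degrees
# of co-movement with the NMRF, then ranked by correlation.
import numpy as np

rng = np.random.default_rng(0)
nmrf_returns = rng.normal(size=250)
candidates = {name: beta * nmrf_returns + rng.normal(scale=0.5, size=250)
              for name, beta in [("index_A", 0.9), ("index_B", 0.4), ("index_C", 0.1)]}

correlations = {name: np.corrcoef(nmrf_returns, series)[0, 1]
                for name, series in candidates.items()}
best_proxy = max(correlations, key=correlations.get)
print(best_proxy, round(correlations[best_proxy], 2))
```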

Conclusion

Risk factor modellability is a considerable concern for banks as it has a direct impact on the size of their capital charges. Inevitably, reducing the number of NMRFs is a key aim for all IMA banks. In this article, we show that developing proxies is one of the strategies that banks can use to minimise the amount of NMRFs in their models. Furthermore, we describe the two main approaches for developing proxies: rules-based and statistical. Although rules-based approaches are less complicated to develop, statistical approaches show much better accuracy and hence have the potential to better reduce capital charges.

FRTB: Improving the Modellability of Risk Factors

June 2021
3 min read

Explore how banks can improve the modellability of risk factors and reduce NMRF capital charges under the FRTB internal models approach.


Under the FRTB internal models approach (IMA), the capital calculation of risk factors is dependent on whether the risk factor is modellable. Insufficient data will result in more non-modellable risk factors (NMRFs), significantly increasing associated capital charges.

Risk factor modellability and NMRFs

The modellability of risk factors is a new concept which was introduced under FRTB and is based on the liquidity of each risk factor. Modellability is measured using the number of ‘real prices’ which are available for each risk factor. Real prices are transaction prices from the institution itself, verifiable prices for transactions between arms-length parties, prices from committed quotes, and prices from third party vendors.

For a risk factor to be classed as modellable, it must have a minimum of 24 real prices per year with no 90-day period containing fewer than four prices, or a minimum of 100 real prices over the last 12 months (with, in both cases, a maximum of one real price counted per day). The Risk Factor Eligibility Test (RFET), outlined in FRTB, is the quarterly process which determines modellability. The results of the RFET determine, for each risk factor, whether the capital requirements are calculated via expected shortfall or via stressed scenarios.
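A simplified reading of these criteria can be expressed as a short eligibility check; the sketch below is illustrative only and is not a full regulatory implementation.

```python
# Simplified sketch: either 24+ real prices with no 90-day window containing
# fewer than four observations, or 100+ real prices over the last 12 months.
# At most one observation per day is counted.
from datetime import date, timedelta

def passes_rfet(observation_dates, as_of):
    window_start = as_of - timedelta(days=365)
    obs = sorted({d for d in observation_dates if window_start <= d <= as_of})

    if len(obs) >= 100:                       # criterion 2
        return True
    if len(obs) < 24:                         # criterion 1, count check
        return False
    day = window_start                        # criterion 1, rolling 90-day check
    while day + timedelta(days=90) <= as_of:
        if sum(1 for d in obs if day <= d < day + timedelta(days=90)) < 4:
            return False
        day += timedelta(days=1)
    return True

# Hypothetical usage: observations on the 5th and 20th of each month pass.
dates = [date(2021, m, d) for m in range(1, 13) for d in (5, 20)]
print(passes_rfet(dates, as_of=date(2021, 12, 31)))
```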

Consequences of NMRFs for banks

Modellable risk factors are capitalised via expected shortfall calculations which allow for diversification benefits. Conversely, capital for NMRFs is calculated via stressed scenarios which result in larger capital charges. This is due to longer liquidity horizons and more prudent assumptions used for aggregation. Although it is expected that a low proportion of risk factors will be classified as non-modellable, research shows that they can account for over 30% of total capital requirements. 

There are multiple techniques that banks can use to reduce the number and impact of NMRFs, including the use of external data, developing proxies, and modifying the parameterisation of risk factor curves and surfaces. As well as focusing on reducing the number of NMRFs, banks will also need to develop early warning systems and automated reporting infrastructures to monitor the modellability of risk factors. These tools help to track and predict modellability issues, reducing the likelihood that risk factors will fail the RFET and increase capital requirements.

Methods for reducing the number of NMRFs

Banks should focus on reducing their NMRFs as they are associated with significantly higher capital charges. There are multiple approaches which can be taken to increase the likelihood that a risk factor passes the RFET and is classed as modellable.

Enhancing internal data

The simplest way for banks to reduce NMRFs is by increasing the amount of data available to them. Augmenting internal data with external data increases the number of real prices available for the RFET and reduces the likelihood of NMRFs. Banks can purchase additional data from external data vendors and data pooling services to increase the size and quality of datasets.

It is important for banks to initially investigate their internal data and understand where the gaps are. As data providers vary in the services and information they provide, banks should not focus only on the types and quantity of data available; they should also consider data integrity, user interfaces, governance, and security. Many data providers also offer FRTB-specific metadata, such as flags for RFET liquidity passes or fails.

Finally, once a data provider has been chosen, additional effort will be required to resolve discrepancies between internal and external data and ensure that the external data follows the same internal standards.

Creating risk factor proxies

Proxies can be developed to reduce the number or magnitude of NMRFs, however, regulation states that their use must be limited. Proxies are developed using either statistical or rules-based approaches.

Rules-based approaches are simplistic, yet generally less accurate. They find the “closest fit” modellable risk factor using more qualitative methods, e.g. using the closest tenor on the interest rate curve. Alternatively, more accurate approaches model the relationship between the NMRF and modellable risk factors using statistical methods. Once a proxy is determined, it is classified as modellable and only the basis between it and the NMRF is required to be capitalised using stressed scenarios.

Determining proxies can be time-consuming as it requires exploratory work with uncertain outcomes. Additional ongoing effort will also be required by validation and monitoring units to ensure the relationship holds and the regulator is satisfied.

Developing own bucketing approach

Instead of using the prescribed bucketing approach, banks can use their own approach to maximise the number of real price observations for each risk factor.

For example, if a risk model requires a volatility surface to price, there are multiple ways this can be parametrised. One method could be to split the surface into a 5x5 grid, creating 25 buckets that would each require sufficient real price observations to be classified as modellable. Conversely, the bank could instead split the surface into a 2x2 grid, resulting in only four buckets. The same number of real price observations would then be allocated across significantly fewer buckets, decreasing the chances of a risk factor being an NMRF.
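The effect of the bucketing choice can be illustrated with a small sketch that allocates the same fictional set of real-price observations to a 5x5 and a 2x2 grid:

```python
# Fictional real-price observations on a volatility surface, allocated first to
# a 5x5 grid (25 buckets) and then to a 2x2 grid (4 buckets).
import numpy as np

rng = np.random.default_rng(1)
observations = np.column_stack([rng.uniform(0, 10, 200),     # expiry in years
                                rng.uniform(0.5, 1.5, 200)])  # moneyness

def bucket_counts(obs, n_expiry, n_moneyness):
    counts, _, _ = np.histogram2d(obs[:, 0], obs[:, 1],
                                  bins=[n_expiry, n_moneyness],
                                  range=[[0, 10], [0.5, 1.5]])
    return counts

print(bucket_counts(observations, 5, 5).min())  # sparsest of 25 buckets
print(bucket_counts(observations, 2, 2).min())  # sparsest of 4 buckets
```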

It should be noted that the choice of bucketing approach affects other aspects of FRTB. Profit and Loss Attribution (PLA) uses the same buckets of risk factors as chosen for the RFET. Increasing the number of buckets may increase the chances of passing PLA, however, also increases the likelihood of risk factors failing the RFET and being classed as NMRFs.

Conclusion

In this article, we have described several potential methods for reducing the number of NMRFs. Although some of the suggested methods may be more cost effective or easier to implement than others, banks will most likely, in practice, need to implement a combination of these strategies in parallel. The modellability of risk factors is clearly an important part of the FRTB regulation for banks as it has a direct impact on required capital. Banks should begin to develop strategies for reducing the number of NMRFs as early as possible if they are to minimise the required capital when FRTB goes live.

How Royal FloraHolland grew a global cash management bank relationship from scratch

Royal FloraHolland launched the Floriday digital platform to enhance global flower trade by connecting growers and buyers, offering faster transactions, and streamlining international payment solutions.


In a changing global floriculture market, Royal FloraHolland created a new digital platform where buyers and growers can connect internationally. As part of its strategy to offer better international payment solutions, the cooperative of flower growers decided to look for an international cash management bank.

Royal FloraHolland is a cooperative of flower and plant growers. It connects growers and buyers in the international floriculture industry by offering unique combinations of deal-making, logistics, and financial services. Connecting 5,406 suppliers with 2,458 buyers and offering a solid foundation to all these players, Royal FloraHolland is the largest floriculture marketplace in the world.

The company’s turnover reached EUR 4.8 billion (in 2019) with an operating income of EUR 369 million. Yearly, it trades 12.3 billion flowers and plants, with an average of at least 100k transactions a day.

The floriculture cooperative was established 110 years ago, organizing flower auctions via so-called clock sales. During these sales, flowers were first offered at a high price, which fell once the clock started ticking. The price went down until one of the buyers pushed the buying button, leaving the other buyers empty-handed.

The Floriday platform


Around twenty years ago, the clock sales model started to change. “The floriculture market is changing to trading that increasingly occurs directly between growers and buyers. Our role is therefore changing too,” Wilco van de Wijnboom, Royal FloraHolland’s manager corporate finance, explains. “What we do now is mainly the financing part – the invoices and the daily collection of payments, for example. Our business has developed both geographically and digitally, so we noticed an increased need for a platform for the global flower trade. We therefore developed a new digital platform called Floriday, which enables us to deliver products faster, fresher and in larger amounts to customers worldwide. It is an innovative B2B platform where growers can make their assortment available worldwide, and customers are able to transact in various ways, both nationally and internationally.”

Our business has developed both geographically and digitally, so we noticed an increased need for a platform for the global flower trade

Wilco van de Wijnboom, Royal FloraHolland’s Manager Corporate Finance


The Floriday platform aims to provide a wider range of services to pay and receive funds, not only in euros but also in other currencies and across different jurisdictions. Since it would help treasury to deal with all payments worldwide, Royal FloraHolland needed an international cash management bank too.

Van de Wijnboom: “It has been a process of a few years. As part of our strategy, we wanted to grow internationally, and it was clear we needed an international bank to do so. At the same time, our commercial department had some leads for flower business from Saudi Arabia and Kenya. Early in 2020, all developments – from the commercial, digital and financing points of view – came together.”

RFP track record


Royal FloraHolland’s financial department decided to contact Zanders for support. “Selecting a cash management bank is not something we do every day, so we needed support to find the right one,” says Pim Zaalberg, treasury consultant at Royal FloraHolland. “We have been working together with Zanders on several projects since 2010 and know which subject matter expertise they can provide. They previously advised us on the capital structure of the company and led the arranging process of the bank financing of the company in 2017. Furthermore, they assisted in the SWIFT connectivity project, introducing payments-on-behalf-of. They are broadly experienced and have a proven track record in drafting an RFP. They know exactly which questions to ask and what is important, so it was a logical step to ask them to support us in the project lead and the contact with the international banks.”

Zanders consultant Michal Zelazko adds: “We use a standardized bank selection methodology at Zanders, but importantly this can be adjusted to the specific needs of projects and clients. This case contained specific geographical jurisdictions and payment methods with respect to the Floriday platform. Other factors were, among others, pre-payments and the consideration to have a separate entity to ensure the safety of all transactions.”

Strategic partner


The project started in June 2020, a period in which the turnover figures managed to rebound significantly after the initial fall caused by the coronavirus pandemic. Van de Wijnboom: “The impact we currently have is on the flowers coming from overseas, for example from Kenya and Ethiopia. The growers there have really had a difficult time, because the number of flights from those countries has decreased heavily. Meanwhile, many people continued to buy flowers when they were in lockdown, to brighten up their new home offices.”

Together with Zanders, Royal FloraHolland drafted the goals and then started selecting the banks they wanted to invite to find out whether they could meet these goals. All questions for the banks about the cooperative's expected turnover, profit and perspectives could be answered positively. Zaalberg explains that the bank for international cash management was also chosen to be a strategic partner for the company: “We did not choose a bank to do only payments, but we needed a bank to think along with us on our international plans and one that offers innovative solutions in the e-commerce area. The bank we chose, Citibank, is now helping us with our international strategy and is able to propose solutions for our future goals.”

The Royal FloraHolland team involved in the selection process now look back confidently on the process and choice. Zaalberg: “We are very proud of the short timelines of this project, starting in June and selecting the bank in September – all done virtually and by phone. It was quite a precedent to do it this way. You have to work with a clear plan and be very strict in presentation and input gathering. I hope it is not the new normal, but it worked well and was quite efficient too. We met banks from Paris and Dublin on the same day without moving from our desks.”

You only have one chance – when choosing an international bank for cash management it will be a collaboration for the next couple of years

Wilco van de Wijnboom, Royal FloraHolland’s Manager Corporate Finance


Van de Wijnboom agrees and stresses the importance of a well-managed process: “You only have one chance – when choosing an international bank for cash management it will be a collaboration for the next couple of years.”

Future plans

The future plans of the company are focused on venturing out to new jurisdictions, specifically in the finance space, to offer more currencies for both growers and buyers. “This could go as far as paying growers in their local currency,” says Zaalberg.

“Now we only use euros and US dollars, but we look at ways to accommodate payments in other currencies too. We look at our cash pool structure too. We made sure that, in the RFP, we asked the banks whether they could provide cash pooling in a way that was able to use more currencies. We started simple but have chosen the bank that can support more complex setups of cash management structures as well.” Zelazko adds: “It is an ambitious goal but very much in line with what we see in other companies.”

Also, in the longer term, Royal FloraHolland is considering connecting the Floriday platform to its treasury management system. Van de Wijnboom: “Currently, these two systems are not directly connected, but we could do this in the future. When we had the selection interviews with the banks, we discussed the prepayments situation - how do we make sure that the platform is immediately updated when there is a prepayment? If it is not connected, someone needs to take care of the reconciliation.”

There are some new markets and trade lanes to enter, as Van de Wijnboom concludes: ”We now see some trade lanes between Kenya and the Middle East. The flower farmers indicate that we can play an intermediate role if it is at low cost and if payments occur in US dollars. So, it helps us to have an international cash management bank that can easily do the transactions in US dollars.”
