FRTB: Harnessing Synergies Between Regulations
Discover how leveraging synergies across key regulatory frameworks like SIMM, BCBS 239, SA-CVA, and the IBOR transition can streamline your compliance efforts and ease the burden of FRTB implementation.
Regulatory Landscape
Despite a delay of one year, many banks are struggling to be ready for FRTB in January 2023. Alongside the FRTB timeline, banks are also preparing for other important regulatory requirements and deadlines which share commonalities in implementation. We introduce several of these below.
SIMM
Initial Margin (IM) is the value of collateral required to open a position with a bank, exchange or broker. The Standard Initial Margin Model (SIMM), published by ISDA, sets a market standard for calculating IMs. SIMM provides margin requirements for financial firms when trading non-centrally cleared derivatives.
BCBS 239
BCBS 239, published by the Basel Committee on Banking Supervision, aims to enhance banks’ risk data aggregation capabilities and internal risk reporting practices. It focuses on areas such as data governance, accuracy, completeness and timeliness. The standard outlines 14 principles, although their high-level nature means that they are open to interpretation.
SA-CVA
Credit Valuation Adjustment (CVA) is a type of value adjustment and represents the market value of the counterparty credit risk for a transaction. FRTB splits CVA into two main approaches: BA-CVA, for smaller banks with less sophisticated trading activities, and SA-CVA, for larger banks with designated CVA risk management desks.
IBOR
Interbank Offered Rates (IBORs) are benchmark reference interest rates. As they have been subject to manipulation and due to a lack of liquidity, IBORs are being replaced by Alternative Reference Rates (ARRs). Unlike IBORs, ARRs are based on real transactions on liquid markets rather than subjective estimates.
Synergies With Current Regulation
Existing SIMM and BCBS 239 frameworks and processes can be readily leveraged to reduce efforts in implementing FRTB frameworks.
SIMM
The overarching process of SIMM is very similar to the FRTB Sensitivities-based Method (SbM), including the identification of risk factors, calculation of sensitivities and aggregation of results. The outputs of SbM and SIMM are both based on delta, vega and curvature sensitivities. SIMM and FRTB both share four risk classes (IR, FX, EQ, and CM). However, in SIMM, credit is split across two risk classes (qualifying and non-qualifying), whereas it is split across three in FRTB (non-securitisation, securitisation and correlation trading). For both SbM and SIMM, banks should be able to decompose indices into their individual constituents.
We recommend that banks leverage the existing sensitivities infrastructure from SIMM for SbM calculations, use a shared risk factor mapping methodology between SIMM and FRTB when there is considerable alignment in risk classes, and utilise a common index look-through procedure for both SIMM and SbM index decompositions.
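To illustrate the overlap, the sketch below implements the kind of bucket-level delta aggregation that both SbM and SIMM perform: risk-weighted sensitivities are combined using an intra-bucket correlation. A single illustrative risk weight and correlation are used here; the actual regulatory values are prescribed per risk class and bucket.

```python
# A minimal sketch of the bucket-level delta aggregation shared by the FRTB
# Sensitivities-based Method and ISDA SIMM. The risk weight and correlation
# below are illustrative placeholders, not regulatory values.
import math

def bucket_charge(weighted_sensitivities, rho):
    """K_b = sqrt(max(0, sum_k WS_k^2 + sum_{k != l} rho * WS_k * WS_l))."""
    total = sum(ws * ws for ws in weighted_sensitivities)
    n = len(weighted_sensitivities)
    for k in range(n):
        for l in range(n):
            if k != l:
                total += rho * weighted_sensitivities[k] * weighted_sensitivities[l]
    return math.sqrt(max(total, 0.0))

# Example: two interest-rate risk factors in one bucket.
sensitivities = [120_000.0, -45_000.0]   # raw delta sensitivities
risk_weight = 0.017                      # illustrative 1.7% risk weight
ws = [s * risk_weight for s in sensitivities]
print(f"Bucket charge: {bucket_charge(ws, rho=0.5):,.2f}")
```

A shared engine of this shape, fed with either SIMM or SbM risk weights and correlations, is what makes the recommended reuse of the sensitivities infrastructure practical.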
BCBS 239
BCBS 239 requires banks to review IT infrastructure, governance, data quality, aggregation policies and procedures. A similar review will be required in order to comply with the data standards of FRTB. The BCBS 239 principles are now in “Annex D” of the FRTB document, clearly showing the synergy between the two regulations. The quality, transparency, volume and consistency of data are important for both BCBS 239 and FRTB. Improving these factors allows banks to follow the BCBS 239 principles more easily and decreases the capital charges for non-modellable risk factors. BCBS 239 principles, such as data completeness and timeliness, are also necessary for passing P&L attribution (PLA) under FRTB.
We recommend that banks use BCBS 239 principles when designing the necessary data frameworks for the FRTB Risk Factor Eligibility Test (RFET), support FRTB traceability requirements and supervisory approvals with existing BCBS 239 data lineage documentation, and produce market risk reporting for FRTB using the risk reporting infrastructure detailed in BCBS 239.
Synergies With Future Regulation
The IBOR transition and SA-CVA will become effective from 2023. Aligning the timelines and exploiting the similarities between FRTB, SA-CVA and the IBOR transition will help banks be ready for all three regulatory deadlines.
SA-CVA
Four of the six risk classes in SA-CVA (IR, FX, EQ, and CM) are identical to those in SbM. SA-CVA, however, uses a reduced granularity for risk factors compared to SbM. The SA-CVA capital calculation uses a similar methodology to SbM by combining sensitivities with risk weights. SA-CVA also incorporates the same trade population and metadata as SbM. SA-CVA capital requirements must be calculated and reported to the supervisor at the same monthly frequency as for the market risk standardised approach.
We recommend that banks combine SA-CVA and SbM risk factor bucketing tasks in a common methodology to reduce overall effort, isolate common components of both models as a feeder model, allowing a single stream for model development and validation, and develop a single system architecture which can be configured for either SbM or SA-CVA.
IBOR Transition
Although not a direct synergy, the transition from IBORs will have a direct impact on the Internal Models Approach (IMA) for FRTB and on the eligibility of risk factors. As the use of IBORs is discontinued, banks may observe a reduction in the number of real-price observations for associated risk factors due to a reduction in market liquidity. It is not certain whether these liquidity issues fall under the RFET exemptions for systemic circumstances, which apply to modellable risk factors that can no longer pass the test. It may be difficult for banks to obtain stress-period data for ARRs, which could lead to substantial efforts to produce and justify proxies. The transition may cause modifications to trading desk structure, the integration of external data providers, and enhanced operational requirements, all of which can affect FRTB.
We recommend that banks investigate how much data is available for ARRs, for both stress-period calculations and real-price observations; develop, as soon as possible, any proxies needed to overcome data availability issues; and calculate the capital consequences of the IBOR transition through the existing FRTB engine.
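As a starting point for the first recommendation, the sketch below counts a risk factor's real-price observations against the RFET count criteria as commonly summarised (at least 24 observations over the previous 12 months with no 90-day period containing fewer than four, or at least 100 observations in total). The observation dates are hypothetical ARR data; an actual implementation must follow the applicable rule text.

```python
# A hedged sketch of the FRTB Risk Factor Eligibility Test (RFET) count
# criterion for a single risk factor. Dates below are hypothetical.
from datetime import date, timedelta

def passes_rfet(observation_dates, window_end):
    window_start = window_end - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= window_end)
    if len(obs) >= 100:          # alternative criterion: 100+ observations
        return True
    if len(obs) < 24:            # fewer than 24 observations always fails
        return False
    # Check every rolling 90-day period for at least four observations.
    day = window_start
    while day + timedelta(days=90) <= window_end:
        in_period = sum(1 for d in obs if day <= d < day + timedelta(days=90))
        if in_period < 4:
            return False
        day += timedelta(days=1)
    return True

# Hypothetical usage: weekly ARR observations easily satisfy the criterion.
weekly = [date(2022, 1, 3) + timedelta(weeks=i) for i in range(52)]
print(passes_rfet(weekly, window_end=date(2022, 12, 31)))  # True
```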
Conclusion
FRTB implementation is proving to be a considerable workload for banks, especially those considering opting for the IMA. Several FRTB requirements, such as PLA and RFET, are completely new to banks. As we have shown in this article, there are several other important regulatory requirements which banks are currently working towards. As such, we recommend that banks leverage the synergies seen across this regulatory landscape to reduce the complexity and workload of FRTB.
Zanders Project Management Framework
If you want to go fast, go alone. If you want to go far, go together.
At the birth of any project, it is crucial to determine the most suitable project management framework by which the treasury objectives can be achieved. Whether the focus is on TMS implementation, treasury transformation or risk management, the grand challenge remains: to ensure the highest quality of the delivered outcome while understanding the realistic timelines and resources. In this article we shed light on the implications of project management methodologies and address their main concepts and viewpoints, accompanied by experiences from past treasury projects.
In recent years, big corporates have been strategically cherry-picking elements from various methodologies, as there is no one-size-fits-all. At Zanders, our treasury project experience has given us an in-depth knowledge in this area. Based on this knowledge, and depending on several variables – project complexity, resource maturity, culture, and scope – we advise our clients on the best project management methodology to apply to a specific treasury project.
We have observed that when it comes to choosing the project management methodology for a new treasury project, most corporates tend to choose what is applied internally or on previous projects. This leverages the internal skillsets and maturity around that framework. But is this really the right way to choose?
Shifting from traditional methodologies
As the environment that businesses operate in is undergoing rapid and profound change, the applicability and relevance of the traditional project management methodologies have been called into question. In the spirit of becoming responsive to unforeseen events, companies have sensed the urgency to seek methods that are geared to rapid delivery and able to respond to change quickly.
Embracing agile
The agile management framework aims to enhance project delivery by maximizing team productivity, while minimizing the waste inherent in redundant meetings, repetitive planning and excessive documentation. Unlike traditional command-and-control management, which follows a linear approach, the core of the agile methodology lies in continuously reacting to change rather than following a fixed plan.
This type of framework is mostly applied in an environment where the problem to be solved is complex, its solution is non-linear as it has many unknowns, and the project requirements will most likely change during the lifetime of the project as the target is on a constant move.
The illustration of an agile process (shown above) portrays certain similarities to the waterfall approach, in the sense of breaking the entire project into several phases. However, while these phases in the waterfall approach are sequential, the activities in the agile methodology can run in parallel.
Agile principles accommodate changing requirements, promote sustainable development, and favor frequent delivery of working software, which can add value sooner. From a treasury perspective, however, you often cannot go live in pieces or partial functionalities, since this increases risk; and when a requirement arrives late in the process, teams might not have the resources or availability to support it, creating delivery risk.
Evolving Agile and its forms
Having described the key principles of agile methodology, it is vital to state that over the years it has become a rather broad umbrella-term that covers various concepts that abide by the main agile values and principles.
One of the most popular agile forms is the Kanban approach, the uniqueness of which lies in the visualization of the workflow by building a so-called (digital) Kanban board. Scrum is another project management framework that can be used to manage iterative and incremental projects of all types. The Product Owner works with the team to identify and prioritize system functionality by creating a Product Backlog, with an estimation of software delivery by the functional teams. Once a Sprint has been delivered, the Product Backlog is analyzed and reprioritized, and the next set of deliverables is selected for the next Sprint. The Lean framework focuses on delivering value to the customer through effective value-added analysis. Lean development eliminates waste by asking users to select only the truly valuable features for a system, prioritize those features, and then work on delivering them in small batches.
Waterfall methodologies – old but good
Even though agile methodologies are now widely accepted and rising in popularity, certain types of projects benefit from highly planned and predictive frameworks. The core of this management style lies in its sequential design process, meaning that an upcoming phase cannot begin until the previous one is formally closed. Waterfall methodologies are characterized by a high level of governance, where documentation plays a crucial role. This makes it easier to track progress and manage the project scope in general. Projects that benefit most from this methodology are those whose fixed end requirements can be defined up-front, and they are typically smaller in size. For a project to move to the next phase, all current documentation must be approved by all the involved project managers. The extensive documentation ensures that the team members are familiar with the requirements of the coming phase.
Depending on the scope of the project, this progressive method breaks down the workload into several discrete steps.
Project Team Structures
There are also differences between the project structures and the roles used in the two presented frameworks.
In waterfall, the common roles – outside of delivery or the functional team – to support and monitor the project plan are the project managers (depending on the size of the project there can be one or many, creating a project management office (PMO) structure) and a program director. In agile, the role structure is more intricate and complex. Again, this depends on the size of the treasury project.
As stated previously, agile project management relies heavily on collaborative processes. In this sense, a project manager is not required to exercise central control; the focus is instead on appointing the right people to the right tasks, increasing cross-functional collaboration, and removing impediments to progress. The main roles differ from the waterfall approach and can be labelled as Scrum master, Agile coach and Product owner.
Whatever the chosen approach for a treasury project, one structure is normally seen in both: the steering committee. In more complex and bigger treasury projects (with greater impact and risk to the organization), a second structure or layer on top of the steering committee (called a governance board) is sometimes needed. The objective of each one differs.
The Project Steering Committee is a decision-making structure within the project governance structure that consists of top managers (for example, the leads of each treasury area involved directly in the project) and decision makers who provide strategic direction and policy guidance to the project team and other stakeholders. They also:
- Monitor progress against the project management plan.
- Define, review and monitor value delivered to the business and business case.
- Review and approve changes made to project resource plan, schedules, and scope. This normally depends on the materiality of the changes.
- Review and approve project deliverables.
- Resolve conflicts between stakeholders.
The Governance Board, when needed, is more strategic in nature. In treasury projects, for example, it is normally composed of the treasurer, CFO, and CEO. Some of its responsibilities are to:
- Monitor and help unblock major risks and potential project challenges.
- Keep updated and understand broader impacts coming out from the project delivery.
- Provide insights and solutions around external factors that might impact the treasury project (e.g. business strategic changes, regulatory frameworks, resourcing changes).
Other structures might need to be designed or implemented to support project delivery; more focused groups require different knowledge and expertise. Again, no one solution fits all: it depends on the scope and complexity of the treasury project.
The key decision factors that should be considered when selecting the project structure are:
Roles and responsibilities: Clearly define all roles and responsibilities for each project structure. That will drive planning and will clearly define who should do what. A lack of clarity will create project risks.
Size and expertise: Based on roles and responsibilities, and using a clear RAPID or RACI matrix, define the composition of these structures. There should not be much overlap in terms of people across the structures. In most cases ‘less is more’, provided expertise and experience are ensured.
The treasury project scope, complexity and deliverables should drive these structures. As with the organizational structure of a company, a project should follow the same principles: a pyramid structure should be applied (not an inverted one), in which the functional (hands-on) team is bigger than the other structures.
Is a hybrid model desirable? Our conclusion
While it is known that all methodologies ultimately accomplish the same goal, choosing the most suitable framework is a critical success factor as it determines how the objectives are accomplished. Nowadays, we see that a lot of organizations are embracing a hybrid approach instead of putting all their hopes into one method.
Depending on the circumstances of the treasury project, you might find yourself in a situation where you employ the waterfall approach at the very beginning of the project. This creates a better structure for planning, ensures a common understanding of the project objectives and creates a reasonable timeline for the project. When it comes to the execution of the project, however, it may become apparent that there needs to be space for some flexibility and early business engagement, as the project happens to be in a dynamic environment. Hence, it becomes beneficial to leverage an agile approach. Such a project adopts a “structured agile” methodology, where the planning is done in the traditional way, while the execution implements some agile practices.
Machine learning in risk management
Machine learning (ML) models have already been around for decades. The exponential growth in computing power and data availability, however, has resulted in many new opportunities for ML models. One possible application is to use them in financial institutions’ risk management. This article gives a brief introduction of ML models, followed by the most promising opportunities for using ML models in financial risk management.
The current trend to operate a ‘data-driven business’ and the fact that regulators are increasingly focused on data quality and data availability, could give an extra impulse to the use of ML models.
ML models
ML models study a dataset and use the knowledge gained to make predictions for other datapoints. An ML model consists of an ML algorithm and one or more hyperparameters. ML algorithms study a dataset to make predictions, where hyperparameters determine the settings of the ML algorithm. The studying of a dataset is known as the training of the ML algorithm. Most ML algorithms have hyperparameters that need to be set by the user prior to the training. The trained algorithm, together with the calibrated set of hyperparameters, form the ML model.
ML models have different forms and shapes, and even more purposes. For selecting an appropriate ML model, a deeper understanding of the various types of ML that are available and how they work is required. Three types of ML can be distinguished:
- Supervised learning.
- Unsupervised learning.
- Semi-supervised learning.
The main difference between these types is the data that is required and the purpose of the model. The data that is fed into an ML model is split into two categories: the features (independent variables) and the labels/targets (dependent variables). For example, to predict a person’s height (the label/target), it could be useful to look at the features age, sex, and weight. Some types of machine learning models need both as an input, while others only require features. Each of the three types of machine learning is shortly introduced below.
Supervised learning
Supervised learning is the training of an ML algorithm on a dataset where both the features and the labels are available. The ML algorithm uses the features and the labels as an input to map the connection between features and labels. When the model is trained, labels can be generated by the model by only providing the features. A mapping function is used to provide the label belonging to the features. The performance of the model is assessed by comparing the label that the model provides with the actual label.
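As a minimal illustration, the sketch below trains a supervised model on the height example from the text (features: age, sex, weight; label: height). The data is synthetic and scikit-learn is assumed to be available.

```python
# A minimal supervised-learning sketch: fit a model on features and labels,
# then assess performance by comparing predicted labels to actual labels.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 500
age = rng.uniform(18, 80, n)
sex = rng.integers(0, 2, n)            # 0 = female, 1 = male
weight = rng.normal(75, 12, n)
# Synthetic ground-truth relationship plus noise (illustrative only).
height = 150 + 10 * sex + 0.2 * weight + rng.normal(0, 5, n)

X = np.column_stack([age, sex, weight])
X_train, X_test, y_train, y_test = train_test_split(X, height, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)            # "training" the ML algorithm
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```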
Unsupervised learning
In unsupervised learning there is no dependent variable (or label) in the dataset. Unsupervised ML algorithms search for patterns within a dataset. The algorithm links certain observations to others by looking at similar features. This makes an unsupervised learning algorithm suitable for, among other tasks, clustering (i.e. the task of dividing a dataset into subsets). This is done in such a manner that an observation within a group is more like other observations within the subset than an observation that is not in the same group. A disadvantage of unsupervised learning is that the model is (often) a black box.
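A minimal clustering sketch, assuming scikit-learn: KMeans groups synthetic observations purely by feature similarity, without any labels.

```python
# A minimal unsupervised-learning sketch: KMeans clusters observations
# using only their features. The data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic groups of observations with different feature profiles.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(100, 2))
X = np.vstack([group_a, group_b])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```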
Semi-supervised learning
Semi-supervised learning uses a combination of labeled and unlabeled data. It is common that the dataset used for semi-supervised learning consists of mostly unlabeled data. Manually labeling all the data within a dataset can be very time consuming, and semi-supervised learning offers a solution for this problem. With semi-supervised learning, a small labeled subset is used to make a better prediction for the complete dataset.
The training of a semi-supervised learning algorithm consists of two steps. To label the unlabeled observations from the original dataset, the complete set is first clustered using unsupervised learning. The clusters that are formed are then labeled by the algorithm, based on their originally labeled parts. The resulting fully labeled data set is used to train a supervised ML algorithm. The downside of semi-supervised learning is that it is not certain the labels are 100% correct.
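The two-step procedure can be sketched as follows, under the same scikit-learn assumption: cluster the complete dataset, label each cluster from its few labeled members, then train a supervised classifier on the resulting fully labeled set.

```python
# A sketch of the two-step semi-supervised approach described above.
# Data and labels are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
labels = np.full(len(X), -1)            # -1 marks unlabeled observations
labels[:5], labels[200:205] = 0, 1      # only ten observations are labeled

# Step 1: cluster the complete (mostly unlabeled) dataset.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: label each cluster by majority vote of its labeled members...
full_labels = np.empty(len(X), dtype=int)
for c in range(2):
    known = labels[(clusters == c) & (labels != -1)]
    full_labels[clusters == c] = np.bincount(known).argmax()

# ...and train a supervised model on the now fully labeled dataset.
clf = LogisticRegression().fit(X, full_labels)
print("Training accuracy:", clf.score(X, full_labels))
```

As the text notes, the propagated labels are not guaranteed to be correct; the majority vote above silently trusts the clustering.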
Setting up the model
In most ML implementations, the data gathering, integration and pre-processing usually takes more time than the actual training of the algorithm. It is an iterative process of training a model, evaluating the results, modifying hyperparameters and repeating, rather than just a single process of data preparation and training. After the training is performed and the hyperparameters have been calibrated, the ML model is ready to make predictions.
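This train-evaluate-modify cycle can be automated; the sketch below uses scikit-learn's GridSearchCV to try a small hyperparameter grid and keep the best-performing setting.

```python
# A minimal sketch of the iterative calibration loop: try hyperparameter
# settings, evaluate each via cross-validation, and keep the best model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [3, None]},
    cv=5,                  # five-fold cross-validation per setting
)
grid.fit(X, y)
print("Best hyperparameters:", grid.best_params_)
```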
Machine learning in financial risk management
ML can add value to financial risk management applications, but the type of model should suit the problem and the available data. For some applications, like challenger models, it is not required to completely explain the model you are using. This makes, for example, an unsupervised black box model suitable as a challenger model. In other cases, explainability of model results is a critical condition while choosing an ML model. Here, it might not be suitable to use a black box model.
In the next section we present some examples where ML models can be of added value in financial risk management.
Data quality analysis
All modeling challenges start with data. In line with the ‘garbage in, garbage out’ maxim, if the quality of a dataset is insufficient then an ML model will also not perform well. It is quite common that during the development of an ML model, a lot of time is spent on improving the data quality. As ML algorithms learn directly from the data, the performance of the resulting model will increase if the data quality increases. ML can be used to improve data quality before this data is used for modeling. For example, the data quality can be improved by removing/replacing outliers and replacing missing values with likely alternatives.
An example of insufficient data quality is the presence of large or numerous outliers. An outlier is an observation that significantly deviates from the other observations in the data, which might indicate it is incorrect. Outlier detection can easily be performed by a data scientist for univariate outliers, but multivariate outliers are a lot harder to identify. When outliers have been detected, or if there are missing values in a dataset, it might be useful to substitute some of these outliers or impute the missing values. Popular imputation methods are the mean, median or most-frequent methods. Another option is to look for more suitable values, and ML techniques can help to improve the data quality here.
Multiple ML models can be combined to improve data quality. First, an ML model can be used to detect outliers, then another model can be used to impute missing data or substitute outliers by a more likely value. The outlier detection can either be done using clustering algorithms or by specialized outlier detection techniques.
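A sketch of this combination, assuming scikit-learn: an IsolationForest flags multivariate outliers, the flagged rows are removed, and a KNN imputer replaces the remaining missing values with likely alternatives.

```python
# A sketch combining two ML steps for data quality: multivariate outlier
# detection followed by missing-value imputation. Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (300, 3))
X[10] = [25.0, -30.0, 40.0]            # an obvious multivariate outlier
X[20, 1] = np.nan                      # a missing value

# Step 1: flag multivariate outliers (fit on rows without missing values).
complete = ~np.isnan(X).any(axis=1)
iso = IsolationForest(contamination=0.01, random_state=0).fit(X[complete])
mask = np.ones(len(X), dtype=bool)
mask[complete] = iso.predict(X[complete]) == 1   # -1 marks outliers
X_filtered = X[mask]                             # drop flagged rows

# Step 2: impute remaining missing values from the nearest neighbours.
X_clean = KNNImputer(n_neighbors=5).fit_transform(X_filtered)
print(f"Dropped {len(X) - len(X_filtered)} outlier row(s); NaNs imputed.")
```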
Loan approval
A bank’s core business is lending money to consumers and companies. The biggest risk for a bank is the credit risk that a borrower will not be able to fully repay the borrowed amount. Adequate loan approval can minimize this credit risk. To determine whether a bank should provide a loan, it is important to estimate the probability of default for that new loan application.
Established banks already have an extensive record of loans and defaults at their disposal. Together with contract details, this can form a valuable basis for an ML-based loan approval model. Here, the contract characteristics are the features, and the label is the variable indicating if the consumer/company defaulted or not. The features could be extended with other sources of information regarding the borrower.
Supervised learning algorithms can be used to classify the application of the potential borrower as either approved or rejected, based on their probability of a future default on the loan. One of the suitable ML model types would be classification algorithms, which split the dataset into either the ‘default’ or ‘non-default’ category, based on their features.
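A minimal sketch of such a loan approval model, with synthetic data: a classifier estimates the probability of default from contract features, and an illustrative 5% cutoff turns that probability into an approve/reject decision.

```python
# A sketch of ML-based loan approval: the features are contract
# characteristics, the label is the historical default flag, and the
# 5% PD cutoff is an illustrative policy choice, not a prescription.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 2_000
loan_to_income = rng.uniform(0.5, 10, n)
tenor_years = rng.integers(1, 30, n)
# Synthetic default flag: higher leverage means higher default probability.
default = (rng.random(n) < 0.02 * loan_to_income).astype(int)

X = np.column_stack([loan_to_income, tenor_years])
model = GradientBoostingClassifier(random_state=0).fit(X, default)

applicant = np.array([[4.0, 10]])      # hypothetical new application
pd_estimate = model.predict_proba(applicant)[0, 1]
decision = "Approve" if pd_estimate < 0.05 else "Reject"
print(f"{decision} (estimated PD = {pd_estimate:.1%})")
```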
Challenger models
When there is already a model in place, it can be helpful to challenge this model. The model in use can be compared to a challenger model to evaluate differences in performance. Furthermore, the challenger model can identify possible effects in the data that are not captured yet in the model in use. Such analysis can be performed as a review of the model in use or before taking the model into production as a part of a model validation.
The aim of a challenger model is to challenge the model in use. As it is usually not feasible to design another sophisticated model, simpler models are mostly selected as challengers. ML models can be useful to create more advanced challenger models within a relatively limited amount of time.
Challenger models do not necessarily have to be explainable, as they will not be used in practice, but only as a comparison for the model in use. This makes all ML models suitable as challenger models, even black box models such as neural networks.
Segmentation
Segmentation concerns dividing a full data set into subsets based on certain characteristics. These subsets are also referred to as segments. Often segmentation is performed to create a model per segment to better capture the segment’s specific behavior. Creating a model per segment can lower the error of the estimations and increase the overall model accuracy, compared to a single model for all segments combined.
Segmentation can, among other uses, be applied in credit rating models, prepayment models and marketing. For these purposes, segmentation is sometimes based on expert judgement and not on a data-driven model. ML models could help to change this and provide quantitative evidence for a segmentation.
There are two approaches in which ML models can be used to create a data-driven segmentation. One approach is that observations can be placed into a certain segment with similar observations based on their features, for example by applying a clustering or classification algorithm. Another approach to segment observations is to evaluate the output of a target variable or label. This approach assumes that observations in the same segment have the same kind of behavior regarding this target variable or label.
In the latter approach, creating a segment itself is not the goal, but optimizing the estimation of the target variable or classifying the right label is. For example, all clients in a segment ‘A’ could be modeled by function ‘a’, where clients in segment ‘B’ would be modeled by function ‘b’. Functions ‘a’ and ‘b’ could be regression models based on the features of the individual clients and/or macro variables that give a prediction for the actual target variable.
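The second approach can be sketched as follows, with synthetic data: observations are clustered into segments, a separate regression (function ‘a’, function ‘b’) is fitted per segment, and new observations are routed to their segment's model.

```python
# A sketch of data-driven segmentation with per-segment models:
# cluster first, then fit one regression per segment. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(6, 1, (150, 2))])
# Each segment follows its own (unknown) relationship with the target.
y = np.where(X[:, 0] < 3, 2 * X[:, 1] + 1, -0.5 * X[:, 1] + 10)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
segments = km.labels_
models = {s: LinearRegression().fit(X[segments == s], y[segments == s])
          for s in np.unique(segments)}

# Routing a new observation: first assign its segment, then use that model.
new_x = np.array([[0.0, 1.0]])
seg = int(km.predict(new_x)[0])
print("Segment:", seg, "prediction:", models[seg].predict(new_x)[0])
```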
Credit scoring
Companies and/or debt instruments can receive a credit rating from a credit rating agency. There are a few well-known rating agencies providing these credit ratings, which reflect their assessment of the probability of default of the company or debt instrument. Besides these rating agencies, financial institutions also use internal credit scoring models to determine a credit score. Credit scores also provide an expectation on the creditworthiness of a company, debt instrument or individual.
Supervised ML models are suitable for credit scoring, as the training of the ML model can be done on historical data. For historical data, the label (‘defaulted’ or ‘not defaulted’) can be observed and extensive financial data (the features) is mostly available. Supervised ML models can be used to determine reliable credit scores in a transparent way as an alternative to traditional credit scoring models. Alternatively, credit scoring models based on ML can also act as challenger models for traditional credit scoring models. In this case, explainability is not a key requirement for the selected ML model.
Conclusion
ML can add value to, or replace, models applied in financial risk management. It can be used in many different model types and in many different manners. A few examples have been provided in this article, but there are many more.
ML models learn directly from the data, but there are still some choices to be made by the model user. The user can select the model type and must determine how to calibrate the hyperparameters. There is no ‘one size fits all’ solution to calibrate an ML model. Therefore, ML is sometimes referred to as an art, rather than a science.
When applying ML models, one should always be careful and understand what is happening ‘under the hood’. As with all modeling activities, every method has its pitfalls. Most ML models will come up with a solution, even if it is suboptimal. Common sense is always required when modeling. In the right hands though, ML can be a powerful tool to improve modeling in financial risk management.
Working with ML models has given us valuable insights (see the box below). Every application of ML led to valuable lessons on what to expect from ML models, when to use them and what the pitfalls are.
Machine learning and Zanders
Zanders has already encountered several projects and research questions where ML could be applied. In some cases, the use of ML was indeed beneficial; in other cases, traditional models turned out to be the better solution.
During these projects, most time was spent on data collection and data pre-processing. Based on these experiences, an ML based dataset validation tool was developed. In another case, a model was adapted to handle missing data by using an alternative available feature of the observation.
ML was also used to challenge a Zanders internal credit rating model. This resulted in useful insights on potential model improvements. For example, the ML model provided more insight in variable importance and segmentation. These insights are useful for the further development of Zanders’ credit rating models. Besides the insights what could be done better, the ML model also emphasized the advantages of classical models over the ML-based versions. The ML model was not able to provide more sensible ratings than the traditional credit rating model.
In another case, we investigated whether it would be sensible and feasible to use ML for transaction screening and anomaly detection. The outcome of this project once more highlighted that data is key for ML models. The available data was plentiful, but of low quality. Therefore, the ML models used were not able to provide helpful insight into the payments, or to consistently detect divergent payment behavior on a large scale.
Besides the projects where ML was used to deliver a solution, we investigated the explainability of several ML models. During this process we gained knowledge on techniques to provide more insight into otherwise hard-to-interpret (black box) models.
Corrections and reversals in SAP Treasury
As part of an SAP Treasury system implementation or enhancement, we review existing business processes, define bottlenecks and issues, and propose (further) enhancements. Once we have applied these enhancements in your SAP system, we create a series of trainings and user manuals which lay out the business process actions needed to correctly use the system.
“It’s only those who do nothing that make no mistakes, I suppose”
Joseph Conrad
This legendary saying of Joseph Conrad is still very valid today, as everyone makes mistakes. Therefore, we help our clients define smooth, seamless and futureproof processes which consider the possibility of mistakes or requirements for correction, and include actions to correct them.
Some common reasons why treasury payments require corrections are:
- A cash management transfer between house banks is no longer needed
- Incorrect house/beneficiary bank details were chosen
- Wrong currency / amount / value date / payment details
- Incorrect payment method
One of our practices is to first define a flowchart structure in the form of a decision tree, where each node represents either a treasury process (e.g. bank-to-bank transfer, FX deal, MM deal, securities, etc.), a transaction status in SAP, or an outcome which represents a solution scenario.
We must therefore identify the scope of the manual process, which depends on the complexity of the business case. At each stage of the transaction life cycle, we must identify whether it may be stuck and how it can be rectified or reversed.
Each scenario will bring a different set of t-codes to be used in SAP, and a different number of objects to be touched.
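As a hypothetical illustration of such a decision tree, the sketch below represents each node as either a question about the transaction status in SAP or a leaf holding the solution scenario (the sequence of t-codes to execute). The node names and branches are illustrative only; a real tree would cover every process and status combination.

```python
# A minimal sketch of a correction decision tree. Questions, answers and
# scenarios are hypothetical; the "no" branch mirrors Scenario 2 below.
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str | None = None          # None for leaf nodes
    answers: dict = field(default_factory=dict)
    scenario: list[str] | None = None    # t-code sequence for a leaf

tree = Node(
    question="Payment batching (FBPM1) already executed?",
    answers={
        "no": Node(scenario=["F111", "FB08", "F8BW", "F8BV"]),
        "yes": Node(scenario=["<extended reversal scenario>"]),
    },
)

def resolve(node: Node, facts: dict) -> list[str]:
    """Walk the tree until a leaf scenario is reached."""
    while node.scenario is None:
        node = node.answers[facts[node.question]]
    return node.scenario

print(resolve(tree, {"Payment batching (FBPM1) already executed?": "no"}))
```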
Below is an example of a bank-to-bank cash management transfer which is to be cancelled in SAP.
Figure 1: Bank-to-bank payment reversal
Scenario 2: A single payment request is created via t-code FRFT_B and an automatic payment run is executed (F111); BCM is used, but the payment batching (FBPM1) is not yet executed.
Step 1: Define the accounting document to be reversed
T-code F111, choose the payment run created (one of the options) -> go to Menu -> Edit -> Payments -> Display log (display list) -> note the document number posted in the payment run.
Step 2: Reverse the payment document
T-code FB08: Enter the document number defined in step 1, choose company code, fiscal year and reversal reason, and click POST/SAVE.
SAP creates the corresponding offsetting accounting document.
Step 3: Reverse clearing of the payment request
T-code F8BW: Enter the document number defined in step 1, choose company code and fiscal year, and click EXECUTE.
As a result, the payment request is uncleared.
Step 4: Reverse the payment request
T-code F8BV: Enter the payment request (taken from FRFT_B, F111 or F8BT) and press REVERSE.
This step reverses the payment request itself. You may skip this step if you ticked “Mark for cancellation” in step 3.
Step 5: Optional step, depending on the client setup of OBPM4 (selection variants)
Delete entries in tables: REGUVM and REGUHM. This is required to disable FBPM1 payment batching in SAP BCM for the payment run which is cancelled. The execution of this step depends on the client setup.
Call function module (SE37): FIBL_PAYMENT_RUN_MERGE_DELETE with:
- I_LAUFD : Date of the payment run as in F111
- I_LAUFI : Identification of the payment run as in F111
- I_XVORL : empty/blank
The number of nodes and branches comprising the decision tree may vary based on the business case of a client. Multiple correctional actions may also be possible, meaning there is no unique set of the correctional steps applicable for all the corporates.
If you are interested in a review of your SAP Treasury processes, their possible enhancements and the corresponding business user manuals, please feel free to reach out to us. We are here to support you!
Average Rate FX Forwards and their processing in SAP
An average rate FX forward (ARF) is an FX forward whose settlement is based not on a single spot fixing, but on the average of the spot rate fixings observed over an agreed observation period.
The observation period for the average rate calculation is usually long and can be defined flexibly with daily, weekly or monthly periodicity. Though this type of contract is always settled as a non-deliverable forward in cash, it is a suitable hedging instrument in certain business scenarios, especially when the underlying FX exposure amount cannot be attributed to a single agreed payment date. For currencies and periods with high volatility, an ARF reduces the risk of hitting an extreme reading of the spot rate.
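To make the mechanics concrete, the sketch below computes the cash settlement of a bought ARF from the average of its fixings, together with the locked-in result for the portion of the observation period that has already been fixed. All names and numbers are illustrative.

```python
# A hedged sketch of ARF cash settlement and partial fixing.
# Contract terms and fixings below are illustrative only.

def arf_settlement(notional: float, contract_rate: float, fixings: list[float]) -> float:
    """Cash-settled payoff of a bought ARF: notional * (average fixing - contract rate)."""
    average_rate = sum(fixings) / len(fixings)
    return notional * (average_rate - contract_rate)

def fixed_portion(fixings_done: int, fixings_total: int) -> float:
    """Share of the observation period already fixed."""
    return fixings_done / fixings_total

fixings = [1.082, 1.085, 1.079, 1.091]   # daily EUR/USD fixings so far
portion = fixed_portion(len(fixings), 30)
locked_in = arf_settlement(1_000_000, 1.080, fixings) * portion
print(f"Fixed portion: {portion:.0%}")
print(f"Locked-in result so far: {locked_in:,.2f}")
```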
Business margin protection
ARF can be a very efficient hedging instrument when the business margin needs to be protected, namely in the following business scenarios:
- Budgeted sales revenue or budgeted costs of goods sold are incurred with reliable regularity and spread evenly in time. This exposure needs to be hedged against the functional currency.
- The business is run in separate books with different functional currencies, and the FX exposure is determined and hedged against the respective functional currency of these books. The resulting margin can be budgeted with a high degree of reliability and stability, is relatively small, and needs to be hedged from the currency of the respective business book to the functional currency of the reporting entity.
Increased complexity
Hedging such FX exposure with conventional FX forwards would lead to a very high number of transactions, as well as a large volume of data on the side of underlying FX exposure determination, resulting in a data flood and high administrative effort. Hedge accounting according to the IFRS 9 rules is almost impossible due to the high number of hedge relationships to manage. The complexity increases even more if treasury operations are centralized and the FX exposure has to be concentrated via intercompany FX transactions in the group treasury first.
If the ARF instruments are not directly supported by the treasury management system (TMS) in use, users have to resort to replicating the single external ARF deal with a series of conventional FX forwards, creating an individual FX forward for each fixation date of the observation period. As the observation periods are usually long (at least 30 days) and the rate fixation periodicity is usually daily, this workaround leads to a high count of fictitious deals with relatively small nominals, causing the administrative burden described above. Moreover, this workaround prevents automated creation of deals via an interface from a trading platform and automated correspondence exchange based on SWIFT MT3xx messages, resulting in a low automation level of treasury operations.
Add-on for SAP TRM
Currently, the ARF instruments are not supported in the SAP Treasury and Risk Management system (SAP TRM). In order to bridge the gap and to help centralized treasury organizations further streamline their operations, Zanders has developed an add-on for SAP TRM to manage the fixing of the average rate over the observation period, as well as to correctly calculate the fair value of deals with a partially fixed average rate.
The solution consists of a dedicated average rate FX forward collective processing report, covering:
- Particular information related to ARF deals, including start and end of the fixation period, currently fixed average rate, fixed portion (percentage), locked-in result for the fixed portion of the deal in the settlement currency.
- Specific functions needed to manage this type of deal: creation, change, display of the rate fixation schedule, as well as creating the final fixation of the FX deal once the average rate is fully calculated through the observation period.
Figure 1: Zanders FX Average Rate Forwards Cockpit and the ARF specific key figures
The solution builds on the standard SAP functionality available for FX deal management, meaning all other proven functionalities are available, such as payments, posting via treasury accounting subledger, correspondence, EMIR reporting, calculation of fair value for month-end evaluation and reporting. Through an enhancement, the solution is fully integrated into market risk, credit risk and, if needed, portfolio analyser too. Therefore, correct mark-to-market is always calculated for both the fixed and unfixed portion of the deal.
Figure 2: Integration of Zanders ARF solution into SAP Treasury Transaction manager process flow
Zanders can support you with the integration of ARF forwards into your FX exposure management process. For more information do not hesitate to contact Michal Šárnik.
Managing Virtual Accounts using SAP In-House Cash
How to setup virtual accounts in SAP, part III. In the previous part of this series on ‘How to setup virtual accounts in SAP’, we delved into the details of a scenario where virtual accounts are managed on GL account level using the SAP FI module only. This article investigates how the SAP In-House Cash (SAP IHC) module can be used to manage virtual accounts in your ERP.
SAP IHC is a module that facilitates a full suite of payment factory processes. It can be seen as an intercompany position subledger with a set of advanced features such as POBO payment routing, bank statement allocation, arm’s-length intercompany interest calculations, and out-of-the-box payment and bank statement interfaces with participants (OpCo’s).
The process where virtual accounts are managed in IHC is depicted below:
In this process, we rely on a simple set of building blocks:
- In-house cash accounts to manage intercompany positions between Treasury and the OpCo’s.
- GL accounts to represent external cash and the IC positions.
- Processing of external bank statements.
- Distribution of internal bank statements from IHC towards the OpCo’s ERP systems.
- An identifier on the external bank statement for the Master Account that conveys to which virtual account the actual collection was originally credited. This identifier ultimately tells us which OpCo these funds originally belong to and which IHC account to credit.
The idea here is that Treasury will receive the external bank statement and automatically post the receipts into the correct IHC account using the identifier. By posting items on the IHC account, the intercompany positions are updated. Then, at the end of the day, a set of internal bank statements is generated in IHC and sent through an interface to the OpCo’s ERP. The OpCo’s ERP processes these statements, clears the customer invoices and updates the IC position with Treasury.
The two major benefits of using IHC over the solution as described in the previous articles of this series are:
- The OpCo’s do not require any direct integration with the bank and can rely on internal interfacing with Treasury. Especially in companies with a fragmented ERP landscape this can become a valuable proposition.
- IHC can very aptly integrate virtual account management processes with internal netting payments, payments on behalf of (POBO) and payment in name of processes.
Implementing virtual accounts in SAP
In the explanation below, we assume that the basic FI-CO settings for the company code, among others, are already in place. This is by no means a complete inventory of all the settings required to get IHC up and running; it focuses on the configuration parts that specifically cater for the virtual account requirements.
Master data – general ledger accounts
Three sets of GL accounts need to be created: balance sheet accounts for the representation of the intercompany positions, one set for virtual account clearing purposes between the EBS and the IHC accounting process, and the GL account to represent the cash position with the external bank. These GL accounts need to be assigned to the appropriate company codes and can then be used in the bank statement import process and the IHC accounting process.
In the Treasury entity, we should create a single GL account (per position currency) representing the IC position with all its OpCo’s, because the granularity of the IC position per OpCo is managed in the IHC subledger. This approach limits the growth of the chart of accounts.
Transaction code FS00
House bank and bank account maintenance
In order to be able to process bank statements and generate GL postings in your SAP system, we need to maintain the house bank data first. A house bank entry comprises the following information, which needs to be maintained carefully:
- The house bank identifier: a 5-digit label that clearly identifies the bank branch.
- Bank country: The ISO country code where the bank branch is located.
- Bank key: The bank key is a separate bank identifier that contains information like SWIFT BIC, local routing code and address related data of your house bank.
Transaction code FI12
Secondly, under the house bank entry, the bank accounts can be created, including:
- The account identifier: a 5-digit label that clearly identifies the bank account.
- Bank account number and IBAN: This represents the bank account number as assigned to you by the bank.
- Currency: the currency of the bank account.
- G/L Account: the general ledger account that is going to be used to represent the balance sheet position on this bank account (or the IC position with Treasury).
Transaction code FI12 in SAP ECC or NWBC in S/4 HANA
The idea here is that we maintain one house bank and bank account in the treasury company code that represents the Master account as held with your house bank. This house bank will have the G/L account assigned to it that represents the house banks external cash position.
In each of the OpCo’s company codes, we maintain one house bank and bank account that represents each of the IHC bank accounts as held with the treasury center. This house bank will have the G/L account assigned to it that represents the intercompany position with the Treasury entity.
Electronic bank statement settings
The electronic bank statement (EBS) settings will ensure that, based on the information present on the bank statement, SAP is capable of posting the items into the general or sub ledgers according to the requirements. There are a few steps in the configuration process that are important for this to work:
1) Posting rule construction
Posting rule construction starts with setting up account symbols and assigning GL accounts to them. The idea here is to define two account symbols: the first to represent the external cash position (BANK), and the second for the virtual account clearing between IHC and EBS (VACLR).
A separate account symbol for customers is not required in SAP.
For the account symbol BANK we do not assign a GL account number directly in the settings; instead, we assign a so-called mask by entering the value “+++++++++”. As a result, every time a posting rule attempts to post to “BANK”, SAP uses the GL account assigned in the house bank account settings (the FI12 or NWBC setting above).
For the account symbol VACLR we can assign a dedicated O/I clearing GL that is used to clear out the EBS posting against the IHC posting (more on that later). These GL accounts should have already been created in the first step (FS00).
Now that we have the account symbols prepared, we can start tying these symbols together into posting rules. We need to create two posting rules.
Posting rule 1 is going to debit the BANK symbol and credit the VACLR symbol.
Posting rule 2 is going to debit the BANK symbol and credit a BLANK symbol. The posting type, however, is going to be set to value 8, “Clear Credit Subledger Account”. This setting will attempt to clear out any open item sitting in the customer sub-ledger using algorithms. We will explain more on these algorithms below.
As you can imagine, posting rule 1 is applicable for the Treasury entity. Posting rule 2 is going to be used in the OpCo’s EBS process.
Transaction code OT83
2) Posting rule assignment
In the next step we can assign the posting rules to the so-called “Bank Transaction Codes” (BTCs, like NTRF) that are typically observed in the body of the bank statements to identify the nature of the transactions.
To understand under which Bank Transaction Code these collections are reported on the statement, you typically need to carefully analyze some sample statement output or check with your bank’s implementation team for feedback.
It is important here to assign an algorithm to posting rule 2. This algorithm will attempt to search the payment notes of the bank statement for “reference numbers” which it can use to trace back the original customer invoice open item. Once SAP has identified the correct outstanding invoice, it can clear it and mark it as paid.
If SAP cannot automatically identify the open item, it can be post-processed manually in FEBAN or FEB_BSPROC.
Transaction code OT83
3) Bank account assignment
In the last part, we can assign the posting rules assignments to the bank accounts. This way we can differentiate different rule assignments for different accounts if that is needed.
Transaction code OT83
4) Search strings
If the posting rule assignment needs more granularity than the level provided in step 2 above (on BTC level), we can set up search strings. Search strings can be configured to look at the payment notes section of the bank statement and find certain fixed text or patterns of text. Based on such search strings, we can then modify the posting behavior by, for instance, overruling the posting rule assignment as defined in step 2.
Whether this is required depends on the level of information that is provided by the bank in its bank statements.
Transaction code OTPM
Prepare IHC to parallel post certain bank statement items into IHC accounts
In IHC there are two ways to parallel post bank statement items into IHC accounts; as payment items or as payment orders.
This can be controlled by setting a specific function module on BTE2810. If we set function module “BKK_IHB_BASTA_IN_POST”, SAP will post an IHC payment item. If we assign “IHC_APPL_XBS_POST”, SAP will post an IHC payment order.
Additional information can be found in note 2370212.
In the subsequent part of the article we assume that we use the payment item logic.
Transaction BF42
IHC account determination from payment notes
In this section of the configuration we can determine which IHC account the bank statement items should be posted to, using payment-note search strings.
For example, if the master account bank statement payment notes for collections on a particular VA contain the string “From VA 54353”, and we know this belongs to IHC account “F4000EUR01”, we can set up a rule for that in this part of the configuration. This will ensure that all items on a bank statement containing this text string are posted into IHC account F4000EUR01.
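The logic can be sketched as follows; the mapping table and the payment-note format are hypothetical examples in the spirit of the configuration described above.

```python
# A minimal sketch of the search-string logic: scan the payment notes of a
# master-account statement line for a virtual account identifier and map
# it to the IHC account to be posted. Mapping and note format are hypothetical.
import re

VA_TO_IHC = {"54353": "F4000EUR01", "54354": "F4100EUR01"}

def ihc_account_for(payment_notes: str) -> str | None:
    match = re.search(r"From VA (\d+)", payment_notes)
    return VA_TO_IHC.get(match.group(1)) if match else None

print(ihc_account_for("COLLECTION From VA 54353 INV 4711"))  # F4000EUR01
```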
Maintenance view TBKKIHB1
Assign external BTC to posting category
Here we can identify the external bank’s BTC codes (NTRF, NCMZ, among others) which are applicable for the VA movements to post into IHC. Secondly, we can specify the posting category with which to post them into the IHC accounts.
Once we identified the BTC code related to our VA collections (e.g. NCMZ), we can link them to the correct posting categories here. You could use standard categories 90 (Balancing Ext. Acct (D)) for debits and 91 (Balancing Ext. Acct (C)) for credits.
Alternatively, you can set up and link your own custom posting categories here to more precisely control how the VA collections are posted into IHC. This is out of scope for this article though.
Importing and processing bank statements
We should now be in good shape to import our first statements. We could download them from our electronic banking platform. We could also be in a situation where we already receive them through some automated H2H interface or even through SWIFT. In any case, the statements need to be imported in SAP. This can be achieved through transaction code FF.5. The most important parameters to understand here are the following:
- File parameters: Here we define the filename and storage path where our statement is saved. We also need to define the format of this file, i.e. MT940, CAMT.053 or one of the many other supported formats.
- Posting parameters: Here we can define whether the line items on the bank statements are going to be posted to the general or the sub-ledger.
- Algorithms: Here we need to set the range of customer invoice reference numbers (XBLNR) for the EBS algorithm to search the payment notes in a focused manner. If we were to leave these fields empty, the algorithm would not work properly and would not find any open invoice for automatic clearing.
Once these parameters are maintained in the import variant, the system will start to load the statements and generate the required postings.
Transaction code FF.5 / FEBP
Display IHC account statement
Now that we have successfully loaded an external bank statement, we can check whether the items are posted into the IHC account. This can be done via transaction code F9K3. For each IHC account we can now look at the “Account Turnover” and observe all the VA collections that are posted on the account.
Transaction code F9K3
Prepare the IHC account for FINSTA statement distribution
We need to enable the distribution of internal IHC statements to the OpCo’s ERP on the IHC account master record. This can be achieved via F9K2. On the “Account Statement” tab we can adjust the statement format to “FINSTA” and dispatch type to “ALE” to ensure we are going to send FINSTA statements over an ALE connection. This would be the most common combination; other combinations can be configured and selected here as well.
Transaction code F9K2
Setting up ALE partner profiles
Finally, we can configure the system to determine to which system the FINSTAs need to be sent. This can be done in WE20, partner type GP (business partner).
Here we need to setup the outbound parameters for the FINSTA message type. An appropriate port needs to be selected that represents the ERP of the OpCo.
Transaction code WE20
Trigger the distribution of a FINSTA statement
Now that we have some transactions posted on the IHC account and the FINSTA settings enabled, we can trigger the system to send the FINSTA statements to the receiving ERP system. This can be done in F9N7.
Here we can select the correct IHC account and statement date and run the program to generate the FINSTA statement.
Once the FINSTA is generated and sent to the receiving ERP, it can be processed there via FEBP.
Transaction code F9N7
Closing remarks
This is the third part of a series on how to set up virtual accounts in SAP.
How to set cash pool and in-house bank interest rates
The pricing of intercompany treasury transactions is subject to transfer pricing regulation. In essence, treasury and tax professionals need to ensure that the pricing of these transactions is in line with market conditions, also known as the arm’s length principle, thereby avoiding unwarranted profit shifting.
We have been assisting dozens of multinationals on this topic through our Transfer Pricing Solution (TPS). The TPS enables them to set interest rates on intercompany transactions in a compliant and automated way. Since its go-live, clients have priced over 1,000 intercompany loans with a total notional of over EUR 60 billion using this self-service solution.
Cash Pooling Solution
In February 2020, the OECD published the first-ever international consensus on financial transactions transfer pricing. One of the key topics of the document relates to the determination of internal pooling interest rates. As a reaction, Zanders has launched a co-development initiative with key clients to design a Cash Pooling Solution that determines the arm’s length interest rates for physical cash pools, notional cash pools and in-house banks.
The goal of this new solution is to present treasury and tax professionals with a user-friendly workflow that incorporates all compliance areas as well as treasury insights into the pooling structure. The three main compliance areas for treasury professionals are:
- Ensuring that participants have a financial incentive to participate in the pooling structure. Entities participating in the pool should be ‘better off’ than they would be if they went directly to a third-party bank. In other words, participants’ pooled rates should be more favorable than their stand-alone rates. The OECD sets out a step-by-step approach to improve interest conditions for participating entities and to distribute the synergies towards the participants. First, the total pooling benefit should be calculated. This total pooling benefit is the financial advantage for a group compared to a non-pooled cash management set-up. It can be broken down into a netting benefit and an interest rate benefit. The netting benefit arises from offsetting debit and credit balances. The interest rate benefit arises from more beneficial interest rate conditions on the cash pool or in-house bank position, compared to stand-alone current accounts (see the sketch after this list). Once the total pooling benefit has been calculated, it should be allocated over the leader entity and the participating entities. To do so, a functional analysis of the pooling structure should be made to identify which entities contribute most in terms of their balances, creditworthiness and the administration of the pool. The allocated amount should be priced into the interest rates: a deposit rate will thus receive a pooling premium, while a withdrawal rate will incorporate a pooling discount.
- Ensuring a correct tax treatment of the cash pool transactions. Pooling structures are primarily in place to optimize cash and liquidity management. Therefore, tax authorities will expect to see the balances of cash pool participants fluctuate around zero. Treasury professionals should monitor positions to prevent participants from having a structural balance in the pool. If a balance has a longer-term character, tax authorities can classify such a pooling position as a longer-term intercompany loan. Consequently, monitoring structural balances can lower tax risk significantly.
- Ensuring appropriate documentation is in place each time treasury determines the pooling interest rates. The documentation should include the methodology as well as all specifics of the transfer pricing analysis. Proper documentation will enable the multinational to substantiate the interest rates during tax audits.
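As a minimal numerical sketch of the benefit split described above (all balances and rates are assumed figures for illustration only):

```python
# Hypothetical figures: entity balances in EUR m and assumed annual rates
balances = {"A": 100.0, "B": -60.0, "C": 40.0}
standalone_credit, standalone_debit = 0.010, 0.030  # assumed stand-alone bank rates
pool_credit, pool_debit = 0.015, 0.025              # assumed rates on the pooled position

def interest(balance: float, credit: float, debit: float) -> float:
    return balance * (credit if balance >= 0 else debit)

# Stand-alone result: every entity faces the bank on its own
standalone_result = sum(interest(b, standalone_credit, standalone_debit)
                        for b in balances.values())
net_position = sum(balances.values())

# Netting benefit: offsetting debits and credits, still at stand-alone rates
netting_benefit = interest(net_position, standalone_credit, standalone_debit) - standalone_result
# Interest rate benefit: better rates on the single pooled position
rate_benefit = (interest(net_position, pool_credit, pool_debit)
                - interest(net_position, standalone_credit, standalone_debit))

total_pooling_benefit = netting_benefit + rate_benefit
print(netting_benefit, rate_benefit, total_pooling_benefit)  # approx. 1.2, 0.4, 1.6
```

In this example the netting benefit is 1.2 and the interest rate benefit 0.4, giving a total pooling benefit of 1.6 to be allocated over the leader entity and the participants.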
Multinationals are confronted with a significant compliance burden to comply with these new guidelines. Different hurdles can be identified, ranging from access to the appropriate market data to a considerable and recurring time investment in determining and documenting the internal deposit and withdrawal rates for each pooling structure.
It remains to be seen how auditors treat these new guidelines, but the recent increased focus on transfer pricing seems to indicate that this will be a topic that may need additional attention in the coming years.
Zanders Inside solutions
In order to support treasury and tax professionals in this area, Zanders Inside launched its cloud-based Cash Pooling Solution. This solution focuses on each of the three compliance areas described above. In addition, the solution leverages a high degree of automation to support the entire end-to-end process, offering a cost-effective alternative to the manual process that multinationals go through. Please watch our video showing how the Cash Pooling Solution tackles the challenge of OECD compliance.
Why treasury projects can fail
If you want to go fast, go alone. If you want to go far, go together
Even a small project like a SaaS cash flow forecast system implementation is complex if we take into account that the business, regulatory and technical landscape is constantly evolving and that in most cases business resources cannot fully dedicate their time to projects. In today’s world, project teams also need to respond to change quickly and deliver value as soon as possible, both to the project and to business stakeholders. The challenge is to do that while also staying in control of timelines, budget and scope, and – most importantly – delivering quality and value to treasury and the business.
As described in project management theory, a project is typically constrained by three elements. This is called the triple constraint, iron triangle or project triangle. The three elements are scope, time, and cost. This theory is project management methodology agnostic. It does not matter whether the project is delivered from a waterfall or agile approach; if any of these elements are changed, something else must change.
The theory of the project management triangle is true, but far too simple, and reality differs: working with only these three variables is a limitation when the aim is to deliver a project that also meets the quality criteria defined by the business. In that case, you also need to consider other variables that we will highlight in this article, as these will impact delivered quality, as well as scope, timelines, and cost.
Variables to consider for a successful project
Based on our experience, we know that all these variables will change and need to be adjusted during any project. However, these changes (to scope, timelines or cost) should come from evolving business requirements, a changing landscape and the additional value that they can bring to the business – and not due to a wrong project approach, or variables, processes and structures that were incorrectly or insufficiently defined at the start of the project.
What we see is that most projects are not delivered within the defined scope, time, and cost due to incorrect project planning, lack of the right project structure or skillsets, and an unclear approach – not because the business is evolving.
So, how can we use this to manage projects effectively? We know, after all, that conditions inevitably change.
In most cases, the variables in a treasury project don’t change in the core functional area. Implementation teams (whether strategic, tactical, or operational) usually have a high level of treasury expertise. They speak the same language as the business, which helps to structure the scope, project, and delivery. The right business engagement allows you to keep the effort, timelines, scope and cost under control.
Typically, it is other areas outside of treasury functional delivery that undermine delivery within timelines, scope, and cost. There are several reasons for this, including these key factors:
- Lack of a detailed approach defined at the start of the project
- Lack of time and focus given to these areas throughout the project
- Lack of the right skillset (both external and internal) allocated to the project and its specific areas
Support areas
Which areas undermine delivery within timelines, scope, and cost? We call these ‘support areas’ to the functional treasury delivery team. Precisely because they are considered support areas, there is often a lack of focus or a lack of treasury-skilled people to define the right approach at an early stage.
These support areas are:
- Project management
- Business and change management
- Data management
- Testing
- Infrastructure and integration
The way it should work, theoretically, is that these areas support the core treasury delivery. Apart from making sure that their own deliverables support the overall functional delivery, their tasks should take workload and pressure away from the core team, leaving the business free to focus on priority items and risk areas.
What happens in practice is that, due to the reasons mentioned above (i.e. not using the right skillsets and not defining a correct, detailed approach for each of these areas), instead of taking pressure or workload off the core team, these areas require more time and put more pressure on the core functional team, forcing key people to split their time and focus across high- and low-priority and risk areas.
Questions you need to ask
Are all the deliverables critical to a successful project? Project empire-building happens in some projects, so review and prioritize deliverables. Non-critical deliverables will impact cost, timelines, and quality, shifting project focus and resources.
Therefore, at the start of a project, we always need to ask ourselves:
- Do we have the right approach for each project area (core functional, project management, etc.)?
- Is the approach based on similar projects, with a similar landscape, scope and complexity?
- Is the approach defined by someone with treasury skills and experience?
An ERP implementation is not the same as a treasury system implementation, since the data required, the processes and the risk areas differ. Integration-wise, the counterparties involved in treasury system projects differ in terms of file formatting and content, and security and encryption are key. From a testing perspective: how can you prioritize test activities if you don’t know the difference between trade settlement and trade capture?
The same applies to non-system-related projects. A treasury transformation project does not require data to be loaded into a new treasury management system; however, the required reporting (project management dashboards) and the stakeholder mapping and management still differ from those of a non-treasury, non-system-related project.
An agnostic approach and agnostic skillsets do not really work. The triangle comes into play when something affects one of its variables. For example, if we need to produce more reports for project management, or have more non-critical deliverables while the time frame is too short to deliver them all, the project will likely need more resources, or perhaps a scope reduction. Either way, it will certainly impact quality.
Why do projects succeed?
All successful projects have things in common, and these are incorporated in our implementation framework, covering areas like data, testing, project management and business change.
This article is the first of a series on implementation projects. In the next parts, we will delve into each of the areas mentioned, sharing our insights and explaining the right approach, so that changes to cost, timelines and scope come from evolving business requirements with a clear ROI and increase the quality and value delivered to the business.
Structural Foreign Exchange Risk in practice
Since the introduction of the Pillar 1 capital charge for market risk, banks must hold capital for Foreign Exchange (FX) risk, irrespective of whether the open FX position was held on the trading or the banking book. An exception was made for Structural Foreign Exchange Positions, where supervisory authorities were free to allow banks to maintain an open FX position to protect their capital adequacy ratio in this way.
This exemption has been applied in a diverse way by supervisors and therefore, the treatment of Structural FX risk has been updated in recent regulatory publications. In this article we discuss these publications and market practice around Structural FX risk based on an analysis of the policies applied by the top 25 banks in Europe.
Based on the 1996 amendment to the Capital Accord, banks that apply for the exemption of Structural FX positions can exclude these positions from the Pillar 1 capital requirement for market risk. This exemption was introduced to allow international banks with subsidiaries in currencies different from the reporting currency to employ a strategy to hedge the capital ratio from adverse movements in the FX rate. In principle a bank can apply one of two strategies in managing its FX risk.
- In the first strategy, the bank aims to stabilize the value of its equity against movements in the FX rate. This strategy requires the bank to maintain a matched currency position, which effectively protects it from losses related to FX rate changes. Changes in the FX rate will not impact the equity of a bank with, for example, a consolidated balance sheet in euro and a matched USD position. The value of the Risk-Weighted Assets (RWAs), however, is impacted. As a result, although the overall balance sheet of the bank is protected from FX losses, changes in the EUR/USD exchange rate can have an adverse impact on the capital ratio.
- In the alternative strategy, the objective of the bank is to protect the capital adequacy ratio from changes in the FX rate. To do so, the bank deliberately maintains a long, open currency position, sized such that the open position relative to the foreign-currency RWAs matches the capital ratio. In this way, both the equity and the RWAs of the bank are impacted in a similar way by changes in the EUR/USD rate, thereby mitigating the impact on the capital ratio. Because an open position is maintained, FX rate changes can result in losses for the bank. Without the exemption of Structural FX positions, the bank would be required to hold a significant amount of capital against these potential losses, effectively making this strategy unviable.
The FX scenario that has an adverse impact on the bank differs between the two strategies. In strategy 1, an appreciation of the foreign currency will result in a decrease of the capital ratio, while in strategy 2 the value of the equity will increase if the currency appreciates; the adverse scenario in strategy 2 is a depreciation of the foreign currency.
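The stylized Python sketch below illustrates the two strategies with assumed figures (a EUR-reporting bank with USD-denominated RWAs); the position size in strategy 2 follows from requiring the capital ratio to be insensitive to the FX rate, i.e. an open position equal to the capital ratio times the foreign-currency RWAs.

```python
# Assumed starting point: EUR-reporting bank with USD-denominated RWAs
equity0 = 10.0   # equity in EUR at the initial FX rate
rwa_eur = 60.0   # RWAs denominated in EUR
rwa_usd = 40.0   # RWAs denominated in USD
fx0 = 1.00       # initial EUR value of 1 USD

ratio0 = equity0 / (rwa_eur + fx0 * rwa_usd)  # initial capital ratio: 10%

def capital_ratio(fx: float, open_position_usd: float) -> float:
    equity = equity0 + (fx - fx0) * open_position_usd  # FX result on the open position
    rwa = rwa_eur + fx * rwa_usd                       # USD RWAs translate at the new rate
    return equity / rwa

# Strategy 1: matched position (open position = 0) -> equity stable, ratio moves
# Strategy 2: open position = ratio0 * rwa_usd      -> ratio stable, equity moves
for fx in (0.80, 1.00, 1.25):
    print(fx, round(capital_ratio(fx, 0.0), 4), round(capital_ratio(fx, ratio0 * rwa_usd), 4))
# 0.80 0.1087 0.1
# 1.00 0.1    0.1
# 1.25 0.0909 0.1
```

With these numbers, the matched position keeps equity at 10 but lets the ratio drift between roughly 10.9% and 9.1%, while the open position of 4 (USD) keeps the ratio at exactly 10% and lets equity absorb the FX result instead.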
Until now, only limited guidance has been available on e.g. the risk management framework, (number of) currencies that can be in scope of the exemption and the maximum open exposure that can be exempted. As a result, the practical implementation of the Structural FX exemption varies significantly across banks. Recent regulatory publications aim to enhance regulatory guidance to ensure a more standardized application of the exemption.
Regulatory Changes
With the publication of the Fundamental Review of the Trading Book (FRTB) in January 2019, the exemption of Structural FX risk was further clarified. The conditions from the 1996 amendment were expanded to a total of seven conditions, relating to the policy framework required for FX risk and to the maximum size and type of exposure that can be in scope of the exemption. Within Europe, the exemption is covered in the Capital Requirements Regulation under article 352(2).
To process the changes introduced in the FRTB and to further strengthen the regulatory guidelines on Structural FX, the EBA issued a consultation paper in October 2019. A final version of these guidelines was published in July 2020. The date of application was pushed back one year compared to the consultation paper and is now set for January 2022.
The guidelines introduced by EBA can be split in three main topics:
- Definition of Structural FX. The guidelines provide a definition of positions of a structural nature and of positions that are eligible to be exempted from capital. Positions of a structural nature are investments in a subsidiary with a reporting currency different from that of the parent (also referred to as Type A), or positions that are related to the cross-border nature of the institution and that are stable over time (Type B). A more elaborate justification is required for Type B positions, and the final guidelines include some high-level conditions for this.
- Management of Structural FX. Banks are required to document the appetite, risk management procedures and processes in relation to Structural FX in a policy. Furthermore, the risk appetite should include specific statements on the maximum acceptable loss resulting from the open FX position, on the target sensitivity of the capital ratios and on the management action that will be applied when thresholds are crossed. It is moreover clarified that the exemption can in principle only be applied to the five largest structural currency exposures of the bank.
- Measurement of Structural FX. The guidelines include requirements on the type and the size of the positions that can be in scope of the exemption. This includes specific formulas for calculating the maximum open position that can be in scope of the exemption and the sensitivity of the capital ratio. In addition, banks will need to report the structural open position, the maximum open position and the sensitivity of the capital ratio to the regulator on a quarterly basis.
One of the reasons presented by the EBA to publish these additional guidelines is a growing interest in the application of the Structural FX exemption in the market.
Market Practice
To understand the current policy applied by banks, we reviewed the 2019 annual reports of the top 25 European banks. Our review shows that almost all banks identify Structural FX as part of their risk identification process, and over three quarters of the banks apply a strategy to hedge the CET1 ratio, for which an exemption has been approved by the ECB. While most of the banks apply the exemption for Structural FX, there is a vast difference in the measurement and disclosure practices applied. Only 44% of the banks publish quantitative information on Structural FX risk, ranging from the open currency exposure to 10-day VaR losses, stress losses or allocated Economic Capital.
The guideline that will have a significant impact on Structural FX management within the bigger European banks is the limit of including only the top five open currency positions in the exemption: of the banks that disclose the currencies in scope of the Structural FX position, 60% have more than five and up to 20 currencies in scope. Reducing that to a maximum of five will either increase the capital requirements of those banks significantly or require them to move back to maintaining a matched position for the remaining currencies, which would increase capital ratio volatility.
Conclusion
The EBA guidelines on Structural FX that will go live in January 2022 are expected to have quite an impact on the way banks manage their Structural FX exposures. Although the Structural FX policy is well developed in most banks, the measurement and steering of these positions will require significant updates. The guidelines will also limit the number of currencies that banks can identify as Structural FX positions. This will make it less favourable for international banks to maintain subsidiaries in different currencies, which will increase the cost of capital and/or the capital ratio volatility.
Finally, a topic that is still ambiguous in the guidelines is the treatment of Structural FX in a Pillar 2 or ICAAP context. Currently, 20% of the banks state that they include an internal capital charge for open Structural FX positions, while a similar share state that they do not. Including such a capital charge, however, is not obvious: although an open FX position can produce FX losses for a bank, which would favour an internal capital charge, the appetite related to internal capital and to the sensitivity of the capital ratio can counteract it, resulting in the need for undesirable FX hedges.
The new guidelines therefore present clarifications in many areas but will also require banks to rework a large part of their Structural FX policies in the middle of a (COVID-19) crisis period that already presents many challenges.
How to set up Intraday Bank Statement reporting in SAP
Intraday bank statement (IBS) reporting, a service that your house bank can provide your company, enables your cash manager to understand which debits and credits have cleared on your bank accounts throughout the current day. We explain how to implement it in SAP.
Intraday Bank Statements offer a cash manager additional insight into the estimated closing balances of external bank accounts, and therefore provide the information needed to manage the cash on the company’s bank accounts more tightly.
Compared to intraday bank statement reporting, end-of-day (EOD) bank statement reporting is only available the next calendar day. The information therefore always comes too late to be meaningful for cash management decisions – apart from providing an opening bank balance for the next day.
Business rationale behind IBS reporting
So, why would a Treasury typically start implementing IBS reporting in its cash management processes?
- Cash visibility: In general, IBS reporting provides your cash management function with an additional tool to improve cash visibility. Achieving cash visibility might not be a goal in itself, but with that visibility the cash manager has the information to make economically relevant decisions in certain situations.
- Managing cash: With cash visibility in place, we can manage the cash on our accounts in an intelligent way (see the sketch after this list). If we estimate a positive closing balance, we can decide to invest this surplus in, for example, a money market fund or an overnight deposit to earn some return. In case of an expected deficit, we need to fund the account to avoid an EOD negative position. This can be achieved by transferring funds from another bank account (in the same currency), swapping funds from another bank account (in a different currency), or funding it from, for example, a facility drawdown.
- Reduced risk of delinquency: Having implemented a process that increases control over our bank balances, we have a lower chance of e.g. rejected payments due to insufficient available funds, and therefore less chance of being delinquent on certain payment obligations.
- Reduced requirements on the overdraft facility: By reducing the chance of having insufficient funds on our account, the overdraft facility requirements can also be reduced.
- Timely clearing of open items: IBS can also be used to clear open items throughout the day, as opposed to relying only on clearing from EOD statements. The benefit is that KPIs like days sales outstanding (DSO) improve and that the reconciliation effort is spread more evenly through the day.
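As a simplified sketch of this funding/investing decision logic (the buffer, the instruments mentioned and the function itself are illustrative assumptions):

```python
def eod_action(estimated_closing_balance: float, target_buffer: float = 0.0) -> str:
    # Simplified decision rule applied to the estimated end-of-day balance
    surplus = estimated_closing_balance - target_buffer
    if surplus > 0:
        return f"invest {surplus:.2f}, e.g. in a money market fund or overnight deposit"
    if surplus < 0:
        return f"fund {-surplus:.2f}, e.g. via an intercompany transfer, FX swap or facility drawdown"
    return "no action needed"

print(eod_action(12.5))   # -> invest 12.50 ...
print(eod_action(-3.0))   # -> fund 3.00 ...
```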
This article focuses on the cash management side only; the IBS reconciliation process may be discussed another time. If you would like to know more about bank reconciliation using intraday statements, feel free to reach out to us. We have a pre-developed solution that we can implement for you.
IBS concepts
There are a few design considerations that need to be looked at before attempting to implement this solution in SAP.
- Reporting formats: MT942, CAMT.052 and BAI2 are formats that can be imported by standard SAP and are supported by most banks to some degree. There may be informational or structural benefits that one format has over the others, which should be considered in the design.
- Reporting frequency: It is possible to agree with the bank on the reporting frequency of IBS. Ten times throughout working hours? Or only once, half an hour before the payment cut-off time? In most cases, the bank will charge a fee for every statement it sends, so this should be considered in the design.
- Delta vs cumulative reporting: As the bank can report multiple times a day, it is important to understand how the data is reported. There are two methodologies (see the sketch after this list). With delta reporting, only new transactions are reported, relative to the previously distributed IBS. Alternatively, with cumulative reporting, all items booked throughout the day are reported on each statement. Delta reporting typically means that the data in your SAP system needs to be appended for every new IBS; cumulative reporting means that every time you process an IBS in SAP, the data needs to be rebuilt completely.
- Data integration: The intraday data as provided by the bank needs to be integrated with already existing cash-relevant data to compile a proper reporting view of the estimated closing balance for the day. This needs to happen in the cash management module of SAP (FF7* reports). The structure of the cash management report should be carefully aligned with the liquidity structure (i.e. the ZBA structure).
- Prevention of duplication: Integrating the intraday data with existing data should be designed with duplication in mind. It is paramount that the same cash movement is not counted twice from two sources. For example, if we are not careful, a payment flow can be included in the report twice: once from the intraday statement when it is debited, and once from the payment-in-transit GL in the SAP administration. This would result in a skewed estimated closing balance.
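The following schematic Python sketch contrasts the two reporting methodologies and shows why keying on a unique bank reference prevents double counting; the data model is our own assumption, not an SAP table:

```python
# memo_store maps a unique bank reference to a reported cash movement
memo_store: dict[str, dict] = {}

def process_intraday_statement(items: list[dict], mode: str) -> None:
    # 'cumulative': the statement repeats all items booked so far, so rebuild
    if mode == "cumulative":
        memo_store.clear()
    for item in items:
        # keying on the bank's unique reference keeps processing idempotent
        memo_store[item["bank_ref"]] = item

process_intraday_statement([{"bank_ref": "TX1", "amount": 100.0}], mode="delta")
process_intraday_statement([{"bank_ref": "TX1", "amount": 100.0},
                            {"bank_ref": "TX2", "amount": -40.0}], mode="cumulative")
assert len(memo_store) == 2  # TX1 is not double counted
```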
Ultimately, the goal here is to receive and upload intraday bank statements throughout the day and to load cash movement data into your SAP system. This cash-relevant data needs to be made visible through the cash management reports so that the cash manager can better estimate EOD balances and make intelligent decisions related to funding accounts or investing excess funds.
Setting up Intraday Bank Statement reporting in SAP
We will now go into detail on how to set up intraday statement reporting, assuming that the basic FI-CO settings for e.g. the company code are already in place. We also assume that the EOD bank statement process has already been implemented. To learn how to set this up, please read this article on virtual accounts.
Cash Management
It is important to understand that intraday statement data is converted into so-called ‘memo records’ once loaded into SAP. These memo records can be visualized in the cash management reports (FF7AN/FF7BN). We will now explain the settings needed to make the intraday data visible in these cash management reports.
Define planning levels
First, we need to define a planning level: a label that is assigned to all cash movements reported on the intraday statement. The planning level is used to structure the data in the cash management reports.
The level is a two-digit, freely definable label. We set it to C1.
We set the sign to blank, as cash movements reported on this level can be both positive and negative.
The source will be ‘BNK’. This ensures that this planning level is reported in both the ‘cash position’ and the ‘liquidity forecast’ in the FF7AN/FF7BN reports.
The description is freely definable. We define it as ‘INTRADAY’.
Define planning types
A planning type is a label under which a ‘memo record’ is stored on the SAP database. A planning type is subsequently linked to a ‘planning level’ to ensure the underlying data can be visualized in the cash management reports.
First, we define the planning type label: we set it identical to the planning level, C1, and link it to planning level C1.
We need to define an archiving category. This defines the data retention period of the memo records. If the period is exceeded and the reorganization program is executed, the memo record data will be cleansed.
The auto-expiry option defines whether the memo record expires automatically and becomes invisible in the cash management report output. This needs to be enabled. The idea is that the intraday statement data will be superseded by the EOD statement data once this is loaded after midnight on the next calendar day. To ensure we do not double count identical cash movements from both sources, the intraday data needs to expire.
Also, a number range and description need to be entered. No specific functional considerations are needed here.
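Summarizing the two customizing steps as data (values as described above; the field names and the archiving/number range values are simplified, assumed stand-ins for the actual IMG fields):

```python
# Simplified stand-ins for the IMG fields maintained in the two steps above
planning_level = {
    "level": "C1",            # two-digit, freely definable
    "sign": "",               # blank: movements can be positive or negative
    "source": "BNK",          # shows up in both cash position and liquidity forecast
    "description": "INTRADAY",
}
planning_type = {
    "type": "C1",             # set identical to the planning level
    "planning_level": "C1",   # link that makes memo records visible in FF7AN/FF7BN
    "archiving_category": "A1",  # assumed value: drives memo record retention
    "auto_expiry": True,      # intraday data expires once EOD statements take over
    "number_range": "01",     # assumed value: no specific functional considerations
}
print(planning_type["planning_level"] == planning_level["level"])  # the two must match
```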
Define grouping and maintain headers
A ‘grouping’ is a label that is used to structure the cash management report data in a meaningful manner for the user. The grouping can be selected in the cash management reports and is going to dictate how the data is shown to the user.
We will configure a grouping ‘CASHPOS’.
Maintain structure
Under the grouping we can now maintain the structure of the cash management data. For our report, we include two components. The first component is the planning level; the second is the GL account under which we record our bank account balances. This is the GL account we typically maintain in the house bank account data (table T012K, transaction FI13, NWBC).
For the first component we are going to add an entry as follows:
The grouping we set to ‘CASHPOS’.
The type we set to ‘E’ for planning level. Now we can define a planning level that is going to be relevant to our cash management report output.
We set the selection to C1 (our intraday planning level we defined earlier).
This setting will ensure all cash management data as stored under C1 planning level is going to be selected in the report output.
For the second component we are going to add an entry as follows:
The grouping we set to ‘CASHPOS’.
The type we set to ‘G’ for GL Account. Now we can define the bank GL account that is going to be relevant for our cash management report output.
We set the selection to the GL account that is saved in our bank account entry in table T012K.
This setting will ensure all cash management data as stored under the GL account and relevant for our bank account will be selected in the report output.
The combination of these two lines ensures that we will only see the C1 data for our one bank account. We can add multiple lines to increase the scope of the report output, as sketched below.
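Conceptually, the structure lines act as filters that a memo record must match on every dimension; the sketch below mimics that selection logic (the GL account number is a made-up example):

```python
# The two structure lines maintained under grouping 'CASHPOS'
structure = [
    {"grouping": "CASHPOS", "type": "E", "selection": "C1"},       # planning level line
    {"grouping": "CASHPOS", "type": "G", "selection": "1131050"},  # bank GL account (made-up number)
]

def selected(record: dict) -> bool:
    # A memo record appears in the report only if it matches both dimensions
    levels = {s["selection"] for s in structure if s["type"] == "E"}
    accounts = {s["selection"] for s in structure if s["type"] == "G"}
    return record["planning_level"] in levels and record["gl_account"] in accounts

print(selected({"planning_level": "C1", "gl_account": "1131050"}))  # True
print(selected({"planning_level": "F0", "gl_account": "1131050"}))  # False
```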
Importing and processing bank statements
We should now be in good shape to import our first intraday statements. We could download these statements from our electronic banking platform, or we could be in a situation where we already receive them through an automated H2H interface or even through SWIFT. In any case, the statements need to be imported into SAP. This can be achieved through e.g. transaction code FF.5. The most important parameters to understand here are the following:
- File parameters: Here we define the filename and storage path where our statement is saved. We also need to define the format of the file, e.g. MT942, CAMT.052 or one of the many other supported formats.
- Posting parameters: Here we can define whether the line items on the bank statements should be posted to general or sub-ledger. This section is not relevant for intraday statements, as SAP does not support GL postings and reconciliation from intraday statements out of the box.
- Cash management: This is the most important section, specifically for intraday statement processing. The fields and tick boxes control a few parameters:
- A/CM payment advice: This needs to be enabled to ensure that SAP creates the memo record data from the intraday statements.
- B/Summarization: This tick box controls whether a single memo record will be created for the whole delta balance as reported on the statement or for each reported debit and credit on the statement. If high volumes are expected, summarization can reduce the number of memo records and improve performance a bit. Obviously, it does reduce the data granularity.
- C/Planning type: Here we set the planning type under which the memo records are going to be recorded. In our sample we set this to C1.
- D/ Account balance: This needs to be set if we are loading intraday statements.
- Algorithms: Here we need to set the range of customer invoice reference numbers (XBLNR) so that the electronic bank statement (EBS) algorithm searches the payment notes for such references in a focused manner. If these fields were left empty, the algorithm would not find any open invoices for automatic clearing. This section is not relevant for intraday statements, as SAP does not support GL postings and reconciliation from intraday statements out of the box.
Once these parameters are maintained in the import variant, the system will load the statements and create the corresponding memo records.
Transaction code: FF.5
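For reference, the intraday import variant described above can be summarized as follows (a schematic representation with simplified field names, not the actual FF.5 screen fields):

```python
# Schematic view of the FF.5 import variant for intraday statements
ff5_intraday_variant = {
    "file_format": "MT942",      # intraday format agreed with the bank
    "posting_parameters": None,  # not relevant: no GL postings from intraday statements
    "cm_payment_advice": True,   # create memo records from the statement data
    "summarization": False,      # one memo record per debit/credit: full granularity
    "planning_type": "C1",       # memo records stored under our intraday planning type
    "account_balance": True,     # required when loading intraday statements
}
```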
Now we can check if the memo records are updated in table FDES.
Subsequently, we can check the FF7BN report for grouping ‘CASHPOS’ and observe the output.