The Basel III CVA deadline is fast approaching. Denny Yu, product manager of Risk for Numerix, explains the quantitative and technological challenges firms face in CVA implementation for both vanilla and exotic derivatives, and the key differences between the standard and advanced approaches for CVA calculations under the Basel III framework.
Under the Basel III regulatory framework, banks that fall under the Basel Committee on Banking Supervision (BCBS) mandate are required to have CVA (Credit Value Adjustment) implemented by January 2013. This requirement comes on the heels of the Basel II Market Risk Amendments that banks were required to comply with by January 2012. CVA, defined as the change in portfolio value due to the possibility of counterparty default, affects P&L whether or not a firm includes it in its financial reporting. It is therefore of critical importance that banks thoroughly explore the quantitative and trading implications of CVA.
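In quantitative terms, CVA is usually written as the expected discounted loss from counterparty default over the life of the netted portfolio. A common discretized textbook form (not the specific Basel III capital formula) is:

\[
\mathrm{CVA} \;\approx\; (1 - R)\sum_{i=1}^{n} D(t_i)\,\mathrm{EE}(t_i)\,\bigl[\mathrm{PD}(t_i) - \mathrm{PD}(t_{i-1})\bigr]
\]

where R is the assumed recovery rate, D(t_i) the risk-free discount factor, EE(t_i) the expected exposure at time t_i, and PD(t_i) − PD(t_{i−1}) the marginal probability of the counterparty defaulting in that interval.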
Currently, many banks are scrambling to implement a CVA calculation, either by building internally on top of existing models or by turning to the marketplace for vendor solutions. In the current market environment, banks face a range of implementation challenges, including the need to calculate risk numbers that take into account both market and credit risk factors, the IT challenge of centralizing trade and reference data from multiple sources within the bank into a central repository, and the need to apply a single pricing methodology across all trades with the bank's counterparties.
The regulatory drivers behind CVA are without a doubt increasing pressure on derivative portfolio managers to take action in order to establish accurate, timely, and consistent pricing, risk, and reporting measures enterprise-wide.
For CVA calculation under the Basel III framework, there are two approaches proposed by the Basel Committee on Banking Supervision: the Standardized Approach and the Advanced Approach.
The Standardized Approach is a simple formula that uses standard BCBS inputs and takes into account the non-simulated value of exposures, specific hedges to individual counterparties and CDS indices, and risk weights based on counterparty rating. The Advanced Approach, on the other hand, uses a simulation-based methodology to estimate exposure at default and explicitly models the credit spreads of counterparties; only banks that have approval for the internal models method (IMM) and approval to use an internal market Value-at-Risk (VaR) model for the specific interest rate risk of bonds will be able to use the Advanced Approach for CVA.
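For reference, the Standardized Approach charge in the Basel III rules text takes broadly the following form (the precise definitions, discounting conventions and weights are set out in the BCBS text itself):

\[
K = 2.33\,\sqrt{h}\;\sqrt{\left(\sum_i 0.5\,w_i\left(M_i\,\mathrm{EAD}_i^{\mathrm{total}} - M_i^{\mathrm{hedge}}B_i\right) - \sum_{ind} w_{ind}\,M_{ind}\,B_{ind}\right)^{2} + \sum_i 0.75\,w_i^{2}\left(M_i\,\mathrm{EAD}_i^{\mathrm{total}} - M_i^{\mathrm{hedge}}B_i\right)^{2}}
\]

where h is the one-year risk horizon, w_i the weight based on counterparty i's external rating, EAD_i^total the counterparty's (discounted) exposure at default, B_i the notional of purchased single-name CDS hedges, B_ind and w_ind the notional and weight of any index CDS hedge, and the M terms the relevant effective maturities.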
A common complaint from dealers and banks about both methodologies is that neither takes into account the sensitivity of CVA to market risk factors such as interest rates, equities and commodities. Many banks that actively hedge CVA risk also hedge the sensitivities of their exposures to interest rate, equity, commodity and other market risks. Because the Advanced Approach does not directly model these market risk hedges, they are treated as naked positions, and the CVA charge effectively penalizes banks for hedging.
Difficulties of CVA Implementation: Quantitative & Technology Challenges From Vanillas to Exotics
A unique stipulation of CVA is that, unlike most other risk measures, which are calculated at the trade level, it must be calculated at the portfolio level. Most banks calculate CVA at the legal entity or counterparty level for internal risk management purposes; Basel III, however, will require CVA to be calculated at the trading portfolio level. It is also important to note that most banks have historically implemented trading systems by asset class: typically one system covers the rates desk, another the commodities desk, and so on. The team responsible for the CVA calculation therefore faces the challenge of locating trades by counterparty and bringing them all into one repository.
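As a simplified illustration of that consolidation step, the Python sketch below (with invented system and field names) pulls trade records from separate asset-class systems into a single repository keyed by counterparty:

```python
from collections import defaultdict

# Hypothetical per-asset-class trading systems, each returning its own trade records.
rates_trades = [
    {"trade_id": "IRS-001", "counterparty": "BANK_A", "asset_class": "rates"},
    {"trade_id": "IRS-002", "counterparty": "BANK_B", "asset_class": "rates"},
]
commodity_trades = [
    {"trade_id": "CMD-001", "counterparty": "BANK_A", "asset_class": "commodities"},
]
fx_trades = [
    {"trade_id": "FXO-001", "counterparty": "BANK_B", "asset_class": "fx"},
]

def build_counterparty_repository(*trade_sources):
    """Merge trades from every source system into one dict keyed by counterparty."""
    repository = defaultdict(list)
    for source in trade_sources:
        for trade in source:
            repository[trade["counterparty"]].append(trade)
    return repository

repo = build_counterparty_repository(rates_trades, commodity_trades, fx_trades)
for counterparty, trades in repo.items():
    print(counterparty, [t["trade_id"] for t in trades])
```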
Netting agreements and credit support annexes (CSAs) must also be brought together to reduce the counterparty credit exposure. Once this data is all in one place, each individual trade for the counterparty must be simulated until maturity. This requires a simulation methodology that takes into account the market risk factors of the counterparty portfolio; additionally, assumptions must be made about the correlation of these risk factors. Several steps are then required for the CVA calculation once the trade and legal data are in one place.
CVA is a calculation that explicitly accounts for the credit spreads of counterparties and also requires exposures to be calculated for each counterparty. The individual trade exposures must be simulated through time until maturity. This means simulating hundreds of risk factors across thousands of scenarios and re-pricing each trade at every time step, for the entire portfolio.
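A minimal sketch of that simulation loop is shown below, assuming a small netting set whose trade values are driven by two correlated market risk factors; the factor model, volatilities and sensitivities are illustrative stand-ins for a full pricing library:

```python
import numpy as np

np.random.seed(42)

n_paths, n_steps, horizon = 2000, 20, 5.0            # simulation grid: paths, time steps, years
dt = horizon / n_steps
times = np.linspace(dt, horizon, n_steps)

# Illustrative netting set: each trade's value is driven by one of two
# correlated market risk factors (say, a rate factor and an FX factor).
trade_deltas = np.array([1.0e6, -0.5e6, 0.8e6])      # sensitivity of each trade to its factor
trade_factor = np.array([0, 1, 0])                   # which factor drives each trade
vols = np.array([0.01, 0.10])                        # annualized factor volatilities
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
chol = np.linalg.cholesky(corr)

# Simulate correlated risk-factor paths (zero drift, for simplicity).
shocks = np.random.standard_normal((n_paths, n_steps, 2)) @ chol.T
factor_paths = np.cumsum(shocks * vols * np.sqrt(dt), axis=1)

# Re-price every trade on every path and time step, then net within the netting set.
trade_values = factor_paths[:, :, trade_factor] * trade_deltas   # shape: (paths, steps, trades)
netted_value = trade_values.sum(axis=2)                          # netting agreement applied

# Exposure is the positive part of the netted value; averaging gives the EE profile.
exposure = np.maximum(netted_value, 0.0)
expected_exposure = exposure.mean(axis=0)
print(np.round(times, 2), np.round(expected_exposure, 0))
```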
For banks that hold exotic and structured derivatives in addition to vanilla swaps in the trading portfolio, an additional challenge comes in the form of modeling the exotics' exposures. Many banks have pricing models in place to mark exotic derivatives to market, but may face challenges when simulating the value of an exotic derivative into the future. This stems from the fact that most exotics have a complex payoff function that is difficult to model with traditional pricing systems. Additionally, because these exotics may have embedded call or put options, a much larger number of Monte Carlo simulations is required for accurate future pricing than is needed for vanilla derivatives.
For example, on average, banks simulate exposure values over 20 time steps from the valuation date until the maturity of the longest-dated trade. Based on the confidence level chosen by the bank, the exposure paths at that confidence level are aggregated at the counterparty level. This calculation yields the potential future exposure (PFE) of the counterparty.
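In code, the PFE profile is simply a per-time-step quantile of the simulated exposure paths; the sketch below assumes the (paths × time steps) exposure matrix produced by a simulation such as the one above, with a 95% confidence level chosen purely for illustration:

```python
import numpy as np

# `exposure` is the (paths x time steps) matrix of simulated positive exposures from the
# previous sketch; a random stand-in is generated here so the snippet runs on its own.
exposure = np.maximum(np.random.default_rng(0).normal(0.0, 1.0e6, size=(2000, 20)), 0.0)

confidence = 0.95                                          # illustrative confidence level
pfe_profile = np.quantile(exposure, confidence, axis=0)    # PFE at each future time step
peak_pfe = pfe_profile.max()                               # headline (peak) PFE for the counterparty
print(np.round(pfe_profile, 0), round(float(peak_pfe)))
```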
For banks using the Advanced Approach referenced earlier, the CVA calculation requires assigning the credit spread of the counterparty to, in effect, discount the counterparty's exposure profile given the recovery rate assumptions. For an average portfolio of 10,000 vanilla OTC derivatives, a bank will need to simulate up to 2,000 paths over 20 time steps out to the maturity of the longest-dated trade. This equates to roughly 400 million re-pricing calculations for a portfolio-level CVA calculation (10,000 trades multiplied by 2,000 paths and 20 time steps). Compare this with historical VaR for the same portfolio using a one-year look-back period with daily observations, which requires 2.5 million re-pricing calculations (10,000 trades multiplied by 250 daily observations). For banks aiming to calculate CVA on a weekly or daily basis, the sheer number of calculations requires computational power unprecedented in the traditional market risk environment.
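A rough sketch of that final step, assuming a flat CDS spread, constant recovery rate and flat discount curve (all of the numbers below are placeholders, not calibrated inputs):

```python
import numpy as np

times = np.linspace(0.25, 5.0, 20)              # exposure time grid in years
expected_exposure = np.full(20, 1.5e6)          # EE profile from the simulation step (stand-in values)
spread, recovery, risk_free = 0.02, 0.40, 0.03  # flat CDS spread, recovery rate and discount rate

hazard = spread / (1.0 - recovery)              # flat hazard rate implied by the spread
survival = np.exp(-hazard * times)
default_prob = -np.diff(np.concatenate(([1.0], survival)))   # marginal default probability per period
discount = np.exp(-risk_free * times)

# CVA as the discounted, loss-given-default-weighted expected exposure.
cva = (1.0 - recovery) * np.sum(discount * expected_exposure * default_prob)
print(round(cva, 2))
```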
The chart below illustrates how complex valuations such as Historical VaR, Monte Carlo VaR and PFE demand exponentially more compute power as the number of scenarios and time periods grows, in order to generate the projected number of observations. Firms looking to move to daily or weekly calculations will have to work out how to complete all of these calculations in minutes rather than hours or days.
The last decade has seen explosive growth in computational processing power, from private server grids to cutting-edge analytics deployed on the cloud, and today's IT professionals face a plethora of choices for deploying financial risk technologies. For sheer computational power, banks are looking to distributed processing mechanisms that can farm calculations out across the institution's processing grid. This enabling technology allows banks to put idle servers to work and maximize utilization of the entire grid.
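A toy version of that farm-out pattern, using Python's standard-library process pool in place of a commercial grid; the pricing function and task list are placeholders for real trade valuations:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def reprice(task):
    """Placeholder for a single (trade, path, time step) valuation call."""
    trade_id, path, step = task
    return trade_id, math.sin(path * 0.001 + step)   # stand-in for a real pricing model

if __name__ == "__main__":
    # One task per trade/path/time-step combination; real grids batch the work far more coarsely.
    tasks = [(t, p, s) for t in range(100) for p in range(200) for s in range(20)]
    with ProcessPoolExecutor() as pool:              # farms the work out across available workers
        results = list(pool.map(reprice, tasks, chunksize=10_000))
    print(len(results), "valuations completed")
```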
Once the calculations are complete, another challenge is analyzing the results, which can run to gigabytes of data. Merely retrieving this data from traditional databases can take tens of minutes. In a real-time scenario, where a bank needs to calculate the incremental CVA of a potential trade with a counterparty, quick access to CVA results is essential. Many banks have begun exploring the benefits of in-memory caching, whereby computation results are stored in memory and available for near real-time access by end users.
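A toy illustration of that caching pattern: results are computed (or loaded) once through the slow path and then served from process memory for subsequent queries; the data and timings are invented:

```python
import time

cva_cache = {}                       # in-memory store of portfolio-level CVA results

def compute_cva_from_database(counterparty):
    """Stand-in for the slow path: pull simulation output from disk and aggregate it."""
    time.sleep(2)                    # pretend this is a lengthy database retrieval
    return {"cva": 1.2e6, "peak_pfe": 9.5e6}

def get_cva(counterparty):
    """Serve results from memory when available; fall back to the slow path only once."""
    if counterparty not in cva_cache:
        cva_cache[counterparty] = compute_cva_from_database(counterparty)
    return cva_cache[counterparty]

get_cva("BANK_A")                    # first call hits the slow path
print(get_cva("BANK_A"))             # later calls are served from memory in near real time
```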
Finally, additional analysis of this "big data" can be significantly improved by visualization tools that make it easy to spot risk concentrations and highlight areas for increased scrutiny. A cottage industry has sprung up since the early 2000s offering commercial-strength tools that can quickly search very large datasets and drill into specific areas of interest.
Surprisingly, even though fewer than eight months remain in 2012 for the implementation of CVA, in a recent webinar Sybase and Numerix found that over 50% of respondents were still seeking education around CVA and had no visibility into their organization's plans for CVA, while fewer than 15% were in production or in implementation. As banks continue to grapple with the numerous quantitative and technology challenges associated with CVA, it is imperative that they also have a clear picture of the key issues and of the current state of implementation, to ensure the CVA approach they choose is the right fit for their business.