Deep Insights. Deep Expertise.
Financial Engineering: Theory and Practice
There is a significant gap between theoretical quantitative modelling and its practical implementation. Black-Scholes does not allow for a volatility smile, no one actually expects local volatilities to be realised, and the square root in the Heston volatility process is chosen more for practical implementation considerations than out of consideration for the real-world stock price process.
In producing Value-at-Risk a number of (unrealistic) assumptions are made, which means that, generally speaking, no one believes that (paraphrasing Wikipedia) “for a given portfolio, probability and time horizon, …the mark-to-market loss on the portfolio over the given time horizon exceeds this value (VaR), (assuming normal markets and no trading in the portfolio)… (with) the given probability”. Rather, VaR is an approximate indicator of the (market) riskiness of an institution’s portfolio. As such, there is no definitive VaR number (given the inputs above). There are a number of tweaks that can be made to VaR to improve the calculation, depending on the definition of “improve”.
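To make the discussion concrete, a minimal historical-simulation VaR can be sketched as follows. The function name, the confidence level and the synthetic return series are all illustrative assumptions, not any particular institution's methodology:

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """1-day VaR by historical simulation: the loss threshold exceeded
    with probability (1 - confidence) under the empirical distribution."""
    losses = -np.asarray(returns)           # losses are negated returns
    return np.quantile(losses, confidence)

# Synthetic return series with ~1% daily volatility, for illustration only.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, 1000)
var_99 = historical_var(daily_returns)
```

Even in this toy setting the point about "no definitive number" shows up immediately: the result depends on the window length, the quantile estimator, and the choice of historical period.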
One such example is the 10-day scaling factor. An institution may either use a 10-day return series to calculate their 10-day VaR, or scale the 1-day VaR calculation by the square root of 10. It is possible (and observable) that one of these methods will result in a lower VaR calculation, and lower economic capital. An institution is likely to base their internal framework on the approach that requires a lower capital holding.
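The two methods can be compared directly. The sketch below, under the assumption of a synthetic i.i.d. return series, computes the square-root-of-10 scaled 1-day VaR alongside a VaR from overlapping 10-day returns; in real data with fat tails or autocorrelation the two can diverge materially, which is exactly the room for optimisation described above:

```python
import numpy as np

def hist_var(returns, confidence=0.99):
    """VaR as the upper quantile of the empirical loss distribution."""
    return np.quantile(-np.asarray(returns), confidence)

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 2000)   # synthetic 1-day returns, illustrative

# Method A: scale the 1-day VaR by sqrt(10)
var_sqrt10 = np.sqrt(10) * hist_var(r)

# Method B: build overlapping 10-day returns and take their quantile
r10 = np.array([r[i:i + 10].sum() for i in range(len(r) - 9)])
var_10day = hist_var(r10)
```

Under the i.i.d. normal assumption the two numbers are close; the regulatory arbitrage appears when the realised return process violates that assumption and one method systematically prints lower.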
Another interesting tweak is the use of volatility scaling to adjust historical scenarios. This is something I covered at university, and recommended to clients as a young, naive financial engineer. The idea is to update the historical scenarios, scaling by the ratio of the current volatility to the historical volatility level. As we were taught it, it was intended to improve the ‘responsiveness’ of Value-at-Risk to changes in the market, so that more volatile markets would be reflected as such, and likewise for less volatile markets. Its current-day use falls under the ‘likewise’ category. Following turbulent market conditions, VaR calculated without a volatility adjustment will ‘overestimate’ VaR, in that it reflects a loss distribution from a more volatile period in history.
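A minimal sketch of the scaling idea, assuming a RiskMetrics-style EWMA volatility filter (the lambda value and function names are illustrative, not a prescribed methodology): each historical return is rescaled by the ratio of today's volatility estimate to the volatility estimate prevailing on that day, before the VaR quantile is taken.

```python
import numpy as np

def ewma_vol(returns, lam=0.94):
    """EWMA volatility estimate for each day in the series."""
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(var)

def vol_scaled_var(returns, confidence=0.99, lam=0.94):
    """Historical VaR with scenarios rescaled to current volatility."""
    returns = np.asarray(returns)
    sigma = ewma_vol(returns, lam)
    scaled = returns * (sigma[-1] / sigma)   # rescale each scenario to today's vol
    return np.quantile(-scaled, confidence)
```

If the recent period is quiet, sigma[-1] is small and scenarios drawn from a turbulent past are shrunk, pulling the reported VaR down, which is precisely the behaviour the surrounding discussion is concerned with.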
The problem here is that this adjustment is procyclical. Institutions applying the adjustment will benefit by holding less regulatory capital during quiet market periods. During volatile market periods, however, the institution will need to significantly increase its capital holding, likely after just having experienced a large loss.
This approach is interesting, firstly because the intention behind the practical application of the method differs greatly from its theoretical intention (theory seeks a genuine improvement in the accuracy of VaR, while practice simply seeks a lower VaR), and secondly because it is an approach with unintended consequences.
There is a limit to how much optimisation an institution can do. They still need to pass daily backtests, so it is unlikely that any reported VaR misstates expected losses too severely (relative to VaR being calculated in an ‘unoptimised’ environment). There is also the matter, in this case, of the countercyclical buffer to be introduced under Basel III. If the use of a countercyclical buffer overstates a regulatory capital requirement in good times, then the volatility adjustment normalises this effect, and anyone who does not apply the adjustment is unnecessarily incurring excessive capital charges. Alternatively, if the countercyclical buffer accurately reflects losses that may occur (even in low volatility periods), then the adjustment is reversing a realistically prudent buffer.
Although regulatory capital optimisation sits as an embarrassing footnote on the JP Morgan Whale scandal, it is a practice that can be saddled with only a small portion of the blame for the loss, and it is a practice for which room is increasingly disappearing. What is clear is that financial engineering can be applied to the production of VaR for a variety of different reasons, not all of which are to make it more accurate. Whatever optimisation is applied, a good market risk department will be fully aware of the implications of such an approach, and will manage its use accordingly. While it is unlikely that any institution would seek to voluntarily attract higher risk charges, they may employ two separate VaR models: a regulatory VaR for use in regulatory capital calculations, and an internal VaR for use in internal risk management (which may very well be higher than regulatory VaR, and accordingly lead to a higher level of held capital).