Defending VAR - But You Still Need Common Sense

by: Suna Reyent, CFA

In Defense Of Value At Risk And Other Risk Management Methods

At the beginning of the month, The New York Times Magazine published an article by Joe Nocera called “Risk Mismanagement” that created quite a stir in the blogosphere and beyond. Despite the watering-down of certain aspects of risk management tools, as well as the diversity with which these tools are applied in practice, the article was a success because of the buzz it created and the debate that followed.

The article portrays a debate over the value at risk methodology between its well-known practitioners and critics led by Nassim Taleb. It is hard not to get carried away with Mr. Taleb’s tabloid-like descriptions of VAR as a “fraud” and its practitioners as “intellectual charlatans.”

I love how the debate is framed. The premise is that value at risk and other valuation models (such as Black-Scholes) assume a normal distribution of asset returns. Okay, they do that in their most primitive forms, but let’s accept the oversimplification as fact for a moment, because the debate would hardly exist in this simplistic form if we didn’t go along with the show here.

This is where our hero Mr. Taleb, an experienced options trader no less, emerges into the public mainstream to inform all of us ignorant folks that asset returns do not follow a normal distribution! The horror! The painful realization that this stuff continues to be taught in business schools! All that wasted class time learning statistics!

It is fair to say that this assumption will mislead naïve market participants about the nature of their risk exposures, as “Black Swan” events happen far more frequently than Gaussian distributions suggest. The problem is, almost anyone in finance already knows that asset returns are not normally distributed, and many practitioners build models, or apply extensions to existing ones, in order to take this into account.

I decided to give a little background on value at risk in order to get across the points I feel strongly about. Since I teach VAR in the classroom as part of a risk management curriculum, I feel it is best to start with some preliminary information.

A Primer On Value At Risk

In its simplest form, value at risk consists of applying a one-sided test, at a chosen confidence level, to figure out the loss that a portfolio may weather over a given time period. For instance, a 95% daily VAR of ten million dollars indicates that the portfolio should lose no more than that amount of money 95% of the time; losses beyond it should occur about 5% of the time, or roughly once a month assuming 20 trading days in a given month. Put differently, it is the LEAST amount of money that the portfolio can expect to lose on that worst 5% of days. I appreciated it when Mr. Nocera mentioned this in his article prepared for a general readership. Since VAR cannot tell us what kind of loss to expect within that 5% tail, the limitation of the metric if taken as gospel becomes apparent even to the untrained eye.
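
As a rough illustration of that interpretation, here is a minimal sketch in Python (the VAR figure is the ten million dollars from the example; the daily P&L series is simulated and purely hypothetical) that counts how often losses blow through the VAR number:

```python
# Minimal sketch: interpreting a 95% daily VAR figure.
# The P&L series below is simulated and purely hypothetical.
import random

random.seed(1)
var_95 = 10_000_000          # the "$10 million" 95% daily VAR from the example above
trading_days = 250

# Stand-in daily P&L in dollars (a normal draw, used only for illustration)
daily_pnl = [random.gauss(0, 6_000_000) for _ in range(trading_days)]

exceedances = sum(1 for pnl in daily_pnl if pnl < -var_95)
print(f"Days with losses beyond VAR: {exceedances} out of {trading_days}")
# A correctly calibrated 95% VAR should be exceeded on roughly 5% of days,
# i.e. about once per 20-trading-day month.
```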

More on the tail risk later. But first, I would like to talk about three established ways of calculating value at risk for one asset and analyze the current risk management crisis within this framework:

Analytical VAR – “Misunderestimating” Risks

Otherwise known as the variance-covariance method, this is the best-known way of calculating VAR and the easiest one to apply. It assumes a normal distribution of returns. All it takes to calculate VAR is a standard deviation, which represents the “volatility” of the asset, and a mean, which is the expected return on the same asset.
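
As a minimal sketch of that calculation, and with purely hypothetical inputs (a $100 million book, a 0.05% expected daily return and a 1.2% daily volatility), the 95% analytical VAR could be computed along these lines:

```python
# Minimal sketch of analytical (variance-covariance) VAR under the normal
# distribution assumption. All portfolio figures are hypothetical.
from scipy.stats import norm

portfolio_value = 100_000_000   # assumed $100 million book
mu_daily = 0.0005               # assumed expected daily return
sigma_daily = 0.012             # assumed daily standard deviation ("volatility")
confidence = 0.95

z = norm.ppf(confidence)        # one-sided cutoff, about 1.645 at 95%

# Dollar loss that should not be exceeded on 95% of days under normality
analytical_var = (z * sigma_daily - mu_daily) * portfolio_value
print(f"95% daily analytical VAR: ${analytical_var:,.0f}")
```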

This is the VAR that Mr. Taleb seems to conveniently focus on, because it will indeed underestimate the risk at the tails of a negatively skewed or a leptokurtic distribution.

Stock markets in general exhibit negative skewness, which means that the distribution of returns has a long tail (a few extreme losses) on the left side. They also exhibit leptokurtosis, which means that both tails of the distribution are fatter than a normal distribution implies.

So we could go nuts over how wrong the normal distribution assumption is, and apparently people do. But we should also be very concerned about how sensitive this measure is to the standard deviation as well as the mean, both of which are subject to change as markets change, especially in light of the current crisis.

Historical VAR – Good As Long As Future Resembles Past

This method does not require any assumptions about the distribution of returns and is certainly superior to analytical VAR because it is non-parametric. The more data there is, the better the measurement. Historical data will exhibit characteristics such as skewness or kurtosis as long as the asset itself exhibits these qualities as well.

Assuming 250 trading days in a given year, in order to measure the 95% daily VAR you rank the returns from worst to best and pick the greatest return among those that make up the bottom 5%. So the 12th worst return (or you could interpolate between the 12th and 13th worst returns, since 5% of 250 is 12.5, but since VAR itself is an approximation, why bother?) will tell you the maximum percentage loss 95% of the time, or the minimum percentage loss 5% of the time. Multiply the loss by your portfolio value and you get the neat VAR value in dollar terms.
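
Here is a minimal sketch of that recipe, using the 12th-worst-return convention described above; the return series is made up, standing in for an asset's actual one-year history:

```python
# Minimal sketch of historical VAR over one year of daily returns.
# The return series is made up; with real data it would be the asset's history.
import random

def historical_var(daily_returns, portfolio_value, confidence=0.95):
    worst_first = sorted(daily_returns)                # rank returns from worst to best
    cutoff = int(len(worst_first) * (1 - confidence))  # bottom 5% of 250 days -> 12
    var_return = worst_first[cutoff - 1]               # the 12th worst return (no interpolation)
    return -var_return * portfolio_value               # express the loss in dollars

random.seed(7)
sample_returns = [random.gauss(0.0005, 0.012) for _ in range(250)]  # stand-in data
print(f"95% historical VAR: ${historical_var(sample_returns, 100_000_000):,.0f}")
```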

Moreover, the majority of investment houses use historical VAR as the basis for measurement, as it is a clear improvement over analytical VAR. You do not need return assumptions or standard deviation estimates to come up with this value.

Historical VAR calculations replace parametric assumptions with historical data. This means that if you had positions in mortgage derivative securities and started the year 2007 with models that were built around data of the previous two years encompassing the “peaceful” periods of 2005 and 2006, you would soon be awakened to a world where your VAR measures no longer reflected the reality of the marketplace. Note that such limitations of VAR as an all-encompassing risk measure were visible to any professional who understood risk management models as well as the limitations of historical data that went into them.

As Mr. Nocera’s article conveys, this is precisely what Goldman Sachs (NYSE:GS) did. When it became obvious that the mortgage markets had changed in fundamental ways and aggressive positions in these securities started bringing in gigantic losses (as opposed to reaping the usual gigantic profits on the back of the ever-rising housing market), the team decided to limit its risk exposure by “getting closer to home.”

I don’t think the article conveys what “getting closer to home” really means. Let me use day trading as an example here. In day trading terms, this means that when your positions start showing huge losses at the end of the day, you accept “defeat” and take your losses as opposed to trying to ride them in the hope that the market will come around. So instead of wishing for the market to make a comeback and recoup your losses, you close out your open positions, take your losses and go home. Then you go back to the drawing board to strategize for the next day given the new reality of the marketplace.

Of course, in retrospect, the decision to limit exposure and take losses, as opposed to trying to ride them in the expectation of a housing market turnaround, was the right one to make. However, as we have seen with many other bubbles, managers have no incentive to make sound trading decisions, nor to listen to their risk managers, as long as they get a huge piece of the profits made during the ride and the taxpayer ends up holding the bag when the market finally blows up.

We have seen this movie over and over again. What surprises me is the heavy blame put on models for not reflecting “reality,” when those in charge knew that the mortgage bubble was collapsing and had many opportunities to get rid of their huge exposures to these derivative securities, yet chose not to, most likely because they expected a market turnaround. This is Trading 101: if you try to ride your losses, you may make comebacks, but you will eventually blow up.

Now the next episode features critics who tell us that the “models” have been faulty and wrong. Hence the conclusion that value at risk is an erroneous and misleading measure, not to mention a “fraud.”

Ladies and gentlemen, we have found the “fraud” haunting the trading floors on the Street, and it is not a human being: shame on you, VAR and other risk management tools! Of course, we can blame the car manufacturer for the accident: the car’s faulty speedometer, or its lack of an apparatus to show us the bumps on the road ahead. But why is our culture so reluctant to blame the drunk driver, who was clearly intoxicated with the thrill of making green?

These “models” are about as guilty as the “accounting” that was used, with a sleight of hand, to conceal what was really going on behind the curtains during the Enron debacle and other scandals. Of course, given the mathematical complexities of the models, the quantitative brainpower needed to understand some of them, and the assumptions required in creating a map of your territory, there is more of an opportunity to either blame the models or to pretend that you didn’t understand them when things turned sour.

Since I set out in this essay to make my points within the value at risk framework featured in textbooks, I will now move on to the third methodology used to calculate the measure.

Monte Carlo Simulation – Anything Goes, But More Of An Art Than Science

Monte Carlo Simulation is especially useful for calculating the risk exposures of assets that either have little historical data or whose historical data has been rendered irrelevant by changing economic conditions that affect both the prices of securities and the way those securities interact with each other in a portfolio. Likewise, the historical returns of assets with asymmetric payoffs, or of derivative securities whose values depend on variables such as interest rates, housing prices, and the like, will not reflect the future when the factors that drive those returns change as the economic climate shifts.

Monte Carlo Simulation does not require a fixed set of assumptions regarding its parameters or its distribution of returns. In its simplest form, the technique generates random outcomes by simulating a large number of market returns. For instance, if 50,000 iterations are used in creating asset returns, the 95% VAR would be calculated as the 2,500th worst return among all those generated by the computer.
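
A minimal sketch of that simplest form follows; a plain normal draw stands in for whatever return-generating model a desk would actually use, and the portfolio size and distribution parameters are hypothetical:

```python
# Minimal sketch of Monte Carlo VAR in the simplest form described above.
# A plain normal draw stands in for the return-generating model; the portfolio
# size and the distribution parameters are hypothetical.
import random

random.seed(42)
portfolio_value = 100_000_000
iterations = 50_000
confidence = 0.95

simulated_returns = sorted(random.gauss(0.0005, 0.012) for _ in range(iterations))

cutoff = int(iterations * (1 - confidence))                # 2,500 of 50,000
mc_var = -simulated_returns[cutoff - 1] * portfolio_value  # the 2,500th worst simulated return
print(f"95% Monte Carlo VAR: ${mc_var:,.0f}")
```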

The process explained above is the simplest way of running a Monte Carlo Simulation, and requires fairly simple programming and computational capabilities. However, a trading book may hold thousands of positions of securities with asymmetric payoffs that also interact with each other given changing market conditions. Depending on how many securities an average institution holds in its portfolio, calculation of VAR in this manner may become an enormous computational task.

Monte Carlo Simulation in finance is an extensive and broad field. It is also used in combination with other risk methods to carry out stress testing or to create scenarios that simulate crashes, among other things.

Extensions Of VAR And Must-Have Techniques To Supplement VAR Models

The shortcomings of the VAR metric have received due attention in risk management circles. For instance, given the deficiency of the VAR metric in correctly identifying the risks “stuffed” into the tails, extensions such as tail value at risk, otherwise known as conditional VAR, have been developed to deal with the issue.

Tail value at risk is the average loss a portfolio manager can expect in that 5% tail. A technical recipe goes like this: instead of looking at the 2,500th worst return of your 50,000-iteration Monte Carlo model to find the 95% VAR value, you take all the worst returns ranked from 1st to 2,500th, average them, and call this number TVAR.
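
Here is a minimal sketch of that recipe alongside the plain VAR number, again with hypothetical normal draws and an assumed $100 million portfolio in place of a real return model:

```python
# Minimal sketch of tail VAR (conditional VAR) per the recipe above: average the
# worst 2,500 of 50,000 simulated returns instead of picking the 2,500th one.
# Again the normal draws and the $100 million book are purely hypothetical.
import random

random.seed(42)
portfolio_value = 100_000_000
simulated_returns = sorted(random.gauss(0.0005, 0.012) for _ in range(50_000))

var_95 = -simulated_returns[2_500 - 1] * portfolio_value    # the 2,500th worst return
tail = simulated_returns[:2_500]                            # returns ranked 1st to 2,500th worst
tvar_95 = -(sum(tail) / len(tail)) * portfolio_value        # average loss in that 5% tail
print(f"95% VAR:  ${var_95:,.0f}")
print(f"95% TVAR: ${tvar_95:,.0f}")   # TVAR is always at least as large as VAR
```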

The New York Times Magazine article mentions the possibility of gaming VAR as well. For instance, if the portfolio contained a large out-of-the-money short put that is likely to blow up 1% of the time and cause a much larger loss than what your VAR is telling you, studying those tails carefully, either via scenario analysis or via TVAR, would keep the risk manager vigilant regarding the true risks the portfolio is exposed to.
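
To make the gaming point concrete, here is a stylized sketch of such a position: it collects a small premium 99% of the time and takes a huge hit roughly 1% of the time, so its 95% VAR looks harmless while TVAR exposes the tail. The premium, crash size, and crash probability are all made up:

```python
# Stylized sketch of gaming VAR: a position that collects a small premium 99% of
# the time and takes a huge hit roughly 1% of the time, like a large deep
# out-of-the-money short put. The premium, crash size and probability are made up.
import random

random.seed(0)
n = 50_000
premium, crash_loss, crash_prob = 100_000, -20_000_000, 0.01

pnl = sorted(crash_loss if random.random() < crash_prob else premium
             for _ in range(n))

var_95 = -pnl[int(n * 0.05) - 1]        # 95% VAR never reaches the 1% tail...
tail = pnl[:int(n * 0.05)]
tvar_95 = -sum(tail) / len(tail)        # ...while TVAR averages over it
print(f"95% VAR:  ${var_95:,.0f}")      # negative here: VAR sees no loss at all
print(f"95% TVAR: ${tvar_95:,.0f}")     # TVAR reveals a multi-million dollar tail risk
```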

Conclusion

Despite the improvements and extensions to the VAR metric, it remains an absolute necessity to supplement the measure with other methods such as stress testing, scenario analysis, and factor-push models. Combinations of various methods may be used to simulate worst-case scenarios, and these models must be evaluated against the conditions present in the marketplace.

Yet there is a Bloomberg story from a year ago that quotes an SEC filing by Merrill Lynch:

"VaR, stress tests and other risk measures significantly underestimated the magnitude of actual loss from the unprecedented credit market environment,'' Merrill's filing said. "In the past, these AAA ABS CDO securities had never experienced a significant loss in value."

Translation: We did not “stress” our portfolio well enough, carry out the necessary worst-case scenarios, or question the validity of our historical data given the new mortgage environment because it was inconvenient for us to do so. But surely it’s not our fault now, is it?

Okay, I have since made peace with the idea that the taxpayer is footing the bill for the losses incurred by the “club.” When the big guys ruin the fabric of the financial system, the taxpayer pays for it, and that has been the standard procedure. But I’ve had enough of watching people wash their hands of the blame by appealing to the inadequacies of the models or the risk management tools they were using. The Securities and Exchange Commission, the major newspapers, or the general public may very well accept that kind of explanation, but I reject the idea that risk models were to blame for the severe losses in this crisis.

A financial model is never a complete representation of what is going on in the markets, and it was never meant to replace judgment or common sense.

A forensic eye combined with qualitative analysis has always been sufficient to evaluate the robustness of models, and it is almost always apparent to practitioners when their models no longer reflect the reality of the markets.

Thus, while we should be aware of the shortcomings of the VAR measure, as well as of any other model we are using, it is erroneous to put the blame on the tools when the crisis at hand remains a failure of human judgment, a lack of responsible behavior, and a collapse of plain old business ethics.
