Seeking Alpha
Dividend investing

Ben Stein recently expressed a feeling that many people share with regard to 2007-2008: how could equities lose so much value so fast[1]? Ben puts it this way:

“…we have learned that even the most rigorous back testing of portfolios did not work during this period. The reason was simple -- no back test allowed for as much stress as markets were under from late 2007 to fall 2008. There simply was no postwar historic precedent for markets to be as volatile on the downside as they were in 2007-08. Thus, back testing (very similar to stress testing) that called for maximum falls of, say, 33 percent simply did not work when markets fell as far and fast as they did in 2007-08.”

I am afraid that many people will come away from the bear market of 2007-2008 (even if it has now ended) with the idea that holding a substantial allocation to equities is too risky if we can experience conditions like 2008. Certainly a drop such as we experienced in 2008 serves as a wake-up call with regard to risk management. I imagine that investors in the mid-1930s felt the same way. In the aftermath of 2008, there are plenty of people suggesting that quantitative estimates of potential losses were impossible—that our math was flawed. The reality is, I believe, more complex[2]. A recent lengthy piece in the New York Times Magazine provides an engaging perspective on the differing opinions[3].

I want to move away from the fairly abstract examples that abound with regard to risk management at the big banks and trading firms, and focus on more practical examples. What can individual investors and advisors learn with regard to risk management in the context of 2008? To motivate the discussion, I decided to look at a series of “lazy portfolio” allocations that are tracked over at Marketwatch.com[4]. There are eight model portfolio allocations—some designed by professionals and some designed as amateur examples. The reason I want to use these portfolios as examples is that they were designed to provide broad diversification across a range of asset classes. For 2008, the performance of these portfolios is as follows (I have included the S&P500 for comparison):

We may first note that none of these portfolios had a good year, with losses ranging from 21% to 36%. These are severe losses, but were they truly unpredictable?

Before continuing, it is very important to think about what predictability means when we are talking about things with uncertain outcomes. Many investors think about predictability in terms of the ability to forecast what will happen – a specific outcome. This is not the right way to think about predictability in investing. When you roll a die, you cannot predict the next value with odds any better than 1-in-6. You can, however, predict the probability of future outcomes perfectly: you have a 1-in-6 chance of rolling each value, 1 through 6. This is what statistical probability is about. From my perspective, the key issue for investors is the degree to which it is possible to calculate the probabilities of future outcomes, rather than the specific return for next year, next month, etc.
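The die analogy is easy to check by simulation. A minimal Python sketch (the 600,000-roll count is arbitrary):

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable
rolls = [random.randint(1, 6) for _ in range(600_000)]

# No single roll is predictable, but the probabilities of the outcomes are.
for face in range(1, 7):
    freq = rolls.count(face) / len(rolls)
    print(f"face {face}: empirical frequency {freq:.4f} vs theoretical {1/6:.4f}")
```

Every empirical frequency lands very close to the theoretical 1/6: the individual outcome is unpredictable, but the distribution of outcomes is not.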

It is fairly hard to validate a model in terms of its ability to predict extreme events. This is at the core of Nassim Taleb’s critique of financial theory. He believes that quantitative models cannot predict the types of extreme events that can occur with any meaningful skill: the world is fairly unpredictable, and totally unpredictable when it comes to massive disruptive events.

This element of predictability can, at least, be tested. If an event occurs that models suggested was utterly impossible, you can say with confidence that the model is flawed. Black Swans are things that models would project as being astronomically rare. LTCM, the huge hedge fund that blew up (and one of Nassim Taleb's prime Black Swan examples), had estimated that losses of the scale it experienced were "unlikely to occur over the entire life of the universe and even over numerous repetitions of the universe" (source: When Genius Failed by Roger Lowenstein, p. 156). They were wrong. But what about the more subtle problem of less extreme events? If you have two floods at the estimated 100-year flood level within a decade, does this mean that the model is broken? Not necessarily.

But what about the market losses in 2008? We saw historically unprecedented levels of market volatility[5]. There has never been a year with so many huge daily swings[6]. The question that we should ask ourselves is how extreme an event we consider 2008 to be, and whether good risk models would at least have flagged such outcomes as possible. Using our portfolio analysis tool, Quantext Portfolio Planner (QPP), I ran projections for each of the lazy portfolios using data through the end of 2007, with all default settings.

The projected 1-year losses at the three standard deviation level are shown below:

[Chart: projected one-year three standard deviation losses for each portfolio]

Before interpreting these results, let’s make sure that we understand what this loss projection means. QPP projects the average return and standard deviation of return for a portfolio. A one standard deviation loss is the projected average return minus one standard deviation; a three standard deviation loss is the projected average return minus three standard deviations. A three standard deviation loss will occur with a probability of a bit more than 1-in-1000. In other words, it is very rare, but it does happen.
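Under the common assumption of normally distributed returns, the probability of a three standard deviation loss can be computed directly from the standard normal distribution. A sketch using only the Python standard library:

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Probability of a return falling 3 or more standard deviations
# below the mean, assuming normally distributed returns.
p = normal_cdf(-3.0)
print(f"P(3 standard deviation loss) = {p:.5f}, about 1-in-{1 / p:.0f}")
```

This works out to about 0.00135, roughly 1-in-740: the "a bit more than 1-in-1000" figure cited above.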

Does this mean that QPP “forecasted” a massive loss in the market? No. The reader should be clear in understanding that this projection also does not mean that this level of loss will only happen once in 1000 years. The situation is like this. You have a computer program that will generate a number from 1 to 1000, with equal probability for each. If you run that program and you draw a value of 1000 on your first draw, this does not mean the program is broken. If you draw one number each year and you get the 1-in-1000 value in your first year, that is simply chance. You had a 1-in-1000 chance of getting this value, but you got it on the first try. So, for the moment, let’s proceed with this idea: the 3 standard deviation loss is close to a 1-in-1000 event—and this is about the probability that QPP projected for what happened in 2008.
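The same point can be made with simple arithmetic: even if a given year's loss really is a 1-in-1000 event, the chance of seeing at least one such year grows steadily with the horizon. A sketch, assuming independent years:

```python
p_annual = 1 / 1000  # assumed annual probability of a 3 standard deviation loss

# Chance of at least one such year over various horizons,
# treating years as independent draws.
for years in (10, 30, 60):
    p_at_least_one = 1 - (1 - p_annual) ** years
    print(f"{years} years: {p_at_least_one:.1%} chance of at least one such year")
```

Over a 60-year investing lifetime the chance approaches 6%, which is hardly negligible.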

The correspondence between the projected 3 standard deviation loss level and the actual losses on these portfolios is high: 83%. QPP projected that the Yale U. portfolio would lose 27% in a 3 standard deviation event, whereas the S&P500 would lose 37% in a 3 standard deviation event. In other words, QPP projected that the Yale portfolio would provide 10 percentage points of downside protection vs. the S&P500 in a really bad year. This was a pretty fair estimate of the downside protection provided by the Yale portfolio vs. the S&P500 for 2008. We see similar results for the range of portfolios. Overall, the returns in 2008 for these lazy portfolios match QPP’s projected 3 standard deviation losses quite closely, and the relative risk between the various portfolios is well characterized.
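The underlying arithmetic is simple: the projected three standard deviation loss is just the projected mean return minus three standard deviations. A sketch with illustrative inputs (these are not QPP's actual projections):

```python
# Illustrative (not QPP's actual) annual return assumptions.
portfolios = {
    "S&P500 (illustrative)":          {"mean": 0.08, "stdev": 0.15},
    "Diversified mix (illustrative)": {"mean": 0.07, "stdev": 0.11},
}

for name, stats in portfolios.items():
    loss_3sd = stats["mean"] - 3 * stats["stdev"]
    print(f"{name}: 3 standard deviation outcome = {loss_3sd:+.0%}")
```

With these made-up inputs, the higher-volatility line works out to -37% and the lower-volatility mix to -26%, illustrating how diversification that lowers portfolio standard deviation shows up directly in the projected extreme loss.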

QPP projected the potential for extreme losses in these portfolios. Even so, QPP was also projecting that these extreme losses were unlikely—and this is a key issue for a portfolio management tool. Portfolio and risk management tools will tell you the following: if things get really bad, this is how much you can lose. They will not tell you in any certain terms whether something was a 1-in-1000 event. Standing at the end of 2007, QPP assigned roughly a 1-in-1000 probability to losses of the size that occurred in 2008.

What really matters is not whether 2008 was, in fact, a 1-in-1000 event or a 1-in-100 event. We will never be able to settle that question one way or the other. What is important is to grasp that this type of event must be survivable, because rare events do happen.

I think that it is useful to put something like a 3 standard deviation event in context. 1-in-1000 events are rare but not impossible. The odds of the average American being struck by lightning are about 1-in-280,000[7]. This is rare, but I take precautions if I am out when lightning is around. We clear the soccer fields, etc. The probability of being killed in a car accident in any given year is about 1-in-5000. That is a low probability, but I drive a Volvo and wear a seatbelt all the time. If you get struck by lightning while hiking, this is not a “Black Swan” event; it is simply rare. Our calculations are saying that events like 2008 are far more probable than being killed in a car accident or getting struck by lightning.

Humans are not at all good at processing probabilities—especially low probability extreme events. This issue is extensively documented in the field of heuristics. Directly after a catastrophic rare event (like a hurricane) impacts people's lives, they are likely to see such events as significantly more probable than they do when no such event has happened for a long time. This effect is well documented in sales of insurance. Demand (and premiums) for hurricane insurance drop more and more as the period since the last major hurricane strike increases. Right after a hurricane, people rush to buy insurance. In the stock market, this effect is even more significant because people's actions drive the events themselves.

How do I think people should use risk estimates? First, if you have a portfolio and you cannot survive the projected 3 standard deviation event, you need a different portfolio. This does not mean that you necessarily believe that the model that makes the projections is perfect, but simply that it has some value in projecting risk. After all, models are imperfect and they are probably more likely to under-estimate the probability of extremely rare events than to over-estimate them—simply because there are so few of these events in history that can be used to develop models.

Broadly speaking, there are two ways to manage the potential impacts of rare events. One approach is to reduce the aggregate risk in your portfolio so that rare (but not astronomically rare) events are survivable. Another approach is to specifically attempt to manage “tail risk.” Mohamed El-Erian discusses how to manage tail risk in When Markets Collide (p. 279). His discussion focuses on the availability of cheap “tail insurance.” When he was writing that book, global market volatility was low—so it was inexpensive to purchase put options to protect against extreme losses.
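The mechanics of "tail insurance" with a protective put can be sketched as follows; the prices, strike, and premium here are hypothetical, not market quotes:

```python
share_cost = 100.0   # hypothetical purchase price of the stock
put_strike = 85.0    # hypothetical strike of the protective put
put_premium = 2.0    # hypothetical cost of the put

def protected_pnl(price_at_expiry: float) -> float:
    """Profit/loss per share for a stock position plus a protective put."""
    stock_pnl = price_at_expiry - share_cost
    put_payoff = max(put_strike - price_at_expiry, 0.0)  # put pays below the strike
    return stock_pnl + put_payoff - put_premium

for price in (60.0, 85.0, 100.0, 120.0):
    print(f"stock at {price:.0f}: P&L = {protected_pnl(price):+.1f}")
```

However far the stock falls, the loss is capped at 17 per share in this example (the 15-point gap down to the strike plus the 2-point premium): that cap is the "tail insurance," and the premium is its ongoing cost.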

How many investors were disciplined enough to buy put options on a consistent basis? Surprisingly few, even though it was quite evident that these options were very cheap due to unsustainably low levels of market volatility[8]. In the post-2008 world, options are (in general) very expensive[9]: the market has gone from low risk aversion to high risk aversion. In this type of market, some cushioning against losses can still be achieved by selling covered call options, to the extent that the premium obtained offsets declines.
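A covered call works the other way around from a protective put: it gives up upside beyond the strike in exchange for a premium that cushions declines. A sketch with hypothetical numbers:

```python
share_cost = 100.0    # hypothetical purchase price of the stock
call_strike = 105.0   # hypothetical strike of the call sold
call_premium = 4.0    # hypothetical premium received

def covered_call_pnl(price_at_expiry: float) -> float:
    """Profit/loss per share for a stock position plus a short call."""
    stock_pnl = price_at_expiry - share_cost
    call_payoff = max(price_at_expiry - call_strike, 0.0)  # owed to the call buyer
    return stock_pnl - call_payoff + call_premium

for price in (80.0, 100.0, 105.0, 120.0):
    print(f"stock at {price:.0f}: P&L = {covered_call_pnl(price):+.1f}")
```

In this example the premium softens a decline (a 20-point drop costs 16 rather than 20), while gains are capped at 9 per share no matter how far the stock rises. This cushions losses; unlike the put, it does not cap them.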

Beyond managing total portfolio volatility and managing tail risk with options, there are other, more nuanced strategies. One of these is to differentiate between stocks in individual companies on the basis of the risks implied in the options markets[10]. It is also possible to examine the relative risks associated with individual stocks and to avoid those that carry excessive risk[11].

We should always recognize that portfolio and risk management models are just models: the map is not the territory. That said, good models can provide substantial insight.

What should investors know about managing the risks associated with severe losses due to low probability events?

  1. Risk models do not predict whether you will have a loss next year—they predict the potential for losses of various magnitudes
  2. Risk models are useful in estimating the magnitudes of potential losses
  3. The relative risk reduction that can be achieved via asset allocation is broadly quantifiable
  4. Investors will be well served by thinking about the potential for severe events and how to prepare—and risk models can help as “what if” tools

Endnotes

[1] http://finance.yahoo.com/expert/article/yourlife/132246

[2] http://seekingalpha.com/article/103924-risk-management-for-all

[3] http://www.nytimes.com/2009/01/04/magazine/04risk-t.html?pagewanted=1&_r=1&partner=permalink&exprod=permalink

[4] http://www.marketwatch.com/lazyportfolio

[5] http://seekingalpha.com/article/109927-the-volatility-bubble-average-daily-change-now-above-4

[6] http://biz.yahoo.com/ts/090106/10456218.html?.v=1

[7] http://www.lightningsafety.com/nlsi_pls/probability.html

[8] http://seekingalpha.com/article/27508-foreign-and-domestic-market-risk-outlook-from-february-2007

[9] http://seekingalpha.com/article/107756-profiting-from-risk-aversion

[10] http://seekingalpha.com/article/107756-profiting-from-risk-aversion

[11] http://seekingalpha.com/article/68135-using-default-risk-to-limit-downside-in-individual-stock-investing

Source: Risk Management Lessons From 2008