Learning From the 2008 Market Debacle


This is a follow-up to a 1/19/09 article in which I took Morningstar to task for its January 16 article, Our Biggest Mistakes of 2008. I suggested that Morningstar’s list may have omitted its biggest mistake: too much youth. All seven of the errors cited by Morningstar struck me not so much as legitimate lessons to be learned from what happened in 2008 but as garden-variety symptoms of inexperience. However, my critique did not address the kinds of lessons that could have been learned by market veterans, many of whom were also burned badly last year. Opinions on this will necessarily vary widely, but I’ll take a stab at the topic right now.

Mathematics cannot overcome the basic fundamentals

Quantitative finance is not new. Peter Bernstein has well chronicled its long ascent in such works as Capital Ideas (Free Press 1993) and Capital Ideas Evolving (Wiley, 2007). But lately, it seems as if the quants have moved more than ever to center stage, achieving something akin to rock-star status. Now, we have books not just on their theories, but also on them as people: i.e., Derman, My Life As A Quant (Wiley, 2007), and the Lindsey & Schachter How I Became a Quant (Wiley, 2007) anthology. And of course we have something like Jiu, Starting Your Career as a Wall Street Quant (Outskirts Press 2007) which, according to an author’s message presented on Amazon.com, helps you get your foot in the quant finance door even if “you have never had formal training in finance, economics or financial engineering.”

Actually, that last quote may point most directly to the problem. Perhaps we’ve been suffering from too much mathematics and not enough finance or economics.

Topical example: What happens if you lend a lot of money to someone who has no chance of being able to repay? People with backgrounds in finance or economics (as well as most rational adults) will assume the lender is going to lose out. But the mathematicians had other ideas. They were confident they could protect the lenders by combining, packaging, and valuing these loans in complex ways the rest of us are too simple to comprehend. While I don’t pretend to understand everything about the way they modeled all this, I do recognize one thing. The lenders still got burned, and unfortunately, so much so as to spill over into the global financial system as a whole.

We probably need to reevaluate the role of quantitative finance

Quants are here. They will stay here. They should stay. They add an important layer of intellectual rigor that might otherwise be absent, and I believe my own work has been enhanced by exposure to many of their contributions.

But notwithstanding my respect for the substance of what they bring to the table, 2008 does seem to have driven home the importance of not getting so carried away with math that we lose track of the basics. Quantitative analysis should support and enhance fundamental analysis, not the other way around.

Quants need to come off the pedestals and interact with and be evaluated like the rest of us.

And they need to communicate effectively with us so we know what exactly they are doing. Here, for example, is the first paragraph of a section of a paper describing a modeling approach that’s of great interest to me right now:

Consider the following Gaussian regime-switching model for the sample path of a time series, {y_t}, t = 1, ..., T:

y_t = x_t B_St + s_St e_t

e_t ~ i.i.d. N(0,1)

where y_t is scalar, x_t is a (k x 1) vector of observed exogenous or predetermined explanatory variables, which may include lagged values of y_t, and S_t = i is the state variable. Denote the number of regimes by N, so that i = 1, 2, ..., N. We begin with the case where N = 2. In addition to aiding intuition, the two-regime case is a popular specification in applied work.

So how much benefit do you think I got from the rest of that tome? (Actually, that’s the simple part; the paper gets considerably more complex as it progresses.)

Strictly speaking, I should provide a source for this quoted material. But I hope the Seeking Alpha editors will indulge me here in my reluctance to call out one individual quant for a communication style that is commonplace throughout the field. (By the way, in the original paper, several of the letters were really Greek characters and the “t” terms were all subscripted while “T” was superscripted.) In the past, I’ve even quipped to some colleagues that when I retire, I may keep myself busy by launching a newsletter that translates quant finance papers into readable English. I say this tongue-in-cheek because I don’t know that there’s money to be made here, but it could produce big benefits for the quants and their non-academic constituents.
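In that translating spirit: stripped of notation, the quoted passage just describes a series whose coefficient and volatility change with an unobserved regime. Here is a minimal simulation sketch of the two-regime case, with all parameter values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for the two-regime (N = 2) case.
b = np.array([0.8, -0.5])     # regime-specific coefficient B_St
sigma = np.array([1.0, 3.0])  # regime-specific volatility s_St
p_stay = 0.95                 # chance of remaining in the current regime

T = 500
y = np.empty(T)
state = 0
x = 1.0                       # here x_t is simply the lagged value of y_t
for t in range(T):
    if rng.random() > p_stay:  # occasionally switch regimes
        state = 1 - state
    # The quoted equation: y_t = x_t * B_{S_t} + s_{S_t} * e_t
    y[t] = x * b[state] + sigma[state] * rng.standard_normal()
    x = y[t]

print(y[:5])
```

In words: most of the time the series behaves according to one regime’s coefficient and volatility, but every so often it flips to the other, which is exactly the intuition the paper’s notation is encoding.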

Many, especially those with quant backgrounds, might assume my lack of comprehension means I’m the one with the problem. Perhaps that’s so. I have an MBA (Finance) and a JD, but not a degree in math.

But I’m not walking into the world of higher mathematics. Instead, the author of the above-quoted paper and his colleagues are entering the world of day-to-day finance. If quants are going to ask us to listen to what they have to say and apply their work (and pay them handsomely for it and give them lots of glory), then the least we should require is that they explain what they’re doing in a way that is understandable. It need not be comprehensible to a general mass audience. But it should be understandable by the mainstream of the investment community, the ones expected to implement their ideas.

Besides explaining their work, quants also need to have it evaluated under clear success-or-failure criteria, such as asset managers face all the time (when their performance is compared to benchmarks).

The headline view looks pretty bad. In 1987, quants promised us portfolio insurance. They delivered a crash. In 1998, Long Term Capital Management promised its clients strong risk-controlled performance. They imploded instead, nearly bringing down the financial system as well. Recently, quants promised to contain the risks of sub-prime lending. Instead, it looks like they may have destroyed the banking industry (so far, it’s hard to tell because the story continues to unfold). This cannot continue.

Not everything done by quants involves toxic derivatives. There are many good things, such as the way they evolved beyond such classics as the capital asset pricing model or the Markowitz efficient frontier toward more pragmatic upgrades like arbitrage pricing theory, the re-sampled efficient frontier, Black-Litterman, the conditional capital asset pricing model, etc. How are these approaches working? Outfits like Barra and Ibbotson (ironically, a subsidiary of Morningstar!) have been achieving success. But it would be nice if such matters could be articulated through some sort of open, and comprehensible, assessment rubric. Accountability is not there just to bust people for bad work. It also serves to monitor and give kudos to good work. Unfortunately, we’re presently seeing the consequences of our inability to separate the wheat from the chaff.

Many quants need to beef up their “domain knowledge” or learn to communicate and collaborate more effectively with others who have it.

This call for more collaboration isn’t just for show. There really is much to be gained. Consider the obviously-failed efforts to control sub-prime mortgage default risk. What research was done? I have to assume there was considerable study of default rates, recovery experience, etc. But how was the data sample defined? Did they just grab all the numbers they could find stretching as far back in time as possible? Or, did they fine-tune their samples to calculate probable default and recovery rates when the ratio of debt-service costs to disposable income has been at levels similar to what we were seeing in the mid-2000s? I strongly suspect that had correct samples been specified, our current situation would not be seen as an unfavorable “outlier” but as an instance of the maximum-probability scenario. Quants are great at crunching data. But it takes domain knowledge to put it into a proper, relevant context.
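To make the sampling point concrete, here is a toy sketch (all numbers and column names invented) of how conditioning a default-rate estimate on the debt-service environment can change the answer dramatically:

```python
import pandas as pd

# Hypothetical loan-history data: annual default rates alongside the
# ratio of household debt-service costs to disposable income.
history = pd.DataFrame({
    "year":         list(range(1980, 2005)),
    "default_rate": [0.02] * 20 + [0.06, 0.07, 0.08, 0.09, 0.10],
    "debt_service": [0.10] * 20 + [0.13, 0.135, 0.14, 0.145, 0.15],
})

# Naive sample: grab every year available, as far back as possible.
naive_estimate = history["default_rate"].mean()

# Conditioned sample: only years whose debt-service burden resembles
# the stressed environment in question (here, above 12% of income).
similar = history[history["debt_service"] > 0.12]
conditioned_estimate = similar["default_rate"].mean()

print(naive_estimate, conditioned_estimate)
```

With these made-up figures, the all-history average suggests roughly a 3% default rate while the conditioned sample suggests 8%, which is the difference between calling a bad year an “outlier” and calling it the expected case.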

(NOTE: The foregoing presumes that the quants involved with sub-prime worked in good faith, as do many quants I know who specialize in equities and conventional corporate fixed income. I realize, though, from recalling Frank Partnoy’s FIASCO (Penguin 1999) that this is a loaded topic, albeit one that is beyond my ability to address.)

We definitely need to reevaluate, and upgrade, the role of finance history

Business schools that teach finance and economics already offer huge helpings of quant-oriented coursework. But what about history?

Sadly, it is quite normal for students to graduate from such programs without knowing who John Law was, without knowing about the South Sea bubble, without understanding the canal mania of the late 1700s or the railway mania of the mid 1800s, or without even understanding why the tulip is more than a nice flower.

Educators need to reassess their priorities, lest the class of 2050 enter Wall Street saying “dot-com what?” or “Bernie who?” (it’s not just Madoff; don’t forget Ebbers), or not knowing what sub-prime mortgages were.

It’s nice to be able to calculate a conditional value at risk, which purports to tell us how much we could lose if we experience a worst-case scenario. But is that really necessary? Don’t we already know that if the worst happens the amount we’ll lose will be enough to destroy our nest-eggs, wreck our careers, etc? How vital is it that we put a specific number on it? (By the way, has anybody dug up any past CVAR presentations to see if whatever numbers were bandied about were close to being accurate?) Or might we be better off having a perspective that helps us recognize when the probability of a worst-case scenario is becoming disturbingly high?
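For reference, conditional value at risk (also called expected shortfall) is simply the average loss over the worst tail of a return distribution. A minimal sketch, using a made-up return sample:

```python
import numpy as np

def cvar(returns, alpha=0.95):
    """Average loss in the worst (1 - alpha) fraction of outcomes."""
    returns = np.sort(np.asarray(returns, dtype=float))
    cutoff = int(np.ceil((1 - alpha) * len(returns)))
    tail = returns[:cutoff]   # the worst outcomes
    return -tail.mean()       # reported as a positive loss figure

rng = np.random.default_rng(1)
sample = rng.normal(0.0005, 0.02, size=10_000)  # hypothetical daily returns
print(cvar(sample))  # average loss on the worst 5% of days
```

Note what the number cannot tell you: it is only as good as the sample fed into it, which is exactly the historical-perspective problem discussed above.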

Sub-prime is not the first time imprudent lending caused trouble. An understanding of history would have supplied important context. And it’s not as if we have to go back to the 1600s to get the necessary perspective. The 1990s, both with the U.S. S&L crisis and plenty of bad lending in the emerging economies, and the late-1980s junk bond experiences would have been ample to warn anyone with open eyes that sub-prime lending would likely end badly. And the portfolio insurance disaster of 1987 should have warned us not to count so much on newfangled derivatives to overcome fundamentals.

I urge the educational community to amend its degree standards to require a solid understanding of financial and economic history, and to promote these disciplines as vital areas worthy of new scholarly research.

Santa Claus, the Tooth Fairy, and robust modeling

If you don’t foresee trouble in the markets and, hence, continue to follow bull-market strategies, you’re going to feel pain. But however much it hurts, you at least can understand why it happened and hopefully, you’ll know where to focus your improvement efforts.

But suppose you correctly anticipate hard times and pursue a strategy that takes this into account, but you get hammered anyway. That really hurts. And that’s exactly what happened to many of us in 2008.

This isn’t the first bear market I’ve seen. I have quite a few screens that have historically coped well with bear markets (not by posting positive results – I don’t claim to offer absolute returns – but by losing considerably less than the market as a whole). But this time around, they bombed. They weren’t worse than most other models I’ve seen. But they weren’t better either.

It’s quite apparent that the models weren’t nearly as robust as I had hoped they would be.

On one level, it would seem a simple matter. I just need to get better, to test my models more thoroughly and make darn sure they are robust. And indeed, I am seeing more indications of robustness in some of the new protocols I’m building. Or am I?

It’s nice to dream about a model that will work well (or at least adequately) in all circumstances. But perhaps one important lesson of 2008 is that however convinced we are that we’re looking at a good-for-all-seasons approach, robustness is more likely to be little more than a mirage.

We can test out of sample, measure statistical significance, etc. all we want. But no matter how many scenarios we study, we must always respect the world’s ability to throw something new at us.

In the past, many reacted to such uncertainties by saying that we should stick with a good model even during periods of temporary poor performance because over the long term, the merits of the model will win the day. The problem is how poor that poor period is. We can’t simply shrug our shoulders and allow ten years of excellent performance to be more than wiped out in six months.
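The arithmetic behind that complaint is stark. With hypothetical round numbers (a decade of 8% annual gains followed by a 55% drawdown, figures invented purely for illustration), the entire decade is more than erased:

```python
# Ten years of 8% annual gains, then a 55% drawdown in six months.
start = 100.0
after_decade = start * 1.08 ** 10        # about 215.9
after_crash = after_decade * (1 - 0.55)  # about 97.2, below the start
print(after_decade, after_crash)
```

A portfolio that more than doubled over ten years ends up below where it began, which is why “the long term will win the day” is cold comfort when the poor period is poor enough.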

Market timing needs to move front and center

In my last article, when I took Morningstar to task for its having learned too late that the economy counts, I stated what many market veterans know about economic forecasting: “Well yes it’s hard! Who ever said this is supposed to be easy?” But even those who are well accustomed to facing the hazards of predicting the economy run far and run fast when the phrase “market timing” is uttered.

There is a difference between market timing and economic forecasting, so it is feasible for a practitioner to address one but cringe at the other. With the economy, it’s a matter of trying as best one can to predict future conditions. That is, indeed, incredibly difficult. But imagine the task of the market timer. The latter must not only predict those same conditions, but also how investors will feel about them!

For example, an economist who correctly predicts interest rates will rise can stop and take a bow. The market timer, however, must go further: Will investors take rising interest rates as a leading indicator of future profits and, hence, turn bullish? Or will investors allocate money away from stocks because less-volatile fixed income alternatives offer improved returns? Right now, we expect that strong corporate earnings and/or economic indicators would be bullish. But that’s not always so. Sometimes, investors take these as portents of higher interest rates, or even indications that rates will no longer be cut, and drive stock prices lower.

Oh yes, market timing is brutal. But, we still have to try our best to do it. We now know that the consequences of being caught on the wrong side of the market, even for the short term, are too large to ignore.

This doesn’t necessarily mean we have to predict hour by hour, day by day, or even week by week. If we can, by all means, let’s do it. But we have to be realistic. There’s only so much in the way of timing we’re likely to be able to do. And it’s not even clear we’ll be able to do it at all using objective fundamental or technical approaches. We may have to supplement these with some seat-of-the-pants judgment. We’ll see. Much research still needs to be done.

One thing we do know is that we have to try our best to implement timing protocols that, even if imperfect, can at least provide credible signals of major movements.
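What might a “credible signal of major movements” look like in its crudest form? One classic, widely known example from the technical toolbox (offered here as an illustration, not as the specific protocol I use) is a trailing moving-average filter. A minimal sketch, with a hypothetical price path:

```python
import numpy as np

def timing_signal(prices, window=200):
    """Flag 1 (risk-on) when price sits above its trailing moving
    average, 0 (risk-off) otherwise. Deliberately crude: it aims only
    to flag major trend changes, not day-to-day wiggles."""
    prices = np.asarray(prices, dtype=float)
    signal = np.zeros(len(prices), dtype=int)
    for t in range(window, len(prices)):
        trailing_avg = prices[t - window:t].mean()
        signal[t] = 1 if prices[t] > trailing_avg else 0
    return signal

# Hypothetical price path: a long steady rise, then a steep slide.
prices = list(range(100, 400)) + list(range(400, 150, -1))
sig = timing_signal(prices)
```

A filter like this will always be late and will whipsaw in choppy markets, but it illustrates the modest goal stated above: catching the big moves, even imperfectly, rather than predicting every wiggle.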

Flow of funds needs to gain prominence

To some degree, we’ve always known that it was important to understand the flow of funds. But looking to the future, we’re going to need to step it up, a lot!

I’m not talking about technical-analysis indicators like the Money Flow Index, On Balance Volume, or the Accumulation/Distribution Index. I’m talking about a serious understanding of who owns assets and the financial constraints that impact their decisions.

One reason why so many normally-sound stock-selection protocols bombed in 2008 is that large-scale money-flow issues trumped the sort of asset-specific fundamental analysis upon which so many models are based.

Consider, for example, a hedge fund that needs to quickly raise cash. What will it sell: shares of fundamentally great companies in its portfolio, or some busted derivatives it wishes would vanish? Obviously, it would love to sell the latter. Unfortunately, it can’t because there are no buyers. So it’s forced to slice away at its best positions, the ones it wishes it could hold but has to dump because those are the ones for which there are willing buyers. Multiply scenarios like this over and over and over again, and you start seeing one of the reasons why just about all fundamental models tanked last year: by pointing to the best companies, they were also pointing to stocks that were easiest for strapped holders to sell.

This is why I’ve often quipped that the only stock-selection rule worth using in 2008 required one to answer the question “Who owns the stock and how desperately do they need to raise cash?”

The bad news is I cannot now see a way that detailed flow-of-funds analysis at this level can be implemented. We have some publicly available data on positions of major equity investors. But the amount of data we have, and its timeliness, falls far shy of what we really need. We’d want to know about all holders, and about all of their financial conditions. It would require a radical overhaul of everything we now do in the area of shareholder disclosure (not to mention a substantial shift in our capital-market culture, which presently disdains heavy disclosures like that). If it happens at all, I seriously doubt it would occur during my professional life span, or those of any others who read this.

So at present, the spit-and-chewing-gum approach is all we can manage. It means using whatever sentiment gauges we can get our hands on right now, supplemented by constant attention to what’s happening in the world with an eye toward this topic.

Looking ahead

Some of the lessons discussed here are beyond the ability of any one person to implement. Since I’m not king of the world, I cannot command educators to beef up their attention to history nor can I singlehandedly restructure the relationship between quants and the mainstream investment community.

But there are some things any of us can unilaterally do. Speaking for myself, besides resolving to stay more aware of deeper flow-of-funds issues, I’m working in the area of “regime switching” (this being the topic of the quant paper I previously quoted). I’m not a mathematician, so I won’t be doing anything like what’s described in the academic literature on the topic. But I am working to bring regime switching to the screening and rankings approaches with which I am proficient.

I’ve already touched on related topics in previous Seeking Alpha postings: a 10/18/08 article on the Portfolio123.com market-timing model, a 10/23/08 article introducing a rules-based approach to isolating an equity universe designed for bear markets, and a series of additional articles published around then describing bull- or bear-specific sets of stock-selection rules. Early results with regime switching look promising. But I am handling this with care, because I don’t want to slip into the trap of simply fighting the last war. I’ll submit future articles on this topic as I progress further.

One topic I haven’t overtly addressed is the need for asset allocation. I don’t see this as a 2008 lesson per se, since I and many others in the investment community have always understood and respected this discipline. What’s needed is for us to do it better under extreme conditions, and I believe we can accomplish that if we raise our game in areas like regime switching and market timing. I’m also keeping my eyes open as to whether my market timing and regime switching efforts will make it feasible for me to continue with my largely bottom-up approach, or whether I’ll also need to add industry- and/or region-oriented top down features to my approach.