
John Mason’s recent Seeking Alpha review of "The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It" by Scott Patterson has provoked an interesting controversy among those who posted comments, with some vilifying quants and others suggesting they performed well and attributing the misfeasance to others. Interestingly, opinion was similarly divided among Amazon.com reviewers, where, as of this writing, there were 10 five-star reviews, nine one-star reviews and three in between.

I’d like to expand here on the Seeking Alpha comment I posted, in which I argued that quants are capable of, and have made, great contributions to investors, but that we need to distinguish between good quants and bad quants, the latter having been all too numerous lately.

The need to differentiate among quants based on competence should not raise any eyebrows. We do it all the time in other walks of life: medicine, law, professional sports, business management, stock picking and so on. Any discussion that lumps all quants together without attempting to differentiate based on talent is bound to turn out shallow.

Understandability

The problem here is that most of us don’t have the wherewithal to make these competence distinctions, because the things quants do are so mysterious to so many. A couple of years ago, I even semi-seriously mused with some former colleagues about an interesting business venture I might launch to keep myself busy after I retire: a journal that would take interesting research in academic finance and translate it into understandable English.

If Wall Street is to continue to work with quants in the future, as I think it will and should, I believe it is critical that non-quantitative investors get tough on the understandability issue. If a quant can’t or won’t explain what he or she is doing in language the average investment advisor can understand, that quant should be sent packing. There should be no compromise here. Working with a quant who won’t clarify his or her insights makes as much sense as paying millions to a baseball player who will not allow his batting average to be tracked and published.

Domain Knowledge

The need for communication is not part of a quest for “touchy feely.” It’s necessary to help make the models useful in the real world. It’s one thing to be well trained in mathematics and statistics. It’s quite another to understand what makes businesses tick and how ownership interests tend to be valued. The latter, “domain knowledge,” is vital if quants are to successfully contribute to the investment community or any other field in which they work.

To show how important it is to bring domain knowledge into the picture, and how easy it is for even the best quant efforts to falter in its absence, consider as an example one of the hallmarks of academic finance: the capital asset pricing model (CAPM) developed by Nobel laureate William Sharpe. In essence, it tells us to calculate the return we should require an asset to produce as follows:

Ra = Rf + B(Rm – Rf)

where:

Ra = Required return of the asset
Rf = Risk-free rate of return
Rm = Return of the market
B = Beta (to be discussed below)

It’s tempting to make light of this model because our generation of investors grew up under its auspices and accepts its core ideas as ingrained truths. But imagine a pre-CAPM world: one with no understanding of the huge extent to which returns on an individual stock are influenced by the movements of the overall market, no understanding of how to think about the idiosyncratic aspects of a stock’s return (that which reflects factors unique to the company) and no understanding of the need for some sort of rational connection between returns on risky and risk-free assets! CAPM has, indeed, taught us much.

Still, this model has been pilloried in the decades since it came out. For one thing, as an early offering it made for a ripe target: the army of academicians who needed to publish or perish had fewer alternatives on which to focus their research. Beyond that, as good a first offering as it was, it really does leave much to be desired as a comprehensive expression of what causes returns to be what they are.

It is not my intent here to thoroughly dissect the CAPM. There’s a mountain of literature that already does that. I want, instead, to pick one easy-to-understand element -- beta -- and show how easily it can lead us astray if domain knowledge is absent, or even downplayed.

Beta In Theory And In The Real World

Beta is a statistic that measures how volatile a security is in comparison to the market. For this article, I’m going to accept the standard definition of the market, the S&P 500.

A stock that is exactly as volatile as the S&P 500 will have a beta of 1.00. More volatile stocks will have higher betas. Less volatile stocks will have lower betas. So, for example, a stock with a beta of 1.25 would be 25% more volatile than the market.

Suppose the risk-free return is 2% and the market return is 8%. The CAPM tells us that in order to invest in a stock with a 1.25 Beta, we’d have to be comfortable assuming we could earn a return of 9.5%. If Beta were 0.72, we could invest if we thought the return would be 6.3% or better (actually, efficient market theorists tell us not to bother expecting “or better,” but that’s well beyond the topic of this article).
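For readers who like to check the arithmetic, here is a minimal sketch in Python; the function name and the inputs simply restate the example above and are not part of the CAPM literature itself.

```python
def required_return(risk_free, market_return, beta):
    """CAPM: Ra = Rf + B * (Rm - Rf)."""
    return risk_free + beta * (market_return - risk_free)

# The example above: 2% risk-free rate, 8% market return.
print(required_return(0.02, 0.08, 1.25))  # 0.095  -> 9.5%
print(required_return(0.02, 0.08, 0.72))  # 0.0632 -> roughly 6.3%
```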

Notice what Beta, as used in the CAPM, is doing. It’s telling us that in order to invest in the riskier (more volatile) security we’d need to earn a higher return to compensate for the extra risk we’re taking. In practical terms, that would probably translate in the minds of most investors to a requirement that the P/E be lower.

Whatever we think of the details, one thing is clear: There is a heck of a big difference between how we should look at a stock with a beta of 1.25 versus the approach we should take with an issue having a beta of 0.72. So we really would need to make sure the beta we’re using is correct, or at least pretty darn close to being correct. A beta that isn’t serving as an accurate barometer of risk will point investors in the wrong direction. And that is where the nightmare (or fun, depending on one’s point of view) begins, assuming one can accurately assess risk-free and market returns (two troublesome topics in their own right).

A simple formula for calculating Beta is to start with the covariance of market and stock returns and divide by the variance of the market return. If you want to download some Yahoo Finance pricing data and fiddle around with this, use adjusted stock prices (which factor in the impact of dividends) and the Excel functions COVAR(a,b) and VARP(a).
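For readers who would rather do this in Python than Excel, here is a minimal sketch. It assumes you have exported weekly price histories for the stock and for the S&P 500 (ticker ^GSPC) from Yahoo Finance as CSV files with Date and Adj Close columns; the file names are placeholders. The population covariance and variance below mirror Excel’s COVAR and VARP.

```python
import pandas as pd

def weekly_returns(csv_path):
    """Weekly percentage returns from the dividend-adjusted close in a
    Yahoo Finance CSV export."""
    prices = pd.read_csv(csv_path, parse_dates=["Date"], index_col="Date")
    return prices["Adj Close"].pct_change().dropna()

# Placeholder file names for weekly-history exports of GE and the S&P 500 (^GSPC).
stock = weekly_returns("GE_weekly.csv")
market = weekly_returns("GSPC_weekly.csv")
stock, market = stock.align(market, join="inner")  # keep only the common dates

# Population covariance and variance, mirroring Excel's COVAR(a,b) and VARP(a).
cov = ((stock - stock.mean()) * (market - market.mean())).mean()
var = ((market - market.mean()) ** 2).mean()
print(round(cov / var, 2))  # the Beta estimate
```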

For General Electric (NYSE:GE), a nice blue-chip stock that’s often seen as reasonably representative of the U.S. economy and which, for many years, has served as a cornerstone selection in many portfolios, I used weekly returns to calculate a Beta of 1.18 from 12/26/61 (the first day Yahoo had price data) through 2/18/10.

There. To borrow a phrase from the ad agency used by Staples -- that was easy.

Or was it?

Was I right to use weekly returns, or should I have used daily, monthly, quarterly or annual returns?

Let’s assume my use of weeklies was OK. I’m not saying it absolutely is, but for the purposes of this article, I prefer to investigate a different aspect of the calculation.

Was I correct to go all the way back to the end of 1961 and grab as much data as I could get?

Major commercial databases that offer pre-calculated information use shorter periods. Thomson Reuters, for example, goes back five years. For GE, however, this is much ado about nothing; the five-year beta I calculated is 1.17.

Now let’s have some fun. Table 1 shows betas I calculated for calendar years 2003-2009.

Table 1

Year     Beta
2003     1.12
2004     1.01
2005     0.60
2006     0.87
2007     0.62
2008     0.89
2009     2.01
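Here is a second hedged sketch, repeating the helpers from the earlier one so it stands on its own, that groups the weekly returns by calendar year before applying the same covariance-over-variance calculation. The exact figures you get will depend on the data vintage you download.

```python
import pandas as pd

def weekly_returns(csv_path):
    """Weekly percentage returns from a Yahoo Finance CSV export."""
    prices = pd.read_csv(csv_path, parse_dates=["Date"], index_col="Date")
    return prices["Adj Close"].pct_change().dropna()

def beta(stock_returns, market_returns):
    """Population covariance over population variance, as in COVAR/VARP."""
    cov = ((stock_returns - stock_returns.mean()) *
           (market_returns - market_returns.mean())).mean()
    var = ((market_returns - market_returns.mean()) ** 2).mean()
    return cov / var

# Same placeholder file names as in the earlier sketch.
stock, market = weekly_returns("GE_weekly.csv").align(
    weekly_returns("GSPC_weekly.csv"), join="inner")

# One beta per calendar year, in the spirit of Table 1.
by_year = {year: beta(s, market.loc[s.index])
           for year, s in stock.groupby(stock.index.year)}
print(pd.Series(by_year).round(2))
```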

We start out with a problem in 2005, when what we thought was a stock that carried market-level risk suddenly started to sputter a bit in the direction of conservatism. I’d hate to have been a money manager who cut a GE stake in late 2004 as part of an effort to reduce risk.

But in the scheme of things, that wouldn’t have been the worst sin. Imagine the poor quant-groupie who maintained a GE stake at the end of 2007 as part of a portfolio meant to carry a level of risk equal to or slightly below that of the market. Clearly, Beta was not giving a proper risk signal: it was based solely on historical data and failed to incorporate any qualitative understanding of how the unfolding financial crisis, and the role of GE Capital, was about to impact the shares. As we now know, the risk level at the end of 2007 was massive, but beta failed to show it. As for the high beta we see in 2009, that relates to the way the numbers fell (as GE shares recovered briskly from a deep trough) and not to anything at all in the real world; the crisis was already in the past, and the now-high calculation might have served to shake out a caught-too-long conservative investor at a time when risk was really diminishing.

There’s a lot we could have said about GE and risk. Notice, though, how completely irrelevant Beta was to the conversation.

This is not a one-time thing. Table 2 shows three years from the now-distant-I-suppose past.

Table 2

Year     Beta
1976     1.60
1977     0.89
1978     1.15

What on Earth was going on back then? Frankly, I can’t remember. Obviously, though, something was occurring that was changing GE’s risk levels in significant ways. Whatever it was, Beta was of no help at all. With the number bouncing around so much there was no way a late-1970s investor could have used it to help gauge risk.

I could go on and on and on and on and on, and so too could everyone else. And that’s the problem when quants work in isolation. Absent domain knowledge, either directly on their part, or indirectly through collaboration, their ability to contribute usefully is seriously compromised at best.

Bad Quant

For the record, I have no interest in hearing about probabilities and bad outcomes that are part of the range of expected possibilities and so on and so forth. What happened to GE’s risk levels in the late-2000s was evident to anyone who was willing to look beyond Beta and at what was happening in the world. Any attempt to analyze risk solely in terms of Beta would have been an example of quantitative analysis badly done.

Thinking beyond this admittedly simplified example, it should be easier to see footprints of bad quant work at play in other areas. Consider sub-prime mortgages.

I don’t know; I wasn’t on the inside. But I think I can reasonably imagine two different types of quants addressing the problem, as the sketch following the list tries to illustrate:

  • Quant A calculates how much loss exposure a lender has if defaults turn out as bad as the worst 1% of possible outcomes.
  • Quant B calculates how much loss exposure a lender has in a portfolio of mortgages wherein the average borrower is paying a much higher percentage of disposable income than ever before as debt service and defaults turn out as bad as the worst 1% of possible outcomes.
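The following is a deliberately stylized sketch, not a description of any actual bank model: every number and distribution choice is invented for illustration. The only point is that Quant B stresses the default assumption to reflect the borrowers’ unprecedented debt-service burden, while Quant A takes the historical default rate at face value.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loans, balance, recovery = 10_000, 200_000, 0.6  # invented portfolio figures

def tail_loss(default_prob, n_sims=50_000):
    """99th-percentile portfolio loss under a simple binomial default model."""
    defaults = rng.binomial(n_loans, default_prob, size=n_sims)
    losses = defaults * balance * (1 - recovery)
    return np.percentile(losses, 99)

# Quant A: takes the historical default rate at face value.
historical_default = 0.02
print(f"Quant A worst-1% loss: {tail_loss(historical_default):,.0f}")

# Quant B: scales the default assumption up because the average borrower is
# devoting a far larger share of disposable income to debt service than the
# historical sample ever did (the scaling factor is, again, invented).
debt_service_stress = 3.0
print(f"Quant B worst-1% loss: {tail_loss(historical_default * debt_service_stress):,.0f}")
```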

Which quant do you think did the better job of bringing domain knowledge to bear on the modeling effort? Which quant do you think more closely represents those who were on bank and Wall Street payrolls in the late 2000s?

Again, I wasn’t there and can’t say for sure. But based on what happened, I think you can guess my opinion. What’s yours?

Disclosure: No positions in securities mentioned

Source: Quants: The Good, The Bad and the Confused