Fixing The Misspecified Model
As noted, in order to use fundamental data to analyze stocks, we must "separate the ordinary from the unusual."
When studying an individual company, this is easy to accomplish once we are aware of the need for it. We may have to dig into the 10-K and 10-Q documents, since the ratios and margins posted on web sites often won't suffice. But when looking at just one company, the task is manageable. Professional analysts (or their assistants, if they can still afford to employ them) do it all the time.
When aggregating data covering hundreds or thousands of companies, as quants do and as we do with screens and ranking systems, it's not so easy. We have three choices.
- We can close our eyes, grit our teeth, and hope our models don't suffer too much distortion.
- We can think strategically about the kinds of factors we use: emphasize those less likely to be misspecified, or find many different ways to express the same concept, in the hope that a few misspecified items will be overpowered by a larger number that really tell us what we think they're telling us. (Contrary to what quants would have us believe, we're probably better off accepting autocorrelation and looking instead to overpower misspecifications.)
- We can seek new formulas that eliminate, or at least alleviate, the misspecifications to the extent possible.
The second choice is the one most investors who work with large collections of companies can aim at right away. It's the reason why, when creating stock screens or multi-factor ranking systems, I prefer forward-looking P/Es to those that use trailing-12-month earnings (one of the most badly misspecified data series we have). It's why I like ranking systems that use a lot of factors. It's why I'm cautious about free cash flow, a metric not designed to match revenues with expenses and which therefore raises an entirely different sort of misspecification mess.
My efforts to create a metric known as Business Income for use on Portfolio123.com and StockScreen123.com are an example of the third approach, as is another project involving Core Income (an after-tax version of Business Income). If I could roll back the clock 30 years, I'd simply use the label "Operating Income" to describe what I now refer to as Business Income. After all, that's what I called it when I was a junior analyst and recast everything by hand. But today, with database definitions so firmly entrenched, I suspect an attempt to change the by-now standard definition of operating profit would cause too much confusion (especially since the standard definitions match the 10-K labels). Hence my decision to work with new nomenclature.
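The core idea behind a Business Income-style metric can be sketched in a few lines. This is a minimal illustration, not the actual Portfolio123 formula: the field names and the single `unusual_items` adjustment are assumptions standing in for what, in practice, involves several income-statement line items.

```python
# Hedged sketch: approximating a "Business Income"-style recurring figure
# by backing unusual items out of reported operating income.
# Field names here are hypothetical; real databases label them differently.

def business_income(operating_income: float, unusual_items: float) -> float:
    """Recurring pre-tax income: reported operating income with
    unusual gains/charges removed.

    unusual_items > 0 is an unusual gain (subtract it);
    unusual_items < 0 is an unusual charge (add it back)."""
    return operating_income - unusual_items

# A company reporting $500M operating income that includes a -$120M
# restructuring charge has ~$620M of recurring business income:
print(business_income(500.0, -120.0))  # 620.0
```

The point is simply that the adjustment happens *before* the number enters any screen or ranking system, so growth rates and returns computed from it reflect the ongoing business rather than one-time events.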
Test Driving Business Income
Let's create a very simple one-factor ranking system based on trailing 12 month growth in operating income, as traditionally defined. The model assumes the higher the growth rate, the better the stock's prospects.
Figure 1 depicts the results of a test of this system stretching back to 3/31/01 and assuming rebalancing every four weeks. (The test includes all non-OTC stocks.) The red bar on the far left represents the annualized percent return on the S&P 500. The light blue bar on the far right represents the annualized return of those stocks that rank in the top 5%, the second bar from the right represents the return of the next highest-ranked group, and so on.
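The bucket test described above can be sketched as follows. This is an illustrative toy, not the Portfolio123 backtest engine: it ranks a universe on a single factor, splits it into 5% buckets, and averages each bucket's subsequent return. The data is made up purely to show the mechanics.

```python
# Hedged sketch of a one-factor bucket test: rank stocks on a growth
# factor, split into equal buckets, compare average subsequent returns.
from typing import List, Tuple

def bucket_returns(stocks: List[Tuple[float, float]],
                   n_buckets: int = 20) -> List[float]:
    """stocks: (factor_value, subsequent_return) pairs.
    Returns each bucket's average return, lowest-ranked bucket first."""
    ranked = sorted(stocks, key=lambda s: s[0])  # ascending factor value
    size = len(ranked) // n_buckets
    return [
        sum(ret for _, ret in ranked[i * size:(i + 1) * size]) / size
        for i in range(n_buckets)
    ]

# Toy universe of 100 stocks (illustrative numbers only)
toy = [(g / 100.0, 0.05 + g / 1000.0) for g in range(100)]
avg = bucket_returns(toy, n_buckets=20)
print(avg[0], avg[-1])  # lowest vs. highest bucket average return
```

An effective factor shows a roughly monotonic staircase from the lowest bucket to the highest; the point of Figures 1 through 4 is that operating-income growth does not produce that pattern, while Business Income growth comes much closer.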
That's not an effective stock-selection strategy.
Figure 2 shows what happens if I switch from growth in operating income to growth in business income.
Even this is by no means a complete stock-selection strategy. But it does show that the model has been much enhanced by efforts to eliminate unusual items from income before computing growth rates.
Figures 3 and 4 show what happens if the models look instead at three-year change in income (operating income in Figure 3, Business Income in Figure 4).
We suffer in both cases from oddities among the lower-ranked stocks. This is not a great surprise and is consistent with the notion that we usually need more than one factor to explain share price performance. But if I were to go further and develop a more complete model, it seems apparent that use of Business Income provides a much more constructive jumping-off point.
Don't get the idea that any of this is letting management off the hook for the kinds of moves that often wind up causing companies to take big write-offs. We're not sweeping anything under the rug. But when we do create factors that include unusuals, we do so because we really expect the unusuals to be relevant, as opposed to having unusuals stuck in the database in ways we can't control.
Consider, for example, single-factor ranking systems based on two different measures of return on assets: one based on operating profit (Figure 5) and another based on business income (Figure 6).
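The two return-on-assets variants can be sketched side by side. Again, the numbers and the single-adjustment treatment of unusual items are illustrative assumptions, not the actual database definitions.

```python
# Hedged sketch of the two ROA variants compared in Figures 5 and 6:
# one on reported operating profit, one on a business-income-style
# figure with the unusual charge backed out. Numbers are illustrative.

def roa(income: float, total_assets: float) -> float:
    """Return on assets: income divided by total assets."""
    return income / total_assets

# Company with $1,000M in assets and $80M reported operating profit
# that includes a -$30M unusual charge:
assets = 1000.0
operating_profit = 80.0
biz_income = operating_profit - (-30.0)  # back out the charge

print(roa(operating_profit, assets))  # 0.08  (Figure 5 style)
print(roa(biz_income, assets))        # 0.11  (Figure 6 style)
```

The gap between the two numbers is itself information: it measures how large the unusual items loom relative to the asset base, which is precisely the behavior Figure 5 suggests the market penalizes.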
So which version is better? Is it even possible to say one or the other is better?
Actually, I think they're both relevant in different ways. Figure 6 confirms the general idea that return on capital is useful and the notion that it's worthwhile to measure return based as much as possible on normal recurring trends. Figure 5, on the other hand, contains a different piece of noteworthy information. It suggests to us that the stock market is willing to penalize companies for the kinds of behaviors that lead to unusual charges that loom large when measured as a percent of assets.
In the models whose performance is depicted in Figures 5 and 6, we make strategic decisions as to how we want to evaluate unusual income-statement items. We did likewise with respect to Figures 2 and 4. That's the goal.
Figures 1 and 3 are another matter. When we use strategies like those, we wind up, in effect, delegating this important decision to unknown individuals who at some unknown time and place designed the database and who, more than likely, were striving for factual accuracy rather than the sort of analytical judgment articulated by Ben Graham.
In sum, when it comes to investing, the information age has been born, but has a long way to go before it reaches maturity.
Disclosure: No positions in STZ