Profiting From Half-Hearted Analyst Coverage

by: Marc Gerstein

We usually think of stock coverage by Wall Street in either-or terms. Either a stock is covered, or it isn’t. The next level beyond that is the distinction between heavy and light coverage (a lot of analysts watching a stock, or a small number, perhaps just one). But it’s the oddities that often generate interesting investment ideas, the things that fall between the cracks, and one may exist regarding analyst coverage: situations where stocks are covered, but the analysts don’t really relish the task. In other words, they cover the stock, but are apathetic and do a slipshod job. (OK, OK, I know a lot of readers think analysts always do bad work. I want to present an approach to distinguishing careful from half-hearted coverage that may help us uncover worthwhile investment ideas, but to take advantage of it, you’ll have to put aside any disdain you might have for analysts and play along, at least for now.)

The idea behind this strategy sort of relates to the notion of market inefficiency. The less intensely a stock is watched, the less likely it is that the share price will reflect all available and pertinent information. The half-hearted coverage variation is a twist on this. Because the stocks are covered by analysts, we tend to assume their share prices are more likely to be, if not perfectly efficient, then at least somewhere on that side of the scale of possibilities. But if we can find grounds to assume the coverage is half-hearted, we may be onto a fall-between-the-cracks situation: stocks that are not being watched nearly as closely as the casual observer might assume; stocks that have greater-than-expected potential to surprise the investment community.

So how do we define half-heartedness? I can easily tell if I talk to an analyst or read one of their written reports. But even if I could get sufficient access to analysts to go that route, that’s not the sort of evidence that plugs well into a financial database. But I think we might be able to make something of data-points on the number of published estimates.

Normally, when analysts cover stocks, they are expected to produce estimates for three time-frames: the current (still-in-progress) fiscal year (including the quarterly intervals), the next fiscal year (including the quarters), and the long term, which is traditionally thought of as a three- to five-year time frame. Common sense tells us to expect analysts to feel less confidence in their projections as they move further out in time. In other words, the projected “long-term” EPS growth rate is likely to be backed by considerably less evidence and conviction than the current-year figures, and might, in some cases, even be a throwaway number (much the way target prices are). But still, professional custom calls for analysts to come up with something for all three time periods.

What if they don’t? What if an analyst has estimates for the current fiscal year and for the next year, but fails to supply any number (not even something concocted via the Excel random-number function) for the long-term EPS growth rate? I’d say that’s pretty lame, especially since an apathetic analyst can easily pull something off Yahoo! Finance based on what other analysts projected for the same company or from others in the industry. Beyond that, what about an analyst who has a set of projections for the current fiscal year, but doesn’t even bother publishing estimates for next year? That’s beyond lame. That, to me, is the functional equivalent of a research report headlined “Good for you if you care, because I sure as heck don’t.”

There may occasionally be valid reasons why an analyst would fail to follow the three-period custom regarding estimates, but I don’t think there are many.

One possible explanation could be that the stock is a piece of junk and the analyst thinks the world should be grateful for any estimates he or she deigns to provide, but you’ll see below how I take this scenario off the table. Another possibility might be that the stock, while not a dog, may experience diminishing investor interest going forward due to some sort of deterioration. But I’m not buying that. Even assuming analysts are great at seeing this sort of thing (think Housing and Financials coverage circa 2007), the more professional approach would be to present one’s estimates and put a Sell rating on the stock, or if one feels inhibited by such a course of action (a big-time topic for another day), then an analyst can simply discontinue coverage, in which case no estimates at all would be published. Once in a while, we’ll find companies that are about to be acquired or restructured in such a way as to make out-period estimates meaningless. But that doesn’t happen nearly often enough to explain what we see in the way of diminishing estimates. So I’m going to focus on my main theory (apathy), take advantage of the fact that numbers of estimates can be expressed as data-points usable in screening platforms, and build a model that tries to identify stocks that may not be getting the attention they deserve from analysts who’ve been falling asleep at the switch.

Generally, my screen looks for this:

  • The number of estimates of next-year EPS is less than the number for the current year
  • The number of estimates of the long-term EPS growth rate is less than the number for next year’s EPS

Actually, though, we need to work a bit harder than that.

There are, obviously, many things that influence share price performance, so I want to try to reasonably isolate the impact of analyst half-heartedness. So I’m going to compare my basic model with a “control” model that requires the usual estimate profile (wherein the number of next-year estimates equals or exceeds the number of current-year estimates and the number of long-term projections equals or exceeds the number of next-year estimates). I’m also going to use the multi-style QVGM (Quality-Value-Growth-Momentum) ranking system I created and looked closely at last week to identify the top 15 stocks among those passing the main analyst-half-heartedness screen and the top 15 among those that pass the control screen. So in terms of fundamentals, the two lists of 15 stocks should be generally comparable. I’m also going to confine both models to stocks with market capitalizations below $5 billion. As you go up in size, the market gets a lot more efficient, so much so that it takes more than a few drowsy analysts to change that state of affairs (I know this because I tested).

If my idea were perfect, I should expect to see the best performance taking place among the most extreme instances of estimate publication drop-offs. But the results, as shown by backtesting, didn’t go quite that far. Really big drop-offs, whether we’re talking about drop-offs occurring in the context of a large number of covering analysts, or smaller levels of overall coverage characterized by especially large percentage drop-offs, don’t seem to help us. It appears that when the level of analyst tune-out is large, we may be dealing with problematic companies. What we really need to make this idea work is a “sweet spot.” I think my model hits that through market caps and overall levels of analyst coverage small enough to make it easy for share-price inefficiencies to arise, and drop-offs large enough to be noticeable yet not so huge as to constitute red flags.

Here’s the final model:

  • Market Capitalization below $5 billion
  • At least three analysts publishing EPS estimates for the current fiscal year
  • The number of analysts publishing estimates for the next fiscal year is less than or equal to 80% of the number publishing current-year estimates
  • The number of analysts publishing long-term EPS growth-rate projections is less than or equal to 80% of the number publishing estimates for the next fiscal year
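The four rules above boil down to a simple filter. Here’s a minimal sketch in Python, assuming each stock is represented as a dict with hypothetical field names (market_cap in dollars, plus an analyst count for each estimate period); the names are illustrative, not taken from any particular screening platform.

```python
def passes_main_screen(stock):
    """Return True if a stock shows the 'half-hearted coverage' profile."""
    return (
        stock["market_cap"] < 5_000_000_000          # below $5 billion
        and stock["est_current_year"] >= 3           # at least 3 current-year estimates
        and stock["est_next_year"] <= 0.8 * stock["est_current_year"]   # first drop-off
        and stock["est_long_term"] <= 0.8 * stock["est_next_year"]      # second drop-off
    )

example = {
    "market_cap": 1_200_000_000,
    "est_current_year": 10,   # 10 analysts publish current-year EPS estimates
    "est_next_year": 8,       # 8 <= 0.8 * 10, so the drop-off qualifies
    "est_long_term": 6,       # 6 <= 0.8 * 8, so the second drop-off qualifies
}
print(passes_main_screen(example))  # True
```

A stock with full three-period coverage (say, ten estimates at every horizon) would fail the third condition and be excluded.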

In the control screen, the last two rules are eliminated and I use the following instead:

  • The number of analysts publishing estimates for the next fiscal year is equal to or greater than the number publishing current-year estimates
  • The number of analysts publishing long-term EPS growth-rate projections is equal to or greater than the number publishing estimates for the next fiscal year
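For comparison, the control screen is the same filter with only the two estimate rules flipped. A sketch, using the same hypothetical dict fields as before:

```python
def passes_control_screen(stock):
    """Return True if coverage follows the usual full three-period profile."""
    return (
        stock["market_cap"] < 5_000_000_000          # same size cap as the main screen
        and stock["est_current_year"] >= 3           # same minimum coverage rule
        and stock["est_next_year"] >= stock["est_current_year"]   # no drop-off allowed
        and stock["est_long_term"] >= stock["est_next_year"]      # no drop-off allowed
    )
```

Note that a given stock can pass at most one of the two screens, which is what makes the comparison clean.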

Figure 1 shows the results of a five-year backtest of the control screen (assuming the screen is re-run and the stock “holdings” are refreshed every four weeks).
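The refresh mechanics described above (re-run the screen, swap in the new passing list, repeat every four weeks) can be sketched roughly as follows. The helpers run_screen and period_return are hypothetical stand-ins for whatever the screening platform actually provides, and the equal-weighting is my assumption, not a detail stated in the article.

```python
from datetime import date, timedelta

def backtest(start, end, run_screen, period_return):
    """Equal-weight the screen's picks, refreshing the list every four weeks."""
    value = 1.0
    current = start
    while current < end:
        nxt = min(current + timedelta(weeks=4), end)
        holdings = run_screen(current)          # tickers passing the screen today
        if holdings:
            # average return of the holdings over the four-week holding period
            avg = sum(period_return(t, current, nxt) for t in holdings) / len(holdings)
            value *= 1.0 + avg
        current = nxt
    return value - 1.0  # cumulative return over the test period
```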

Figure 1

The control model outperformed the Russell 2000, but not by a significant amount. Suffice it to say I would not use the control screen as a source of ideas.

Figure 2 shows a similar five-year backtest for the main model, the one based on mild analyst half-heartedness.

Figure 2

During the first part of the test period, the unfolding of the financial crisis, the model did not do well, as was the case with most models. (Back then, I think the only workable stock-analysis criterion was to ask who owned the stock and how badly they needed to raise cash). But after the crash, the strategy did quite well. The model also did great in backtests that studied the years before the financial crisis. It also stumbled during the mid-2011 mini-panic (Thank you Congress, Mr. President and S&P). It’s not as if the model caused disaster during either of those crises (the scale of the chart above might make it look scary, but other tests isolating the crisis periods showed that this strategy was in mid-2011 only trivially worse than owning a market ETF and actually a bit better than that in late 2008), but the benefits of this approach seem most likely to be enjoyed during healthy market environments.

Table 1 shows the stocks that currently make the grade under this strategy.

Table 1

The results are sorted based on the number of analysts publishing current-year estimates, from fewest to most. Although the backtest results depicted in Figure 2 would seem to support drawing ideas from the entire list, the other tests discussed (i.e., those involving greater levels of coverage) suggest stocks in the upper half of the list might be more promising, although I wouldn’t absolutely rule out the bottom group absent further study to define the “sweet spot” more precisely (and, of course, given high QVGM scores).

Disclosure: I am long CNTY.