How Over-Valued Is Too Over-Valued?

by: Michael Allen

Jesse Felder of The Felder Report has plenty of good reasons to believe this is the worst time in 40 years to buy stocks. I would rank it closer to third place myself, but it never hurts to stress-test your assumptions. I don't think there is any doubt that the market is over-valued, but I thought it was over-valued a year ago, and it is up 26% since then. The question is always: "Exactly how over-valued is too over-valued?"

Felder displays several valuation measures that are all at extreme levels relative to their means, but the problem with all of these measures is that there is no consistency in either the distance from the mean at which they peak in any given cycle, or the amount of time they spend at any given level. I know, because I use a similar model and have had to make adjustments because of real-time mistakes that such models have caused me in the past. For a good laugh, see my previous post on Seeking Alpha.

I have significantly refined my approach since then. I still start with GAAP earnings provided by Shiller. Despite all the changes in GAAP rules, despite all the real economic values that GAAP treats poorly or not at all, despite all the changes in corporate financial behavior over all these decades, this is the earnings figure that has shown the greatest correlation with long-term changes in share prices. In fact, over any 20-year period, the correlation between S&P price and S&P GAAP earnings is almost perfect.


Since earnings are mean-reverting, a price-to-current-earnings ratio almost always suffers from being based on an earnings number that is not sustainable. Shiller introduced the Cyclically Adjusted Price-to-Earnings measure, which divides price by a 10-year moving average of inflation-adjusted earnings. Critics argue that a 10-year moving average typically lags the true value. An exponential trend of deflated earnings does not have this drawback.
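An exponential trend of deflated earnings can be estimated by regressing the log of earnings on time and exponentiating the fitted line. This is a minimal sketch of that idea, not the author's actual model; the earnings series here is hypothetical.

```python
import numpy as np

def exponential_trend(real_earnings):
    """Fit an exponential growth trend to deflated (inflation-adjusted)
    earnings by regressing log earnings on time, then mapping the fitted
    line back into earnings units."""
    t = np.arange(len(real_earnings))
    # np.polyfit with degree 1 returns (slope, intercept) of the log-linear fit
    slope, intercept = np.polyfit(t, np.log(real_earnings), 1)
    return np.exp(intercept + slope * t)

# Hypothetical earnings series growing 2% per period
earnings = 10 * 1.02 ** np.arange(50)
trend = exponential_trend(earnings)
```

Unlike a trailing 10-year average, the fitted trend passes through the middle of the data rather than lagging behind it.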

No Two Bear Markets are Ever Alike

If you look at the seven worst market declines in the past 100 years, the average price/trend-earnings ratio was 18x, but it has never peaked or troughed at exactly the same level. There is no price so high that the market cannot go any higher, nor any price so low that it cannot go lower. It also takes an extraordinarily long time for valuation to take its toll, and very few human beings have the patience to wait. For example, the valuation component of my model went negative in May of 1997, and this eventually proved correct, but the market did not peak until the summer of 2000. More recently, the valuation barometer went negative in August of 2012, but the market is up 26% since then and has yet to show any irrefutable sign of peaking.

By my calculations, the price/trend-earnings ratio is currently 16.6x. This puts us in a very ambiguous place: below average for bull-market peaks, but well above any level that has ever proved sustainable. So we are definitely headed for trouble, but the question is, "when?" The wrong answer might prove costly.


How Adding a Trend Reversal Overlay Works

A key to understanding how to deal with any mean-reverting data series is the standard deviation, which measures how closely the individual data points are clustered around the mean. If the time series is stationary, meaning that it is not only mean reverting but also reverts at regular and consistent intervals, then 68% of the data points will fall between one standard deviation above and below the mean, and 95% will fall between two standard deviations above and below the mean. If the price/trend earnings series were stationary, I could very simply sell whenever the series was 2 standard deviations above and buy when it was 2 standard deviations below, and I would make money 95% of the time.
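The rule described above, for the idealized case of a stationary series, can be sketched in a few lines. This is an illustration of the concept only, using toy data, not a trading system.

```python
import numpy as np

def zscore_signal(series, n_std=2.0):
    """Mean-reversion rule for a stationary series: sell when a value is
    n_std standard deviations above the mean, buy when it is n_std below,
    hold otherwise."""
    x = np.asarray(series, dtype=float)
    z = (x - x.mean()) / x.std()    # distance from the mean, in std devs
    return np.where(z > n_std, "sell", np.where(z < -n_std, "buy", "hold"))
```

With `n_std=2.0` and a stationary, roughly normal series, about 95% of observations would fall in the "hold" band, which is exactly the property the article relies on.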

As I mentioned, the price/trend-earnings ratio is mean-reverting, but it is not stationary. I can still use the standard deviation concept with a non-stationary series, but it doesn't work quite as well. The table below shows the return analysis of a strict valuation model that sorts the market into four categories, the most bullish being 1.6 standard deviations below fair value and the most bearish being 1.6 standard deviations above fair value. The bullish-category months significantly outperform the bearish-category months on average, but there is still enough overlap that any bearish month might outperform any bullish month, and there is no statistically meaningful difference between the cautious and constructive stances.

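The four-way classification can be written as a simple mapping from the valuation z-score to a stance. The ±1.6 cut-offs are the article's; the inner boundary at zero and the zone ordering are assumptions for illustration.

```python
def valuation_stance(z):
    """Map the price/trend-earnings z-score (standard deviations from
    fair value) into the article's four stances. The +/-1.6 thresholds
    come from the text; the boundary at zero between the two middle
    zones is an assumption."""
    if z <= -1.6:
        return "bullish"
    if z < 0:
        return "constructive"
    if z < 1.6:
        return "cautious"
    return "bearish"
```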

By the way, if you are wondering why I use 1.6 standard deviations, instead of 1 or 2, it is because that is the level that generated the optimal returns during the developmental stage of my valuation model and also worked in the out-of-sample test. It's simply the level that optimizes the trade-off between participating as much as possible in any upside while avoiding as much as possible of any downside.

What's happening here is a simple problem that occurs with any valuation methodology, no matter what version of fundamental value you use. When the market passes through any given zone, say the cautious zone, it passes through twice: once on its way up to the bearish zone, and once on its way down to the constructive zone. Although the direction of the market is opposite in each case, the valuation model is incapable of distinguishing the two situations. This problem is compounded in a non-stationary series because it also happens in the extreme zones: once on the way up to wherever the market is headed, and once on the way back down. I cannot accurately predict exactly where the top or bottom will be.

The solution I found is to overlay a moving average indicator. Whenever the current price is above this moving average, the index is assumed to be on the way up through any given valuation zone, and whenever the price is below the moving average, then the index is moving down through any given valuation zone. Whenever the trend indicator is up, I add 50% of my total net assets to the equity exposure, and whenever the trend indicator is down, I subtract 50%. This simple trick helps the model separate good risk reward environments from bad ones so well that it increases the monthly average return by 41%.
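The overlay rule described above, add 50 points of exposure when price is above the moving average and subtract 50 when below, can be sketched as follows. The clamping to a 0-100% range is an assumption; the article does not specify the bounds.

```python
def equity_exposure(valuation_weight, price, moving_average):
    """Apply the trend overlay to the valuation model's equity weighting
    (in percent): add 50 percentage points when price is above its moving
    average (trend up), subtract 50 when below, clamping to 0-100%.
    The clamp is an assumption, not stated in the article."""
    adjustment = 50 if price > moving_average else -50
    return max(0, min(100, valuation_weight + adjustment))
```

For example, a 50% valuation weighting becomes fully invested when the trend is up and fully out when the trend is down, which is how the overlay separates the two passes through each valuation zone.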


The length of the moving average is important. The shorter the length, the more rapidly it reacts to changes in direction, but when a signal reacts too quickly, it can be fooled by short-term volatility that isn't really a change in direction. This kind of whipsawing action kills most pure technical analytical systems. What I want is a system that reacts differently depending on the valuation assessment. If the valuation assessment is bullish, I want a system that is fast to get in, and slow to get out of the market. If the valuation is bearish, I want a system that is slow to get in, but fast to get out. If the valuation assessment is neutral, I want a system that is simply slow to change from whatever mode that it was in. For the slow moving average, I found the 9-month average works best, which is about 190 days, and for the fast-moving average, I use the 5 month average.
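The asymmetric logic above can be sketched as a function that chooses the moving-average length for entries and exits according to the valuation stance. This is a hypothetical reading of the description, not the author's actual implementation.

```python
def trend_is_up(prices, stance, currently_long):
    """Asymmetric trend filter, per the text: when bullish, use the fast
    (5-month) average for entries and the slow (9-month) average for
    exits; when bearish, the reverse; when neutral, the slow average
    either way. `prices` is a list of monthly closes, most recent last.
    A sketch of the described logic, not the author's code."""
    fast = sum(prices[-5:]) / 5   # 5-month moving average
    slow = sum(prices[-9:]) / 9   # 9-month moving average (~190 days)
    price = prices[-1]
    if stance == "bullish":
        entry_ma, exit_ma = fast, slow
    elif stance == "bearish":
        entry_ma, exit_ma = slow, fast
    else:
        entry_ma = exit_ma = slow
    # When flat, a close above the entry average turns the trend up; when
    # long, the trend stays up until price closes below the exit average.
    return price > exit_ma if currently_long else price > entry_ma
```

Because a shorter average hugs price more closely, comparing price to the fast average reacts sooner in both directions, which is what makes entries "fast" and exits "slow" (or vice versa) depending on the stance.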

Risk Assessment

The market closed at 1,643 on Friday, June 7. The value component of my model will reduce exposure to stocks by 50% if the market rises 3% or more, and the technical component will reduce exposure to 0% if the market drops to 1,517 or lower, a decline of about 7.7%.

This presents two rather low-probability risks:

1). The market rises more than 4%, and as a result of downgrading my exposure, I fail to participate fully in any further rise.

2). The market drops significantly more than 8% before I have a chance to get out. The speed of the drop is the important distinction - I don't care if the market drops 50% or more as long as it drops slowly. I am only worried about the market dropping faster than my model can react.

The risk of this happening is not zero, but it is extremely rare. In the past 1,707 months, a decline of more than 7% occurred 61 times, or just less than 3.6% of the total. Declines of this magnitude were not driven by valuation: of the 321 instances when the price/trend-earnings ratio was greater than 16x, only 7, or 2.2%, were followed by monthly declines of more than 7%.

The upside risk is trivial even though the odds of it happening are slightly higher. Of the previous 1,707 months, 223 months, or about 13%, rose more than 4%. Valuation doesn't have any material effect on this statistic: of the 321 months preceded by a price/trend-earnings ratio of more than 16x, 31 months, or 9.7%, were still up 4% or more. The average return for all months with a price/trend-earnings ratio above 16x, however, was only 0.2%, and of course, longer-term returns have never been positive. At the point my model turns cautious, the risk of missing out on higher returns is extremely low, and as my back-testing has shown, it's completely OK to miss them.
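The conditional frequencies quoted above come from simple counting: select the months that start above the valuation threshold, then measure how often a given outcome follows. This sketch reproduces the counting on toy data; the actual historical series is not shown in the article.

```python
def conditional_frequency(monthly_returns, valuations, val_floor, outcome):
    """Of the months whose starting price/trend-earnings valuation exceeds
    val_floor, return the fraction whose return satisfies `outcome`.
    Mirrors the counting behind the article's 2.2% and 9.7% figures,
    demonstrated on toy data only."""
    selected = [r for r, v in zip(monthly_returns, valuations) if v > val_floor]
    if not selected:
        return 0.0
    return sum(1 for r in selected if outcome(r)) / len(selected)

# Toy data: four months start above 16x, one of which then fell more than 7%
rets = [0.02, -0.08, 0.05, 0.01, -0.03]
vals = [16.5, 17.0, 16.2, 16.8, 15.0]
frac = conditional_frequency(rets, vals, 16.0, lambda r: r < -0.07)
```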

Tactics vs. Strategy

These are extremely low probabilities, but I can never eliminate all risk without reducing total long-term returns by an unacceptable amount. The worst one-month return in history was a 26% decline, and you have to be aware that this can always happen to you, no matter how smart you think you are. Whether it is the worst time in history to invest or the best, you should never invest in stocks without an awareness that you can lose up to 26% at any time. This is why I differentiate between asset allocation tactics and asset allocation strategy.

Tactics refers to the recommendations of the valuation model, which calls for intermediate-term movements in and out of various asset classes according to assessments of current risk and reward relative to historical norms. One can never eliminate risk entirely, however, so one needs a long-term strategy that sets appropriate limits on the exposure you take to the risk of large, unpredictable drawdowns. These policy positions change only with one's age and individual sensitivity to risk.

In the portfolio section of my website, I provide a table that demonstrates how to adjust the tactical model to your own particular policy, or risk-tolerance level. This table shows how to multiply the tactical weighting in stocks by the policy weighting that suits your own needs. For example, if the tactical model weighting is 100% and the policy weighting is 60%, then you hold 60% in equities. If the tactical weighting drops to 50%, then you hold half of the policy weighting, which in this example means you would hold only 30% of your portfolio in equities. So in the event of a 27% decline in stocks, given the current tactical weighting of 100% and a 60% policy weighting, your actual weighting toward equities would be 60%, and 60% of a 27% decline is only about 16%.
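The arithmetic in the example above reduces to two multiplications, sketched here with the weightings expressed as fractions:

```python
def equity_weight(tactical, policy):
    """Actual equity weighting = tactical model weighting x policy
    weighting, both expressed as fractions, per the table described
    in the text."""
    return tactical * policy

def worst_case_loss(tactical, policy, market_drop):
    """Portfolio loss if the market falls by `market_drop` while the
    model still holds the given tactical weighting."""
    return equity_weight(tactical, policy) * market_drop
```

With a 100% tactical weighting, a 60% policy weighting, and a 27% market decline, the portfolio loss is 0.6 × 0.27 ≈ 16%, matching the example.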

If 16% downside is still intolerable to you, then you need a more conservative policy. Some 40-year-olds might be more comfortable with the portfolio recommended for a 50- or even 60-year-old, in which case they would lower equity exposure to 30%. Thus, with the same investment model, the maximum likely downside risk falls to about 8%.

Nothing is Guaranteed

I cannot be certain that this isn't the worst time in 40 years to invest in stocks, but I can assure you that if you follow the system I've designed and set a policy exposure appropriate for your own personal tolerance level, then no matter what happens, you'll be able to recover from it. The full model is available here.

Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it. I have no business relationship with any company whose stock is mentioned in this article.