
Context Always Matters When Evaluating Economic Data

We often note the importance of starting with a big-picture perspective when evaluating the potential risk and reward of investment and trading positions. Short-term market behavior only has meaning when analyzed within the proper context afforded by long-term cycles and trends. The same strategy should be used when evaluating the quality of economic data releases. In his latest weekly commentary, fund manager John Hussman discusses the importance of understanding both context and the data-generating process.

For anyone who works to infer information from a broad range of evidence, one of the important aspects of the job is to think carefully about the structure of the data - what is sometimes called the "data-generating process." Data doesn't just drop from the sky or out of a computer. It is generated by some process, and for any sort of data, it is critical to understand how that process works.

For example, one of the moments of market excitement last week was the reported jump in new housing starts for September. But later in the week, investors learned that existing home sales had slumped. If we just take those two data points at face value, it's not clear exactly what we should conclude about housing. But the story is clearer once we consider the process that generates that data.

One part of the process is purely statistical. The housing data reported each month is a monthly figure expressed at an annual rate, so the jump from 758,000 to 872,000 housing starts at an annual rate actually works out to a statement that "During September, in an economy of about 130 million homes, about 100 million of which are single detached units, a total of about 9,500 more homes were started than in August - a fluctuation that is actually in the range of month-to-month statistical noise, but does bring recent activity to a recovery high." Now, in prior recessions, the absolute low was about 900,000 starts on an annual basis, rising toward 2 million annual starts over the course of the recovery. The historical peak occurred in 1972 near 2.5 million starts, but the period leading up to 2006 was the longest sustained increase without a major drop. In the recent instance, housing starts bottomed at 478,000 in early 2009, so we've clearly seen a recovery in starts. But the present level is still so low that it has previously been observed only briefly at the troughs of prior recessions.
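The annual-rate arithmetic above is easy to verify. A back-of-the-envelope sketch, assuming the monthly change in actual units is simply the change in the seasonally adjusted annual rate spread over 12 months (this ignores seasonal factors, and uses 872,000 for September, the figure consistent with the stated 9,500):

```python
# Seasonally adjusted annual rates (SAAR) of housing starts
august_saar = 758_000
september_saar = 872_000  # September figure consistent with the stated 9,500

# Crude conversion: the change at an annual rate, spread over 12 months.
# This ignores seasonal adjustment factors, so it is only a sanity check.
monthly_change = (september_saar - august_saar) / 12
print(round(monthly_change))  # about 9,500 more homes started than in August
```

Against a stock of roughly 130 million housing units, a swing of this size is well within ordinary month-to-month noise, which is the point of the passage.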



The second part of the process is important to the question of what is sustainable. Here the question to ask is how and why does a decision to "start" a house occur? According to CoreLogic, about 22% of mortgages are underwater, with mortgage debt that exceeds the market value of the home. Likewise, banks have taken millions of homes into their own "real-estate owned" or REO portfolios, and have dribbled that inventory into the market at a very gradual rate. All of that means that the availability of existing homes for sale is far smaller than the actual inventory of homes that would be available if underwater homeowners were able, or banks were willing, to sell. Accordingly, much of the volume in "existing home sales" represents foreclosure sales, REO and short-sales (sales allowed by banks for less than the value of the outstanding mortgage). That constrained supply of homes available for sale is one reason why home prices have held up. At the same time, constrained supply means that new home buyers face higher prices and fewer choices for existing homes than they would if the market were actually clearing properly. Given those facts, buyers who are able to secure financing (or pay cash) often find it more desirable to build to their preference instead of buying an existing home. It's not clear how many of these starts represent "spec" building by developers, but it's interesting to note that the average time to sell a newly completed home has been rising, not falling, over the past year.

In the end, the data-generating process features millions of underwater homes, huge REO inventories, and yet constrained supply. The result is more stable home prices, but a misallocation of capital into new homes despite a glut of existing homes that cannot or will not be brought to market. So starts are up even though existing home sales are down. Inefficient or not, there's no indication that inventory will be abruptly spilled into the market, so if the slow dribble continues, we'll probably continue to see gradual growth in housing starts that competes with a gradual release of inventory. This isn't an indication of economic resilience or housing "liftoff," but is instead an indication of market distortion and misallocation of scarce capital. Housing starts increased off of a low base during the 1969-70 and 1981-82 recessions as well. A similar increase is unlikely to materially affect the course of what we continue to believe is a new recession today.

Careful consideration of the data-generating process provides insight into how "surprises" can emerge in a very predictable way. For example, although short-term economic data isn't particularly cyclical, the expectations of investors and economists typically swing too far in the direction of recent news, which in turn creates cycles in economic "surprises," because few periods contain a preponderance of only-good or only-bad data. Modeling this process shows that the same behavior can be produced even in purely random data, and that the length of the cycle appears to be proportional to the length of the "lookback" period used to determine whether the recent trend of the data is favorable or unfavorable.

Case in point, there's a perception that the recent economic data has somehow changed the prospects for a U.S. recession. The idea is that while the data has remained generally weak, the latest reports have been better than expectations. However, it turns out that there is a fairly well-defined ebb and flow in "economic surprises" that typically runs over a cycle of roughly 44 weeks (which is by no means a magic number). The Citigroup Economic Surprise Index tracks the number of individual economic data points that come in above or below the consensus expectations of economists. I've updated a chart that I last presented in 2011, which brings that 44-week cycle up to date. Conspiracy theorists take note - the recent round of "surprises" follows the fairly regular pattern that we've observed in recent years. There's no manipulation of the recent data that we can find - it just happens that the sine wave will reach its peak right about the week of the general election.



In short, it is not enough to examine data, even large volumes of it. In order to extract information and draw conclusions, it is crucial to think about the process that is involved in generating that data. In other words, it helps to think about the interactions between buyers and sellers, the effect of expectations and how they are formed, and - for physical and biological data - the actual systems that are operating to produce the facts and figures that are being analyzed.

If investors believe that the markets are simply balloons that increase as funds flow in and out, that stocks should be valued as a simple multiple of profits without concern for profit margins or the factors that drive those margins, that stocks are "under-owned" simply because an enormous volume of low-interest debt has been issued, that moderate growth in a distorted housing market representing a diminished fraction of economic activity will suddenly drive a robust recovery, and that central bank "puts" are a reliable defense against market losses - with no need to consider the mechanism by which those puts supposedly work - then the willingness to accept significant market risk is understandable. For my part, I am convinced that these beliefs are at odds with how the data are actually generated. The red flags are significant not only for the stock market, but for the bond market (particularly credit-sensitive debt) and the economy as well.