In 1993, Stanford economist John B. Taylor wrote an influential paper that introduced the economics profession (statisticians, almost all) to what was later called the Taylor Rule. The need for such a “rule” was an unspoken outgrowth of monetary evolution. In the 1960s and 1970s, long-established regression models estimating the influence of then-defined money on economic variables had broken down almost completely.
It left the central banks of the 1980s with essentially total discretionary monetary policy, a situation that no reasonable official or economist at the time wished to see put into practice. In the interest of some kind of limitation, Taylor drew in several economic variables (aggregates) that could in theory tell the Fed what to do with its primary policy lever – whatever that lever might be.
The preferred policy rules that have emerged from this research have not generally involved fixed settings for the instruments of monetary policy, such as a constant growth rate for the money supply. The rules are responsive, calling for changes in the money supply, the monetary base, or the short-term interest rate in response to changes in the price level or real income.
It is interesting that in the original piece Taylor’s so-called rule wasn’t actually meant exclusively for interest rate targeting; over the years it has been adapted that way, not on the same econometric principles but on the legend of Alan Greenspan’s tenure. That introduces another element of subjectivity, blurring the very monetary evolution that motivated the rule in the first place.
There are, of course, different versions nowadays, with modified Taylor Rules springing up with some frequency. A large part of the reason is what you see on the right-hand side of the chart immediately above. There has never been a perfect fit between the two (the Taylor prescription for Federal Funds and the actual rate), but in recent years the distance has grown quite large, triggering an often intense orthodox debate.
It is in many ways the mirror of (illegitimate) Fed criticism in the early crisis period. Then it was that QE was so much money printing it risked igniting inflation, if not hyperinflation. Now it is that the Fed is so far behind the curve it risks igniting inflation, if not runaway inflation. They were and are both wrong for reasons of both money (modern) and its effects on the economy.
We can quite easily account for the disparity. Taylor’s original formula included only four parts: the nominal Federal Funds rate, the real Federal Funds rate, the rate of inflation in relation to its target, and the output gap (the difference between current levels of output, usually described by real GDP, and “potential” output calculated in some fashion along the lines of NAIRU and the Phillips Curve).
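Those four parts fit together in Taylor’s original 1993 specification, which set both the equilibrium real rate and the inflation target at 2% and weighted the inflation and output gaps at 0.5 each. A minimal sketch (the function name and example inputs are mine, not from the original paper):

```python
def taylor_rate(inflation, output_gap, real_rate=2.0, target_inflation=2.0):
    """Taylor (1993) prescription for the nominal Federal Funds rate.

    All arguments are in percent. The 0.5 weights and the 2% values for
    the equilibrium real rate and the inflation target are the ones
    Taylor used in the original paper.
    """
    return (inflation + real_rate
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# With inflation at target and no output gap, the rule prescribes
# the "neutral" nominal rate: inflation plus the real rate.
print(taylor_rate(inflation=2.0, output_gap=0.0))   # 4.0

# Below-target inflation and a negative output gap both pull the
# prescription down.
print(taylor_rate(inflation=1.5, output_gap=-2.0))  # 2.25
```

With inflation pinned below target for years, it is the output-gap term that does nearly all the work in moving the prescription, which is the point developed below.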
Over the past eight years since the official end of the Great “Recession,” the rule’s prescription has moved almost entirely on the output gap figure. Inflation remains, for the most part, below target and in these terms stable in its misbehavior. The real Federal Funds rate is, therefore, of little marginal concern.
According to the Taylor Rule, then, the Fed should have been raising the Federal Funds rate starting all the way back in 2011, if not 2010. Aware of this glaring difference, in March 2015 Janet Yellen spoke about the Fed’s seemingly incredible patience – in the context of what was for her, anyway, the likely end to the delayed recovery.
Under normal circumstances, simple monetary policy rules, such as the one proposed by John Taylor, could help us decide when to raise the federal funds rate. Even with core inflation running below the Committee’s 2 percent objective, Taylor’s rule now calls for the federal funds rate to be well above zero if the unemployment rate is currently judged to be close to its normal longer-run level and the “normal” level of the real federal funds rate is currently close to its historical average. But the prescription offered by the Taylor rule changes significantly if one instead assumes, as I do, that appreciable slack still remains in the labor market, and that the economy’s equilibrium real federal funds rate–that is, the real rate consistent with the economy achieving maximum employment and price stability over the medium term–is currently quite low by historical standards.
She is both right and wrong in her calculations, though what she gets right has turned out to be more trivia than meaningful (just as was predictable by extending her concerns beyond merely trusting the unemployment rate exclusively). The Taylor Rule trajectory presented above is revisionist history, literally in this case, using the latest output gap numbers provided by the Congressional Budget Office (CBO).
As I have shown for several years now, the CBO has been forced to drastically overhaul its calculations of economic potential, thus the output gap. The reason for that is the lack of recovery, and the reason for the lack of recovery is actually what Janet Yellen was talking about in early 2015 – persistent, stubborn, unending slack. It’s why wages and incomes remain decrepit, and therefore spending, demand, and everything that follows.
As the unemployment rate falls and the economy still fails to recover, the CBO by its own orthodox conventions and “rules” is forced to downgrade potential. The output gap shrinks and the Taylor Rule declares that it is long past time for the Fed to get going, maybe even get aggressive. With little or no output gap left and the unemployment rate as low as it is, inflation is surely about to explode.
Except that it isn’t. Going backward in revisions has the effect of revealing (part of) the error. Using instead the CBO’s February 2014 estimates for economic potential, and then plugging them into the Taylor Rule as an older version of the output gap, we find instead far less certainty about “rate hikes.” What this level of “potential” says is exactly what Janet Yellen was using to criticize the hurry to “raise rates.” There was thought far more slack at the time.
It still produces a recommendation (is it really a recommendation if the Taylor Rule is a rule?) for some “rate hikes” in 2011; but then, notably, not much beyond them for several years more (indicating more than just a temporary problem).
And, of course, if we go even further back in time and CBO revisions, we find in their January 2010 estimates for economic “potential” a huge output gap extending quite far into the future. Measured against that vintage, actual real GDP has only fallen further behind, widening the output gap.
In other words, according to those previous calculations for “slack,” there was no reason at all to “raise rates” in 2011 or anytime since.
What has changed is only the one variable – the constant and often drastic write-downs of “potential.” The Taylor Rule is no rule at all, but like any other econometric monstrosity is prone to serious errors it cannot abide. In this case the error is “slack,” or how the unemployment rate no longer has much bearing on the real economic situation. Relying on it would require a view of the economy that just isn’t realistic. It is only by ignoring those downgrades in potential that we can arrive at an economy in 2017 where Fed “rate hikes” indicate what they might have in the past.
The Fed is going forward with its RHINOs (rate hike in name only) not because there is recovery right around the corner; the “rising dollar” finished any chance of that happening. They are moving forward because they don’t know what else to do. The bond market (and others related, like eurodollar futures) gets it, while those who constantly declare interest rates have nowhere to go but up, and often use the Taylor Rule or something like it to justify that view, are missing the most important part.
We are stuck in a shrunken economy where those 15 or 16 million Americans matter in every possible way. As if it could have been any other outcome, rules or no. The problem isn’t just discretionary monetary policy, but more so discretion about what counts as money in and outside of that policy (policies).