On June 13, 2012, the Office of the Comptroller of the Currency published the final rules defining whether a security is "investment grade," in accordance with Section 939A of the Dodd-Frank Act of 2010. On its web page the OCC states, "Under the revised regulations, to determine whether a security is 'investment grade,' banks must determine that the probability of default by the obligor is low and the full and timely repayment of principal and interest is expected." This article provides justification for the Dodd-Frank Act's prohibition on using legacy credit ratings from Moody's (NYSE:MCO) and the McGraw-Hill (MHFI) affiliate Standard & Poor's as the sole criterion for determining "investment grade" in federal regulations. We find that one of the reasons for inaccuracy in legacy ratings is the fact that ratings are simply out of date. We report below that the median time since the last ratings change for 2,265 public firms is 815 days, nearly 2 years and 3 months.
The web page explaining the Office of the Comptroller of the Currency's new rules defining investment grade, along with related guidance, can be found on the OCC's website. We find that many market participants are still struggling to understand why ratings, with a history of more than 100 years, have performed so badly in recent years and why this change to the definition of "investment grade" was necessary. The reasons are easy to summarize:
An "expert judgment" assessment of credit risk, consistent with 50 years of research in a wide variety of fields, is simply less accurate than a comprehensive statistical assessment of credit risk using modern statistical methods. Soneji and King (2012), in the journal Demography, explain the reasons for this finding.
The degree by which ratings are less accurate than a comprehensive statistical assessment of credit risk is documented by Hilscher and Wilson (2013). The Hilscher and Wilson findings have been confirmed in independent research reported in the Kamakura Risk Information Services Technical Guides, versions 3.0, 4.1 and 5.0.
The Levin report ("Wall Street and the Financial Crisis: Anatomy of a Financial Collapse," April 13, 2011) of the U.S. Senate devotes 84 pages (see pages 243 to 317) to the failures of legacy ratings for mortgage-related securities. The Levin report focuses heavily on rating agency conflicts of interest, failures to detect corruption in the mortgage market, and inadequate models and staffing. This note focuses on one of the specific reasons that ratings of public firms are inaccurate. Ratings, on average, are very old and out of date and therefore do not provide sufficient accuracy to be the sole determinant of "investment grade" for U.S. bank securities investments. We document the staleness of ratings in the following sections.
How Stale Are Ratings?
It is well known that legacy ratings, like most "expert judgment" models, cannot be reduced to a quantitative formula. Instead, ratings are determined in a qualitative fashion by a large bureaucracy of analysts and paid for by the issuer of the securities to be rated. One can imagine that the bureaucratic process itself causes delays in changes in credit ratings. Moreover, the "issuer pays" model creates little incentive for ratings to be timely, especially if the ratings are too high for that particular issuer. Why should an issuer pay a rating agency to reset a rating that has been set at a level that is too optimistic about the issuer's credit risk?
We seek to measure the timeliness of ratings in order to compare them to the daily reset of public firm default probabilities in wide commercial use. We asked a simple question: of the 2,265 firms with legacy ratings on October 29, 2012, how many days had it been since the last ratings change for each of those firms? The percentile distribution of the years since the last ratings change is shown in the following chart. The median time since the last ratings change is 815 days, 2.23 years, or nearly 2 years and 3 months:
Over 16 percent of the 2,265 firms had not been "re-rated" in more than 6 years. We did not pursue the archaeology of those ratings beyond the 6-year point. The following charts show the "staleness" of ratings by percentile:
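For readers who wish to replicate this kind of staleness analysis on their own rated portfolios, the calculation is straightforward. The sketch below uses a small, made-up sample of last-ratings-change dates (the actual study covered 2,265 firms as of October 29, 2012); the dates and the nearest-rank percentile method are illustrative assumptions, not the study's actual data or methodology.

```python
from datetime import date

# Hypothetical sample of last-ratings-change dates for six rated firms.
# These are invented for illustration; the study itself used 2,265 firms.
as_of = date(2012, 10, 29)
last_change_dates = [
    date(2010, 8, 6), date(2006, 3, 14), date(2012, 1, 9),
    date(2009, 11, 2), date(2004, 5, 21), date(2011, 6, 30),
]

# Days elapsed since each firm's last ratings change, sorted ascending
staleness_days = sorted((as_of - d).days for d in last_change_dates)

def percentile(sorted_values, p):
    """Nearest-rank percentile of a sorted list (p in [0, 100])."""
    k = max(0, min(len(sorted_values) - 1,
                   round(p / 100 * (len(sorted_values) - 1))))
    return sorted_values[k]

median_days = percentile(staleness_days, 50)
share_over_6y = sum(1 for d in staleness_days if d > 6 * 365) / len(staleness_days)

print(f"Median staleness: {median_days} days ({median_days / 365.25:.2f} years)")
print(f"Share not re-rated in over 6 years: {share_over_6y:.0%}")
```

The same two summary statistics quoted in the text, the median staleness and the share of firms not re-rated in more than 6 years, fall directly out of this sorted list of elapsed days.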
One analyst, after reviewing these surprising figures, asked plaintively, "Couldn't it be that the credit quality of these firms really has been unchanged during this period?" The Hilscher and Wilson paper proves that this hope is in vain, and we illustrate that fact with the example of Citigroup.
Legacy Ratings and Default Probabilities of Citigroup: 1997 to 2008
Citigroup (NYSE:C) has been one of the largest financial institutions in the United States for decades. Clearly, it is one of those corporate names for which one would expect a high degree of rating agency scrutiny, if for no other reason than that Citigroup was a regular debt issuer for all of that time period and would have paid the rating agencies often to have its ratings "refreshed." Citigroup's rating was first set to AA- in April 1997. Its rating continued at this level until November 2008, with the exception of February to December 2007, when the legacy rating was raised to AA. The graph below shows that Citigroup 1 year and 1 month default probabilities were both high and volatile during that same period, particularly in 2007 and 2008 when the credit crisis was at its peak. The graph below is shown on a log scale:
We can quantify the default probabilities of Citigroup over this period by analyzing their high, low and average at 7 different maturities: 1 month, 3 months, 6 months, and 1, 2, 3, and 5 years. Those values are shown in the following table:
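Building such a summary table is a simple aggregation over the default probability time series at each maturity. The sketch below uses invented placeholder values, not Citigroup's actual daily observations, to show the mechanics of the high/low/average computation.

```python
# Hypothetical default probability observations by maturity, in percent.
# The real table aggregates daily observations over 1997-2008; these
# three values per maturity are placeholders for illustration only.
pd_series = {
    "1 month":  [0.02, 0.15, 1.80],
    "3 months": [0.03, 0.20, 2.50],
    "1 year":   [0.05, 0.41, 14.12],
}

# High, low, and average for each maturity
summary = {
    maturity: {"low": min(v), "high": max(v), "average": sum(v) / len(v)}
    for maturity, v in pd_series.items()
}

for maturity, stats in summary.items():
    print(f"{maturity}: low={stats['low']:.2f}%  "
          f"high={stats['high']:.2f}%  avg={stats['average']:.3f}%")
```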
For the roughly 9.5 years that Citigroup was rated AA- or higher, its 1 year default probability averaged 0.407%. This figure is 13 times higher than the average default rate for AA- rated companies reported by Standard & Poor's in its "2010 Annual Global Corporate Default Study and Ratings Transitions: Americas," released in 2011. The 0.407% default probability is also 3.77 standard deviations above the average, using the 10 basis point standard deviation reported by S&P for the AA- default rate. Citigroup was an outlier for nearly the entire 9-plus year period, with a peak 1 year default probability during this period of 14.116%. The fact that Citigroup would have failed without substantial government assistance in 2008 and 2009 is well-known and documented in reports from the Special Inspector General of the Troubled Asset Relief Program.
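The "13 times higher" and "3.77 standard deviations" comparisons are simple arithmetic and can be checked directly. In the sketch below, the AA- average default rate of roughly 0.03% is an assumption back-solved from the article's own 3.77-standard-deviation figure and the 10 basis point standard deviation; it is not quoted directly from the S&P study.

```python
# Back-of-the-envelope check of the Citigroup comparison, in percent.
# The AA- average rate below is inferred from the figures in the text,
# not taken verbatim from the S&P default study.
citi_avg_1yr_pd = 0.407   # Citigroup average 1-year default probability
sp_aa_minus_avg = 0.030   # assumed S&P historical AA- default rate
sp_aa_minus_std = 0.100   # 10 basis points

multiple = citi_avg_1yr_pd / sp_aa_minus_avg
z_score = (citi_avg_1yr_pd - sp_aa_minus_avg) / sp_aa_minus_std

print(f"Citigroup's average PD was {multiple:.1f}x the AA- average")
print(f"and {z_score:.2f} standard deviations above it")
```

Under these assumed inputs, the multiple comes out near 13.6x and the z-score at 3.77, consistent with the rounded figures quoted in the text.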
Citigroup is but one example of the fact that ratings are unchanged because they are stale, not because the firm's credit quality has remained unchanged since the last time its rating was refreshed. The full sample analyses of Hilscher and Wilson and of the Kamakura Risk Information Services Technical Guides provide the comprehensive proof of inaccuracy. Again, the failure of ratings to match the accuracy of modern statistical methods is simply due to the fact that expert judgment systems "...suffer from humans' well-known poor abilities to judge and weight information informally" (Soneji and King, 2012). For this reason, the Office of the Comptroller of the Currency's new definition of investment grade, focused on modern measures of default risk, is timely and appropriate.
Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it. I have no business relationship with any company whose stock is mentioned in this article.