In preparation for a quick chat about ratings with a journalist the other day, I had a peek at A.M. Best’s methodology for rating insurance firms. For non-industry readers, A.M. Best is a ratings agency that has historically focused on insurers and is routinely quoted alongside Moody’s (MCO), Standard & Poor’s (MHP), and Fitch when insurers trumpet their financial stability. After looking through the current methodology document, I remembered that I had a 2008 version of the same document stored on my laptop, so I compared the two.
It should come as no surprise that the document had changed, evolved, and grown. But my, how it had grown: the 2008 methodology ran a mere 24 pages, but had swelled to 98 pages by 2010. Given what we’ve learned since AIG (AIG), the Hartford (HIG), and Lincoln were forced into the arms of TARP and others narrowly escaped its clutches, certain topics have been given expanded treatment. Among them are relations between holding companies and subsidiaries, now encapsulated in explicit and complementary “top-down” and “bottom-up” methodologies, as well as enterprise risk management. Other risk categories have been added out of whole cloth, including country risk analysis and specific methodologies for captive insurers and startups. I’m sure there were other equally substantial differences, but I didn’t have time to dig deeper. I would assume that the other major ratings agencies have made similar changes, but that’s a topic for another day.
One question that came to mind, however, was how well A.M. Best’s team can carry out this expanded task. Even allowing for the possibility that the 2008 document simply offered less insight and transparency into what analysts actually do, covering all of these bases effectively takes a lot of work. How much work?
Best says that it rates 3,200 property and casualty insurers and 2,000 life insurers, or 5,200 firms in all. The firm’s website states that it employs 400 analysts, statisticians, and editors. If we assume that each of them works 240 days a year, that comes to 96,000 person-days spread across 5,200 rated firms, or roughly 18.5 person-days per insurer per year. Even if we assume that much of the data for each insurer remains static and needs only to be updated and validated, that may or may not seem like a lot of time. It’s also fair to assume that it takes longer to assess MetLife than a smaller carrier.
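For those who want to check my arithmetic, here is the back-of-envelope calculation as a short sketch. The insurer and staff counts are the published figures cited above; the 240 workdays per person per year is my assumption.

```python
# Back-of-envelope estimate of analyst time per rated insurer.
pc_insurers = 3200        # property & casualty insurers rated (per A.M. Best)
life_insurers = 2000      # life insurers rated (per A.M. Best)
staff = 400               # analysts, statisticians, and editors (per website)
workdays_per_year = 240   # assumed working days per person per year

total_rated = pc_insurers + life_insurers      # 5,200 firms in all
person_days = staff * workdays_per_year        # 96,000 person-days per year
days_per_insurer = person_days / total_rated   # roughly 18.5

print(f"{days_per_insurer:.1f} person-days per rated insurer per year")
```

Of course, as noted above, this average hides a wide spread: a MetLife surely absorbs far more of those person-days than a small monoline carrier does.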
But is it adequate? In the end, the proof is in the pudding. The crisis has shown us, if nothing else, that due diligence is in the eye of the beholder (or the emptor, as the case may be) and that good information demands effort and cost. Just as capital requirements for financial institutions will rise, so will the time we spend monitoring those institutions. It may not always be exciting, but it should be worthwhile. After all, as any insurance person will tell you, a society’s productivity rises in line with its ability to manage risk.
n.b. A.M. Best is privately held. I don't have old methodology documents for the other major raters of insurance companies, S&P, Moody's, and Fitch, so I couldn't compare their old and new versions. One would hope that they have also stepped up their respective games since the crisis. It is also reasonable to assume that they are not staffed proportionately better than A.M. Best. The differences between them, then, should come down to culture, process, and leadership, as is the case with most things.
As I mentioned in an earlier post, the fact that the life insurance industry and the NAIC chose to go outside the ratings agencies to assess insurers’ mortgage-backed portfolios, and thereby to figure out their capital needs, was not exactly a ringing endorsement of the agencies by the industry.
Disclosure: Indices only