Despite the aggregate underperformance of actively managed mutual funds, many individuals believe they can pick winning funds, just as they believe they can pick winning stocks. The tool most people rely on to do this is the ratings provided by agencies like Morningstar and Lipper.
These ratings have a tremendous impact on individuals’ decisions about where to invest their money. According to Financial Research Corp., for example, funds with four- or five-star ratings took in nearly $80 billion of new investor money in 2002, whereas lower-rated funds suffered withdrawals of over $108 billion.
But here’s the problem. Research by Professor Matthew Morey of New York’s Pace University shows that funds with high Morningstar and Value Line ratings don’t necessarily perform better than those with lower ratings. Here’s one particularly telling example: according to the Wall Street Journal, at the end of 1999, 90% of Morningstar-rated tech funds had five stars. Oh dear.
Despite the ratings’ lack of predictive power, mutual fund companies themselves exploit the Morningstar and Lipper ratings in their advertising. In a paper called “Fund Families and the Star Phenomenon,” three academics at the University of Michigan in Ann Arbor showed that by touting good ratings for their “leading” funds, fund companies succeed in attracting money to their other funds too. And of course, the aggregate performance of the most heavily advertised fund families is no better than average.
Here’s more evidence that investors pick mutual funds using criteria with no predictive validity: recent research has shown that mutual funds that change their names (for example, by adding the word “growth” to their titles) and raise their upfront fees (loads) or 12b-1 fees (additional fees largely used for marketing and for compensating the brokers who push the funds) actually attract more money.