Financial ‘screening’ tools are popular among a wide range of investors and financial pundits, and a variety of financial portals provide them. Yahoo! Finance, for example, has a nice, easy-to-use set of screening tools for stocks and funds. The value of screens, obviously, is that they help you narrow the world of possible investments down to a more manageable subset. Value investors can screen for stocks with price-to-earnings ratios below some level, dividend yields above some level, or both. Standard screens are available for Beta and other risk-related parameters, too. The challenge is judging how meaningful your screen actually is for selecting investments, and this is harder than many people think. Does a screen for high-performing funds have any value for looking to the future, for example?

While individual investors use screening tools to limit the universe of funds, institutional investors look at performance attribution. In this article, I take a portfolio of high-performing funds selected by a series of screens and analyze it using a form of performance attribution that is readily available to any investor or advisor willing to do a bit of work.

Rather than come up with a set of screens myself, I will start with an article from Fortune (and published on Yahoo! Finance) that used screens to find a set of mutual funds that have outperformed over the last ten years.

The article title is Funds That Mint Money. Yes, that is really the title. The Fortune article uses screens for funds which are in the highest 20% of their fund categories for both average return and dollar-weighted average return over the most recent ten years. Is this a viable basis for choosing funds to invest in now? Would a portfolio of these funds actually have out-performed on a risk-adjusted basis? These two questions are fundamentally important and are not, in fact, addressed in the Fortune article.

[Table: the Fortune-screened funds]
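
To make the screening criterion concrete, here is a minimal sketch of how a 'top 20% of category' screen could be implemented. The tickers, categories, and return figures below are made-up placeholders, not the data behind the Fortune screen.

```python
import pandas as pd

# Hypothetical fund statistics -- placeholders, not the Fortune data.
funds = pd.DataFrame({
    "ticker":   ["AAAAX", "BBBBX", "CCCCX", "DDDDX", "EEEEX", "FFFFX"],
    "category": ["Large Value", "Large Value", "Large Value",
                 "Small Blend", "Small Blend", "Small Blend"],
    "avg_return_10y":             [0.11, 0.07, 0.06, 0.13, 0.09, 0.08],
    "dollar_weighted_return_10y": [0.10, 0.05, 0.04, 0.12, 0.07, 0.06],
})

def in_top_quintile(df: pd.DataFrame, column: str) -> pd.Series:
    """True where a fund is in the top 20% of its category on `column`."""
    cutoff = df.groupby("category")[column].transform(lambda s: s.quantile(0.80))
    return df[column] >= cutoff

# Keep only funds in the top quintile on both measures, as in the Fortune screen.
passes = in_top_quintile(funds, "avg_return_10y") & in_top_quintile(
    funds, "dollar_weighted_return_10y"
)
print(funds.loc[passes, "ticker"].tolist())
```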

The funds chosen in the Fortune article were also screened to exclude those with high fees. They represent a fairly broad cross section of the sectors that most advisors want people to have some exposure to: small-, mid-, and large-cap funds, plus both emerging and developed markets. To begin, let us ask how a portfolio of these funds would have performed. While we could look at each fund individually, it is simpler to look at them together. Note: the process that we are going to go through for a portfolio of these funds could equally be applied to any individual fund.

Let’s start by assembling a generic portfolio of these ‘funds that mint money’:

[Table: generic portfolio allocation across the screened funds]

This is a generic allocation, nothing fancy. We have 40% allocated overseas (15% to FIGRX, 15% to VTRIX, and 10% to PRMSX), with 10% of the total portfolio devoted to emerging markets. Thirty percent of the portfolio is in large-cap domestic funds (TWEIX and PRWCX). This is a generic ‘pie chart’ kind of allocation. When I ran statistics for the last four years (through the end of December 2006), I found that this portfolio had generated an average annual return of 21.8% with a standard deviation of 10.2%. This is, no doubt, a great performance and one that most people would envy in comparison to their own portfolios. Over this same period, SPY (an S&P 500 index ETF) returned an average of 14.8% per year with a standard deviation of 8.2%. The last four years have been good for the S&P 500, too. This portfolio of screened funds has dramatically out-performed the S&P 500, but the S&P 500 is not a very relevant benchmark. We know that value-oriented strategies have been performing very well in recent years, as have foreign markets. Are there style features of these funds that have made them winners, or are these funds run by managers who can out-perform their benchmarks?
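
For readers who want to reproduce this kind of calculation, here is a minimal sketch of annualizing a fixed-weight portfolio's monthly returns. The weights follow the article where it is explicit; the even split of the remaining 30% between UMVEX and PENNX, the CSV file name, and the simple "multiply by 12 / sqrt(12)" annualization are assumptions for illustration and will not exactly reproduce the QPP figures.

```python
import numpy as np
import pandas as pd

# Portfolio weights; the split of the final 30% between UMVEX and PENNX is an
# assumption for illustration (the article does not spell it out).
weights = pd.Series({
    "FIGRX": 0.15, "VTRIX": 0.15, "PRMSX": 0.10,   # overseas (40%)
    "TWEIX": 0.15, "PRWCX": 0.15,                  # large-cap domestic (30%)
    "UMVEX": 0.15, "PENNX": 0.15,                  # mid- and small-cap (assumed split)
})

# Monthly total returns (dividends reinvested), one column per fund.
# "fund_monthly_returns.csv" is a placeholder file name.
monthly = pd.read_csv("fund_monthly_returns.csv", index_col=0, parse_dates=True)

# Fixed-weight (monthly rebalanced) portfolio return series.
portfolio = monthly[weights.index].mul(weights, axis=1).sum(axis=1)

annual_return = portfolio.mean() * 12          # simple annualization of monthly mean
annual_std = portfolio.std() * np.sqrt(12)     # annualized standard deviation
print(f"Average annual return: {annual_return:.1%}, std dev: {annual_std:.1%}")
```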

The important question that we want to address, then, is how a comparably-oriented portfolio of ETFs has fared over the same four-year period. To assemble a ‘comparable’ portfolio, I wanted to use broad index ETFs with a fair amount of market history. I also wanted to design a portfolio that has exhibited the same level of risk as the portfolio of screen-selected funds. This is very important. To build this portfolio, I started with IVE as my large-cap value index fund. As soon as I compared the basic historical statistics of IVE to those of TWEIX, for which it was intended to serve as a proxy, a problem jumped out. The trailing four-year Beta for TWEIX is around 78%, while the trailing four-year Beta for IVE is 111%. IVE would appear not to be a good proxy for TWEIX, so I looked for a large-cap value ETF with a more comparable Beta and came up with ELV, the streetTracks Wilshire Large Cap Value ETF. ELV's Beta was still a bit high, so I compared its holdings to those of TWEIX. As Morningstar shows, TWEIX has substantial concentrations in raw materials, utilities, and energy, which helps explain its lower Beta.
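
Trailing Beta and R-squared of the kind quoted here can be estimated from monthly returns. The sketch below assumes a CSV of monthly returns with SPY standing in for the S&P 500; the file name and column layout are placeholders.

```python
import pandas as pd

def beta_and_r2(fund: pd.Series, market: pd.Series) -> tuple[float, float]:
    """Trailing Beta and R-squared of monthly fund returns against a market index."""
    aligned = pd.concat([fund, market], axis=1, join="inner").dropna()
    f, m = aligned.iloc[:, 0], aligned.iloc[:, 1]
    beta = f.cov(m) / m.var()
    r_squared = f.corr(m) ** 2
    return beta, r_squared

# "monthly_returns.csv" is a placeholder; SPY is used as the market proxy.
returns = pd.read_csv("monthly_returns.csv", index_col=0, parse_dates=True)
trailing = returns.tail(48)                    # trailing four years of monthly data
for ticker in ["TWEIX", "IVE", "ELV"]:
    beta, r2 = beta_and_r2(trailing[ticker], trailing["SPY"])
    print(f"{ticker}: Beta {beta:.0%}, R-squared {r2:.0%}")
```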

The same factor shows up in UMVEX vs. IWR, the mid-cap ETF. UMVEX has a higher allocation to energy and raw materials than IWR. This is also seen in comparing PENNX to the small-cap ETF that we have selected, IWM:

[Table: allocations to materials, energy, and utilities by market-cap category, screened funds vs. proxy ETFs]

In each market-cap category, the screen-selected fund has a substantially higher allocation to materials, energy, and utilities than the equivalent style ETF. All in all, a portfolio allocated equally across these three screen-selected funds has 37% in these sectors, while a portfolio allocated equally across the three ETFs has 27%. To account for the higher concentrations in energy and raw materials, I added IDU and IGE to my benchmark portfolio mix:

[Table: benchmark ETF portfolio allocation]

This mix of ETFs was designed to meet several goals. First, I wanted to cover the same general styles as captured by the original set of funds. Second, I wanted a portfolio with the same historical average return, historical standard deviation in return, Beta, and R-squared as the portfolio of screened funds. These are all important statistical measures to replicate. To match all of these properties, I ended up with the portfolio above, which I developed with some trial and error using Quantext Portfolio Planner [QPP]. This portfolio has matched the performance of the Fortune-screened fund portfolio very closely over the last four years:

[Table: trailing four-year performance, screened-fund portfolio vs. ETF portfolio]
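
The ETF weights above were arrived at by trial and error in QPP. For readers who would rather automate that step, here is a rough sketch of the same idea with a generic optimizer: find long-only ETF weights whose historical annualized return, standard deviation, Beta, and R-squared come closest to the fund portfolio's values. This is not QPP's method; the function names are mine, and the inputs (monthly return arrays and the target statistics) are assumed to be prepared separately.

```python
import numpy as np
from scipy.optimize import minimize

def portfolio_stats(w, monthly, market):
    """Annualized return, std dev, Beta, and R-squared for weights w (monthly data)."""
    port = monthly @ w
    beta = np.cov(port, market, ddof=1)[0, 1] / np.var(market, ddof=1)
    r2 = np.corrcoef(port, market)[0, 1] ** 2
    return np.array([port.mean() * 12, port.std(ddof=1) * np.sqrt(12), beta, r2])

def match_targets(monthly, market, targets):
    """Long-only weights (summing to 1) whose stats are closest to `targets`."""
    n = monthly.shape[1]
    objective = lambda w: np.sum((portfolio_stats(w, monthly, market) - targets) ** 2)
    result = minimize(
        objective,
        x0=np.full(n, 1.0 / n),                     # start from equal weights
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return result.x

# Example targets: the fund portfolio's trailing statistics
# targets = np.array([0.218, 0.102, fund_beta, fund_r2])
# etf_weights = match_targets(etf_monthly_returns, spy_monthly_returns, targets)
```

In practice the four statistics should be put on comparable scales before being combined into a single objective, since Beta and R-squared are not in the same units as return.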

Note that the performance comparison shows annualized measures computed from monthly data, with dividends assumed to be reinvested. These results suggest that we have replicated the portfolio of Fortune-screened funds quite closely. Trailing performance is one thing, but it is also very important to look at projected future performance, and for this we have Quantext Portfolio Planner's projections:

[Table: QPP projected performance, fund portfolio vs. ETF portfolio]

Note: all calculations from QPP shown in this article use default settings. The QPP projections are important because the projected future values use a process called risk-return balancing. Capital markets tend to show consistent relationships between risk and return over time and across asset classes: riskier assets are expected to earn higher average returns, and vice versa. Over finite periods of time, certain asset classes will out-perform relative to their volatility and others will under-perform. QPP generates projections so that the assets in a portfolio all have future average returns consistent with their projected risk levels. The main thing to note from the table above is that the projected future performance of the portfolio of Fortune-screened, actively-managed funds is in line with the projected future performance of the portfolio of ETFs (the ETF portfolio comes in with roughly a 1% higher projected average return and a 1% higher projected standard deviation of annual return).
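
QPP's risk-return balancing is proprietary, so the sketch below is only a stand-in for the general idea described above: set each asset's projected average return from its projected volatility so that all assets sit on a common risk-return line. The anchor points (risk-free rate and the market's assumed long-term risk and return) are illustrative assumptions, not QPP defaults.

```python
def risk_balanced_return(asset_vol: float,
                         risk_free: float = 0.04,
                         market_return: float = 0.085,
                         market_vol: float = 0.15) -> float:
    """Project an asset's average return from its volatility, assuming all assets
    lie on a straight line through the risk-free rate and the market's assumed
    long-term risk/return point. A stand-in for the idea of risk-return
    balancing, not QPP's actual algorithm."""
    slope = (market_return - risk_free) / market_vol
    return risk_free + slope * asset_vol

# Example: an asset projected to be 50% more volatile than the market
print(f"{risk_balanced_return(0.225):.1%}")     # higher risk -> higher projected return
```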

Let’s now go back to the original issue in this article. The Fortune article cited here provides a list of actively-managed mutual funds that have been selected using a screening tool; the funds were, apparently, each in the top 20% of their categories over the last ten years. If you read the article from Fortune, it sure sounds like these funds are a good bet. When we create a proxy benchmark ETF portfolio that matches risk, return, and (generally) style, we get historical performance over the past four years that is as good as, and even a little better than, that of the portfolio of actively-managed funds. Further, the projected future performance and risks match up well between these two portfolios. The difference in performance is consistent with the higher fees of the actively-managed funds.

[Table: performance comparison, net of fees]

So, the screens make these funds look good, but a portfolio of index ETFs performs just as well before fees and even better after we account for fees. The results suggest that the use of financial screens did not identify funds that have really out-performed over the historical period or that are likely to out-perform in the future, when compared to a relevant benchmark portfolio.

What are the uncertainties here? First, my set of proxy ETFs does not perfectly replicate the asset allocation of the portfolio of actively managed funds. I acknowledge that, but it is not the principal issue here. The principal issue is coming up with a solid portfolio allocation, from a specified set of sectors, that gives you the most return for a given level of risk. Both of the portfolios look reasonable from an asset allocation standpoint. Given the choice between my portfolio of ETFs and the portfolio of actively-managed funds, I would choose the portfolio of ETFs. Why? Higher fees are a guaranteed drag on performance, and the ‘hot hand’ effect of active managers is relatively short-lived.
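
To see why higher fees are a guaranteed drag, consider a quick compounding example; the gross return and expense ratios below are assumptions for illustration, not figures from the article.

```python
# Illustrative fee drag: the same gross return compounded for 20 years at two
# different expense ratios (both expense figures are assumptions).
gross_return, years = 0.08, 20
for label, fee in [("index ETF, 0.2% expenses", 0.002),
                   ("active fund, 1.2% expenses", 0.012)]:
    growth = (1 + gross_return - fee) ** years
    print(f"{label}: $1 grows to ${growth:.2f}")
```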

Further, even though the projected risk of my ETF portfolio is about 1% per year higher, taking on an extra 1% in risk to get an extra 1% in average return is a good trade. In other words, why would someone go with the portfolio of actively managed funds over the portfolio of ETFs?

The process that we have gone through in this article contrasts the approach that retail investors and journalists tend to use to examine fund performance with the process that institutional investors use. Retail investors use financial screens and related tools; professionals tend to focus on performance attribution. What I have shown here is a performance attribution process that any investor or advisor can use. This process uses our Quantext Portfolio Planner. There may be readers from the professional side who will quibble with my approach, since this is not exactly what most professionals do; that said, my approach is broadly equivalent. A standard professional approach to performance attribution is to use some form of Style Analysis, a method developed by Nobel Laureate Bill Sharpe. For a good standard paper by Dr. Sharpe on this approach, readers are referred to this paper.

In this approach, multiple linear regression (a standard statistical tool) is used to attribute the performance of a fund or portfolio to a series of indices. In this case, we would take our portfolio of actively managed mutual funds and attempt to explain its performance using a set of style indices (small-cap, mid-cap, energy, and so on). This is a fine process to go through, but you would not come out with an alternative portfolio of real funds at the end. You could use the index attribution to then allocate to a series of ETFs, but that is one more step, and it is one reason I like the process used here. A second reason is that Style Analysis would not necessarily match the total volatility (i.e. risk, as measured by standard deviation) of the portfolio under analysis. What we have applied in this article is a type of performance attribution that yields an alternative portfolio of index ETFs directly.

How well does our approach compare to Style Analysis? A benchmark that Dr. Sharpe applies, and that is common in performance attribution, is the R-squared of the fund against the index model (note: this is not the same R-squared that we typically discuss for a stock or fund against the S&P 500). If the R-squared is 90%, then 90% of the variance in the fund or portfolio's performance can be explained by allocations to the set of indices. We can calculate the same statistic for our proxy ETF portfolio: our ETF benchmark portfolio has an R-squared of 92% against the portfolio of actively managed funds, which is consistent with the results that Dr. Sharpe cites for Style Analysis in the paper cited above.

This is not entirely an apples-to-apples comparison. Dr. Sharpe uses more data (five years vs. my four years) but also a larger set of indices (twelve asset classes vs. my eight ETFs). Note: we could not do the five-year comparison because ADRE does not have five years of data. The longer data record would tend to reduce the resulting R-squared, while the larger set of asset classes would tend to increase it; if we included that many proxy style ETFs, we could get a higher R-squared. For the purposes of an investor, capturing 92% of the variance in monthly returns is just fine and about as good a match as we would ever want, in terms of style.
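
For readers who want to try the Sharpe-style calculation themselves, here is a minimal sketch of returns-based style analysis: find non-negative index weights that sum to one and minimize the variance of the difference between the fund (or fund portfolio) and the weighted index mix, then report the style R-squared. This is a generic implementation of the published idea, not the exact procedure used in QPP or in Dr. Sharpe's paper, and the input arrays are assumed to be prepared separately.

```python
import numpy as np
from scipy.optimize import minimize

def style_analysis(fund_returns: np.ndarray, index_returns: np.ndarray):
    """Returns-based style analysis (Sharpe): long-only index weights summing to
    one that minimize the variance of the residual, plus the style R-squared."""
    n = index_returns.shape[1]

    def tracking_variance(w):
        residual = fund_returns - index_returns @ w
        return residual.var(ddof=1)

    result = minimize(
        tracking_variance,
        x0=np.full(n, 1.0 / n),                     # start from equal weights
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    weights = result.x
    r_squared = 1.0 - tracking_variance(weights) / fund_returns.var(ddof=1)
    return weights, r_squared

# Usage: monthly returns of the fund portfolio (length T) and of the style
# indices or ETFs (T x k), prepared separately.
# weights, r2 = style_analysis(fund_portfolio_returns, style_index_returns)
```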

While much of the discussion above may be over the heads of the average investor or his or her advisor, the ‘take away’ point is that professionals ask whether an actively managed fund (or a portfolio of actively managed funds) can be demonstrated to provide value beyond a set of passive index funds that replicate the underlying style. Individual investors tend to rely on financial ‘screens’, which may or may not have any value in determining whether one or more actively managed funds actually generate value, net of fees. In the sample case shown here, a portfolio made up of performance-screened, actively managed funds does not appear to generate any net value beyond what could be expected from a (relatively) similarly-styled portfolio of ETFs, even though these funds were in the top quintile in terms of screened performance.

Source: Financial Screens and Fund Performance Measurement