In my previous article entitled "Actively Passive Asset Allocation," I used the following stocks to illustrate time diversification: American Express (AXP), ConocoPhillips (COP), Johnson & Johnson (JNJ), Kraft Foods (KFT), Procter & Gamble (PG), Coca-Cola (KO), U.S. Bancorp (USB), Wal-Mart Stores (WMT), Wells Fargo (WFC), and Wesco Financial (WSC).

The individual security data were taken from inception, with the common data running from June 2002 to April 2011, and the expertGTC3 Optimizer was used to perform the frontier calculations. The efficient portfolio was compared against a portfolio consisting of equal amounts invested in each of the 10 securities above, starting in March 2004 (a randomly chosen start date). The optimizer was tasked to provide an efficient portfolio mix with the same expected return as the equal-mix portfolio (using data as at 2004) but with reduced volatility.

It was observed that the optimized portfolio (blue bars) beat the equal-mix portfolio (red bars) in all seven years since March 2004. Not shown is that in trials started in each of the rolling months after March 2004, the optimized portfolio also beat every equal-mix portfolio that was generated.

In the same article, I also cautioned that successful implementation of this methodology requires understanding its underlying assumptions and knowing what to do when those assumptions are violated.

**Go behind the numbers**

Many optimization tools exist that simplify the otherwise onerous number of calculations you would have to do by hand or in Excel. The three types of data that drive these tools are the rate of return of each security, its standard deviation (volatility), and the correlation between each pair of securities in the portfolio.

While deriving the efficient frontier is a robust and mathematically well-defined procedure, the efficacy of your optimization results is a function of how well you understand the assumptions underlying the Markowitz efficient frontier model and how you handle deviations from them.
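To make the procedure concrete, here is a minimal sketch of the minimum-variance calculation behind an efficient frontier, driven by exactly the three inputs named above. The expected returns, volatilities, and correlations below are hypothetical placeholders, not figures from the article, and short selling is allowed for simplicity:

```python
import numpy as np

# Hypothetical inputs: expected returns, volatilities, pairwise correlations.
mu = np.array([0.08, 0.10, 0.12])
vol = np.array([0.15, 0.20, 0.25])
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
cov = np.outer(vol, vol) * corr      # covariance matrix from vols and correlations

def min_variance_weights(mu, cov, target):
    """Weights minimizing portfolio variance w'Cov w, subject to
    w'mu = target and weights summing to 1 (Lagrange-multiplier solution)."""
    ones = np.ones_like(mu)
    inv = np.linalg.inv(cov)
    A = np.array([[mu @ inv @ mu, mu @ inv @ ones],
                  [ones @ inv @ mu, ones @ inv @ ones]])
    lam, gam = np.linalg.solve(A, np.array([target, 1.0]))
    return inv @ (lam * mu + gam * ones)

# Ask for the least volatile mix that still expects a 10% return,
# mirroring the task given to the optimizer in the experiment above.
w = min_variance_weights(mu, cov, target=0.10)
print(w)
```

Sweeping `target` over a range of returns and recording the resulting variance traces out the efficient frontier itself.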

Do you use a forecasted rate of return or a historical return? If you use a historical return, how far back do you go? Do the correlations among each pair of securities change over time? If so, how do you account for this? When and how often should you re-balance your optimized portfolio? How do you detect a drift in the historical mean and how should this be handled?

**Systemic or Systematic**

Systematic risk is a term applied to volatility that affects entire market segments. This could be within the context of a sector, an economy, or, perish the thought, the globe; a shock of that global scale would be *systemic*. Notice the dip in 2009 in the bar chart above, when the stock market began to turn awry in 2008? This was evidence of systematic risk within the group of securities being analysed.

In my previous article I showed how losing your capital along the way would require you to earn strenuously high returns for the remaining years in order to achieve your original goals. For example, in a 5-year investment that promises a 10% annual return, losing 30% of your capital in the first year would mean you needed roughly 23% every year for the next 4 years to achieve what you had originally hoped to achieve.
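The recovery arithmetic is worth checking for yourself. Using the assumed figures above (a 10% target over 5 years and a 30% first-year loss):

```python
# What must the remaining 4 years return to still hit 10%/yr over 5 years
# after a 30% loss in year one?
target_growth = 1.10 ** 5          # the 5-year growth a 10% annual return delivers
after_loss = 1.0 - 0.30            # capital remaining after a 30% loss
required = (target_growth / after_loss) ** (1 / 4) - 1
print(f"{required:.1%}")           # about 23.2% per year
```

The asymmetry is the point: a 30% loss demands far more than a 30% gain to undo, because the recovery compounds from a smaller base.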

The market dip in 2008/2009 was exactly one such scenario.

If you were faced with this dip during that time as an investor, you would have been tempted to exit the market at all costs and later try to bounce back by chasing exceptionally high returns. Or you could have stayed the course and waited for the markets to bounce back, which they did a year later.

Where diversification across securities fell short because of systematic risk within this small group of 10 stocks (you could have diversified internationally but that is not the point of this article), patience, courage and the effects of time diversification saved the day. But what if the markets did not bounce back? Let's examine this in greater detail.

**The Law of Large Numbers**


There is a usual disclaimer in financial plans or investment advice that past performance is not necessarily indicative of future performance. So why do we use a historical average?

The answer is simply that it is not our intention to forecast anything in the above optimization process. Rather, the historical average return serves as an anchor point to which rolling portfolio annual returns will revert at some future date.

To understand stability, let's look at the definition of an average. The simple average of 2, 4 and 6 is 4, but so is the average of -20, 10 and 22. For an average to be useful for our optimization purpose, we need to go back a long way into history.

A large number of data points going into the pot provides stability. To illustrate this, let's look at another way of arriving at the same answer above. First, calculate an interim average of 2 and 4, which is 3. Now take the difference between 6 (the third data point) and this interim average of 3 and divide the total number of data points (in this case, 3) into this difference to get 1. Add this result to the interim average of 3 to arrive at 4, which is the same average that was calculated previously. Try it yourself with -20, 10 and 22.

This somewhat convoluted way of calculating the average gives you a good idea of what stability is all about. If for example we had an interim average of 3 arising from 1000 data points rather than only 2 data points, taking the difference between 6 and the interim average of 3 and dividing 1001 into this difference will give you a small number to add to the interim average.

In other words, the average calculation is relatively stable when the number of data points going into its calculation is large. Even if the next data point going into the calculation is an outlier and quite different from the interim average, its impact is mitigated by the large number of data points that divide into the difference between this outlier and the interim average.

Now that we have anchored our rate of return, the next thing to consider is normality.

**Abnormal returns**


Anchoring our average rate of return makes it easier to determine whether the annual returns are normally distributed. This is fundamental to the Markowitz frontier model. Deviations from normality will result in portfolios that do not perform as expected.

Normality means that there is a 50% chance that a return at one instance will occur on either side of the average return (see 'Back to Basics' section in my previous article).

But just as the coin toss experiment in my previous article showed that a 'Head' does not have to be immediately followed by a 'Tail' and vice versa, an annual return can take some time before it reverts to the mean. Check out the daily rolling return difference graph for ConocoPhillips (COP) on the right, where a return difference is defined as the actual annual return minus the average return. Do you see a reversion to the mean?
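The return-difference series in that graph can be sketched as follows. The prices here are simulated purely for illustration; with real data you would substitute the COP price history:

```python
import numpy as np

# Simulated daily prices standing in for a real price history (illustrative only).
rng = np.random.default_rng(0)
prices = 50 * np.cumprod(1 + rng.normal(0.0003, 0.02, size=1500))

# Rolling one-year return (roughly 252 trading days), then subtract
# the long-run average to get the "return difference" series.
window = 252
rolling = prices[window:] / prices[:-window] - 1
diff = rolling - rolling.mean()
print(diff[:3])   # runs of positive and negative differences, reverting only slowly
```

Plotting `diff` over time is what reveals how long the series can linger on one side of zero before mean reversion kicks in.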

Now take a look at this graph of the Indonesian stock index. Do you perhaps see some abnormality in this graph?

Performing optimization on a universe that includes securities that have such abnormalities results in portfolios that do not perform as expected. Such abnormalities may occur because the market is going through an unprecedented period of growth or decline, or more importantly, the fact that the number of data points is not large enough to have seen such periods of growth or decline in the past. It is, for example, not advisable to include newly launched securities into an optimization process.
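One crude way to screen a security before admitting it to the optimization universe, in the spirit of the cautions above, is to check that its history is long enough and that its return differences look roughly symmetric about the mean. The thresholds below are illustrative assumptions, not rules from the article:

```python
import numpy as np

def looks_roughly_normal(returns, min_points=60):
    """Crude normality screen: enough history, modest skewness, and
    returns landing on either side of the mean about half the time."""
    r = np.asarray(returns, dtype=float)
    if len(r) < min_points:          # a short history hasn't seen enough regimes
        return False
    d = r - r.mean()
    skew = (d ** 3).mean() / d.std() ** 3
    above = (d > 0).mean()           # should hover near 50% under symmetry
    return abs(skew) < 1.0 and 0.35 < above < 0.65

rng = np.random.default_rng(1)
print(looks_roughly_normal(rng.normal(0.08, 0.15, 200)))  # passes: long, symmetric
print(looks_roughly_normal(rng.normal(0.08, 0.15, 20)))   # fails: too little history
```

A newly launched security fails such a screen almost by definition, which is exactly why the text advises against including one.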

**Conclusion**

In this article, I talked about one of the critical assumptions underlying the Markowitz model. While the workings of the model are mathematically well-defined and robust, violating the assumptions that underpin it is akin to putting diesel in a vehicle that requires premium-grade petrol and expecting the engine to perform.

Go behind the numbers when you calculate any of the inputs into this model, such as the average return. Having an average return that is anchored allows for a better understanding of normality. And normality is critical to this optimization process if you wish to arrive at a model portfolio that does what it is supposed to do.

Abnormality may stall a reversion to the mean. While there are advanced methods to deal with abnormality, have a clear understanding of the fundamentals before delving deeper. After all, with your feet in the oven and your head in the freezer, on *average* you are doing just fine. Or are you?

**Disclosure:** I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.