When I started writing for Seeking Alpha I knew this day would come. I knew that publishing near-real-time buy‑and‑sell signals would one day require an explanation as to why I missed an upturn. But I also knew that transparency required just that, and I knew that the results would fit into a statistical distribution that would probably work out fine. As I write this, I'm enjoying the opportunity to provide greater texture as to the character of the algorithm I follow.
In my last article, I wrote:
"This is the hardest time to follow the model - a sell signal in the face of a strong upward trend, and the corresponding instincts for greed encouraged by that trend. The market is showing serious upward momentum, but the VIX curve says don't count on it."
So, I sold last Monday morning; the market dipped for a few days, then rallied into week's end.
Now, as of Friday's close, the Easy VIX dashboard looked like this, and I'm back into the same ETF basket of SPY, DIA, QQQ, and IWM, plus a new addition I'll disclose at the end of the article.
Easy VIX Dashboard, June 28, 2019 Close
Source: Michael Gettings Data: VIXCentral.com
On Friday, the SHAPE entered a safe zone, so the slope metrics became moot as to sustaining the prior sell signal, but in any case, the Primary Slope is sufficiently positive to indicate a buy signal.
I was traveling Friday when the buy signal came during the trading day. In my modeling and in these articles, I've consistently measured results based on executing close-of-business signals at the following day's opening prices, and I'll be consistent here. But I can't help lamenting that sometimes life gets in the way of trading; an intraday buy on Friday would have made the sell interval profitable. With the strong rally Monday morning, that didn't work out, but so be it. Buying into Monday morning's rally resulted in a foregone profit of 1.06% over the 5-day "Out" period.
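The arithmetic behind that convention can be sketched in a few lines. This is an illustrative sketch, not the author's actual code, and the prices are hypothetical; the only figure taken from the article is the ~1.06% foregone gain over the "Out" period.

```python
# Sketch of the article's measurement convention: a close-of-business signal
# is executed at the NEXT day's opening price, so an "Out" period's result is
# the move from the exit execution price to the re-entry execution price.

def interval_foregone_return(exit_open: float, reentry_open: float) -> float:
    """Percentage move from exit execution to re-entry execution.

    Positive means the market rose while out (a foregone gain, i.e. an
    opportunity cost); negative means the exit avoided a loss.
    """
    return (reentry_open / exit_open - 1.0) * 100.0

# Hypothetical prices: sold at Monday's open, re-entered at the following
# Monday's open after the market rallied ~1.06% over the 5-day "Out" period.
cost = interval_foregone_return(exit_open=293.50, reentry_open=296.61)
print(f"Foregone gain: {cost:.2f}%")  # prints "Foregone gain: 1.06%"
```

Note that a same-day (intraday) execution would use Friday's price instead of Monday's open, which is exactly why the modeled result and a real-time trade can differ.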
One of the comments offered by a reader of my last article was this one: "If the market rallies hard, will you believe that indicators like this are voodoo?" That is probably a sentiment shared by a material portion of the population, so I think it's worth discussing.
I replied, in part, with this:
"If it rallies hard I'll believe it's one materially bad outcome over eleven years; I won't be shocked. Over the 11 years there have been bad weeks where the algorithm forewent gains - as much as 7% in the biggest missed opportunity when the market was in a crazy rally, ultimately adding 60+% per the ETF basket. It's a risk mitigation tool and it's not perfect, but on average it doubles returns and seriously constrains drawdown risk."
I'll put some statistics behind that response. The worst week referenced in that reply was ultimately mitigated before the sell interval ended; the frequency distribution of gains and losses for all sell intervals since 2008 is shown here.
Histogram of All Sell Intervals Since May-2008
Source: Michael Gettings Data Source: VIXCentral.com & Fidelity
Looking at the chart above, you can see that, in terms of frequency alone, the positive results outweigh the negatives by a substantial margin. In fact, positives represent 75% of outcomes; the other 25% of the time, the algorithm produces a foregone opportunity cost. But that tells only part of the story. Notice how skewed the positive outcomes are: they stretch all the way to roughly 20% gains, while losses are constrained to about 6%. Those figures are for intervals of variable duration, ranging from 2 to 20 trading days and averaging 5.9.
Here is a graph of the same frequency distribution but weighted by the magnitude of gains and losses. In other words, each column reflects the probability multiplied by the appropriate gain or loss for its bin.
Frequency Distribution Weighted By The Magnitude Of Gains And Losses
Source: Michael Gettings Data Source: VIXCentral.com & Fidelity
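The weighting described above - each bin's probability multiplied by the gain or loss that bin represents - can be sketched as follows. This is an illustrative reconstruction, not the author's code; the interval returns, the 2% bin width, and the use of bin midpoints are all assumptions for demonstration.

```python
# Sketch: turn a list of per-interval returns (in percent) into a
# magnitude-weighted frequency distribution, where each bin's bar equals its
# empirical probability times the bin's midpoint return.
import numpy as np

def weighted_distribution(interval_returns, bin_width=2.0):
    """Return (bin_midpoints, probability * midpoint) for each bin."""
    returns = np.asarray(interval_returns, dtype=float)
    lo = np.floor(returns.min() / bin_width) * bin_width
    hi = np.ceil(returns.max() / bin_width) * bin_width
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, edges = np.histogram(returns, bins=edges)
    probs = counts / counts.sum()          # empirical probability per bin
    mids = (edges[:-1] + edges[1:]) / 2.0  # representative return per bin
    return mids, probs * mids              # probability-weighted magnitude

# Hypothetical interval returns (percent): mostly small, a few large positives,
# echoing the skew the article describes.
mids, weighted = weighted_distribution([-5, -3, -1, 0.5, 1, 2, 3, 6, 9, 19])
```

The point the weighting makes visible: small gains and small losses roughly cancel, while the large positive tail has no negative counterpart to offset it.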
Here you can see the texture of the algorithm's performance, and why I describe it as a risk mitigation tool. I view the segment of the graphic highlighted with a pale-yellow box as cost‑free insurance against drawdown risk. It reflects a preponderance of gains of less than 4% per interval, which are offset by similar loss probabilities. The insurance is free because those gains and losses offset each other, and over time the small magnitudes look like noise. A myopic view might see any particular loss of a few percent as a failure; I don't - it's noise.
Thankfully, there is no offsetting loss probability for the large contributions the algorithm makes when markets fall by 5% to 8%, and likewise no offsets to the out-sized gains at the right side of the chart when markets fall substantially. This explains why, from the beginning, I've balked at calling it a market-timing tool, instead describing it as a risk mitigation tool that happens to produce disproportionately large returns.
So, where does last week's outcome fall? The 1.06% foregone gain fits into the cost‑free insurance region, as indicated by the blue arrow in the graph. All is well, though I still wish I could have made the intraday trade on Friday.
There is another trend that is worth mentioning. Over 11 years, the Easy VIX algorithm has averaged 5.9 sell calls per year. In good years, like 2017, there were only 2 sell signals. But over the most recent 12 months, there have been 11 sell intervals as shown below in yellow.
Eleven Sell intervals - Most Recent 12 Months
Source: Michael Gettings Data Source: Fidelity
The frequency of those sell signals exceeds that of the worst period I've studied since 2008/2009. I suspect that might be telling us something, but there is a cause-and-effect question: does the sell-signal count increase because the market is flat, or does a rising count indicate a coming decline? Or possibly both. I have no statistically valid evidence to interpret this, just preliminary pattern recognition. The following graph shows the number of sell signals for each rolling 12-month period since May of 2009, the earliest 12-month period in my data set.
Count of Rolling 12-month Sell Signals
Source: Michael Gettings
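A trailing-count series like the one charted above is straightforward to compute. The sketch below is illustrative only - the signal dates are hypothetical, not the algorithm's actual history - and it assumes a simple 365-day trailing window.

```python
# Sketch: count sell signals falling in each trailing 12-month window.
from datetime import date, timedelta

def rolling_12m_counts(signal_dates, as_of_dates):
    """For each as-of date, count signals in the preceding 365 days."""
    counts = []
    for d in as_of_dates:
        window_start = d - timedelta(days=365)
        counts.append(sum(1 for s in signal_dates if window_start < s <= d))
    return counts

# Hypothetical sell-signal dates over the past year.
signals = [date(2018, 8, 1), date(2018, 10, 15),
           date(2019, 2, 5), date(2019, 5, 20)]
counts = rolling_12m_counts(signals, [date(2019, 6, 28)])
print(counts)  # → [4]: all four hypothetical signals fall within the trailing year
```

Evaluated at every signal date (or month-end) in sequence, this produces the rolling series whose peaks and troughs the chart highlights.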
If I had earlier data, the 2008 trailing count would probably exceed those at the left of the chart, so I'll ignore that very early peak. That leaves three periods of interest, which I've highlighted with arrows labeled A, B, and C.
During period 'A' the count was falling and the market was in a relatively steady rally; the same was true during period 'B'. Period 'C' is interesting, however, in that the rising number of sell signals seemed to portend the falling market of 2015. Now, here we are in 2019 at a peak sell count - one that has continued to climb for six months despite the significant correction of December 2018. Stepping back to this macro-level view, I'm concerned that the increase in sell signals is indicative of a coming market top. I wouldn't trade on this observation alone, but it makes me more convinced that any given sell signal might be an important exit opportunity. In other words, consistent with my decision of last week, I won't second-guess the algorithm when it tells me to sell.
This article is already running long, so once again I've deferred the discussion of applying the algorithm to technology stocks and of adding a leveraged ETF to the basket. Look for it next week, but I'll provide a hint: I've added a leveraged ETF, SSO, to the mix I acquired this week.
Thanks again for following me; I hope you find some of this insightful.
Disclosure: I am/we are long SPY. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Additional disclosure: I trade all tickers mentioned using the algorithm described.