The American Mathematical Society recently published, in the May issue of its *Notices*, an article entitled "Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance," by David H. Bailey, Jonathan M. Borwein, Marcos López de Prado, and Qiji Jim Zhu, about the flaws of back-testing. It suggests that investment terms like Fibonacci ratio, Elliott wave and stochastic oscillator imply a degree of mathematical certainty that just isn't there. The article is more than an argument; it is a call to action for all mathematicians to speak up against this "madness". Really?

"As early as the 18th century, physicists exposed the nonsense of astrologers," the authors claim.

"Yet mathematicians in the 21st century have remained disappointingly silent with regards to those in the investment community who, knowingly or not, misuse mathematical techniques such as probability theory, statistics and stochastic calculus. Our silence is consent, making us accomplices in these abuses."

Well, it's one thing to say back-testing is unscientific, but it's another to accuse the investment community of misleading investors. Let's take a few steps back...

**Fundamental vs. Technical**

There are two schools of thought in the world of investments. One believes in technical analysis and the other in fundamental analysis. Technical analysis is based on historical price movements, while fundamental analysis is based on the intrinsic value of the stock. For the most part, analysts and investment managers looking for long-term growth have relied on the merits of fundamental analysis as a way to ensure superior returns, but intrinsic value is difficult to determine. This is where probability theory, statistics and stochastic calculus come in, helping to provide predictions in an unpredictable world.

"We are not implying that those technical analysts, quantitative researchers or fund managers are 'snake oil salesmen,'" said David H. Bailey, a research fellow at the University of California and one of the paper's four authors. He goes on to say that "Hedge-fund managers are often unaware that most back-tests presented to them by researchers and analysts may be useless."

So, the authors aren't going after the investment community -- they're going after their own: the quant analysts. They want the mathematicians getting paid $400K a year to produce these research studies to be exposed as the financial-engineering "sell-outs" the authors believe them to be.

The article clearly blames the "bad math" coming out of financial engineering think tanks for "the proliferation of investment products that are misleadingly marketed as mathematically sound." If this is true, not only are the seminal studies we've all come to reference, with names like Fama, French, and Shiller attached, called into question, but the investment vehicles -- i.e., the hedge funds created to embody these concepts -- may be just as faulty and vulnerable to risk.

**Back-testing**

Back-testing is a part of every academic investment analysis. It is simply the application of a particular trading strategy to historical data to determine how well the strategy might work in the future. It is not uncommon for assumptions to be made in order for the tests to make "sense". The article accuses mathematicians and investment analysts of cherry-picking, refining which data to keep and which to eliminate in order to maximize the strategy's measured performance. This practice is referred to as overfitting. It's a little like conducting medical trials but only publicizing the best results.
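The effect is easy to demonstrate. Below is a minimal sketch (not the authors' code; every name and parameter is an illustrative assumption): we generate pure-noise daily returns, back-test many random long-or-flat "strategies", keep the one with the best in-sample Sharpe ratio, then check it on held-out data. Because the market series has no real structure, the impressive in-sample figure is an artifact of selection, which is precisely the overfitting problem the paper describes.

```python
import random
import statistics

random.seed(42)

N_DAYS, N_STRATEGIES = 500, 200
split = N_DAYS // 2  # first half = in-sample, second half = out-of-sample

# Market returns are pure noise: by construction, no strategy has an edge.
market = [random.gauss(0.0, 0.01) for _ in range(N_DAYS)]

def sharpe(returns):
    """Annualized Sharpe ratio of a daily return series (risk-free rate = 0)."""
    mu = statistics.mean(returns)
    sd = statistics.pstdev(returns)
    return (mu / sd) * (252 ** 0.5) if sd > 0 else 0.0

best_is, best_signal = float("-inf"), None
for _ in range(N_STRATEGIES):
    # Each "strategy" is just a random in-or-out-of-the-market signal.
    signal = [random.choice([0, 1]) for _ in range(N_DAYS)]
    is_returns = [s * r for s, r in zip(signal[:split], market[:split])]
    s = sharpe(is_returns)
    if s > best_is:
        best_is, best_signal = s, signal

# The cherry-picked winner, evaluated on data it never saw:
oos_returns = [s * r for s, r in zip(best_signal[split:], market[split:])]
print(f"best in-sample Sharpe: {best_is:.2f}")
print(f"out-of-sample Sharpe:  {sharpe(oos_returns):.2f}")
```

Run it and the selected strategy posts a strong in-sample Sharpe ratio while its out-of-sample performance collapses toward zero -- exactly what an investor sees when a marketed back-test fails to repeat in live trading.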

Indeed, there may be some truth to these accusations. Many managed and quantitative funds use computer-based investment strategies, and their performance has lagged the market over the past five years. Managed funds use computer models to identify and follow pricing trends; quantitative strategies use complex mathematical models to predict movements across all asset classes. According to data from Bloomberg, managed funds lost an annualized 1% for the five-year period ending March 31, while quantitative funds gained an average of 7.5%. The S&P 500 averaged 21% over the same period.

**Conclusion**

So what's the answer? The authors of the paper suggest a review process for financial products conducted by an independent body such as FINRA. That could certainly lend managed funds more credibility, but in principle historical returns *should* be enough. In the end, both camps have valid points, and it's up to investors to do their due diligence before committing to any strategy -- computer-aided or not.

**Disclosure:** I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.