By Carl Howe
The magazine Technology Review (registration required) has some wonderful articles for those interested in the fusion of technology and business. One of the most fascinating shows up in the latest issue: "The Blow-Up," in which Bryant Urstadt lays some of the blame for this past August's credit crunch at the feet of some of my MIT classmates, known prosaically as quants. And while Urstadt has the temerity to include the differential equation for Black-Scholes option valuation, the article overall is a brilliant bit of storytelling for anyone trying to grasp the forces behind today's market spasms. Here's a great example from late in the article:
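For readers curious about the equation Urstadt alludes to: the Black-Scholes model also yields a closed-form price for a European call option. Here's a minimal Python sketch; the parameter values are my own illustrative choices, not anything from the article:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: an at-the-money call, one year to expiry
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20)
print(round(price, 2))  # roughly 10.45
```

Note that volatility (sigma) is the one input nobody observes directly; it has to be estimated from history, which is exactly where the trouble described below begins.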
One trader I spoke with at a $10 billion hedge fund based in New York said that his computer executed 1,000 to 1,500 trades daily (although he noted that they were not what he called "intra-day" trades). His inch-thick employment contract precluded my using his name, but he did talk a little bit about his approach. "Our system has a touch of genetic theory and a touch of physics," he said. By genetic theory, he meant that his computer generates algorithms randomly, in the same way that genes randomly mutate. He then tests the algorithms against historical data to see if they work. He loves the challenge of cracking the behavior of something as complex as a market; as he put it, "It's like I'm trying to compute the universe." Like most quants, the trader professed disdain for the "sixth sense" of the traditional trader, as well as for old-fashioned analysts who spent time interviewing executives and evaluating a company's "story."
High-frequency trading is likely to become more common as the New York Stock Exchange gets closer and closer to a fully automated system. Already, 1,500 trades a day is conservative; the computers of some high-frequency traders execute hundreds of thousands of trades every day.
Linked with high-frequency trading is the developing science of event processing, in which the computer reads, interprets, and acts upon the news. A trade in response to an FDA announcement, for example, could be made in milliseconds. Capitalizing on this trend, Reuters recently introduced a service called Reuters NewsScope Archive, which tags Reuters-issued articles with digital IDs so that an article can be downloaded, analyzed for useful information, and acted upon almost instantly.
All this works great, until it doesn't. "Everything falls apart when you're dealing with an outlier event," says the trader at the $10 billion fund, using a statistician's term for those events that exist at the farthest reaches of probability. "It's easy to misjudge your results when you're successful. Those one-in-a-hundred events can easily happen twice a year."
The events of August were outliers, and they were of the quants' own making. (Some dispute that verdict: see "On Quants") To begin with, quants were indirectly responsible for the boom in housing loans offered to shaky candidates.
Derivatives allow banks to trade their mortgages like bubble-gum cards, and the separation of the holder of a loan from the writer of a loan tended to create an overgenerous breed of loan officer. The banks, in turn, were attracted by the enormous market for derivatives like CDOs. That market was fueled by hedge funds' appetite for products that were a little riskier and would thus produce a higher return. And the quants who specialized in risk assessment abetted the decision to buy CDOs, because they assumed that the credit market would enjoy nine or so years of relatively benign volatility.
It was a perfectly rational assumption; it just happened to be wrong. Matthew Rothman, a senior analyst in quantitative strategies at Lehman Brothers, called the summer a time of "significant abnormal performance"; according to his calculations, it was the strangest in 45 years. James Simons's Renaissance Technologies fund slid 8.7 percent in the first week of August, and in a letter to his investors, he called it a "most unusual period." As Andrew Lo put it, "Unfortunately, life has gotten very interesting." The Wall Street Journal called it an "August ambush."
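The "touch of genetic theory" the anonymous trader describes — randomly generating trading rules, then keeping whichever backtests best — can be sketched roughly like this. Everything here, from the moving-average rule structure to the synthetic price series, is my own illustrative assumption, not the trader's actual system:

```python
import random

random.seed(42)

def make_random_rule():
    """Randomly 'mutate' a rule: buy when the short moving average
    exceeds the long one by some threshold, otherwise stay flat."""
    return {
        "short": random.randint(2, 10),
        "long": random.randint(11, 40),
        "threshold": random.uniform(0.0, 0.02),
    }

def backtest(rule, prices):
    """Score a rule against historical prices: total one-step profit
    of every trade the rule would have made."""
    pnl = 0.0
    for t in range(rule["long"], len(prices) - 1):
        short_ma = sum(prices[t - rule["short"]:t]) / rule["short"]
        long_ma = sum(prices[t - rule["long"]:t]) / rule["long"]
        if short_ma > long_ma * (1 + rule["threshold"]):
            pnl += prices[t + 1] - prices[t]  # hold long for one step
    return pnl

# Synthetic 'historical data': a noisy random walk with slight drift
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0.02, 1.0))

# Generate a population of random rules; keep the best backtester
population = [make_random_rule() for _ in range(50)]
best = max(population, key=lambda r: backtest(r, prices))
print(best, round(backtest(best, prices), 2))
```

The catch, which matters for everything that follows, is that a rule surviving this screen may simply have memorized the past rather than cracked the market.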
Now, one of the things MIT taught me is to always ask, "What are the assumptions and limits behind this system?" Urstadt's article runs straight into one of the limits of market modeling and derivative valuation. The words that hit me like a ton of bricks were "statistician," "stochastic," and "outliers."
I remember my statistics professor sandbagging my class with one particularly nasty problem set. We dutifully calculated all the appropriate means, standard deviations, and expected outcomes, and yet the experiment he had us model produced outcomes wildly different from our predictions. Smarting from the low marks we all got on that problem set, many of us asked, "Why aren't these answers right?" And I remember the professor's sly smile when he answered, "Your answers would have been right if all these statistical events were independent. They weren't. And when you don't have independent, random events, statistics loses much of its predictive power."
And that's the moral of "The Blow-Up" from last summer: statistics works fine when you have independent, random events. But when programmed trades and options start connecting trades to one another, the events aren't independent any more. We don't see an independent random walk down Wall Street, but rather one driven by non-random forces, many of them programmed. The assumptions behind the models are no longer true, so the models produce unexpected results. Boom.
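A toy Monte Carlo makes my professor's point concrete: couple the individual outcomes through a shared factor and the spread of the total widens dramatically, even though each individual outcome looks statistically identical. The one-factor setup and every parameter below are my own illustration:

```python
import random
from math import sqrt

random.seed(1)

N_POSITIONS = 100   # positions in the 'portfolio'
RHO = 0.3           # coupling induced by a shared market factor
TRIALS = 2000

def portfolio_std(rho):
    """Std dev of the summed outcome of N positions whose shocks
    share a common factor with weight rho (rho=0 means independent)."""
    totals = []
    for _ in range(TRIALS):
        z = random.gauss(0, 1)  # common factor hitting every position
        total = sum(rho * z + sqrt(1 - rho**2) * random.gauss(0, 1)
                    for _ in range(N_POSITIONS))
        totals.append(total)
    mean = sum(totals) / TRIALS
    return sqrt(sum((t - mean) ** 2 for t in totals) / TRIALS)

indep = portfolio_std(0.0)   # theory: sqrt(100) = 10
coupled = portfolio_std(RHO) # theory: sqrt(100 + 9900 * 0.09), about 31.5
print(round(indep, 1), round(coupled, 1))
```

Each position has the same mean and variance in both runs; only the dependence changed, and the "one-in-a-hundred" portfolio losses suddenly become far more likely.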
Urstadt's article not only talks about this problem, but provides evidence for it:
Another related explanation for the August downturn was that the quants' models simply ceased to reflect reality as market conditions abruptly changed. After all, a trading algorithm is only as good as its model. Unfortunately for quants, the life span of an algorithm is getting shorter. Before he was at RiskMetrics, Gregg Berman created commodity-trading systems at the Mint Investment Management Group. In the mid-1990s, he says, a good algorithm might trade successfully for three or four years. But the half-life of an algorithm's viability, he says, has been coming down, as more quants join the markets, as computers get faster and able to crunch more data, and as more data becomes available. Berman thinks two or three months might be the limit now, and he expects it to drop.
Said another way, the models only work when no one is using them. As soon as people start using them, they affect the market, and the market changes, sometimes in ways that were never expected by the models. The models break down.
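That self-defeating dynamic — a model's edge shrinking as more traders run it — can be caricatured in a few lines. The linear price-impact assumption and every number here are mine, purely for illustration:

```python
EDGE = 1.00       # mispricing a lone trader could capture, per share
IMPACT = 0.02     # price move caused by each additional trader's orders
COST = 0.05       # transaction cost per share

def profit_per_trader(n_traders):
    """Each trader's take when n_traders all act on the same signal:
    their combined buying pushes the price toward fair value before
    any one of them can capture the full edge."""
    remaining_edge = max(0.0, EDGE - IMPACT * n_traders)
    return remaining_edge - COST

for n in (1, 10, 25, 50):
    print(n, round(profit_per_trader(n), 2))
# With these toy numbers the edge decays as the strategy crowds,
# and past roughly 48 traders it turns negative: the model has
# stopped working precisely because people are using it.
```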
So we're left with an interesting market paradox. Randomness in the market allows hedge fund modelers to exploit random price differences and make money. But the more modelers do that, the less randomness there is. And because the assumptions behind the models are no longer true, and their outputs are now coupled to unrelated prices, the entire system can become unstable. In financial markets, unstable is bad, resulting in boundary-condition events like 1987's Black Monday, when program trading and portfolio insurance created a similar, non-random instability.
So what does this have to do with today's markets? I wrote recently about the trend toward "dark pools" of secret trading information that provide competitive advantage. Allowing trading where prices are no longer visible could break the feedback cycle created by automated quantitative models. But more likely, it would simply create new risks for financial markets. The only difference with dark pools would be that articles like "The Blow-Up" would be much rarer. And that would surely be a loss.