Interview With A Superforecaster

by: David Pinsen

Summary

Psychologist Philip Tetlock's "Good Judgment Project" of amateur forecasters surprised the US Intelligence community with the accuracy of their predictions. The top 2% of those forecasters were designated "Superforecasters".

We interview a leading Superforecaster, Michael W. Story, who discusses the methodology he and other Superforecasters use when making their predictions while taking into account "wildcards", or black swans.

Story also shares his view on how investors can apply the Superforecasting methodology to their own research.

Crystal Balls Don't Work; Superforecasters Do

It's tough to make predictions, especially about the future.

- Yogi Berra

In a recent article ("Maybe You Should Panic"), I mentioned psychologist Philip Tetlock, whose book Superforecasting had just been reviewed in the Financial Times ("The Vision Thing"). In his review, Stephen Cave summarized the origin and success of Tetlock's Superforecasters:

Superforecasting is based on Tetlock's most recent study, the Good Judgement Project, in which he and colleagues recruited more than 20,000 people to make 500 predictions on questions ranging from the likelihood of political protests in Russia to the course of the Nikkei index. Tetlock's team was one of five competing in a competition sponsored by IARPA, the research and innovation arm of the US intelligence community, which also set the questions. But Tetlock's recruits were so much more successful that IARPA dropped the other teams two years into the four-year contest.

Those 20,000 individuals Tetlock recruited were all good forecasters, but only the most accurate among them were designated Superforecasters, and their predictions were given extra weighting. These Superforecasters were more accurate than intelligence community professionals; as a result, Superforecasting has drawn praise from senior investment industry executives such as UBS (NYSE:UBS) Global Head of Research Juan Luis Perez, and Michael Mauboussin, Head of Global Financial Strategies at Credit Suisse (NYSE:CS).

I was fortunate recently to interview one of the leading Superforecasters in the UK, Michael W. Story.

Given the investment industry praise mentioned above, I wasn't surprised to read that Superforecasters currently have a contract with UBS. Due to that contract, and ongoing negotiations with other firms, Mr. Story wasn't able to offer predictions about specific investments in this interview, but he did discuss how Superforecasters make and revise their predictions, and how individual investors might apply that.

Interview With A Superforecaster

David Pinsen: Thanks for offering to share some insights into Superforecasting, Michael. You're a social policy researcher based in London, and you recently organized a conference there for your fellow Superforecasters. Can you tell us a little more about your background and how you got involved in Superforecasting?

Michael Story: My background involves quite a bit of jumping around both geographically and otherwise, which I think is fairly typical for Supers, at least the ones I have met. I grew up travelling the world, moving every few years with my family: born in England, moving to Texas before age 1, then living in Kenya, Brussels, New York City, Paris, London, and by the time I was doing my undergrad, Moscow, Russia. I came back to London after that and have been here for 10 years or so; I wanted to try out the settled life for a while - but I ended up taking a job making documentary films, which meant a fair bit of travel and irregular hours (old habits die hard!).

So I'd been here in London a few years but still had that international perspective, was studying in grad school at the London School of Economics, thinking a lot about policy research stuff and looking for a cool down activity in a rather different domain when I was forwarded an email entitled: 'Philip Tetlock requests your help'. It contained the following line, "If you're willing to experiment with ways to improve your forecasting ability and if being part of cutting-edge scientific research appeals to you, then we, the Good Judgment Project, want your help." I read on, googled IARPA, and was fascinated - who wouldn't click?

At first, the project used simple play money prediction markets - you got the question (at that time I remember looking to North Korea, Syria and Iraq most frequently, so not very different from today), made your forecast and bet some of your supply of 'cash' on the answer against other people in the market. It took a fair bit of time investment - even though we were being compensated $250 per year for our efforts, that didn't exactly cover the commitment required to succeed. The competition side really appealed to me, though, and I began to think of it as a kind of computer game that occasionally sent me an Amazon voucher. What kept me coming back was the purity of the environment: we were scored solely on how accurate our forecasts were - not whether they could be justified according to weltanschauung or anything else. If you wanted to bet on nothing more than your hunches, you could - though I don't always recommend it!
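The "purity of the environment" Story describes came from scoring forecasters purely on accuracy. The Good Judgment Project scored forecasts with Brier scores, the mean squared error between stated probabilities and what actually happened (this detail comes from Tetlock's book, not the interview itself). A minimal sketch of how such scoring works on binary questions:

```python
def brier_score(forecasts, outcomes):
    """Brier score for binary questions.

    forecasts: probabilities assigned to 'yes' (0.0 to 1.0)
    outcomes:  1 if the event happened, 0 if it did not
    Lower is better: 0.0 is perfect, a hedged 50% forecast scores 0.5,
    and a maximally confident wrong forecast scores 2.0.
    """
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        # Original two-outcome Brier convention: sum squared error
        # over both the 'yes' and 'no' branches of the question.
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)
```

Under this rule there is no credit for a forecast being well argued, only for the probability landing close to reality - which is exactly the incentive Story says kept him coming back.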

As the project went on, they pulled the top 2% highest scoring forecasters into a special section - the Superforecasters. To get in, you had to be at the top of the league table and stay there - no getting lucky on just a few crazy bets! As a group, the 'supers' ended up having a bit more contact with each other, and we were invited out to the US a couple of times for get-togethers to discuss new ways of forecasting and learn more about the research project we'd be contributing to. There was a very unusual atmosphere there; it turns out that forecasting is quite tightly linked to personality, and being in a room with 100 or so people who have vastly different life experiences and live all over the world but who share a similar personality was quite eye-opening. A colleague of mine compared it to the scene at the end of E.T. where the little alien guy goes back to his home planet and reunites with all the other E.T.s.

I enjoyed meeting up in person so much I organised a gathering here. Last October, about 30 Superforecasters from the European group came to London (along with some GJ [Good Judgment project] staff from the US and 70 curious outsiders from universities, banks, government agencies and business consulting outfits) and we repeated the experience here, bringing everyone up to date on the very latest research findings from our project, now the state of the art.

DP: According to the recent review of Philip Tetlock's Superforecasting book in the Financial Times, Superforecasters made predictions about 500 different events as part of the Good Judgment Project. I assume each Superforecaster focuses on a more manageable subset of predictions, since you have to periodically revise them. Could you tell us what a typical number of predictions is? This might be of particular interest to investors, since there may be a parallel in the number of individual securities they can keep track of at one time.

MS: I can only answer from my experience, but I think it's hard to have more than 30 questions on the go at once, and I usually have about 25. Any more than that and you start to lose track, can't react to new information or remember what your past forecast was if you want to update it! There are certainly diminishing returns to attention, and this is a part time thing for most of us - far more efficient to focus your time where you can make the most gains.

DP: Can you give us some color on how you approach your initial predictions and revisions?

MS: There's a fairly standard method which has been developed over the project, with a slightly unwieldy acronym: "CHAMPSKNOW":

Comparison classes should inform your probability estimates.

Hunt for the right information.

Adjust and update your forecasts when appropriate.

Mathematical and statistical models can help.

Post-mortem analyses help you improve.

Select the right questions to answer.

Know the power players.

Norms and protocols of domestic and international institutions matter.

Other perspectives aside from power politics can also inform your forecasts.

Wildcards, accidents and black swans can catch you off-guard if you don't consider the risk of irreducible uncertainty.

It's a good starting point for forecasting and covers the basics - beyond that it's a matter of doing your research. So for a question like we had last year about whether Bashar al-Assad would still lead the Syrian government, you might start by looking at C - Comparison classes. Instead of asking 'How bad a guy is Assad, how much does he deserve to be got rid of?' ask: 'What's the normal longevity of a Middle Eastern dictator? What about Syria specifically?' Al-Assad's father ruled for 29 years until his death, so your starting point ought to give him a fair bit of time in credit.

Making a basic model giving you the average length of a Middle East dictator and adjusting it from there will get you pretty close to the right answer, but you could jump down to look at local power players - who wants Assad dead or deposed? Who wants him to stick around? How powerful are they? Who is likely to prevail? Add to that the Wildcard chance of him being killed by an aggrieved bodyguard or family member, and you'd be pretty close.
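Story's comparison-class approach - start from a base rate, then adjust for case-specific factors and wildcards - can be sketched numerically. The figures below are purely illustrative assumptions, not numbers from the interview:

```python
# Illustrative comparison-class estimate: probability a leader survives
# one more year in power. All numbers here are hypothetical.

def survives_one_year(avg_tenure_years, adjustment=0.0):
    """Base rate from a comparison class, plus a case-specific adjustment.

    Treat losing power as a constant-hazard process: if the comparison
    class averages T years in power, the annual hazard is roughly 1/T,
    so the one-year survival base rate is 1 - 1/T.
    """
    base = 1.0 - 1.0 / avg_tenure_years
    # Adjust for local power players, civil war, wildcards, etc.,
    # clamping the result to a valid probability.
    return min(1.0, max(0.0, base + adjustment))

# Hypothetical comparison class: long-tenured regional rulers averaging
# ~20 years in power gives a 0.95 base rate of surviving the next year.
p_base = survives_one_year(20.0)
# Downgrade for an ongoing civil war and the wildcard risk Story mentions.
p_adjusted = survives_one_year(20.0, adjustment=-0.10)
```

The point of the sketch is the order of operations: the "outside view" base rate comes first, and the specifics of the case only move the estimate away from it afterwards.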

DP: What was the typical background of the Superforecasters at your conference? Were any of them professional investors? Have you been contacted by financial institutions given the success of Superforecasting?

MS: There are a few professional investors among the supers - though many media reports portray us as 'regular Joes who beat the experts', in practice a lot of our group are experts of one kind or another - half the current crop of supers have PhDs in a range of fields, some quite closely related to geopolitical forecasting like economics or political science and many a fair way away: physics, engineering and so on. We have a contract now with UBS bank and others are on the way, but I'm not able to divulge anything beyond that.

DP: Are there any general tips on forecasting you would offer to investors?

MS: Following CHAMPSKNOW is a pretty reasonable guide, but specifically for financial questions, you need to be more wary - it's easy to get overwhelmed, or over-committed to one investment. As everyone knows, index funds are a very hard target to beat but if you have a specific geopolitical forecast and can translate that into a financial decision, then you're in business - though that's easier said than done!

Something I'd like to see but don't expect any time soon is a mass participation prediction market - I think the regulatory burden and investor wariness makes such a thing too difficult at least for now, but it would give anybody a real prospect of translating good forecasts into financial rewards.

DP: Thanks again for this, Michael.

MS: Thank you!

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.