
Friday, 1 June 2018

I can make one confident prediction: my forecasts will fail

Tim Harford in The Financial Times 

I am not one of those clever people who claims to have seen the 2008 financial crisis coming, but by this time 10 years ago I could see that the fallout was going to be bad. Banking crises are always damaging, and this was a big one. The depth of the recession and the long-lasting hit to productivity came as no surprise to me. I knew it would happen. 


Or did I? This is the story I tell myself, but if I am honest I do not really know. I did not keep a diary, and so must rely on my memory — which, it turns out, is not a reliable servant. 

In 1972, the psychologists Baruch Fischhoff and Ruth Beyth conducted a survey in which they asked for predictions about Richard Nixon’s imminent presidential visits to China and the Soviet Union. How likely was it that Nixon and Mao Zedong would meet? What were the chances that the US would grant diplomatic recognition to China? Professors Fischhoff and Beyth wanted to know how people would later remember their forecasts. Since their subjects had taken the unusual step of writing down a specific probability for each of 15 outcomes, one might have hoped for accuracy. But no — the subjects flattered themselves hopelessly. The Fischhoff-Beyth paper was titled, “I knew it would happen”. 

This is a reminder of what a difficult task we face when we try to make big-picture macroeconomic and geopolitical forecasts. To start with, the world is a complicated place, which makes predictions challenging. For many of the subjects that interest us, there is a substantial delay between the forecast and the outcome, and this delayed feedback makes it harder to learn from our successes and failures. Even worse, as Profs Fischhoff and Beyth discovered, we systematically misremember what we once believed. 

Small wonder that forecasters turn to computers for help. We have also known for a long time — since work in the 1950s by the late psychologist Paul Meehl — that simple statistical rules often outperform expert intuition. Meehl’s initial work focused on clinical cases — for example, faced with a patient suffering chest pains, could a two- or three-point checklist beat the judgment of an expert doctor? The experts did not fare well. However, Meehl’s rules, like more modern machine learning systems, require data to work. It is all very well for Amazon to forecast what impact a price drop may have on the demand for a book — and some of the most successful hedge funds use algorithmically driven strategies — but trying to forecast the chance of Italy leaving the eurozone, or Donald Trump’s impeachment, is not as simple. Faced with an unprecedented situation, machines are no better than we are. And they may be worse. 

Much of what we know about forecasting in a complex world, we know from the research of the psychologist Philip Tetlock. In the 1980s, Prof Tetlock began to build on the Fischhoff-Beyth research by soliciting specific and often long-term forecasts from a wide variety of forecasters — initially hundreds. The early results, described in Prof Tetlock’s book Expert Political Judgment, were not encouraging. Yet his idea of evaluating large numbers of forecasters over an extended period of time has blossomed, and some successful forecasters have emerged. 

The latest step in this research is a “Hybrid Forecasting Tournament”, sponsored by the US Intelligence Advanced Research Projects Activity, designed to explore ways in which humans and machine learning systems can co-operate to produce better forecasts. We await the results. If the computers do produce some insight, it may be because they can tap into data that we could hardly have imagined using before. Satellite imaging can now track the growth of crops or the stockpiling of commodities such as oil. Computers can guess at human sentiment by analysing web searches for terms such as “jobseeker’s allowance”, mentions of “recession” in news stories, and positive emotions in tweets. 

And there are stranger correlations, too. A study by the economists Kasey Buckles, Daniel Hungerman and Steven Lugauer showed that in the US the rate of conceptions falls a few quarters before an economic downturn begins. Conceptions, in turn, might be inferred by computers tracking sales of pregnancy tests and folic acid. 

Back in 1991, a psychologist named Harold Zullow published research suggesting that the emotional content of songs in the Billboard Hot 100 chart could predict recessions. Hits containing “pessimistic rumination” (“I heard it through the grapevine / Not much longer would you be mine”) tended to predict an economic downturn. 

His successor is a young economist named Hisam Sabouni, who reckons that a computer-aided analysis of Spotify streaming gives him an edge in forecasting stock market movements and consumer sentiment. Will any of this prove useful for forecasting significant economic and political events? Perhaps. But for now, here is an easy way to use a computer to help you forecast: open up a spreadsheet, note down what you believe today, and regularly revisit and reflect. The simplest forecasting tip of all is to keep score.
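That closing advice is easy to automate. Below is a minimal Python sketch of a scored forecast log, assuming each belief is written down as a probability and later marked true or false; the record fields and the function names are illustrative choices rather than anything the column prescribes, though the Brier score it computes is a standard way to keep score of probabilistic forecasts.

```python
# A minimal sketch of a scored forecast log: write down a probability today,
# resolve it later, and compute a Brier score (0 is perfect; 0.25 is what
# always answering "50/50" would earn). Field and function names are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ForecastRecord:
    question: str                    # e.g. "Will X happen by the end of 2019?"
    probability: float               # stated chance, between 0 and 1
    made_on: date                    # when the forecast was written down
    outcome: Optional[bool] = None   # fill in later: True if it happened

def brier_score(records: List[ForecastRecord]) -> float:
    """Mean squared error between stated probabilities and actual outcomes."""
    resolved = [r for r in records if r.outcome is not None]
    if not resolved:
        raise ValueError("No resolved forecasts to score yet.")
    return sum((r.probability - float(r.outcome)) ** 2 for r in resolved) / len(resolved)

# Usage: log beliefs now, then revisit, resolve and score them.
log = [
    ForecastRecord("Italy leaves the eurozone by 2020", 0.05, date(2018, 6, 1), outcome=False),
    ForecastRecord("US enters recession by mid-2019", 0.30, date(2018, 6, 1), outcome=False),
]
print(f"Brier score so far: {brier_score(log):.3f}")
```

A spreadsheet works just as well; the point is that the forecast and its score are written down before memory can quietly revise them.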
