
Showing posts with label forecasting.

Tuesday, 29 August 2023

A level Economics: How to Improve Economic Forecasting

 Nicholas Gruen in The FT 


Today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. Economic forecasts, on the other hand, aren’t noticeably better. Former Federal Reserve chair Ben Bernanke should ponder this in his forthcoming review of the Bank of England’s forecasting. 

There’s growing evidence that we can improve. But myopia and complacency get in the way. Myopia is an issue because economists think technical expertise is the essence of good forecasting when, actually, two things matter more: forecasters’ understanding of the limits of their expertise and their judgment in handling those limits. 

Enter Philip Tetlock, whose 2005 book on geopolitical forecasting showed how little experts added to forecasting done by informed non-experts. To compare forecasts between the two groups, he forced participants to drop their vague weasel words — “probably”, “can’t be ruled out” — and specify exactly what they were forecasting and with what probability.  

That started sorting the sheep from the goats. The simple “point forecasts” provided by economists — such as “growth will be 3.0 per cent” — are doubly unhelpful in this regard. They’re silent about what success looks like. If I have forecast 3.0 per cent growth and actual growth comes in at 3.2 per cent — did I succeed or fail? Such predictions also don’t tell us how confident the forecaster is. 

By contrast, “a 70 per cent chance of rain” specifies a clear event with a precise estimation of the weather forecaster’s confidence. Having rigorously specified the rules of the game, Tetlock has since shown how what he calls “superforecasting” is possible and how diverse teams of superforecasters do even better.  

What qualities does Tetlock see in superforecasters? As well as mastering necessary formal techniques, they’re open-minded, careful, curious and self-critical — in other words, they’re not complacent. Aware, like Socrates, of how little they know, they’re constantly seeking to learn — from unfolding events and from colleagues. 

Superforecasters actively resist the pull to groupthink, which is never far away in most organisations — or indeed, in the profession of economics as a whole, as practitioners compensate for their ignorance by keeping close to the herd. The global financial crisis is just one example of an event that economists collectively failed to warn the world about. 

There are just five pages referencing superforecasting on the entire Bank of England website — though that’s more than other central banks. 

Bernanke could recommend that we finally set about the search for economic superforecasters. He should also propose that the BoE lead the world by open sourcing economic forecasting.  

In this scenario, all models used would be released fully documented and a “prediction tournament” would focus on the key forecasts. Outsiders would be encouraged to enter the tournament — offering their own forecasts, their own models and their own reconfiguration or re-parameterisation of the BoE’s models. Prizes could be offered for the best teams and the best schools and universities.  

The BoE’s forecasting team(s) should also compete. The BoE could then release its official forecasts using the work it has the most confidence in, whether it is that of its own team(s), outsiders or some hybrid option. Over time, we’d be able to identify which ones were consistently better.  

Using this formula, I predict that the Bank of England’s official forecasts would find their way towards the top of the class — in the UK, and the world.

Saturday, 24 November 2018

Why good forecasters become better people

Tim Harford in The FT

So, what’s going to happen next, eh? Hard to say: the future has a lotta ins, a lotta outs, a lotta what-have-yous. 

Perhaps I should be more willing to make bold forecasts. I see my peers forecasting all kinds of things with a confidence that only seems to add to their credibility. Bad forecasts are usually forgotten and you can milk a spectacular success for years. 

Yet forecasts are the junk food of political and economic analysis: tasty to consume but neither satisfying nor healthy in the long run. So why should they be any more wholesome to produce? The answer, it seems, is that those who habitually make forecasts may turn into better people. That is the conclusion suggested by a research paper from three psychologists, Barbara Mellers, Philip Tetlock and Hal Arkes. 

Prof Tetlock won attention for his 2005 book Expert Political Judgment, which used the simple method of asking a few hundred experts to make specific, time-limited forecasts such as “Will Italy’s government debt/GDP ratio be between 70 and 90 per cent in December 1998?” or “Will Saddam Hussein be the president of Iraq on Dec 31 2002?” 

It is only a modest oversimplification to summarise Prof Tetlock’s results using the late William Goldman’s aphorism: nobody knows anything.

Yet Profs Mellers and Tetlock, together with Don Moore, then ran a larger forecasting tournament and discovered that a small number of people seem to be able to forecast better than the rest of us. These so-called superforecasters are not necessarily subject-matter experts, but they tend to be proactively open-minded, always looking for contrary evidence or opinions. 

There are certain mental virtues, then, that make people better forecasters. The new research turns the question around: might trying to become a better forecaster strengthen such mental virtues? In particular, might it make us less polarised in our political views? 

Of course there is nothing particularly virtuous about many of the forecasts we make, which are often pure bluff, attention-seeking or cheerleading. “We are going to make America so great again” (Donald Trump, February 2016); “There will be no downside to Brexit, only a considerable upside” (David Davis, October 2016); “If this exit poll is right . . . I will publicly eat my hat” (Paddy Ashdown, May 2015). These may all be statements about the future, but it seems reasonable to say that they were never really intended as forecasts. 

A forecasting tournament, on the other hand, rewards a good-faith effort at getting the answer right. A serious forecaster will soon be confronted by the gaps in his or her knowledge. In 2002, psychologists Leonid Rozenblit and Frank Keil coined the phrase “the illusion of explanatory depth”. If you ask people to explain how a flush lavatory actually works (or a helicopter, or a sewing machine) they will quickly find it is hard to explain beyond hand-waving. Most parents discover this when faced by questions from curious children. 

Yet subsequent work has shown that asking people to explain how the US Affordable Care Act or the European Single Market work prompts some humility and, with it, political moderation. It seems plausible that thoughtful forecasting has a similar effect. 

Good forecasters are obliged to consider different scenarios. Few prospects in a forecasting tournament are certainties. A forecaster may believe that parliament is likely to reject the deal the UK has negotiated with the EU, but he or she must seriously evaluate the alternative. Under which circumstances might parliament accept the deal instead? Again, pondering alternative scenarios and viewpoints has been shown to reduce our natural overconfidence. 

My own experience with scenario planning — a very different type of futurology than a forecasting tournament — suggests another benefit of exploring the future. If the issue at hand is contentious, it can feel safer and less confrontational to talk about future possibilities than to argue about the present. 

It may not be so surprising, then, that Profs Mellers, Tetlock and Arkes found that forecasting reduces political polarisation. They recruited people to participate in a multi-month forecasting tournament, then randomly assigned some to the tournament and some to a non-forecasting control group. (A sample question: “Will President Trump announce that the US will pull out of the Trans-Pacific Partnership during the first 100 days of his administration?”) 

At the end of the experiment, the forecasters had moderated their views on a variety of policy domains. They also tempered their inclination to presume the opposite side was packed with extremists. Forecasting, it seems, is an antidote to political tribalism. 

Of course, centrism is not always a virtue and, if forecasting tournaments are a cure for tribalism, then they are a course of treatment that lasts months. Yet the research is a reminder that not all forecasters are blowhards and bluffers. Thinking seriously about the future requires keeping an open mind, understanding what you don’t know, and seeing things as others see them. If the end result is a good forecast, perhaps we should see that as the icing on the cake.

Friday, 1 June 2018

I can make one confident prediction: my forecasts will fail

Tim Harford in The Financial Times 

I am not one of those clever people who claims to have seen the 2008 financial crisis coming, but by this time 10 years ago I could see that the fallout was going to be bad. Banking crises are always damaging, and this was a big one. The depth of the recession and the long-lasting hit to productivity came as no surprise to me. I knew it would happen. 


Or did I? This is the story I tell myself, but if I am honest I do not really know. I did not keep a diary, and so must rely on my memory — which, it turns out, is not a reliable servant. 

In 1972, the psychologists Baruch Fischhoff and Ruth Beyth conducted a survey in which they asked for predictions about Richard Nixon’s imminent presidential visit to China and Russia. How likely was it that Nixon and Mao Zedong would meet? What were the chances that the US would grant diplomatic recognition to China? Professors Fischhoff and Beyth wanted to know how people would later remember their forecasts. Since their subjects had taken the unusual step of writing down a specific probability for each of 15 outcomes, one might have hoped for accuracy. But no — the subjects flattered themselves hopelessly. The Fischhoff-Beyth paper was titled, “I knew it would happen”. 

This is a reminder of what a difficult task we face when we try to make big-picture macroeconomic and geopolitical forecasts. To start with, the world is a complicated place, which makes predictions challenging. For many of the subjects that interest us, there is a substantial delay between the forecast and the outcome, and this delayed feedback makes it harder to learn from our successes and failures. Even worse, as Profs Fischhoff and Beyth discovered, we systematically misremember what we once believed. 

Small wonder that forecasters turn to computers for help. We have also known for a long time — since work in the 1950s by the late psychologist Paul Meehl — that simple statistical rules often outperform expert intuition. Meehl’s initial work focused on clinical cases — for example, faced with a patient suffering chest pains, could a two- or three-point checklist beat the judgment of an expert doctor? The experts did not fare well. However, Meehl’s rules, like more modern machine learning systems, require data to work. It is all very well for Amazon to forecast what impact a price drop may have on the demand for a book — and some of the most successful hedge funds use algorithmically driven strategies — but trying to forecast the chance of Italy leaving the eurozone, or Donald Trump’s impeachment, is not as simple. Faced with an unprecedented situation, machines are no better than we are. And they may be worse. 

Much of what we know about forecasting in a complex world, we know from the research of the psychologist Philip Tetlock. In the 1980s, Prof Tetlock began to build on the Fischhoff-Beyth research by soliciting specific and often long-term forecasts from a wide variety of forecasters — initially hundreds. The early results, described in Prof Tetlock’s book Expert Political Judgment, were not encouraging. Yet his idea of evaluating large numbers of forecasters over an extended period of time has blossomed, and some successful forecasters have emerged. 

The latest step in this research is a “Hybrid Forecasting Tournament”, sponsored by the US Intelligence Advanced Research Projects Activity, designed to explore ways in which humans and machine learning systems can co-operate to produce better forecasts. We await the results. If the computers do produce some insight, it may be because they can tap into data that we could hardly have imagined using before. Satellite imaging can now track the growth of crops or the stockpiling of commodities such as oil. Computers can guess at human sentiment by analysing web searches for terms such as “job seekers allowance”, mentions of “recession” in news stories, and positive emotions in tweets. 

And there are stranger correlations, too. A study by economists Kasey Buckles, Daniel Hungerman and Steven Lugauer showed that a few quarters before an economic downturn in the US, the rate of conceptions also falls. Conceptions themselves may be deducible by computers tracking sales of pregnancy tests and folic acid. 

Back in 1991, a psychologist named Harold Zullow published research suggesting that the emotional content of songs in the Billboard Hot 100 chart could predict recessions. Hits containing “pessimistic rumination” (“I heard it through the grapevine / Not much longer would you be mine”) tended to predict an economic downturn. 

His successor is a young economist named Hisam Sabouni, who reckons that a computer-aided analysis of Spotify streaming gives him an edge in forecasting stock market movements and consumer sentiment. Will any of this prove useful for forecasting significant economic and political events? Perhaps. But for now, here is an easy way to use a computer to help you forecast: open up a spreadsheet, note down what you believe today, and regularly revisit and reflect. The simplest forecasting tip of all is to keep score.
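
As a concrete illustration of that last tip, here is a minimal sketch in Python of what a scored forecast log might look like. The Brier score used to "keep score" is my choice of scoring rule, not something the article specifies, and the example questions are made up; a dated spreadsheet would do the same job.

```python
# A minimal "keep score" forecast log, sketched in code rather than a
# spreadsheet. The Brier score is an assumption on my part: the article only
# says to write down what you believe and revisit it, not how to score it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Forecast:
    question: str                   # e.g. "Recession in the next 12 months?"
    probability: float              # stated chance the event happens, 0 to 1
    outcome: Optional[bool] = None  # filled in later, once we know

def brier_score(forecasts):
    """Mean squared gap between stated probabilities and what happened.
    0.0 is perfect; always saying 50% scores 0.25; 1.0 is as wrong as possible."""
    resolved = [f for f in forecasts if f.outcome is not None]
    if not resolved:
        return None
    return sum((f.probability - float(f.outcome)) ** 2 for f in resolved) / len(resolved)

# Note down what you believe today...
log = [
    Forecast("Recession in the next 12 months?", 0.30),
    Forecast("Italy leaves the eurozone by 2020?", 0.10),
]
# ...then revisit, record what actually happened, and check the score.
log[0].outcome = False
log[1].outcome = False
print(brier_score(log))  # 0.05: low, because both events were called unlikely and neither happened
```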

Friday, 10 August 2012

Predictions are hard, especially about the future



[Image: asteroid hitting Earth. Caption: "Tomorrow's weather: changeable, with a 66 per cent chance of extinction events"]
Yesterday I read a startling-ish statistic. A Twitter account called UberFacts, which has around two and a half million followers, solemnly informed us that there is a 95 per cent chance that humans will be extinct in the next 9,000 years. Now, it's from Twitter, so it's probably nonsense. But it got me thinking. What does it even mean?
Obviously, it means that we have a one in 20 chance of surviving to the 2,280th Olympiad, held on RoboColony 46 in the balmy Europan summer of 11012AD. But how can they possibly know that? Have they perhaps got access to other universes and a time machine, and gone forward to a thousand 11012ADs in a thousand alternate realities, and noted with sadness that only 50 such timelines contained humans?
One imagines not, or someone would have said. What they're doing is offering a prediction: if we were to run the universe 20 times, we'd probably survive once. So how might they arrive at that figure? More generally, what does it mean when sports commentators say "Sunderland have a 65 per cent chance of beating Swansea", or financial journalists say "There's an 80 per cent chance that Greece will leave the euro by the start of 2013"?
I don't have any idea how UberFacts arrived at their 95 per cent figure, because they didn't give a source. Someone else suggested it came from the Stern Review into the economic effect of climate change: I had a look around, and Stern in fact assumed a 10 per cent chance of human extinction in the next century. If we extrapolate that to a 9,000-year timescale, that's 90 centuries: 0.9 (the likelihood of not going extinct per century) to the power 90 = 0.00008, or a mere 0.008 per cent chance of survival. UberFacts were being extremely optimistic.
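For anyone who wants to check that extrapolation, here is the same arithmetic as a short Python sketch; the 10 per cent per-century figure is this article's reading of Stern, not an established number.

```python
# The extrapolation above, spelled out. A 10% chance of extinction per century
# implies a 0.9 chance of surviving each century; over 9,000 years the 90
# per-century survival chances compound.
p_survive_century = 0.9
centuries = 90
p_survive = p_survive_century ** centuries
print(p_survive)        # ~0.00008, i.e. roughly a 0.008% chance of survival
print(1 - p_survive)    # ~0.99992, the implied chance of extinction
```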
But we've just pushed our question back a stage. We know how we got to our 95 per cent figure (sort of). But how did we get that 0.9 in the first place?
Presumably we assess the ways we could get killed, or kill ourselves. In the journal Risk Analysis, Jason Matheny put forward a few possibilities, including nuclear war, asteroid strikes, rogue microbes, climate change, or a physics experiment that goes wrong, "creating a 'true vacuum' or strangelets that destroy the planet".
Some of them are semi-predictable. It's not impossible to put a figure on the probability of a fatal impact with a celestial object. If you know how many large objects are wandering around the relevant bits of the solar system, you could put an estimate on the likelihood of one hitting us: a Nasa scientist put it at roughly one impact per 100 million years. You can build a predictive physical model, with known uncertainties, and come up with a probability that is not meaningless. Climate models are an attempt to do something similar, but the sheer number of variables involved means that even the IPCC are unwilling to go past statements such as it is "likely" that, for instance, sea levels will rise, or "very likely" that temperatures will continue to go up: the odds of "total extinction" are not given. And as for the odds of nuclear war or accidentally creating a black hole, there's no model that can even pretend to be helpful.
That 0.9-chance-of-survival-per-century is not a mathematically arrived-at probability, but a guess (and, as a commenter has pointed out, the implied 10 per cent extinction risk per century looks rather high, given that we've survived 500 centuries or so without trouble so far). You can call it something more flattering – a working assumption; an estimate – but it's a guess. And, obviously, the same applies to all the others: financial journalists aren't laboriously working through all the possible universes in which Greece does and doesn't leave the euro; sports commentators haven't created mathematical models of the Sunderland and Swansea players and run them through a simulation, with carefully defined error bars. They're expressing a guess, based on their own knowledge, and giving it a percentage figure to put a label on how confident they feel.
Is that the death knell for all percentage-expressed figures? No: there is a way of finding out whether a pundit's prediction is meaningful. You can look at an expert's predictions over time, and see whether her "70 per cent likelies" come out 70 per cent of the time, and whether her "definites" come up every time. Luckily, someone's done this: Philip Tetlock, a researcher at the University of California's business school. He has dedicated 25 years to doing precisely that, examining 284 expert commentators in dozens of fields, assessing their predictions over decades. You can read about Tetlock's work in Dan Gardner's fantastic book Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway, but here's the top line: the experts he looked at would, on average, have been beaten by (in Tetlock's words) "a dart-throwing chimpanzee" — ie they were worse than random guesses.
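For anyone who wants to try that kind of check themselves, here is a rough Python sketch of it. The function name and the 10-percentage-point buckets are my own illustrative choices, not Tetlock's method, and the sample predictions are invented.

```python
# A rough sketch of the calibration check described above: do a pundit's
# "70 per cent likelies" come out about 70 per cent of the time?
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_probability, event_happened) pairs."""
    buckets = defaultdict(list)
    for prob, happened in predictions:
        bucket = min(int(prob * 10), 9)   # 0.9-1.0 shares the top bucket
        buckets[bucket].append(happened)
    for bucket in sorted(buckets):
        outcomes = buckets[bucket]
        observed = sum(outcomes) / len(outcomes)
        print(f"said {bucket * 10}-{bucket * 10 + 10}%: happened "
              f"{observed:.0%} of the time (n={len(outcomes)})")

# A well-calibrated pundit's "70%" row should show something close to 70%.
calibration_table([(0.7, True), (0.7, True), (0.7, False),
                   (0.9, True), (0.9, True), (0.2, False)])
```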
What he also found, however, was that not all experts were the same. The ones who did worse than the imaginary chimp tended to be the Big Idea thinkers: the ones who have a clear theory about how the world works, and apply it to all situations. Those thinkers tended, paradoxically, to be the most confident. The ones who did (slightly) better than average were the ones who had no clear template, no grand theoretical vision; who accepted the world as complex and uncertain and doubted the ability of anyone, including themselves, to be able to predict it. It's a strange thing to learn: the people who are most certain of the rightness of their predictions are very likely to be wrong; the people who are most likely to be right are the ones who will tell you they probably aren't. This applied equally whether someone was Right-wing or Left-wing, a journalist or an academic, a doomsayer or an optimist. When it comes to predicting the future, the best lack all conviction, while the worst are full of passionate intensity.
So what does this tell us about UberFacts's supremely confident but infuriatingly unsourced "95 per cent" claim? Essentially: it's nonsense. It might be possible to make a reasonable guess at the chance of extinction per century, if it's done cautiously. But extending it to 9,000 years is simply taking the (considerable) likelihood that it's wrong and raising it to the power 90. It is a guess, and a meaningless one: in 11012AD there will either be humans, or there won't, and our spacefaring descendants won't know whether they've been lucky or not any more than we do.
But there is a wider lesson to learn than that you probably shouldn't trust huge sweeping predictions on Twitter. It's that you shouldn't trust sweeping predictions at all. Anyone who says that the euro is definitely going to collapse, or that climate change is definitely going to cause wars, or that humanity is 95 per cent doomed, is no doubt utterly sure of themselves, but is also, very probably, guessing.