By Tom Chivers | Science | Last updated: August 10th, 2012
Yesterday I read a startling-ish statistic. A Twitter account called UberFacts, which has around two and a half million followers, solemnly informed us that there is a 95 per cent chance that humans will be extinct in the next 9,000 years. Now, it's from Twitter, so it's probably nonsense. But it got me thinking. What does it even mean?
Obviously, it means that we have a one in 20 chance of surviving to the 2,280th Olympiad, held on RoboColony 46 in the balmy Europan summer of 11012AD. But how can they possibly know that? Have they perhaps got access to other universes and a time machine, and gone forward to a thousand 11012ADs in a thousand alternate realities, and noted with sadness that only 50 such timelines contained humans?
One imagines not, or someone would have said. What they're doing is offering a prediction: if we were to run the universe 20 times, we'd probably survive once. So how might they arrive at that figure? More generally, what does it mean when sports commentators say "Sunderland have a 65 per cent chance of beating Swansea", or financial journalists say "There's an 80 per cent chance that Greece will leave the euro by the start of 2013"?
I don't have any idea how UberFacts arrived at their 95 per cent figure, because they didn't give a source. Someone else suggested it came from the Stern Review into the economics of climate change: I had a look around, and Stern in fact assumed a 10 per cent chance of human extinction in the next century. If we extrapolate that to a 9,000-year timescale, that's 90 centuries: 0.9 (the likelihood of not going extinct in any given century) raised to the power of 90 is about 0.00008, or a mere 0.008 per cent chance of survival. UberFacts were being extremely optimistic.
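To see how that arithmetic works, here's a minimal Python sketch of the extrapolation; the 10 per cent per-century extinction figure is the Stern assumption quoted above, and the (large) simplifying assumption is that each century's risk is independent of the last.

```python
# Minimal sketch: extrapolating a per-century survival probability over 9,000 years.
# Assumes (as quoted above from Stern) a 10 per cent chance of extinction per century,
# and - a big simplification - that each century's risk is independent of the last.
p_survive_century = 1 - 0.10      # 0.9 chance of not going extinct in any given century
centuries = 9_000 // 100          # 90 centuries

p_survive = p_survive_century ** centuries
print(f"Chance of surviving to 11012AD: {p_survive:.5f} ({p_survive:.3%})")
# prints roughly 0.00008 (about 0.008 per cent)
```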
But we've just pushed our question back a stage. We know how we got to our 95 per cent figure (sort of). But how did we get that 0.9 in the first place?
Presumably we assess the ways we could get killed, or kill ourselves. In the journal Risk Analysis, Jason Matheny put forward a few possibilities, including nuclear war, asteroid strikes, rogue microbes, climate change, or a physics experiment that goes wrong, "creating a 'true vacuum' or strangelets that destroy the planet".
Some of them are semi-predictable. It's not impossible to put a figure on the possibility of a fatal impact with an asteroid or comet. If you know how many large objects are wandering around the relevant bits of the solar system, you can estimate the likelihood of one hitting us: a Nasa scientist put it at roughly one impact per 100 million years. You can build a predictive physical model, with known uncertainties, and come up with a probability that is not meaningless. Climate models are an attempt to do something similar, but the sheer number of variables involved means that even the IPCC are unwilling to go beyond statements that it is "likely" that, for instance, sea levels will rise, or "very likely" that temperatures will continue to go up: the odds of "total extinction" are not given. And as for the odds of nuclear war or accidentally creating a black hole, there's no model that can even pretend to be helpful.
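To give a flavour of how a rate like "one impact per 100 million years" becomes a probability, here's a minimal sketch; the Poisson model (impacts arriving independently at a constant average rate) is my illustrative assumption, not Nasa's actual method.

```python
import math

# Illustrative assumption: impacts arrive as a Poisson process,
# at an average rate of one per 100 million years (the Nasa figure quoted above).
rate_per_year = 1 / 100_000_000

def prob_at_least_one_impact(years: float) -> float:
    """Chance of at least one impact within the given window, under the Poisson assumption."""
    return 1 - math.exp(-rate_per_year * years)

print(f"Next century:     {prob_at_least_one_impact(100):.6%}")    # about 0.0001 per cent
print(f"Next 9,000 years: {prob_at_least_one_impact(9_000):.6%}")  # about 0.009 per cent
```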
That 0.9-chance-of-survival-per-century is not a mathematically arrived-at probability, but a guess (and, as a commenter has pointed out, rather a pessimistic one, since we've survived 500 centuries or so without trouble so far). You can call it something more flattering – a working assumption; an estimate – but it's a guess. And, obviously, the same applies to all the others: financial journalists aren't laboriously working through all the possible universes in which Greece does and doesn't leave the euro; sports commentators haven't created mathematical models of the Sunderland and Swansea players and run them through a simulation, with carefully defined error bars. They're expressing a guess, based on their own knowledge, and giving it a percentage figure to put a label on how confident they feel.
Is that the death knell for all percentage-expressed figures? No: there is a way of finding out whether a pundit's prediction is meaningful. You can look at an expert's predictions over time, and see whether her "70 per cent likelies" come out 70 per cent of the time, and whether her "definites" come up every time. Luckily, someone's done this: Philip Tetlock, a researcher at the University of California's business school. He has dedicated 25 years to doing precisely that, examining 284 expert commentators in dozens of fields, assessing their predictions over decades. You can read about Tetlock's work in Dan Gardner's fantastic book Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway, but here's the top line: the experts he looked at would, on average, have been beaten by (in Tetlock's words) "a dart-throwing chimpanzee" – ie they were worse than random guesses.
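For what it's worth, the calibration check itself is simple to describe in code. Here's a minimal sketch with invented data (not Tetlock's, whose study was far larger and more careful): group a pundit's forecasts by the stated probability, and see how often each group actually came true.

```python
from collections import defaultdict

# Invented forecasts, purely for illustration: (stated probability, did it happen?)
forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (1.0, True), (1.0, False),          # a failed "definite" stands out immediately
    (0.3, False), (0.3, True), (0.3, False),
]

outcomes_by_stated = defaultdict(list)
for stated, happened in forecasts:
    outcomes_by_stated[stated].append(happened)

# A well-calibrated pundit's "70 per cents" should come true about 70 per cent of the time.
for stated in sorted(outcomes_by_stated):
    results = outcomes_by_stated[stated]
    hit_rate = sum(results) / len(results)
    print(f"Said {stated:.0%}: came true {hit_rate:.0%} of the time ({len(results)} forecasts)")
```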
What he also found, however, was that not all experts were the same. The ones who did worse than the imaginary chimp tended to be the Big Idea thinkers: the ones who have a clear theory about how the world works, and apply it to all situations. Those thinkers tended, paradoxically, to be the most confident. The ones who did (slightly) better than the chimp were the ones who had no clear template, no grand theoretical vision; who accepted the world as complex and uncertain, and doubted the ability of anyone, including themselves, to predict it. It's a strange thing to learn: the people who are most certain of the rightness of their predictions are very likely to be wrong; the people who are most likely to be right are the ones who will tell you they probably aren't. This applied equally whether someone was Right-wing or Left-wing, a journalist or an academic, a doomsayer or an optimist. When it comes to predicting the future, the best lack all conviction, while the worst are full of passionate intensity.
So what does this tell us about UberFacts's supremely confident but infuriatingly unsourced "95 per cent" claim? Essentially: it's nonsense. It might be possible to make a reasonable guess at the chance of extinction per century, if it's done cautiously. But extending it to 9,000 years simply compounds the (considerable) likelihood that it's wrong 90 times over. It is a guess, and a meaningless one: in 11012AD there will either be humans, or there won't, and our spacefaring descendants won't know whether they've been lucky or not any more than we do.
But there is a wider lesson to learn than that you probably shouldn't trust huge sweeping predictions on Twitter. It's that you shouldn't trust sweeping predictions at all. Anyone who says that the euro is definitely going to collapse, or that climate change is definitely going to cause wars, or that humanity is 95 per cent doomed, is no doubt utterly sure of themselves, but is also, very probably, guessing.