Saturday 24 November 2018

Why good forecasters become better people

Tim Harford in The FT

So, what’s going to happen next, eh? Hard to say: the future has a lotta ins, a lotta outs, a lotta what-have-yous. 

Perhaps I should be more willing to make bold forecasts. I see my peers forecasting all kinds of things with a confidence that only seems to add to their credibility. Bad forecasts are usually forgotten and you can milk a spectacular success for years. 

Yet forecasts are the junk food of political and economic analysis: tasty to consume but neither satisfying nor healthy in the long run. So why should they be any more wholesome to produce? The answer, it seems, is that those who habitually make forecasts may turn into better people. That is the conclusion suggested by a research paper from three psychologists, Barbara Mellers, Philip Tetlock and Hal Arkes. 

Prof Tetlock won attention for his 2005 book Expert Political Judgment, which used the simple method of asking a few hundred experts to make specific, time-limited forecasts such as “Will Italy’s government debt/GDP ratio be between 70 and 90 per cent in December 1998?” or “Will Saddam Hussein be the president of Iraq on Dec 31 2002?” 

It is only a modest oversimplification to summarise Prof Tetlock’s results using the late William Goldman’s aphorism: nobody knows anything.

Yet Profs Mellers and Tetlock, along with Don Moore, then ran a larger forecasting tournament and discovered that a small number of people seem to be able to forecast better than the rest of us. These so-called superforecasters are not necessarily subject-matter experts, but they tend to be proactively open-minded, always looking for contrary evidence or opinions. 

There are certain mental virtues, then, that make people better forecasters. The new research turns the question around: might trying to become a better forecaster strengthen such mental virtues? In particular, might it make us less polarised in our political views? 

Of course there is nothing particularly virtuous about many of the forecasts we make, which are often pure bluff, attention-seeking or cheerleading. “We are going to make America so great again” (Donald Trump, February 2016); “There will be no downside to Brexit, only a considerable upside” (David Davis, October 2016); “If this exit poll is right . . . I will publicly eat my hat” (Paddy Ashdown, May 2015). These may all be statements about the future, but it seems reasonable to say that they were never really intended as forecasts. 

A forecasting tournament, on the other hand, rewards a good-faith effort at getting the answer right. A serious forecaster will soon be confronted by the gaps in his or her knowledge. In 2002, psychologists Leonid Rozenblit and Frank Keil coined the phrase “the illusion of explanatory depth”. If you ask people to explain how a flush lavatory actually works (or a helicopter, or a sewing machine) they will quickly find it is hard to explain beyond hand-waving. Most parents discover this when faced by questions from curious children. 

Yet subsequent work has shown that asking people to explain how the US Affordable Care Act or the European Single Market work prompts some humility and, with it, political moderation. It seems plausible that thoughtful forecasting has a similar effect. 

Good forecasters are obliged to consider different scenarios. Few prospects in a forecasting tournament are certainties. A forecaster may believe that parliament is likely to reject the deal the UK has negotiated with the EU, but he or she must seriously evaluate the alternative. Under which circumstances might parliament accept the deal instead? Again, pondering alternative scenarios and viewpoints has been shown to reduce our natural overconfidence. 

My own experience with scenario planning — a very different type of futurology from a forecasting tournament — suggests another benefit of exploring the future. If the issue at hand is contentious, it can feel safer and less confrontational to talk about future possibilities than to argue about the present. 

It may not be so surprising, then, that Profs Mellers, Tetlock and Arkes found that forecasting reduces political polarisation. They recruited people to participate in a multi-month forecasting tournament, then randomly assigned some to the tournament and some to a non-forecasting control group. (A sample question: “Will President Trump announce that the US will pull out of the Trans-Pacific Partnership during the first 100 days of his administration?”) 

At the end of the experiment, the forecasters had moderated their views on a variety of policy domains. They also tempered their inclination to presume the opposite side was packed with extremists. Forecasting, it seems, is an antidote to political tribalism. 

Of course, centrism is not always a virtue and, if forecasting tournaments are a cure for tribalism, then they are a course of treatment that lasts months. Yet the research is a reminder that not all forecasters are blowhards and bluffers. Thinking seriously about the future requires keeping an open mind, understanding what you don’t know, and seeing things as others see them. If the end result is a good forecast, perhaps we should see that as the icing on the cake.

Tuesday 3 October 2017

The many shades of darkness and light

If we do not recognise the multiplicity of our past, we cannot accept the multiplicity of our present


Tabish Khair in The Hindu


For most Europeans and Europeanised peoples, Western modernity starts assuming shape with something called the Enlightenment, which, riding the steed of Pure Reason, sweeps away the preceding ‘Dark Ages’ of Europe. Similarly, for religious Muslims, the revelations of Islam mark a decisive break in Arabia from an earlier age of ignorance and superstition, often referred to as ‘Jahillia’.

Both ideas are based on a perception of historical change, but they also tinker with historical facts. In that sense, they are ideological: not ‘fake’, but a particular reading of the material realities they set out to chronicle. Their light is real, but it blinds us to many things too.

For instance, it has been increasingly contested whether the European Dark Ages were as dark as the rhetoric of the Enlightenment assumes. It has also been doubted whether the Enlightenment shed as much light on the world as its champions claim: some of the darkest deeds perpetrated against non-Europeans were justified in the light of the ‘historical progress’ demanded by the Enlightenment. Finally, even the movement away from religion to reason was not as clear-cut as is assumed: well into the 19th century, Christianity (particularly Protestantism) was justified in terms of divinely illuminated reason as against the dark heathen superstitions of other faiths, and this logic survives in subtler forms even today.

In a similar way, the Islamic notion of a prior age of Jahillia is partly a construct. While it might have applied to some Arab tribes most directly influenced by the coming of Islam, it was not as if pre-Islamic Arabia was simply a den of darkness and ignorance. There were developed forms of culture, poetry, worship and social organisation in so-called Jahillia too, which many religious Arab Muslims are unwilling to consider as part of their own inheritance today. Once again, this notion of a past Jahillia has enabled extremists in Muslim societies to treat other people in brutal ways: a recent consequence was the Taliban’s destruction of the ancient Buddhas of Bamiyan in Afghanistan in 2001, not to mention the persecution of some supposed ‘idolators’ in Islamic State-occupied territories.

Achievements and an error

Both the notions — the Dark Ages followed by the Enlightenment and Jahillia followed by the illumination of Islam — are based on some real developments and achievements. Europe did move, slowly and often contradictorily, from religious and feudal authority to a greater tendency to reason and hence, finally, to allotting all individuals a theoretically equivalent (democratic) space as a human right rather than as a divine boon. Similarly, many parts of pre-Islamic Arabia (‘Jahillia’) did move from incessant social strife and a certain lack of cohesion to the far more organised, and hence hugely successful, politico-religious systems enabled by Islam. It might also be, as many religious Muslims claim, that early Islam marked some distinctively progressive and egalitarian values compared to the predominant tribalism of so-called Jahillia.

In both cases, however, the error has been to posit a complete break: a before-after scenario. This is not sustained by all the historical evidence. Why do I need to point this out? Because there are two great problems with positing such decisive before-after scenarios, apart from that of historical error.

Two problems of a complete break

First, it reduces one’s own complex relationship to one’s past to sheer negation. The past — as the Dark Ages or Jahillia — simply becomes a black hole into which we dump everything that we feel does not belong to our present. This reduces not only the past but also our present.

Second, the past — once reduced to a negative, obscure, dark caricature of our present — can then be used to persecute peoples who do not share our present. In that sense, the before-after scenario is aimed at the future. When Europeans set out to bring ‘Enlightenment’ values to non-European people, they also justified many atrocities by reasoning that these people were stuck in the dark ages of a past that should have vanished, and hence such people needed to be forcibly civilised for their own good. History could be recruited to explain away — no, even call for — the persecution that was necessary to ‘improve’ and ‘enlighten’ such people. I need not point out that some very religious Muslims thought in ways that were similar, and some fanatics still do.

I have often wondered whether the European Enlightenment adopted more from the Arabs than their discoveries in philosophy or science, ranging from algebra to the theory of the camera. Perhaps its binary division of Europe’s own past is also an unconscious imitation of the Arab bifurcation of its past into dark ‘Jahillia’ and the light of Islam. Or maybe it is a sad ‘civilisational’ trend, for some caste Hindus tend to make a similar cut between ‘Arya’ and ‘pre-Arya’ pasts, with similar consequences: a dismissal of aboriginal cultures, practices and rights today as “lapsed” forms, or the whitewashing of Dravidian history by the fantasy of a permanent ‘Aryan’ presence in what is now India.

All such attempts — Muslim, Arya-Hindu, or European — bear the germs of potential violence. After all, if we cannot accept our own evolving identities in the past, how can we accept our differences with others today? And if we cannot accept the diversity and richness of our multiple pasts, how can we accept the multiplicity of our present?

Wednesday 1 January 2014

DRS in the parallel universe

Jon Hotten in Cricinfo 

India may not use DRS, but the decisions they receive from umpires today are tinged with a DRS worldview  © Getty Images
The theory of multiple universes was developed by an academic physicist called Hugh Everett. He was proposing an answer to the famous paradox of Schrödinger's Cat, a thought experiment in which the animal is both alive and dead until observed in one state or the other.
Everett's idea was that every outcome of any event happened somewhere - in the case of the cat, it lived in one universe and died in another. All possible alternative histories and futures were real. It was a mind-bending thought, but then the sub-atomic world operates on such scales. Everett's idea was dismissed at first, and wasn't accepted as a mainstream interpretation in its field until after his death in 1982. Like most theories in physics, its nature is essentially ungraspable by the layman - certainly by me - but superficially it chimes with one of the sports fan's favourite questions, the "what-if". And after all, the DRS has produced a moment when a batsman can be both in and out to exactly the same ball. 
There came a point during the fourth Test in Melbourne, as Monty Panesar bowled to Brad Haddin with Australia at 149 for 6 in reply to England's 255, when Monty had what looked like a stone-dead LBW shout upheld. Haddin reviewed, as the match situation demanded he must, and the decision was overturned by less than the width of a cricket ball.
In the second Test in Durban, Dale Steyn delivered the first ball of the final day to Virat Kohli with India on 68 for 2, 98 runs behind South Africa. The ball brushed his shoulder and the umpire sent him on his way. India don't use DRS, and so the on-field decision stood.
When the fans of the future stare back through time at the scorecards of both games, they will look at wins by wide margins - eight wickets for Australia and ten wickets for South Africa. They might not notice these "what-if" events.
Yet it's worth a thought as to what might have happened had England had another 50 or 60 runs in the bank in the first innings, and India the in-form Kohli at the crease to take the morning wrath of Steyn. Test cricket has a capacity to develop thin cracks into chasms as wide as those in a WACA pitch, and the game is full of subtle changes that discharge their payload further down the line.
The thought that somewhere out there is a universe without DRS for England and with it for India is no consolation to the losing sides, but such moments highlight the ongoing flux within the system.
As soon as Kohli was fired out, Twitter was filled with comments along the lines of: "Bet they wish they had DRS now", but as one voice amongst the clamour noted: "India don't deserve poor umpiring because they don't want DRS."
That point had weight. Even in games without the system it retains its impact because it has reshaped the way umpires and players approach the game. India will, for example, still have batsmen given out leg before wicket while stretching well down the pitch in the post-DRS manner, because the worldview of the umpire has been changed by what he has seen on its monitors. Players bat and bowl differently, and umpires give different decisions, because of what DRS has shown them.
The retirement of Graeme Swann was something of a milestone in this respect. His career would have been significantly altered had he not been such a master of exploiting the conditions created by DRS. He knew how to bowl to get front-foot LBW decisions. In response, batsmen have had to adapt their techniques when playing spin bowlers.
In this way and in others, DRS has become knitted into the fabric of Test cricket, whether it is being used or not. Were it to be withdrawn now, its effects would still exist, and irrevocably so.
But India's aversion still has its merits. It's now thuddingly obvious that DRS will never be used for its original purpose, the eradication of the obvious mistake. Instead, it has, in a classic case of function-creep, become the sentry of the fine margin, inserting itself into places where its own deficiencies are highlighted. The Ashes series in England was inflamed by a malfunctioning Hotspot. The Ashes series in Australia has revealed that umpires no longer seem to check the bowler for front-foot no-balls.
The outsourcing of DRS technology remains a paradox worthy of Schrödinger. TV companies have to pay for it, and the developers of the system have a commercial reason to stress its accuracies. Such truths sit uneasily with the notion of fairness and impartiality. Similarly, players have been radicalised into ersatz umpires, having to choose whether or not to have decisions made. Such randomness also impinges on impartiality.
It's hard to think of anything more implacable than a piece of machinery, and yet cricket has found a way to politicise it, and it is here, at heart, that India's objections lie. They have a point.