
Thursday, 10 September 2020

Facts v feelings: how to stop our emotions misleading us

The pandemic has shown how a lack of solid statistics can be dangerous. But even with the firmest of evidence, we often end up ignoring the facts we don’t like. By Tim Harford in The Guardian
 

By the spring of 2020, the high stakes involved in rigorous, timely and honest statistics had suddenly become all too clear. A new coronavirus was sweeping the world. Politicians had to make their most consequential decisions in decades, and fast. Many of those decisions depended on data detective work that epidemiologists, medical statisticians and economists were scrambling to conduct. Tens of millions of lives were potentially at risk. So were billions of people’s livelihoods.

In early April, countries around the world were a couple of weeks into lockdown, global deaths passed 60,000, and it was far from clear how the story would unfold. Perhaps the deepest economic depression since the 1930s was on its way, on the back of a mushrooming death toll. Perhaps, thanks to human ingenuity or good fortune, such apocalyptic fears would fade from memory. Many scenarios seemed plausible. And that’s the problem.

An epidemiologist, John Ioannidis, wrote in mid-March that Covid-19 “might be a once-in-a-century evidence fiasco”. The data detectives are doing their best – but they’re having to work with data that’s patchy, inconsistent and woefully inadequate for making life-and-death decisions with the confidence we would like.

Details of this fiasco will, no doubt, be studied for years to come. But some things already seem clear. At the beginning of the crisis, politics seems to have impeded the free flow of honest statistics. Although the claim is contested, Taiwan complained that in late December 2019 it had given important clues about human-to-human transmission to the World Health Organization – but as late as mid-January, the WHO was reassuringly tweeting that China had found no evidence of human-to-human transmission. (Taiwan is not a member of the WHO, because China claims sovereignty over the territory and demands that it should not be treated as an independent state. It’s possible that this geopolitical obstacle led to the alleged delay.)

Did this matter? Almost certainly; with cases doubling every two or three days, we will never know what might have been different with an extra couple of weeks of warning. It’s clear that many leaders took a while to appreciate the potential gravity of the threat. President Trump, for instance, announced in late February: “It’s going to disappear. One day it’s like a miracle, it will disappear.” Four weeks later, with 1,300 Americans dead and more confirmed cases in the US than in any other country, Trump was still talking hopefully about getting everybody to church at Easter.
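
The arithmetic behind that "couple of weeks" is worth spelling out. Here is a minimal sketch; the 14-day head start and the doubling times are illustrative assumptions, not figures from the article:

# Illustrative sketch: how much an outbreak grows during a 14-day delay
# if cases double every 2-3 days. All figures here are assumptions.
for doubling_time_days in (2, 3):
    delay_days = 14
    growth_factor = 2 ** (delay_days / doubling_time_days)
    print(f"Doubling every {doubling_time_days} days: a {delay_days}-day delay "
          f"means roughly {growth_factor:.0f} times as many cases")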

As I write, debates are raging. Can rapid testing, isolation and contact tracing contain outbreaks indefinitely, or merely delay their spread? Should we worry more about small indoor gatherings or large outdoor ones? Does closing schools help to prevent the spread of the virus, or do more harm as children go to stay with vulnerable grandparents? How much does wearing masks help? These and many other questions can be answered only by good data about who has been infected, and when.

But in the early months of the pandemic, a vast number of infections were not being registered in official statistics, owing to a lack of tests. And the tests that were being conducted were giving a skewed picture, being focused on medical staff, critically ill patients, and – let’s face it – rich, famous people. It took several months to build a picture of how many mild or asymptomatic cases there are, and hence how deadly the virus really is. As the death toll rose exponentially in March, doubling every two days in the UK, there was no time to wait and see. Leaders put economies into an induced coma – more than 3 million Americans filed jobless claims in a single week in late March, five times the previous record. The following week was even worse: more than 6.5m claims were filed. Were the potential health consequences really catastrophic enough to justify sweeping away so many people’s incomes? It seemed so – but epidemiologists could only make their best guesses with very limited information.

It’s hard to imagine a more extraordinary illustration of how much we usually take accurate, systematically gathered numbers for granted. The statistics for a huge range of important issues that predate the coronavirus have been painstakingly assembled over the years by diligent statisticians, and often made available to download, free of charge, anywhere in the world. Yet we are spoiled by such luxury, casually dismissing “lies, damned lies and statistics”. The case of Covid-19 reminds us how desperate the situation can become when the statistics simply aren’t there.

When it comes to interpreting the world around us, we need to realise that our feelings can trump our expertise. This explains why we buy things we don’t need, fall for the wrong kind of romantic partner, or vote for politicians who betray our trust. In particular, it explains why we so often buy into statistical claims that even a moment’s thought would tell us cannot be true. Sometimes, we want to be fooled.

Psychologist Ziva Kunda found this effect in the lab, when she showed experimental subjects an article laying out the evidence that coffee or other sources of caffeine could increase the risk to women of developing breast cysts. Most people found the article pretty convincing. Women who drank a lot of coffee did not.

We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws. It is not easy to master our emotions while assessing information that matters to us, not least because our emotions can lead us astray in different directions.

We don’t need to become emotionless processors of numerical information – just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control of our emotions, we need simply to develop good habits. Ask yourself: how does this information make me feel? Do I feel vindicated or smug? Anxious, angry or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?

In the early days of the coronavirus epidemic, helpful-seeming misinformation spread even faster than the virus itself. One viral post – circulating on Facebook and email newsgroups – all too confidently explained how to distinguish between Covid-19 and a cold, reassured people that the virus was destroyed by warm weather, and advised, incorrectly, that iced water was to be avoided because warm water kills any virus. The post, sometimes attributed to “my friend’s uncle”, sometimes to “Stanford hospital board” or some blameless and uninvolved paediatrician, was occasionally accurate but generally speculative and misleading. But still people – normally sensible people – shared it again and again and again. Why? Because they wanted to help others. They felt confused, they saw apparently useful advice, and they felt impelled to share. That impulse was only human, and it was well-meaning – but it was not wise.


Protestors in Edinburgh demonstrating against Covid-19 prevention measures. Photograph: Jeff J Mitchell/Getty Images

Before I repeat any statistical claim, I first try to take note of how it makes me feel. It’s not a foolproof method against tricking myself, but it’s a habit that does little harm, and is sometimes a great deal of help. Our emotions are powerful. We can’t make them vanish, and nor should we want to. But we can, and should, try to notice when they are clouding our judgment.

In 1997, the economists Linda Babcock and George Loewenstein ran an experiment in which participants were given evidence from a real court case about a motorbike accident. They were then randomly assigned to play the role of plaintiff’s attorney (arguing that the injured motorcyclist should receive $100,000 in damages) or defence attorney (arguing that the case should be dismissed or the damages should be low).

The experimental subjects were given a financial incentive to argue their side of the case persuasively, and to reach an advantageous settlement with the other side. They were also given a separate financial incentive to guess accurately the damages that the judge in the real case had actually awarded. Their predictions should have been unrelated to their role-playing, but their judgment was strongly influenced by what they hoped would be true.

Psychologists call this “motivated reasoning”. Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion. In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice. Experts are not immune to motivated reasoning. Under some circumstances their expertise can even become a disadvantage. The French satirist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Benjamin Franklin commented: “So convenient a thing is it to be a reasonable creature, since it enables us to find or make a reason for everything one has a mind to.”

Modern social science agrees with Molière and Franklin: people with deeper expertise are better equipped to spot deception, but if they fall into the trap of motivated reasoning, they are able to muster more reasons to believe whatever they really wish to believe.

One recent review of the evidence concluded that this tendency to evaluate evidence and test arguments in a way that is biased towards our own preconceptions is not only common, but just as common among intelligent people. Being smart or educated is no defence. In some circumstances, it may even be a weakness.

One illustration of this is a study published in 2006 by two political scientists, Charles Taber and Milton Lodge. They wanted to examine the way Americans reasoned about controversial political issues. The two they chose were gun control and affirmative action.

Taber and Lodge asked their experimental participants to read a number of arguments on either side, and to evaluate the strengths and weaknesses of each argument. One might hope that being asked to review these pros and cons might give people more of a shared appreciation of opposing viewpoints; instead, the new information pulled people further apart.

This was because people mined the information they were given for ways to support their existing beliefs. When invited to search for more information, people would seek out data that backed their preconceived ideas. When invited to assess the strength of an opposing argument, they would spend considerable time thinking up ways to shoot it down.

This isn’t the only study to reach this sort of conclusion, but what’s particularly intriguing about Taber and Lodge’s experiment is that expertise made matters worse. More sophisticated participants in the experiment found more material to back up their preconceptions. More surprisingly, they found less material that contradicted them – as though they were using their expertise actively to avoid uncomfortable information. They produced more arguments in favour of their own views, and picked up more flaws in the other side’s arguments. They were vastly better equipped to reach the conclusion they had wanted to reach all along.

Of all the emotional responses we might have, the most politically relevant are motivated by partisanship. People with a strong political affiliation want to be on the right side of things. We see a claim, and our response is immediately shaped by whether we believe “that’s what people like me think”.

Consider this claim about climate change: “Human activity is causing the Earth’s climate to warm up, posing serious risks to our way of life.” Many of us have an emotional reaction to a claim like that; it’s not like a claim about the distance to Mars. Believing it or denying it is part of our identity; it says something about who we are, who our friends are, and the sort of world we want to live in. If I put a claim about climate change in a news headline, or in a graph designed to be shared on social media, it will attract attention and engagement not because it is true or false, but because of the way people feel about it.

If you doubt this, ponder the findings of a Gallup poll conducted in 2015. It found a huge gap between how much Democrats and Republicans in the US worried about climate change. What rational reason could there be for that?

Scientific evidence is scientific evidence. Our beliefs around climate change shouldn’t skew left and right. But they do. This gap became wider the more education people had. Among those with no college education, 45% of Democrats and 23% of Republicans worried “a great deal” about climate change. Yet among those with a college education, the figures were 50% of Democrats and 8% of Republicans. A similar pattern holds if you measure scientific literacy: more scientifically literate Republicans and Democrats are further apart than those who know very little about science.

If emotion didn’t come into it, surely more education and more information would help people to come to an agreement about what the truth is – or at least, the current best theory? But giving people more information seems actively to polarise them on the question of climate change. This fact alone tells us how important our emotions are. People are straining to reach the conclusion that fits with their other beliefs and values – and the more they know, the more ammunition they have to reach the conclusion they hope to reach.


Anti-carbon tax protesters in Australia in 2011. Photograph: Torsten Blackwood/AFP/Getty Images

In the case of climate change, there is an objective truth, even if we are unable to discern it with perfect certainty. But as you are one individual among nearly 8 billion on the planet, the environmental consequences of what you happen to think are irrelevant. With a handful of exceptions – say, if you’re the president of China – climate change is going to take its course regardless of what you say or do. From a self-centred point of view, the practical cost of being wrong is close to zero. The social consequences of your beliefs, however, are real and immediate.

Imagine that you’re a barley farmer in Montana, and hot, dry summers are ruining your crop with increasing frequency. Climate change matters to you. And yet rural Montana is a conservative place, and the words “climate change” are politically charged. Anyway, what can you personally do about it?

Here’s how one farmer, Erik Somerfeld, threads that needle, as described by the journalist Ari LeVaux: “In the field, looking at his withering crop, Somerfeld was unequivocal about the cause of his damaged crop – ‘climate change’. But back at the bar, with his friends, his language changed. He dropped those taboo words in favour of ‘erratic weather’ and ‘drier, hotter summers’ – a not-uncommon conversational tactic in farm country these days.”

If Somerfeld lived in Portland, Oregon, or Brighton, East Sussex, he wouldn’t need to be so circumspect at his local tavern – he’d be likely to have friends who took climate change very seriously indeed. But then those friends would quickly ostracise someone else in the social group who went around loudly claiming that climate change is a Chinese hoax.

So perhaps it is not so surprising after all to find educated Americans poles apart on the topic of climate change. Hundreds of thousands of years of human evolution have wired us to care deeply about fitting in with those around us. This helps to explain the findings of Taber and Lodge that better-informed people are actually more at risk of motivated reasoning on politically partisan topics: the more persuasively we can make the case for what our friends already believe, the more our friends will respect us.

It’s far easier to lead ourselves astray when the practical consequences of being wrong are small or non-existent, while the social consequences of being “wrong” are severe. It’s no coincidence that this describes many controversies that divide along partisan lines.

It’s tempting to assume that motivated reasoning is just something that happens to other people. I have political principles; you’re politically biased; he’s a fringe conspiracy theorist. But we would be wiser to acknowledge that we all think with our hearts rather than our heads sometimes.

Kris De Meyer, a neuroscientist at King’s College, London, shows his students a message describing an environmental activist’s problem with climate change denialism:


To summarise the climate deniers’ activities, I think we can say that:

(1) Their efforts have been aggressive while ours have been defensive.

(2) The deniers’ activities are rather orderly – almost as if they had a plan working for them.

I think the denialist forces can be characterised as dedicated opportunists. They are quick to act and seem to be totally unprincipled in the type of information they use to attack the scientific community. There is no question, though, that we have been inept in getting our side of the story, good though it may be, across to the news media and the public.

The students, all committed believers in climate change, outraged at the smokescreen laid down by the cynical and anti-scientific deniers, nod in recognition. Then De Meyer reveals the source of the text. It’s not a recent email. It’s taken, sometimes word for word, from an infamous internal memo written by a cigarette marketing executive in 1968. The memo complained not about “climate deniers” but about “anti-cigarette forces”; otherwise, few changes were required.

You can use the same language, the same arguments, and perhaps even have the same conviction that you’re right, whether you’re arguing (rightly) that climate change is real or (wrongly) that the cigarette-cancer link is not.

(Here’s an example of this tendency that, for personal reasons, I can’t help but be sensitive about. My left-leaning, environmentally conscious friends are justifiably critical of ad hominem attacks on climate scientists. You know the kind of thing: claims that scientists are inventing data because of their political biases, or because they’re scrambling for funding from big government. In short, smearing the person rather than engaging with the evidence.

Yet the same friends are happy to embrace and amplify the same kind of tactics when they are used to attack my fellow economists: that we are inventing data because of our political biases, or scrambling for funding from big business. I tried to point out the parallel to one thoughtful person, and got nowhere. She was completely unable to comprehend what I was talking about. I’d call this a double standard, but that would be unfair – it would suggest that it was deliberate. It’s not. It’s an unconscious bias that’s easy to see in others and very hard to see in ourselves.)

Our emotional reaction to a statistical or scientific claim isn’t a side issue. Our emotions can, and often do, shape our beliefs more than any logic. We are capable of persuading ourselves to believe strange things, and to doubt solid evidence, in service of our political partisanship, our desire to keep drinking coffee, our unwillingness to face up to the reality of our HIV diagnosis, or any other cause that invokes an emotional response.

But we shouldn’t despair. We can learn to control our emotions – that is part of the process of growing up. The first simple step is to notice those emotions. When you see a statistical claim, pay attention to your own reaction. If you feel outrage, triumph, denial, pause for a moment. Then reflect. You don’t need to be an emotionless robot, but you could and should think as well as feel.

Most of us do not actively wish to delude ourselves, even when that might be socially advantageous. We have motives to reach certain conclusions, but facts matter, too. Lots of people would like to be movie stars, billionaires or immune to hangovers, but very few people believe that they actually are. Wishful thinking has limits. The more we get into the habit of counting to three and noticing our knee-jerk reactions, the closer to the truth we are likely to get.

For example, one survey, conducted by a team of academics, found that most people were perfectly able to distinguish serious journalism from fake news, and also agreed that it was important to amplify the truth, not lies. Yet the same people would happily share headlines such as “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”, because at the moment at which they clicked “share”, they weren’t stopping to think. They weren’t thinking, “Is this true?”, and they weren’t thinking, “Do I think the truth is important?” 

Instead, as they skimmed the internet in that state of constant distraction that we all recognise, they were carried away with their emotions and their partisanship. The good news is that simply pausing for a moment to reflect was all it took to filter out a lot of the misinformation. It doesn’t take much; we can all do it. All we need to do is acquire the habit of stopping to think.

Inflammatory memes or tub-thumping speeches invite us to leap to the wrong conclusion without thinking. That’s why we need to be calm. And that is also why so much persuasion is designed to arouse us – our lust, our desire, our sympathy or our anger. When was the last time Donald Trump, or for that matter Greenpeace, tweeted something designed to make you pause in calm reflection? Today’s persuaders don’t want you to stop and think. They want you to hurry up and feel. Don’t be rushed.

Wednesday, 31 January 2018

Numbers aren't neutral

A S Paneerselvan in The Hindu



Analysing data without providing sufficient context is dangerous


An inherent challenge in journalism is to meet deadlines without compromising on quality, while sticking to the word limit. However, brevity takes a toll when it comes to reporting on surveys, indexes, and big data. Let me examine three sets of stories which were based on surveys and carried prominently by this newspaper, to understand the limits of presenting data without providing comprehensive context.

Three reports

The Annual Status of Education Report (ASER), Oxfam’s report titled ‘Reward Work, Not Wealth’, and the World Bank’s ease of doing business (EoDB) rankings have been widely reported, commented on, and editorialised. In most cases, the numbers and rankings were presented as neutral evaluations; they were not seen as data originating from institutions that have political underpinnings. Data become meaningful only when the methodology of data collection is spelt out in clear terms.

Every time I read surveys, indexes, and big data, I look for at least three basic parameters to understand the numbers: the sample size, the sample questionnaire, and the methodology. The sample size used indicates the robustness of the study, the questionnaire reveals whether there are leading questions, and the methodology reveals the rigour in the study. As a reporter, there were instances where I failed to mention these details in my resolve to stick to the word limit. Those were my mistakes.

The ASER study covering specific districts in States is about children’s schooling status. It attempts to measure children’s abilities with regard to basic reading and writing. It is a significant study as it gives us an insight into some of the problems with our educational system. However, we must be aware of the fact that these figures are restricted only to the districts in which the survey was conducted. It cannot be extrapolated as a State-wide sample, nor is it fair to rank States based on how specific districts fare in the study. A news item, “Report highlights India’s digital divide” (Jan. 19, 2018), conflated these figures.

For instance, the district surveyed in Kerala was Ernakulam, which is an urban district; in West Bengal it was South 24 Parganas, a complex district that stretches from metropolitan Kolkata to remote villages at the mouth of the Bay of Bengal. How can we compare these two districts with Odisha’s Khordha, Jharkhand’s Purbi Singhbhum and Bihar’s Muzaffarpur? It can be tempting for a reporter who has accessed the data to paint a larger picture based on these district-level numbers. But we learn little when we compare apples and oranges.

Questionable methodology


Oxfam, in the ‘Reward Work, Not Wealth’ report, used a methodology that has been questioned by many economists. Inequality is calculated on the basis of “net assets”. The economists point out that by this measure the poorest are not those living on very little, but young professionals who own no assets and carry large educational loans. Inequality is the elephant in the room which we cannot ignore. But Oxfam’s figures seem to mimic the huge notional loss figures put out by the Comptroller and Auditor General of India. Readers should know that Oxfam’s study draws its figures from disparate sources: the Credit Suisse Global Wealth Report, the Forbes billionaires list (with last year’s figure adjusted using the average annual U.S. Consumer Price Index inflation rate from the U.S. Bureau of Labor Statistics), the World Bank’s household survey data, and an online survey in 10 countries.

When the World Bank announced the EoDB index last year, there was euphoria in India. However, this newspaper’s editorial “Moving up” (Nov. 2, 2017), which looked at India’s surge in the latest World Bank ranking from the 130th position to the 100th in a year, struck a note of caution and asked the government, which has great orators in its ranks, to be a better listener. In hindsight, this position was vindicated when the World Bank’s chief economist, Paul Romer, said that he could no longer defend the integrity of changes made to the methodology and that the Bank would recalculate the national rankings of business competitiveness going back at least four years. Readers would have appreciated the FAQ section (“Recalculating ease of doing business”, Jan. 25), which explained this controversy in some detail, even more had it also looked at India’s ranking under the old methodology.

Wednesday, 28 September 2016

The tyranny of numbers can often stymie selectors

Suresh Menon in The Hindu

Selectors must make inspired choices relying on instinct rather than the calculator.

In an essay, The Ethnic Theory of Plane Crashes, Malcolm Gladwell wrote about the hierarchical nature of Korean society that might have led to a plane crash. The junior pilot was so deferential to his senior that when the latter made a mistake, he didn’t point it out. Hierarchy in Indian society is well-established too.

Also, numbers slot people. Hence, the highest taxpayer versus the average payer, 100 Tests versus 10 Tests. It is the latter that concerns us here.

The Board of Control for Cricket in India is being criticised for picking a five-man selection committee with a combined playing experience of 13 Tests and 31 one-dayers. The argument here is that only those who have played a large number of Tests are qualified to choose a national team (or perhaps even write about it!). The Cardusian counter is that one need not have laid an egg to be able to tell a good one from the bad.

If that sounds too cute, there is the empirical evidence available to those who have followed Indian cricket for long. A player with 50 or 60 Tests is not automatically qualified to recognise talent at an early stage or see a world in a grain of sand as it were.

Not all international cricketers are students of the game. I would rather talk cricket with someone like Vasu Paranjpe, the legendary coach, than with some players. To be able to play is a wonderful thing and admirable. Many players can demonstrate, but few can explain. Often the experience of 50 Tests is merely the experience of one Test multiplied 50 times.

Selectors must make inspired choices relying on instinct rather than the calculator. There are spinners or batsmen lurking in the thicket of Indian cricket who may not have the record but who are long-term prospects.

Retrospective judging

The successful selector can only be judged retrospectively. Often former players, conscious of how corrosive criticism can be, would rather be praised for sticking to the straight and narrow than invite censure for taking a chance or two.

I have advocated for years that the best selectors should pick the junior sides. Most intelligent watchers of the game can pick 20 national team players without too much effort. Ideal selectors are special people. They bring to the table an instinct for the job which is independent of the number of internationals they have played.

After all, if it were all down only to scores and stats, a computer would do the job just as well. I have no idea how the current committee will function, but the five-man team should not be dismissed out of hand merely because they haven’t played 100 Tests.

Vasu Paranjpe, who didn’t play a Test, would have made a wonderful selector. In fact, off the top of my head, I can think of many without Test experience who would have. From Mumbai, Raj Singh Dungarpur, Kailash Gattani and Makarand Waingankar; from Delhi, Akash Lal; from Kolkata, Karthik Bose; from Chennai, A.G. Ramsingh, V. Ramnarayan and Abdul Jabbar; and from Karnataka, V.S. Vijaykumar and Sanjay Desai. Dungarpur and Lal were National selectors in the old days. The list is by no means exhaustive.

Temperament matters

It has often been argued that only someone who has played a bunch of Tests can understand the off-field pressures a young debutant may be subjected to. Hence the call for those who have experienced that. But a good selector will take temperament into account too.

Some of the heaviest scorers and highest wicket takers in the national championship have not played for India; clearly the selection committee has worked out that runs and wickets alone are not enough.

The question of hierarchy, however, is a valid one. At least two recent selectors, Mohinder Amarnath and Sandip Patil, respected internationals both, have admitted that dealing with senior players who have played more Tests than they did is no picnic.

Within a committee too, if there is a big gap in experience or popular stature, those who may have better ideas but fewer Tests have been forced to go with the flow. Lala Amarnath, for example, was known to browbeat the panel.

I remember a respected former player, when he was manager of the national side, being asked, “How many Tests have you played?” in a nasty sort of way. This is the hierarchy of numbers.

If M.S.K. Prasad (Chairman), Sarandeep Singh, Devang Gandhi, Gagan Khoda and Jatin Paranjpe bring to their job professionalism, integrity and an instinct for the right pick, they will render irrelevant the numbers pertaining to their international experience. All this is, of course, assuming the Supreme Court endorses the BCCI’s stand.

There will be criticism — that is part of the job description of a selector. But if the BCCI is throwing its net wider to include those with the skill, but without the record, then there’s a hint for the selectors here. Sometimes you must take a punt on perceived skill regardless of record.

Monday, 18 July 2016

A nine-point guide to spotting a dodgy statistic

 
Boris Johnson did not remove the £350m figure from the Leave campaign bus even after it had been described as ‘misleading’. Photograph: Stefan Rousseau/PA


David Spiegelhalter in The Guardian

I love numbers. They allow us to get a sense of magnitude, to measure change, to put claims in context. But despite their bold and confident exterior, numbers are delicate things and that’s why it upsets me when they are abused. And since there’s been a fair amount of number abuse going on recently, it seems a good time to have a look at the classic ways in which politicians and spin doctors meddle with statistics.

Every statistician is familiar with the tedious “Lies, damned lies, and statistics” gibe, but the economist, writer and presenter of Radio 4’s More or Less, Tim Harford, has identified the habit of some politicians as not so much lying – to lie means having some knowledge of the truth – as “bullshitting”: a carefree disregard of whether the number is appropriate or not.

So here, with some help from the UK fact-checking organisation Full Fact, is a nine-point guide to what’s really going on.

Use a real number, but change its meaning


There’s almost always some basis for numbers that get quoted, but it’s often rather different from what is claimed. Take, for example, the famous £350m, as in the “We send the EU £350m a week” claim plastered over the big red Brexit campaign bus. This is a true National Statistic (see Table 9.9 of the ONS Pink Book 2015), but, in the words of Sir Andrew Dilnot, chair of the UK Statistics Authority watchdog, it “is not an amount of money that the UK pays to the EU”. In fact, the UK’s net contribution is more like £250m a week when Britain’s rebate is taken into account – and much of that is returned in the form of agricultural subsidies and grants to poorer UK regions, reducing the figure to £136m. Sir Andrew expressed disappointment that this “misleading” claim was being made by Brexit campaigners but this ticking-off still did not get the bus repainted.


George Osborne quoted the Treasury’s projection of £4,300 as the cost per household of leaving the EU. Photograph: Matt Cardy/Getty Images


Make the number look big (but not too big) 

Why did the Leave campaign frame the amount of money as “£350m per week”, rather than the equivalent “£19bn a year”? They probably realised that, once numbers get large, say above 10m, they all start seeming the same – all those extra zeros have diminishing emotional impact. Billions, schmillions, it’s just a Big Number.

Of course they could have gone the other way and said “£50m a day”, but then people might have realised that this is equivalent to around a packet of crisps each, which does not sound so impressive.

George Osborne, on the other hand, preferred to quote the Treasury’s projection of the potential cost of leaving the EU as £4,300 per household per year, rather than as the equivalent £120bn for the whole country. Presumably he was trying to make the numbers seem relevant, but perhaps he would have been better off framing the projected cost as “£2.5bn a week” so as to provide a direct comparison with the Leave campaign’s £350m. It probably would not have made any difference: the weighty 200-page Treasury report is on course to become a classic example of ignored statistics.
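
As a rough check on the equivalences in the two paragraphs above, here is a small sketch; the UK population (about 66 million) and the number of UK households (about 27 million) are assumptions used purely for illustration:

# Rough arithmetic behind the alternative framings discussed above.
# Population (~66m) and household (~27m) figures are illustrative assumptions.
weekly_claim = 350e6                                   # the "£350m a week" figure
print(f"Per year: about £{weekly_claim * 52 / 1e9:.1f}bn")        # ~£18.2bn, i.e. roughly the £19bn figure
print(f"Per day: £{weekly_claim / 7 / 1e6:.0f}m")                 # £50m a day
print(f"Per person per day: £{weekly_claim / 7 / 66e6:.2f}")      # ~£0.76 - about a packet of crisps

treasury_total = 120e9                                 # projected annual cost of leaving the EU
households = 27e6                                      # assumed number of UK households
print(f"Per household per year: £{treasury_total / households:,.0f}")   # ~£4,400, close to the £4,300 quoted
print(f"Per week, whole country: £{treasury_total / 52 / 1e9:.1f}bn")   # ~£2.3bn, near the £2.5bn framing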



Recent studies confirmed higher death rates at weekends, but showed no relationship to weekend staffing levels. Photograph: Peter Byrne/PA


Casually imply causation from correlation

In July 2015 Jeremy Hunt said: “Around 6,000 people lose their lives every year because we do not have a proper seven-day service in hospitals….” and by February 2016 this had increased to “11,000 excess deaths because we do not staff our hospitals properly at weekends”. These categorical claims that weekend staffing was responsible for increased weekend death rates were widely criticised at the time, particularly by the people who had done the actual research. Recent studies have confirmed higher death rates at weekends, but these showed no relationship to weekend staffing levels.


Choose your definitions carefully

On 17 December 2014, Tom Blenkinsop MP said, “Today, there are 2,500 fewer nurses in our NHS than in May 2010”, while on the same day David Cameron claimed “Today, actually, there are new figures out on the NHS… there are 3,000 more nurses under this government.” Surely one must be wrong?

But Mr Blenkinsop compared the number of people working as nurses between September 2010 and September 2014, while Cameron used the full-time-equivalent number of nurses, health visitors and midwives between the start of the government in May 2010 and September 2014. So they were both, in their own particular way, right.


‘Indicator hopper’: Health secretary Jeremy Hunt. Photograph: PA


Use total numbers rather than proportions (or whichever way suits your argument)

In the final three months of 2014, less than 93% of attendances at Accident and Emergency units were seen within four hours, the lowest proportion for 10 years. And yet Jeremy Hunt managed to tweet that “More patients than ever being seen in less than four hours”. Which, strictly speaking, was correct, but only because more people were attending A&E than ever before. Similarly, when it comes to employment, an increasing population means that the number of employed can go up even when the employment rate goes down. Full Fact has shown how the political parties play “indicator hop”, picking whichever measure currently supports their argument.
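
A toy example makes the trick clear; the attendance figures below are invented for illustration, not the real A&E numbers:

# Invented figures, purely to show how a total can rise while the proportion falls.
last_year_attendances, last_year_rate = 5_000_000, 0.95
this_year_attendances, this_year_rate = 5_500_000, 0.93

seen_last_year = last_year_attendances * last_year_rate   # 4,750,000 seen within four hours
seen_this_year = this_year_attendances * this_year_rate   # 5,115,000 seen within four hours

print(seen_this_year > seen_last_year)   # True: "more patients than ever" seen in time...
print(this_year_rate < last_year_rate)   # True: ...even though the proportion is the lowest for years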


Is crime going up or down? Don’t ask Andy Burnham. Photograph: PA

Don’t provide any relevant context

Last September shadow home secretary Andy Burnham declared that “crime is going up”, and when pressed pointed to the police recording more violent and sexual offences than the previous year. But police-recorded crime data were de-designated as “official” statistics by the UK Statistics Authority in 2014 as they were so unreliable: they depend strongly on what the public choose to report, and how the police choose to record it.

Instead the Crime Survey for England and Wales is the official source of data, as it records crimes that are not reported to the police. And the Crime Survey shows a steady reduction in crime for more than 20 years, and no evidence of an increase in violent and sexual offences last year.

Exaggerate the importance of a possibly illusory change


Next time you hear a politician boasting that unemployment has dropped by 30,000 over the previous quarter, just remember that this is an estimate based on a survey. And that estimate has a margin of error of +/- 80,000, meaning that unemployment may well have gone down, but it may have gone up – the best we can say is that it hasn’t changed very much, but that hardly makes a speech. And to be fair, the politician probably has no idea that this is an estimate and not a head count.
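
To see why the quoted margin of error swamps the headline change, here is a minimal sketch using the figures above:

# The figures quoted above: an estimated fall of 30,000 with a margin of error of +/- 80,000.
estimated_change = -30_000
margin_of_error = 80_000

low = estimated_change - margin_of_error
high = estimated_change + margin_of_error
print(f"Plausible range for the true change: {low:+,} to {high:+,}")
# Plausible range for the true change: -110,000 to +50,000
# The range straddles zero, so unemployment may well have fallen - but it may have risen.
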
Serious youth crime has actually declined, but that’s not because of TKAP. Photograph: Action Press / Rex Features


Prematurely announce the success of a policy initiative using unofficial selected data

In June 2008, just a year after the start of the Tackling Knives Action Programme (TKAP), No 10 got the Home Office to issue a press release saying “the number of teenagers admitted to hospital for knife or sharp instrument wounding in nine… police force areas fell by 27% according to new figures published today”. But this used unchecked unofficial data, and was against the explicit advice of official statisticians. They got publicity, but also a serious telling-off from the UK Statistics Authority which accused No 10 of making an announcement that was “corrosive of public trust in official statistics”. The final conclusion about the TKAP was that serious youth violence had declined in the country, but no more in TKAP areas than elsewhere.


  Donald Trump: ‘Am I going to check every statistic?’
Photograph: Robert F. Bukaty/AP


If all else fails, just make the numbers up

Last November, Donald Trump tweeted a recycled image that included the claim that “Whites killed by blacks – 81%”, citing “Crime Statistics Bureau – San Francisco”. The US fact-checking site Politifact identified this as completely fabricated – the “Bureau” did not exist, and the true figure is around 15%. When confronted with this, Trump shrugged and said, “Am I going to check every statistic?”

Not all politicians are so cavalier with statistics, and of course it’s completely reasonable for them to appeal to our feelings and values. But there are some serial offenders who conscript innocent numbers, purely to provide rhetorical flourish to their arguments.

We deserve to have statistical evidence presented in a fair and balanced way, and it’s only by public scrutiny and exposure that anything will ever change. There are noble efforts to dam the flood of naughty numbers. The BBC’s More or Less team take apart dodgy data, organisations such as Full Fact and Channel 4’s FactCheck expose flagrant abuses, and the UK Statistics Authority writes admonishing letters. The Royal Statistical Society offers statistical training for MPs, and the House of Commons library publishes a Statistical Literacy Guide: how to spot spin and inappropriate use of statistics.

They are all doing great work, but the shabby statistics keep on coming. Maybe these nine points can provide a checklist, or even the basis for a competition – how many points can your favourite minister score? In my angrier moments I feel that number abuse should be made a criminal offence. But that’s a law unlikely to be passed by politicians.

David Spiegelhalter is the Winton Professor of the Public Understanding of Risk at the University of Cambridge and president-elect of the Royal Statistical Society

Wednesday, 26 March 2014

The banality of evil


NISSIM MANNATHUKKAREN
  
Illustration: Deepak Harichandan / The Hindu

When carnage is reduced to numbers and development to just economic growth, real human beings and their tragedies remain forgotten.


Empires collapse. Gang leaders/Are strutting about like statesmen. The peoples/Can no longer be seen under all those armaments — Bertolt Brecht

German-American philosopher Hannah Arendt gave the world the phrase “the banality of evil”. In 1963, she published the book Eichmann in Jerusalem: A Report on the Banality of Evil, her account of the trial of Adolf Eichmann, a Nazi military officer and one of the key figures of the Holocaust. Eichmann was hanged for war crimes. Arendt’s fundamental thesis is that ghastly crimes like the Holocaust are not necessarily committed by psychopaths and sadists, but, often, by normal, sane and ordinary human beings who perform their tasks with a bureaucratic diligence.

Maya Kodnani, an MLA from Naroda, handed out swords to the mobs that massacred 95 people in the Gujarat riots of 2002. She was sentenced to 28 years in prison. She is a gynaecologist who ran a clinic, and was later appointed Minister for Women and Child Development under Narendra Modi.

Jagdish Tytler was, allegedly, one of the key individuals in the 1984 pogrom against the Sikhs. He was born to a Sikh mother and was brought up by a Christian, a prominent educationist who established institutions like the Delhi Public School. A Congress Party leader, he has been a minister in the Union government. The supposedly long arm of the law has still not reached him. I guess it never will, considering that the conviction rate in the cases relating to the butchering of nearly 8,000 Sikhs is only around one per cent.

For every “monstrous” Babu Bajrangi and Dara Singh, there are the Kodnanis and Tytlers. Evil, according to Arendt, becomes banal when it acquires an unthinking and systematic character. Evil becomes banal when ordinary people participate in it, build distance from it and justify it, in countless ways. There are no moral conundrums or revulsions. Evil does not even look like evil, it becomes faceless.

Thus, a terrifyingly fascinating exercise that is right now underway in the election campaign is the trivialisation and normalisation of the Gujarat pogrom, to pave the way for the crowning of the emperor, the Vikas Purush. If, until recently, there was some moral indignation and horror at the thought of Narendra Modi becoming prime minister, it has been washed away in the tidal wave of poll surveys, media commentaries, intellectual opinion, political bed-hopping, and of course, what the Americans think, all of which reinforce each other in their collective will to see Modi ascend to power.

Banalisation of evil happens when great human crimes are reduced to numbers. Thus, for example, scholars Jagdish Bhagwati and Arvind Panagariya write a letter to The Economist on the latter’s article on Modi: “You said that Mr. Modi refuses to atone for a ‘pogrom’ against Muslims in Gujarat, where he is chief minister. But what you call a pogrom was in fact a ‘communal riot’ in 2002 in which a quarter of the people killed were Hindus”. So, apparently, if we change the terminology, the gravity of the crime and the scale of the human tragedy would be drastically less!

This intellectual discourse is mirrored in ordinary people who adduce long-winded explanations for how moral responsibility for events like the Gujarat pogrom cannot really be attributed to anybody, especially the chief minister, who is distant from the crime scene. No moral universe exists beyond the one of “legally admissible evidence”. To be innocent means only to be innocent in the eyes of law. But what does evidence mean when the most powerful political, bureaucratic, and legal machineries are deployed to manipulate, manufacture and kill evidence as seen in both the 2002 and 1984 cases?

Another strategy of banalisation is to pit the number of dead in 2002 against that of 1984 (Bhagwati and Panagariya go on to assert that 1984 “was indeed a pogrom”). Modi’s infamous response to post-Godhra violence is countered with Rajiv Gandhi’s equally notorious comment after his mother’s assassination. In this game of mathematical equivalence, what actually slip through are real human beings and their tragedies.

Banalisation of evil happens when the process of atonement is reduced to a superficial seeking of apology. Even when that meaningless apology is not tendered, we can wonder to what extent reconciliation is possible.

The biggest tool in this banalisation is development. Every day, you see perfectly decent, educated, and otherwise civil people normalise the Gujarat riots and Modi, because he is, after all, the “Man of Development”. “Yes, it might be that he is ultimately responsible for the riots, but look at the roads in Gujarat!” It is a strange moral world in which roads have moral equivalence to the pain of Zakia Jaffrey and other victims.

Ironically, along with evil, development itself becomes banal. Development becomes hollowed and is reduced to merely economic growth. E.F. Schumacher’s famous book Small is Beautiful has a less famous subtitle, A Study of Economics as if People Mattered. But when development is banal, people do not matter. Nor does the ecosystem. There are no inviolable ethical principles in pursuit of development. If Atal Behari Vajpayee was the mask of the BJP’s first foray into national governance, development becomes the mask of the Modi-led BJP’s present attempt, and a façade for the pogrom.

But what is fascinating is how such a banal understanding of development has captured public imagination. The most striking aspect of the Gujarat model is the divergence between its growing economy and its declining rank on the Human Development Index (HDI). For instance, in the UNDP's inequality-adjusted HDI (2011) Gujarat ranks ninth in education and 10th in health (among 19 major states). On gains in the HDI (1999-2008), Gujarat is 18th among 23 states. In the first India State Hunger Index (2009), Gujarat is 13th out of 17 states (beating only Chhattisgarh, Bihar, Jharkhand and Madhya Pradesh).

Yet, shockingly, prominent economists like Bhagwati participate in this banalisation by glorifying the Gujarat model. His response to the poor record of Gujarat is that it “inherited low levels of social indicators” and thus we should focus on “the change in these indicators” where he finds “impressive progress”. If so, how is it that many other states starting off at the same low levels have made much better gains than Gujarat without similar economic growth?

These figures and others about a whole range of human deprivation have been in the public domain for some time, but, astonishingly, are not a matter of debate in the elections. Even if they were, they would apparently not dent the myth of the “Man of Development”. Such is the power of banalisation that it has no correlation with facts.

Even as the developed countries are realising the catastrophic human and environmental costs of the urban, industrial-based models of boundless economic growth (in America, the number of new cancer cases is going to rise by 45 per cent in just 15 years), we are, ironically, hurtling down the same abyss to a known hell — India fell 32 ranks in the global Environmental Performance Index to 155 and Delhi has become the most polluted city in the world this year! The corporate-led Gujarat model is an even grander industrial utopia based on the wanton devastation of mangroves and grazing lands.

In a recent election opinion poll, the three most important problems identified by the voters in Punjab were drug addiction (70 per cent), cancer caused by pesticides (17 per cent) and alcoholism (nine per cent)! This is shocking and unprecedented, and it stems from the fact that an estimated 67 per cent of rural households in Punjab have at least one drug addict. Nevertheless, the juggernaut of development as economic growth careens on.

Disturbingly, the scope for questioning this banalisation of evil and development diminishes every day. Many reports have emerged about the self-censorship already being imposed by media institutions in preparation for the inauguration of a new power dispensation. A book which raises serious questions about the Special Investigation Team’s interrogation of Modi hardly gets any media attention and, instead, is dismissed as propaganda against the BJP. It does not matter that the same journalist subjected the investigation into the anti-Sikh pogrom to similar scrutiny. And the pulping of the book on Hinduism by a publisher portends dangerous tendencies for the freedom of speech and democracy in the country.

The vacuity of the attempts to counter the banalisation of development is evident in the media discourse on elections. Just sample the much-lauded interview conducted by the nation’s conscience keeper with Rahul Gandhi. In a 90-minute conversation, Arnab Goswami could ask only a single question on the economy — on price rise. This is in a nation which, on some social indicators, lags behind neighbours like Sri Lanka, Nepal and Bangladesh. Elections are not about the substantive issues of human well-being, environmental destruction, and ethics, but are reduced to a superficial drama of a clash of personalities.

Fascism is in the making when economics and development are amputated from ethics and an overarching conception of human good, and violence against minorities becomes banal. Moral choices are not always black and white, but they still have to be made. And if India actually believes this election to be a moral dilemma, then the conscience of the land of Buddha and Gandhi is on the verge of imploding.

Sunday, 2 March 2014

In God we trust - all others bring data: the perils of data-driven cricket

 

For all his triumphs as England coach, Andy Flower ultimately got the balance between trusting people and numbers wrong
Tim Wigmore in Cricinfo
March 2, 2014
 

Was Andy Flower ultimately empowered by data or inhibited by it? © PA Photos

Cricket is an art, not a science. It's a fact that needs restating after the disintegration of Andy Flower's reign as England coach. Slavery to data had gone too far. The triumphs of the more jocund Darren Lehmann, Flower's coaching antithesis, are a salutary reminder of the importance of fun and flair in a successful cricket team. And it's not only cricket that could learn from the tale.
Big data - the vogue term used to describe the manifold growth and availability of data, both structured and not - is an inescapable reality of the 21st century. There are 1200 exabytes of data stored in the world; translated, that means that, if it were all placed on CD-ROMs and stacked up, it would stretch to the moon in five separate piles, according to Kenneth Cukier and Viktor Mayer-Schonberger's book Big Data. Day-to-day life can often feel like a battle to stay afloat against the relentless tide of data. One hundred and sixty billion instant messages were sent in Britain in 2013. Over 500 million tweets are sent worldwide every day.
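
The CD-ROM image can be sanity-checked with a few lines; the disc capacity (~700 MB) and thickness (~1.2 mm) used here are assumptions for illustration:

# Sanity check of the stacked-CD-ROM image above. Disc capacity and thickness are assumptions.
total_bytes = 1200 * 10**18             # 1,200 exabytes
cd_capacity_bytes = 700 * 10**6         # ~700 MB per CD-ROM
cd_thickness_m = 1.2e-3                 # ~1.2 mm per disc
distance_to_moon_m = 384_400_000        # ~384,400 km

number_of_cds = total_bytes / cd_capacity_bytes
stack_height_m = number_of_cds * cd_thickness_m
print(f"Stack height: {stack_height_m / 1000:,.0f} km, "
      f"about {stack_height_m / distance_to_moon_m:.1f} times the Earth-moon distance")
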
Kevin Pietersen was the subject of a good number of those after his sacking as an England cricketer. Amid the cacophony of opinions, one voice we could have done without was David Cameron's; the prime minister gave a radio interview saying that there was a "powerful argument" for keeping the "remarkable" Pietersen in the team.
Cameron had once recognised the dangers of descending into the role of roving reporter, promising, "We are not going to sit in an office with the 24-hour news blaring out, shouting at the headlines." Downing Street's impulse to comment on the Pietersen affair is a manifestation of information overload at its worst: with so much space to fill, politicians feel compelled to fill it. The result is that they have less time to do their day jobs.
 
 
Datafication often brings ugly and perverse consequences. The easiest way to reduce poverty is to give people just enough money to inch them ahead of an arbitrary definition of poverty, rather than tackle the deep-rooted and more complex causes. Schools are routinely decried for a narrow-minded approach to education - "teaching to the test" - but this is the inevitable result of the obsession with standardised tests. California has pioneered performance-related pay for teachers, but a huge rise in teacher-enabled cheating has been one unforeseen result.
No industry has been permeated by datafication quite like the financial sector. The complex - oh, so complex - algorithms that underpinned the financial system had a simple rationale. In place of impulsive human beings, decision-making would be transferred to formulas that dealt only in cold logic, ensuring an end to financial catastrophes. We know what happened next. Yet the crash has changed less than is commonly supposed: around seven billion shares change hands every day on US equity markets - and five billion are traded by algorithms.
The Ashes tour felt like English cricket's crash. The numbers said that it couldn't possibly happen; those who spotted the warning signs were belittled as naysayers letting emotions cloud their judgement. The Ashes series was caricatured as the triumph of the old school - Lehmann's penchant for discussing the day's play over a beer - over Flower's pseudo-scientific approach. While clearly a simplification - Lehmann is no philistine when it comes to data - the accusation contains a grain of truth.
Flower's attraction to big data originated from reading Moneyball, the book that examined how the scientific methods of Oakland Athletics manager Billy Beane helped the baseball team punch above its financial limitations. But it is too readily forgotten that the Oakland Athletics ran out of steam in knockout games. "My shit doesn't work in the playoffs," Beane exclaimed. "My job is to get us to the playoffs. What happens after that is luck." Not even Beane found an empirical way of measuring flair, spontaneity and big-game aptitude.
After the debris of England's tour Down Under, The Sun published its list of the 61 "guilty men" - including 29 non-players - involved in England's Ashes tour. It was hard not to ask what on earth the backroom staff was doing. And, more pertinently, if England's total touring party had numbered only 51 or 41, could England really have performed any worse? The proliferation of specialist coaches and analysts seemed antithetical to the self-expression of players on the pitch.

The selection of Finn, Rankin and Tremlett for the Ashes was proof of the pitfalls of the reliance on bogus statistics © Getty Images
Similar questions are being asked in different fields. The average businessman now sends 108 emails a day. But, as inboxes get bigger, so the opportunity for creativity decreases. This reality is slowly being recognised: a multi-million dollar industry has grown around filtering emails to liberate businessmen from the grind. The world is running into the limits of Silicon Valley's favoured mantra "In God we trust - all others bring data."
No one would advocate pretending that big data doesn't exist. Datafication is happening at a staggering rate - the amount of digital data doubles every three years. Flower's reign, for the most part, showed the virtues of using it smartly. But data is emphatically not a substitute for intuition and flair - either in the office or on the cricket field.
By the last embers of Flower's rule, England seemed not empowered by data but inhibited by it, as instinct, spontaneity and joy seeped from their cricket. Accusations that England lacked flair on the field had a point - witness Alastair Cook's insistence on having a cover sweeper regardless of the match situation. Going back to 2011, consider England's approach to tying down Sachin Tendulkar in the home series against India: rather than letting him get his runs on the on side, they relied on drawing him outside his off stump early in his innings - a plan built from computer simulations by their team analyst Nathan "Numbers" Leamon.
The selection of three beanpole quick bowlers to tour Australia was rooted in data showing that such bowlers were most likely to thrive in Australia. The ECB looked at the characteristics of the best quick bowlers - delayed delivery, braced front leg and so on - and then tried to coach those virtues into their own players, seemingly not realising it was too late; you can't change those things once bowlers are more than about 15. But it did not matter how many boxes Steven Finn, Boyd Rankin and Chris Tremlett ticked in theory when they were utterly bereft of fitness and form in practice. It was proof of the pitfalls of excess devotion to data and reliance on bogus statistics. "Garbage in, garbage out," as some who work with data are prone to saying.
Data is a complement to intuition and judgement, not a replacement for them. As Cukier and Mayer-Schonberger argue in their study, big data "exacerbates a very old problem: relying on the numbers when they are far more fallible than we think".
Criticism of Flower's reliance on data always lingered beneath the surface - South Africa, for instance, expressed bafflement when Graham Onions was dropped for Ryan Sidebottom in 2010, a data-driven decision largely made before the tour even began. For all his triumphs as England coach, Flower ultimately got the balance between trusting people and trusting numbers wrong. He was in good company. In the brave new world, those who thrive will not be those who use data most - but those who use it most smartly.

Sunday, 12 May 2013

Lies, damned lies and Iain Duncan Smith



The way the work and pensions secretary manipulates statistics is a shaming indictment of his department's failings
The work and pensions secretary, Iain Duncan Smith, was reprimanded by the UK's statistics watchdog over his claims about the benefits cap. Photograph: Ian Nicholson/PA
When you see rottenness in a system you must ask: does it come from one bad apple or does the whole barrel stink?
The rank smell emanating from the coalition is impossible to miss. At first sniff, it appears to come from the blazered figure of Iain Duncan Smith. It has taken me some time to identify its source, because appearances deceive. From his clipped hair to his polished shoes, Duncan Smith seems to be a man who has retained the values of the officer corps of the Scots Guards he once served. Conservative commentators emphasise his honour and decency. They speak in reverential tones of his Easterhouse epiphany: the moment in 2002 when he saw the poverty on a Glasgow estate, brushed a manly tear from his eye and vowed to end the "dependency culture" that kept the poor jobless.
Duncan Smith's belief that the welfare state holds down the very people it is meant to serve is pleasing to Conservative ears. To maintain his supporters' illusions, he has to lie. Last week, the UK Statistics Authority gave him a reprimand that broke from the genteel language of the civil service. The work and pensions secretary had claimed that his department's cap on benefits was turning scroungers into strivers – even before it had come into force. "Already we have seen 8,000 people who would have been affected by the cap move into jobs." How sweet those words must have sounded to Conservative ears. The government was forcing the feckless to stop sponging off hard-working taxpayers. (Taxpayers are always "hard working" in British politics, in case you haven't noticed. We never try to get by doing the bare minimum.)
The figures did not show that, the statistics authority said. More to the point, they could not possibly have shown that. Duncan Smith's claims were "unsupported" by the very statistics his department had collected.
If this were a one-off, I would say Duncan Smith "misspoke" or "lacked judgment" or, in plain English, that he was an idiot. If every politician who spun statistics were damned, after all, parliament would be empty. But stronger language is warranted: Andrew Dilnot, the chair of the statistics authority, is considering sending his inspectors into the Department for Work and Pensions (DWP), because Duncan Smith is a habitual manipulator.
As journalists know, Duncan Smith's modus operandi is well established. His "people" – all of them scroungers, not strivers, who sponge off the taxpayer from their Whitehall offices – brief reporters with unpublished figures. The Tory press uses them and, as the Financial Times explained, when his spin doctors meet an honest journalist who asks hard questions, they end the call and never ring back. By the time the true figures appear on the DWP website, and informed commentators can see the falsity of the spin, the old saying applies: "A lie is halfway round the world before the truth has got its boots on."
Before the benefit cap, it was the work programme, which is meant to provide training for the unemployed. The statistics authority criticised the "coherence" of Duncan Smith's statistics and, once again, the manner in which his department presented them to the public. Far from being a success, the programme found work for a mere 8.6% of the desperate people who went on it. Meanwhile, Jonathan Portes, director of the National Institute of Economic and Social Research and a former chief economist at the Cabinet Office, has convincingly demonstrated that the Tory claim that "more than a third of people who were on incapacity benefit dropped their claims rather than complete a medical assessment" is false.
Numbers are stronger than words. When the powerful lie with statistics, they do so in the cynical knowledge that the public is more likely to believe them. But the manipulation does not just tell us how sly operators view the credulous masses, but how they see themselves.
The UK Statistics Authority has a fine phrase that guides its mathematicians: "Numbers should be a light, not a crutch". Duncan Smith does not wish to shine light on his policies, for he fears what he may see. He uses his twisted figures as a crutch instead, to help his dogmas hobble along.
Francis Wheen once said that the one fact everyone believes they know about a public figure is always wrong. Whatever they think about his policies, the public assumes that Duncan Smith is a gentleman. He is anything but.
Portes thinks there is no wider decay in British government beyond Duncan Smith's department. I am not so sure. The British right is riding off with the loons. Like the Republicans with the Tea Party, the supposedly mainstream Conservatives have decided to woo Ukip rather than fight it. To show that they are "listening", they must pursue policies that make little sense and invent the evidence to support them.
Welfare is already at the centre of the deceit. Duncan Smith's duff data always suggests that the unemployed are on the dole because they are workshy, not because there are no jobs for them to find. If he were to admit for a moment that the distinction between strivers and scroungers was meaningless, and that all of us could be in a job one day and out of it the next, the rightwing argument on welfare would collapse – and then where would the Tories' appeal to angry old white men be?
It is not just Duncan Smith. The health secretary says he will stop foreign "health tourists" costing the NHS hundreds of millions. He has no reputable evidence to support that figure. David Cameron says he wants tax breaks for married couples, when there is no evidence whatsoever that they encourage lovers to marry.
The policies may not work, the ills they seek to combat and the benefits they hope to reap may be illusory. But fear holds Conservatives in its grip and the general election is drawing closer. When pressed, they say that they want to "flag up" their support of marriage, "signal" their dislike of scroungers or "send a message" to illegal immigrants.
Our language has been so corrupted by the euphemisms of advertising and public relations that we no longer realise that what they mean is that they intend to lie.