
Saturday, 22 June 2024

In Broken Britain, even the statistics don’t work

 Tim Harford in The FT 


From the bone-jarring potholes to the human excrement regularly released into British rivers, the country’s creaking infrastructure is one of the most visceral manifestations of the past 15 years of stagnation. To these examples of the shabby neglect of the essential underpinnings of modern life, let me add another: our statistical infrastructure. 

In her new book, How Infrastructure Works, engineering professor Deb Chachra argues that infrastructure is an extraordinary collective achievement and a triumph of long-term thinking. She adds that a helpful starting point for defining infrastructure is “all of the stuff that you don’t think about”. 

Statistical infrastructure certainly matches those descriptions. The idea that someone needs to decide what information to gather, and how to gather it, rarely crosses our minds — any more than we give much thought to what we flush down the toilet, or the fact that clean water comes from taps and electricity from the flick of a switch. 

As a result, the UK’s statistical system, administrative databases, and evidence base for policy are suffering the same depredations as the nation’s roads, prisons and sewers. Easiest to measure are the inputs: the Office for National Statistics faces a 5 per cent real-terms cut to its budget this year, has been losing large numbers of experienced staff, and is hiring dramatically fewer people than it did five years ago. 

But it is more instructive to consider some of the problems. The ONS has struggled to produce accurate estimates of something as fundamental as the unemployment rate, as it tries to divide resources between the traditional-but-foundering Labour Force Survey and a streamlined-but-delayed new version, which has been in the pipeline since 2017. 

That is an embarrassment, but the ONS can’t be held responsible for other gaps in our statistical system. A favourite example of Will Moy, chief executive of the Campbell Collaboration, a non-profit producer of systematic reviews of evidence in social science, is that we know more about the nation’s golfing habits than about trends in robbery or rape. This is because the UK’s survey of sporting participation is larger than the troubled Crime Survey for England and Wales, recently stripped of its status as an official National Statistic because of concerns over data quality. Surely nobody made a deliberate decision to establish those curious statistical priorities, but they are the priorities nonetheless. They exemplify the British state’s haphazard approach to deciding what to measure and what to neglect. 

This is foolishness. The government spends more than £1,200bn a year — nearly £18,000 for each person in the country — and without solid statistics, that money is being spent with eyes shut. 

For an example of the highs and lows of statistical infrastructure, consider the National Tutoring Programme, which was launched in 2020 in an effort to offset the obvious harms caused by the pandemic’s disruption to the school system. When the Department for Education designed the programme, it was able to turn to the Education Endowment Foundation for a solid, practical evidence base for what type of intervention was likely to work well. The answer: high-quality tutoring in small groups. 

This was the statistical system, in its broadest sense, working as it should: the EEF is a charity backed by the Department for Education, and when the crisis hit it had already gathered the evidence base to provide solutions. Yet — as the Centre for Public Data recently lamented — the DfE lacked the most basic data needed to evaluate its own programme: how many disadvantaged pupils were receiving tutoring, the quality of the tutoring, and what difference it made. The National Tutoring Programme could have gathered this information from the start, collecting evidence by design. But it did not. And as a result, we are left guessing about whether or not this was money well spent. 

Good data is not cheap to collect — but it is good value, especially when thoughtfully commissioned or built into policymaking by default. One promising avenue is support for systematic research summaries such as those produced by the Cochrane Collaboration for medicine and the Campbell Collaboration for social science and policy. If you want to understand how to promote literacy in primary schools, or whether neighbourhood policing is effective, a good research synthesis will tell you what the evidence says. Just as important, by revealing the gaps in our knowledge it provides a basis for funding new research. 

Another exciting opportunity is for the government to gather and link the administrative data we all produce as a byproduct of our interactions with officialdom. A well-designed system can safeguard personal privacy while unlocking all manner of insights. 

But fundamentally, policymakers need to take statistics seriously. These numbers are the eyes and ears of the state. If we neglect them, waste and mismanagement are all but inevitable. 

Chachra writes, “We should be seeing [infrastructure systems], celebrating them, and protecting them. Instead, these systems have been invisible and taken for granted.” 

We have taken a lot of invisible systems for granted over the past 20 years. The Resolution Foundation has estimated that in this period, UK public investment has lagged the OECD average by a cumulative half a trillion pounds. That is a lot of catching up to do. The next government will need some quick wins. Investing in better statistical infrastructure might be one of them.

Monday, 23 May 2022

India’s once-vaunted statistical infrastructure is crumbling

 The reasons are worryingly familiar, writes The Economist



The modern Indian state has a proud statistical heritage. Soon after the country gained independence in 1947, the government resolved to achieve its development through comprehensive five-year plans. The strategy, though economically inadvisable, nonetheless required the creation of a robust data-gathering apparatus. In 1950 PC Mahalanobis, the leading light of Indian statistics, designed the National Sample Survey, which sent staff to the far corners of the vast country to jot down data regarding its mostly illiterate citizens. The survey’s complexity and scope seemed “beyond the bounds of possibility”, reckoned one American statistician.

Of late, however, admiration has been replaced by alarm. India’s statistical services are in a bad way. For some measures, figures are simply not gathered; for others, the data are often dodgy, unrepresentative, untimely, or just wrong. The country’s tracking of covid-19 provides a grim example. As the pandemic raged across India, officials struggled to keep tabs on its toll. Officially, covid has claimed more than half a million lives in India; The Economist’s excess-deaths tracker puts the figure far higher, between 2m and 9.4m. India’s government has also hampered efforts to assess the pandemic’s global impact, refusing at first to share data with the World Health Organisation (WHO), and criticising its methods.

The preference for flattering but flawed figures is pervasive. In education, state governments regularly ignore data showing that Indian children are performing woefully in school and instead cite their own administrative numbers, which are often wrong. In Madhya Pradesh, a state in central India, an official assessment showed that all pupils had scored more than 60% in a maths test; an independent assessment revealed that none of them had. Similarly, in sanitation, the central government says that India is now free of open defecation, meaning that people both have access to a toilet and consistently use it. Anyone who takes a train out of Delhi at dawn and looks out of the window, however, might question the claim.

When it comes to poverty, arguably India’s biggest problem, timely figures are not available. Official estimates are based on a poverty line derived from consumption data in 2011-12, despite the fact that more recent but as yet unpublished numbers exist for 2017-18. By contrast, Indonesia calculates its poverty rate twice a year. India’s government explains its approach by pointing to discrepancies between recently gathered data and national accounts statistics—but many suspect the true reason is that newer data would probably show an increase in poverty.

In some cases, flawed data seem more a problem of methodology than malign intent. India’s GDP estimates, for instance, have been mired in controversy ever since the statistics ministry introduced a new series in 2015 (a change that was in the works before the current government entered office). Arvind Subramanian, a former government adviser, calculated that the new methodology overestimated average annual growth by as much as three to four percentage points between 2011-12 and 2016-17. Although current advisers insist that the official methodology is in line with global standards, other studies have also found problems with the calculations.

The erosion of India’s statistical infrastructure predates the current government, but seems to have grown worse in recent years. Narendra Modi, the prime minister, has previously bristled at technocratic expertise and number-crunching. (“Hard work is more powerful than Harvard,” he said in 2017.)

India’s data woes are also troubling for what they suggest about the ability of the state to provide the essential public services needed to foster long-run growth. The statistics ministry, short of staff and resources, is emblematic of the civil service. Data-gathering has become excessively centralised and over-politicised. A National Statistical Commission was set up in 2005 and tasked with fixing India’s data infrastructure. But its work has been complicated by turf wars and internal politics; it is widely considered toothless, including by former members.

Who’s counting

The situation is not hopeless, perhaps because of statisticians’ past efforts. According to the World Bank, the quality of Indian data is still in line with that of other developing countries, even after years of neglect. India’s new goods-and-services tax and digital-welfare infrastructure are yielding troves of data. Leading Indian statisticians argue that an empowered regulator could fix existing problems.

State governments and departments are also doing their bit. Telangana, a southern state, is investing in its own household surveys, for example. India’s rural-development ministry recently released a dataset covering 770,000 rural public facilities, such as schools and hospitals, inviting data whizzes to peruse the figures and suggest improvements. Civil society is also responding. During the pandemic, dozens of volunteers co-operated to produce granular, timely estimates of covid cases. New technologies could help gather data quickly and cheaply, over phones and tablets.

Yet in a modern economy there is no substitute for high-quality national data-gathering. The sunlight provided by accurate figures is often unwelcome for an increasingly autocratic government: transparency invites accountability. But neglect of the statistical services also leaves Indian policymakers flailing in the dark, unable to quickly spot and respond to brewing economic and social problems.

Sunday, 13 September 2020

Statistics, lies and the virus: Five lessons from a pandemic

In an age of disinformation, the value of rigorous data has never been more evident, writes Tim Harford in The FT 


Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. 

Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice. 

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. 

The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again. 

There are good reasons for scepticism today. David Spiegelhalter, author of last year’s The Art of Statistics, laments some of the UK government’s coronavirus graphs and testing targets as “number theatre”, with “dreadful, awful” deployment of numbers as a political performance. 

“There is great damage done to the integrity and trustworthiness of statistics when they’re under the control of the spin doctors,” Spiegelhalter says. He is right. But we geeks must be careful — because the damage can come from our own side, too. 

For Huff and his followers, the reason to learn statistics is to catch the liars at their tricks. That sceptical mindset took Huff to a very unpleasant place, as we shall see. Once the cynicism sets in, it becomes hard to imagine that statistics could ever serve a useful purpose.  

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. 

The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate.  

You can appreciate, I hope, my obsession with these two contrasting accounts of statistics: one as a trick, one as a tool. Doll and Hill’s painstaking approach illuminates the world and saves lives into the bargain. 

Huff’s alternative seems clever but is the easy path: seductive, addictive and corrosive. Scepticism has its place, but easily curdles into cynicism and can be weaponised into something even more poisonous than that. 

The two worldviews soon began to collide. Huff’s How to Lie with Statistics seemed to be the perfect illustration of why ordinary, honest folk shouldn’t pay too much attention to the slippery experts and their dubious data. 

Such ideas were quickly picked up by the tobacco industry, with its darkly brilliant strategy of manufacturing doubt in the face of evidence such as that provided by Doll and Hill. 

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. 

These tactics are powerful in part because they echo the ideals of science. It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”.  

So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road. The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors.  

Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn. 


The numbers matter 

“One lesson this pandemic has driven home to me is the unbelievable importance of the statistics,” says Spiegelhalter. Without statistical information, we haven’t a hope of grasping what it means to face a new, mysterious, invisible and rapidly spreading virus. 

Once upon a time, we would have held posies to our noses and prayed to be spared; now, while we hope for advances from medical science, we can also coolly evaluate the risks. 

Without good data, for example, we would have no idea that this infection is 10,000 times deadlier for a 90-year-old than it is for a nine-year-old — even though we are far more likely to read about the deaths of young people than the elderly, simply because those deaths are surprising. It takes a statistical perspective to make it clear who is at risk and who is not. 

Good statistics, too, can tell us about the prevalence of the virus — and identify hotspots for further activity. Huff may have viewed statistics as a vector for the dark arts of persuasion, but when it comes to understanding an epidemic, they are one of the few tools we possess. 


Don’t take the numbers for granted 

But while we can use statistics to calculate risks and highlight dangers, it is all too easy to fail to ask the question “Where do these numbers come from?” By that, I don’t mean the now-standard request to cite sources; I mean the deeper origin of the data. For all his faults, Huff did not fail to ask the question. 
 
He retells a cautionary tale that has become known as “Stamp’s Law” after the economist Josiah Stamp — warning that no matter how much a government may enjoy amassing statistics, “raise them to the nth power, take the cube root and prepare wonderful diagrams”, it was all too easy to forget that the underlying numbers would always come from a local official, “who just puts down what he damn pleases”. 

The cynicism is palpable, but there is insight here too. Statistics are not simply downloaded from an internet database or pasted from a scientific report. Ultimately, they came from somewhere: somebody counted or measured something, ideally systematically and with care. These efforts at systematic counting and measurement require money and expertise — they are not to be taken for granted. 

In my new book, How to Make the World Add Up, I introduce the idea of “statistical bedrock” — data sources such as the census and the national income accounts that are the results of painstaking data collection and analysis, often by official statisticians who get little thanks for their pains and are all too frequently the target of threats, smears or persecution. 
 
In Argentina, for example, long-serving statistician Graciela Bevacqua was ordered to “round down” inflation figures, then demoted in 2007 for producing a number that was too high. She was later fined $250,000 for false advertising — her crime being to have helped produce an independent estimate of inflation. 

In 2011, Andreas Georgiou was brought in to head Greece’s statistical agency at a time when it was regarded as being about as trustworthy as the country’s giant wooden horses. When he started producing estimates of Greece’s deficit that international observers finally found credible, he was prosecuted for his “crimes” and threatened with life imprisonment. Honest statisticians are braver — and more valuable — than we know.  

In the UK, we don’t habitually threaten our statisticians — but we do underrate them. “The Office for National Statistics is doing enormously valuable work that frankly nobody has ever taken notice of,” says Spiegelhalter, pointing to weekly death figures as an example. “Now we deeply appreciate it.”  

Quite so. This statistical bedrock is essential, and when it is missing, we find ourselves sinking into a quagmire of confusion. 

The foundations of our statistical understanding of the world are often gathered in response to a crisis. For example, nowadays we take it for granted that there is such a thing as an “unemployment rate”, but a hundred years ago nobody could have told you how many people were searching for work. Severe recessions made the question politically pertinent, so governments began to collect the data. 

More recently, the financial crisis hit. We discovered that our data about the banking system was patchy and slow, and regulators took steps to improve it. 

So it is with the Sars-Cov-2 virus. At first, we had little more than a few data points from Wuhan, showing an alarmingly high death rate of 15 per cent — six deaths in 41 cases. Quickly, epidemiologists started sorting through the data, trying to establish how far that case fatality rate had been exaggerated by the fact that the confirmed cases were mostly people in intensive care. Quirks of circumstance — such as the Diamond Princess cruise ship, on which almost everyone was tested — provided more insight. 
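To make that bias concrete, here is a minimal numerical sketch. The six deaths in 41 confirmed cases are the early Wuhan figures quoted above; the 1,000 total infections is a made-up assumption, included purely to illustrate how testing only the sickest patients inflates a naive fatality rate.

```python
# Illustrative sketch: how testing mostly severe cases inflates a fatality rate.
# The 6-deaths-in-41-cases figure is from the text; 1,000 total infections
# is a hypothetical assumption, not real Wuhan data.

deaths = 6
confirmed_cases = 41               # mostly intensive-care patients
assumed_total_infections = 1_000   # hypothetical: includes mild and asymptomatic cases

naive_cfr = deaths / confirmed_cases
illustrative_ifr = deaths / assumed_total_infections

print(f"Naive case fatality rate: {naive_cfr:.1%}")                            # ~14.6%
print(f"Rate if 1,000 people were actually infected: {illustrative_ifr:.1%}")  # 0.6%
```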

Johns Hopkins University in the US launched a dashboard of data resources, as did the Covid Tracking Project, an initiative from the Atlantic magazine. An elusive and mysterious threat became legible through the power of this data.  

That is not to say that all is well. Nature recently reported on “a coronavirus data crisis” in the US, in which “political meddling, disorganization and years of neglect of public-health data management mean the country is flying blind”.  

Nor is the US alone. Spain simply stopped reporting certain Covid deaths in early June, making its figures unusable. And while the UK now has an impressively large capacity for viral testing, it was fatally slow to accelerate this in the critical early weeks of the pandemic. 

Ministers repeatedly deceived the public about the number of tests being carried out by using misleading definitions of what was happening. For weeks during lockdown, the government was unable to say how many people were being tested each day. 

Huge improvements have been made since then. The UK’s Office for National Statistics has been impressively flexible during the crisis, for example in organising systematic weekly testing of a representative sample of the population. This allows us to estimate the true prevalence of the virus. Several countries, particularly in east Asia, provide accessible, usable data about recent infections to allow people to avoid hotspots. 

These things do not happen by accident: they require us to invest in the infrastructure to collect and analyse the data. On the evidence of this pandemic, such investment is overdue, in the US, the UK and many other places. 


Even the experts see what they expect to see 

Jonas Olofsson, a psychologist who studies our perceptions of smell, once told me of a classic experiment in the field. Researchers gave people a whiff of scent and asked them for their reactions to it. In some cases, the experimental subjects were told: “This is the aroma of a gourmet cheese.” Others were told: “This is the smell of armpits.” 

In truth, the scent was both: an aromatic molecule present both in runny cheese and in bodily crevices. But the reactions of delight or disgust were shaped dramatically by what people expected. 

Statistics should, one would hope, deliver a more objective view of the world than an ambiguous aroma. But while solid data offers us insights we cannot gain in any other way, the numbers never speak for themselves. They, too, are shaped by our emotions, our politics and, perhaps above all, our preconceptions. 

A striking example is the decision, on March 23 this year, to introduce a lockdown in the UK. In hindsight, that was too late. 

“Locking down a week earlier would have saved thousands of lives,” says Kit Yates, author of The Maths of Life and Death — a view now shared by influential epidemiologist Neil Ferguson and by David King, chair of the “Independent Sage” group of scientists. 

The logic is straightforward enough: at the time, cases were doubling every three to four days. If a lockdown had stopped that process in its tracks a week earlier, it would have prevented two doublings and saved three-quarters of the 65,000 people who died in the first wave of the epidemic, as measured by the excess death toll. 

That might be an overestimate of the effect, since people were already voluntarily pulling back from social interactions. Yet there is little doubt that if a lockdown was to happen at all, an earlier one would have been more effective. And, says Yates, since the infection rate took just days to double before lockdown but long weeks to halve once it started, “We would have got out of lockdown so much sooner . . . Every week before lockdown cost us five to eight weeks at the back end of the lockdown.” 
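Yates’s back-of-envelope logic can be written out explicitly. The sketch below assumes clean exponential growth with a three-and-a-half-day doubling time and ignores the voluntary behaviour change mentioned above — a deliberately crude calculation, not an epidemiological model.

```python
# Back-of-envelope version of the "one week earlier" argument in the text.
# Assumes pure exponential growth with a 3.5-day doubling time; real
# epidemics (and the effect of voluntary distancing) are messier than this.

first_wave_excess_deaths = 65_000   # excess-death figure quoted in the text
doubling_time_days = 3.5
days_earlier = 7

doublings_avoided = days_earlier / doubling_time_days   # two doublings
scale_factor = 2 ** doublings_avoided                   # a factor of four
deaths_with_earlier_lockdown = first_wave_excess_deaths / scale_factor

print(f"Deaths with a week-earlier lockdown: {deaths_with_earlier_lockdown:,.0f}")
print(f"Lives saved: {first_wave_excess_deaths - deaths_with_earlier_lockdown:,.0f}")
# ~16,250 and ~48,750 — roughly the "three-quarters" claimed above
```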

Why, then, was the lockdown so late? No doubt there were political dimensions to that decision, but senior scientific advisers to the government seemed to believe that the UK still had plenty of time. On March 12, prime minister Boris Johnson was flanked by Chris Whitty, the government’s chief medical adviser, and Patrick Vallance, chief scientific adviser, in the first big set-piece press conference. Italy had just suffered its 1,000th Covid death and Vallance noted that the UK was about four weeks behind Italy on the epidemic curve. 

With hindsight, this was wrong: now that late-registered deaths have been tallied, we know that the UK passed the same landmark on lockdown day, March 23, just 11 days later.  

It seems that in early March the government did not realise how little time it had. As late as March 16, Johnson declared that infections were doubling every five to six days. 

The trouble, says Yates, is that UK data on cases and deaths suggested that things were moving much faster than that, doubling every three or four days — a huge difference. What exactly went wrong is unclear — but my bet is that it was a cheese-or-armpit problem. 
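To see just how huge that difference is, consider a toy compounding calculation — pure exponential growth from an arbitrary starting point, with no epidemiology attached:

```python
# Toy comparison of the two doubling-time estimates discussed in the text.
cases_today = 100   # arbitrary starting point
days = 21           # three weeks of unchecked growth

for doubling_time in (3.5, 5.5):   # "three to four" vs "five to six" days
    cases = cases_today * 2 ** (days / doubling_time)
    print(f"Doubling every {doubling_time} days: {cases:,.0f} cases after {days} days")
# ~6,400 vs ~1,400 — the slower estimate understates the epidemic several-fold
```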

Some influential epidemiologists had produced sophisticated models suggesting that a doubling time of five to six days seemed the best estimate, based on data from the early weeks of the epidemic in China. These models seemed persuasive to the government’s scientific advisers, says Yates: “If anything, they did too good a job.” 

Yates argues that the epidemiological models that influenced the government’s thinking about doubling times were sufficiently detailed and convincing that when the patchy, ambiguous, early UK data contradicted them, it was hard to readjust. We all see what we expect to see. 

The result, in this case, was a delay to lockdown: that led to a much longer lockdown, many thousands of preventable deaths and needless extra damage to people’s livelihoods. The data is invaluable but, unless we can overcome our own cognitive filters, the data is not enough. 


The best insights come from combining statistics with personal experience 

The expert who made the biggest impression on me during this crisis was not the one with the biggest name or the biggest ego. It was Nathalie MacDermott, an infectious-disease specialist at King’s College London, who in mid-February calmly debunked the more lurid public fears about how deadly the new coronavirus was. 

Then, with equal calm, she explained to me that the virus was very likely to become a pandemic, that barring extraordinary measures we could expect it to infect more than half the world’s population, and that the true fatality rate was uncertain but seemed to be something between 0.5 and 1 per cent. In hindsight, she was broadly right about everything that mattered. MacDermott’s educated guesses pierced through the fog of complex modelling and data-poor speculation. 

I was curious as to how she did it, so I asked her. “People who have spent a lot of their time really closely studying the data sometimes struggle to pull their head out and look at what’s happening around them,” she said. “I trust data as well, but sometimes when we don’t have the data, we need to look around and interpret what’s happening.” 

MacDermott worked in Liberia in 2014 on the front line of an Ebola outbreak that killed more than 11,000 people. At the time, international organisations were sanguine about the risks, while the local authorities were in crisis. When she arrived in Liberia, the treatment centres were overwhelmed, with patients lying on the floor, bleeding freely from multiple areas and dying by the hour. 

The horrendous experience has shaped her assessment of subsequent risks: on the one hand, Sars-Cov-2 is far less deadly than Ebola; on the other, she has seen the experts move too slowly while waiting for definitive proof of a risk. 

“From my background working with Ebola, I’d rather be overprepared than underprepared because I’m in a position of denial,” she said. 

There is a broader lesson here. We can try to understand the world through statistics, which at their best provide a broad and representative overview that encompasses far more than we could personally perceive. Or we can try to understand the world up close, through individual experience. Both perspectives have their advantages and disadvantages. 

Muhammad Yunus, a microfinance pioneer and Nobel laureate, has praised the “worm’s eye view” over the “bird’s eye view”, which is a clever sound bite. But birds see a lot too. Ideally, we want both the rich detail of personal experience and the broader, low-resolution view that comes from the spreadsheet. Insight comes when we can combine the two — which is what MacDermott did. 


Everything can be polarised 

Reporting on the numbers behind the Brexit referendum, the vote on Scottish independence, several general elections and the rise of Donald Trump, I found poison in the air: many claims were made in bad faith, indifferent to the truth or even embracing the most palpable lies in an effort to divert attention from the issues. Fact-checking in an environment where people didn’t care about the facts, only whether their side was winning, was a thankless experience. 

For a while, one of the consolations of doing data-driven journalism during the pandemic was that it felt blessedly free of such political tribalism. People were eager to hear the facts after all; the truth mattered; data and expertise were seen to be helpful. The virus, after all, could not be distracted by a lie on a bus.  

That did not last. America polarised quickly, with mask-wearing becoming a badge of political identity — and more generally the Democrats seeking to underline the threat posed by the virus, with Republicans following President Trump in dismissing it as overblown.  

The prominent infectious-disease expert Anthony Fauci does not strike me as a partisan figure — but the US electorate thinks otherwise. He is trusted by 32 per cent of Republicans and 78 per cent of Democrats. 

The strangest illustration comes from the Twitter account of the Republican politician Herman Cain, which late in August tweeted: “It looks like the virus is not as deadly as the mainstream media first made it out to be.” Cain, sadly, died of Covid-19 in July — but it seems that political polarisation is a force stronger than death. 

Not every issue is politically polarised, but when something is dragged into the political arena, partisans often prioritise tribal belonging over considerations of truth. One can see this clearly, for example, in the way that highly educated Republicans and Democrats are further apart on the risks of climate change than less-educated Republicans and Democrats. 

Rather than bringing some kind of consensus, more years of education simply seem to provide people with the cognitive tools they require to reach the politically convenient conclusion. From climate change to gun control to certain vaccines, there are questions for which the answer is not a matter of evidence but a matter of group identity. 

In this context, the strategy that the tobacco industry pioneered in the 1950s is especially powerful. Emphasise uncertainty, expert disagreement and doubt and you will find a willing audience. If nobody really knows the truth, then people can believe whatever they want. 

All of which brings us back to Darrell Huff, statistical sceptic and author of How to Lie with Statistics. While his incisive criticism of statistical trickery has made him a hero to many of my fellow nerds, his career took a darker turn, with scepticism providing the mask for disinformation. 

Huff worked on a tobacco-funded sequel, How to Lie with Smoking Statistics, casting doubt on the scientific evidence that cigarettes were dangerous. (Mercifully, it was not published.)  

Huff also appeared in front of a US Senate committee that was pondering mandating health warnings on cigarette packaging. He explained to the lawmakers that there was a statistical correlation between babies and storks (which, it turns out, there is) even though the true origin of babies is rather different. The connection between smoking and cancer, he argued, was similarly tenuous.  

Huff’s statistical scepticism turned him into the ancestor of today’s contrarian trolls, spouting bullshit while claiming to be the straight-talking voice of common sense. It should be a warning to us all. There is a place in anyone’s cognitive toolkit for healthy scepticism, but that scepticism can all too easily turn into a refusal to look at any evidence at all.

This crisis has reminded us of the lure of partisanship, cynicism and manufactured doubt. But surely it has also demonstrated the power of honest statistics. Statisticians, epidemiologists and other scientists have been producing inspiring work in the footsteps of Doll and Hill. I suggest we set aside How to Lie with Statistics and pay attention. 

Carefully gathering the data we need, analysing it openly and truthfully, sharing knowledge and unlocking the puzzles that nature throws at us — this is the only chance we have to defeat the virus and, more broadly, an essential tool for understanding a complex and fascinating world.

Thursday, 10 September 2020

Facts v feelings: how to stop our emotions misleading us

The pandemic has shown how a lack of solid statistics can be dangerous. But even with the firmest of evidence, we often end up ignoring the facts we don’t like. By Tim Harford in The Guardian
 

By the spring of 2020, the high stakes involved in rigorous, timely and honest statistics had suddenly become all too clear. A new coronavirus was sweeping the world. Politicians had to make their most consequential decisions in decades, and fast. Many of those decisions depended on data detective work that epidemiologists, medical statisticians and economists were scrambling to conduct. Tens of millions of lives were potentially at risk. So were billions of people’s livelihoods.

In early April, countries around the world were a couple of weeks into lockdown, global deaths passed 60,000, and it was far from clear how the story would unfold. Perhaps the deepest economic depression since the 1930s was on its way, on the back of a mushrooming death toll. Perhaps, thanks to human ingenuity or good fortune, such apocalyptic fears would fade from memory. Many scenarios seemed plausible. And that’s the problem.

An epidemiologist, John Ioannidis, wrote in mid-March that Covid-19 “might be a once-in-a-century evidence fiasco”. The data detectives are doing their best – but they’re having to work with data that’s patchy, inconsistent and woefully inadequate for making life-and-death decisions with the confidence we would like.

Details of this fiasco will, no doubt, be studied for years to come. But some things already seem clear. At the beginning of the crisis, politics seems to have impeded the free flow of honest statistics. Although the claim is contested, Taiwan complained that in late December 2019 it had given important clues about human-to-human transmission to the World Health Organization – but as late as mid-January, the WHO was reassuringly tweeting that China had found no evidence of human-to-human transmission. (Taiwan is not a member of the WHO, because China claims sovereignty over the territory and demands that it should not be treated as an independent state. It’s possible that this geopolitical obstacle led to the alleged delay.)

Did this matter? Almost certainly; with cases doubling every two or three days, we will never know what might have been different with an extra couple of weeks of warning. It’s clear that many leaders took a while to appreciate the potential gravity of the threat. President Trump, for instance, announced in late February: “It’s going to disappear. One day it’s like a miracle, it will disappear.” Four weeks later, with 1,300 Americans dead and more confirmed cases in the US than any other country, Trump was still talking hopefully about getting everybody to church at Easter.

As I write, debates are raging. Can rapid testing, isolation and contact tracing contain outbreaks indefinitely, or merely delay their spread? Should we worry more about small indoor gatherings or large outdoor ones? Does closing schools help to prevent the spread of the virus, or do more harm as children go to stay with vulnerable grandparents? How much does wearing masks help? These and many other questions can be answered only by good data about who has been infected, and when.

But in the early months of the pandemic, a vast number of infections were not being registered in official statistics, owing to a lack of tests. And the tests that were being conducted were giving a skewed picture, being focused on medical staff, critically ill patients, and – let’s face it – rich, famous people. It took several months to build a picture of how many mild or asymptomatic cases there are, and hence how deadly the virus really is. As the death toll rose exponentially in March, doubling every two days in the UK, there was no time to wait and see. Leaders put economies into an induced coma – more than 3 million Americans filed jobless claims in a single week in late March, five times the previous record. The following week was even worse: more than 6.5m claims were filed. Were the potential health consequences really catastrophic enough to justify sweeping away so many people’s incomes? It seemed so – but epidemiologists could only make their best guesses with very limited information.

It’s hard to imagine a more extraordinary illustration of how much we usually take accurate, systematically gathered numbers for granted. The statistics for a huge range of important issues that predate the coronavirus have been painstakingly assembled over the years by diligent statisticians, and often made available to download, free of charge, anywhere in the world. Yet we are spoiled by such luxury, casually dismissing “lies, damned lies and statistics”. The case of Covid-19 reminds us how desperate the situation can become when the statistics simply aren’t there.

When it comes to interpreting the world around us, we need to realise that our feelings can trump our expertise. This explains why we buy things we don’t need, fall for the wrong kind of romantic partner, or vote for politicians who betray our trust. In particular, it explains why we so often buy into statistical claims that even a moment’s thought would tell us cannot be true. Sometimes, we want to be fooled.

Psychologist Ziva Kunda found this effect in the lab, when she showed experimental subjects an article laying out the evidence that coffee or other sources of caffeine could increase the risk to women of developing breast cysts. Most people found the article pretty convincing. Women who drank a lot of coffee did not.

We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws. It is not easy to master our emotions while assessing information that matters to us, not least because our emotions can lead us astray in different directions.

We don’t need to become emotionless processors of numerical information – just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control of our emotions, we need simply to develop good habits. Ask yourself: how does this information make me feel? Do I feel vindicated or smug? Anxious, angry or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?

In the early days of the coronavirus epidemic, helpful-seeming misinformation spread even faster than the virus itself. One viral post – circulating on Facebook and email newsgroups – all-too-confidently explained how to distinguish between Covid-19 and a cold, reassured people that the virus was destroyed by warm weather, and incorrectly advised people to avoid iced water, claiming that warm water kills any virus. The post, sometimes attributed to “my friend’s uncle”, sometimes to “Stanford hospital board” or some blameless and uninvolved paediatrician, was occasionally accurate but generally speculative and misleading. But still people – normally sensible people – shared it again and again and again. Why? Because they wanted to help others. They felt confused, they saw apparently useful advice, and they felt impelled to share. That impulse was only human, and it was well-meaning – but it was not wise.


Protestors in Edinburgh demonstrating against Covid-19 prevention measures. Photograph: Jeff J Mitchell/Getty Images

Before I repeat any statistical claim, I first try to take note of how it makes me feel. It’s not a foolproof method against tricking myself, but it’s a habit that does little harm, and is sometimes a great deal of help. Our emotions are powerful. We can’t make them vanish, and nor should we want to. But we can, and should, try to notice when they are clouding our judgment.

In 1997, the economists Linda Babcock and George Loewenstein ran an experiment in which participants were given evidence from a real court case about a motorbike accident. They were then randomly assigned to play the role of plaintiff’s attorney (arguing that the injured motorcyclist should receive $100,000 in damages) or defence attorney (arguing that the case should be dismissed or the damages should be low).

The experimental subjects were given a financial incentive to argue their side of the case persuasively, and to reach an advantageous settlement with the other side. They were also given a separate financial incentive to guess accurately what damages the judge in the real case had actually awarded. Their predictions should have been unrelated to their role-playing, but their judgment was strongly influenced by what they hoped would be true.

Psychologists call this “motivated reasoning”. Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion. In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice. Experts are not immune to motivated reasoning. Under some circumstances their expertise can even become a disadvantage. The French satirist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Benjamin Franklin commented: “So convenient a thing is it to be a reasonable creature, since it enables us to find or make a reason for everything one has a mind to.”

Modern social science agrees with Molière and Franklin: people with deeper expertise are better equipped to spot deception, but if they fall into the trap of motivated reasoning, they are able to muster more reasons to believe whatever they really wish to believe.

One recent review of the evidence concluded that this tendency to evaluate evidence and test arguments in a way that is biased towards our own preconceptions is not only common, but just as common among intelligent people. Being smart or educated is no defence. In some circumstances, it may even be a weakness.

One illustration of this is a study published in 2006 by two political scientists, Charles Taber and Milton Lodge. They wanted to examine the way Americans reasoned about controversial political issues. The two they chose were gun control and affirmative action.

Taber and Lodge asked their experimental participants to read a number of arguments on either side, and to evaluate the strength and weakness of each argument. One might hope that being asked to review these pros and cons might give people more of a shared appreciation of opposing viewpoints; instead, the new information pulled people further apart.

This was because people mined the information they were given for ways to support their existing beliefs. When invited to search for more information, people would seek out data that backed their preconceived ideas. When invited to assess the strength of an opposing argument, they would spend considerable time thinking up ways to shoot it down.

This isn’t the only study to reach this sort of conclusion, but what’s particularly intriguing about Taber and Lodge’s experiment is that expertise made matters worse. More sophisticated participants in the experiment found more material to back up their preconceptions. More surprisingly, they found less material that contradicted them – as though they were using their expertise actively to avoid uncomfortable information. They produced more arguments in favour of their own views, and picked up more flaws in the other side’s arguments. They were vastly better equipped to reach the conclusion they had wanted to reach all along.

Of all the emotional responses we might have, the most politically relevant are motivated by partisanship. People with a strong political affiliation want to be on the right side of things. We see a claim, and our response is immediately shaped by whether we believe “that’s what people like me think”.

Consider this claim about climate change: “Human activity is causing the Earth’s climate to warm up, posing serious risks to our way of life.” Many of us have an emotional reaction to a claim like that; it’s not like a claim about the distance to Mars. Believing it or denying it is part of our identity; it says something about who we are, who our friends are, and the sort of world we want to live in. If I put a claim about climate change in a news headline, or in a graph designed to be shared on social media, it will attract attention and engagement not because it is true or false, but because of the way people feel about it.

If you doubt this, ponder the findings of a Gallup poll conducted in 2015. It found a huge gap between how much Democrats and Republicans in the US worried about climate change. What rational reason could there be for that?

Scientific evidence is scientific evidence. Our beliefs around climate change shouldn’t skew left and right. But they do. This gap became wider the more education people had. Among those with no college education, 45% of Democrats and 23% of Republicans worried “a great deal” about climate change. Yet among those with a college education, the figures were 50% of Democrats and 8% of Republicans. A similar pattern holds if you measure scientific literacy: more scientifically literate Republicans and Democrats are further apart than those who know very little about science.

If emotion didn’t come into it, surely more education and more information would help people to come to an agreement about what the truth is – or at least, the current best theory? But giving people more information seems actively to polarise them on the question of climate change. This fact alone tells us how important our emotions are. People are straining to reach the conclusion that fits with their other beliefs and values – and the more they know, the more ammunition they have to reach the conclusion they hope to reach.


Anti-carbon tax protesters in Australia in 2011. Photograph: Torsten Blackwood/AFP/Getty Images

In the case of climate change, there is an objective truth, even if we are unable to discern it with perfect certainty. But as you are one individual among nearly 8 billion on the planet, the environmental consequences of what you happen to think are irrelevant. With a handful of exceptions – say, if you’re the president of China – climate change is going to take its course regardless of what you say or do. From a self-centred point of view, the practical cost of being wrong is close to zero. The social consequences of your beliefs, however, are real and immediate.

Imagine that you’re a barley farmer in Montana, and hot, dry summers are ruining your crop with increasing frequency. Climate change matters to you. And yet rural Montana is a conservative place, and the words “climate change” are politically charged. Anyway, what can you personally do about it?

Here’s how one farmer, Erik Somerfeld, threads that needle, as described by the journalist Ari LeVaux: “In the field, looking at his withering crop, Somerfeld was unequivocal about the cause of his damaged crop – ‘climate change’. But back at the bar, with his friends, his language changed. He dropped those taboo words in favour of ‘erratic weather’ and ‘drier, hotter summers’ – a not-uncommon conversational tactic in farm country these days.”

If Somerfeld lived in Portland, Oregon, or Brighton, East Sussex, he wouldn’t need to be so circumspect at his local tavern – he’d be likely to have friends who took climate change very seriously indeed. But then those friends would quickly ostracise someone else in the social group who went around loudly claiming that climate change is a Chinese hoax.

So perhaps it is not so surprising after all to find educated Americans poles apart on the topic of climate change. Hundreds of thousands of years of human evolution have wired us to care deeply about fitting in with those around us. This helps to explain the findings of Taber and Lodge that better-informed people are actually more at risk of motivated reasoning on politically partisan topics: the more persuasively we can make the case for what our friends already believe, the more our friends will respect us.

It’s far easier to lead ourselves astray when the practical consequences of being wrong are small or non-existent, while the social consequences of being “wrong” are severe. It’s no coincidence that this describes many controversies that divide along partisan lines.

It’s tempting to assume that motivated reasoning is just something that happens to other people. I have political principles; you’re politically biased; he’s a fringe conspiracy theorist. But we would be wiser to acknowledge that we all think with our hearts rather than our heads sometimes.

Kris De Meyer, a neuroscientist at King’s College London, shows his students a message describing an environmental activist’s problem with climate change denialism:


To summarise the climate deniers’ activities, I think we can say that:

(1) Their efforts have been aggressive while ours have been defensive.

(2) The deniers’ activities are rather orderly – almost as if they had a plan working for them.

I think the denialist forces can be characterised as dedicated opportunists. They are quick to act and seem to be totally unprincipled in the type of information they use to attack the scientific community. There is no question, though, that we have been inept in getting our side of the story, good though it may be, across to the news media and the public.

The students, all committed believers in climate change, outraged at the smokescreen laid down by the cynical and anti-scientific deniers, nod in recognition. Then De Meyer reveals the source of the text. It’s not a recent email. It’s taken, sometimes word for word, from an infamous internal memo written by a cigarette marketing executive in 1968. The memo complains not about “climate deniers” but about “anti-cigarette forces”; otherwise, few changes were required.

You can use the same language, the same arguments, and perhaps even have the same conviction that you’re right, whether you’re arguing (rightly) that climate change is real or (wrongly) that the cigarette-cancer link is not.

(Here’s an example of this tendency that, for personal reasons, I can’t help but be sensitive about. My left-leaning, environmentally conscious friends are justifiably critical of ad hominem attacks on climate scientists. You know the kind of thing: claims that scientists are inventing data because of their political biases, or because they’re scrambling for funding from big government. In short, smearing the person rather than engaging with the evidence.

Yet the same friends are happy to embrace and amplify the same kind of tactics when they are used to attack my fellow economists: that we are inventing data because of our political biases, or scrambling for funding from big business. I tried to point out the parallel to one thoughtful person, and got nowhere. She was completely unable to comprehend what I was talking about. I’d call this a double standard, but that would be unfair – it would suggest that it was deliberate. It’s not. It’s an unconscious bias that’s easy to see in others and very hard to see in ourselves.)

Our emotional reaction to a statistical or scientific claim isn’t a side issue. Our emotions can, and often do, shape our beliefs more than any logic. We are capable of persuading ourselves to believe strange things, and to doubt solid evidence, in service of our political partisanship, our desire to keep drinking coffee, our unwillingness to face up to the reality of our HIV diagnosis, or any other cause that invokes an emotional response.

But we shouldn’t despair. We can learn to control our emotions – that is part of the process of growing up. The first simple step is to notice those emotions. When you see a statistical claim, pay attention to your own reaction. If you feel outrage, triumph, denial, pause for a moment. Then reflect. You don’t need to be an emotionless robot, but you could and should think as well as feel.

Most of us do not actively wish to delude ourselves, even when that might be socially advantageous. We have motives to reach certain conclusions, but facts matter, too. Lots of people would like to be movie stars, billionaires or immune to hangovers, but very few people believe that they actually are. Wishful thinking has limits. The more we get into the habit of counting to three and noticing our knee-jerk reactions, the closer to the truth we are likely to get.

For example, one survey, conducted by a team of academics, found that most people were perfectly able to distinguish serious journalism from fake news, and also agreed that it was important to amplify the truth, not lies. Yet the same people would happily share headlines such as “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”, because at the moment at which they clicked “share”, they weren’t stopping to think. They weren’t thinking, “Is this true?”, and they weren’t thinking, “Do I think the truth is important?” 

Instead, as they skimmed the internet in that state of constant distraction that we all recognise, they were carried away with their emotions and their partisanship. The good news is that simply pausing for a moment to reflect was all it took to filter out a lot of the misinformation. It doesn’t take much; we can all do it. All we need to do is acquire the habit of stopping to think.

Inflammatory memes or tub-thumping speeches invite us to leap to the wrong conclusion without thinking. That’s why we need to be calm. And that is also why so much persuasion is designed to arouse us – our lust, our desire, our sympathy or our anger. When was the last time Donald Trump, or for that matter Greenpeace, tweeted something designed to make you pause in calm reflection? Today’s persuaders don’t want you to stop and think. They want you to hurry up and feel. Don’t be rushed.

Saturday, 2 May 2020

'A deluge of death': how reading obituaries can humanise a crisis

Those brief features have long provided readers with a moment of connection that helps us stay in touch with our humanity, writes Oscar Schwartz in The Guardian


 
Obituaries can help readers stay in touch with humanity during an overwhelming crisis. Photograph: Kzenon/Alamy Stock Photo


Over the past few weeks, we’ve learned how to think a little more like epidemiologists. Each morning, we pore over statistical models that offer grim projections about how many people might get sick, when hospital beds will run short, how many might die within our age bracket. The coronavirus pandemic, in other words, has been a plague of statistics – and our expectations about the future have suddenly come to hinge on abstractions.

In opposition stands the obituary. These brief features, a cross between a short story and a death notice, have long provided readers with a moment of particular connection within the impersonal headlines. In a crisis of this magnitude, finding the emotional space to care about a single death can feel purposeless, unnecessary. But for many obituary writers past and present, there is a belief that this unique and embattled form of journalism can help us stay in touch with our humanity.

“It’s a deluge of death at the moment,” said Adam Bernstein, obituaries editor at the Washington Post. “When you see a figure like ‘50,000 people have died’, those are numbers that make the mind reel. But it’s very hard to touch people’s hearts with numbers – that’s where we come in.” 

Bernstein has been working on the death beat at the Post since 1999, and for the past decade has led a team that prides itself on the obituary craft. A good obituary, according to Bernstein, reveals surprising, intimate details about a life. “Maybe this person was most famous for being a criminal, but maybe they were also a roguish criminal with a terrific sense of humor,” Bernstein said. “Those details are what connect us to other human beings and our task is to find them.”

Since writing his first obituary as an intern at a newspaper in Bakersfield, California, Bernstein has relished the task. “It’s the hidden gem of the newsroom,” he said. But in the past month, the work has become increasingly taxing as the list of deaths the team confronts each morning balloons. They have churned out obituaries for notable figures such as John Prine and Lee Konitz, while some non-coronavirus-related deaths have been sidelined.

There is also a sense of dread and suspense involved in monitoring those who have become ill. “If a well-known person is sick and it’s looking dire we make sure we have a story ready to go,” he said. “It feels like an endless game of Russian roulette and you just never know what the next day will bring.”

Janny Scott can relate to the experience of writing obituaries in a time of crisis. On 11 September 2001, Scott, then a young reporter on the New York Times metro desk, was assigned to cover, simply, “the dead”. With the city in chaos and no official victim count forthcoming, she and her colleagues trawled the streets of Manhattan collecting missing persons flyers that had become the city’s gloomy wallpaper.

As days passed, it became clear that most of the missing had died. “We began calling families and contacts, trying to piece together who these people were,” Scott told me. From these conversations, Scott and her colleagues began drawing up 250-word thumbnail sketches of those who had been lost, which ran at the back of the paper under the title “Portraits of Grief”. The paper ran almost 2,000 of these mini-obituaries in the coming months. “In New York, reading the portraits became some kind of religious ritual that helped us grieve together,” Scott said.

Obituaries and death notices can also serve an important political function during a crisis. In 1989, when obituaries at major newspapers still refused to cite Aids as a cause of death, the Bay Area Reporter published an eight-page section titled Aids Deaths, which listed all the people who had died from the illness during the previous year. Obituaries have similarly functioned as a form of advocacy around the opioid crisis, providing parents with a chance to publicly address the issue of addiction and connect with others in the community dealing with similar hardship.

As local newspapers across the nation continue to fold, however, most obituaries are now published on memorial sites, such as legacy.com, which hosts notices for more than 70% of all US deaths. During the current pandemic, these sites provide an accessible way for families to memorialize those lost at a time when obituary writers are otherwise overwhelmed.

“But the local news obituary is more than a death notice or a eulogy,” Kay Powell said. “It really should be an objective news story about one person’s life.” Powell worked at the Atlanta Journal-Constitution from 1996 to 2009, where she wrote close to 2,000 obituaries about “extraordinary ordinary people”. The church choir singer who had a frontal lobotomy and donated his brain to science, the boy who sang at Martin Luther King Jr’s funeral, the woman who was Flannery O’Connor’s secret pen pal for 30 years. 

Powell told me that she often fell in love with her recently deceased subjects and tried to impart this affection to her readers. But as a journalist, she also prided herself on accuracy and objectivity. She would never euphemize a cause of death, believing that wider social truths about disease, mental health and addiction could be communicated more effectively through the experience of an individual. “When it is the name of somebody right there in your community, these issues are no longer some arbitrary thing affecting some number of people somewhere else,” she said.

The psychologist Paul Slovic has referred to this greater concern for the one over the many as a product of “psychic numbing”, a psychological glitch whereby, as the number of victims in a tragedy increases, our empathy decreases. For many of us, this has intensified as the weeks pass. As the death count rises, cold-eyed statistical thinking that would have seemed abhorrent a few months ago becomes part of our daily news diet.

Of course, thinking about the pandemic in numbers is crucial. Demographic analysis shines a light on systemic truths that the individual story cannot, like how this virus is disproportionately taking lives in communities of color.

But Powell, who is in her 70s and sheltering in place alone, told me that engaging with the granularity of human suffering can shock people back into a sense of moral responsibility. “The emotion makes it harder to deny the reality of what’s happening here,” Powell said. “In the end, it keeps us better informed.”

Thursday, 8 February 2018

A simple guide to statistics in the age of deception

Tim Harford in The Financial Times


“The best financial advice for most people would fit on an index card.” That’s the gist of an offhand comment made in 2013 by Harold Pollack, a professor at the University of Chicago. Pollack’s bluff was duly called, and he rushed off to find an index card and scribble some bullet points — with respectable results.


When I heard about Pollack’s notion — he elaborated upon it in a 2016 book — I asked myself: would this work for statistics, too? There are some obvious parallels. In each case, common sense goes a surprisingly long way; in each case, dizzying numbers and impenetrable jargon loom; in each case, there are stubborn technical details that matter; and, in each case, there are people with a sharp incentive to lead us astray. 

The case for everyday practical numeracy has never been more urgent. Statistical claims fill our newspapers and social media feeds, unfiltered by expert judgment and often designed as a political weapon. We do not necessarily trust the experts — or more precisely, we may have our own distinctive view of who counts as an expert and who does not.  

Nor are we passive consumers of statistical propaganda; we are the medium through which the propaganda spreads. We are arbiters of what others will see: what we retweet, like or share online determines whether a claim goes viral or vanishes. If we fall for lies, we become unwittingly complicit in deceiving others. On the bright side, we have more tools than ever to help weigh up what we see before we share it — if we are able and willing to use them. 

In the hope that someone might use it, I set out to write my own postcard-sized citizens’ guide to statistics. Here’s what I learnt. 

Professor Pollack’s index card includes advice such as “Save 20 per cent of your money” and “Pay your credit card in full every month”. The author Michael Pollan offers dietary advice in even pithier form: “Eat Food. Not Too Much. Mostly Plants.” Quite so, but I still want a cheeseburger.  

However good the advice Pollack and Pollan offer, it’s not always easy to take. The problem is not necessarily ignorance. Few people think that Coca-Cola is a healthy drink, or believe that credit cards let you borrow cheaply. Yet many of us fall into some form of temptation or other. That is partly because slick marketers are focused on selling us high-fructose corn syrup and easy credit. And it is partly because we are human beings with human frailties. 

With this in mind, my statistical postcard begins with advice about emotion rather than logic. When you encounter a new statistical claim, observe your feelings. Yes, it sounds like a line from Star Wars, but we rarely believe anything because we’re compelled to do so by pure deduction or irrefutable evidence. We have feelings about many of the claims we might read — anything from “inequality is rising” to “chocolate prevents dementia”. If we don’t notice and pay attention to those feelings, we’re off to a shaky start. 

What sort of feelings? Defensiveness. Triumphalism. Righteous anger. Evangelical fervour. Or, when it comes to chocolate and dementia, relief. It’s fine to have an emotional response to a chart or shocking statistic — but we should not ignore that emotion, or be led astray by it. 

There are certain claims that we rush to tell the world, others that we use to rally like-minded people, still others we refuse to believe. Our belief or disbelief in these claims is part of who we feel we are. “We all process information consistent with our tribe,” says Dan Kahan, professor of law and psychology at Yale University. 

In 2005, Charles Taber and Milton Lodge, political scientists at Stony Brook University, New York, conducted experiments in which subjects were invited to study arguments around hot political issues. Subjects showed a clear confirmation bias: they sought out testimony from like-minded organisations. For example, subjects who opposed gun control would tend to start by reading the views of the National Rifle Association. Subjects also showed a disconfirmation bias: when the researchers presented them with certain arguments and invited comment, the subjects would quickly accept arguments with which they agreed, but devote considerable effort to disparaging opposing arguments.

Expertise is no defence against this emotional reaction; in fact, Taber and Lodge found that better-informed experimental subjects showed stronger biases. The more they knew, the more cognitive weapons they could aim at their opponents. “So convenient a thing it is to be a reasonable creature,” commented Benjamin Franklin, “since it enables one to find or make a reason for everything one has a mind to do.” 

This is why it’s important to face up to our feelings before we even begin to process a statistical claim. If we don’t at least acknowledge that we may be bringing some emotional baggage along with us, we have little chance of discerning what’s true. As the physicist Richard Feynman once commented, “You must not fool yourself — and you are the easiest person to fool.” 

The second crucial piece of advice is to understand the claim. That seems obvious. But all too often we leap to disbelieve or believe (and repeat) a claim without pausing to ask whether we really understand what the claim is. To quote Douglas Adams’s philosophical supercomputer, Deep Thought, “Once you know what the question actually is, you’ll know what the answer means.” 

For example, take the widely accepted claim that “inequality is rising”. It seems uncontroversial, and urgent. But what does it mean? Racial inequality? Gender inequality? Inequality of opportunity, of consumption, of educational attainment, of wealth? Within countries or across the globe?

Even given a narrower claim, “inequality of income before taxes is rising” (and you should be asking yourself, since when?), there are several different ways to measure this. One approach is to compare the income of people at the 90th percentile and the 10th percentile, but that tells us nothing about the super-rich, nor the ordinary people in the middle. An alternative is to examine the income share of the top 1 per cent — but this approach has the opposite weakness, telling us nothing about how the poorest fare relative to the majority.  
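To make the two measures concrete, here is a minimal sketch in Python (the incomes are randomly generated, purely for illustration) computing both from the same data:

```python
import numpy as np

rng = np.random.default_rng(42)
# An invented log-normal income distribution, purely for illustration
incomes = rng.lognormal(mean=10, sigma=0.75, size=100_000)

# Measure 1: the ratio of the 90th percentile to the 10th
p90, p10 = np.percentile(incomes, [90, 10])

# Measure 2: the share of total income going to the top 1 per cent
cutoff = np.percentile(incomes, 99)
top1_share = incomes[incomes >= cutoff].sum() / incomes.sum()

print(f"90:10 ratio: {p90 / p10:.1f}")
print(f"Top 1% share: {top1_share:.1%}")
```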

There is no single right answer — nor should we assume that all the measures tell a similar story. In fact, there are many true statements that one can make about inequality. It may be worth figuring out which one is being made before retweeting it. 

Perhaps it is not surprising that a concept such as inequality turns out to have hidden depths. But the same holds true of more tangible subjects, such as “a nurse”. Are midwives nurses? Health visitors? Should two nurses working half-time count as one nurse? Claims over the staffing of the UK’s National Health Service have turned on such details. 

All this can seem like pedantry — or worse, a cynical attempt to muddy the waters and suggest that you can prove anything with statistics. But there is little point in trying to evaluate whether a claim is true if one is unclear what the claim even means. 

Imagine a study showing that kids who play violent video games are more likely to be violent in reality. Rebecca Goldin, a mathematician and director of the statistical literacy project STATS, points out that we should ask questions about concepts such as “play”, “violent video games” and “violent in reality”. Is Space Invaders a violent game? It involves shooting things, after all. And are we measuring a response to a questionnaire after 20 minutes’ play in a laboratory, or murderous tendencies in people who play 30 hours a week? “Many studies won’t measure violence,” says Goldin. “They’ll measure something else such as aggressive behaviour.” Just like “inequality” or “nurse”, these seemingly common sense words hide a lot of wiggle room. 

Two particular obstacles to our understanding are worth exploring in a little more detail. One is the question of causation. “Taller children have a higher reading age,” goes the headline. This may summarise the results of a careful study about nutrition and cognition. Or it may simply reflect the obvious point that eight-year-olds read better than four-year-olds — and are taller. Causation is philosophically and technically a knotty business but, for the casual consumer of statistics, the question is not so complicated: just ask whether a causal claim is being made, and whether it might be justified. 
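A toy simulation shows how such a correlation arises with no causal link at all. The numbers below are invented; the only thing doing any work is that age drives both height and reading:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(4, 8, n)                    # age is the hidden common cause
height = 80 + 6 * age + rng.normal(0, 5, n)   # children grow taller with age (invented units)
reading = 2 * age + rng.normal(0, 1, n)       # and read better with age (invented scale)

# Height "predicts" reading ability...
print(np.corrcoef(height, reading)[0, 1])     # strongly positive, about 0.7

# ...yet within a narrow age band the association all but vanishes
band = (age > 5.9) & (age < 6.1)
print(np.corrcoef(height[band], reading[band])[0, 1])  # close to zero
```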

Returning to this study about violence and video games, we should ask: is this a causal relationship, tested in experimental conditions? Or is this a broad correlation, perhaps because the kind of thing that leads kids to violence also leads kids to violent video games? Without clarity on this point, we don’t really have anything but an empty headline.  

We should never forget, either, that all statistics are a summary of a more complicated truth. For example, what’s happening to wages? With tens of millions of wage packets being paid every month, we can only ever summarise — but which summary? The average wage can be skewed by a small number of fat cats. The median wage tells us about the centre of the distribution but ignores everything else. 

Or we might look at the median increase in wages, which isn’t the same thing as the increase in the median wage — not at all. In a situation where the lowest and highest wages are increasing while the middle sags, it’s quite possible for the median pay rise to be healthy while median pay falls.  
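A toy example with three invented wage packets is enough to make the distinction clear:

```python
import statistics

# Invented wages, purely to illustrate the distinction
before = [10, 20, 30]
after = [15, 12, 35]   # the lowest and highest rise; the middle sags

rises = [a - b for b, a in zip(before, after)]

print(statistics.median(rises))    # +5: a healthy-looking median pay rise
print(statistics.median(before))   # 20
print(statistics.median(after))    # 15: yet the median wage has fallen
```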

Sir Andrew Dilnot, former chair of the UK Statistics Authority, warns that an average can never convey the whole of a complex story. “It’s like trying to see what’s in a room by peering through the keyhole,” he tells me.  

In short, “you need to ask yourself what’s being left out,” says Mona Chalabi, data editor for The Guardian US. That applies to the obvious tricks, such as a vertical axis that’s been truncated to make small changes look big. But it also applies to the less obvious stuff — for example, why does a graph comparing the wages of African-Americans with those of white people not also include data on Hispanic or Asian-Americans? There is no shame in leaving something out. No chart, table or tweet can contain everything. But what is missing can matter. 

Channel the spirit of film noir: get the backstory. Of all the statistical claims in the world, this particular stat fatale appeared in your newspaper or social media feed, dressed to impress. Why? Where did it come from? Why are you seeing it?  

Sometimes the answer is little short of a conspiracy: a PR company wanted to sell ice cream, so paid a penny-ante academic to put together the “equation for the perfect summer afternoon”, pushed out a press release on a quiet news day, and won attention in a media environment hungry for clicks. Or a political donor slung a couple of million dollars at an ideologically sympathetic think-tank in the hope of manufacturing some talking points. 

Just as often, the answer is innocent but unedifying: publication bias. A study confirming what we already knew — smoking causes cancer — is unlikely to make news. But a study with a surprising result — maybe smoking doesn’t cause cancer after all — is worth a headline. The new study may have been rigorously conducted but is probably wrong: one must weigh it up against decades of contrary evidence. 

Publication bias is a big problem in academia. The surprising results get published, the follow-up studies finding no effect tend to appear in lesser journals if they appear at all. It is an even bigger problem in the media — and perhaps bigger yet in social media. Increasingly, we see a statistical claim because people like us thought it was worth a Like on Facebook. 
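A crude simulation of this filter shows why. Assume, purely for illustration, that one study in ten tests a real effect, that real effects are detected 80 per cent of the time, and that null effects produce a spurious “significant” result 5 per cent of the time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed, illustrative parameters
n_studies, share_true, power, alpha = 1_000, 0.10, 0.80, 0.05

is_true = rng.random(n_studies) < share_true
# True effects are detected with probability `power`;
# null effects yield false positives with probability `alpha`
significant = np.where(is_true,
                       rng.random(n_studies) < power,
                       rng.random(n_studies) < alpha)

# Only "significant" results get published
print(f"Published findings that are real: {is_true[significant].mean():.0%}")
# Roughly two-thirds here; the rarer and more surprising the hypotheses
# being tested, the further the published record tilts towards false positives
```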

David Spiegelhalter, president of the Royal Statistical Society, proposes what he calls the “Groucho principle”. Groucho Marx famously resigned from a club — if they’d accept him as a member, he reasoned, it couldn’t be much of a club. Spiegelhalter feels the same about many statistical claims that reach the headlines or the social media feed. He explains, “If it’s surprising or counter-intuitive enough to have been drawn to my attention, it is probably wrong.”  

OK. You’ve noted your own emotions, checked the backstory and understood the claim being made. Now you need to put things in perspective. A few months ago, a horrified citizen asked me on Twitter whether it could be true that in the UK, seven million disposable coffee cups were thrown away every day.  

I didn’t have an answer. (A quick internet search reveals countless repetitions of the claim, but no obvious source.) But I did have an alternative question: is that a big number? The population of the UK is 65 million. If one person in 10 used a disposable cup each day, that would do the job.  
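The arithmetic fits on the back of an envelope, or in three lines of Python:

```python
uk_population = 65_000_000   # the rough figure used above
cups_per_day = 7_000_000     # the claim being checked

print(cups_per_day / uk_population)  # ~0.11: about one cup per ten people per day
```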

Many numbers mean little until we can compare them with a more familiar quantity. It is much more informative to know how many coffee cups a typical person discards than to know how many are thrown away by an entire country. And more useful still to know whether the cups are recycled (usually not, alas) or what proportion of the country’s waste stream is disposable coffee cups (not much, is my guess, but I may be wrong).  

So we should ask: how big is the number compared with other things I might intuitively understand? How big is it compared with last year, or five years ago, or 30? It’s worth a look at the historical trend, if the data are available.  

Finally, beware “statistical significance”. There are various technical objections to the term, some of which are important. But the simplest point to appreciate is that a number can be “statistically significant” while being of no practical importance. Particularly in the age of big data, it’s possible for an effect to clear this technical hurdle of statistical significance while being tiny. 

One study demonstrated that children exposed to a heatwave while in the womb went on to earn less as adults. The finding was statistically significant. But the impact was trivial: $30 in lost income per year. Just because a finding is statistically robust does not mean it matters; the word “significance” obscures that.
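A sketch with invented numbers shows how easily a trivial effect clears the hurdle once the sample is big enough. Here two groups of two million people differ, by construction, by $30 a year:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 2_000_000
unexposed = rng.normal(50_000, 5_000, n)      # invented earnings
exposed = rng.normal(50_000 - 30, 5_000, n)   # $30 a year lower, on average

diff = unexposed.mean() - exposed.mean()
se = np.sqrt(unexposed.var() / n + exposed.var() / n)

print(f"difference: ${diff:,.0f} a year, z = {diff / se:.1f}")
# z far beyond 1.96: "statistically significant", yet practically trivial
```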

In an age of computer-generated images of data clouds, some of the most charming data visualisations are hand-drawn doodles by the likes of Mona Chalabi and the cartoonist Randall Munroe. But there is more to these pictures than charm: Chalabi uses the wobble of her pen to remind us that most statistics have a margin of error. A computer plot can confer the illusion of precision on what may be a highly uncertain situation. 

“It is better to be vaguely right than exactly wrong,” wrote Carveth Read in Logic (1898), and excessive precision can lead people astray. On the eve of the US presidential election in 2016, the political forecasting website FiveThirtyEight gave Donald Trump a 28.6 per cent chance of winning. In some ways that is impressive, because other forecasting models gave Trump barely any chance at all. But how could anyone justify the decimal point on such a forecast? No wonder many people missed the basic message, which was that Trump had a decent shot. “One in four” would have been a much more intuitive guide to the vagaries of forecasting.

Exaggerated precision has another cost: it makes numbers needlessly cumbersome to remember and to handle. So, embrace imprecision. The budget of the NHS in the UK is about £10bn a month. The national income of the United States is about $20tn a year. One can be much more precise about these things, but carrying the approximate numbers around in my head lets me judge pretty quickly when — say — a £50m spending boost or a $20bn tax cut is noteworthy, or a rounding error. 

My favourite rule of thumb is that since there are 65 million people in the UK and people tend to live a bit longer than 65, the size of a typical cohort — everyone retiring or leaving school in a given year — will be nearly a million people. Yes, it’s a rough estimate — but vaguely right is often good enough. 
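In code, the whole rule of thumb is a single division; the life-expectancy figure is deliberately rough:

```python
uk_population = 65_000_000
years_of_life = 70   # "a bit longer than 65", roughly

print(f"{uk_population / years_of_life:,.0f}")  # ~930,000: call it nearly a million
```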

Be curious. Curiosity is bad for cats, but good for stats. Curiosity is a cardinal virtue because it encourages us to work a little harder to understand what we are being told, and to enjoy the surprises along the way.  

This is partly because almost any statistical statement raises questions: who claims this? Why? What does this number mean? What’s missing? We have to be willing — in the words of UK statistical regulator Ed Humpherson — to “go another click”. If a statistic is worth sharing, isn’t it worth understanding first? The digital age is full of informational snares — but it also makes it easier to look a little deeper before our minds snap shut on an answer.  

While curiosity gives us the motivation to ask another question or go another click, it gives us something else, too: a willingness to change our minds. For many of the statistical claims that matter, we have already reached a conclusion. We already know what our tribe of right-thinking people believe about Brexit, gun control, vaccinations, climate change, inequality or nationalisation — and so it is natural to interpret any statistical claim as either a banner to wave, or a threat to avoid.  

Curiosity can put us into a better frame of mind to engage with statistical surprises. If we treat them as mysteries to be resolved, we are more likely to spot statistical foul play, but we are also more open-minded when faced with rigorous new evidence. 

In research with Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, Dan Kahan has discovered that people who are intrinsically curious about science — they exist across the political spectrum — tend to be less polarised in their response to questions about politically sensitive topics. We need to treat surprises as a mystery rather than a threat.  

Isaac Asimov is thought to have said, “The most exciting phrase in science isn’t ‘Eureka!’, but ‘That’s funny…’” The quip points to an important truth: if we treat the open question as more interesting than the neat answer, we’re on the road to becoming wiser.  

In the end, my postcard has 50-ish words and six commandments. Simple enough, I hope, for someone who is willing to make an honest effort to evaluate — even briefly — the statistical claims that appear in front of them. That willingness, I fear, is what is most in question.  

“Hey, Bill, Bill, am I gonna check every statistic?” said Donald Trump, then presidential candidate, when challenged by Bill O’Reilly about a grotesque lie that he had retweeted about African-Americans and homicides. And Trump had a point — sort of. He should, of course, have got someone to check a statistic before lending his megaphone to a false and racist claim. We all know by now that he simply does not care. 

But Trump’s excuse will have struck a chord with many, even those who are aghast at his contempt for accuracy (and much else). He recognised that we are all human. We don’t check everything; we can’t. Even if we had all the technical expertise in the world, there is no way that we would have the time. 

My aim is more modest. I want to encourage us all to make the effort a little more often: to be open-minded rather than defensive; to ask simple questions about what things mean, where they come from and whether they would matter if they were true. And, above all, to show enough curiosity about the world to want to know the answers to some of these questions — not to win arguments, but because the world is a fascinating place.