
Thursday 10 September 2020

Facts v feelings: how to stop our emotions misleading us

The pandemic has shown how a lack of solid statistics can be dangerous. But even with the firmest of evidence, we often end up ignoring the facts we don’t like. By Tim Harford in The Guardian
 

By the spring of 2020, the high stakes involved in rigorous, timely and honest statistics had suddenly become all too clear. A new coronavirus was sweeping the world. Politicians had to make their most consequential decisions in decades, and fast. Many of those decisions depended on data detective work that epidemiologists, medical statisticians and economists were scrambling to conduct. Tens of millions of lives were potentially at risk. So were billions of people’s livelihoods.

In early April, countries around the world were a couple of weeks into lockdown, global deaths passed 60,000, and it was far from clear how the story would unfold. Perhaps the deepest economic depression since the 1930s was on its way, on the back of a mushrooming death toll. Perhaps, thanks to human ingenuity or good fortune, such apocalyptic fears would fade from memory. Many scenarios seemed plausible. And that’s the problem.

An epidemiologist, John Ioannidis, wrote in mid-March that Covid-19 “might be a once-in-a-century evidence fiasco”. The data detectives are doing their best – but they’re having to work with data that’s patchy, inconsistent and woefully inadequate for making life-and-death decisions with the confidence we would like.

Details of this fiasco will, no doubt, be studied for years to come. But some things already seem clear. At the beginning of the crisis, politics seems to have impeded the free flow of honest statistics. Although the claim is contested, Taiwan complained that in late December 2019 it had given important clues about human-to-human transmission to the World Health Organization – but as late as mid-January, the WHO was reassuringly tweeting that China had found no evidence of human-to-human transmission. (Taiwan is not a member of the WHO, because China claims sovereignty over the territory and demands that it should not be treated as an independent state. It’s possible that this geopolitical obstacle led to the alleged delay.)

Did this matter? Almost certainly; with cases doubling every two or three days, we will never know what might have been different with an extra couple of weeks of warning. It’s clear that many leaders took a while to appreciate the potential gravity of the threat. President Trump, for instance, announced in late February: “It’s going to disappear. One day it’s like a miracle, it will disappear.” Four weeks later, with 1,300 Americans dead and more confirmed cases in the US than any other country, Trump was still talking hopefully about getting everybody to church at Easter.
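As a rough illustration of why a fortnight's head start matters when cases double every two or three days, here is a back-of-envelope sketch; the doubling times are the ones quoted above, and everything else is purely illustrative arithmetic, not an epidemiological model.

```python
# Back-of-envelope: how much a 14-day head start is worth when cases double
# every 2 or 3 days (illustrative arithmetic, not an epidemiological model).
for doubling_days in (2, 3):
    growth_factor = 2 ** (14 / doubling_days)
    print(f"doubling every {doubling_days} days: a 14-day delay multiplies "
          f"case numbers by roughly {growth_factor:.0f}")
```

With doubling every three days, two weeks of delay corresponds to roughly a 25-fold difference in case numbers; with doubling every two days, it is more than 100-fold.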

As I write, debates are raging. Can rapid testing, isolation and contact tracing contain outbreaks indefinitely, or merely delay their spread? Should we worry more about small indoor gatherings or large outdoor ones? Does closing schools help to prevent the spread of the virus, or do more harm as children go to stay with vulnerable grandparents? How much does wearing masks help? These and many other questions can be answered only by good data about who has been infected, and when.

But in the early months of the pandemic, a vast number of infections were not being registered in official statistics, owing to a lack of tests. And the tests that were being conducted were giving a skewed picture, being focused on medical staff, critically ill patients, and – let’s face it – rich, famous people. It took several months to build a picture of how many mild or asymptomatic cases there are, and hence how deadly the virus really is. As the death toll rose exponentially in March, doubling every two days in the UK, there was no time to wait and see. Leaders put economies into an induced coma – more than 3 million Americans filed jobless claims in a single week in late March, five times the previous record. The following week was even worse: more than 6.5m claims were filed. Were the potential health consequences really catastrophic enough to justify sweeping away so many people’s incomes? It seemed so – but epidemiologists could only make their best guesses with very limited information.

It’s hard to imagine a more extraordinary illustration of how much we usually take accurate, systematically gathered numbers for granted. The statistics for a huge range of important issues that predate the coronavirus have been painstakingly assembled over the years by diligent statisticians, and often made available to download, free of charge, anywhere in the world. Yet we are spoiled by such luxury, casually dismissing “lies, damned lies and statistics”. The case of Covid-19 reminds us how desperate the situation can become when the statistics simply aren’t there.

When it comes to interpreting the world around us, we need to realise that our feelings can trump our expertise. This explains why we buy things we don’t need, fall for the wrong kind of romantic partner, or vote for politicians who betray our trust. In particular, it explains why we so often buy into statistical claims that even a moment’s thought would tell us cannot be true. Sometimes, we want to be fooled.

Psychologist Ziva Kunda found this effect in the lab, when she showed experimental subjects an article laying out the evidence that coffee or other sources of caffeine could increase the risk to women of developing breast cysts. Most people found the article pretty convincing. Women who drank a lot of coffee did not.

We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws. It is not easy to master our emotions while assessing information that matters to us, not least because our emotions can lead us astray in different directions.

We don’t need to become emotionless processors of numerical information – just noticing our emotions and taking them into account may often be enough to improve our judgment. Rather than requiring superhuman control of our emotions, we need simply to develop good habits. Ask yourself: how does this information make me feel? Do I feel vindicated or smug? Anxious, angry or afraid? Am I in denial, scrambling to find a reason to dismiss the claim?

In the early days of the coronavirus epidemic, helpful-seeming misinformation spread even faster than the virus itself. One viral post – circulating on Facebook and email newsgroups – all-too-confidently explained how to distinguish between Covid-19 and a cold, reassured people that the virus was destroyed by warm weather, and incorrectly advised that iced water should be avoided because warm water kills the virus. The post, sometimes attributed to “my friend’s uncle”, sometimes to “Stanford hospital board” or some blameless and uninvolved paediatrician, was occasionally accurate but generally speculative and misleading. But still people – normally sensible people – shared it again and again and again. Why? Because they wanted to help others. They felt confused, they saw apparently useful advice, and they felt impelled to share. That impulse was only human, and it was well-meaning – but it was not wise.


Protestors in Edinburgh demonstrating against Covid-19 prevention measures. Photograph: Jeff J Mitchell/Getty Images

Before I repeat any statistical claim, I first try to take note of how it makes me feel. It’s not a foolproof method against tricking myself, but it’s a habit that does little harm, and is sometimes a great deal of help. Our emotions are powerful. We can’t make them vanish, and nor should we want to. But we can, and should, try to notice when they are clouding our judgment.

In 1997, the economists Linda Babcock and George Loewenstein ran an experiment in which participants were given evidence from a real court case about a motorbike accident. They were then randomly assigned to play the role of plaintiff’s attorney (arguing that the injured motorcyclist should receive $100,000 in damages) or defence attorney (arguing that the case should be dismissed or the damages should be low).

The experimental subjects were given a financial incentive to argue their side of the case persuasively, and to reach an advantageous settlement with the other side. They were also given a separate financial incentive to guess accurately what damages the judge in the real case had actually awarded. Their predictions should have been unrelated to their role-playing, but their judgment was strongly influenced by what they hoped would be true.

Psychologists call this “motivated reasoning”. Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion. In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice. Experts are not immune to motivated reasoning. Under some circumstances their expertise can even become a disadvantage. The French satirist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Benjamin Franklin commented: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.”

Modern social science agrees with Molière and Franklin: people with deeper expertise are better equipped to spot deception, but if they fall into the trap of motivated reasoning, they are able to muster more reasons to believe whatever they really wish to believe.

One recent review of the evidence concluded that this tendency to evaluate evidence and test arguments in a way that is biased towards our own preconceptions is not only common, but just as common among intelligent people. Being smart or educated is no defence. In some circumstances, it may even be a weakness.

One illustration of this is a study published in 2006 by two political scientists, Charles Taber and Milton Lodge. They wanted to examine the way Americans reasoned about controversial political issues. The two they chose were gun control and affirmative action.

Taber and Lodge asked their experimental participants to read a number of arguments on either side, and to evaluate the strengths and weaknesses of each argument. One might hope that being asked to review these pros and cons might give people more of a shared appreciation of opposing viewpoints; instead, the new information pulled people further apart.

This was because people mined the information they were given for ways to support their existing beliefs. When invited to search for more information, people would seek out data that backed their preconceived ideas. When invited to assess the strength of an opposing argument, they would spend considerable time thinking up ways to shoot it down.

This isn’t the only study to reach this sort of conclusion, but what’s particularly intriguing about Taber and Lodge’s experiment is that expertise made matters worse. More sophisticated participants in the experiment found more material to back up their preconceptions. More surprisingly, they found less material that contradicted them – as though they were using their expertise actively to avoid uncomfortable information. They produced more arguments in favour of their own views, and picked up more flaws in the other side’s arguments. They were vastly better equipped to reach the conclusion they had wanted to reach all along.

Of all the emotional responses we might have, the most politically relevant are those motivated by partisanship. People with a strong political affiliation want to be on the right side of things. We see a claim, and our response is immediately shaped by whether we believe “that’s what people like me think”.

Consider this claim about climate change: “Human activity is causing the Earth’s climate to warm up, posing serious risks to our way of life.” Many of us have an emotional reaction to a claim like that; it’s not like a claim about the distance to Mars. Believing it or denying it is part of our identity; it says something about who we are, who our friends are, and the sort of world we want to live in. If I put a claim about climate change in a news headline, or in a graph designed to be shared on social media, it will attract attention and engagement not because it is true or false, but because of the way people feel about it.

If you doubt this, ponder the findings of a Gallup poll conducted in 2015. It found a huge gap between how much Democrats and Republicans in the US worried about climate change. What rational reason could there be for that?

Scientific evidence is scientific evidence. Our beliefs around climate change shouldn’t skew left and right. But they do. This gap became wider the more education people had. Among those with no college education, 45% of Democrats and 23% of Republicans worried “a great deal” about climate change. Yet among those with a college education, the figures were 50% of Democrats and 8% of Republicans. A similar pattern holds if you measure scientific literacy: more scientifically literate Republicans and Democrats are further apart than those who know very little about science.

If emotion didn’t come into it, surely more education and more information would help people to come to an agreement about what the truth is – or at least, the current best theory? But giving people more information seems actively to polarise them on the question of climate change. This fact alone tells us how important our emotions are. People are straining to reach the conclusion that fits with their other beliefs and values – and the more they know, the more ammunition they have to reach the conclusion they hope to reach.


Anti-carbon tax protesters in Australia in 2011. Photograph: Torsten Blackwood/AFP/Getty Images

In the case of climate change, there is an objective truth, even if we are unable to discern it with perfect certainty. But as you are one individual among nearly 8 billion on the planet, the environmental consequences of what you happen to think are irrelevant. With a handful of exceptions – say, if you’re the president of China – climate change is going to take its course regardless of what you say or do. From a self-centred point of view, the practical cost of being wrong is close to zero. The social consequences of your beliefs, however, are real and immediate.

Imagine that you’re a barley farmer in Montana, and hot, dry summers are ruining your crop with increasing frequency. Climate change matters to you. And yet rural Montana is a conservative place, and the words “climate change” are politically charged. Anyway, what can you personally do about it?

Here’s how one farmer, Erik Somerfeld, threads that needle, as described by the journalist Ari LeVaux: “In the field, looking at his withering crop, Somerfeld was unequivocal about the cause of his damaged crop – ‘climate change’. But back at the bar, with his friends, his language changed. He dropped those taboo words in favour of ‘erratic weather’ and ‘drier, hotter summers’ – a not-uncommon conversational tactic in farm country these days.”

If Somerfeld lived in Portland, Oregon, or Brighton, East Sussex, he wouldn’t need to be so circumspect at his local tavern – he’d be likely to have friends who took climate change very seriously indeed. But then those friends would quickly ostracise someone else in the social group who went around loudly claiming that climate change is a Chinese hoax.

So perhaps it is not so surprising after all to find educated Americans poles apart on the topic of climate change. Hundreds of thousands of years of human evolution have wired us to care deeply about fitting in with those around us. This helps to explain the findings of Taber and Lodge that better-informed people are actually more at risk of motivated reasoning on politically partisan topics: the more persuasively we can make the case for what our friends already believe, the more our friends will respect us.

It’s far easier to lead ourselves astray when the practical consequences of being wrong are small or non-existent, while the social consequences of being “wrong” are severe. It’s no coincidence that this describes many controversies that divide along partisan lines.

It’s tempting to assume that motivated reasoning is just something that happens to other people. I have political principles; you’re politically biased; he’s a fringe conspiracy theorist. But we would be wiser to acknowledge that we all think with our hearts rather than our heads sometimes.

Kris De Meyer, a neuroscientist at King’s College London, shows his students a message describing an environmental activist’s problem with climate change denialism:


To summarise the climate deniers’ activities, I think we can say that:

(1) Their efforts have been aggressive while ours have been defensive.

(2) The deniers’ activities are rather orderly – almost as if they had a plan working for them.

I think the denialist forces can be characterised as dedicated opportunists. They are quick to act and seem to be totally unprincipled in the type of information they use to attack the scientific community. There is no question, though, that we have been inept in getting our side of the story, good though it may be, across to the news media and the public.

The students, all committed believers in climate change, outraged at the smokescreen laid down by the cynical and anti-scientific deniers, nod in recognition. Then De Meyer reveals the source of the text. It’s not a recent email. It’s taken, sometimes word for word, from an infamous internal memo written by a cigarette marketing executive in 1968. The memo complained not about “climate deniers” but about “anti-cigarette forces”; otherwise, few changes were required.

You can use the same language, the same arguments, and perhaps even have the same conviction that you’re right, whether you’re arguing (rightly) that climate change is real or (wrongly) that the cigarette-cancer link is not.

(Here’s an example of this tendency that, for personal reasons, I can’t help but be sensitive about. My left-leaning, environmentally conscious friends are justifiably critical of ad hominem attacks on climate scientists. You know the kind of thing: claims that scientists are inventing data because of their political biases, or because they’re scrambling for funding from big government. In short, smearing the person rather than engaging with the evidence.

Yet the same friends are happy to embrace and amplify the same kind of tactics when they are used to attack my fellow economists: that we are inventing data because of our political biases, or scrambling for funding from big business. I tried to point out the parallel to one thoughtful person, and got nowhere. She was completely unable to comprehend what I was talking about. I’d call this a double standard, but that would be unfair – it would suggest that it was deliberate. It’s not. It’s an unconscious bias that’s easy to see in others and very hard to see in ourselves.)

Our emotional reaction to a statistical or scientific claim isn’t a side issue. Our emotions can, and often do, shape our beliefs more than any logic. We are capable of persuading ourselves to believe strange things, and to doubt solid evidence, in service of our political partisanship, our desire to keep drinking coffee, our unwillingness to face up to the reality of our HIV diagnosis, or any other cause that invokes an emotional response.

But we shouldn’t despair. We can learn to control our emotions – that is part of the process of growing up. The first simple step is to notice those emotions. When you see a statistical claim, pay attention to your own reaction. If you feel outrage, triumph, denial, pause for a moment. Then reflect. You don’t need to be an emotionless robot, but you could and should think as well as feel.

Most of us do not actively wish to delude ourselves, even when that might be socially advantageous. We have motives to reach certain conclusions, but facts matter, too. Lots of people would like to be movie stars, billionaires or immune to hangovers, but very few people believe that they actually are. Wishful thinking has limits. The more we get into the habit of counting to three and noticing our knee-jerk reactions, the closer to the truth we are likely to get.

For example, one survey, conducted by a team of academics, found that most people were perfectly able to distinguish serious journalism from fake news, and also agreed that it was important to amplify the truth, not lies. Yet the same people would happily share headlines such as “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”, because at the moment at which they clicked “share”, they weren’t stopping to think. They weren’t thinking, “Is this true?”, and they weren’t thinking, “Do I think the truth is important?” 

Instead, as they skimmed the internet in that state of constant distraction that we all recognise, they were carried away with their emotions and their partisanship. The good news is that simply pausing for a moment to reflect was all it took to filter out a lot of the misinformation. It doesn’t take much; we can all do it. All we need to do is acquire the habit of stopping to think.

Inflammatory memes or tub-thumping speeches invite us to leap to the wrong conclusion without thinking. That’s why we need to be calm. And that is also why so much persuasion is designed to arouse us – our lust, our desire, our sympathy or our anger. When was the last time Donald Trump, or for that matter Greenpeace, tweeted something designed to make you pause in calm reflection? Today’s persuaders don’t want you to stop and think. They want you to hurry up and feel. Don’t be rushed.

Thursday 8 February 2018

A simple guide to statistics in the age of deception

Tim Harford in The Financial Times




“The best financial advice for most people would fit on an index card.” That’s the gist of an offhand comment in 2013 by Harold Pollack, a professor at the University of Chicago. Pollack’s bluff was duly called, and he quickly rushed off to find an index card and scribble some bullet points — with respectable results. 


When I heard about Pollack’s notion — he elaborated upon it in a 2016 book — I asked myself: would this work for statistics, too? There are some obvious parallels. In each case, common sense goes a surprisingly long way; in each case, dizzying numbers and impenetrable jargon loom; in each case, there are stubborn technical details that matter; and, in each case, there are people with a sharp incentive to lead us astray. 

The case for everyday practical numeracy has never been more urgent. Statistical claims fill our newspapers and social media feeds, unfiltered by expert judgment and often designed as a political weapon. We do not necessarily trust the experts — or more precisely, we may have our own distinctive view of who counts as an expert and who does not.  

Nor are we passive consumers of statistical propaganda; we are the medium through which the propaganda spreads. We are arbiters of what others will see: what we retweet, like or share online determines whether a claim goes viral or vanishes. If we fall for lies, we become unwittingly complicit in deceiving others. On the bright side, we have more tools than ever to help weigh up what we see before we share it — if we are able and willing to use them. 

In the hope that someone might use it, I set out to write my own postcard-sized citizens’ guide to statistics. Here’s what I learnt. 

Professor Pollack’s index card includes advice such as “Save 20 per cent of your money” and “Pay your credit card in full every month”. The author Michael Pollan offers dietary advice in even pithier form: “Eat Food. Not Too Much. Mostly Plants.” Quite so, but I still want a cheeseburger.  

However good the advice Pollack and Pollan offer, it’s not always easy to take. The problem is not necessarily ignorance. Few people think that Coca-Cola is a healthy drink, or believe that credit cards let you borrow cheaply. Yet many of us fall into some form of temptation or other. That is partly because slick marketers are focused on selling us high-fructose corn syrup and easy credit. And it is partly because we are human beings with human frailties. 

With this in mind, my statistical postcard begins with advice about emotion rather than logic. When you encounter a new statistical claim, observe your feelings. Yes, it sounds like a line from Star Wars, but we rarely believe anything because we’re compelled to do so by pure deduction or irrefutable evidence. We have feelings about many of the claims we might read — anything from “inequality is rising” to “chocolate prevents dementia”. If we don’t notice and pay attention to those feelings, we’re off to a shaky start. 

What sort of feelings? Defensiveness. Triumphalism. Righteous anger. Evangelical fervour. Or, when it comes to chocolate and dementia, relief. It’s fine to have an emotional response to a chart or shocking statistic — but we should not ignore that emotion, or be led astray by it. 

There are certain claims that we rush to tell the world, others that we use to rally like-minded people, still others we refuse to believe. Our belief or disbelief in these claims is part of who we feel we are. “We all process information consistent with our tribe,” says Dan Kahan, professor of law and psychology at Yale University. 

In 2005, Charles Taber and Milton Lodge, political scientists at Stony Brook University, New York, conducted experiments in which subjects were invited to study arguments around hot political issues. Subjects showed a clear confirmation bias: they sought out testimony from like-minded organisations. For example, subjects who opposed gun control would tend to start by reading the views of the National Rifle Association. Subjects also showed a disconfirmation bias: when the researchers presented them with certain arguments and invited comment, the subjects would quickly accept arguments with which they agreed, but devote considerable effort to disparaging opposing arguments. 

Expertise is no defence against this emotional reaction; in fact, Taber and Lodge found that better-informed experimental subjects showed stronger biases. The more they knew, the more cognitive weapons they could aim at their opponents. “So convenient a thing it is to be a reasonable creature,” commented Benjamin Franklin, “since it enables one to find or make a reason for everything one has a mind to do.” 

This is why it’s important to face up to our feelings before we even begin to process a statistical claim. If we don’t at least acknowledge that we may be bringing some emotional baggage along with us, we have little chance of discerning what’s true. As the physicist Richard Feynman once commented, “You must not fool yourself — and you are the easiest person to fool.” 

The second crucial piece of advice is to understand the claim. That seems obvious. But all too often we leap to disbelieve or believe (and repeat) a claim without pausing to ask whether we really understand what the claim is. To quote Douglas Adams’s philosophical supercomputer, Deep Thought, “Once you know what the question actually is, you’ll know what the answer means.” 

For example, take the widely accepted claim that “inequality is rising”. It seems uncontroversial, and urgent. But what does it mean? Racial inequality? Gender inequality? Inequality of opportunity, of consumption, of educational attainment, of wealth? Within countries or across the globe? 

Even given a narrower claim, “inequality of income before taxes is rising” (and you should be asking yourself, since when?), there are several different ways to measure this. One approach is to compare the income of people at the 90th percentile and the 10th percentile, but that tells us nothing about the super-rich, nor the ordinary people in the middle. An alternative is to examine the income share of the top 1 per cent — but this approach has the opposite weakness, telling us nothing about how the poorest fare relative to the majority.  

There is no single right answer — nor should we assume that all the measures tell a similar story. In fact, there are many true statements that one can make about inequality. It may be worth figuring out which one is being made before retweeting it. 
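To make those two measures concrete, here is a short sketch with an entirely invented income distribution (the numbers are illustrative, not real data); it computes the 90:10 comparison and the top 1 per cent share, and shows that a change at the very top can move one measure without touching the other.

```python
# A toy illustration (invented incomes, not real data) of two common
# inequality measures: the 90th/10th percentile ratio and the top 1% share.
import numpy as np

rng = np.random.default_rng(42)
incomes = rng.lognormal(mean=10, sigma=0.75, size=100_000)  # skewed, income-like numbers

p90, p10 = np.percentile(incomes, [90, 10])
top_cutoff = np.percentile(incomes, 99)
top_share = incomes[incomes >= top_cutoff].sum() / incomes.sum()
print(f"90:10 ratio: {p90 / p10:.1f}")
print(f"top 1% share of total income: {top_share:.1%}")

# Doubling only the very highest incomes leaves the 90:10 ratio untouched
# but raises the top 1% share: the two measures can tell different stories.
boosted = incomes.copy()
boosted[boosted >= top_cutoff] *= 2
p90b, p10b = np.percentile(boosted, [90, 10])
top_share_b = boosted[boosted >= np.percentile(boosted, 99)].sum() / boosted.sum()
print(f"after boosting the top 1%: 90:10 ratio {p90b / p10b:.1f}, top 1% share {top_share_b:.1%}")
```

In this toy example, doubling only the very highest incomes raises the top 1 per cent share but leaves the 90:10 ratio unchanged; both would be true statements about inequality, yet they describe different things.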

Perhaps it is not surprising that a concept such as inequality turns out to have hidden depths. But the same holds true of more tangible subjects, such as “a nurse”. Are midwives nurses? Health visitors? Should two nurses working half-time count as one nurse? Claims over the staffing of the UK’s National Health Service have turned on such details. 

All this can seem like pedantry — or worse, a cynical attempt to muddy the waters and suggest that you can prove anything with statistics. But there is little point in trying to evaluate whether a claim is true if one is unclear what the claim even means. 

Imagine a study showing that kids who play violent video games are more likely to be violent in reality. Rebecca Goldin, a mathematician and director of the statistical literacy project STATS, points out that we should ask questions about concepts such as “play”, “violent video games” and “violent in reality”. Is Space Invaders a violent game? It involves shooting things, after all. And are we measuring a response to a questionnaire after 20 minutes’ play in a laboratory, or murderous tendencies in people who play 30 hours a week? “Many studies won’t measure violence,” says Goldin. “They’ll measure something else such as aggressive behaviour.” Just like “inequality” or “nurse”, these seemingly common sense words hide a lot of wiggle room. 

Two particular obstacles to our understanding are worth exploring in a little more detail. One is the question of causation. “Taller children have a higher reading age,” goes the headline. This may summarise the results of a careful study about nutrition and cognition. Or it may simply reflect the obvious point that eight-year-olds read better than four-year-olds — and are taller. Causation is philosophically and technically a knotty business but, for the casual consumer of statistics, the question is not so complicated: just ask whether a causal claim is being made, and whether it might be justified. 

Returning to this study about violence and video games, we should ask: is this a causal relationship, tested in experimental conditions? Or is this a broad correlation, perhaps because the kind of thing that leads kids to violence also leads kids to violent video games? Without clarity on this point, we don’t really have anything but an empty headline.  

We should never forget, either, that all statistics are a summary of a more complicated truth. For example, what’s happening to wages? With tens of millions of wage packets being paid every month, we can only ever summarise — but which summary? The average wage can be skewed by a small number of fat cats. The median wage tells us about the centre of the distribution but ignores everything else. 

Or we might look at the median increase in wages, which isn’t the same thing as the increase in the median wage — not at all. In a situation where the lowest and highest wages are increasing while the middle sags, it’s quite possible for the median pay rise to be healthy while median pay falls.  
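To see how that can happen, here is a tiny worked example with five invented wages; nothing about it is real data, it simply spells out the arithmetic.

```python
# A minimal sketch (made-up wages) of how the median pay rise can be positive
# while the median wage falls: the lowest and highest wages rise, the middle sags.
from statistics import median

year1 = [10_000, 20_000, 30_000, 40_000, 50_000]
year2 = [12_000, 22_000, 25_000, 44_000, 55_000]

rises = [after - before for before, after in zip(year1, year2)]

print("median wage, year 1:", median(year1))   # 30000
print("median wage, year 2:", median(year2))   # 25000 -- the median wage fell
print("median pay rise:    ", median(rises))   # 2000  -- yet the median rise looks healthy
```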

Sir Andrew Dilnot, former chair of the UK Statistics Authority, warns that an average can never convey the whole of a complex story. “It’s like trying to see what’s in a room by peering through the keyhole,” he tells me.  

In short, “you need to ask yourself what’s being left out,” says Mona Chalabi, data editor for The Guardian US. That applies to the obvious tricks, such as a vertical axis that’s been truncated to make small changes look big. But it also applies to the less obvious stuff — for example, why does a graph comparing the wages of African-Americans with those of white people not also include data on Hispanic or Asian-Americans? There is no shame in leaving something out. No chart, table or tweet can contain everything. But what is missing can matter. 

Channel the spirit of film noir: get the backstory. Of all the statistical claims in the world, this particular stat fatale appeared in your newspaper or social media feed, dressed to impress. Why? Where did it come from? Why are you seeing it?  

Sometimes the answer is little short of a conspiracy: a PR company wanted to sell ice cream, so paid a penny-ante academic to put together the “equation for the perfect summer afternoon”, pushed out a press release on a quiet news day, and won attention in a media environment hungry for clicks. Or a political donor slung a couple of million dollars at an ideologically sympathetic think-tank in the hope of manufacturing some talking points. 

Just as often, the answer is innocent but unedifying: publication bias. A study confirming what we already knew — smoking causes cancer — is unlikely to make news. But a study with a surprising result — maybe smoking doesn’t cause cancer after all — is worth a headline. The new study may have been rigorously conducted but is probably wrong: one must weigh it up against decades of contrary evidence. 

Publication bias is a big problem in academia. The surprising results get published, the follow-up studies finding no effect tend to appear in lesser journals if they appear at all. It is an even bigger problem in the media — and perhaps bigger yet in social media. Increasingly, we see a statistical claim because people like us thought it was worth a Like on Facebook. 

David Spiegelhalter, president of the Royal Statistical Society, proposes what he calls the “Groucho principle”. Groucho Marx famously resigned from a club — if they’d accept him as a member, he reasoned, it couldn’t be much of a club. Spiegelhalter feels the same about many statistical claims that reach the headlines or the social media feed. He explains, “If it’s surprising or counter-intuitive enough to have been drawn to my attention, it is probably wrong.”  

OK. You’ve noted your own emotions, checked the backstory and understood the claim being made. Now you need to put things in perspective. A few months ago, a horrified citizen asked me on Twitter whether it could be true that in the UK, seven million disposable coffee cups were thrown away every day.  

I didn’t have an answer. (A quick internet search reveals countless repetitions of the claim, but no obvious source.) But I did have an alternative question: is that a big number? The population of the UK is 65 million. If one person in 10 used a disposable cup each day, that would do the job.  
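Spelled out, that sanity check is a single division using the same rough figures as above.

```python
# Is 7 million cups a day a big number for a country of 65 million people?
cups_per_day = 7_000_000
uk_population = 65_000_000
print(f"cups per person per day: {cups_per_day / uk_population:.2f}")  # about 0.11, i.e. one in ten
```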

Many numbers mean little until we can compare them with a more familiar quantity. It is much more informative to know how many coffee cups a typical person discards than to know how many are thrown away by an entire country. And more useful still to know whether the cups are recycled (usually not, alas) or what proportion of the country’s waste stream is disposable coffee cups (not much, is my guess, but I may be wrong).  

So we should ask: how big is the number compared with other things I might intuitively understand? How big is it compared with last year, or five years ago, or 30? It’s worth a look at the historical trend, if the data are available.  

Finally, beware “statistical significance”. There are various technical objections to the term, some of which are important. But the simplest point to appreciate is that a number can be “statistically significant” while being of no practical importance. Particularly in the age of big data, it’s possible for an effect to clear this technical hurdle of statistical significance while being tiny. 

One study was able to demonstrate that children exposed to a heatwave while in the womb went on to earn less as adults. The finding was statistically significant. But the impact was trivial: $30 in lost income per year. Just because a finding is statistically robust does not mean it matters; the word “significance” obscures that. 
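To see how a trivial effect can still clear the significance hurdle, here is a deterministic back-of-envelope calculation. The $30-a-year effect is the one mentioned above; the earnings spread and the sample size are assumptions chosen purely for illustration.

```python
# How a $30/year difference can be "statistically significant" with a big enough
# sample. The effect size is from the text; the spread and sample size are assumed.
from math import sqrt
from statistics import NormalDist

effect = 30          # difference in mean annual earnings between groups ($)
sd = 10_000          # assumed standard deviation of annual earnings ($)
n = 2_000_000        # assumed number of people in each group

se = sd * sqrt(2 / n)                 # standard error of the difference in means
z = effect / se
p = 2 * (1 - NormalDist().cdf(z))     # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4f}")    # z = 3.00, p = 0.0027: "significant", yet $30 a year is trivial
```

The p-value reports only that the difference is unlikely to be zero, not that it is big enough to matter.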

In an age of computer-generated images of data clouds, some of the most charming data visualisations are hand-drawn doodles by the likes of Mona Chalabi and the cartoonist Randall Munroe. But there is more to these pictures than charm: Chalabi uses the wobble of her pen to remind us that most statistics have a margin of error. A computer plot can confer the illusion of precision on what may be a highly uncertain situation. 

“It is better to be vaguely right than exactly wrong,” wrote Carveth Read in Logic (1898), and excessive precision can lead people astray. On the eve of the US presidential election in 2016, the political forecasting website FiveThirtyEight gave Donald Trump a 28.6 per cent chance of winning. In some ways that is impressive, because other forecasting models gave Trump barely any chance at all. But how could anyone justify the decimal point on such a forecast? No wonder many people missed the basic message, which was that Trump had a decent shot. “One in four” would have been a much more intuitive guide to the vagaries of forecasting.

Exaggerated precision has another cost: it makes numbers needlessly cumbersome to remember and to handle. So, embrace imprecision. The budget of the NHS in the UK is about £10bn a month. The national income of the United States is about $20tn a year. One can be much more precise about these things, but carrying the approximate numbers around in my head lets me judge pretty quickly when — say — a £50m spending boost or a $20bn tax cut is noteworthy, or a rounding error. 

My favourite rule of thumb is that since there are 65 million people in the UK and people tend to live a bit longer than 65, the size of a typical cohort — everyone retiring or leaving school in a given year — will be nearly a million people. Yes, it’s a rough estimate — but vaguely right is often good enough. 

Be curious. Curiosity is bad for cats, but good for stats. Curiosity is a cardinal virtue because it encourages us to work a little harder to understand what we are being told, and to enjoy the surprises along the way.  

This is partly because almost any statistical statement raises questions: who claims this? Why? What does this number mean? What’s missing? We have to be willing — in the words of UK statistical regulator Ed Humpherson — to “go another click”. If a statistic is worth sharing, isn’t it worth understanding first? The digital age is full of informational snares — but it also makes it easier to look a little deeper before our minds snap shut on an answer.  

While curiosity gives us the motivation to ask another question or go another click, it gives us something else, too: a willingness to change our minds. For many of the statistical claims that matter, we have already reached a conclusion. We already know what our tribe of right-thinking people believe about Brexit, gun control, vaccinations, climate change, inequality or nationalisation — and so it is natural to interpret any statistical claim as either a banner to wave, or a threat to avoid.  

Curiosity can put us into a better frame of mind to engage with statistical surprises. If we treat them as mysteries to be resolved, we are more likely to spot statistical foul play, but we are also more open-minded when faced with rigorous new evidence. 

In research with Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, Dan Kahan has discovered that people who are intrinsically curious about science — they exist across the political spectrum — tend to be less polarised in their response to questions about politically sensitive topics. We need to treat surprises as a mystery rather than a threat.  

Isaac Asimov is thought to have said, “The most exciting phrase in science isn’t ‘Eureka!’, but ‘That’s funny…’” The quip points to an important truth: if we treat the open question as more interesting than the neat answer, we’re on the road to becoming wiser.  

In the end, my postcard has 50-ish words and six commandments. Simple enough, I hope, for someone who is willing to make an honest effort to evaluate — even briefly — the statistical claims that appear in front of them. That willingness, I fear, is what is most in question.  

“Hey, Bill, Bill, am I gonna check every statistic?” said Donald Trump, then presidential candidate, when challenged by Bill O’Reilly about a grotesque lie that he had retweeted about African-Americans and homicides. And Trump had a point — sort of. He should, of course, have got someone to check a statistic before lending his megaphone to a false and racist claim. We all know by now that he simply does not care. 

But Trump’s excuse will have struck a chord with many, even those who are aghast at his contempt for accuracy (and much else). He recognised that we are all human. We don’t check everything; we can’t. Even if we had all the technical expertise in the world, there is no way that we would have the time. 

My aim is more modest. I want to encourage us all to make the effort a little more often: to be open-minded rather than defensive; to ask simple questions about what things mean, where they come from and whether they would matter if they were true. And, above all, to show enough curiosity about the world to want to know the answers to some of these questions — not to win arguments, but because the world is a fascinating place. 

Tuesday 16 January 2018

Both the left and the right can learn from Carillion's demise

Ben Chu in The Independent

Psychologists have identified a phenomenon they call “confirmation bias”. This is the tendency for people to interpret new information in a way that simply confirms their pre-existing beliefs. We’ve seen quite a lot of confirmation bias in the wake of Carillion’s belly flop into liquidation this week.

For some on the left this is all confirmation that privatisation of the provision of public services has been a disaster. It shows that corporate fat cats can walk away with profits while ordinary workers and small firms suffer, public services are put in jeopardy and taxpayers foot the bill.

For some on the right, on the other hand, it confirms that privatisation is working broadly as it should. A badly-run private company failed. Its contracts will now be re-distributed to other, more competent, private firms. As for profiteering at public expense, they see the precise opposite. If anything, civil servants have got too good at putting the squeeze on private contractors, forcing them into bidding wars which screw down their margins to almost nothing. Tough for the private companies, certainly, but it means better value for money for taxpayers.

Both sides should take a step back and remove the blinkers. It’s certainly welcome that Carillion’s shareholders and its lenders have not, despite intense corporate lobbying, been bailed out by the Government in the way banks were rescued in 2008. The shareholders will lose their shirts. And the banks must write down their loans. That is how it ought to be. Leftist nationalisers ought to recognise that this represents progress.

But champions of privatisation should also face up to some unpalatable realities laid bare by this scandal. The profit margins of some contractors may be small but Carillion still managed to pay regular and substantial dividends to its shareholders, even when it was clear the company was financially overstretched.

And there have been high personal rewards for failed management. If these services had been managed “in-house”, no civil servant would have been paid the £1.5m a year that Richard Howson, the former chief executive of Carillion, commanded. The head of the NHS, Simon Stevens, by comparison, earns £190,000 a year. Are we really to believe that more modestly paid civil servants would have been vastly less competent than Howson and his team at Carillion?

As for the idea that civil servants have morphed into hard-nosed contracting experts, that rather stretches credulity given the miserable history of Private Finance Initiative deals. Moreover, this adversarial image isn’t a particularly useful way to conceptualise the relationship between private contractors and the state when it comes to the delivery of public services.

This relationship is inherently different from a normal commercial transaction between two parties. It has to be a much closer (and ongoing) relationship because society cannot cope with even a brief interruption of supply of the services. Ministers can’t allow a prison to be unguarded, a hospital to go uncleaned, a school to be without catering, a care home to be shut down.

Commissioning a contractor to deliver a public service extremely cheaply is a false economy if that contractor runs the risk of financial collapse and the state will have to fork out to keep the show on the road, as it is now with Carillion’s contracts.

This reality was also demonstrated last year when the Transport Secretary allowed Virgin and Stagecoach to exit their East Coast rail franchise early, costing the state £2bn in foregone payments, after the operators discovered they were running at a loss. It was not wise for the Transport Department to have accepted such a high bid from the consortium in 2015, however good it looked at the time.

One clear lesson from Carillion’s demise is that much more public transparency over contractors’ books is needed, something the National Audit Office urged back in 2013. The Carillion fiasco demonstrates that it’s impossible to rely on the expertise, or perhaps integrity, of auditing firms to flag looming problems.

In the end, the broader privatisation versus nationalisation debate might be an unhelpful framing of the issue. Even if many more services are managed in-house, as Labour wants, there will still be contracting out. Even Jeremy Corbyn is not demanding the nationalisation of construction firms.

When it comes to the delivery of vital public services, there is an unavoidable symbiosis between the state sector and the private sector. There is no purity to be found. The key question, which is too little addressed, is the appropriate balance of authority in that relationship and the institutional checks on that authority to ensure the broad public interest is always paramount.

Thursday 14 December 2017

Facts do not matter

Amit Varma in The Hindu



The most surprising thing about these Gujarat elections is that people are so surprised at the Prime Minister’s rhetoric. Narendra Modi has eschewed all talk of development, and has played to the worst impulses of the Gujarati people. His main tool is Hindu-Muslim polarisation, which is reflected in the language he uses for his opponents. The Congress has a “Mughlai” mentality, they are ushering in an “Aurangzeb Raj”, and their top leaders are conspiring with Pakistan to make sure Mr. Modi loses. A Bharatiya Janata Party (BJP) spokesperson has also launched a scathing attack on Congress president-elect Rahul Gandhi. None of this is new.

Mr. Modi’s rhetoric in the heat of campaigning has always come from below. From his references to “Mian Musharraf” over a decade ago to the “kabristan-shamshaan” comments of the recent elections in Uttar Pradesh, it has been clear that the otherness of Muslims is central to the BJP playbook. Hate drives more people to the polling booth than warm, fuzzy feelings of pluralism. But, the question is, are the Congress leaders really conspiring with Pakistan to make sure the BJP lose?

Answer: It doesn’t matter.

No care for truth

In 1986, the philosopher Harry G. Frankfurt wrote an essay named “On Bullshit”, which was published as a book in 2005 and became a surprise bestseller. The book attempts to arrive at “a theoretical understanding of bullshit”. The key difference between a liar and a ‘bullshitter’, Frankfurt tells us, is that the liar knows the truth and aims to deceive. The ‘bullshitter’, on the other hand, doesn’t care about the truth. He is “neither on the side of the true nor on the side of the false,” in Frankfurt’s words. “His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says.”

The ‘bullshitter’ is wise, for he has cottoned on to an important truth that has become more and more glaring in these modern times: that facts don’t matter. And to understand why, I ask you to go back with me in time to another seminal book, this one published in 1922.

The first chapter of “Public Opinion”, by the American journalist Walter Lippmann, is titled “The World Outside and the Pictures in Our Heads”. In it, Lippmann makes the point that all of us have a version of the world inside our heads that resembles, but is not identical to, the world as it is. “The real environment,” he writes, “is altogether too big, too complex, and too fleeting for direct acquaintance.”

We construct a version of the world in our heads, and feed that version, for modifying it too much will require too much effort. If facts conflict with it, we ignore those facts, and accept only those that conform to our worldview. (Cognitive psychologists call this the “Confirmation Bias”.)

Lippmann sees this as a challenge for democracy, for how are we to elect our leaders if we cannot comprehend the impact they will have on the world?

Fragmented media

I would argue that this is a far greater problem today than it was in Lippmann’s time. Back then, and until a couple of decades ago, there was a broad consensus on the truth. There were gatekeepers to information and knowledge. Even accounting for biases, the mainstream media agreed on some basic facts. That has changed. The media is fragmented, there are no barriers to entry, and the mainstream media no longer has a monopoly on the dissemination of information. This is a good thing, with one worrying side effect: whatever beliefs or impulses we might have — the earth is flat, the Jews carried out 9/11, India is a Hindu nation — we can find plenty of “evidence” for it online, and connect with like-minded people. Finding others who share our beliefs makes us more strident, and soon we form multiple echo chambers that become more and more extreme. Polarisation increases. The space in the middle disappears. And the world inside our heads, shared by so many others, becomes impervious to facts.

This also means that impulses we would otherwise not express in polite society find validation, and a voice. Here’s another book you should read: in 1997, the sociologist Timur Kuran wrote “Private Truths, Public Lies”, in which he coined the term “Preference Falsification”. There are many things we feel or believe but do not express because we fear social disapproval. But as soon as we realise that others share our views, we are emboldened to express ourselves. This leads to a “Preference Cascade”: Kuran gives the example of the collapse of the Soviet Union, but an equally apt modern illustration is the rise of right-wing populists everywhere. I believe — and I apologise if this is too depressing to contemplate — that the majority of us are bigots, misogynists, racists, and tribal in our thinking. We have always been this way, but because liberal elites ran the media, and a liberal consensus seemed to prevail, we did not express these feelings. Social media showed us that we were not alone, and gave us the courage to express ourselves.

That’s where Donald Trump comes from. That’s where Mr. Modi comes from. Our masses vote for these fine gentlemen not in spite of their bigotry and misogyny, but because of it. Mr. Trump and Mr. Modi provide them a narrative that feeds the world inside their heads. Mexicans are rapists, foreigners are bad, Muslims are stealing our girls, gaumutra cures cancer — and so on. The truth is irrelevant. Facts. Don’t. Matter.

Think about the implication of this. This means that the men and women who wrote the Constitution were an out-of-touch elite, and the values they embedded in it were not shared by most of the nation. (As a libertarian, I think the Constitution was deeply flawed because it did not do enough to protect individual rights, but our society’s consensus would probably be that it did too much.) The “Idea of India” that these elites spoke of was never India’s Idea of India. These “liberal” values were imposed on an unwilling nation — and is such imposition, ironically, not deeply illiberal itself? This is what I call The Liberal Paradox.

All the ugliness in our politics today is the ugliness of the human condition. This is how we are. This is not a perversion of democracy but an expression of it. Those of us who are saddened by it — the liberal elites, libertarians like me — have to stop feeling entitled, and get down to work. The alt-right guru Andrew Breitbart once said something I never get tired of quoting: “Politics is downstream from Culture.” A political victory will now not come until there is a social revolution. Where will it begin?

Tuesday 30 October 2012

'You Are Not So Smart: Why Your Memory is Mostly Fiction....



So you remember your wedding day like it was yesterday. You can spot when something is of high quality. You keep yourself well-informed about current affairs but would be open to debate and discussion. You love your phone because it's the best, right? Are you sure? David McRaney from Hattiesburg, Mississippi, is here to tell you that you don't know yourself as well as you think. The journalist and self-described psychology nerd's new book, You Are Not So Smart, consists of 48 short chapters on the assorted ways that we mislead ourselves every day. "The central theme is that you are the unreliable narrator in the story of your life. And this is because you're unaware of how unaware you are," says McRaney. "It's fun to go through legitimate scientific research and pull out all of the examples that show how everyone, no matter how smart or educated or experienced, is radically self-deluded in predictable and quantifiable ways." Based on the blog of the same name, You Are Not So Smart is not so much a self-help book as a self-hurt book. Here McRaney gives some key examples.

Expectation

The Misconception: Wine is a complicated elixir, full of subtle flavours only an expert can truly distinguish, and experienced tasters are impervious to deception.
The Truth: Wine experts and consumers can be fooled by altering their expectations.
An experiment in 2001 at the University of Bordeaux had wine experts taste a red and a white wine to determine which was better. They dutifully explained what they liked about each wine, but what they didn't realise was that scientists had just dyed the same white wine red and told them it was red wine. The tasters described the sorts of berries and tannins they could detect in the red wine as if it really was red. Another test had them judge a cheap bottle of wine and an expensive one. They rated the expensive wine much more highly than the cheap one, with much more flattering descriptions. It was actually the same wine. It's not to say wine-tasting is pointless; it's to show that expectation can radically change experience. Yes, these people were experts, but that doesn't mean they can't be influenced by the same things as the rest of us, whether it be presentation or advertising or price. This drives home the idea that reality is a construction of the brain. You don't passively receive the outside world, you actively construct your experience moment by moment.

The Texas Sharpshooter Fallacy

The Misconception: We take randomness into account when determining cause and effect.
The Truth: We tend to ignore random chance when the results seem meaningful or when we want a random event to have a meaningful cause.
Imagine a cowboy shooting at the side of a barn over and over again. The side of the barn fills up with holes. If you walk over and paint a bullseye around clusters of holes, it will make it look like you have made quite a lot of correct shots. It's a metaphor for the way the human mind naturally works when trying to make sense out of chaos. The brain is very invested in taking chaos and turning it into order. For example, in America it's very popular to discuss how similar the Lincoln and Kennedy assassinations were. Elected 100 years apart, Lincoln was killed in Ford's Theatre; Kennedy was in a Lincoln automobile made by Ford. They were both killed on a Friday, sitting next to their wives, by men with three names. And so on and so on. It's not spooky. People take hold of the hits but ignore the misses. They are pulled into the things that line up, and are similar or coincidental, but they ignore everything else that's not. The similarities are merely bullseyes drawn around the many random facts.

Confirmation Bias

The Misconception: Your opinions are the result of years of rational, objective analysis.
The Truth: Your opinions are the result of years of paying attention to information that confirmed what you believed, while ignoring information that challenged your preconceived notions.
Any cognitive bias is a tendency to think in one way and not another whenever your mind is on auto-pilot; whenever you're going with the flow. Confirmation bias is a tendency to pay attention to evidence that confirms pre-existing beliefs, notions and conclusions about life, and to completely ignore other information. This happens so automatically that we don't even notice. Say you have a flatmate, and you are arguing over who does most of the housework, and both people believe that they do most of the work. What is really happening is that both people are noticing when they do the work and not noticing when they don't. The way it plays into most of our lives is the media that we choose to put into our brains; the television, news, magazines and books. We tend to only pick out things that line up with our pre-existing beliefs and rarely choose anything that challenges those beliefs. It relates to the backfire effect, which is a cognitive bias where, if we're presented with contradictory evidence, we tend to reject it and support our initial belief even more firmly. When people watch a news programme or pundit, they aren't looking for information so much as confirmation of what they already believe is going on.

Brand Loyalty

The Misconception: We prefer the things we own over the things we don't because we made rational choices when we bought them.
The Truth: We prefer the things we own because we rationalise our past choices to protect our sense of self.
Why do people argue over Apple vs Android? Or one car company versus another? After all, these are just corporations. Why would you defend a brand as if you were its PR representative? We believe that we prefer the things we own because we made deep, rational evaluations of them before we bought them, but most of the rationalisation takes place after you own the thing. It's the choosing of one thing over another that leads to narratives about why you did it, which usually tie in to your self-image.
There are at least a dozen psychological effects that play into brand loyalty, the most potent of which is the endowment effect: you feel like the things you own are superior to the things you don't. When you buy a product you tend to connect the product to your self-image, then once it's connected to your self-image you will defend it as if you're defending your own ego or belief structure.

The Misinformation Effect

The Misconception: Memories are played back like recordings.
The Truth: Memories are constructed anew each time from whatever information is currently available, which makes them highly permeable to influences from the present.
You might think your memory is a little fuzzy but not that it's completely inaccurate. People believe that memory is like a video or files stored in some sort of computer. But it's not like that at all. Memories are actually constructed anew each time that you remember something.
Each time, you take an old activation sequence in your brain and reconstruct it, like building a toy airplane out of Lego and then smashing the Lego, putting it back into the box, and building it again. Each time you build it, it's going to be a little bit different based on the context and experience you have had since the last time you created it.
Oddly enough, the least-recalled memory is the most accurate. Each time you bring it into your life you edit it a little more. In 1974 Elizabeth Loftus had people watch a film of two cars having a collision and divided them into groups. She asked each group the same question, but with a slightly different verb: how fast were the cars going when they contacted, hit, bumped, collided or smashed? The more violent the wording, the higher they estimated the speed. The way in which questions were worded altered the memories subjects reported.
They weren't looking back to the memory of the film they watched, they were building a new experience based on current information. Memory is actually very malleable and it's dangerous to think that memory is a perfect recording of a past event.

'You Are Not So Smart: Why Your Memory is Mostly Fiction, Why You Have Too Many Friends on Facebook and 46 Other Ways You're Deluding Yourself' by David McRaney (Oneworld, £8.99)