
Sunday, 23 April 2023

The Confidence Game......2

There’s a likely apocryphal story about the French poet Jacques Prévert. One day he was walking past a blind man who held up a sign: “Blind man without a pension”. He stopped to chat. How was it going? Were people helpful? “Not great,” the man replied.


“Could I borrow your sign?” Prévert asked. The blind man nodded.


The poet took the sign, flipped it over and wrote a message.


The next day, he again walked past the blind man. “How is it going now?” he asked. “Incredible,” the man replied. “I’ve never received so much money in my life.”


On the sign, Prévert had written: “Spring is coming, but I won’t see it.”


Give us a compelling story, and we open up. Scepticism gives way to belief. The same approach that makes a blind man’s cup overflow with donations can make us more receptive to almost any persuasive message, for good or for ill.


When we step into a magic show, we come in actively wanting to be fooled. We want deception to cover our eyes and make our world a tiny bit more fantastical, more awesome than it was before. And the magician, in many ways, uses the exact same approaches as the confidence man - only without the destruction of the con’s end game. “Magic is a kind of a conscious, willing con,” says Michael Shermer, a science historian and writer. “You’re not being foolish to fall for it. If you don’t fall for it, the magician is doing something wrong.”


At their root, magic tricks and confidence games share the same fundamental principle: a manipulation of our beliefs. Magic operates at the most basic level of visual perception, manipulating how we see and experience reality. It changes for an instant what we think possible, quite literally taking advantage of our eyes’ and brains’ foibles to create an alternative version of the world. A con does the same thing, but can go much deeper. Long cons, the kind that take weeks, months or even years to unfold, manipulate reality at a higher level, playing with our most basic beliefs about humanity and the world.


The real confidence game feeds on the desire for magic, exploiting our endless taste for an existence that is more extraordinary and somehow more meaningful.


When we fall for a con, we aren’t actively seeking deception - or at least we don’t think we are. As long as the desire remains for magic, for a reality that is somehow greater than our everyday existence, the confidence game will thrive.


Extracted from The Confidence Game by Maria Konnikova


Tuesday, 21 March 2023

From SVB to the BBC: why did no one see the crisis coming?

Michael Skapinker in The FT  

Silicon Valley Bank collapses after its investments in long-dated bonds made it vulnerable to interest rate rises. The BBC is thrown into chaos after suspending its top football pundit and colleagues abandon their posts in solidarity. JPMorgan Chase suffers reputational damage and lawsuits after keeping sex offender Jeffrey Epstein on as a client for five years after he pleaded guilty to soliciting prostitution, including from a minor. 

In all these cases, we can ask, as Queen Elizabeth II did on a visit to the London School of Economics during the global financial crisis in 2008: “Why did no one see it coming?” 

Did anyone in the BBC’s leadership ask whether, if they suspended Gary Lineker from presenting its top Saturday night football programme Match of the Day, other pundits might walk out too? Did SVB run through the risks attached to its investment policies if interest rates rose faster than expected? And why did JPMorgan accede to senior banker Jes Staley’s desire to keep Epstein on? These are dramatic examples of what can go wrong, but any organisation that fails to keep its possible risks under regular review could go the same way. 

All too often senior managers fail to consider the worst-case scenario. Why don’t they listen to doubters? 

Amy Edmondson, a professor at Harvard Business School, says sometimes it is because there are no doubters. Leadership groups become so locked into a “shared myth” that they ignore any suggestions they might be wrong. “We’ve got the well-known confirmation bias where we are predisposed to pick up signals, data, evidence that reinforce our current belief. And we will be filtering out disconfirming evidence,” she says. 

It is like taking the wrong route in a car. “You’re on the highway driving somewhere and you’re heading in the wrong direction, but you don’t know it until you’re just hit over the head by disconfirming data that you can’t miss: you suddenly cross a state line that you didn’t expect to cross.” 

Such groupthink and confirmation bias are prevalent in wider society, where people leap on any evidence to support their view on, for example, climate change, Edmondson says. “Oh my gosh, this is the coldest winter ever. What do you mean global warming?”

In many cases, there are doubters, but they are either reluctant to raise their voices or, when they do, colleagues hesitate to join them. At JPMorgan, there were questions about Epstein. An internal email in 2010 asked: “Are you still comfortable with this client who is now a registered sex offender?” 

James Detert, a professor at the University of Virginia’s Darden School of Business, says evolution has hard-wired us not to deviate from our group. “If you think about our time on earth as a species, for most of it we lived in very small clans, bands, tribes, and our daily struggle was for survival, both around food security and physical safety. In that environment, if you were ostracised, you were going to die. There was no solo living in those days.” 

We carry this fear of being cast out into our workplaces, compounded by the experience of whistleblowers, who sometimes suffer retribution from their employers and are shunned by colleagues. Dissenters present their colleagues with an uncomfortable choice: either to view themselves as cowards for not speaking up too, or to regard the rebel as “some kind of crackpot”. The second is often easier. 

Isn’t the Lineker saga a counter-example? His colleagues supported him, forcing the BBC to quickly see how badly it had miscalculated. Detert says this was an unusual case. Celebrated footballers-turned-commentators are brands themselves, Lineker in particular. The BBC realised how much it needed him, and how easily he could have secured a contract with a rival. Usually, he says, rebels find themselves isolated. 

So what can leaders do to encourage doubters to speak up, to ensure they consider all the possible downsides of their strategies, and escape eventual humiliation or disaster? Detert is not a fan of appointing a “devil’s advocate” who is tasked with giving a contrary view. It is often clear that they are simply going through the motions. He prefers what he calls “joint evaluation”. As well as the preferred policy — investing in long-dated bonds, for example — senior managers should draw up a distinctively different policy and compare the two. This is more likely to show up the flaws in the preferred strategy. 

Simon Walker, whose roles have included head of communications at British Airways and spokesman for Queen Elizabeth, and Sue Williams, Scotland Yard’s former chief kidnap and hostage negotiator, told me at an event organised by the Financial Times’ business networking organisation that leaders should involve every function, from communications to legal to HR, when examining possible future crises. Detert agrees this can be valuable, provided the presence of often under-regarded departments such as HR is taken seriously.

Leaders’ behaviour is a signal of whether they want staff to speak up. Edmondson says: “Leaders of organisations have to go out of their way to invite the dissenting view, the missed risk. Before we close down any conversation where there’s a decision, we need to say, without fail: ‘What are we missing?’ We say: ‘OK, let’s just say we’re wrong about this and it goes badly awry, what would have explained it?’” She recommends calling on people by name, asking what their thoughts are. 

Detert adds that office design can signal to staff that their thoughts are welcome: the leader sitting in open plan, or having bright stripes on the floor indicating the way to their office, or sitting at square tables without place names rather than at rectangular ones where their seat position makes it obvious they are in charge. 

How relevant are these workplace layouts when, post-lockdown, employees no longer come into the office every day? “That’s the $10mn question,” Detert says. On the one hand, remote working might be making it harder for leaders to read the signs that people are uneasy with a strategy. On the other, it could be that people find it easier to speak out from their own homes. They may also feel that other aspects of their lives, such as family, are now more important than work, which could encourage them to talk. 

Others think SVB’s relaxed remote-working culture, which meant senior executives were scattered across the US, contributed to its failure. Nicholas Bloom, a Stanford professor who has studied remote working, told the Financial Times: “It’s hard to have a challenging call over Zoom.” Hedging interest rate risk was more likely to come up over lunch or in small meetings. 

Leaders also need to persistently praise people who speak up. The penalties for doing so are often more obvious than the rewards. Those who keep their heads down are seldom blamed. As Warren Buffett said: “As a group, lemmings may have a rotten image, but no individual lemming has ever received bad press.”

Saturday, 30 April 2022

Yearning for the Miracle Man

Pervez Hoodbhoy in The Dawn


After rough weather and stormy seas battered the country for three quarters of a century, a nation adrift saw two miracle men arise. Separated by 50 years and endowed with magical personalities, Zulfikar Ali Bhutto and Imran Khan set the public imagination on fire by challenging the established order.

After Bhutto was sent to the gallows, many PPP jiyalas self-flagellated, with several immolating themselves in despair. Till their fiery end, they believed in a feudal lord’s promise of socialist utopia. Similar horrific scenes occurred after the assassination of his charismatic daughter. That the father was instrumental in the break-up of Pakistan, and that during the daughter’s years Pakistan fell yet deeper into a pit of corruption, left jiyalas unfazed. Today’s Sindh remains firmly in the grip of a quasi-feudal dynasty and the Bhutto cult.

But still worse might lie ahead as Imran Khan’s cult goes from strength to strength. Writing in Dawn, Adrian Husain worries that a matinee idol with a freshly acquired messianic status is skilfully exploiting widespread anger at corruption to sow hate and division among Pakistanis. Fahd Husain evinces alarm that PTI’s flag-waving ‘youthias’ can see no wrong in whatever Khan says or does. He wonders why even those with Ivy League degrees put their rational faculties into deep sleep. Conversing with PTI supporters, says Ayesha Khan, has become close to impossible.

What enabled these two men to command the senseless devotion of so many millions? Can science explain it? Forget political science. The dark secret is that this isn’t really a science. So, could neuroscience give the answer? Although this area has seen spectacular progress, it is nowhere close to cracking the brain’s inner code.

Instead one must turn to the animal kingdom. Gregariousness and suppression of individuality help protect members of a species because leaders give direction in a difficult environment. But there is a downside. Herds of sheep are known to follow their leader over a cliff and self-destruct. Human groupies have done similarly.

Specific social attitudes — groupthink and its diametrical opposite, scepticism — explain why some societies crave messiahs while others don’t. At one level, everyone is a sceptic. When it comes to everyday life — where to invest one’s life’s savings, what food to eat, or which doctor to see for a serious health problem — we don’t simply believe all that’s told to us. Instead, we look around for evidence and are willing to let go of ideas when contrary evidence piles up. But in political and religious matters, open-mindedness often turns into absolutism.

Absolutism has made Pakistani politics less and less issue-oriented and more and more tribal. It is hard to tell apart PML-N or PPP from PTI on substantive matters such as the economy, foreign debt, or relations with neighbouring countries. The only certainty is that the government in power will blame the previous government for everything.

This absolutism makes most party supporters purely partisan — you are with us or against us. Zealots willingly believe accusations aimed at the other side but dismiss those aimed at their own. A rational PTI supporter, on the other hand, will entertain the Toshakhana case as possible evidence of wrongdoing just as much as Surrey Palace or Avenfield Apartments. He is also willing to admit that all Pakistani political leaders — including Khan — have lifestyles at odds with their declared assets and income. Rational supporters who can say ‘yeh sub chor hain’ exist but are few.

Instead, a culture of intellectual laziness feeds upon wild conspiracy theories coupled with unshakeable belief that political destinies are controlled by some overarching, external power. The ancient Greeks believed that the world was run by the whims and desires of the great god Zeus. For the PTI zealot, the centre of the universe has shifted from Mount Olympus to Washington.

In the zealot’s imagination, an omnipotent American god sits in the White House. With just the flick of his wrist, he ordered Imran Khan’s (former) military sponsors to dump him and then stitched together his fractured political opposition into organising a no-confidence vote. Of course, everyone dutifully obeyed orders. And this supposedly happened inside one of the world’s most anti-American countries! But we know that pigs can fly, don’t we? (Incidentally, America’s severest critic for over 60 years, Noam Chomsky, has reportedly trashed Khan’s claim of a regime change conspiracy.)

Fortunately, not all who stand with a political party, PTI included, are zealots. They do recognise that the country’s entire political class is crass, corrupt, self-seeking, and puts personal interest above that of the electorate. Knowing this they choose a party that, in their estimation, is a lesser evil over a greater one. Democracy depends on this vital principle.

To see this, compare the mass hysteria generated by Khan after being voted out of office with the calmness that followed France’s recent elections. Though despised by the majority of those who voted for him, Macron won handsomely over Marine Le Pen, his far-right, Islamophobic opponent. To her credit, Le Pen did not attribute the defeat either to Washington or to a global Islamic conspiracy. That’s civilised politics.

Why democracy works for France but has had such a rough time in Pakistan is easy to see. It’s not just the military and its constant meddling in political affairs. More important is a culture where emotion and dogma shove truth into the margins. What else explains the enormous popularity of motivational speakers who lecture engineering students on methods to deal with jinns and other supernatural creatures?

Pakistan’s education system stresses faith-unity-discipline at the cost of reason-diversity-liberty. This has seriously impaired the ordinary Pakistani’s capacity to judge. Even in private English-medium schools for the elite, teachers and students remain shackled to a madressah mindset. Why be surprised that so many ‘youthias’ are burger bachas? Unless we allow children to think, the yearning for Miracle Man will continue. It will long outlast Imran Khan — whenever and however he finally exits the scene.

Friday, 15 January 2021

Conspiracy theorists destroy a rational society: resist them

John Thornhill in The FT

Buzz Aldrin’s reaction to the conspiracy theorist who told him the moon landings never happened was understandable, if not excusable. The astronaut punched him in the face. 

Few things in life are more tiresome than engaging with cranks who refuse to accept evidence that disproves their conspiratorial beliefs — even if violence is not the recommended response. It might be easier to dismiss such conspiracy theorists as harmless eccentrics. But while that is tempting, it is in many cases wrong. 

As we have seen during the Covid-19 pandemic and in the mob assault on the US Congress last week, conspiracy theories can infect the real world — with lethal effect. Our response to the pandemic will be undermined if the anti-vaxxer movement persuades enough people not to take a vaccine. Democracies will not endure if lots of voters refuse to accept certified election results. We need to rebut unproven conspiracy theories. But how? 

The first thing to acknowledge is that scepticism is a virtue and critical scrutiny is essential. Governments and corporations do conspire to do bad things. The powerful must be effectively held to account. The US-led war against Iraq in 2003, to destroy weapons of mass destruction that never existed, is a prime example.  

The second is to re-emphasise the importance of experts, while accepting there is sometimes a spectrum of expert opinion. Societies have to base decisions on experts’ views in many fields, such as medicine and climate change, otherwise there is no point in having a debate. Dismissing the views of experts, as Michael Gove famously did during the Brexit referendum campaign, is to erode the foundations of a rational society. No sane passenger would board an aeroplane flown by an unqualified pilot.  

In extreme cases, societies may well decide that conspiracy theories are so harmful that they must suppress them. In Germany, for example, Holocaust denial is a crime. Social media platforms that do not delete such content within 24 hours of it being flagged are fined. 

In Sweden, the government is even establishing a national psychological defence agency to combat disinformation. A study published this week by the Oxford Internet Institute found “computational propaganda” is now being spread in 81 countries. 

Viewing conspiracy theories as political propaganda is the most useful way to understand them, according to Quassim Cassam, a philosophy professor at Warwick university who has written a book on the subject. In his view, many conspiracy theories support an implicit or explicit ideological goal: opposition to gun control, anti-Semitism or hostility to the federal government, for example. What matters to the conspiracy theorists is not whether their theories are true, but whether they are seductive. 

So, as with propaganda, conspiracy theories must be as relentlessly opposed as they are propagated. 

That poses a particular problem when someone as powerful as the US president is the one shouting the theories. Amid huge controversy, Twitter and Facebook have suspended Donald Trump’s accounts. But Prof Cassam says: “Trump is a mega disinformation factory. You can de-platform him and address the supply side. But you still need to address the demand side.” 

On that front, schools and universities should do more to help students discriminate fact from fiction. Behavioural scientists say it is more effective to “pre-bunk” a conspiracy theory — by enabling people to dismiss it immediately — than debunk it later. But debunking serves a purpose, too. 

As of 2019, there were 188 fact-checking sites in more than 60 countries. Their ability to inject facts into any debate can help sway those who are curious about conspiracy theories, even if they cannot convince true believers. 

Under intense public pressure, social media platforms are also increasingly filtering out harmful content and nudging users towards credible sources of information, such as medical bodies’ advice on Covid. 

Some activists have even argued for “cognitive infiltration” of extremist groups, suggesting that government agents should intervene in online chat rooms to puncture conspiracy theories. That may work in China but is only likely to backfire in western democracies, igniting an explosion of new conspiracy theories. 

Ultimately, we cannot reason people out of beliefs that they have not reasoned themselves into. But we can, and should, punish those who profit from harmful irrationality. There is a tried-and-tested method of countering politicians who peddle and exploit conspiracy theories: vote them out of office.

Sunday, 13 September 2020

Statistics, lies and the virus: Five lessons from a pandemic

In an age of disinformation, the value of rigorous data has never been more evident writes Tim Harford in The FT 


Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. 

Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice. 

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. 

The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again. 

There are good reasons for scepticism today. David Spiegelhalter, author of last year’s The Art of Statistics, laments some of the UK government’s coronavirus graphs and testing targets as “number theatre”, with “dreadful, awful” deployment of numbers as a political performance. 

“There is great damage done to the integrity and trustworthiness of statistics when they’re under the control of the spin doctors,” Spiegelhalter says. He is right. But we geeks must be careful — because the damage can come from our own side, too. 

For Huff and his followers, the reason to learn statistics is to catch the liars at their tricks. That sceptical mindset took Huff to a very unpleasant place, as we shall see. Once the cynicism sets in, it becomes hard to imagine that statistics could ever serve a useful purpose.  

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. 

The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate.  

You can appreciate, I hope, my obsession with these two contrasting accounts of statistics: one as a trick, one as a tool. Doll and Hill’s painstaking approach illuminates the world and saves lives into the bargain. 

Huff’s alternative seems clever but is the easy path: seductive, addictive and corrosive. Scepticism has its place, but easily curdles into cynicism and can be weaponised into something even more poisonous than that.

The two worldviews soon began to collide. Huff’s How to Lie with Statistics seemed to be the perfect illustration of why ordinary, honest folk shouldn’t pay too much attention to the slippery experts and their dubious data. 

Such ideas were quickly picked up by the tobacco industry, with its darkly brilliant strategy of manufacturing doubt in the face of evidence such as that provided by Doll and Hill. 

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. 

These tactics are powerful in part because they echo the ideals of science. It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”.  

So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road. The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors.  

Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn. 


The numbers matter 

“One lesson this pandemic has driven home to me is the unbelievable importance of the statistics,” says Spiegelhalter. Without statistical information, we haven’t a hope of grasping what it means to face a new, mysterious, invisible and rapidly spreading virus.

Once upon a time, we would have held posies to our noses and prayed to be spared; now, while we hope for advances from medical science, we can also coolly evaluate the risks. 

Without good data, for example, we would have no idea that this infection is 10,000 times deadlier for a 90-year-old than it is for a nine-year-old — even though we are far more likely to read about the deaths of young people than the elderly, simply because those deaths are surprising. It takes a statistical perspective to make it clear who is at risk and who is not. 

Good statistics, too, can tell us about the prevalence of the virus — and identify hotspots for further activity. Huff may have viewed statistics as a vector for the dark arts of persuasion, but when it comes to understanding an epidemic, they are one of the few tools we possess. 


Don’t take the numbers for granted 

But while we can use statistics to calculate risks and highlight dangers, it is all too easy to fail to ask the question “Where do these numbers come from?” By that, I don’t mean the now-standard request to cite sources, I mean the deeper origin of the data. For all his faults, Huff did not fail to ask the question. 
 
He retells a cautionary tale that has become known as “Stamp’s Law” after the economist Josiah Stamp — warning that no matter how much a government may enjoy amassing statistics, “raise them to the nth power, take the cube root and prepare wonderful diagrams”, it was all too easy to forget that the underlying numbers would always come from a local official, “who just puts down what he damn pleases”. 

The cynicism is palpable, but there is insight here too. Statistics are not simply downloaded from an internet database or pasted from a scientific report. Ultimately, they come from somewhere: somebody counted or measured something, ideally systematically and with care. These efforts at systematic counting and measurement require money and expertise — they are not to be taken for granted.

In my new book, How to Make the World Add Up, I introduce the idea of “statistical bedrock” — data sources such as the census and the national income accounts that are the results of painstaking data collection and analysis, often by official statisticians who get little thanks for their pains and are all too frequently the target of threats, smears or persecution. 
 
In Argentina, for example, long-serving statistician Graciela Bevacqua was ordered to “round down” inflation figures, then demoted in 2007 for producing a number that was too high. She was later fined $250,000 for false advertising — her crime being to have helped produce an independent estimate of inflation. 

In 2011, Andreas Georgiou was brought in to head Greece’s statistical agency at a time when it was regarded as being about as trustworthy as the country’s giant wooden horses. When he started producing estimates of Greece’s deficit that international observers finally found credible, he was prosecuted for his “crimes” and threatened with life imprisonment. Honest statisticians are braver — and more invaluable — than we know.  

In the UK, we don’t habitually threaten our statisticians — but we do underrate them. “The Office for National Statistics is doing enormously valuable work that frankly nobody has ever taken notice of,” says Spiegelhalter, pointing to weekly death figures as an example. “Now we deeply appreciate it.”  

Quite so. This statistical bedrock is essential, and when it is missing, we find ourselves sinking into a quagmire of confusion. 

The foundations of our statistical understanding of the world are often gathered in response to a crisis. For example, nowadays we take it for granted that there is such a thing as an “unemployment rate”, but a hundred years ago nobody could have told you how many people were searching for work. Severe recessions made the question politically pertinent, so governments began to collect the data. 

More recently, the financial crisis hit. We discovered that our data about the banking system was patchy and slow, and regulators took steps to improve it. 

So it is with the Sars-Cov-2 virus. At first, we had little more than a few data points from Wuhan, showing an alarmingly high death rate of 15 per cent — six deaths in 41 cases. Quickly, epidemiologists started sorting through the data, trying to establish how much that case fatality rate had been exaggerated by the fact that the confirmed cases were mostly people in intensive care. Quirks of circumstance — such as the Diamond Princess cruise ship, in which almost everyone was tested — provided more insight.

Johns Hopkins University in the US launched a dashboard of data resources, as did the Covid Tracking Project, an initiative from the Atlantic magazine. An elusive and mysterious threat became legible through the power of this data.  

That is not to say that all is well. Nature recently reported on “a coronavirus data crisis” in the US, in which “political meddling, disorganization and years of neglect of public-health data management mean the country is flying blind”.  

Nor is the US alone. Spain simply stopped reporting certain Covid deaths in early June, making its figures unusable. And while the UK now has an impressively large capacity for viral testing, it was fatally slow to accelerate this in the critical early weeks of the pandemic. 

Ministers repeatedly deceived the public about the number of tests being carried out by using misleading definitions of what was happening. For weeks during lockdown, the government was unable to say how many people were being tested each day. 

Huge improvements have been made since then. The UK’s Office for National Statistics has been impressively flexible during the crisis, for example in organising systematic weekly testing of a representative sample of the population. This allows us to estimate the true prevalence of the virus. Several countries, particularly in east Asia, provide accessible, usable data about recent infections to allow people to avoid hotspots. 

These things do not happen by accident: they require us to invest in the infrastructure to collect and analyse the data. On the evidence of this pandemic, such investment is overdue, in the US, the UK and many other places. 


Even the experts see what they expect to see 

Jonas Olofsson, a psychologist who studies our perceptions of smell, once told me of a classic experiment in the field. Researchers gave people a whiff of scent and asked them for their reactions to it. In some cases, the experimental subjects were told: “This is the aroma of a gourmet cheese.” Others were told: “This is the smell of armpits.” 

In truth, the scent was both: an aromatic molecule present both in runny cheese and in bodily crevices. But the reactions of delight or disgust were shaped dramatically by what people expected. 

Statistics should, one would hope, deliver a more objective view of the world than an ambiguous aroma. But while solid data offers us insights we cannot gain in any other way, the numbers never speak for themselves. They, too, are shaped by our emotions, our politics and, perhaps above all, our preconceptions. 

A striking example is the decision, on March 23 this year, to introduce a lockdown in the UK. In hindsight, that was too late. 

“Locking down a week earlier would have saved thousands of lives,” says Kit Yates, author of The Maths of Life and Death — a view now shared by influential epidemiologist Neil Ferguson and by David King, chair of the “Independent Sage” group of scientists. 

The logic is straightforward enough: at the time, cases were doubling every three to four days. If a lockdown had stopped that process in its tracks a week earlier, it would have prevented two doublings and saved three-quarters of the 65,000 people who died in the first wave of the epidemic, as measured by the excess death toll. 

That might be an overestimate of the effect, since people were already voluntarily pulling back from social interactions. Yet there is little doubt that if a lockdown was to happen at all, an earlier one would have been more effective. And, says Yates, since the infection rate took just days to double before lockdown but long weeks to halve once it started, “We would have got out of lockdown so much sooner . . . Every week before lockdown cost us five to eight weeks at the back end of the lockdown.” 
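
Yates’s claim is simple compound-growth arithmetic, and it is worth making concrete. Below is a minimal back-of-envelope sketch: the one-week delay, the 65,000 first-wave excess-death toll and the three-to-four-day doubling time are the article’s own figures, while the 3.5-day midpoint is my assumption.

```python
# Back-of-envelope arithmetic behind the "two doublings" claim.
# Figures from the article: cases doubled every 3-4 days before lockdown,
# lockdown came a week late, and ~65,000 people died in the first wave.
# The 3.5-day midpoint is an assumption, not the article's number.

doubling_time_days = 3.5
delay_days = 7
doublings = delay_days / doubling_time_days      # = 2.0 doublings

growth_factor = 2 ** doublings                   # epidemic ~4x bigger
fraction_avoidable = 1 - 1 / growth_factor       # = 0.75

first_wave_deaths = 65_000
print(f"Deaths potentially avoided: {fraction_avoidable * first_wave_deaths:,.0f}")
# -> 48,750, i.e. roughly three-quarters of the first-wave toll
```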

Why, then, was the lockdown so late? No doubt there were political dimensions to that decision, but senior scientific advisers to the government seemed to believe that the UK still had plenty of time. On March 12, prime minister Boris Johnson was flanked by Chris Whitty, the government’s chief medical adviser, and Patrick Vallance, chief scientific adviser, in the first big set-piece press conference. Italy had just suffered its 1,000th Covid death and Vallance noted that the UK was about four weeks behind Italy on the epidemic curve. 

With hindsight, this was wrong: now that late-registered deaths have been tallied, we know that the UK passed the same landmark on lockdown day, March 23, just 11 days later.  

It seems that in early March the government did not realise how little time it had. As late as March 16, Johnson declared that infections were doubling every five to six days. 

The trouble, says Yates, is that UK data on cases and deaths suggested that things were moving much faster than that, doubling every three or four days — a huge difference. What exactly went wrong is unclear — but my bet is that it was a cheese-or-armpit problem. 
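
To see why the gap between a five-to-six-day and a three-to-four-day doubling time is “a huge difference”, here is a rough comparison over four weeks of unchecked growth; the midpoint values are my assumptions, not figures from the article.

```python
# Compound growth over 28 days under the two doubling times quoted
# in the article (midpoints of each range are assumed).

days = 28
for label, doubling_time in [("3-4 days", 3.5), ("5-6 days", 5.5)]:
    growth = 2 ** (days / doubling_time)
    print(f"Doubling every {label}: ~{growth:,.0f}x more cases after {days} days")

# Doubling every 3-4 days: ~256x more cases after 28 days
# Doubling every 5-6 days: ~34x more cases after 28 days
```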

Some influential epidemiologists had produced sophisticated models suggesting that a doubling time of five to six days seemed the best estimate, based on data from the early weeks of the epidemic in China. These models seemed persuasive to the government’s scientific advisers, says Yates: “If anything, they did too good a job.” 

Yates argues that the epidemiological models that influenced the government’s thinking about doubling times were sufficiently detailed and convincing that when the patchy, ambiguous, early UK data contradicted them, it was hard to readjust. We all see what we expect to see. 

The result, in this case, was a delay to lockdown: that led to a much longer lockdown, many thousands of preventable deaths and needless extra damage to people’s livelihoods. The data is invaluable but, unless we can overcome our own cognitive filters, the data is not enough. 


The best insights come from combining statistics with personal experience 

The expert who made the biggest impression on me during this crisis was not the one with the biggest name or the biggest ego. It was Nathalie MacDermott, an infectious-disease specialist at King’s College London, who in mid-February calmly debunked the more lurid public fears about how deadly the new coronavirus was. 

Then, with equal calm, she explained to me that the virus was very likely to become a pandemic, that barring extraordinary measures we could expect it to infect more than half the world’s population, and that the true fatality rate was uncertain but seemed to be something between 0.5 and 1 per cent. In hindsight, she was broadly right about everything that mattered. MacDermott’s educated guesses pierced through the fog of complex modelling and data-poor speculation. 

I was curious as to how she did it, so I asked her. “People who have spent a lot of their time really closely studying the data sometimes struggle to pull their head out and look at what’s happening around them,” she said. “I trust data as well, but sometimes when we don’t have the data, we need to look around and interpret what’s happening.” 

MacDermott worked in Liberia in 2014 on the front line of an Ebola outbreak that killed more than 11,000 people. At the time, international organisations were sanguine about the risks, while the local authorities were in crisis. When she arrived in Liberia, the treatment centres were overwhelmed, with patients lying on the floor, bleeding freely from multiple areas and dying by the hour. 

The horrendous experience has shaped her assessment of subsequent risks: on the one hand, Sars-Cov-2 is far less deadly than Ebola; on the other, she has seen the experts move too slowly while waiting for definitive proof of a risk. 

“From my background working with Ebola, I’d rather be overprepared than underprepared because I’m in a position of denial,” she said. 

There is a broader lesson here. We can try to understand the world through statistics, which at their best provide a broad and representative overview that encompasses far more than we could personally perceive. Or we can try to understand the world up close, through individual experience. Both perspectives have their advantages and disadvantages. 

Muhammad Yunus, a microfinance pioneer and Nobel laureate, has praised the “worm’s eye view” over the “bird’s eye view”, which is a clever sound bite. But birds see a lot too. Ideally, we want both the rich detail of personal experience and the broader, low-resolution view that comes from the spreadsheet. Insight comes when we can combine the two — which is what MacDermott did. 


Everything can be polarised 

Reporting on the numbers behind the Brexit referendum, the vote on Scottish independence, several general elections and the rise of Donald Trump, I found poison in the air: many claims were made in bad faith, indifferent to the truth or even embracing the most palpable lies in an effort to divert attention from the issues. Fact-checking in an environment where people didn’t care about the facts, only whether their side was winning, was a thankless experience.

For a while, one of the consolations of doing data-driven journalism during the pandemic was that it felt blessedly free of such political tribalism. People were eager to hear the facts after all; the truth mattered; data and expertise were seen to be helpful. The virus, after all, could not be distracted by a lie on a bus.  

That did not last. America polarised quickly, with mask-wearing becoming a badge of political identity — and more generally the Democrats seeking to underline the threat posed by the virus, with Republicans following President Trump in dismissing it as overblown.  

The prominent infectious-disease expert Anthony Fauci does not strike me as a partisan figure — but the US electorate thinks otherwise. He is trusted by 32 per cent of Republicans and 78 per cent of Democrats. 

The strangest illustration comes from the Twitter account of the Republican politician Herman Cain, which late in August tweeted: “It looks like the virus is not as deadly as the mainstream media first made it out to be.” Cain, sadly, died of Covid-19 in July — but it seems that political polarisation is a force stronger than death. 

Not every issue is politically polarised, but when something is dragged into the political arena, partisans often prioritise tribal belonging over considerations of truth. One can see this clearly, for example, in the way that highly educated Republicans and Democrats are further apart on the risks of climate change than less-educated Republicans and Democrats. 

Rather than bringing some kind of consensus, more years of education simply seem to provide people with the cognitive tools they require to reach the politically convenient conclusion. From climate change to gun control to certain vaccines, there are questions for which the answer is not a matter of evidence but a matter of group identity. 

In this context, the strategy that the tobacco industry pioneered in the 1950s is especially powerful. Emphasise uncertainty, expert disagreement and doubt and you will find a willing audience. If nobody really knows the truth, then people can believe whatever they want. 

All of which brings us back to Darrell Huff, statistical sceptic and author of How to Lie with Statistics. While his incisive criticism of statistical trickery has made him a hero to many of my fellow nerds, his career took a darker turn, with scepticism providing the mask for disinformation. 

Huff worked on a tobacco-funded sequel, How to Lie with Smoking Statistics, casting doubt on the scientific evidence that cigarettes were dangerous. (Mercifully, it was not published.)  

Huff also appeared in front of a US Senate committee that was pondering mandating health warnings on cigarette packaging. He explained to the lawmakers that there was a statistical correlation between babies and storks (which, it turns out, there is) even though the true origin of babies is rather different. The connection between smoking and cancer, he argued, was similarly tenuous.  

Huff’s statistical scepticism turned him into the ancestor of today’s contrarian trolls, spouting bullshit while claiming to be the straight-talking voice of common sense. It should be a warning to us all. There is a place in anyone’s cognitive toolkit for healthy scepticism, but that scepticism can all too easily turn into a refusal to look at any evidence at all.

This crisis has reminded us of the lure of partisanship, cynicism and manufactured doubt. But surely it has also demonstrated the power of honest statistics. Statisticians, epidemiologists and other scientists have been producing inspiring work in the footsteps of Doll and Hill. I suggest we set aside How to Lie with Statistics and pay attention. 

Carefully gathering the data we need, analysing it openly and truthfully, sharing knowledge and unlocking the puzzles that nature throws at us — this is the only chance we have to defeat the virus and, more broadly, an essential tool for understanding a complex and fascinating world.

Tuesday, 28 June 2016

Why bad ideas refuse to die

Steven Poole in The Guardian

In January 2016, the rapper B.o.B took to Twitter to tell his fans that the Earth is really flat. “A lot of people are turned off by the phrase ‘flat earth’,” he acknowledged, “but there’s no way u can see all the evidence and not know … grow up.” At length the astrophysicist Neil deGrasse Tyson joined in the conversation, offering friendly corrections to B.o.B’s zany proofs of non-globism, and finishing with a sarcastic compliment: “Being five centuries regressed in your reasoning doesn’t mean we all can’t still like your music.”

Actually, it’s a lot more than five centuries regressed. Contrary to what we often hear, people didn’t think the Earth was flat right up until Columbus sailed to the Americas. In ancient Greece, the philosophers Pythagoras and Parmenides had already recognised that the Earth was spherical. Aristotle pointed out that you could see some stars in Egypt and Cyprus that were not visible at more northerly latitudes, and also that the Earth casts a curved shadow on the moon during a lunar eclipse. The Earth, he concluded with impeccable logic, must be round.

The flat-Earth view was dismissed as simply ridiculous – until very recently, with the resurgence of apparently serious flat-Earthism on the internet. An American named Mark Sargent, formerly a professional videogamer and software consultant, has had millions of views on YouTube for his Flat Earth Clues video series. (“You are living inside a giant enclosed system,” his website warns.) The Flat Earth Society is alive and well, with a thriving website. What is going on?

Many ideas have been brilliantly upgraded or repurposed for the modern age, and their revival seems newly compelling. Some ideas from the past, on the other hand, are just dead wrong and really should have been left to rot. When they reappear, what is rediscovered is a shambling corpse. These are zombie ideas. You can try to kill them, but they just won’t die. And their existence is a big problem for our normal assumptions about how the marketplace of ideas operates.

The phrase “marketplace of ideas” was originally used as a way of defending free speech. Just as traders and customers are free to buy and sell wares in the market, so freedom of speech ensures that people are free to exchange ideas, test them out, and see which ones rise to the top. Just as good consumer products succeed and bad ones fail, so in the marketplace of ideas the truth will win out, and error and dishonesty will disappear.

There is certainly some truth in the thought that competition between ideas is necessary for the advancement of our understanding. But the belief that the best ideas will always succeed is rather like the faith that unregulated financial markets will always produce the best economic outcomes. As the IMF chief Christine Lagarde put this standard wisdom laconically in Davos: “The market sorts things out, eventually.” Maybe so. But while we wait, very bad things might happen.

Zombies don’t occur in physical marketplaces – take technology, for example. No one now buys Betamax video recorders, because that technology has been superseded and has no chance of coming back. (The reason that other old technologies, such as the manual typewriter or the acoustic piano, are still in use is that, according to the preferences of their users, they have not been superseded.) So zombies such as flat-Earthism simply shouldn’t be possible in a well‑functioning marketplace of ideas. And yet – they live. How come?

One clue is provided by economics. It turns out that the marketplace of economic ideas itself is infested with zombies. After the 2008 financial crisis had struck, the Australian economist John Quiggin published an illuminating work called Zombie Economics, describing theories that still somehow shambled around even though they were clearly dead, having been refuted by actual events in the world. An example is the notorious efficient markets hypothesis, which holds, in its strongest form, that “financial markets are the best possible guide to the value of economic assets and therefore to decisions about investment and production”. That, Quiggin argues, simply can’t be right. Not only was the efficient markets hypothesis refuted by the global meltdown of 2007–8, in Quiggin’s view it actually caused it in the first place: the idea “justified, and indeed demanded, financial deregulation, the removal of controls on international capital flows, and a massive expansion of the financial sector. These developments ultimately produced the global financial crisis.”

Even so, an idea will have a good chance of hanging around as a zombie if it benefits some influential group of people. The efficient markets hypothesis is financially beneficial for bankers who want to make deals unencumbered by regulation. A similar point can be made about the privatisation of state-owned industry: it is seldom good for citizens, but is always a cash bonanza for those directly involved.

The marketplace of ideas, indeed, often confers authority through mere repetition – in science as well as in political campaigning. You probably know, for example, that the human tongue has regional sensitivities: sweetness is sensed on the tip, saltiness and sourness on the sides, and bitterness at the back. At some point you’ve seen a scientific tongue map showing this – they appear in cookery books as well as medical textbooks. It’s one of those nice, slightly surprising findings of science that no one questions. And it’s rubbish.

 
A fantasy map of a flat earth. Photograph: Antar Dayal/Getty Images/Illustration Works

As the eminent biology professor Stuart Firestein explained in his 2012 book Ignorance: How it Drives Science, the tongue-map myth arose because of a mistranslation of a 1901 German physiology textbook. Regions of the tongue are just “very slightly” more or less sensitive to each of the four basic tastes, but they each can sense all of them. The translation “considerably overstated” the original author’s claims. And yet the mythical tongue map has endured for more than a century.

One of the paradoxes of zombie ideas, though, is that they can have positive social effects, so the answer is not necessarily to suppress them: even apparently vicious and disingenuous ideas can lead to illuminating rebuttal and productive research. Few would argue that a commercial marketplace needs fraud and faulty products. But in the marketplace of ideas, zombies can actually be useful. Or if not, they can at least make us feel better. That, paradoxically, is what I think the flat-Earthers of today are really offering – comfort.

Today’s rejuvenated flat-Earth philosophy, as promoted by rappers and YouTube videos, is not simply a recrudescence of pre-scientific ignorance. It is, rather, the mother of all conspiracy theories. The point is that everyone who claims the Earth is round is trying to fool you, and keep you in the dark. In that sense, it is a very modern version of an old idea.

As with any conspiracy theory, the flat-Earth idea is introduced by way of a handful of seeming anomalies, things that don’t seem to fit the “official” story. Have you ever wondered, the flat-Earther will ask, why commercial aeroplanes don’t fly over Antarctica? It would, after all, be the most direct route from South Africa to New Zealand, or from Sydney to Buenos Aires – if the Earth were round. But it isn’t. There is no such thing as the South Pole, so flying over Antarctica wouldn’t make any sense. Plus, the Antarctic treaty, signed by the world’s most powerful countries, bans any flights over it, because something very weird is going on there. So begins the conspiracy sell. Well, in fact, some commercial routes do fly over part of the continent of Antarctica. The reason none fly over the South Pole itself is because of aviation rules that require any aircraft taking such a route to have expensive survival equipment for all passengers on board – which would obviously be prohibitive for a passenger jet.

OK, the flat-Earther will say, then what about the fact that photographs taken from mountains or hot-air balloons don’t show any curvature of the horizon? It is perfectly flat – therefore the Earth must be flat. Well, a reasonable person will respond, it looks flat because the Earth, though round, is really very big. But photographs taken from the International Space Station in orbit show a very obviously curved Earth.

And here is where the conspiracy really gets going. To a flat-Earther, any photograph from the International Space Station is just a fake. So too are the famous photographs of the whole round Earth hanging in space that were taken on the Apollo missions. Of course, the Moon landings were faked too. This is a conspiracy theory that swallows other conspiracy theories whole. According to Mark Sargent’s “enclosed world” version of the flat-Earth theory, indeed, space travel had to be faked because there is actually an impermeable solid dome enclosing our flat planet. The US and USSR tried to break through this dome by nuking it in the 1950s: that’s what all those nuclear tests were really about.

 
Flat-Earthers regard as fake any photographs of the Earth that were taken on the Apollo missions. Photograph: Alamy

The intellectual dynamic here is one of rejection and obfuscation. A lot of ingenuity evidently goes into the elaboration of modern flat-Earth theories to keep them consistent. It is tempting to suppose that some of the leading writers (or, as fans call them, “researchers”) on the topic are cynically having some intellectual fun, but there are also a lot of true believers on the messageboards who find the notion of the “globist” conspiracy somehow comforting and consonant with their idea of how the world works. You might think that the really obvious question here, though, is: what purpose would such an incredibly elaborate and expensive conspiracy serve? What exactly is the point?

It seems to me that the desire to believe such stuff stems from a deranged kind of optimism about the capabilities of human beings. It is a dark view of human nature, to be sure, but it is also rather awe-inspiring to think of secret agencies so single-minded and powerful that they really can fool the world’s population over something so enormous. Even the pro-Brexit activists who warned one another on polling day to mark their crosses with a pen so that MI5 would not be able to erase their votes were in a way expressing a perverse pride in the domination of Britain’s spookocracy. “I literally ran out of new tin hat topics to research and I STILL wouldn’t look at this one without embarrassment,” confesses Sargent on his website, “but every time I glanced at it there was something unresolved, and once I saw the near perfection of the whole plan, I was hooked.” It is rather beautiful. Bonkers, but beautiful. As the much more noxious example of Scientology also demonstrates, it is all too tempting to take science fiction for truth – because narratives always make more sense than reality.

We know that it’s a good habit to question received wisdom. Sometimes, though, healthy scepticism can run over into paranoid cynicism, and giant conspiracies seem oddly consoling. One reason why myths and urban legends hang around so long seems to be that we like simple explanations – such as that immigrants are to blame for crumbling public services – and are inclined to believe them. The “MMR causes autism” scare perpetrated by Andrew Wakefield, for example, had the apparent virtue of naming a concrete cause (vaccination) for a deeply worrying and little-understood syndrome (autism). Years after it was shown that there was nothing to Wakefield’s claims, there is still a strong and growing “anti-vaxxer” movement, particularly in the US, which poses a serious danger to public health. The benefits of immunisation, it seems, have been forgotten.

The yearning for simple explanations also helps to account for the popularity of outlandish conspiracy theories that paint a reassuring picture of all the world’s evils as being attributable to a cabal of supervillains. Maybe a secret society really is running the show – in which case the world at least has a weird kind of coherence. Hence, perhaps, the disappointed amazement among some of those who had not expected their protest votes for Brexit to count.

And what happens when the world of ideas really does operate as a marketplace? It happens to be the case that many prominent climate sceptics have been secretly funded by oil companies. The idea that there is some scientific controversy over whether burning fossil fuels has contributed in large part to the present global warming (there isn’t) is an idea that has been literally bought and sold, and remains extraordinarily successful. That, of course, is just a particularly dramatic example of the way all western democracies have been captured by industry lobbying and party donations, in which friendly consideration of ideas that increase the profits of business is simply purchased, like any other commodity. If the marketplace of ideas worked as advertised, not only would this kind of corruption be absent, it would be impossible in general for ideas to stay rejected for hundreds or thousands of years before eventually being revived. Yet that too has repeatedly happened.

While the return of flat-Earth theories is silly and rather alarming, it also illustrates some real and deep issues about human knowledge. The first issue is one of trust: how, after all, do you or I know that the Earth really is round? Essentially, we take it on trust. We may have experienced some common indications of it ourselves, but we accept the explanations of others. The experts all say the Earth is round; we believe them, and get on with our lives. Rejecting the economic consensus that Brexit would be bad for the UK, Michael Gove said that the British public had had enough of experts (or at least of experts who lurked in acronymically named organisations), but the truth is that we all depend on experts for most of what we think we know.

The second issue is that we cannot actually know for sure that the way the world appears to us is not the result of some giant conspiracy or deception. The modern flat-Earth theory comes quite close to an even more all-encompassing species of conspiracy theory. As some philosophers have argued, it is not entirely impossible that God created the whole universe, including fossils, ourselves and all our (false) memories, only five minutes ago. Or it might be the case that all my sensory impressions are being fed to my brain by a clever demon intent on deceiving me (Descartes), or by a virtual-reality program controlled by evil sentient artificial intelligences (The Matrix).

The resurgence of flat-Earth theory has also spawned many web pages that employ mathematics, science, and everyday experience to explain why the world actually is round. This is a boon for public education. And we should not give in to the temptation to conclude that belief in a conspiracy is prima facie evidence of stupidity. Evidently, conspiracies really happen. Members of al-Qaida really did conspire in secret to fly planes into the World Trade Center. And, as Edward Snowden revealed, the American and British intelligence services really did conspire in secret to intercept the electronic communications of millions of ordinary citizens. Perhaps the most colourful official conspiracy that we now know of happened in China. When the half-millennium-old Tiananmen Gate was found to be falling down in the 1960s, it was secretly replaced, bit by bit, with an exact replica – a successful conspiracy involving nearly 3,000 people, who managed to keep the secret for years.

Indeed, a healthy openness to conspiracy may be said to underlie much honest intellectual inquiry. This is how the physicist Frank Wilczek puts it: “When I was growing up, I loved the idea that great powers and secret meanings lurk behind the appearance of things.” Newton’s grand idea of an invisible force (gravity) running the universe was definitely a cosmological conspiracy theory in this sense. Yes, many conspiracy theories are zombies – but so is the idea that conspiracies never happen.

 
‘When the half-millennium-old Tiananmen Gate was found to be falling down in the 1960s, it was secretly replaced, bit by bit, with an exact replica’ Photograph: Kevin Frayer/Getty Images

Things are better, one assumes, in the rarefied marketplace of scientific ideas. There, the revered scientific journals have rigorous editorial standards. Zombies and other market failures are thereby prevented. Not so fast. Remember the tongue map. It turns out that the marketplace of scientific ideas is not perfect either.

The scientific community operates according to the system of peer review, in which an article submitted to a journal will be sent out by the editor to several anonymous referees who are expert in the field and will give a considered view on whether the paper is worthy of publication, or will be worthy if revised. (In Britain, the Royal Society began to seek such reports in 1832.) The barriers to entry for the best journals in the sciences and humanities mean that – at least in theory – it is impossible to publish clownish, evidence-free hypotheses.

But there are increasing rumblings in the academic world itself that peer review is fundamentally broken – even that it actively suppresses good new ideas while letting through a multitude of very bad ones. “False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years,” reported Scientific American magazine in 2011. Indeed, the writer of that column, a professor of medicine named John Ioannidis, had previously published a famous paper titled Why Most Published Research Findings Are False. The problems, he noted, are particularly severe in healthcare research, where conflicts of interest arise because studies are funded by large drug companies, but there is also a big problem in psychology.
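
Ioannidis’s core argument is statistical rather than scandalous: if only a minority of the hypotheses that researchers test are actually true, then even honestly conducted significance tests will flood the journals with false positives. A minimal back-of-the-envelope sketch in Python makes the arithmetic plain (the three input numbers are illustrative assumptions, not measured values):

    # Back-of-the-envelope arithmetic behind "most published findings
    # are false". All three inputs are illustrative assumptions.
    prior_true = 0.10  # share of tested hypotheses that are actually true
    alpha = 0.05       # false-positive rate of a conventional significance test
    power = 0.50       # chance that a real effect yields a significant result

    true_positives = prior_true * power          # 5% of all studies
    false_positives = (1 - prior_true) * alpha   # 4.5% of all studies

    ppv = true_positives / (true_positives + false_positives)
    print(f"Share of 'significant' results that are real: {ppv:.0%}")  # ~53%

On those assumptions, nearly half of the “positive” findings in the literature would be spurious – and that is before publication bias or flexible analysis enters the picture.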

Take the widely popularised idea of priming. In 1996, a paper was published claiming that experimental subjects who had been verbally primed to think of old age – via words such as bingo, Florida, grey, and wrinkles – subsequently walked more slowly when they left the laboratory than those who had not been primed. It was a dazzling idea, and it led to a flurry of other findings that priming could affect how well you did on a quiz, or how polite you were to a stranger. In recent years, however, researchers have become suspicious, and have been unable to reproduce the findings of many of the early studies. This is not definitive proof of falsity, but it does show that publication in a peer-reviewed journal is no guarantee of reliability. Psychology, some argue, is currently going through a crisis in replicability, which Daniel Kahneman has called a looming “train wreck” for the field as a whole.

Could priming be a future zombie idea? Well, most people think it unlikely that all such priming effects will be refuted, since there is now such a wide variety of studies on them. The more interesting problem is to work out what scientists call the idea’s “ecological validity” – that is, how well do the effects translate from the artificial simplicity of the lab situation to the ungovernable messiness of real life? This controversy in psychology just shows science working as it should – being self-correcting. One marketplace-of-ideas problem here, though, is that papers with surprising and socially intriguing results will be described throughout the media, and lauded as definitive evidence in popularising books, as soon as they are published, and long before awkward second questions begin to be asked.

It would be sensible, for a start, to make an apparently trivial rhetorical adjustment: retire the popular phrase “studies show …” and limit ourselves to phrases such as “studies suggest” or “studies indicate”. After all, “showing” strongly implies proving, which is all too rare an activity outside mathematics. Studies can always be reconsidered. That is part of their power.

Nearly every academic inquirer I talked to while researching this subject says that the interface of research with publishing is seriously flawed. Partly because the incentives are all wrong – a “publish or perish” culture rewards academics for quantity of published research over quality. And partly because of the issue of “publication bias”: the studies that get published are the ones that have yielded hoped-for results. Studies that fail to show what they hoped for end up languishing in desk drawers.
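
To see how publication bias can mislead even when every individual study is honest, consider a minimal simulation in Python (the effect size, sample size and study count below are illustrative assumptions): many small studies estimate a modest true effect, but only the statistically significant ones make it into print.

    # Minimal simulation of publication bias; all numbers are
    # illustrative assumptions, not real data.
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT = 0.2   # modest true difference between two groups
    N = 30              # participants per group in each small study
    STUDIES = 2000      # studies conducted in total

    published = []
    for _ in range(STUDIES):
        treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
        control = [random.gauss(0, 1) for _ in range(N)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (2 / N) ** 0.5              # standard error of the difference
        if abs(diff / se) > 1.96:        # "significant" at p < 0.05
            published.append(diff)       # this one gets published...
        # ...the rest languish in the desk drawer

    print(f"Published: {len(published)} of {STUDIES} studies")
    print(f"True effect: {TRUE_EFFECT}; mean published effect: "
          f"{statistics.mean(published):.2f}")

On a typical run only a small fraction of the studies clear the significance bar, and the average effect they report is roughly three times the true one: each study is honest, yet the published record is systematically inflated.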

One reform suggested by many people to counteract publication bias would be to encourage the publication of more “negative findings” – papers where a hypothesis was not backed up by the experiment performed. One problem, of course, is that such findings are not very exciting. Negative results do not make headlines. (And they sound all the duller for being called “negative findings”, rather than being framed as positive discoveries that some ideas won’t fly.)

The publication-bias issue is even more pressing in the field of medicine, where it is estimated that the results of around half of all trials conducted are never published at all, because their results are negative. “When half the evidence is withheld,” writes the medical researcher Ben Goldacre, “doctors and patients cannot make informed decisions about which treatment is best.” Accordingly, Goldacre has kickstarted a campaigning group named AllTrials to demand that all results be published.

When lives are not directly at stake, however, it might be difficult to publish more negative findings in other areas of science. One idea, floated by the Economist, is that “Journals should allocate space for ‘uninteresting’ work, and grant-givers should set aside money to pay for it.” It sounds splendid, to have a section in journals for tedious results, or maybe an entire journal dedicated to boring and perfectly unsurprising research. But good luck getting anyone to fund it.

The good news, though, is that some of the flaws in the marketplace of scientific ideas might be hidden strengths. It’s true that some people think peer review, at its glacial pace and with its bias towards the existing consensus, works to actively repress new ideas that are challenging to received opinion. Notoriously, for example, the paper that first announced the invention of graphene – a way of arranging carbon in a sheet only a single atom thick – was rejected by Nature in 2004 on the grounds that it was simply “impossible”. But that idea was too impressive to be suppressed; in fact, the authors of the graphene paper had it published in Science magazine only six months later. Most people have faith that very well-grounded results will find their way through the system. Yet it is right that doing so should be difficult. If this marketplace were more liquid and efficient, we would be overwhelmed with speculative nonsense. Even peremptory or aggressive dismissals of new findings have a crucial place in the intellectual ecology. Science would not be so robust a means of investigating the world if it eagerly embraced every shiny new idea that comes along. It has to put on a stern face and say: “Impress me.” Great ideas may well face a lot of necessary resistance, and take a long time to gain traction. And we wouldn’t wish things to be otherwise.

In many ways, then, the marketplace of ideas does not work as advertised: it is not efficient, there are frequent crashes and failures, and dangerous products often win out, to widespread surprise and dismay. It is important to rethink the notion that the best ideas reliably rise to the top: that itself is a zombie idea, which helps entrench powerful interests. Yet even zombie ideas can still be useful when they motivate energetic refutations that advance public education. Yes, we may regret that people often turn to the past to renew an old theory such as flat-Earthism, which really should have stayed dead. But some conspiracies are real, and science is always engaged in trying to uncover the hidden powers behind what we see. The resurrection of zombie ideas, as well as the stubborn rejection of promising new ones, can both be important mechanisms for the advancement of human understanding.

Monday, 11 May 2015

Natural leaders are made in retrospect

Ed Smith in Cricinfo

There is no template for the perfect captain. Some of the game's greatest were not identified as such straightaway


It took five years of not winning and patience before success came for Mike Brearley (centre) and Middlesex in 1976 © Getty Images



So England finished a tour of the West Indies with some broad areas of consensus. The results were disappointing, the immediate future is dodgy, opportunities were missed and the captain - according to many of the loudest voices - is not a natural leader.

The tour, of course, happened in 2008-09. England lost 1-0. But Andrew Strauss, after that tricky start, became one of the most successful England captains of modern times. Now England are turning to Strauss again, this time as director of cricket, because his leadership credentials are, of course, axiomatic. How quickly people forget views they once vehemently held. Memory is not quite the same thing as intelligence, or even judgement, but it is a good first step on the road towards greater scepticism.

The problem with analysing leadership, especially captaincy, is that people forget how rare it is for successful leaders to stand out as "natural leaders" from the very beginning of their tenure. In fact, the idea of "natural" leadership is usually a retrospective trick - or narrative fallacy - used to make sense of events that, at the time, felt far more contingent and unpredictable.

The most iconic example of great captaincy is also the most misused: Mike Brearley. Perception: Brearley could wander into any team, move gully a bit deeper, and, hey presto, you win by an innings. Reality: by the time Brearley did his Ashes conjuring trick in 1981, he had indeed established a reputation for tactical and managerial brilliance. Crucially, however, that reputation was hard-earned over many years at the coalface. Even more pertinently, Brearley's captaincy could easily have been cut short before anyone noticed how good he was.

Brearley took over as captain of Middlesex in 1971. The seasons of 1971, 1972, 1973, 1974 and 1975 slipped by without Middlesex winning the championship. Brearley has privately told me that in those early seasons he often found the job very difficult. Did he have enough support? Results improved, but not always evenly. In that six-year period, Brearley's gift for patience was tested. So was the constancy of Middlesex. As I write this, only one of the 18 captains of England's first-class counties has been leader for six uninterrupted seasons. Clearly there aren't dozens of Brearleys out there, if only clubs would persevere with them. But if Middlesex had been less patient - or, put differently, quicker to jump at the first convenient excuse for a change - then England could have been deprived of a superb leader.

Which leads to the second problem with analysing captains. Pundits tend to have a fairly fixed idea of what a natural leader looks like, and then judge the incumbent against their own personal template.

To the alpha-male mindset, the captain should be the leader of the pack, the macho hero. To the Machiavellian world view, a captain should be streetwise and opportunistic. To the progressive, leadership relies on novelty and innovation. To the nostalgic, quite the reverse - the answers always reside in the past. To the laissez-faire, he must be relaxed. To the hard man, a captain must rule with fear.

All valid, none essential. There is no such thing as the ultimate template for a good captain. All good leaders are different. Indeed, a preparedness to be different - rather than copying someone else - is perhaps the only prerequisite for being any good.

In terms of value added, few managers can match Billy Beane of the Oakland Athletics baseball team. His decisions and the wins that followed have earned Oakland hundreds of millions of dollars (as this article on FiveThirtyEight demonstrates). Despite an emotional temperament, Beane has tried to remove the cult of personality from his decision-making. This is leadership by methodology - thinking, or more accurately calculating, your way to victory.

At the other end of the spectrum stands Sir Alex Ferguson. Anyone who has read Ferguson's autobiography knows that the idea of "copying" Ferguson is inconceivable. His management was founded on the controlling and coercive nature of his personality. Some players seethe with violence. Very occasionally, that survives the transition into management. No leader achieves greatness by punching people. Some, however, clearly benefit from the impression that it would be a grave error for anyone to entirely rule out the possibility of the direct approach. Ferguson ran a pub before becoming a manager. "Sometimes I would come home with a split head or black eye. That was pub life. When fights broke out, it was necessary to jump in to restore order."

Now imagine hearing Pep Guardiola, Roberto Martinez or Arsene Wenger saying that. Inconceivable. Yet all are fine managers.



Despite England's poor finish in the West Indies, there has been no outward sign that Alastair Cook is wilting © Getty Images


Which leads us back to Alastair Cook. It is clear that Cook does not fit some preconceptions of cricketing leadership. He is not restless and ingenious, as Michael Vaughan was. He does not cast a magnetic and charismatic presence over the whole arena, as MS Dhoni does. And yet there have been fine captains who possessed neither of those assets.

Last week, after discussing captaincy in the commentary box, the brilliant statistician Andrew Samson passed me two pieces of paper about the records of two captains, each after 31 Tests in charge. The first read:

Runs: 2478
Average: 45.88
Hundreds: 8
Wins: 13
Losses: 9
Draws: 9

The second read:

Runs: 2792
Average: 60.69
Hundreds: 10
Wins: 6
Losses: 9
Draws: 15
Tied: 1

The first is Cook, the second Allan Border, the man who turned around Australian cricket in the 1980s. (Although, of course, the nature of the opposition should always be taken into account with comparative stats.)

Border had also faced criticism about his manner and tactics. But eventually his resilience and run-scoring provided such an inspiring example that his team fell in step. The two men, so different on the surface - Border was known as Captain Grumpy, where Cook is courteous and self-deprecating - share an epic capacity for endurance. Border outlasted many bowling attacks and, eventually, his critics.

The case against Cook tends to rest on the conviction that he is about to crack, that he can't take much more. This theory is conveniently self-perpetuating because it encourages his detractors to press on with their endeavours. They look eagerly for signs that the strain is becoming too great. This type of thinking contributed to his sacking as ODI captain ridiculously close to the World Cup.

Yet in the West Indies - a patchy tour for England, marred by some selection errors - there was no outward sign at all that Cook was wilting. Quite the reverse. His hundred in the third Test was almost faultless.

Many bowling attacks have pinned their hopes on Cook cracking, only to find the wait inconveniently lengthy. I wonder if the detractors of Cook's captaincy will experience a similar story.