
Thursday, 10 August 2023

'Karma is a Bitch': Is It?

Karma's Complex Dance: A Critical Examination of the Concept's Moral Implications

The phrase "Karma is a bitch" has become a ubiquitous expression in modern language, reflecting the notion that negative actions will inevitably result in negative consequences. The concept of karma originates from Hindu and Buddhist traditions and emphasizes the idea that one's actions will determine their future experiences. While the phrase might convey a sense of poetic justice, a comprehensive analysis reveals that the concept of karma is more nuanced and complex, encompassing both positive and negative dimensions. This essay aims to critically evaluate the moral implications of the concept of karma, drawing on a variety of examples from history, philosophy, and popular culture.

  1. Ethical Justification and Cosmic Justice: Karma is often portrayed as a form of cosmic justice, where good deeds lead to positive outcomes and bad deeds to negative ones. While this interpretation might provide a sense of moral reassurance, it raises ethical questions. The inherent belief that every individual's circumstances are the direct result of their actions can lead to victim-blaming. For instance, attributing poverty or illness solely to past actions overlooks systemic factors and external influences that shape a person's life.

    Example: The caste system in India historically justified social hierarchies based on karma, leading to the oppression of lower castes and reinforcing inequality.


  2. Causality and Complexity: The linear relationship between actions and consequences, as depicted by the phrase, oversimplifies the intricate web of cause-and-effect relationships. Actions often have far-reaching and unpredictable consequences, involving multiple agents and factors. The concept of karma tends to ignore this complexity and overemphasizes individual agency.

    Example: The butterfly effect, a concept from chaos theory, illustrates how small actions can lead to significant and unforeseeable outcomes, challenging the deterministic view of karma.


  3. Moral Accountability and Personal Growth: The concept of karma raises the question of whether the fear of negative consequences or the promise of rewards is the primary motivation behind moral behavior. An approach that focuses solely on retribution overlooks the potential for personal growth, empathy, and genuine concern for others.

    Example: In Viktor Frankl's "Man's Search for Meaning," he emphasizes the importance of finding meaning and purpose in suffering, suggesting that growth can emerge from even the most challenging circumstances.


  4. Interpretations and Cultural Variation: Different cultures and philosophical schools interpret karma in diverse ways. Some traditions view karma as a way to break free from the cycle of suffering, while others emphasize fulfilling one's duty regardless of the outcomes. The phrase "Karma is a bitch" disregards this richness of interpretation.

    Example: Jainism emphasizes minimizing harm to all living beings, indicating that karma is not just about individual consequences but also collective well-being.


  5. Modern Relevance and Popular Culture: The phrase "Karma is a bitch" has found its place in modern vernacular, often used humorously or to express satisfaction at seeing someone receive their comeuppance. This highlights the enduring appeal of karma's basic principle: actions have consequences.

    Example: In the TV show "Breaking Bad," the character Walter White's morally reprehensible actions eventually catch up with him, illustrating a narrative application of the concept of karma.

In conclusion, the phrase "Karma is a bitch" encapsulates only a fraction of the complexity inherent in the concept of karma. While the idea of actions leading to consequences resonates with basic notions of justice, it oversimplifies the intricate dynamics of cause and effect, ethical accountability, and personal growth. The moral implications of karma are diverse, reflecting a rich cultural tapestry that extends beyond simple notions of reward and punishment. By critically examining the concept, we can gain a deeper understanding of its potential pitfalls and opportunities for cultivating a more compassionate and nuanced worldview.

----


Rethinking "Karma is a Bitch": A Critical Analysis of Oversimplification and Negative Connotations

The phrase "Karma is a bitch" has gained popularity in contemporary discourse as a way to express satisfaction at the perceived downfall of individuals who have engaged in negative behavior. However, this phrase oversimplifies the complex concept of karma and promotes a skewed perspective on the principles of cause and effect, personal growth, and moral accountability. This essay aims to critically repudiate the phrase by examining its limitations and highlighting the need for a more nuanced understanding of karma, using examples from philosophy, psychology, and real-world scenarios.

  1. Oversimplification of Cause and Effect: The phrase reduces the intricate web of cause-and-effect relationships to a simplistic equation of "bad action equals bad consequence." This disregards the intricate factors and contextual nuances that contribute to outcomes, making it an inadequate representation of reality.

    Example: In complex geopolitical conflicts, attributing the suffering of entire populations to their past actions ignores the historical, economic, and political complexities involved.


  2. Negative Connotations and Lack of Empathy: The phrase fosters a sense of satisfaction in witnessing the suffering of others, perpetuating a culture of negativity and judgment. This lack of empathy contradicts the essence of many ethical and spiritual traditions, which emphasize understanding and compassion.

    Example: Instead of rejoicing in another's misfortune, embracing the principle of forgiveness and offering support can lead to personal growth and positive social interactions.


  3. Discouraging Redemption and Growth: Branding individuals as victims of their own actions overlooks the potential for growth and change. The phrase implies that once someone engages in negative behavior, their fate is sealed, discouraging personal transformation and second chances.

    Example: The story of Nelson Mandela demonstrates the power of redemption and forgiveness. After serving 27 years in prison, he emerged as a symbol of reconciliation, transcending the cycle of vengeance.


  4. Cultural and Philosophical Diversity: The concept of karma varies across different cultural and philosophical contexts. Reducing it to a negative sentiment ignores the positive dimensions of karma, such as the idea of accumulating positive actions for a better future.

    Example: In Buddhism, karma is not about punishment but about creating positive intentions and actions to break free from the cycle of suffering.


  5. Promotion of Fatalism and Passivity: The phrase "Karma is a bitch" can inadvertently endorse a fatalistic attitude, implying that individuals have no control over their lives. This can discourage proactive efforts and a sense of responsibility for shaping one's destiny.

    Example: The growth mindset theory emphasizes the belief that effort and learning can lead to personal development, countering the notion of predestined outcomes.

The phrase "Karma is a bitch" encapsulates a simplified and often negative view of the complex concept of karma. Its connotations of satisfaction in others' suffering, lack of empathy, and discouragement of personal growth undermine the true potential of human agency and transformation. By examining the limitations of this phrase and considering the rich diversity of interpretations of karma, we can foster a more compassionate, empathetic, and holistic understanding of cause and effect in our lives. It is crucial to move beyond the allure of quick judgments and instead embrace the complexities that define human experiences.

Monday, 14 February 2022

English football: why are there so few black people in senior positions?

Simon Kuper in The FT

Possibly the only English football club run mostly by black staff is Queens Park Rangers, in the Championship, the English game’s second tier. 

QPR’s director of football, Les Ferdinand, and technical director, Chris Ramsey, have spent their entire careers in the sport watching hiring discrimination persist almost everywhere else. Teams have knelt in protest against racism, but Ferdinand says, “I didn’t want to see people taking the knee. I just wanted to see action. I’m tired of all these gestures.”  

Now a newly founded group, the Black Footballers Partnership (BFP), argues that it is time to adopt compulsory hiring quotas for minorities. Voluntary measures have not worked, says its executive director, Delroy Corinaldi. 

The BFP has commissioned a report from Stefan Szymanski (economics professor at the University of Michigan, and my co-author on the book Soccernomics) to document apparent discrimination in coaching, executive and scouting jobs. 

It is a dogma of football that these roles must be filled by ex-players — but only, it seems, by white ones. Last year 43 per cent of players in the Premier League were black, yet black people held “only 4.4 per cent of managerial positions, usually taken by former players” and 1.6 per cent of “executive, leadership and ownership positions”, writes Szymanski. 

Today 14 per cent of holders of the highest coaching badge in England, the Uefa Pro Licence, are black, but they too confront prejudice. Looking ahead, the paucity of black scouts and junior coaches is keeping the pipeline for bigger jobs overwhelmingly white. Corinaldi hopes that current black footballers will follow England’s forward Raheem Sterling in calling for more off-field representation. 

There have been 28 black managers in the English game since the Football League was founded in 1888, calculates Corinaldi. As for the Premier League, which has had 11 black managers in 30 years, he says: “Sam Allardyce [an ex-England manager] has had nearly as many roles as the whole black population.” The situation is similar in women’s football, says former England international Anita Asante. 

Ramsey, who entered coaching in the late 1980s, when he says “there were literally no black coaches”, reflects: “There’s always a dream that you’re going to make the highest level, so naively you coach believing that your talent will get you there, but very early on I realised that wasn’t going to happen.”  

Reluctant to hire 

He says discrimination in hiring is always unspoken: “People hide behind politically correct language. They will take a knee, and say, ‘I’m all for it’. You’re just never really seen as able to do the job. And then people sometimes employ people less qualified than you. Plenty of white managers have failed, and I just want to have the opportunity to be as bad as them, and to be given an opportunity again. You don’t want to have to be better just because you’re black.” 

When Ferdinand’s glittering playing career ended, he worried that studying for his coaching badges might “waste five years of my life”, given that the white men running clubs were reluctant to hire even famous black ex-players such as John Barnes and Paul Ince. In Ferdinand’s first seven years on the market, he was offered one managerial job. “People tend to employ what looks, sounds and acts like them,” he shrugs. Yet he says he isn’t angry: “Anger’s not the right word, because that’s unfortunately how they see a lot of young black men, as angry.” 

He suspects QPR hired him in part because its then co-chair, the Malaysian Tony Fernandes, is a person of colour. After the two men met and began talking, recalls Ferdinand, “he said, ‘Why are you not doing this job [management] in football?’ I said, ‘Because I’ve not been given the opportunity.’ The conversations went from there. Had he not been a person of colour, I perhaps wouldn’t have had the opportunity to talk to him in the way that I did.” 

Szymanski can identify only two black owners in English football, both at small clubs: Ben Robinson of Burton Albion, and Ryan Giggs, co-owner of Salford City. 

Szymanski believes discrimination persists for managerial jobs in part because football managers have little impact on team performance — much less than is commonly thought. He calculates that over 10 seasons, the average correlation between a club’s wage bill for players and its league position exceeds 90 per cent. If the quality of players determines results almost by itself, then managers are relatively insignificant, and so clubs can continue to hire the stereotype manager — a white male ex-player aged between 35 and 55 — without harming their on-field performance. 
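To make that claim concrete, here is a minimal sketch of the kind of calculation involved, using invented wage bills and finishing positions for ten hypothetical clubs (Szymanski's analysis, by contrast, uses real wage and league data over many seasons):

```python
# Sketch of a wage-bill vs league-position correlation, in the spirit of
# Szymanski's analysis. All figures below are invented for illustration.
from scipy.stats import spearmanr

# Hypothetical season: wage bill in GBP millions, and final league position
wages     = [330, 290, 250, 210, 150, 140, 100, 90, 60, 50]
positions = [  1,   2,   4,   3,   6,   5,   8,  7, 10,  9]

# Position 1 is best, so a big wage bill should go with a low position
# number: expect a strongly negative rank correlation.
rho, p_value = spearmanr(wages, positions)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")  # close to -1
```

The sign is negative only because position 1 is the best; reported as a magnitude, a correlation like this "exceeds 90 per cent", which is the point: wages predict results so well that the manager's identity adds little.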

For about 20 years, English football has launched various fruitless attempts to address discrimination. Ramsey recalls the Football Association — the national governing body — inviting black ex-players to “observe” training sessions. He marvels: “You’re talking about qualified people with full badges standing and watching people train. And most of them have been in the game longer than the people they’re watching.” 

Modest though that initiative was, Ferdinand recalls warning FA officials: “A certain amount of people at St George’s Park [the FA’s National Football Centre], when you tell them this is the initiative, their eyes will be rolling and thinking, ‘Here we go, we’re doing something for them again, we’re trying to give them another opportunity.’ What those people don’t realise is: we don’t get opportunities.”  

Rooney Rule 

After the NFL of American gridiron football introduced the Rooney Rule in 2003, requiring teams to interview minority candidates for job openings, the English ex-player Ricky Hill presented the idea to the League Managers Association. Ramsey recalls, “Everyone said, ‘God, this is brilliant’.” Yet only in the 2016/2017 season did 10 smaller English clubs even pilot the Rooney Rule. Ramsey says: “We are expected to accept as minority coaches that these things take a long time. I have seen this train move along so slowly that it’s ridiculous.” He mourns the black managerial careers lost in the wait. 

In 2019 the Rooney Rule was made mandatory in the three lower tiers of English professional football, though not in the Premier League or anywhere else in Europe. Clubs had to interview at least one black, Asian or minority ethnic (Bame) candidate (if any applied) for all first team managerial, coaching and youth development roles. Why didn’t the rule noticeably increase minority hiring? Ferdinand replies, “Because there’s nobody being held accountable to it. What is the Rooney Rule? You give someone the opportunity to come through the door and talk.” Moreover, English football’s version of the rule has a significant loophole: clubs are exempt if they interview only one candidate, typically someone found through the white old boys’ network. 

Nor has the Rooney Rule made much difference in the NFL. In 2020, 57.5 per cent of the league’s players were black, but today only two out of 32 head coaches are, while one other identifies as multiracial. This month, the former Miami Dolphins coach Brian Flores filed a lawsuit against the NFL and three clubs, accusing them of racist and discriminatory practices. He and other black coaches report being called for sham interviews for jobs that have already been filled, as teams tick the Rooney Rule’s boxes. 

Voluntary diversity targets 

In 2020 England’s FA adopted a voluntary “Football Leadership Diversity Code”. Only about half of English professional clubs signed it. They committed to achieving percentage targets for Bame people among new hires: 15 per cent for senior leadership and team operations positions, and 25 per cent for men’s coaching — “a discrepancy in goals that itself reflects the problem”, comments Szymanski. Clubs were further allowed to water down these targets “based on local demographics”. 

The FA said: “The FA is deeply committed to ensuring the diversity of those playing and coaching within English football is truly reflective of our modern society. 

“We’re focused on increasing the number of, and ongoing support for, coaches who have been historically under-represented in the game. This includes a bursary programme for the Uefa qualifications required to coach in academy and senior professional football.” 

A report last November showed mixed results. Many clubs had missed the code’s targets, with several Premier League clubs reporting zero diversity hires. On the other hand, more than 20 per cent of new hires in men’s football were of Bame origin, which was at least well above historical hiring rates. 

Do clubs take the code seriously? Ferdinand smiles ironically: “From day one I didn’t take it seriously. Because it’s a voluntary code. What’s the repercussions if you don’t follow the voluntary code? No one will say anything, no one will do anything about it.”  

The BFP and the League Managers Association have called for the code’s targets to be made compulsory. Ferdinand cites the example of countries that set mandatory quotas for women on corporate boards of listed companies. 

Asante says it takes minorities in positions of power to understand the problems of minorities. “If you are a majority in any group, when are you ever thinking about the needs of others?” Corinaldi adds: “When you have a monoculture in any boardroom, you only know what you know, and it tends to be the same stories you heard growing up.” He predicts that once football has more black directors and senior executives, they will hire more diversely. 

The BFP’s model for English football is the National Basketball Association in the US, a 30-team league with 14 African-American head coaches. For now, that feels like a distant utopia. Ramsey warns: “If there is no revolutionary action, we’ll be having this same conversation in 10 years’ time.” And he remembers saying exactly those words 10 years ago.

Sunday, 13 September 2020

Statistics, lies and the virus: Five lessons from a pandemic

In an age of disinformation, the value of rigorous data has never been more evident writes Tim Harford in The FT 


Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. 

Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice. 

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. 

The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again. 

There are good reasons for scepticism today. David Spiegelhalter, author of last year’s The Art of Statistics, laments some of the UK government’s coronavirus graphs and testing targets as “number theatre”, with “dreadful, awful” deployment of numbers as a political performance. 

“There is great damage done to the integrity and trustworthiness of statistics when they’re under the control of the spin doctors,” Spiegelhalter says. He is right. But we geeks must be careful — because the damage can come from our own side, too. 

For Huff and his followers, the reason to learn statistics is to catch the liars at their tricks. That sceptical mindset took Huff to a very unpleasant place, as we shall see. Once the cynicism sets in, it becomes hard to imagine that statistics could ever serve a useful purpose.  

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. 

The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate.  

You can appreciate, I hope, my obsession with these two contrasting accounts of statistics: one as a trick, one as a tool. Doll and Hill’s painstaking approach illuminates the world and saves lives into the bargain. 

Huff’s alternative seems clever but is the easy path: seductive, addictive and corrosive. Scepticism has its place, but easily curdles into cynicism and can be weaponised into something even more poisonous than that.

The two worldviews soon began to collide. Huff’s How to Lie with Statistics seemed to be the perfect illustration of why ordinary, honest folk shouldn’t pay too much attention to the slippery experts and their dubious data. 

Such ideas were quickly picked up by the tobacco industry, with its darkly brilliant strategy of manufacturing doubt in the face of evidence such as that provided by Doll and Hill. 

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. 

These tactics are powerful in part because they echo the ideals of science. It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”.  

So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road. The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors.  

Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn. 


The numbers matter 

“One lesson this pandemic has driven home to me is the unbelievable importance of the statistics,” says Spiegelhalter. Without statistical information, we haven’t a hope of grasping what it means to face a new, mysterious, invisible and rapidly spreading virus.

Once upon a time, we would have held posies to our noses and prayed to be spared; now, while we hope for advances from medical science, we can also coolly evaluate the risks. 

Without good data, for example, we would have no idea that this infection is 10,000 times deadlier for a 90-year-old than it is for a nine-year-old — even though we are far more likely to read about the deaths of young people than the elderly, simply because those deaths are surprising. It takes a statistical perspective to make it clear who is at risk and who is not. 

Good statistics, too, can tell us about the prevalence of the virus — and identify hotspots for further activity. Huff may have viewed statistics as a vector for the dark arts of persuasion, but when it comes to understanding an epidemic, they are one of the few tools we possess. 


Don’t take the numbers for granted 

But while we can use statistics to calculate risks and highlight dangers, it is all too easy to fail to ask the question “Where do these numbers come from?” By that, I don’t mean the now-standard request to cite sources; I mean the deeper origin of the data. For all his faults, Huff did not fail to ask the question.
 
He retells a cautionary tale that has become known as “Stamp’s Law” after the economist Josiah Stamp — warning that no matter how much a government may enjoy amassing statistics, “raise them to the nth power, take the cube root and prepare wonderful diagrams”, it was all too easy to forget that the underlying numbers would always come from a local official, “who just puts down what he damn pleases”. 

The cynicism is palpable, but there is insight here too. Statistics are not simply downloaded from an internet database or pasted from a scientific report. Ultimately, they came from somewhere: somebody counted or measured something, ideally systematically and with care. These efforts at systematic counting and measurement require money and expertise — they are not to be taken for granted. 

In my new book, How to Make the World Add Up, I introduce the idea of “statistical bedrock” — data sources such as the census and the national income accounts that are the results of painstaking data collection and analysis, often by official statisticians who get little thanks for their pains and are all too frequently the target of threats, smears or persecution. 
 
In Argentina, for example, long-serving statistician Graciela Bevacqua was ordered to “round down” inflation figures, then demoted in 2007 for producing a number that was too high. She was later fined $250,000 for false advertising — her crime being to have helped produce an independent estimate of inflation. 

In 2011, Andreas Georgiou was brought in to head Greece’s statistical agency at a time when it was regarded as being about as trustworthy as the country’s giant wooden horses. When he started producing estimates of Greece’s deficit that international observers finally found credible, he was prosecuted for his “crimes” and threatened with life imprisonment. Honest statisticians are braver — and more invaluable — than we know.  

In the UK, we don’t habitually threaten our statisticians — but we do underrate them. “The Office for National Statistics is doing enormously valuable work that frankly nobody has ever taken notice of,” says Spiegelhalter, pointing to weekly death figures as an example. “Now we deeply appreciate it.”  

Quite so. This statistical bedrock is essential, and when it is missing, we find ourselves sinking into a quagmire of confusion. 

The foundations of our statistical understanding of the world are often gathered in response to a crisis. For example, nowadays we take it for granted that there is such a thing as an “unemployment rate”, but a hundred years ago nobody could have told you how many people were searching for work. Severe recessions made the question politically pertinent, so governments began to collect the data. 

More recently, the financial crisis hit. We discovered that our data about the banking system was patchy and slow, and regulators took steps to improve it. 

So it is with the Sars-Cov-2 virus. At first, we had little more than a few data points from Wuhan, showing an alarmingly high death rate of 15 per cent — six deaths in 41 cases. Quickly, epidemiologists started sorting through the data, trying to establish how much that case fatality rate was exaggerated by the fact that the confirmed cases were mostly people in intensive care. Quirks of circumstance — such as the Diamond Princess cruise ship, in which almost everyone was tested — provided more insight.
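The arithmetic behind that early figure, and why it was misleading, can be sketched in a few lines. The numbers below are assumptions for illustration, not the epidemiologists' actual estimates: a hypothetical true infection fatality rate of 1 per cent, with testing reaching only the sickest 7 per cent of infections.

```python
# Toy arithmetic: how counting only severe cases inflates a fatality rate.
# All numbers are assumptions for illustration only.

true_infections = 100_000
true_ifr = 0.01                       # assumed infection fatality rate: 1%
deaths = true_infections * true_ifr   # 1,000 deaths

# Assume testing reaches only the sickest 7% of infections, but
# essentially all deaths occur within that confirmed group.
confirmed_cases = true_infections * 0.07

naive_cfr = deaths / confirmed_cases
print(f"True infection fatality rate: {true_ifr:.1%}")   # 1.0%
print(f"Naive case fatality rate:     {naive_cfr:.1%}")  # ~14.3%, like Wuhan's 6/41
```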

Johns Hopkins University in the US launched a dashboard of data resources, as did the Covid Tracking Project, an initiative from the Atlantic magazine. An elusive and mysterious threat became legible through the power of this data.  

That is not to say that all is well. Nature recently reported on “a coronavirus data crisis” in the US, in which “political meddling, disorganization and years of neglect of public-health data management mean the country is flying blind”.  

Nor is the US alone. Spain simply stopped reporting certain Covid deaths in early June, making its figures unusable. And while the UK now has an impressively large capacity for viral testing, it was fatally slow to accelerate this in the critical early weeks of the pandemic. 

Ministers repeatedly deceived the public about the number of tests being carried out by using misleading definitions of what was happening. For weeks during lockdown, the government was unable to say how many people were being tested each day. 

Huge improvements have been made since then. The UK’s Office for National Statistics has been impressively flexible during the crisis, for example in organising systematic weekly testing of a representative sample of the population. This allows us to estimate the true prevalence of the virus. Several countries, particularly in east Asia, provide accessible, usable data about recent infections to allow people to avoid hotspots. 
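The logic of that survey can be sketched simply: test a random sample, then turn the count of positives into a population estimate with an uncertainty interval. The sample size and positive count below are invented, and the Wilson interval is just one standard choice.

```python
# Estimating prevalence from a representative random sample.
# The sample figures are invented for illustration.
from math import sqrt

n = 120_000   # assumed number of people tested in a week
k = 540       # assumed number of positives
z = 1.96      # 95% confidence

p_hat = k / n

# Wilson score interval: better behaved than the naive interval
# when the proportion is small.
centre = (p_hat + z * z / (2 * n)) / (1 + z * z / n)
half = (z / (1 + z * z / n)) * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))

print(f"Estimated prevalence: {p_hat:.2%} "
      f"(95% CI {centre - half:.2%} to {centre + half:.2%})")
```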

These things do not happen by accident: they require us to invest in the infrastructure to collect and analyse the data. On the evidence of this pandemic, such investment is overdue, in the US, the UK and many other places. 


Even the experts see what they expect to see 

Jonas Olofsson, a psychologist who studies our perceptions of smell, once told me of a classic experiment in the field. Researchers gave people a whiff of scent and asked them for their reactions to it. In some cases, the experimental subjects were told: “This is the aroma of a gourmet cheese.” Others were told: “This is the smell of armpits.” 

In truth, the scent was both: an aromatic molecule present both in runny cheese and in bodily crevices. But the reactions of delight or disgust were shaped dramatically by what people expected. 

Statistics should, one would hope, deliver a more objective view of the world than an ambiguous aroma. But while solid data offers us insights we cannot gain in any other way, the numbers never speak for themselves. They, too, are shaped by our emotions, our politics and, perhaps above all, our preconceptions. 

A striking example is the decision, on March 23 this year, to introduce a lockdown in the UK. In hindsight, that was too late. 

“Locking down a week earlier would have saved thousands of lives,” says Kit Yates, author of The Maths of Life and Death — a view now shared by influential epidemiologist Neil Ferguson and by David King, chair of the “Independent Sage” group of scientists. 

The logic is straightforward enough: at the time, cases were doubling every three to four days. If a lockdown had stopped that process in its tracks a week earlier, it would have prevented two doublings and saved three-quarters of the 65,000 people who died in the first wave of the epidemic, as measured by the excess death toll. 

That might be an overestimate of the effect, since people were already voluntarily pulling back from social interactions. Yet there is little doubt that if a lockdown was to happen at all, an earlier one would have been more effective. And, says Yates, since the infection rate took just days to double before lockdown but long weeks to halve once it started, “We would have got out of lockdown so much sooner . . . Every week before lockdown cost us five to eight weeks at the back end of the lockdown.” 
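To make the doubling arithmetic explicit, here is a minimal sketch. It assumes pure exponential growth at the 3.5-day doubling time cited above and ignores the voluntary pull-back already under way, so, as noted, it is an upper bound.

```python
# Back-of-envelope arithmetic for an earlier lockdown.
# Assumes pure exponential growth at the ~3.5-day doubling time cited above.

doubling_time_days = 3.5
days_earlier = 7

doublings_avoided = days_earlier / doubling_time_days   # 2 doublings
growth_factor = 2 ** doublings_avoided                  # 4x

first_wave_excess_deaths = 65_000
implied_deaths = first_wave_excess_deaths / growth_factor

print(f"Doublings avoided: {doublings_avoided:.0f}")
print(f"Epidemic {growth_factor:.0f}x smaller at lockdown")
print(f"Implied first-wave toll: ~{implied_deaths:,.0f} "
      f"instead of {first_wave_excess_deaths:,}")   # three-quarters saved
```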

Why, then, was the lockdown so late? No doubt there were political dimensions to that decision, but senior scientific advisers to the government seemed to believe that the UK still had plenty of time. On March 12, prime minister Boris Johnson was flanked by Chris Whitty, the government’s chief medical adviser, and Patrick Vallance, chief scientific adviser, in the first big set-piece press conference. Italy had just suffered its 1,000th Covid death and Vallance noted that the UK was about four weeks behind Italy on the epidemic curve. 

With hindsight, this was wrong: now that late-registered deaths have been tallied, we know that the UK passed the same landmark on lockdown day, March 23, just 11 days later.  

It seems that in early March the government did not realise how little time it had. As late as March 16, Johnson declared that infections were doubling every five to six days. 

The trouble, says Yates, is that UK data on cases and deaths suggested that things were moving much faster than that, doubling every three or four days — a huge difference. What exactly went wrong is unclear — but my bet is that it was a cheese-or-armpit problem. 
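Compounded over the 11 days between the March 12 press conference and the March 23 lockdown, the gap between those two doubling times really is huge. A short sketch, assuming pure exponential growth and taking the midpoint of each quoted range:

```python
# How 'doubling every 5-6 days' and 'every 3-4 days' diverge over 11 days.
# Assumes pure exponential growth; midpoint of each quoted range.

days = 11
for label, doubling_time in [("assumed 5.5-day", 5.5), ("observed 3.5-day", 3.5)]:
    factor = 2 ** (days / doubling_time)
    print(f"{label} doubling: cases grow {factor:.1f}x in {days} days")
# ~4x versus ~9x: the same window looked twice as forgiving as it was.
```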

Some influential epidemiologists had produced sophisticated models suggesting that a doubling time of five to six days seemed the best estimate, based on data from the early weeks of the epidemic in China. These models seemed persuasive to the government’s scientific advisers, says Yates: “If anything, they did too good a job.” 

Yates argues that the epidemiological models that influenced the government’s thinking about doubling times were sufficiently detailed and convincing that when the patchy, ambiguous, early UK data contradicted them, it was hard to readjust. We all see what we expect to see. 

The result, in this case, was a delay to lockdown: that led to a much longer lockdown, many thousands of preventable deaths and needless extra damage to people’s livelihoods. The data is invaluable but, unless we can overcome our own cognitive filters, the data is not enough. 


The best insights come from combining statistics with personal experience 

The expert who made the biggest impression on me during this crisis was not the one with the biggest name or the biggest ego. It was Nathalie MacDermott, an infectious-disease specialist at King’s College London, who in mid-February calmly debunked the more lurid public fears about how deadly the new coronavirus was. 

Then, with equal calm, she explained to me that the virus was very likely to become a pandemic, that barring extraordinary measures we could expect it to infect more than half the world’s population, and that the true fatality rate was uncertain but seemed to be something between 0.5 and 1 per cent. In hindsight, she was broadly right about everything that mattered. MacDermott’s educated guesses pierced through the fog of complex modelling and data-poor speculation. 

I was curious as to how she did it, so I asked her. “People who have spent a lot of their time really closely studying the data sometimes struggle to pull their head out and look at what’s happening around them,” she said. “I trust data as well, but sometimes when we don’t have the data, we need to look around and interpret what’s happening.” 

MacDermott worked in Liberia in 2014 on the front line of an Ebola outbreak that killed more than 11,000 people. At the time, international organisations were sanguine about the risks, while the local authorities were in crisis. When she arrived in Liberia, the treatment centres were overwhelmed, with patients lying on the floor, bleeding freely from multiple areas and dying by the hour. 

The horrendous experience has shaped her assessment of subsequent risks: on the one hand, Sars-Cov-2 is far less deadly than Ebola; on the other, she has seen the experts move too slowly while waiting for definitive proof of a risk. 

“From my background working with Ebola, I’d rather be overprepared than underprepared because I’m in a position of denial,” she said. 

There is a broader lesson here. We can try to understand the world through statistics, which at their best provide a broad and representative overview that encompasses far more than we could personally perceive. Or we can try to understand the world up close, through individual experience. Both perspectives have their advantages and disadvantages. 

Muhammad Yunus, a microfinance pioneer and Nobel laureate, has praised the “worm’s eye view” over the “bird’s eye view”, which is a clever sound bite. But birds see a lot too. Ideally, we want both the rich detail of personal experience and the broader, low-resolution view that comes from the spreadsheet. Insight comes when we can combine the two — which is what MacDermott did. 


Everything can be polarised 

When I reported on the numbers behind the Brexit referendum, the vote on Scottish independence, several general elections and the rise of Donald Trump, there was poison in the air: many claims were made in bad faith, indifferent to the truth or even embracing the most palpable lies in an effort to divert attention from the issues. Fact-checking in an environment where people didn’t care about the facts, only whether their side was winning, was a thankless experience.

For a while, one of the consolations of doing data-driven journalism during the pandemic was that it felt blessedly free of such political tribalism. People were eager to hear the facts after all; the truth mattered; data and expertise were seen to be helpful. The virus, after all, could not be distracted by a lie on a bus.  

That did not last. America polarised quickly, with mask-wearing becoming a badge of political identity — and more generally the Democrats seeking to underline the threat posed by the virus, with Republicans following President Trump in dismissing it as overblown.  

The prominent infectious-disease expert Anthony Fauci does not strike me as a partisan figure — but the US electorate thinks otherwise. He is trusted by 32 per cent of Republicans and 78 per cent of Democrats. 

The strangest illustration comes from the Twitter account of the Republican politician Herman Cain, which late in August tweeted: “It looks like the virus is not as deadly as the mainstream media first made it out to be.” Cain, sadly, died of Covid-19 in July — but it seems that political polarisation is a force stronger than death. 

Not every issue is politically polarised, but when something is dragged into the political arena, partisans often prioritise tribal belonging over considerations of truth. One can see this clearly, for example, in the way that highly educated Republicans and Democrats are further apart on the risks of climate change than less-educated Republicans and Democrats. 

Rather than bringing some kind of consensus, more years of education simply seem to provide people with the cognitive tools they require to reach the politically convenient conclusion. From climate change to gun control to certain vaccines, there are questions for which the answer is not a matter of evidence but a matter of group identity. 

In this context, the strategy that the tobacco industry pioneered in the 1950s is especially powerful. Emphasise uncertainty, expert disagreement and doubt and you will find a willing audience. If nobody really knows the truth, then people can believe whatever they want. 

All of which brings us back to Darrell Huff, statistical sceptic and author of How to Lie with Statistics. While his incisive criticism of statistical trickery has made him a hero to many of my fellow nerds, his career took a darker turn, with scepticism providing the mask for disinformation. 

Huff worked on a tobacco-funded sequel, How to Lie with Smoking Statistics, casting doubt on the scientific evidence that cigarettes were dangerous. (Mercifully, it was not published.)  

Huff also appeared in front of a US Senate committee that was pondering mandating health warnings on cigarette packaging. He explained to the lawmakers that there was a statistical correlation between babies and storks (which, it turns out, there is) even though the true origin of babies is rather different. The connection between smoking and cancer, he argued, was similarly tenuous.  
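The storks correlation is the classic confounding story: bigger countries have more storks and more babies, so the two track each other without either causing the other. A toy sketch with invented country figures:

```python
# Spurious correlation via a confounder, in the spirit of Huff's
# storks-and-babies example. All figures are invented.

# Hypothetical countries: both columns scale with country size.
stork_pairs      = [280, 700, 1_050, 2_100, 3_300, 4_900]
births_thousands = [40, 110, 150, 330, 490, 760]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"r(storks, births) = {pearson(stork_pairs, births_thousands):.2f}")
# Close to 1.0 -- yet storks deliver no babies; country size drives both.
```

Country size is the confounder here; smoking and lung cancer, by contrast, survived every such adjustment Doll and Hill could throw at them.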

Huff’s statistical scepticism turned him into the ancestor of today’s contrarian trolls, spouting bullshit while claiming to be the straight-talking voice of common sense. It should be a warning to us all. There is a place in anyone’s cognitive toolkit for healthy scepticism, but that scepticism can all too easily turn into a refusal to look at any evidence at all.

This crisis has reminded us of the lure of partisanship, cynicism and manufactured doubt. But surely it has also demonstrated the power of honest statistics. Statisticians, epidemiologists and other scientists have been producing inspiring work in the footsteps of Doll and Hill. I suggest we set aside How to Lie with Statistics and pay attention. 

Carefully gathering the data we need, analysing it openly and truthfully, sharing knowledge and unlocking the puzzles that nature throws at us — this is the only chance we have to defeat the virus and, more broadly, an essential tool for understanding a complex and fascinating world.