
Showing posts with label science. Show all posts

Thursday 30 June 2022

Scientific Facts have a Half-Life - Life is Poker not Chess 4

Abridged and adapted from Thinking in Bets by Annie Duke





The Half-Life of Facts, by Samuel Arbesman, is a great read about how practically every fact we’ve ever known has been subject to revision or reversal. The book talks about the extinction of the coelacanth, a fish thought to have died out in the Late Cretaceous period, the same period that saw the extinction of the dinosaurs and many other species. In the late 1930s, and independently in the mid 1950s, coelacanths were found alive and well. Arbesman quoted a list of 187 species of mammals declared extinct, more than a third of which have subsequently been rediscovered alive.


Given that even scientific facts have an expiration date, we would all be well advised to take a good hard look at our beliefs, which are formed and updated in a much more haphazard way than in science.


We would be better served as communicators and decision makers if we thought less about whether we are confident in our beliefs and more about how confident we are about each of our beliefs. What if, in addition to expressing what we believe, we also rated our level of confidence about the accuracy of our belief on a scale of zero to ten? Zero would mean we are certain a belief is not true. Ten would mean we are certain that our belief is true. Forcing ourselves to express how sure we are of our beliefs brings to plain sight the probabilistic nature of those beliefs: what we believe is almost never 100% or 0% accurate but, rather, somewhere in between.


Incorporating uncertainty in the way we think about what we believe creates open-mindedness, moving us closer to a more objective stance towards information that disagrees with us. We are less likely to succumb to motivated reasoning since it feels better to make small adjustments in degrees of certainty instead of having to grossly downgrade from ‘right’ to ‘wrong’. This shifts us away from treating information that disagrees with us as a threat, as something we have to defend against, making us better able to truthseek.


There is no sin in finding out there is evidence that contradicts what we believe. The only sin is in not using that evidence as objectively as possible to refine that belief going forward. By saying, ‘I’m 80%’ and thereby communicating we aren’t sure, we open the door for others to tell us what they know. They realise they can contribute without having to confront us by saying or implying, ‘You’re wrong’. Admitting we are not sure is an invitation for help in refining our beliefs and that will make our beliefs more accurate over time as we are more likely to gather relevant information.


Acknowledging that decisions are bets based on our beliefs, getting comfortable with uncertainty and redefining right and wrong are integral to a good overall approach to decision making.


Tuesday 7 June 2022

Science is political

People who say “science is political” usually aren’t just stating facts - they’re trying to push something on you. Don’t let them

Stuart Ritchie
 

The statue of David Hume on Edinburgh’s Royal Mile


Imagine you heard a scientist saying the following:


I’m being paid massive consultation fees by a pharmaceutical company who want the results of my research to turn out in one specific way. And that’s a good thing. I’m proud of my conflicts of interest. I tell all my students that they should have conflicts if possible. On social media, I regularly post about how science is inevitably conflicted in one way or another, and how anyone criticising me for my conflicts is simply hopelessly naive.

I hope this would at least cause you to raise an eyebrow. And that’s because, although this scientist is right that conflicts of interest of some kind are probably inevitable, conflicts are still a bad thing.

We all know how biases can affect scientists: failing to publish studies that don’t go their way; running or reporting the stats in ways that push the results in a favoured direction; harshly critiquing experiments they don’t like while letting equally-bad, but more sympathetic, ones off the hook. Insofar as a conflict of interest makes any of these (often unconscious) biases more likely, it’s not something to be proud of.

And that’s why we report conflicts of interest in scientific papers - both because it helps the reader understand where a particular study is coming from, and because it would be embarrassing if someone found out about them after the fact when nothing had been said. We also take steps to ensure that our conflicts don’t affect our research - we do double-blinding; we do replications; we post our data online; we try and show the world that the results would’ve been the same, regardless of what we were being paid by Big Pharma.

We can also all agree that conflicts of interest aren’t just financial. They can be personal - maybe you’re married to someone who would benefit if your results turn out a particular way. They can be reputational - maybe you’re the world’s no.1 proponent of Theory X, and would lose prestige if the results of this study didn’t support it. And they can be political - you can have a view of the world that comports with the research turning out one way, but not another.

When it comes to political conflicts of interest, I’ve noticed something very strange. I’ve noticed that, instead of treating them like other kinds of conflicts—where you put your hands up and admit to them but then do your best to make sure they don’t influence your science—scientists sometimes revel in political conflicts. Like the fictional conflicted scientist quoted above, they ostentatiously tell us that they’re being political and they don’t care: “don’t you know”, they scoff, “that science and politics are inseparable?”

Indeed, this phrase—“Science and Politics are Inseparable”—was the title of a Nature editorial in 2020, and it’s not hard to find other examples in popular-science publications:


Science Has Always Been Inseparable From Politics (Scientific American)


News Flash: Science Has Always Been Political (American Scientist)


Science Is Political (Chemistry World)


Yes, Science Is Political (Scientific American)

When Nature, Science, the New England Journal of Medicine, and Scientific American all either strongly criticised the Trump administration, or explicitly endorsed Joe Biden for US President during the 2020 election campaign, they were met with surprise from many who found it unsettling to see scientific publications so openly engaging in politics. The response from their defenders? “Don’t you know science is political?”.

What does “science is political” mean?

Here’s a (non-exhaustive) list of what people might mean when they say “science is political”:

The things scientists choose to study can be influenced by their political views of what’s important;

The way scientists interpret data from scientific research can often be in line with their pre-existing political views;

Since scientists are human, it’s impossible for them to be totally objective - anything they do is always going to be tainted by political views and assumptions;

It’s easy for scientists to forget that human subjectivity influences a great many aspects of science - even things like algorithms which might seem objective but often recapitulate the biases of their human creators;

Even the choice to use science—as opposed to some other way of knowing—in the first place is influenced by our political and cultural perspective;

A lot of science is funded by the taxpayer, via governments, which are run by political parties who set the agenda. Non-governmental funders of science can also have their own political agendas;

People of different political persuasions hold predictable views on controversial scientific topics (e.g. global warming, COVID vaccines, nuclear power, and so on);

Politicians, or those engaged in political debate, regularly use “science” to back up their points of view in a cynical, disingenuous way, often by cherry-picking studies or relying on any old thing that supports them, regardless of its quality.

There’s no argument from me about any of those points. These are all absolutely true. I wrote a whole book about how biases, some of them political, can dramatically affect research in all sorts of ways. But these are just factual statements - and I don’t think the people who always tell you that “science is political” are just idly chatting sociology-of-science for the fun of it. They want to make one of two points.

1. The argument from inevitability

The first point they might be making is what we might call the argument from inevitability. “There’s no way around it. You’re being naive if you think you could stop science from being political. It’s arrogance in the highest degree to think that you are somehow being ‘objective’, and aren’t a slave to your biases.”

But this is a weirdly black-and-white view. It’s not just that something “is political” (say, a piece of research done by the Pro-Life Campaign Against Abortion which concludes that the science proves human life starts at conception) or “is not political” (say, a piece of research on climate change run by Martians who have no idea about Earth politics). There are all sorts of shades of grey - and our job is to get as close to the “not political” end as possible, even in the knowledge that we might never fully get there.

Indeed, there’s a weird reverse-arrogance in the argument from inevitability. As noted by Scott Alexander at Astral Codex Ten:

Talking about the impossibility of true rationality or objectivity might feel humble - you're admitting you can't do this difficult thing. But analyzed more carefully, it becomes really arrogant. You're admitting there are people worse than you - Alex Jones, the fossil fuel lobby, etc. You're just saying it's impossible to do better. You personally - or maybe your society, or some existing group who you trust - are butting up against the light speed limit of rationality and objectivity.

Let’s restate this using a scientific example. We can all agree that Trofim Lysenko’s Soviet agriculture is among the worst examples of politicised science in history - a whole pseudoscientific ideology that denied the basic realities of evolution and genetic transmission, and replaced them with techniques based on discredited ideas like the “inheritance of acquired characteristics”, helping to exacerbate famines that killed millions in the Soviet Union and China. That’s pretty much as bad as politicised science gets (you can bet your bottom ruble, by the way, that Lysenko himself thought that “science is political”).


If you think you’re better than Lysenko in terms of keeping politics out of your science (and let’s face it, you totally do think this), you’re already agreeing that there are gradations. And if you agree that there are gradations, it would be daft—or highly conceited—to think that nobody could ever do a better job than you. Thus, you probably do agree that we could always try and improve our level of objectivity in science.

(By the way, by “objectivity” I mean scientific results that would look the same regardless of the observer, so long as that observer had the right level of training and/or equipment to see them. In the case of Lysenkoism, the “science” was highly idiosyncratic to Lysenko - things could’ve been entirely different if we ran the tape of history again with Lysenko removed. In the case of, say, the double-helix structure of DNA, we could be pretty confident that, were there to have been no Watson or Crick or Franklin or Wilkins, someone would’ve eventually still made that same discovery).

We already have a system that attempts to improve objectivity. The whole edifice of scientific review and publication—heck, the whole edifice of doing experiments, as opposed to just relying on your gut instinct—is an attempt to infuse some degree of objectivity into the process of discovering stuff about the world. I think that system of review and publication is a million miles from perfect (again, I wrote a book about this), but that’s just another way of saying: “the objectivity of the system could be improved”.

And it could be. If scientists shared all their code and data by default, the process would be a little more objective. If scientists publicly pre-registered their hypotheses before they looked at the data, the process would be a little more objective. If science funders used lotteries to award grant funding, the process would be a little more objective. And so on. In each of these cases—none of which give us perfect objectivity, of course, but which just inch us a little closer to it—we’d also move further away from a world where scientists’ subjective views, political or otherwise, influenced their science.

The fact that we can’t get rid of those subjective views altogether can serve a useful purpose: there’s a good argument for having a pluralist setup where people of all different views and perspectives and backgrounds contribute to the general scientific “commons”, and in doing so help debate, test, and refine each other’s ideas. But that’s still not an argument against each of those different people trying to be as objective as they can, within their own set of inevitable, human limitations.

After a decade of discussion about the replication crisis, open science, and all the ways we could reform the way we do research, we’re more aware than ever of how biases can distort things - but also how we can improve the system. So throwing up our hands and saying “science is always political! There’s nothing we can do!” is the very last thing we want to be telling aspiring scientists, who should be using and developing all these new techniques to improve their objectivity.

Not only is the argument from inevitability mistaken. Not only is it black-and-white thinking. It’s also cheems. Even if we can’t be perfect, it’s possible to be better - and that’s the kind of progressive message that all new scientists need to hear.


2. The activist’s argument

The second point that people might be making when they say that “science is political” is what we could call the activist’s argument. “The fact that science is political isn’t just an inevitability, but it’s good. We should all be using our science to make the world a better place (according to my political views), and to the extent that people are using science to make the world worse (according to my political views), we should stop them. All scientists should be political activists (who agree with my political views)”.

If my opening example of the scientist who’s proud of his or her conflict of interest moved you at all, you already have antibodies to this idea. You should ask what the difference is between a financial conflict of interest and an ideological one.

The activist’s argument is often invoked in response to other people politicising science. For example, after the recent mass shooting in Buffalo, New York, it was discovered that the white nationalist gunman had written a manifesto that referenced some papers from population- and behaviour-genetics research. This led to explicit calls to make genetics more political in the opposite direction (including banning some forms of research that are deemed too controversial). An article in WIRED argued that, in the wake of the killings:

…scientists can no longer justify silence in the name of objectivity or use the escape tactic of “leaving politics out of science.”

This argument—which is effectively stating that two wrongs do make a right—seems terribly misguided to me. If you think it’s bad that politics are being injected into science, it’s jarringly nonsensical to argue that “leaving politics out of science” is a bad thing. Isn’t the more obvious conclusion that we should endeavour to lessen the influence of politics and ideology on science across the board? If you think it’s bad when other people do it, you should think it’s bad when you do it yourself.

Of course, a lot of people don’t think it’s bad - they only think it’s bad when their opponents do it. They want to push their own political agenda and just happen to be working in science (witness all the biologists—why is it always biologists?—who advertise their socialism, or even include a little hammer and sickle, in their Twitter bio; or on the other hand, witness all the people complaining about “wokeness” invading science who don’t bat an eyelid when right-wingers push unscientific views about COVID or climate change). There’s probably little I can do to argue round anyone who is happy to mix up their politics and their science in this way.

But there are a lot of well-meaning, otherwise non-ideological people who use the argument too. At best, by repeating “science is political” like a mantra, they’re just engaging in the usual social conformism that we all do to some extent. At worst, they’re providing active cover for those who want to politicise science (“everyone says science is inevitably political, so why can’t I insert my ideology?”).

If you explicitly encourage scientists to be biased in a particular direction, don’t be surprised if you start getting biased results. We all know that publication bias and p-hacking occur when scientists care more about the results of a scientific study than the quality of its methods. Do we think that telling scientists that it’s okay to be ideological when doing research would make this better, or worse?

If you encourage scientists to focus on the “greater good” of their political ideology rather than the science itself, don’t be surprised if the incentives change. Don’t be surprised if they get sloppy - what are a few mistakes if it all goes toward making the world a better place? And don’t be surprised if some of them break the rules - I’ve heard enough stories of scientific fraudsters who had a strong, pre-existing belief in their theory, and after they couldn’t see it in the results from their experiment, proceeded to give the numbers a little “push” in the “right” direction. Do we think a similar dynamic is more, or less likely to evolve if we tell people it’s good to put their ideology first?

If we encourage scientists to bring their political ideology to the lab, do we think groupthink—a very common human problem which in at least some scientific fields seems to have stifled debate and held back progress—will get better, or worse?

And finally, think about the effect on people who aren’t scientists, but who read or rely on science’s results. Scientists loudly and explicitly endorsing political positions certainly isn’t going to help those on the opposite side of the political aisle to take science more seriously (there’s some polling evidence for this). Not only that, but the suggestion that some results might be being covered up for political reasons can be perfect tinder for conspiracy theories (remember what happened during the Climategate scandal).

A better way

When scientific research is misappropriated for political ends, either by extremists or by more mainstream figures, the answer isn’t to drop all attempts at objectivity. The answer is to get as far away from politics as we can. Instead of saying “science is political - get over it”, we could say:

We’ll redouble our efforts to make our results transparent and our interpretations clear - we’ll ensure that we explain in detail why the conclusions being drawn by political actors aren’t justified based on the evidence;

We’ll make sure that what we think are incorrect interpretations are clearly described and refuted;

We’ll do the scientific equivalent of putting our results in a blind trust, by using the kinds of practices discussed above (open data, pre-registration, code sharing) and others, to lessen the effect of our pre-existing views and ensure that others can easily check our results;

We’ll tighten up processes like peer-review so that there’s an even more rigorous quality filter on new scientific papers. If they’re subjected to more scrutiny, any bad or incorrect results that are the focus of political worries should be more likely to fall by the wayside;

We’ll expand our definition of a conflict of interest, and be more open about when our personal politics, affiliations, memberships, religious beliefs, employments, relationships, commitments, previous statements, diets, hobbies, or anything else relevant might influence the way we do our research;

We’ll stop broadcasting the idea that it’s good to be ideological in science, and in fact we’ll make being ostentatiously ideological about one’s results at least as shameful as p-hacking, or publishing a paper with a glaring typo in the title;

We’ll restate our commitment to open inquiry and academic freedom, making sure that we keep an open—though highly critical and sceptical—mind when assessing anyone’s scientific claims.

To repeat: I don’t think it’s possible to fully remove politics from science. But it’s not all-or-nothing - the point is to get as close to non-political science as we can. By following some of the above steps (and I’m sure you can think of many other ways - another one that’s been discussed is the idea of adversarial collaboration), we can combat misrepresentation of research by using high-quality research of our own.

This is all rather like the discussion of the “Mertonian norms” of science, which are supposed to be the ethos of the whole activity - universalism (no matter who says it, we evaluate a claim the same way), communalism (we share results and methods around the community), organised scepticism (we constantly subject all results to unforgiving scrutiny), and, most relevant to our discussion here, disinterestedness (scientists don’t have a stake in their results turning out one way or another). These aren’t necessarily descriptions of how science is right now, but they’re aspirational - we should do our best to organise the system so it leans towards them. The idea that we should loudly and proudly bring in our political ideologies does violence to these already-fragile norms.

And we really should aspire to disinterestedness. The ideal scientist shouldn’t care whether an hypothesis comes out one way or another. And since, because they’re human beings, the vast majority of them really do, we should set the system up so their views are kept at arm’s length from the results. At the same time, we should remind ourselves of some very basic philosophy via David Hume in 1739: “is” and “ought” questions are different things. The “is” answers we get from science don’t necessarily tell us what we “ought” to do, and just as importantly, the “ought” beliefs from our moral and political philosophy don’t tell us how the world “is”. To think otherwise is to make a category error.

Or as Tom Chivers put it, somewhat more recently:

Finding out whether the Earth revolves around the Sun is a different kind of question from asking whether humans have equal moral value. One is a question of fact about the world as it is; to answer it, you have to go out into the world and look. The other is a question of our moral system, and the answer comes from within.

The inspiring, resounding peroration

The view that scientists should do their best to be as objective as possible is a boring, default, commonly-believed, run-of-the-mill opinion. It also happens to be correct.

The problem with boring, default, commonly-believed, run-of-the-mill opinions is that you don’t get a thrill from reciting them or shocking people with their counterintuitiveness. The fire that powers so much online activism just isn’t there, and the whole thing comes across as rather dull. So in an attempt to remedy that, let me try and make my position sound as exciting as possible. Ahem:

Science is political - but that’s a bad thing! We must RESIST attempts to make our science less objective! We must PUSH BACK against attempts to insert ideology—any ideology—into our science! We must STRIVE to be as apolitical as we possibly can be! I know that I’m a human being with my own biases, and so are you - but objective science is humanity’s best tool for overcoming those biases, and arriving at SHARED KNOWLEDGE. We can do better - TOGETHER.

Hmm. I’m not much of a speech-writer, and that felt a little bit embarrassing. But remember well that cringey feeling: that’s exactly how you should feel the next time someone tells you—with a clear, yet unspoken, agenda—that “science is political”.

Friday 11 June 2021

Obscurantist India: Mired in the Past, Messing with the Present, Muddled about the Future



Parakala Prabhakar (born 2 January 1959) is an Indian political economist and commentator on political, economic, and social affairs. He served as Communications Advisor, a cabinet-rank position, in the Andhra Pradesh Government between July 2014 and June 2018. For several years he presented current affairs discussion programmes on television channels in Andhra Pradesh, including Pratidhwani on ETV2 and Namaste Andhra Pradesh on NTV.[1] He was also a former spokesman and one of the founding general secretaries of the Praja Rajyam Party.[2] In the early 2000s, Parakala was the spokesperson of the Andhra Pradesh unit of the BJP.[3] He is the spouse of the incumbent union Minister of Finance and Corporate Affairs of India, Nirmala Sitharaman.

Source: Wikipedia

 

Tuesday 1 June 2021

If the Wuhan lab-leak hypothesis is true, expect a political earthquake

 Thomas Frank in The Guardian


‘My own complacency on the matter was dynamited by the lab-leak essay that ran in the Bulletin of the Atomic Scientists earlier this month.’ Photograph: Thomas Peter/Reuters
 

There was a time when the Covid pandemic seemed to confirm so many of our assumptions. It cast down the people we regarded as villains. It raised up those we thought were heroes. It prospered people who could shift easily to working from home even as it problematized the lives of those Trump voters living in the old economy.

Like all plagues, Covid often felt like the hand of God on earth, scourging the people for their sins against higher learning and visibly sorting the righteous from the unmasked wicked. “Respect science,” admonished our yard signs. And lo!, Covid came and forced us to do so, elevating our scientists to the highest seats of social authority, from where they banned assembly, commerce, and all the rest.

We cast blame so innocently in those days. We scolded at will. We knew who was right and we shook our heads to behold those in the wrong playing in their swimming pools and on the beach. It made perfect sense to us that Donald Trump, a politician we despised, could not grasp the situation, that he suggested people inject bleach, and that he was personally responsible for more than one super-spreading event. Reality itself punished leaders like him who refused to bow to expertise. The prestige news media even figured out a way to blame the worst death tolls on a system of organized ignorance they called “populism.”

But these days the consensus doesn’t consense quite as well as it used to. Now the media is filled with disturbing stories suggesting that Covid might have come — not from “populism” at all, but from a laboratory screw-up in Wuhan, China. You can feel the moral convulsions beginning as the question sets in: What if science itself is in some way culpable for all this?

*

I am no expert on epidemics. Like everyone else I know, I spent the pandemic doing as I was told. A few months ago I even tried to talk a Fox News viewer out of believing in the lab-leak theory of Covid’s origins. The reason I did that is because the newspapers I read and the TV shows I watched had assured me on many occasions that the lab-leak theory wasn’t true, that it was a racist conspiracy theory, that only deluded Trumpists believed it, that it got infinite pants-on-fire ratings from the fact-checkers, and because (despite all my cynicism) I am the sort who has always trusted the mainstream news media.

My own complacency on the matter was dynamited by the lab-leak essay that ran in the Bulletin of the Atomic Scientists earlier this month; a few weeks later everyone from Doctor Fauci to President Biden is acknowledging that the lab-accident hypothesis might have some merit. We don’t know the real answer yet, and we probably will never know, but this is the moment to anticipate what such a finding might ultimately mean. What if this crazy story turns out to be true?

The answer is that this is the kind of thing that could obliterate the faith of millions. The last global disaster, the financial crisis of 2008, smashed people’s trust in the institutions of capitalism, in the myths of free trade and the New Economy, and eventually in the elites who ran both American political parties. 

In the years since (and for complicated reasons), liberal leaders have labored to remake themselves into defenders of professional rectitude and established legitimacy in nearly every field. In reaction to the fool Trump, liberalism made a sort of cult out of science, expertise, the university system, executive-branch “norms,” the “intelligence community,” the State Department, NGOs, the legacy news media, and the hierarchy of credentialed achievement in general.

Now here we are in the waning days of Disastrous Global Crisis #2. Covid is of course worse by many orders of magnitude than the mortgage meltdown — it has killed millions and ruined lives and disrupted the world economy far more extensively. Should it turn out that scientists and experts and NGOs, etc. are villains rather than heroes of this story, we may very well see the expert-worshiping values of modern liberalism go up in a fireball of public anger.

Consider the details of the story as we have learned them in the last few weeks:

  • Lab leaks happen. They aren’t the result of conspiracies: “a lab accident is an accident,” as Nathan Robinson points out; they happen all the time, in this country and in others, and people die from them.
  • There is evidence that the lab in question, which studies bat coronaviruses, may have been conducting what is called “gain of function” research, a dangerous innovation in which diseases are deliberately made more virulent. By the way, right-wingers didn’t dream up “gain of function”: all the cool virologists have been doing it (in this country and in others) even as the squares have been warning against it for years.
  • There are strong hints that some of the bat-virus research at the Wuhan lab was funded in part by the American national-medical establishment — which is to say, the lab-leak hypothesis doesn’t implicate China alone.
  • There seem to have been astonishing conflicts of interest among the people assigned to get to the bottom of it all, and (as we know from Enron and the housing bubble) conflicts of interest are always what trip up the well-credentialed professionals whom liberals insist we must all heed, honor, and obey.
  • The news media, in its zealous policing of the boundaries of the permissible, insisted that Russiagate was ever so true but that the lab-leak hypothesis was false false false, and woe unto anyone who dared disagree. Reporters gulped down whatever line was most flattering to the experts they were quoting and then insisted that it was 100% right and absolutely incontrovertible — that anything else was only unhinged Trumpist folly, that democracy dies when unbelievers get to speak, and so on.
  • The social media monopolies actually censored posts about the lab-leak hypothesis. Of course they did! Because we’re at war with misinformation, you know, and people need to be brought back to the true and correct faith — as agreed upon by experts.
*

“Let us pray, now, for science,” intoned a New York Times columnist back at the beginning of the Covid pandemic. The title of his article laid down the foundational faith of Trump-era liberalism: “Coronavirus is What You Get When You Ignore Science.”

Ten months later, at the end of a scary article about the history of “gain of function” research and its possible role in the still ongoing Covid pandemic, Nicholson Baker wrote as follows: “This may be the great scientific meta-experiment of the 21st century. Could a world full of scientists do all kinds of reckless recombinant things with viral diseases for many years and successfully avoid a serious outbreak? The hypothesis was that, yes, it was doable. The risk was worth taking. There would be no pandemic.”

Except there was. If it does indeed turn out that the lab-leak hypothesis is the right explanation for how it began — that the common people of the world have been forced into a real-life lab experiment, at tremendous cost — there is a moral earthquake on the way.

Because if the hypothesis is right, it will soon start to dawn on people that our mistake was not insufficient reverence for scientists, or inadequate respect for expertise, or not enough censorship on Facebook. It was a failure to think critically about all of the above, to understand that there is no such thing as absolute expertise. Think of all the disasters of recent years: economic neoliberalism, destructive trade policies, the Iraq War, the housing bubble, banks that are “too big to fail,” mortgage-backed securities, the Hillary Clinton campaign of 2016 — all of these disasters brought to you by the total, self-assured unanimity of the highly educated people who are supposed to know what they’re doing, plus the total complacency of the highly educated people who are supposed to be supervising them.

Then again, maybe I am wrong to roll out all this speculation. Maybe the lab-leak hypothesis will be convincingly disproven. I certainly hope it is.

But even if it inches closer to being confirmed, we can guess what the next turn of the narrative will be. It was a “perfect storm,” the experts will say. Who coulda known? And besides (they will say), the origins of the pandemic don’t matter any more. Go back to sleep.

Friday 1 January 2021

What we have learnt about the limits of science

Thiago Carvalho in The FT

Some years ago, on New Year’s Day, my wife and I noticed that our son, not yet two months old, was struggling to breathe — a laboured, wheezing effort was all he could manage — and we decided to face the holiday emergency room crush. After assessing his blood oxygen levels, the pediatrician said: “Pack a bag, you will be here all week. He will get worse. Then he will get better.”  

Our son had contracted something called respiratory syncytial virus, and it was replicating in his lungs. In a scenario similar to Covid-19, most healthy adults infected with RSV will experience a mild cold, or no symptoms at all. However, some unfortunate infants who contract RSV may suffer a devastating pulmonary infection. For those kids, there are no drugs available that can reliably stop, or even slow down, RSV’s relentless replication in the lungs. 

Instead, according to Mustafa Khokha, a pediatric critical care professor at Yale University, doctors first give oxygen and then if the child does not improve, there follows a series of progressively more aggressive procedures. “That’s all supportive therapy for the body to recover, as opposed to treatment against the virus itself,” says Khokha. Outstanding supportive care was what our son received, and the week unfolded exactly as his pediatrician predicted. (It was still the worst week of my life.)

For all the progress we have seen in 2020, a patient brought to the emergency room with severe Covid-19 will essentially receive the same kind of supportive care our son did — treatment to help the body endure a viral assault, but not effectively targeting the virus itself. The main difference will be the uncertain outcome — there will be no comforting, near-certain “he will get better” from the attending physician. 

Contrast that story with a different one. On a Tuesday morning in early December, in the English city of Coventry, Margaret Keenan, just a few days shy of her 91st birthday, became the first person in the world to receive the BioNTech/Pfizer Covid-19 vaccine outside of a clinical trial. The pace of progress was astonishing. It was less than a year since, in the closing moments of 2019, Chinese health authorities alerted the World Health Organization to an outbreak of a pneumonia of unknown cause in Hubei province.  

The Covid-19 pandemic has given us an accelerated tutorial on the promise and the limits of science. With vaccines, testing, epidemiological surveillance, we know where we are going, and we have a good idea how to get there. These are essentially challenges of technological development, reliant now on adequate resources and personnel and tweaking of regulatory frameworks. For other scientific challenges, though, there may be no gas pedal to step on — these include the prickly problems of arresting acute viral infection, or understanding how the virus and the host interact to produce disease. Science, as Nobel Prize-winning immunologist Peter Medawar put it, is the art of the soluble. 


In March, when, incredibly, the first human vaccine trials for Covid-19 were kicking off in Seattle, the WHO launched an ambitious clinical trial to try to identify existing pharmaceuticals that could show some benefit against Sars-Cov-2. In October, the WHO declared that all four arms of its Solidarity trial had essentially failed. The search for effective antiviral drugs has not lacked resources or researchers, but in contrast to the vaccine victories, it has yet to produce a single clear success story. The concentrated efforts of many of the world’s most capable scientists, relying on ample public support and private investment, are sometimes not enough to crack a problem. 

Perhaps nothing exemplifies this more clearly than what followed Richard Nixon’s signing of the National Cancer Act on December 23 1971. The act was cautiously phrased, but January’s State of the Union address declared an all-out war on cancer: “The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease.” The war on cancer would funnel almost $1.6bn to cancer labs over the next three years, and fuel expectations that a cure for the disease would be found before the end of the decade. Curing cancer remains, of course, an elusive target. In 2016, then vice-president Joe Biden presented the report of his own Cancer Moonshot task force. 

The success of the Apollo program planted the Moonshot analogy in the science policy lexicon. Some grand challenges in biology could properly be considered “moonshots”. The Human Genome Project was one example. Like the race to the Moon, it had a clear finish line: to produce a draft with the precise sequence of genetic letters in the 23 pairs of human chromosomes. This was, like the propulsion problems solved by Nasa en route to the Moon, a matter of developing and perfecting technology — technology that later would allow us to have a genetic portrait of the cause of Covid-19 in under two weeks.  

The cancer context was rather different. In the countdown to the war on cancer, Sol Spiegelman, the director of Columbia University’s Institute of Cancer Research, quipped that “an all-out effort at this time [to find a cure for cancer] would be like trying to land a man on the Moon without knowing Newton’s laws of gravity.” And so it proved. 

We now know quite a lot about how the body resists viral infections, certainly much more than we knew about the biology of cancer in 1971. Over 60 years ago, at London’s National Institute for Medical Research, Alick Isaacs and Jean Lindemann exposed fragments of chicken egg membranes to heat-inactivated influenza A virus. In a matter of hours, the liquid from these cultures acquired the capacity to interfere with the growth of not only influenza A, but other, unrelated viruses, as well. Isaacs and Lindemann named their factor interferon. Interferons are fleet-footed messengers produced and released by cells almost immediately upon viral infection. These molecules warn other host cells to ready themselves to resist a viral onslaught. 

Viruses rely on hijacking the normal cellular machinery to make more copies of themselves and interferons interfere with almost all stages of the process: from making it more difficult for the virus to enter cells, to slowing down the cellular protein factories required to make the viral capsule, to reducing the export of newly made viral particles. Interferons are now part of our pharmaceutical armoury for diseases as diverse as multiple sclerosis and cancer, as well as hepatitis C and other chronic viral infections. 

Multiple interferon-based strategies have been tried in the pandemic, from intravenous administration to nebulising the molecule so that the patient inhales an antiviral mist directly into the lungs. The results have been inconclusive. “A lot of it has to do with the timing,” says Yale immunologist Akiko Iwasaki, “the only stage that recombinant interferon might be effective is pre-exposure or early post-exposure, and it’s really hard to catch it for this virus, because everyone is pretty much asymptomatic at that time.”  


This year’s scramble for effective antiviral drugs led to a revival of other failed approaches. In 2016, a team of researchers from the United States Army Medical Research Institute of Infectious Diseases in Frederick, Maryland, and the biotech company Gilead Sciences reported that the molecule GS-5734 protected Rhesus monkeys from being infected with the Ebola virus. GS-5734, or remdesivir as it is now more familiarly known, unfortunately failed in clinical trials. This was a bona fide antiviral, backed up by demonstrations that the drug efficiently blocked an enzyme used by viruses to copy their genome. Ebola was already remdesivir’s third dead end: Gilead had previously tested GS-5734 against hepatitis C and RSV, and the results were disappointing. 

In late April, National Institute of Allergy and Infectious Diseases director Anthony Fauci, a member of the White House coronavirus task force, proclaimed that the US remdesivir trials had established “a new standard of care” for Covid-19 patients. As has happened repeatedly during the Covid-19 crisis, the data backing this claim had not been made public, nor had it, at the time, been peer-reviewed. 

Fauci explained that the drug had no significant effect on mortality, but claimed that remdesivir reduced hospitalisation times by about 30 per cent. It was the first piece of good news in a spring marked by global lockdowns. Unfortunately, results from a large-scale trial run by the WHO released in the autumn failed to support even the limited claims of the US study (Gilead has challenged the study’s design), and the WHO currently advises against giving remdesivir to Covid-19 patients.  

For those who do not naturally control Sars-Cov-2 infection, or who have not been vaccinated, the failure to repurpose or create effective antiviral agents leaves only supportive care. We are only beginning to understand the interplay of this new virus and human hosts. It is also a protean affliction, as sex, age, and pre-existing conditions all affect outcomes. The single clearest way to reduce mortality remains the dexamethasone treatment for patients requiring supplemental oxygen initially reported in the UK Recovery trial. It is not a direct attack on the virus, but a way to ameliorate the effects of infection and the immune response to it on the human body. Dexamethasone is, in a very real sense, supportive care. 

So what have we learned about the limits of science? First, we were reminded that spectacular successes are built on a foundation of decades of basic research. Even the novel, first-in-class vaccines are at the end of a long road. It was slow-going to get to warp speed. We learned that there are no shortcuts to deciphering how a new virus makes us sick (and kills us) and that there is no ignoring the importance of human diversity for cracking this code. Diabetes, obesity, hypertension — we are still finding our way through a comorbidity labyrinth. Most of all, we have learned an old lesson again: science is the art of the soluble. No amount of resources and personnel, no Manhattan Project, can ensure that science will solve a problem in the absence of a well-stocked toolbox and a solid, painstakingly built theoretical framework. 

South Korea recorded its first Covid-19 case on January 20. Eleven days later, Spain confirmed its first infection: a German tourist in the Canary Islands. Spain and South Korea have similar populations of about 50m people. As of publication of this piece, South Korea has had 879 deaths, while Spain reports over 50,000. The west missed its moment. Efficient testing, tracing and containment of Covid-19 was a soluble technological and organisational problem. Here too, we can hear echoes of the war on cancer. The biggest single reduction in cancer mortality did not come from a miracle drug. It was the drop in lung cancer deaths, due to what we could call the war on tobacco. Perhaps Dr Spiegelman might concede that even if we don’t have a law of gravity, we do have a first law of medicine: always start with prevention. 
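The gap in that comparison is easier to see per capita; a minimal sketch using only the article's own figures (the roughly 50m populations and the death counts quoted above):

```python
# Deaths per 100,000 people, using the population (~50m each) and the
# death counts quoted in the article for South Korea and Spain.

def deaths_per_100k(deaths: int, population: int) -> float:
    return deaths / population * 100_000

korea = deaths_per_100k(879, 50_000_000)
spain = deaths_per_100k(50_000, 50_000_000)

print(round(korea, 1))       # 1.8 per 100k
print(round(spain, 1))       # 100.0 per 100k
print(round(spain / korea))  # roughly a 57-fold difference
```

On these rough numbers the two countries, starting eleven days apart, ended the year separated by a factor of more than fifty in per-capita mortality.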

Covid-19 has pushed science to its limits and, in some cases, sharply outlined its borders. This century’s first pandemic finds humanity, with its transport hubs and supply chains, more vulnerable to a new pathogen. But virology, immunology, critical care medicine and epidemiology, to name a few, have progressed immeasurably since 1918. Unfortunately, in a public health emergency, the best science must be used to inform the best policies. In the seasonal spirit of charity, let us say that that has not always been the case in our pandemic year. 

Tuesday 4 August 2020

Using HCQ for Covid - Is it Cheating the Ignorant Patient?

By Girish Menon

My piece ‘Does Modern Medicine have a Platypus Problem?’ unleashed a 'minor storm in a teacup'. So, to improve my own understanding, I write these words in the hope that some patient reader will spare some time to clear my doubts.

In the immediate aftermath of my piece, a friend* suggested that using Dr. Immanuel's prescription to treat Covid was similar to using semen to cure Covid.

Another friend provided a slide showing the negative effect on countries not using HCQ. This data, according to a third friend, was fake news.

In the meantime:

The BBC carried an ad hominem article on Dr. Stella Immanuel stating that she was a pastor who had made wild claims about aliens in the past.

The WHO carried out a study which claimed that HCQ (hydroxychloroquine) was ineffective in the treatment of Covid. However, the WHO on the same page also stated: "The decision to stop hydroxychloroquine’s use in the Solidarity trial does not apply to the use or evaluation of hydroxychloroquine in pre or post-exposure prophylaxis in patients exposed to COVID-19" (sic).


Yesterday another friend announced that her friend in Mumbai had recovered from Covid. During the illness she was given HCQ.

So, I asked this friend ‘does that mean HCQ cured her of Covid?’

She replied, ‘I don't know. She had tested negative for Covid. Her symptoms started with a rash which was not a symptom of Covid and yet her doctor diagnosed her condition as a Covid attack.’

‘So does this mean that at least there could be a positive correlation between HCQ and Covid treatment?’

‘I don't know’

‘Suppose you were in Mumbai, contracted Covid, and a doctor you trust prescribed HCQ, would you take it?’

‘Yes’

‘Now in a thought experiment, suppose you were teleported to Cambridge, say four days later, still having Covid, and the GP does not prescribe HCQ?’

‘I will obey the Milton physician.’


All these discussions reminded me of Omar Khayyam's "Myself when young did eagerly frequent doctor and saint, and heard great argument about it and about: but evermore came out by the same door as in I went."

And my questions remain:

What conclusion should a layman draw about HCQ and Covid?

Should I take HCQ as a prophylactic?

---

* All friends quoted in the article are related to science and medicine.

Thursday 30 July 2020

A coronavirus vaccine could split America

In the battle between public science and anti-vaxxer sentiment, science is heavily outgunned, writes Edward Luce in The FT

It is late October and Donald Trump has a surprise for you. Unlike the traditional pre-election shock — involving war or imminent terrorist attack — this revelation is about hope rather than fear. The “China virus” has been defeated thanks to the ingenuity of America’s president. The US has developed a vaccine that will be available to all citizens by the end of the year. Get online and book your jab.  

It is possible Mr Trump could sway a critical slice of voters with such a declaration. The bigger danger is that he would deepen America’s mistrust of science. A recent poll found that only half of Americans definitely plan to take a coronavirus vaccine. Other polls said that between a quarter and a third of the nation would never get inoculated. 

Whatever the true number, anti-vaccine campaigners are having a great pandemic — as indeed is Covid-19. At least three-quarters of the population would need to be vaccinated to reach herd immunity. 
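The three-quarters figure follows from the standard herd-immunity arithmetic; a minimal sketch of that textbook formula (the R0 value of 4 is an illustrative assumption, not a figure from this article):

```python
# Classic herd-immunity threshold: the fraction of the population that must
# be immune so that each case infects, on average, fewer than one new person.
# This is the standard epidemiological formula, threshold = 1 - 1/R0.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 dies out on its own
    return 1.0 - 1.0 / r0

# For an assumed R0 of 4, the threshold is 1 - 1/4 = 0.75,
# i.e. the three-quarters of the population mentioned above.
print(herd_immunity_threshold(4.0))  # 0.75
```

The higher the basic reproduction number, the closer the threshold creeps to 100 per cent, which is why a large pool of vaccine refusers matters so much.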

Infectious diseases thrive on mistrust. It is hard to imagine a better Petri dish than today’s America. Some of the country’s “vaccine hesitancy” is well grounded. Regulators are under tremendous pressure to let big pharma shorten clinical trials. That could lead to mistakes. 

Vaccine nationalism is not just about rich governments pre-ordering as many vials as they can. It is also about winning unimaginably large bragging rights in the race to save the world. Cutting immunological corners could be dangerous to public health. 

Such caution accounts for many of those who would hesitate to be injected. The rest are captured by conspiracy theories. In the battle between public science and anti-vaxxer sentiment, science is heavily outgunned. It faces a rainbow coalition of metastasising folk suspicions on both the left and the right. Public health messages are little match for the memology of social media opponents. 

It is that mix of technological savvy and intellectual derangement that drives today’s politics. Mr Trump did not invent postmodern quackery — though he has endorsed some life-threatening remedies. The irony is that he could fall victim to the mistrust he has stoked.  

Should an effective vaccine loom into view before the US goes to the polls in 95 days, Mr Trump would not be the ideal person to inform the country. The story is as old as the boy who cried wolf. Having endorsed the use of disinfectants and hydroxychloroquine, Mr Trump has forfeited any credibility. Validation should come from Anthony Fauci, America’s top infectious-diseases expert, whose trust ratings are almost double those of the president he serves. 

Even then, however, the challenge would only just be starting. There is no cause to doubt the world-beating potential of US scientific research. There are good reasons to suspect the medical establishment’s ability to win over public opinion. 

The modern anti-vaxxer movement began on the left. It is still going strong. It follows the “my body is my temple” philosophy. Corporate science cannot be trusted to put healthy things into our bodies. The tendency for modern parents to award themselves overnight Wikipedia degrees in specialist fields is also to blame. 

Not all of this mistrust is madcap. African Americans have good reason to distrust public health after the Tuskegee experiments, in which hundreds of black men with syphilis were left untreated for decades, even after penicillin became available. Polls show that more blacks than whites would refuse a coronavirus vaccine. Given their higher likelihood of exposure, such mistrust has tragic potential. 

But rightwing anti-vaxxers have greater momentum. America’s 19th century anti-vaccination movements drew equally from religious paranoia that vaccines were the work of the devil and a more general fear that liberty was under threat. Both strains have resurfaced in QAnon, the virtual cult that believes America is run by a satanic deep state that abuses children. 

It would be hard to invent a more unhinged account of how the world works. Yet Mr Trump has retweeted QAnon-friendly accounts more than 90 times since the pandemic began. Among QAnon’s other theories is that Covid-19 is a Dr Fauci-led hoax to sink Mr Trump’s chances of being re-elected. Science cannot emulate such imaginative forms of storytelling. 

All of which poses a migraine for the silent majority that would happily take the vaccine shots. Their lives are threatened both by a pandemic and by an infodemic. It is a bizarre feature of our times that the first looks easier to solve than the second. 

Wednesday 29 July 2020

Does Modern Medicine have a Platypus Problem?

By Girish Menon

“Early zoologists classified as mammals those that suckle their young and as reptiles those that lay eggs. Then a duck-billed platypus was discovered in Australia laying eggs like a perfect reptile and then, when they hatched, suckling the infant like a perfect mammal.
The discovery created quite a sensation. What an enigma! it was exclaimed. What a mystery! What a marvel of nature! When the first stuffed specimens reached England from Australia around the end of the eighteenth century they were thought to be fakes made by sticking together bits of different animals. Even today you still see occasional articles in nature magazines asking ‘Why does this paradox of nature exist?’.

The answer is: it doesn’t. The Platypus isn’t doing anything paradoxical at all. It isn’t having any problems. Platypuses have been laying eggs and suckling their young for millions of years before there were any zoologists to come along and declare it illegal. The real mystery, the real enigma, is how mature, objective, trained scientific observers can blame their own goof on a poor innocent platypus.” Robert Pirsig in Zen and the Art of Motorcycle Maintenance


I wondered if this is the attitude of modern medicine towards primary care physician Dr. Stella Immanuel for her recommendation of hydroxychloroquine as a panacea for the Covid-19 virus.



I discussed Dr. Immanuel's prescription with more than one practitioner of modern medicine and they were all unanimous in their condemnation of Dr. Immanuel’s self-publicity approach of making a film with many white-coated authority figures in the background. 'She could have presented her data for scrutiny' and 'her claims will not qualify as level 2 evidence' were some of their verdicts.

Hydroxychloroquine, unfortunately, has become a highly political drug which has divided opinion on liberal v conservative lines. ‘Big Pharma’ has also been accused of trying to destroy a cheap solution to the raging coronavirus problem.

In the UK, modern medicine’s ‘success’ in combating Covid-19 has resulted in over 50,000 deaths and delayed treatment of all other life-threatening ailments. Decision-making has been a series of flip-flops and U-turns and is best illustrated by the Telegraph’s Blower.





I wondered if some of the decisions taken by modern medicine on the lockdown and thereafter are backed by the same level of evidence demanded of Dr. Immanuel and her panacea.

I am willing to take a sceptical approach to Dr Immanuel as well as to the science-based responses of the Boris Johnson government.

But, I also wondered if modern science and medicine ever consider that they too may suffer from the platypus problem?

Saturday 6 June 2020

Scientific or Pseudo Knowledge? How Lancet's reputation was destroyed

The now retracted paper halted hydroxychloroquine trials. Studies like this determine how people live or die tomorrow, writes James Heathers in The Guardian

 



The Lancet is one of the oldest and most respected medical journals in the world. Recently, it published a paper on Covid patients receiving hydroxychloroquine with a dire conclusion: the drug increases heartbeat irregularities and decreases hospital survival rates. This result was treated as authoritative, and major drug trials were immediately halted – because why treat anyone with an unsafe drug?

Now, that Lancet study has been retracted, withdrawn from the literature entirely, at the request of three of its authors who “can no longer vouch for the veracity of the primary data sources”. Given the seriousness of the topic and the consequences of the paper, this is one of the most consequential retractions in modern history.


It is natural to ask how this is possible. How did a paper of such consequence get discarded like a used tissue by some of its authors only days after publication? If the authors don’t trust it now, how did it get published in the first place?

The answer is quite simple. It happened because peer review, the formal process of reviewing scientific work before it is accepted for publication, is not designed to detect anomalous data. It makes no difference if the anomalies are due to inaccuracies, miscalculations, or outright fraud. This is not what peer review is for. While it is the internationally recognised badge of “settled science”, its value is far more complicated.

At its best, peer review is a slow and careful evaluation of new research by appropriate experts. It involves multiple rounds of revision that removes errors, strengthens analyses, and noticeably improves manuscripts.

At its worst, it is merely window dressing that gives the unwarranted appearance of authority, a cursory process which confers no real value, enforces orthodoxy, and overlooks both obvious analytical problems and outright fraud entirely.

Regardless of how any individual paper is reviewed – and the experience is usually somewhere between the above extremes – the sad truth is peer review in its entirety is struggling, and retractions like this drag its flaws into an incredibly bright spotlight.

The mechanics of this problem are well known. To start with, peer review is entirely unrewarded. The internal currency of science consists entirely of producing new papers, which form the cornerstone of your scientific reputation. There is no emphasis on reviewing the work of others. If you spend several days in a continuous back-and-forth technical exchange with authors, trying to improve their manuscript, adding new analyses, shoring up conclusions, no one will ever know your name. Neither are you paid. Peer review originally fitted under an amorphous idea of academic “service” – the tasks that scientists were supposed to perform as members of their community. This is a nice idea, but is almost invariably maintained by researchers with excellent job security. Some senior scientists are notorious for peer reviewing manuscripts rarely or even never – because it interferes with the task of producing more of their own research.

However, even if reliable volunteers for peer review can be found, it is increasingly clear that it is insufficient. The vast majority of peer-reviewed articles are never checked for any form of analytical consistency, nor can they be – journals do not require manuscripts to have accompanying data or analytical code and often will not help you obtain them from authors if you wish to see them. Authors usually have zero formal, moral, or legal requirements to share the data and analytical methods behind their experiments. Finally, if you locate a problem in a published paper and bring it to either of these parties, often the median response is no response at all – silence.

This is usually not because authors or editors are negligent or uncaring. Usually, it is because they are trying to keep up with the component difficulties of keeping their scientific careers and journals respectively afloat. Unfortunately, those goals are directly in opposition – authors publishing as much as possible means back-breaking amounts of submissions for journals. Increasingly time-poor researchers, busy with their own publications, often decline invitations to review. Subsequently, peer review is then cursory or non-analytical.

And even still, we often muddle through. Until we encounter extraordinary circumstances.






Peer review during a pandemic faces a brutal dilemma – the moral importance of releasing important information with planetary consequences quickly, versus the scientific importance of evaluating the presented work fully – while trying to recruit scientists, already busier than usual due to their disrupted lives, to review work for free. And, after this process is complete, publications face immediate scrutiny by a much larger group of engaged scientific readers than usual, who treat publications which affect the health of every living human being with the scrutiny they deserve.

The consequences are extreme. The consequences for any of us, on discovering a persistent cough and respiratory difficulties, are directly determined by this research. Papers like today’s retraction determine how people live or die tomorrow. They affect what drugs are recommended, what treatments are available, and how we get them sooner.

The immediate solution to this problem of extreme opacity, which allows flawed papers to hide in plain sight, has been advocated for years: require more transparency, mandate more scrutiny. Prioritise publishing papers which present data and analytical code alongside a manuscript. Re-analyse papers for their accuracy before publication, instead of just assessing their potential importance. Engage expert statistical reviewers where necessary, pay them if you must. Be immediately responsive to criticism, and enforce this same standard on authors. The alternative is more retractions, more missteps, more wasted time, more loss of public trust … and more death.

Thursday 4 June 2020

Genetics is not why more BAME people die of coronavirus: structural racism is

Yes, more people of black, Latin and south Asian origin are dying, but there is no genetic ‘susceptibility’ behind it, writes Winston Morgan in The Guardian 


 
A TfL worker sprays antiviral solution inside a tube train. Photograph: Kirsty O’Connor/PA


From the start of the coronavirus pandemic, there has been an attempt to use science to explain the disproportionate impact of Covid-19 on different groups through the prism of race. Data from the UK and the US suggests that people categorised as black, Hispanic (Latino) and south Asian are more likely to die from the disease.

The way this issue is often discussed, but also the response of some scientists, would suggest that there might be some biological reason for the higher death rates based on genetic differences between these groups and their white counterparts. But the reality is there is no evidence that the genes used to divide people into races are linked to how our immune system responds to viral infections.

There are certain genetic mutations that can be found among specific ethnic groups that can play a role in the body’s immune response. But because of the loose definition of race (primarily based on genes for skin colour) and recent population movements, these should be seen as unreliable indicators when it comes to susceptibility to viral infections. 

Indeed, race is a social construct with no scientific basis. However, there are clear links between people’s racial groups, their socioeconomic status, what happens to them once they are infected, and the outcome of their infection. And focusing on the idea of a genetic link merely serves to distract from this.

You only have to look at how the statistics are gathered to understand how these issues are confused. Data from the UK’s Office for National Statistics that has been used to highlight the disparate death rates separates Indians from Pakistanis and Bangladeshis, and yet groups together all Africans (including black Caribbeans). This makes no sense in terms of race, ethnicity or genetics.

The data shows that males categorised as black are more than 4.6 times more likely than their white counterparts to die from the virus. They are followed by Pakistanis/Bangladeshis (just over four times more likely to die), and then Chinese and Indians (just over 2.5 times).

Most genome-wide association studies group all south Asians together. Yet, at least in the UK, Covid-19 can apparently separate Indians from Pakistanis, suggesting genetics have little to do with it. The categories used to collect government data for the pandemic are far better suited to tracking social outcomes such as employment or education.

This problem arises even with a recent analysis that purportedly shows people from ethnic minorities are no more likely to die, once you take into account the effects of other illnesses and deprivation. The main analysis only compares whites to everybody else, masking the data for specific groups, while the headline of the newspaper article about the study refers only to black people.
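The masking effect described above can be sketched numerically: a pooled "white versus everybody else" comparison averages high-rate and low-rate groups together, so the pooled ratio understates the risk faced by the worst-affected group. The figures below are invented for illustration only; they are not the study's data.

```python
# Hypothetical age-standardised death rates (per 100,000) and population
# shares for three minority groups -- illustrative numbers only.
groups = {
    "group_A": {"rate": 250, "share": 0.25},  # badly affected
    "group_B": {"rate": 220, "share": 0.35},  # badly affected
    "group_C": {"rate": 90,  "share": 0.40},  # barely affected
}
white_rate = 100  # hypothetical comparison rate

# A pooled "non-white" rate is just a population-weighted average,
# so the low-rate group drags it down.
pooled = sum(g["rate"] * g["share"] for g in groups.values())

for name, g in groups.items():
    print(f"{name}: {g['rate'] / white_rate:.1f}x the white rate")
print(f"pooled:  {pooled / white_rate:.1f}x the white rate")
```

With these numbers the worst-affected group is 2.5 times the white rate, but the pooled comparison reports only about 1.8 times, hiding the disparity the specific-group breakdown reveals.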

Meanwhile, in the US the groups most disproportionately affected are African Americans and Hispanics/Latinos. These groups descend from very different ancestral populations. We’ve also seen high death rates in Brazil, China and Italy, all of which have very different populations using the classical definition of race.

The idea that Covid-19 discriminates along traditional racial lines is created by these statistics and fails to adequately portray what’s really going on. These kinds of assumptions ignore the fact that there is as much genetic variation within racialised groups as there is between the whole human population.

There are some medical conditions with a higher prevalence in some racialised groups, such as sickle cell anaemia, and differences in how some groups respond to certain drugs. But these are traits linked to single genes and all transcend the traditional definitions of race. Such “monogenic” traits affect a very small subset of many populations, such as some southern Europeans and south Asians who also have a predisposition to sickle cell anaemia.

Death from Covid-19 is also linked to pre-existing conditions that appear in higher levels in black and south Asian groups, such as diabetes. The argument that this may provide a genetic underpinning is only partly supported by the limited evidence that links genetics to diabetes.

However, the ONS figures confirm that genes predisposing people to diabetes cannot be the same as those that predispose to Covid-19. Otherwise, Indians would be affected as much as Pakistanis and Bangladeshis, who belong to the same genome-wide association group.

Any genetic differences that may predispose you to diabetes are heavily influenced by environmental factors. There isn’t a “diabetes gene” linking the varying groups that are affected by Covid-19. But the prevalence of these so-called “lifestyle” diseases in racialised groups is strongly linked to social factors.

Another target that has come in for speculation is vitamin D deficiency. People with darker skin who do not get enough exposure to direct sunlight may produce less vitamin D, which is essential for many bodily functions, including the immune system. In terms of a link to susceptibility to Covid-19, this has not been proven. But very little work on this has been done and the pandemic should prompt more research on the medical consequences of vitamin D deficiency generally.

Other evidence suggests higher death rates from Covid-19, including among racialised groups, might be linked to higher levels of a cell surface receptor molecule known as ACE2. But this can result from taking drugs for diabetes and hypertension, which takes us back to the point about the social causes of such diseases.

In the absence of any genetic link between racial groups and susceptibility to the virus, we are left with the reality, which seems more difficult to accept: that these groups are suffering more from how our societies are organised. There is no clear evidence that higher levels of conditions such as type-2 diabetes, cardiovascular disease and weakened immune systems in disadvantaged communities are because of inherent genetic predispositions.

But there is evidence they are the result of structural racism. All these underlying problems can be directly connected to the food and exercise you have access to, the level of education, employment, housing, healthcare, economic and political power within these communities.

The evidence suggests that this coronavirus does not discriminate, but highlights existing discriminations. The continued prevalence of ideas about race today – despite the lack of any scientific basis – shows how these ideas can mutate to provide justification for the power structures that have ordered our society since the 18th century.