Stuart Ritchie
[Image: The statue of David Hume on Edinburgh’s Royal Mile]
Imagine you heard a scientist saying the following:
I’m being paid massive consultation fees by a pharmaceutical company who want the results of my research to turn out in one specific way. And that’s a good thing. I’m proud of my conflicts of interest. I tell all my students that they should have conflicts if possible. On social media, I regularly post about how science is inevitably conflicted in one way or another, and how anyone criticising me for my conflicts is simply hopelessly naive.
I hope this would at least cause you to raise an eyebrow. And that’s because, although this scientist is right that conflicts of interest of some kind are probably inevitable, conflicts are still a bad thing.
We all know how biases can affect scientists: failing to publish studies that don’t go their way; running or reporting the stats in ways that push the results in a favoured direction; harshly critiquing experiments they don’t like while letting equally-bad, but more sympathetic, ones off the hook. Insofar as a conflict of interest makes any of these (often unconscious) biases more likely, it’s not something to be proud of.
And that’s why we report conflicts of interest in scientific papers - both because it helps the reader understand where a particular study is coming from, and because it would be embarrassing if someone found out about them after the fact when nothing had been said. We also take steps to ensure that our conflicts don’t affect our research - we do double-blinding; we do replications; we post our data online; we try and show the world that the results would’ve been the results, regardless of what we were being paid by Big Pharma.
We can also all agree that conflicts of interest aren’t just financial. They can be personal - maybe you’re married to someone who would benefit if your results turn out a particular way. They can be reputational - maybe you’re the world’s no.1 proponent of Theory X, and would lose prestige if the results of this study didn’t support it. And they can be political - you can have a view of the world that comports with the research turning out one way, but not another.
When it comes to political conflicts of interest, I’ve noticed something very strange. I’ve noticed that, instead of treating them like other kinds of conflicts—where you put your hands up and admit to them but then do your best to make sure they don’t influence your science—scientists sometimes revel in political conflicts. Like the fictional conflicted scientist quoted above, they ostentatiously tell us that they’re being political and they don’t care: “don’t you know”, they scoff, “that science and politics are inseparable?”
Indeed, this phrase—“Science and Politics are Inseparable”—was the title of a Nature editorial in 2020, and it’s not hard to find other examples in popular-science publications:
Science Has Always Been Inseparable From Politics (Scientific American)
News Flash: Science Has Always Been Political (American Scientist)
Science Is Political (Chemistry World)
Yes, Science Is Political (Scientific American)
When Nature, Science, the New England Journal of Medicine, and Scientific American all either strongly criticised the Trump administration, or explicitly endorsed Joe Biden for US President during the 2020 election campaign, they were met with surprise from many who found it unsettling to see scientific publications so openly engaging in politics. The response from their defenders? “Don’t you know science is political?”.
What does “science is political” mean?
Here’s a (non-exhaustive) list of what people might mean when they say “science is political”:
The things scientists choose to study can be influenced by their political views of what’s important;
The way scientists interpret data from scientific research can often be in line with their pre-existing political views;
Since scientists are human, it’s impossible for them to be totally objective - anything they do is always going to be tainted by political views and assumptions;
It’s easy for scientists to forget that human subjectivity influences a great many aspects of science - even things like algorithms which might seem objective but often recapitulate the biases of their human creators;
Even the choice to use science—as opposed to some other way of knowing—in the first place is influenced by our political and cultural perspective;
A lot of science is funded by the taxpayer, via governments, which are run by political parties who set the agenda. Non-governmental funders of science can also have their own political agendas;
People of different political persuasions hold predictable views on controversial scientific topics (e.g. global warming, COVID vaccines, nuclear power, and so on);
Politicians, or those engaged in political debate, regularly use “science” to back up their points of view in a cynical, disingenuous way, often by cherry-picking studies or relying on any old thing that supports them, regardless of its quality.
There’s no argument from me about any of those points. These are all absolutely true. I wrote a whole book about how biases, some of them political, can dramatically affect research in all sorts of ways. But these are just factual statements - and I don’t think the people who always tell you that “science is political” are just idly chatting about the sociology of science for the fun of it. They want to make one of two points.
1. The argument from inevitability
The first point they might be making is what we might call the argument from inevitability. “There’s no way around it. You’re being naive if you think you could stop science from being political. It’s arrogance in the highest degree to think that you are somehow being ‘objective’, and aren’t a slave to your biases.”
But this is a weirdly black-and-white view. It’s not just that something “is political” (say, a piece of research done by the Pro-Life Campaign Against Abortion which concludes that the science proves human life starts at conception) or “is not political” (say, a piece of research on climate change run by Martians who have no idea about Earth politics). There are all sorts of shades of grey - and our job is to get as close to the “not political” end as possible, even in the knowledge that we might never fully get there.
Indeed, there’s a weird reverse-arrogance in the argument from inevitability. As noted by Scott Alexander at Astral Codex Ten:
Talking about the impossibility of true rationality or objectivity might feel humble - you're admitting you can't do this difficult thing. But analyzed more carefully, it becomes really arrogant. You're admitting there are people worse than you - Alex Jones, the fossil fuel lobby, etc. You're just saying it's impossible to do better. You personally - or maybe your society, or some existing group who you trust - are butting up against the light speed limit of rationality and objectivity.
Let’s restate this using a scientific example. We can all agree that Trofim Lysenko’s Soviet agriculture is among the worst examples of politicised science in history - a whole pseudoscientific ideology that denied the basic realities of evolution and genetic transmission, and replaced them with techniques based on discredited ideas like the “inheritance of acquired characteristics”, helping to exacerbate famines that killed millions in the Soviet Union and China. That’s pretty much as bad as politicised science gets (you can bet your bottom ruble, by the way, that Lysenko himself thought that “science is political”).
If you think you’re better than Lysenko in terms of keeping politics out of your science (and let’s face it, you totally do think this), you’re already agreeing that there are gradations. And if you agree that there are gradations, it would be daft—or highly conceited—to think that nobody could ever do a better job than you. Thus, you probably do agree that we could always try and improve our level of objectivity in science.
(By the way, by “objectivity” I mean scientific results that would look the same regardless of the observer, so long as that observer had the right level of training and/or equipment to see them. In the case of Lysenkoism, the “science” was highly idiosyncratic to Lysenko - things could’ve been entirely different if we ran the tape of history again with Lysenko removed. In the case of, say, the double-helix structure of DNA, we could be pretty confident that, were there to have been no Watson or Crick or Franklin or Wilkins, someone would’ve eventually still made that same discovery).
We already have a system that attempts to improve objectivity. The whole edifice of scientific review and publication—heck, the whole edifice of doing experiments, as opposed to just relying on your gut instinct—is an attempt to infuse some degree of objectivity into the process of discovering stuff about the world. I think that system of review and publication is a million miles from perfect (again, I wrote a book about this), but that’s just another way of saying: “the objectivity of the system could be improved”.
And it could be. If scientists shared all their code and data by default, the process would be a little more objective. If scientists publicly pre-registered their hypotheses before they looked at the data, the process would be a little more objective. If science funders used lotteries to award grant funding, the process would be a little more objective. And so on. In each of these cases—none of which give us perfect objectivity, of course, but which just inch us a little closer to it—we’d also move further away from a world where scientists’ subjective views, political or otherwise, influenced their science.
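To make one of those mechanisms concrete, here’s a minimal sketch in Python of how a funding lottery could be made auditable - the applicant IDs and seed are entirely hypothetical, and this is an illustration, not any real funder’s procedure. The idea: if the funder publishes the list of eligible applications and commits to a seed before the draw, anyone can re-run the draw and verify that nobody’s thumb was on the scale.

```python
import random

def grant_lottery(eligible_ids, n_awards, seed):
    """Draw winners from a pre-published list of eligible applications.

    Using a pre-committed, public seed makes the draw reproducible:
    anyone can re-run it and confirm the result wasn't nudged towards
    favoured applicants.
    """
    rng = random.Random(seed)  # deterministic given the published seed
    return sorted(rng.sample(eligible_ids, n_awards))

# Hypothetical example: applicant IDs and seed are published in advance,
# so the outcome is fully determined by public information.
applicants = ["APP-001", "APP-002", "APP-003", "APP-004", "APP-005"]
print(grant_lottery(applicants, n_awards=2, seed=20200901))
```

The design choice here is the point: once the inputs are public and the procedure is deterministic, the awarding step simply has no room for anyone’s political preferences.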
The fact that we can’t get rid of those subjective views altogether can serve a useful purpose: there’s a good argument for having a pluralist setup where people of all different views and perspectives and backgrounds contribute to the general scientific “commons”, and in doing so help debate, test, and refine each other’s ideas. But that’s still not an argument against each of those different people trying to be as objective as they can, within their own set of inevitable, human limitations.
After a decade of discussion about the replication crisis, open science, and all the ways we could reform the way we do research, we’re more aware than ever of how biases can distort things - but also how we can improve the system. So throwing up our hands and saying “science is always political! There’s nothing we can do!” is the very last thing we want to be telling aspiring scientists, who should be using and developing all these new techniques to improve their objectivity.
Not only is the argument from inevitability mistaken. Not only is it black-and-white thinking. It’s also pure cheems mindset - reflexive defeatism. Even if we can’t be perfect, it’s possible to be better - and that’s the kind of progressive message that all new scientists need to hear.
2. The activist’s argument
The second point that people might be making when they say that “science is political” is what we could call the activist’s argument. “The fact that science is political isn’t just an inevitability, but it’s good. We should all be using our science to make the world a better place (according to my political views), and to the extent that people are using science to make the world worse (according to my political views), we should stop them. All scientists should be political activists (who agree with my political views)”.
If my opening example of the scientist who’s proud of his or her conflict of interest moved you at all, you already have antibodies to this idea. You should ask what the difference is between a financial conflict of interest and an ideological one.
The activist’s argument is often invoked in response to other people politicising science. For example, after the recent mass shooting in Buffalo, New York, it was discovered that the white nationalist gunman had written a manifesto that referenced some papers from population- and behaviour-genetics research. This led to explicit calls to make genetics more political in the opposite direction (including banning some forms of research that are deemed too controversial). An article in WIRED argued that, in the wake of the killings:
…scientists can no longer justify silence in the name of objectivity or use the escape tactic of “leaving politics out of science.”
This argument—which is effectively stating that two wrongs do make a right—seems terribly misguided to me. If you think it’s bad that politics are being injected into science, it’s jarringly nonsensical to argue that “leaving politics out of science” is a bad thing. Isn’t the more obvious conclusion that we should endeavour to lessen the influence of politics and ideology on science across the board? If you think it’s bad when other people do it, you should think it’s bad when you do it yourself.
Of course, a lot of people don’t think it’s bad - they only think it’s bad when their opponents do it. They want to push their own political agenda and just happen to be working in science (witness all the biologists—why is it always biologists?—who advertise their socialism, or even include a little hammer and sickle, in their Twitter bio; or on the other hand, witness all the people complaining about “wokeness” invading science who don’t bat an eyelid when right-wingers push unscientific views about COVID or climate change). There’s probably little I can do to argue round anyone who is happy to mix up their politics and their science in this way.
But there are a lot of well-meaning, otherwise non-ideological people who use the argument too. At best, by repeating “science is political” like a mantra, they’re just engaging in the usual social conformism that we all do to some extent. At worst, they’re providing active cover for those who want to politicise science (“everyone says science is inevitably political, so why can’t I insert my ideology?”).
If you explicitly encourage scientists to be biased in a particular direction, don’t be surprised if you start getting biased results. We all know that publication bias and p-hacking occur when scientists care more about the results of a scientific study than the quality of its methods. Do we think that telling scientists that it’s okay to be ideological when doing research would make this better, or worse?
If you encourage scientists to focus on the “greater good” of their political ideology rather than the science itself, don’t be surprised if the incentives change. Don’t be surprised if they get sloppy - what are a few mistakes if it all goes toward making the world a better place? And don’t be surprised if some of them break the rules - I’ve heard enough stories of scientific fraudsters who had a strong, pre-existing belief in their theory, and after they couldn’t see it in the results from their experiment, proceeded to give the numbers a little “push” in the “right” direction. Do we think a similar dynamic is more, or less likely to evolve if we tell people it’s good to put their ideology first?
If we encourage scientists to bring their political ideology to the lab, do we think groupthink—a very common human problem which in at least some scientific fields seems to have stifled debate and held back progress—will get better, or worse?
And finally, think about the effect on people who aren’t scientists, but who read or rely on scientific results. Scientists loudly and explicitly endorsing political positions certainly isn’t going to help those on the opposite side of the political aisle to take science more seriously (there’s some polling evidence for this). Not only that, but the suggestion that some results might be being covered up for political reasons can be perfect tinder for conspiracy theories (remember what happened during the Climategate scandal).
A better way
When scientific research is misappropriated for political ends, either by extremists or by more mainstream figures, the answer isn’t to drop all attempts at objectivity. The answer is to get as far away from politics as we can. Instead of saying “science is political - get over it”, we could say:
We’ll redouble our efforts to make our results transparent and our interpretations clear - we’ll ensure that we explain in detail why the conclusions being drawn by political actors aren’t justified based on the evidence;
We’ll make sure that what we think are incorrect interpretations are clearly described and refuted;
We’ll do the scientific equivalent of putting our results in a blind trust, by using the kinds of practices discussed above (open data, pre-registration, code sharing) and others, to lessen the effect of our pre-existing views and ensure that others can easily check our results (see the sketch after this list for one minimal way to do this);
We’ll tighten up processes like peer review so that there’s an even more rigorous quality filter on new scientific papers. If they’re subjected to more scrutiny, any bad or incorrect results that are the focus of political worries should be more likely to fall by the wayside;
We’ll expand our definition of a conflict of interest, and be more open about when our personal politics, affiliations, memberships, religious beliefs, employments, relationships, commitments, previous statements, diets, hobbies, or anything else relevant might influence the way we do our research;
We’ll stop broadcasting the idea that it’s good to be ideological in science, and in fact we’ll make being ostentatiously ideological about one’s results at least as shameful as p-hacking, or publishing a paper with a glaring typo in the title;
We’ll restate our commitment to open inquiry and academic freedom, making sure that we keep an open—though highly critical and sceptical—mind when assessing anyone’s scientific claims.
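As promised above, here’s a minimal sketch of the “blind trust” idea - again in Python, with an entirely hypothetical analysis plan. The trick is a simple commitment: hash the pre-registered plan and publish the digest before collecting any data, so that any later change to the plan would be detectable.

```python
import hashlib
import json

def commit_to_plan(plan):
    """Return a SHA-256 digest of a pre-registered analysis plan.

    Publishing the digest (e.g. posting it publicly) before data
    collection means anyone can later re-hash the plan and check it
    wasn't quietly altered once the results were in.
    """
    canonical = json.dumps(plan, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical pre-registration: hypothesis and analysis fixed in advance.
plan = {
    "hypothesis": "Treatment X improves outcome Y relative to placebo",
    "primary_outcome": "Y measured at 6 weeks",
    "analysis": "two-sided t-test, alpha = 0.05",
}
print(commit_to_plan(plan))  # publish this digest before seeing any data
```

In practice, registries like OSF or clinical-trial databases serve this timestamping role, but the underlying logic is the same: tie your hands before you know which result your politics would prefer.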
To repeat: I don’t think it’s possible to fully remove politics from science. But it’s not all-or-nothing - the point is to get as close to non-political science as we can. By following some of the above steps (and I’m sure you can think of many other ways - another one that’s been discussed is the idea of adversarial collaboration), we can combat misrepresentation of research by using high-quality research of our own.
This is all rather like the discussion of the “Mertonian norms” of science, which are supposed to be the ethos of the whole activity - universalism (no matter who says it, we evaluate a claim the same way), communalism (we share results and methods around the community), organised scepticism (we constantly subject all results to unforgiving scrutiny), and, most relevant to our discussion here, disinterestedness (scientists don’t have a stake in their results turning out one way or another). These aren’t necessarily descriptions of how science is right now, but they’re aspirational - we should do our best to organise the system so it leans towards them. The idea that we should loudly and proudly bring in our political ideologies does violence to these already-fragile norms.
And we really should aspire to disinterestedness. The ideal scientist shouldn’t care whether an hypothesis comes out one way or another. And since the vast majority of them, being human beings, really do care, we should set the system up so their views are kept at arm’s length from the results. At the same time, we should remind ourselves of some very basic philosophy via David Hume in 1739: “is” and “ought” questions are different things. The “is” answers we get from science don’t necessarily tell us what we “ought” to do, and just as importantly, the “ought” beliefs from our moral and political philosophy don’t tell us how the world “is”. To think otherwise is to make a category error.
Or as Tom Chivers put it, somewhat more recently:
Finding out whether the Earth revolves around the Sun is a different kind of question from asking whether humans have equal moral value. One is a question of fact about the world as it is; to answer it, you have to go out into the world and look. The other is a question of our moral system, and the answer comes from within.
The inspiring, resounding peroration
The view that scientists should do their best to be as objective as possible is a boring, default, commonly-believed, run-of-the-mill opinion. It also happens to be correct.
The problem with boring, default, commonly-believed, run-of-the-mill opinions is that you don’t get a thrill from reciting them or shocking people with their counterintuitiveness. The fire that powers so much online activism just isn’t there, and the whole thing comes across as rather dull. So in an attempt to remedy that, let me try and make my position sound as exciting as possible. Ahem:
Science is political - but that’s a bad thing! We must RESIST attempts to make our science less objective! We must PUSH BACK against attempts to insert ideology—any ideology—into our science! We must STRIVE to be as apolitical as we possibly can be! I know that I’m a human being with my own biases, and so are you - but objective science is humanity’s best tool for overcoming those biases, and arriving at SHARED KNOWLEDGE. We can do better - TOGETHER.
Hmm. I’m not much of a speech-writer, and that felt a little bit embarrassing. But remember well that cringey feeling: that’s exactly how you should feel the next time someone tells you—with a clear, yet unspoken, agenda—that “science is political”.