
Saturday 6 June 2020

Scientific or Pseudo-Knowledge? How the Lancet’s reputation was destroyed

The now-retracted paper halted hydroxychloroquine trials. Studies like this determine how people live or die tomorrow, writes James Heathers in The Guardian

 

‘At its best, peer review is a slow and careful evaluation of new research by appropriate experts. ... At its worst, it is merely window dressing that gives the unwarranted appearance of authority’. Photograph: George Frey/AFP/Getty Images


The Lancet is one of the oldest and most respected medical journals in the world. Recently, it published an article on Covid patients receiving hydroxychloroquine, with a dire conclusion: the drug increases heartbeat irregularities and decreases in-hospital survival rates. This result was treated as authoritative, and major drug trials were immediately halted – because why treat anyone with an unsafe drug?

Now, that Lancet study has been retracted, withdrawn from the literature entirely, at the request of three of its authors who “can no longer vouch for the veracity of the primary data sources”. Given the seriousness of the topic and the consequences of the paper, this is one of the most consequential retractions in modern history.


It is natural to ask how this is possible. How did a paper of such consequence get discarded like a used tissue by some of its authors only days after publication? If the authors don’t trust it now, how did it get published in the first place?

The answer is quite simple. It happened because peer review, the formal process of reviewing scientific work before it is accepted for publication, is not designed to detect anomalous data. It makes no difference if the anomalies are due to inaccuracies, miscalculations, or outright fraud. This is not what peer review is for. While it is the internationally recognised badge of “settled science”, its value is far more complicated.

At its best, peer review is a slow and careful evaluation of new research by appropriate experts. It involves multiple rounds of revision that remove errors, strengthen analyses, and noticeably improve manuscripts.

At its worst, it is merely window dressing that gives the unwarranted appearance of authority, a cursory process which confers no real value, enforces orthodoxy, and overlooks both obvious analytical problems and outright fraud entirely.

Regardless of how any individual paper is reviewed – and the experience is usually somewhere between the above extremes – the sad truth is peer review in its entirety is struggling, and retractions like this drag its flaws into an incredibly bright spotlight.

The mechanics of this problem are well known. To start with, peer review is entirely unrewarded. The internal currency of science consists of producing new papers, which form the cornerstone of your scientific reputation; there is no comparable reward for reviewing the work of others. If you spend several days in a continuous back-and-forth technical exchange with authors – trying to improve their manuscript, adding new analyses, shoring up conclusions – no one will ever know your name. Neither are you paid. Peer review originally fell under an amorphous idea of academic “service”: the tasks scientists were expected to perform as members of their community. This is a nice idea, but it is almost invariably upheld by researchers with excellent job security. Some senior scientists are notorious for rarely, if ever, reviewing manuscripts – because it interferes with producing more of their own research.

However, even when reliable volunteers for peer review can be found, it is increasingly clear that this alone is insufficient. The vast majority of peer-reviewed articles are never checked for any form of analytical consistency – nor can they be: journals do not require manuscripts to be accompanied by their data or analytical code, and often will not help you obtain them from the authors if you wish to see them. Authors usually have no formal, moral, or legal obligation to share the data and analytical methods behind their experiments. Finally, if you locate a problem in a published paper and bring it to the authors or the journal, the median response is no response at all – silence.

This is not usually because authors or editors are negligent or uncaring; more often, they are struggling with the combined difficulty of keeping their scientific careers and their journals, respectively, afloat. Unfortunately, those goals are directly in opposition: authors publishing as much as possible means back-breaking volumes of submissions for journals, and increasingly time-poor researchers, busy with their own publications, often decline invitations to review. The peer review that results is frequently cursory or non-analytical.

And even still, we often muddle through. Until we encounter extraordinary circumstances.

Peer review during a pandemic faces a brutal dilemma: the moral urgency of releasing information with planetary consequences quickly, versus the scientific necessity of evaluating the presented work fully – all while trying to recruit scientists, already busier than usual because of their disrupted lives, to review that work for free. And once this process is complete, publications face immediate scrutiny from a much larger group of engaged scientific readers than usual, who give work that affects the health of every living human being the attention it deserves.

The consequences are extreme. The consequences for any of us, on discovering a persistent cough and respiratory difficulties, are directly determined by this research. Papers like today’s retraction determine how people live or die tomorrow. They affect which drugs are recommended, which treatments are available, and how soon we get them.

The immediate solution to this problem of extreme opacity, which allows flawed papers to hide in plain sight, has been advocated for years: require more transparency, mandate more scrutiny. Prioritise publishing papers which present data and analytical code alongside a manuscript. Re-analyse papers for their accuracy before publication, instead of just assessing their potential importance. Engage expert statistical reviewers where necessary, pay them if you must. Be immediately responsive to criticism, and enforce this same standard on authors. The alternative is more retractions, more missteps, more wasted time, more loss of public trust … and more death.
