

Tuesday 7 February 2017

The hi-tech war on science fraud

Stephen Buranyi in The Guardian


One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”

Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.

Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
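Statcheck itself is an R package, but the core of such a "spellchecker" consistency check is simple enough to sketch. The fragment below is a hypothetical illustration, not Statcheck's actual code: it recomputes a two-sided p-value from a reported test statistic (using a large-sample normal approximation) and flags results whose reported p-value cannot be explained by rounding.

```python
import math

def recomputed_p(z: float) -> float:
    """Two-sided p-value for a z statistic (large-sample normal approximation)."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def flag_inconsistency(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Flag when the reported p-value differs from the recomputed one
    by more than plausible rounding error."""
    return abs(recomputed_p(z) - reported_p) > tol

flag_inconsistency(1.96, 0.05)  # → False: consistent within rounding
flag_inconsistency(1.96, 0.20)  # → True: the reported p cannot be right
```

The tolerance is the crucial judgment call: set too tight, ordinary rounding becomes a "finding"; set too loose, genuine errors slip through.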

Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.

The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.

Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.

“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.

When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.

The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.

But three decades later, scientists still have only the most crude estimates of how much fraud actually exists. The current accepted standard is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.

If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even a fraction of that estimated 2% of scientists – a figure that, because it relies on self-reporting, is probably conservative – commit fraud in any given year, the vast majority are going entirely undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.

But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.

Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try and correct this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.

In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.

As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.

Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.

His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.

But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, whom I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”

A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.
In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.

Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.

On 7 September 2011, the university revealed that Stapel had been suspended. The media initially speculated that there might have been an issue with his latest study – announced just days earlier, showing that meat-eaters were more selfish and less sociable – but the problem went much deeper. Stapel’s students and colleagues were about to learn that his enviable skill with data was, in fact, a sham, and his golden reputation, as well as nearly a decade of results that they had used in their own work, were built on lies.

Chris Hartgerink was studying late at the library when he heard the news. The extent of Stapel’s fraud wasn’t clear by then, but it was big. Hartgerink, who was then an undergraduate in the Tilburg psychology programme, felt a sudden disorientation, a sense that something solid and integral had been lost. Stapel had been a mentor to him, hiring him as a research assistant and giving him constant encouragement. “This is a guy who inspired me to actually become enthusiastic about research,” Hartgerink told me. “When that reason drops out, what remains, you know?”

Hartgerink wasn’t alone; the whole university was stunned. “It was a really difficult time,” said one student who had helped expose Stapel. “You saw these people on a daily basis who were so proud of their work, and you know it’s just based on a lie.” Even after Stapel resigned, the media coverage was relentless. Reporters roamed the campus – first from the Dutch press, and then, as the story got bigger, from all over the world.

On 9 September, just two days after Stapel was suspended, the university convened an ad-hoc investigative committee of current and former faculty. To help determine the true extent of Stapel’s fraud, the committee turned to Marcel van Assen, a statistician and psychologist in the department. At the time, Van Assen was growing bored with his current research, and the idea of investigating the former dean sounded like fun to him. Van Assen had never much liked Stapel, believing that he relied more on the force of his personality than reason when running the department. “Some people believe him charismatic,” Van Assen told me. “I am less sensitive to it.”

Van Assen – who is 44, tall and rangy, with a mop of greying, curly hair – approaches his work with relentless, unsentimental practicality. When speaking, he maintains an amused, half-smile, as if he is joking. He once told me that to fix the problems in psychology, it might be simpler to toss out 150 years of research and start again; I’m still not sure whether or not he was serious.

To prove misconduct, Van Assen said, you must be a pitbull: biting deeper and deeper, clamping down not just on the papers, but the datasets behind them, the research methods, the collaborators – using everything available to bring down the target. He spent a year breaking down the 45 studies Stapel produced at Tilburg and cataloguing their individual aberrations, noting where the effect size – a standard measure of the difference between the two groups in an experiment – seemed suspiciously large, where sequences of numbers were copied, where variables were too closely related, or where variables that should have moved in tandem instead appeared adrift.
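The article does not say which effect-size measure Van Assen used, but the most common one for a two-group comparison is Cohen's d: the difference between the group means divided by their pooled standard deviation. A minimal, purely illustrative sketch:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d: difference of group means scaled by the pooled
    (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(pooled_var)

cohens_d([5.0, 6.0, 7.0], [1.0, 2.0, 3.0])  # → 4.0
```

By convention, values around 0.2 count as small effects and 0.8 as large; a string of studies all reporting implausibly large values is exactly the kind of aberration described above.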

The committee released its final report in October 2012 and, based largely on its conclusions, 55 of Stapel’s publications were officially retracted by the journals that had published them. Stapel also returned his PhD to the University of Amsterdam. He is, by any measure, one of the biggest scientific frauds of all time. (RetractionWatch has him third on their all-time retraction leaderboard.) The committee also had harsh words for Stapel’s colleagues, concluding that “from the bottom to the top, there was a general neglect of fundamental scientific standards”. “It was a real blow to the faculty,” Jacques Hagenaars, a former professor of methodology at Tilburg, who served on the committee, told me.

By extending some of the blame to the methods and attitudes of the scientists around Stapel, the committee situated the case within a larger problem that was attracting attention at the time, which has come to be known as the “replication crisis”. For the past decade, the scientific community has been grappling with the discovery that many published results cannot be reproduced independently by other scientists – in spite of the traditional safeguards of publishing and peer-review – because the original studies were marred by some combination of unchecked bias and human error.

After the committee disbanded, Van Assen found himself fascinated by the way science is susceptible to error, bias, and outright fraud. Investigating Stapel had been exciting, and he had no interest in returning to his old work. Van Assen had also found a like mind, a new professor at Tilburg named Jelte Wicherts, who had a long history working on bias in science and who shared his attitude of upbeat cynicism about the problems in their field. “We simply agree, there are findings out there that cannot be trusted,” Van Assen said. They began planning a new sort of research group: one that would investigate the very practice of science.

Van Assen does not like assigning Stapel too much credit for the creation of the Meta-Research Center, which hired its first students in late 2012, but there is an undeniable symmetry: he and Wicherts have created, in Stapel’s old department, a platform to investigate the sort of “sloppy science” and misconduct that very department had been condemned for.

Hartgerink joined the group in 2013. “For many people, certainly for me, Stapel launched an existential crisis in science,” he said. After Stapel’s fraud was exposed, Hartgerink struggled to find “what could be trusted” in his chosen field. He began to notice how easy it was for scientists to subjectively interpret data – or manipulate it. For a brief time he considered abandoning a future in research and joining the police.


There are probably several very famous papers that have fake data, and very famous people who have done it


Van Assen, who Hartgerink met through a statistics course, helped put him on another path. Hartgerink learned that a growing number of scientists in every field were coming to agree that the most urgent task for their profession was to establish what results and methods could still be trusted – and that many of these people had begun to investigate the unpredictable human factors that, knowingly or not, knocked science off its course. What was more, he could be a part of it. Van Assen offered Hartgerink a place in his yet-unnamed research group. All of the current projects were on errors or general bias, but Van Assen proposed they go out and work closer to the fringes, developing methods that could detect fake data in published scientific literature.

“I’m not normally an expressive person,” Hartgerink told me. “But I said: ‘Hell, yes. Let’s do that.’”

Hartgerink and Van Assen believe not only that most scientific fraud goes undetected, but that the true rate of misconduct is far higher than 2%. “We cannot trust self-reports,” Van Assen told me. “If you ask people, ‘At the conference, did you cheat on your fiancee?’ – people will very likely not admit this.”

Uri Simonsohn, a psychology professor at the University of Pennsylvania’s Wharton School who gained notoriety as a “data vigilante” for exposing two serious cases of fraud in his field in 2012, believes that as much as 5% of all published research contains fraudulent data. “It’s not only in the periphery, it’s not only in the journals people don’t read,” he told me. “There are probably several very famous papers that have fake data, and very famous people who have done it.”
But as long as it remains undiscovered, there is a tendency for scientists to dismiss fraud in favour of more widely documented – and less seedy – issues. Even Arturo Casadevall, an American microbiologist who has published extensively on the rate, distribution, and detection of fraud in science, told me that despite his personal interest in the topic, my time would be better spent investigating the broader issues driving the replication crisis. Fraud, he said, was “probably a relatively minor problem in terms of the overall level of science”.

This way of thinking goes back at least as far as scientists have been grappling with high-profile cases of misconduct. In 1983, Peter Medawar, the British immunologist and Nobel laureate, wrote in the London Review of Books: “The number of dishonest scientists cannot, of course, be known, but even if they were common enough to justify scary talk of ‘tips of icebergs’, they have not been so numerous as to prevent science’s having become the most successful enterprise (in terms of the fulfilment of declared ambitions) that human beings have ever engaged upon.”

From this perspective, as long as science continues doing what it does well – as long as genes are sequenced and chemicals classified and diseases reliably identified and treated – then fraud will remain a minor concern. But while this may be true in the long run, it may also be dangerously complacent. Furthermore, scientific misconduct can cause serious harm, as, for instance, in the case of patients treated by Paolo Macchiarini, a doctor at Karolinska Institute in Sweden who allegedly misrepresented the effectiveness of an experimental surgical procedure he had developed. Macchiarini is currently being investigated by a Swedish prosecutor after several of the patients who received the procedure later died.

Even in the more mundane business of day-to-day research, scientists are constantly building on past work, relying on its solidity to underpin their own theories. If misconduct really is as widespread as Hartgerink and Van Assen think, then false results are strewn across scientific literature, like unexploded mines that threaten any new structure built over them. At the very least, if science is truly invested in its ideal of self-correction, it seems essential to know the extent of the problem.

But there is little motivation within the scientific community to ramp up efforts to detect fraud. Part of this has to do with the way the field is organised. Science isn’t a traditional hierarchy, but a loose confederation of research groups, institutions, and professional organisations. Universities are clearly central to the scientific enterprise, but they are not in the business of evaluating scientific results, and as long as fraud doesn’t become public they have little incentive to go after it. There is also the questionable perception, although widespread in the scientific community, that there are already measures in place that preclude fraud. When Gore and his fellow congressmen held their hearings 35 years ago, witnesses routinely insisted that science had a variety of self-correcting mechanisms, such as peer-review and replication. But, as the science journalists William Broad and Nicholas Wade pointed out at the time, the vast majority of cases of fraud are actually exposed by whistleblowers, and that holds true to this day.
And so the enormous task of keeping science honest is left to individual scientists in the hope that they will police themselves, and each other. “Not only is it not sustainable,” said Simonsohn, “it doesn’t even work. You only catch the most obvious fakers, and only a small share of them.” There is also the problem of relying on whistleblowers, who face the thankless and emotionally draining prospect of accusing their own colleagues of fraud. (“It’s like saying someone is a paedophile,” one of the students at Tilburg told me.) Neither Simonsohn nor any of the Tilburg whistleblowers I interviewed said they would come forward again. “There is no way we as a field can deal with fraud like this,” the student said. “There has to be a better way.”

In the winter of 2013, soon after Hartgerink began working with Van Assen, they began to investigate another social psychology researcher who they noticed was reporting suspiciously large effect sizes, one of the “tells” that doomed Stapel. When they requested that the researcher provide additional data to verify her results, she stalled – claiming that she was undergoing treatment for stomach cancer. Months later, she informed them that she had deleted all the data in question. But instead of contacting the researcher’s co-authors for copies of the data, or digging deeper into her previous work, they opted to let it go.

They had been thoroughly stonewalled, and they knew that trying to prosecute individual cases of fraud – the “pitbull” approach that Van Assen had taken when investigating Stapel – would never expose more than a handful of dishonest scientists. What they needed was a way to analyse vast quantities of data in search of signs of manipulation or error, which could then be flagged for public inspection without necessarily accusing the individual scientists of deliberate misconduct. After all, putting a fence around a minefield has many of the same benefits as clearing it, with none of the tricky business of digging up the mines.

As Van Assen had earlier argued in a letter to the journal Nature, the traditional approach to investigating other scientists was needlessly fraught – since it combined the messy task of proving that a researcher had intended to commit fraud with a much simpler technical problem: whether the data underlying their results was valid. The two issues, he argued, could be separated.

Scientists can commit fraud in a multitude of ways. In 1974, the American immunologist William Summerlin famously tried to pass a patch of skin on a mouse darkened with permanent marker pen as a successful interspecies skin-graft. But most instances are more mundane: the majority of fraud cases in recent years have emerged from scientists either falsifying images – deliberately mislabelling scans and micrographs – or fabricating or altering their recorded data. And scientists have used statistical tests to scrutinise each other’s data since at least the 1930s, when Ronald Fisher, the father of biostatistics, used a basic chi-squared test to suggest that Gregor Mendel, the father of genetics, had cherrypicked some of his data.
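Fisher's analysis of Mendel was more elaborate than a single test, but its basic tool, Pearson's chi-squared goodness-of-fit statistic, fits in a few lines. The counts below are invented for illustration: any one fit this close to the expected 3:1 ratio is unremarkable, and Fisher's point was that a long run of such near-perfect fits is itself improbable.

```python
import math

def chi_squared(observed: list[int], expected: list[float]) -> float:
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A Mendel-style cross expecting a 3:1 phenotype ratio among 1000 plants.
counts = [752, 248]                # dominant, recessive (illustrative numbers)
expect = [750.0, 250.0]
stat = chi_squared(counts, expect)  # (2**2 / 750) + (2**2 / 250) ≈ 0.0213
# With 1 degree of freedom, the survival function is erfc(sqrt(x / 2)):
p = math.erfc(math.sqrt(stat / 2))
# A large p here means the data fit the theory suspiciously well –
# too-good fits, repeated across many experiments, were Fisher's red flag.
```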

In 2014, Hartgerink and Van Assen started to sort through the variety of tests used in ad-hoc investigations of fraud in order to determine which were powerful and versatile enough to reliably detect statistical anomalies across a wide range of fields. After narrowing down a promising arsenal of tests, they hit a tougher problem. To prove that their methods work, Hartgerink and Van Assen have to show they can reliably distinguish false from real data. But research misconduct is relatively uncharted territory. Only a handful of cases come to light each year – a dismally small sample size – so it’s hard to get an idea of what constitutes “normal” fake data, what its features and particular quirks are. Hartgerink devised a workaround, challenging other academics to produce simple fake datasets, a sort of game to see if they could come up with data that looked real enough to fool the statistical tests, with an Amazon gift card as a prize.

By 2015, the Meta-Research group had expanded to seven researchers, and Hartgerink was helping his colleagues with a separate error-detection project that would become Statcheck. He was pleased with the study that Michèle Nuijten published that autumn, which used Statcheck to show that something like half of all published psychology papers appeared to contain calculation errors, but as he tinkered with the program and the database of psychology papers they had assembled, he found himself increasingly uneasy about what he saw as the closed and secretive culture of science.
When scientists publish papers in journals, they release only the data they wish to share. Critical evaluation of the results by other scientists – peer review – takes place in secret and the discussion is not released publicly. Once a paper is published, all comments, concerns, and retractions must go through the editors of the journal before they reach the public. There are good, or at least defensible, arguments for all of this. But Hartgerink is part of an increasingly vocal group that believes that the closed nature of science, with authority resting in the hands of specific gatekeepers – journals, universities, and funders – is harmful, and that a more open approach would better serve the scientific method.

Hartgerink realised that with a few adjustments to Statcheck, he could make public all the statistical errors it had exposed. He hoped that this would shift the conversation away from talk of broad, representative results – such as the proportion of studies that contained errors – and towards a discussion of the individual papers and their mistakes. The critique would be complete, exhaustive, and in the public domain, where the authors could address it; everyone else could draw their own conclusions.

In August 2016, with his colleagues’ blessing, he posted the full set of Statcheck results publicly on the anonymous science message board PubPeer. At first there was praise on Twitter and science blogs, which skew young and progressive – and then, condemnations, largely from older scientists, who feared an intrusive new world of public blaming and shaming. In December, after everyone had weighed in, Nature, a bellwether of mainstream scientific thought for more than a century, cautiously supported a future of automated scientific scrutiny in an editorial that addressed the Statcheck controversy without explicitly naming it. Its conclusion seemed to endorse Hartgerink’s approach, that “criticism itself must be embraced”.

In the same month, the Office of Research Integrity (ORI), an obscure branch of the US National Institutes of Health, awarded Hartgerink a small grant – about $100,000 – to pursue new projects investigating misconduct, including the completion of his program to detect fabricated data. For Hartgerink and Van Assen, who had not received any outside funding for their research, it felt like vindication.

Yet change in science comes slowly, if at all, Van Assen reminded me. The current push for more open and accountable science, of which they are a part, has “only really existed since 2011”, he said. It has captured an outsize share of the science media’s attention, and set laudable goals, but it remains a small, fragile outpost of true believers within the vast scientific enterprise. “I have the impression that many scientists in this group think that things are going to change,” Van Assen said. “Chris, Michèle, they are quite optimistic. I think that’s bias. They talk to each other all the time.”

When I asked Hartgerink what it would take to totally eradicate fraud from the scientific process, he suggested that scientists make all of their data public, register the intentions of their work before conducting experiments to prevent post-hoc reasoning, and have their results checked by algorithms during and after the publishing process.

To any working scientist – currently enjoying nearly unprecedented privacy and freedom for a profession that is in large part publicly funded – Hartgerink’s vision would be an unimaginably draconian scientific surveillance state. For his part, Hartgerink believes the preservation of public trust in science requires nothing less – but in the meantime, he intends to pursue this ideal without the explicit consent of the entire scientific community, by investigating published papers and making the results available to the public.

Even scientists who have done similar work uncovering fraud have reservations about Van Assen and Hartgerink’s approach. In January, I met with Dr John Carlisle and Dr Steve Yentis at an anaesthetics conference that took place in London, near Westminster Abbey. In 2012, Yentis, then the editor of the journal Anaesthesia, asked Carlisle to investigate data from a researcher named Yoshitaka Fujii, who the community suspected was falsifying clinical trials. In time, Carlisle demonstrated that 168 of Fujii’s trials contained dubious statistical results. Yentis and the other journal editors contacted Fujii’s employers, who launched a full investigation. Fujii currently sits at the top of the RetractionWatch leaderboard with 183 retracted studies. By sheer numbers he is the biggest scientific fraud in recorded history.


You’re saying to a person, ‘I think you’re a liar.’ How many fraudulent papers are worth one false accusation?

Carlisle, who, like Van Assen, found that he enjoyed the detective work (“it takes a certain personality, or personality disorder”, he said), showed me his latest project, a larger-scale analysis of the rate of suspicious clinical trial results across multiple fields of medicine. He and Yentis discussed their desire to automate these statistical tests – which, in theory, would look a lot like what Hartgerink and Van Assen are developing – but they have no plans to make the results public; instead they envision that journal editors might use the tests to screen incoming articles for signs of possible misconduct.

“It is an incredibly difficult balance,” said Yentis, “you’re saying to a person, ‘I think you’re a liar.’ We have to decide how many fraudulent papers are worth one false accusation. How many is too many?”

With the introduction of programs such as Statcheck, and the growing desire to conduct as much of the critical conversation as possible in public view, Yentis expects a stormy reckoning with those very questions. “That’s a big debate that hasn’t happened,” he said, “and it’s because we simply haven’t had the tools.”

For all their dispassionate distance, when Hartgerink and Van Assen say that they are simply identifying data that “cannot be trusted”, they mean flagging papers and authors that fail their tests. And, as they learned with Statcheck, for many scientists, that will be indistinguishable from an accusation of deceit. When Hartgerink eventually deploys his fraud-detection program, it will flag up some very real instances of fraud, as well as many unintentional errors and false positives – and present all of the results in a messy pile for the scientific community to sort out. Simonsohn called it “a bit like leaving a loaded gun on a playground”.

When I put this to Van Assen, he told me it was certain that some scientists would be angered or offended by having their work and its possible errors exposed and discussed. He didn't want to make anyone feel bad, he said – but he didn't feel bad about it. Science should be about transparency, criticism, and truth.

“The problem, also with scientists, is that people think they are important, they think they have a special purpose in life,” he said. “Maybe you too. But that’s a human bias. I think when you look at it objectively, individuals don’t matter at all. We should only look at what is good for science and society.”

Thursday 10 April 2014

What the Tamiflu saga tells us about drug trials and big pharma


We now know the government's Tamiflu stockpile wouldn't have done us much good in the event of a flu epidemic. But the secrecy surrounding clinical trials means there's a lot we don't know about other medicines we take
Tamiflu capsules. Photograph: Per Lindgren/REX
Today we found out that Tamiflu doesn't work so well after all. Roche, the drug company behind it, withheld vital information on its clinical trials for half a decade, but the Cochrane Collaboration, a global not-for-profit organisation of 14,000 academics, finally obtained all the information. Putting the evidence together, it has found that Tamiflu has little or no impact on complications of flu infection, such as pneumonia.
That is a scandal because the UK government spent £0.5bn stockpiling this drug in the hope that it would help prevent serious side-effects from flu infection. But the bigger scandal is that Roche broke no law by withholding vital information on how well its drug works. In fact, the methods and results of clinical trials on the drugs we use today are still routinely and legally being withheld from doctors, researchers and patients. It is simple bad luck for Roche that Tamiflu became, arbitrarily, the poster child for the missing-data story.
And it is a great poster child. The battle over Tamiflu perfectly illustrates the need for full transparency around clinical trials, the importance of access to obscure documentation, and the failure of the regulatory system. Crucially, it is also an illustration of how science, at its best, is built on transparency and openness to criticism, because the saga of the Cochrane Tamiflu review began with a simple online comment.
In 2009, there was widespread concern about a new flu pandemic, and billions were being spent stockpiling Tamiflu around the world. Because of this, the UK and Australian governments specifically asked the Cochrane Collaboration to update its earlier reviews on the drug. Cochrane reviews are the gold-standard in medicine: they summarise all the data on a given treatment, and they are in a constant review cycle, because evidence changes over time as new trials are published. This should have been a pretty everyday piece of work: the previous review, in 2008, had found some evidence that Tamiflu does, indeed, reduce the rate of complications such as pneumonia. But then a Japanese paediatrician called Keiji Hayashi left a comment that would trigger a revolution in our understanding of how evidence-based medicine should work. This wasn't in a publication, or even a letter: it was a simple online comment, posted informally underneath the Tamiflu review on the Cochrane website, almost like a blog comment.
The UK government spent £0.5bn stockpiling Tamiflu. Photograph: Hanodut/EPA
Cochrane had summarised the data from all the trials, explained Hayashi, but its positive conclusion was driven by data from just one of the papers it cited: an industry-funded summary of 10 previous trials, led by an author called Kaiser. From these 10 trials, only two had ever been published in the scientific literature. For the remaining eight, the only available information on the methods used came from the brief summary in this secondary source, created by industry. That's not reliable enough.
This is science at its best. The Cochrane review is readily accessible online; it explains transparently the methods by which it looked for trials, and then analysed them, so any informed reader can pull the review apart, and understand where the conclusions came from. Cochrane provides an easy way for readers to raise criticisms. And, crucially, these criticisms did not fall on deaf ears. Dr Tom Jefferson is the head of the Cochrane respiratory group, and the lead author on the 2008 review. He realised immediately that he had made a mistake in blindly trusting the Kaiser data. He said so, without defensiveness, and then set about getting the information needed.
First, the Cochrane researchers wrote to the authors of the Kaiser paper. By reply, they were told that this team no longer had the files: they should contact Roche. Here the problems began. Roche said it would hand over some information, but the Cochrane reviewers would need to sign a confidentiality agreement. This was tricky: Cochrane reviews are built around showing their working, but Roche's proposed contract would require them to keep the information behind their reasoning secret from readers. More than this, the contract said they were not allowed to discuss the terms of their secrecy agreement, or publicly acknowledge that it even existed. Roche was demanding a secret contract, with secret terms, requiring secrecy about the methods and results of trials, in a discussion about the safety and efficacy of a drug that has been taken by hundreds of thousands of people around the world, and on which governments had spent billions. Roche's demand, worryingly, is not unusual. At this point, many in medicine would either acquiesce, or give up. Jefferson asked Roche for clarification about why the contract was necessary. He never received a reply.
Then, in October 2009, the company changed tack. It would like to hand over the data, it explained, but another academic review on Tamiflu was being conducted elsewhere. Roche had given this other group the study reports, so Cochrane couldn't have them. This was a non sequitur: there is no reason why many groups should not all work on the same question. In fact, since replication is the cornerstone of good science, this would be actively desirable.
Then, one week later, unannounced, Roche sent seven documents, each around a dozen pages long. These contained excerpts of internal company documents on each of the clinical trials in the Kaiser meta-analysis. It was a start, but nothing like the information Cochrane needed to assess the benefits, or the rate of adverse events, or fully to understand the design of the trials.
Packets of Tamiflu in a drawer at a German pharmacy. Photograph: Wolfgang Rattay/Reuters
At the same time, it was rapidly becoming clear that there were odd inconsistencies in the information on this drug. Crucially, different organisations around the world had drawn vastly different conclusions about its effectiveness. The US Food and Drug Administration (FDA) said it gave no benefits on complications such as pneumonia, while the US Centers for Disease Control and Prevention said it did. The Japanese regulator made no claim for complications, but the European Medicines Agency (EMA) said there was a benefit. There are only two explanations for this, and both can only be resolved by full transparency. Either these organisations saw different data, in which case we need to build a collective list, add up all the trials, and work out the effects of the drug overall. Or this is a close call, and there is reasonable disagreement on how to interpret the trials, in which case we need full access to their methods and results, for an informed public debate in the medical academic community.
This is particularly important, since there can often be shortcomings in the design of a clinical trial, which mean it is no longer a fair test of which treatment is best. We now know this was the case in many of the Tamiflu trials, where, for example, participants were sometimes very unrepresentative of real-world patients. Similarly, in trials described as "double blinded" – where neither doctor nor patient should be able to tell whether they're getting a placebo or the real drug – the active and placebo pills were different colours. Even more oddly, in almost all Tamiflu trials, it seems a diagnosis of pneumonia was measured by patients' self-reporting: many researchers would have expected a clear diagnostic algorithm, perhaps a chest x-ray, at least.
Since the Cochrane team were still being denied the information needed to spot these flaws, they decided to exclude all this data from their analysis, leaving the review in limbo. It was published in December 2009, with a note explaining their reasoning, and a small flurry of activity followed. Roche posted their brief excerpts online, and committed to make full study reports available. For four years, they then failed to do so.
During this period, the global medical academic community began to realise that the brief, published academic papers on trials – which we have relied on for many years – can be incomplete, and even misleading. Much more detail is available in a clinical study report (CSR), the intermediate document that stands between the raw data and a journal article: the precise plan for analysing the data statistically, detailed descriptions of adverse events, and so on.
By 2009, Roche had shared just small portions of the CSRs, but even this was enough to see there were problems. For example, of the two papers out of 10 in the Kaiser review that were published, one said: "There were no drug-related serious adverse events", and the other did not mention adverse events at all. But in the CSR documents shared on these same two studies, 10 serious adverse events were listed, of which three were classified as possibly related to Tamiflu.
Roche HQ in Basel, Switzerland. Photograph: Bloomberg via Getty Images
By setting out all the known trials side by side, the researchers were able to identify peculiar discrepancies: for example, the largest "phase three" trial – one of the large trials that are done to get a drug on to the market – was never published, and is rarely mentioned in regulatory documents.
The chase continued, and it exemplifies the attitude of industry towards transparency. In June 2010, Roche told Cochrane it was sorry, but it had thought they already had what they wanted. In July, it announced that it was worried about patient confidentiality. By now, Roche had been refusing to publish the study reports for a year. Suddenly, it began to raise odd personal concerns. It claimed that some Cochrane researchers had made untrue statements about the drug, and about the company, but refused to say who, or what, or where. "Certain members of Cochrane Group," it said, "are unlikely to approach the review with the independence that is both necessary and justified." This is hard to credit, but even if true, it should be irrelevant: bad science is often published, and is shot down in public, in academic journals, by people with good arguments. This is how science works. No company or researcher should be allowed to choose who has access to trial data. Still Roche refused to hand over the study reports.
Then Roche complained that the Cochrane reviewers had begun to copy in journalists, including me, on their emails when responding to Roche staff. At the same time, the company was raising the broken arguments that are eerily familiar to anyone who has followed the campaign for greater trials transparency. Key among these was one that cuts to the core of the culture war between evidence-based medicine, and the older "eminence-based medicine" that we are supposed to have left behind. It is simply not the job of academics to make these decisions about benefit and risk, said Roche, it is the job of regulators.
This argument fails on two fronts. First, as with many other drugs, it now seems that not even the regulators had seen all the information on all the trials. But more than that, regulators miss things. Many of the most notable problems with medicines over the past few years – with the arthritis drug Vioxx; with the diabetes drug rosiglitazone, marketed as Avandia; and with the evidence base for Tamiflu – weren't spotted primarily by regulators, but rather by independent doctors and academics. Regulators don't miss things because they are corrupt, or incompetent. They miss things because detecting signals of risk and benefit in reviews of clinical trials is a difficult business and so, like all difficult questions in science, it benefits from having many eyes on the problem.
While the battle for access to Tamiflu trials has gone on, the world of medicine has begun to shift, albeit at a painful pace, with the European Ombudsman and several British select committees joining the push for transparency. The AllTrials campaign, which I co-founded last year, now has the support of almost all medical and academic professional bodies in the UK, and many more worldwide, as well as more than 100 patient groups, and the drug company GSK. We have seen new codes of conduct, and European legislation, proposing improvements in access: all riddled with loopholes, but improvements nonetheless. Crucially, withholding data has become a headline issue, and much less defensible.
Last year, in the context of this wider shift, under ceaseless questions from Cochrane and the British Medical Journal, after half a decade, Roche finally gave Cochrane the information it needed.
So does Tamiflu work? From the Cochrane analysis – fully public – Tamiflu does not reduce the number of hospitalisations. There wasn't enough data to see if it reduces the number of deaths. It does reduce the number of self-reported, unverified cases of pneumonia, but when you look at the five trials with a detailed diagnostic form for pneumonia, there is no significant benefit. It might help prevent flu symptoms, but not asymptomatic spread, and the evidence here is mixed. It will take a few hours off the duration of your flu symptoms. But all this comes at a significant cost of side-effects. Since percentages are hard to visualise, we can make those numbers more tangible by taking the figures from the Cochrane review, and applying them. For example, if a million people take Tamiflu in a pandemic, 45,000 will experience vomiting, 31,000 will experience headache and 11,000 will have psychiatric side-effects. Remember, though, that those figures all assume we are only giving Tamiflu to a million people: if things kick off, we have stockpiled enough for 80% of the population. That's quite a lot of vomit.
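The arithmetic behind these absolute numbers can be sketched in a few lines. The per-person rates below (4.5% vomiting, 3.1% headache, 1.1% psychiatric side-effects) are inferred from the figures quoted in this article, not taken directly from the Cochrane review, so treat this as an illustration of the scaling, not as the review's own analysis:

```python
# Back-of-the-envelope arithmetic behind the article's figures.
# Rates are inferred from the quoted numbers (45,000 per million, etc.)
# and are illustrative assumptions, not figures from the Cochrane review.
side_effect_rates = {
    "vomiting": 0.045,
    "headache": 0.031,
    "psychiatric": 0.011,
}

def expected_cases(population, rates):
    """Scale per-person side-effect rates up to absolute case counts."""
    return {effect: round(population * rate) for effect, rate in rates.items()}

# One million people treated, as in the article's example:
print(expected_cases(1_000_000, side_effect_rates))

# The UK stockpile is sized for 80% of the population (roughly 64m people),
# so the same rates imply far larger absolute numbers:
print(expected_cases(int(64_000_000 * 0.8), side_effect_rates))
```

The point of the exercise is that even a modest per-person rate becomes a very large absolute number once a treatment is given at population scale, which is why the risk-benefit balance matters so much for a stockpiled drug.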
Roche has issued a press release saying it contests these conclusions, but giving no reasons: so now we can finally let science begin. It can shoot down the details of the Cochrane review – I hope it will – and we will edge towards the truth. This is what science looks like. Roche also denies being dragged to transparency, and says it simply didn't know how to respond to Cochrane. This, again, speaks to the pace of change. I have no idea why it was withholding information: but I rather suspect it was simply because that's what people have always done, and sharing it was a hassle, requiring new norms to be developed. That's reassuring and depressing at the same time.
Should we have spent half a billion on this drug? That's a tricky question. If you picture yourself in a bunker, watching a catastrophic pandemic unfold, confronting the end of human civilisation, you could probably persuade yourself that Tamiflu might be worth buying anyway, even knowing the risks and benefits. But that final clause is the key. We often choose to use treatments in medicine, knowing that they have limited benefit, and significant side-effects: but we make an informed decision, balancing the risks and benefits for ourselves.
And in any case, that £500m is the tip of the iceberg. Tamiflu is a side show, the one place where a single team of dogged academics said "enough" and the company caved in. But the results of clinical trials are still being routinely and legally withheld on the medicines we use today and nothing about a final answer on Tamiflu will help plug this gaping hole.
Star anise provides the principal component of Tamiflu. Photograph: Adrian Bradshaw/EPA
More importantly, for all that there is progress, so far we have only sentiment, and half measures. None of the changes to European legislation or codes of conduct get us access to the information we need, because they all refer only to new trials, so they share a loophole that excludes – remarkably – all the trials on all the medicines we use today, and will continue to use for decades. To take one concrete and topical example: they wouldn't have made a blind bit of difference on Tamiflu. We have seen voluntary pledges for greater transparency from many individual companies – Johnson & Johnson, GSK, now Roche, and more – which are welcome, but similar promises have been given before, and then reversed a few years later.
This is a pivotal moment in the history of medicine. Trials transparency is finally on the agenda, and this may be our only opportunity to fix it in a decade. We cannot make informed decisions about which treatment is best while information about clinical trials is routinely and legally withheld from doctors, researchers, and patients. Anyone who stands in the way of transparency is exposing patients to avoidable harm. We need regulators, legislators, and professional bodies to demand full transparency. We need clear audit on what information is missing, and who is withholding it.
Finally, more than anything – because culture shift will be as powerful as legislation – we need to do something even more difficult. We need to praise, encourage, and support the companies and individuals who are beginning to do the right thing. This now includes Roche. And so, paradoxically, after everything you have read above, with the outrage fresh in your mind, on the day when it feels harder than any other, I hope you will join me in saying: Bravo, Roche. Now let's do better.