
Saturday 29 August 2015

Psychology experiments are failing the replication test – for good reason

John Ioannidis in The Guardian


‘The replication failure rate of psychology seems to be in the same ballpark as those rates in observational epidemiology, cancer drug targets and preclinical research, and animal experiments.’ Photograph: Sebastian Kaulitzki/Alamy


Science is the best thing that has happened to humankind because its results can be questioned, retested, and demonstrated to be wrong. Science is not about proving at all costs some preconceived dogma. Conversely, religious devotees, politicians, soccer fans, and pseudo-science quacks won't allow their doctrines, promises, football clubs, or bizarre claims to be proven illogical, exaggerated, second-rate, or just absurd.

Despite this clear superiority of the scientific method, we researchers are still fallible humans. This week, an impressive collaboration of 270 investigators working for five years published in Science the results of their effort to replicate 100 important findings previously published in three top psychology journals. The replicators worked closely with the original authors to make the repeat experiments close replicas of the originals. The results were bleak: 64% of the experiments could not be replicated.

We often feel uneasy about having our results probed for possible debunking. We don't always exactly celebrate when we are proven wrong. For example, retracting published papers can take many years and many editors, lawyers, and whistleblowers – and most debunked published papers are never retracted. Moreover, with fierce competition for limited research funds and with millions of researchers struggling to make a living (publish, get grants, get promoted), we are under immense pressure to make "significant", "innovative" discoveries. Many scientific fields are thus being flooded with claimed discoveries that nobody ever retests. Retesting (called replication) is discouraged. In most fields, no funding is given for what is pooh-poohed as me-too efforts. We are forced to hasten from one "significant" paper to the next without ever reassessing our previously claimed successes.

Multiple lines of evidence suggest this is a recipe for disaster, leading to a scientific literature littered with long chains of irreproducible results. Irreproducibility is rarely a matter of fraud. Simply having millions of hardworking scientists searching fervently and creatively through billions of analyses for something statistically significant can lead to very high rates of false positives (red-herring claims about things that don't exist) or inflated results.
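
A minimal simulation sketch of that mechanism follows; the number of analyses, the 10% share of true hypotheses, and the 50% power are illustrative assumptions, not figures from the article. When true effects are rare and power is modest, even honest testing at the conventional 0.05 threshold makes a large fraction of "significant" findings false.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tests = 100_000   # hypothetical number of analyses run across a field
prior_true = 0.10   # assumed share of tested hypotheses that are actually true
alpha, power = 0.05, 0.5  # conventional threshold; modest statistical power

is_true = rng.random(n_tests) < prior_true
# A true effect is "detected" with probability equal to power;
# a null effect crosses the significance threshold with probability alpha.
significant = np.where(is_true,
                       rng.random(n_tests) < power,
                       rng.random(n_tests) < alpha)

false_share = (significant & ~is_true).sum() / significant.sum()
print(f"Share of 'significant' findings that are false: {false_share:.0%}")
# With these assumed inputs, roughly 47% of significant results are false.
```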

This is more likely to happen in fields that chase subtle, complex phenomena, in those with more measurement noise, and in those where there is more room for subjective choices in designing and running experiments and crunching the data. Ten years ago I tried to model these factors. The models predicted that in most scientific fields and settings the majority of published research findings may be false. They also anticipated that false rates could vary greatly (from almost 0% to almost 100%), depending on the features of a scientific discipline and how scientists run their work.
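
The central quantity in those models is the positive predictive value (PPV): the chance that a statistically significant claim is actually true. A minimal sketch of the standard formula is below, omitting the bias and multiple-team terms of the original 2005 model; the example values of pre-study odds and power are illustrative assumptions, not numbers from the article. It shows how the false rate can swing from near 0% to very high depending on a field's characteristics.

```python
def ppv(R, alpha=0.05, beta=0.5):
    """Positive predictive value: probability a 'significant' finding is true.

    R     -- pre-study odds that a tested relationship is real
    alpha -- type I error rate (significance threshold)
    beta  -- type II error rate (1 - power)
    """
    return (1 - beta) * R / ((1 - beta) * R + alpha)

# Fields testing strong, well-grounded hypotheses with good power fare far
# better than fields chasing subtle effects with low power (assumed values):
for R, beta in [(1.0, 0.2), (0.25, 0.5), (0.05, 0.8)]:
    print(f"R={R:.2f}, power={1 - beta:.0%} -> PPV={ppv(R, beta=beta):.0%}")
# Prints roughly 94%, 71%, and 17%: the implied false rates run from about
# 6% up to about 83% across these settings.
```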

The failure rate in the Science data would probably have been higher for work published in journals of lesser quality. There are tens of thousands of journals in the scientific-publishing market, and most will publish almost anything submitted to them. The failure rate may also be higher for studies so complex that none of the collaborating replicators offered to attempt a replication; this group accounted for one-third of the studies published in the three top journals. So the replication failure rate for psychology at large may be 80% or more.
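
As a rough sketch of the arithmetic behind that estimate: blending the observed 64% failure rate for attempted replications with assumed failure rates for the un-attempted third (the 90% and 100% figures below are illustrative assumptions, not numbers from the study) pushes the overall figure into the mid-70s, and the lesser-journal effect noted above would take it toward 80% or more.

```python
attempted_failure = 0.64    # observed in the Science replication project
share_not_attempted = 1 / 3  # studies too complex for any replicator to take on

# If the un-attempted third fails at a higher rate (an assumption, since
# complexity is what gave replicators pause), the blended rate rises:
for unattempted_failure in (0.64, 0.90, 1.00):
    overall = ((1 - share_not_attempted) * attempted_failure
               + share_not_attempted * unattempted_failure)
    print(f"assumed failure for un-attempted: {unattempted_failure:.0%} "
          f"-> overall: {overall:.0%}")
# Prints 64%, 73%, and 76% for the three assumed rates.
```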

This performance is even worse than I would have predicted. In 2012 I published an estimate anticipating a 53% replication failure rate for psychology at large. The failure rate of psychology seems to be in the same ballpark as replication failure rates in observational epidemiology, cancer drug targets and preclinical research, and animal experiments.

However, I think it is important to focus on the positive side. The Science paper shows that large-scale, high-quality replication efforts are feasible even in fields like psychology, where until recently there was no strong replication culture. Hopefully this successful, highly informative paradigm will help improve research practices in the field, and many other scientific fields without strong replication cultures may now be prompted to embrace replication and reproducible research practices. Thus these seemingly disappointing results offer a great opportunity to strengthen scientific investigation. I look forward to celebrating the day when my claim that most published research findings are false is thoroughly refuted across most, if not all, scientific fields.
