Is science in trouble? An insight into the reproducibility crisis
Yan-Yi Lee discusses the reproducibility crisis and the recent collective efforts scientists have made to address it.
The world of science has never been devoid of strong claims. “Calcium protects against preeclampsia in pregnant women.” “Ego-depletion is real.” “Bilinguals do better in executive tasks than monolinguals.” As convincing as these claims may sound, attempts to replicate such findings fail more often than expected, a phenomenon that marks one of the greatest challenges of metascience today.
The concept itself is not at all new, but the reproducibility crisis (or “replication crisis”) has only been discussed extensively in the past decade. While the crisis troubles all disciplines that rely on empirical (and, more so, positivist) modes of inquiry, it tends to affect the medical and social sciences more than, say, the physical and mathematical sciences, a difference perhaps explained by how “controllable” contextual factors and measurement errors vary across fields. Meta-analytical efforts illustrate this well: in 2012, Begley and Ellis found that only 11% of landmark pre-clinical cancer papers could be replicated. In the water sciences, Stagge and colleagues showed how data unavailability produced an alarmingly low replication rate among 1,989 water-management articles published in 2017 (a meagre 0.6%–6.8%, at a 95% confidence interval). John Ioannidis, a heavyweight in metascience, went so far as to claim that “most published research findings are false”.
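For the statistically curious, a replication rate is simply a proportion, and its uncertainty can be summarised with a confidence interval. Below is a minimal Python sketch using the Wilson score interval, one standard method for this (the exact method Stagge and colleagues used may differ), applied to the widely cited 6-of-53 figure behind Begley and Ellis’s 11%; the helper name `wilson_interval` is the author’s own, for illustration only.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a proportion (z = 1.96 -> 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Begley and Ellis (2012): 6 of 53 landmark pre-clinical cancer
# studies were successfully reproduced, i.e. roughly 11%.
low, high = wilson_interval(6, 53)
print(f"Replication rate: {6/53:.1%} (95% CI: {low:.1%} to {high:.1%})")
```

With so few studies in the sample, the interval is wide (roughly 5% to 23%), which is precisely why small samples make replication estimates, and science generally, so uncertain.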
So what does the reproducibility crisis tell us about the nature of science? What does it reveal about how humans manage science? If scientific results are so difficult to replicate, and if the pursuit of truth proves so elusive, then how much about science can we ever, truly know?
“If scientific results are so difficult to replicate... how much about science can we ever, truly know?”
It is not difficult to understand why things have ended up this way. As Oxford professor Dorothy Bishop eloquently argues, the reproducibility crisis is largely the work of what she describes as “the four horsemen of irreproducibility”: (i) publication bias, where studies that find “no effect” are deemed less favourable and are less likely to be published; (ii) low statistical power, where small sample sizes cause real effects to go undetected; (iii) p-hacking, where researchers analyse data selectively and report only the comparisons that turn out statistically significant; and (iv) “HARKing” (Hypothesising After the Results are Known), where researchers form hypotheses only after seeing the results. Among the four, publication bias is particularly dangerous; it can misguide high-stakes decisions such as policy enactments and even the design of medical treatments.
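To see why the second and third horsemen are so corrosive, consider a toy simulation (a hypothetical sketch, not drawn from Bishop’s own work). Two groups are drawn from the same distribution, so there is no real effect; yet a researcher who quietly tests twenty outcome measures and reports only the “significant” one will find something to publish most of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def phacked_experiment(n_outcomes: int = 20, n_per_group: int = 30) -> float:
    """Two groups with NO true difference; returns the smallest p-value
    across n_outcomes independently tested outcome measures."""
    pvals = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    return min(pvals)

trials = 2000
false_positives = sum(phacked_experiment() < 0.05 for _ in range(trials))
# With 20 shots at significance, roughly 64% of these null experiments
# "succeed", far above the nominal 5% false-positive rate.
print(f"Spurious 'discoveries': {false_positives / trials:.0%}")
```

Pair this with publication bias, where only the “successful” runs reach print, and a literature full of unreplicable findings follows almost mechanically.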
Professor Bishop’s analysis resonates powerfully with Professor Chris Chambers’ celebrated book The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice. A passionate advocate of scientific rigour, Professor Chambers took the initiative to encourage the practice of “registered reports” in leading academic journals. In essence, registered reports are not unlike what PhD students at Cambridge undergo at the end of their first year. Instead of sending a finished manuscript to a scientific journal, researchers submit a plan of the study they intend to conduct before it formally commences, delineating aspects such as the background literature, the research questions, the overarching hypotheses, the methodology, and any preliminary work already completed. It is at this initial stage that peer reviewers decide whether or not to accept the study. The power of the registered report lies in the opportunity for scientists to publish their studies regardless of the ultimate findings. One can reasonably argue that such an approach not only eliminates the bad practice of cherry-picking but also paves the way for a more comprehensive outlook on science. Since their first formal implementation in 2012, registered reports have become increasingly valued in psychology and neuroscience, and the life and social sciences have begun to adopt the format as well.
“We must come to accept that science is only as good as the rigour implemented while doing it, and ‘no effect’ is by no means a hindrance to the advancement of humanity.”
Most importantly, funders of science seem conscious of the problem themselves and are asking for more rigorous methodological descriptions. Recent years have seen funding bodies support the replication of studies; the establishment of specialised units such as the Center for Open Science, backed by notable funders, likewise hints at the hopeful beginning of an ethical scientific revolution.
So what’s next? With growing awareness of the crisis, we must strive to do science with as little influence as possible from academic politics, however difficult that may seem. We must come to accept that science is only as good as the rigour implemented while doing it, and that “no effect” is by no means a hindrance to the advancement of humanity. As of now, registered reports have yet to become the norm in the mainstream journals of some scientific disciplines, which signals the need for more interdisciplinary discussion of metascience. Whether “unreplicable studies” necessarily signify “contested science” is another tricky question to consider.
Journal publishing may be a modern practice, but reflections on humans’ limited ability to capture reality may be as old as scientific practice itself (see the Greek philosopher Plato’s Allegory of the Cave). Even with today’s advanced methodologies and technologies, scientific misconceptions will inevitably develop at times in the collective pursuit of knowledge. It is therefore vital for the scientific community to acknowledge current unhealthy practices and approach science through a less biased lens, an investment that would likely save us billions on flawed research down the line.