Tackling the Reproducibility Crisis

In July this year the UK Parliament’s Science and Technology Select Committee launched an inquiry into the reproducibility crisis. Its aim was to understand exactly what the crisis was, and what was necessary to address it.

Greg Clark, the chair of the committee, said it wanted “to get to the bottom of whether there is a ‘reproducibility crisis’ in science and social science research and to establish what is required to create and maintain an open, contestable and rigorous research environment”.

That there is a crisis is, I believe, beyond doubt. But the scale of it is more difficult to assess. Some suggest it is significant and widespread: a Nature survey in 2016 found that 70 per cent of respondents had “tried and failed to reproduce another scientist’s experiments,” although “73 per cent said that they think that at least half of the papers in their field can be trusted”.

How we got here

The reasons for this are entwined with the modern framework of higher education. According to some estimates, the number of global research outputs is doubling every nine years. At the same time, academic careers are increasingly precarious. As a result, there is a strong drive to get research noticed, funded and published, and thereby raise an academic’s profile.

An important way of doing so is to produce research with anomalous and eye-catching results that has the potential to disrupt accepted paradigms. Such work is more likely to be accepted for publication, chosen for funding, and picked up by mainstream media.

Once the results are out there, publishers and funders are less likely to be interested in attempts to reproduce or refute them. It feels as though old ground is being retrodden, and the cycle of research has moved on. Worse still, researchers who undertake replication work have been vilified as ‘research parasites’.

There are also accidental reasons for anomalous findings, resulting from a lack of rigour in the way the original research was undertaken. The investigators—or their postdocs—may not have managed complex datasets correctly, may have misidentified, cross-contaminated or over-passaged cell lines, or not had access to specific raw data or methodologies.

Steps are being taken to address both the accidental and instrumental causes of the crisis. The open-access and open-data movements have helped increase transparency in the research process, and the introduction of data management plans (DMPs) by UK Research and Innovation has enabled others to attempt to more fully interrogate findings and reproduce results.

How we can tackle the crisis

Clearly, more needs to be done. Funders, publishers and institutions need to work together to address the underlying issues that have led to this crisis. In response to the committee’s call for evidence, the Eastern Arc collaboration of universities, which I work for as director, outlined a number of actions that could be taken, including these five essential steps:

1. Funders should provide specific funding for reproduction studies. Some funders, such as the International Initiative for Impact Evaluation (3ie), are already doing so, but normalising and rewarding this difficult, stigmatised work would encourage more researchers to take it on.

In addition, funders could support this work by developing a database of underused software and hardware that may be necessary for the analysis of specific data as part of a reproduction study. Such equipment can be expensive to buy or access, and removing this hurdle would enable wider efforts to interrogate the data.

2. Funders and publishers should make the reviewing and enforcement of DMPs and DAPs more robust. Although all applicants have to complete DMPs, the checks on whether data has actually been deposited in appropriate repositories are weak. Researchers should also be required to deposit code alongside data: data is of limited value without the code used to analyse it.

3. Publishers should mandate pre-registration and accept articles for publication based on an outline of the research. This would overcome the problem of ‘HARKing’, or hypothesising after the results are known.

4. Institutions should work together to produce common policies and monitoring. This should include integrating open and reproducible research practices into their incentive structures at all career levels, and embedding them into their research ethics frameworks. These shouldn’t just apply to the researchers themselves, but to all staff, including technicians and data managers.

5. Individuals should change the way that postgraduate students and early career researchers are trained in research methodologies and publication strategies. The Berkeley Initiative for Transparency in the Social Sciences has developed a textbook that is intended to train people in undertaking open science, and other resources exist to support those teaching students about replication. Researchers should ensure that insights from such publications are integrated into how postgraduates are trained.

Although the onus is on the research community to make these changes, the government has a part to play in transforming the wider understanding of research. It needs to work to improve the scientific literacy of politicians, policymakers and civil servants so that they understand the context and the process of research. Without that literacy, there is a tendency to take results at face value and act accordingly.

We all need to embrace uncertainty and accept that results are not necessarily clear cut. The pandemic has shown that it is only by understanding the data—and its limitations—that we can meet the challenges of an increasingly complex, divided and dangerous world.

A version of this article first appeared in Funding Insight in October 2021 and is republished here with the kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Photo by Marcus Lenk on Unsplash