In June an eminent group of academics, including Nobel laureates, wrote the following to The Daily Telegraph: “Peer review [of funding applications] is now virtually unavoidable and its bureaucratic, protracted procedures are repeated for every change in direction or new phase of experimentation or for whatever an applicant might subsequently propose. Consequently, support for research that might lead to major new scientific discoveries is virtually forbidden nowadays, and science is in serious danger of stagnating.”
Keith Bontrager, the legendary designer of bike components, once said about bicycles: “Strong, light, cheap: pick two.” You can never have all three. A similar universal law seems to be at work with systems for selecting funding proposals and academic articles, but perhaps the elements here should be: robust, equitable, easy.
What we have at the moment is a system that is fairly robust and fairly equitable, but not at all easy. It’s cumbersome and time-consuming, relying on a relatively small number of overburdened peer reviewers to shoulder most of the work. The need to uphold the sacred qualities of robustness and equity is the very thing that threatens to undermine the whole system. It is just not sustainable.
This would be less worrying if the system were infallible, but it isn’t. As the recent news of the ‘busting’ of a peer-review and citation ring attests, some can and do find ways to play and cheat the system. Similarly, retractions of stem-cell research papers at Nature show that even the most eminent journals are open to manipulation.
So what is the answer? The Higher Education Funding Council for England has toyed with the idea of using that most objective of indicators, bibliometrics, to replicate the outcome of its assessment exercise. But this is not without its problems, as Meera Sabaratnam of the School of Oriental and African Studies and Paul Kirby of the University of Sussex point out in an open letter. Easy, but not robust or equitable; a poor replacement.
And that is the issue: in answering the challenge of peer review we shouldn’t be seeking a wholesale replacement; as David Crotty, a senior editor at Oxford University Press, noted, “replacing a flawed system with one that’s even more flawed is not an option”. Instead, what we need is a way of lessening the burden of peer review without sacrificing robustness and equity. Fortunately, at a time when the old system is creaking towards collapse, two developments are on hand that offer a potential solution: open access and new communication technology.
Four years ago I blogged about an interesting experiment conducted by the Shakespeare Quarterly on using wikis in peer review. Four articles were posted on the website and comments were invited. Some 41 people joined in, leaving 350 comments. The revised essays were then reviewed by the editors, who made the final decision as to whether to include them in the printed journal.
This was an interesting experiment, but the Shakespeare Quarterly was still embedded in traditional publishing. The experiment would, I believe, be all the more effective in a post-Finch open-access environment. Web-based open-access journals and repositories can be unchained from the demands of deadline-driven, static, paper-based publishing, and can thus offer the opportunity to be more interactive, conversational and iterative. As The Economist noted, the Nature retractions came about because of “the rise of open, post-publication review on Facebook and Twitter, by email, on blogs, and in the comments sections of websites like arXiv”.
Such a system of open, ongoing, community-based publishing might work for the peer review of outputs, but what about grant applications? I believe that here, too, there is a need for more of a consensual conversation. A huge amount of time is wasted by applicants and reviewers in writing and judging proposals that don’t have any hope of success. Wouldn’t it make more sense for funders to move away from a gatekeeping form of peer review to a more iterative, supportive system?
The policy consultancy Rand Europe did an interesting analysis of some of these systems, from the Engineering and Physical Sciences Research Council’s sandpits to Australia’s National Health and Medical Research Council’s conditional funding and Canada’s Ontario Brain Institute’s mentoring, and it’s clear that there’s no single, simple solution. But that shouldn’t mean that we do nothing. The future of peer review lies in a diversity of approaches, and it is through diversity that we will better triangulate the Bontrageresque qualities. Ironically, it may only be by dismantling the edifice of traditional peer review that we save and sustain its underlying principles.
This article first appeared in Funding Insight on 12 August 2014 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com