Opening up the Black Box of Peer Review

Last month I attended a session at KBS on the Future of Research Assessment. I wrote up some notes from it, and in particular from the talk by Prof John Mingers, here.
 
Also speaking at the event was Liz Allen, Director of Strategic Initiatives at F1000. She published some thoughts on it on the F1000 blog, and has kindly allowed me to republish them here. 
______________________
 
I recently participated in a workshop hosted by the University of Kent Business School – the subject was whether metrics or peer review are the better tools to support research assessment. Thankfully, we didn’t get embroiled in the sport of ‘metric bashing’, but instead agreed that one size does not fit all, and that whatever research assessment we do needs to take account of context and be proportionate.
 

There are many reasons why we want to assess research – to identify success in relation to goals, to allocate finite resources, to build capacity, to reward and incentivise researchers, as a starting point for further research – but these are all different questions, and the information you need to answer them is not always going to be the same.
 
What do we know about peer review?
 
In recent years, while researchers and evaluators have started to swim with the metric tide and explore how new metrics have value in different contexts, ‘peer review’, i.e. the qualitative way that research and researchers are assessed, is (a) still described as if it were one thing, and (b) remains a largely unknown ‘quantity’.  I am not sure whether this is ironic (or intentional), but there remains a dearth of information on how peer review works (or doesn’t).
 
Essentially, getting an expert’s view on a piece of research – be that a grant application, a piece submitted for publication to a journal, or work already published – can be helpful to science.  However, there is now a significant body of evidence suggesting that how the scientific community organises, requests and manages its expert input may not be as optimal as many consumers of its output assume.  A 2011 UK House of Commons report on the state of peer review concluded that while it “is crucial to the reputation and reliability of scientific research”, many scientists believe the system stifles innovation and “there is little solid evidence on its efficacy.”
 
Indeed, during the production of the HEFCE-commissioned 2015 Metric Tide report, we found ourselves judging the value of quantitative metrics by the extent to which they replicated the patterns of choices made by ‘peers’. This was done without any solid evidence to support the veracity and accuracy of the peer review decisions themselves – following a long-established tradition for reviews of the mechanics of peer review to cite reservations about the process before eventually concluding that ‘it’ remains the gold standard. As one speaker at the University of Kent workshop put it, “people talking about the gold standard [of peer review] maybe don’t want to open up their black boxes.” However, things might be changing.
 
Bringing in the experts at the right time
 
In grant assessment, there is increasing evidence that how and when we use experts in the grant selection and funding process may be inefficient and lack precision – see, for example, Nature, NIH, Science and RAND. Several funding agencies are now experimenting with approaches that use expert input at different stages in the grant funding cycle and to different degrees. The aim is to encourage innovation while bringing efficiencies to the process, including by reducing the opportunity for bias and, practically, reducing the burden on peers. Examples include Wellcome Trust Investigator Award grants, HRC Explorer grants, Volkswagenstiftung Experiment grants, and the Velux Foundation Villum Experiment.
 
Opening peer review in publishing
 
In the publishing world, there is considerable momentum towards the adoption of models where research is shared much earlier and more openly.  Preprint repositories such as bioRxiv and post-publication peer review platforms such as F1000Research, Wellcome Open Research, and the soon-to-be-launched Gates Open Research and UCL Child Health Open Research enable open commenting and open peer review, respectively, as the default. Such models not only provide transparency and accelerate access to research findings and data for all users; they fundamentally change the role of experts – to one focused on providing constructive feedback and helping research to advance – even if they don’t like or agree with what they see! Furthermore, opening up access to what experts have said about others’ work is an important step towards reducing the selection bias in what is published and allowing readers more autonomy to reach their own conclusions about what they see.
 
Creating a picture of the workload
 
Perhaps the most obvious way in which ‘peer review’ is currently breaking is under the sheer weight of what publishers, funding agencies and institutions are asking experts to do. Visibility around a contribution gives experts the opportunity to receive recognition for the effort and contributions they have made to the research enterprise in its broadest sense – as is already underway with ORCID – thus providing an incentive to get involved. And for funding agencies, publishers and institutions, more information about who is providing the expert input, and therefore where the burden lies, can help them consider who, when and how they approach experts, maximising the chance of a useful response and bringing efficiency and effectiveness to the process.
 
The recent acquisition of Publons by Clarivate is a clear indication of the current demand for, and likely potential of, more information about expert input to research – and should go some way towards addressing the dearth of intelligence on how ‘peer review’ is working, and how it actually works.

“Emergence of mysterious Black Box DDC_0348” by Abode of Chaos is licensed under CC BY 2.0