Oh, you couldn’t make it up. Paul ‘Shriek’ Jump made it a hat-trick in last week’s Times Higher by bemoaning the fact that the MRC success rate had – wait for it – stayed the same.
To recap:
- on 1 September, Jump suggested that the ESRC’s falling success rate reflected badly on it…
- whereas on 8 September, he suggested that the EPSRC’s rising success rate reflected badly on it.
Now, with the MRC’s success rate pretty much static (it actually fell by 1%), Jump was wringing his hands about how badly this reflected on the Council.
Spookily, in a comment on this blog, Adam Golberg had foreseen this scenario almost exactly: ‘Next week in the Times Higher: The XRC Research Council announces unchanged success rates. Does this stagnation spell the beginning of the end for the XRC?’
Does he know something we don’t? Does he have access to lines of communication which are – frankly – supernatural? Is he, in fact, Robert Johnson?
We should be told.
Photo by Jonathan Pendleton on Unsplash
I'm not convinced success rate is either a valid or reliable measure of whatever it is we are trying to measure. Though perhaps the real problem is that we aren't clear what the issue is. Is it quality of research? The ability to continue to produce quality research? UK competitiveness (in research? in something else?)? What?
Particularly when you consider "demand management", success rates don't even tell you much about the likelihood of any individual researcher securing funding.
Yes, I agree: success rates are a very blunt indicator. But I guess they're all that we've got, and they do reflect underlying issues. Research Councils want to minimise the amount of (wasted) time that they – and the applicants – spend on unsuccessful applications. The higher they can push success rates, the closer they get to this goal.
However, the single figure hides as much as it reveals, and doesn't reflect (say) the overall quality of applications in that round: how many of the 82% of unfunded MRC applications were actually of fundable quality?