In some academic fields, double-blind review of manuscripts for peer-reviewed publication is the norm. It’s no surprise that people who study human behavior use double-blind review. They must be on to something that most of us in the “hard” sciences haven’t picked up yet.
Some journals in my field have double-blind review. Behavioral Ecology has been doing it for a good long while now. My latest paper in Animal Behaviour went through double-blind review, but I don’t know when they started. A couple of years ago, American Naturalist went double-blind, too.
Researchers studied the consequences of Behavioral Ecology going double-blind and found that it increased representation, resulting in more women as first authors, with no detectable negative effects.
A lot of journals have yet to go double-blind, even though the positive effects are clear. That’s a long game to have in mind if you’re in a position to influence editorial policy.
Two years ago, NIH said they were “testing — over the next year — the utility of anonymizing grant applications prior to review.” So, what happened with that? I don’t have any idea. Does anybody know?
It’s been five years since a paper in Science showed a funding bias against African-American scientists who seek NIH support. (Here’s a writeup about the results from Science meant for a broader audience.) I can’t really imagine that the situation has magically improved between then and now. Are they still piloting that double-blind review process?
One common quibble that people raise is that you can often guess who the author is. But there’s a big difference between guessing and knowing, and sometimes people guess wrong. Also, don’t forget that we know going double-blind actually makes a difference. So saying that it doesn’t matter is just not true.
Are grants harder to double-blind than manuscripts? I suspect they’re easier. Reviewers are supposed to evaluate whether the person submitting the proposal is qualified to do the work, and whether the facilities at a particular institution are adequate for the work, but that’s something an appropriately trained program director can assess before that information is excluded from the review. The probability that a reviewer can guess the author of a grant is probably not that different from the probability of guessing the author of a paper; if anything, it’s lower, because grants are about new work that hasn’t been done yet. (As my own anecdote: I don’t think even very close colleagues of mine would attribute my most recent grant to me.)
As far as I can tell from a few minutes on the google machine, NSF has experimented with having people competing for the same pool of money review one another, but not with double-blind review. Is that the case? This does seem to me to be one thing that could have a quantifiable effect on improving equity, and I’d be interested in hearing any compelling reasons, if they exist, why we haven’t moved to double-blind review throughout our academic community.
[Update: check out the follow-up post, which discusses the track record issue, and the whole issue in more depth.]
(As a postscript, I should add that in the absence of double-blind review, in my opinion, no-blind review is worse than single-blind.)
10 thoughts on “Why not double-blind grant reviews?”
I agree, Terry; this seems fairly easy to do and with clear advantages.
Dennis Murray et al. have just (days ago) published an analysis of NSERC granting data that suggest implicit reviewer bias against researchers at small universities (http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0155876). (Caveat: I haven’t read past the Abstract yet). This is another instance (added to gender etc.) in which double-blind reviewing would help.
More generally, I just can’t think of any reason NOT to do it. Isn’t the worst case scenario that the reviewer correctly guesses who wrote the grant – which puts you no worse off than you were without double-blind?
Just skimmed Murray et al. Will have some comments on it in the next Dynamic Ecology linkfest. tl;dr: it’s not at all convincing.
Interesting post. Agree that judging the project part of grant proposals anonymously could be easier than for papers. Although I’m not sure what US grant apps include, some here in Aus include a ‘track record relative to opportunity’ statement as part of the application where, for example, a woman can explain how much time she has taken off from her career for maternity leave. So, regardless of whether the name is blinded, I wonder if this would still affect male/female biases?
Grants here in the UK have big sections on track record, previous publications, and grants awarded, so double-blind would not be possible.
Same in Canada. And I think that’s a good thing, so I don’t think we ought to drop that information in the name of anonymizing grant applications.
More broadly, I respectfully disagree with Terry that the expertise, track record, and experience of the PIs, facilities available to the PIs, etc., could just be evaluated by program officers, even for US NSF grants.
I wonder if it would be possible to do a two-step process: first score the proposal, and only once that score is set are the PIs’ identities revealed so referees can score the qualifications of the team. Wouldn’t that allow judging the science without potential biases about who is proposing to do it, while also taking into consideration the qualifications to carry out the work? I imagine it would be straightforward to set up a trial in which proposals are judged both ways (knowing ID, ID blinded until the proposal is scored).
Thanks all for comments here, and in social media. There’s one subset of our community (mostly tenured men) that has expressed concern that obscuring track record and history of productivity would make double-blinding unfair or impossible. It’s been enough of a chorus that I’ll deal with this in a future post. (In the meanwhile, I have another grant deadline on Friday.)
NSF/BIO did a trial run in 2010-201, called the “Big Pitch”. It was discussed some at the time. For instance (more if you google it):
I think it would be awesome for granting agencies that need to take PI identities into account to do exactly as Emilio suggests and make it a two-stage process. That way decision-makers have to be explicit about “your ideas are very compelling and this would be a great project, but I’m afraid we just don’t think your team has the skills/knowledge/resources to carry it out successfully.” This is important feedback. And it would provide good data on bias (and/or contribute to avoiding bias).