I see this very often in social media, and also in conversation with other academic editors: it’s getting harder and harder to find people who agree to review manuscripts.
I have no idea whether this reflects the general experience, or if it’s borne out by data. I of course believe the lived experience of my peers, and their accounts make sense given the steady (and absurd) increase in publication rates, with so many people working the manuscript ladder chasing prestige, all compounded by the difficulties of the pandemic. I imagine that some journals have tracked the invitation acceptance rate and how it’s changed over time and perhaps shared this — or maybe it’s in the bibliometric literature — though over the span of a couple minutes my searching powers came up short.
That said, I have to admit that getting reviewers to say yes hasn’t been a problem for me in the course of my editorial duties. Even in the depths of this pandemic, I usually haven’t had to ask more than three to five people in order to land two reviewers. Each year, I’ve been handling dozens of manuscripts, so I can’t credibly pin this on the luck of the draw. I don’t know why I don’t have much trouble finding peer reviewers. It’s presumably a complex function of the nature of the manuscripts themselves, the society affiliation of the journal, how and who I choose to invite, the financial model of the journal, maybe whether people are more likely to say yes to me as a human being (?), and who knows what else. If you ask people why they say no, I’m sure everybody just thinks it’s because they’re too busy. But if you ask people why they say yes, that’s where it might get interesting.
The title of this post is off because I clearly don’t know why I don’t have trouble finding reviewers, but it might be informative because I’ll tell you what I’ve been doing, and that might help y’all come to your own conclusions about the Why. I’ve just stepped down from all of my editorial roles, so I thought now is a good time to step back and reflect on how I have identified potential reviewers, and make an attempt at some generalized take-home lessons from this experience.
For some context: I handled most of these manuscripts as an Associate Editor for a society journal. This journal has an impact factor of about 1.5 and is published through Springer, because the academic society has a contract with the publisher. The other journal, where I served as a Subject Editor (for maybe over 10 years!), was another society journal with an impact factor around 2, published through Wiley under contract with that society. Both are international societies and the submissions came in from all over the world.
The quality of the manuscripts that I handled was quite variable (and this was independent of the country of origin, I would like to be very clear about this), and I realized in many cases that when I was asking a colleague to review a paper, they were going to be doing a not-insubstantial lift. Sometimes I’d be asking for reviews for manuscripts that had a quite unlikely route to acceptance, but the authors wouldn’t have been as well served with just a desk rejection. I think this is part of what makes society journals different from other journals: our mission includes supporting members of the community who might not have access to the support of quality mentors or a well-connected research environment.
What follows is a series of unordered statements about choices I made while seeking reviewers and interpreting reviews.
-In general, I lean towards inviting junior scientists for reviews. I have no qualms with asking senior PhD students or postdocs for reviews.
-When somebody says “no” to a review, I receive this with zero judgment and zero assumptions. There are so many good reasons that people can have to not accept a review when the invitation comes in, and I’m not going to hold it against anybody. I say no to almost all the review requests that I receive (though I still review more than twice as many manuscripts as I submit), and I sure hope others grant me the same courtesy. I also don’t judge someone when they don’t volunteer names of recommended reviewers. Sometimes we just don’t have the bandwidth for that kind of thing; it’s enough work just to click on the email to say no. But if I think a person’s input would be particularly useful in the niche of a particular manuscript, I don’t mind writing to them and asking for names.
-When looking for reviewers, I actively do my best to fight against relying on my personal recollections about who is an expert. This is useful information, but if I rely on it alone, I end up with a highly biased pool. Sometimes I’d end up inviting the usual suspects for a certain topic, but I wouldn’t arrive at this decision without doing a literature search, even if it was a topic I was working in myself. I’d search on Google Scholar, and Springer also has a tool called “Reviewer Finder” (available only to editors of their journals) that uses the title, authors, abstract, and keywords to identify potential reviewers. It routinely identified people who would be great reviewers, especially from the Global South, whose work I was not familiar with but which clearly qualified them to review. One of the great things about editorial work is that it has kept me fresh about who is publishing the newest papers on emerging areas that are just outside the tight area of my expertise.
-I am inclined to think that it’s easier to get reviewers for society journals. For example, who would want to review for a journal like Oecologia, which exists principally to enrich the shareholders of Springer Nature? But if a journal is being run by and for an academic society (regardless of who publishes it), then perhaps providing reviews is a more useful act of leadership in the community of one’s peers? But I don’t know if this is a factor for most others. I don’t exclusively ask society members for reviews, but I think saying no to the society is different than saying no to a for-profit organization. (Should academic societies make contracts with for-profit publishers to support the journal and bring in revenue for the society? That’s a complicated issue, isn’t it?)
-The automated software for Editorial Manager and Manuscript Central is entirely messed up when it comes to gender and titles. The system basically expects someone to be a Ms., Mr., or Dr. That means if the reviewer doesn’t have a PhD, then we’re expected to gender them? Ugh. So when I enter a new reviewer in the system, as far as I’m concerned everybody is a Dr.
-Sometimes I’ll go to the lab website of a PI and see what members of their lab would be good to review the manuscript. I’ve found a lot of great reviewers this way.
-The biggest challenge I had was this: I would find a perfect person to review a manuscript, often a postdoc, but finding a working email address for them would be like searching for an ivory-billed woodpecker. It’s not on institutional websites or a lab website. They don’t have it on their personal website, if they have one. Their email for correspondence on previously published papers isn’t active anymore. Their Google Scholar page shows that they have an email address but not what the address is, and that is also often out of date. I give up after about ten minutes of trying to find the email address of a postdoc who would be perfect for a particular manuscript. It’s not uncommon that I hit this 10-minute mark.
-There is an early career reviewer database set up by Susan Perkins. I’ve used this a bit.
-Once in a while I do ask the very Senior, very Busy, and very Important person for a review. I do this when I think the manuscript is just perfectly up their alley and I think they’d be genuinely interested in seeing it, if only because it has natural history pieces they’ll appreciate. In almost all these cases, these folks say yes, which continues to surprise me. I suppose it’s because I have enough of an eye for an abstract that will strike their fancy. When this happens, the reviews are usually among the best I’ve ever received (very detailed, very kind, very constructive, and not overly prescriptive), but sometimes they’re the worst ever (very brief and failing to explain any of their reasoning). Nothing ever in between. I’ve been in a packed ballroom for a plenary address by one of The Great Ones, and within the same week, have been on the receiving end of their detailed and generous review of an incremental but interesting manuscript for a relatively unprestigious journal. I imagine they say no to reviews all the time, but apparently (some of) these folks will say yes. (Or maybe some folks get to be a big enough name that the number of invitations drops off, because editors think the odds of a “no” are so high? I have no idea.) Anyhow, being involved in the confidential peer review process has been an opportunity to learn about the character of other people who I thought I knew well, and I’ve had some big surprises on both ends of the spectrum.
-When I don’t invite author-recommended reviewers, the most common reason is that I’m concerned the authors would get the raw end of the deal. There are a few names that keep popping up as author-recommended reviewers, and I really wish I could just take these folks aside and say, “Oh, you really don’t want these people to review your paper, trust me on that,” but obviously I don’t do that. In general, my experience also is that author-recommended reviewers are less likely to accept the review, and that tends to slow down the process. This seems to be true even when the reviewer isn’t super-famous, but instead is simply highly informed about the particular topic. However, I still do invite a good number of these, and in general, it works out. Because most reviews are of high quality.
-I try really hard to get reviewers from two different continents, and one of them should not be North America or Europe. I don’t always succeed at this, but it’s an active target.
-When both reviewers of a manuscript are men, I do note this every time and ask myself if I went about the process equitably.
-When I get a bad review that is devoid of content (either they loved it or they hated it but don’t do a good job explaining how and why), I entirely ignore it and just get a new review. This is inevitable, especially when asking folks for reviews who I am not familiar with at all, and it slows down the process by a month or so, but I think it’s a more robust process than only asking people who I “trust” to provide good reviews.
-I sometimes invite a more senior PI to a review but then write a note in the invite that says “feel free to identify someone in your lab instead and I’d be glad to send them an invite.” This often works, too.
-I generally try to get one reviewer who is deeply familiar with the taxon/system that is the topic of the investigation, and another reviewer who is very familiar with the question but not with the system.
-I don’t ask anybody to review for the journal if they’ve reviewed within the past 11 months or so.
-When a reviewer says that a manuscript needs to be edited by a “native English speaker,” why yes, that does undermine the credibility of the rest of the review. Also, the best-written reviews are the ones that come from reviewers who are obviously multilingual.
-Once in a while, a reviewer will tell me that they couldn’t understand the science in the manuscript because the writing was so poor. Sometimes there are very poorly written manuscripts, and a few critical methodological details might get muddled, but if a manuscript isn’t clear enough to evaluate the overall science, then I’m not going to be sending it out for review. If the manuscript ended up in your inbox, that means I think the writing is understandable enough for peer review. So this is not a helpful comment, and it says more about the reviewer than it does about the manuscript.
-I do not keep a list of people who have been problematic reviewers. Most reviews are excellent. For reals. But occasionally there is a stinker. I wish I did keep such a list, because sometimes I’ve forgotten about this adverse experience (there’s a lot to be said for being a goldfish, in the Ted Lasso sense, in this science business) and then I hit these people up again years later only to recall how I got burned the first time.
-In my experience, when a reviewer says, “This is a really important paper to be cited in the manuscript,” about half the time it’s one of theirs. The other half of the time, it’s absolutely not, and they just happen to be correct that it’s an important paper to be cited. So if you get one of those reviews, and you’re trying to figure out if one of the authors of that paper is one of the reviewers, how about you flip a coin?
Do any of you have other approaches as editors that you think help find reviewers?
[on edit, 10 minutes after I originally posted this]: it just occurred to me that Stephen Heard wrote a great blog post about how he finds reviewers. If you liked this, then you will like that too! Or if you didn’t like this but managed to get to the end, then may I suggest his instead?
6 thoughts on “Why I don’t have trouble finding peer reviewers”
Some good data on peer review here, if you’re curious: https://publons.com/community/gspr
Interesting perspective. As a senior scientist, I’ve been both an Associate Editor and an Editor. I get lots of requests to review: a manuscript almost every day, proposals perhaps monthly, people about three times per year. I sometimes think that editors believe they’re doing me a favor by keeping me mentally active in my dotage. For the past decade or so, I usually sign my reviews of manuscripts unless specifically requested not to, saving the author from trying to guess. I hardly ever decline to review a person for a promotion or tenure case, because I have observed colleagues interpreting a decline negatively. As Prof McGlynn observes, there are lots of legitimate reasons to say No.
I liked your comments about author-suggested reviewers. As an Editor, I once had a prolific author who furnished me with a list of those people he would recommend as reviewers and those people he thought should not be invited because they had been writing bad reviews for his previous papers. It was quite amusing to see that he had the two lists completely reversed. Those he thought were the unsympathetic reviewers were actually giving him positive reviews, and the recommended reviewers were the ones most critical of his work.
I stepped down from two editorial boards last year. Both were pretty good journals: one a relatively new open access journal, the other a well-known, established journal in the field associated with a big publisher. I’d done one for 6 years and the other for over 10 (I’ve been on editorial boards for over 20 years). The difficulty in finding reviewers became more and more of an issue. For the “newish” open access journal, the number of manuscripts I was asked to handle was quite low, but they were rarely in my field – its scope was just too wide. This meant I relied on Google and Manuscript Central far too much. Sometimes I’d have to invite 15 or more reviewers (with the associated delays), and the last straw was when I simply couldn’t get a reviewer to bite for one manuscript. In this case I let the authors down. About a third of the manuscripts were also clearly unsuited – I didn’t send those out, but they took as much time as the ones which were reviewed. The second journal became a target for certain prescriptive, formulaic types of manuscript that entailed a dump of (often genomic) data followed by a largely unjustified focus on one gene/lncRNA or whatever. I couldn’t pin them down (though for some I found enough concerns about image manipulation or data inconsistency to desk reject), but I suspected they were products either of paper mills or of some other cabal. Seriously, these all looked pretty much the same except for different gene names/sites/data plots. When looking at prior papers from their lead authors, there were typically other examples with no follow-up. I raised this with the journal’s lead editors but nothing was ever done (hence the anonymity). I may have simply been overcritical, but these formulaic papers were, in my (flawed?) assessment, either fake or deceptive. If one of these was sent to review, sometimes a reviewer raised similar issues, but quite often they gave the author the benefit of the doubt – likely because they hadn’t seen the patterns with other manuscripts.
The pandemic exacerbated these issues, and the proportion of poorer-quality manuscripts increased. It became way too much work to satisfy my own concerns about some of the manuscripts, and I felt it unfair to reject a manuscript just on the basis of my increasingly critical/cynical views. Hence, I resigned for several reasons. I still review papers, but only accept around one in five invitations (not counting the deluge of four-letter publisher journal invitations) due to poor matching or being otherwise overcommitted.
Although retired, I’m still close and in touch with many students, and still reading some scientific papers. As a follower of Nature Briefing, I noticed the title on 19 July, “No shame in an honest retraction,” and on 21 July the “Quote of the day” section piqued my interest: “When a reviewer says, ‘This is a really important paper to be cited in the manuscript,’ about half the time it’s one of theirs.” So I read the text. I’m grateful to Dr. Terry McGlynn, since it is a very interesting topic, with stimulating comments.
I have no “other approach …” to suggest, but here I report two opposite examples that raise a question for me.
I must immediately specify that it is not a personal/original question, but only my free “modern” translation of a famous question from ancient Rome, around 1900 years ago:
“Quis custodiet ipsos custodes?” (Decimus Junius Juvenalis, Satira VI, 347–348)
I believe that students, postdocs, researchers/authors, and Editors represent in some way a scale, with Editors at the top; likewise, a class test, a short communication, or a research article stand in the same relation to an Editorial written by the Editor-in-Chief of a renowned scientific journal.
Recently, in a class test at a postdoctoral school, the mere lack of quotation marks while reporting only numbers – totals and percentages of cases and controls (is there a way to paraphrase numbers?) – was considered by the teaching committee to be plagiarism and a serious lack of ethics, with consequences. This occurred despite the names of both authors being reported in the text immediately before the numbers themselves, each with a numerical entry in the bibliography.
The other example is an Editorial from some years ago, opening a special issue, open access and with didactic purposes.
Regarding numbers, I think that any undergraduate student, if invited as a reviewer, would quickly find an error in the date reported in the legend of a figure just by searching on Google: a significant error, since it was off by centuries, not just days or years. Likewise, anybody would easily find that an image published in that Editorial looked the same as – or nearly overlapped with – a figure previously published in a well-known newspaper (and so probably covered by copyright).
Regarding references, as specified by the Editor/Author himself, 76.47% were from their own journal. However, the main topic and statements of that Editorial were cited in the first 4 references. Again, a student with only basic knowledge of probability would be led to the strong suspicion that, given their date and nature, 3 out of 4 of those references would have been quite impossible (“…like searching for an ivory-billed woodpecker”) to find in any way other than a simple copy-paste from the text and references of another paper.
If of any relevance, I specify that the topic dealt with a major anatomical organ, that both journals were in the same medical specialty, each representing a sub-specialty society, and that they had similar impact factors.
Unfortunately, the publisher was the same.
I have read thousands of Letters to the Editor commenting on and criticizing a paper, along with the replies, but I do not remember any letter referring to an Editorial written by the Editor in Chief.
My question is:
Who are the ideal reviewers of the Editors? Do they exist?
I am an early career researcher and just became an editor this year for a really good society journal (impact factor 5). I did a lot of reading on good editing practices, and basically do everything you mentioned – apart from the early career reviewer database set up by Susan Perkins (thanks for that!). However, I still have problems finding reviewers. I cannot help thinking it might be easier for people to say yes to invites when editors are professors, even more so when they are famous and prestigious in their field. It’s just a thought one could also consider…