Why I don’t have trouble finding peer reviewers

I see this very often on social media, and also in conversation with other academic editors: it’s getting harder and harder to find people who agree to review manuscripts.

I have no idea whether this reflects the general experience, or if it’s borne out by data. I of course believe the lived experience of my peers, and their accounts make sense given the steady (and absurd) increase in publication rates, with so many people working the manuscript ladder chasing prestige, all compounded by the difficulties of the pandemic. I imagine that some journals have tracked the invitation acceptance rate and how it’s changed over time and perhaps shared this — or maybe it’s in the bibliometric literature — though over the span of a couple minutes my searching powers came up short.
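
Tracking it wouldn’t be hard, for what it’s worth. Here’s a minimal sketch of the computation I have in mind (Python, with an invented log format; treat the data layout as hypothetical, since any real editorial system would export something richer):

```python
from collections import defaultdict

# Hypothetical invitation log: (year, reviewer_said_yes) pairs. The format
# is invented for illustration, not pulled from any real editorial system.
invitations = [
    (2018, True), (2018, False), (2018, True),
    (2019, True), (2019, False), (2019, False),
    (2020, False), (2020, True), (2020, False),
]

tally = defaultdict(lambda: [0, 0])  # year -> [yeses, total invitations]
for year, said_yes in invitations:
    tally[year][0] += said_yes
    tally[year][1] += 1

for year in sorted(tally):
    yeses, total = tally[year]
    print(f"{year}: {yeses}/{total} invitations accepted ({yeses / total:.0%})")
```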

That said, I have to admit that getting reviewers to say yes hasn’t been a problem for me in the course of my editorial duties. Even in the depths of this pandemic, I usually haven’t had to ask more than three to five people in order to land two reviewers. Each year, I’ve been handling dozens of manuscripts, so I can’t credibly pin this on the luck of the draw. I don’t know why I don’t have much trouble finding peer reviewers. It’s presumably a complex function of the manuscripts themselves, the society affiliation of the journal, how and whom I choose to invite, the financial model of the journal, maybe whether people are more likely to say yes to me as a human being (?), and who knows what else. If you ask people why they say no, I’m sure everybody just thinks it’s because they’re too busy. But if you ask people why they say yes, that’s where it might get interesting.

The title of this post is a bit off, because I clearly don’t know why I don’t have trouble finding reviewers. But it might be informative anyway: I’ll tell you what I’ve been doing, and that might help y’all come to your own conclusions about the Why. I’ve just stepped down from all of my editorial roles, so now is a good time to step back, reflect on how I’ve identified potential reviewers, and attempt some generalized take-home lessons from this experience.

Updating my perspective on “predatory” journals

It took a while for the rise of the internet to destabilize the academic publishing industry, but still the major for-profit publishers have been adept at consolidating their racket. Academic institutions, and individual academics such as myself, continue to be fleeced, donating money to corporations in a sector with an absurdly high profit margin. If you’re reading this site, you presumably are aware of all the disruptions in academic publishing that have been facilitated by the internet: preprint servers, scihub and libgen, open-access fees, journals that are entirely open access, and so-called “predatory” journals.

Let’s talk more about “predatory” journals.

These journals seem more parasitic than predatory: they’re merely taking advantage of the perverse incentives that we have developed in higher education.

NSF needs more non-R1 GRFP reviewers, please sign up!

I have a little something to admit. I just registered as a potential reviewer for the NSF GRFP for the first time. (That’s the Graduate Research Fellowship program, for the noobs). I’ve been on here for years talking about the program: how it works, how the outcomes are inequitable, how we can do our part to increase representation in the applicant pool, yadda yadda, but I’ve never even tried to put in the work and become a reviewer until now. Does that make me a hypocrite? A little bit, yeah.

Are you interested in becoming a reviewer? You can sign up here with a copy of your CV and let NSF know that you’re available. The whole process took me about five minutes.

Down with bar graphs

Some folks really hate pie charts, but I think for some purposes, they can communicate precisely the information we want them to. But, on the other hand, who’s our real enemy? Bar graphs.

Introducing Exhibit A (which is Figure 1 from Weissgerber et al.):

[Figure 1 from Weissgerber et al., PLOS Biology]

Bar graphs tell us the mean, and some kind of measure of variance (standard deviation? standard error? confidence interval?). And that’s it.
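
To make that concrete, here’s a quick sketch (Python with matplotlib; the numbers are made up for illustration): two groups engineered to have nearly the same mean and standard deviation, so their bars look interchangeable even though the raw data tell two very different stories.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Made-up data: similar means and SDs, very different distributions.
a = rng.normal(10, 3, 40)                  # one roughly symmetric cluster
b = np.concatenate([rng.normal(7, 1, 20),  # two well-separated clusters
                    rng.normal(13, 1, 20)])

fig, (left, right) = plt.subplots(1, 2, figsize=(8, 4), sharey=True)

# The bar graph: mean plus an SD error bar. The groups look interchangeable.
left.bar([0, 1], [a.mean(), b.mean()], yerr=[a.std(), b.std()],
         capsize=8, tick_label=["A", "B"])
left.set_title("What the bar graph shows")

# The raw data, jittered horizontally. The two clusters in B are obvious.
for i, group in enumerate((a, b)):
    right.scatter(i + rng.uniform(-0.08, 0.08, len(group)), group, alpha=0.6)
right.set_xticks([0, 1])
right.set_xticklabels(["A", "B"])
right.set_title("What the raw data show")

plt.tight_layout()
plt.show()
```

That’s essentially the point of the Weissgerber et al. figure above: wildly different datasets can produce indistinguishable bars.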

Reviewing manuscripts as an early career scientist

I find this weird, but apparently, some journals in some academic fields don’t allow grad students or postdocs to serve as peer reviewers. I do get the idea that professional experience and expertise should be required to conduct a peer review. They’re called “peer” reviews for a reason.

Then, the question is: are early career scientists our peers?

Massive editorial failures harm authors and readers

Have you heard of the newly published misogynist paper in the American Journal of Emergency Medicine? Here’s the start of the abstract:

It is unknown whether female physicians can perform equivalently to male physicians with respect to emergency procedures. Endotracheal intubation is one of the most critical procedures performed in the emergency department (ED). We hypothesized that female physicians are not inferior to male physicians in first-pass success rate for this endotracheal intubation.

There has been much outrage. But hold on. This might not be what it looks like.

Sizing up competing peer review models

Is peer review broken? No, it’s not. The “stuff is broken” framing is so overused that it now just sounds like hyperbole.

Can we improve peer review? Yes. The review process takes longer than some people like. And yes, editors can have a hard time finding reviewers. And there are conflicts of interest and bias baked into the process. So, yes, we can make peer review better.

As a scientific community, we don’t even agree on a single model of peer review. Some journals are doing it differently than others. I’ll briefly describe some peer review models, and then I’ll give you my take.

An introduction to writing a peer review

I recently had an exchange with a colleague, who had just written a review at my request. They hadn’t written many reviews before, and asked me something like, “Was this a good review?” I said it was a great review, and explained what was great about it. Then they suggested, “You should write a post about how to write a good review.”

So, ta da.

Knowing your animal and your question

I’ve read a lot of research proposals and manuscripts. Some manuscripts were rejected, and some proposals didn’t fare so favorably in review. What have I learned from the ones on the lower end of the distribution?

Here’s an idea. It can’t explain everything, but it’s something to avoid.

How can track record matter in double-blind grant reviews?

We should have double-blind grant reviews. I made this argument a couple weeks ago, and it was met with general agreement. Except for one thing, which I now address.

Some readers said that double-blind reviews can’t work, or are inadvisable, because of the need to evaluate the PI’s track record. I disagree with my whole heart. I think we can make it work. If our community is going to make progress on diversity and equity like we keep trying to do, then we have to make it work.

We can’t just put up our hands and say, “We need to keep it the same because the alternative won’t work” because the status quo is clearly biased in a way that continues to damage our community.

What is press-worthy scholarship?

As I was avoiding real work and morning traffic, there were a bunch of interesting things on twitter, as usual. Two things stood out.

First was a conversation among science writers, about how to find good science stories among press releases. I was wondering about all of the fascinating papers that never get press releases, but I didn’t want to butt into that conversation.

Impatience with the peer review process

Science has a thousand problems, but the time it takes for our manuscripts to be peer reviewed ain’t one. At least, that’s how I feel. How about you?

I hear folks griping about the slow editorial process all the time. Then I ask, “How long has it been?” And I get an answer like, “Oh, almost two whole months. Can you believe it? Two months?!”

“Open Science” is not one thing

“Open Science” is an aggregation of many things. As a concept, it’s a single movement. The policy changes necessary for more Open Science, however, are a conglomerate of unrelated parts.

I appreciate, and support, the prevailing philosophy of Open Science: “the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society.” Transparency is often, though not always, good.

Natural history, synthesis papers and the academic caste system

It’s been argued that in ecology, like politics, everything is local.

You can’t really understand ecological relationships in nature unless you’re familiar with the organisms in their natural environment. Or maybe not. That’s probably not a constructive argument. My disposition is that good ecological questions are generated from being familiar with the lives that organisms lead out of doors. But that’s not the only way to do ecology.

What ever happened to “major and minor revisions?”

Since I started submitting papers (around the turn of the century), editorial practices have evolved. Here’s a quick guide:

What used to be “Reject” is still called a “Reject.”

What used to be “Reject with Option to Resubmit” rarely ever happens anymore.

What used to be called “Major Revisions” is now called “Reject (With Invited Resubmission)” with a multiple-month deadline.

What used to be called “Minor Revisions” is now called “Reject (With Invited Resubmission)” with a shorter timeline.

And Accept is still Accept.

Here’s the explanation.

A flat-out rejection — “Please don’t send us this paper again” — hasn’t changed. (I’ve pointed out before that it takes some experience to know when a paper is actually rejected.)

Which institutions request external review for tenure files?

Today, I’m submitting my file for promotion. It’s crazy to think I submitted my tenure file five years ago; it feels closer to yesterday. Unless I get surprised (and it wouldn’t be the first time), I’ll be a full Professor if I’m here next year. And yet, throughout this entire process, there has been zero external validation of tenure and promotion. I think this is really odd.

Why I prefer anonymous peer reviews

Nowadays, I rarely sign my reviews.

In general, I think it’s best if reviews are anonymous. This is my opinion as an author, as a reviewer, and as an editor. What are my reasons? Anonymous reviews might promote better science, facilitate a more even playing field, and protect junior scientists.

The freedom to sign reviews without negative repercussions is a manifestation of privilege. The use of signed reviews promotes an environment in which some have more latitude than others. When a tenured professor such as myself signs reviews, especially those with negative recommendations, I’m exercising liberties that are not as available to a PhD candidate.

To explain this, I’ll describe and compare the potential negative repercussions of signed and unsigned reviews.

Unsigned reviews create the potential for harm to authors, though this harm may be evenly distributed among researchers. Arguably, unsigned reviews allow reviewers to be sloppy and get away with a less-than-complete evaluation, which will cause the reviewer to fall out of the good graces of the editor, but not those of the authors. Also, reviewer anonymity allows scientific competitors or enemies to write reviews that unfairly trash (or, more strategically, sabotage) one another’s work. Junior scientists may not have as much social capital as senior researchers to garner favorable reviews from friends in the business. But on the other hand, anonymous reviews can mask the favoritism that may happen during the review process, conferring an advantage to senior researchers with a larger professional network.

Signed reviews create the potential for harm to reviewers, and confer an advantage to influential authors. It would take a brave, and perhaps foolhardy, junior scientist to write a thorough review of a poor-quality paper coming from the lab of an established senior scientist. Doing so could harm the odds of landing a postdoc, getting a grant funded, or getting a favorable external tenure evaluation. Meanwhile, senior scientists may have more latitude to be critical without fear of direct effects on their ability to bring home a monthly paycheck. Signed reviews might give more influential scientists a breezier ride through peer review than unknown authors get.

When the identity of reviewers is disclosed, this information may enable novel game-theoretic strategies that further subvert the peer-review process. For example, I know there are some reviewers out there who seem to really love the stuff that I do, and there is at least one (and maybe more) who appear to have it in for me. It would only be rational for me to list the people who gave me negative reviews as non-preferred reviewers, and those who gave me positive reviews as recommended reviewers. If I knew who they were. If everybody knew who gave them more positive and more negative reviews, some people would make choices to exploit the system and garner more lightweight peer review. The removal of anonymity can open the door to corruption, including tit-for-tat review strategies. Such a dynamic would further exacerbate the asymmetries between less experienced and more experienced scientists.
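
To make that incentive concrete: the strategy requires no sophistication at all. Here’s a toy sketch (Python, with invented names and recommendation scores; no real data involved):

```python
# Toy illustration of the incentive: if reviewer identities were public,
# filling in the "preferred" and "non-preferred" reviewer boxes becomes
# a trivial sort on past recommendations. All names and scores invented.
past_reviews = {
    "Reviewer A": [2, 3, 3],  # recommendation scores; higher = more positive
    "Reviewer B": [1, 1, 2],
    "Reviewer C": [3, 2, 3],
}

ranked = sorted(past_reviews,
                key=lambda r: sum(past_reviews[r]) / len(past_reviews[r]))

print("Non-preferred:", ranked[0])   # harshest on my past papers
print("Preferred:", ranked[-1])      # friendliest on my past papers
```

Nobody would need to write this out, of course; people would just run the sort in their heads. That’s the problem.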

The use of signed reviews won’t stop people from sabotaging other papers. However, signed reviews might allow more senior researchers to use their experience with the review system to exploit it in their favor. It takes experience receiving reviews, writing reviews, and handling manuscripts to anticipate how editors respond to reviews. Of course, let’s not undersell editors, most of whom I would guess are savvy people capable of putting reviews in social context.

I’ve heard a number of people say that signing their reviews forces them to write better reviews. This implies that some may use the veil of anonymity to act less than honorably, or at least not try as hard. (If you were to ask pseudonymous science bloggers, most would disagree.) While the content of the review might be substantially the same regardless of identity, a signed review might be polished with more varnish. I work hard to be polite and write a fair review regardless of whether I put my name on it. But I do admit that when I sign a review, I give it a triple-read to minimize the risk that something could be taken the wrong way (just as I do whenever I publish a post on this site). I wouldn’t intentionally say anything different when I sign, but it’s normal to take negative reviews personally, so I try to phrase things so that the negative feelings aren’t transferred to me as a person.

I haven’t always felt this way. About ten years ago, I consciously chose to sign all of my reviews, and I did this for a few years. I observed two side effects of this choice. The first was a couple of instances of awkward interactions at conferences. The second was an uptick in the rate at which I was asked to review. I think this is not merely a correlative relationship, because a bunch of the editors who were hitting me up for reviews were authors of papers that I had recently reviewed non-anonymously. (This was affirmation that I did a good job with my reviews, which was nice. But as we say, being a good reviewer and three bucks will get you a cup of coffee.)

Why did I give up signing reviews? Rejection rates for journals are high; most papers are rejected. Even though my reviews, on average, carried similar recommendations to those of other reviewers, it was my name as reviewer that was connected to the rejection. My subfields are small, and if there’s someone I’ve yet to meet, I don’t want my first introduction to be a review that results in a rejection.

Having a signed review is different from being the rejecting subject editor. As subject editor, I can point to reviews to validate the decision, and I also have a well-reasoned editor-in-chief, who to his credit doesn’t follow subject editor recommendations in a pro forma fashion. The reviewer is the bad guy, not the editor. I don’t want to be identified as the bad guy unless it’s necessary. Even if my review is affirming, polite, and as professional as possible, if the paper is rejected, I’m the mechanism by which it’s rejected. My position at a teaching-focused institution places me on the margins of the research community, even though I am an active researcher. Why the heck would I put my name on something that, if taken the wrong way, could result in further marginalization?

When do I sign? In two kinds of situations. First, some journals ask us to sign, and I will for high-acceptance-rate journals. Second, if I recommend changes involving citations to my own work, I sign. I don’t think I’ve ever said “cite my stuff” when my work was uncited, but sometimes a paper cites me and follows up on something in my own work, and I step in to clarify. It would be disingenuous to hide my identity at that point.

The take-home message on peer review: the veil of anonymity unfairly confers advantages to influential researchers, but the removal of that veil creates a new set of more pernicious effects for less influential researchers.

Thanks to Dezene Huber, whose remark prompted me to elevate this post from the queue of unwritten posts.