An introduction to writing a peer review


I recently had an exchange with a colleague who had just written a review at my request. They hadn’t written many reviews before, and asked me something like, “Was this a good review?” I said it was a great review, and explained what was great about it. Then they suggested, “You should write a post about how to write a good review.”

So, ta da.

Knowing your animal and your question


I’ve read a lot of research proposals and manuscripts. Some manuscripts were rejected, and some proposals didn’t fare so favorably in review. What have I learned from the ones on the lower end of the distribution?

Here’s an idea. It can’t explain everything, but it’s something to avoid.

How can track record matter in double-blind grant reviews?


We should have double-blind grant reviews. I made this argument a couple of weeks ago, and it was met with general agreement. Except for one thing, which I now address.

Some readers said that double-blind reviews can’t work, or are inadvisable, because of the need to evaluate the PI’s track record. I disagree with my whole heart. I think we can make it work. If our community is going to make progress on diversity and equity like we keep trying to do, then we have to make it work.

We can’t just throw up our hands and say, “We need to keep it the same because the alternative won’t work,” because the status quo is clearly biased in a way that continues to damage our community.

What is press-worthy scholarship?


As I was avoiding real work and morning traffic, there were a bunch of interesting things on twitter, as usual. Two things stood out.

First was a conversation among science writers about how to find good science stories among press releases. I was wondering about all of the fascinating papers that never get press releases, but I didn’t want to butt into that conversation.

Impatience with the peer review process


Science has a thousand problems, but the time it takes for our manuscripts to be peer reviewed ain’t one. At least, that’s how I feel. How about you?

I hear folks griping about the slow editorial process all the time. Then I ask, “How long has it been?” And I get an answer like, “Oh, almost two whole months. Can you believe it? Two months?!”

“Open Science” is not one thing


“Open Science” is an aggregation of many things. As a concept, it’s a single movement. The policy changes necessary for more Open Science, however, are a conglomerate of unrelated parts.

I appreciate, and support, the prevailing philosophy of Open Science: “the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society.” Transparency is often, though not always, good.

Natural history, synthesis papers and the academic caste system


It’s been argued that in ecology, like politics, everything is local.

You can’t really understand ecological relationships in nature unless you’re familiar with the organisms in their natural environment. Or maybe not. That’s probably not a constructive argument. My disposition is that good ecological questions are generated from being familiar with the lives that organisms lead out of doors. But that’s not the only way to do ecology.

What ever happened to “major and minor revisions”?


Since I started submitting papers (around the turn of the century), editorial practices have evolved. Here’s a quick guide:

What used to be “Reject” is still called a “Reject.”

What used to be “Reject with Option to Resubmit” rarely ever happens anymore.

What used to be called “Major Revisions” is now called “Reject (With Invited Resubmission)” with a multiple-month deadline.

What used to be called “Minor Revisions” is now called “Reject (With Invited Resubmission)” with a shorter timeline.

And Accept is still Accept.

Here’s the explanation.

A flat-out rejection — “Please don’t send us this paper again” — hasn’t changed. (I’ve pointed out before that it takes some experience to know when a paper is actually rejected.)

Which institutions request external review for tenure files?


Today, I’m submitting my file for promotion. It’s crazy to think I submitted my most recent tenure file five years ago; it feels closer to yesterday. Unless I get surprised (and it wouldn’t be the first time), I’ll be a full Professor if I’m here next year. And yet, throughout this entire process, there has been zero external validation of tenure and promotion. I think this is really odd.

Why I prefer anonymous peer reviews


Nowadays, I rarely sign my reviews.

In general, I think it’s best if reviews are anonymous. This is my opinion as an author, as a reviewer, and as an editor. What are my reasons? Anonymous reviews might promote better science, facilitate a more even playing field, and protect junior scientists.

The freedom to sign reviews without negative repercussions is a manifestation of privilege. The use of signed reviews promotes an environment in which some have more latitude than others. When a tenured professor such as myself signs reviews, especially those with negative recommendations, I’m exercising liberties that are not as available to a PhD candidate.

To explain this, here I describe and compare the potential negative repercussions of signed and unsigned reviews.

Unsigned reviews create the potential for harm to authors, though this harm may be evenly distributed among researchers. Arguably, unsigned reviews allow reviewers to be sloppy and get away with a less-than-complete evaluation, which will cause the reviewer to fall out of the good graces of the editor, but not those of the authors. Also, reviewer anonymity allows scientific competitors or enemies to write reviews that unfairly trash (or more strategically sabotage) one another’s work. Junior scientists may not have as much social capital as senior researchers to garner favorable reviews from friends in the business. On the other hand, anonymous reviews can mask the favoritism that may happen during the review process, conferring an advantage to senior researchers with a larger professional network.

Signed reviews create the potential for harm to reviewers, and confer an advantage to influential authors. It would take a brave, and perhaps foolhardy, junior scientist to write a thorough review of a poor-quality paper coming from the lab of an established senior scientist. This could harm the odds of landing a postdoc, getting a grant funded, or getting a favorable external tenure evaluation. Meanwhile, senior scientists may have more latitude to be critical without fear of direct effects on their ability to bring home a monthly paycheck. Signed reviews might allow more influential scientists to have a breezier peer-review experience than unknown authors.

When the identities of reviewers are disclosed, that information may enable novel game-theoretic strategies that further subvert the peer-review process. For example, I know there are some reviewers out there who seem to really love the stuff that I do, and there is at least one (and maybe more) who appears to have it in for me. It would only be rational for me to list the people who gave me negative reviews as non-preferred reviewers, and those who gave positive reviews as recommended reviewers. If I knew who they were. If everybody knew who gave them more positive and more negative reviews, some people would make choices to exploit the system and garner more lightweight peer review. The removal of anonymity can open the door to corruption, including tit-for-tat review strategies. Such a dynamic would further exacerbate the asymmetries between less experienced and more experienced scientists.

The use of signed reviews won’t stop people from sabotaging others’ papers. However, signed reviews might allow more senior researchers to use their experience with the review system to exploit it in their favor. It takes experience receiving reviews, writing reviews, and handling manuscripts to anticipate how editors respond to reviews. Of course, let’s not undersell editors, most of whom I would guess are savvy people capable of putting reviews in social context.

I’ve heard a number of people say that signing their reviews forces them to write better reviews. This implies that some may use the veil of anonymity to act less than honorably, or at least not try as hard. (If you were to ask pseudonymous science bloggers, most would disagree.) While the content of the review might be substantially the same regardless of identity, a signed review might be polished with more varnish. I work hard to be polite and write a fair review regardless of whether I put my name on it. But I do admit that when I sign a review, I give it a triple-read to minimize the risk that something could be taken the wrong way (just as I do whenever I publish a post on this site). I wouldn’t intentionally say anything different when I sign, but it’s normal to take negative reviews personally, so I try to phrase things so that the negative feelings aren’t transferred to me as a person.

I haven’t always felt this way. About ten years ago, I consciously chose to sign all of my reviews, and I did this for a few years. I observed two side effects of this choice. The first was a couple of instances of awkward interactions at conferences. The second was an uptick in the rate at which I was asked to review stuff. I think this is not merely a correlative relationship, because a bunch of the editors who were hitting me up for reviews were authors of papers that I had recently reviewed non-anonymously. (This was affirmation that I did a good job with my reviews, which was nice. But as we say, being a good reviewer and three bucks will get you a cup of coffee.)

Why did I give up signing reviews? Rejection rates for journals are high; most papers are rejected. Even though my reviews, on average, carried recommendations similar to those of other reviewers, it was my name as reviewer that was connected to the rejection. My subfields are small, and if there’s someone I’ve yet to meet, I don’t want my first introduction to be a review that results in a rejection.

Having a signed review is different from being the rejecting subject editor. As subject editor, I point to reviews to validate the decision, and I also answer to a well-reasoned editor-in-chief, who to his credit doesn’t follow subject editor recommendations in a pro forma fashion. The reviewer is the bad guy, not the editor. I don’t want to be identified as the bad guy unless it’s necessary. Even if my review is affirming, polite, and as professional as possible, if the paper is rejected, I’m the mechanism by which it’s rejected. My position at a teaching-focused institution places me on the margins of the research community, even if I am an active researcher. Why the heck would I put my name on something that, if taken the wrong way, could result in further marginalization?

When do I sign? There are two kinds of situations. First, some journals ask us to sign, and I will for high-acceptance-rate journals. Second, if I recommend changes involving citations to my own work, I sign. I don’t think I’ve ever said “cite my stuff” when uncited, but sometimes a paper cites me and follows up on something in my own work, and I step in to clarify. It would be disingenuous to hide my identity at that point.

The take-home message on peer review is: the veil of anonymity in peer review unfairly confers advantages to influential researchers, but the removal of that veil creates a new set of more pernicious effects for less influential researchers.

Thanks to Dezene Huber, whose remark prompted me to elevate this post from the queue of unwritten posts.