Transparency in research: publish your reviews


When it comes to reform, and “reform,” it seems like most people think they know the fix, regardless of the problem that needs to be fixed.

For example, many people have strong opinions about how to solve the public education crisis in the U.S. What most of the people pushing for “reform” have in common is that they have little experience or success in public education. Solutions to a problem might involve fewer taxes, more taxes, more investment, less investment, more regulation, less regulation. It all blurs together.

It’s not too often that you hear someone in a position of ignorance say, “I’ll defer to the education experts.”

For some problems, there are self-evident partial fixes that don’t need any discussion, because the people who are wrong on the issue are straight-up ignoramuses. For example, if you want better schools, then you need to give more respect, support, and money to teachers. You can’t have good schools if the teachers don’t receive respect and support. That’s just obvious. If you want children to stop dying at the hands of madmen, then you’ve got to restrict access to guns. You can’t get more sensible than that, and it’s a fact that other developed nations figured out long ago.

The scientific publishing industry is a mess in several different ways, and this mess is stifling research progress. There are not many obvious, direct solutions. Perhaps scientists should be able to retain copyright of their own work, but this is a complex issue.

There is one component of the academic publishing mess that can be quickly and easily changed by us authors.

If you want more confidence in the fairness and integrity of the publication process, then you need more transparency.

There is one massive thing that we can do to increase the transparency of the publication process. We can publish our reviews.

Here are some upsides to releasing your reviews:

  • There will be fewer doubts about the integrity of journals and the quality of peer review.
  • There will be more doubts about the integrity of journals that should be subject to doubts.
  • Reviewers, even though they are anonymous, may tend toward producing more civil and measured reviews, with fewer requests for citations to their own work, if the reviews end up being published.
  • Specific concerns about the scientific content of a paper that were addressed during the review process will be publicly available, increasing the ability of readers to critically evaluate the science of the publications.
  • Taxpayers who are paying for research will be even more informed about the process and consequences of publicly funded projects.
  • People will learn that the quantity and quality of peer review may be independent of the impact factor, prestige or ranking of a journal.
  • The academic glamour magazines will look a lot less glamorous if the reviews and editorial evaluations associated with those venues are seen in daylight.

How does this work? Just put ’em on your website. I’ve been doing this since 2009. Go ahead and read ’em! (And feel free to cite them.)

It takes a very short time to do this. I just take the reviews as they come in and copy-and-paste them into a word processing document, redacting the names of my correspondents. Then I make it into a pdf, and upload it right next to the paper itself on my website.

To my knowledge, I’m the only person who does this as a regular course of action.

I haven’t often mentioned it while chatting with colleagues, even though I know plenty of folks are downloading reprints from my site. Perhaps nobody mentions it because they think it’s a supremely risky or unwise thing to do. If you read through the files, you might notice that one or two good journals come out looking rather silly. It might have resulted in a grudge on their end, though I don’t think that’s the case. Obviously it’s not wholly flattering to show evidence of rejection after rejection for some papers. I think the benefits of transparency outweigh the downsides of publishing negative reviews that resulted in rejection.

How do the journals feel about this? Nobody’s ever said anything. It hasn’t come up.

I do look at this from the perspective of an editor, too. I have handled my share of manuscripts. I doubt that any of the authors whose manuscripts I’ve handled are publishing their rejections and acceptances online (and rejections are far more common than acceptances). Nevertheless, I work for quality and fairness, clearly enough that if those documents were public with my name on them, I would be proud of the work and wouldn’t feel the need to make any excuses. I do include the names of journals, but not the names of any particular individuals. You could infer editors-in-chief based on the dates in the correspondence, but it’s a different matter for handling editors.

I approach editing with the philosophy that I should be able to handle public scrutiny if it were all published on the front page of the newspaper. I have the same policy for how I conduct myself in the classroom, and how I correspond over email. I honestly wouldn’t be bothered if my reviews of a manuscript and my remarks as an editor were publicly revealed with my name attached. I certainly wouldn’t mind if they were released without any name attached, which is what I do with the reviews I share with the world.

I don’t think many people are terribly interested in the content of these reviews. They want to see the final paper, and few want to look inside the sausage factory. It is probably of greatest interest to students who don’t yet know how the process works.

One thing that you’ll see is the rigor of peer review associated with PLoS ONE. I’ve only published one paper in that venue so far, but when you compare the process there to the quality of editorial work at other publications, including the earlier submissions of that same paper to other venues, you have to respect what happens under the hood at PLoS ONE.

Do you think sharing your reviews is a good idea? If everybody shared their reviews, would it destabilize the publication process, result in no change, or make things more fair? Would the amount of luck involved in the process, and the importance of salesmanship, become more evident?

I’m not suggesting that this is a major fix, but from the angles I’ve seen so far, I see a lot of positives.

11 thoughts on “Transparency in research: publish your reviews”

    • I’m not sure how this is connected to the post – but hey, folks, if you want to learn about how hard it is to publish in glamour mags, this link is an example.

  1. Hi Terry, it was meant to connect to your readers. I am new to this blogging business, but boy have I got a lot to rant about! You can see from my other blogs that I am a seasoned scientist, but one struggling to make a living in academia. One of the things I am passionate about is open access and an open review process, which will be covered in my future blogs! Thanks for accommodating my blog!
    Ravi

    • Sure thing! Welcome to blogging. I’m mighty new as well, still figuring this out. This blog is about to hit its 2-monthiversary. Looking forward to more of what you have to say!

  2. Wow! I looked at a semi-random sample of the reviews for two or three of your most recent papers and noticed that you have encountered a long slew of extremely tough reviews out there. I mostly speak to the detail and rigor of the reviews, as well as the frequent rejections and calls for rejection, often based on very precise notions of study broadness or impact commensurate with the target journal. There were also some cases where you were invited to revise and resubmit, did so, and then the paper was rejected anyway! Makes me impressed at your persistence in maintaining a high publication rate in the face of all this. Also makes me wonder if the ant ecologists are a particularly tough lot, at least compared to reviewers I have encountered. My experience has been different. My work is mostly on legacy pollution monitoring studies, with most of my work published in moderate-impact journals by commercial publishers (Elsevier and Springer in particular). Almost every paper received a call for “major revision” with relatively brief (1/3 – 2/3 page) reviews. After carefully addressing the comments, these papers were accepted, albeit sometimes after reconsideration by the original reviewers. Not sure if I should feel lucky or worried that the process left much of my work relatively unscathed. It has also been my anecdotal experience that the less ambitious and adventuresome the paper, the less critical the reviews. I wonder if there is a real bias for “normative” work.

    • Yeah, I’ve had some rough reviews. Some are quite spot-on. The annoying part is that you can tell that the subtext of some reviewers is that they don’t like the premise or the approach of the paper, rather than the science itself. Some people start with “this is interesting, and how can it be fixed to be better?” and others start with “I don’t like this, and how can I identify things to torpedo it in review?” I think I get more of the latter, overall. The paranoid person in me says it’s because I’m not a member of the club. But then again, there’s an argument to be made that I am one of the privileged guys in the club. All I can do is just resubmit and keep doing my science, and be as collegial as possible when given the opportunity.

  3. Some folks have asked me why I only included reviews for some papers. I have included reviews for all research papers for which I am first author and that have been published since I started the practice. I won’t do it if I’m not first author, unless the author is a student of mine and I have that student’s permission.
