I’ve read a lot of research proposals and manuscripts. Some manuscripts were rejected, and some proposals didn’t fare so favorably in review. What have I learned from the ones on the lower end of the distribution?
Here’s an idea. It can’t explain everything, but it’s something to avoid.
Some biologists are question people — a particular question or issue unifies their research agenda. Others are organism or biome people — their work is unified by a particular taxon or a particular place. (And some people aren’t so easily categorized, of course.)
For example, some people might work on mating systems, or fire ecology, or geographic distributions, working with a variety of model systems. Other people might work on bees, or rocky intertidal zones, or poppies — and they could be working on a broad variety of conceptual issues.
This isn’t a bad thing. Some of us just have our affinities. But those affinities can contribute to blind spots.
Here’s something I have found in a good number of the manuscripts that get trashed in review, and a fair share of the proposals that sink to the bottom: they’re really well informed about the organism but not the question, or they’re really well informed about the question but not the organism.
This might seem a bit obvious, but I’ll say it anyway:
If your manuscript or grant is about question A in organism X, make sure that you understand and are familiar with the literature in both A and X.
So for example, if your manuscript is about the physiological ecology of pikas on mountaintops (I just made this up), then you need to address the literature about pikas and the literature about mountaintop biology of all kinds of organisms. If you’re a pika expert, and you limit your discussion to other mammals, then odds are that other scientists will think that you’re missing the point, and you’ll get majorly dinged in peer review.
Likewise, let’s say that you’re a specialist in mountaintop biology, and you decided to do a study on pikas though you haven’t worked much with pikas before. Even if your project is about mountaintops and pikas are just a handy model system for your question, you need to understand and write about the biology of pikas and other mammals, as it relates to your work. Otherwise, you might be saying something off base or not grounded in the biology of the organism.
I’d guess that the majority of academic work in ecology/evolution that gets a really hard time in peer review falls short on this front. It’s not the only problem, but it’s an important one.
I should address the reality that subfields have cliques. Social groups emerge among people who are familiar with one another’s work over the years, and when a new person comes in without a reputation, they might get more scrutiny than someone who is already a member of the club. It isn’t fair, it’s not right, but it’s the way things work sometimes. So if a person who hasn’t published in a particular realm gets reviewed by a well-established expert, it’s not a surprise that they might get a more critical review than they deserve. (It’s the job of editors, of course, to keep this in mind, and to solicit and interpret reviews with an awareness of the potential for this kind of situation.)
Mostly just wanted to say good post. You nailed it.
The same point, that you need to know both your question and your system, also comes up in reviews. As a fundamental researcher who works in model systems, some of which I haven’t spent a lifetime working on, I can tell you that the most frustrating reviews tend to come from specialists in the system who don’t really understand the question I’m asking. Honestly, I’d be happy (ok, embarrassed at first, but ultimately happy) to be told that my conclusions are incorrect because of aspect X of the biology of my study species, of which I had previously been unaware. But I’ve never gotten that sort of review. Instead, reviews I’ve gotten from system specialists have tended to just point to features of the biology of the organism that the paper didn’t discuss, without explaining why those biological features undermine the conclusions of the paper.
Made-up example: you’re doing an experiment to test Tilman’s “R*” competition theory with protists as the model system. A protist specialist who asks you “Why would you think that theory would apply to the protist species you used? One of them is Blepharisma sp., which is capable of inducible gigantism and can eat the species you competed it against. Aren’t you actually studying intraguild predation rather than pure resource competition?” is bringing up very relevant natural history that might well undermine your conclusions. In contrast, a protist specialist who asks you “Why would you think that theory would apply to the protist species you used? They have micronuclei as well as macronuclei.” is just bringing up irrelevant natural history.
That is, if you as a reviewer don’t understand both the question and the system, you’ll have a hard time figuring out which bits of your system-specific knowledge are relevant for purposes of the review.
A good editor can recognize reviews that “miss the point” and work around them. But the editor needs to be pretty independent-minded and prepared to ignore an unsatisfied reviewer.
And yes, it’s my anecdotal impression that some systems are associated with very cliquey specialists, whereas specialists on other systems aren’t that way at all. No idea why that is.