Natural history, synthesis papers and the academic caste system

It’s been argued that in ecology, like politics, everything is local.

You can’t really understand ecological relationships in nature unless you’re familiar with the organisms in their natural environment. Or maybe not. That’s probably not a constructive argument. My disposition is that good ecological questions are generated from being familiar with the lives that organisms lead out of doors. But that’s not the only way to do ecology.

When hunting big (metaphorical) fish, we’re looking for patterns and mechanisms at the global scale (in oceans and forests and savannahs, and beyond the globe, like in simulations and mesocosms). If you have a principle or idea that works in one location but doesn’t generalize, then you’re not advancing theory. That’s only a piece of a more complex working model. For example, just because phosphorus predicts litter decomposition in the rainforest where I work, that doesn’t mean it works worldwide. But what I know about P and litter and this one forest can be an important piece of information to build a more informative global model.

It’s still possible to make a huge splash in ecology by working locally. If you have a great idea and a well-designed experiment, it’s possible to change the world by just going to the intertidal zone, and removing some animals and watching what happens afterwards. If you track nutrients in one ecosystem, or follow one invasive species, plow and fertilize some fields in a certain way, or study the predatory behavior of one animal, this could cause an intellectual cascade throughout the discipline. Work done at one site can have a massive impact.

Local work may trigger intellectual cascades, but those cascades really start to flow from related meta-analyses, reviews, and syntheses. Connell’s removal experiments are as famous as you can get in ecology, but this work was impactful because of his related reviews. He wasn’t saying that it happened just with his animals, but that it happens all over the place, and the broader literature was brought to bear to support this point.

Most of the highly cited papers in ecology are not reports of individual field projects, but syntheses that use other investigations as building blocks. On a smaller scale, I’ve experienced this myself. I’ve been studying the biology of nest movements in ants for quite a while. A few years ago, I wrote a review on the topic, and while it’s not becoming a citation classic overnight, it’s definitely getting more attention than most of the papers that I needed to write the review. It’s a well-known phenomenon that syntheses and reviews get more attention — and prestige — than the work that forms the building blocks. We should be aware of what this means for the scientists who are making those building blocks.

Ecology has a huge Rashomon effect in the rhetoric and publication practices surrounding field-based data and natural history.

Everybody loves natural history. Everybody thinks that we should value natural history more (well, almost everybody), and find a way to value descriptive field biology more in our academic rewards system. I would bet there are more ecologists who hate puppies than ecologists who think that natural history is valued adequately.

Though nearly all of us agree that natural history needs to be valued more, if you look at the way we publish and cite and hire and tenure and promote and award scientists, well, we pretty much aren’t valuing it.

We want it to be valued because we all benefit from it. But we all are reluctant to do and publish much of it because we aren’t getting credit for it.

(Speaking of which, shout out for Taxonomist Appreciation Day on 19 March! Do you know how you’re going to celebrate the taxonomists in your life?)

It’s not constructive to pick on any particular people in this situation, but I wanted to share my short little peek into the sausage factory that makes big influential synthesis papers. I looked up three influential (at least to me) synthetic papers from recent years, in fields that I know well (enough). From each, I looked at two things: First, I looked at the citations of papers that provided the data for the synthesis. Second, I looked up the publication list of the authors of the synthesis papers. Here’s what I found:

  1. The papers that contained the data for the syntheses were typically in less prestigious journals than the journals that published the syntheses.
  2. When I was familiar with the authors of the cited papers, a decent fraction were in non-R1 institutions.
  3. The authors of the syntheses were publishing in more prestigious journals than the papers they cited for data.
  4. The papers most often published by the authors of the synthesis papers didn’t seem like they would contain original field data that would be useful to future synthesis papers.

Those points above are me generalizing, I realize, but I think it might be a fair generalization. I just picked out a few papers, and I don’t know if this would hold across the entire field and for everybody. Feel free to give it a try yourself, and if your experience is different (or not), please leave a note in the comments.

From these facts, here are two inferences:

  1. The people who are writing synthesis papers are often a different set of people than those who are writing the most useful original data papers.
  2. The original data papers aren’t garnering as much academic reward for authors as synthesis papers.

I don’t intend for this to be a new insight. But I think it’s worth pointing out that there’s a power differential between the people that are applying aggregations of local data to global questions, and the people who primarily are generating local data.

If we are calling for more natural history to be able to answer big ecological questions, are we saying that everybody needs to do more natural history, or that we want other people to do more natural history?

If someone is choosing to do research that is locally focused in scale, that’s an individual choice. And when people tackle big synthetic projects, that’s also a choice. Calling this a “caste” system is a bit of hyperbole. (But hey, it’s got you reading this far, and you might have to admit that people from non-prestigious institutions have a harder time breaking into collaborative projects that have a big prestigious result at the end.) But I doubt you’d want to disagree with the idea that there are people who design their careers to do mostly descriptive ecological work in non-prestigious journals, and there are others who aim to publish mostly in prestigious journals that often rely on the academic contributions of the former group. (And of course, many of us, myself included, fit into neither group.) But when we advocate for “more natural history,” cognizance of these functional roles in our academic community matters.

Some of the most vocal advocates for natural history are people that publish descriptive ecology and original field data. But some other visible advocates for natural history and an increase in the availability of field data aren’t generating these data themselves.

If someone is saying the world needs more data about organisms in nature, but they’re not adding these data at the rate that they’re consuming these data, then there’s no reason to think that this group is advocating for the interests of the people who are publishing the descriptive work. Based on my little qualitative literature survey, I’m realizing that this descriptive work is being done by an inadequately valued subset of our community.

Let’s say some people work their butt off to address a question of non-global interest, which gets published in a decent but non-prestigious journal and is read by specialists and not anybody else. And then the data end up being more valuable as part of a huge global synthesis — and the people who worked their butt off generating those data get (almost) no credit for it. That’s great for the folks who wrote the synthesis, but the person who generated those data will probably think they deserve more credit than is possible in our outdated system.

Depending on the context, bemoaning the lack of natural history data can really easily sound like a complaint that the scientific underclass isn’t doing its job generating data to support the synthesis papers coming from prestigious laboratories. While I’ll be the first to say, “we need to value natural history more,” I won’t be saying “we need more natural history” without committing to actually valuing natural history for what it’s worth to the community. To do otherwise just exacerbates the problem.

Mandatory public data archival is well established in some journals, and the practice is steadily growing. This is a good thing, but it doesn’t do anything to increase the incentive to publish a paper that has data that would be important for synthetic work. It doesn’t help us build a more substantial foundation of natural history that big-scale ecology requires. A lot of the journals that contain studies useful for syntheses have not yet hopped on the mandatory data archival train, and at the moment there are more disincentives than incentives for those publishing in non-prestigious venues.

I like celebrating natural history with colleagues who share this as a priority. But I’m getting weary hearing about the need for natural history while our academic environment chronically devalues the fundamental work in describing the biology of the organisms in nature.

When it comes to the importance of descriptive work and natural history in ecology, a lot of this talk is All Hat, No Cattle. Yes, we need it. Instead of just saying it’s important, how as a community are we actually going to truly value it? The top-down leadership on this issue has mostly focused on increasing the visibility of natural history without rewarding the people who do the work. The grassroots support for natural history (such as the Ecological Society of America’s Natural History section) needs to grow into a more mainstream movement that has a clear agenda that can move things to the next level. Until these things happen, big synthetic ecology will not be able to fulfill its potential, for want of the contributions of talented scientists focused on local and descriptive ecology.

We can start by not using “descriptive” as a negative epithet. How’s that sound?

28 thoughts on “Natural history, synthesis papers and the academic caste system”

  1. Good things to consider!
    Another interesting disconnect in valuation of biological research is that scientists as a whole tend to be more impressed with broad scale synthetic work providing cross-system insights, but the public very much appreciates even particular insights about local phenomena. Was just talking about this with Gil Wizen (met him on the weekend) and we both have found that people have really loved our very “particular” stories of natural history research, but in the grand scheme of things these are regarded as trivia by the discipline as a whole. This is odd to me, as science quite often casts out entire theoretical approaches, but rarely do we cast aside good natural history data. These stories about particular facets of life on this planet have value!

  2. Yes, being included in a review/synthesis paper can have a really negative impact on the original paper. I had a paper that was doing really well until it was included in a review, since when it has hardly been cited, whilst the review article has gone from strength to strength. The only positive aspect is that I wrote the review article :-)

  3. Terry, in 1965 this was called “There are cake bakers (many) and there are recipe writers (few)”, Robert Sokal, Department of Entomology, University of Kansas, Lawrence. Humanity needs cakes and humanity needs recipes, different kinds and sizes for different occasions. Sometimes it is a 30 kg wedding cake and sometimes it is one oatmeal cookie.

    Every single paper for which I am famous or which had a very large influence (they are not the same) started with a field natural history observation, whether it was one of my many review papers or many natural history descriptive papers. “Study nature, not books” as was said long, long ago. The hype and syntheses and idea theft all came later in my research (and conservation activities as well), and many of the following generations (in age or in the academic trophic chain) wondered “however did you come up with that idea?”. I did not. I just saw it happen by being there.

    Sales are four things: clients, packaging, content, and cost.

    Smile. Dan Janzen and Winnie Hallwachs

    P.S. A vast amount of natural history research/observations/descriptions, almost all, and the syntheses papers stacked on them, could not have been done without a taxonomist(s) somewhere at the start of the intellectual trophic chain. Cite them and make them coauthors. If you want a crude example, you can use
    http://onlinelibrary.wiley.com/doi/10.1111/j.1755-0998.2009.02628.x/full

  4. Hi Terry,

    I am involved in both research synthesis and natural history research, and I disagree with you on several points.

    “The people who are writing synthesis papers are often a different set of people than those who are writing the most useful original data papers.” – in my experience in ecology, it is rarely the case (most meta-analysts I know are also quite active in primary research in ecology/evolutionary biology), and, importantly, it does not have to be the case. I strongly believe that the most meaningful research synthesis can be done by the same people who are involved in primary research on that topic because they understand the system better. That is why several colleagues and I organize meta-analysis courses and train up to 100 PhD students/postdocs per year on how to write synthesis papers so that they can benefit from both worlds – do natural history research and synthesis work. There is no need to create a caste of ‘research parasites’ who use other people’s data for their benefit. And people who collect primary natural history data do not need to feel like a scientific underclass.
    In my meta-analysis course I use an analogy to illustrate the roles of primary scientists, theorists, and research synthesists in the scientific process (similar to Dan’s cake analogy in many ways). Primary scientists provide bricks of data. Theorists provide the blueprint of the building of science. Research synthesists are bricklayers. They put bricks of data/knowledge together and check whether the shape of the resulting building resembles the blueprint provided by the theorists. If they run out of bricks, they explain what sort of bricks are needed (this is knowledge gap identification in research synthesis). Note that this is not the same as “a complaint that the scientific underclass isn’t doing its job generating data to support the synthesis papers coming from prestigious laboratories.” It is an absolutely crucial and legitimate part of the scientific process. In my cartoons, bricklayers look exactly like brick providers (primary researchers), and very often, after identifying research gaps, they go back to the field and start collecting the needed data themselves. If bricklayers find out that the building does not look like the blueprint, they go back to the theorists and tell them that something is wrong with the theories and that assumptions need to be revisited.
    “The papers that contained the data for the syntheses were typically in less prestigious journals than the journals that published the syntheses” and “The original data papers aren’t garnering as much academic reward for authors as synthesis papers”. It is true that review papers are often more cited than primary papers (for obvious reasons, as they are more general and based on a larger body of evidence). So why not do both review papers and primary data papers – most ecologists already do that, I think. It is obviously a matter of personal choice where you are on the natural history/fieldwork – research synthesis continuum. As regards primary papers being published in less prestigious journals than synthesis papers – I am not sure about that. None of my meta-analyses are published in Nature or Science (yet), but I regularly include in my meta-analyses primary research papers published in these journals.

    So, my main points are: a) we need both brick makers and bricklayers (and theorists) to make progress in science, b) anyone can do both of these jobs (yes, one would need some training to be able to do both, but this training is available on both the natural history and synthesis sides of the spectrum), c) it is best for science if both jobs are done by the same people, and I see no reason why they can’t be done by the same people. No castes, no complaints, no research parasites.

  5. Lots to chew on here. Not sure how useful my comments will be since I sense I’m not the target audience for this post. But here goes…

    -You were more careful in your choice of rhetoric than Lindenmayer & Likens. But I sense that you agree with at least some of their points, and disagree with at least some of Brian’s responses? https://dynamicecology.wordpress.com/2013/11/07/the-one-true-route-to-good-science-is/

    -In a world in which more stuff is being published all the time, review papers and meta-analyses are only going to become increasingly valuable relative to other sorts of work. Right? I mean, would you suggest that it would be better for everyone to do their own literature reviews in the introductions to their own papers, rather than relying on reviews or meta-analyses done by others? Do you see it as just lazy for people to rely on review papers, and to cite them instead of citing all the underlying studies?

    -Lots of review papers and meta-analyses are based on something other than natural history observations. Meta-analyses of experiments, for instance–they’re quite common. So is the issue here specific to natural history? Or is this just a special case of the broader issue of valuing individual studies vs. reviews/meta-analyses?

    -Presumably, if we’re going to value natural historical work more, some natural historical work is more valuable or better than other natural historical work. What characterizes the best natural history work? For instance, Am Nat–a very widely read, high-impact general ecology & evolution journal–devotes a special section to natural history. To my eye, they don’t seem to publish just any natural history. For instance, they don’t publish papers reporting the first record of species X in locale Y. Is the sort of natural history Am Nat publishes the sort of thing that you think we should all be valuing more highly and publishing more of in general ecology & evolution journals? If you’re only going to pick out one of my comments to respond to, make it this one–very curious to hear your thoughts on what constitutes the best sort of natural historical work.

    -I think there are various roads to generality in ecology (https://dynamicecology.wordpress.com/2015/06/17/the-five-roads-to-generality-in-ecology/). In particular, #5 on the list in that linked post is a way of doing work of general interest that isn’t always recognized as such, because it involves data from only one site/time/species/system. Are you suggesting that ecologists overvalue certain ways of achieving generality, or certain senses of “generality”, at the expense of other ways/senses?

    -All else being equal, I think it’s correct to care about work in proportion to how general it is, where “general” has various forms or senses (as noted in the previous bullet). Would you argue that generality itself–in all of the forms/senses I listed in the post linked in the previous bullet–is overrated? That seems like a pretty hard argument to make. It’s just not true that every site/time/species/study system is unique. And even if it were true (or to the extent it is true, since obviously uniqueness is a matter of degree), well, why should I care about your work in unique system X if I care about unique system Y?

    -It’s interesting to think about distributed experiments in this context. NutNet for instance. It’s a single collaborative project collecting the sort of data (including descriptive observational data) that might previously have been collected by a bunch of investigators all working independently and then later synthesized by someone writing a meta-analysis. All of the NutNet collaborators are authors on the very high-profile NutNet papers. But are they actually getting a substantive amount of credit for that work? Honestly, when I see any collaborative paper with a whole bunch of authors, I don’t think all of them should, or do, get the same credit for the work as a sole author would have, had the paper been sole-authored. So for instance, with NutNet I think of the founders of the collaboration as the people who should get most of the credit for it. And even if credit for some many-authored high-profile paper were split equally, well, a pie sliced into a whole bunch of pieces still yields tiny pieces even if those pieces are all the same size.

    -Re: the bit about someone working their butt off to address a question of non-global interest, two questions. First, do you subscribe to what might be called the “labor theory of scientific value”? That is, do you think the worth of some bit of science is defined (at least in part) by how hard the investigator had to work to do it? Because I see that as just irrelevant. Second, I know this probably wasn’t your intent, but I wouldn’t be surprised if someone like Brian McGill were to comment about how much work it can be to track down, compile, clean up, and analyze data for a meta-analysis, how hard it can be to chase down errors in one’s R code, etc. So if I could make one suggestion for how to pursue your cause here, I suggest you avoid rhetoric about how tough it is to collect field data. Everybody thinks–correctly–that their own stuff is hard work. And nobody not already convinced is going to be convinced that we’re undervaluing natural history work because it’s so hard to do. Indeed, you might even invite pushback from those who would argue that one characteristic of good scientists is to pick low hanging fruit–to recognize easy bits of good science and prioritize them over more difficult bits of science.

  6. p.s. What Julia said re: the notion that people writing synthesis papers mostly don’t collect their own data. That’s just false. People like Brian McGill, who focus exclusively on synthesizing data collected by others, are very rare in ecology and evolution, and I don’t see any sign they’re becoming more common. (And in case it needs saying, no, I don’t think the rare folks like Brian are in any way free riders or parasites or in any way worse–or better–as ecologists than people who collect their own data).

  7. @Dan Janzen,

    I’m puzzled by your point. Without wanting to deny that your many broadly applicable insights started with natural history observations, they were broadly applicable insights. That’s why they’re (rightly) influential. Terry’s post, as I understand it, raises a totally different issue – the lack of influence of, and credit for, natural history observations that on their own do not suggest any broader insights or generalities. Rather, they suggest broader insights or generalities only when combined with many other natural history observations in the form of reviews or meta-analyses.

  8. After reading the comments from Julia and Jeremy, I had to go back and reread my piece. It seems to me that they’re rebutting or responding to things that I didn’t say (or at least, things I didn’t intend to say and can’t see upon my reread.)

    I made a point to emphasize that people have a choice to do what kind of science they do. What’s this about “research parasites”? I can’t see where I accused anybody of doing anything wrong or taking advantage of anybody else. Nor did I say that people who write synthesis papers don’t collect their own data, though I inferred (from the small sample I selected for this purpose) that folks writing synthesis papers aren’t the ones who write the most useful original data papers.

    The central point of this piece, which Janzen & Hallwachs remarked on, is that there is a division of labor. I observed that this division of labor is associated with an academic class system. There is a whole group of people who — for a complex variety of reasons associated with inequities in science — are not involved in the collaborations that result in big reviews and syntheses. But their stuff is ending up in these reviews. There are foremen (forepeople?) and those who make the bricks. I’m just pointing out that the rewards for the people who specialize in making bricks are lower than for the people using the bricks (some of whom might also be brick producers). And when we’re calling for more natural history and more descriptive work, being conscious of those class distinctions would be more constructive.

  9. For a concrete example of how this inequity can work, look to the many examples of meta-analyses where the originating studies are cited only in the supplemental material, not in the main text.

    In that case the people who did the fieldwork are being denied the most basic of credits – a citation tracked in a major citation database. Incidentally, this may also push down (however marginally) the perceived impact of the lower and mid-tier journals where the field data are published.

  10. Terry – well, I commented on several specific points in your blog and I quoted the text to which I was referring. As regards division of labour, my point was that it is partly artificial (i.e. you don’t have to be only a brick maker or only a foreman, and most people are both – as Jeremy also pointed out). I also don’t believe division of labour is associated with an academic class/caste system (for the same reason – one can do natural history in summer and synthesis papers in winter, which is what I do). You did not use the term ‘research parasite’, but you wrote about “the scientific underclass … generating data to support the synthesis papers coming from prestigious laboratories”, which to me implies that this scientific underclass is essentially exploited by a ‘scientific upperclass’ of research synthesists, hence this brought to mind the recent #researchparasites storm on Twitter. You made a point that calls for more natural history work are sometimes made by people who specialize in research synthesis (“some other visible advocates for natural history and an increase in the availability of field data aren’t generating these data themselves”) – I commented that sometimes this is warranted as part of the identification of research gaps, which is part of synthesis, and in many cases synthesists go back to the field to fill in these gaps themselves (Angela Moles’ work is a good example). In response to the question you pose, “are we saying that everybody needs to do more natural history, or that we want other people to do more natural history?” – I’d say both. If I read a high-profile research synthesis which argues that more empirical research is needed in research area X, and I happen to have some field work experience in this area, I rejoice and write a grant proposal using this research synthesis as an argument to get funding to do more field work in area X. So I think people doing natural history research can benefit from research synthesis work as much as research synthesists can benefit from data collected by naturalists.

    • Julia, I’m really curious, do most people do both? I don’t have hard data on this, but when I look at the publication lists of a lot of the scientists at my university and neighboring institutions in the California State University, it seems that syntheses and reviews are rather uncommon. Maybe most people in R1 institutions do both, but that’s not true across academia in the US. (Which is the central theme of this site, of course.)

      • Terry, I don’t have hard data either, but I would say all meta-analysts I know do both research synthesis and primary research. Of course there are many people who do only/mostly primary research, but I can’t think of reasons why research synthesis would be/should be more commonly done by researchers from larger/more research-active unis. You do not need any special or expensive equipment, or funding to travel to distant field sites, just access to a decent library and a laptop. No need to have a large lab either. Some meta-analyses I published (one in Ecology) were based on undergraduate research projects. You might be quite right and review papers might be very unevenly distributed between universities in the US (or other places) – see for instance this recent paper https://peerj.com/articles/1457/ – I think at the country level it can be explained by lack of training in some places, but I am not sure why synthesis activities would be more common in larger/more research-focused unis.

  11. I’ll chime in, having just come back from a Synthesis working group. With a handful of exceptions, everyone in the room either had or was part of an on-the-ground muddy-boots data collection program – particularly the PIs. And these data were feeding right into synthesis. Indeed, in all of the synthesis groups I’ve been part of, I’d say that nearly all of the participants were motivated to be there because of their ongoing data collection programs, either past or present. So I think your point above is a mischaracterization – and I’ll say my anecdote is as valid as your random sample of three papers (HA!)

    Personally, I view Synthesis Ecology as just another subfield of the discipline, alongside any other subdiscipline. There’s a unique set of skills and techniques that practitioners need, again, just like any other subdiscipline. And a different way of thinking about the nature of data and what you can learn from it. That we’ve swung towards it as generating high-profile work is indicative more of a cultural desire for global answers to global problems. Not everyone is good at or interested in it, in the same way that, frankly, I have very little interest in ever doing terrestrial plant ecology.

    I do think we’re at a point where credit for data needs to be much, much better. Journals bias citations when they don’t permit them in the main text, for example. And this has professional consequences. I’m more curious about the future of data papers – the idea that synthetic groups building large datasets with the collaboration of other practitioners can generate data papers that are highly cited. But this is still a relatively new concept that is just catching on.

  12. Jarrett (jebyrnes), just curious, did your working group have people working at a PUI? That’s the population (from small ponds) that tends not to be involved in these efforts.

    • No. I was probably the closest (we go back and forth on what we are). One of the big conversations at the meeting was the effect that our overly connected social networks had on the quality of our synthesis, actually – that we miss huge important pieces of data that could cause answers to be incorrect because of lack of diversity in all aspects. We’re trying to figure out how to rectify that, and large-scale collaborative data papers were one tool brought up, although how we can truly make those unbiased is still a matter of some ongoing discussion. Clearly, tents need to be bigger. And the question is how to make that happen.

  13. @Terry:

    Thanks for the clarification. So, just to make sure I’m understanding correctly, the issues you’re concerned with are:

    (i) faculty at non-R1 institutions collect data but don’t write review papers or participate in working groups, and

    (ii) people calling for more natural historical observational work are being unrealistic (and hypocritical?), because they’ve failed to recognize that such work isn’t valued. They’re calling for people to do more of something that people have no incentive to do.

    Assuming I’m not misunderstanding…

    re: (i), taking for granted that this is true (and I’m not sure it is), I guess I’m still not clear if it is a problem, and if it is, what the source of the problem is and what the solution is. I mean, anybody who wants to write a review paper, or convene or propose a working group to write a review paper, can do so. Want to be a recipe-writer rather than a cake-baker? Go ahead! Or are you saying that’s not possible? Are you suggesting that there’s some kind of systematic structural bias that prevents people at non-R1 institutions from having equal opportunity to write review papers? And if so, where’s the structural bias? Within non-R1 institutions? At leading ecology journals? At institutions like NCEAS that support working group proposals? I guess what I’m saying is that I think the language of “class” and “inequity” is out of place here, or at least I’m not convinced it has a place here. People are born into a class, which it might be quite difficult to escape. Nobody’s born into not writing review papers or not organizing working groups, or if there’s a sense in which they are, I don’t see it. Different people choose to pursue science in different ways, for all sorts of reasons, many of which really are their own choice. And even if it is the case that, on average, people working at one sort of institution tend to write more review papers than people working at other sorts of institutions, I don’t think that that’s prima facie evidence for some sort of structural problem.

    re: (ii), in general I agree that it’s silly to try to address problems of incentives such as collective action problems by exhorting people to behave contrary to the incentives they face. Hard for me to say anything specific about this particular situation, since I don’t agree that we’re systematically undervaluing certain sorts of work.

    @Eric:

    Sure. I agree that that’s a problem, and it seems like an eminently soluble one. If memory serves, I believe GEB has just changed some policies on publishing of references that might help address this issue.

    But I guess I don’t see that as evidence of some larger or more systematic inequity, or even imbalance, in terms of what sort of work gets published where. Here’s the data (source: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0059813): the majority of ecology papers, including those published in leading journals, are observational studies based on data never before published that the authors collected themselves, and the bulk of the remainder are experimental studies based on data never before published that the authors collected themselves. Also, 70% of papers in ecology journals concern single species. Further, all of this has been true since at least the 1980s; there’s been no appreciable decline over the last few decades in the frequency of observational studies, or studies based on field data the authors collected themselves, in leading ecology journals or in ecology journals as a whole. Finally, reviews and meta-analyses make up a slowly growing but still very small fraction of papers published in leading ecology journals (review journals aside, obviously). So with respect, I just don’t see the evidence that one sort of researcher publishes new field data exclusively in low-impact specialized journals, while other sorts of researchers (who don’t collect their own data) compile those data into meta-analyses and publish the resulting papers in Ecology Letters.

    • Jeremy, those definitely are things that I’ve emphasized. Clearly, when I say there’s a population of people that aren’t publishing reviews or synthesis, yes, many of them are outside R1 institutions. That’s the topic of this site after all! I think my words in the original post stand on their own, and as far as I can tell a good bunch of the concerns that people have expressed in the comments don’t actually pertain to what I actually wrote.

      “Class” and “inequity” are definitely issues for scientists who work outside R1s and elite SLACs. How do you know this? Because I’m one and I’m telling you this.

      http://twitter.com/hormiga/status/704407308688715776

  14. @Terry:

    You seem to be saying that Julia, Jarrett, and I all seem to have badly misread you. If so, I apologize but suggest that the problem is with the clarity of your original post. But if you’d prefer not to clarify or rephrase and let the original post stand on its own, that’s your choice.

    No, sorry, I’m not willing to take your word for it either on empirical matters such as whether most people who write review papers also collect their own data, or on broader matters such as whether there’s some structural bias that systematically limits the opportunity of people at non-R1 institutions to write review papers. If only because I seem unable to correctly paraphrase your views, which strongly suggests that I’m continuing to misunderstand you (for which my apologies, all I can say is that I’m doing my best…). If I don’t even know what you’re saying, I can’t take your word for it. And I assume you wouldn’t want me taking your word for it, since if I don’t understand you I might well be taking your word for something you didn’t actually intend.

    • Of course people are misreading me. It’s not a misread as in “failing to understand what I’ve said,” but instead, “perceiving that I said something which I did not, because my topic is similar to something they’ve heard about and read elsewhere.” Like this whole “parasite” deal. I saw that and was like, “what?” “huh?” Where did I say that?

      I haven’t said you’ve failed to paraphrase me adequately. I just haven’t discussed it at length because I’m working to let my original words stand and let a discussion happen in the comments instead of me monopolizing the conversation.

      I’m observing that there are sets of people that aren’t really involved in influential syntheses and analyses. And they don’t reap a lot of credit for their work. This is something we need to be cognizant of and deal with when we are calling for more information. I said that well in the post, I’m saying it again here in the comments.

  15. Re: letting a discussion happen in the comments, ok, up to you Terry, it’s your blog. But I directed my comments at you because I was hoping for a response from you.

  16. Ok, now that I understand the issue is “people at PUIs tending not to write review papers, and not receiving a lot of credit for the natural history papers they write that then get used in review papers”,* I have a couple of follow-up questions by way of further clarification:

    -Is the issue here specific to review papers? By that I mean: the output of researchers at PUIs, and the broader perception of that research, probably differs from that of researchers at R1 universities for many reasons, many of which aren’t specific to review papers. For instance, if you don’t have grad students, that’s going to affect your ability to produce all sorts of different kinds of research, including but not limited to review papers (though of course, it might not equally affect your ability to do all different kinds of research–as Julia noted it’s not immediately obvious why someone at a PUI would find it especially difficult to do review papers as compared to other sorts of research). And if there’s a prejudice against any and all work being done by people at PUIs, well, by assumption that prejudice would apply to both review papers and other sorts of work.

    -Is the issue here specific to natural historical work, as opposed to other sorts of research that ecologists at PUIs might do? That is, is the issue raised in the post one of insufficient valuation of natural history work, independent of whether it’s done by people at PUIs or R1s or wherever?

    -Is it the case that ecologists at PUIs mostly do (or at least, are much more likely than R1 ecologists to do) natural history work? So that, for instance, if natural history work is undervalued as in the previous bullet, that undervaluation disproportionately affects ecologists at PUIs because they’re disproportionately likely to be doing natural history work?

    -And now it occurs to me that I should perhaps be asking what “natural history” is, rather than presuming that what I mean by that is the same as what anyone else means.

    *And Terry, please do confirm that I’m paraphrasing you correctly. It’s difficult to have a productive conversation inspired by the post, even one not involving you, if readers are misreading the post. And independent of who’s participating in what conversations, I would like to know purely for my own peace of mind that I am not putting words into your mouth.

  17. Those are all great questions Jeremy, several posts worth of issues. I’d say your paraphrase with the asterisk is consistent with some of the main points in the post.

    I wasn’t writing about biases specifically directed against people in PUIs in this post (though of course that’s a recurring theme on the blog). Instead, I was writing about a segment of the population that probably has greater representation in PUIs.

    I’m not really in the mood to write an interpretation of my own words in my own comments. If you’re wondering what I think, then you can ask me what I think. If you’re wondering what I wrote, then you can read what I wrote. If you can’t make sense of what I wrote, then it’s up to you to decide whether what I wrote was shit or whether it’s worth considering in more depth.

  18. Hi Terry – there’s much in your post that I agree with, but like Jeremy I’d prefer to see some real data showing that researchers at non-research-intensive universities (like mine) are proportionately less likely to write syntheses, as opposed to simply producing less research overall.

    You mentioned that calling this a “caste system” might be hyperbole. I’d go further and say that it’s inaccurate: caste systems by their very nature are inflexible, whereas there’s significant flexibility, and choice, of both where and how we pursue our careers.

    My favourite point in the comments is Dan Janzen’s comment that we should include taxonomists as co-authors on papers. Yes, absolutely, and I do it whenever I can.

  19. Terry – let me flip the conversation. How easy do you think it would be for somebody who only does synthesis work to get hired at a PUI? I actually think it wouldn’t be very easy.

    I applied to only a couple and got no interviews. I did interview at a small liberal arts college (I know, a whole different kettle of fish in your caste system, but I think they share the bias I am getting at with PUIs), and not having a field system clearly hurt my prospects. The primary goal of having an active researcher was to have a study system to involve undergrads. And while I tried to make the case that synthesis and modelling can involve undergrads, just ones in the math and computer science departments, the biology department wasn’t very interested in that. Compared to providing field research opportunities for undergrads, getting synthetic papers published in “prestigious” journals wasn’t much of a priority.

    One of those PUIs I applied to and got ignored at was a place I really wanted to live. Was I discriminated against because of the type of work I do? I’m not going to claim that. But your logic might go there.

    I think this is more an issue of fit to system. As Jeremy and Julia noted, to the extent somebody at a PUI has time to do research (and many clearly do, although it is undoubtedly more challenging than for somebody at an R1 – and I’m at an R2, by the way), I see little preventing somebody at a PUI from doing meta-analyses or synthesis if they want to. Most are choosing not to. Why? It’s not impossibility, because I completely agree with Jarrett and Julia and Jeremy that nearly everybody in a synthesis group has a muddy-boots component. I think writing field-work-free (albeit high-profile) papers is just not what a PUI values. For that matter, I think it is not what the majority of people at R1s doing only field work value personally either. And I won’t hijack your post, but I think I could build a pretty good case that it’s not really what most R1 hiring committees value either (they love having both, but when they have to pick, they demonstrably choose field-work-only over synthesis-only).

    And another form of flip, there are plenty of people doing fieldwork only getting into very high profile journals. Tilman comes to mind. As does Hubbell and Jim Clark. And a substantial fraction of every Ecology Letters issue is field work.

  20. It took me about 24 hours to recognize the irony that a variety of people have asked for data related to the issue, yet there are no data. (My remarks have been shaped by my perusal of three recent papers, which is clearly an inadequate and possibly unrepresentative subset of whatever statistical population we’d like to assess. I’d also be thrilled to have the data, and even more so, to be shown wrong.)

  21. Re: data, I could probably be convinced to look up a bit of anecdata. But I’m not going to bother unless I know what I’m looking for. What’s the population of interest? All ecological review papers? Just review papers published in leading ecology journals? Only those papers that review what might be considered “natural history” data (and if so, what is that, operationally? observational field data?)?

  22. p.s. to previous: And is any data on non-review papers relevant? I take it the answer is no, but I thought I’d double-check.

    And what’s more relevant: paper-by-paper data (such as you looked at in your original post), or author-by-author data? It’s easier to get paper-by-paper data–I can just look up some papers and see where the authors are employed. But I’m not going to bother doing that if the question is actually whether authors employed at PUIs are disproportionately unlikely to write review papers compared to authors at R1s, controlling for differences in number of papers authored.
