Ant science: Thieving ants know how to be sneaky

Ectatomma ruidum. Image by Alex Wild

The most recent paper from my lab is a fun one. We show that thieving ants have a suite of sneaky behaviors that help them avoid being caught in possession of stolen goods. These differences are dramatic enough to classify thieves as a distinct and new caste of ant.

Continue reading

Review unto others as you would have them review unto you?


I am going to go ahead and assume we all want quality reviews of our journal submissions, however you define ‘quality’. Reviewers who take time to seriously evaluate your work, provide constructive feedback and ultimately improve the paper should always be appreciated. But as reviewers ourselves, we know that we don’t always give each paper our full attention. In general, I try to give good and helpful (to the author and editor) reviews. I try not to take on reviews when I know I don’t have the time to do a good job. Perhaps I am naïve but the impression I get from my colleagues and reviews of my papers is that in general most people are also trying to give good reviews. Continue reading

I’m going to stop ignoring ResearchGate


LinkedIn, Facebook, ORCID, Twitter, Instagram, Klout, Mendeley, ResearchGate.

I’m signed up for all of these things. Some are useful, some can be annoying, some I just ignore.

Some vague time ago, a friend in my department mentioned that I should sign up for ResearchGate. I said something like, “It’s just another one of those social networks, yadda yadda so what.” But I signed up anyway*.

At the time I signed up, I halfheartedly connected some of my papers, and since then I’ve ignored it. Jump to last week, when one of their emails was creative enough to find its way through my spam filter:

I was like, huh? I chose to click over to my profile on ResearchGate.

Continue reading

The acceptances that weren’t acceptances


Chatting with people at La Selva Biological Station in Costa Rica, the topic from a recent post came up: that journals have cut back on “accept with revisions” decisions.

There was a little disagreement in the comments. Now, on the basis of some conversations, I have to disagree with myself. Talking with three different grad students, this is what I learned:

Some journals are, apparently, still regularly doing “accept-with-revisions.” And they also then are in the habit of rejecting those papers after the revisions come in. Continue reading

What ever happened to “major and minor revisions?”


Since I started submitting papers (around the turn of the century) editorial practices have evolved. Here’s a quick guide:

What used to be “Reject” is still called a “Reject.”

What used to be “Reject with Option to Resubmit” rarely happens anymore.

What used to be called “Major Revisions” is now called “Reject (With Invited Resubmission)” with a multiple-month deadline.

What used to be called “Minor Revisions” is now called “Reject (With Invited Resubmission)” with a shorter timeline.

And Accept is still Accept.

Here’s the explanation.

A flat-out rejection — “Please don’t send us this paper again” — hasn’t changed. (I’ve pointed out before that it takes some experience to know when a paper is actually rejected.) Continue reading

The importance of storytelling


Much of my time lately has been consumed with two seemingly unrelated activities: reading job applications and reviewing conference papers.

Reading job applications requires me to evaluate a person’s credentials, teaching and research experience, letters of recommendation, and countless other intangibles—all on paper—to determine whether this person might “fit” what we are looking for in a colleague.

Reviewing conference papers requires me to evaluate the validity and importance of the research question, the soundness of the science, the relevance of the results, and the correctness of the interpretation of the results, to determine whether this paper “fits” the definition of “good science” as well as the scope of the conference.

There is one key commonality between them: in both cases, it’s very important that the author tells a good story. Continue reading

Is it harder, or easier, to publish in your field?


It takes time and effort to publish a paper. After all, if it were really easy, then publications wouldn’t be a workable (albeit flawed) currency for success in the sciences.

I have often heard about how some labs have a bigger or smaller MPU (minimum publishable unit) than others, as I’ve worked in biology departments with a lot of academic diversity.

For example, I once knew an immunologist in an undergraduate institution who spent five years of consistently applied effort to generate a single paper on a smallish-scale project. This wasn’t a problem in the department, as everyone accepted the notion that the amount of work that it took to generate a paper on this topic was greater than what it would take for (say) physiology, vertebrate paleontology, or ecology. Continue reading

Authorship when the first author is the senior author


Authorship conventions are based around assumptions that research was done under the umbrella of a research institution.

It’s often just fine to assume that the first author did the most work, and the last author is the senior author who is the PI of the lab that enabled the project.

That’s a fair assumption, so long as the senior author and the first author are different people. In my circumstance, when a paper comes out of my lab, I’m typically the first author and the senior author. Continue reading

Writing a review: thoughts from the trenches.


Somehow I’m in the middle of writing three review papers so I am gaining some perspective on writing them. The first one is basically my own fault; I started thinking a lot about nectar rewards and how they fit into my research. That thinking led to a talk last year on some of my ideas to a bunch of like-minded folk at the Scandinavian Association of Pollination Ecologists’ meeting. Main lesson from my experience: never end a talk asking if you should write a review (and/or for interested co-authors) unless you really want to. Continue reading

What happens in the canopy stays in the canopy.


For a few years, I’ve harbored a very cool (at least to me) natural history idea. But it’s a big technical challenge, and I’m never going to do the required fieldwork myself. So, I should write a blog post about it, right?

Bullet ants (Paraponera clavata) are one of the most charismatic creatures in Neotropical rainforests. My lab has done some work with them recently. These often-seen and well-known animals are still very mysterious. Continue reading

How much do you let students design projects?


Now is the time of year when we work with students on designing summer research projects. How do you decide exactly what their project is, and how the experimental design is structured? This is something I struggle with.


Image by T.McGlynn

In theory, quality mentorship (involving time, patience and skill) can lead a student toward working very independently while still having a successful project. Oftentimes, though, the time constraints involved in a summer project don’t allow for a comprehensive mentoring scheme that facilitates a high level of student independence. Should the goal of a student research project be the training of an independently thinking scientist or the production of publishable research? I think you can have both, but when push comes to shove, which way do you have to lean? I’ve written about this already. (Shorter: without the pubs, my lab would run out of dough and then no students would have any experiences. As is said, your mileage may vary.)

A well-designed project will require a familiarity with prior literature, experimental design, relevant statistical approaches and the ability to anticipate the objections that reviewers will have once the final product goes out for review. Undergraduates are typically lacking in most, if not all, of these traits. Sometimes you just gotta tell the student what will work and what will not, and what is important to the scientific community and what is not. And sometimes you can’t send the student home to read fifteen papers before reconsidering a certain technique or hypothesis.

When students in the lab are particularly excited about a project beyond my mentorable expertise, or beyond the realm of publishability, I don’t hesitate to advise a new course. I let them know what I hope students get out of a summer research experience:

  • a diverse social network of biologists from many subfields and universities

  • experience designing and running an experiment

  • a pub

All three of those things take different kinds of effort, but all three are within reach, and I make decisions with an effort to maximize these three things for the students. Which means that what happens in my lab inhabits the right side of the continuum, sometimes on the edge of the ‘zone of no mentorship’ if I take on too many students.

You might notice one thing is missing from my list: conceive an experiment and develop the hypotheses being tested.

Students can do that in grad school if they want. Or in the lab of a different PI. I would rather have students design experiments on hypotheses connected to my lab that I am confident can be converted into papers, rather than work on experiments of the students’ own personal interest. (Most of my students become enamored of their experimental subjects pretty quickly, though.)

This approach is in my own interest, to maintain a productive lab, but I also think that being handed a menu of hypotheses instead of a blank slate is in the long-term interest of most students. I’m not keen on mentoring a gaggle of students who design their own projects when these projects are only for their edification, and not for sharing with the scientific community. That kind of thing is wonderful for the curriculum, but not for my research lab.

Other people have other approaches, and that is a Good Thing. We need many kinds of PIs, including those that give students so much latitude that they will have an opportunity to learn from failure. And also those that take on 1-2 students at a time and work with them very carefully. I like the idea of thinking about my approach to avoid falling into a default mode of mentorship. Does this scheme make sense, and if it does, where do you fit in and how have you made your choices? I would imagine the nature of your institution and the nature of your subfield — and how much funding is available — structures these choices.

Ant foraging diversity: a simple and elegant explanation


Science can be creative and elegant.

To illustrate this fact, I want to bring to your attention a groundbreaking review paper that was recently published in Myrmecological News, written by Michele Lanan of the University of Arizona.

Usually the terms “groundbreaking” and “review paper” aren’t paired with one another. Review papers usually codify existing ideas and propose some new ones that may fall flat. And, if you chat with an editor, you’ll learn that good reviews really improve a journal’s impact factor.

Then there’s this amazing review I loved so much I had to write this post about it. Even if you don’t know a thing about ants, I’m betting you’ll love how the paper draws a clear and simple explanation from complex interacting phenomena.

Ant people are asked about foraging behavior quite often. How and why do ants make trails? Why do some species make trails and others don’t? Until now, our answers were vaguely correct but relied heavily on generalizations. Now, after Michele Lanan scoured pretty much every paper that’s ever collected data on foraging behavior and ecology, we have a quantitative and robust explanation that is powerfully simple and elegant.

We’ve known that foraging behaviors are structured by the ways in which food is available. Among all ants, there’s a huge variety of foraging patterns. Some are opportunistic hunter-gatherers, others are nomadic raiders, and some use trunk trails, as in the figure below. These patterns reflect differences in food availability.

Figure from M. Lanan 2014, Myrm News 20:50-73.

How, exactly, is it that the properties of food availability can predict how ants forage, in an analytically robust and predictive manner that works for all ants throughout the phylogeny? It doesn’t require an n-dimensional hyperspace to understand the foraging patterns of ants. It only needs a 4-dimensional space.


Figure from M. Lanan 2014, Myrm News 20:50-73.

Lanan took into account four properties of food items: size, spatial distribution, frequency of occurrence, and depletability. She arranged these variables along four axes (as on the right), and showed how this 4-dimensional space maps onto the foraging patterns in the figure above.

How do these foraging patterns distribute across the major ant subfamilies? Are some lineages more variable than others, and what might account for these differences? What other beautiful figures and photographs are in the review that illustrate the relationship between spatiovariability of food and foraging biology? As they say on Reading Rainbow, you’ll have to read the review to find out!

Reference: Lanan, M. 2014. Spatiotemporal resource distribution and foraging strategies of ants (Hymenoptera: Formicidae). Myrmecological News 20: 50-73.

As a disclaimer, I should mention that the author of this paper is a collaborator and friend of mine. And she is leading The Ants of the Southwest short course this summer which I’m also teaching — and spaces are still available!

But that’s not why I’m featuring this paper. I am enthusiastic about this paper because it so obviously resulted from a labor of love for the ants, and is a culmination of years of reflection. This is just a downright gorgeous piece of science, and the more people that see it — and the more recognition that the author gets — the better.

Why I prefer anonymous peer reviews


Nowadays, I rarely sign my reviews.

In general, I think it’s best if reviews are anonymous. This is my opinion as an author, as a reviewer, and as an editor. What are my reasons? Anonymous reviews might promote better science, facilitate a more even playing field, and protect junior scientists.

The freedom to sign reviews without negative repercussions is a manifestation of privilege. The use of signed reviews promotes an environment in which some have more latitude than others. When a tenured professor such as myself signs reviews, especially those with negative recommendations, I’m exercising liberties that are not as available to a PhD candidate.

To explain this, here I describe and compare the potential negative repercussions of signed and unsigned reviews.

Unsigned reviews create the potential for harm to authors, though this harm may be evenly distributed among researchers. Arguably, unsigned reviews allow reviewers to be sloppy and get away with a less-than-complete evaluation, which will cause the reviewer to fall out of the good graces of the editor, but not those of the authors. Also, reviewer anonymity allows scientific competitors or enemies to write reviews that unfairly trash (or more strategically sabotage) the work of one another. Junior scientists may not have as much social capital to garner favorable reviews from friends in the business as senior researchers. But on the other hand, anonymous reviews can mask the favoritism that may happen during the review process, conferring an advantage to senior researchers with a larger professional network.

Signed reviews create the potential for harm to reviewers, and confer an advantage to influential authors. It would take a brave, and perhaps foolhardy, junior scientist to write a thorough review of a poor-quality paper coming from the lab of an established senior scientist. This could harm the odds of landing a postdoc, getting a grant funded, or getting a favorable external tenure evaluation. Meanwhile, senior scientists may have more latitude to be critical without fear of direct effects on the ability to bring home a monthly paycheck. Signed reviews might allow more influential scientists to experience a breezier peer review experience than unknown authors.

When the identity of reviewers is disclosed, that knowledge may enable novel game-theoretic strategies that further subvert the peer-review process. For example, I know there are some reviewers out there who seem to really love the stuff that I do, and there is at least one (and maybe more) who appear to have it in for me. It would only be rational for me to list the people who give me negative reviews as non-preferred reviewers, and those who gave positive reviews as recommended reviewers. If I knew who they were. If everybody knew who gave them more positive and more negative reviews, some people would make choices to help them exploit the system to garner more lightweight peer review. The removal of anonymity can open the door to corruption, including tit-for-tat review strategies. Such a dynamic in the system would further exacerbate the asymmetries between the less experienced and more experienced scientists.

The use of signed reviews won’t stop people from sabotaging other papers. However, signed reviews might allow more senior researchers to use their experience with the review system to exploit it in their favor. It takes experience receiving reviews, writing reviews, and handling manuscripts to anticipate how editors respond to reviews. Of course, let’s not undersell editors, most of whom I would guess are savvy people capable of putting reviews in social context.

I’ve heard a number of people say that signing their reviews forces them to write better reviews. This implies that some may use the veil of their identity to act less than honorably or at least not try as hard. (If you were to ask pseudonymous science bloggers, most would disagree.) While the content of the review might be substantially the same regardless of identity, a signed review might be polished with more varnish. I work hard to be polite and write a fair review regardless of whether I put my name on it. But I do admit that when I sign a review, I give it a triple-read to minimize the risk that something could be taken the wrong way (just as whenever I publish a post on this site). I wouldn’t intentionally say anything different when I sign, but it’s normal to take negative reviews personally, so I try to phrase things so that the negative feelings aren’t transferred to me as a person.

I haven’t always felt this way. About ten years ago, I consciously chose to sign all of my reviews, and I did this for a few years. I observed two side effects of this choice. The first one was a couple of instances of awkward interactions at conferences. The second was an uptick in the rate at which I was asked to review stuff. I think this is not merely a correlative relationship, because a bunch of the editors who were hitting me up for reviews were authors of papers that I had recently reviewed non-anonymously. (This was affirmation that I did a good job with my reviews, which was nice. But as we say, being a good reviewer and three bucks will get you a cup of coffee.)

Why did I give up signing reviews? Rejection rates for journals are high; most papers are rejected. Even though my reviews, on average, had similar recommendations as other reviewers, it was my name as reviewer that was connected to the rejection. My subfields are small, and if there’s someone who I’ve yet to meet, I don’t want my first introduction to be a review that results in a rejection.

Having a signed review is different than being the rejecting subject editor. As subject editor, I point to reviews to validate the decision, and I also have my well-reasoned editor-in-chief, who to his credit doesn’t follow subject editor recommendations in a pro forma fashion. The reviewer is the bad guy, not the editor. I don’t want to be identified as the bad guy unless it’s necessary. Even if my review is affirming, polite, and as professional as possible, if the paper is rejected, I’m the mechanism by which it’s rejected. My position at a teaching-focused institution places me on the margins of the research community, even if I am an active researcher. Why the heck would I put my name on something that, if taken the wrong way, could result in further marginalization?

When do I sign? There are two kinds of situations. First, some journals ask us to sign, and I will for high-acceptance-rate journals. Second, if I recommend changes involving citations to my own work, I sign. I don’t think I’ve ever said “cite my stuff” when uncited, but sometimes a paper cites me and follows up on something in my own work, and I step in to clarify. It would be disingenuous to hide my identity at that point.

The take home message on peer review is: The veil of anonymity in peer review unfairly confers advantages to influential researchers, but the removal of that veil creates a new set of more pernicious effects for less influential researchers.

Thanks to Dezene Huber whose remark prompted me to elevate this post from the queue of unwritten posts.

Retraction of a previous post about pseudojournals


On 09 April 2013, I published a post entitled, “Keeping tabs on pseudojournals.”

I just modified that post to indicate a retraction, with the following text:

Since I published this post, I’ve been made aware of an alternative agenda in Jeffrey Beall’s crusade against predatory publishers. His real crusade is, apparently, against Open Access publishing. This agenda is clearly indicated in his own words in an open access publication entitled, “The Open-Access Movement is Not Really about Open Access.” More information about Beall’s agenda can be found here. I am not removing this post from the site, but I am disavowing its contents as positive coverage of the work of Beall may undermine the long-term goal of allowing all scientists, and the public, to access peer-reviewed publications as easily and inexpensively as possible.

Months ago, I saw Beall’s paper, which tried to equate open-access publishing with poor-quality scholarship. This makes no sense whatsoever, because many open access journals have rigorous peer review. (For example, I posted the reviews from a recent-ish PLOS ONE paper of mine. No doubts about that rigor.) The suggestion that an open access publishing model is tantamount to predatory publication is not only absurd, but also is intellectually dishonest. I could only imagine that this position is either a result of incredibly feeble reasoning, or is politically motivated to help publishers maintain their oligopoly over the academic publishing industry.

Regardless of the reasons, Beall’s crusade against the open access to academic research is folly and I don’t want to be associated with support for his work. Now, academia needs a strong, rational and transparent voice to combat genuine predatory publishers that lack rigorous peer review and are guilty of academic payola. It seems Jeffrey Beall doesn’t fit that bill.

I own my data, until I don’t.


Science is in the middle of a range war, or perhaps a skirmish.

Ten years ago, I saw a mighty good western called Open Range. Based on the ads, I thought it was just another Kevin Costner vehicle. But Duncan Shepherd, the notoriously stingy movie critic, gave it three stars. I not only went, but also talked my spouse into joining me. (Though she needs to take my word for it, because she doesn’t recall the event whatsoever.)

The central conflict in Open Range is between fatcat establishment cattle ranchers and a band of noble itinerant free grazers. The free grazers roam the countryside with their cows in tow, chewing up the prairie wherever they choose to meander. In the time the movie was set, the free grazers were approaching extirpation as the western US was becoming more and more subdivided into fenced parcels. (That’s why they filmed it in Alberta.) To learn more about this, you could swing by the Barbed Wire Museum.

The ranchers didn’t take kindly to the free grazers using their land. The free grazers thought, well, that free grazing has been a well-established practice and that grass out in the open should be free.

If you’ve ever passed through the middle of the United States, you’d quickly realize that the free grazers lost the range wars.

On the prairie, what constitutes community property? If you’re on loosely regulated public land administered by the Bureau of Land Management, you can use that land for many purposes, but for certain uses (such as grazing), you need to lease it from the government. You can’t feed your cow for free, nowadays. That community property argument was settled long ago.

Now to the contemporary range wars in science: What constitutes community property in the scientific endeavor?

In recent years, technological tools have evolved such that scientists can readily share raw datasets with anybody who has an internet connection. There are some who argue that all raw data used to construct a scientific paper should become community property. Some have the extreme position that as soon as a datum is collected, regardless of the circumstances, it should become public knowledge as promptly as it is recorded. At the other extreme, some others think that data are the property of the scientists who created them, and that the publication of a scientific paper doesn’t necessarily require dissemination of raw data.

Like in most matters, the opinions of most scientists probably lie somewhere between the two poles.

The status quo, for the moment, is that most scientists do not openly disseminate their raw data. In my field, most new papers that I encounter are not accompanied with fully downloadable raw datasets. However, some funding agencies are requiring the availability of raw data. There are a few journals of which I am aware that require all authors to archive data upon publication, and there are many that support but do not require archiving.

The access to other people’s data, without the need to interact with the creators of the data, is increasing in prevalence. As the situation evolves, folks on both sides are getting upset at the rate of change – either it’s too slow, or too quick, or in the wrong direction.

Regardless of the trajectory of “open science,” the fact remains that, at the moment, we are conducting research in a culture of data ownership. With some notable exceptions, the default expectation is that when data are collected, the scientist is not necessarily obligated to make these data available to others.

Even after a paper is published, there is no broadly accepted community standard that the data that resulted in the paper become public information. On what grounds do I assert this? Well, last year I had three papers come out, all of which are in reputable journals (Biotropica, Naturwissenschaften, and Oikos, if you’re curious). In the process of publishing these papers, nobody ever even hinted that I could or should share the data that I used to write these papers. This is pretty good evidence that publishing data is not yet standard practice, though things are slowly moving in that direction. As evidence, I just got an email from Oikos as a recent author asking me to fill out a survey to let them know how I feel about data archiving policies for the journal.

As far as the world is concerned, I still own the data from those three papers published last year. If you ask me for the data, I’d be glad to share them with you after a bit of conversation, but for the moment, for most journals it seems to be my choice. I don’t think any of those three journals have a policy indicating that I need to share my dataset with the public. I imagine this could change in the near future.

I was chatting with a collaborator a couple weeks ago (working on “paper i”) and we were trying to decide where we should send the paper. We talked about PLOS ONE. I’ve sent one paper to this journal, actually one of my best papers. Then I heard about a new policy of the journal to require public archiving of datasets from all papers published in the journal.

All of sudden, I’m less excited about submitting to this journal. I’m not the only one to feel this way, you know.

Why am I sour on required data archiving? Well, for starters, it is more work for me. We did the field and lab work for this paper during 2007-2009. This is a side project for everybody involved and it’s taken a long time to get the activation energy to get this paper written, even if the results are super-cool.

Is that my fault that it’ll take more work to share the data? Sure, it’s my fault. I could have put more effort into data management from the outset. But I didn’t, as it would have been more effort, and kept me from doing as much science as I have done. It comes with temporal overhead. Much of the data were generated by an undergraduate researcher, a solid scientist with decent data management practices. But I was working with multiple undergraduates in the field in that period of time, and we were getting a lot done. I have no doubts in the validity of the science we are writing up, but I am entirely unthrilled about cleaning up the dataset and adding the details into the metadata for the uninitiated. And, our data are a combination of behavioral bioassays, GC-MS results from a collaborator, all kinds of ecological field measurements, weather over a period of months, and so on. To get these numbers into a downloadable and understandable condition would be, frankly, an annoying pain in the ass. And anybody working on these questions wouldn’t want the raw data anyway, and there’s no way these particular data would be useful in anybody’s meta-analysis. It’d be a huge waste of my time.

Considering the time it takes me to get papers written, I think it’s cute that some people promoting data archiving have suggested a 1-year embargo after publication. (I realize that this is a standard timeframe for GenBank embargoes.) The implication is that within that one year, I should be able to use that dataset for all it’s worth before I share it with others. We may very well want to use these data to build a new project, and if I do, then it probably would be at least a year before we head back to the rainforest again to get that project done. At least with the pace of work in my lab, an embargo for less than five years would be useless to me.

Sometimes, I have more than one paper in mind when I am running a particular experiment. More often, when writing a paper, I discover the need to write a different one involving the same dataset. (Shhh. Don’t tell Jeremy Fox that I do this.) I do research at a teaching institution, and things often happen at a slower pace than at the research institutions which are home to most “open science” advocates. Believe it or not, there are some key results from a 15-year-old dataset that I am planning to write up in the next few years, whenever I have the chance to take a sabbatical. This dataset has already been featured in some other papers.

One of the standard arguments for publishing raw datasets is that the lack of full data sharing slows down the progress of science. It is true that, in the short term, more and better papers might be published if all datasets were freely downloadable. However, in the long term, would everybody be generating as much data as they are now? Speaking only for myself, if I realized that publishing a paper would require the sharing of all of the raw data that went into that paper, then I would be reluctant to collect large and high-risk datasets, because I wouldn’t be sure to get as large a payoff from that dataset once the data are accessible.

Science is hard. Doing science inside a teaching institution is even harder. I am prone to isolation from the research community because of where I work. What would be the effect of making all of my raw data available to others online, without any communication? I could either become more integrated with my peers, or more isolated from them. If I knew that making my data freely downloadable would increase interactions with others, I'd do it in a heartbeat. But when my papers get downloaded and cited, I'm usually oblivious to the fact until the citing paper comes out. I can only imagine that the same thing would happen with raw data, though the rates of download would be lower.

In the prevailing culture, when data are shared along with some other substantial contribution, that's standard grounds for authorship. While most guidelines indicate that providing data to a collaborator is not supposed to be grounds for authorship, in current practice it is. One can argue that this isn't fair or right, but that is what happens. Plenty of journals require specification of individual author contributions and require that all authors had a substantial role beyond data contribution. However, this does not prevent people who provide data from becoming authors.

In the culture of data ownership, the people who want to write papers using data in the hands of other scientists need to come to an agreement to gain access to these data. That agreement usually involves authorship. Researchers who create interesting and useful data – and data that are difficult to collect – can use those data as a bargaining chip for authorship. This might not be proper or right, and this might not fit the guidelines that are published by journals, but this is actually what happens.

This system is the one that "open science" advocates want to change. There are some databases with massive amounts of ecological and genomic data that other people can use, and some people can go a long time without collecting their own data and just use the data of others. I'm fine with that. I'm also fine with not throwing my data in to the mix.

My data are hard-won, and the manuscripts are harder-won. I want to be sure that I have the fullest opportunity to use my data before anybody else has the opportunity. In today’s marketplace of science, having a dataset cited in a publication isn’t much credit at all. Not in the eyes of search committees, or my Dean, or the bulk of the research community. The discussion about the publication of raw data often avoids tacit facts about authorship and the culture of data ownership.

To be able to collect data and do science, I need grant money.

To get grant money, I need to give the appearance of scientific productivity.

To show scientific productivity, I need to publish a bunch of papers.

To publish a bunch of papers, I need to leverage my expertise to build collaborations.

To leverage my expertise to build collaborations, I need to have something of quality to offer.

To have something of quality to offer, I need to control access to the data that I have collected. I don’t want that to stop after publication.

The above model of scientific productivity is part of the culture of data ownership, in which I have developed my career at a teaching institution. I'm used to working amicably and collaboratively, and the level of territoriality in my subfields is quite low. I've read the arguments, but I don't see how providing my data with no strings attached would somehow build more collaborations for me, and I don't see how it would give me any assistance in the currency that matters. I am sure that "open science" advocates are wholly convinced that putting my data online would increase, rather than constrict, opportunities for me. I am not convinced yet, though I'm open to being convinced. I think what will convince me is seeing a change in the prevailing culture.

There is one absurdity to these concerns of mine, which I'm sure critics will have fun highlighting: I doubt many people would be downloading my data en masse. But it's not that outlandish, and people have published papers following up on my own work after communicating with me. I work at a field site where many other people work; a new paper comes out from this place every few days. I am already pooling data with others for collaborations. I'd like to think that people want to work with me because of what I can bring to the table other than my data, but I'm not keen on testing that working hypothesis.

Simply put, in today’s scientific rewards system, data are a currency. Advocates of sharing raw data may argue that public archiving is like an investment with this currency that will yield greater interest than a private investment. The factors that shape whether the yield is greater in a public or private investment of the currency of data are complicated. It would be overly simplistic to assert that I have nothing to lose and everything to gain by sharing my raw data without any strings attached.

While good things come to those who are generous, I also have relatively little to give, and I might not be doing myself or science a service if I go bankrupt. Anybody who has worked with me will report (I hope) that I am inclusive and giving with what I have to offer. I've often emailed datasets without people even asking for them, without any restrictions or provisions. I want my data to be used widely. But even more, I want to be involved when that happens.

Because I run a small operation in a teaching institution, my research program experiences a set of structural disadvantages compared to those of colleagues at an R1 institution. A requirement to share data imposes a disproportionate disadvantage on researchers like myself, and on others with too little funding to rapidly capitalize on the creation of quality data.

To grow a scientific paper, many ingredients are required. As grass grows the cow, data grows a scientific paper.

In Open Range, the resource in dispute is not the grass, but the cows. The bad-guy ranchers aren't upset about losing the grass; they just don't want these interlopers on their land. It's a matter of control and territoriality. At the moment, the status quo is that we run our own labs, and the data growing in these labs are also our property.

When people don't want to release their data, it's not the data themselves they care about. They care about the papers that could result from these data. I don't care if people have the numbers that I collect. What I care about is that these numbers are scientifically useful, and I wish to get scientific credit for that usefulness. Once the data are public, there is scant credit for that work.

It takes plenty of time and effort to generate data. In my case, lots of sweat, and occasionally some venom and blood, is required to generate data. I also spend several weeks per year away from my family, which any parent can relate to. Many of the students who work with me have also made tremendous personal investments in the work. Generating data in my lab often comes at great personal expense. Right now, if we publicly archived data that were used in the creation of a new paper, we would not get appropriate credit in a currency of value in the academic marketplace.

When a pharmaceutical company develops a new drug, the structure of the drug is published. But the company has a twenty-year patent and five years of exclusivity. It's widely claimed – and believed – that without the potential to recoup the costs of developing medicines, pharmaceutical companies wouldn't jump through all the regulatory hoops to get new drugs on the market. The patent provides the incentive for drug production. Some organizations might make drugs out of the goodness of their hearts, but the free market is driven by dollars. An equivalent argument could be made for scientists wishing for a very long time window to reap the rewards of producing their own data.

In the United States, most meat that people consume doesn't come from grass on the prairie, but from corn grown in an industrial agricultural setting. Likewise, most scientific papers that get published come from corn-fed data produced by laboratory machines designed to crank out a high volume of papers. Cattle operations stay in business by feeding a lot of corn and maximizing the amount of cow tissue that can be grown with that corn. Scientists stay in business by cranking out lots of data and maximizing how many papers can be generated from those data.

Doing research in a small pond, my laboratory is ill equipped to compete with the massive corn-fed laboratories producing many head of cattle. Last year was a good year for me, and I had three papers. That's never going to compete with labs at research institutions — including the ones advocating for strings-free access to everybody's data.

The movement towards public data archiving is essentially pushing for the deprivatization of information. It’s the conversion of a private resource into a community resource. I’m not saying this is bad, but I am pointing out this is a big change. The change is biggest for small labs, in which each datum takes a relatively greater effort to produce, and even more effort to bring to publication.

So far, what I’ve written is predicated on the notion that researchers (or their employers) actually have ownership of the data that they create. So, who actually owns data? The answer to that question isn’t simple. It depends on who collected it, who funded the collection of the data, and where the data were published.

If I collect data on my own dime, then I own these data. If my data were collected under the funding support of an agency (or a branch of an agency) that doesn’t require the public sharing of the raw data, then I still own these data. If my data are published in a journal that doesn’t require the publication of raw data, I still own these data.

It's fully within the charge of NIH, NSF, DOE, USDA, EPA and everyone else to require the open sharing of data collected under their support. However, federal funding doesn't necessarily entail public ownership (see this comment on Erin McKiernan's blog for more on that). If my funding agency, or some federal regulation, requires that my raw data be available for free download, then I no longer own these data. The same is true if a journal has such a requirement. Also, if I choose to give away my data, then I no longer own them.

So, who is in a position to tell me when I need to make my data public? My program officer, or my editor.

If you wish, you can make it your business by lobbying the editors of journals to change their practices, and you can lobby your lawmakers and federal agencies for them to require and enforce the publication of raw datasets.

I think it’s great when people choose to share data. I won’t argue with the community-level benefits, though the magnitude of these benefits to the community vary with the type of data. In my particular situation, when I weigh the scant benefit to the community relative to the greater cost (and potential losses) to my research program, the decision to stay the course is mighty sensible.

There are some well-reasoned folks who want to increase the publication of raw datasets and who understand my concerns. If you don't think you understand my concerns, you really need to read this paper, which makes four recommendations for the scientific community at large, all of which I love:

  1. Facilitate more flexible embargoes on archived data
  2. Encourage communication between data generators and re-users
  3. Disclose data re-use ethics
  4. Encourage increased recognition of publicly archived data

(It’s funny, in this paper they refer to the publication of raw data as “PDA” (public data archiving), but at least here in the States, that acronym means something else.)

And they're right: those things will need to happen before I consider publishing raw data voluntarily. Those are the exact items that I brought up as my own concerns in this post. The embargo period would need to be far longer, I'd want some reassurance that the people using my data will actually contact me about it, and if the data get re-used, I'd want a genuine opportunity for collaboration as long as my data are a big enough piece of the project. And, of course, if I don't collaborate, then the form of credit in the scientific community will need to be greater than what happens now, which is just getting cited.

The Open Data Institute says that "If you are publishing open data, you are usually doing so because you want people to reuse it." And I'd love for that to happen. But I wouldn't want it to happen without me, because in my particular niche in the research community, the chance to work with other scientists is particularly valuable. I'd prefer that my data be reused less often than more often, as long as that restriction gave me more chances to work directly with others.

Scientists at teaching institutions have a hard time earning respect as researchers (see this post and read the comments for more on that topic). By sharing my data, I realize that I can engender more respect. But I also open myself up to being used. When my data are important to others, my colleagues contact me. If anybody feels that contacting me isn't necessary, then, apparently, my data are not that necessary.

Is public data archiving here to stay, or is it a passing fad? That is not entirely clear.

There is a vocal minority that has done a lot to promote the free flow of raw data, but most practicing scientists are not on board this train. I would guess that the movement will grow into an establishment practice, but science is an odd mix of the revolutionary and the conservative. Since public data archiving takes extra time and effort, and publishing already takes a lot of work, the only way it will catch on is if it is required. If a particular journal or agency wants me to share my data, then I will do so. But I'm not yet convinced that it is in my interest.

I hope that, in the future, I’ll be able to write a post in which I’m explaining why it’s in my interest to publish my raw data.

The day may come when I provide all of my data for free downloads, but that day is not today.

I am not picking up a gun in this range war. I’ll just keep grazing my little herd of cows in a large fragment of rainforest in Sarapiquí, Costa Rica until this war gets settled. In the meantime, if you have a project in mind involving some work I’ve done, please drop me a line. I’m always looking for engaged collaborators.

Novels, science, and novel science


I was chatting with a friend in a monthly book group. A rare event happened this month: everybody in the group really liked the book. It turns out that most of the books they read are not well liked by the group. How does that happen? Well, this is a discriminating group, and there are a lot of books on the market; many books aren't that good.

We speculated about why so many non-good books are sold by publishers. The answer is embedded within the question: those books sell.

Let me overgeneralize and claim that there are two kinds of novels. First, there are those that were brought into the world because the author had a creative vision and really wanted to write the book. Second, there are novels that are written with the purpose of selling and making money. Of course, some visionary works of art also sell well, but many bestselling books aren't great works of art. (No Venn diagram should be required.) Some amazingly great novels don't sell well, and weren't created to be sold easily in the marketplace.

Most novels were never intended for greatness. The authors and the publishers know this, but have designed them to be enjoyed and to have the potential to sell well. When someone is shopping for a certain kind of book, then they'll be able to buy that kind of book. Need a zombie farce? A spy thriller? A late-20s light-hearted romance? I have no problem with people writing and selling books that aren't great. Books can be a commodity to be manufactured and sold, just like sandwiches or clothing. A book that is designed to sell fits easily into a predetermined category, and then does its best to conform to the expectations of that category, to deliver to the consumer what was expected.

I think a similar phenomenon happens when we do experiments and write scientific papers.

First, some research happens because the investigators are passionately interested in the science and have a deeply pressing creative urge to solve problems and learn new things.

On the other hand, some research is designed to be sold on the scientific marketplace.

To advance in our careers, we need to sell our science well. The best way to do this, arguably, is to not aspire to do great science. We can sell science by taking the well-trod path on the theoretical bandwagon, instead of blazing our own paths.

If you want a guarantee that your science will sell well, you need to build your research around questions and theories that are hot at a given moment. If you do a good set of experiments on a trendy topic, then you should be able to position your paper in a well-regarded journal. If you do this a dozen times, then your scientific career is well on its way.

On the other hand, you could choose a topic that you are passionately interested in. You might think that this is an important set of questions that has the potential to be groundbreaking, but you don’t know if other people will feel the same way. You might be choosing to produce research that doesn’t test a theory-of-the-moment, but you think will be of long-term use to researchers in the field for many years to come. However, these kinds of papers might not sell well to high-profile journals.

Just like a novelist attempting to write a great novel instead of one that will sell well, if you are truly attempting to do great science, there is no guarantee that your science will sell. Just like there are all kinds of would-be-great novelists, there are some would-be-great scientists who are not pursuing established theories but are going off in more unexplored directions.

Of course, some science created for the marketplace is great science, too. But the secrets to creating science that sells are very different from the secrets to doing great science.

After all, most papers in Science and Nature are easily forgettable, just like the paperbacks for sale at your local chain bookstore.

Update: For the record, y’all, I’m not claiming that I am above doing science to be sold. That’s mostly what I do. I’m just owning that fact. There’s more on this in the comments.

Natural history is important, but not perceived as an academic job skill


This post is a reflection on a thoughtful post by Jeremy Fox, over on Dynamic Ecology. It encouraged me (and a lot of others, as you see in the comments) to think critically about the laments about the supposed decline of natural history.

I aim to contextualize the core notion of that post. This isn’t a quote, but here in my own words is the gestalt lesson that I took away:

We don’t need to fuss about the decline of natural history, because maybe it’s not even on the decline. Maybe it’s not actually undervalued. Maybe it really is a big part of contemporary ecology after all.

Boy howdy, do I agree with that. And also disagree with that. It depends on what we mean by “value” and “big part.” I think the conversation gets a lot simpler once we agree about the fundamental relationship between natural history and ecology. As the operational definition of the relationship used in the Dynamic Ecology post isn’t workable, I’ll posit a different one.

As a disclaimer, let me explain that I’m not an expert natural historian. Anybody who has been in the field with me is woefully aware of this fact. I know my own critters, but I’m merely okay when it comes to flora and fauna overall. I have been called an entomologist, but if you show me a beetle, there’s a nonzero probability that I won’t be able to tell you its family. There are plenty of birds in my own backyard that I can’t name. Now, with that out of the way:

Let’s make no mistake: natural history is, truly, on the decline. The general public knows less, and cares less, about nature than a few decades ago. Kids are spending more time indoors and are less prone to watch, collect, handle, and learn about plants and creatures. Literacy about nature and biodiversity has declined in concert with a broader decline in scientific literacy in the United States. This is a complex phenomenon, but it’s clear that the youth of today’s America are less engaged in natural history than yesterday’s America.

On the other hand, people love and appreciate natural history as much as they always have. Kids go nuts for any kind of live insect put in front of them, especially when it was just found in their own play area. Adults devour crappy nature documentaries, too. There’s no doubt that people are interested in natural history. They’re just not engaged in it. Just because people like it doesn’t mean that they are doing it or are well informed. That’s enough about natural history and public engagement, now let’s focus on ecologists.

I honestly don’t know if interest in natural history has waned among ecologists. I don’t have enough information to speculate. But this point is moot, because the personal interests of ecologists don’t necessarily have a great bearing on what they publish, and how students are trained.

Natural history is the foundation of ecology. Natural history is the set of facts upon which ecology builds. Ecology is the search to find mechanisms driving the patterns that we observe with natural history. Without natural history, there is no such thing as ecology, just as there is no such thing as a spoken language without words. In the same vein, I once made the following analogy: natural history : ecology :: taxonomy : evolution. The study of evolution depends on a reliable understanding of what our species are on the planet, and how they are related to one another. You really can’t study the evolution of any real-world organism in earnest without having reliable alpha taxonomy. Natural history is important to ecologists in the same way that alpha taxonomy is for evolutionary biologists.

Just as research on evolution in real organisms requires a real understanding of their taxonomy and phylogeny, research in real-world ecology requires a real-world understanding of natural history. (Some taxonomists are often as dejected as advocates for natural history: taxonomy is on the decline. There is so much unclassified and misclassified biodiversity, but there's little funding and even fewer jobs to do the required work. If we are going to make progress in the field of evolutionary biology, then we need to have detailed reconstructions of evolutionary history as a foundation.)

Of course natural history isn't dead, because if it were, then ecology would not exist. We'd have no facts upon which to base any theories. Natural history isn't in conflict with ecology, because natural history is the fundamental operational unit of ecology. Pieces of natural history are the individual LEGO bricks that ecologists use to build LEGO models.

The germane question is not to ask if natural history is alive or dead. The question is: Is natural history being used to its full potential? Is it valued not just as a product, but as an inherent part of the process of doing ecological research?

LEGO Master Builders know every single individual building element that the company makes. When they are charged with designing a new model, they understand the natural history of LEGO so well that their model is the best model it can be. Likewise, ecologists that know the most about nature are the ones that can build models that best describe how nature works. An ecologist that doesn’t know the pieces that make up nature will have a model that doesn’t look like what it is supposed to represent.

Yes, the best ecological model is the one that is the most parsimonious: an overly complex model is not generalizable. You don't need to know the natural history of every organism to identify underlying patterns and mechanisms in nature. However, a familiarity with nature sufficient to know what can, and what cannot, be generalized is central to doing good ecology. And that ability is directly tied to knowing nature itself. You can't think about how generalizable a model is without having an understanding of the organisms and systems to which the model could potentially apply.

I made an observation a few months back, that graduate school is no longer designed to train excellent scientists, but instead is built to train students how to publish papers. That was a little simplistic, of course. Let me refine that a bit with this Venn diagram: 


What's driving the push to train grad students how to publish? It doesn't take a rocket scientist to see the evolutionary arms race for the limited number of academic positions. A record of multiple fancy publications is typically required to get what most graduate advisors regard as a "good" academic job. If you don't have those pubs and you want an academic job, it's all for naught. So graduate programs succeed when students emerge as their own miniature publication factories.

In terms of career success, it doesn’t really matter what’s in the papers. What matters is the selectivity of the journal that publishes those papers, and how many of them exist. It’s telling that many job search committees ask for a CV, but not for reprints. What matters isn’t what you’ve published, but how much you have and where you’ve published.

So it only makes sense that natural history gets pushed to the side in graduate school. Developing natural history talent is time-intensive, involving long hours in the field and lots of reading in a broad variety of subjects. Foremost, becoming a talented natural historian requires a deliberate focus on information outside your study system. A natural historian knows a lot of stuff about a lot of things. I can tell you a lot about the natural history of litter-nesting ants in the rainforest, but that alone doesn't qualify me as a natural historian. Becoming a natural historian requires deliberately learning about things that are, at first appearance, merely incidental to the topic of one's dissertation.

Ecology graduate students have many skills to learn, and lots to get done very quickly, if they are to be prepared to fend for themselves upon graduation. Who has time for natural history? It's obvious that ecology grad students love natural history. It's often the main motivator for going to grad school in the first place. And it's just as obvious that many grad students feel a deep need to finish their dissertations with ripe and juicy CVs, and feel that they can't pause to learn natural history. This is only natural given the structure of the job environment.

Last month I had a bunch of interactions that helped me consider the role of natural history in the profession of ecology. These happened while I was fortunate enough to serve as guest faculty on a graduate field course in tropical biology. This "Fundamentals Course," run by the Organization for Tropical Studies at many sites in Costa Rica, has been a historic breeding ground for pioneering ecologists. Graduate students apply for slots in the course, which is a traveling road show through many biomes.

I was a grad student on the course, um, almost 20 years ago. I spent a lot of my time playing around with ants, but I also learned about all kinds of plant families, birds, herps, bats, non-ant insects, and a full mess of field methods. And soils, too. I was introduced to many classic coevolved systems, I learned how orchid bees respond to baits, how to mistnet, and I saw firsthand just how idiosyncratic leafcutter ants are in food selection. I came upon a sloth in the middle of its regular, but infrequent, pooping session at the base of a tree. I saw massive flocks of scarlet macaws, and how frog distress calls can bring in the predators of their predators. I also learned a ton about experimental design by running so many experiments with a bunch of brilliant colleagues and mentors, and a lot about communicating by presenting and writing. And I was introduced to new approaches to statistics. And that's just the start of the stuff I learned.

I essentially spent a whole summer of grad school on this course. Clearly, it was a transformative experience for me, because now I’m a tropical biologist and nearly all of my work happens at one of the sites that we visited on the course. Not everybody on the course became a tropical biologist, but it’s impossible to avoid learning a ton about nature if you take the course.

The course isn't that different nowadays. One of the more noticeable changes, however, is that fewer grad students are interested, or available, to take the course. I talked to a number of PhD students who wanted to take the course but whose advisors steered them away from it because it would take valuable time away from the dissertation. I also talked to an equivalent number of PhD students who really wanted a broad introduction to tropical ecology but steered themselves away from it, to make sure that they had at least a few papers out before graduating.

In the past, students would be encouraged to take the course as a part of their training to become an excellent ecologist. Now, students are being dissuaded because it would get in the way of their training to become a successful ecologist.

There was one clear change in the curriculum this year: natural history is no longer included. This wasn’t a surprise, because even though students love natural history, this is no longer an effective draw for the course. When I asked the coordinator why natural history was dropped from the Fundamentals Course, the answer I got had even less varnish than I expected: “Because natural history doesn’t help students get jobs.” And if it doesn’t help them get a job, then they can’t spend too much time doing it in grad school.

Of course we need to prepare grad students for the broad variety of paths they may choose. However, does this mean that something should be pulled from the curriculum because it doesn't provide a specific transferable job skill? Is the entire purpose of earning a Ph.D. to arm our students for the job market? Is there any room for doing things that make better scientists but are not necessarily valued on the job market?

Are we creating doctors of philosophy, or are we creating highly specialized publication machines?

There are some grad students (and graduate advisors) who are bucking the trend, and are not shying away from the kind of long-term field experiences that used to be the staple of ecological dissertations. One such person is Kelsey Reider, who among other things is working on frogs that develop in melting Andean glaciers. By no means is she tanking her career by spending years in the field doing research and learning the natural history of her system. She will emerge from the experience as an even more talented natural historian who, I believe, will have better context and understanding for applying ecological theory to the natural world. Ecology is about patterns, processes, and mechanisms in the natural world, right?

Considering that "natural history" is only used as an epithet during the manuscript review process, is natural history valued by the scientific community at all? Most definitely it is! But keep in mind that this value doesn't matter when it comes to academic employment, funding, high-impact journals, career advancement, or graduate training.

People really like and appreciate experts in natural history. Unfortunately, that value isn’t in the currency that is important to the career of an ecologist. And it’d be silly to focus away from your career while you’re in grad school.

But, as Jeremy pointed out in his piece, many of the brilliant ecologists who he knows are also superb natural historians. I suggest that this is not mere coincidence. Perhaps graduate advisors can best serve their students by making sure that their graduate careers include the opportunity for serious training in natural history. It is unwise to focus exclusively on the production of a mountain of pubs that can be sold to high-impact journals.

We should focus on producing the most brilliant, innovative, and broad-minded ecologists, who also publish well. I humbly suggest that this entails a high degree of competency in natural history.

The rejection that wasn’t


I remember when I got the reviews back from the first big paper that I submitted. I was mad to have to deal with a rejection after such petty reviews.

Then I showed the editor’s letter to my advisor. He said, “Congratulations!” It turns out it was not a rejection, but a minor revision. Who would have thought that a request for a minor revision would have had the word “reject” in the decision letter?

I think editors are clearer about their decisions nowadays. That incident was a while ago. That was an actual letter, which arrived via postal air mail from another continent.

More recently, in 2007, I got another rejection I found annoying. I inadvertently unburied the decision letter last week, when I was forced to clean up my lab before the holidays (because work crews need all surfaces clear for work being done in the building). Here’s what the letter said:

Enclosed is your manuscript entitled “Moderately obscure stuff about ants” and the reviews. Based upon these reviews, in its present form the manuscript is not accepted for publication in the Journal of Moderately Obscure Stuff.

Significant work/re-write will be needed before the manuscript can be resubmitted.

The reviews were not bunk, but they were simply prescriptive and didn’t require massive changes. I realize now, years later, that here is another rejection that wasn’t a rejection! I was fooled again! This was a pretty straightforward “major revision.” This paper is still sitting on my hard drive, unpublished, and down low in the queue. I just forgot about it because I was occupied with stuff that was more interesting at the time. The coauthor on the paper, who was a postdoc at the time, now has tenure. So there’s no rush to get this paper out to enhance his career.

The moral of the story for authors is: If you’re not an old hand at reading decisions from editors, be sure to have senior colleagues read them and interpret them. When in doubt about what you need to do for a revision, it’s okay to ask the editor.

The moral of the story for editors is: We need to be careful to construct decisions so that there is no doubt that less experienced authors will be able to understand if a revision is welcome, and if so, what needs to be done to make the revision acceptable.

Journal prestige and publishing from a teaching institution


Finally. There are journals that publish quality peer-reviewed research but leave it to the reader to decide whether a paper is sexy or important. Shouldn’t this be better than letting a few editors and reviewers reject work based on whether they personally think that a paper is important or significant?

This newish type of journal uses editors and reviewers to assure quality and accuracy. The biggie is PLoS ONE. A newer one on the block is PeerJ. Another one asked me to shill for them on this site.

The last few years have seen a relatively quick shift in scientific publishing models, and there has been a great upheaval in journals in which some new ones have become relatively prestigious (e.g., Ecology Letters) and some well-established journals have experienced a decline in relative rank (e.g., American Journal of Botany). These hierarchies have a great effect on researchers publishing from small ponds.

Publishing in selective journals is required to establish legitimacy. This is true for everybody. Because researchers in small teaching institutions are inherently legitimacy-challenged, this is the population that most heavily relies on this mechanism of legitimacy.

Researchers in teaching institutions don’t have a mountain of time for research. Just think about all of the time that could be spent on genuine research, instead of time wasted in the mill of salesmanship that is required to publish in selective journals. (I also find that pitching research as a theory-of-the-moment to be one of the most annoying parts of the business.)

With new journals that verify quality but not sexiness, we can opt out of the salesmanship game and just get stuff published. Sounds great, right?

After all, the research that takes place at teaching institutions can be of high quality and significant within our fields. But, on average, we just don’t publish as much. That makes sense because our employers expect us to focus on teaching above all else.

Since we’re less productive, every paper counts. We want to get our research out there, but we also need to make sure that every paper represents us well. What we lack in quantity, we need to make up for in (perceived) quality.

How do people assess research quality? The standard measure is the selectivity of the journal that publishes the paper. It’s natural to think that a paper in American Naturalist (impact factor 4.7) is going to be higher quality than American Midland Naturalist (impact factor 0.6).

People make these judgments all the time. It might not be fair, but it’s normal.

And no matter how dumb people say it might be, no matter how many examples are brought up, assessments of ‘journal quality’ aren’t going away. No matter how much altmetrics picks up as another highly flawed measure of research quality, the name of the journal that publishes a paper really matters. That isn’t changing anytime soon.

The effect of a paper on the research community is tied to the prestige of the venue, as well as the prestige of the authors. Fame matters. If any researcher – including those of us at teaching institutions – wants to build an influential research program, we’ve got to build up a personal reputation for high quality research.

Building a reputation for high quality research is not easy at all, but it’s even harder while based at a teaching institution. Just like having a paper in a prestigious journal is supposed to be an indicator of quality research, a faculty position at a well-known research institution is supposed to be an indicator of a quality researcher. Since our institutional affiliations aren’t contributing to our research prestige, we need to make the most of the circumstances to establish the credibility and status of the work that comes out of our labs.

If journal hierarchies didn’t exist, it would be really hard for researchers in lesser-known institutions, who may not publish frequently, to readily convince others that their work is of high quality. Good work doesn’t get cited just because it’s good. It needs to be read first. And work in non-prestigious journals may simply go unread if the author isn’t already well known.

If journal hierarchies somehow faded, it’s not as if the perception of research quality would evolve into some perfect meritocracy. There are lots of conscious and unconscious biases, aside from quality, that affect whether or not work gets into a fancy-pants journal, but it is true that people without a fancy-pants background can still publish in elite venues based on the quality of their work. This means that people without an elite background can gain a high profile based on merit, though they do need to persevere through the biases working against them.

If journals themselves merely published work but without any prestige associated with them, then it would be even more difficult for people without well-connected networks to have their work read and cited. It wouldn’t democratize access to science; it would inherently favor the scientists with great connections. At least now, the decisions of a small number of editors and reviewers can put science from an obscure venue into a position where a large audience will see it. On the other hand, publishing in a journal without any prestige, like PLoS ONE, will allow work to be available to a global audience, but actually read by very few.

If I want my work to be read by ecologists, then publishing it in a perfectly good journal like Oikos will garner me more readers than if I publish it in PLoS ONE. Moreover, people will look at the Oikos paper and realize that at some point in its life, there was a set of reviewers and an editor who agreed that the paper was not only of high quality but also interesting or sexy enough to be accepted. It wasn’t just done well, but it’s also useful or important to the field. That can’t necessarily be said of all PLoS ONE papers.

Not that long ago, I thought that these journals lacking the exclusivity factor were a great thing because they allowed everybody equal access to research. What changed my mind? The paper that I chose to place in PLoS ONE. I put a paper that I was really excited about in this journal. It was a really neat discovery, and should lead to a whole new line of inquiry. (Also, the editorial experience was great, the reviewers were very exacting but even-handed, and the handling editor was top notch.)

Since that paper came out just over a year ago, there have been a number of new papers on this or a closely related topic. But my paper has not been cited yet, even though it really should have been. Meanwhile, they’re citing my older, far less interesting and useful, paper on the same topic from 2002.

Why has nobody cited the more recent paper? Either people think that it’s not relevant, not high enough quality, or they never found it. (Heck, the blog post about it has been seen more times than the paper itself.) Maybe people found it and then didn’t read it because of the journal. It’s really a goddamn great paper. And it’s getting ignored because I put it in PLoS ONE. I have very little doubt that if I had chosen to put it in a specialized venue like Insectes Sociaux or Myrmecological News, both good journals that are read by social insect biologists, it would be read more heavily and would have been cited at least a few times. This paper could have been in an even higher profile journal, because it’s so frickin’ awesome, but I chose to put it in PLoS ONE. Oh well, I’ve learned my lesson. There are some papers in that venue that get very highly cited, but I think most things in there just get lost.

I would love for people to judge a paper based on the quality of its content rather than the name of the journal. But most people don’t do this. And I’m not going to choose to publish in a venue that may lead people to think that the work isn’t interesting or groundbreaking even before they have chosen to (not) read it. I’ll admit to not placing myself at the forefront of reform in scientific publishing, even if I make all of my papers immediately and universally available. I have to admit that I’m apt to select a moderately selective venue when possible, because I am concerned that people see my research as not only legitimate but also worthwhile. I’m not worried that my stuff isn’t good enough; I want to make sure it’s not done in vain. Science is a social enterprise, and as a working scientist I need to put my work into the conversation.

A snapshot of the publication cycle


I was recently asked:

Q: How do you decide what project you work on?

A: I work on the thing that is most exciting at the moment. Or the one I feel most bad about.

In the early stages, the motivator is excitement, and in the end, the motivator is guilt. (If I worked in a research institution, I guess an additional motivator would be fear.)

Don’t get me wrong: I do science because it’s tremendous fun. But the last part – finessing a manuscript through the final stages – isn’t as fun as the many other pieces. How do I keep track of the production line from conception to publication, and how do I make sure that things keep rolling?

At the top center of my computer desktop lives a document entitled “manuscript progress.” I consult this file when I need to figure out what to work on, which could involve doing something myself or perhaps pestering someone else to get something done.

In this document are three categories:

  1. Manuscripts completed
  2. Manuscripts in progress
  3. Projects in development

Instead of writing about the publication cycle in the abstract, I thought it might be more illustrative to explain what is in each category at this moment. (It might be perplexing, annoying or overbearing, too. I guess I’m taking that chance.) My list is just that – a list. Here, I expand on it to describe how each project was placed on the treadmill and how it’s moving along, or not moving along. I won’t bore you with the details of ecology, myrmecology or tropical biology, and I’m not naming names. But you can get the gist.

Any “Student” is my own student – and a “Collaborator” is anybody outside my own institution with whom I’m working, including grad students in other labs. A legend to the characters is at the end.

Manuscripts completed

Paper A: Just deleted from this list right now! Accepted a week ago, the page proofs just arrived today! The idea for this project started as the result of a cool and unexpected natural history observation by Student A in 2011. Collaborator A joined in with Student B to do the work on this project later that summer. Collab A and I worked on the manuscript by email, and I once took a couple of days to visit Collab A at her university in late 2011 to work together on some manuscripts. After that, it was in Collab A’s hands as first author and she did a rockin’ job (DOI:10.1007/s00114-013-1109-3).

Paper B: I was brought in to work with Collab B and Collab C on a part of this smallish-scale project using my expertise on ants. I conducted this work with Student C in my lab last year and the paper is now in review in a specialized regional journal (I think).

Paper C: This manuscript is finished but not-yet-submitted work by a student of Collab D, which I joined by doing the ant piece of the project. This manuscript requires some editing, and I owe the other authors my remarks on it. I realize that I promised remarks about three months ago, and it would take only an hour or two, so I should definitely do my part! However, based on my conversations, I’m pretty sure that I’m not holding anything up, and I’m sure they’d let me know if I was. I sure hope so, at least.

Paper D: The main paper out of Student A’s MS thesis in my lab. This paper was built with help from Collab E, Collab F and Student D. Student A wrote the paper, I did some fine-tuning, and it’s been through a couple rounds of rejections already. I need to turn it around again when I have the opportunity. There isn’t anything in the reviews that actually requires a change, so I just need to get this done.

Paper E: Collab A mentored Student H in a field project in 2011 at my field site, on a project that was mostly my idea but refined by Collab A and Student H. The project worked out really well, and I worked on this manuscript the same time as Paper A. I can’t remember if it’s been rejected once or not yet submitted, but either way it’s going out soon. I imagine it’ll come to press sometime in the next year.

Manuscripts in progress

Paper F: Student D conducted the fieldwork in the summer of 2012 on this project, which grew out of a project by Student A. The data are complete, and the specific approach to writing the paper has been cooked up by Student D and myself, and now I need to do the full analysis and figures for the manuscript before turning it over to Student D to finish. She is going away for another extended field season in a couple months, and so I don’t know if I’ll get to it by then. If I do, then we should submit the paper within months. If I don’t, it’ll be by the end of 2014, which is when Student D is applying to grad schools.

Paper G: Student B conducted fieldwork in the summer of 2012 on a project connected to a field experiment set up by Collab C. I spent the spring of 2013 in the lab finishing up the work, and I gave a talk on it this last summer. It’s a really cool set of data, though I haven’t had the chance to work it up completely. I contacted Collab G to see if he had someone in his lab who wanted to join me in working on it. Instead, he volunteered himself and we suckered our pal Collab H into joining us on it. The analyses and writing should be straightforward, but we actually need to do it and we’re all committed to other things at the moment. So, now I just need to make the Dropbox folder to share the files with those guys and we can take the next step. I imagine it’ll be done somewhere between months and years from now, depending on how much any one of us pushes.

Paper H: So far, this one has been just me. It was built on a set of data that my lab has accumulated over a few projects and several years. It’s a unique set of data for asking a long-standing question that others haven’t had the data to approach. The results are cool, and I’m mostly done with them; the manuscript just needs a couple more analyses to finish up the paper. I, however, have continued to be remiss in my training in newly emerged statistical software. So this manuscript is either waiting for me to learn the software, or for a collaborator or student eager to take this on and finish up the manuscript. It could be anywhere from weeks to several years from now.

Paper I: I saw a very cool talk by someone at a meeting in 2007, which was ripe to be continued into a more complete project, even though it was just a side project. After some conversations, this project evolved into a collaboration, with Student E doing fieldwork in summer 2008 and January 2009. We agreed that Collab I would be first author, Student E would be second author and I’d be last author. The project is now ABM (all but manuscript), and after communicating many times with Collab I over the years, I’m still waiting for the manuscript. A few times I indicated that I would be interested in writing up our half on our own for a lower-tier journal. It’s pretty much fallen off my radar and I don’t see when I’ll have time to write it up. Whenever I see my collaborator he admits to it as a source of guilt and I offer absolution. It remains an interesting and timely would-be paper and hopefully he’ll find the time to get to it. However, being good is better than being right, and I don’t want to hound Collab I because he’s got a lot to do and neither one of us really needs the paper. It is very cool, though, in my opinion, and it’d be nice for this 5-year-old project to be shared with the world before it rots on our hard drives. He’s a rocking scholar with a string of great papers, but still, he’s in a position to benefit from being first author way more than I am, so I’ll let this one sit on his tray for a while longer. This is a cool enough little story, though, that I’m not going to forget about it, and the main findings will not be scooped, nor grow stale, with time.

Paper J: This is a review and meta-analysis that I have been wanting to write for a few years now, which I was going to put into a previous review, but it really will end up standing on its own. I am working with Student F to aggregate information from a disparate literature. If the student is successful, which I think is likely, then we’ll probably be writing this paper together over the next year, even as she is away doing long-term field research in a distant land.

Paper K: At a conference in 2009, I saw a grad student present a poster with a really cool result and an interesting dataset that came from the same field station as myself. This project was built on an intensively collected set of samples from the field, and those same samples, if processed for a new kind of lab analysis, would be able to test a new question. I sent Student G across the country to the lab of this grad student (Collab J) to process these samples for analysis. We ran the results, and they were cool. To make these results more relevant, the manuscript requires a comprehensive tally of related studies. We decided that this is the task of Student G. She has gotten the bulk of it done over the course of the past year, and should be finishing in the next month or two, and then we can finish writing our share of this manuscript. Collab J has followed through on her end, but, as it’s a side project for both of us, neither of us are in a rush and the ball’s in my court at the moment. I anticipate that we’ll get done with this in a year or two, because I’ll have to analyze the results from Student G and put them into the manuscript, which will be first authored by Collab J.

Paper L: This is a project by Student I, as a follow-up to the project of Student H in paper E, conducted in the summer of 2013. The data are all collected, and a preliminary analysis has been done, and I’m waiting for Student I to turn these data into both a thesis and a manuscript.

Paper M: This is a project by Student L, building on prior projects that I conducted on my own. Fieldwork was conducted in the summer of 2012, and it is in the same place as Paper K, waiting for the student to convert it into a thesis and a manuscript.

Paper N: This was conducted in the field in summer 2013 as a collaboration between Student D and Student N. The field component was successful and now requires me to do about a month’s worth of labwork to finish up the project, as the nature of the work makes it somewhere between impractical and infeasible to train the students to do it themselves. I was hoping to do it this fall, to use these data not just for a paper but also as preliminary data for a grant proposal in January, but I don’t think I’ll be able to do it until spring 2014, which would mean the paper would get submitted in fall 2014 at the earliest, or maybe 2015. This one will be on the frontburner because Students D and N should end up in awesome labs for grad school and having this paper in press should enhance their applications.

Paper O: This project was conducted in the field in summer 2013, and the labwork is now in the hands of Student O, who is doing it independently, as he is based out of an institution far away from my own and he has the skill set to do this. I need to continue communicating with this student to make sure that the project doesn’t fall off the radar and that it gets done right.

Paper P: This project is waiting to get published from an older collaborative project, a large multi-PI biocomplexity endeavor at my fieldstation. I had a postdoc for one year on this project, and she published one paper from it, but when she moved on, she left behind a number of cool results that I need to write up myself. I’ve been putting this off because it would rely on me also spending some serious lab time doing a lot of specimen identifications to get this integrative project done right. I’ve been putting it off for a few years, and I don’t see that changing, unless I am on a roll from the work for Paper N and just keep moving on in the lab.

Paper Q: A review and meta-analysis that came out of a conversation with Collabs K and L. I have co-taught field courses with Collab K a few times, and we share a lot of viewpoints about this topic that go against the incorrect prevailing wisdom, so we thought we’d do something about it. This emerged in the context of a discussion with L. I am now working with Student P to help systematically collect data for this project, which I imagine will come together over the next year or two, depending on how hard the pushing comes from myself or K or L. Again, it’s a side project for all of us, so we’ll see. The worst case scenario is that we’ll all see one another again next summer and presumably pick things up from there. Having my student generating data might keep the engine running.

Paper R: This is something I haven’t thought about in a year or so. Student A, in the course of her project, was able to collect samples and data in a structured fashion that could be used with the tools developed by Collab M and a student working with her. This project is in their hands, as well as first and lead authorship, so we’ve done our share and are just waiting to hear back. There have been some practical problems on their side that we can’t control, and they’re working to get around them.

Paper S: While I was working with Collab N on an earlier paper in the field in 2008, a very cool natural history observation was made that could result in an even cooler scientific finding. I’ve brought in Collab O to do this part of the work, but because of some practical problems (the same as in Paper R, by pure coincidence) this is taking longer than we thought, and it is best fixed by bringing in a new potential collaborator who has control over a unique required resource. I’ve been lagging on the communication required for this part of the project. After I do the proper consultation, if it works out, we can get rolling and, if it works, I’d drop everything to write it up because it would be the most awesome thing ever. But, there’s plenty to be done between now and then.

Paper T: This is a project by Student M, who is conducting a local research project on a system entirely unrelated to my own, enrolled in a degree program outside my department though I am serving as her advisor. The field and labwork were conducted in the first half of 2013 – and the potential long-shot result came up positive and really interesting! This one is, also, waiting for the student to convert the work into a thesis and manuscript. You might want to note, by the way, that I tell every Master’s student coming into my lab that I won’t sign off on their thesis until they also produce a manuscript in submittable condition.

Projects in development

These are still in the works, and are so primordial there’s little to say. A bunch of this stuff will happen in summer 2014, but a lot of it won’t, even though all of it is exciting.


I have a lot of irons in the fire, though that’s not going to keep me from collecting new data and working on new ideas. This backlog is growing to an unsustainable size, and I imagine a genuine sabbatical might help me lighten the load. I’m eligible for a sabbatical, but I can’t see taking it without putting a few projects on hold in a way that would really deny opportunities to a bunch of students. Could I have promoted one of these manuscripts from one list to the other instead of writing this post? I don’t think so, but I could have at least made a small dent.

Legend to Students and Collaborators

Student A: Former M.S. student, now entering her 2nd year training to become a D.P.T.; actively and reliably working on the manuscript to make sure it gets published

Student B: Former undergrad, now in his first year in mighty great lab and program for his Ph.D. in Ecology and Evolutionary Biology

Student C: Former undergrad, now in a M.S. program studying disease ecology from a public health standpoint, I think.

Student D: Undergrad still active in my lab

Student E: Former undergrad, now working in biology somewhere

Student F: Former undergrad, working in my lab, applying to grad school for animal behavior

Student G: Former undergrad, oriented towards grad school, wavering between something in microbial genetics and microbial ecology/evolution (The only distinction is what kind of department to end up in for grad school.)

Student H: Former undergrad, now in a great M.S. program in marine science

Student I: Current M.S. student

Student L: Current M.S. student

Student M: Current M.S. student

Student N: Current undergrad, applying to Ph.D. programs to study community ecology

Student O: Just starting undergrad at a university on the other side of the country

Student P: Current M.S. student

Collab A: Started collaborating as grad student, now a postdoc in the lab of a friend/colleague

Collab B: Grad student in the lab of Collab C

Collab C: Faculty at R1 university

Collab D: Faculty at a small liberal arts college

Collab E: Faculty at a small liberal arts college

Collab F: International collaborator

Collab G: Faculty at an R1 university

Collab H: Started collaborating as postdoc, now faculty at an R1 university

Collab I: Was Ph.D. student, now faculty at a research institution

Collab J: Ph.D. student at R1 university

Collab K: Postdoc at R1 university, same institution as Collab L

Collab L: Ph.D. student who had the same doctoral PI as Collab A

Collab M: Postdoc at research institution

Collab N: Former Ph.D. student of Collab H.; postdoc at research institution

Collab O: Faculty at a teaching-centered institution similar to my own

By the way, if you’re still interested in this topic, there was also a high-quality post on the same topic on Tenure, She Wrote, using a fruit-related metaphor with some really nice fruit-related photos.

Pretending you planned to test that hypothesis the whole time


Our scientific papers often harbor a massive silent fiction.

Papers often lead the readership into thinking that the main point of the scientific paper was the main point of the experiment when it was conducted. This is sometimes the case, but in many cases it is a falsehood.

How often is it, when we publish a paper, that we are writing up the very specific set of hypotheses and predictions that we had in mind when we set forth with the project?

Papers might state something like, “We set out to test whether X theory is supported by running this experiment…”  However, in many cases, the researchers might not even have had X theory in mind when running the experiment, but were focusing on other theories at the time. In my experience in ecology, it seems to happen all the time.

Having one question, and writing a paper about another question, is perfectly normal. This non-linearity is part of how science works. But we participate in the sham of “I always meant to conduct this experiment to test this particular question” because that’s simply the format of scientific papers.

Ideas are sold in this manner: “We have a question. We do an experiment. We get an answer.” However, that’s not the way we actually develop our questions and results.

It could be: “I ran an experiment, and I found out something entirely different and unexpected, not tied to any specific prediction of mine. Here it is.”

It somehow is unacceptable to say that you found these results that are of interest, and are sharing and explaining them. If a new finding is a groundbreaking discovery that came from nowhere (like finding a fossil where it was not expected), then you can admit that you just stumbled on it. But if it’s an interesting relationship or support for one idea over another idea, then you are required to suggest, if not overtly state, that you ran the experiment because you wanted to look at that relationship or idea in the first place. Even if it’s untrue. We don’t often lie, but we may mislead. It’s expected of us.

In some cases, the unexpected origin of a finding could be a good narrative for a paper. “I had this idea in mind, but then we found this other thing out which was entirely unrelated. And here it is!” But, we never write papers that way. Maybe it’s because most editors want to trim every word that could be seen as superfluous, but it’s probably more caused by the fact that we need to pretend to our scientific audience that our results are directly tied to our initial questions, because that’s the way that scientists are supposed to work. It would seem less professional, or overly opportunistic, to publish interesting results from an experiment that were not the topic of the experiment.

Let me give you an example from my work. As a part of my dissertation, in the past millennium, I ran a big experiment in which my assistants and I collected a few thousand ant colonies within an experimental framework. It resulted in a mountain of cool data. The dataset is particularly useful and cool because it contains kinds of data that most people typically cannot get: collecting whole ant colonies yields information you can't get any other way, and that information can be broadly informative. There are all kinds of questions my dataset can be used to ask that can't be answered using other approaches.

For example, in one of the taxa in the dataset, the colonies have a variable number of queens. I wanted to test different ideas about the environmental factors shaping queen number. This was a fine framework to address those questions, even though it wasn't what I had in mind while running the experiment. But when I wrote the paper, I had to participate in the silly notion that the experiment was designed to understand queen number (the pdf is free on my website and Google Scholar).

When I ran that experiment, a good while ago, the whole reason was to figure out how environmental conditions shaped the success of an invasive species in its native habitat. That was the one big thing that was deep in my mind while running the experiment. Ironically, that invasive species question has yet to be published from this dataset. The last time I tried to publish that particular paper, the editor accused me of trying to milk out a publication about an invasive species even though it was obvious (to him at least) that that wasn’t even the point of the experiment.

Meanwhile, using the data from the same experiment designed to ask about invasive species, I’ve written about not just queen number, but also species-energy theory, nest movement, resource limitation, and caste theory. I also have a few more in the queue. I’m excited about them all, and they’re all good science. You could accuse me of milking an old project, but I’m also asking questions that haven’t been answered (adequately) and using the best resources available. I’m always working on a new project with new data, but just because this project on invasive species was over many years ago doesn’t mean that I’m going to ignore additional cool discoveries that are found within the same project.

Some new questions I have are best asked by opening up the spreadsheet instead of running a new experiment. Is that so wrong? To some, it sounds wrong, so we need to hide it.

You might be familiar with the chuckles that came from the Overly Honest Methods bit that went around earlier this year; there was a hashtag involved. Overly honest methods are only the tip of the proverbial iceberg of what we're hiding in our research.

It’s time for #overlyhonesthypotheses.

The relationships among fame, impact and research quality


I just read a particularly interesting post by Dr. Becca about life about halfway through the tenure track that got me thinking, particularly one section:

I feel like most of my job right now is to be famous… What I mean by this is that I’m pretty sure a lot of my future success is going to depend on whether people remember my name when they review my grant applications and manuscripts…

What determines your success? How famous you are.

Most famous scientists have a history of excellent research with high impact. And most researchers with a history of excellent research with high impact are famous. (Fame, that is, among scientists.) However, the r2 on this relationship is well below 1. What explains the variance?

What are the factors that make you more famous, or less famous, than your research quality would merit?

Is the impact of your research – how much it influences the work of others – closely tied to your fame? Or are there people who have high impact but are not well recognized, and people who are quite famous but don't have much impact?

Fame path diagram

A working hypothesis for the relationships among aspects of a scientist’s research program

I posit the figure above only as a suggestion, a working hypothesis that I’m not wholly wedded to. It’s a good template for discussion.

The ceiling on the impact of your research is dictated by how famous you are. Your impact could be (very) crudely measured using an h-index or some other measure of citations: how much of a difference you make. You might get cited a few times if nobody has heard of you, but essentially you need to be known for your work to make a splash. You can only make a difference if people know who you are, which is exactly the point that Dr. Becca made. Your job, if it is to make scientific progress, is to become famous.

If asked to name two huge advances in biology from the mid-1800s, most of us would pick the same things. One came from a person working in obscurity, and the other from someone who was, among scientists of the day, mighty famous and in regular communication with other famous scientists. Darwin's scientific impact was immediate. Mendel's finding required the fame of Hugo de Vries to create a scientific impact more than thirty years later.

Many things contribute to fame. Research quality is one, but the institution you came from, your academic pedigree, your attractiveness, and your personality matter, and your ethnicity and gender can have an effect as well.

What's another thing that plays a key role in facilitating, or limiting, your fame? The institution where you work. If you're not based out of a research institution, there is a hard cap on how famous you're allowed to become as a research scientist. Then again, if you're at a teaching institution, the school doesn't really want you to be a research scientist of any fame anyway. Fame isn't part of the evaluation process for tenure, and you could be entirely unknown off campus without it (necessarily) hurting your tenure bid. That would be fatal at a research institution, where you're expected to establish a visible profile in the research community.

Our jobs at teaching campuses neither expect nor require us to be famous. This might be a defining contrast between a teaching campus and a research campus. Still, there are lots of us at teaching institutions who not only are doing consequential research, but also want this research to have as much impact as it possibly can. Based on the name of the institution on our nametags when we present at conferences, that becomes very difficult.

There’s a positive feedback loop connecting one’s pedigree, social network, publication history, favorable reviews of grants and proposals, funding, talent of collaborators and fame. They’re all connected to one another. And if you’re at a teaching campus, you’re at a strategic disadvantage because those positive feedback loops don’t work as tightly.

Leveraging your pedigree, papers, and collaborations is harder to do, because of unacknowledged biases against teaching campuses in the research community. You can't be famous above a certain level, because those at research institutions assume that you aren't working at one because you can't get a job at one. If you're doing research from a teaching institution, that must mean you haven't had enough success to land a job at a research institution. So the thinking goes. Even in this incredibly tight job market, that line of thinking still prevails. You're skeptical? Pull up a few journals and look at the mastheads, to find the institutions of the editorial board members and the subject editors.

So, unlike Dr. Becca and those at research institutions, my job isn’t to become famous. Even if I was famous, nobody on campus would even be aware of it anyway. However, if I have ambitions for my research to make a difference, then I need to become famous. This fame is required to activate the positive feedbacks among friendly reviews, funding, invitations, collaborations, and so on.

How much space do faculty at teaching campuses take up in journals?


What’s the relative influence of teaching faculty on their fields as a whole? That’s hard to measure.

Here’s an easier, related, question to ask: What fraction of papers coming out have teaching faculty as authors?

A couple months ago, I perused the tables of contents of a variety of journals. Here’s what I found:

  • Ecology: 3 of 25 papers were partially or completely authored by researchers in teaching institutions
  • Journal of the Kansas Entomological Society: 1 of 10
  • Biotropica: 0 of 16
  • Annual Review of Ecology, Evolution and Systematics: 4 of 23
  • Ecology Letters: 1 of 15
  • Proceedings of the Royal Society B: 0 of 20
  • Biology Letters: 2 of 32

By the way, in Physical Review Letters, it was 1 out of 32; Chemical Reviews was 0 of 12.

I can sniff out a teaching institution in the US based on its name. The primarily-teaching university doesn’t quite exist in the same manifestation internationally, but even so it was clear that most international authors were associated with research institutions of one kind or another.

From this feeble back-of-the-envelope calculation with a very small sample size, maybe up to 10% of papers in my fields have teaching-school authors in the US. Is this more or less than you'd expect?
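If you want to redo the arithmetic from the tallies above (ecology-ish journals only, leaving out the physics and chemistry ones), it works out to just under 8%, consistent with the "up to 10%" ballpark. Here's a quick sketch:

```python
# Tallies from the tables of contents surveyed above:
# (papers with at least one teaching-institution author, total papers)
counts = {
    "Ecology": (3, 25),
    "Journal of the Kansas Entomological Society": (1, 10),
    "Biotropica": (0, 16),
    "Annual Review of Ecology, Evolution and Systematics": (4, 23),
    "Ecology Letters": (1, 15),
    "Proceedings of the Royal Society B": (0, 20),
    "Biology Letters": (2, 32),
}

teaching = sum(t for t, _ in counts.values())  # papers with teaching-school authors
total = sum(n for _, n in counts.values())     # all papers surveyed

print(f"{teaching}/{total} = {100 * teaching / total:.1f}%")  # 11/141 = 7.8%
```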

What does it look like in your field, if you're not an ecology/entomo/tropical type?

Glamour publications: the view from a teaching campus


The academic publishing environment is being undermined by a bunch of extrinsic and intrinsic forces.

One such force is the genre of academic glamour magazines. They have massive impact factors that let you make a big splash when you land a spot in one of them. Sometimes genuinely huge discoveries and advances end up in Science, Nature, Ecology Letters, or Cell. But most of what appears in these venues is a big sexy idea without any real lasting value. If science were nutrition, these papers would be junk food: dressed up with everything to make them exciting and yummy, but rarely with any substance.

For those running labs in research institutions, the received wisdom is that you should be publishing in a glamour magazine once in a while.

For those of us at teaching campuses, the received wisdom is that you should be publishing once in a while.

There are increasing calls for principled stands against glamour mags. For those who stand too firmly on principle and avoid any whiff of careerism when choosing a journal, Physioprof pointed out last year that you're probably in a position of privilege if you're saying that. I like Drugmonkey's attitude: subvert the system by being entirely reasonable. Among these reasonable ideas: don't cite glamour mags unnecessarily; don't withhold a result just because you can't get it into one of them; and, as a reviewer, keep the standard crap out of them and support excellent work by your colleagues when you get it for review.

At teaching institutions, we approach this issue from an entirely different perspective. We rarely review for those venues, and typically don't submit to them either. (I've submitted to Science/Nature a few times and reviewed a few times.) This suits institutional expectations. Landing a paper in Science or Nature would be an immense coup. Few, if any, on campus would ever think of it as a gimmicky paper, though the rarity of the feat wouldn't be fully appreciated. (The only person I've ever worked with at a teaching campus who had one of these papers during my time there actually has an overall below-par publishing record.)

These are glamour magazines because they are flashy things that impress, by virtue of rarity itself. Gold and diamonds are valuable because there isn't much of them, or because they are difficult to access. Likewise, it's hard to get into glamour mags, and that's what makes them flashy. The papers themselves don't communicate the value or prestige of a research program; they're just the flashy pieces of ornamentation deemed necessary.

What, then, is truly glamorous on a teaching campus? The answer is publications. Lots of 'em. The reason that this is glamorous is also because of its rarity. While many people publish on teaching campuses, status and glamour come from doing it in high volume, because so few are able to do this. This is true even if the venues are not highly regarded, and even if the papers don't end up being cited. If you want to show off your bling on a teaching campus, five papers in obscure regional or highly specialized journals actually seem more impressive than one paper in a top-notch journal. The people who are arbiters of your reputation on campus might not be able to assess publication quality, but they sure can assess publication frequency.

I make a point of publishing in what I consider to be venues appropriate for my work. I avoid merely descriptive or confirmatory work that doesn't introduce substantial new ideas, so I try to avoid journals that mostly include that kind of work. I could change my focus and crank out many more papers than I do, in lower-impact journals, but that would harm my credibility among my scientific peers even as it increased my profile on campus. Other scientists manage that tradeoff in different ways, of course. I'm not overly concerned, as long as people work on their passion and make sure that it gets shared with the world.

What is the distinction between publishing for glamour and publishing for genuine impact? It's probably the same as the distinction between measured "impact factor" and long-term citation rates.

Transparency in research: publish your reviews


When it comes to reform, and “reform,” it seems like most people think they know the fix, regardless of the problem that needs to be fixed.

For example, many people have strong opinions about how to solve the public education crisis in the U.S. What most of the people pushing for “reform” have in common is that they have little experience or success in public education. Solutions to a problem might involve fewer taxes, more taxes, more investment, less investment, more regulation, fewer regulations. It all blurs together.

It’s not too often that you hear someone in a position of ignorance say, “I’ll defer to the education experts.”

For some problems, there are self-evident partial fixes that don't need any discussion, because the people who are wrong on the issue are straight-up ignoramuses. For example, if you want better schools, then you need to confer more respect, support and money on teachers. You can't have good schools if the teachers don't receive respect and support. That's just obvious. If you want children to stop dying at the hands of madmen, then you've got to restrict access to guns. You can't get more sensible than that, and it's a fact that other developed nations figured out long ago.

The scientific publishing industry is a mess in several different ways, and this mess is stifling research progress. There are not many overt direct solutions. Perhaps scientists should be able to retain copyright of their own work, but this is a complex issue.

There is one component of the academic publishing mess that can be quickly and easily changed by us authors.

If you want more confidence and fairness in the integrity of the publication process, then you need more transparency.

There is one massive thing that we can do to increase the transparency of the publication process.  We can publish our reviews.

Here are some upsides to releasing your reviews:

  • There will be fewer doubts about the integrity of journals and the quality of peer review.
  • There will be more doubts about the integrity of journals that should be subject to doubts.
  • Reviewers, even though they are anonymous, may tend toward producing more civil and measured reviews, with fewer requests for citations to their own work, if the reviews end up being published.
  • Specific concerns about the scientific content of that paper which were addressed during the review process will be publicly available, increasing the ability of readers to critically evaluate the science of the publications.
  • Taxpayers who are paying for research will be even more informed about the process and consequences of publicly-funded projects.
  • People will learn that the quantity and quality of peer review may be independent of the impact factor, prestige or ranking of a journal.
  • The academic glamour magazines will look a lot less glamorous if the reviews and editorial evaluations associated with those venues are seen in daylight.

How does this work? Just put 'em on your website. I've been doing this since 2009. Go ahead and read 'em! (And feel free to cite them.)

It takes very little time to do this. I take the reviews as they come in, copy-and-paste them into a word processing document, and redact the names of my correspondents. Then I make a pdf and upload it right next to the paper itself on my website.

To my knowledge, I'm the only person who does this as a regular course of action.

I haven't often mentioned it while chatting with colleagues, even though I know plenty of folks are downloading reprints from my site. Perhaps nobody mentions it because they think it's a supremely risky or unwise thing to do. If you read through the files, you might notice that one or two good journals come out looking rather silly. It might have resulted in a grudge on their end, though I don't think that's the case. Obviously it's not wholly flattering to me, either, to show evidence of rejection after rejection for some papers. I think the benefits of transparency outweigh the costs of publishing the negative reviews that resulted in rejection.

How do the journals feel about this? Nobody’s ever said anything. It hasn’t come up.

I do look at this from the perspective of an editor, too. I have handled my share of manuscripts, and I doubt that any of the authors whose manuscripts I handle are publishing their rejections and acceptances online (and rejections are far more common than acceptances). Nevertheless, I work for quality and fairness, clearly enough that if the documents were made public with my name on them, I would be proud of the work and wouldn't feel as if I had to make any excuses. I do include the names of journals, but not the names of any particular individuals. You could infer editors-in-chief based on the dates in the correspondence, but it's a different matter for handling editors.

I approach editing with the philosophy that I would want to be sure that I would be able to handle public scrutiny if it all was published on the front page of the newspaper. I also have the same policy for how I conduct myself in the classroom, and how I correspond over email. I honestly wouldn’t be bothered if my reviews of a manuscript and my remarks as an editor were publicly revealed with my name. I certainly wouldn’t mind if they were released without any name attached, which is what I do with the reviews I share with the world.

I don't think people pay much attention to the content of these reviews. They want to see the final paper, and few want to look inside the sausage factory. The reviews are probably of greatest interest to students who don't yet know how the process works.

One thing that you'll see is the rigor of peer review at PLoS ONE. I've only published one paper in this venue so far, but when you compare the process there to the quality of editorial work at the other publications, and to the earlier submissions of that same paper to other venues, you have to respect what happens under the hood at PLoS ONE.

Do you think sharing your reviews is a great idea? If everybody shared their reviews, would it destabilize the publication process, result in no change, or make things more fair? Would the level of happenstance involved in the process, and the importance of salesmanship, become more evident?

I’m not suggesting that this is a major fix, but from the way I’ve seen the angles so far, I see a lot of positives.

My all-time favorite scientific paper


Current events (E.O. Wilson saying that scientists don’t need to be good at math) give me a great reason to introduce what might be my favorite scientific paper.

I have three reasons for choosing this paper to share with you. One minor reason is that, as one ant man writing about another, more illustrious ant man, I'd like to be one of the few scientists to publicly say something nice about E.O. Wilson this week without any kind of caveat.

Second, the content of this paper, and the fact of its existence, frame Wilson's message about science and math in a way that dovetails with my recent writing on how to design a research program.

Last, since this paper was published it has been a source of inspiration to me as a scientist.

Without further ado, here’s the paper:

Wilson, E.O. 2005. Oribatid mite predation by small ants of the genus Pheidole. Insectes Sociaux 52: 263-265. There is a paywall – email me if you’d like a copy.

Here is the abstract of this three-pager in its entirety:

Using “cafeteria experiments” with forest soil and litter, I obtained evidence that at least some small Neotropical species of Pheidole prey on a wide array of slow-moving invertebrates, favoring those of approximately their own size. The most frequent prey were oribatid mites, a disproportion evidently due in part to the abundance of these organisms. The ants have no difficulty breaking through the calcified exoskeleton of the mites.

What is the deal with this, and why is it inspirational? Please humor me by reading on if I haven’t lost you already.

This paper was published in 2005. In 2003, after several decades of effort, Wilson had published a monumental revision of the most species-rich ant genus, Pheidole. Any taxonomist can appreciate the sheer enormity of this effort, which held Wilson's attention over the years. Clearly, it was a labor of love. Most Pheidole are tiny. They're charming little ants, if nondescript, and not obviously different from one another in ways that could account for their richness.

Like most years, 2005 was a good year for Wilson. He wrote three PNAS papers, two with his long-time friend and colleague Bert Hölldobler. He also wrote a controversial paper in Social Research arguing that altruism doesn’t principally arise from kin selection, a precursor to Wilson’s now full-fledged group selection posture. He had a book chapter come out, oh, and also he published a big book introducing the concept of gene-culture coevolution. And then there was this little paper, one of my favorite papers ever, in Insectes Sociaux.

If you want to understand and measure the diversity of ants, the first place to start is to sample the leaf litter. A whole book has been written about how to do this, actually. That’s where the action is, in terms of functional and taxonomic diversity. Pretty much wherever you go on the entire planet, the most common thing that you’ll find in the litter is Pheidole. They’re cosmopolitan, if not sophisticated. If the importance of a taxon is measured by its diversity, abundance and distribution, then Pheidole are the most important ants. (I guess you could argue for carpenter ants, too. But why? They’re so boring.)

Wilson has argued time and time again that ants are really important, they rule the world, they have the same biomass as people, and all that stuff. So, since Pheidole are the ants that rule among the ants, then we’ve got to really have figured out these ants, right? After all, they’re easy to find, they show up at baits, they’re easy to work with.

So what can we, as the community of ant biologists, tell you about the natural history, life history and habits of these Pheidole that live in leaf litter? Here’s a quick list of features:

  • _
  • _
  • _

That's only a slight exaggeration. Okay, so, I can at least tell you what they eat.

No, I can’t.

Actually, I can. Why? Because E.O. goddamn Wilson, in his seventies, after reaching the pinnacle of his career twenty different times and receiving every honor you could invent, decided to do the little experiment to figure this out. He wrote it up as a sole-authored paper in a specialized journal.

It turns out they love oribatid mites. Now you know.

(This is not insignificant, actually, for the field of chemical ecology. Two years after the Wilson paper, Ralph Saporito sorted out that mite alkaloids end up in ants, which end up in poison frogs as their chemical defenses. The frogs also eat the mites directly, too.)

Wilson had spent decades slowly churning on the revision of Pheidole. After spending all that time at the scope and in the museum sorting out the genus, he can’t be blamed for thinking, “what do we know about these ladies after all?” Instead of just wondering, he did the experiment. You gotta love that spirit.

It's rare for a midcareer PI of a typical lab to do a little experiment like this on one's own and take the time to write it up. And there's E.O. Wilson doing his own experiments amid a string of high-profile papers, books, and gala appearances, all while being a reliable stand-up mentor to junior colleagues. It communicates an unabashed love for these ants, for discovery, for natural history, and for answering unanswered questions wherever they lead. Wilson is the consummate tinkerer.

This paper is by no means an outlier. Studies like these pepper his CV, sandwiched between his major theories and findings. To me, these are the actual meat of the sandwich. (Or tofu or something. I don't eat meat.) To those of us who study ants, that's what makes Wilson a rockstar. He'd be super-awesome without any of the books and big theories formulated in collaboration with mathematicians. His productivity, keen sense of natural history, eye for observation, and interest in discovering questions as well as answers have been trademarks of his ant-centered work. The man loves ants, and it shows.

When this paper came out, I had been working on the ecology of litter-nesting ants in tropical rainforests for about ten years. There were many ideas I was pursuing, and I'm proud of what I've done and excited about what lies ahead. The work has been rewarding because so little is known about the biology of these animals, despite their abundance and diversity.

After ten years, if you had asked me, so what do they eat? I wouldn’t have been able to tell you. How many zoologists do you know who can’t tell you the diet of their study organism?

Isn’t that odd that I didn’t know what these ants eat? That nobody knew, at all? Hell yes, it’s odd. Wilson saw it was odd. And he did something about it. The publication of this paper was but a speck, if a speck at all, on the face of his career. For those of us who study litter ants, this was very important. Any one of us could have done it. But you know what? We didn’t, while Wilson did.

That’s what badass science looks like, in my book. And it doesn’t require partial differential equations.

Footnote: You might be wondering, by the way, how can you not know what they eat if you work with them all the time? The answer is, essentially, that these are really small ants. A massive colony fits in a microcentrifuge tube, and a smallish one can fit in a 2 cm piece of straw. You won’t see what’s between their mandibles in the wild, and can’t make out the refuse in nests, either.

Keeping tabs on pseudo journals [retracted]


Update 10 March 2014: Since I published this post, I’ve been made aware of an alternative agenda in Jeffrey Beall’s crusade against predatory publishers. His real crusade is, apparently, against Open Access publishing. This agenda is clearly indicated in his own words in an open access publication entitled, “The Open-Access Movement is Not Really about Open Access.” More information about Beall’s agenda can be found here. I am not removing this post from the site, but I am disavowing its contents as positive coverage of the work of Beall may undermine the long-term goal of allowing all scientists, and the public, to access peer-reviewed publications as easily and inexpensively as possible.

Earlier on, I lamented the annoying – and predatory – practices of pseudojournals. I wished that someone could do something to identify and contain these parasites.

I just learned someone is. Meet Jeffrey Beall. This guy is awesome. He’s an academic librarian at UC Denver. He’s taken on the herculean task of identifying, calling out, and investigating all of these non-journals that try hard to look like real academic outfits.

He calls these pseudoacademic entities “predatory journals” and “predatory publishers,” which is an apt label.

He runs the blog Scholarly Open Access, which I just discovered last week.

Six months ago, Nature ran a column by him about this topic and his blog. I'm not someone who regularly peruses Nature (unless E.O. Wilson goes all group-selectionist and my colleagues go all doctrinarian), so it slipped my attention.

It’s definitely worth a visit to Beall’s site. Not only does he keep an up-to-date list of publishers and journals that are “predatory” in nature, he also shares much of his investigation into particular circumstances, such as this one guy who is the “Editor in Chief” of several “journals.”

These journals present all kinds of fake information and corrupt financial arrangements, often in a hilariously inept manner. It's entertaining to spend some time on this blog. I'll be visiting regularly, for entertainment of the drive-past-an-accident-scene-and-can't-help-looking variety.

Of course, it's of practical use too, in the event your institution also has people who use these fake journals to pad their CVs, and you need an external opinion to validate your own. Mr. Beall is doing some spectacular work, and we should all express some appreciation for his delving into this muck on behalf of the rest of academia.

By the way, right after I prepared this post, the New York Times came out with a profile of Beall’s efforts, focusing on not only pseudojournals but also the pseudoconferences that are hosted by the same or similar organizations.

Lab meetings: the publication process


My lab meeting last week got totally derailed. In a good way.

One of my students mentioned the manuscript that she's working on, and from everyone erupted a series of questions about the publication process. Everyone wanted to know so much about it that we mostly ditched our original plan (to discuss the design of an experiment for the summer).

The social subtleties of how a paper gets published are entirely foreign to undergrads. They're also unaware of the basic mechanics of the process.

The meeting turned into a long clinic/tutorial on how the process goes. If I had known better, I would have come prepared with examples of cover letters, reviews, rejections, responses, and revisions.

Actually, I liked the way we went about it as an ad hoc conversation. I just answered their questions as they came in, rather than having prepared a little lesson about it. How do you pick a journal? How does an editor find reviewers? You mean they can just reject you without getting reviews? How often have you gotten rejected? How much do you get paid? You have to *pay* to publish? How much do you review? What happens when you say no? How long does it take for a paper to be published once you submit it? Can you submit to more than one at a time? What do you do when the reviewers don’t agree with one another? What does the university say when you publish a paper?

It's important for my undergrads to be familiar with the how-we-do-things-on-a-daily-basis part of academia. They'll be a lot more savvy as they gain more exposure, and they'll be better able to understand doctoral students when they hang out with them and as they apply to grad school.

I’ve had this kind of conversation, informally, with students more times than I can remember. Little things get explained here and there, now and then. Lab meetings would be a good time to make this more formalized. There was a good discussion in an earlier post about what exactly we do and don’t do in lab meetings. So, here’s one thing you can dedicate a whole lab meeting to: the forensic analysis of the publication cycle of a couple of manuscripts, explaining all the choices along the way.

My students are still surprised by the idea that it sometimes takes more work to publish a paper than it takes to collect the data, and even more surprised (or dismayed, perhaps) that it can take far longer as well.

That’s a lesson that we need to reinforce, that much of science is about writing.

The Evolution of Pseudojournals


Does your institution accept pseudojournals? Mine does.

Today, I got an invitation to publish in the new journal “Expert Opinion in Environmental Biology.” The invitation provided a list of the “High Profile Editorial Board” members. I usually don’t discuss spam at breakfast (nor eat it), but this morning my family had fun inventing names of prestigious-sounding journals. I could go through my spambox and find a couple dozen more.

These journals exist because there are people out there whose jobs require some sort of external validation of their scholarship. Long ago, the Who’s Who series profited from people who needed to see their names in a bound volume. They’re still making a mint, I think. Now they are joined by a small army of “peer-reviewed” journals. Any website claiming to have a peer review process can magically add fresh meat to the publications list on your CV.

Who would be involved with such an outfit? After all, if my 9-year-old kid can see through the name of a silly pseudojournal, shouldn’t professors in the field? Wouldn’t they actually make you look worse? The answer is, apparently, no.

Universities require that faculty coming up for tenure and promotion be demonstrably scholars within their field. At teaching schools, where most people do little research, how is scholarship evaluated? Within a department, when new faculty come up for tenure, the evaluators may have been tenured long ago and be unfamiliar with the current norms of the field.

Standards might be locked in time from when faculty were active scholars in grad school. For example, a former colleague of mine was convinced that having a paper in Ecological Entomology would be a much bigger accomplishment than one in Ecology Letters. That’s because he hadn’t heard of Ecology Letters, even though it had recently become the ISI top-ranked journal in the field.

Some scholars realize that things change, which may be why pseudojournals are so easily accepted. Others might see through the sham of a pseudojournal, but decide not to care about the deceit, because scholarly prominence may not be a priority.

At my university, I attended a session about tenure file evaluations. There was a discussion about the perennial problem: how can faculty evaluate people in different subfields, especially in diverse disciplines?

It was a disappointing conversation. The outcomes affirmed the following policies: Committees are not allowed to request external evaluation of an academic record or CV. They are allowed to subjectively evaluate journal quality, but are specifically forbidden from referencing specific metrics such as impact factor, h-index, or ISI indexing. They are not encouraged to search for information regarding the validity of a journal, and any specific facts or evidence that a journal is of poor quality, or has sham peer review, should not be included in an evaluation. It is okay to report that you have not heard of a journal, but you can’t report that your investigation has shown it to be a good journal. So, the only other ecologist at my old job could say, “I’ve never heard of Ecology Letters before,” but he wouldn’t be allowed to say that ISI ranks it as the top journal in the field. Ecology Letters would be on par with Expert Opinion in Environmental Biology.

These policies allow pseudojournals to persist. They let the institution check off the scholarship box on the tenure file without caring about the reality or quality of the scholarship. That said, people have been denied tenure for inadequate scholarship. However, it appears this can only happen to scholars who have too much pride to publish in pseudojournals. Heck, they could even join the editorial boards of several of them, if they so chose.

I suspect, or at least hope, that many teaching schools actually do ferret out pseudojournals. However, the proliferation of these venues – and the willingness of faculty to lend their names to editorial boards – suggests that they indeed have an audience. Otherwise, who would go to the trouble and how could you make a profit without a customer base? Who really does see these things and think they’re real? I am honestly confused.

As academic publishing moves toward more transparency, open access, and data sharing, it will be interesting to see how the perception and measurement of scholarly activity shift. The acceptance rate at PLoS ONE, for example, is over two thirds. Anybody can publish in this journal, and even more importantly, anybody can read it. I imagine this trend will continue, and I’m excited by the notion that citation, and all that comes with it, will become more of a meritocracy, not managed by for-profit publishers keeping findings away from the public.