There’s a remark that I see once in a while in reviews, something along the lines of: “The authors should have their work edited by a native English speaker.”
Please stop saying this. I think it’s a problem, for three reasons:
Apparently, there are some editors of academic journals who will readily send manuscripts out to “non-preferred reviewers” — the specific people whom authors have asked not to receive the paper for review.
I think this is all kinds of messed up.
Is peer review broken? No, it’s not. The claim that “stuff is broken” is overused so much that it now just sounds like hyperbole.
Can we improve peer review? Yes. The review process takes longer than some people like. And yes, editors can have a hard time finding reviewers. And there are conflicts of interest and bias baked into the process. So, yes, we can make peer review better.
As a scientific community, we don’t even agree on a single model of peer review. Some journals are doing it differently than others. I’ll briefly describe some peer review models, and then I’ll give you my take.
You’re reading Small Pond Science right now — but a lot of our colleagues don’t read anything resembling a blog. So, for them, I’ve just published a short peer-reviewed paper about how this site addresses a common theme: how to promote equity and inclusion, especially for students in minority-serving institutions.
Think of it as a blog post, but with a lot of useful references in peer-reviewed journals and with the bright and shiny veneer of legitimacy from a journal that’s been in print for more than a century. And hopefully fewer typos.
Preprints are not a standard practice in biology. Nowadays, most papers that get published in peer-reviewed journals were not uploaded to a public preprint server.
Maybe this is changing? It looks like preprints are starting to take off. It’s not clear if this is a wave that will sweep the culture of the field, or just a growing practice among a small subset.
The turnaround time that journal publishers demand for correcting page proofs is crazy, right? I honestly have no idea what the hurry is.
I think most reviews are good and fair. Regardless, when I get an unwelcome decision back from an editor, it’s annoying. Getting annoyed is natural. Here’s how I process bad reviews.
I recently had an exchange with a colleague, who had just written a review at my request. They hadn’t written many reviews before, and asked me something like, “Was this a good review?” I said it was a great review, and explained what was great about it. Then they suggested, “You should write a post about how to write a good review.”
So, ta da.
I think a lot of academic article titles are pretty bad. What do I mean by bad? The title doesn’t really tell you what the paper is actually about. It could be buried in jargon, oversell an idea, or focus on details that most of the intended audience won’t care about.
Does the title of a paper affect how it gets read and cited? Probably. In what way? That’s not so simple, based on my short browse of some scientometric findings.
Science has a thousand problems, but the time it takes for our manuscripts to be peer reviewed ain’t one. At least, that’s how I feel. How about you?
I hear folks griping about the slow editorial process all the time. Then I ask, “How long has it been?” And I get an answer like, “Oh, almost two whole months. Can you believe it? Two months?!”
I am going to go ahead and assume we all want quality reviews of our journal submissions, however you define ‘quality’. Reviewers who take the time to seriously evaluate your work, provide constructive feedback, and ultimately improve the paper should always be appreciated. But as reviewers ourselves, we know that we don’t always give each paper our full attention. In general, I try to give good and helpful (to the author and editor) reviews. I try not to take on reviews when I know I don’t have the time to do a good job. Perhaps I am naïve, but the impression I get from my colleagues and from reviews of my papers is that, in general, most people are also trying to give good reviews.
Chatting with people at La Selva Biological Station in Costa Rica, the topic from a recent post came up: that journals have cut back on “accept with revisions” decisions.
There was a little disagreement in the comments. Now, on the basis of some conversations, I have to disagree with myself. Talking with three different grad students, this is what I learned:
Some journals are, apparently, still regularly doing “accept-with-revisions.” And they also then are in the habit of rejecting those papers after the revisions come in.
Since I started submitting papers (around the turn of the century) editorial practices have evolved. Here’s a quick guide:
What used to be “Reject” is still called a “Reject.”
What used to be “Reject with Option to Resubmit” rarely ever happens anymore.
What used to be called “Major Revisions” is now called “Reject (With Invited Resubmission)” with a multiple-month deadline.
What used to be called “Minor Revisions” is now called “Reject (With Invited Resubmission)” with a shorter timeline.
And Accept is still Accept.
Here’s the explanation.
A flat-out rejection — “Please don’t send us this paper again” — hasn’t changed. (I’ve pointed out before that it takes some experience to know when a paper is actually rejected.)
This is going to make me sound not young, but here goes.
When I was in grad school, if you wanted an article, you had to go over to photocopy it at the library. (Uphill, both ways, in the snow.)
Every time I went to the stacks to get the article I needed, I’d walk by the current periodicals section. That’s where the new issues accumulated before they were sent off to be bound for the stacks. There were typically several months’ worth of issues for every journal.
I usually paused to look through the new issues of some of my favorite journals, including American Naturalist, Behavioral Ecology and Sociobiology, Biotropica, Ecology, Insectes Sociaux, Oecologia, Oikos, and an upstart journal called Ecography. And many others. (The journal landscape has really evolved over the past couple decades, of course.)
Nowadays, I rarely sign my reviews.
In general, I think it’s best if reviews are anonymous. This is my opinion as an author, as a reviewer, and as an editor. What are my reasons? Anonymous reviews might promote better science, facilitate a more even playing field, and protect junior scientists.
The freedom to sign reviews without negative repercussions is a manifestation of privilege. The use of signed reviews promotes an environment in which some have more latitude than others. When a tenured professor such as myself signs reviews, especially those with negative recommendations, I’m exercising liberties that are not as available to a PhD candidate.
To explain this, here I describe and compare the potential negative repercussions of signed and unsigned reviews.
Unsigned reviews create the potential for harm to authors, though this harm may be more evenly distributed among researchers. Arguably, unsigned reviews allow reviewers to be sloppy and get away with a less-than-complete evaluation, which may cause the reviewer to fall out of the good graces of the editor, but not of the authors. Reviewer anonymity also allows scientific competitors or enemies to write reviews that unfairly trash (or, more strategically, sabotage) one another’s work. Junior scientists may not have as much social capital as senior researchers to garner favorable reviews from friends in the business. But on the other hand, anonymous reviews can mask the favoritism that may happen during the review process, conferring an advantage on senior researchers with larger professional networks.
Signed reviews create the potential for harm to reviewers, and confer an advantage to influential authors. It would take a brave, and perhaps foolhardy, junior scientist to write a thorough review of a poor-quality paper coming from the lab of an established senior scientist. This could harm the odds of landing a postdoc, getting a grant funded, or getting a favorable external tenure evaluation. Meanwhile, senior scientists may have more latitude to be critical without fear of direct effects on the ability to bring home a monthly paycheck. Signed reviews might allow more influential scientists to experience a breezier peer review experience than unknown authors.
When the identity of reviewers is disclosed, that information enables new game-theoretical strategies that may further subvert the peer-review process. For example, I know there are some reviewers out there who seem to really love the stuff that I do, and there is at least one (and maybe more) who appears to have it in for me. It would only be rational for me to list the people who give me negative reviews as non-preferred reviewers, and those who give positive reviews as recommended reviewers. If I knew who they were. If everybody knew who gave them more positive and more negative reviews, some people would make choices to exploit the system and garner more lightweight peer review. The removal of anonymity can open the door to corruption, including tit-for-tat review strategies. Such a dynamic would further exacerbate the asymmetries between less experienced and more experienced scientists.
The use of signed reviews won’t stop people from sabotaging others’ papers. However, signed reviews might allow more senior researchers to use their experience with the review system to exploit it in their favor. It takes experience receiving reviews, writing reviews, and handling manuscripts to anticipate how editors respond to reviews. Of course, let’s not undersell editors, most of whom I would guess are savvy people capable of putting reviews in social context.
I’ve heard a number of people say that signing their reviews forces them to write better reviews. This implies that some may use the veil of anonymity to act less than honorably, or at least not try as hard. (If you were to ask pseudonymous science bloggers, most would disagree.) While the content of the review might be substantially the same regardless of identity, a signed review might be polished with more varnish. I work hard to be polite and write a fair review regardless of whether I put my name on it. But I do admit that when I sign a review, I give it a triple-read to minimize the risk that something could be taken the wrong way (just as I do whenever I publish a post on this site). I wouldn’t intentionally say anything different when I sign, but it’s normal to take negative reviews personally, so I try to phrase things so that the negative feelings aren’t transferred to me as a person.
I haven’t always felt this way. About ten years ago, I consciously chose to sign all of my reviews, and I did this for a few years. I observed two side effects of this choice. The first was a couple of instances of awkward interactions at conferences. The second was an uptick in the rate at which I was asked to review stuff. I think this is not merely a correlative relationship, because a bunch of the editors who were hitting me up for reviews were authors of papers that I had recently reviewed non-anonymously. (This was affirmation that I did a good job with my reviews, which was nice. But as we say, being a good reviewer and three bucks will get you a cup of coffee.)
Why did I give up signing reviews? Rejection rates for journals are high; most papers are rejected. Even though my reviews, on average, had similar recommendations as other reviewers, it was my name as reviewer that was connected to the rejection. My subfields are small, and if there’s someone who I’ve yet to meet, I don’t want my first introduction to be a review that results in a rejection.
Having a signed review is different than being the rejecting subject editor. As subject editor, I point to the reviews to validate the decision, and I can also defer to my editor-in-chief, who to his credit doesn’t follow subject editor recommendations in a pro forma fashion. The reviewer is the bad guy, not the editor. I don’t want to be identified as the bad guy unless it’s necessary. Even if my review is affirming, polite, and as professional as possible, if the paper is rejected, I’m the mechanism by which it’s rejected. My position at a teaching-focused institution places me on the margins of the research community, even if I am an active researcher. Why the heck would I put my name on something that, if taken the wrong way, could result in further marginalization?
When do I sign? There are two kinds of situations. First, some journals ask us to sign, and I will for high-acceptance-rate journals. Second, if I recommend changes involving citations to my own work, I sign. I don’t think I’ve ever said “cite my stuff” when my work has gone uncited, but sometimes a paper cites me and follows up on something in my own work in a way that calls for clarification, and I step in to clarify. It would be disingenuous to hide my identity at that point.
The take home message on peer review is: The veil of anonymity in peer review unfairly confers advantages to influential researchers, but the removal of that veil creates a new set of more pernicious effects for less influential researchers.
Thanks to Dezene Huber whose remark prompted me to elevate this post from the queue of unwritten posts.
On 09 April 2013, I published a post entitled, “Keeping tabs on pseudojournals.”
I just modified that post to indicate a retraction, with the following text:
Since I published this post, I’ve been made aware of an alternative agenda in Jeffrey Beall’s crusade against predatory publishers. His real crusade is, apparently, against Open Access publishing. This agenda is clearly indicated in his own words in an open access publication entitled, “The Open-Access Movement is Not Really about Open Access.” More information about Beall’s agenda can be found here. I am not removing this post from the site, but I am disavowing its contents as positive coverage of the work of Beall may undermine the long-term goal of allowing all scientists, and the public, to access peer-reviewed publications as easily and inexpensively as possible.
Months ago, I saw Beall’s paper, which tried to equate open-access publishing with poor-quality scholarship. This makes no sense whatsoever, because many open access journals have rigorous peer review. (For example, I posted the reviews from a recent-ish PLOS ONE paper of mine. No doubts about that rigor.) The suggestion that an open access publishing model is tantamount to predatory publication is not only absurd, but also intellectually dishonest. I can only imagine that this position is either a result of incredibly feeble reasoning, or is politically motivated to help publishers maintain their oligopoly over the academic publishing industry.
Regardless of the reasons, Beall’s crusade against the open access to academic research is folly and I don’t want to be associated with support for his work. Now, academia needs a strong, rational and transparent voice to combat genuine predatory publishers that lack rigorous peer review and are guilty of academic payola. It seems Jeffrey Beall doesn’t fit that bill.
Finally. There are journals that publish quality peer-reviewed research but leave it to the reader to decide whether a paper is sexy or important. Shouldn’t this be better than letting a few editors and reviewers reject work based on whether they personally think a paper is important or significant?
The last few years have seen a relatively quick shift in scientific publishing models, and there has been a great upheaval in journals in which some new ones have become relatively prestigious (e.g., Ecology Letters) and some well-established journals have experienced a decline in relative rank (e.g., American Journal of Botany). These hierarchies have a great effect on researchers publishing from small ponds.
Publishing in selective journals is required to establish legitimacy. This is true for everybody. Because researchers in small teaching institutions are inherently legitimacy-challenged, this is the population that relies most heavily on this mechanism of legitimacy.
Researchers in teaching institutions don’t have a mountain of time for research. Just think about all of the time that could be spent on genuine research, instead of time wasted in the mill of salesmanship that is required to publish in selective journals. (I also find pitching research as the theory-of-the-moment to be one of the most annoying parts of the business.)
With new journals that verify quality but not sexiness, we can opt out of the salesmanship game and just get stuff published. Sounds great, right?
After all, the research that takes place at teaching institutions can be of high quality and significant within our fields. But, on average, we just don’t publish as much. That makes sense because our employers expect us to focus on teaching above all else.
Since we’re less productive, every paper counts. We want to get our research out there, but we also need to make sure that every paper represents us well. What we lack in quantity, we need to make up for in (perceived) quality.
How do people assess research quality? The standard measure is the selectivity of the journal that publishes the paper. It’s natural to think that a paper in American Naturalist (impact factor 4.7) is going to be higher quality than American Midland Naturalist (impact factor 0.6).
People make these judgments all the time. It might not be fair, but it’s normal.
And no matter how dumb people say it might be, no matter how many examples are brought up, assessments of ‘journal quality’ aren’t going away. No matter how much altmetrics picks up as another highly flawed measure of research quality, the name of the journal that publishes a paper really matters. That isn’t changing anytime soon.
The effect of a paper on the research community is tied to the prestige of the venue, as well as the prestige of the authors. Fame matters. If any researcher – including those of us at teaching institutions – wants to build an influential research program, we’ve got to build up a personal reputation for high quality research.
Building a reputation for high quality research is not easy at all, but it’s even harder while based at a teaching institution. Just like having a paper in a prestigious journal is supposed to be an indicator of quality research, a faculty position at a well-known research institution is supposed to be an indicator of a quality researcher. Since our institutional affiliations aren’t contributing to our research prestige, we need to make the most of the circumstances to establish the credibility and status of the work that comes out of our labs.
If journal hierarchies didn’t exist, it would be really hard for researchers in lesser-known institutions, who may not publish frequently, to readily convince others that their work is of high quality. Good work doesn’t get cited just because it’s good. It needs to be read first. And work in non-prestigious journals may simply go unread if the author isn’t already well known.
If journal hierarchies somehow faded, it’s not as if the perception of research quality would evolve into some perfect meritocracy. There are lots of conscious and unconscious biases, aside from quality, that affect whether or not work gets into a fancy-pants journal, but it is true that people without a fancy-pants background can still publish in elite venues based on the quality of their work. This means that people without an elite background can gain a high profile based on merit, though they do need to persevere through the biases working against them.
If journals themselves merely published work but without any prestige associated with them, then it would be even more difficult for people without well-connected networks to have their work read and cited. It wouldn’t democratize access to science; it would inherently favor the scientists with great connections. At least now, the decisions of a small number of editors and reviewers can put science from an obscure venue into a position where a large audience will see it. On the other hand, publishing in a journal without any prestige, like PLoS ONE, will allow work to be available to a global audience, but actually read by very few.
If I want my work to be read by ecologists, then publishing it in a perfectly good journal like Oikos will garner me more readers than if I publish it in PLoS ONE. Moreover, people will look at the Oikos paper and realize that at some point in its life, there was a set of reviewers and an editor who agreed that the paper was not only of high quality but also interesting or sexy enough to be accepted. It wasn’t just done well, but it’s also useful or important to the field. That can’t necessarily be said of all PLoS ONE papers.
Not that long ago, I thought that these journals lacking the exclusivity factor were a great thing because it allowed everybody equal access to research. What changed my mind? The paper that I chose to place in PLoS ONE. I chose to put a paper that I was really excited about in this journal. It was a really neat discovery, and should lead to a whole new line of inquiry. (Also, the editorial experience was great, the reviewers were very exacting but even-handed, and the handling editor was top notch.)
Since that paper came out just over a year ago, there have been a number of new papers on this or a closely related topic. But my paper has not been cited yet, even though it really should have been. Meanwhile, they’re citing my older, far less interesting and useful paper on the same topic from 2002.
Why has nobody cited the more recent paper? Either people think that it’s not relevant, not high enough quality, or they never found it. (Heck, the blog post about it has been seen more times than the paper itself.) Maybe people found it and then didn’t read it because of the journal. It’s really a goddamn great paper. And it’s getting ignored because I put it in PLoS ONE. I have very little doubt that if I had chosen to put it in a specialized venue like Insectes Sociaux or Myrmecological News, both good journals that are read by social insect biologists, it would be read more heavily and would have been cited at least a few times. This paper could have been in an even higher-profile journal, because it’s so frickin’ awesome, but I chose to put it in PLoS ONE. Oh well, I’ve learned my lesson. There are some papers in that venue that get very highly cited, but I think most things in there just get lost.
I would love for people to judge a paper based on the quality of its content rather than the name of the journal. But most people don’t do this. And I’m not going to choose to publish in a venue that may lead people to think that the work isn’t interesting or groundbreaking even before they have chosen to (not) read it. I’ll admit to not placing myself at the forefront of reform in scientific publishing, even if I make all of my papers immediately and universally available. I have to admit that I’m apt to select a moderately selective venue when possible, because I am concerned that people see my research as not only legitimate but also worthwhile. I’m not worried that my stuff isn’t good; I want to make sure it’s not done in vain. Science is a social enterprise, and as a working scientist I need to put my work into the conversation.
As academics, we spend a lot of time reading primary literature (although we often feel it is not enough). It is a real skill to learn to decipher how journal articles are written and how to read them effectively. One barrier is the language, and learning a discipline involves learning its language. However, even if you know all the words and concepts, the format of papers is different from most everything else we might read.
From a survey I did of ecology teachers*: many think that reading primary literature is important in teaching ecology. I included answers for reading textbooks as a comparison. I wasn’t surprised that there was a bit less emphasis on textbook reading, but it is obviously still a useful resource for teaching ecology. I certainly also had the impression that reading journal articles was important as an undergraduate, but I wasn’t quite sure how to do it.
So if you are using or want to use primary literature in an undergraduate class, how should you go about it? There are perhaps 101 ways to effectively use journal articles as teaching tools. The link is a detailed article which outlines what you can use primary literature for, how to identify good articles, the challenges of using primary literature and how to overcome them, and finally how to assess learning. There is tons of good advice there, so if you are looking for ways to incorporate the literature but are unsure how, it is a good place to start. Here’s a more personal account of one professor’s approach to integrating the primary literature into a class. I like the idea of building up understanding and directing the students so that their reading is productive.
When I was an undergrad, I found that I basically learned how to read primary literature by doing it a lot. My first attempts felt a bit like looking through a fog. I would attend discussion sections where we’d read papers, with a few students presenting each time. I don’t think I learned anything until I presented a paper myself. Before that, it seemed that I missed the main point of the paper every time.
So when I started running my own section for a writing intensive group of an ecology course as a teaching assistant, I realised I didn’t want my students to be stuck in the rut I had been in. We were to discuss many papers during the semester and I couldn’t wait for the end for all of them to be comfortable. I also didn’t want the focus to be on me (the section was meant to facilitate their independence), so I didn’t want to break down every paper for them. Inspired by discussions in a class about how to teach writing that I was taking, I came up with a simple plan to get students to overcome any issues they might have with discussing primary literature.
The methods were simple**:
As instructors, we often discuss how to approach reading a paper, but we rarely address the intimidation that many students feel when reading scientific writing. Often students get so bogged down in the details of a paper that they can’t see the forest for the trees. So I wanted students to avoid getting caught up in details they didn’t understand (statistical methods are particularly prone to this). My hope was that I could help students overcome their fears of both reading primary literature and then having something to say about it. I have to admit the first time I tried this I was terrified. I knew that I could briefly read a paper before a discussion and contribute if needed (which happened more often than I’d care to admit as a grad student), but I wasn’t sure how they would do. I wasn’t asking them to describe the paper in detail, and I specifically chose papers whose main points were relatively easy to understand. I hoped this was enough. To my relief, it worked!
Student comments on this activity:
I was able to describe paper reading very briefly in the beginning of section because my students had all been exposed to reading primary literature in previous courses. If this is the first time your class has seen a journal article, maybe more effort would be needed here. At the end of class, I would also take a few moments to point out what they couldn’t pick up from their quick reading. For example, I’d ask the teams some directed questions about the articles that I was pretty sure they wouldn’t have picked up on. My goal was to get them to be able to figure out the main story of a paper and realise that they could understand it without knowing all the details. But I didn’t want the take-away message to be that fully reading a paper is never necessary. The rest of the semester we discussed many papers, and they also needed to read and summarize papers working up to their final proposal, so there were many opportunities to teach them how to read and learn from the literature.
I was lucky because I had small groups of students to work with. I can see ways in which you could modify the activity for larger groups. Maybe having them share what the paper is about in smaller groups rather than the whole class, for example. Mainly I think it is important for them to have to say something about the paper. It is through being forced to quickly summarize the points that students actually learn to ignore all the detailed methodology that they tend to get caught up in. We can tell them to focus on the big picture but most of them (including me as an undergrad) won’t. By not giving the time to get bogged down, they quickly learned to look at the big picture. I was really pleased that both times my students were able to use this experience throughout the course. The discussions were more lively than I’d ever had before as a TA and I did very little talking.
In general, I think incorporating primary literature is important for learning in the sciences. Whether it is exposing students to the papers themselves or to their products in a deconstructed way, the efforts we make to teach students how to read the scientific literature can only expand their understanding of what science is all about. Now, whether they should be able to access the literature after their degree is complete is a whole other debate…
*if you are new to Teaching Tuesdays, I’ve been doing a series of posts that have derived from a survey I distributed broadly to ecology teachers earlier this year. If you are interested in knowing more about what ecology teachers are up to, you can read more here (intro, difficulties, solutions, practice and writing).
**after doing this activity with my students in a couple of courses I remember reading something similar. I think the article was maybe in an ESA newsletter (Eco 101, perhaps) but my cursory searching hasn’t found it. Although I had thought I downloaded it, there is nothing on my hard drive either. If you know of this article, please send me the link! (Update: link, thanks Gary)
I was recently asked:
Q: How do you decide what project you work on?
A: I work on the thing that is most exciting at the moment. Or the one I feel most bad about.
In the early stages, the motivator is excitement, and in the end, the motivator is guilt. (If I worked in a research institution, I guess an additional motivator would be fear.)
Don’t get me wrong: I do science because it’s tremendous fun. But the last part – finessing a manuscript through the final stages – isn’t as fun as the many other pieces. How do I keep track of the production line from conception to publication, and how do I make sure that things keep rolling?
At the top center of my computer desktop lives a document entitled “manuscript progress.” I consult this file when I need to figure out what to work on, which could involve doing something myself or perhaps pestering someone else to get something done.
In this document are three categories:
Instead of writing about the publication cycle in the abstract, I thought it might be more illustrative to explain what is in each category at this moment. (It might be perplexing, annoying, or overbearing, too. I guess I’m taking that chance.) My list is just that – a list. Here, I expand on each item to describe how the project was placed on the treadmill and how it’s moving along, or not moving along. I won’t bore you with the details of ecology, myrmecology, or tropical biology, and I’m not naming names. But you can get the gist.
Any “Student” is my own student – and a “Collaborator” is anybody outside my own institution with whom I’m working, including grad students in other labs. A legend to the characters is at the end.
Paper A: Just deleted from this list right now! Accepted a week ago, and the page proofs just arrived today! The idea for this project started as the result of a cool and unexpected natural history observation by Student A in 2011. Collaborator A joined in with Student B to do the work on this project later that summer. Collab A and I worked on the manuscript by email, and I once took a couple of days to visit Collab A at her university in late 2011 to work together on some manuscripts. After that, it was in Collab A’s hands as first author, and she did a rockin’ job (DOI:10.1007/s00114-013-1109-3).
Paper B: I was brought in to work with Collab B and Collab C on a part of this smallish-scale project using my expertise on ants. I conducted this work with Student C in my lab last year and the paper is now in review in a specialized regional journal (I think).
Paper C: This manuscript is finished but not-yet-submitted work by a student of Collab D, which I joined in by doing the ant piece of the project. This manuscript requires some editing, and I owe the other authors my remarks on it. I realize that I promised remarks about three months ago, and it would take only an hour or two, so I should definitely do my part! However, based on my conversations, I’m pretty sure that I’m not holding anything up, and I’m sure they’d let me know if I was. I sure hope so, at least.
Paper D: The main paper out of Student A’s MS thesis in my lab. This paper was built with contributions from Collab E, Collab F and Student D. Student A wrote the paper, I did some fine-tuning, and it’s been through a couple of rounds of rejections already. I need to turn it around again when I have the opportunity. There isn’t anything in the reviews that actually requires a change, so I just need to get this done.
Paper E: Collab A mentored Student H in a field project in 2011 at my field site, on a project that was mostly my idea but refined by Collab A and Student H. The project worked out really well, and I worked on this manuscript at the same time as Paper A. I can’t remember if it’s been rejected once or not yet submitted, but either way it’s going out soon. I imagine it’ll come to press sometime in the next year.
Manuscripts in Progress
Paper F: Student D conducted the fieldwork in the summer of 2012 on this project, which grew out of a project by Student A. The data are complete, and Student D and I have cooked up the specific approach to writing the paper; now I need to do the full analysis and figures for the manuscript before turning it over to Student D to finish. She is going away for another extended field season in a couple of months, so I don’t know if I’ll get to it by then. If I do, then we should submit the paper within months. If I don’t, it’ll be by the end of 2014, which is when Student D is applying to grad schools.
Paper G: Student B conducted fieldwork in the summer of 2012 on a project connected to a field experiment set up by Collab C. I spent the spring of 2013 in the lab finishing up the work, and I gave a talk on it this last summer. It’s a really cool set of data, though I haven’t had the chance to work it up completely. I contacted Collab G to see if he had someone in his lab who wanted to join me in working on it. Instead, he volunteered himself, and we suckered our pal Collab H into joining us on it. The analyses and writing should be straightforward, but we actually need to do it, and we’re all committed to other things at the moment. So, now I just need to make the Dropbox folder to share the files with those guys and we can take the next step. I imagine it’ll be done somewhere between months and years from now, depending on how much any one of us pushes.
Paper H: So far, this one has been just me. It was built on a set of data that my lab has accumulated over a few projects and several years. It’s a unique set of data for asking a long-standing question that others haven’t had the data to approach. The results are cool, and I’m mostly done with them; the manuscript just needs a couple more analyses to finish up the paper. I, however, have continued to be remiss in my training in newly emerged statistical software. So this manuscript is either waiting for me to learn the software, or for a collaborator or student eager to take this on and finish up the manuscript. It could be somewhere between weeks and several years from now.
Paper I: I saw a very cool talk by someone at a meeting in 2007, which was ripe to be continued into a more complete project, even though it was just a side project. After some conversations, this project evolved into a collaboration, with Student E doing fieldwork in summer 2008 and January 2009. We agreed that Collab I would be first author, Student E second author, and I’d be last author. The project is now ABM (all but manuscript), and after communicating many times with Collab I over the years, I’m still waiting for the manuscript. A few times I indicated that I would be interested in writing up our half on our own for a lower-tier journal. It’s pretty much fallen off my radar and I don’t see when I’ll have time to write it up. Whenever I see my collaborator he admits to it as a source of guilt and I offer absolution. It remains an interesting and timely would-be paper and hopefully he’ll find the time to get to it. However, being good is better than being right, and I don’t want to hound Collab I because he’s got a lot to do and neither one of us really needs the paper. It is very cool, though, in my opinion, and it’d be nice for this 5-year-old project to be shared with the world before it rots on our hard drives. He’s a rocking scholar with a string of great papers, but still, he’s in a position to benefit from being first author way more than I am, so I’ll let this one sit on his tray for a while longer. This is a cool enough little story that I’m not going to forget about it, and the main findings will not be scooped, nor grow stale, with time.
Paper J: This is a review and meta-analysis that I have been wanting to write for a few years now, which I was going to fold into a previous review, but it really will end up standing on its own. I am working with Student F to aggregate information from a disparate literature. If the student is successful, which I think is likely, then we’ll probably be writing this paper together over the next year, even as she is away doing long-term field research in a distant land.
Paper K: At a conference in 2009, I saw a grad student present a poster with a really cool result and an interesting dataset that came from the same field station as mine. This project was built on an intensively collected set of samples from the field, and those same samples, if processed with a new kind of lab analysis, could be used to test a new question. I sent Student G across the country to the lab of this grad student (Collab J) to process these samples for analysis. We ran the results, and they were cool. To make these results more relevant, the manuscript requires a comprehensive tally of related studies. We decided that this is the task of Student G. She has gotten the bulk of it done over the course of the past year and should be finishing in the next month or two, and then we can finish writing our share of this manuscript. Collab J has followed through on her end, but, as it’s a side project for both of us, neither of us is in a rush and the ball’s in my court at the moment. I anticipate that we’ll get this done in a year or two, because I’ll have to analyze the results from Student G and put them into the manuscript, which will be first-authored by Collab J.
Paper L: This is a project by Student I, as a follow-up to the project of Student H in paper E, conducted in the summer of 2013. The data are all collected, and a preliminary analysis has been done, and I’m waiting for Student I to turn these data into both a thesis and a manuscript.
Paper M: This is a project by Student L, building on prior projects that I conducted on my own. Fieldwork was conducted in the summer of 2012, and it is in the same place as Paper L, waiting for the student to convert it into a thesis and a manuscript.
Paper N: This was conducted in the field in summer 2013 as a collaboration between Student D and Student N. The field component was successful and now requires me to do about a month’s worth of labwork to finish up the project, as the nature of the work makes it somewhere between impractical and unfeasible to train the students to do it themselves. I was hoping to do it this fall, to use these data not just for a paper but also as preliminary data for a grant proposal in January, but I don’t think I’ll be able to get to it until spring 2014, which would mean the paper would get submitted in fall 2014 at the earliest, or maybe 2015. This one will be on the front burner because Students D and N should end up in awesome labs for grad school, and having this paper in press should enhance their applications.
Paper O: This project was conducted in the field in summer 2013, and the labwork is now in the hands of Student O, who is doing it independently, as he is based at an institution far from my own and has the skill set to do this. I need to keep communicating with this student to make sure that the project doesn’t fall off the radar and that it gets done right.
Paper P: This project is waiting to get published from an older collaborative project, a large multi-PI biocomplexity endeavor at my field station. I had a postdoc for one year on this project; she published one paper from it, but when she moved on, she left behind a number of cool results that I need to write up myself. I’ve been putting this off because it would require me to spend some serious lab time doing a lot of specimen identifications to get this integrative project done right. I’ve been putting it off for a few years, and I don’t see that changing, unless I am on a roll from the work for Paper N and just keep moving along in the lab.
Paper Q: A review and meta-analysis that came out of a conversation with Collabs K and L. I have co-taught field courses with Collab K a few times, and we share a lot of viewpoints about this topic that go against the incorrect prevailing wisdom, so we thought we’d do something about it. This emerged in the context of a discussion with L. I am now working with Student P to systematically collect data for this project, which I imagine will come together over the next year or two, depending on how hard the pushing comes from me, K or L. Again, it’s a side project for all of us, so we’ll see. The worst-case scenario is that we’ll all see one another again next summer and presumably pick things up from there. Having my student generating data might keep the engine running.
Paper R: This is something I haven’t thought about in a year or so. Student A, in the course of her project, was able to collect samples and data in a structured fashion that could be used with the tools developed by Collab M and a student working with her. This project is in their hands, as is first and lead authorship, so we’ve done our share and are just waiting to hear back. There have been some practical problems on their side that we can’t control, and they’re working to get around them.
Paper S: While I was working with Collab N on an earlier paper in the field in 2008, a very cool natural history observation was made that could result in an even cooler scientific finding. I’ve brought in Collab O to do this part of the work, but because of some practical problems (the same as in Paper R, by pure coincidence) this is taking longer than we thought, and it is best fixed by bringing in a new collaborator who has control over a unique required resource. I’ve been lagging on the communication required for this part of the project. After I do the proper consultation, if it works out, we can get rolling, and if it works, I’d drop everything to write it up because it would be the most awesome thing ever. But there’s plenty to be done between now and then.
Paper T: This is a project by Student M, who is conducting a local research project on a system entirely unrelated to my own, enrolled in a degree program outside my department, though I am serving as her advisor. The field and labwork were conducted in the first half of 2013 – and the potential long-shot result came up positive and really interesting! This one is, also, waiting for the student to convert the work into a thesis and a manuscript. You might want to note, by the way, that I tell every Master’s student coming into my lab that I won’t sign off on their thesis until they also produce a manuscript in submittable condition.
Projects in development
These are still in the works, and are so primordial there’s little to say. A bunch of this stuff will happen in summer 2014, but a lot of it won’t, even though all of it is exciting.
I have a lot of irons in the fire, though that’s not going to keep me from collecting new data and working on new ideas. This backlog is growing to an unsustainable size, and I imagine a genuine sabbatical might help me lighten the load. I’m eligible for a sabbatical, but I can’t see taking one without putting a few projects on hold, which would deny opportunities to a bunch of students. Could I have promoted one of these manuscripts from one list to the other instead of writing this post? I don’t think so, but I could have at least made a small dent.
Legend to Students and Collaborators
Student A: Former M.S. student, now entering her 2nd year training to become a D.P.T.; actively and reliably working on the manuscript to make sure it gets published
Student B: Former undergrad, now in his first year in mighty great lab and program for his Ph.D. in Ecology and Evolutionary Biology
Student C: Former undergrad, now in a M.S. program studying disease ecology from a public health standpoint, I think.
Student D: Undergrad still active in my lab
Student E: Former undergrad, now working in biology somewhere
Student F: Former undergrad, working in my lab, applying to grad school for animal behavior
Student G: Former undergrad, oriented towards grad school, wavering between microbial genetics and microbial ecology/evolution (The only distinction is what kind of department to end up in for grad school.)
Student H: Former undergrad, now in a great M.S. program in marine science
Student I: Current M.S. student
Student L: Current M.S. student
Student M: Current M.S. student
Student N: Current undergrad, applying to Ph.D. programs to study community ecology
Student O: Just starting undergrad at a university on the other side of the country
Student P: Current M.S. student
Collab A: Started collaborating as grad student, now a postdoc in the lab of a friend/colleague
Collab B: Grad student in the lab of Collab C
Collab C: Faculty at R1 university
Collab D: Faculty at a small liberal arts college
Collab E: Faculty at a small liberal arts college
Collab F: International collaborator
Collab G: Faculty at an R1 university
Collab H: Started collaborating as postdoc, now faculty at an R1 university
Collab I: Was Ph.D. student, now faculty at a research institution
Collab J: Ph.D. student at R1 university
Collab K: Postdoc at R1 university, same institution as Collab L
Collab L: Ph.D. student who had the same doctoral PI as Collab A
Collab M: Postdoc at research institution
Collab N: Former Ph.D. student of Collab H; postdoc at research institution
Collab O: Faculty at a teaching-centered institution similar to my own
By the way, if you’re still interested in this topic, there was also a high-quality post on the same topic on Tenure, She Wrote, using a fruit-related metaphor with some really nice fruit-related photos.
Quotes of the week from Joan Strassmann:
In my current biggest class there are 52 students… No one should email the professor or the teaching assistants more than three times in a semester. If you have already done that, there is a problem. Have some consideration. We have a lot to do.
I might suck it up and just deal with it if it only impacted me, but I get cranky when my wonderful teaching assistants agonize over their overflowing inboxes. They need every second of their time to learn how to teach, how to mentor, and how to do research.
If you haven’t seen it yet, an XKCD from last week was breathtakingly gorgeous and poignant.
One non-link I’m providing is the academic job wikis. I don’t know if these are common knowledge. Because search committees are so slow to let candidates know the outcome of searches (sometimes for reasonable reasons), applicants have taken the matter into their own hands by creating wikis in which people can list the status of searches on a big master template. These are actually a good place to find out about open jobs, but not so much for accurate information about the status of searches.
In the politics of publishing, Mick Watson just resigned from an academic editorial slot at PLoS One, because the journal took a few months to handle an appeal of the rejection of one of his manuscripts.
More on the politics of publishing: Çağan H. Şekercioğlu published a little piece in Current Biology about the academic cost of the rejection-resubmission cycle. It reads like a blog post, but it’s found in the pages of a for-profit Elsevier journal. It’s interesting how often posts about papers, like this one about another post about the Şekercioğlu piece, seem to garner more attention than the papers themselves.
Even more on the evolving publishing landscape: Some of the new, huge, journals are not discipline-specific, and the discipline-specific ones with good readership are now becoming far more selective than they used to be. So, papers on a specialized topic, designed for a specialized audience, might have trouble connecting to that specialized audience. This could be a problem, and this blog post at the Computational Evolution Group asks some good questions.
There’s an overt piece about “belief” versus “knowledge” in the context of science education over at the Sci-Ed blog (my favorite site about informal science education). Even more interesting and useful is the classy and substantial response by Holly Dunsworth, who was interviewed for the Sci-Ed piece and whose words were used selectively in a way that misrepresented her.
There was a great comment from Steve on this week’s post on undergraduate mentorship at R1s vs. SLACs. He pointed out that SLACs may create more doctoral students because their students are a lot less likely to be aware of what the day-to-day life of a grad student is like. (This is also another important reason for undergrads to become friends with grad students.)
The last item is more than three years old, so you might have seen it already. If you are particular about type, you might not be a big fan of Comic Sans. You might want to see what Comic Sans has to say for himself. Beware, he has a potty mouth.
Our scientific papers often harbor a massive silent fiction.
Papers often lead the readership into thinking that the main point of the scientific paper was the main point of the experiment when it was conducted. This is sometimes the case, but in many cases it is a falsehood.
How often is it, when we publish a paper, that we are writing up the very specific set of hypotheses and predictions that we had in mind when we set forth with the project?
Papers might state something like, “We set out to test whether X theory is supported by running this experiment…” However, in many cases, the researchers might not even have had X theory in mind when running the experiment, but were focusing on other theories at the time. In my experience in ecology, it seems to happen all the time.
Having one question, and writing a paper about another question, is perfectly normal. This non-linearity is part of how science works. But we participate in the sham of “I always meant to conduct this experiment to test this particular question” because that’s simply the format of scientific papers.
Ideas are sold in this manner: “We have a question. We do an experiment. We get an answer.” However, that’s not the way we actually develop our questions and results.
It could be: “I ran an experiment, and I found out something entirely different and unexpected, not tied to any specific prediction of mine. Here it is.”
Somehow it is unacceptable to say that you found results that are of interest, and that you are sharing and explaining them. If a new finding is a groundbreaking discovery that came from nowhere (like finding a fossil where it was not expected), then you can admit that you just stumbled on it. But if it’s an interesting relationship or support for one idea over another, then you are required to suggest, if not overtly state, that you ran the experiment because you wanted to look at that relationship or idea in the first place. Even if it’s untrue. We don’t often lie, but we may mislead. It’s expected of us.
In some cases, the unexpected origin of a finding could be a good narrative for a paper. “I had this idea in mind, but then we found this other thing out, which was entirely unrelated. And here it is!” But we never write papers that way. Maybe it’s because most editors want to trim every word that could be seen as superfluous, but it’s more likely because we need to pretend to our scientific audience that our results are directly tied to our initial questions, because that’s the way scientists are supposed to work. It would seem less professional, or overly opportunistic, to publish interesting results from an experiment that were not the topic of the experiment.
Let me give you an example from my work. As a part of my dissertation, in the past millennium, I did a big experiment in which I and my assistants collected a few thousand ant colonies in an experimental framework. It resulted in a mountain of cool data. This is a particularly useful and cool dataset because it has kinds of data that most people typically cannot get, yet are broadly informative. (There are various kinds of information you get from collecting whole ant colonies that you can’t get otherwise.) There are all kinds of questions that my dataset can be used to ask that can’t be answered using other approaches.
For example, in one of the taxa in the dataset, the colonies have a variable number of queens. I wanted to test different ideas about the environmental factors shaping queen number. This was a fine framework to address those questions, even though it wasn’t what I had in mind while running the experiment. But when I wrote the paper, I had to participate in the silly notion that the experiment was designed to understand queen number (the pdf is free on my website and Google Scholar).
When I ran that experiment, a good while ago, the whole reason was to figure out how environmental conditions shaped the success of an invasive species in its native habitat. That was the one big thing that was deep in my mind while running the experiment. Ironically, that invasive species question has yet to be published from this dataset. The last time I tried to publish that particular paper, the editor accused me of trying to milk out a publication about an invasive species even though it was obvious (to him at least) that that wasn’t even the point of the experiment.
Meanwhile, using the data from the same experiment designed to ask about invasive species, I’ve written about not just queen number, but also species-energy theory, nest movement, resource limitation, and caste theory. I also have a few more in the queue. I’m excited about them all, and they’re all good science. You could accuse me of milking an old project, but I’m also asking questions that haven’t been answered (adequately) and using the best resources available. I’m always working on a new project with new data, but just because this project on invasive species was over many years ago doesn’t mean that I’m going to ignore additional cool discoveries that are found within the same project.
Some new questions I have are best asked by opening up the spreadsheet instead of running a new experiment. Is that so wrong? To some, it sounds wrong, so we need to hide it.
You might be familiar with the chuckles that came from the bit that went around earlier this year, involving Overly Honest Methods. There was a hashtag involved. Overly honest methods are only the tip of the proverbial iceberg about what we’re hiding in our research.
It’s time for #overlyhonesthypotheses.
What’s the relative influence of teaching faculty on their fields as a whole? That’s hard to measure.
Here’s an easier, related, question to ask: What fraction of papers coming out have teaching faculty as authors?
A couple months ago, I perused the tables of contents of a variety of journals. Here’s what I found:
By the way, in Physical Review Letters, it was 1 out of 32; Chemical Reviews was 0 of 12.
I can sniff out a teaching institution in the US based on its name. The primarily-teaching university doesn’t quite exist in the same manifestation internationally, but even so it was clear that most international authors were associated with research institutions of one kind or another.
From this feeble back-of-the-envelope calculation, with a very small sample size, maybe up to 10% of papers in my fields have teaching-school authors in the US. Is this more or less than you’d expect?
What does it look like in your field, if you’re not an ecology/entomo/tropical type?
The academic publishing environment is being undermined by a bunch of extrinsic and intrinsic forces.
One such force is the genre of academic glamour magazines. They have massive impact factors that let you make a big splash when you land a spot inside one of them. Sometimes genuinely huge discoveries and advances end up in Science, Nature, Ecology Letters, or Cell. But most of what appears in these venues is a big sexy idea that doesn’t have any real lasting value. If science were nutrition, this would be junk food. It’s yummy, dressed up with everything to make it exciting, but rarely is there substance.
For those running labs in research institutions, the received wisdom is that you should be publishing in a glamour magazine once in a while.
For those of us at teaching campuses, the received wisdom is that you should be publishing once in a while.
There are increasing calls for principled stands against glamour mags. For those who stand too firm on principle and avoid any whiff of careerism when choosing a journal, Physioprof pointed out last year that you’re probably in a position of privilege if you’re saying that. I like Drugmonkey’s attitude: subvert the system by being entirely reasonable. Among these reasonable ideas: don’t cite glamour mags unnecessarily; don’t shelve a result because you can’t get it into one of them; and, as a reviewer, keep the standard crap out of them and support excellent work by your colleagues when you get it for review.
At teaching institutions, we approach this issue from an entirely different perspective. We rarely review for those venues, and typically don’t submit to them either. (I’ve submitted to Science/Nature a few times and reviewed a few times.) This suits institutional expectations. Landing a paper in Science or Nature would be an immense coup. Few, if any, on campus would ever think of it as a gimmicky paper, though the rarity of it wouldn’t be fully appreciated. (The only person I’ve ever worked with at a teaching campus who had one of these papers during my time actually has an overall below-par publishing record.)
These are glamour magazines because they are a flashy thing that impresses, because of the rarity itself. Gold and diamonds are valuable because they are scarce, or difficult to access. Likewise, it’s hard to get into glamour mags, and that’s what makes them flashy. The papers themselves don’t communicate the value or prestige of a research program; they’re just the flashy pieces of ornamentation that are deemed necessary.
What, then, is truly glamorous on a teaching campus? The answer is publications. Lots of ’em. The reason that this is glamorous is also because of its rarity. While many people publish on teaching campuses, status and glamour comes from doing it in high volume, because so few are able to do this. This is true even if the venues are not highly regarded, and even if the papers don’t end up being cited. If you want to show off your bling on a teaching campus, five papers in obscure regional or highly specialized journals actually seem more impressive than one paper in a top-notch journal. The people who are arbiters of your reputation on campus might not be able to assess publication quality, but they sure can assess publication frequency.
I make a point of publishing in what I consider to be venues appropriate for my work. I avoid merely descriptive or confirmatory work that doesn’t introduce substantial new ideas, so I try to avoid journals that mostly publish this kind of work. I could change my focus and crank out many more papers than I do, in lower-impact journals, but that would harm my credibility among my scientific peers even as it increased my profile on campus. Other scientists manage that tradeoff in different ways, of course. I’m not overly concerned, as long as people work on their passion and make sure that it gets shared with the world.
What is the distinction between publishing for glamour and publishing for genuine impact? It’s probably the same as the distinction between measured “impact factor” and long-term citation rates.
This is a manifesto about science research.
The National Museum of the American Indian opened on the National Mall in Washington, DC in 2004, as a branch of the Smithsonian Institution. The building is a work of art and the exhibits are mostly engaging and informative, though the most remarkable thing about the place is the food court, which is its own lesson in biodiversity and cultural plurality. It’s worth a visit, along with scores of other great museums in DC.
The mission statement of the museum reads like boilerplate, about advancing knowledge about the diversity of Native American cultures in the Western Hemisphere. The museum itself accomplishes this task as well as it can, considering the massive diversity of peoples that could be represented within one building.
The NMAI was born after a long gestation of more than a decade. The creators of the museum had a tremendous challenge in presenting a unified structure that communicates the experiences of so many different kinds of peoples, ranging from those in the Arctic to Patagonia and every place in between. They developed many mini-exhibits featuring a representative but small subset of the peoples of the Americas, with citizen curators who worked with the museum professionals in an attempt to use a small amount of space to represent a culture. It was ambitious, and the success of these efforts varies. The result is a visual melange, and a cognitive jumble. This medium appears to be, in part, the message of the curators.
When the museum was being developed, it is my understanding that the creators had a more challenging mission, which isn’t explicitly stated on their website. They realized that most US citizens have a mistaken view of the role of Native Americans in our past and present.
This primary task for the museum is very straightforward: Tell the people that Native Americans still exist. Tell the people that Native Americans are one of us.
I suspect that museum staff hopes that visitors to the museum leave thinking, “I had no idea! This was a total surprise.” I would guess that the typical visitor walking through the doors for the first time might expect a series of maps, valuable old artifacts, and a history lesson. Instead, the exhibits are about the lives of people who are alive today, where they live, how they make their living, and the great diversity of their spiritual, linguistic and social practices.
American Indians are not (just) a part of history. They are a large set of vibrant and active cultures living within and among all of those who live in the Americas. If you learn about American Indians in school in the US, the story you learn is that the European settlers steadily and systematically exterminated Native Americans. That story is false. Native Americans persist. They are both distinct and a part of us.
What does this have to do with being a scientist?
The mission statements of this site, of sorts (the “about” tab and “rationale for existence”), say that I wanted to represent the experience of doing research at a teaching institution. There are many kinds of teaching schools, and they all have different opportunities and challenges. I thought that those of us doing research in these environments should have a bigger voice.
I have received unanticipated (and uniformly wonderful) feedback from readers, especially senior graduate students, postdocs and junior faculty. Based on what they’ve told me, I now realize that I had jumped the gun with my mission statement. I started by getting into the nitty-gritty of what it’s like doing research on a teaching campus. That wasn’t a mistake, but I didn’t adopt the broader perspective. I needed to follow the example of the creators of the National Museum of the American Indian. I neglected to frame this endeavor with an elemental message:
We are doing research on these teaching campuses. Taking this kind of job doesn’t mean that our research career is over. We do research in your field, and we train those who become your graduate students. We create new knowledge, and we are scholars just like you.
We are one of you.
We are rarely on disciplinary grant review panels or the mastheads of journals. We aren’t able to hire your grad students as postdocs. We are rarely invited to give seminars at your big research universities, because schmoozing us won’t yield as many tangible benefits as schmoozing someone else.
This doesn’t invalidate the fact that many of us have good research labs. We read and publish in the same journals as you. We get funding from the same agencies, and we have specific talents and resources that allow us to get stuff done and to be valuable collaborators. Our undergraduates do not handicap our research programs. These students are our greatest asset. They are both the means and the ends.
The grad student who opens a research lab on a teaching campus is not a failure. Be proud. Do not expect us to disappear from science. If you keep us as members of your research community, we will be able to participate in it.
Don’t see this as settling for less.
It’s not less, unless you perpetuate this perspective.
On teaching campuses, faculty aren’t required to do much research, if any. That doesn’t prevent some of us from running serious and productive research labs. We have to do some things differently. We also have the opportunity to do things differently.
And, let’s face facts. Tenure-track positions are in steady decline as the 20th century notion of the professoriate is relegated to the history books. Nowadays, lots of researchers are taking teaching positions. Research institutions, and their faculty lines, will not disappear, but research broke out of traditional research institutions in the United States a long time ago.
Researchers have a variety of motives for taking jobs at teaching schools. Some are seeking to do both teaching and research actively, others are more excited about teaching, and still others might prefer a research institution but have personal reasons for choosing a particular job. While there is more competition for tenure-track jobs at top research universities, none of these jobs is inherently easier, less stressful, or more rewarding, if you’re doing them right.
It’s not easy to do research at any university. You’re working to stay funded from grant cycle to grant cycle, and juggling the competing demands of student training, teaching, service, writing, and outreach. At teaching campuses, we do things differently than at research institutions. That’s what this site is about – how research gets done at teaching schools.
It sounds like I’ve struck a chord so far. I’m hopeful that what I choose to write here continues to be helpful to those developing their career paths, at all levels. So far, I’ve heard that the most helpful aspect has been the formerly tacit message: that we exist.
It should seem perfectly natural for labs at research universities to train people to run research labs on teaching campuses. After all, that is the actual status quo. My job here is, in part, to make this obvious fact more visible, and a shift in this perception will continue to produce more great research labs on teaching campuses. If this site is capable of shifting perceptions, then it is my hope to write this blog out of existence.
And now, back to our normal programming.