Summer is sometimes a contemplative time for me. It used to be that long hours in the field would give me time to think, but now it is just as often while I’m weeding my garden or doing some other summer activity. Lately I’ve been thinking a lot about negative results.
I have three projects that I am struggling with in various ways because of their ‘negative result’ nature.
The one that has lingered the longest was once a part of my PhD dissertation. I was looking at phenotypic selection across space and time to assess whether selection was more variable in one or the other. It matters when thinking about evolution in general because selection that varies across space is more likely to lead to local adaptation, while variation across time is more likely to maintain trait variation in a population. It also matters on the small scale of what it means when individual researchers measure selection. How representative is that measurement? Since few studies have made these measurements across both space and time, I thought my efforts would be a great contribution. And with one of my committee members we also summarized the literature that had measured selection in either multiple populations or multiple years. Long story short—there was lots of variation, but in neither my dataset nor the literature was it more variable across space than time. An essentially negative result, although a real one. I’ve since added more data and resolved that this is the last time I tackle writing it up. It is going to get published this time, I swear! But it is surprisingly difficult to write about something with no difference, even if that lack of difference shows something important.
Next we have a paper with my former PhD student who was working further on understanding floral traits in our system of Penstemon digitalis. We were asking whether a floral scent that is emitted from nectar was an honest signal of nectar to pollinators. The long story short there is that it’s complicated. If we had done fewer experiments, there probably would have been a beautiful high-ranking journal article to come out of this work, because some of the pieces fit together so nicely. Then we had to go and do more, and that beautiful story became “essentially a negative result”*. We’re at the second journal with the study and we’ll see what comes back from these next reviews, but it is tough. I believe that the story we’re telling is honest (haha) and looks at honesty more deeply than is often done, but the answer we get back is complicated. Those complications make it tough to write about and show that we need a lot more work to figure out everything that is going on in that system.
Finally, we’re working on plant-pollinator interactions in Swedish towns, and some surprising results have come out there as well. My first master’s student basically found no real differences in pollen limitation or fitness across an urbanization gradient. This summer there is another student taking up the question, expanding the studies to a few more towns and repeating the survey done before. I’m not as convinced of this negative result as I am of the others. It might have more to do with the sampling effort, but if we find the same thing this year I’ll have to concede that it is real. And then comes the difficulty of writing up another set of essentially negative results.
Nature is complicated. Rarely do things fit exactly into our nicely thought-out theories or hypotheses. We should expect that our systems don’t always line up with the theory. When the trend is clearly in the opposite direction, eventually that makes its way into refining the theoretical framework of the field (we hope). But what about when we expect something to have an effect and it doesn’t? The first thing to do, of course, is to make sure that our design is robust and the lack of effect isn’t due to some failing of our studies. But what if that checks out?
I suspect there is a lot of important information out there buried in files of experiments and studies where no difference was detected. I am hardly the first to make this observation. Long ago, when I began my career, I had lofty ambitions to never let data languish. But now I can see why these kinds of datasets do. My own have. It takes a lot more to write a convincing paper about a lack of difference than about a difference. Storytelling is easier and neater when the facts fit the theory. It isn’t just reviewers and readers that have a more difficult time with negative results; I’m finding that as a writer it is also much more difficult to process and, well, just get on with it and write these stories.
I don’t have any answers here, but it seems to me that we should make an effort to tell the less clear tales of our data adventures. My big push to finally get the variation in selection paper out will be my attempt at this. I’m finding more blocks to my writing than usual, but I think (hope) I will eventually manage. Hopefully the editors and reviewers will appreciate the difficult tales I am working on. Because it is really tempting to set them aside for the simpler stories I have waiting to get out as well.
As for those little wee datasets and observations that aren’t enough for a paper, I’m going to try to follow in some other bloggers’ footsteps and highlight them in future blog posts so they see some light of day.
*It bothered me when a coauthor referred to our results as such, but then I realised they were right. It started my internal struggle with thinking about the real issue I have with these papers. Negative sounds, well, negative, as in bad. I think it is hard not to have that connotation in your mind, and nobody wants to think about their data as bad. I do have data that is bad, as in not reliable for various reasons, and I’m comfortable throwing that out (although it is not easy, and frustrating). But when the data are ‘good’ yet negative, it is different. These are the data that I’m finding difficult to write about and that I’m guessing sit around in many a scientist’s ‘file cabinet’.