A case of scientific dishonesty has hit close to home and got me thinking. This isn’t a post about the details of the case (you can read more here if you’re interested) or the players involved (I don’t know them beyond saying hi in the hallway), nor is it a comment on this particular case, since I don’t have any more information than what is publicly available. So if you’re looking for insider gossip, the following is bound to disappoint. Instead, this example has got me reflecting in general about scientific dishonesty and what I can do about it.
The first obvious answer is that I can conduct myself and my research in an honest way. Sounds simple, right? In theory it is. I can be accountable for my own actions, of course, but science is rarely a solo endeavour and I tend to collaborate. So it isn’t just me that I need to look out for.
So how do we make sure our research collaborations are honest? It is a tougher thing to address because there is a lot of trust in science. Even if you are involved with the whole process from inception to publication, if you aren’t personally doing the work it is possible that one of your collaborators is acting dishonestly. So I guess the next answer is that you should choose honest collaborators. Again, that is an overly simplistic answer to a complex issue, and one that might not be possible to live up to. How do you know whether your collaborators, students, etc. are honest? We often assume they are, but clearly there are pressures in science to fudge data and chase high-impact publications, or scientific dishonesty wouldn’t happen.
It is easy to say that as a co-author you should check that the results are valid, but it can be more complicated than it sounds. Depending on the data, you might not have the expertise to ferret out wrongdoing in every piece of the collaboration–that is often why we collaborate in the first place! Or it might be difficult to independently verify all results with your own analyses. Often, as student advisors, we evaluate summary statistics and the big picture rather than the raw data. Even if we traced every analysis from raw data to final product, I’m sure that if someone wanted to fake the data, they could also figure out how to hide that from their advisors and collaborators.
So how do we promote scientific honesty? Well, this is a really tough one. As was pointed out by my colleagues at our weekly lunch, it isn’t as if you can just say “don’t cheat/be honest” and all is good. Likely people who are looking to cheat the system will do it whether or not their advisors say anything about it.
For my part, I have found that having shared folders where data, analyses and manuscripts live has helped me stay more informed about what my students are doing. I hope that an offshoot of this approach is that it also encourages honesty in my group, because the raw data are available to trace back. It also helps ensure that data are backed up in multiple ways so they’re not lost. But I am wondering what more I can do to help my group maintain scientific honesty.
Ultimately I will continue to trust the work of the people I do science with until they show me otherwise, because I don’t want to live or work expecting people to cheat. But I am also cognisant of the pressure in science to perform.
In Sweden, where I’ve been applying, the national granting agencies have success rates in the 10-15% range, and some go as low as 5%. There is incredible pressure on young researchers to establish their careers. Few permanent positions come up, and many stay on after the postdoc stage on soft grant money until those permanent slots open up. If you’re like me and miss securing funding when you need it, you can be unemployed for a year. And the pressure in Sweden is in no way unique; it is just the kind I’m most familiar with at the moment. With incredible pressure, I’m sure, comes the temptation to cheat the system.
I don’t have great answers to how to solve issues of scientific honesty, but personally I aim to foster a kind atmosphere in my group. I hope my lab is a place where it is ok to fail and make mistakes, where a career in academia is not seen as the only successful outcome, and where we can think creatively about our data when our hypotheses are not supported. I hope I never have to face true scientific dishonesty in any of my collaborations, and that I can lead by example for those I train. But although I’ve been pondering concrete things I can do in my own lab, it is difficult to come up with an approach that can avoid scientific dishonesty entirely.
I’d love to hear other suggestions for fostering scientific honesty!
As you say, being honest builds honesty in others, I think. Open access to datasets seems to be a great platform for increasing and celebrating honesty too. Industry requirements for obscuring evidence (due to intellectual property issues, for example) need careful consideration, I think, for the future of honesty.
Thanks for the post!
When cheating is so easy to do and so highly rewarded, it is no wonder that it happens from time to time. This creates a very toxic environment, where honest errors carry the scent of fraud, and stifles constructive conversation. The ultimate answer to this situation takes changing the reward structure in science, which now boils down to the incentive to find and publish tidy and statistically significant results. However, most scientists are not in the position to hire, publish, or award grants, and cannot change the policies that affect those decisions (but if you are, please see steps to change the situation with the Transparency and Openness Promotion Guidelines https://cos.io/top).
For everyone else, one best practice is to use a workflow that creates persistent records of actions and allows for easy collaboration, sharing, and persistent archiving. Another practice is to ensure that you are honest with yourself, a necessary first step in being honest with others. Preregistering data analysis plans is one way to do this: a priori hypothesis tests, data exclusion rules, and confounding variables are determined before seeing the data. This process has been mandated in clinical sciences for years, but the advantages are the same for basic or pre-clinical work. Our education campaign, the Preregistration Challenge, is designed to encourage researchers to try it out and see the advantages for themselves; please check it out! https://cos.io/prereg
You might also find this interesting in Nature yesterday: http://www.nature.com/news/integrity-starts-with-the-health-of-research-groups-1.21921
I’ve been thinking about these things for some time and have been collecting my thoughts into a manuscript on scientific integrity in ecotoxicology. That is the corner of ecology where the discredited fish-prefer-plastics study falls, and I’ve followed the study with interest from its initial publication with fanfare, through its unraveling and fall. The incentives for ecologists, like any other scientists, to publish a high-profile article in a glam journal describing something really new, like fish preferring plastic or arsenic life, can make the difference between a path to a comfortable, well-funded position and a lab with their name on it, versus struggling for years to make a go of it with an otherwise respectable publication record.
The high-stakes biomedical field leads in both science scandal and in examination and self-correction. A recent analysis, “Why Do Scientists Fabricate And Falsify Data?” by Daniele Fanelli, Elisabeth Bik and others, looked for patterns in the circumstances of authors of papers showing image manipulation. In short, early-career researchers and researchers working in countries where publications are rewarded with cash incentives were at higher risk of image duplication. Bik had heroically and manually inspected published cell biology images for evidence of simple duplication, which she considered honest mistakes, versus active manipulation with Photoshop-like tools to change the data from showing one thing to something else; the latter was considered dishonest fabrication or falsification. (That alone is a study in perseverance and a sharp eye: she examined 20,621 images, which yielded a few hundred presumed honest mistakes or dishonest fabrications.) They found that academic culture, peer control, cash-based publication incentives and national misconduct policies were associated with scientific integrity issues. For instance, individuals appeared less likely to engage in misconduct when scrutiny of their work is ensured by peers, mentors or society, and national misconduct policies with an independent investigatory role were better than purely local policies where institutions investigate themselves. Countries with cash incentives for researchers publishing in high-impact-factor journals were really priming the pump for trouble (India, China, and Argentina stood out). I do sometimes wonder whether the hyper-competitiveness forced on us all by diminishing chances of funding may turn into a science remake of Glengarry Glen Ross. Not a cheerful thought.
Certainly the kinds of practices Amy P. and David Mellor described, with openness, mutual support and criticism, are on the right path, but nothing is foolproof, as the Uppsala University microplastics case showed. The old adage that extraordinary results require extraordinary evidence is worth keeping in mind.
The Uppsala case is an unusual and rare one. The data I’m aware of show that misconduct mostly comes from a small number of serial offenders, from authors from a few countries, mostly involves low-profile papers rather than high-profile ones, and is not especially common at Nature and Science (see, e.g., Bik et al., linked to by David above, and data here: http://arxiv.org/abs/1412.2716).
I don’t think the data actually support a story that misconduct arises from the same incentives and pressures all scientists face, or that it’s especially prevalent at journals that publish “splashy” work.
More commentary on this here: http://datacolada.org/40