It took a while for the rise of the internet to destabilize the academic publishing industry, but the major for-profit publishers have still been adept at consolidating their racket. Academic institutions, and individual academics such as myself, continue to be fleeced, donating money to corporations in a sector with absurdly high profit margins. If you’re reading this site, you’re presumably aware of all the disruptions in academic publishing that the internet has facilitated: preprint servers, Sci-Hub and LibGen, open-access fees, journals that are entirely open access, and so-called “predatory” journals.
Let’s talk more about “predatory” journals.
These journals seem more parasitic than predatory. They are merely taking advantage of the perverse incentives that we have developed in higher education.
I’ve never been wholly comfortable with calling these journals “predatory.” In the earlier days of this site, I called them “pseudojournals,” and I think that label still holds up. (And whaaat? Oddly enough, that blog post is better cited than some of my papers published around the same time.) What makes these journals suspect isn’t who runs them, or where they are published, or how much people pay; it’s simply the lack of peer review.
As far as I’m concerned, the problem with these fake-ish journals is that they pretend to have substantial peer review, and the lack of it makes them something less than an academic journal.
The idea behind calling these journals “predatory” is that they take advantage of hapless scholars who are entirely unaware that they have submitted to a journal that lacks the reputation and respect of its peers, and that publishes very quickly without peer review. While I’m sure that has happened to some people doing research in relative isolation from the broader academic community, I think this assumption doesn’t give enough credit to our colleagues who are navigating the academic landscape. Over the years I’ve become quite familiar with a variety of colleagues (in all kinds of disciplines, in a broad variety of countries) who have published in the sketchiest of journals, and let me tell you, there was nothing predatory about this arrangement. The absence of peer review, rapid publication, and cheap publication fees — those were all features of the process that attracted the authors. They chose journals that allowed them to check the boxes for the expectations they were being held to. Their work was unlikely to be published in a more selective journal with higher academic standards. Some were just in a hurry, some knew their employers didn’t care, and others were intentionally gaming the system.
The whole thing was parasitic, leeching resources from the perverse incentives that are pervasive in higher ed. The publishers charged a rather cheap fee (a few hundred bucks) because they knew that many institutions are concerned only with a thin veil of academic respectability rather than genuinely sound academic work. The people listed on the editorial boards of these journals were parasitizing the publisher, because they needed or wanted the appearance of academic clout that comes from sitting on a journal’s editorial board. The authors were publishing because their institutions were more concerned with counting papers than with paying attention to their contents or the quality of peer review.
The quality of peer review is highly variable, and the publisher of a journal isn’t really an indicator of whether it’s a “real” journal or not. For example, I’ve heard some stories about the editorial practices (or lack thereof) of some journals published by Hindawi, but then again some Hindawi journals are downright solid, and a year ago they were bought by Wiley, which will give them all the more legitimacy. Some MDPI journals have solid peer review; others do not. Same for Frontiers. So how can one decide whether or not a journal article “counts” or is a “real” publication? Well, unfortunately, it’s not that simple and never has been. I’m a strong advocate for society journals, and I think the backing of a disciplinary academic society (not counting AAAS or NAS here) means a lot. Beyond that, good luck.
After all, bad papers with shamefully weak peer review end up in Nature on the regular. When we look at Nature charging authors $10,000 to make a paper available to the public, who’s the predator? Surely not the guy who sets up a random website, pronounces it a “journal,” and charges far less to make science freely available to everybody. That’s not predation; it’s just skimming a little off the top.
If we want to “fix” the “predatory journal” problem, then we need to fix ourselves. We need to actually pay attention to the scholarship that our peers are doing. As long as we’re willing to use counting as a surrogate metric for the quality of academic work, these perverse incentives will always be with us in one way or another. If you don’t like parasitic journals, don’t be mad at the people taking advantage of the system, because we designed our system for them. Academics, heal ourselves.
By the way, one thing to keep in mind is that a single, very vocal person managed to create an outsized impact on the perception of pseudojournals: librarian Jeffrey Beall. He decided that he was equipped to draw a line between journals that were “predatory” and those that were not. His criteria were highly subjective and involved many factors other than peer review. His list became widely distributed as “Beall’s List,” and though it stopped being published years ago, there are people still carrying the torch and referring to this defunct list, which seems to me to be ill-advised considering the rapid evolution of the publishing landscape.
Here’s a thing that some folks haven’t realized about Beall and his list: his crusade against predatory journals was a crusade against the open-access publishing model. His problem wasn’t the existence or quality of peer review; it was the idea that academic literature was freely available to everybody and that the for-profit publishers and/or academic societies were no longer the financial gatekeepers of publishing. His real concern was payola, the pay-to-play situation in which academics pay money to publish, which could insert a conflict of interest into the peer review process. Unlike Beall, I’ve seen how the peer review process can be structured so that the fees paid by authors are separated from editorial decisions. But you can get into pseudojournal territory when editors lack independence. Beall’s concerns about pay-to-play as a corruption of the peer review process seem quaint and misguided when you look at all of the other potential for corruption and malfeasance in academic publishing. It seems to me that his politics were okay with for-profit companies making lots of money off the traditional publishing model, but held that the author-pays open-access model was somehow bad. I just don’t get how a person can arrive at that position, but there we are.
4 thoughts on “Updating my perspective on “predatory” journals”
Definitely agree with this take, Terry. “Pseudojournal” is right, or I used “fake journal” here: https://scientistseessquirrel.wordpress.com/2021/12/14/those-journals-may-be-fake-but-i-dont-think-theyre-predatory/. It’s about incentives, not about taking advantage of naive authors.
Some of them are somewhat predatory: on students, and on those who are inexperienced with present-day scientific publishing (i.e., not from academia or government, by and large, such as some private consultants who don’t usually publish). I keep having to remind myself to have “the talk” with my research students once their first paper is published, because within days they inevitably get these “invitations” from other “publishers” who saw “their paper, which left a deep impression” and invite them to “submit a paper to our next issue, which is lacking two manuscripts, for quick publication.” It seems every year I need to head someone off from doing that, and tell them the bad news that it’s a phony come-on. A warning about this probably needs to be added to all research-student orientations.
Those of us who are at all experienced in academic publishing are, indeed, well aware of this by now, and don’t fall for the mimicry.
I think we should look at publishing and published material as a gradation from very weak to very strong, or from poor to very high standards. All should be allowed to grow. Those whose work is exclusive, expensive, and probably highly technical can have their papers published in those journals. But those with lower-level papers, like student project papers, should make do with lowly journals and grow with time. Let us have a free market. It will allow upcoming authors to grow, instead of becoming stranded because of unrealistic hurdles created by some publishers who are just looking for chances to exploit upcoming authors. What is the use of locking up knowledge in the so-called “high impact journals” instead of allowing access to knowledge for development? Knowledge not shared is useless!
I agree with the previous comment that “locking up knowledge” is bad. Researchers publish where they can within the limits of their funding, their topics, their communities. The so-called good journals are not any better at identifying quality research than other journals. How is it that a Nature article that someone paid 10k euros to get published ensures some kind of quality? It certainly does not. Look at the recent scandal of the Canadian behavioural ecologist Jonathan Pruitt and all the journals where his work was published. We have to teach students how to look beyond splashy headlines written by comm. specialists and critically review work, whether as a reviewer, or as a reader. Every article should be judged on its own merits. No scientific journal has a process that can guarantee the quality of all the articles they publish.