Below is a statement from the Open Access Scholarly Publishers Association (OASPA) in response to the recent “sting” that was reported in Science in an article entitled “Who’s Afraid of Peer Review?”
OASPA was established in 2008 to bring together a growing community of high-quality open access publishers who were showing how research could be published according to the highest standards and made freely and openly available at the point of publication. Our goal was, and continues to be, promoting best practices in open access publishing and providing a forum for constructive discussion and development of this field. Open access publishing has continued to grow since the establishment of OASPA, and is now a well-established part of the scholarly publishing landscape.
A second reason for the establishment of OASPA was the emergence of a group of publishers that were engaging in open access publishing without having the appropriate quality control mechanisms in place. OASPA’s approach to addressing this issue was to establish strict criteria for entry into the Association, such that applicants are screened for policies relating to publication fees, peer review, licensing, etc. For publishers that do not initially meet our criteria, we provide a detailed list of our concerns to the publisher and encourage them to adjust their policies accordingly.
The “sting” exercise conducted by John Bohannon that was recently reported in Science provides some useful data about the scale of, and the problems associated with, this group of low-quality publishers, which is an issue that OASPA has worked to address since the Association was first created. While we appreciate the contribution that has been made to this discussion by the recent article in Science, OASPA is concerned that the data presented in this article may be misinterpreted. We will issue a fuller response to this article once we have had a chance to review the data in more detail (and we applaud the decision to make the data fully available), but for now we wish to highlight what can and cannot be concluded from the information contained within this article.
The greatest limitation of the “sting” that was described in the Science article is that “fake” articles were only sent to a group of open access journals, and these journals were not selected in an appropriately randomized way. There was no comparative control group of subscription-based journals, despite the exhortation from Dr. Marcia McNutt (the Editor-in-Chief of Science) in the accompanying Editorial that publishing models be subject to rigorous tests. In contrast, more rigorously designed studies that have been peer-reviewed prior to publication provide evidence of the rigor and benefits of open access journals relative to their subscription counterparts (http://www.biomedcentral.com/1741-7015/10/73 and http://onlinelibrary.wiley.com/doi/10.1002/asi.22944/abstract).
Another limitation of the study described in Science concerns the sampling of the journals that were chosen as targets for the “sting,” which were drawn from two lists – Beall’s list of ‘predatory’ open-access journals, and the Directory of Open Access Journals (DOAJ). Publishers were selected from these lists after eliminating some on various grounds, including a journal’s language of publication, subject coverage, and publication fee policy. Ultimately the “fake” articles were sent to 304 journals, out of which 157 journals appear to have accepted these articles for publication. Given the selection criteria that were used in determining where to submit these “fake” articles, it is not possible to draw any meaningful conclusions about the pervasiveness of low-quality open access journals in the wider publishing ecosystem.
Overall, although the data undoubtedly support the view that a substantial number of poor-quality journals exist, and some certainly lack sufficient rigor in their peer review processes, no conclusions can be drawn about how open access journals compare with similar subscription journals, or about the overall prevalence of this phenomenon.
Based on the information that OASPA has been able to collect so far, it seems that several OASPA members received and rejected the “fake” article, but a small number of members accepted it. As soon as we have more detailed information we will be contacting these members to ask for their views about how this happened, and the steps that they will be taking to address any weaknesses that may exist in their peer review procedures. OASPA has a complaints procedure that is used to investigate any complaints we receive about our members, and in the event that a publisher is not upholding the OASPA code of conduct, their membership in the Association may be terminated.
In our view the most important lesson from this recent article in Science is that the publishing community needs stronger mechanisms to help identify reliable and rigorous journals and publishers, regardless of access or business model. OASPA will continue to scrutinize membership applications according to our membership criteria, and listen to feedback from the community, so that membership within OASPA can continue to be an important signal of quality within the open access ecosystem.
Science also failed to do a literature search or cite prior work: randomly generated papers, created by free online paper generators such as SCIgen and Mathgen that anyone can find with a quick Google search, have been accepted by conferences and journals since 2005.
I wonder why the author of the “fake” article did not also submit it to 307 subscription-based journals, so as to have a better basis for comparing the “bad quality” open access model with the “high quality” conventional model?
It could happen that some OASPA members have accepted the article for publication. Should OASPA, however, take on the role of “investigator” and ask those members why they accepted it? I am not sure.
Because subscription-based journals often ask you to pay a fee for submission… I was asked for USD 700 simply to submit my paper.
Another feature of the Science sample was that all the journals had author fees, hence an incentive to profit from publishing. We need to explore alternative models, without author fees or reader fees. I have been doing this a bit for Judgment and Decision Making, the journal I edit. For example, granting agencies typically allow grantees to pay publication charges from grants. I asked people at NSF (U.S.) if they would make a small contribution to the journal, such as $5000/year, on the grounds that they would save at least this much in publication fees of grantees (who do publish in the journal). They have a policy against this, and I presume that other granting agencies do too. Why?

Another issue is production costs. Many free journals require LaTeX, but this is (apparently) difficult for the large number of scholars who think that Microsoft Word is the only way to write anything on a computer. To solve this problem, it could be possible to develop some easy alternative along the lines of Google Docs (or building on the tag in html5), which can be converted into nice-looking copy. Google Docs (like Word) was not intended for this purpose, but it is a start.

The main work of journals is already done, mostly, by unpaid volunteers (editors and reviewers, not to mention authors). Production is only a small step beyond, if it could be put technically within reach of the masses, and perhaps supported with a little external funding.
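As a rough illustration of the savings argument above, here is a small back-of-the-envelope check; the number of grantee papers per year and the typical per-article charge are hypothetical assumptions, not figures from the comment:

# Illustrative sketch only: could a flat annual contribution to a free journal
# cost a funder less than paying per-article publication charges elsewhere?
# All figures below are hypothetical assumptions.
papers_per_year = 5            # assumed grantee papers appearing in the journal each year
typical_charge_usd = 1500      # assumed per-article publication charge at a fee-based journal
flat_contribution_usd = 5000   # the annual contribution suggested in the comment

fees_avoided = papers_per_year * typical_charge_usd
print(f"Estimated publication fees avoided: ${fees_avoided}")
print(f"Flat annual contribution:           ${flat_contribution_usd}")
print(f"Net saving to the funder:           ${fees_avoided - flat_contribution_usd}")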
The way Bohannon selected the targets from the DOAJ introduces a bias in the results, as I claim in my reaction to the sting study: http://im2punt0.wordpress.com/2013/10/04/science-mag-sting-of-oa-journals-is-it-about-open-access-or-about-peer-review/
@Jonathan Baron: The NSF idea is interesting, and I assume the rejection is because there is no causal link (grant->article->fee). One solution might be publication fees for non-profit journals. Or, maybe funding bodies like NSF (or the Wellcome Trust, who already require and pay for Open Access even with subscription-based journals) would be willing to fund if only there were an explicit label for this sort of money going to something like JDM, so that they can justify it internally. One problem is how to define “something like JDM” – again, non-profit seems key, or else you create a new incentive to set up shoddy journals at will to get NSF etc. funding.
Exactly how does Jonathan Baron’s model differ from the subscription model except that the subscriptions are now paid directly by a small group of funders rather than by the whole collectivity of universities, research institutes, etc? What does this mean for those fields that do not receive any significant amount of external funding? Doesn’t it lead to the accretion of an undesirable amount of power in the hands of funders to influence what gets published where? Surely a narrow pipeline funder>lab>journal is a recipe for dull normal science rather than for innovation…
This is a good point. I agree with it. And my “model” is not my ideal preference, just an attempt to make sure that my own journal (JDM) survives longer than my ability to produce it, which is what I do now for no remuneration (just as most editors and reviewers work for no remuneration).

My ideal model is to have a technology that makes it possible for small scholarly groups to produce nice-looking journals for minimal amounts of cash, by making production easier. This model is now in effect for journals that require LaTeX, but apparently that is hard for people to use. (I believe that this is true mainly because Microsoft has convinced people that text editors are useless, so they must use Word. I used mark-up languages before LaTeX existed, and I have always found Word to be too difficult for ME to learn.) I think that the development of such a technology is a high priority, but most members of OASPA, being publishers, have no interest at all in it. Others do, and are working on it, but slowly.

Right now the closest thing is Google Docs, because this is an interface that most authors can manage to use. So one path is to find a way to convert this to something that looks nice. This is not the only path. The original idea of HTML was that the language is so easy that anyone can learn it and make web sites. That is still true, even though most people think they have to hire a professional web developer. But the original idea needs to be brought to journal publishing (not just in html, but also pdf, etc.).
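To make the “convert an easy-to-write source into nice-looking copy” idea concrete, here is a minimal sketch assuming the pandoc command-line tool (plus a LaTeX engine for PDF output) is installed; the file names are hypothetical, and this is only one of many possible toolchains:

import subprocess

def build_pdf(source: str, output: str = "article.pdf") -> None:
    # pandoc infers the input format from the file extension
    # (.docx from a Google Docs/Word export, .html, .md, ...).
    # Assumes pandoc and a LaTeX engine are installed on this machine.
    subprocess.run(["pandoc", source, "-o", output], check=True)

if __name__ == "__main__":
    # Hypothetical manuscript exported from Google Docs or Word.
    build_pdf("manuscript.docx")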
WHERE THE FAULT LIES
To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.
But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based “Gold OA”) is premature, as are plans by universities and research funders to pay its costs:
Funds are short and 80% of journals (including virtually all the top, “must-have” journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).
What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online (“Green OA”).
That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.
The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.
That post-Green, no-fault Gold will be Fair Gold. Today’s pre-Green (fee-based) Gold is Fool’s Gold.
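As an illustration of why charging per round of refereeing, rather than only on acceptance, keeps the per-paper price down and removes the incentive to accept, here is a small hypothetical calculation; the cost, rounds, and acceptance-rate figures are assumptions, not numbers from the comment:

# Illustrative sketch only: compare a pay-on-acceptance fee with a
# no-fault per-round refereeing charge. All numbers are hypothetical.
cost_per_round = 200.0        # assumed cost of one round of peer review
rounds_per_submission = 1.5   # assumed average rounds (some papers are re-refereed)
acceptance_rate = 0.25        # assumed share of submissions eventually accepted

# Pay-on-acceptance: accepted authors also cover the refereeing of rejected
# submissions, so the fee scales with 1 / acceptance_rate.
fee_on_acceptance = cost_per_round * rounds_per_submission / acceptance_rate

# No-fault: every submission pays for its own refereeing, accepted or not.
fee_no_fault = cost_per_round * rounds_per_submission

print(f"Pay-on-acceptance fee per accepted paper: ${fee_on_acceptance:.0f}")
print(f"No-fault charge per submission:           ${fee_no_fault:.0f}")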
None of this applies to no-fee Gold.
Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.
But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:
Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.
But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.
For some peer-review stings of non-OA journals, see below:
Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.
Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.
Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (Ed.) (2004). Peer Review: A Critical Inquiry. Rowman & Littlefield, pp. 235-242.
Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).
@Robert Dingwall: Well, one difference is that you don’t need a subscription to access the published articles. (This is what I would understand as “Open Access”.) Another is that JDM, the example he gives, is non-profit. So the funding sought is (a) much smaller than the sum of for-profit subscription fees paid across university libraries, and (b) does not result in profit, and hence creates no incentive to inflate some statistic in order to be able to demand higher subscription fees.
Now we have real data on the quality of SOME OA journals (setting aside the fact that the study included no comparison with similar subscription-based journals, and its other weaknesses). Even though this experiment is ‘not perfect’, I am happy to see that it throws light on the quality of the ‘screening and peer review service’ of OA journals. I strongly believe that a scholarly publisher should work like a ‘strict gatekeeper’ by arranging honest and sincere peer review. This is the main difference between a ‘scholarly publisher’ and a ‘generic publisher’ (who publishes anything). Other work such as typesetting, proofing, printing, web hosting, and marketing is important but not unique to a scholarly publisher. (My personal opinion is that we should not waste time debating good OA vs. bad OA, good CA vs. bad CA, etc. It is time to work. We must move on to analysing and using these huge, precious data more effectively.)
I know and strongly believe that Beall, being an academician-librarian, also gives the highest importance to this particular criterion above anything else. I congratulate Beall that his theory has been experimentally proven by the sting operation in Science.
I know this sting operation is going to generate a huge debate: one group will try to find the positive points and the other group will try to prove it is a bogus experiment. An endless and useless fight, and a waste of time. It will be more important to find some way to use these huge data more effectively.
Now I have some suggestions and questions:
1. How are we going to use the huge amount of data generated by this year-long experiment?
2. I request that DOAJ and OASPA do some constructive work using these data.
3. Can we develop some useful criteria for screening OA publishers from what we have learned in this experiment?
4. Is there any way of rewarding the publishers who properly and effectively passed this experiment by rejecting the article after proper peer review? (I noticed that some journals rejected it due to ‘scope mismatch’. That is certainly a ground for rejection, but it does not tell us what would have happened if the scope had matched.)
5. I saw the criticism of Beall that ‘he is …trigger-happy’. It is now time for Beall to prove that he not only knows how to punish the bad OA, but also how to reward somebody who intends to improve on their previous situation. Is there any possibility that these data could be used for the ‘appeal’ section of Beall’s famous blog? Sometimes a judge can free somebody on the basis of circumstantial evidence, even if he/she does not formally appeal. (Think about posthumous awards/judgments.) I always believe that ‘reward and inspiration for good’ is more powerful than ‘punishment for doing bad’. But I also believe that both should exist.
If anybody says that “the results show that Beall is good at spotting publishers with poor quality control”, then that tells only one part of the story. It only highlights who failed in this experiment; it says nothing about those who passed the experiment but still occupy a seat on Beall’s famous list. I really hate this trend. My cultural belief and traditional learning tell me that “if we see only one lamp in the ocean of darkness, then we must highlight it, as it is the only hope. We must protect and showcase that lamp to defeat the darkness”. I don’t know whether my traditional belief is wrong or right, but I will protect this faith till my death.
I really want to see Beall officially publish a white-list of ‘transformed predatory OA publishers’, where he clearly states the reasons for each removal from the ‘bad list’, so that other low-quality predatory OA publishers can learn from that discussion how to improve (if they really want to do so) and how to get off Beall’s ‘bad list’. This step will essentially complete the circle Beall started.
Ideally, I STRONGLY BELIEVE that Beall will be the happiest person on earth if, one fine morning, his list of ‘bad OA publishers’ contains zero names, all of them having been transferred to a list of “Good OA publishers” after being transformed with the help of an effective peer review process initiated by Beall.
Akbar Khan,
India
It will be interesting to read the explanations of the OASPA members who accepted the spoof paper (I am counting at least two: Sage and Dove Medical Press) and I hope the steps they promise to take will be published here. Dove in particular has repeatedly demonstrated questionable publishing practices: aggressive spamming activity 5 years ago (see http://gunther-eysenbach.blogspot.ca/2008/07/dove-medical-press-and-libertas.html); then publishing a paper by Alabama shooter Amy Bishop with her three teenage kids as co-authors (see http://poynder.blogspot.ca/2010/02/open-access-linked-to-alabama-shooting.html), the true crime being that Dove simply deleted the article from their website without any proper retraction notice or process; and now the third offence, accepting the Science spoof article after “superficial peer-review”. I think it is fair to ask at this point: how much longer will this publisher be allowed to tarnish the reputation of open access and OASPA? How about three strikes and you are out?
Thank you to all of you who are posting your comments to this blog. Please know that the OASPA board is taking note of your comments.
As stated in the blog item above, OASPA is reviewing the data put forth by John Bohannon. In particular, we are conducting reviews of those members with journals that have accepted the paper. A small committee has been created to carry out this review. I am taking part in that committee. As we are in the middle of the review, we are unable to comment on specific publishers at this time.
This study points out one very important aspect that needs more attention: it is very difficult (actually, essentially impossible) to verify the quality of peer review that any article, published in (almost, see below) any journal, goes through. This is because peer review is done in secrecy: the results of peer review are not made public (i.e. the comments/questions, the author’s responses, the number of times it was revised, etc.) and the identity of peer reviewers is kept secret.
One relevant exception is the journal F1000Research (full disclosure: I have no association with this journal, and neither have I published nor submitted a paper to it). This journal makes the peer-review process completely public, so it is possible to follow ‘live’ the revisions made to a paper. This can show how competent the reviewers were and whether the paper was appropriately scrutinized. In my opinion, knowing this information improves confidence in the paper, helps in understanding its contents and the reasoning of the authors, and highlights its shortcomings and the work still to be done. What’s more, the journal charges author fees upon submission, so there is no incentive for the journal to accept poor papers to raise revenues.
I am hoping to see this publishing model more widely used, including in disciplines outside the biological and medical sciences (in that way I can consider submitting my work to such a journal).
As a graduate student conducting research, I find it extremely frustrating to be unable to access important literature related to my project because my university didn’t purchase the expensive “subscriptions” for the journals in which the articles were published. What good is research if it is not available to researchers who would benefit from it and also cite it? I feel especially bad for researchers and students from smaller colleges that have very poor access to scientific literature. These unnecessary barriers to science are ridiculous to have in this modern technological era.
This “sting” by Science does demonstrate that some open access journals have peer review and quality control problems, but why did they not test subscription-based journals too? Science should have answered this question in the article.
Problems with open access journals are to be expected. Will Science play a role in working out the kinks of open access to benefit science, researchers and the public or will they sit on the sideline pointing fingers?
Where can we view the official statements of those OASPA members that were included in Bohannon’s sting?
Caroline Sutton (October 8, 2013): “OASPA is reviewing the data put forth by John Bohannon”
Can Dr. Sutton and OASPA please indicate where the formal findings of that review process were published. Also, would OASPA care to comment on the “ethics” of Bohannon’s “sting” papers (Science and chocolate to IAM). Thank you.
Jaime, OASPA made a second blog post about this (https://oaspa.org/oaspas-second-statement-following-the-article-in-science-entitled-whos-afraid-of-peer-review/). The review that was conducted was for the purpose of assessing our own members.