Below is a statement from the Open Access Scholarly Publishers Association (OASPA) in response to the recent “sting” that was reported in Science in an article entitled “Who’s Afraid of Peer Review?”
OASPA was established in 2008 to bring together a growing community of high-quality publishers, who were showing how research could be published according to the highest standards and made freely and openly available at the point of publication. Our goal was, and continues to be, promoting best practices in open access publishing and providing a forum for constructive discussion and development of this field. Open access publishing has continued to grow since the establishment of OASPA, and is now a well-established part of the scholarly publishing landscape.
A second reason for the establishment of OASPA was the emergence of a group of publishers that were engaging in open access publishing without having the appropriate quality control mechanisms in place. OASPA’s approach to addressing this issue was to establish strict criteria for entry into the Association, such that applicants are screened for policies relating to publication fees, peer review, licensing, etc. For publishers that do not initially meet our criteria, we provide a detailed list of our concerns to the publisher and encourage them to adjust their policies accordingly.
The “sting” exercise conducted by John Bohannon that was recently reported in Science provides some useful data about the scale of, and the problems associated with, this group of low-quality publishers, an issue that OASPA has worked to address since the Association was first created. While we appreciate the contribution that the recent article in Science has made to this discussion, OASPA is concerned that the data presented in the article may be misinterpreted. We will issue a fuller response once we have had a chance to review the data in more detail (and we applaud the decision to make the data fully available), but for now we wish to highlight what can and cannot be concluded from the information contained within this article.
The greatest limitation of the “sting” described in the Science article is that the “fake” articles were sent only to a group of open access journals, and these journals were not selected in an appropriately randomized way. There was no comparative control group of subscription-based journals, despite the exhortation from Dr. Marcia McNutt (the Editor-in-Chief of Science) in the accompanying Editorial that publishing models be subject to rigorous tests. In contrast, more rigorously designed studies that were peer-reviewed prior to publication provide evidence of the rigor and benefits of open access journals relative to their subscription counterparts (http://www.biomedcentral.com/1741-7015/10/73 and http://onlinelibrary.wiley.com/doi/10.1002/asi.22944/abstract).
Another limitation of the study described in Science concerns the sampling of the journals that were chosen as targets for the “sting,” which were drawn from two lists – Beall’s list of ‘predatory’ open-access journals, and the Directory of Open Access Journals (DOAJ). Publishers were selected from these lists after eliminating some on various grounds, including a journal’s language of publication, subject coverage, and publication fee policy. Ultimately the “fake” articles were sent to 304 journals, out of which 157 journals appear to have accepted these articles for publication. Given the selection criteria that were used in determining where to submit these “fake” articles, it is not possible to draw any meaningful conclusions about the pervasiveness of low-quality open access journals in the wider publishing ecosystem.
Overall, although the data undoubtedly support the view that a substantial number of poor-quality journals exist, and some certainly lack sufficient rigor in their peer review processes, no conclusions can be drawn about how open access journals compare with similar subscription journals, or about the overall prevalence of this phenomenon.
Based on the information that OASPA has been able to collect so far, it appears that several OASPA members received and rejected the “fake” article, while a small number of members accepted it. As soon as we have more detailed information we will be contacting these members to ask for their views on how this happened and the steps they will take to resolve any weaknesses in their peer review procedures. OASPA has a complaints procedure that is used to investigate any complaints we receive about our members, and in the event that a publisher is not upholding the OASPA code of conduct, their membership in the Association may be terminated.
In our view, the most important lesson from this recent article in Science is that the publishing community needs stronger mechanisms to help identify reliable and rigorous journals and publishers, regardless of access or business model. OASPA will continue to scrutinize membership applications according to our membership criteria, and to listen to feedback from the community, so that membership within OASPA can remain an important signal of quality within the open access ecosystem.