Following on from the recent webinar entitled Scholarly Communication & COVID-19: Closing the Loop for Effective Peer Review, we asked our speakers to summarise their talks by offering a few key takeaways, which you can find below.
We also asked speakers to respond to the many questions that were posed by attendees via the webinar chat. You can find those questions and answers directly under the takeaways. This may be useful for those who missed the webinar or who wish to share it with colleagues.
You can also access the speakers’ slides.
OASPA is very grateful to the speakers for all of the work and time they have given to this webinar – in preparation, on the day, and after.
________________
Key takeaways
Sarah Greaves (@SarahGreaves18)
Hindawi
- Publishers can move quickly to address long-standing issues in STM publishing in a collaborative, non-competitive way that maximises impact for academics
- 11 publishers/organisations in the group to begin with, now over 15, with >1,500 academics signed up – a phenomenal response
- Usage of peer reviewers has been high at some publishers (but not all), and so far no papers have been transferred with portable peer review
- Now challenging how we work with preprints and data sharing, to truly complete the loop in a cross-publisher, collaborative manner
- 2021 will show us whether this new way of working with colleagues for the benefit of the research community endures
Monica Granados (@monsauce) and Daniela Saderi (@Neurosarda)
PREreview
- The COVID-19 pandemic has highlighted a long-standing need to accelerate research dissemination
- On Outbreak Science Rapid PREreview, researchers can rapidly review or request reviews of outbreak-related preprints
- Our goal is to enable scientists to provide constructive feedback on each other’s work in a process that is rewarding to them
- Rapid reviews by the research community can help speed up journal-organized peer review
Susanna-Assunta Sansone (@SusannaASansone) and Peter McQuilton (@Drosophilic)
FAIRsharing
- Supporting evidence, such as datasets, code and models, must be routinely made available in a transparent and persistent manner
- To be reused, reproduced and verified, data must also be Findable, Accessible, Interoperable and Reusable, according to the FAIR Principles
- Data and metadata standards, as well as data repositories, are essential to make data FAIR
- FAIRsharing guides consumers to discover, select and use these standards and repositories with confidence, and producers to make their resources more visible, widely adopted and cited (a brief illustration of FAIR-style metadata follows).
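For readers less familiar with what machine-actionable, FAIR-style metadata can look like, here is a minimal sketch using the schema.org Dataset vocabulary. Every value below is invented for illustration; real records would follow the community standards and repositories catalogued in FAIRsharing.

```python
# A hypothetical, minimal dataset description in schema.org's Dataset
# vocabulary. All values are illustrative, not a real record.
import json

record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example COVID-19 daily case counts",     # Findable: human-readable title
    "identifier": "https://doi.org/10.1234/example",  # Findable: persistent identifier
    "description": "Illustrative daily confirmed case counts.",
    "keywords": ["COVID-19", "epidemiology"],
    "license": "https://creativecommons.org/licenses/by/4.0/",  # Reusable: clear licence
    "distribution": {
        "@type": "DataDownload",
        "contentUrl": "https://repository.example.org/data.csv",  # Accessible: resolvable link
        "encodingFormat": "text/csv",                 # Interoperable: open, standard format
    },
}

print(json.dumps(record, indent=2))
```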
Questions received via the webinar chat channel
Answers from Sarah Greaves (SG); Daniela Saderi (DS); Monica Granados (MG); Susanna-Assunta Sansone (SAS)
Q. I’m an advocate of preprints; however, I’ve seen some resistance to their adoption among publishers, especially in Latin America. Some of them say preprints could be confusing for users, especially if they lack digital and information literacy. What do you think, Monica and Daniela?
MG: Thank you for your question, I think overall I would say that although I too am a proponent of preprints, they may not be the solution for every community. There may be alternatives that provide access to manuscripts before they have gone through the peer review process that are more suitable solutions for the infrastructure of a particular region/community. There may also be an appropriate hybrid approach that uses open infrastructure and solutions like PKP that could increase the accessibility of digital tools.
Q. How is Rapid PREreview useful for social science researchers?
MG: I can envision social science researchers rapidly reviewing preprints that live at the intersection of social science and COVID-19 – for example, papers on policy or health outcomes. Also, the PREreview platform (https://prereview.org/), which is separate from the Outbreak Science Rapid PREreview platform, is not discipline-specific; it also differs in that you write longer, more traditional reviews there. We also designed the Outbreak Science Rapid PREreview platform to be open for people to remix and reuse, so if there is community (and financial) support for a social science version of the platform, we could explore that.
Q. PREreview – are you planning any evaluation measures? Do the rapid peer review results correlate with later peer review at journals?
MG: We have internal evaluation measures that assess the uptake of the tool and which communities are using it, with particular attention to diversity. We aren’t explicitly looking at the correlation between rapid reviews and journal reviews at PREreview, but we are open to working with researchers who would be interested in exploring that question.
Q. How can we ensure that shared preprint data is secure and that no one is going to plagiarise it?
MG: For this question, I’ll assume you are referring to data that reviewers identify in the preprint and link to. I think we have to rely here on research integrity, the same way we do for open data in publications.
Q. Question for Sarah – Thank you for a nice summary of the initiative. Do you have some insight into what may be behind the fact that only a minority of the reviewers who signed up for rapid reviews have been involved in reviewing? What do you think can be done to increase their involvement?
SG: Thanks for the great question. The reviewers are all ready to be used – the education piece from the Publishers is around making sure the Editors know they can go outside their usual sources for peer reviewers and expand the pool of people they invite. As an ex-Editor myself, I know we all have academics we like to use on certain papers – which is one of the main reasons we set up the reviewer pool: to relieve the pressure on a smaller group of academics who were overwhelmed. We now have a global group of academics ready to review, but our Editors aren’t yet turning to them in large numbers. So this is back with the Publishers to look at – and in the meantime, we’re aiming to use this amazing resource of academics on PREreview to comment on COVID-19 preprints.
Q. Are any of you doing work on making key (and review) articles more readable? Even an open article is not easily absorbed by a researcher unless they can also read some of the cited sources in that paper. For example, the journal Science published a really important paper on May 1 – the first to give very specific data on the problem of asymptomatic transmission. [They estimated that 89% of people who catch COVID catch it from someone who doesn’t know they have it.] It would be great to imagine some researchers close to these fields being able to replicate this study. But just while listening I went to the reference list and (being an unaffiliated person) it was easy for me to find examples of articles that were closed to me because of paywalls. While listening I submitted the DOI of one of them to the Open Access Button, which automates all the effort of contacting authors and helping them to re-open access. I suggest this be done on ALL REVIEW papers related to COVID, as well as selected others.
SG: Most of the publishers in the initiative have made all their COVID-19 content free to access for anyone across the globe; obviously, all content published under an Open Access model is always freely available and allows researchers to access all the articles they need.
Q. I’ve only been a part of the publishing industry since October last year and am still fairly new to everything, but all the webinars I’ve attended have been about working together and building community – do you think it’s realistic to say that this community atmosphere can continue?
SG: I truly hope so – the open collaboration has allowed us to move quickly whilst still all working for our Editors and journals and the academic community overall. The benefits can be seen in terms of the initiatives we’ve managed to launch in such a short space of time – so I’d like to think we keep working together as a group to look at other ways we can increase the transparency of research and improve the publication process for all authors in an era of open science.
Q. It seems as though a lot of attention has recently been focussed on the dissemination of information about C19 specifically (for obvious reasons). Are there any aspects of the rapid review system (or other parts of the publication process which have been used more frequently during the pandemic) which may not be sustainable long-term once the collective focus of the scientific community has perhaps dispersed somewhat?
SAS: A very good point. I can certainly confirm that all the discussion about data is not limited to health crises, or even to the biomedical area. FAIR is another name for good data management and sharing, and this is an ongoing issue in all disciplines.
Q. If a reviewer conducts a peer review privately/anonymously, and later switches their profile to be public, would the anonymous review now become public as well?
DS: Yes, a reviewer can have two personas and review one preprint anonymously and another with their public profile on. Each profile page will show the contributions made under that profile’s settings, but on our side those are unified under the same user ID, so that we can enforce the Code of Conduct regardless. Another important point is that a user can review a preprint, or request a review, only once per profile, to prevent someone from trying to double up a review – a minimal sketch of this kind of data model follows.
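This is not PREreview’s actual implementation, just a minimal sketch of the data model Daniela describes: two personas (one public, one anonymous) hang off a single internal user ID, which is what makes Code of Conduct enforcement possible, while a uniqueness constraint blocks a second review of the same preprint from the same persona. All table and column names are invented.

```python
# A toy data model: personas share one internal user ID; a UNIQUE
# constraint allows at most one review per preprint per persona.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users    (user_id INTEGER PRIMARY KEY);
CREATE TABLE personas (
    persona_id INTEGER PRIMARY KEY,
    user_id    INTEGER NOT NULL REFERENCES users(user_id),  -- CoC is enforced via this link
    anonymous  INTEGER NOT NULL                             -- 1 = anonymous persona
);
CREATE TABLE reviews (
    persona_id INTEGER NOT NULL REFERENCES personas(persona_id),
    preprint   TEXT    NOT NULL,
    UNIQUE (persona_id, preprint)   -- one review/request per preprint per persona
);
""")

db.execute("INSERT INTO users VALUES (1)")                  # one account...
db.executemany("INSERT INTO personas VALUES (?, 1, ?)",
               [(10, 0), (11, 1)])                          # ...two personas
db.execute("INSERT INTO reviews VALUES (10, '10.1101/2020.05.01.000001')")
try:
    # The same persona reviewing the same preprint again is rejected.
    db.execute("INSERT INTO reviews VALUES (10, '10.1101/2020.05.01.000001')")
except sqlite3.IntegrityError as e:
    print("duplicate review blocked:", e)
```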
Q. As the number of reviews increases, do you plan to set categories according to themes to ease searching on the platform?
– I recently created a profile and I’d like to know if reviews can be individual. I saw that they can be done through journal clubs (very nice initiative with eLife, by the way) but I was wondering if that is the only way.
– Maybe I missed this, but are the reviews in PREreview recognized, for example, in ORCID? One of the problems with peer review is the lack of recognition.
DS: You can already search through reviews using keywords. Some categories on OSrPRE (Outbreak Science Rapid PREreview) are already available for quick search on the left side of the main page.
– Reviews can currently only be signed by one person. We are working to make the collaborative writing option for journal clubs available in an easy way, but that’s not implemented in the current version of OSrPRE. If you need help, please don’t hesitate to contact us at contact@prereview.org.
– As mentioned before, we are working to make that link back to ORCID possible.
Q. About the community review statistics shown during the peer review process in Outbreak Science: is there any concern about the statistics shown influencing reviewer responses?
DS: We do not highlight profiles based on expertise, so all reviews are weighted equally.
Q. Great talk Monica and Daniela! Quick question: if someone requests a preprint to be rapidly reviewed, is there a pool of reviewers that gets notified, or do researchers have to go to the website to see which preprints received a rapid review request?
DS: Thank you! We have notifications set up that work if the user has their email in their ORCID profile or types it into their OSrPRE profile. The way it works is that if a user has requested a review, they will receive an email when someone has reviewed that preprint. But it’s tricky to contact the author of the preprint if they are not signed up on the platform, as we don’t have access to their email address. One way would be for us to automate a system that fetches the corresponding author’s email address from the metadata (if present) and emails them when their preprint has been reviewed – a rough sketch of what that could look like follows. But we need to investigate whether that is something researchers would like.
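Purely as a hypothetical sketch of that automation – the metadata endpoint, field name, and addresses below are all invented, and most preprint servers do not in fact expose author emails:

```python
# Hypothetical notifier: look up a preprint's metadata and, only if an
# email address is present, tell the corresponding author about a review.
import json
import smtplib
import urllib.request
from email.message import EmailMessage

METADATA_API = "https://api.example.org/preprints/"  # invented endpoint

def notify_corresponding_author(doi: str) -> None:
    with urllib.request.urlopen(METADATA_API + doi) as resp:
        meta = json.load(resp)
    email = meta.get("corresponding_author_email")   # invented field; often absent
    if not email:
        return  # no address in the metadata, so no one to notify
    msg = EmailMessage()
    msg["From"] = "notifications@example.org"
    msg["To"] = email
    msg["Subject"] = "Your preprint has received a rapid review"
    msg.set_content(f"A community review of {doi} has been posted.")
    with smtplib.SMTP("localhost") as smtp:          # assumes a local mail relay
        smtp.send_message(msg)
```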
Q. Have there been, or should there be, discussions about adopting micropublications – particularly surrounding reproducible results derived from datasets? For example, COVID-19 bio/medical informatics papers in particular could be accelerated if a ‘full paper’ with a comprehensive introduction, background, etc. were not required.
DS: I know that many have worked, or are working, on micro/nanopublications, and in my opinion any additional solutions that enhance the reusability of data are welcome.
Q. Regarding your bit about reviewer expertise, have you considered methods to rate reviewers based on the feedback on the quality of reviews by their peers, or any other metrics?
DS: On PREreview, the non-outbreak-related platform, we have integrated Plaudit to allow users to endorse PREreviews. But that is not implemented in the OSrPRE platform, as the format is structured and it could feel artificial to give a rating to a Y/N questionnaire.
MG: We have considered building in badges to identify reviewers that have gone through a program or a preselected list from a publisher.