Open Peer Review - Evolution and Experimentation

12-09-2017 vinguyen

In the second post of our series in celebration of Peer Review Week 2017, guest author David Crotty looks at the current experimental approaches in open peer review. David is the Editorial Director of Journals Policy for Oxford University Press, and the Executive Editor of the Scholarly Kitchen blog where he regularly writes about current issues in publishing.

Have you ever read a paper and wondered, “how the heck did this get published – what were the reviewers thinking?” As scientific practices continue to evolve toward openness and better transparency, the peer review process is undergoing a similar transition. The term “open peer review” has become something of a catch-all term for a wide variety of different methodologies, all meant to improve the quality and reproducibility of published research.

“Open peer review,” like “open access,” means a lot of different things to different people. Across the board, the goal is to provide a window into the publication and review process, which is currently something of a black box. While the authors see and respond to reviewer comments, the reader has neither access to these expert opinions nor any way of knowing how rigorous the process was. Open peer review methods bring that information into the light, adding to our ability to understand and interpret the research literature.

The simplest form of open peer review is to publish the reviews alongside the final paper. While it is likely that very few of these reviews will be read (in an era of information overload, no one wants more to read), they may prove invaluable on occasion, particularly for controversial papers or where a reader wants to know what others thought of the work. Open reviews also provide a valuable tool for assessing a journal’s editorial practices: how careful is the process, how well does the editor choose expert reviewers, and how much weight do their reviews carry in the decision?


Taking things a step further, some journals identify the reviewers to the author during the review process, and to the reader upon publication. Often this is voluntary (the reviewer is given a choice as to whether to sign their review), but in some cases it is mandatory. Identification of the reviewer can provide useful information for a reader: how “expert” were those who reviewed it? But this form of open peer review remains controversial. Blinded review (either single blind, where the reviewers know the authors’ identities, or double blind, where this is not disclosed) is the norm for scholarly journals. Anonymity is an incredibly powerful part of the process, as we want reviewers to honestly and objectively review the work presented. It is much less dangerous to speak truth to power if one can do so anonymously. Signing your name to an intensely negative review may come with consequences, as the paper’s authors may later exact some form of retribution (even if subconsciously) toward someone they consider a harsh critic. Consider whether you’d be willing to openly shred the work of the most powerful researcher in your field, knowing that they may be sitting on your next grant review or hiring committee.

A different approach to open peer review is to conduct the entire process in public. A paper is submitted to a journal and is immediately posted online. Reviewers are either directly invited or come from readers who happen to come across the posted manuscript. Reviews are posted publicly alongside the paper, and any resulting revisions are noted. When the manuscript has received some threshold number of positive reviews, it is considered “accepted.” While this is probably the most “open” form of open peer review, it is not without flaws. First, we know from decades of experience that without oversight and management, the peer review process often doesn’t happen. Journal editors spend an enormous amount of time finding willing reviewers and regularly prodding them to get their reviews in on time. When left to a stochastic process, the results are, not surprisingly, somewhat random: some papers get reviewed, others don’t. For example, only a small number of papers posted to the preprint server bioRxiv receive any comments at all, let alone formal reviews. F1000 Research, a journal that uses this method, has a significant number of papers that remain unreviewed years after they were posted. This leaves the authors in limbo, as their paper is considered “published” but not “accepted.” And while opening up the review process to everyone greatly increases the pool of potential reviewers, this system offers no quality control, no way to limit review to those who are truly experts in the field.

This last method also makes public the reviewing process for rejected papers. Again, once posted, a paper is considered “published”, so if rejected, the author has no ability to re-submit it elsewhere. Given the culture of academia, having one’s failures publicly and permanently displayed to the world is not something most researchers would enjoy.

The other major evolution we’re seeing in peer review is a drive toward offering career credit for those participating in the review process. Performing peer reviews has long been seen as a vital part of being a scientist, a service one offers to one’s research community. But as researchers are under growing pressure to document and justify how they spend their time, some are seeking formalized recognition for peer review as a job responsibility. It’s not clear how much “credit” one should receive for reviewing (no one is going to get tenure or a grant for being a really good reviewer), but performing peer review is a useful signal in researcher assessment. Being a reviewer means that you’re an active participant in the community, and considered enough of an expert by a journal to be asked for your opinion. So far it’s not clear that any institutions or funding bodies are offering this sort of credit, but it will likely become something recognized, at least at a basic yes/no level (does this person do peer reviews?), and that should suffice, rather than some of the overly complex ranking and rating systems currently being proposed.

This just scratches the surface of the debates and experimentation going on around peer review, and scholarly publishing in general. We’ve moved past the early digital era, where the goal was simply to reproduce the print journal online, and are now in a period where we’re figuring out how best to take advantage of the digital environment. Journals are no longer bound by the print paradigm, with limited space and rigid schedules, which opens up new possibilities. As science itself evolves toward more open, transparent and reproducible methodologies, so too does scholarly publishing.

You can keep up-to-date with all activities related to Peer Review Week this week by using the Twitter hashtags #PeerRevWk17 and #TransparencyinReview.

If you would like to share with us your views on what peer review means to you, join the global conversation using the hashtag #wepeerreview.
