Trust is a topic that is particularly relevant in the times we are all living in. While fake news, political agendas belittling scientific research, and public opinions denying scientific findings are not a recent phenomenon, the COVID-19 pandemic has made it all the more evident that the dissemination of false or misleading information, often coated in scientific language, spreads as quickly as a highly contagious virus, and is often as deadly.

In the academic world, peer review has come under much scrutiny lately, particularly with the increased popularity of open access publishing, whether in repositories or in open access journals. The electronic medium has also provided a growing number of platforms that allow scientific work to be disseminated instantly, and in particular much more quickly than a rigorous peer review process allows.

All things considered, a discussion of the extent to which the classic peer review system can be trusted appears timely.

While scientific journals have been in existence since 1665, peer review as we know it is a relatively recent affair. Research output and new ideas were generally shared among scientists before publication; some level of “peer scrutiny” has therefore always been part of the scientific method. However, the practice of having peers selected “independently” by a journal’s editor has been widely adopted only since the 1970s. Fifty years later, this is still the most commonly used form of quality control of research output, regardless of discipline. Recently, alternative methods have been explored, such as the “F1000Research” platform, where authors themselves are responsible for inviting reviewers for their paper, which is posted as a preprint and updated as the open review process goes forward. It is possible that newer forms of peer assessment simply require more time to be embraced, but so far there is no clear indication that they provide higher quality, or are more trustworthy, than classic peer review.

Perhaps the real question is how much researchers (let alone the general public) trust the output of their peers. The ever-increasing number of publications driven by the “publish or perish” culture makes it more and more challenging to “believe” everything that is published in the various outlets. The open science movement tries to address this issue by encouraging, and sometimes mandating, that the datasets, software, and samples supporting scientific findings be made available to the public. At the same time, however, this further complicates matters, as “open” does not necessarily mean that these resources are scrutinised by peers before being made available.

Even before the broader open science movement, the existence and widespread use of preprint servers to disseminate science call into question whether the opinion of two or three experts in the field is actually worth waiting for. One could argue that the true value of the scientific results presented in a paper can only be revealed over time, and that it will depend strongly on how much those results are acknowledged, accepted, and embraced by the community.

In 2019, Sense about Science, an independent charity that champions the public interest in sound science, published the results of a survey carried out in collaboration with Elsevier. The findings showed that the vast majority of researchers were satisfied with peer review: 90% of them thought that the peer review process improves their paper and that without peer review there would be no scrutiny of research output.

Trust takes time to build and can be broken by just one bad experience. While most researchers trust peer review, the method can be put at risk by unfair or biased processes or by unethical practices, such as those often carried out by predatory journals. As identified by the survey, one way to strengthen trust is to make the process more transparent, including explaining clearly to authors what they should expect from peer review and instructing reviewers on how to carry out their reviews.

At Communications Physics, we believe that authors should receive a balanced and fair assessment of their work. Manuscripts submitted to the journal are initially evaluated by a journal editor (who has research expertise in the broad area associated with the manuscript) or an editorial board member, and are assessed on the basis of editorial criteria and novelty. If a paper is sent out to review, we aim to obtain the expert opinion of at least three reviewers, who collectively should be able to comment on all aspects of a manuscript. We believe that this strikes a balance between fairness in the review process and speed, which we know is important for authors. We also rarely contact the same referee twice in a year, so that the burden of reviewing, particularly at a time of ever-increasing research output, is shared more widely across the physics community.

Reviewers’ guidelines are detailed on the journal webpage, but reviewers are also provided with specific instructions each time they agree to review a manuscript, and are invited to contact the journal editors if they require any clarification. More experienced reviewers are strongly encouraged to mentor an earlier-career colleague, especially in those cases where they recommend that colleague to carry out the review after declining the initial invitation. Reviewers have access to the main manuscript as well as to any additional files, such as supplementary material, supplementary data, and/or movies, that authors sometimes submit with their papers. This has the twofold advantage of allowing the reviewer to get a complete overview of how the results were obtained, while at the same time being able to assess and scrutinise all the material supporting the main conclusions, including any policies related to the results presented in the study under consideration.

Invitations to reviewers are often accompanied by bespoke requests if, for example, we wish a particular referee to comment on a specific aspect of the study. Most importantly, reviewers are not asked to provide an opinion as to whether the manuscript should be published in the journal, but rather to provide constructive technical and scientific feedback that will undoubtedly improve each manuscript peer reviewed by the journal, regardless of whether it is ultimately published in Communications Physics or elsewhere.

We strive to maintain a process that contributes to the advancement of science, and we would like to believe that our authors trust us to support a fair and valuable process leading to the decisions made on their research. At the same time, like authors, editors and referees are only human, and bias and errors are therefore unavoidable.

We attempt to reduce bias by generally having more than one journal editor involved in manuscript assessment and by maximising the diversity in our pool of reviewers. In particular, we believe that increasing diversity reduces the burden of review work on a limited number of individuals while providing richer and thus more valuable feedback. Diversity in the physics research community is an important theme that we look forward to discussing in the journal.

As a further means of reducing bias, we promote double-blind peer review: authors who wish their manuscript to be sent to reviewers anonymously can opt to do so and receive specific guidance on how to “anonymise” their paper. Since 2019, we have also adopted transparent peer review, whereby authors can opt in to have all the peer review material, namely the reviewer reports and the authors’ responses, published alongside the manuscript. This material can then be scrutinised by the wider community to independently assess the quality of the published research and to understand how a manuscript reached its final shape.

Until now, our authors were not able to track the progress of their article in our system. We acknowledge that this may have been frustrating: waiting is less painful if one knows which step of the process one is waiting for. Thanks to a partnership with ResearchGate, all our authors now have access to a “dashboard for all”, which allows them to follow the progress of their manuscript in real time.

We hope that all these initiatives aimed at increasing transparency will also help in strengthening trust in the peer review model and the practices we use.

Peer review is recognised as an integral part of producing reputable and trustworthy research output when performed on scientific papers, but it is also fundamental when evaluating grant proposals or applications for instrument time at large public (or private) facilities. Researchers generally trust peer review, but there is room for improvement, particularly in reducing bias related to geographical origin, gender, or other preconceptions.

As journal editors, we should continuously strive to ensure that the process we adopt is fair, unbiased, and fully ethical, and to promote best practices.

We would like to encourage our authors, readers, and referees to comment on this article and let us know how we can help to increase trust in peer review.