Peer review is an integral part of academia. However, at large organizations such as the European Space Agency, a panel member may have to peer review dozens of proposals in a single application cycle. In addition to the heavy workload for individuals, there are concerns about bias when a small selection panel makes decisions affecting a large number of applicants. To reduce these problems, distributed peer review (DPR) asks every applicant to commit to reviewing a set number of proposals from other applicants, instead of relying on a small committee to make all the decisions. Writing in Nature Astronomy, Wolfgang Kerzendorf and colleagues present a study comparing the performance of a machine-learning-enhanced DPR process with that of a traditional selection panel.

A challenge for any peer review system lies in choosing referees with the relevant expertise to make an informed and fair decision. In a DPR system, where each proposal is matched to several referees from the pool of applicants, this matching quickly becomes too complicated to do by hand. Kerzendorf et al. used natural language processing to pick out key terms from proposals, and an algorithm to find expert referees based on their publication records. The algorithm's matches agreed well with self-reported expertise, removing researchers with no relevant expertise at an 80% success rate.
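
The paper itself does not include code, but the matching idea can be sketched with standard tools. The snippet below is a minimal, hypothetical illustration using scikit-learn: a proposal and each candidate referee's (made-up) publication record are embedded in a shared TF-IDF space, and referees are ranked by cosine similarity to the proposal. The referee names, the toy texts, and the TF-IDF-plus-cosine approach are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch only: this is not Kerzendorf et al.'s published code.
# It mimics the idea of extracting key terms from a proposal and ranking
# candidate referees by the similarity of their publication records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy data: one proposal abstract and each applicant's concatenated
# publication abstracts (hypothetical stand-ins for real records).
proposal = "Spectral modelling of Type Ia supernova ejecta with radiative transfer"
referee_records = {
    "referee_A": "radiative transfer codes supernova spectra abundance tomography",
    "referee_B": "exoplanet transit photometry atmospheric characterisation",
    "referee_C": "Type Ia supernova explosion models nucleosynthesis light curves",
}

# Vectorise proposal and publication records in one shared TF-IDF space,
# so shared domain terms (e.g. 'supernova') drive the similarity score.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([proposal] + list(referee_records.values()))

# Cosine similarity between the proposal (row 0) and each referee record.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
ranking = sorted(zip(referee_records, scores), key=lambda pair: -pair[1])

for name, score in ranking:
    print(f"{name}: {score:.2f}")  # higher score = closer topical match
```

In such a scheme, referees whose similarity falls below some threshold would simply never be assigned the proposal, which is the sense in which an automated matcher can "remove" researchers with no expertise in a topic.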

Analysing the grades given to each proposal under the DPR and traditional systems showed that the degree of agreement between different reviewers on each proposal was the same in both. The advantage of DPR is that each proposal can be seen by more referees, providing a larger statistical basis for the final decision. The referees are also drawn from a broader cross-section of the research community, and each referee handles fewer proposals, allowing them to take more time and give more detailed feedback to the applicants.
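
As an illustration of what reviewer "agreement" can mean here, the sketch below computes the average within-proposal spread of grades for two systems, using entirely made-up grades; this is one simple possible metric, not necessarily the statistic the authors used.

```python
# Illustrative sketch with hypothetical grades: one simple way to quantify
# how much referees agree is the per-proposal spread (standard deviation)
# of the grades they assign, averaged over all proposals in each system.
import statistics

def mean_grade_spread(grades_per_proposal):
    """Average within-proposal standard deviation of referee grades."""
    return statistics.mean(
        statistics.stdev(grades) for grades in grades_per_proposal
    )

# Made-up grades on a 1-5 scale; each inner list holds one proposal's grades.
panel_grades = [[3, 4], [2, 2], [5, 4]]                   # few referees each
dpr_grades = [[3, 4, 3, 4], [2, 1, 2, 3], [5, 4, 4, 5]]   # more referees each

print(f"panel spread: {mean_grade_spread(panel_grades):.2f}")
print(f"DPR spread:   {mean_grade_spread(dpr_grades):.2f}")
```

If the average spread comes out similar in both systems, as the study found, then the extra referees in DPR add statistical weight to each decision without making the grades any noisier.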