An open source machine learning framework for efficient and transparent systematic reviews
Nature Machine Intelligence (IF 18.8). Pub Date: 2021-02-01. DOI: 10.1038/s42256-020-00287-7
Rens van de Schoot, Jonathan de Bruin, Raoul Schram, Parisa Zahedi, Jan de Boer, Felix Weijdema, Bianca Kramer, Martijn Huijts, Maarten Hoogerwerf, Gerbrich Ferdinands, Albert Harkema, Joukje Willemsen, Yongchao Ma, Qixiang Fang, Sybren Hindriks, Lars Tummers, Daniel L. Oberski

To help researchers conduct a systematic review or meta-analysis as efficiently and transparently as possible, we designed a tool to accelerate the step of screening titles and abstracts. For many tasks—including but not limited to systematic reviews and meta-analyses—the scientific literature needs to be checked systematically. Scholars and practitioners currently screen thousands of studies by hand to determine which studies to include in their review or meta-analysis. This is error-prone and inefficient because of extremely imbalanced data: only a fraction of the screened studies is relevant. The future of systematic reviewing will be an interaction with machine learning algorithms to deal with the enormous increase in available text. We therefore developed an open source, machine learning-aided pipeline that applies active learning: ASReview. We demonstrate by means of simulation studies that active learning can yield far more efficient reviewing than manual reviewing while maintaining high quality. Furthermore, we describe the options of the free and open source research software and present the results from user experience tests. We invite the community to contribute to open source projects such as our own that provide measurable and reproducible improvements over current practice.
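To illustrate the active-learning idea described above, the following is a minimal sketch of a certainty-based screening loop in Python. It is not the ASReview implementation or API; the toy records, labels, seed indices, and the TF-IDF plus naive Bayes model are illustrative assumptions, used here only to show how a classifier can repeatedly propose the most likely relevant record for a reviewer to screen next.

```python
# Minimal active-learning sketch for title/abstract screening.
# NOT the ASReview API; data, model choice and seed records are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical records (title + abstract); 1 = relevant, 0 = irrelevant.
texts = [
    "meta-analysis of PTSD treatment effects",
    "active learning for text classification",
    "a study of soil bacteria in forests",
    "systematic review of depression interventions",
    "deep sea fish population dynamics",
    "screening abstracts with machine learning",
]
labels = np.array([1, 1, 0, 1, 0, 1])  # revealed only when a record is screened

X = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Seed with one relevant and one irrelevant record screened by hand (prior knowledge).
screened = [0, 2]
pool = [i for i in range(len(texts)) if i not in screened]

model = MultinomialNB()
while pool:
    model.fit(X[screened], labels[screened])
    # Certainty-based query: pick the unscreened record most likely to be relevant.
    probs = model.predict_proba(X[pool])[:, 1]
    nxt = pool[int(np.argmax(probs))]
    # In practice the reviewer labels this record; the simulation uses the known label.
    screened.append(nxt)
    pool.remove(nxt)
    print(f"screened record {nxt}: label={labels[nxt]}")
```

Because relevant records tend to be ranked early, a reviewer can typically stop well before the pool is exhausted, which is the source of the efficiency gains the simulation studies quantify.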



Updated: 2021-02-01