Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies.
The BMJ ( IF 105.7 ) Pub Date : 2020-03-25 , DOI: 10.1136/bmj.m689
Myura Nagendran 1 , Yang Chen 2 , Christopher A Lovejoy 3 , Anthony C Gordon 4, 5 , Matthieu Komorowski 6 , Hugh Harvey 7 , Eric J Topol 8 , John P A Ioannidis 9 , Gary S Collins 10, 11 , Mahiben Maruthappu 3
OBJECTIVE To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians.

DESIGN Systematic review.

DATA SOURCES Medline, Embase, Cochrane Central Register of Controlled Trials, and the World Health Organization trial registry from 2010 to June 2019.

ELIGIBILITY CRITERIA FOR SELECTING STUDIES Randomised trial registrations and non-randomised studies comparing the performance of a deep learning algorithm in medical imaging with a contemporary group of one or more expert clinicians. Medical imaging has seen growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that, when fed raw data, they develop their own representations needed for pattern recognition: the algorithm learns for itself which features of an image are important for classification, rather than being told by humans which features to use. The selected studies aimed to use medical imaging to predict the absolute risk of existing disease or to classify patients into diagnostic groups (eg, disease or non-disease). For example, raw chest radiographs are tagged with a label such as pneumothorax or no pneumothorax, and the CNN learns which pixel patterns suggest pneumothorax.

REVIEW METHODS Adherence to reporting standards was assessed using CONSORT (consolidated standards of reporting trials) for randomised studies and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) for non-randomised studies. Risk of bias was assessed using the Cochrane risk of bias tool for randomised studies and PROBAST (prediction model risk of bias assessment tool) for non-randomised studies.

RESULTS Only 10 records were found for deep learning randomised clinical trials, two of which have been published (with low risk of bias, except for lack of blinding, and high adherence to reporting standards) and eight of which are ongoing. Of 81 non-randomised clinical trials identified, only nine were prospective and just six were tested in a real world clinical setting. The median number of experts in the comparator group was only four (interquartile range 2-9). Full access to all datasets and code was severely limited (unavailable in 95% and 93% of studies, respectively). The overall risk of bias was high in 58 of 81 studies, and adherence to reporting standards was suboptimal (<50% adherence for 12 of 29 TRIPOD items). 61 of 81 studies stated in their abstract that the performance of artificial intelligence was at least comparable to (or better than) that of clinicians. Only 31 of 81 studies (38%) stated that further prospective studies or trials were required.

CONCLUSIONS Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions.

STUDY REGISTRATION PROSPERO CRD42019123605.
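The idea described in the eligibility criteria — a model fed raw pixel values that learns for itself which patterns matter, rather than using human-specified features — can be illustrated with a deliberately tiny, hypothetical sketch. This is not from the paper: a single-layer perceptron stands in for a CNN, and the 2x2 "images" and labelling rule are invented for illustration only.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1 / (1 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1 + ez)

# Hypothetical 2x2 "images" of raw pixel intensities in [0, 1].
# The (unknown to the model) labelling rule: label 1 iff pixel 0 is bright.
def make_example():
    img = [random.random() for _ in range(4)]
    label = 1 if img[0] > 0.5 else 0
    return img, label

data = [make_example() for _ in range(500)]

# Train on raw pixels with plain stochastic gradient descent on log-loss.
w = [0.0] * 4
b = 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log-loss with respect to the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# The learned weights concentrate on pixel 0: the model discovered the
# informative pixel from raw labelled data alone, with no hand-crafted feature.
print(max(range(4), key=lambda i: abs(w[i])))  # prints 0, the informative pixel
```

A real diagnostic CNN differs in scale, not in kind: convolutional layers replace the single weight vector, and millions of labelled radiographs replace the toy examples, but the principle — representations learned from raw pixels — is the same one the abstract describes.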

Updated: 2020-03-26