Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs.
Journal of Digital Imaging (IF 2.9), Pub Date: 2019-12-01, DOI: 10.1007/s10278-019-00208-0
Tae Kyung Kim, Paul H Yi, Jinchi Wei, Ji Won Shin, Gregory Hager, Ferdinand K Hui, Haris I Sair, Cheng Ting Lin

Ensuring correct radiograph view labeling is important for machine learning algorithm development and for quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the publicly available NIH ChestX-ray14 database, comprising studies performed in adult (106,179 (95%)) and pediatric (5,941 (5%)) patients and consisting of 44,810 (40%) AP and 67,310 (60%) PA views. These CXRs were used to train, validate, and test a ResNet-18 DCNN for classification of radiographs into AP and PA views. A second DCNN was developed in the same manner using only the pediatric CXRs (2,885 (49%) AP and 3,056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate each DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and on the pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and both were 98% for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs classify the AP/PA orientation of frontal CXRs with high accuracy, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by a DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
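
For readers who want to reproduce the general approach, the sketch below shows one way to fine-tune an ImageNet-pretrained ResNet-18 for binary AP-vs-PA classification in PyTorch and to evaluate it with ROC AUC, mirroring the evaluation described in the abstract. This is not the authors' published code: the directory layout ("cxr/train"/"cxr/test" with one subfolder per view), the hyperparameters, and the epoch count are illustrative assumptions.

```python
# Minimal sketch (assumptions noted inline): fine-tune ResNet-18 to
# classify frontal CXRs as AP or PA, then report test-set ROC AUC.
# Requires torchvision >= 0.13 for the weights API used below.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import roc_auc_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing; ChestX-ray14 images are grayscale,
# so they are replicated to three channels.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: cxr/train/AP, cxr/train/PA, etc.
train_set = datasets.ImageFolder("cxr/train", transform=preprocess)
test_set = datasets.ImageFolder("cxr/test", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

# ImageNet-pretrained ResNet-18 with the final layer replaced for the
# two view classes (AP/PA).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):  # epoch count is an assumption, not from the paper
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate with ROC AUC on the held-out test set.
model.eval()
scores, truths = [], []
with torch.no_grad():
    for images, labels in test_loader:
        probs = torch.softmax(model(images.to(device)), dim=1)[:, 1]
        scores.extend(probs.cpu().tolist())
        truths.extend(labels.tolist())
print("Test AUC:", roc_auc_score(truths, scores))
```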

Updated: 2019-11-01