Deep Learning Methods for Lung Cancer Segmentation in Whole-Slide Histopathology Images—The ACDC@LungHP Challenge 2019
IEEE Journal of Biomedical and Health Informatics (IF 7.7), Pub Date: 2020-11-20, DOI: 10.1109/jbhi.2020.3039741
Zhang Li, Jiehua Zhang, Tao Tan, Xichao Teng, Xiaoliang Sun, Hong Zhao, Lihong Liu, Yang Xiao, Byungjae Lee, Yilong Li, Qianni Zhang, Shujiao Sun, Yushan Zheng, Junyu Yan, Ni Li, Yiyu Hong, Junsu Ko, Hyun Jung, Yanling Liu, Yu-cheng Chen, Ching-wei Wang, Vladimir Yurovskiy, Pavel Maevskikh, Vahid Khanagha, Yi Jiang, Li Yu, Zhihong Liu, Daiqiang Li, Peter J. Schuffler, Qifeng Yu, Hui Chen, Yuling Tang, Geert Litjens

Accurate segmentation of lung cancer in pathology slides is a critical step in improving patient care. We proposed the ACDC@LungHP (Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology) challenge for evaluating different computer-aided diagnosis (CAD) methods on the automatic diagnosis of lung cancer. The ACDC@LungHP 2019 focused on segmentation (pixel-wise detection) of cancer tissue in whole-slide imaging (WSI), using an annotated dataset of 150 training images and 50 test images from 200 patients. This paper reviews the challenge and summarizes the top 10 submitted methods for lung cancer segmentation. All methods were evaluated using precision, accuracy, sensitivity, specificity, and the DICE coefficient (DC). The DC ranged from 0.7354 ± 0.1149 to 0.8372 ± 0.0858. The DC of the best method was close to the inter-observer agreement (0.8398 ± 0.0890). All methods were based on deep learning and were categorized into two groups: multi-model methods and single-model methods. In general, multi-model methods were significantly better (p < 0.01) than single-model methods, with mean DCs of 0.7966 and 0.7544, respectively. Deep learning based methods could potentially help pathologists find suspicious regions for further analysis of lung cancer in WSI.
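To make the evaluation metrics concrete, the sketch below computes precision, accuracy, sensitivity, specificity, and the Dice coefficient (DC = 2·TP / (2·TP + FP + FN)) from a predicted and a ground-truth binary mask. This is not the challenge's official evaluation code; the function name `segmentation_metrics` and the toy masks are illustrative, and it assumes pixel-wise binary masks (1 = cancer, 0 = background).

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> dict:
    """Pixel-wise metrics for binary segmentation masks (1 = cancer, 0 = background)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # cancer pixels correctly detected
    tn = np.logical_and(~pred, ~gt).sum()    # background pixels correctly rejected
    fp = np.logical_and(pred, ~gt).sum()     # background pixels flagged as cancer
    fn = np.logical_and(~pred, gt).sum()     # cancer pixels missed
    return {
        "precision":   tp / (tp + fp + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "dice":        2 * tp / (2 * tp + fp + fn + eps),  # DC = 2|A∩B| / (|A| + |B|)
    }

if __name__ == "__main__":
    # Toy 4x4 example: prediction misses one of the four cancer pixels.
    gt   = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
    pred = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
    print(segmentation_metrics(pred, gt))  # dice ≈ 0.857
```

In practice these counts would be accumulated over tiled patches of each WSI before the per-slide metrics are computed, since a whole slide rarely fits in memory at full resolution.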

Updated: 2020-11-20