Fully automated explainable abdominal CT contrast media phase classification using organ segmentation and machine learning
Medical Physics ( IF 3.8 ) Pub Date : 2024-04-17 , DOI: 10.1002/mp.17076
Yazdan Salimi 1 , Zahra Mansouri 1 , Ghasem Hajianfar 1 , Amirhossein Sanaat 1 , Isaac Shiri 1, 2 , Habib Zaidi 1, 3, 4, 5
Background: Contrast-enhanced computed tomography (CECT) provides much more information than non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. Contrast media injection phase information is usually missing from public datasets and is not standardized in the clinic, even within the same region and language. This is a barrier to the effective use of available CECT images in clinical research.

Purpose: The aim of this study is to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms.

Methods: A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) after contrast media injection, were collected from two CT scanners. Masks of seven organs, including the liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta, along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features (mean, standard deviation, and the 10th, 50th, and 90th percentiles) extracted from the above-mentioned masks were fed to machine learning models, after feature selection and reduction, to classify each CT image into one of the four classes. A 10-fold data split strategy was followed. The performance of our methodology was evaluated in terms of classification accuracy metrics.

Results: The best performance was achieved by Boruta feature selection combined with a random forest (RF) model, with an average area under the curve above 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest per-class accuracy was observed for class #2 (0.9888), which is still an excellent result. Across the 10 folds, only 33 of 2509 cases (~1.4%) were misclassified. Performance was consistent over all folds.

Conclusions: We developed a fast, accurate, reliable, and explainable methodology to classify contrast media phases, which may be useful for data curation and annotation in large online datasets or in local datasets with non-standard or missing series descriptions. Our model, comprising a deep learning step and a machine learning step, may help exploit available datasets more effectively.
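The pipeline described above (per-organ first-order features followed by a 10-fold random-forest classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the organ masks here are simulated rather than produced by a pre-trained segmentation model, the data are synthetic, the Boruta feature-selection step is omitted, and all names and shapes are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Seven organ masks plus the body contour, as in the paper.
ORGANS = ["liver", "spleen", "heart", "kidneys", "lungs", "bladder", "aorta", "body"]

def first_order_features(ct, mask):
    """Five first-order statistics of the CT voxels inside one mask."""
    vox = ct[mask]
    return [vox.mean(), vox.std(),
            np.percentile(vox, 10), np.percentile(vox, 50), np.percentile(vox, 90)]

def feature_vector(ct, masks):
    """Concatenate the five statistics over all masks (8 x 5 = 40 features)."""
    return np.concatenate([first_order_features(ct, masks[o]) for o in ORGANS])

# --- synthetic stand-in data: 200 scans across the 4 contrast phases ---
n_scans, shape = 200, (16, 32, 32)
X, y = [], []
for i in range(n_scans):
    phase = i % 4                                       # 0: non-contrast ... 3: delayed
    ct = rng.normal(40 + 30 * phase, 15, size=shape)    # phase shifts mean intensity
    masks = {o: rng.random(shape) > 0.5 for o in ORGANS}
    X.append(feature_vector(ct, masks))
    y.append(phase)
X, y = np.array(X), np.array(y)

# --- 10-fold cross-validated random-forest phase classifier ---
accs = []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[tr], y[tr])
    accs.append(accuracy_score(y[te], clf.predict(X[te])))
print(f"mean 10-fold accuracy: {np.mean(accs):.3f}")
```

Because the synthetic phases differ only by a mean intensity shift, even these simple first-order features separate them well; on real CT data the same statistics capture how contrast media changes organ attenuation across phases.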
