Multiview multimodal network for breast cancer diagnosis in contrast-enhanced spectral mammography images
International Journal of Computer Assisted Radiology and Surgery (IF 2.3) Pub Date: 2021-05-08, DOI: 10.1007/s11548-021-02391-4
Jingqi Song, Yuanjie Zheng, Muhammad Zakir Ullah, Junxia Wang, Yanyun Jiang, Chenxi Xu, Zhenxing Zou, Guocheng Ding

Purpose

Contrast-enhanced spectral mammography (CESM) is an effective tool for detecting breast cancer because each examination yields images from multiple views and multiple modalities. However, few existing deep learning-based methods for breast cancer classification integrate both of these feature types. To utilize the image features of CESM effectively, and thereby help physicians improve diagnostic accuracy, we propose a multiview multimodal network (MVMM-Net).

Methods

Experiments are carried out on an in-house dataset of 760 CESM images acquired from 95 patients aged 21–74 years. The framework consists of three main stages: model input, image feature extraction, and image classification. The first stage preprocesses the CESM images so that their multiview and multimodal features can be utilized effectively. In the feature extraction stage, a deep learning-based network extracts features from the CESM images. The final stage integrates the different features for classification with the MVMM-Net model.
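As a rough illustration of the multiview multimodal idea described above, the sketch below encodes each CESM view/modality with its own CNN branch and fuses the features for a benign/malignant prediction. This is not the authors' code: the branch count, the late-fusion-by-concatenation design, and the classifier head are assumptions, and a plain torchvision ResNet50 stands in for the Res2Net50 backbone reported in the paper.

```python
# Minimal sketch of a multiview multimodal classifier (assumed design,
# not the published MVMM-Net implementation).
import torch
import torch.nn as nn
from torchvision import models


class MVMMNetSketch(nn.Module):
    def __init__(self, num_branches: int = 4, num_classes: int = 2):
        super().__init__()
        # One CNN encoder per view/modality branch. ResNet50 is used here
        # because it ships with torchvision; the paper reports Res2Net50.
        self.branches = nn.ModuleList()
        for _ in range(num_branches):
            backbone = models.resnet50(weights=None)
            backbone.fc = nn.Identity()  # keep the 2048-d pooled features
            self.branches.append(backbone)
        # Late fusion: concatenate the branch features, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2048 * num_branches, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, views):
        # `views` is a list of tensors, one (B, 3, H, W) tensor per branch.
        feats = [branch(x) for branch, x in zip(self.branches, views)]
        return self.classifier(torch.cat(feats, dim=1))


# Example forward pass with four dummy 224x224 inputs.
model = MVMMNetSketch()
dummy = [torch.randn(2, 3, 224, 224) for _ in range(4)]
logits = model(dummy)  # shape (2, num_classes)
```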

Results

On the test set, the proposed method with a Res2Net50 backbone achieves an accuracy of 96.591%, a sensitivity of 96.396%, a specificity of 96.350%, a precision of 96.833%, an F1 score of 0.966, and an AUC of 0.966. Comparative experiments demonstrate that using multiview multimodal features improves the classification performance of the model.
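For reference, the reported metrics are the standard binary-classification quantities and can be computed as in the short sketch below; the label and score arrays are placeholders, not the paper's data.

```python
# Accuracy, sensitivity, specificity, precision, F1 score, and AUC for a
# binary benign/malignant classifier (illustrative inputs only).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # ground truth
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.1, 0.7, 0.4])   # malignancy scores
y_pred = (y_score >= 0.5).astype(int)                          # thresholded labels

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)       # recall on the malignant class
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
auc = roc_auc_score(y_true, y_score)

print(accuracy, sensitivity, specificity, precision, f1, auc)
```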

Conclusion

We propose a deep learning classification model that combines multiple features of CESM. The experimental results indicate that our method is more accurate than state-of-the-art methods and produces reliable classifications of CESM images.



