Hierarchical Fused Model with Deep Learning and Type-2 Fuzzy Learning for Breast Cancer Diagnosis
IEEE Transactions on Fuzzy Systems (IF 11.9), Pub Date: 2020-12-01, DOI: 10.1109/tfuzz.2020.3013681
Tianyu Shen , Jiangong Wang , Chao Gou , Fei-Yue Wang

Breast cancer diagnosis based on medical imaging necessitates both fine-grained lesion segmentation and disease grading. Although deep learning (DL) offers an emerging and powerful paradigm of feature learning for these two tasks, its adoption in practical applications is hampered by the lack of interpretability, limited generalization ability, and the need for large labeled training sets. In this article, we propose a hierarchical fused model based on DL and fuzzy learning to overcome these drawbacks for pixelwise segmentation and disease grading of mammography breast images. The proposed system consists of a segmentation model (ResU-segNet) and a hierarchical fuzzy classifier (HFC) that fuses interval type-2 possibilistic fuzzy c-means with a fuzzy neural network. ResU-segNet segments the masks of mass regions from the images through convolutional neural networks, while the HFC encodes features from the mass images and masks to obtain the disease grade through fuzzy representation and rule-based learning. By integrating domain-knowledge-aided feature extraction with fuzzy learning, the system achieves favorable performance in a few-shot learning setting, and the deterioration of cross-dataset generalization is alleviated. In addition, interpretability is further enhanced. The effectiveness of the proposed system is evaluated on the publicly available INbreast mammogram database and a private database through cross-validation. Thorough comparative experiments are also conducted.
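The abstract describes a two-stage pipeline: a CNN produces a mass mask, hand-crafted features are extracted from the image and mask, and a fuzzy classifier assigns the grade. Below is a minimal sketch of that data flow, assuming placeholder components; the class names, the thresholding "segmenter," the three illustrative features, and the type-1 fuzzy-c-means-style grading are stand-ins for illustration only, not the authors' ResU-segNet or HFC.

```python
import numpy as np

class ResUSegNetStub:
    """Placeholder for the CNN segmentation stage (ResU-segNet).
    A real model would be a residual U-Net; here we just threshold intensities."""
    def predict_mask(self, mammogram: np.ndarray) -> np.ndarray:
        return (mammogram > mammogram.mean()).astype(np.uint8)

def extract_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Illustrative mass descriptors (relative area, mean intensity, contrast),
    standing in for the domain-knowledge features used by the paper's HFC."""
    region = image[mask.astype(bool)]
    area = mask.sum() / mask.size
    return np.array([area, region.mean(), region.std()])

class FuzzyGraderStub:
    """Placeholder for the hierarchical fuzzy classifier: soft memberships to
    per-grade prototypes (type-1 FCM-style), then argmax over memberships.
    The actual HFC uses interval type-2 possibilistic fuzzy c-means plus a
    fuzzy neural network, which is not reproduced here."""
    def __init__(self, prototypes: np.ndarray, m: float = 2.0):
        self.prototypes = prototypes   # one feature prototype per grade
        self.m = m                     # fuzzifier

    def grade(self, features: np.ndarray) -> int:
        d = np.linalg.norm(self.prototypes - features, axis=1) + 1e-12
        u = d ** (-2.0 / (self.m - 1.0))
        u /= u.sum()                   # normalized fuzzy memberships
        return int(np.argmax(u))

# Usage: mammogram -> mask -> features -> grade
img = np.random.rand(64, 64)
mask = ResUSegNetStub().predict_mask(img)
feats = extract_features(img, mask)
grader = FuzzyGraderStub(prototypes=np.random.rand(3, feats.size))
print("predicted grade:", grader.grade(feats))
```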

Updated: 2020-12-01