FABNet: Fusion Attention Block and Transfer Learning for Laryngeal Cancer Tumor Grading in P63 IHC Histopathology Images
IEEE Journal of Biomedical and Health Informatics (IF 7.7) · Pub Date: 2021-09-01 · DOI: 10.1109/jbhi.2021.3108999
Pan Huang 1, Xiaoheng Tan 1, Xiaoli Zhou 1, Shuxian Liu 2, Francesco Mercaldo 3, Antonella Santone 3

Laryngeal cancer tumor (LCT) grading in P63 immunohistochemical (IHC) histopathology images is challenging because the differences between LCT grades are small, lesion regions of interest (LROIs) are imprecise, and LCT pathology image samples are scarce. The key to solving the LCT grading problem is to transfer knowledge from other images and to identify more accurate LROIs, but two problems arise: 1) transferring knowledge without a priori experience often causes negative transfer and creates a heavy workload due to the abundance of image types, and 2) convolutional neural networks (CNNs) that build deep models by stacking layers cannot sufficiently identify LROIs, often deviate significantly from the LROIs that experienced pathologists focus on, and are prone to providing misleading second opinions. We therefore propose a novel fusion attention block network (FABNet) to address these problems. First, we propose a model transfer method based on clinical a priori experience and sample analysis (CPESA). It analyzes transferability by integrating clinical a priori experience through indicators such as the relationship between the cancer onset location and morphology, and the texture and staining degree of cell nuclei in histopathology images; it further validates these indicators against the probability distribution of cancer image samples. Second, we propose a fusion attention block (FAB) structure that both provides an advanced non-uniform sparse representation of images and extracts spatial relationship information between nuclei, yielding LROIs that are more accurate and more relevant to pathologists. In extensive experiments, FABNet improved classification accuracy by 25% over the best baseline model, generalized better across different cancer pathology image datasets, and outperformed other state-of-the-art (SOTA) models.
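The abstract does not specify the internals of the fusion attention block. As a rough illustration of the general idea of fusing channel-wise and spatial attention over a feature map, the NumPy sketch below applies a per-channel gate and a per-pixel gate and multiplies both into the input; the function name and gating choices are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    """Numerically plain logistic gate in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def fusion_attention_block(feat):
    """Toy fused channel + spatial attention over a (C, H, W) feature map.

    Channel branch: global average pooling -> one gate per channel.
    Spatial branch: channel-wise mean map -> one gate per pixel,
    a crude stand-in for a spatial-relationship branch.
    The two gates are broadcast and multiplied into the input.
    """
    channel_gate = sigmoid(feat.mean(axis=(1, 2)))   # shape (C,)
    spatial_gate = sigmoid(feat.mean(axis=0))        # shape (H, W)
    return feat * channel_gate[:, None, None] * spatial_gate[None, :, :]

# Toy usage on a random "feature map" standing in for CNN activations.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
y = fusion_attention_block(x)
print(y.shape)  # (8, 16, 16): same shape, re-weighted activations
```

Because both gates lie in (0, 1), the block only re-weights activations; it never changes the feature-map shape, which lets such a block be dropped between existing CNN stages.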
