Joint training of two-channel deep neural network for brain tumor classification
Signal, Image and Video Processing (IF 2.0) Pub Date: 2020-10-08, DOI: 10.1007/s11760-020-01793-2
Jyostna Devi Bodapati, Nagur Shareef Shaik, Veeranjaneyulu Naralasetti, Nirupama Bhat Mundukur

Brain tumor recognition is a challenging task, and accurate diagnosis increases the chance of patient survival. In this article, we propose a two-channel deep neural network architecture for tumor classification that generalizes better. First, local feature representations are extracted from the convolution blocks of the InceptionResNetV2 and Xception networks and vectorized using the proposed pooling-based techniques. An attention mechanism is introduced that places more focus on tumor regions and less on non-tumor regions, which ultimately helps differentiate the type of tumor present in the images. The proposed two-channel model allows the two sets of tumor image representations to be trained jointly in an end-to-end manner to achieve good generalization. Empirical studies on the Figshare and BraTS 2018 benchmark datasets reveal that our approach generalizes better, and uses fewer layers, than existing complex models based on fine-tuning deep CNN models. While avoiding heavy preprocessing and augmentation, the proposed model sets new state-of-the-art scores on both brain tumor datasets.
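As a rough illustration of the two-channel idea described in the abstract, the sketch below fuses pooled convolutional features from InceptionResNetV2 and Xception with a simple gating-style attention and trains the fused classifier end to end in Keras. The pooling choice, the form of the attention, and all hyperparameters are assumptions for illustration only, not the authors' exact design.

```python
# Minimal sketch (not the authors' code): a two-channel model that fuses
# InceptionResNetV2 and Xception convolutional features via global pooling
# and a simple per-dimension attention gate, trained jointly end to end.
# Input size, attention form, and classifier head are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def build_two_channel_model(input_shape=(224, 224, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Channel 1: InceptionResNetV2 convolutional base (no classification head)
    base1 = applications.InceptionResNetV2(include_top=False, weights="imagenet")
    # Channel 2: Xception convolutional base (no classification head)
    base2 = applications.Xception(include_top=False, weights="imagenet")

    # Vectorize each channel's local feature maps (pooling-based representation)
    feat1 = layers.GlobalAveragePooling2D()(base1(inputs))
    feat2 = layers.GlobalAveragePooling2D()(base2(inputs))

    # Attention-style gating: learn per-dimension weights emphasizing
    # tumor-relevant features in each channel
    att1 = layers.Dense(feat1.shape[-1], activation="sigmoid")(feat1)
    att2 = layers.Dense(feat2.shape[-1], activation="sigmoid")(feat2)
    feat1 = layers.Multiply()([feat1, att1])
    feat2 = layers.Multiply()([feat2, att2])

    # Joint representation and classifier, trained end to end
    fused = layers.Concatenate()([feat1, feat2])
    x = layers.Dense(512, activation="relu")(fused)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping GlobalAveragePooling2D for a different pooling-based vectorization, or the sigmoid gate for a spatial attention map, would only change the marked lines; the two-channel joint-training structure stays the same.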

Updated: 2020-10-08