A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset
PeerJ Computer Science (IF 3.5). Pub Date: 2021-02-10. DOI: 10.7717/peerj-cs.364
Omar M. Elzeki, Mohamed Abd Elfattah, Hanaa Salem, Aboul Ella Hassanien, Mahmoud Shams

Background and Purpose: COVID-19 is a new viral strain that has brought life to a standstill worldwide. The novel coronavirus is spreading rapidly across the world and poses a threat to people's health. Experimental medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although chest computed tomography (CT) is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly because of its lower cost and faster turnaround. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze CXR images, but its performance depends critically on having a large number of such images.

Materials and Methods: In this article, we propose a novel perceptual two-layer image fusion using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the proposed algorithm's performance, the dataset used in this work comprises 87 CXR images acquired from 25 cases, all confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of convolutional neural networks (CNN); thus, a hybrid decomposition and fusion scheme combining the Nonsubsampled Contourlet Transform (NSCT) with CNN_VGG19 as a feature extractor was used.

Results: Our experimental results show that the algorithm established here can reliably augment the imbalanced COVID-19 dataset. Compared to the original COVID-19 images, the fused images carry more features and characteristics. Six metrics (QAB/F, QMI, PSNR, SSIM, SF, and STD) are applied to evaluate various medical image fusion (MIF) methods. The proposed NSCT + CNN_VGG19 algorithm achieves the highest QMI, PSNR, and SSIM scores, and its fused images contain the richest feature content.
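The two-layer decompose-and-fuse idea can be illustrated with a deliberately simplified sketch. The paper's pipeline uses NSCT sub-bands and VGG19 feature maps to drive the fusion; the version below substitutes a plain box blur for the low-pass stage and simple fusion rules (average the base layers, keep the stronger detail) purely to show the structure of such an algorithm. The names `box_blur` and `two_layer_fuse` are illustrative, not from the paper.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box filter, standing in here for the NSCT low-pass stage."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_layer_fuse(img_a, img_b, k=5):
    """Split each image into a low-frequency base layer and a high-frequency
    detail layer, then fuse the layers separately: bases are averaged,
    details are selected by maximum absolute value."""
    base_a, base_b = box_blur(img_a, k), box_blur(img_b, k)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)
    return fused_base + fused_detail
```

In a DL-driven variant, the max-abs detail rule would be replaced by weights derived from CNN feature activations, which is the role VGG19 plays in the proposed method.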
We can therefore conclude that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that give examiners a more useful view of patient status.

Conclusions: A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed NSCT + CNN_VGG19 algorithm outperforms competing image fusion algorithms.
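Several of the reported quality measures have standard closed-form definitions. A minimal NumPy sketch of three of them (PSNR, SF, STD) is shown below, assuming 8-bit grayscale arrays; QAB/F, QMI, and SSIM involve gradient- and window-based terms and are omitted here for brevity.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def spatial_frequency(img):
    """Spatial frequency (SF): overall activity measured from
    row-wise and column-wise intensity differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def std_metric(img):
    """Standard deviation (STD): intensity spread, a proxy for contrast."""
    return float(np.std(img.astype(float)))
```

Higher SF and STD indicate a fused image with more detail and contrast, which is consistent with how the paper interprets these metrics.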

Updated: 2021-02-10