MultiSDGAN: Translation of OCT Images to Superresolved Segmentation Labels Using Multi-Discriminators in Multi-Stages
IEEE Journal of Biomedical and Health Informatics (IF 6.7), Pub Date: 2021-09-13, DOI: 10.1109/jbhi.2021.3110265
Paria Jeihouni 1, Omid Dehzangi 2, Annahita Amireskandari 3, Ali Rezai 4, Nasser M. Nasrabadi 1

Optical coherence tomography (OCT) has been identified as a non-invasive and inexpensive imaging modality for discovering potential biomarkers for Alzheimer’s diagnosis and the assessment of its progression. Current hypotheses posit the thickness of the retinal layers, which can be analyzed in OCT scans, as an effective biomarker for the presence of Alzheimer’s. As a logical first step, this work concentrates on the accurate segmentation of the retinal layers so that they can be isolated for further analysis. This paper proposes a generative adversarial network (GAN) that jointly learns to increase image resolution for higher clarity and to segment the retinal layers. We propose a multi-stage, multi-discriminator generative adversarial network (MultiSDGAN) specifically for superresolution and segmentation of retinal-layer OCT scans. The resulting generator is adversarially trained against multiple discriminator networks at multiple stages. We aim to avoid early saturation of generator training, which leads to poor segmentation accuracy, and to enhance the OCT domain-translation process by satisfying all the discriminators at multiple scales. We also investigate incorporating the Dice loss and the structural similarity index measure (SSIM) as additional loss functions to specifically target and improve the segmentation and superresolution performance of the proposed GAN architecture, respectively. The ablation study conducted on our data set suggests that the proposed MultiSDGAN with ten-fold cross-validation (10-CV) reduces the equal error rate, with relative improvements of 44.24% and 34.09%, respectively (p-values of the improvement-level tests < .01). Furthermore, our experimental results demonstrate that adding the new terms to the loss function significantly improves the segmentation results, with a relative improvement of 31.33% (p-value < .01).
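The composite objective described in the abstract (adversarial feedback from several discriminators at several stages, plus a Dice term for segmentation and an SSIM term for superresolution) can be illustrated with a minimal PyTorch-style sketch. This is not the authors’ implementation: the loss weights, the per-stage discriminator interface, and the choice to apply the Dice and SSIM terms only to the final, highest-resolution stage are assumptions made purely for illustration.

    # Hypothetical sketch of a multi-stage, multi-discriminator generator loss.
    # All names, weights, and shapes are illustrative, not taken from the paper.
    import torch
    import torch.nn.functional as F

    def dice_loss(pred, target, eps=1e-6):
        """Soft Dice loss over per-pixel class probabilities (N, C, H, W)."""
        inter = (pred * target).sum(dim=(2, 3))
        union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2, window=11):
        """1 - SSIM with a uniform local window (simplified, single scale)."""
        pad = window // 2
        mu_x = F.avg_pool2d(x, window, 1, pad)
        mu_y = F.avg_pool2d(y, window, 1, pad)
        var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
        var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
        cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
        ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
        return 1.0 - ssim.mean()

    def generator_loss(stage_outputs, targets, discriminators,
                       lambda_adv=1.0, lambda_dice=10.0, lambda_ssim=5.0):
        """Aggregate loss for a multi-stage generator, one discriminator per stage.

        stage_outputs:  list of (N, C, H_s, W_s) generator outputs, one per stage.
        targets:        list of matching ground-truth label maps at the same scales.
        discriminators: list of networks, one per stage, each returning realness logits.
        """
        loss = 0.0
        for out, tgt, disc in zip(stage_outputs, targets, discriminators):
            # Non-saturating adversarial term: the generator tries to make this
            # stage's output look real to its dedicated discriminator.
            realness = disc(out)
            loss = loss + lambda_adv * F.binary_cross_entropy_with_logits(
                realness, torch.ones_like(realness))
        # Task terms applied to the final (highest-resolution) stage only,
        # as one plausible reading of the abstract; other weightings are possible.
        final_out, final_tgt = stage_outputs[-1], targets[-1]
        loss = loss + lambda_dice * dice_loss(final_out, final_tgt)
        loss = loss + lambda_ssim * ssim_loss(final_out, final_tgt)
        return loss

Because every stage contributes an adversarial term, the generator cannot satisfy only the coarsest discriminator and stall there; it must keep all discriminators satisfied across scales, which is the saturation-avoidance argument the abstract makes.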

Updated: 2021-09-13