Deep learning can generate traditional retinal fundus photographs using ultra-widefield images via generative adversarial networks.
Computer Methods and Programs in Biomedicine (IF 6.1), Pub Date: 2020-09-16, DOI: 10.1016/j.cmpb.2020.105761
Tae Keun Yoo, Ik Hee Ryu, Jin Kuk Kim, In Sik Lee, Jung Sub Kim, Hong Kyu Kim, Joon Yul Choi

Background and objective

Retinal imaging has two major modalities, traditional fundus photography (TFP) and ultra-widefield fundus photography (UWFP). This study demonstrates the feasibility of a state-of-the-art deep learning-based domain transfer from UWFP to TFP.

Methods

A cycle-consistent generative adversarial network (CycleGAN) was used to automatically translate UWFP images to the TFP domain. The model was trained on an unpaired dataset of 451 anonymized UWFP and 745 TFP images. To evaluate CycleGAN on held-out data, we randomly split the dataset into training (90%) and test (10%) sets. After automated image registration and masking of dark frames, the generator and discriminator networks were trained. An additional twelve publicly available paired TFP and UWFP images were used to compute intensity histograms and structural similarity (SSIM) indices.
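The CycleGAN objective that makes unpaired training possible combines adversarial losses with a cycle-consistency term, which penalizes a round trip (UWFP → TFP → UWFP, and vice versa) for not recovering the input. A minimal NumPy sketch of that consistency term, with toy invertible linear mappings standing in for the actual generator networks, might look like:

```python
import numpy as np

def cycle_consistency_loss(g_u2t, g_t2u, uwfp_batch, tfp_batch, lam=10.0):
    """L1 cycle-consistency loss used by CycleGAN:
    lam * (||G_T2U(G_U2T(u)) - u||_1 + ||G_U2T(G_T2U(t)) - t||_1)."""
    forward = np.mean(np.abs(g_t2u(g_u2t(uwfp_batch)) - uwfp_batch))
    backward = np.mean(np.abs(g_u2t(g_t2u(tfp_batch)) - tfp_batch))
    return lam * (forward + backward)

# Toy "generators" that are exact inverses, so the cycle loss is near zero.
g_u2t = lambda x: 2.0 * x + 1.0
g_t2u = lambda y: (y - 1.0) / 2.0

rng = np.random.default_rng(0)
u = rng.random((4, 8, 8))  # stand-in batch of UWFP images
t = rng.random((4, 8, 8))  # stand-in batch of TFP images
loss = cycle_consistency_loss(g_u2t, g_t2u, u, t)
print(loss)  # a value very close to 0, since the toy mappings invert each other
```

In the full CycleGAN, this term is added to the two discriminators' adversarial losses; the toy mappings here are purely illustrative and not part of the paper's model.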

Results

We observed that all UWFP images were successfully translated into TFP-style images by CycleGAN, and the main structural information of the retina and optic nerve was retained. The model did not generate spurious features in the output images. Average histograms demonstrated that the intensity distribution of the generated output images closely matched that of the ground truth images, with an average SSIM of 0.802.
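SSIM compares luminance, contrast, and structural statistics between two images; a value of 1.0 indicates identical images. A simplified global-statistics sketch of the index (standard implementations, such as the one presumably used here, average SSIM over local sliding windows) could be written as:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images (no local windowing).
    Uses the standard stabilizing constants c1 = (0.01*L)^2, c2 = (0.03*L)^2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(global_ssim(img, img))                  # identical images -> 1.0
print(global_ssim(img, rng.random((32, 32)))) # unrelated noise -> well below 1
```

For real evaluations a windowed implementation (e.g. scikit-image's `structural_similarity`) is preferable, since it captures local structural degradation that global statistics miss.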

Conclusions

Our approach enables automated synthesis of TFP images directly from UWFP without a manual pre-conditioning process. The generated TFP images may be useful to clinicians investigating the posterior pole and to researchers integrating TFP and UWFP databases. The approach may also save scan time and reduce costs for patients by avoiding additional examinations otherwise needed for an accurate diagnosis.




Updated: 2020-09-20