A Transfer Deep Generative Adversarial Network Model to Synthetic Brain CT Generation from MR Images
Wireless Communications and Mobile Computing (IF 2.146). Pub Date: 2021-04-26. DOI: 10.1155/2021/9979606
Yi Gu, Qiankun Zheng

Background. Medical image generation converts an existing medical image into one or more required target images, reducing both the time needed for diagnosis and the radiation dose a patient would receive from multiple scans. Research on medical image generation therefore has important clinical significance. Several methods exist in this field. For example, in image generation based on fuzzy C-means (FCM) clustering, the soft-membership idea at the core of FCM leaves the assignment of some tissues in the generated image uncertain; as a result, image details are unclear and the output quality is low. With the development of the generative adversarial network (GAN), many improved methods based on deep GAN models have appeared. Pix2Pix is a GAN model based on UNet; its core idea is to fit a deep neural network on paired images of two modalities, thereby generating high-quality images. Its disadvantage is a very strict data requirement: the two types of medical images must be paired one to one. DualGAN is a network model based on transfer learning; it cuts a 3D image into multiple 2D slices, translates each slice, and merges the generated results. Its disadvantage is that every generated 3D image contains bar-shaped "shadow" artifacts. Method/Material. To solve these problems while ensuring generation quality, this paper proposes a Dual3D&PatchGAN model based on transfer learning. Because Dual3D&PatchGAN builds on transfer learning, it requires no one-to-one paired data set — only two sets of medical images of the respective types — which has important practical significance for applications.
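The slice-and-merge pipeline attributed to DualGAN above can be sketched in a few lines of NumPy; the function names here (`slice_simulate_merge`, `simulate_slice`) are illustrative, not taken from the paper:

```python
import numpy as np

def slice_simulate_merge(volume, simulate_slice):
    """Run a 2D generator on each axial slice of a 3D volume,
    then stack the translated slices back into a 3D volume."""
    translated = [simulate_slice(volume[z]) for z in range(volume.shape[0])]
    return np.stack(translated, axis=0)

# Toy stand-in for the 2D generator: add a constant intensity offset.
volume = np.zeros((4, 8, 8))
out = slice_simulate_merge(volume, lambda s: s + 1.0)
print(out.shape)  # (4, 8, 8)
```

Because each slice is translated independently, nothing enforces consistency between adjacent slices, which is one plausible source of the bar-shaped "shadow" artifacts described above.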
The model eliminates the bar-shaped "shadows" present in DualGAN's generated images and also supports two-way conversion between the two image types. Results. Across multiple evaluation metrics, the experimental results show that Dual3D&PatchGAN is better suited to medical image generation than the compared models and produces higher-quality outputs.
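Two-way conversion between unpaired image sets is typically trained with a cycle-consistency term: an image mapped A→B→A should return to itself. A minimal NumPy sketch of that loss, with toy callables standing in for the two generators (assumed names, not from the paper):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """Mean L1 error after the round trip A -> B -> A; unpaired
    two-way translation models minimize this during training."""
    return np.mean(np.abs(g_ba(g_ab(x)) - x))

x = np.ones((8, 8))
# Toy generators that exactly invert each other, so the loss is zero.
loss = cycle_consistency_loss(x, lambda a: a * 2.0, lambda b: b / 2.0)
```

This term is what removes the need for one-to-one paired data: each generator is supervised by the reconstruction produced by the other, rather than by a matched ground-truth image.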
