A Systematic Survey of Regularization and Normalization in GANs
ACM Computing Surveys (IF 23.8) · Pub Date: 2023-02-09 · DOI: 10.1145/3569928
Ziqiang Li, Muhammad Usman, Rentuo Tao, Pengfei Xia, Chaoyue Wang, Huanhuan Chen, Bin Li
Generative Adversarial Networks (GANs) have been widely applied across different scenarios thanks to the development of deep neural networks. The original GAN was proposed under the non-parametric assumption that networks have infinite capacity. However, it remains unknown whether GANs can fit the target distribution without any prior information. Owing to this overly strong assumption, many issues in GAN training remain unaddressed, such as non-convergence, mode collapse, and vanishing gradients. Regularization and normalization are common means of introducing prior information to stabilize training and improve discrimination. Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge there exists no comprehensive survey that primarily focuses on the objectives and development of these methods, apart from a few partial, limited-scope studies. In this work, we conduct a comprehensive survey of regularization and normalization techniques from different perspectives on GAN training. First, we systematically describe these different perspectives and thereby derive the different objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and investigate the applications of regularization and normalization techniques that are frequently employed in state-of-the-art GANs. Finally, we highlight potential future directions of research in this domain. Code and studies related to the regularization and normalization of GANs in this work are summarized at https://github.com/iceli1007/GANs-Regularization-Review.
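As a concrete illustration of the normalization family the survey covers, spectral normalization constrains a discriminator layer's Lipschitz constant by dividing its weight matrix by an estimate of its largest singular value, typically obtained via power iteration. The sketch below (NumPy, not any particular framework's API; function name and iteration count are illustrative choices) shows the core computation:

```python
import numpy as np

def largest_singular_value(W, n_iters=100, seed=0):
    """Estimate the largest singular value of W via power iteration,
    the estimator used by spectral normalization to bound a layer's
    Lipschitz constant."""
    u = np.random.default_rng(seed).standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    # Rayleigh-quotient estimate of sigma_max(W)
    return u @ W @ v

rng = np.random.default_rng(42)
W = rng.standard_normal((64, 128))        # a stand-in discriminator weight
sigma = largest_singular_value(W)
W_sn = W / sigma                          # normalized: sigma_max(W_sn) ~= 1
```

In practice, frameworks update the power-iteration vectors incrementally (one iteration per training step) rather than re-running them to convergence, since the weights change only slightly between updates.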




Updated: 2023-02-09