A New Distributed Method for Training Generative Adversarial Networks
arXiv - CS - Networking and Internet Architecture. Pub Date: 2021-07-19. arXiv:2107.08681
Jinke Ren, Chonghe Liu, Guanding Yu, Dongning Guo

Generative adversarial networks (GANs) are emerging machine learning models for generating synthesized data similar to real data by jointly training a generator and a discriminator. In many applications, data and computational resources are distributed over many devices, so centralized computation with all data in one location is infeasible due to privacy and/or communication constraints. This paper proposes a new framework for training GANs in a distributed fashion: Each device computes a local discriminator using local data; a single server aggregates their results and computes a global GAN. Specifically, in each iteration, the server sends the global GAN to the devices, which then update their local discriminators; the devices send their results to the server, which then computes their average as the global discriminator and updates the global generator accordingly. Two different update schedules are designed with different levels of parallelism between the devices and the server. Numerical results obtained using three popular datasets demonstrate that the proposed framework can outperform a state-of-the-art framework in terms of convergence speed.
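The iteration described above (server broadcasts the global GAN, devices refine local discriminators on local data, server averages them into the global discriminator and then updates the generator) can be sketched as follows. This is a minimal toy simulation, not the authors' implementation: parameters are plain vectors, and the local discriminator step and the generator step are placeholder gradient-like updates chosen only to illustrate the message flow of the serial schedule.

```python
import numpy as np

def train_distributed_gan(num_devices=3, num_rounds=5, lr=0.1, dim=4, seed=0):
    """Toy sketch of one update schedule: per round, the server sends the
    global GAN to all devices, each device updates its local discriminator
    using local data, and the server averages the local discriminators and
    updates the global generator accordingly."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=dim)  # global generator parameters (toy vector)
    d = rng.normal(size=dim)  # global discriminator parameters (toy vector)
    # Each device's private data, summarized here by a target vector.
    local_targets = [rng.normal(size=dim) for _ in range(num_devices)]

    for _ in range(num_rounds):
        # 1) Server broadcasts (g, d); each device updates its local
        #    discriminator. Placeholder rule: pull d toward local data.
        local_ds = []
        for target in local_targets:
            d_k = d - lr * (d - target)
            local_ds.append(d_k)
        # 2) Devices send results back; server averages them to form
        #    the new global discriminator.
        d = np.mean(local_ds, axis=0)
        # 3) Server updates the global generator against the averaged
        #    discriminator (placeholder adversarial step).
        g = g - lr * (g - d)
    return g, d
```

In the parallel variant the paper alludes to, step 3 could overlap with the next round's local updates; the sketch above shows only the fully serial schedule.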

Updated: 2021-07-20