ApGAN: Approximate GAN for Robust Low Energy Learning from Imprecise Components
IEEE Transactions on Computers (IF 3.7), Pub Date: 2020-03-01, DOI: 10.1109/tc.2019.2949042
Arman Roohi, Shadi Sheikhfaal, Shaahin Angizi, Deliang Fan, Ronald F. DeMara

A Generative Adversarial Network (GAN) is an adversarial learning approach that empowers conventional deep learning methods by alleviating the demand for massive labeled datasets. However, GAN training can be computationally intensive, limiting its feasibility on resource-limited edge devices. In this paper, we propose an approximate GAN (ApGAN) for accelerating GANs from both the algorithm and hardware-implementation perspectives. First, inspired by the binary-pattern feature-extraction method along with binarized representation entropy, the existing Deep Convolutional GAN (DCGAN) algorithm is modified by binarizing the weights of a specific portion of layers within both the generator and discriminator models. Further reduction in storage and computation resources is achieved by leveraging a novel hardware-configurable in-memory addition scheme, which can operate in accurate and approximate modes. Finally, a memristor-based processing-in-memory accelerator for ApGAN is developed. The performance of the ApGAN accelerator is evaluated on datasets such as Fashion-MNIST, CIFAR-10, STL-10, and CelebA, and compared with recent GAN accelerator designs. With almost the same Inception Score (IS) as the baseline GAN, the ApGAN accelerator increases energy efficiency by ∼28.6×, achieving a 35-fold speedup compared with a baseline GPU platform. Additionally, it shows 2.5× higher energy efficiency and a 5.8× speedup over a CMOS-ASIC accelerator, subject to an 11 percent reduction in IS.
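To make the layer-selective binarization step more concrete, the sketch below shows one way such partial weight binarization could be implemented in PyTorch. This is an illustration only, not the authors' code: the sign-based binarization with a per-layer scaling factor, the straight-through gradient trick, and the names `BinarizedConv2d` and the toy discriminator are assumptions in the spirit of BinaryConnect-style training.

```python
# Minimal sketch (assumed, not from the paper): binarize the weights of only a
# chosen subset of convolutional layers, keeping the rest in full precision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizedConv2d(nn.Conv2d):
    """Conv2d whose weights are binarized to {-alpha, +alpha} in the forward pass.
    The full-precision weights are retained for the optimizer update
    (straight-through estimator)."""
    def forward(self, x):
        alpha = self.weight.abs().mean()          # per-layer scaling factor
        w_bin = torch.sign(self.weight) * alpha   # binary copy of the weights
        # Forward uses w_bin; gradients flow to the real-valued self.weight.
        w = self.weight + (w_bin - self.weight).detach()
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Toy DCGAN-style discriminator in which only the middle layer is binarized;
# the first and last layers stay in full precision (the "specific portion" of layers).
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    BinarizedConv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, stride=1, padding=0),
)
```

The configurable accurate/approximate addition can be modeled in a similar spirit. The bit-level sketch below approximates the lowest bit positions with a carry-free OR while adding the upper bits exactly; this mirrors the general idea of lower-part approximate adders and is not necessarily the exact in-memory scheme used by ApGAN.

```python
def configurable_add(a: int, b: int, approx_bits: int = 0, width: int = 16) -> int:
    """Bit-level model of a configurable adder (illustrative assumption).

    The lowest `approx_bits` positions are approximated with a bitwise OR
    (no carry propagation), while the upper bits are added exactly;
    approx_bits=0 reproduces the fully accurate mode."""
    mask_lo = (1 << approx_bits) - 1
    lo = (a | b) & mask_lo                  # cheap, carry-free lower part
    hi = (a & ~mask_lo) + (b & ~mask_lo)    # exact upper part
    return (hi | lo) & ((1 << width) - 1)   # truncate to the adder width

assert configurable_add(5, 3) == 8              # accurate mode: 5 + 3 = 8
print(configurable_add(5, 3, approx_bits=4))    # prints 7: the lower-bit carry is dropped
```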
