A Neural Network Architecture Optimizer Based on DARTS and Generative Adversarial Learning
Information Sciences (IF 8.1), Pub Date: 2021-09-17, DOI: 10.1016/j.ins.2021.09.041
Ting Zhang, Muhammad Waqas, Hao Shen, Zhaoying Liu, Xiangyu Zhang, Yujian Li, Zahid Halim, Sheng Chen

Neural network architecture search automatically configures a set of network architectures according to targeted rules. It thus reduces the manual effort and repetitive resource consumption of designing neural network architectures and makes the task of finding an optimal, better-performing network architecture far more accessible. However, network architecture search methods based on differentiable architecture search (DARTS) introduce parameter redundancy. To address this issue, this work presents a novel method for optimizing network architectures that combines DARTS with generative adversarial learning (GAL). We first find the module structures using the DARTS algorithm. The retrieved modules are then stacked to derive the initial neural network architecture. Next, GAL is used to prune some branches of the initial neural network, yielding the final neural network architecture. The proposed DARTS-GAL method re-optimizes the network architecture searched by DARTS to simplify the network connections and reduce network parameters without compromising network performance. Experimental results on benchmark datasets, i.e., the Modified National Institute of Standards and Technology (MNIST), FashionMNIST, Canadian Institute for Advanced Research 10 (CIFAR10), Canadian Institute for Advanced Research 100 (CIFAR100), Cats vs Dogs, and voiceprint recognition datasets, indicate that the test accuracies of DARTS-GAL are higher than those of DARTS in the majority of cases. In particular, the proposed solution improves accuracy on CIFAR10 by 7.35% compared with DARTS, attaining a state-of-the-art result of 99.60%. Additionally, the number of network parameters derived by DARTS-GAL is significantly lower than that of DARTS, with a pruning rate of up to 62.3%.
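
The pipeline described in the abstract (a DARTS cell search followed by adversarial pruning of branches) can be illustrated with a short PyTorch sketch. This is not the authors' implementation: the class names (MixedOp, MaskedCell, Discriminator) and hyperparameters (e.g., sparsity_weight) are illustrative assumptions. It shows (1) the DARTS-style continuous relaxation, where a softmax over architecture parameters weights candidate operations, and (2) a GAL-style pruning step, where a learnable soft mask scales branch outputs, a discriminator tries to tell baseline features from masked ones, and an L1 penalty drives mask entries toward zero so the corresponding branches can be removed.

```python
# A minimal, self-contained sketch (not the authors' code) of the two ideas the
# abstract combines. All names (MixedOp, MaskedCell, Discriminator,
# sparsity_weight) are illustrative assumptions, not identifiers from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """DARTS-style continuous relaxation: output = sum_k softmax(alpha)_k * op_k(x)."""

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.Identity(),                              # skip connection
            nn.MaxPool2d(3, stride=1, padding=1),
        ])
        # One architecture parameter per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


class MaskedCell(nn.Module):
    """Branches scaled by a learnable soft mask; branches whose mask entry
    stays near zero after adversarial training can be pruned away."""

    def __init__(self, channels, num_branches=4):
        super().__init__()
        self.branches = nn.ModuleList(MixedOp(channels) for _ in range(num_branches))
        self.mask = nn.Parameter(torch.ones(num_branches))

    def forward(self, x):
        return sum(m * b(x) for m, b in zip(self.mask, self.branches))


class Discriminator(nn.Module):
    """Distinguishes baseline (unpruned) feature maps from masked (pruned) ones."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1))

    def forward(self, feat):
        return self.net(feat)


def gal_pruning_step(pruned, baseline, disc, x, opt_p, opt_d, sparsity_weight=1e-2):
    """One adversarial pruning step: the discriminator separates baseline from
    pruned features, the pruned cell learns to fool it, and an L1 penalty on the
    soft mask pushes branch weights toward zero."""
    real, fake = torch.ones(x.size(0), 1), torch.zeros(x.size(0), 1)
    with torch.no_grad():
        real_feat = baseline(x)                         # features of the unpruned network
    fake_feat = pruned(x)                               # features of the masked network

    # Discriminator update.
    d_loss = (F.binary_cross_entropy_with_logits(disc(real_feat), real)
              + F.binary_cross_entropy_with_logits(disc(fake_feat.detach()), fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Pruned-network ("generator") update: fool the discriminator + sparsify the mask.
    g_loss = (F.binary_cross_entropy_with_logits(disc(fake_feat), real)
              + sparsity_weight * pruned.mask.abs().sum())
    opt_p.zero_grad(); g_loss.backward(); opt_p.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    C = 16
    baseline = MaskedCell(C)                            # stands in for the DARTS-found network
    pruned = MaskedCell(C)                              # copy whose mask is sparsified
    pruned.load_state_dict(baseline.state_dict())
    disc = Discriminator(C)
    opt_p = torch.optim.Adam(pruned.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    x = torch.randn(8, C, 32, 32)
    print(gal_pruning_step(pruned, baseline, disc, x, opt_p, opt_d))
```

In such a sketch, branches whose mask magnitude falls below a chosen threshold after training would be dropped, which corresponds to the parameter reduction the abstract reports.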



Updated: 2021-09-17