AS-NAS: Adaptive Scalable Neural Architecture Search With Reinforced Evolutionary Algorithm for Deep Learning
IEEE Transactions on Evolutionary Computation ( IF 11.7 ) Pub Date : 2021-02-23 , DOI: 10.1109/tevc.2021.3061466
Tong Zhang , Chunyu Lei , Zongyan Zhang , Xian-Bing Meng , C. L. Philip Chen

Neural architecture search (NAS) is a challenging problem in the design of deep learning due to its nonconvexity. To address this problem, an adaptive scalable NAS method (AS-NAS) is proposed based on the reinforced I-Ching divination evolutionary algorithm (IDEA) and a variable-architecture encoding strategy. First, unlike typical reinforcement learning (RL)-based and evolutionary algorithm (EA)-based NAS methods, a simplified RL algorithm is developed and used as the reinforced operator controller to adaptively select the efficient operators of IDEA. Without the complex actor–critic components, the reinforced IDEA based on simplified RL can enhance the search efficiency of the original EA at lower computational cost. Second, a variable-architecture encoding strategy is proposed that encodes a neural architecture as a fixed-length binary string. By simultaneously considering variable layers, channels, and connections between different convolution layers, the deep neural architecture becomes scalable. Through the integration of the reinforced IDEA with the variable-architecture encoding strategy, the design of the deep neural architecture can be adaptively scalable. Finally, the proposed AS-NAS is integrated with ${L}_{1/2}$ regularization to increase the sparsity of the optimized neural architecture. Experiments and comparisons demonstrate the effectiveness and superiority of the proposed method.
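To illustrate the variable-architecture encoding idea described in the abstract, the following is a minimal, hypothetical sketch of how a fixed-length binary string could be decoded into a convolutional architecture with a variable number of active layers, per-layer channel counts, and skip connections. The field widths, channel choices, and decoding layout below are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical decoder for a fixed-length binary architecture string.
# Per layer: 1 activity bit, CHANNEL_BITS channel-index bits, and one
# connection bit per possible predecessor layer. Inactive layers are
# simply skipped, which is how a fixed-length string can describe a
# variable-depth network.

MAX_LAYERS = 4                               # assumed upper bound on depth
CHANNEL_BITS = 3                             # 3 bits -> channel index 0..7
CHANNEL_CHOICES = [16, 32, 64, 96, 128, 160, 192, 256]
LAYER_BITS = 1 + CHANNEL_BITS + MAX_LAYERS   # bits consumed per layer slot

def decode(bits: str) -> list[dict]:
    """Decode a fixed-length bit string into a list of active layer specs."""
    assert len(bits) == MAX_LAYERS * LAYER_BITS, "string length is fixed"
    layers, pos = [], 0
    for i in range(MAX_LAYERS):
        active = bits[pos] == "1"
        pos += 1
        ch_idx = int(bits[pos:pos + CHANNEL_BITS], 2)
        pos += CHANNEL_BITS
        # Connection bit j means "take input from layer j"; only earlier
        # layers (j < i) are valid predecessors.
        inputs = [j for j in range(MAX_LAYERS)
                  if bits[pos + j] == "1" and j < i]
        pos += MAX_LAYERS
        if active:
            layers.append({"layer": i,
                           "channels": CHANNEL_CHOICES[ch_idx],
                           "inputs": inputs})
    return layers

# Example: layers 0, 1, 3 active, layer 2 switched off by its activity bit.
arch = decode("10010000" "10111000" "00000000" "11111100")
```

Because the string length never changes, standard EA operators (crossover, mutation) apply directly, while the activity and connection bits let the decoded network vary in depth and topology.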
