Self-Supervised Representation Learning for Evolutionary Neural Architecture Search
IEEE Computational Intelligence Magazine (IF 10.3) Pub Date: 2021-07-21, DOI: 10.1109/mci.2021.3084415
Chen Wei, Yiping Tang, Chuang Niu, Haihong Hu, Yue Wang, Jimin Liang

Recently proposed neural architecture search (NAS) algorithms adopt neural predictors to accelerate architecture search. The capability of neural predictors to accurately predict the performance metrics of neural architectures is critical to NAS, but obtaining training datasets for neural predictors is often time-consuming. How to obtain a neural predictor with high prediction accuracy from a small amount of training data is a central problem for neural predictor-based NAS. Here, a new architecture encoding scheme is first devised to calculate the graph edit distance between neural architectures, which overcomes the drawbacks of existing vector-based architecture encoding schemes. To enhance the predictive performance of neural predictors, two self-supervised learning methods are proposed to pre-train the architecture embedding part of neural predictors so that it generates meaningful representations of neural architectures. The first method designs a graph neural network-based model with two independent branches and uses the graph edit distance between two different neural architectures as supervision, forcing the model to generate meaningful architecture representations. Inspired by contrastive learning, the second method presents a new contrastive learning algorithm that uses a central feature vector as a proxy to contrast positive pairs against negative pairs. Experimental results show that the pre-trained neural predictors achieve comparable or superior performance to their supervised counterparts while using only half of the training samples. The effectiveness of the proposed methods is further validated by integrating the pre-trained neural predictors into a neural predictor-guided evolutionary neural architecture search (NPENAS) algorithm, which achieves state-of-the-art performance on the NASBench-101, NASBench-201, and DARTS benchmarks.
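To make the first pre-training idea concrete, below is a minimal PyTorch-style sketch of a two-branch graph encoder whose only supervision signal is the graph edit distance (GED) between two sampled architectures, so no accuracy labels are needed. All names (ArchEncoder, TwoBranchGEDModel, ged_pretrain_step, hidden_dim, etc.) and the toy message-passing encoder are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: self-supervised pre-training of an architecture encoder using
# graph edit distance (GED) between architecture pairs as the regression target.
import torch
import torch.nn as nn

class ArchEncoder(nn.Module):
    """Toy message-passing encoder: adjacency + one-hot op labels -> graph embedding."""
    def __init__(self, num_ops: int, hidden_dim: int = 64, num_layers: int = 3):
        super().__init__()
        self.embed = nn.Linear(num_ops, hidden_dim)
        self.layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )

    def forward(self, adj: torch.Tensor, ops: torch.Tensor) -> torch.Tensor:
        # adj: (B, N, N) adjacency matrices, ops: (B, N, num_ops) operation one-hots
        h = self.embed(ops)
        for layer in self.layers:
            h = torch.relu(layer(torch.bmm(adj, h)))  # aggregate neighbours, transform
        return h.mean(dim=1)                          # (B, hidden_dim) per-graph embedding

class TwoBranchGEDModel(nn.Module):
    """Two independent encoder branches; the head regresses the GED of the pair."""
    def __init__(self, num_ops: int, hidden_dim: int = 64):
        super().__init__()
        self.branch1 = ArchEncoder(num_ops, hidden_dim)
        self.branch2 = ArchEncoder(num_ops, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, adj1, ops1, adj2, ops2):
        z1, z2 = self.branch1(adj1, ops1), self.branch2(adj2, ops2)
        return self.head(torch.cat([z1, z2], dim=-1)).squeeze(-1)

def ged_pretrain_step(model, optimizer, batch):
    """One self-supervised step: the target GED is computed from the graphs alone."""
    adj1, ops1, adj2, ops2, ged = batch
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(adj1, ops1, adj2, ops2), ged)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Smoke test on random 7-node cells with 5 candidate operations.
    model = TwoBranchGEDModel(num_ops=5)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    adj = torch.rand(8, 7, 7).round()
    ops = torch.eye(5)[torch.randint(0, 5, (8, 7))]
    ged = torch.rand(8) * 10  # placeholder edit distances for the random pairs
    print(ged_pretrain_step(model, opt, (adj, ops, adj.clone(), ops.clone(), ged)))
```

After pre-training, the encoder branch would be reused as the architecture embedding part of the neural predictor and fine-tuned on the small labelled set; the second (contrastive) method follows the same pattern but replaces the GED regression head with a contrastive loss built around a central feature vector.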

Updated: 2021-07-21