Homogeneous Architecture Augmentation for Neural Predictor
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2021-07-28, DOI: arxiv-2107.13153
Yuqiao Liu, Yehui Tang, Yanan Sun

Neural Architecture Search (NAS) can automatically design well-performing architectures of Deep Neural Networks (DNNs) for the task at hand. However, one bottleneck of NAS is its prohibitive computational cost, largely due to expensive performance evaluation. Neural predictors can directly estimate the performance without training the DNNs to be evaluated, and have therefore drawn increasing attention from researchers. Despite their popularity, they suffer from a severe limitation: the shortage of annotated DNN architectures for effectively training the neural predictors. In this paper, we propose Homogeneous Architecture Augmentation for Neural Predictor (HAAP) to address the aforementioned issue. Specifically, a homogeneous architecture augmentation algorithm is proposed in HAAP to generate sufficient training data by making use of a homogeneous representation. Furthermore, a one-hot encoding strategy is introduced into HAAP to make the representation of DNN architectures more effective. Experiments have been conducted on both the NAS-Bench-101 and NAS-Bench-201 datasets. The experimental results demonstrate that the proposed HAAP algorithm outperforms the compared state-of-the-art methods, yet with much less training data. In addition, ablation studies on both benchmark datasets show the universality of the homogeneous architecture augmentation.
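To make the encoding idea concrete, the sketch below shows one common way to one-hot encode a cell-based architecture (operation list plus adjacency matrix, in the style of NAS-Bench-101) into a flat feature vector that a neural predictor could consume. The operation set, matrix shape, and flattening order here are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch: one-hot encoding of a cell architecture for a neural
# predictor. The candidate operation set below is an assumption chosen
# to resemble NAS-Bench-101; HAAP's actual encoding may differ.

OPS = ["conv1x1", "conv3x3", "maxpool3x3"]  # candidate operations (assumed)

def one_hot(op):
    """Return a one-hot vector over the candidate operation set."""
    vec = [0] * len(OPS)
    vec[OPS.index(op)] = 1
    return vec

def encode(ops, adjacency):
    """Concatenate one-hot operation encodings with the upper-triangular
    entries of the DAG adjacency matrix into one flat feature vector."""
    features = []
    for op in ops:
        features.extend(one_hot(op))
    n = len(adjacency)
    for i in range(n):
        for j in range(i + 1, n):  # DAG: only upper triangle is meaningful
            features.append(adjacency[i][j])
    return features

# Example: a 3-node cell where every earlier node feeds every later node.
ops = ["conv3x3", "conv1x1", "maxpool3x3"]
adj = [[0, 1, 1],
       [0, 0, 1],
       [0, 0, 0]]
x = encode(ops, adj)
print(len(x))  # 3 ops * 3 one-hot dims + 3 upper-triangular edges = 12
```

A fixed-length vector like this is what lets an off-the-shelf regressor map architectures to predicted accuracies; the augmentation step in HAAP would then supply additional (architecture, accuracy) pairs to train on.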

Updated: 2021-07-29