A scalable algorithm for the optimization of neural network architectures
Parallel Computing ( IF 1.4 ) Pub Date : 2021-04-24 , DOI: 10.1016/j.parco.2021.102788
Massimiliano Lupo Pasini , Junqi Yin , Ying Wai Li , Markus Eisenbach

We propose a new scalable method to optimize the architecture of an artificial neural network. The proposed algorithm, called Greedy Search for Neural Network Architecture, aims to determine a neural network with a minimal number of layers that is at least as performant, in terms of accuracy and computational cost, as neural networks of the same structure identified by other hyperparameter search algorithms. Numerical experiments on benchmark datasets show that, for these datasets, our method outperforms state-of-the-art hyperparameter optimization algorithms both in the predictive performance attainable by the selected neural network architecture and in the time-to-solution for the hyperparameter optimization to complete.




Updated: 2021-05-25