ExperienceThinking: Constrained hyperparameter optimization based on knowledge and pruning
Knowledge-Based Systems (IF 7.2), Pub Date: 2020-11-16, DOI: 10.1016/j.knosys.2020.106602
Chunnan Wang, Hongzhi Wang, Chang Zhou, Hanxiao Chen

Machine learning models are very sensitive to their hyperparameters, and evaluating a configuration is generally expensive. Users therefore need intelligent methods that quickly optimize hyperparameter settings based on the evaluation information already collected, so as to effectively improve model performance within a small, limited budget. Motivated by this, we propose the ExperienceThinking algorithm, which quickly finds the best possible hyperparameter configuration of a machine learning algorithm within a small number of configuration evaluations. ExperienceThinking introduces two novel approaches that make full use of the known evaluation information to intelligently infer optimal configurations from two aspects: search space pruning and knowledge utilization. The two approaches suit two different kinds of constrained hyperparameter optimization problems; they complement each other, and their combination increases the generality and effectiveness of ExperienceThinking. To demonstrate its benefit, we conduct extensive experiments on various constrained hyperparameter optimization problems and compare ExperienceThinking with classic hyperparameter optimization algorithms. The experimental results show that our algorithm provides superior results and that its design is reasonable.
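To make the general idea of budget-constrained optimization with search space pruning concrete, the following is a minimal, self-contained Python sketch. It is only an illustration of reusing known evaluation information to narrow a numeric search space under a fixed evaluation budget; it is not the authors' ExperienceThinking algorithm, and all names (prune_and_search, the toy objective, the shrink factor) are hypothetical.

```python
import random

def prune_and_search(objective, space, budget=30, warmup=10, shrink=0.5, seed=0):
    """Budget-constrained hyperparameter search that prunes the numeric
    search space around the best configuration seen so far.

    Illustrative sketch only, NOT the paper's method.
    space: dict mapping hyperparameter name -> (low, high) numeric range.
    objective: callable(config) -> score to maximize (in practice each call
               is an expensive model training/evaluation).
    """
    rng = random.Random(seed)
    history = []  # (score, config) pairs: the "known evaluation information"

    def sample(bounds):
        return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

    bounds = dict(space)
    for i in range(budget):
        config = sample(bounds)
        history.append((objective(config), config))
        if i >= warmup:
            # Prune: re-center each range on the best configuration found
            # so far and shrink its width relative to the original range.
            best = max(history, key=lambda t: t[0])[1]
            bounds = {}
            for k, (lo, hi) in space.items():
                half = (hi - lo) * shrink / 2
                bounds[k] = (max(lo, best[k] - half), min(hi, best[k] + half))
    return max(history, key=lambda t: t[0])

if __name__ == "__main__":
    # Hypothetical usage: tune two hyperparameters of a cheap toy objective.
    toy = lambda cfg: -(cfg["lr"] - 0.1) ** 2 - (cfg["reg"] - 0.01) ** 2
    best_score, best_cfg = prune_and_search(toy, {"lr": (1e-4, 1.0), "reg": (0.0, 0.1)})
    print(best_score, best_cfg)
```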




Last updated: 2020-11-16