Efficient hyperparameter tuning for kernel ridge regression with Bayesian optimization
Machine Learning: Science and Technology (IF 6.013). Pub Date: 2021-06-16. DOI: 10.1088/2632-2153/abee59
Annika Stuke, Patrick Rinke, Milica Todorović

Machine learning methods usually depend on internal parameters (so-called hyperparameters) that need to be optimized for best performance. Such optimization poses a burden on machine learning practitioners, requiring expert knowledge, intuition, or computationally demanding brute-force parameter searches. Here we assess three different hyperparameter selection methods: grid search, random search, and an efficient automated optimization technique based on Bayesian optimization (BO). We apply these methods to a machine learning problem based on kernel ridge regression in computational chemistry. Two different descriptors are employed to represent the atomic structure of organic molecules, one of which introduces its own set of hyperparameters to the method. We identify optimal hyperparameter configurations and infer entire prediction-error landscapes in hyperparameter space that serve as visual guides to hyperparameter performance. We further demonstrate that, as the number of hyperparameters grows, BO and random search become significantly more efficient in computational time than an exhaustive grid search, while delivering equivalent or even better accuracy.
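The tuning workflow the abstract compares can be sketched in a few lines. The snippet below is an illustration only, not the authors' code: it tunes the two kernel ridge regression hyperparameters (the regularization strength alpha and the RBF kernel width gamma) with grid search, random search, and a BO-based tuner. scikit-learn supplies the first two searches, scikit-optimize's BayesSearchCV stands in for the paper's BO implementation, and the synthetic data from make_regression is a placeholder for a molecular-descriptor matrix and target property.

```python
# Minimal sketch (not the paper's code) of the three tuning strategies
# applied to kernel ridge regression with an RBF kernel.
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from skopt import BayesSearchCV          # scikit-optimize
from skopt.space import Real

# Placeholder for descriptor matrix X and molecular property y.
X, y = make_regression(n_samples=200, n_features=30, noise=0.1, random_state=0)
krr = KernelRidge(kernel="rbf")          # two hyperparameters: alpha, gamma

# Grid search: exhaustive over a fixed logarithmic grid; cost grows
# exponentially with the number of hyperparameters.
grid = GridSearchCV(
    krr,
    {"alpha": np.logspace(-6, 0, 7), "gamma": np.logspace(-4, 2, 7)},
    cv=5, scoring="neg_mean_absolute_error",
).fit(X, y)

# Random search: the same budget (49 evaluations) spent on random draws
# from log-uniform priors over the same ranges.
rand = RandomizedSearchCV(
    krr,
    {"alpha": loguniform(1e-6, 1e0), "gamma": loguniform(1e-4, 1e2)},
    n_iter=49, cv=5, scoring="neg_mean_absolute_error", random_state=0,
).fit(X, y)

# Bayesian optimization: a Gaussian-process surrogate of the CV error
# decides where to evaluate next, so far fewer evaluations are needed.
bo = BayesSearchCV(
    krr,
    {"alpha": Real(1e-6, 1e0, prior="log-uniform"),
     "gamma": Real(1e-4, 1e2, prior="log-uniform")},
    n_iter=25, cv=5, scoring="neg_mean_absolute_error", random_state=0,
).fit(X, y)

for name, search in [("grid", grid), ("random", rand), ("BO", bo)]:
    print(name, search.best_params_, "MAE:", -search.best_score_)
```

With only two hyperparameters the grid is still affordable; the abstract's point is that once a descriptor adds hyperparameters of its own, the grid's cost grows exponentially while random search and BO scale far more gracefully.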


