Honest leave-one-out cross-validation for estimating post-tuning generalization error
Stat (IF 1.7), Pub Date: 2021-08-24, DOI: 10.1002/sta4.413
Boxiang Wang, Hui Zou

Many machine learning models have tuning parameters that must be determined from the training data, and cross-validation (CV) is perhaps the most commonly used method for selecting them. This work concerns the problem of estimating the generalization error of a CV-tuned predictive model. We propose an honest leave-one-out cross-validation framework that produces a nearly unbiased estimator of the post-tuning generalization error. Using the kernel support vector machine and kernel logistic regression as examples, we demonstrate that honest leave-one-out cross-validation remains highly competitive, even against the state-of-the-art .632+ estimator.
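One natural reading of the "honest" idea is that the tuning step itself is repeated inside every leave-one-out fold, so the held-out observation never influences the choice of tuning parameters, and the resulting error estimate reflects the full tuning-plus-fitting procedure. Below is a minimal sketch in Python with scikit-learn under that assumption, using an inner grid-search CV as the tuning procedure for a kernel SVM; the dataset, parameter grid, and inner CV are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of an "honest" leave-one-out estimate:
# the tuning step (an inner grid-search CV) is redone inside every LOO fold,
# so the held-out point never influences tuning. Grid and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}  # assumed grid

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Re-tune the kernel SVM using only the n-1 training points.
    inner = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    inner.fit(X[train_idx], y[train_idx])
    # Evaluate the tuned model on the single held-out point.
    errors.append(inner.predict(X[test_idx])[0] != y[test_idx][0])

honest_loo_error = np.mean(errors)
print(f"Honest LOO estimate of post-tuning error: {honest_loo_error:.3f}")
```

Because tuning is nested within each fold, the estimate targets the error of the whole CV-tuned procedure rather than the optimistically biased CV score at the selected tuning parameter.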

Updated: 2021-08-24