Robust penalized logistic regression with truncated loss functions.
The Canadian Journal of Statistics (IF 0.6), Pub Date: 2011-05-23, DOI: 10.1002/cjs.10105
Seo Young Park, Yufeng Liu

Penalized logistic regression (PLR) is a powerful statistical tool for classification and has been used widely in practical problems. Despite its success, the loss function of PLR is unbounded, so the resulting classifiers can be sensitive to outliers. To build more robust classifiers, we propose the robust PLR (RPLR), which uses truncated logistic loss functions, and suggest three schemes to estimate conditional class probabilities. Connections of the RPLR with other existing work on robust logistic regression are discussed. Our theoretical results indicate that the RPLR is Fisher consistent and more robust to outliers. Moreover, we develop estimated generalized approximate cross validation (EGACV) for tuning parameter selection. Through numerical examples, we demonstrate that truncating the loss function indeed yields better performance in terms of both classification accuracy and class probability estimation. The Canadian Journal of Statistics 39: 300–323; 2011 © 2011 Statistical Society of Canada
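To make the idea of a bounded loss concrete, below is a minimal NumPy sketch assuming a truncated logistic loss of the form min(ℓ(u), ℓ(s)), where ℓ(u) = log(1 + e^{-u}) is the usual logistic deviance loss of the margin u = y f(x) and s ≤ 0 is a truncation location. The exact truncation used in the article may differ, and the names `truncated_logistic_loss`, `rplr_objective`, and the choice s = -1 are illustrative, not taken from the paper.

```python
import numpy as np

def logistic_loss(u):
    """Standard (unbounded) logistic loss log(1 + exp(-u)) of the margin u = y*f(x)."""
    # np.logaddexp(0, -u) is a numerically stable log(1 + exp(-u)).
    return np.logaddexp(0.0, -u)

def truncated_logistic_loss(u, s=-1.0):
    """Illustrative truncated logistic loss: the standard loss capped at its value
    at the truncation location s (s <= 0), so a badly misclassified point (an
    outlier with a very negative margin) contributes at most a bounded amount."""
    return np.minimum(logistic_loss(u), logistic_loss(s))

def rplr_objective(beta, X, y, lam, s=-1.0):
    """Sketch of a ridge-penalized empirical risk with the truncated loss:
    mean truncated loss over the sample plus an L2 penalty on the coefficients."""
    margins = y * (X @ beta)          # labels y are coded as -1 / +1
    return truncated_logistic_loss(margins, s).mean() + lam * np.sum(beta ** 2)

if __name__ == "__main__":
    # Quick demonstration that the truncated loss bounds the influence of flipped labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    beta_true = np.array([1.5, -2.0, 0.5])
    y = np.where(X @ beta_true + rng.normal(size=50) > 0, 1, -1)
    y[:3] *= -1                       # flip a few labels to mimic outliers
    margins = y * (X @ beta_true)
    print("max standard loss: ", logistic_loss(margins).max())
    print("max truncated loss:", truncated_logistic_loss(margins, s=-1.0).max())
```

Because min(ℓ(u), ℓ(s)) is constant for margins below s, its derivative there is zero, which is why outliers with grossly negative margins stop pulling on the fitted coefficients; the sketch only illustrates that mechanism, not the estimation algorithm developed in the paper.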
