MM for penalized estimation
TEST (IF 1.3) Pub Date: 2021-04-08, DOI: 10.1007/s11749-021-00770-2
Zhu Wang

Penalized estimation conducts variable selection and parameter estimation simultaneously. The general framework minimizes a loss function subject to a penalty designed to induce sparse variable selection. The majorization–minimization (MM) algorithm is a computational scheme valued for its stability and simplicity, and it has been widely applied to penalized estimation. Much of the previous work has focused on convex loss functions, such as those arising in generalized linear models. When data are contaminated with outliers, robust loss functions can generate more reliable estimates, and recent literature has witnessed a growing impact of nonconvex-loss-based methods, which yield robust estimation for such contaminated data. This article investigates the MM algorithm for penalized estimation, provides innovative optimality conditions, and establishes convergence theory for both convex and nonconvex loss functions. With respect to applications, we focus on several nonconvex loss functions formerly studied in machine learning for regression and classification problems. Performance of the proposed algorithms is evaluated on simulated and real data, including cancer clinical status. Efficient implementations of the algorithms are available in the R package mpath on CRAN.
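To illustrate the MM idea described in the abstract, the following is a minimal sketch (in Python, not the mpath implementation, and not necessarily the paper's exact algorithm) of penalized estimation with a nonconvex robust loss. It uses the Cauchy-type loss rho(r) = log(1 + r^2) with a ridge penalty: because rho is concave in r^2, the tangent at the current residuals gives a quadratic majorizer with weights w_i = 1/(1 + r_i^2), so each MM step reduces to a weighted ridge regression and the penalized objective decreases monotonically. All function and variable names here are hypothetical.

```python
import numpy as np

def mm_penalized_cauchy(X, y, lam=0.1, iters=200):
    """Illustrative MM sketch: Cauchy loss + ridge penalty.

    Each iteration majorizes sum_i log(1 + r_i^2) at the current
    residuals by a weighted quadratic, then solves the resulting
    weighted ridge problem in closed form.
    Returns the coefficients and the objective trace.
    """
    n, p = X.shape
    beta = np.zeros(p)
    trace = []
    for _ in range(iters):
        r = y - X @ beta
        trace.append(np.sum(np.log1p(r**2)) + lam * np.sum(beta**2))
        w = 1.0 / (1.0 + r**2)                    # majorizer weights: outliers downweighted
        A = X.T @ (w[:, None] * X) + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ (w * y))  # weighted ridge MM step
    r = y - X @ beta
    trace.append(np.sum(np.log1p(r**2)) + lam * np.sum(beta**2))
    return beta, np.array(trace)

# Toy data with heavy outliers in 10% of responses; the nonconvex loss
# keeps the fit anchored to the clean majority of the data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.2, size=200)
y[:20] += 25.0                                    # contamination
beta_hat, trace = mm_penalized_cauchy(X, y, lam=0.5)
```

The monotone-descent property of MM can be checked directly on `trace`: each iterate's penalized objective is no larger than the previous one, which is the stability property the abstract refers to.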




Updated: 2021-04-08