Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
Mathematical Programming (IF 2.7), Pub Date: 2018-05-03, DOI: 10.1007/s10107-018-1278-0
Hongcheng Liu, Xue Wang, Tao Yao, Runze Li, Yinyu Ye

The theory on the traditional sample average approximation (SAA) scheme for stochastic programming (SP) dictates that the number of samples should be polynomial in the number of problem dimensions in order to ensure proper optimization accuracy. In this paper, we study a modification to the SAA in the scenario where the global minimizer is either sparse or can be approximated by a sparse solution. By making use of a regularization penalty referred to as the folded concave penalty (FCP), we show that, if an FCP-regularized SAA formulation is solved locally, then the required number of samples can be significantly reduced in approximating the global solution of a convex SP: the sample size is only required to be poly-logarithmic in the number of dimensions. The efficacy of the FCP regularizer for nonconvex SPs is also discussed. As an immediate implication of our result, a flexible class of folded concave penalized sparse M-estimators in high-dimensional statistical learning may yield a sound performance even when the problem dimension cannot be upper-bounded by any polynomial function of the sample size.
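To make the setup concrete, the following is a minimal sketch of the two formulations in our own notation; the loss F, the samples ξ_i, the tuning parameters λ and a, and the specific MCP form of the penalty are illustrative assumptions rather than details taken from this abstract. The standard SAA minimizes the empirical average of the loss over n samples, while the FCP-regularized SAA adds a coordinate-wise folded concave penalty.

```latex
% Standard SAA: replace the expectation E[F(x, xi)] by an average over
% n i.i.d. samples xi_1, ..., xi_n of the random parameter.
\min_{x \in X \subseteq \mathbb{R}^{p}} \; \frac{1}{n} \sum_{i=1}^{n} F(x, \xi_i)

% FCP-regularized SAA (sketch): add a coordinate-wise folded concave penalty
% P_lambda. One standard member of this family is the MCP,
%   P_lambda(t) = lambda * \int_0^t \max(0, 1 - s/(a*lambda)) \, ds,  t >= 0, a > 1,
% which is concave on [0, infinity) and constant for t >= a*lambda.
\min_{x \in X \subseteq \mathbb{R}^{p}} \; \frac{1}{n} \sum_{i=1}^{n} F(x, \xi_i)
  \;+\; \sum_{j=1}^{p} P_{\lambda}\bigl(|x_j|\bigr)
```

Per the abstract, the regularized formulation only needs to be solved to a local solution, and for a convex SP the sample size n then needs to grow only poly-logarithmically, rather than polynomially, in the dimension p.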

Updated: 2018-05-03