Testing Sparsity-Inducing Penalties
Journal of Computational and Graphical Statistics (IF 1.4). Pub Date: 2019-08-19. DOI: 10.1080/10618600.2019.1637749
Maryclare Griffin, Peter D. Hoff
Abstract: Many penalized maximum likelihood estimators correspond to posterior mode estimators under specific prior distributions. The appropriateness of a particular class of penalty functions can therefore be interpreted as the appropriateness of a prior for the parameters. For example, the appropriateness of a lasso penalty for regression coefficients depends on the extent to which the empirical distribution of the regression coefficients resembles a Laplace distribution. We give a testing procedure for assessing whether a Laplace prior is appropriate and, accordingly, whether a lasso penalized estimate is appropriate. This testing procedure is designed to have power against exponential power priors, which correspond to ℓq penalties. Via simulations, we show that this testing procedure achieves the desired level and has enough power to detect violations of the Laplace assumption when the numbers of observations and unknown regression coefficients are large. We then introduce an adaptive procedure that, when the null hypothesis is rejected, chooses a more appropriate prior and corresponding penalty from the class of exponential power priors. We show that this can improve estimation of the regression coefficients both when they are drawn from an exponential power distribution and when they are drawn from a spike-and-slab distribution. Supplementary materials for this article are available online.
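The idea of checking the Laplace assumption can be illustrated with a simple (naive) sketch that is not the authors' calibrated procedure: fit the exponential power (generalized normal) shape parameter to a sample of coefficients, and run a Kolmogorov-Smirnov comparison against a Laplace fit. SciPy's `gennorm` family is the exponential power distribution, with shape 1 giving the Laplace and shape 2 the Gaussian; all sample sizes and the shape value 0.5 below are illustrative choices.

```python
# Illustrative sketch only (not the paper's test): assess whether a sample of
# coefficients looks Laplace, using scipy's exponential power (gennorm) family.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "true" coefficients drawn from an exponential power law with
# shape q = 0.5, i.e. spikier than Laplace, where a non-lasso penalty fits better.
beta_true = stats.gennorm.rvs(0.5, size=500, random_state=rng)

# Fit the exponential power shape q by maximum likelihood (location fixed at 0).
q_hat, _, _ = stats.gennorm.fit(beta_true, floc=0)

# Naive KS comparison against a Laplace fit (q = 1). Note: using an estimated
# scale inflates KS p-values; the paper's procedure is properly calibrated.
_, lap_scale = stats.laplace.fit(beta_true, floc=0)
ks = stats.kstest(beta_true, stats.laplace(loc=0, scale=lap_scale).cdf)

print(f"fitted shape q = {q_hat:.2f}")   # a value well below 1 suggests a spikier-than-Laplace prior
print(f"KS p-value = {ks.pvalue:.3g}")   # a small p-value casts doubt on the Laplace null
```

If the Laplace null is rejected, the adaptive procedure described above would instead use the penalty implied by the fitted shape, i.e. an ℓq penalty with exponent near `q_hat`.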

Updated: 2019-08-19