Tuning parameter selection for penalised empirical likelihood with a diverging number of parameters
Journal of Nonparametric Statistics (IF 0.8), Pub Date: 2020-01-02, DOI: 10.1080/10485252.2020.1717491
Chaowen Zheng, Yichao Wu

ABSTRACT Penalised likelihood methods have been successful in analysing high-dimensional data. Tang and Leng [(2010), 'Penalized High-Dimensional Empirical Likelihood', Biometrika, 97(4), 905–920] extended the penalisation approach to the empirical likelihood setting and showed that the penalised empirical likelihood estimator can consistently identify the true predictors in linear regression models. However, this desirable selection consistency of the penalised empirical likelihood method relies heavily on the choice of the tuning parameter. In this work, we propose a tuning parameter selection procedure for penalised empirical likelihood that guarantees this selection consistency. Specifically, we propose a generalised information criterion (GIC) for penalised empirical likelihood in the linear regression case. We show that the tuning parameter selected by the GIC identifies the true model consistently, even when the number of predictors diverges to infinity with the sample size. We demonstrate the performance of our procedure through numerical simulations and a real data analysis.
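The abstract does not give the form of the criterion, but a GIC for tuning-parameter selection in penalised regression typically combines a goodness-of-fit term with a penalty a_n times the selected model size, and the tuning parameter is chosen to minimise the criterion over a grid. Below is a minimal illustrative sketch in Python: it uses an ordinary lasso as a stand-in for the penalised empirical likelihood estimator, and a_n = log(log n)·log p, a common choice when p diverges with n. Both choices are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated sparse linear model: only the first 3 of p predictors are active.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_normal(n)

def gic(rss, df, n, p):
    """Generic GIC: goodness of fit plus a_n times model size.
    a_n = log(log n) * log p is a common diverging-p choice; the
    paper's exact penalty sequence is not stated in the abstract,
    so this is an illustrative assumption."""
    return n * np.log(rss / n) + np.log(np.log(n)) * np.log(p) * df

# Grid search: fit the penalised estimator for each lambda and score by GIC.
lambdas = np.logspace(-3, 0, 30)
scores = []
for lam in lambdas:
    fit = Lasso(alpha=lam, max_iter=10_000).fit(X, y)
    rss = np.sum((y - fit.predict(X)) ** 2)
    df = np.count_nonzero(fit.coef_)  # selected model size as degrees of freedom
    scores.append(gic(rss, df, n, p))

best = lambdas[int(np.argmin(scores))]
print(f"GIC-selected lambda: {best:.4f}")
print("selected predictors:", np.flatnonzero(Lasso(alpha=best, max_iter=10_000).fit(X, y).coef_))
```

In this sketch the GIC minimiser tends to recover exactly the three active predictors: too small a lambda inflates the df penalty, while too large a lambda inflates the fit term, which is the trade-off the criterion is designed to balance.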
