From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation: Rejoinder
Journal of the American Statistical Association (IF 3.0), Pub Date: 2020-01-02, DOI: 10.1080/01621459.2020.1727236
Saharon Rosset, Ryan J. Tibshirani

We thank the discussants for their thoughtful contributions. Shen and Huang employ the parametric bootstrap to estimate the excess variance and excess bias components V+ and B+, respectively, and empirically obtain results consistent with ours, in their case for Random-X lasso model selection: covariance-penalty methods can be more efficient than cross-validation (CV) at this task. Wager concentrates on CV as the major tool in the Random-X evaluation toolbox. He argues that model evaluation is a hard problem, since CV error is dominated by the model-independent "oracle" error. On the other hand, model selection is a much easier problem in both theory and practice, as the oracle error cancels out in this case. In this note, we briefly comment on the two discussions and offer our view on the main challenges of this research area, and their evolution in the time since our article was originally written.
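To make the two ideas above concrete, here is a minimal sketch, not Shen and Huang's exact procedure: it estimates Random-X prediction error for the lasso by a parametric bootstrap under a fitted Gaussian working model (resampling rows of X, since covariates are random under Random-X), and compares the resulting tuning-parameter selection against K-fold CV. The data-generating process, the alpha grid, the bootstrap size B, and the rough degrees-of-freedom correction are all illustrative assumptions, not taken from the discussions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative sparse Gaussian linear model (assumed, for demonstration).
n, p, s, sigma = 100, 20, 5, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + sigma * rng.standard_normal(n)

def boot_randomx_err(X, y, alpha, B=200):
    """Parametric-bootstrap estimate of Random-X prediction error.

    Fit once to obtain a working model (coefficients, intercept, sigma_hat);
    then repeatedly resample rows of X and draw responses from the working
    model, refit the lasso, and score on an independent Random-X draw."""
    n = X.shape[0]
    fit = Lasso(alpha=alpha).fit(X, y)
    resid = y - fit.predict(X)
    # Rough residual degrees of freedom for the lasso: n minus active set size.
    df = max(n - np.count_nonzero(fit.coef_) - 1, 1)
    sigma_hat = np.sqrt(resid @ resid / df)

    def draw():
        # Random-X: resample covariate rows, then responses from the model.
        idx = rng.integers(0, n, n)
        Xb = X[idx]
        yb = Xb @ fit.coef_ + fit.intercept_ + sigma_hat * rng.standard_normal(n)
        return Xb, yb

    errs = np.empty(B)
    for b in range(B):
        Xb, yb = draw()                       # bootstrap training pair
        fb = Lasso(alpha=alpha).fit(Xb, yb)
        Xt, yt = draw()                       # fresh Random-X test pair
        errs[b] = np.mean((yt - fb.predict(Xt)) ** 2)
    return errs.mean()

alphas = np.logspace(-2, 0, 10)
boot = [boot_randomx_err(X, y, a) for a in alphas]
cv = [-cross_val_score(Lasso(alpha=a), X, y,
                       scoring="neg_mean_squared_error", cv=10).mean()
      for a in alphas]

print("bootstrap pick:", alphas[int(np.argmin(boot))])
print("CV pick:       ", alphas[int(np.argmin(cv))])
```

Note how the sketch also reflects Wager's point: both curves contain the same irreducible oracle error, so even when the error estimates themselves are noisy, the argmin over alphas (model selection) is far more stable than the estimated error levels (model evaluation).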
