Null Hypothesis Significance Testing Defended and Calibrated by Bayesian Model Checking
The American Statistician (IF 1.8), Pub Date: 2020-01-06, DOI: 10.1080/00031305.2019.1699443
David R. Bickel

Abstract

Significance testing is often criticized because p-values can be low even though posterior probabilities of the null hypothesis are not low according to some Bayesian models. Those models, however, would assign low prior probabilities to the observation that the p-value is sufficiently low. That conflict between the models and the data may indicate that the models need revision. Indeed, if the p-value is sufficiently small while the posterior probability according to a model is insufficiently small, then the model will fail a model check. That result leads to a way to calibrate a p-value by transforming it into an upper bound on the posterior probability of the null hypothesis (conditional on rejection) for any model that would pass the check. The calibration may be calculated from a prior probability of the null hypothesis and the stringency of the check without more detailed modeling. An upper bound, as opposed to a lower bound, can justify concluding that the null hypothesis has a low posterior probability.
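As a rough illustration of the arithmetic the abstract describes, the sketch below computes an upper bound on the posterior probability of the null hypothesis conditional on rejection. It assumes (i) a valid p-value, so that P(p ≤ α | H0) ≤ α, and (ii) a model check under which any passing model assigns prior predictive probability at least κ to the rejection event {p ≤ α}; Bayes' theorem then gives P(H0 | p ≤ α) ≤ π0·α/κ, a bound computable from the prior probability π0 and the check's stringency κ alone. The symbols α, κ, π0 and the function posterior_upper_bound are illustrative assumptions, not taken from the paper, whose exact calibration may differ.

```python
def posterior_upper_bound(alpha, prior_null, kappa):
    """Illustrative upper bound on P(H0 | p <= alpha).

    Assumptions (a sketch, not the paper's exact calibration):
      * the p-value is valid, so P(p <= alpha | H0) <= alpha;
      * any model passing the check assigns prior predictive
        probability at least kappa to the rejection event {p <= alpha}.

    Bayes' theorem then gives
      P(H0 | p <= alpha) = P(p <= alpha | H0) * P(H0) / P(p <= alpha)
                         <= alpha * prior_null / kappa.
    """
    if not (0 < alpha < 1 and 0 < prior_null < 1 and 0 < kappa <= 1):
        raise ValueError("require alpha, prior_null in (0, 1) and kappa in (0, 1]")
    return min(1.0, alpha * prior_null / kappa)


if __name__ == "__main__":
    # Example: alpha = 0.05, prior P(H0) = 0.5, check stringency kappa = 0.1.
    print(posterior_upper_bound(alpha=0.05, prior_null=0.5, kappa=0.1))  # 0.25
```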




Updated: 2020-01-06