Jeffreys-prior penalty, finiteness and shrinkage in binomial-response generalized linear models
Biometrika (IF 2.7), Pub Date: 2020-08-04, DOI: 10.1093/biomet/asaa052
Ioannis Kosmidis, David Firth

Penalization of the likelihood by Jeffreys' invariant prior, or by a positive power thereof, is shown to produce finite-valued maximum penalized likelihood estimates in a broad class of binomial generalized linear models. The class of models includes logistic regression, where the Jeffreys-prior penalty is known additionally to reduce the asymptotic bias of the maximum likelihood estimator; and also models with other commonly used link functions such as probit and log-log. Shrinkage towards equiprobability across observations, relative to the maximum likelihood estimator, is established theoretically and is studied through illustrative examples. Some implications of finiteness and shrinkage for inference are discussed, particularly when inference is based on Wald-type procedures. A widely applicable procedure is developed for computation of maximum penalized likelihood estimates, by using repeated maximum likelihood fits with iteratively adjusted binomial responses and totals. These theoretical results and methods underpin the increasingly widespread use of reduced-bias and similarly penalized binomial regression models in many applied fields.
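The computational procedure described above — repeated maximum likelihood fits with iteratively adjusted binomial responses and totals — can be illustrated for logistic regression, where the Jeffreys-prior adjustment replaces the responses $y_i$ by $y_i + h_i/2$ and the totals $m_i$ by $m_i + h_i$, with $h_i$ the hat values of the current weighted fit. The sketch below is a minimal one-scoring-step-per-adjustment variant, not the authors' implementation; the function name and interface are illustrative.

```python
import numpy as np

def firth_logistic(X, y, m, max_iter=100, tol=1e-10):
    """Maximum penalized likelihood (Jeffreys-prior penalty) for binomial
    logistic regression, via Fisher scoring on iteratively adjusted
    responses y + h/2 and totals m + h, where h are the hat values.
    Sketch only: one scoring step per adjustment of the pseudo-data."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = m * p * (1.0 - p)                     # binomial working weights
        info = (X * w[:, None]).T @ X             # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        # hat values h_i = w_i * x_i' (X'WX)^{-1} x_i of the weighted fit
        h = w * np.einsum('ij,jk,ik->i', X, info_inv, X)
        # adjusted responses/totals yield the Jeffreys-penalized score
        score = X.T @ (y + 0.5 * h - (m + h) * p)
        step = info_inv @ score
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

With completely separated grouped data, e.g. successes (0 of 2, 2 of 2) across two covariate classes, the ordinary maximum likelihood estimate diverges, while the penalized estimate is finite and shrunk towards equiprobability, as the theory above guarantees.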
