Asymptotic theory for maximum likelihood estimates in reduced-rank multivariate generalized linear models
Statistics (IF 1.9), Pub Date: 2018-05-08, DOI: 10.1080/02331888.2018.1467420
E. Bura, S. Duarte, L. Forzani, E. Smucler, M. Sued
ABSTRACT Reduced-rank regression is a dimensionality reduction method with many applications. The asymptotic theory for reduced-rank estimators of parameter matrices in multivariate linear models has been studied extensively. In contrast, few theoretical results are available for reduced-rank multivariate generalized linear models. We develop M-estimation theory for concave criterion functions that are maximized over parameter spaces that are neither convex nor closed. These results are used to derive the consistency and asymptotic distribution of maximum likelihood estimators in reduced-rank multivariate generalized linear models, when the response and predictor vectors have a joint distribution. We illustrate our results in a real-data classification problem with binary covariates.
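To make the reduced-rank idea concrete, the sketch below illustrates the classical multivariate *linear* case (not the GLM setting the paper extends): the rank-r least-squares estimator is obtained by projecting the full-rank OLS coefficient matrix onto the top-r right singular vectors of the fitted values. All variable names and the simulation setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 6, 5, 2  # samples, predictors, responses, target rank

# Simulate a true coefficient matrix of rank r: B = A @ C is p x q
A = rng.normal(size=(p, r))
C = rng.normal(size=(r, q))
B_true = A @ C
X = rng.normal(size=(n, p))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

# Step 1: full-rank OLS fit of the multivariate linear model
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: SVD of the fitted values; keep the top-r right singular vectors
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
V_r = Vt[:r].T                  # q x r

# Step 3: project OLS coefficients onto that r-dimensional response subspace
B_rrr = B_ols @ V_r @ V_r.T     # rank-r reduced-rank estimator

print(np.linalg.matrix_rank(B_rrr))  # r = 2
```

The paper's contribution is the analogous asymptotic theory when the linear-model likelihood above is replaced by a (non-convex, non-closed) rank-constrained GLM likelihood.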

Last updated: 2018-05-08