Mixture of linear experts model for censored data: A novel approach with scale-mixture of normal distributions
Computational Statistics & Data Analysis (IF 1.5) | Pub Date: 2021-01-24 | DOI: 10.1016/j.csda.2021.107182
Elham Mirfarah, Mehrdad Naderi, Ding-Geng Chen

The mixture of linear experts (MoE) model is a widely used statistical framework for the modeling, classification, and clustering of data. Built on the normality assumption for the error terms for mathematical and computational convenience, the classical MoE model faces two challenges: (1) it is sensitive to atypical observations and outliers, and (2) it may produce misleading inferential results for censored data. The aim here is to resolve both challenges simultaneously by proposing a robust MoE model for model-based clustering and discriminant analysis of censored data, in which the unobserved error terms follow the scale-mixture of normal (SMN) class of distributions. An analytical expectation–maximization (EM) type algorithm is developed to obtain the maximum likelihood parameter estimates. Simulation studies examine the performance, effectiveness, and robustness of the proposed methodology. Finally, a real dataset illustrates the superiority of the new model.
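For orientation, below is a minimal sketch of the classical normal MoE density that the paper generalizes, together with the standard way a left-censored observation enters the likelihood. The notation (gating parameters \alpha_g, expert coefficients \beta_g, variances \sigma_g^2, censoring point c_i) follows common MoE conventions rather than the paper's own; the SMN error formulation is the paper's contribution and is not reproduced here.

% G-component MoE with softmax gating and normal experts:
f(y_i \mid \mathbf{x}_i) \;=\; \sum_{g=1}^{G} \pi_g(\mathbf{x}_i;\boldsymbol{\alpha})\,
  \phi\bigl(y_i;\ \mathbf{x}_i^{\top}\boldsymbol{\beta}_g,\ \sigma_g^{2}\bigr),
\qquad
\pi_g(\mathbf{x}_i;\boldsymbol{\alpha}) \;=\;
  \frac{\exp(\mathbf{x}_i^{\top}\boldsymbol{\alpha}_g)}
       {\sum_{h=1}^{G}\exp(\mathbf{x}_i^{\top}\boldsymbol{\alpha}_h)},
% with \boldsymbol{\alpha}_G \equiv \mathbf{0} fixed for identifiability.

% A left-censored response, recorded only as y_i \le c_i, contributes the
% component CDF instead of the density, which is why the EM updates for
% censored data involve truncated moments:
f(y_i \le c_i \mid \mathbf{x}_i) \;=\; \sum_{g=1}^{G} \pi_g(\mathbf{x}_i;\boldsymbol{\alpha})\,
  \Phi\!\left(\frac{c_i - \mathbf{x}_i^{\top}\boldsymbol{\beta}_g}{\sigma_g}\right).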

Updated: 2021-02-05