Maximum likelihood and the maximum product of spacings from the viewpoint of the method of weighted residuals
Computational and Applied Mathematics ( IF 2.5 ) Pub Date : 2020-05-29 , DOI: 10.1007/s40314-020-01179-7
Takuya Kawanishi

In parameter estimation, the maximum-likelihood method (ML) fails for some distributions. In such cases, the maximum product of spacings method (MPS) is used as an alternative. However, the relative advantages and disadvantages of the MPS, its variants, and the ML remain unclear. These methods are based on the Kullback–Leibler divergence (KLD), and we consider applying the method of weighted residuals (MWR) to it. We prove that, after transforming the KLD into an integral over [0, 1], applying the collocation method yields the ML, while applying the Galerkin method yields the MPS and Jiang's modified MPS (JMMPS); imposing zero boundary conditions yields the ML and JMMPS, whereas non-zero boundary conditions yield the MPS. Additionally, we establish formulas for the approximate differences among the ML, MPS, and JMMPS estimators. Our simulation for seven distributions demonstrates that, for zero-boundary-condition parameters, the ML and JMMPS achieve a better bias convergence rate than the MPS; however, regarding the MSE for small samples, the relative performance of the methods depends on the distribution and its parameters. For non-zero-boundary-condition parameters, the MPS outperforms the other methods: it yields an unbiased estimator and the smallest MSE among the methods. We demonstrate that, from the viewpoint of the MWR, the counterpart of the ML is the JMMPS, not the MPS. Using this KLD-MWR approach, we introduce a unified view for comparing estimators and provide a new tool for analyzing and selecting them.
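The contrast between the ML and the MPS described above can be sketched numerically. The following is a minimal illustration, not taken from the paper: it uses a shifted exponential distribution, a standard textbook case where the ML location estimate equals the sample minimum (and is therefore biased upward), while the MPS maximizes the mean log of the CDF spacings, including the two boundary spacings. All names (`neg_mean_log_spacing`, the chosen parameter values) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon

# Illustrative setup (not from the paper): a shifted exponential with
# location mu and scale sigma. The ML location estimate is min(x),
# which always overshoots mu, so it is biased upward.
rng = np.random.default_rng(0)
mu_true, sigma_true = 2.0, 1.5
x = np.sort(rng.exponential(sigma_true, size=200) + mu_true)

# ML has a closed form for this model:
mu_ml = x[0]                  # sample minimum, >= mu_true by construction
sigma_ml = x.mean() - x[0]

# MPS: maximize the mean log of the spacings D_i = F(x_(i)) - F(x_(i-1)),
# with the boundary terms F(x_(0)) := 0 and F(x_(n+1)) := 1 included.
def neg_mean_log_spacing(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    u = expon.cdf(x, loc=mu, scale=sigma)
    d = np.diff(np.concatenate(([0.0], u, [1.0])))
    # Any non-positive spacing (e.g. mu >= min(x)) is infeasible.
    return np.inf if np.any(d <= 0) else -np.log(d).mean()

res = minimize(neg_mean_log_spacing, x0=[x[0] - 0.1, sigma_ml],
               method="Nelder-Mead")
mu_mps, sigma_mps = res.x
print(f"ML:  mu={mu_ml:.3f}, sigma={sigma_ml:.3f}")
print(f"MPS: mu={mu_mps:.3f}, sigma={sigma_mps:.3f}")
```

Note that the MPS objective stays finite only for mu strictly below min(x), so its location estimate pulls back from the biased ML value; this mirrors the abstract's point that the boundary spacings are exactly where the MPS and the ML-type estimators differ.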




Updated: 2020-05-29