Low-Complexity Methods for Estimation After Parameter Selection
IEEE Transactions on Signal Processing (IF 5.4), Pub Date: 2020-01-01, DOI: 10.1109/tsp.2020.2970311
Nadav Harel, Tirza Routtenberg

Statistical inference of multiple parameters often involves a preliminary parameter-selection stage. The selection stage affects subsequent estimation, for example by introducing a selection bias. The post-selection maximum likelihood (PSML) estimator has been shown to reduce both the selection bias and the post-selection mean-squared-error (PSMSE) compared with conventional estimators, such as the maximum likelihood (ML) estimator. However, the computational complexity of the PSML is usually high due to the multi-dimensional exhaustive search for a global maximum of the post-selection log-likelihood (PSLL) function. Moreover, the PSLL involves the probability of selection, which, in general, has no analytical form. In this paper, we develop new low-complexity post-selection estimation methods for a two-stage estimation-after-parameter-selection architecture. The methods implement the iterative maximization-by-parts (MBP) approach, which decomposes the PSLL function into an "easily optimized" part and a complicated part. The proposed second-best PSML method applies the MBP-PSML algorithm with a pairwise probability of selection between the two highest-ranked parameters w.r.t. the selection rule. The proposed SA-PSML method uses stochastic approximation (SA) and Monte Carlo integration to obtain a non-parametric estimate of the gradient of the probability of selection, and then applies the MBP-PSML algorithm to this approximation. For low-complexity performance analysis, we develop an empirical post-selection Cramér-Rao-type lower bound. Simulations demonstrate that the proposed post-selection estimation methods are tractable and reduce both the bias and the PSMSE compared with the ML estimator, while requiring only moderate computational complexity.
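The selection bias that motivates the PSML estimator can be seen in a short Monte Carlo sketch (an illustrative toy setup, not the paper's exact model): with several Gaussian populations of equal mean, selecting the population with the largest sample mean and then reporting that sample mean as the ML estimate systematically overshoots the true parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, trials = 5, 10, 20000      # K populations, n samples each
theta = np.zeros(K)              # all true means equal: bias is most visible here
bias_acc = 0.0
for _ in range(trials):
    x = rng.normal(theta, 1.0, size=(n, K))
    xbar = x.mean(axis=0)        # per-population ML estimates of the means
    k = np.argmax(xbar)          # selection rule: pick the largest sample mean
    bias_acc += xbar[k] - theta[k]

# The naive ML estimate of the selected parameter is biased upward.
print("post-selection bias of ML:", bias_acc / trials)
```

Because the selection rule favors populations whose noise realization happened to be large, conditioning on selection shifts the distribution of the reported estimate upward even though each unconditional sample mean is unbiased.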
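The SA-PSML idea described above can be sketched for a simple Gaussian-means case. This is a hypothetical illustration, not the paper's algorithm: the PSLL is the log-likelihood minus the log probability of selection, so a stationary point satisfies a fixed-point equation that an MBP-style iteration can solve, with the intractable gradient of the log selection probability replaced by a Monte Carlo estimate via the score-function identity (the gradient of log P(selection) equals the expected score conditioned on selection). The function name and parameters are illustrative.

```python
import numpy as np

def sa_psml_gaussian(xbar_obs, k_sel, n, sigma, iters=50, mc=4000, seed=1):
    """Hypothetical sketch of an MBP-style fixed-point iteration for Gaussian
    means, with the gradient of log P(selection) estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    K = len(xbar_obs)
    theta = np.asarray(xbar_obs, dtype=float).copy()  # start at naive ML
    for _ in range(iters):
        # Simulate sample means under the current parameter iterate.
        sims = rng.normal(theta, sigma / np.sqrt(n), size=(mc, K))
        sel = sims.argmax(axis=1) == k_sel            # same selection rule
        if not sel.any():
            break
        # Score of a Gaussian sample mean: n * (xbar - theta) / sigma^2.
        score = n * (sims - theta) / sigma**2
        grad_logP = score[sel].mean(axis=0)           # E[score | selected]
        # MBP step: set grad of log-likelihood equal to grad of log P(sel),
        # i.e. n*(xbar_obs - theta)/sigma^2 = grad_logP.
        theta = xbar_obs - (sigma**2 / n) * grad_logP
    return theta
```

In this toy setup the iteration pulls the selected (largest) estimate down and the unselected ones up, which is the qualitative direction of the selection-bias correction the abstract describes.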
