Globally Convergent Gradient Projection Type Algorithms for a Class of Robust Hypothesis Testings
IEEE Transactions on Signal Processing (IF 4.6) Pub Date: 2021-02-15, DOI: 10.1109/tsp.2021.3059097
Ting Ma , Enbin Song , Qingjiang Shi

This paper considers the popular minimax robust hypothesis testing problem: seeking the optimal decision rule with a minimum error probability for the least favorable distributions (LFDs) lying within an uncertainty set, which is characterized by an upper bound on the distance between the actual and nominal densities. First, we convert the minimax robust hypothesis testing problem to a convex minimization problem. By leveraging Danskin's theorem, the gradient of the objective function of the transformed problem is derived as a function of the LFDs. Then, we propose the gradient projection algorithm (GPA) and the hybrid gradient projection algorithm (HGPA) to solve the transformed problem. In particular, when the distance is chosen to be the Kullback-Leibler (KL) or $\alpha$-divergence, each LFD relies on only two unknown parameters, which can be determined efficiently. In these two cases, the decision rule sequences generated by the GPA and the HGPA are proved to converge weakly and strongly, respectively, to the global minimizer under some mild conditions. To the best of our knowledge, these decision rules are the first to be guaranteed to converge globally to the optimal solution for this class of robust hypothesis testing problems. We further propose an accelerated gradient projection algorithm (AGPA) to improve the efficiency of the GPA when the observation space of the robust hypothesis testing problem contains only finitely many points. Several simulations illustrate that the proposed GPA, HGPA, and AGPA can obtain the globally optimal solution with high efficiency.
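The paper's GPA operates on decision rules in function space, but the underlying idea of gradient projection is easy to illustrate in the finite-observation-space setting that motivates the AGPA. The sketch below is not the authors' algorithm; it is a generic projected gradient descent over the probability simplex (a standard constraint set for densities on finitely many points), using a toy quadratic objective in place of the paper's error-probability objective.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    # largest index rho with u[rho] > (css[rho] - 1) / (rho + 1)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def gradient_projection(grad, x0, step=0.1, iters=500):
    """Projected gradient iteration: x_{k+1} = P_simplex(x_k - step * grad(x_k))."""
    x = project_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_simplex(x - step * grad(x))
    return x

# Toy convex objective over the simplex: f(x) = 0.5 * ||x - c||^2,
# whose minimizer over the simplex is c itself when c lies in the simplex.
c = np.array([0.2, 0.5, 0.3])
x_star = gradient_projection(lambda x: x - c, np.ones(3) / 3)
```

For a smooth convex objective and a small enough fixed step size, this iteration converges to the global minimizer over the simplex; the paper's contribution is establishing analogous global convergence guarantees for decision-rule sequences in the infinite-dimensional robust testing problem.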

Updated: 2021-04-02