Robust Hierarchical-Optimization RLS Against Sparse Outliers
IEEE Signal Processing Letters (IF 3.9), Pub Date: 2020-01-01, DOI: 10.1109/lsp.2019.2963188
Konstantinos Slavakis , Sinjini Banerjee

This letter fortifies the recently introduced hierarchical-optimization recursive least squares (HO-RLS) against sparse outliers that infrequently contaminate linear-regression models. Outliers are modeled as nuisance variables and are estimated jointly with the linear filter/system variables via a sparsity-inducing, (non-)convexly regularized least-squares task. The proposed outlier-robust HO-RLS builds on steepest-descent directions with a constant step size (learning rate), needs no matrix inversion (lemma), accommodates colored nominal noise of known correlation matrix, exhibits a small computational footprint, and offers theoretical guarantees, in a probabilistic sense, for the convergence of the system estimates to the solutions of a hierarchical-optimization problem: minimize a convex loss, which models a priori knowledge about the unknown system, over the minimizers of the classical ensemble LS loss. Extensive numerical tests on synthetically generated data in both stationary and non-stationary scenarios showcase notable improvements of the proposed scheme over state-of-the-art techniques.
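To illustrate the core idea in the abstract, the following is a minimal sketch, not the authors' exact algorithm: outliers are treated as nuisance variables in the regression model y = Xw + o + noise, and both the filter w and the sparse outlier vector o are estimated jointly. The sketch uses constant-step-size gradient (steepest-descent) updates for w, avoiding any matrix inversion, and an ℓ1 soft-thresholding (proximal) update for o as a stand-in for the paper's sparsity-inducing regularizer; the function names, step size `mu`, and threshold `lam` are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 norm; induces sparsity in the outlier estimates."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def robust_lms_with_outliers(X, y, mu=0.01, lam=0.5, n_passes=5):
    """Jointly estimate (w, o) for y = X @ w + o + noise, with o sparse.

    w: constant-step steepest-descent (LMS-style) updates, no matrix inversion.
    o: soft-thresholded residuals, so only large (outlier) residuals survive.
    """
    n, d = X.shape
    w = np.zeros(d)   # filter/system estimate
    o = np.zeros(n)   # sparse outlier estimate
    for _ in range(n_passes):
        for i in range(n):
            e = y[i] - X[i] @ w - o[i]      # residual after removing outlier estimate
            w += mu * e * X[i]              # steepest-descent step, constant step size
            o[i] = soft_threshold(y[i] - X[i] @ w, lam)  # sparse outlier refresh
    return w, o

# Synthetic demo: a few samples are hit by large outliers.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(200, 2))
y = X @ w_true + 0.01 * rng.normal(size=200)
y[::50] += 10.0                              # sparse, infrequent contamination
w_hat, o_hat = robust_lms_with_outliers(X, y)
```

The soft-threshold step is what keeps `o_hat` sparse: residuals below `lam` are zeroed, so the nominal samples drive the filter update while the rare large residuals are absorbed into the outlier variables instead of biasing `w_hat`.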
