Iteratively Reweighted Minimax-Concave Penalty Minimization for Accurate Low-rank Plus Sparse Matrix Decomposition.
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8). Pub Date: 2022-11-07, DOI: 10.1109/tpami.2021.3122259
Praveen Kumar Pokala, Raghu Vamshi Hemadri, Chandra Sekhar Seelamantula

Low-rank plus sparse matrix decomposition (LSD) is an important problem in computer vision and machine learning. It has traditionally been solved using convex relaxations of the matrix rank and the l0 pseudo-norm, namely the nuclear norm and the l1-norm, respectively. Convex approximations are known to yield biased estimates; to overcome this, nonconvex regularizers such as the weighted nuclear norm and the weighted Schatten p-norm have been proposed. However, prior works employing these regularizers relied on heuristic weight-selection strategies. We propose the weighted minimax-concave penalty (WMCP) as the nonconvex regularizer for the sparse part and show that it admits an equivalent representation that enables weight adaptation. Similarly, an equivalent representation of the weighted matrix gamma norm (WMGN) enables weight adaptation for the low-rank part. The optimization algorithms are based on the alternating direction method of multipliers (ADMM). We show that the optimization frameworks relying on the two penalties, WMCP and WMGN, coupled with a novel iterative weight-update strategy, result in accurate low-rank plus sparse matrix decomposition. The algorithms are also shown to satisfy descent properties and convergence guarantees. On the applications front, we consider the problem of foreground-background separation in video sequences. Simulation experiments and validations on standard datasets, namely I2R, CDnet 2012, and BMC 2012, show that the proposed techniques outperform the benchmark techniques.
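The ingredients named in the abstract can be illustrated with a toy sketch — not the paper's actual algorithm: the proximal operator of the (unweighted) minimax-concave penalty is the well-known firm-thresholding rule; applying the same shrinkage to singular values serves as a crude stand-in for the matrix gamma-norm prox on the low-rank part; and a plain ADMM loop enforces D = L + S. The function names and parameter choices (`gamma`, `rho`, `lam`) are illustrative assumptions, and the iterative weight adaptation of WMCP/WMGN is omitted.

```python
import numpy as np

def mcp_prox(y, lam, gamma=3.0):
    """Firm thresholding: proximal operator of the minimax-concave penalty
    with threshold lam and concavity parameter gamma > 1 (unit step size).
    |y| <= lam        -> 0
    lam < |y| <= g*lam -> sign(y) * (|y| - lam) * gamma / (gamma - 1)
    |y| > gamma*lam   -> y (no bias, unlike soft thresholding)."""
    shrunk = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0) * gamma / (gamma - 1.0)
    return np.where(np.abs(y) <= gamma * lam, shrunk, y)

def svt_mcp(Y, lam, gamma=3.0):
    """Singular-value thresholding with the MCP prox applied to the singular
    values -- a simplistic stand-in for the matrix gamma-norm prox."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(mcp_prox(s, lam, gamma)) @ Vt

def lsd_admm(D, lam=None, gamma=3.0, rho=1.0, n_iter=300):
    """Toy ADMM for D ~ L + S with MCP-type shrinkage on both parts.
    Z is the scaled dual variable enforcing the split constraint."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common RPCA-style default
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Z = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt_mcp(D - S + Z / rho, 1.0 / rho, gamma)       # low-rank update
        S = mcp_prox(D - L + Z / rho, lam / rho, gamma)      # sparse update
        Z = Z + rho * (D - L - S)                            # dual ascent
    return L, S
```

Note the advantage the abstract alludes to: for entries beyond `gamma * lam`, firm thresholding is the identity, so large singular values and large sparse entries are not shrunk — avoiding the bias of the nuclear-norm/l1 relaxation.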

Updated: 2021-10-26