Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6). Pub Date: 2017-09-04. DOI: 10.1109/tpami.2017.2748590. Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-Quan Luo, Zhouchen Lin.
The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo, and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to $\ell_{p}$-norm minimization for two specific values of $p$, i.e., $p=1/2$ and $p=2/3$, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-$1/2$ and $2/3$ quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, image alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.
Updated: 2018-08-06