Learning Latent Low-Rank and Sparse Embedding for Robust Image Feature Extraction.
IEEE Transactions on Image Processing ( IF 10.8 ) Pub Date : 2019-09-09 , DOI: 10.1109/tip.2019.2938859
Zhenwen Ren , Quansen Sun , Bin Wu , Xiaoqian Zhang , Wenzhu Yan

To defy the curse of dimensionality, inputs are typically projected from the original high-dimensional space into a target low-dimensional space for feature extraction. However, owing to noise and outliers, feature extraction from corrupted data remains a challenging problem. Recently, a robust method called low-rank embedding (LRE) was proposed. Despite its success in experimental studies, LRE has several drawbacks: 1) the learned projection cannot quantitatively interpret the importance of features; 2) LRE does not perform data reconstruction, so the features may fail to retain the main energy of the original "clean" data; 3) LRE explicitly transfers the error into the target space; and 4) LRE is unsupervised, making it suitable only for unsupervised scenarios. To address these problems, we propose a novel method to exploit latent discriminative features. In particular, we first utilize an orthogonal matrix to retain the main energy of the original data. Next, we introduce an l2,1-norm term to encourage the features to be more compact, discriminative, and interpretable. Then, we enforce a column-wise l2,1-norm constraint on an error component to resist noise. Finally, we integrate a classification loss term into the objective function to fit supervised scenarios. Our method outperforms several state-of-the-art methods in effectiveness and robustness, as demonstrated on six publicly available datasets.
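The paper's implementation is not reproduced here, but the two l2,1 penalties the abstract refers to can be stated concretely. Below is a minimal NumPy sketch (function names are ours, not the authors'): the row-wise l2,1 norm applied to a projection matrix promotes row sparsity, so few features are selected and the projection becomes interpretable, while the column-wise l2,1 norm on an error matrix penalizes whole corrupted samples (columns) rather than spreading error across all of them.

```python
import numpy as np

def l21_norm_rows(W):
    """Row-wise l2,1 norm: sum of the l2 norms of the rows of W.
    Minimizing it drives entire rows to zero, i.e. selects a
    compact subset of features."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def l21_norm_cols(E):
    """Column-wise l2,1 norm: sum of the l2 norms of the columns of E.
    Each column is one sample's error, so a few heavily corrupted
    samples (outliers) can absorb the error without inflating the
    penalty for every sample."""
    return float(np.sum(np.linalg.norm(E, axis=0)))

def prox_l21_rows(W, tau):
    """Proximal operator of tau * (row-wise l2,1 norm): shrink each
    row toward zero and zero out rows whose l2 norm is below tau.
    This is the standard shrinkage step used when such penalties
    are minimized by alternating/ADMM-style solvers."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale
```

Solvers for objectives combining these penalties typically alternate closed-form updates, applying a shrinkage step like `prox_l21_rows` to the sparse block at each iteration; the sketch above only illustrates the norms themselves, not the full optimization described in the paper.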

Updated: 2020-04-22