A unified robust framework for multi-view feature extraction with L2,1-norm constraint.
Neural Networks (IF 6.0), Pub Date: 2020-05-08, DOI: 10.1016/j.neunet.2020.04.024
Jinxin Zhang, Liming Liu, Ling Zhen, Ling Jing

Multi-view feature extraction methods mainly focus on exploiting the consistency and complementary information among multi-view samples, and most current methods adopt the F-norm or L2-norm as the metric, both of which are sensitive to outliers and noise. In this paper, based on the L2,1-norm, we propose a unified robust feature extraction framework, which includes four specific multi-view feature extraction methods and extends state-of-the-art methods to a more general form. The proposed methods are less sensitive to outliers and noise. An efficient iterative algorithm is designed to solve the L2,1-norm-based methods. Comprehensive analyses, including convergence analysis, rotational-invariance analysis, and the relationship between our methods and previous F-norm-based methods, illustrate the effectiveness of the proposed methods. Experiments on two artificial datasets and six real datasets demonstrate that the proposed L2,1-norm-based methods outperform the related methods.
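The paper itself develops the unified framework and its iterative solver; the sketch below is only a minimal illustration of the metric the abstract refers to. It contrasts the L2,1-norm (the sum of the row-wise L2 norms of a matrix) with the squared F-norm on a toy residual matrix containing one outlier row. The NumPy code, the residual matrix E, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm: sum of the L2 norms of the rows of W,
    i.e. ||W||_{2,1} = sum_i sqrt(sum_j W[i, j]**2)."""
    return np.sum(np.linalg.norm(W, axis=1))

def frobenius_norm(W):
    """F-norm: square root of the sum of squared entries."""
    return np.linalg.norm(W, ord="fro")

# Toy residual matrix: each row is the reconstruction error of one sample.
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(100, 5))   # well-behaved samples
E_outlier = E.copy()
E_outlier[0] = 10.0                        # one corrupted sample (outlier)

# The squared F-norm grows quadratically with the outlier's magnitude,
# while the L2,1-norm grows only linearly, so an L2,1-based objective
# is dominated far less by a single bad sample.
print("F-norm^2 :", frobenius_norm(E)**2, "->", frobenius_norm(E_outlier)**2)
print("L2,1-norm:", l21_norm(E), "->", l21_norm(E_outlier))
```

This linear-versus-quadratic growth in the penalty is the usual intuition for why L2,1-norm-based objectives are more robust to outliers than F-norm or L2-norm ones.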
