ELM embedded discriminative dictionary learning for image classification.
Neural Networks (IF 6.0), Pub Date: 2019-12-20, DOI: 10.1016/j.neunet.2019.11.015
Yijie Zeng, Yue Li, Jichao Chen, Xiaofan Jia, Guang-Bin Huang

Dictionary learning is a widely adopted approach to image classification. Existing methods focus either on finding a dictionary that produces discriminative sparse representations, or on enforcing priors that best describe the dataset distribution. In many cases, however, the dataset is small, with large intra-class variability and a nondiscriminative feature space. In this work we propose a simple and effective framework, ELM-DDL, to address these issues. Specifically, we represent input features with an Extreme Learning Machine (ELM) with an orthogonal output projection, which enables diverse representations in the nonlinear hidden space and task-specific feature learning in the output space. The embeddings are further regularized via a maximum margin criterion (MMC) to maximize inter-class variance and minimize intra-class variance. For dictionary learning, we design a novel weighted class-specific ℓ1,2 norm to regularize the sparse coding vectors, which promotes uniformity in the sparse patterns of samples belonging to the same class and suppresses support overlap between different classes. We show that this regularization is robust, discriminative, and easy to optimize. The proposed method is combined with a sparse representation classifier (SRC) and evaluated on benchmark datasets. Results show that our approach achieves state-of-the-art performance compared to other dictionary learning methods.
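The ELM embedding step described above can be sketched as follows. This is an illustrative approximation only: the paper learns a task-specific orthogonal output projection, whereas the sketch below substitutes a random orthogonal matrix (obtained via QR decomposition) to show the shape of the computation; the function name `elm_embed` and all hyperparameters are hypothetical.

```python
import numpy as np

def elm_embed(X, n_hidden=256, n_out=64, seed=0):
    """Map inputs to an ELM hidden space, then apply an orthogonal projection.

    Sketch under assumptions: ELM uses fixed random input weights with a
    nonlinear activation; the orthogonal projection here is random (the
    paper's is learned for the task)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))      # random input weights (never trained)
    b = rng.standard_normal(n_hidden)           # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # sigmoid hidden activations
    Q, _ = np.linalg.qr(rng.standard_normal((n_hidden, n_hidden)))
    P = Q[:, :n_out]                            # orthonormal columns: P.T @ P = I
    return H @ P                                # embedding in the output space

X = np.random.default_rng(1).standard_normal((10, 5))
Z = elm_embed(X)
print(Z.shape)  # (10, 64)
```

Because the hidden weights are random and fixed, only the output projection carries task information, which is what makes the projection the natural place to inject discriminative structure.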
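The maximum margin criterion used to regularize the embeddings is a standard scatter-based objective, trace(S_b − S_w), which grows as classes separate and shrinks as classes spread internally. A minimal sketch (the function name `mmc_score` and the toy data are illustrative, not from the paper):

```python
import numpy as np

def mmc_score(Z, y):
    """Maximum margin criterion: trace(S_b - S_w).

    S_b is the between-class scatter, S_w the within-class scatter;
    maximizing this pushes class means apart while tightening each class."""
    mu = Z.mean(axis=0)
    k = Z.shape[1]
    Sb = np.zeros((k, k))
    Sw = np.zeros((k, k))
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sb += len(Zc) * np.outer(mc - mu, mc - mu)  # between-class scatter
        Sw += (Zc - mc).T @ (Zc - mc)               # within-class scatter
    return np.trace(Sb - Sw)

rng = np.random.default_rng(0)
Z0 = rng.normal(10.0, 0.1, size=(20, 2))    # class 0 tightly clustered near (10, 10)
Z1 = rng.normal(-10.0, 0.1, size=(20, 2))   # class 1 tightly clustered near (-10, -10)
Z = np.vstack([Z0, Z1])
y = np.array([0] * 20 + [1] * 20)
score = mmc_score(Z, y)
print(score > 0)  # well-separated, compact classes give a positive criterion
```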
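The weighted class-specific ℓ1,2 penalty on the sparse codes can be read as a weighted group-sparsity term: group the coefficients of a code by the class of their dictionary atoms and sum weighted per-class ℓ2 norms, so codes concentrate support on one class's atoms. This is one plausible reading of the abstract's penalty, not the paper's exact definition; `weighted_group_norm` and the weights are hypothetical.

```python
import numpy as np

def weighted_group_norm(a, atom_labels, weights):
    """Weighted class-wise group penalty on a sparse code `a`.

    Assumption: the penalty is a weighted sum of the l2 norms of the
    coefficient groups, one group per dictionary class (a standard
    group-sparsity form); the paper's exact weighting may differ."""
    total = 0.0
    for c, w in weights.items():
        total += w * np.linalg.norm(a[atom_labels == c])
    return total

a = np.array([0.5, 0.0, -0.3, 0.8, 0.0])   # sparse code over 5 atoms
labels = np.array([0, 0, 1, 1, 1])         # class of each dictionary atom
w = {0: 1.0, 1: 2.0}                       # per-class weights (illustrative)
val = weighted_group_norm(a, labels, w)
print(round(val, 4))  # 0.5 + 2*sqrt(0.09 + 0.64) ≈ 2.2088
```

Penalizing whole class-groups of coefficients (rather than individual entries, as ℓ1 does) is what discourages a sample's code from spreading its support across atoms of several classes.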
