When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning and Coding Network for Image Recognition With Limited Data.
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2) | Pub Date: 2020-06-09 | DOI: 10.1109/tnnls.2020.2997289
Hao Tang, Hong Liu, Wei Xiao, Nicu Sebe

We present a new deep dictionary learning and coding network (DDLCN) for image-recognition tasks with limited data. The proposed DDLCN has most of the standard deep learning layers (e.g., input/output, pooling, and fully connected), but the fundamental convolutional layers are replaced by our proposed compound dictionary learning and coding layers. The dictionary learning layer learns an overcomplete dictionary for the input training data. At the deep coding layer, a locality constraint is added to guarantee that the activated dictionary bases are close to each other. Then, the activated dictionary atoms are assembled and passed to the compound dictionary learning and coding layers. In this way, the activated atoms in the first layer can be represented by the deeper atoms in the second dictionary. Intuitively, the second dictionary is designed to learn the fine-grained components shared among the input dictionary atoms; thus, a more informative and discriminative low-level representation of the dictionary atoms can be obtained. We empirically compare DDLCN with several leading dictionary learning methods and deep learning models. Experimental results on five popular data sets show that DDLCN achieves competitive results compared with state-of-the-art methods when the training data are limited. Code is available at https://github.com/Ha0Tang/DDLCN.
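To make the locality-constrained coding and the two-level (compound) coding step more concrete, here is a minimal NumPy sketch. It is not the authors' released implementation (see the repository above for that); the dictionary shapes, the sum-to-one constraint, the regularization term, and the weighted pooling of second-layer codes are assumptions modeled on standard locality-constrained linear coding (LLC), where only the k atoms nearest to the input are activated.

```python
import numpy as np

def locality_coding(x, D, k=5, reg=1e-4):
    """Code signal x over dictionary D (atoms as columns) using only its k nearest atoms.

    The coefficients of the activated atoms are obtained by solving a small
    regularized least-squares problem (LLC-style approximation), so activated
    bases are guaranteed to lie close to the input.
    """
    dists = np.linalg.norm(D - x[:, None], axis=0)   # distance from x to every atom
    idx = np.argsort(dists)[:k]                      # indices of the k activated atoms
    B = D[:, idx]                                    # activated (nearest) atoms
    Z = B - x[:, None]                               # shift activated atoms to the origin
    C = Z.T @ Z + reg * np.eye(k)                    # local covariance, regularized
    c = np.linalg.solve(C, np.ones(k))               # analytic LLC solution
    c /= c.sum()                                     # sum-to-one constraint (assumed)
    code = np.zeros(D.shape[1])
    code[idx] = c
    return code, idx

def two_layer_coding(x, D1, D2, k1=5, k2=5):
    """Compound coding: atoms activated in the first dictionary are themselves
    re-coded with a deeper, second dictionary, and the two codes are concatenated."""
    code1, idx1 = locality_coding(x, D1, k=k1)
    code2 = np.zeros(D2.shape[1])
    for i in idx1:
        atom_code, _ = locality_coding(D1[:, i], D2, k=k2)
        code2 += code1[i] * atom_code                # pool, weighted by first-layer coefficients
    return np.concatenate([code1, code2])            # multi-level representation

# Example with random data (illustration only, not a trained dictionary):
rng = np.random.default_rng(0)
d, K1, K2 = 64, 256, 128
D1 = rng.standard_normal((d, K1))                    # first-layer overcomplete dictionary
D2 = rng.standard_normal((d, K2))                    # second-layer dictionary over atoms
x = rng.standard_normal(d)
feature = two_layer_coding(x, D1, D2)
print(feature.shape)                                 # (384,)
```

In this sketch the second call to locality_coding expresses each activated first-layer atom in terms of the deeper dictionary, which is the intuition behind learning fine-grained components shared among the input atoms.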

Updated: 2020-06-09