Learned Greedy Method (LGM): A novel neural architecture for sparse coding and beyond
Journal of Visual Communication and Image Representation (IF 2.6), Pub Date: 2021-03-26, DOI: 10.1016/j.jvcir.2021.103095
Rajaei Khatib, Dror Simon, Michael Elad

The fields of signal and image processing have been deeply influenced by the introduction of deep neural networks. Despite their impressive success, the architectures used in these solutions come with no clear justification, being "black box" machines that lack interpretability. A constructive remedy to this drawback is a systematic design of networks by unfolding well-understood iterative algorithms. A popular representative of this approach is LISTA, which evaluates sparse representations of processed signals. In this paper, we revisit this task and propose an unfolded version of a greedy pursuit algorithm for the same goal. More specifically, we concentrate on the well-known OMP algorithm and introduce its unfolded and learned version. Key features of our Learned Greedy Method (LGM) are its ability to accommodate a dynamic number of unfolded layers, and a stopping mechanism based on the representation error. We develop several variants of the proposed LGM architecture and demonstrate their flexibility and efficiency.
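To make the greedy pursuit being unfolded concrete, here is a minimal NumPy sketch of the classical OMP algorithm, including the representation-error stopping rule the abstract highlights. This is an illustrative textbook implementation, not the learned LGM architecture from the paper; the function and parameter names are our own.

```python
import numpy as np

def omp(D, y, err_tol, max_atoms):
    """Orthogonal Matching Pursuit.

    D         : (n, m) dictionary with L2-normalized columns (atoms)
    y         : (n,) input signal
    err_tol   : stop once the residual norm falls below this value
    max_atoms : hard cap on the number of selected atoms (unfolded "layers")
    """
    r = y.copy()                      # current residual
    support = []                      # indices of selected atoms
    coef = np.zeros(0)                # coefficients on the support
    x = np.zeros(D.shape[1])          # full sparse representation
    while len(support) < max_atoms and np.linalg.norm(r) > err_tol:
        # Greedy step: pick the atom most correlated with the residual.
        k = int(np.argmax(np.abs(D.T @ r)))
        support.append(k)
        # Projection step: least-squares fit of y on the chosen atoms.
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        r = y - Ds @ coef
    x[support] = coef
    return x
```

Each pass of the loop corresponds to one unfolded layer; the `err_tol` check is the error-based stopping mechanism, which is why the number of effective layers varies per input signal.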




Updated: 2021-04-01