On the Convergence of a Bayesian Algorithm for Joint Dictionary Learning and Sparse Recovery
IEEE Transactions on Signal Processing (IF 5.4). Pub Date: 2020-01-01. DOI: 10.1109/tsp.2019.2954526
Geethu Joseph , Chandra R. Murthy

Dictionary learning (DL) is a well-researched problem, where the goal is to learn a dictionary from a finite set of noisy training signals, such that the training data admits a sparse representation over the dictionary. While several solutions are available in the literature, relatively little is known about their convergence and optimality properties. In this paper, we make progress on this problem by analyzing a Bayesian algorithm for DL. Specifically, we cast the DL problem into the sparse Bayesian learning (SBL) framework by imposing a hierarchical Gaussian prior on the sparse vectors. This allows us to simultaneously learn the dictionary as well as the parameters of the prior on the sparse vectors using the expectation-maximization algorithm. The dictionary update step turns out to be a non-convex optimization problem, and we present two solutions, namely, an alternating minimization (AM) procedure and an Armijo line search (ALS) method. We analytically show that the ALS procedure is globally convergent, and establish the stability of the solution by characterizing its limit points. Further, we prove the convergence and stability of the overall DL-SBL algorithm, and show that the minima of the cost function of the overall algorithm are achieved at sparse solutions. As a concrete example, we consider the application of the SBL-based DL algorithm to image denoising, and demonstrate the efficacy of the algorithm relative to existing DL algorithms.
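The hierarchical-Gaussian EM scheme described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it runs the standard SBL E-step (Gaussian posterior of the sparse codes under the prior x ~ N(0, diag(γ))) and the standard γ M-step, while holding the dictionary A fixed; the paper's dictionary update (AM or Armijo line search) is omitted, and the function name `sbl_em`, the noise level `sigma2`, and the toy data are all illustrative choices.

```python
import numpy as np

def sbl_em(Y, A, sigma2=1e-2, n_iters=50):
    """Toy SBL EM loop with a FIXED dictionary A (the DL-SBL dictionary
    update via AM or Armijo line search is deliberately omitted here).
    Y: (m, T) training signals; A: (m, n) dictionary.
    Prior: each code column x_t ~ N(0, diag(gamma))."""
    m, n = A.shape
    gamma = np.ones(n)
    for _ in range(n_iters):
        gamma = np.maximum(gamma, 1e-10)  # numerical floor before inverting
        # E-step: Gaussian posterior, shared across columns since A, gamma are shared.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        Mu = Sigma @ A.T @ Y / sigma2
        # M-step: standard SBL hyperparameter update, averaged over columns.
        gamma = np.mean(Mu**2, axis=1) + np.diag(Sigma)
    return Mu, gamma

# Toy usage: columns supported on the first 3 atoms, plus small noise.
rng = np.random.default_rng(0)
m, n, T = 20, 30, 40
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
X = np.zeros((n, T))
X[:3, :] = rng.standard_normal((3, T))  # 3-sparse codes
Y = A @ X + 0.01 * rng.standard_normal((m, T))
Mu, gamma = sbl_em(Y, A)
```

As the iterations proceed, the γ_i associated with unused atoms shrink toward zero, which is the mechanism by which SBL drives the recovered codes toward sparse solutions; the full DL-SBL algorithm interleaves a dictionary re-estimation step with this loop.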

Updated: 2020-01-01