Efficient Convolutional Dictionary Learning Using Preconditioned ADMM
International Journal of Pattern Recognition and Artificial Intelligence (IF 1.5), Pub Date: 2021-08-16, DOI: 10.1142/s0218001421510095
Xuesong Zhang, Baoping Li, Jing Jiang

Given training data, convolutional dictionary learning (CDL) seeks a translation-invariant sparse representation, which is characterized by a set of convolutional kernels. However, even a small training set of moderate sample size can render the optimization process both computationally challenging and memory-hungry. Under a biconvex optimization strategy for CDL, we propose to diagonally precondition the system matrices in the filter-learning sub-problem, which can be solved by the alternating direction method of multipliers (ADMM). This replaces the matrix inversion (𝒪(n³)) and matrix multiplication (𝒪(n³)) involved in ADMM with element-wise operations (𝒪(n)), which significantly reduces both the computational complexity and the memory requirement. Numerical experiments validate the performance advantage of the proposed method over state-of-the-art methods. Code is available at https://github.com/baopingli/Efficient-Convolutional-Dictionary-Learning-using-PADMM.
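To make the complexity claim concrete, the following is a minimal NumPy sketch of the general preconditioned (linearized) ADMM idea, not necessarily the authors' exact formulation: the filter-update step of ADMM reduces to a regularized least-squares solve whose direct solution requires inverting a dense system matrix, whereas a diagonal majorization of the data term turns the per-iteration "solve" into an element-wise division. The names X, s, v, rho and tau, and the specific diagonal choice tau*I, are hypothetical placeholders for this illustration.

import numpy as np

# Illustrative sketch only (not the authors' exact algorithm): the ADMM filter
# update in CDL reduces to a regularized least-squares problem
#     min_d 0.5*||X d - s||^2 + 0.5*rho*||d - v||^2,
# whose closed-form solution needs the inverse of (X^T X + rho*I), an O(n^3)
# operation. A diagonal ("preconditioned"/linearized) variant majorizes the
# data term with tau*I, tau >= lambda_max(X^T X), so each update only needs
# matrix-vector products and an element-wise division.
rng = np.random.default_rng(0)
m, n = 200, 100
X = rng.standard_normal((m, n))     # hypothetical system matrix
s = rng.standard_normal(m)          # hypothetical training signal
v = rng.standard_normal(n)          # stands in for (z - u) in a full ADMM loop
rho = 1.0

# Plain update: explicit O(n^3) solve with the dense system matrix.
d_direct = np.linalg.solve(X.T @ X + rho * np.eye(n), X.T @ s + rho * v)

# Preconditioned update: iterate d <- (tau*d - X^T(X d - s) + rho*v)/(tau + rho);
# the only "solve" is an element-wise division by the diagonal value tau + rho.
tau = np.linalg.norm(X, 2) ** 2     # spectral-norm bound on lambda_max(X^T X)
d = np.zeros(n)
for _ in range(500):
    d = (tau * d - X.T @ (X @ d - s) + rho * v) / (tau + rho)

print("max deviation from the direct solve:", np.max(np.abs(d - d_direct)))

Both routes converge to the same filter update; the preconditioned one never forms or inverts the n-by-n system matrix, which is where the stated 𝒪(n³)-to-𝒪(n) reduction per solve comes from.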

Updated: 2021-08-16