Deep Learning Meets Sparse Regularization: A signal processing perspective
IEEE Signal Processing Magazine (IF 14.9), Pub Date: 2023-09-07, DOI: 10.1109/msp.2023.3286988
Rahul Parhi, Robert D. Nowak

Deep learning (DL) has been wildly successful in practice, and most of the state-of-the-art machine learning methods are based on neural networks (NNs). Lacking, however, is a rigorous mathematical theory that adequately explains the amazing performance of deep NNs (DNNs). In this article, we present a relatively new mathematical framework that provides the beginning of a deeper understanding of DL. This framework precisely characterizes the functional properties of NNs that are trained to fit data. The key mathematical tools that support this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory, all of which are techniques deeply rooted in signal processing. This framework explains the effect of weight-decay regularization in NN training, the use of skip connections and low-rank weight matrices in network architectures, and the role of sparsity in NNs, and it explains why NNs can perform well in high-dimensional problems.
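A key observation behind the connection between weight decay and sparsity is a rescaling argument: a ReLU neuron is invariant to scaling its input weight by α and its output weight by 1/α, and minimizing the squared-ℓ2 (weight-decay) penalty over these rescalings yields the product of the two weight magnitudes, a sparsity-promoting (ℓ1-like) penalty on the neuron. The following is a minimal numerical sketch of this identity for a single scalar-weight neuron (the variable names `w`, `v`, and `alpha` are illustrative, not from the article):

```python
import math

def weight_decay_penalty(w, v, alpha):
    """Squared-l2 penalty of the rescaled neuron x -> (v/alpha) * relu((alpha*w) * x).

    Rescaling by alpha leaves the neuron's function unchanged, but changes
    the weight-decay cost 0.5 * (||alpha*w||^2 + ||v/alpha||^2).
    """
    return 0.5 * ((alpha * w) ** 2 + (v / alpha) ** 2)

w, v = 3.0, 0.5

# Closed-form minimizer over rescalings: alpha* = sqrt(|v| / |w|),
# at which the penalty equals |w * v| -- an l1-type product penalty.
alpha_star = math.sqrt(abs(v) / abs(w))
min_penalty = weight_decay_penalty(w, v, alpha_star)

print(min_penalty)   # equals |w * v| = 1.5
print(abs(w * v))    # 1.5
```

Summing this minimized penalty over all neurons gives the "path norm" of a two-layer network, which is the transform-domain sparse regularizer that the framework in the article analyzes.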

Updated: 2023-09-08