Nonconvex regularization for sparse neural networks
Applied and Computational Harmonic Analysis (IF 2.5) Pub Date: 2022-06-03, DOI: 10.1016/j.acha.2022.05.003
Konstantin Pieper, Armenak Petrosyan

Convex ℓ1 regularization using an infinite dictionary of neurons has been suggested for constructing neural networks with desired approximation guarantees, but can be affected by an arbitrary amount of over-parametrization. This can lead to a loss of sparsity and result in networks with too many active neurons for the given data, in particular if the number of data samples is large. As a remedy, in this paper, a nonconvex regularization method is investigated in the context of shallow ReLU networks: We prove that in contrast to the convex approach, any resulting (locally optimal) network is finite even in the presence of infinite data (i.e., if the data distribution is known and the limiting case of infinite samples is considered). Moreover, we show that approximation guarantees and existing bounds on the network size for finite data are maintained.
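For illustration, below is a minimal sketch (not the paper's formulation or algorithm) that contrasts a convex ℓ1 penalty with a simple nonconvex penalty on the outer weights of an over-parametrized shallow ReLU network. The log-type penalty, the network width, and the active-neuron threshold are assumptions chosen purely for demonstration; the paper's specific regularizer and analysis are not reproduced here.

```python
# Sketch: l1 vs. a nonconvex penalty on the outer weights c_j of a shallow
# ReLU network f(x) = sum_j c_j * relu(a_j x + b_j). Illustrative only.
import torch

torch.manual_seed(0)

# Synthetic 1-D regression data
x = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
y = torch.sin(3.0 * x) + 0.05 * torch.randn_like(x)

def make_network(width=200):
    # Deliberately over-parametrized: many more neurons than needed.
    a = torch.randn(width, 1, requires_grad=True)
    b = torch.randn(width, requires_grad=True)
    c = torch.zeros(width, requires_grad=True)
    return a, b, c

def forward(x, a, b, c):
    return torch.relu(x @ a.T + b) @ c.unsqueeze(1)

def train(penalty, alpha=1e-3, steps=3000, lr=1e-2):
    a, b, c = make_network()
    opt = torch.optim.Adam([a, b, c], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((forward(x, a, b, c) - y) ** 2) + alpha * penalty(c)
        loss.backward()
        opt.step()
    return c

# Convex l1 penalty on the outer weights
l1 = lambda c: c.abs().sum()

# An illustrative nonconvex penalty (log-type); an assumption for this sketch,
# not necessarily the penalty analyzed in the paper.
gamma = 10.0
log_pen = lambda c: torch.log1p(gamma * c.abs()).sum() / gamma

for name, pen in [("l1", l1), ("nonconvex log", log_pen)]:
    c = train(pen)
    active = (c.abs() > 1e-3).sum().item()  # count neurons with nonzero outer weight
    print(f"{name:15s} active neurons: {active} / {c.numel()}")
```

In this kind of toy run one would expect the nonconvex penalty to drive more outer weights exactly (or numerically) to zero, i.e., fewer active neurons for a comparable fit, which is the qualitative behavior the paper establishes rigorously for its regularizer.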




Updated: 2022-06-03