Understanding neural networks with reproducing kernel Banach spaces
Applied and Computational Harmonic Analysis (IF 2.5). Pub Date: 2022-09-05. DOI: 10.1016/j.acha.2022.08.006
F. Bartolucci, E. De Vito, L. Rosasco, S. Vigogna

Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one-hidden-layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure, with norm given by the total variation norm of the measure. Our analysis simplifies and extends recent results in [45], [36], [37].
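To make the setting concrete, the following is a schematic of the standard infinite-width formulation underlying such results; the notation is ours and is assumed here rather than quoted from the paper. A one-hidden-layer network of possibly infinite width is written as an integral of neurons against a bounded real (signed) measure, and the space norm is the smallest total variation among representing measures:

\[
f_\mu(x) \;=\; \int_{\mathbb{S}^{d-1}\times\mathbb{R}} \rho(\langle w, x\rangle - b)\, d\mu(w,b),
\qquad
\|f\| \;=\; \inf\bigl\{\, \|\mu\|_{TV} \,:\, f = f_\mu \,\bigr\},
\]

where \(\rho(t) = \max(t,0)\) denotes the ReLU activation and \(\mu\) ranges over bounded real measures on the parameter space. In this notation, a representer theorem of the kind proved in the paper states that regularized empirical risk minimization over \(N\) data points admits a minimizer that is a finite-width network,

\[
f^\star(x) \;=\; \sum_{j=1}^{N} c_j\, \rho(\langle w_j, x\rangle - b_j),
\]

i.e., a network whose number of neurons is controlled by the number of training samples.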

Updated: 2022-09-05