The Universal Approximation Property
Annals of Mathematics and Artificial Intelligence (IF 1.2), Pub Date: 2021-01-22, DOI: 10.1007/s10472-020-09723-1
Anastasis Kratsios

The universal approximation property of various machine learning models is currently only understood on a case-by-case basis, limiting the rapid development of new theoretically justified neural network architectures and blurring our understanding of our current models' potential. This paper works towards overcoming these challenges by presenting a characterization, a representation, a construction method, and an existence result, each of which applies to any universal approximator on most function spaces of practical interest. Our characterization result is used to describe which activation functions allow the feed-forward architecture to maintain its universal approximation capabilities when multiple constraints are imposed on its final layers and its remaining layers are only sparsely connected. These include a rescaled and shifted Leaky ReLU activation function but not the ReLU activation function. Our construction and representation results are used to exhibit a simple modification of the feed-forward architecture, which can approximate any continuous function with non-pathological growth, uniformly on the entire Euclidean input space. This improves the known capabilities of the feed-forward architecture.
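The distinction the abstract draws between Leaky ReLU and ReLU can be illustrated with a small sketch. The code below implements a generic "rescaled and shifted" Leaky ReLU of the form σ(x) = a·LeakyReLU(x) + b; the particular constants (`alpha`, `scale`, `shift`) are illustrative placeholders, not values specified by the paper. One relevant structural difference it exposes: with a nonzero negative slope the activation is injective (strictly increasing) on all of ℝ, whereas ReLU collapses the entire negative half-line to zero.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x >= 0, slope alpha for x < 0.
    alpha = 0 recovers the ordinary ReLU."""
    return np.where(x >= 0, x, alpha * x)

def rescaled_shifted_leaky_relu(x, alpha=0.01, scale=1.1, shift=0.3):
    """A rescaled and shifted Leaky ReLU, sigma(x) = scale * LeakyReLU(x) + shift.
    The constants here are arbitrary illustrative choices; the paper's
    characterization result determines which such activations preserve
    the universal approximation property under the stated constraints."""
    return scale * leaky_relu(x, alpha) + shift

x = np.array([-1.0, 0.0, 2.0])

# Leaky ReLU (alpha > 0) is strictly increasing, hence injective on R;
# ReLU (alpha = 0) maps every non-positive input to the same value 0.
print(rescaled_shifted_leaky_relu(x))            # strictly increasing outputs
print(leaky_relu(np.array([-2.0, -1.0]), 0.0))   # ReLU collapses negatives to 0
```

This sketch is only meant to make the abstract's claim concrete; the paper's actual conditions on admissible activations are analytic, not captured by any single numeric example.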

Updated: 2021-01-22