A direct approach for function approximation on data defined manifolds.
Neural Networks (IF 7.8) Pub Date: 2020-08-25, DOI: 10.1016/j.neunet.2020.08.018
H. N. Mhaskar

In much of the literature on function approximation by deep networks, the function is assumed to be defined on some known domain, such as a cube or a sphere. In practice, the data might not be dense on these domains, and therefore the approximation theory results are observed to be too conservative. In manifold learning, one assumes instead that the data is sampled from an unknown manifold; i.e., the manifold is defined by the data itself. Function approximation on this unknown manifold is then a two-stage procedure: first, one approximates the Laplace–Beltrami operator (and its eigen-decomposition) on this manifold using a graph Laplacian, and next approximates the target function using the eigenfunctions. Alternatively, one first estimates an atlas on the manifold and then uses local approximation techniques based on the local coordinate charts.
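As a concrete illustration of the two-stage procedure sketched above, the following Python fragment builds a graph Laplacian from points sampled on a toy manifold (a circle in R^2), computes its eigen-decomposition, and fits the target function by least squares in the leading eigenvectors. This is a minimal sketch: the affinity scale eps, the number of eigenfunctions k, and the target function sin(3θ) are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

# Toy "data-defined manifold": points sampled from a circle in R^2.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
X = np.column_stack([np.cos(theta), np.sin(theta)])
f = np.sin(3.0 * theta)                    # target function on the manifold

# Stage 1: graph Laplacian as a discrete proxy for the Laplace-Beltrami
# operator, followed by its eigen-decomposition.
eps = 0.1                                  # illustrative affinity scale
W = np.exp(-cdist(X, X, "sqeuclidean") / eps)
L = np.diag(W.sum(axis=1)) - W             # unnormalized graph Laplacian
evals, evecs = eigh(L)                     # eigenvalues in ascending order

# Stage 2: approximate f by least squares in the leading eigenvectors,
# the discrete analogues of the Laplace-Beltrami eigenfunctions.
k = 20
Phi = evecs[:, :k]
coeffs, *_ = np.linalg.lstsq(Phi, f, rcond=None)
print("max error on samples:", np.abs(Phi @ coeffs - f).max())
```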

In this paper, we propose a more direct approach to function approximation on unknown, data defined manifolds without computing the eigen-decomposition of some operator or an atlas for the manifold, and without any kind of training in the classical sense. Our constructions are universal; i.e., do not require the knowledge of any prior on the target function other than continuity on the manifold. We estimate the degree of approximation. For smooth functions, the estimates do not suffer from the so-called saturation phenomenon. We demonstrate via a property called good propagation of errors how the results can be lifted for function approximation using deep networks where each channel evaluates a Gaussian network on a possibly unknown manifold.
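To contrast with the two-stage pipeline, here is a minimal sketch of a "direct", training-free estimator in the spirit the abstract describes: a normalized Gaussian-kernel quasi-interpolant written down in closed form from the samples. This Nadaraya-Watson-style estimator is an illustrative stand-in, not the paper's actual construction, which uses different localized kernels; the bandwidth eps is an assumed parameter.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Samples from the unknown manifold (a circle in R^2, for illustration).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
X = np.column_stack([np.cos(theta), np.sin(theta)])
f = np.sin(3.0 * theta)

def direct_estimate(x_eval, X, f, eps=0.02):
    # Normalized Gaussian-kernel estimate built directly from the data:
    # no eigen-decomposition, no atlas, no trained parameters.
    K = np.exp(-cdist(x_eval, X, "sqeuclidean") / eps)
    return (K @ f) / K.sum(axis=1)

# Evaluate at held-out points on the same manifold.
theta_new = rng.uniform(0.0, 2.0 * np.pi, 100)
X_new = np.column_stack([np.cos(theta_new), np.sin(theta_new)])
err = np.abs(direct_estimate(X_new, X, f) - np.sin(3.0 * theta_new)).max()
print("max error at held-out points:", err)
```

The point of the sketch is that the approximant is assembled in one pass from the data, with no optimization loop and no intermediate spectral or chart computation, which is the sense in which the abstract's approach is "direct".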

Updated: 2020-09-11