Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators
Nature Machine Intelligence (IF 18.8), Pub Date: 2021-03-18, DOI: 10.1038/s42256-021-00302-5
Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, George Em Karniadakis

It is widely known that neural networks (NNs) are universal approximators of continuous functions. However, a lesser-known but powerful result is that an NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the structure and potential of deep neural networks (DNNs) in learning continuous operators or complex systems from streams of scattered data. Here, we thus extend this theorem to DNNs. We design a new network with small generalization error, the deep operator network (DeepONet), which consists of a DNN for encoding the discrete input function space (branch net) and another DNN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study different formulations of the input function space and their effect on the generalization error for 16 diverse applications.
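
A minimal sketch of the branch/trunk construction described above, written in PyTorch (an assumed framework choice; the abstract does not prescribe one). The branch net encodes the input function u sampled at m fixed sensor locations, the trunk net encodes a query coordinate y in the domain of the output function, and the operator value G(u)(y) is read off as the inner product of the two embeddings plus a bias. The layer widths, activation, and the toy antiderivative training loop below are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet sketch: G(u)(y) ~ <branch(u), trunk(y)> + bias."""

    def __init__(self, m=100, p=40, width=40):
        super().__init__()
        # Branch net: [u(x_1), ..., u(x_m)] -> p-dimensional embedding.
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(), nn.Linear(width, p))
        # Trunk net: query coordinate y -> p-dimensional embedding.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u, y):
        # u: (batch, m) sensor values; y: (batch, 1) query coordinates.
        b = self.branch(u)                  # (batch, p)
        t = self.trunk(y)                   # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Toy usage: learn the antiderivative operator G(u)(y) = integral of u
# from 0 to y, on random cosine inputs u(x) = a*cos(w*x), whose exact
# antiderivative is (a / w) * sin(w*y).
m = 100
xs = torch.linspace(0, 1, m)               # fixed sensor locations
model = DeepONet(m=m)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    a = torch.rand(64, 1) * 2 - 1          # amplitudes in [-1, 1]
    w = torch.rand(64, 1) * 5 + 1          # frequencies in [1, 6]
    u = a * torch.cos(w * xs)              # (64, m) sensor readings
    y = torch.rand(64, 1)                  # random query points in [0, 1]
    target = (a / w) * torch.sin(w * y)    # exact antiderivative at y
    loss = ((model(u, y) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

Factoring the network this way keeps evaluation cheap: after one branch pass per input function, the learned operator can be queried at any point y with a single trunk pass and a dot product.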



Updated: 2021-03-18