Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
Physica D: Nonlinear Phenomena (IF 4) Pub Date: 2021-08-30, DOI: 10.1016/j.physd.2021.133022
He Zhang, John Harlim, Xiantao Li

This paper studies the theoretical underpinnings of machine learning of ergodic Itô diffusions. The objective is to understand the convergence properties of the invariant statistics when the underlying system of stochastic differential equations (SDEs) is empirically estimated with a supervised regression framework. Using the perturbation theory of ergodic Markov chains and the linear response theory, we deduce a linear dependence of the errors of one-point and two-point invariant statistics on the error in the learning of the drift and diffusion coefficients. More importantly, our study shows that the usual L2-norm characterization of the learning generalization error is insufficient for achieving this linear dependence result. We find that a sufficient condition for such a linear dependence is a learning algorithm that produces a uniformly Lipschitz and consistent estimator in a hypothesis space that retains certain characteristics of the drift coefficients, such as the usual linear growth condition that guarantees the existence of solutions of the underlying SDEs. We examine these conditions on two well-understood learning algorithms: the kernel-based spectral regression method and shallow random neural networks with the ReLU activation function.
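To make the setting concrete, the following is a minimal illustrative sketch (not the paper's exact estimators or error analysis): we simulate an ergodic Itô diffusion — here an Ornstein–Uhlenbeck process with drift b(x) = -x and constant diffusion — by Euler–Maruyama, form finite-difference regression targets for the drift, and fit them with a shallow random-feature ReLU network whose hidden layer is frozen and whose output layer is solved by least squares. All parameter choices (step size, feature count, sampling ranges) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an ergodic Ito diffusion (Ornstein-Uhlenbeck): dX = -X dt + sqrt(2) dW
dt, n_steps = 1e-2, 50000
x = np.empty(n_steps)
x[0] = 0.0
for k in range(n_steps - 1):
    x[k + 1] = x[k] - x[k] * dt + np.sqrt(2.0 * dt) * rng.standard_normal()

# Supervised regression data: finite-difference targets are noisy samples of b(X) = -X
X = x[:-1]
Y = (x[1:] - x[:-1]) / dt

# Shallow random ReLU network: random fixed hidden layer, least-squares output layer
n_features = 64
W = rng.standard_normal(n_features)          # random input weights (frozen)
b = rng.uniform(-3.0, 3.0, n_features)       # random biases (frozen)
Phi = np.maximum(W[None, :] * X[:, None] + b[None, :], 0.0)   # ReLU features
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Evaluate the learned drift on a grid and compare with the true drift b(x) = -x
grid = np.linspace(-2.0, 2.0, 5)
Phi_g = np.maximum(W[None, :] * grid[:, None] + b[None, :], 0.0)
drift_hat = Phi_g @ coef
max_err = float(np.max(np.abs(drift_hat - (-grid))))
print(f"max drift error on grid: {max_err:.3f}")
```

The finite-difference targets have O(1/dt) variance, so a long trajectory is needed before the least-squares fit averages the noise down; in the paper's language, what matters for the invariant statistics is not only this pointwise fit but uniform Lipschitz control and growth conditions on the learned drift.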




Updated: 2021-09-12