Neural Parametric Fokker--Planck Equation
SIAM Journal on Numerical Analysis (IF 2.9), Pub Date: 2022-06-14, DOI: 10.1137/20m1344986
Shu Liu, Wuchen Li, Hongyuan Zha, Haomin Zhou

SIAM Journal on Numerical Analysis, Volume 60, Issue 3, Page 1385-1449, June 2022.
In this paper, we develop and analyze numerical methods for high-dimensional Fokker--Planck equations by leveraging generative models from deep learning. Our starting point is a formulation of the Fokker--Planck equation as a system of ordinary differential equations (ODEs) on a finite-dimensional parameter space, with the parameters inherited from generative models such as normalizing flows. We call such ODEs neural parametric Fokker--Planck equations. The fact that the Fokker--Planck equation can be viewed as the $L^2$-Wasserstein gradient flow of the Kullback--Leibler (KL) divergence allows us to derive the ODEs as the constrained $L^2$-Wasserstein gradient flow of the KL divergence on the set of probability densities generated by neural networks. For numerical computation, we design a variational semi-implicit scheme for the time discretization of the proposed ODE. The resulting algorithm is sampling-based and can readily handle Fokker--Planck equations in higher-dimensional spaces. Moreover, we establish asymptotic convergence bounds for the neural parametric Fokker--Planck equation as well as error bounds for both its continuous and discrete versions. Several numerical examples are provided to illustrate the performance of the proposed algorithms and to support the analysis.
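
As a reading aid, the following is a hedged sketch of the constrained gradient flow described above, not a formula quoted from the paper; the symbols $\rho_\theta$, $\rho^*$, $G(\theta)$, $d_G$, and $h$ are introduced here for illustration only. Writing $\rho_\theta$ for the density produced by the generative model with parameters $\theta$ and $\rho^*$ for the equilibrium density of the Fokker--Planck equation (for the standard overdamped dynamics with potential $V$, $\rho^* \propto e^{-V}$), a parametric ODE of this type can be expected to take the form
\[
  \dot{\theta}_t \;=\; -\,G(\theta_t)^{-1}\,\nabla_{\theta} D_{\mathrm{KL}}\!\left(\rho_{\theta_t}\,\middle\|\,\rho^*\right),
\]
where $G(\theta)$ denotes the pullback of the $L^2$-Wasserstein metric onto the finite-dimensional parameter space. A proximal (JKO-type) step is one common way to realize a variational semi-implicit time discretization of such a flow; the precise proximity term used in the paper may differ from this sketch:
\[
  \theta_{k+1} \;\in\; \operatorname*{arg\,min}_{\theta}\;\frac{1}{2h}\,d_G(\theta,\theta_k)^2 \;+\; D_{\mathrm{KL}}\!\left(\rho_{\theta}\,\middle\|\,\rho^*\right),
\]
with time step $h$ and a proximity term $d_G$ induced by the metric. Both the KL term and the proximity term can be estimated by Monte Carlo sampling from the generative model, which is what makes such a scheme usable in higher-dimensional spaces.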


Updated: 2022-06-15