How Much Can One Learn a Partial Differential Equation from Its Solution?

Foundations of Computational Mathematics

Abstract

In this work, we study the problem of learning a partial differential equation (PDE) from its solution data. PDEs of various types are used to illustrate how much of the PDE operator the solution data can reveal, depending on the underlying operator and the initial data. A data-driven and data-adaptive approach based on local regression and global consistency is proposed for stable PDE identification. Numerical experiments are provided to verify our analysis and demonstrate the performance of the proposed algorithms.



Acknowledgements

H. Zhao’s research is partially supported by NSF Grant DMS-2012860 and DMS-2309551. Y. Zhong’s research is partially supported by NSF Grant DMS-2309530.

Author information

Corresponding author

Correspondence to Yimin Zhong.

Additional information

Communicated by Rachel Ward.


Appendix A. Proof of (4.20) and (4.21)

In the following, we assume \(N > 1\) and recall that \(\zeta _1,\dots ,\zeta _N\) are independent Gaussian random variables with \({\mathbb {E}}[\zeta _n]=\mu _n\) and \(\text {Var}[\zeta _n]=\frac{B-1}{B}\sigma ^2\). Since the variance of a random variable \(X\) can be expressed as \({\mathbb {E}}[X^2]-({\mathbb {E}}[X])^2\), we get

$$\begin{aligned} \sum _{m=1}^N{\mathbb {E}}\left[ (\zeta _m-\frac{1}{N}\sum _{n=1}^N\zeta _n)^2\right]&=\sum _{m=1}^N{\mathbb {E}}\left[ \zeta _m^2\right] -\frac{1}{N}{\mathbb {E}}\left[ (\sum _{n=1}^N\zeta _n)^2\right] \\&=\frac{N(B-1)}{B}\sigma ^2+\sum _{m=1}^N\mu _m^2-\frac{B-1}{B}\sigma ^2-\frac{1}{N}\left( \sum _{n=1}^N\mu _n\right) ^2\\&=\frac{(N-1)(B-1)}{B}\sigma ^2+\sum _{m=1}^N\mu _m^2-\frac{1}{N}\left( \sum _{n=1}^N\mu _n\right) ^2. \end{aligned}$$

Hence, by defining the estimator

$$\begin{aligned} {\widehat{\sigma }}^2 = \frac{B\sum _{m=1}^N\left( \zeta _m-\frac{1}{N}\sum _{n=1}^N\zeta _n\right) ^2}{(N-1)(B-1)} \end{aligned}$$

and noting the Lipschitz assumption, which gives \(\mu _m^2\le DL^2R^2\) for each \(m\), we have

$$\begin{aligned} {\mathbb {E}}[{\widehat{\sigma }}^2-\sigma ^2]&=\frac{B\left( \sum _{m=1}^N\mu _m^2-\frac{1}{N}(\sum _{n=1}^N\mu _n)^2\right) }{(N-1)(B-1)}\\&\le \frac{B\sum _{m=1}^N\mu _m^2}{(N-1)(B-1)}\le \frac{DNBL^2R^2}{(N-1)(B-1)}. \end{aligned}$$
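As a quick numerical sanity check of this bias identity (an added illustration, not part of the original derivation; \(N\), \(B\), \(\sigma \), and the means \(\mu _n\) below are illustrative choices), one can sample independent Gaussians \(\zeta _n\sim {\mathcal {N}}(\mu _n,\frac{B-1}{B}\sigma ^2)\) and compare the Monte Carlo bias of \({\widehat{\sigma }}^2\) with the exact expression:

```python
import numpy as np

# Monte Carlo sanity check of the bias identity for the estimator sigma^2-hat.
# All parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
N, B, sigma = 8, 5, 1.3
mu = rng.uniform(-0.1, 0.1, size=N)      # stands in for the means mu_n
S2 = (B - 1) / B * sigma**2              # common variance of each zeta_n

trials = 200_000
zeta = rng.normal(mu, np.sqrt(S2), size=(trials, N))
centered = zeta - zeta.mean(axis=1, keepdims=True)
sigma2_hat = B * (centered**2).sum(axis=1) / ((N - 1) * (B - 1))

bias_mc = sigma2_hat.mean() - sigma**2
bias_exact = B * (np.sum(mu**2) - np.sum(mu)**2 / N) / ((N - 1) * (B - 1))
print(f"Monte Carlo bias: {bias_mc:.6f}  exact: {bias_exact:.6f}")
```

In particular, when all \(\mu _n=0\) the estimator is exactly unbiased, consistent with the bound above.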

As for the variance of the estimator, note that \(\zeta _n\), \(n=1,\dots , N\), are independent Gaussian random variables. Denoting \(S:=\sqrt{\frac{B-1}{B}}\sigma \), the quantity \(\sum _{n=1}^N\zeta _n^2/S^2\) has a non-central chi-squared distribution with mean \(N+\sum _{n=1}^N\mu _n^2/S^2\) and variance \(2(N+2\sum _{n=1}^N\mu _n^2/S^2)\); likewise, \((\sum _{n=1}^N\zeta _n)^2/(NS^2)\) has a non-central chi-squared distribution with mean \(1+(\sum _{n=1}^N\mu _n)^2/(NS^2)\) and variance \(2(1+2(\sum _{n=1}^N\mu _n)^2/(NS^2))\). First, we compute the covariance

$$\begin{aligned}&\text {Cov}\left( \sum _{n=1}^N\zeta _n^2,\left( \sum _{n=1}^N\zeta _n\right) ^2\right) ={\mathbb {E}}\left[ \sum _{n=1}^N\zeta _n^2(\sum _{m=1}^N\zeta _m)^2\right] -{\mathbb {E}}\left[ \sum _{n=1}^N\zeta _n^2\right] {\mathbb {E}} \left[ \left( \sum _{n=1}^N\zeta _n\right) ^2\right] \\&\quad ={\mathbb {E}}\left[ \sum _{n=1}^N\zeta _n^2\left( \sum _{m=1}^N\zeta _m\right) ^2\right] -\left( NS^2+\sum _{n=1}^N \mu _n^2\right) \left( NS^2 +\left( \sum _{n=1}^N\mu _n\right) ^2 \right) . \end{aligned}$$

Focusing on the first term, we have

$$\begin{aligned}&{\mathbb {E}}\left[ \sum _{n=1}^N\zeta _n^2\left( \sum _{m=1}^N\zeta _m\right) ^2\right] = \sum _{n=1}^N{\mathbb {E}}\left[ \zeta _n^2\sum _{m=1}^N\zeta _m^2\right] +\sum _{n=1}^N{\mathbb {E}}\left[ \zeta _n^2\sum _{i\ne j}\zeta _i\zeta _j\right] \\&\quad =\sum _{n=1}^N{\mathbb {E}}[\zeta _n^4]+\sum _{n=1}^N\left( {\mathbb {E}}[\zeta _n^2]\sum _{m=1,m\ne n}^N{\mathbb {E}}[\zeta _m^2]\right) +2\sum _{n=1}^N\left( {\mathbb {E}}[\zeta _n^3]\sum _{m=1,m\ne n}^N{\mathbb {E}}[\zeta _m]\right) \\&\qquad + \sum _{n=1}^N\left( {\mathbb {E}}[\zeta _n^2]\sum _{i\ne n,j\ne n,i\ne j}{\mathbb {E}}[\zeta _i]{\mathbb {E}}[\zeta _j]\right) \\&\quad =\sum _{n=1}^N(\mu _n^4+6\mu _n^2S^2+3S^4) +\sum _{n=1}^N\left( (\mu _n^2+S^2)\sum _{m=1,m\ne n}^N(\mu _m^2+S^2)\right) \\&\qquad + 2\sum _{n=1}^N\left( (\mu _n^3+3\mu _nS^2)\sum _{m=1,m\ne n}^N\mu _m\right) +\sum _{n=1}^N\left( (\mu _n^2+S^2)\sum _{i\ne n,j\ne n,i\ne j}\mu _i\mu _j\right) \\&\quad =\sum _{n=1}^N\mu _n^2\left( \sum _{n=1}^N\mu _n\right) ^2+\left( 2(N-1)\sum _{n=1}^N\mu _n^2\right. \\&\left. \qquad +6\left( \sum _{n=1}^N\mu _n\right) ^2 +(N-2)\sum _{n\ne m}\mu _n\mu _m\right) S^2 + N(N+2)S^4. \end{aligned}$$

Hence, we have

$$\begin{aligned}&\text {Cov}\left( \sum _{n=1}^N\zeta _n^2,\left( \sum _{n=1}^N\zeta _n\right) ^2\right) \\&\quad =\left( (N-2)\sum _{n=1}^N\mu _n^2+(6-N)\left( \sum _{n=1}^N\mu _n\right) ^2+(N-2)\sum _{n\ne m}\mu _n\mu _m\right) S^2+2NS^4\\&\quad =4\left( \sum _{n=1}^N\mu _n\right) ^2S^2+2NS^4 . \end{aligned}$$
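This covariance identity can likewise be checked by simulation (an added sketch with illustrative parameters; \(S\) here is the common standard deviation of the \(\zeta _n\)):

```python
import numpy as np

# Monte Carlo check of
#   Cov(sum_n zeta_n^2, (sum_n zeta_n)^2) = 4 (sum_n mu_n)^2 S^2 + 2 N S^4.
# N, S, and the means mu_n are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
N, S = 6, 0.7
mu = rng.uniform(-1.0, 1.0, size=N)

trials = 400_000
zeta = rng.normal(mu, S, size=(trials, N))
sum_sq = (zeta**2).sum(axis=1)           # sum_n zeta_n^2
sq_sum = zeta.sum(axis=1)**2             # (sum_n zeta_n)^2
cov_mc = np.mean(sum_sq * sq_sum) - sum_sq.mean() * sq_sum.mean()
cov_exact = 4 * np.sum(mu)**2 * S**2 + 2 * N * S**4
print(f"Monte Carlo: {cov_mc:.4f}  exact: {cov_exact:.4f}")
```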

Now we note that

$$\begin{aligned}&\text {Var}\left[ \sum _{m=1}^N\left( \zeta _m-\frac{1}{N}\sum _{n=1}^N\zeta _n\right) ^2\right] \\&\quad =\text {Var}\left[ \sum _{m=1}^N\zeta ^2_m\right] +\frac{1}{N^2}\text {Var}\left[ (\sum _{n=1}^N\zeta _n)^2\right] \\&\qquad -\frac{2}{N}\text {Cov}\left( \sum _{m=1}^N\zeta _m^2,\left( \sum _{m=1}^N\zeta _m\right) ^2\right) \\&\quad =2\left( NS^4+2\sum _{n=1}^N\mu _n^2S^2\right) +2S^4\left( 1+2\left( \sum _{n=1}^N\mu _n\right) ^2/(NS^2)\right) \\&\qquad -\frac{8}{N}\left( \sum _{n=1}^N\mu _n\right) ^2S^2-4S^4. \end{aligned}$$

After simplification, we get

$$\begin{aligned}&\text {Var}\left[ \sum _{m=1}^N\left( \zeta _m-\frac{1}{N}\sum _{n=1}^N\zeta _n\right) ^2\right] \\&\quad =2(N-1)S^4+4\left( \sum _{n=1}^N\mu _n^2-\frac{\left( \sum _{n=1}^N\mu _n\right) ^2}{N}\right) S^2. \end{aligned}$$

Applying the Lipschitz assumption again, we obtain

$$\begin{aligned}&\text {Var}\left[ \sum _{m=1}^N\left( \zeta _m-\frac{1}{N}\sum _{n=1}^N\zeta _n\right) ^2\right] \le 2(N-1)S^4+4NDL^2R^2S^2. \end{aligned}$$

Therefore, setting \(\gamma :=4DL^2R^2\), we get

$$\begin{aligned} \text {Var}[{\widehat{\sigma }}^2]&\le \frac{2(N-1)(B-1)^2\sigma ^4+\gamma N B(B-1)\sigma ^2}{(N-1)^2(B-1)^2}\\&=\frac{2\sigma ^4}{N-1}+\frac{NB\sigma ^2\gamma }{(N-1)^2(B-1)}. \end{aligned}$$
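A final sketch (again with illustrative, hypothetical parameters, where the product \(DL^2R^2\) is folded into a single constant) checks that the sample variance of \({\widehat{\sigma }}^2\) indeed stays below this bound:

```python
import numpy as np

# Monte Carlo check that Var[sigma^2-hat] stays below the final bound.
# DL2R2 plays the role of D * L^2 * R^2; mu_n^2 <= D L^2 R^2 holds by
# construction. All parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(2)
N, B, sigma = 10, 4, 0.9
DL2R2 = 0.05
mu = rng.uniform(-np.sqrt(DL2R2), np.sqrt(DL2R2), size=N)
S2 = (B - 1) / B * sigma**2

trials = 300_000
zeta = rng.normal(mu, np.sqrt(S2), size=(trials, N))
centered = zeta - zeta.mean(axis=1, keepdims=True)
sigma2_hat = B * (centered**2).sum(axis=1) / ((N - 1) * (B - 1))

gamma = 4 * DL2R2
bound = 2 * sigma**4 / (N - 1) + N * B * sigma**2 * gamma / ((N - 1)**2 * (B - 1))
print(f"Var[sigma2_hat] = {sigma2_hat.var():.6f}  bound = {bound:.6f}")
```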


Cite this article

He, Y., Zhao, H. & Zhong, Y. How Much Can One Learn a Partial Differential Equation from Its Solution? Found Comput Math (2023). https://doi.org/10.1007/s10208-023-09620-z
