
Performance guarantees for a variational “multi-space” decoder

Published in: Advances in Computational Mathematics

Abstract

Model-order reduction methods tackle the following general approximation problem: find an “easily computable” but accurate approximation \(\hat {\boldsymbol {h}}\) of some target solution h. To achieve this goal, standard methodologies combine two main ingredients: (i) a set of partial observations of h and (ii) some “simple” prior model on the set of target solutions. The most common prior models encountered in the literature assume that the target solution h is “close” to some low-dimensional subspace. Recently, triggered by the work of Binev et al. (SIAM/ASA Journal on Uncertainty Quantification 5(1), 1–29, 2017), several contributions have shown that refined prior models (based on a set of embedded approximation subspaces) may lead to enhanced approximation performance. In this paper, we focus on a particular decoder exploiting such “multi-space” information and computing \(\hat {\boldsymbol {h}}\) as the solution of a constrained optimization problem. To date, no theoretical results have been derived to support the good empirical performance of this decoder. The goal of the present paper is to fill this gap. More specifically, we provide a mathematical characterization of the approximation performance achievable by this variational “multi-space” decoder and emphasize that, in some specific setups, it has provably better recovery guarantees than its standard “single-space” counterpart. We also discuss the similarities and differences between this decoder and the one proposed by Binev et al.


Notes

  1. We remind the reader that we assume m = n.

References

  1. Argaud, J., Bouriquet, B., Gong, H., Maday, Y., Mula, O.: Stabilization of (G)EIM in presence of measurement noise: application to nuclear reactor physics. In: Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2016, pp 133–145. Springer (2017)

  2. Babuska, I.: Error-bounds for finite element method. Numerische Mathematik 16, 322–333 (1970/71)


  3. Binev, P., Cohen, A., Dahmen, W., DeVore, R., Petrova, G., Wojtaszczyk, P.: Data assimilation in reduced modeling. SIAM/ASA Journal on Uncertainty Quantification 5(1), 1–29 (2017). https://doi.org/10.1137/15M1025384


  4. Chaturantabut, S., Sorensen, D.: Nonlinear model reduction via discrete empirical interpolation. SIAM J. Sci. Comput. 32(5), 2737–2764 (2010). https://doi.org/10.1137/090766498


  5. Everson, R., Sirovich, L.: Karhunen-Loève procedure for gappy data. J. Opt. Soc. Am. A 12(8), 1657–1664 (1995). https://doi.org/10.1364/josaa.12.001657


  6. Fick, L., Maday, Y., Patera, A. T., Taddei, T.: A reduced basis technique for long-time unsteady turbulent flows. arXiv e-prints (2017)

  7. Herzet, C., Diallo, M., Héas, P.: Beyond Petrov-Galerkin projection by using multi-space prior. In: European Conference on Numerical Mathematics and Advanced Applications (Enumath’17). (https://hal.inria.fr/hal-02173637v1), Voss, Norway (2017)

  8. Herzet, C., Diallo, M., Héas, P.: Beyond Petrov-Galerkin projection by using multi-space prior. In: Model Reduction of Parametrized Systems IV (MoRePaS’18). (https://hal.inria.fr/hal-01937876), Nantes, France (2018)

  9. Maday, Y., Mula, O., Patera, A.T., Yano, M.: The generalized empirical interpolation method: stability theory on Hilbert spaces with an application to the Stokes equation. Computer Methods in Applied Mechanics and Engineering 287, 310–334 (2015). https://doi.org/10.1016/j.cma.2015.01.018


  10. Maday, Y., Mula, O., Turinici, G.: Convergence analysis of the generalized empirical interpolation method. SIAM J. Numer. Anal. 54(3), 1713–1731 (2016)


  11. Quarteroni, A., Manzoni, A., Negri, F.: Reduced Basis Methods for Partial Differential Equations, vol. 92. Springer International Publishing, Berlin (2016). http://www.springer.com/us/book/9783319154305



Funding

The authors thank the “Agence nationale de la recherche” for its financial support through the Geronimo project (ANR-13-JS03-0002).

Author information


Corresponding author

Correspondence to C. Herzet.

Additional information

Communicated by: Anthony Nouy

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Model Reduction of Parametrized Systems

Guest Editors: Anthony Nouy, Peter Benner, Mario Ohlberger, Gianluigi Rozza, Karsten Urban and Karen Willcox

Appendix: Proof of (42)

In this appendix, we show that the cost function \(f({\boldsymbol {h}})\triangleq \sum _{j=1}^{n} \left ({y}_{j}- \left \langle {\boldsymbol w}_{j},{\boldsymbol {h}}\right \rangle \right )^{2}\) can be rewritten as in (42) when h ∈ Vn and σn > 0 (see Note 1). First, using the definition of yj, we have

$$ f({{\boldsymbol{h}}})= \sum\limits_{j=1}^{{n}} {\left( \left\langle{{{\boldsymbol{w}}}_{j},{\boldsymbol{h}}^{\star}}\right\rangle - \left\langle{{{\boldsymbol{w}}}_{j},{{\boldsymbol{h}}}}\right\rangle\right)}^{2}. $$
(73)

Moreover, using the particular bases introduced in Section 5.1.1, we obtain

$$ \begin{aligned} f({\boldsymbol{h}}) &= \sum\limits_{j=1}^{{n}} {\left( \left\langle{{{\boldsymbol{w}}}_{j}^{*},{\boldsymbol{h}}^{\star}}\right\rangle - \left\langle{{{\boldsymbol{w}}}_{j}^{*},{{\boldsymbol{h}}}}\right\rangle\right)}^{2} \\ &= \sum\limits_{j=1}^{{n}} {\left( \left\langle{{{\boldsymbol{w}}}_{j}^{*},{\boldsymbol{h}}^{\star}}\right\rangle - {\sigma}_{j} \left\langle{{\boldsymbol v}_{j}^{*},{{\boldsymbol{h}}}}\right\rangle\right)}^{2}, \end{aligned} $$
(74)

where the first equality follows from the fact that \(\{{{\boldsymbol {w}}}_{j}\}_{j=1}^{{n}}\) and \(\{{{\boldsymbol {w}}}_{j}^{*}\}_{j=1}^{{n}}\) differ only by an orthogonal transformation; the second is a consequence of (55) and our hypothesis h ∈ Vn.

Since \({{\hat {\boldsymbol {h}}}_{\text {SS}}}\) is the minimizer of f(h) over Vn (see (17)), we simply have

$$ \left\langle{{\boldsymbol v}_{j}^{*},{\hat{{\boldsymbol{h}}}_{\text{SS}}}}\right\rangle = \frac{\left\langle{{{\boldsymbol{w}}}_{j}^{*},{\boldsymbol{h}}^{\star}}\right\rangle}{{\sigma}_{j}}, $$
(75)

provided σn > 0. Hence, under this assumption, (2) can also be rewritten as in (42).
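The identity (75) can also be checked numerically in a finite-dimensional setting. The sketch below is illustrative only and makes assumptions not stated in the paper: the ambient space is \(\mathbb{R}^{N}\) with the Euclidean inner product, the observation functionals are rows of a random matrix, and the “particular bases” \(\{{\boldsymbol w}_{j}^{*}\}\), \(\{{\boldsymbol v}_{j}^{*}\}\) are built from the SVD of the cross-Gramian between orthonormal bases of the measurement space and of Vn, so that \(\langle {\boldsymbol w}_{j}^{*}, {\boldsymbol v}_{k}^{*}\rangle = \sigma_{j}\delta_{jk}\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 12, 4                      # ambient dimension; n = m measurements (Note 1)

W = rng.standard_normal((n, N))   # rows w_j: the observation functionals
V = rng.standard_normal((N, n))   # columns span the approximation subspace V_n

# Orthonormal bases of the measurement space span{w_j} and of V_n
Qw, _ = np.linalg.qr(W.T)         # N x n
Qv, _ = np.linalg.qr(V)           # N x n

# SVD of the cross-Gramian yields the particular bases:
# columns of Wstar are w_j*, columns of Vstar are v_j*,
# and <w_j*, v_k*> = sigma_j * delta_{jk}
U, sig, Vt = np.linalg.svd(Qw.T @ Qv)
Wstar = Qw @ U
Vstar = Qv @ Vt.T
assert np.allclose(Wstar.T @ Vstar, np.diag(sig))

# Synthetic target and noiseless observations y_j = <w_j, h*>
h_star = rng.standard_normal(N)
y = W @ h_star

# hat(h)_SS: minimizer of f(h) = sum_j (y_j - <w_j, h>)^2 over V_n
c, *_ = np.linalg.lstsq(W @ Qv, y, rcond=None)
h_ss = Qv @ c

# Identity (75): <v_j*, hat(h)_SS> = <w_j*, h*> / sigma_j
lhs = Vstar.T @ h_ss
rhs = (Wstar.T @ h_star) / sig
print("identity (75) holds:", np.allclose(lhs, rhs))
```

Since m = n and σn > 0 almost surely for random data, the least-squares problem is square and nonsingular, so the minimizer matches all n observations exactly, which is precisely why each summand in (74) vanishes and (75) follows.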


About this article


Cite this article

Herzet, C., Diallo, M. Performance guarantees for a variational “multi-space” decoder. Adv Comput Math 46, 10 (2020). https://doi.org/10.1007/s10444-020-09746-6

