Sparse harmonic transforms II: best s-term approximation guarantees for bounded orthonormal product bases in sublinear-time


Abstract

In this paper we develop a sublinear-time compressive sensing algorithm for approximating functions of many variables that are compressible in a given Bounded Orthonormal Product Basis (BOPB). The resulting algorithm is shown both to have an associated best s-term recovery guarantee in the given BOPB and to perform well numerically on sparse approximation problems involving functions contained in the span of fairly general sets of as many as \(\sim 10^{230}\) orthonormal basis functions. All code is made publicly available. As part of the proof of the main recovery guarantee, new variants of the well-known CoSaMP algorithm are proposed which can utilize any sufficiently accurate support identification procedure satisfying a Support Identification Property (SIP) in order to obtain strong sparse approximation guarantees. These new CoSaMP variants are then proven to have runtime and recovery error behavior largely determined by the runtime and error behavior of the chosen support identification method. The main theoretical results of the paper are then obtained by developing a sublinear-time support identification algorithm for general BOPB sets which is robust to arbitrary additive errors. Using this new support identification method to create a new CoSaMP variant results in a new robust sublinear-time compressive sensing algorithm for BOPB-compressible functions of many variables.
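To fix ideas, the following is a minimal dense-matrix sketch of the CoSaMP structure just described, with the usual proxy-and-threshold support step replaced by a pluggable support-identification routine. It is an illustration only: the function name cosamp_sip and the identify_support interface are our inventions, and the paper's actual algorithm works from function samples in sublinear time without ever forming the measurement matrix.

```python
import numpy as np

def cosamp_sip(y, Phi, identify_support, s, iters=20):
    # CoSaMP iteration with a generic support-identification step.
    # identify_support(r) should return indices covering the most
    # energetic coefficients of the residual r (the "SIP" role).
    N = Phi.shape[1]
    a = np.zeros(N, dtype=complex)                # current s-sparse iterate
    for _ in range(iters):
        r = y - Phi @ a                           # residual measurements
        Omega = np.asarray(identify_support(r), dtype=int)
        T = np.union1d(Omega, np.flatnonzero(a))  # merge with old support
        b = np.zeros(N, dtype=complex)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]  # restricted LS
        keep = np.argsort(np.abs(b))[-s:]         # prune to best s terms
        a = np.zeros(N, dtype=complex)
        a[keep] = b[keep]
    return a
```

In such a variant the recovery and runtime guarantees are inherited from the support-identification step, which is the content of the SIP-based analysis summarized above.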



Notes

  1. Big-\({\mathcal {O}}\) and little-o notation is defined as follows: Let \(B \in {\mathbb {R}}^+\) and \(1 \leqslant m \leqslant d\) for \(m, d\) positive integers. A function of \(d\) variables \(g: D \subseteq (0,\infty )^m \times (0,B]^{d-m} \mapsto {\mathbb {R}}^+\) is said to be \({\mathcal {O}} \left( h \right) \) with respect to another function \(h: D \mapsto (0, \infty )\) if \(\exists C \in (0, \infty ), {\mathbf {y}}_1 \in (0, \infty )^m,\) and \({\mathbf {y}}_2 \in (0,B]^{d-m}\) such that \(g({\mathbf {x}}) < C h({\mathbf {x}})\) for all \({\mathbf {x}} \in D\) with \((x_1, \dots , x_m) > {\mathbf {y}}_1\) (componentwise) and \((x_{m+1}, \dots , x_d) < {\mathbf {y}}_2\) (also componentwise). Note that h may not depend on some variables in a given discussion, in which case those variables are considered to be held constant in g (at arbitrary values) therein. For example, if we say that \(g: D \mapsto {\mathbb {R}}^+\) is \({\mathcal {O}} \left( h \right) \) with respect to its \(j^{\mathrm{th}}\)-variable \(\in (0, \infty )\) for \(h: (0, \infty ) \rightarrow (0,\infty )\), it means that \(\forall {\mathbf {x}} \in D~ \exists y_1,C \in (0, \infty )\) such that \(g(x_1, \dots , x_{j-1}, z, x_{j+1}, \dots , x_d) < C h(z)\) for all \(z > y_1\) with \((x_1, \dots , x_{j-1}, z, x_{j+1}, \dots , x_d) \in D\). Finally, we also note that when \(h: D \mapsto (0, \infty )\) involves logarithmic functions it is always assumed that the domain of g, D, is restricted so that those logarithmic functions map into \((0, \infty )\) (i.e., D is always restricted s.t. the stated range of h holds, which means that the domain D under consideration may occasionally depend implicitly on, e.g., the particular function being approximated through terms such as \(\left\| {\tilde{\mathbf {c}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{f}}, s}} \right\| _2 / \eta \) in Theorem 1). Similarly, \(g: D \subseteq (0,\infty )^m \times (0,B]^{d-m} \mapsto {\mathbb {R}}^+\) is said to be \({o} \left( h \right) \) with respect to \(h: D \mapsto (0, \infty )\) if \(\forall \epsilon > 0~\exists {\mathbf {y}}_1 \in (0, \infty )^m\) and \({\mathbf {y}}_2 \in (0,B]^{d-m}\) such that \(g({\mathbf {x}}) < \epsilon h({\mathbf {x}})\) for all \({\mathbf {x}} \in D\) with \((x_1, \dots , x_m) > {\mathbf {y}}_1\) and \((x_{m+1}, \dots , x_d) < {\mathbf {y}}_2\).
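As a concrete illustration of this mixed limit (our example, not the paper's): the function \(g(x_1, x_2) = 3x_1 + x_1 x_2\) on \(D = (0,\infty ) \times (0,1]\) is \({\mathcal {O}}(x_1)\) in the above sense, since taking \(C = 4\), \(y_1 = 1\), and \(y_2 = 1\) gives
\[ g(x_1, x_2) = x_1 (3 + x_2) < 4 x_1 \quad \text{whenever } x_1 > 1 \text{ and } x_2 < 1. \]
Here \(x_1\) plays the role of a variable sent to \(\infty \) while \(x_2\) is sent to \(0\).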

  2. The code for an implementation of [37] is available at https://sourceforge.net/projects/aafftannarborfa/. The code for an implementation of [5] is available at https://www.math.msu.edu/~markiwen/Code/FAST_block_sparse.zip.

  3. A function is exactly s-sparse in \({\mathcal {B}}\) if it is a linear combination of \(\leqslant s\) unknown elements of \({\mathcal {B}}\).

  4. Given that we will be recovering f based on point samples, we will require at least enough smoothness to guarantee that any particular point sample we might possibly utilize actually contains information about the given function’s basis coefficients \(\left\{ c_{{{\varvec{n}}}} \right\} _{{{\varvec{n}}}\in {{\mathbb {N}}}^D}\). Of course, the details regarding this smoothness requirement will vary with the choice of basis \({\mathcal {B}}'\).

  5. See “SHT II: Best s-Term Approximation Guarantees for Bounded Orthonormal Product Bases in Sublinear-Time” on Mark Iwen’s code page https://www.math.msu.edu/~markiwen/Code.html.

  6. The inner product with respect to measure/domain pairs is defined in (1.4); the undecorated inner product is the standard vector inner product.

  7. During the review process it was pointed out that the log factors in the lower bound for m quoted in Theorem 2 can be improved (see, e.g., [8, 10]). If one uses such refined RIP results the log factors in the sampling and runtime bounds of our subsequent theorems can also be improved as a result.

  8. In practice, it suffices to approximate the least-squares solution \({{\varvec{b}}}_{{\varOmega }}\) by an iterative approach such as Richardson’s iteration or conjugate gradient [6, 16], since computing the exact least-squares solution can be expensive when s is large. The argument of [39] shows that three iterations of either method suffice if the initial condition is set to \({{\varvec{a}}}^{k-1}\) and \({\varPhi }_{\mathrm{CE}}\) has an RIP constant \(\delta _{2s}<0.025\). Both methods have similar runtime performance.
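To illustrate the warm-started iteration referenced above, here is a minimal sketch (our code with hypothetical names, not the paper's implementation) of Richardson's iteration for the restricted least-squares problem:

```python
import numpy as np

def richardson_ls(PhiT, y, b0, iters=3, omega=1.0):
    # Richardson iteration for min_b ||PhiT @ b - y||_2, i.e. fixed-point
    # iteration on the normal equations, warm-started at b0 (e.g. the
    # previous CoSaMP iterate a^{k-1}).  When PhiT consists of columns of
    # a matrix with RIP constant delta_2s < 0.025, indexed by at most 2s
    # entries, the eigenvalues of PhiT^* PhiT lie in [0.975, 1.025]; with
    # omega = 1 the error then contracts by a factor of about 0.025 per
    # step, so three steps suffice, as in the argument of [39].
    b = b0.copy()
    for _ in range(iters):
        b = b + omega * (PhiT.conj().T @ (y - PhiT @ b))
    return b
```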

  9. Note that \({\mathcal {A}}\left( {\varPhi }_{\mathrm{SID}} {{\varvec{r}}}^k + {{\varvec{e}}}_{\mathrm{SID}} \right) = {\mathcal {A}}\left( {\varPhi }_{\mathrm{SID}} \left( {{\varvec{x}}}_s - {{\varvec{a}}}^k \right) + {{\varvec{e}}}_{\mathrm{SID}} \right) = {\mathcal {A}}\left( {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{f}},s}} - {{\varvec{a}}}^k \right) + {{\varvec{e}}}_{\mathrm{SID}} \right) = {\mathcal {A}}\left( {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}- {{\varvec{a}}}^k \right) + {\varvec{e'}}_{\mathrm{SID}} \right) \) where \({\varvec{e'}}_{\mathrm{SID}} := {{\varvec{e}}}_{\mathrm{SID}} - {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}- {\varvec{{\tilde{c}}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{f}},s}} \right) \).
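Indeed, substituting the definition of \({\varvec{e'}}_{\mathrm{SID}}\) verifies the final equality directly:
\[ {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}- {{\varvec{a}}}^k \right) + {\varvec{e'}}_{\mathrm{SID}} = {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}- {{\varvec{a}}}^k \right) + {{\varvec{e}}}_{\mathrm{SID}} - {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}- {\varvec{{\tilde{c}}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{f}},s}} \right) = {\varPhi }_{\mathrm{SID}} \left( {\varvec{{\tilde{c}}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{f}},s}} - {{\varvec{a}}}^k \right) + {{\varvec{e}}}_{\mathrm{SID}}. \]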

  10. See the input of Algorithm 2 for a description of the sampling points, and note that the \(2D-1\) blocks have been reindexed for ease of discussion, and that the index sets \({\mathcal {S}}_j\) must therefore correspond to either \(\{ j \}\) or \([j+1]\) accordingly. For a description of how to generate the component points \({{\varvec{w}}}^j_\ell ,{{\varvec{z}}}_k^j\) we refer the reader to Theorem 6.

  11. See (4.20) in Theorem 10 for a definition of \({\varGamma }\) with explicit constants, where we further point out that \(\alpha \) is fixed to be \(\sqrt{23}\) in Theorem 6. When looking at Theorem 10 one should keep in mind that the matrix \({\mathcal {E}}^h_{{\mathcal {S}}} \in {{{\mathbb {C}}}}^{m_1 \times m_2}\) therein is nothing other than a matricized version of \({{\varvec{e}}}_{\mathrm{SID}}^j\) with \({\mathcal {S}}= {\mathcal {S}}_j\) for any desired choice of \(j \in [2D - 1]\).

  12. To see why this holds, note that \(\left\| \left( {\tilde{\mathbf{r}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}} \right) _{{\mathcal {S}};\mathbf{n}} \right\| ^2_2 = \sum _{\mathbf{k} ~\text {s.t.}~\mathbf{k}_{{\mathcal {S}}} = \mathbf{n}_{{\mathcal {S}}}} \left| \left( { {\tilde{r}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}} \right) _\mathbf{k} \right| ^2 = \sum _{\mathbf{k} \in {\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'} ~\text {s.t.}~\mathbf{k}_{{\mathcal {S}}} = \mathbf{n}_{{\mathcal {S}}}} \left| {{\tilde{r}}}_\mathbf{k} \right| ^2,\) so that \(\mathbf{n}_{{\mathcal {S}}} \in {\varOmega }_{{\mathcal {S}}}^{\alpha , s'} \implies \left\| \left( {\tilde{\mathbf{r}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}} \right) _{{\mathcal {S}};\mathbf{n}} \right\| _2 \geqslant \left\| {\tilde{\mathbf{r}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}} \right\| _2 ~\big /~ (\alpha \sqrt{s'}) > 0\) \(\implies \exists \mathbf{k} \in {\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'} \) with \(\mathbf{k}_{{\mathcal {S}}} = \mathbf{n}_{{\mathcal {S}}}\). As a result, \(\mathbf{n}_{{\mathcal {S}}} \in {\varOmega }_{{\mathcal {S}}}^{\alpha , s'} \implies \mathbf{n}_{{\mathcal {S}}} \in {\varOmega }_{s', {\mathcal {S}}}^{\mathrm{opt}} := \left\{ \mathbf{k}_{{\mathcal {S}}}~\big |~ \mathbf{k} \in {\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'} \right\} \). Intuitively, \({\varOmega }_{{\mathcal {S}}}^{\alpha , s'}\) contains the prefixes of all the index vectors in \({\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}\) whose associated entries in \({\tilde{\mathbf{r}}}\) are at least \(1/(\alpha \sqrt{s'})\) times the size of \(\left\| {\tilde{\mathbf{r}}}_{{\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}} \right\| _2\) in magnitude (i.e., it contains the prefix information we need in order to eventually find the most significant elements of \({\varOmega }^{\mathrm{opt}}_{{\tilde{h}},s'}\)).
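The role of \({\varOmega }_{{\mathcal {S}}}^{\alpha , s'}\) can be made concrete with a small, purely illustrative computation over an explicitly stored sparse residual; the helper name energetic_prefixes and the dictionary representation are our own and not the paper's sublinear procedure:

```python
import numpy as np

def energetic_prefixes(r, S, alpha, s_prime):
    # r maps index tuples k to residual entries; S is a tuple of
    # coordinate positions.  A prefix k_S is kept when the combined
    # energy of all entries sharing it reaches ||r||_2 / (alpha sqrt(s')),
    # mirroring the thresholding that defines Omega_S^{alpha, s'}.
    total = np.sqrt(sum(abs(v) ** 2 for v in r.values()))
    energy = {}
    for k, v in r.items():
        prefix = tuple(k[j] for j in S)
        energy[prefix] = energy.get(prefix, 0.0) + abs(v) ** 2
    threshold = total / (alpha * np.sqrt(s_prime))
    return {p for p, e in energy.items() if np.sqrt(e) >= threshold}

# Example: two energetic indices share the prefix (0, 3); the tiny third
# entry's prefix (4, 1) falls below the threshold and is discarded.
r = {(0, 3, 1): 2.0, (0, 3, 2): -1.5, (4, 1, 0): 1e-6}
print(energetic_prefixes(r, S=(0, 1), alpha=np.sqrt(23), s_prime=2))
```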

  13. Recall that \({\varvec{{\tilde{c}}}}\in {{{\mathbb {C}}}}^{{\mathcal {I}}_{N,d}}\) is the coefficient vector of \({\tilde{f}}\) as per (3.1), and that \({{\varvec{a}}}^k \in {{\mathbb {C}}}^{{\mathcal {I}}_{N,d}}\) is CoSaMP’s s-sparse approximation to \({{\varvec{x}}}= {\varvec{{\tilde{c}}}}\in {{{\mathbb {C}}}}^{{\mathcal {I}}_{N,d}}\) in its \(k^{\mathrm{th}}\)-iteration.

  14. Here the “EI” in the superscript of \({\mathcal {S}}^{\mathrm{EI}}_j\) stands for “Entry Identification” in the terminology of [13]. In fact many other valid choices for these sets also exist; see Algorithm 3 for the general criteria they must satisfy.

  15. Recall that we currently use \({\mathcal {S}}^{\mathrm{EI}}_j = \{ j \}\) for all \(j \in [D]\). Here we let, e.g., \({\mathcal {T}}_1 = N^{{\mathcal {S}}^{\mathrm{EI}}_0}\) when \(j = 0\) to start the induction. To see how this induction argument works, consider the following simplified example. Suppose that for each \(j \in [D]\) our entry identification method successfully outputs the set \({\varOmega }_{{\mathcal {S}}_j^{\mathrm{EI}}}^{\alpha , 2s}\) containing the \(j^{\mathrm{th}}\) entries of all index vectors corresponding to energetic coefficients. Now suppose that in a prior step we have managed to identify \({\varOmega }_{[j]}^{\alpha , 2s}\) containing all the index vector prefixes of length j corresponding to energetic coefficients. By taking \({\mathcal {S}}_1 = [j] = \{0, \dots , j-1\}\), \({\mathcal {S}}_2 = \{j\}\), \({\mathcal {T}}_1 = {\varOmega }_{[j]}^{\alpha , 2s}\), and \({\mathcal {T}}_2 = {\varOmega }_{{\mathcal {S}}_{j}^{\mathrm{EI}}}^{\alpha , 2s}\) with \(s' = 2s\) in Lemma 9, we come to the conclusion that \({\varOmega }_{[j+1]}^{\alpha , 2s} \subseteq {\mathcal {T}}_{1,2} := \left\{ {\mathbf {n}} + {\mathbf {m}}~ \big |~ {\mathbf {n}} \in {\varOmega }_{[j]}^{\alpha , 2s},~ {\mathbf {m}} \in {\varOmega }_{{\mathcal {S}}_{j}^{\mathrm{EI}}}^{\alpha , 2s} \right\} \cap {\mathcal {I}}_{N, d}\), which implies that all prefixes of length \(j+1\) corresponding to energetic coefficients will now belong to \({\mathcal {T}}_{1,2}\). Thus, by induction starting with \({\mathcal {T}}_1 = {\varOmega }_{{\mathcal {S}}_0^{\mathrm{EI}}}^{\alpha , 2s}\) and \({\mathcal {T}}_2 = {\varOmega }_{{\mathcal {S}}_1^{\mathrm{EI}}}^{\alpha , 2s}\), and continuing in the \((j+1)^{\mathrm{th}}\) step with \({\mathcal {T}}_1 = {\mathcal {T}}_{1,2}\) from the \(j^{\mathrm{th}}\) step and \({\mathcal {T}}_2= {\varOmega }_{{\mathcal {S}}_{j+1}^{\mathrm{EI}}}^{\alpha , 2s}\), we get \({\varOmega }_{[D]}^{\alpha , 2s} \subset \left\{ {\mathbf {n}} + {\mathbf {m}}~ \big |~ {\mathbf {n}} \in {\varOmega }_{[D-1]}^{\alpha , 2s}, ~{\mathbf {m}} \in {\varOmega }_{{\mathcal {S}}_{D-1}^{\mathrm{EI}}}^{\alpha , 2s} \right\} \cap {\mathcal {I}}_{N, d} \) at the last, \((D-1)^{\mathrm{st}}\), step when \(j = D - 1\).
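The pairing step \({\mathcal {T}}_{1,2} = \left\{ {\mathbf {n}} + {\mathbf {m}} \right\} \cap {\mathcal {I}}_{N,d}\) in this induction is simple enough to sketch in code. The following toy helpers are our illustration only: they represent prefixes as growing tuples rather than zero-padded index vectors, and they omit the re-identification of energetic prefixes that keeps the sets small between steps.

```python
def extend_prefixes(T1, T2, N):
    # One induction step: pair each energetic prefix of length j (in T1)
    # with each identified value for entry j (in T2), keeping results
    # whose new coordinate lies in [N] = {0, ..., N-1}.
    return {p + (m,) for p in T1 for m in T2 if 0 <= m < N}

def candidate_supports(entry_sets, N):
    # entry_sets[j] plays the role of Omega_{S_j^EI}^{alpha, 2s}; the loop
    # realizes T_1 <- T_{1,2} across all D coordinates.
    T1 = {(m,) for m in entry_sets[0]}
    for values in entry_sets[1:]:
        T1 = extend_prefixes(T1, values, N)
    return T1

# Example with D = 3 and N = 8: coordinates 0 and 2 each have two
# identified values, so four candidate index vectors are produced.
print(candidate_supports([{0, 4}, {3}, {1, 2}], N=8))
```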

  16. The constants here have been rounded up to the nearest integer from those implied by Theorem 10 and Lemma 14 after substituting \(s'=2s\).

  17. It is important to emphasize here that the grid on which we must evaluate each function f is a fixed grid which does not change depending on h.

  18. Herein we assume that h has been evaluated in advance on our non-adaptive grid so that its values at each grid point can be retrieved in \({\mathcal {O}}(1)\)-time. In addition, note that setting \(d = D\) above still leads to sampling and runtime complexities for each sieve function that scale only polynomially in D. This is due to \({\tilde{d}}\) being independent of d.

  19. In the bounds below, t may be upper bounded by D.

  20. Note that the nonzero columns of \({\varPhi }_{{\mathcal {S}}^c;{{\varvec{n}}}}\) will be indexed by different \({{\varvec{q}}}\) in \({\varPhi }_{{\mathcal {S}}^c;{{\varvec{0}}}}\). However, this reindexing ultimately just represents a permutation of the nonzero columns of \({\varPhi }_{{\mathcal {S}}^c;{{\varvec{n}}}}\) as a submatrix of \({\varPhi }_{{\mathcal {S}}^c;{{\varvec{0}}}}\), and permuting the columns of a matrix does not change its restricted isometry constants.
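To spell out the last claim: if \(P\) is any permutation matrix then \(P {\mathbf {x}}\) is s-sparse exactly when \({\mathbf {x}}\) is, and \(\left\| P {\mathbf {x}} \right\| _2 = \left\| {\mathbf {x}} \right\| _2\), so for every s-sparse \({\mathbf {x}}\)
\[ (1 - \delta _s) \left\| {\mathbf {x}} \right\| _2^2 = (1 - \delta _s) \left\| P {\mathbf {x}} \right\| _2^2 \leqslant \left\| {\varPhi } P {\mathbf {x}} \right\| _2^2 \leqslant (1 + \delta _s) \left\| P {\mathbf {x}} \right\| _2^2 = (1 + \delta _s) \left\| {\mathbf {x}} \right\| _2^2, \]
and hence \({\varPhi } P\) inherits the restricted isometry constants of \({\varPhi }\) (and vice versa, using \(P^{-1}\)).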

  21. See “SHT II: Best s-Term Approximation Guarantees for Bounded Orthonormal Product Bases in Sublinear-Time” on Mark Iwen’s code page https://www.math.msu.edu/~markiwen/Code.html.

References

  1. Adcock, B.: Infinite-dimensional \(\ell ^{1}\) minimization and function approximation from pointwise data. Constr. Approx. 45(3), 345–390 (2017)

  2. Adcock, B., Brugiapaglia, S., Webster, C.G.: Compressed sensing approaches for polynomial approximation of high-dimensional functions. In: Compressed Sensing and Its Applications, pp. 93–124. Springer International Publishing (2017)

  3. Bailey, J., Iwen, M.A., Spencer, C.V.: On the design of deterministic matrices for fast recovery of Fourier compressible functions. SIAM J. Matrix Anal. Appl. 33(1), 263–289 (2012)

  4. Bittens, S., Plonka, G.: Sparse fast DCT for vectors with one-block support. Numer. Algorithms 82, 663–697 (2018)

  5. Bittens, S., Zhang, R., Iwen, M.A.: A deterministic sparse FFT for functions with structured Fourier sparsity. Adv. Comput. Math. 45, 519–561 (2019)

  6. Björck, A.: Numerical Methods for Least Squares Problems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (1996)

  7. Bouchot, J.-L., Rauhut, H., Schwab, C.: Multi-level compressed sensing Petrov-Galerkin discretization of high-dimensional parametric PDEs. arXiv:1701.01671 (2017)

  8. Brugiapaglia, S., Dirksen, S., Jung, H.C., Rauhut, H.: Sparse recovery in bounded Riesz systems with applications to numerical methods for PDEs. arXiv:2005.06994 (2020)

  9. Bungartz, H.-J., Griebel, M.: Sparse grids. Acta Numer. 13, 147–269 (2004)

  10. Chkifa, A., Dexter, N., Tran, H., Webster, C.: Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Math. Comput. 87(311), 1415–1450 (2018)

  11. Choi, B., Christlieb, A., Wang, Y.: Multi-dimensional sublinear sparse Fourier algorithm. arXiv:1606.07407 (2016)

  12. Choi, B., Christlieb, A., Wang, Y.: Multiscale high-dimensional sparse Fourier algorithms for noisy data. arXiv:1907.03692 (2019)

  13. Choi, B., Iwen, M., Krahmer, F.: Sparse harmonic transforms: a new class of sublinear-time algorithms for learning functions of many variables. Found. Comput. Math. (2020). https://doi.org/10.1007/s10208-020-09462-z

  14. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best \(k\)-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009)

  15. Dũng, D., Temlyakov, V.N., Ullrich, T.: Hyperbolic Cross Approximation. Advanced Courses in Mathematics - CRM Barcelona. Birkhäuser, Cham (2018)

  16. Dahlquist, G., Björck, A.: Numerical Methods in Scientific Computing, vol. 1. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2008)

  17. DeVore, R., Petrova, G., Wojtaszczyk, P.: Approximation of functions of few variables in high dimensions. Constr. Approx. 33(1), 125–143 (2011)

  18. Duarte, M.F., Baraniuk, R.G.: Kronecker compressive sensing. IEEE Trans. Image Process. 21(2), 494–504 (2012)

  19. Efron, B., Stein, C.: The jackknife estimate of variance. Ann. Stat. 9(3), 586–596 (1981)

  20. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Springer, New York (2013)

  21. Gilbert, A., Gu, A., Re, C., Rudra, A., Wootters, M.: Sparse recovery for orthogonal polynomial transforms. arXiv:1907.08362 (2019)

  22. Gilbert, A., Iwen, M., Strauss, M.: Empirical evaluation of a sub-linear time sparse DFT algorithm. Commun. Math. Sci. 5(4), 981–998 (2007)

  23. Gilbert, A.C., Indyk, P., Iwen, M.A., Schmidt, L.: Recent developments in the sparse Fourier transform: a compressed Fourier transform for big data. IEEE Signal Process. Mag. 31(5), 91–100 (2014)

  24. Gilbert, A.C., Muthukrishnan, S., Strauss, M.: Improved time bounds for near-optimal sparse Fourier representations. In: Proceedings of SPIE, vol. 5914, p. 59141A (2005)

  25. Griebel, M., Kuo, F.Y., Sloan, I.H.: The smoothing effect of the ANOVA decomposition. J. Complex. 26(5), 523–551 (2010)

  26. Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Simple and practical algorithm for sparse Fourier transform. In: Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1183–1194. Society for Industrial and Applied Mathematics (SIAM) (2012)

  27. Holtz, M.: Sparse Grid Quadrature in High Dimensions With Applications in Finance and Insurance. Lecture Notes in Computational Science and Engineering, vol. 77. Springer, Berlin (2011)

  28. Hu, X., Iwen, M., Kim, H.: Rapidly computing sparse Legendre expansions via sparse Fourier transforms. Numer. Algorithms 74(4), 1029–1059 (2017)

  29. Iwen, M.A.: A deterministic sub-linear time sparse Fourier algorithm via non-adaptive compressed sensing methods. In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 20–29. Society for Industrial and Applied Mathematics (SIAM) (2008)

  30. Iwen, M.A.: Combinatorial sublinear-time Fourier algorithms. Found. Comput. Math. 10(3), 303–338 (2010)

  31. Iwen, M.A.: Improved approximation guarantees for sublinear-time Fourier algorithms. Appl. Comput. Harmonic Anal. 34(1), 57–82 (2013)

  32. Kämmerer, L., Potts, D., Volkmer, T.: High-dimensional sparse FFT based on sampling along multiple rank-1 lattices. arXiv:1711.05152 (2017)

  33. Kapralov, M.: Sparse Fourier transform in any constant dimension with nearly-optimal sample complexity in sublinear time. In: Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, pp. 264–277. ACM Press (2016)

  34. Kapralov, M., Velingker, A., Zandieh, A.: Dimension-independent sparse Fourier transform. arXiv:1902.10633 (2019)

  35. Kuo, F.Y., Sloan, I.H., Wasilkowski, G.W., Woźniakowski, H.: On decomposition of multivariate functions. Math. Comput. 79(270), 953–966 (2010)

  36. Mansour, Y.: Randomized interpolation and approximation of sparse polynomials. In: Proceedings of the 19th International Colloquium on Automata, Languages and Programming, ICALP ’92, pp. 261–272, London, UK. Springer (1992)

  37. Merhi, S., Zhang, R., Iwen, M.A., Christlieb, A.: A new class of fully discrete sparse Fourier transforms: faster stable implementations with guarantees. J. Fourier Anal. Appl. 25(3), 751–784 (2019)

  38. Morotti, L.: Explicit universal sampling sets in finite vector spaces. Appl. Comput. Harmonic Anal. 43(2), 354–369 (2017)

  39. Needell, D., Tropp, J.A.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmonic Anal. 26(3), 301–321 (2009)

  40. Novak, E., Woźniakowski, H.: Tractability of Multivariate Problems. Volume I: Linear Information. EMS Tracts in Mathematics, vol. 6. European Mathematical Society (2008)

  41. Potts, D., Schmischke, M.: Approximation of high-dimensional periodic functions with Fourier-based methods. arXiv:1907.11412 (2019)

  42. Potts, D., Schmischke, M.: Learning multivariate functions with low-dimensional structures using polynomial bases. arXiv:1912.03195 (2019)

  43. Potts, D., Volkmer, T.: Sparse high-dimensional FFT based on rank-1 lattice sampling. Appl. Comput. Harmonic Anal. 41(3), 713–748 (2016)

  44. Potts, D., Volkmer, T.: Multivariate sparse FFT based on rank-1 Chebyshev lattice sampling. In: 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 504–508. IEEE (2017)

  45. Rauhut, H.: Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmonic Anal. 22(1), 16–42 (2007)

  46. Rauhut, H., Ward, R.: Sparse Legendre expansions via \(\ell _1\)-minimization. J. Approx. Theory 164(5), 517–533 (2012)

  47. Schwab, C., Todor, R.A.: Karhunen–Loève approximation of random fields by generalized fast multipole methods. J. Comput. Phys. 217(1), 100–122 (2006)

  48. Segal, B., Iwen, M.A.: Improved sparse Fourier approximation results: faster implementations and stronger guarantees. Numer. Algorithms 63(2), 239–263 (2013)

  49. Shen, J., Wang, L.-L.: Sparse spectral approximations of high-dimensional problems based on hyperbolic cross. SIAM J. Numer. Anal. 48(3), 1087–1109 (2010)


Acknowledgements

Mark Iwen was supported in part by NSF DMS-1912706, and would like to dedicate this paper to his ever-bright, hard-working, and spirited wife Tsveta, and to the prosperity of their newborn daughter Evgenia. Evgenia – I am anxious to know you are healthy, eager to see you are happy, and already sad at the distant prospect of your moving out. May you be more like your mother than like me for your own sake!

Author information

Correspondence to Bosu Choi.


Appendix: Flowchart of main theorems and lemmas

See Fig. 17.

Fig. 17: Proof structure of Theorem 1



Cite this article

Choi, B., Iwen, M. & Volkmer, T. Sparse harmonic transforms II: best s-term approximation guarantees for bounded orthonormal product bases in sublinear-time. Numer. Math. 148, 293–362 (2021). https://doi.org/10.1007/s00211-021-01200-z
