
Analysis of Decimation on Finite Frames with Sigma-Delta Quantization

Published in: Constructive Approximation

Abstract

In analog-to-digital conversion, signal decimation has been proven to greatly improve the efficiency of data storage while maintaining high accuracy. When one couples signal decimation with the \(\Sigma \Delta \) quantization scheme, the reconstruction error decays exponentially with respect to the bit-rate. In this study, similar results are proved for finite unitarily generated frames. We introduce a process called alternative decimation on finite frames that is compatible with first- and second-order \(\Sigma \Delta \) quantization. In both cases, alternative decimation results in exponential error decay with respect to the bit usage.


References

  1. Aldroubi, A., Davis, J., Krishtal, I.: Exact reconstruction of spatially undersampled signals in evolutionary systems. arXiv preprint arXiv:1312.3203 (2013)

  2. Aldroubi, A., Davis, J., Krishtal, I.: Exact reconstruction of signals in evolutionary systems via spatiotemporal trade-off. J. Fourier Anal. Appl. 21(1), 11–31 (2015)


  3. Benedetto, J.J., Powell, A.M., Yilmaz, O.: Sigma-delta quantization and finite frames. IEEE Trans. Inf. Theory 52(5), 1990–2005 (2006)


  4. Blum, J., Lammers, M., Powell, A.M., Yılmaz, Ö.: Sobolev duals in frame theory and sigma-delta quantization. J. Fourier Anal. Appl. 16(3), 365–381 (2010)


  5. Candy, J.: Decimation for sigma delta modulation. IEEE Trans. Commun. 34, 72–76 (1986)


  6. Chou, E., Güntürk, C.S.: Distributed noise-shaping quantization: II. Classical frames. In: Excursions in Harmonic Analysis: The February Fourier Talks at the Norbert Wiener Center, vol. 5, pp. 179–198 (2017)

  7. Chou, E., Güntürk, C.S., Krahmer, F., Saab, R., Yılmaz, Ö.: Noise-Shaping Quantization Methods for Frame-Based and Compressive Sampling Systems, pp. 157–184. Springer, Berlin (2015)


  8. Chou, E., Güntürk, C.S.: Distributed noise-shaping quantization: I. Beta duals of finite frames and near-optimal quantization of random measurements. Constr. Approx. 44(1), 1–22 (2016)


  9. Chou, W., Wong, P.W., Gray, R.M.: Multistage sigma-delta modulation. IEEE Trans. Inf. Theory 35(4), 784–796 (1989)


  10. Daubechies, I., DeVore, R.: Approximating a bandlimited function using very coarsely quantized data: a family of stable sigma-delta modulators of arbitrary order. Ann. Math. 158(2), 679–710 (2003)


  11. Daubechies, I., DeVore, R.A., Güntürk, C.S., Vaishampayan, V.A.: A/D conversion with imperfect quantizers. IEEE Trans. Inf. Theory 52(3), 874–885 (2006)


  12. Daubechies, I., Saab, R.: A deterministic analysis of decimation for sigma-delta quantization of bandlimited functions. IEEE Signal Process. Lett. 22(11), 2093–2096 (2015)


  13. Deift, P., Krahmer, F., Güntürk, C.S.: An optimal family of exponentially accurate one-bit sigma-delta quantization schemes. Commun. Pure Appl. Math. 64, 883–919 (2011)


  14. Eldar, Y.C., Bolcskei, H.: Geometrically uniform frames. IEEE Trans. Inf. Theory 49(4), 993–1006 (2003)


  15. Ferguson, P.F., Ganesan, A., Adams, R.W.: One bit higher order sigma-delta A/D converters. In: IEEE International Symposium on Circuits and Systems, pp. 890–893 (1990)

  16. Forney, G.D.: Geometrically uniform codes. IEEE Trans. Inf. Theory 37(5), 1241–1260 (1991)


  17. Goyal, V.K., Kovačević, J., Kelner, J.A.: Quantized frame expansions with erasures. Appl. Comput. Harmon. Anal. 10(3), 203–233 (2001)


  18. Goyal, V.K., Kovacevic, J., Vetterli, M.: Quantized frame expansions as source-channel codes for erasure channels. In: Proceedings DCC’99 Data Compression Conference (Cat. No. PR00096), pp. 326–335 (1999)

  19. Güntürk, C.S.: One-bit sigma-delta quantization with exponential accuracy. Commun. Pure Appl. Math. 56(11), 1608–1630 (2003)


  20. Inose, H., Yasuda, Y.: A unity bit coding method by negative feedback. Proc. IEEE 51, 1524–1535 (1963)


  21. Lu, Y.M., Vetterli, M.: Distributed spatio-temporal sampling of diffusion fields from sparse instantaneous sources. In: Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2009 3rd IEEE International Workshop on, pp. 205–208 (2009)

  22. Lu, Y.M., Vetterli, M.: Spatial super-resolution of a diffusion field by temporal oversampling in sensor networks. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, no. LCAV-CONF-2009-009, pp. 2249–2252 (2009)

  23. Tewksbury, S., Hallock, R.W.: Oversampled, linear predictive and noise-shaping coders of order n > 1. IEEE Trans. Circuits Syst. 25(7), 436–447 (1978)



Acknowledgements

The author gratefully acknowledges the support of ARO Grant W911NF-17-1-0014, and thanks John Benedetto for his thoughtful advice and insights. The author also appreciates the constructive analysis and suggestions of the referees.

Author information

Corresponding author

Correspondence to Kung-Ching Lin.

Additional information

Communicated by Ronald A. DeVore.



Appendices

Appendix A: Limitation of Alternative Decimation: Third-Order Decimation

The non-commutativity of \({\bar{\Delta }}_\rho \) and \(\Delta ^{-1}\) results in incomplete difference scaling when \(D_\rho S_\rho ^r\) is applied to \(\Delta ^r\), creating substantial error terms. This phenomenon already occurs for \(r=3\).

Proposition A.1

Given \(m,\rho \in {{\mathbb {N}}}\) with \(\rho \mid m\), the third-order decimation satisfies \(D_\rho S_\rho ^3\Delta ^3=\frac{1}{\rho ^3}(\Delta ^{(\eta )})^3 D_\rho +O(\rho ^{-2})\). In particular, \(D_\rho S_\rho ^3\) only yields quadratic error decay with respect to the oversampling ratio \(\rho \).

First, by noting that \(\Delta ^{-1}{\bar{\Delta }}_\rho \Delta ={\mathcal {E}}\) as in Lemma 7.4, one has

$$\begin{aligned} \begin{aligned}&D_\rho S_\rho ^3\Delta ^3\\&\quad =\frac{1}{\rho ^3}D_\rho {\bar{\Delta }}_\rho \Delta ^{-1}{\bar{\Delta }}_\rho \Delta ^{-1}{\bar{\Delta }}_\rho \Delta ^2\\&\quad =\frac{1}{\rho ^3}D_\rho {\bar{\Delta }}_\rho (\Delta ^{-1}{\bar{\Delta }}_\rho \Delta )\Delta ^{-2}{\bar{\Delta }}_\rho \Delta ^2\\&\quad =\frac{1}{\rho ^3}D_\rho {\bar{\Delta }}_\rho ({\bar{\Delta }}_\rho +{\mathcal {E}})\Delta ^{-1}({\bar{\Delta }}_\rho +{\mathcal {E}})\Delta \\&\quad =\frac{1}{\rho ^3}D_\rho {\bar{\Delta }}_\rho ({\bar{\Delta }}_\rho +{\mathcal {E}})({\bar{\Delta }}_\rho +{\mathcal {E}}+\Delta ^{-1}{\mathcal {E}}\Delta )\\&\quad =\frac{1}{\rho ^3}D_\rho \bigg ({\bar{\Delta }}_\rho ^3+{\bar{\Delta }}_\rho ^2{\mathcal {E}}+{\bar{\Delta }}_\rho ^2(\Delta ^{-1}{\mathcal {E}}\Delta )+{\bar{\Delta }}_\rho {\mathcal {E}}{\bar{\Delta }}_\rho +{\bar{\Delta }}_\rho {\mathcal {E}}^2+{\bar{\Delta }}_\rho {\mathcal {E}}(\Delta ^{-1}{\mathcal {E}}\Delta )\bigg ). \end{aligned} \end{aligned}$$
(15)

We shall calculate all terms one by one.

Lemma A.2

We have the following equalities:

  (1)
    $$\begin{aligned} (D_\rho {\bar{\Delta }}_\rho ^2{\mathcal {E}})_{l,s}=\delta (s-(m-\rho ))\big (\delta (l-1)-\delta (l-2)\big ), \end{aligned}$$
  (2)
    $$\begin{aligned} (D_\rho {\bar{\Delta }}_\rho ^2(\Delta ^{-1}{\mathcal {E}}\Delta ))_{l,s}=\begin{cases} -\rho &\text {if } (l,s)=(1,m-\rho -1),\\ \rho &\text {if } (l,s)=(1,m-\rho ),\\ 0 &\text {otherwise}, \end{cases} \end{aligned}$$
  (3)
    $$\begin{aligned} (D_\rho {\bar{\Delta }}_\rho {\mathcal {E}}{\bar{\Delta }}_\rho )_{l,s}=\delta (l-1)\big (\delta (s-(m-\rho ))-\delta (s-(m-2\rho ))\big ), \end{aligned}$$
  (4)
    $$\begin{aligned} (D_\rho {\bar{\Delta }}_\rho {\mathcal {E}}^2)_{l,s}=\delta (l-1)\delta (s-(m-\rho )), \end{aligned}$$
  (5)
    $$\begin{aligned} (D_\rho {\bar{\Delta }}_\rho {\mathcal {E}}(\Delta ^{-1}{\mathcal {E}}\Delta ))_{l,s}=(m-\rho )\delta (l-1)\big (\delta (s-(m-\rho ))-\delta (s-(m-\rho -1))\big ), \end{aligned}$$

where given \(n\in {{\mathbb {N}}}\), \([n]:=\{1,\dots , n\}\). In particular, \(D_\rho \big ({\bar{\Delta }}_\rho ^2(\Delta ^{-1}{\mathcal {E}}\Delta )+{\bar{\Delta }}_\rho {\mathcal {E}}(\Delta ^{-1}{\mathcal {E}}\Delta )\big )=O(m)\), and \(D_\rho ({\bar{\Delta }}_\rho ^2{\mathcal {E}}+{\bar{\Delta }}_\rho {\mathcal {E}}{\bar{\Delta }}_\rho +{\bar{\Delta }}_\rho {\mathcal {E}}^2)=O(1)\).

Proof

We first compute each term without the effect of \(D_\rho \), since \(D_\rho \) is simply the sub-sampling matrix retaining only the \((t\rho )\)th rows for \(t\in [\eta ]\).

  (1), (3)

    First, note that \(({\bar{\Delta }}_\rho {\mathcal {E}})_{l,s}=\delta (l-\rho )\delta (s+\rho )\), so

    $$\begin{aligned} ({\bar{\Delta }}_\rho ^2{\mathcal {E}})_{l,s}=\delta (s+\rho )({\bar{\Delta }}_\rho )_{l,\rho }=\delta (s-(m-\rho ))(\delta (l-\rho )-\delta (l-2\rho )). \end{aligned}$$

    Similarly,

    $$\begin{aligned}&({\bar{\Delta }}_\rho {\mathcal {E}}{\bar{\Delta }}_\rho )_{l,s}\\&\quad =\delta (l-\rho )({\bar{\Delta }}_\rho )_{m-\rho ,s}=\delta (l-\rho )(\delta (s-(m-\rho ))-\delta (s-(m-2\rho ))). \end{aligned}$$
  (5)

    Now, to compute \(\Delta ^{-1}{\mathcal {E}}\Delta \), we see that, for \(s\ne m\),

    $$\begin{aligned} (\Delta ^{-1}{\mathcal {E}}\Delta )_{l,s}=\sum _{j=1}^l({\mathcal {E}}_{j,s}-{\mathcal {E}}_{j,s+1})=l(\delta (m-\rho -s)-\delta (m-\rho -(s+1))), \end{aligned}$$

    and \((\Delta ^{-1}{\mathcal {E}}\Delta )_{l,m}=0\). In particular,

    $$\begin{aligned} \Delta ^{-1}{\mathcal {E}}\Delta =\begin{pmatrix} 0 &{}\cdots &{}0 &{}-1 &{}1 &{}0 &{}\cdots &{}0\\ \vdots &{} &{}\vdots &{}-2 &{}2 &{}\vdots &{} &{}\vdots \\ \vdots &{} &{}\vdots &{}\vdots &{}\vdots &{}\vdots &{} &{}\vdots \\ 0 &{}\cdots &{}0 &{}-m &{}m &{}0 &{}\cdots &{}0 \end{pmatrix}, \end{aligned}$$

    where the nonzero columns occur at the \((m-\rho -1)\)th and \((m-\rho )\)th positions.

    For \({\bar{\Delta }}_\rho {\mathcal {E}}(\Delta ^{-1}{\mathcal {E}}\Delta )\),

    $$\begin{aligned} \begin{aligned} ({\bar{\Delta }}_\rho {\mathcal {E}}(\Delta ^{-1}{\mathcal {E}}\Delta ))_{l,s}&=\delta (l-\rho )(\Delta ^{-1}{\mathcal {E}}\Delta )_{m-\rho ,s}\\&=\delta (l-\rho )(m-\rho )(\delta (s-(m-\rho ))-\delta (s-(m-\rho -1))). \end{aligned} \end{aligned}$$
  (4)

    Note that \({\bar{\Delta }}_\rho {\mathcal {E}}^2={\bar{\Delta }}_\rho {\mathcal {E}}\). The result then follows from the calculation on the first term.

  (2)

    Finally, as \(\Delta ^{-1}{\mathcal {E}}\Delta \) only has nonzero entries on the \((m-\rho -1)\)th and \((m-\rho )\)th columns, and the two columns differ only by a sign, it suffices to calculate the \((m-\rho )\)th column of \({\bar{\Delta }}_\rho ^2(\Delta ^{-1}{\mathcal {E}}\Delta )\).

    $$\begin{aligned} ({\bar{\Delta }}_\rho (\Delta ^{-1}{\mathcal {E}}\Delta ))_{l,m-\rho }=\sum _{j=1}^m j({\bar{\Delta }}_\rho )_{l,j}=\begin{cases} l-(l-\rho )=\rho &\text {if } l>\rho ,\\ l-(l-\rho +m)=-(m-\rho ) &\text {if } l<\rho ,\\ l=\rho &\text {if } l=\rho . \end{cases} \end{aligned}$$

    Then,

    $$\begin{aligned} ({\bar{\Delta }}_\rho ^2(\Delta ^{-1}{\mathcal {E}}\Delta ))_{l,m-\rho }=\sum _{j=1}^m({\bar{\Delta }}_\rho )_{l,j}({\bar{\Delta }}_\rho (\Delta ^{-1}{\mathcal {E}}\Delta ))_{j,m-\rho }=\begin{cases} -m &\text {if } l\in [2\rho -1]\backslash \{\rho \},\\ \rho &\text {if } l=\rho ,\\ 0 &\text {otherwise}. \end{cases} \end{aligned}$$

\(\square \)

Proof of Proposition A.1

From (15) and Lemma A.2, we see that

$$\begin{aligned} D_\rho S_\rho ^3\Delta ^3=\frac{1}{\rho ^3}D_\rho {\bar{\Delta }}_\rho ^3+\frac{\eta }{\rho ^2}{\mathcal {E}}_1+\frac{1}{\rho ^3}{\mathcal {E}}_2=\frac{1}{\rho ^3}D_\rho {\bar{\Delta }}_\rho ^3+O(\rho ^{-2}), \end{aligned}$$

where

$$\begin{aligned} ({\mathcal {E}}_1)_{l,s}&=\frac{1}{m}\bigg (D_\rho ({\bar{\Delta }}_\rho ^2(\Delta ^{-1}{\mathcal {E}}\Delta )+{\bar{\Delta }}_\rho {\mathcal {E}}(\Delta ^{-1}{\mathcal {E}}\Delta ))\bigg )_{l,s}\\&=\begin{cases} -1 &\text {if } (l,s)=(1,m-\rho -1),\\ 1 &\text {if } (l,s)=(1,m-\rho ),\\ 0 &\text {otherwise,} \end{cases} \end{aligned}$$

and

$$\begin{aligned} ({\mathcal {E}}_2)_{l,s}&=\bigg (D_\rho ({\bar{\Delta }}_\rho ^2{\mathcal {E}}+{\bar{\Delta }}_\rho {\mathcal {E}}{\bar{\Delta }}_\rho +{\bar{\Delta }}_\rho {\mathcal {E}}^2)\bigg )_{l,s}\\&=\begin{cases} -1 &\text {if } (l,s)=(2,m-\rho )\text { or }(1,m-2\rho ),\\ 3 &\text {if } (l,s)=(1,m-\rho ),\\ 0 &\text {otherwise}. \end{cases} \end{aligned}$$

\(\square \)

Even in higher-order cases, alternative decimation still only yields quadratic error decay with respect to the oversampling ratio, as can be seen in Fig. 2d, e.

Alternative decimation is limited by this incomplete cancellation, but canonical decimation has even worse error decay: contrary to the quadratic decay of alternative decimation, canonical decimation exhibits only linear decay for higher-order \(\Sigma \Delta \) quantization. The same is true of plain \(\Sigma \Delta \) quantization, as can be seen in Fig. 2b.

Appendix B: Numerical Experiments

Here, we present numerical evidence that alternative decimation on frames has linear and quadratic error decay rates for the first and second order, respectively. Moreover, we show that canonical decimation, as described in Remark 3.2, is not suitable for our purpose when \(r\ge 2\).

Recall that given \(m,r,\rho \), one can define the canonical decimation operator \(D_\rho {\tilde{S}}_\rho ^r\in {{\mathbb {R}}}^{\eta \times m}\), where \({\tilde{S}}_\rho \in {{\mathbb {R}}}^{m\times m}\) is a circulant matrix.

B.1 Setting

In our experiment, we look at three different quantization schemes: alternative decimation, canonical decimation, and plain \(\Sigma \Delta \). Given observed data \(y\in {{\mathbb {C}}}^m\) from a frame \(E\in {{\mathbb {C}}}^{m\times k}\) and \(r\in {{\mathbb {N}}}\), one can determine the quantized samples \(q\in {{\mathbb {C}}}^m\) by

$$\begin{aligned} y-q=\Delta ^r u \end{aligned}$$

for some bounded u. The three schemes differ in the choice of dual frames:

  • Alternative decimation: \({\tilde{x}}=(D_\rho S_\rho ^r E)^\dagger D_\rho S_\rho ^r q=F_a q\).

  • Canonical decimation: \({\tilde{x}}=(D_\rho {\tilde{S}}_\rho ^r E)^\dagger D_\rho {\tilde{S}}_\rho ^r q=F_c q\).

  • Plain \(\Sigma \Delta \): \({\tilde{x}}=E^\dagger q=F_p q\).
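For concreteness, the quantization step can be sketched in a few lines of Python. The greedy first-order (\(r=1\)) update below, with a mid-rise alphabet of step \(\delta \) and \(2L\) levels, is a standard implementation that stands in for the quantizer \({\mathscr {A}}\); the function name and parameters (`sigma_delta_r1`, `step`, `levels`) are illustrative, the frame size is kept small only for speed, and the reconstruction shown is the plain \(\Sigma \Delta \) dual \({\tilde{x}}=E^\dagger q\):

```python
import numpy as np

def sigma_delta_r1(y, step=0.5, levels=100):
    """Greedy first-order Sigma-Delta: returns q with y - q = Delta u,
    where (Delta u)_j = u_j - u_{j-1} and the state u stays bounded."""
    alphabet = step * (np.arange(-levels, levels) + 0.5)  # mid-rise, 2*levels values
    u = 0.0
    q = np.zeros(len(y), dtype=complex)
    for j, yj in enumerate(y):
        target = u + yj
        # quantize real and imaginary parts separately for complex samples
        qr = alphabet[np.argmin(np.abs(alphabet - target.real))]
        qi = alphabet[np.argmin(np.abs(alphabet - target.imag))]
        q[j] = qr + 1j * qi
        u = target - q[j]  # state update: u_j = u_{j-1} + y_j - q_j
    return q

# toy run with a small harmonic-type frame (k = 5 only to keep it fast)
rng = np.random.default_rng(0)
m, k = 130, 5
rows = np.arange(1, m + 1).reshape(-1, 1)
cols = np.arange(1, k + 1).reshape(1, -1)
E = np.exp(-2j * np.pi * rows * cols / m) / np.sqrt(k)
x = 0.1 * rng.standard_normal(k)
q = sigma_delta_r1(E @ x)
x_rec = np.linalg.pinv(E) @ q  # plain Sigma-Delta dual E^dagger
print(np.linalg.norm(x - x_rec))
```

The recursion \(u_j=u_{j-1}+y_j-q_j\) makes \(y-q=\Delta u\) by construction, so the reconstruction error is governed by how well the chosen dual frame damps \(\Delta ^r u\).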

For each experiment, we use the mid-rise quantizer \({\mathscr {A}}\) and fix \(k=55, \delta =0.5, L=100\), and \(\eta =65\). For each \(\rho \), we set \(m=\rho \eta \) and pick 10 randomly generated vectors \(\{x^j\}_{j=1}^{10}\subset {{\mathbb {C}}}^k\). \(\Sigma \Delta \) quantization on each signal gives \(\{q^j\}_{j=1}^{10}\subset {{\mathbb {C}}}^m\). The maximum reconstruction error over the 10 experiments is recorded, namely

$$\begin{aligned} {\mathscr {E}}_{i}=\max _{1\le j\le 10}\Vert x^j-F_i q^j\Vert _2,\quad i\in \{a,c,p\}. \end{aligned}$$

The frame in our experiment is

$$\begin{aligned} (E^{m,k})_{l,j}=(E)_{l,j}=\frac{1}{\sqrt{k}}\exp (-2\pi \imath (l+1)(j+1)/m). \end{aligned}$$
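This is a harmonic-type frame: the off-diagonal entries of its Gram matrix are full geometric sums, which vanish whenever \(m>k\), so \(E^*E=\frac{m}{k}I_k\). A quick numerical check (the 0-based index ranges below are an assumption about the indexing; any \(m>k\) exhibits the same tightness):

```python
import numpy as np

m, k = 130, 55                       # the paper uses m = rho * eta with eta = 65
rows = np.arange(m).reshape(-1, 1)   # frame index l = 0, ..., m-1
cols = np.arange(k).reshape(1, -1)   # coordinate index j = 0, ..., k-1
E = np.exp(-2j * np.pi * (rows + 1) * (cols + 1) / m) / np.sqrt(k)

# off-diagonal Gram entries are sums of exp(-2*pi*i*(l+1)*d/m) over a full
# period l, which vanish for 0 < |d| < m; hence E^* E = (m/k) I_k
gram = E.conj().T @ E
print(np.max(np.abs(gram - (m / k) * np.eye(k))))
```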

First, we compare alternative decimation with plain \(\Sigma \Delta \) quantization in Fig. 2. For \(r=1\), alternative decimation performs worse than plain \(\Sigma \Delta \) quantization, since the latter benefits from the smoothness of the frame elements, achieving the decay rate \(O((\frac{m}{k})^{-5/4})\) proved in [3]. However, for \(r\ge 2\), alternative decimation surpasses plain \(\Sigma \Delta \) quantization. This can be explained by the boundary effect in finite-dimensional spaces, which results in incomplete cancellation for backward difference matrices. We are interested in the cases \(r=1\) and \(r=2\). As the figure shows, the theoretical error bound does not have a tight constant, although the decay rate is consistent with our experimental results.

Fig. 2: The log-log plot of reconstruction error against the decimation ratio \(\rho \) for different quantization schemes. In the case \(r=1\), alternative decimation coincides with canonical decimation. For \(r\ge 2\), alternative decimation has a better error decay rate than both canonical decimation and plain \(\Sigma \Delta \) quantization

B.2 Necessity of Alternative Decimation

The main difference between the alternative decimation operator \(D_\rho S_\rho ^r\) and the canonical one \(D_\rho {\tilde{S}}_\rho ^r\) lies in the scaling effect on difference structures. We have \({\tilde{S}}_\rho ^r=(S_\rho +L)^r\) with \(\rho L\) having unit entries on the first \(\rho -1\) rows and 0 everywhere else.

Figure 2 shows the performance drop-off when switching from alternative to canonical decimation for \(r\ge 2\): canonical decimation incurs much larger reconstruction error than alternative decimation, and generally has a worse decay rate. For demonstration, we show explicitly the difference between the alternative and canonical decimation schemes for \(r=2\):

$$\begin{aligned} {\tilde{S}}_\rho ^2\Delta ^2&=(S_\rho +L)^2\Delta ^2\\&=S_\rho ^2\Delta ^2+(LS_\rho +S_\rho L+L^2)\Delta ^2\\&=S_\rho ^2\Delta ^2+L(S_\rho +L)\Delta ^2+S_\rho L\Delta ^2. \end{aligned}$$

Since \(D_\rho L=0\), we are left with \(D_\rho S_\rho L\Delta ^2\). Now,

$$\begin{aligned} (L\Delta ^2)_{l,j}=\begin{cases} -\frac{1}{\rho } &\text {if } 1\le l\le \rho -1,\ j=m-1,\\ \frac{1}{\rho } &\text {if } 1\le l\le \rho -1,\ j=m,\\ 0 &\text {otherwise}. \end{cases} \end{aligned}$$

Then, we see that

$$\begin{aligned} (D_\rho S_\rho L\Delta ^2)_{l,j}=\begin{cases} -\frac{\rho -1}{\rho ^2} &\text {if } l=1,\ j=m-1,\\ \frac{\rho -1}{\rho ^2} &\text {if } l=1,\ j=m,\\ 0 &\text {otherwise}. \end{cases} \end{aligned}$$

We see that \(D_\rho {\tilde{S}}_\rho ^2\Delta ^2=O(\rho ^{-1})\), hence the linear decay for \(r=2\).
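The entry pattern of \(L\Delta ^2\), which drives the \(O(\rho ^{-1})\) term, is easy to confirm numerically. The sketch below assumes, as stated above, that \(\Delta \) is the \(m\times m\) backward-difference matrix and that \(\rho L\) has unit entries on its first \(\rho -1\) rows and zeros elsewhere; the sizes \(m=12\), \(\rho =4\) are arbitrary with \(\rho \mid m\):

```python
import numpy as np

m, rho = 12, 4
Delta = np.eye(m) - np.eye(m, k=-1)   # backward difference: (Delta x)_j = x_j - x_{j-1}
L = np.zeros((m, m))
L[: rho - 1, :] = 1.0 / rho           # rho*L: unit entries on the first rho-1 rows

# each nonzero row of L @ Delta^2 is (1/rho) times the column sums of Delta^2,
# and those sums are (0, ..., 0, -1, 1): nonzero only in the last two columns
LD2 = L @ Delta @ Delta
expected = np.zeros((m, m))
expected[: rho - 1, m - 2] = -1.0 / rho   # column m-1 in the paper's 1-indexing
expected[: rho - 1, m - 1] = 1.0 / rho    # column m
print(np.allclose(LD2, expected))
```

In particular, rows \(\rho ,\dots ,m\) of \(L\Delta ^2\) vanish, which is why \(D_\rho L=0\) removes the \(L(S_\rho +L)\Delta ^2\) term but not \(S_\rho L\Delta ^2\).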



Cite this article

Lin, KC. Analysis of Decimation on Finite Frames with Sigma-Delta Quantization. Constr Approx 50, 507–542 (2019). https://doi.org/10.1007/s00365-019-09480-3
