Abstract
Signal-to-noise ratio (SNR) statistics play a central role in many applications. A common situation where the SNR is studied is when a continuous-time signal is sampled at a fixed frequency with some noise in the background. While estimation methods exist, little is known about the distribution of SNR statistics when the noise is not weakly stationary. In this paper we develop a nonparametric method to estimate the distribution of an SNR statistic when the noise belongs to a fairly general class of stochastic processes that encompasses both short- and long-range dependence, as well as nonlinearities. The method is based on a combination of smoothing and subsampling techniques. Computations are carried out only at the subsample level, which makes it possible to handle the enormous sample sizes typically produced by modern data acquisition technologies. We derive asymptotic guarantees for the proposed method, show its finite sample performance in numerical experiments, and propose an application to electroencephalography data.
References
Altman NS (1990) Kernel smoothing of data with correlated errors. J Am Stat Assoc 85(411):749–759
Brillinger DR, Irizarry RA (1998) An investigation of the second- and higher-order spectra of music. Sig Process 65(2):161–179
Conte E, Maio AD (2002) Adaptive radar detection of distributed targets in non-Gaussian noise. In: RADAR 2002. IEE
Coretto P, Giordano F (2017) Nonparametric estimation of the dynamic range of music signals. Aust N Z J Stat 59(4):389–412
Czanner G, Sarma SV, Ba D, Eden UT, Wu W, Eskandar E, Lim HH, Temereanca S, Suzuki WA, Brown EN (2015) Measuring the signal-to-noise ratio of a neuron. Proc Natl Acad Sci 112(23):7141–7146
Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE (2000) PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23):e215–e220
Götze F, Račkauskas A (2001) Adaptive choice of bootstrap sample sizes. In: State of the art in probability and statistics. Institute of Mathematical Statistics, pp 286–309
Gray R (1990) Quantization noise spectra. IEEE Trans Inf Theory 36(6):1220–1244
Hall P, Jing BY, Lahiri SN (1998) On the sampling window method for long-range dependent data. Stat Sin 8:1189–1204
Hall P, Lahiri SN, Polzehl J (1995) On bandwidth choice in nonparametric regression with both short- and long-range dependent errors. Ann Stat 23(6):1921–1936
Haykin SS, Kosko B (2001) Intelligent signal processing. Wiley-IEEE Press
Hosking JRM (1996) Asymptotic distributions of the sample mean, autocovariances, and autocorrelations of long-memory time series. J Econom 73:261–284
Jach A, McElroy T, Politis DN (2012) Subsampling inference for the mean of heavy-tailed long-memory time series. J Time Ser Anal 33:96–111
Kalogera V (2017) Too good to be true? Nat Astron 1(0112):1–4
Kay SM (1993) Fundamentals of statistical signal processing, volume 1. Estimation theory. Prentice Hall, Englewood Cliffs
Kemp B, Zwinderman A, Tuk B, Kamphuisen H, Oberye J (2000) Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG. IEEE Trans Biomed Eng 47(9):1185–1194
Kenig E, Cross MC (2014) Eliminating 1/f noise in oscillators. Phys Rev E 89(4):0429011–0429017
Kogan S (1996) Electronic noise and fluctuations in solids. Cambridge University Press, Cambridge
Levitin DJ, Chordia P, Menon V (2012) Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proc Natl Acad Sci 109(10):3716–3720
Ligges U, Krey S, Mersmann O, Schnackenberg S (2016) tuneR: analysis of music. CRAN
Loizou PC (2013) Speech enhancement: theory and practice, 2nd edn. CRC Press, Boca Raton
Parzen E (1966) Time series analysis for models of signal plus white noise. Technical report, Department of Statistics, Stanford University
Parzen E (1999) Stochastic processes (classics in applied mathematics). Society for Industrial and Applied Mathematics
Politis DN, Romano JP (1994) Large sample confidence regions based on subsamples under minimal assumptions. Ann Stat 22(4):2031–2050
Politis DN, Romano JP, Wolf M (1999) Subsampling. Springer, New York
Politis DN, Romano JP, Wolf M (2001) On the asymptotic theory of subsampling. Stat Sin 11(4):1105–1124
Priestley MB, Chao MT (1972) Nonparametric function fitting. J R Stat Soc 34:385–392
Richards MA (2014) Fundamentals of Radar signal processing, second edition (McGraw-Hill professional engineering). McGraw-Hill Education, New York
Romano JP (1989) Bootstrap and randomization tests of some nonparametric hypotheses. Ann Stat 17(1):141–159
Shoeb AH (2009) Application of machine learning to epileptic seizure onset detection and treatment. Ph.D. thesis, Massachusetts Institute of Technology
Timmer J, König M (1995) On generating power law noise. Astron Astrophys 300:707
Ullsperger M, Debener S (2010) Simultaneous EEG and fMRI: recording, analysis, and application. Oxford University Press, Oxford
Voss RF, Clarke J (1975) “1/f noise” in music and speech. Nature 258:317–318
Voss RF, Clarke J (1978) “1/f noise” in music: music from 1/f noise. J Acoust Soc Am 63:258
Weihs C, Jannach D, Vatolkin I, Rudolph G (2016) Music data analysis: foundations and applications. Chapman and Hall/CRC, New York
Weinberg G (2017) Radar detection theory of sliding window processes. CRC Press, Boca Raton
Weissman MB (1988) 1/f noise and other slow, nonexponential kinetics in condensed matter. Rev Mod Phys 60(2):537–571
Acknowledgements
We thank the editor and two anonymous reviewers for their constructive comments, which helped to improve the manuscript.
Appendix
In this appendix we report the proofs of the statements, together with some useful technical lemmas. First, we state a lemma used to evaluate \(\text {MISE}(\hat{s};h)\).
Lemma 1
Assume A1, A2 and A3. For \(t\in I_h=(h,1-h)\)
where \(\hat{s}\) is the kernel estimator in (9), \(R(s'')=\int _{I_h}[s''(t)]^2dt\), \(d_K=\int u^2\mathscr {K}(u)du\), \(N_K=\int \mathscr {K}^2(u)du\), \(\sigma ^2_{\varepsilon }={{\,\mathrm{E}\,}}[\varepsilon _t^2]\), \(\Lambda _n\) is defined in (10) and
Proof
By A3 it follows that conditions A–C of Altman (1990) are satisfied. Now, let
For the cases SRD and LRD with \(\gamma _1=1\) the conditions D and E of Altman (1990) are still satisfied with \(\rho _n(j)\). Following the same arguments as in the proof of Theorem 1 of Altman (1990) the result follows. Finally, in the last case, \(\rho _n(j)\) satisfies condition D but not condition E of Altman (1990). So, we have
Therefore, using Lemma A.4 in Altman (1990), it follows that
The latter completes the proof. \(\square \)
The \(\text {AMISE}(\hat{s};h)\) is the asymptotic MISE, i.e. the leading part of the MISE. Note that Lemma 1 gives a formula similar to (2.8) in Theorem 2.1 of Hall et al. (1995). However, unlike Hall et al. (1995), our approach does not need to introduce an additional parameter to capture SRD and LRD. Also notice that taking \(h\in H\) as in A3 implies that \(\text {MISE}(\hat{s};h)=O\left( \Lambda _n^{-4/5}\right) \), which means that the kernel estimator achieves the global optimal rate.
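As a consistency check on the rate just stated, one can balance the squared-bias and variance contributions. The display in Lemma 1 did not survive extraction, so the following is a reconstruction assuming it has the classical kernel-regression form built from the constants defined in the lemma:

```latex
\text{AMISE}(\hat{s};h)
  = \frac{d_K^2}{4}\,h^4\,R(s'')
  + \frac{N_K\,\sigma^2_{\varepsilon}}{\Lambda_n h},
\qquad
h^\star
  = \left(\frac{N_K\,\sigma^2_{\varepsilon}}{d_K^2\,R(s'')}\right)^{1/5}
    \Lambda_n^{-1/5}.
```

Substituting \(h^\star\) back gives \(\text{AMISE}(\hat s;h^\star)=O(\Lambda_n^{-4/5})\), matching the rate claimed above for \(h\in H\).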
Proof of Theorem 1
Lemma 1 holds under A1, A2 and A3. Let \(\hat{\gamma }(j)=\frac{1}{n}\sum _{t=1}^{n-j}\hat{\varepsilon }_t\hat{\varepsilon }_{t+j}\) be the estimator of the autocovariance \(\gamma (j)\), with \(j=0,1,\ldots \). By A3, \(r_n=\frac{1}{\Lambda _nh}+h^4=\Lambda _n^{-4/5}\), and by the Markov inequality
for some \(\eta >0\) and when \(n\rightarrow \infty \).
It means that \(\frac{1}{n}\sum _{i=1}^{n}\left( s(i/n)-\hat{s}(i/n)\right) ^2=\text {AMISE}(\hat{s};h)+o_p(r_n)\). Rewrite \(\hat{\gamma }(j)\) as
By (18) and the Cauchy–Schwarz inequality it follows that term I\(=O_p(r_n)\) in \(\hat{\gamma }(j)\). Consider term III in (19). Without loss of generality, assume that \(s(t)\not = 0\). By the Chebyshev inequality
for some \(\eta >0\). By using the same arguments as in the proof of Lemma 1, it follows that \(MSE(\hat{s};h)=O\left( r_n\right) \) so that \(\hat{s}(t)=s(t)(1+O_p(r_n^{1/2}))\). Therefore, it is sufficient to investigate the behaviour of
\(\sum _j^n\rho (j)=O(\log n)\) under LRD with \(\gamma _1=1\), and \(\sum _j^n\rho (j)=O(n^{1-\gamma _1})\) under LRD with \(0<\gamma _1<1\). By A1, and applying the Chebyshev inequality, it follows that III\(=O_p(\Lambda _n^{-1/2})\). Based on similar arguments, one has that term II\(=O_p(\Lambda _n^{-1/2})\). Now consider the last term of (19), and notice that it is a series of products of autocovariances. Theorem 3 in Hosking (1996) is used to conclude that the series is convergent under SRD and LRD with \(1/2<\gamma _1\le 1\), while it is divergent under LRD with \(0<\gamma _1\le 1/2\). Based on this, a direct application of the Chebyshev inequality to term IV implies that IV\(=o_p(\Lambda _n^{-1/2})\). Then \(\hat{\gamma }(j)=\gamma (j) + O_p(r_n) +O_p(\Lambda _n^{-1/2}) +O_p(j/n)\), where the \(O_p(j/n)\) term is due to the bias of \(\hat{\gamma }(j)\). This means that \(\hat{\rho }(j) = \rho (j) + O_p(r_n)+ O_p(\Lambda _n^{-1/2})+O_p(j/n)\). Since \(\mathscr {K}(\cdot )\) is bounded, one can write
Using A4 and \(h=O(\Lambda _n^{-1/5})\), A3 implies that
Consider
and by (20) it follows that
which implies that \(Q_1=o_p(r_n)\). It means that the CV function, as defined in (22) of Altman (1990) with the estimated correlation function, has an error rate of \(o_p(r_n)\) with respect to
Now, we can apply the classical bias correction and based on (14) in Altman (1990), we have that
Since \(\text {AMISE}(\hat{s};h)=O(r_n)\), it follows that \(\hat{h}\), the minimizer of \(\text {CV}(h)\), is equal to \(h^\star \), the minimizer of \(\text {MISE}(\hat{s};h)\), asymptotically in probability. By Lemma 1, it follows that \(h^\star \) is the same minimizer with respect to \(\text {AMISE}(\hat{s};h)\) asymptotically. \(\square \)
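The rate calculations in the proof above rest on smoothing the series, forming residuals, and computing the residual autocovariance \(\hat\gamma(j)=\frac{1}{n}\sum_t\hat\varepsilon_t\hat\varepsilon_{t+j}\). A minimal numerical sketch follows; the Nadaraya–Watson-style smoother and Epanechnikov kernel are illustrative assumptions, not the paper's exact estimator (9):

```python
import numpy as np

def kernel_smooth(y, h):
    """Kernel regression of y on the design t_i = i/n with an
    Epanechnikov kernel and bandwidth h (illustrative sketch)."""
    n = len(y)
    t = np.arange(1, n + 1) / n
    s_hat = np.empty(n)
    for k, t0 in enumerate(t):
        u = (t - t0) / h
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
        s_hat[k] = np.sum(w * y) / np.sum(w)
    return s_hat

def acov(res, j):
    """Autocovariance estimator gamma_hat(j) = (1/n) sum e_t e_{t+j}."""
    n = len(res)
    return np.sum(res[: n - j] * res[j:]) / n

rng = np.random.default_rng(0)
n = 2000
t = np.arange(1, n + 1) / n
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)  # signal + noise

s_hat = kernel_smooth(y, h=0.05)
res = y - s_hat                       # residuals estimating eps_t
rho1 = acov(res, 1) / acov(res, 0)    # lag-1 residual autocorrelation
```

With i.i.d. noise of variance \(0.09\), \(\hat\gamma(0)\) lands near that value and \(\hat\rho(1)\) near zero, as the expansion of \(\hat\gamma(j)\) in the proof predicts.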
The subsequent Lemmas are needed to show Theorem 2 and Corollary 1.
Lemma 2
Assume A2. Suppose that \(\{a_t\}\), in A2, is Normally distributed when \(0<\gamma _1\le 1/2\). Then \(n\rightarrow \infty \), \(b\rightarrow \infty \) and \(b/n \rightarrow 0\) implies \(\sup _x\left| G_{n,b}(x)-G(x)\right| {\mathop {\longrightarrow }\limits ^{\text {p}}}0\), and \(q_{n,b}(\gamma _2) {\mathop {\longrightarrow }\limits ^{\text {p}}}q(\gamma _2)\) for all \(\gamma _2 \in (0,1)\).
Proof
Under A2-SRD, Theorems 4.1 and 5.1 of Politis et al. (2001) hold and the results follow. The rest of the proof deals with the LRD case. Since G(x) is continuous (Hosking 1996), we follow the proof of Theorem 4 of Jach et al. (2012). Define \(G_{n,b}^0(x)=\frac{1}{N}\sum _{i=1}^N \mathbb {I}\left\{ \tau _b\left( V_{n,b,i}-\sigma _{\varepsilon }^2\right) \le x\right\} \) with \(N=n-b+1\). It is sufficient to show that \({{\,\mathrm{Var}\,}}[G_{n,b}^0(x)] \rightarrow 0\) as \(n\rightarrow \infty \). Apply Theorem 2 in Hosking (1996) to conclude that \(\tau _n\left( V_n-\sigma _{\varepsilon }^2\right) \) has the same distribution as \(\tau _n\left( V_n^{1}\right) \), where \(V_n^{1}=\frac{1}{n}\sum _{i=1}^n(\varepsilon _i^2-\sigma _{\varepsilon }^2)\). Therefore, we have to show that \({{\,\mathrm{Var}\,}}[G^1_{n,b}(x)] \rightarrow 0\) as \(n \rightarrow \infty \), where
Using the stationarity of \( \{\varepsilon _i\}_{i \in \mathbb {Z}}\), it follows that \({{\,\mathrm{Var}\,}}[G_{n,b}^1(x)] = {{\,\mathrm{E}\,}}[(G_{n,b}^1(x)-G_b^1(x))^2]\), where \(G_b^1(x)=P\left( \tau _bV_b^1\le x\right) \). By Hall et al. (1998) the Hermite rank of the square function is 2. Then, based on the same arguments as in the proof of Theorem 2.2 of Hall et al. (1998) with \(q=2\), we can write
Consider
After some algebra, we obtain
where for \(k=1,2,\ldots \), \(\phi _2(k)\) are the autocovariances of \(\{\varepsilon _t^2\}_{t_\in \mathbb {Z}}\). For \(k \rightarrow \infty \), A2-LRD with \(0<\gamma _1\le 1\) implies that \(\phi _2(k)=O(k^{-2\gamma _1})\) by Theorem 3 of Hosking (1996). Take (22) and note that
where
The latter implies that, for \(n\rightarrow \infty \), (22) converges to zero. Therefore, \(\tau _bV_{n,b,1}^1\) and \(\tau _bV_{n,b,N}^1\) are asymptotically independent. This can be argued from asymptotic normality when \(1/2 \le \gamma _1 \le 1\); for the case \(0< \gamma _1 < 1/2\), the asymptotic independence can be obtained by using Theorem 2.3 of Hall et al. (1998). Thus, the right-hand side of (21) converges to zero as \(n\rightarrow \infty \) by the Cesàro theorem. This shows that \(\sup _x\left| G_{n,b}(x)-G(x)\right| {\mathop {\longrightarrow }\limits ^{\text {p}}}0\).
Following the same arguments as in Theorem 5.1 of Politis et al. (2001), and by using the first part of this proof one shows that \(q_{n,b}(\gamma _2){\mathop {\longrightarrow }\limits ^{p}}q(\gamma _2)\). The latter completes the proof. \(\square \)
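The subsampling estimator analysed in Lemma 2 can be sketched numerically: compute the overlapping block statistics \(V_{n,b,i}\), normalise by \(\tau_b\), and read off empirical quantiles \(q_{n,b}(\gamma_2)\). Centring at the full-sample variance and using i.i.d. Gaussian noise (the SRD case, \(\tau_b=\sqrt{b}\)) are simplifying assumptions for illustration:

```python
import numpy as np

def subsample_dist(eps, b, tau_b):
    """Subsampling analogue of G_{n,b}: sorted values of
    tau_b * (V_{n,b,i} - sigma2_hat) over all N = n - b + 1
    overlapping blocks, with sigma2_hat the full-sample mean of eps^2
    standing in for sigma_eps^2 (illustrative sketch)."""
    sq = eps**2
    sigma2_hat = sq.mean()
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    V = (csum[b:] - csum[:-b]) / b          # block means V_{n,b,i}
    return np.sort(tau_b * (V - sigma2_hat))

def quantile(stats, gamma2):
    """Empirical quantile q_{n,b}(gamma2) of the subsampling law."""
    return np.quantile(stats, gamma2)

rng = np.random.default_rng(1)
n, b = 100_000, 500
eps = rng.standard_normal(n)                # i.i.d. noise: SRD case
stats = subsample_dist(eps, b, np.sqrt(b))
q95 = quantile(stats, 0.95)
```

For standard normal noise, \(\sqrt{b}(V_b-\sigma^2)\) is approximately \(N(0,2)\), so `q95` should land near \(1.645\sqrt{2}\approx 2.33\), consistent with the convergence \(q_{n,b}(\gamma_2)\rightarrow q(\gamma_2)\) in the lemma.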
Lemma 3
Assume A1, A2, A3 and A4. Suppose that \(\{a_t\}\), in A2, is Normally distributed when \(0<\gamma _1\le 1/2\). Let \(\hat{s}(t)\) be the estimate of s(t) computed on the entire sample (of length n). Then \(n \rightarrow \infty \) and \(b=o(n^{4/5})\) implies \(\sup _x \left| \hat{G}_{n,b}(x)-G(x) \right| {\mathop {\longrightarrow }\limits ^{\text {p}}}0\).
Proof
Denote \(r_n=\frac{1}{\Lambda _nh}+h^4\). By Lemma 1 and A3, \(r_n=\Lambda _n^{-\frac{4}{5}}\). Note that \(\hat{s}(t)\) is computed on the whole time series. By Lemma 2, we can use the same approach as in Lemma 1, part (i), of Coretto and Giordano (2017). We only have to verify that \(\tau _b r_n \rightarrow 0\) as \(n \rightarrow \infty \), which always holds if \(b=o(n^{4/5})\). \(\square \)
Lemma 4
Assume A1, A2, A3 and A4. Suppose that \(\{a_t\}\), in A2, is Normally distributed when \(0<\gamma _1\le 1/2\). Let \(\hat{s}(t)\) be the estimate of s(t) computed on the entire sample (of length n). Then \(n \rightarrow \infty \) and \(b=o(n^{4/5})\) implies \(\hat{q}_{n,b}(\gamma _2) {\mathop {\longrightarrow }\limits ^{\text {p}}}q(\gamma _2)\) for any \(\gamma _2 \in (0,1)\).
Proof
Using the same arguments as in Lemma 3, we have that \(\hat{G}_{n,b}(x)-G_{n,b}(x) = o_p(1)\) at each point x. By the continuity of G(x) at all x, we have that \(q_{n,b}(\gamma _2) {\mathop {\longrightarrow }\limits ^{\text {p}}}q(\gamma _2)\) by Lemma 2. Therefore \(\hat{q}_{n,b}(\gamma _2) {\mathop {\longrightarrow }\limits ^{\text {p}}}q(\gamma _2)\). Note that the assumption \(b=o(n^{4/5})\) is needed to deal with A2-LRD; under A2-SRD alone, \(b=o(n)\) would suffice. \(\square \)
Proof of Theorem 2
Let \(P^*(X)\) and \({{\,\mathrm{E}\,}}^*(X)\) be the conditional probability and the conditional expectation of a random variable X with respect to the set \(\chi = \left\{ Y_1,\ldots ,Y_n\right\} \). Let \(\hat{G}_{n,b_1}^b(x)\) be the same as \(\hat{G}_{n,b}(x)\), but now \(\hat{s}(t)\) is estimated on each subsample of length b, and the variance of the error term is computed on the same subsample of length \(b_1 < b\). Without loss of generality, we consider the first observation with \(t=1\) as in Algorithm 1. Then,
using Lemma 1 as in the proof of Lemma 3. Let \(b_1=o(b^{4/5})\).
Let \(Z_i(x)=\mathbb {I}\left\{ \tau _{b_1}\left( \hat{V}_{n,b_1,i}-V_n\right) \le x\right\} \) and \(Z_i^*(x)=\mathbb {I}\left\{ \tau _{b_1}\left( \hat{V}_{n,b_1,I_i}-V_n\right) \le x\right\} \), where \(I_i\) is a uniform random variable on \(I=\left\{ 1,2,\ldots ,n-b+1\right\} \), so that \(P(Z_i^*(x)=Z_i(x)|\chi )=\frac{1}{n-b+1}\) for all i at each x. Writing \(\tilde{G}_{n,b_1}(x)=\frac{1}{K}\sum _{i=1}^KZ_i^*(x)\), it follows that
as \(n\rightarrow \infty \); the latter is implied by Lemma 3 and the fact that \(\tau _{b_1}\Lambda _b^{-4/5}\rightarrow 0\) when \(0<\gamma _1\le 1\) in assumption A2.
Since \(\{I_i\}\) is a set of uniform random variables sampled without replacement, we can apply Corollary 4.1 of Romano (1989).
Therefore it follows that \(\tilde{G}_{n,b_1}(x)-\hat{G}_{n,b_1}^b(x){\mathop {\longrightarrow }\limits ^{\text {p}}}0\) as \(K\rightarrow \infty \) and \(n\rightarrow \infty \). Applying the delta method approach
as \(K\rightarrow \infty \), \(n\rightarrow \infty \) and \(\forall x\). Since G(x) is continuous, the convergence is uniform because of the argument of the last part of the proof of Theorem 2.2.1 in Politis et al. (1999). This concludes the proof. \(\square \)
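The Monte Carlo device in the proof of Theorem 2, evaluating the subsample statistic only at K uniformly drawn starting indices \(I_i\) (without replacement) instead of at all \(n-b+1\) blocks, can be sketched as follows; the statistic and tuning values are illustrative assumptions:

```python
import numpy as np

def mc_subsample_quantile(y, b, K, gamma2, stat, tau_b, rng):
    """Monte Carlo subsampling quantile: draw K starting indices I_i
    uniformly without replacement from {0, ..., n-b}, evaluate the
    centred, scaled subsample statistic on each block, and return the
    empirical gamma2-quantile (illustrative sketch of G_tilde)."""
    n = len(y)
    starts = rng.choice(n - b + 1, size=K, replace=False)
    full = stat(y)                            # full-sample value, as centring
    vals = np.array([tau_b * (stat(y[i:i + b]) - full) for i in starts])
    return np.quantile(vals, gamma2)

rng = np.random.default_rng(2)
y = rng.standard_normal(50_000)               # i.i.d. noise, SRD case
q = mc_subsample_quantile(y, b=400, K=2000, gamma2=0.95,
                          stat=lambda x: np.mean(x**2),
                          tau_b=np.sqrt(400), rng=rng)
```

Only K statistic evaluations are needed, which is what makes the method feasible for the enormous sample sizes mentioned in the abstract; as \(K\rightarrow\infty\) and \(n\rightarrow\infty\) the quantile agrees with the full subsampling one.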
Proof of Corollary 1
The results follow from the proof of Lemma 4 by replacing Lemma 3 with Theorem 2. \(\square \)
Proof of Corollary 2
We can write \(\tilde{G}_{n,b_1}^0(x)\) as
So, it is sufficient to use Theorem 2 to show that \(\sup _x|\tilde{G}_{n,b_1}(x)-G(x)|{\mathop {\longrightarrow }\limits ^{\text {p}}}0\) and we only need to show that \(\tau _{b_1}\left( V_n-\hat{V}_n\right) {\mathop {\longrightarrow }\limits ^{\text {p}}}0\).
Following the same arguments as in the proof of Theorem 1, we have that \(V_n-\sigma _{\varepsilon }^2=O_p\left( \tau _n^{-1}\right) \) and
So, \(\tau _{b_1}\left( V_n-\hat{V}_n\right) =\tau _{b_1}\left( V_n-\sigma _{\varepsilon }^2\right) -\tau _{b_1}\left( \hat{V}_n-\sigma _{\varepsilon }^2\right) \).
Since \(b_1=o(b^{4/5})\), we have that \(\tau _{b_1}\left( V_n-\sigma _{\varepsilon }^2\right) =O_p\left( \tau _{b_1}\tau _n^{-1}\right) =o_p(1)\) and
In both cases it follows that \(O_p\left( \tau _{b_1}\tau _n^{-1}\right) =o_p(1)\) and \(O_p\left( \tau _{b_1}\Lambda _n^{-4/5}\right) =o_p(1)\), respectively, and the result follows. \(\square \)
Proof of Theorem 3
By (4) we have that
and \(SNR=10\log _{10}\left( {\sigma _{\varepsilon }^{-2}{\int s^2(t)dt}}\right) \). First, we analyze the quantity \(\tau _m(\widehat{SNR}-SNR)\). So we can write
Using the same arguments as in the proof of Theorem 1, it follows that \(\hat{V}_m-\sigma _{\varepsilon }^2=O_p(\tau _m^{-1})\). Expanding \(\log _{10}(1+x)\) in a Taylor series, we have that
and
Now, we show the last result. From the proof of Theorem 1 and by assumption A3, we have that \(\hat{s}(t)=s(t)\left( 1+O_p(\Lambda _n^{-2/5})\right) \). Therefore,
Now, we can write
By using the convergence of the quadrature of a bounded and continuous function to its integral, it follows that \(II_s=O(n^{-1})\). By (24), we have that
Since \(m=o(n^{2/5})\), it follows that \(\tau _m\Lambda _n^{-2/5}\rightarrow 0\) as \(n\rightarrow \infty \). So, (23) is shown.
Hence, we can conclude that \(\tau _m(\widehat{SNR}-SNR)\) has the same asymptotic distribution as \(\frac{\tau _m}{\sigma _{\varepsilon }^2}\left( \hat{V}_m-\sigma _{\varepsilon }^2\right) \) by Slutsky's theorem. Therefore, assumption 3.2.1 of Politis et al. (1999) is verified by Theorem 2.
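The Taylor step invoked earlier in this proof (whose display did not survive extraction) can be spelled out; this is the standard delta-method expansion, stated here as a reconstruction rather than the paper's exact display. With \((\hat V_m-\sigma^2_\varepsilon)/\sigma^2_\varepsilon = O_p(\tau_m^{-1})\),

```latex
10\log_{10}\hat{V}_m - 10\log_{10}\sigma^2_{\varepsilon}
  = 10\log_{10}\!\left(1+\frac{\hat{V}_m-\sigma^2_{\varepsilon}}
                              {\sigma^2_{\varepsilon}}\right)
  = \frac{10}{\ln 10}\,
    \frac{\hat{V}_m-\sigma^2_{\varepsilon}}{\sigma^2_{\varepsilon}}
  + O_p\!\left(\tau_m^{-2}\right),
```

so that, up to sign and the fixed factor \(10/\ln 10\), \(\tau_m(\widehat{SNR}-SNR)\) inherits the limit of \(\frac{\tau_m}{\sigma^2_\varepsilon}(\hat V_m-\sigma^2_\varepsilon)\).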
Consider the SNR evaluated at a given point, namely \(SNR_i=10\log _{10}\left( \frac{s^2(t_i)}{\sigma _{\varepsilon }^2}\right) \), and write \(\tau _{b_1}\left( \widehat{SNR}_{n,b,I_i}-\widehat{SNR}\right) \) in \(\mathbb {Q}_n(x)\) as
for a given subsample starting at \(I_i\). By using the first part of this proof, it follows that \(S_2=O_p(\tau _{b_1}/\tau _{m})=o_p(1)\) since \(b_1/m\rightarrow 0\) when \(n\rightarrow \infty \). Now, in order to deal with the quantity \(S_1\), we need to show that
where \(t_i\) is the initial point of the block of b values. Using again the convergence of the quadrature of a bounded and continuous function to its integral, we have that \(\frac{1}{b}\sum _{j=i}^{i+b-1}\left[ s\left( \frac{j-i+1}{b}\right) \right] ^2\rightarrow \int _0^1\left( s^b_i(t)\right) ^2dt\) as \(n\rightarrow \infty \), \(b\rightarrow \infty \) and \(b/n\rightarrow 0\). The quantity \(s_i^b(\cdot )\) denotes the portion of the signal in the block of b values in (0, 1), with i the index of the initial point. Note that \(b/n\rightarrow 0\), and by the mean value theorem \(\int _0^1\left( s_i^b(t)\right) ^2dt\rightarrow s^2(t_i)\). Using the first part of this proof once more, together with (25), we have that \(\tau _{b_1}\left( \widehat{SNR}_{n,b,I_i}-SNR_{I_i}\right) \) has the same asymptotic distribution as \(\frac{\tau _{b_1}}{\sigma _{\varepsilon }^2}\left( \hat{V}_{n,b_1,I_i}-\sigma _{\varepsilon }^2\right) \). Now we study the quantity \(S_3\). First, we show that
when \(n\rightarrow \infty \) with some \(x>0\). Since \(SNR_i-SNR=10\log _{10}\left( \frac{s^2(t_i)}{\int s^2(t)dt}\right) \), the equation in (26) becomes
We have that
Moreover, \(\frac{s^2(t_i)}{\int s^2(t)dt}>10^{\frac{x}{10\tau _{b_1}}}\) can be written as
Summing over the index i and dividing by \(n-b+1\), we can write
Since \(\tau _{b_1}\left( 10^{\frac{x}{10\tau _{b_1}}}-1\right) \rightarrow c>0\) when \(b_1\rightarrow \infty \), by using equation (27) we obtain
Therefore \(\frac{N_n^b}{n-b+1}\rightarrow 0\) as \(n\rightarrow \infty \), where \(N_n^b=\sum _{i=1}^{n-b+1}\mathbb {I}\left\{ \frac{s^2(t_i)}{\int s^2(t)dt}>10^{\frac{x}{10\tau _{b_1}}}\right\} \). Then, (26) is shown.
As in the proof of Slutsky's theorem, we split \(\mathbb {Q}_n(x)\) into the sum of three empirical distribution functions computed over \(S_1\), \(S_2\) and \(S_3\), respectively. Here the random variables \(I_i\) are treated as in the proof of Theorem 2. Based on the argument above, only the component of \(\mathbb {Q}_n(x)\) computed over \(S_1\) has a non-degenerate limit distribution, and this is the same as the asymptotic distribution of the estimator of the variance of the error term. The proof is now complete. \(\square \)
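An end-to-end sketch of the SNR statistic \(SNR=10\log_{10}\left(\sigma_\varepsilon^{-2}\int s^2(t)dt\right)\) analysed in Theorem 3 follows. A crude moving-average smoother stands in for the kernel estimator (9), the Riemann sum approximates the integral, and the synthetic signal has true SNR \(10\log_{10}2\approx 3\) dB; all of these choices are illustrative assumptions:

```python
import numpy as np

def snr_db(signal_power, noise_var):
    """SNR = 10 log10( (integral of s^2) / sigma_eps^2 ) in decibels."""
    return 10 * np.log10(signal_power / noise_var)

rng = np.random.default_rng(3)
n = 5000
t = np.arange(1, n + 1) / n
s = np.sin(2 * np.pi * t)                 # integral of s^2 over (0,1) is 1/2
y = s + 0.5 * rng.standard_normal(n)      # noise variance 1/4 -> SNR ~ 3 dB

# moving-average smoother as a stand-in for the kernel estimator
w = 101
pad = np.pad(y, (w // 2, w // 2), mode="edge")
s_hat = np.convolve(pad, np.ones(w) / w, mode="valid")

V_hat = np.mean((y - s_hat) ** 2)         # estimator of sigma_eps^2
snr_hat = snr_db(np.mean(s_hat**2), V_hat)  # Riemann sum of s_hat^2
```

The estimate lands near the true 3 dB; in the paper, the distribution of this statistic is then approximated by the Monte Carlo subsampling scheme of Theorems 2 and 3 rather than by a Gaussian plug-in.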
Cite this article
Giordano, F., Coretto, P. A Monte Carlo subsampling method for estimating the distribution of signal-to-noise ratio statistics in nonparametric time series regression models. Stat Methods Appl 29, 483–514 (2020). https://doi.org/10.1007/s10260-019-00487-5