
Asymptotic Properties of Least Squares Estimators and Sequential Least Squares Estimators of a Chirp-like Signal Model Parameters

Published in Circuits, Systems, and Signal Processing.

A Correction to this article was published on 23 July 2021


Abstract

The sinusoidal model and the chirp model are two fundamental models in digital signal processing. Recently, a chirp-like model was introduced by Grover et al. (International conference on computing, power and communication technologies, IEEE, pp. 1095–1100, 2018). A chirp-like model generalizes a sinusoidal model and provides an alternative to a chirp model. In this paper, we derive the asymptotic properties of the least squares estimators and the sequential least squares estimators of the parameters of a chirp-like signal model. It is observed, both theoretically and through extensive numerical computations, that the sequential least squares estimators perform on par with the usual least squares estimators, while the computational complexity of the sequential algorithm is significantly lower than that of computing the usual least squares estimators. This reduction is achieved by exploiting the orthogonality structure of the different components of the underlying model. The finite-sample performances of both estimators are illustrated by simulation results. Through analyses of real-life signal data sets, we show that a chirp-like signal model can describe phenomena that are otherwise modeled by a chirp signal model, in a computationally more efficient manner.




Data Availability

The datasets analyzed during the current study are available from the corresponding author on reasonable request.


Notes

  1. A simple elementary chirp model has the following mathematical expression:

    $$\begin{aligned} y(t) = C^0 \cos \left( \beta ^0 t^2\right) + D^0 \sin \left( \beta ^0 t^2\right) + X(t);\ t = 1, \ldots , n. \end{aligned}$$
    (4)

    Here, \(C^0\), \(D^0\) are the amplitudes, \(\beta ^0\) is the chirp rate, and X(t) is the noise. Although a lot of work has been done on the chirp model (2) in recent years, not much attention has been paid to the elementary chirp model. For references on model (4), one may refer to Casazza and Fickus [4] and Mboup and Adali [27].
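As a quick illustration, data from an elementary chirp model of the form (4) can be simulated in a few lines of Python. The function name and the choice of i.i.d. Gaussian noise for X(t) are ours, for illustration only; the paper's assumptions on X(t) are more general.

```python
import numpy as np

def elementary_chirp(n, C0, D0, beta0, sigma=0.0, seed=None):
    """Simulate y(t) = C0*cos(beta0*t^2) + D0*sin(beta0*t^2) + X(t), t = 1, ..., n."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1)
    signal = C0 * np.cos(beta0 * t**2) + D0 * np.sin(beta0 * t**2)
    noise = sigma * rng.standard_normal(n)  # i.i.d. Gaussian stand-in for X(t)
    return signal + noise

y = elementary_chirp(100, C0=2.0, D0=1.0, beta0=0.1)  # noiseless by default
```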

References

  1. T.J. Abatzoglou, Fast maximum likelihood joint estimation of frequency and frequency rate. IEEE Trans. Aerosp. Electron. Syst. 6, 708–715 (1986)


  2. P. Bello, Joint estimation of delay, Doppler, and Doppler rate. IRE Trans. Inf. Theory 6(3), 330–341 (1960)


  3. R.P. Brent, Algorithms for Minimization without Derivatives, chap. 4 (Prentice-Hall, Englewood Cliffs, 1973)

  4. P.G. Casazza, M. Fickus, Fourier transforms of finite chirps. EURASIP J. Appl. Signal Process. 2006, Article ID 70204, 1–7 (2006)

  5. M.G. Christensen, P. Stoica, A. Jakobsson, S.H. Jensen, Multi-pitch estimation. Signal Process. 88(4), 972–983 (2008)


  6. P.M. Djuric, S.M. Kay, Parameter estimation of chirp signals. IEEE Trans. Acoust. Speech Signal Process. 38(12), 2118–2126 (1990)


  7. P. Flandrin, Time-frequency processing of bat sonar signals, in Animal Sonar (Springer, Boston, MA, 1988), pp. 797–802

  8. P. Flandrin, Time frequency and chirps, in Wavelet Applications VIII, vol. 4391 (International Society for Optics and Photonics, 2001), pp. 161–176

  9. W.A. Fuller, Introduction to Statistical Time Series, vol. 428 (Wiley, New York, 2009)


  10. S. Gholami, A. Mahmoudi, E. Farshidi, Two-stage estimator for frequency rate and initial frequency in LFM signal using linear prediction approach. Circuits Syst. Signal Process. 38(1), 105–117 (2019)


  11. F. Gini, M. Luise, R. Reggiannini, Cramer–Rao bounds in the parametric estimation of fading radiotransmission channels. IEEE Trans. Commun. 46(10), 1390–1398 (1998)


  12. R. Grover, D. Kundu, A. Mitra, Chirp-like model and its parameters estimation, in 2018 International Conference on Computing, Power and Communication Technologies (GUCON) (IEEE, 2018), pp. 1095–1100

  13. H. Hassani, D. Thomakos, A review on singular spectrum analysis for economic and financial time series. Stat. Interface 3(3), 377–397 (2010)


  14. M.Z. Ikram, K. Abed-Meraim, Y. Hua, Fast quadratic phase transform for estimating the parameters of multicomponent chirp signals. Digital Signal Process. 7(2), 127–135 (1997)


  15. D.L. Jones, R.G. Baraniuk, An adaptive optimal-kernel time-frequency representation. IEEE Trans. Signal Process. 43(10), 2361–2371 (1995)


  16. S.M. Kay, S.L. Marple, Spectrum analysis—a modern perspective. Proc. IEEE 69(11), 1380–1419 (1981)


  17. E.J. Kelly, The radar measurement of range, velocity and acceleration. IRE Trans. Mil. Electron. MIL-5(2), 51–57 (1961)


  18. D. Kundu, S. Nandi, Parameter estimation of chirp signals in presence of stationary noise. Stat. Sinica 18(1), 187–201 (2008)

  19. D. Kundu, S. Nandi, Statistical Signal Processing: Frequency Estimation (Springer, New Delhi, 2012)

  20. A. Lahiri, Estimators of Parameters of Chirp Signals and Their Properties. PhD thesis, Indian Institute of Technology, Kanpur (2011)

  21. A. Lahiri, D. Kundu, A. Mitra, On least absolute deviation estimators for one-dimensional chirp model. Statistics 48(2), 405–420 (2014)


  22. A. Lahiri, D. Kundu, A. Mitra, Estimating the parameters of multiple chirp signals. J. Multivar. Anal. 139, 189–206 (2015)


  23. L. Li, T. Qiu, A robust parameter estimation of LFM signal based on sigmoid transform under the alpha stable distribution noise. Circuits Syst. Signal Process. 38(7), 3170–3186 (2019)


  24. A.M. Legendre, Nouvelles méthodes pour la détermination des orbites des comètes (F. Didot, Paris, 1805)

  25. N. Ma, D. Vray, Bottom backscattering coefficient estimation from wideband chirp sonar echoes by chirp adapted time-frequency representation, in Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP’98 (Cat. No. 98CH36181), Vol. 4 (IEEE, 1998), pp. 2461–2464

  26. S. Mazumder, Single-step and multiple-step forecasting in one-dimensional single chirp signal using MCMC-based Bayesian analysis. Commun. Stat. Simul. Comput. 46(4), 2529–2547 (2017)


  27. M. Mboup, T. Adali, A generalization of the Fourier transform and its application to spectral analysis of chirp-like signals. Appl. Comput. Harmon. Anal. 32(2), 305–312 (2012)


  28. R. McAulay, T. Quatieri, Speech analysis/synthesis based on a sinusoidal representation. IEEE Trans. Acoust. Speech Signal Process. 34(4), 744–754 (1986)


  29. H.L. Montgomery, Ten Lectures on the Interface Between Analytic Number Theory and Harmonic Analysis (American Mathematical Society, Providence, 1994)


  30. V.K. Murthy, L.J. Haywood, J. Richardson, R. Kalaba, S. Salzberg, G. Harvey, D. Vereeke, Analysis of power spectral densities of electrocardiograms. Math. Biosci. 12(1–2), 41–51 (1971)


  31. S. Nandi, D. Kundu, Asymptotic properties of the least squares estimators of the parameters of the chirp signals. Ann. Inst. Stat. Math. 56(3), 529–544 (2004)


  32. J. Neuberg, R. Luckett, B. Baptie, K. Olsen, Models of tremor and low-frequency earthquake swarms on Montserrat. J. Volcanol. Geoth. Res. 101(1–2), 83–104 (2000)


  33. S. Peleg, B. Porat, Linear FM signal parameter estimation from discrete-time observations. IEEE Trans. Aerosp. Electron. Syst. 27(4), 607–616 (1991)


  34. S. Prasad, M. Chakraborty, H. Parthasarathy, The role of statistics in signal processing—‘a brief review and some emerging trends. Indian J. Pure Appl. Math. 26, 547–578 (1995)


  35. J.A. Rice, M. Rosenblatt, On frequency estimation. Biometrika 75(3), 477–484 (1988)


  36. F.S.G. Richards, A method of maximum-likelihood estimation. J. R. Stat. Soc. Ser. B (Methodol.), 469–475 (1962)

  37. S. Saha, S.M. Kay, Maximum likelihood parameter estimation of superimposed chirps using Monte Carlo importance sampling. IEEE Trans. Signal Process. 50(2), 224–230 (2002)


  38. J. Song, Y. Xu, Y. Liu, Y. Zhang, Investigation on estimator of chirp rate and initial frequency of LFM signals based on modified discrete chirp Fourier transform. Circuits Syst. Signal Process. 38(12), 5861–5882 (2019)


  39. P. Stoica, List of references on spectral line analysis. Signal Process. 31(3), 329–340 (1993)


  40. J.T. VanderPlas, Ž. Ivezić, Periodograms for multiband astronomical time series. Astrophys. J. 812(1), 18 (2015)


  41. G. Wang, X.G. Xia, An adaptive filtering approach to chirp estimation and ISAR imaging of maneuvering targets, in Record of the IEEE 2000 International Radar Conference (Cat. No. 00CH37037) (IEEE, 2000), pp. 481–486


Acknowledgements

The authors would like to thank the anonymous reviewers for their constructive comments, which have helped to improve the manuscript significantly. Part of the work of the second author has been supported by a research grant from the Science and Engineering Research Board, Government of India.

Author information


Corresponding author

Correspondence to Rhythm Grover.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: In this article, the captions to Figures 16, 17 and 18 were inadvertently swapped. The corrected captions are: Figure 16, ‘Speech Signal data sets: “EEE” and “UUU”; Observed data (red solid line) and fitted signal (blue dashed line). The sub-plots on the left represent chirp model fitting and those on the right represent chirp-like model fitting’; Figure 17, ‘Simulated data’; and Figure 18, ‘Simulated data signal along with estimated signal using chirp model’. Also, the missing Figure 19 is inserted.

Appendices

Some Preliminary Results

To provide the proofs of the asymptotic properties established in this manuscript, we will require the following results:

Lemma 1

If \(\phi \in (0, \pi )\), then the following hold true:

  1. (a)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n} \sum \limits _{t=1}^{n}\cos (\phi t) = \lim \limits _{n \rightarrow \infty } \frac{1}{n} \sum \limits _{t=1}^{n}\sin (\phi t) = 0.\)

  2. (b)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n}t^{k} \cos ^2(\phi t) = \lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n}t^{k} \sin ^2(\phi t) = \frac{1}{2(k+1)};\ k = 0, 1, 2, \ldots .\)

  3. (c)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n}t^{k} \sin (\phi t) \cos (\phi t) = 0;\ k = 0, 1, 2, \ldots .\)

Proof

Refer to Kundu and Nandi [19]. \(\square \)
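These limits are easy to check numerically. The sketch below (our own; the values of \(\phi\), k and n are arbitrary choices) evaluates the normalized sums of parts (b) and (c) of Lemma 1 for a large n.

```python
import numpy as np

def trig_averages(n, phi, k):
    # The normalized sums of Lemma 1, parts (b) and (c), for given n, phi, k.
    t = np.arange(1, n + 1, dtype=float)
    s_cos2 = np.sum(t**k * np.cos(phi * t) ** 2) / n ** (k + 1)
    s_sin2 = np.sum(t**k * np.sin(phi * t) ** 2) / n ** (k + 1)
    s_cross = np.sum(t**k * np.sin(phi * t) * np.cos(phi * t)) / n ** (k + 1)
    return s_cos2, s_sin2, s_cross

# For k = 1, part (b) predicts the limit 1/(2(k+1)) = 0.25; part (c) predicts 0.
s_cos2, s_sin2, s_cross = trig_averages(200000, phi=1.3, k=1)
```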

Lemma 2

If \(\phi \in (0, \pi )\), then except for a countable number of points, the following hold true:

  1. (a)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n} \sum \limits _{t=1}^{n}\cos (\phi t^2) = \lim \limits _{n \rightarrow \infty } \frac{1}{n} \sum \limits _{t=1}^{n}\sin (\phi t^2) = 0.\)

  2. (b)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n}t^{k} \cos ^2(\phi t^2) = \lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n}t^{k} \sin ^2(\phi t^2) = \frac{1}{2(k+1)};\ k = 0, 1, 2, \ldots .\)

  3. (c)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n}t^{k} \sin (\phi t^2) \cos (\phi t^2) = 0;\ k = 0, 1, 2, \ldots .\)

Proof

Refer to Lahiri [20]. \(\square \)

Lemma 3

If \((\phi _1, \phi _2) \in (0, \pi ) \times (0, \pi )\), then except for a countable number of points, the following hold true:

  1. (a)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t)\cos (\phi _2 t^2) = 0\)

  2. (b)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t)\sin (\phi _2 t^2) = 0\)

  3. (c)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n} t^k \sin (\phi _1 t)\cos (\phi _2 t^2) = 0\)

  4. (d)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^{k+1}} \sum \limits _{t=1}^{n} t^k \sin (\phi _1 t)\sin (\phi _2 t^2) = 0\)

\(k = 0, 1, 2, \ldots \)

Proof

This proof follows from the number theoretic result proved by Lahiri [20] (see Lemma 2.2.1 of the reference). \(\square \)

Lemma 4

If X(t) satisfies Assumptions 1, 3 and 4, then for \(k \geqslant 0\):

  1. (a)

    \(\sup \limits _{\phi } \bigg |\frac{1}{n^{k+1}} \sum \limits _{t=1}^{n} t^k X(t)e^{i(\phi t)}\bigg | \xrightarrow {a.s.} 0\)

  2. (b)

    \(\sup \limits _{\phi } \bigg |\frac{1}{n^{k+1}} \sum \limits _{t=1}^{n} t^k X(t)e^{i(\phi t^2)}\bigg | \xrightarrow {a.s.} 0\)

Here, \(i = \sqrt{-1}.\)

Proof

These can be obtained as particular cases of Lemma 2.2.2 of Lahiri [20]. \(\square \)
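A Monte Carlo probe of Lemma 4(a) for the k = 0 case: with i.i.d. Gaussian noise standing in for X(t) (a special case of the paper's assumptions) and a finite grid standing in for the supremum over \(\phi\), the statistic should shrink as n grows. The grid size and seed are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)
phis = np.linspace(0.05, np.pi - 0.05, 200)  # finite grid standing in for sup over phi

def sup_stat(n):
    # Approximates sup_phi |(1/n) * sum_t X(t) e^{i phi t}| with i.i.d. N(0,1) X(t).
    x = rng.standard_normal(n)
    t = np.arange(1, n + 1)
    vals = np.abs(np.exp(1j * np.outer(phis, t)) @ x) / n
    return vals.max()

small_n, large_n = sup_stat(100), sup_stat(10000)
# large_n should be much smaller than small_n, in line with the a.s. convergence to 0.
```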

The following is the famous number-theoretic conjecture of Montgomery [29].

Conjecture 1

If \(\theta _1\), \(\theta _2\), \(\theta '_1\), \(\theta '_2\) \(\in \) \((0, \pi )\), then except for a countable number of points:

  1. (a)
    $$\begin{aligned}\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^k \sqrt{n}}\sum _{t=1}^{n} t^k \cos \left( \theta _1 t + \theta _2 t^2\right) \sin \left( \theta '_1 t + \theta '_2 t^2\right) = 0;\ k = 0,1,2, \ldots , \end{aligned}\end{aligned}$$
  2. (b)
    $$\begin{aligned}\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^k \sqrt{n}}\sum _{t=1}^{n} t^k \cos \left( \theta _1 t + \theta _2 t^2\right) \cos \left( \theta '_1 t + \theta '_2 t^2\right) = 0;\ k = 0,1,2, \ldots ,\ \text {if } \theta _2 \ne \theta '_2, \\ \lim _{n \rightarrow \infty } \frac{1}{n^k \sqrt{n}}\sum _{t=1}^{n} t^k \sin \left( \theta _1 t + \theta _2 t^2\right) \sin \left( \theta '_1 t + \theta '_2 t^2\right) = 0;\ k = 0,1,2, \ldots ,\ \text {if } \theta _2 \ne \theta '_2. \end{aligned} \end{aligned}$$

The following conjecture follows from Montgomery’s conjecture:

Conjecture 2

If \((\phi _1, \phi _2) \in (0, \pi ) \times (0, \pi )\), then except for a countable number of points, the following hold true:

  1. (a)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t^2) = 0\)

  2. (b)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \sin (\phi _1 t^2) = 0\)

  3. (c)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t)\cos (\phi _2 t) = 0\)

  4. (d)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t)\sin (\phi _2 t) = 0\)

  5. (e)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \sin (\phi _1 t)\sin (\phi _2 t) = 0\)

  6. (f)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t)\cos (\phi _2 t^2) = 0\)

  7. (g)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k\sqrt{n}} \sum \limits _{t=1}^{n} t^k \cos (\phi _1 t)\sin (\phi _2 t^2) = 0\)

  8. (h)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k \sqrt{n}} \sum \limits _{t=1}^{n} t^k \sin (\phi _1 t)\cos (\phi _2 t^2) = 0\)

  9. (i)

    \(\lim \limits _{n \rightarrow \infty } \frac{1}{n^k \sqrt{n}} \sum \limits _{t=1}^{n} t^k \sin (\phi _1 t)\sin (\phi _2 t^2) = 0\)

\(k = 0, 1, 2, \ldots \).

In the subsequent appendices, we show that if the above conjecture holds, then the asymptotic distribution of the sequential LSEs coincides with that of the usual LSEs.
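Since Conjecture 1 is unproven, no convergence claim can be verified, but the conjectured normalization can at least be probed numerically. The sketch below (our own, with an arbitrary \(\phi\)) evaluates the normalized quadratic sum of Conjecture 2(a) for increasing n and only checks that it stays small in magnitude.

```python
import numpy as np

def normalized_quadratic_sum(n, phi, k=0):
    # (1 / (n^k * sqrt(n))) * sum_t t^k cos(phi * t^2), as in Conjecture 2(a).
    t = np.arange(1, n + 1, dtype=float)
    return np.sum(t**k * np.cos(phi * t**2)) / (n**k * np.sqrt(n))

# Probe the sum at several sample sizes; the conjecture asserts a limit of 0,
# but convergence of such Weyl-type sums is slow and oscillatory.
vals = [normalized_quadratic_sum(n, phi=0.7) for n in (10**3, 10**4, 10**5)]
```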

One Component Chirp-like Model

1.1 Proofs of the Asymptotic Properties of the LSEs

We need the following lemmas to prove the consistency of the LSEs:

Lemma 5

Consider the set \(S_c = \{\varvec{\theta }: |\varvec{\theta } - \varvec{\theta }^0| > c; \varvec{\theta } \in \varvec{\varTheta }\}\). If the following holds true,

$$\begin{aligned} \liminf \inf \limits _{S_c} \frac{1}{n} \left( Q\left( \varvec{\theta }\right) - Q(\varvec{\theta }^0)\right) > 0\ \text {a.s.}, \end{aligned}$$
(17)

then \(\hat{\varvec{\theta }} \xrightarrow {a.s.} \varvec{\theta }^0\) as \(n \rightarrow \infty \).

Proof

Let us denote \(\hat{\varvec{\theta }}\) by \(\hat{\varvec{\theta }}_{n}\) to highlight the fact that the estimates depend on the sample size n. Now suppose \(\hat{\varvec{\theta }}_{n} \nrightarrow \varvec{\theta }^0\); then there exists a subsequence \(\{n_k\}\) of \(\{n\}\) such that \(\hat{\varvec{\theta }}_{n_k} \nrightarrow \varvec{\theta }^0\). In such a situation, one of two cases may arise:

  1. 1.

    \(|{\hat{A}}_{n_k}| + |{\hat{B}}_{n_k}| + |{\hat{C}}_{n_k}| + |{\hat{D}}_{n_k}|\) is not bounded, that is, at least one of the \(|{\hat{A}}_{n_k}|\) or \(|{\hat{B}}_{n_k}|\) or \(|{\hat{C}}_{n_k}|\) or \(|{\hat{D}}_{n_k}|\) \(\rightarrow \infty \) \(\Rightarrow \frac{1}{n_k}Q_{n_k}(\hat{\varvec{\theta }}_{n_k}) \rightarrow \infty \)

    But \(\lim \limits _{n_k \rightarrow \infty } \frac{1}{n_k} Q_{n_k}(\varvec{\theta }^0) < \infty \), which implies \(\frac{1}{n_k} (Q_{n_k}(\hat{\varvec{\theta }}_{n_k}) - Q_{n_k}(\varvec{\theta }^0)) \rightarrow \infty .\) This contradicts the fact that:

    $$\begin{aligned} Q_{n_k}\left( \hat{\varvec{\theta }}_{n_k}\right) \leqslant Q_{n_k}(\varvec{\theta }^0), \end{aligned}$$
    (18)

    which holds true as \(\hat{\varvec{\theta }}_{n_k}\) is the LSE of \(\varvec{\theta }^0\).

  2. 2.

    \(|{\hat{A}}_{n_k}| + |{\hat{B}}_{n_k}| + |{\hat{C}}_{n_k}| + |{\hat{D}}_{n_k}|\) is bounded, then there exists a \(c > 0\) such that \(\hat{\varvec{\theta }}_{n_k} \in S_c\), for all \(k = 1, 2, \ldots \). Now, since (17) is true, this contradicts (18).

Hence, the result. \(\square \)

Proof of Theorem 1:

Consider the difference:

$$\begin{aligned}&\frac{1}{n}\left( Q(\varvec{\theta }) - Q\left( \varvec{\theta }^0\right) \right) \\&\quad = \frac{1}{n}\sum _{t=1}^{n}\left( y(t) - A\cos (\alpha t) - B \sin (\alpha t) - C \cos \left( \beta t^2\right) - D \sin \left( \beta t^2\right) \right) ^2 \\&\qquad - \frac{1}{n}\sum _{t=1}^{n}\left( y(t) - A^0\cos \left( \alpha ^0 t\right) - B^0 \sin \left( \alpha ^0 t\right) - C^0 \cos \left( \beta ^0 t^2\right) - D^0 \sin \left( \beta ^0 t^2\right) \right) ^2 \\&\quad = \frac{1}{n} \sum _{t=1}^{n}\left( A^0 \cos \left( \alpha ^0 t\right) - A \cos (\alpha t) + B^0 \sin \left( \alpha ^0 t\right) \right. \\&\qquad \left. - B \sin (\alpha t) + C^0 \cos \left( \beta ^0 t^2\right) - C \cos (\beta t^2) + D^0 \sin \left( \beta ^0 t^2\right) - D \sin \left( \beta t^2\right) \right) ^2 \\&\qquad + \frac{1}{n} \sum _{t=1}^{n} X(t) \left( A^0 \cos (\alpha ^0 t) - A \cos (\alpha t) + B^0 \sin \left( \alpha ^0 t\right) - B \sin (\alpha t) + C^0 \cos \left( \beta ^0 t^2\right) \right. \\&\quad \left. - C \cos \left( \beta t^2\right) + D^0 \sin \left( \beta ^0 t^2\right) - D \sin \left( \beta t^2\right) \right) \\&\quad = f(\varvec{\theta }) + g(\varvec{\theta }). \end{aligned}$$

Now using Lemma 4, it can be easily seen that:

$$\begin{aligned} \lim _{n \rightarrow \infty } \sup _{\varvec{\theta } \in S_c} g(\varvec{\theta }) = 0\ \text {a.s.} \end{aligned}$$
(19)

Thus, we have:

$$\begin{aligned} \liminf \inf _{\varvec{\theta } \in S_c} \frac{1}{n}\left( Q(\varvec{\theta }) - Q(\varvec{\theta }^0)\right) = \liminf \inf _{\varvec{\theta } \in S_c} f(\varvec{\theta }). \end{aligned}$$

Note that the proof will follow if we show that \( \liminf \inf _{\varvec{\theta } \in S_c} f(\varvec{\theta }) > 0\). Consider the set \(S_c = \{\varvec{\theta }: |\varvec{\theta } - \varvec{\theta }^0| \geqslant 6c; \varvec{\theta } \in \varvec{\varTheta }\} \subset S_c^{(1)} \cup S_c^{(2)} \cup S_c^{(3)} \cup S_c^{(4)} \cup S_c^{(5)} \cup S_c^{(6)}\), where

$$\begin{aligned} S_c^{(1)}&= \left\{ \varvec{\theta }: |A - A^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \qquad S_c^{(2)} = \left\{ \varvec{\theta }: |B - B^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \\ S_c^{(3)}&= \left\{ \varvec{\theta }: |\alpha - \alpha ^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \qquad S_c^{(4)} = \left\{ \varvec{\theta }: |C - C^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \\ S_c^{(5)}&= \left\{ \varvec{\theta }: |D - D^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \qquad S_c^{(6)} = \left\{ \varvec{\theta }: |\beta - \beta ^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \end{aligned}$$

Now, we split the set \(S_c^{(1)}\) as follows:

$$\begin{aligned} S_c^{(1)}&= \left\{ \varvec{\theta }: |A - A^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }\right\} \\&\qquad \subset \left\{ \varvec{\theta }: |A - A^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }; \alpha = \alpha ^0; \beta = \beta ^0 \right\} \\&\qquad \cup \left\{ \varvec{\theta }: |A - A^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }; \alpha \ne \alpha ^0; \beta = \beta ^0 \right\} \\&\qquad \cup \left\{ \varvec{\theta }: |A - A^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }; \alpha = \alpha ^0; \beta \ne \beta ^0 \right\} \\&\qquad \cup \left\{ \varvec{\theta }: |A - A^0| \geqslant c; \varvec{\theta } \in \varvec{\varTheta }; \alpha \ne \alpha ^0; \beta \ne \beta ^0 \right\} \\&= S_c^{(1)_{1}} \cup S_c^{(1)_{2}} \cup S_c^{(1)_{3}} \cup S_c^{(1)_{4}} \end{aligned}$$

Now let us consider:

$$\begin{aligned}&\liminf \inf _{\varvec{\theta } \in S_c^{(1)_{1}}} f(\varvec{\theta }) \\&\quad = \liminf \inf _{\varvec{\theta } \in S_c^{(1)_{1}}} \frac{1}{n} \sum _{t=1}^{n} \left( A^0 \cos \left( \alpha ^0 t\right) - A \cos (\alpha t) + B^0 \sin \left( \alpha ^0 t\right) - B \sin (\alpha t)\right. \\&\qquad + C^0 \cos \left( \beta ^0 t^2\right) - C \cos \left( \beta t^2\right) \\&\qquad \left. + D^0 \sin (\beta ^0 t^2) - D \sin \left( \beta t^2\right) \right) ^2 \\&\quad = \liminf \inf _{\varvec{\theta } \in S_c^{(1)_{1}}} \frac{1}{n} \sum _{t=1}^{n} \left( \left( A^0 - A\right) \cos \left( \alpha ^0 t\right) \right. \\&\qquad \left. + \left( B^0 - B\right) \sin \left( \alpha ^0 t\right) + \left( C^0 - C\right) \cos \left( \beta ^0 t^2\right) + \left( D^0 - D\right) \sin \left( \beta ^0 t^2\right) \right) ^2 \\&\quad = \frac{\left( A^0 - A\right) ^2}{2} + \frac{\left( B^0 - B\right) ^2}{2}\\&\qquad + \frac{\left( C^0 - C\right) ^2}{2} + \frac{\left( D^0 - D\right) ^2}{2}> 0\\&\liminf \inf _{\varvec{\theta } \in S_c^{(1)_{2}}} f(\varvec{\theta }) \\&\quad = \liminf \inf _{\varvec{\theta } \in S_c^{(1)_{1}}} \frac{1}{n} \sum _{t=1}^{n} \left( A^0 \cos \left( \alpha ^0 t\right) \right. \\&\qquad - A \cos (\alpha t) + B^0 \sin \left( \alpha ^0 t\right) - B \sin (\alpha t) + \left( C^0 - C\right) \cos \left( \beta ^0 t^2\right) \\&\qquad \left. + \left( D^0 - D\right) \sin (\beta ^0 t^2)\right) ^2 = \frac{{A^0}^2}{2} + \frac{{A}^2}{2} + \frac{{B^0}^2}{2} + \frac{{B}^2}{2} + \frac{\left( C^0 - C\right) ^2}{2}\\&\qquad + \frac{\left( D^0 - D\right) ^2}{2}> 0\\&\liminf \inf _{\varvec{\theta } \in S_c^{(1)_{3}}} f(\varvec{\theta }) \\&\quad = \liminf \inf _{\varvec{\theta } \in S_c^{(1)_{3}}} \frac{1}{n} \sum _{t=1}^{n} \left( \left( A^0 - A\right) \cos \left( \alpha ^0 t\right) \right. \\&\qquad + \left( B^0 - B\right) \sin \left( \alpha ^0 t\right) + C^0 \cos \left( \beta ^0 t^2\right) - C \cos \left( \beta t^2\right) \\&\qquad \left. 
+ D^0 \sin \left( \beta ^0 t^2\right) - D \sin \left( \beta t^2\right) \right) ^2 = \frac{\left( A^0 - A\right) ^2}{2}\\&\qquad + \frac{\left( B^0 - B\right) ^2}{2} + \frac{{C^0}^2}{2} + \frac{C^2}{2}+ \frac{{D^0}^2}{2} + \frac{{D}^2}{2} > 0\\ \end{aligned}$$

Finally,

$$\begin{aligned}&\liminf \inf _{\varvec{\theta } \in S_c^{(1)_{4}}} f(\varvec{\theta }) \\&\quad = \liminf \inf _{\varvec{\theta } \in S_c^{(1)_{1}}} \frac{1}{n} \sum _{t=1}^{n} \left( A^0 \cos \left( \alpha ^0 t\right) - A \cos \left( \alpha t\right) \right. \\&\qquad + B^0 \sin (\alpha ^0 t) - B \sin (\alpha t) + C^0 \cos \left( \beta ^0 t^2\right) - C \cos \left( \beta t^2\right) \\&\qquad \left. + D^0 \sin \left( \beta ^0 t^2\right) - D \sin \left( \beta t^2\right) \right) ^2 = \frac{{A^0}^2}{2} + \frac{{A}^2}{2} + \frac{{B^0}^2}{2}\\&\qquad + \frac{{B}^2}{2} + \frac{{C^0}^2}{2} + \frac{C^2}{2} + \frac{{D^0}^2}{2} + \frac{{D}^2}{2} > 0 \end{aligned}$$

Note that we used Lemmas 1 and 2 in all the above computations of the limits. On combining all the above, we have \( \liminf \inf \limits _{\varvec{\theta } \in S_c^{(1)}} f(\varvec{\theta }) > 0.\) Similarly, it can be shown that the result holds for the rest of the sets. Therefore, by Lemma 5, \(\hat{\varvec{\theta }}\) is a strongly consistent estimator of \(\varvec{\theta }^0\). \(\square \)
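In practice, the LSEs whose consistency was just established can be computed by profiling out the linear amplitude parameters (A, B, C, D) by ordinary least squares and searching over the two nonlinear parameters \((\alpha , \beta )\). This separable-regression sketch is ours; the grid ranges, true parameter values and noise level are illustrative choices, and a practical implementation would refine the grid search with a local optimizer.

```python
import numpy as np

def profiled_rss(freqs, y):
    # Residual sum of squares Q with the amplitudes (A, B, C, D) profiled out.
    alpha, beta = freqs
    t = np.arange(1, len(y) + 1)
    X = np.column_stack([np.cos(alpha * t), np.sin(alpha * t),
                         np.cos(beta * t**2), np.sin(beta * t**2)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ coef
    return r @ r

# Simulate a one-component chirp-like signal and recover (alpha, beta) by grid search.
rng = np.random.default_rng(1)
n, alpha0, beta0 = 500, 1.5, 0.1
t = np.arange(1, n + 1)
y = (2.0 * np.cos(alpha0 * t) + 1.0 * np.sin(alpha0 * t)
     + 1.5 * np.cos(beta0 * t**2) + 0.5 * np.sin(beta0 * t**2)
     + 0.5 * rng.standard_normal(n))
grid = [(a, b) for a in np.linspace(1.4, 1.6, 41) for b in np.linspace(0.09, 0.11, 41)]
alpha_hat, beta_hat = min(grid, key=lambda fr: profiled_rss(fr, y))
```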

Proof of Theorem 2:

To obtain the asymptotic distribution of the LSEs, we express \(\mathbf{Q }'(\hat{\varvec{\theta }})\) using a multivariate Taylor series expansion around the point \(\varvec{\theta }^0\), as follows:

$$\begin{aligned} \mathbf{Q }'\left( \hat{\varvec{\theta }}\right) - \mathbf{Q }'\left( \varvec{\theta }^0\right) = \left( \hat{\varvec{\theta }} - \varvec{\theta }^0\right) \mathbf{Q }''\left( \bar{\varvec{\theta }}\right) . \end{aligned}$$
(20)

Here, \(\bar{\varvec{\theta }}\) is a point between \(\hat{\varvec{\theta }}\) and \(\varvec{\theta }^0\). Since \(\hat{\varvec{\theta }}\) is the LSE of \(\varvec{\theta }^0\), \(\mathbf{Q }'(\hat{\varvec{\theta }}) = 0\). Thus, we have:

$$\begin{aligned} \left( \hat{\varvec{\theta }} - \varvec{\theta }^0\right) = - \mathbf{Q }'\left( \varvec{\theta }^0\right) \left[ \mathbf{Q }''\left( \bar{\varvec{\theta }}\right) \right] ^{-1}. \end{aligned}$$
(21)

Multiplying both sides of (21) by the \(6 \times 6\) diagonal matrix \(\mathbf{D } = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{n\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{n^2\sqrt{n}})\), we get:

$$\begin{aligned} \left( \hat{\varvec{\theta }} - \varvec{\theta }^0\right) \mathbf{D }^{-1} = - \mathbf{Q }'\left( \varvec{\theta }^0\right) \mathbf{D }\left[ \mathbf{D }\mathbf{Q }''\left( \bar{\varvec{\theta }}\right) \mathbf{D }\right] ^{-1}. \end{aligned}$$
(22)

First, we will show that:

$$\begin{aligned} \mathbf{Q }'\left( \varvec{\theta }^0\right) \mathbf{D } \xrightarrow {d} N\left( 0, 4c \sigma ^2 \varvec{\varSigma }\right) \text { as } n \rightarrow \infty . \end{aligned}$$
(23)

Here,

$$\begin{aligned} \varvec{\varSigma } = \begin{pmatrix} \frac{1}{2} &{} \quad 0 &{} \quad \frac{B^0}{4} &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad \frac{1}{2} &{} \quad \frac{-A^0}{4} &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ \frac{B^0}{4} &{} \quad \frac{-A^0}{4} &{} \quad \frac{{A^0}^2 + {B^0}^2}{6} &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \frac{1}{2} &{} \quad 0 &{} \quad \frac{D^0}{6} \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad \frac{1}{2} &{} \quad \frac{-C^0}{6} \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad \frac{D^0}{6} &{} \quad \frac{-C^0}{6} &{} \quad \frac{{C^0}^2 +{D^0}^2}{10} \end{pmatrix} \end{aligned}$$
(24)

To prove (23), we compute the elements of the \(6 \times 1\) vector

\(\mathbf{Q }'(\varvec{\theta }^0)\mathbf{D } = \begin{pmatrix} \frac{1}{\sqrt{n}}\frac{\partial Q(\varvec{\theta })}{\partial A}&\frac{1}{\sqrt{n}}\frac{\partial Q(\varvec{\theta })}{\partial B}&\frac{1}{n\sqrt{n}}\frac{\partial Q(\varvec{\theta })}{\partial \alpha }&\frac{1}{\sqrt{n}}\frac{\partial Q(\varvec{\theta })}{\partial C}&\frac{1}{\sqrt{n}}\frac{\partial Q(\varvec{\theta })}{\partial D}&\frac{1}{n^2\sqrt{n}}\frac{\partial Q(\varvec{\theta })}{\partial \beta } \end{pmatrix}\) as follows:

$$\begin{aligned}&\frac{1}{\sqrt{n}}\frac{\partial Q\left( \varvec{\theta }\right) }{\partial A} = \frac{-2}{\sqrt{n}} \sum _{t=1}^{n} \left( y(t) - A \cos (\alpha t)\right. \\&\left. - B \sin (\alpha t) - C \cos (\beta t^2) - D \sin (\beta t^2)\right) \cos (\alpha t)\\&\quad \Rightarrow \frac{1}{\sqrt{n}}\frac{\partial Q(\varvec{\theta }^0)}{\partial A} = \frac{-2}{\sqrt{n}} \sum _{t=1}^{n} X(t)\cos \left( \alpha ^0 t\right) . \end{aligned}$$

Similarly, the rest of the elements can be computed and we get:

$$\begin{aligned} \mathbf{Q }'(\varvec{\theta }^0)\mathbf{D } = \begin{pmatrix} \frac{-2}{\sqrt{n}} \sum \limits _{t=1}^{n} X(t)\cos \left( \alpha ^0 t\right) \\ \frac{-2}{\sqrt{n}} \sum \limits _{t=1}^{n} X(t)\sin \left( \alpha ^0 t\right) \\ \frac{-2}{n\sqrt{n}} \sum \limits _{t=1}^{n} t X(t)\left( -A^0\sin \left( \alpha ^0 t\right) + B^0 \cos \left( \alpha ^0 t\right) \right) \\ \frac{-2}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t)\cos \left( \beta ^0 t^2\right) \\ \frac{-2}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t)\sin \left( \beta ^0 t^2\right) \\ \frac{-2}{n^2\sqrt{n}} \sum \limits _{t=1}^{n} t^2 X(t)\left( -C^0\sin \left( \beta ^0 t^2\right) + D^0 \cos \left( \beta ^0 t^2\right) \right) \end{pmatrix}. \end{aligned}$$

Now using the central limit theorem (CLT) for stochastic processes (see Fuller [9]), the above vector converges to a six-variate Gaussian distribution with mean \(\mathbf{0 }\) and covariance matrix \(4 c \sigma ^2 \varvec{\varSigma }\), and hence (23) holds true. Now, we consider the second-derivative matrix \(\mathbf{D }\mathbf{Q }''(\bar{\varvec{\theta }})\mathbf{D }\). Note that since \(\hat{\varvec{\theta }} \xrightarrow {a.s.} \varvec{\theta }^0\) as \(n \rightarrow \infty \) and \(\bar{\varvec{\theta }}\) is a point between \(\hat{\varvec{\theta }}\) and \(\varvec{\theta }^0\),

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \mathbf{D }\mathbf{Q }''\left( \bar{\varvec{\theta }}\right) \mathbf{D } = \lim \limits _{n \rightarrow \infty } \mathbf{D }\mathbf{Q }''\left( \varvec{\theta }^0\right) \mathbf{D }. \end{aligned}$$

Using Lemmas 1, 2, 3 and 4 and after some calculations, it can be shown that:

$$\begin{aligned} \mathbf{D }\mathbf{Q }''\left( \varvec{\theta }^0\right) \mathbf{D } = 2\varvec{\varSigma }, \end{aligned}$$
(25)

where \(\varvec{\varSigma }\) is as defined in (24). On combining (22), (23) and (25), the desired result follows. \(\square \)
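The block-diagonal structure of \(\varvec{\varSigma }\) in (24), with the sinusoidal block (A, B, \(\alpha \)) decoupled from the chirp block (C, D, \(\beta \)), is the structural fact the sequential algorithm exploits, and combining (22), (23) and (25) shows that the asymptotic covariance of the normalized LSEs is \(c\sigma ^2 \varvec{\varSigma }^{-1}\). A small sketch (ours) builds the matrix and checks that it is symmetric, block-diagonal and invertible for nonzero amplitudes.

```python
import numpy as np

def sigma_matrix(A0, B0, C0, D0):
    # Builds the 6 x 6 matrix Sigma of (24); parameter order (A, B, alpha, C, D, beta).
    S = np.zeros((6, 6))
    S[0, 0] = S[1, 1] = S[3, 3] = S[4, 4] = 0.5
    S[0, 2] = S[2, 0] = B0 / 4
    S[1, 2] = S[2, 1] = -A0 / 4
    S[2, 2] = (A0**2 + B0**2) / 6
    S[3, 5] = S[5, 3] = D0 / 6
    S[4, 5] = S[5, 4] = -C0 / 6
    S[5, 5] = (C0**2 + D0**2) / 10
    return S

S = sigma_matrix(2.0, 1.0, 1.5, 0.5)   # illustrative amplitude values
Sigma_inv = np.linalg.inv(S)           # scaled by c * sigma^2, the asymptotic covariance
```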

1.2 Proofs of the Asymptotic Properties of the Sequential LSEs

The following lemmas are required to prove the consistency of the sequential LSEs:

Lemma 6

Let us define the set \(M_c = \{\varvec{\theta }^{(1)}: |\varvec{\theta }^{(1)} - {\varvec{\theta }^0}^{(1)}| \geqslant 3 c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}\}\). If the following holds true,

$$\begin{aligned} \liminf \inf \limits _{M_c} \frac{1}{n} \left( Q_1\left( \varvec{\theta }^{(1)}\right) - Q_1\left( {\varvec{\theta }^0}^{(1)}\right) \right) > 0\ \text {a.s.} \end{aligned}$$
(26)

then \(\tilde{\varvec{\theta }}^{(1)} \xrightarrow {a.s.} {\varvec{\theta }^0}^{(1)}\) as \(n \rightarrow \infty \).

Proof

This can be proved by contradiction along the same lines as Lemma 5. \(\square \)

Lemma 7

Let us define the set \(N_c = \{\varvec{\theta }^{(2)} : \varvec{\theta }^{(2)} \in \varvec{\varTheta }^{(2)} ;\ |\varvec{\theta }^{(2)} - {\varvec{\theta }^0}^{(2)}| \geqslant 3c\}.\) If for any \(c>0\),

$$\begin{aligned} \liminf \inf \limits _{\varvec{\theta }^{(2)} \in N_c} \frac{1}{n} \left( Q_2\left( \varvec{\theta }^{(2)}\right) - Q_2\left( {\varvec{\theta }^0}^{(2)}\right) \right) > 0 \quad \text {a.s.}, \end{aligned}$$
(27)

then \(\tilde{\varvec{\theta }}^{(2)} \xrightarrow {a.s.} {\varvec{\theta }^0}^{(2)}\) as \(n \rightarrow \infty .\)

Proof

This can be proved by contradiction along the same lines as Lemma 5. \(\square \)

Proof of Theorem 3:

First we prove the consistency of the parameter estimates of the sinusoidal component, \(\tilde{\varvec{\theta }}^{(1)}\). For this, consider the difference:

$$\begin{aligned}&\frac{1}{n}\left( Q_1\left( \varvec{\theta }^{(1)}\right) - Q_1\left( {\varvec{\theta }^0}^{(1)}\right) \right) \\&\quad = \frac{1}{n}\left[ \sum _{t=1}^{n}\left( y(t) - A\cos (\alpha t) - B\sin (\alpha t)\right) ^2\right. \\&\qquad \left. - \left( y(t) - A^0 \cos \left( \alpha ^0 t\right) - B^0 \sin \left( \alpha ^0 t\right) \right) ^2 \right] \\&\quad = \frac{1}{n} \sum _{t=1}^n\left( A^0 \cos \left( \alpha ^0 t\right) - A\cos (\alpha t) + B^0 \sin \left( \alpha ^0 t\right) \right. \\&\qquad \left. - B \sin (\alpha t) + C^0 \cos \left( \beta ^0 t^2\right) + D^0 \sin \left( \beta ^0 t^2\right) + X(t)\right) ^2\\&\qquad - \frac{1}{n}\sum _{t=1}^{n}\left( C^0 \cos \left( \beta ^0 t^2\right) + D^0 \sin \left( \beta ^0 t^2\right) + X(t)\right) ^2 \\&\quad = \frac{1}{n} \sum _{t=1}^n\left( A^0 \cos \left( \alpha ^0 t\right) + B^0 \sin \left( \alpha ^0 t\right) - A\cos (\alpha t) - B \sin (\alpha t) \right) ^2\\&\qquad + \frac{2}{n}\sum _{t=1}^n\left( C^0 \cos \left( \beta ^0 t^2\right) + D^0 \sin \left( \beta ^0 t^2\right) + X(t) \right) \left( A^0 \cos \left( \alpha ^0 t\right) \right. \\&\qquad \left. + B^0 \sin \left( \alpha ^0 t\right) - A\cos (\alpha t) - B\sin (\alpha t)\right) \\&\quad = f_1\left( \varvec{\theta }^{(1)}\right) + g_1\left( \varvec{\theta }^{(1)}\right) . \end{aligned}$$

Here,

$$\begin{aligned} f_1\left( \varvec{\theta }^{(1)}\right)= & {} \frac{1}{n} \sum _{t=1}^n\left( A^0 \cos \left( \alpha ^0 t\right) + B^0 \sin \left( \alpha ^0 t\right) - A\cos (\alpha t) - B \sin (\alpha t) \right) ^2 \quad \text {and} \\ g_1\left( \varvec{\theta }^{(1)}\right)= & {} \frac{2}{n} \sum _{t=1}^{n}\left( C^0 \cos \left( \beta ^0 t^2\right) + D^0 \sin \left( \beta ^0 t^2\right) + X(t) \right) \left( A^0 \cos \left( \alpha ^0 t\right) \right. \\&\left. + B^0 \sin \left( \alpha ^0 t\right) - A\cos (\alpha t) - B \sin (\alpha t)\right) . \end{aligned}$$

Now using Lemmas 3 and 4, it is easy to see that:

$$\begin{aligned} \sup \limits _{\varvec{\theta } \in M_c} |g_1\left( \varvec{\theta }^{(1)}\right) | \xrightarrow {a.s.} 0. \end{aligned}$$
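The vanishing of \(g_1\) over \(M_c\) rests on the asymptotic orthogonality between the sinusoidal and chirp regressors, that is, mixed averages such as \(\frac{1}{n}\sum _{t=1}^n \cos (\alpha t)\cos (\beta t^2)\) tend to zero. A minimal numerical illustration of this fact (the frequency values below are arbitrary choices made only for the sketch, not taken from the paper):

```python
import numpy as np

n = 1_000_000
t = np.arange(1, n + 1, dtype=np.float64)
alpha, beta = 1.5, 0.1  # illustrative frequencies, not from the paper

# Mixed sinusoid-chirp average: (1/n) * sum_t cos(alpha t) cos(beta t^2).
# The chirp phase beta*t^2 equidistributes mod 2*pi, so the product
# averages out, at rate roughly n^(-1/2).
cross = np.mean(np.cos(alpha * t) * np.cos(beta * t**2))
print(cross)
```

By contrast, a matched average such as \(\frac{1}{n}\sum \cos ^2(\alpha t)\) tends to \(\frac{1}{2}\), which is why \(f_1\) stays bounded away from zero on \(M_c\) while the cross term \(g_1\) does not contribute.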

Thus, if we prove that \(\liminf \inf \limits _{M_c}f_1(\varvec{\theta }^{(1)}) > 0\) a.s., it will follow that \(\liminf \inf \limits _{M_c} \frac{1}{n} (Q_1(\varvec{\theta }^{(1)}) - Q_1({\varvec{\theta }^0}^{(1)})) > 0\) a.s. Recall the set \(M_c = \{\varvec{\theta }^{(1)}: |\varvec{\theta }^{(1)} - {\varvec{\theta }^0}^{(1)}| \geqslant 3c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}\}\). It is evident that:

$$\begin{aligned} M_c \subset M_c^{(1)} \cup M_c^{(2)} \cup M_c^{(3)}, \end{aligned}$$

where \(M_c^{(1)} = \{\varvec{\theta }^{(1)}: |A - A^0| \geqslant c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}\}\), \(M_c^{(2)} = \{\varvec{\theta }^{(1)}: |B - B^0| \geqslant c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}\}\) and \(M_c^{(3)} = \{\varvec{\theta }^{(1)}: |\alpha - \alpha ^0| \geqslant c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}\}.\) Now, we further split the set \(M_c^{(1)}\) which can be written as: \(M_c^{(1)_{1}} \cup M_c^{(1)_{2}}\), where

$$\begin{aligned} M_c^{(1)_{1}}&= \left\{ \varvec{\theta }^{(1)}: |A - A^0| \geqslant c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}; \alpha = \alpha ^0\right\} \quad \text {and} \\ M_c^{(1)_{2}}&= \left\{ \varvec{\theta }^{(1)}: |A - A^0| \geqslant c; \varvec{\theta }^{(1)} \in \varvec{\varTheta }^{(1)}; \alpha \ne \alpha ^0\right\} . \end{aligned}$$

Consider,

$$\begin{aligned} \liminf \inf \limits _{M_c^{(1)_{1}}}f_1\left( \varvec{\theta }^{(1)}\right)&= \liminf \inf \limits _{M_c^{(1)_{1}}} \frac{1}{n} \sum _{t=1}^n\left( A^0 \cos \left( \alpha ^0 t\right) \right. \\&\quad \left. + B^0 \sin \left( \alpha ^0 t\right) - A\cos (\alpha t) - B \sin (\alpha t)\right) ^2 \\&= \frac{\left( A^0 - A\right) ^2}{2} + \frac{(B^0 - B)^2}{2} > 0 \quad \text {a.s. (using Lemma 1).} \end{aligned}$$

Again, using Lemma 1,

$$\begin{aligned}&\liminf \inf \limits _{M_c^{(1)_{2}}} \frac{1}{n} \sum _{t=1}^n\left( A^0 \cos \left( \alpha ^0 t\right) + B^0 \sin \left( \alpha ^0 t\right) - A\cos (\alpha t) - B \sin (\alpha t) \right) ^2 \\&\quad = \frac{{A^0}^2}{2} + \frac{{B^0}^2}{2} + \frac{A^2}{2} + \frac{B^2}{2} > 0 \quad \text {a.s.} \end{aligned}$$

Similarly, it can be shown that \(\liminf \inf \limits _{M_c^{(2)}}f_1(\varvec{\theta }^{(1)}) > 0\) a.s. and \(\liminf \inf \limits _{M_c^{(3)}}f_1(\varvec{\theta }^{(1)}) > 0\) a.s. Now using Lemma 6, \({\tilde{A}}\), \({\tilde{B}}\) and \({\tilde{\alpha }}\) are strongly consistent estimators of \(A^0\), \(B^0\) and \(\alpha ^0\), respectively. To prove the consistency of the sequential estimates of the chirp parameters, \({\tilde{C}}\), \({\tilde{D}}\) and \({\tilde{\beta }}\), we need the following lemma:

Lemma 8

If Assumptions 1, 2 and 3 are satisfied, then:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}^{(1)} - {\varvec{\theta }^0}^{(1)}\right) \left( \sqrt{n}\mathbf{D }_1\right) ^{-1} \xrightarrow {a.s.} 0. \end{aligned}$$

Here, \(\mathbf{D }_1 = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{n\sqrt{n}})\).

Proof

Consider the error sum of squares: \(Q_1(\varvec{\theta }^{(1)}) = \sum \limits _{t=1}^n(y(t) - A \cos (\alpha t) - B \sin (\alpha t))^2.\)

By Taylor series expansion of \(\mathbf{Q }_1'(\tilde{\varvec{\theta }}^{(1)})\) around the point \({\varvec{\theta }^0}^{(1)}\), we get:

$$\begin{aligned} \mathbf{Q }_1'\left( \tilde{\varvec{\theta }}^{(1)}\right) - \mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) = \left( \tilde{\varvec{\theta }}^{(1)} - {\varvec{\theta }^0}^{(1)}\right) \mathbf{Q }_1''\left( \bar{\varvec{\theta }}^{(1)}\right) \end{aligned}$$
(28)

where \(\bar{\varvec{\theta }}^{(1)}\) is a point lying between \(\tilde{\varvec{\theta }}^{(1)}\) and \({\varvec{\theta }^0}^{(1)}\). Since \(\tilde{\varvec{\theta }}^{(1)}\) minimizes \(Q_1(\varvec{\theta }^{(1)})\), it implies that \(\mathbf{Q }_1'(\tilde{\varvec{\theta }}^{(1)}) = 0\), and therefore, (28) can be written as:

$$\begin{aligned}&\left( \tilde{\varvec{\theta }}^{(1)} - {\varvec{\theta }^0}^{(1)}\right) = - \mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) \left[ \mathbf{Q }_1''\left( \bar{\varvec{\theta }}^{(1)}\right) \right] ^{-1} \end{aligned}$$
(29)
$$\begin{aligned}&\Rightarrow \left( \tilde{\varvec{\theta }}^{(1)} -{\varvec{\theta }^0}^{(1)}\right) (\sqrt{n}\mathbf{D }_1)^{-1} = \left[ - \frac{1}{\sqrt{n}}\mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1\right] \left[ \mathbf{D }_1 \mathbf{Q }_1''\left( \bar{\varvec{\theta }}^{(1)}\right) \mathbf{D }_1\right] ^{-1} \end{aligned}$$
(30)

Now let us calculate the right-hand side explicitly. First consider the first-derivative vector \(\frac{1}{\sqrt{n}}\mathbf{Q }_1'({\varvec{\theta }^0}^{(1)})\mathbf{D }_1\).

$$\begin{aligned} \frac{1}{\sqrt{n}}\mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1 = \begin{pmatrix} \frac{1}{n}\frac{\partial Q_1\left( {\varvec{\theta }^0}^{(1)}\right) }{\partial A}&\frac{1}{n}\frac{\partial Q_1\left( {\varvec{\theta }^0}^{(1)}\right) }{\partial B}&\frac{1}{n^2}\frac{\partial Q_1\left( {\varvec{\theta }^0}^{(1)}\right) }{\partial \alpha } \end{pmatrix} \end{aligned}$$

By straightforward calculations and using Lemmas 3 and 4(a), one can easily see that:

$$\begin{aligned} \frac{1}{\sqrt{n}}\mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1 \xrightarrow {a.s.} 0. \end{aligned}$$
(31)

Now let us consider the second-derivative matrix \(\mathbf{D }_1 \mathbf{Q }_1''(\bar{\varvec{\theta }}^{(1)})\mathbf{D }_1\). Since \(\tilde{\varvec{\theta }}^{(1)} \xrightarrow {a.s.} {\varvec{\theta }^0}^{(1)}\) and \(\bar{\varvec{\theta }}^{(1)}\) is a point between them, we have:

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbf{D }_1 \mathbf{Q }_1''\left( \bar{\varvec{\theta }}^{(1)}\right) \mathbf{D }_1 = \lim _{n \rightarrow \infty } \mathbf{D }_1 \mathbf{Q }_1''\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1. \end{aligned}$$

Again by routine calculations and using Lemmas 1, 3 and 4(a), one can evaluate each element of this \(3 \times 3\) matrix and get:

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbf{D }_1 \mathbf{Q }_1''\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1 = 2 \varvec{\varSigma }_1, \end{aligned}$$
(32)

where \(\varvec{\varSigma }_1 = \begin{pmatrix} \frac{1}{2} &{} 0 &{} \frac{B^0}{4} \\ 0 &{} \frac{1}{2} &{} \frac{-A^0}{4} \\ \frac{B^0}{4} &{} \frac{-A^0}{4} &{} \frac{{A^0}^2 + {B^0}^2}{6} \\ \end{pmatrix} > 0,\) a positive definite matrix. Hence, combining (31) and (32), we get the desired result. \(\square \)
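As a numerical aside (not part of the proof), positive definiteness of \(\varvec{\varSigma }_1\) is easy to verify directly: its leading principal minors are \(\frac{1}{2}\), \(\frac{1}{4}\) and \(\det \varvec{\varSigma }_1 = \frac{{A^0}^2+{B^0}^2}{96}\), all positive whenever \({A^0}^2+{B^0}^2 > 0\). A short check, with arbitrary illustrative amplitude values:

```python
import numpy as np

def sigma1(A, B):
    # The limiting matrix Sigma_1 from (32)
    return np.array([
        [0.5,    0.0,    B / 4],
        [0.0,    0.5,   -A / 4],
        [B / 4, -A / 4, (A**2 + B**2) / 6],
    ])

A0, B0 = 3.0, 2.0                # illustrative amplitudes, not from the paper
S = sigma1(A0, B0)

eigvals = np.linalg.eigvalsh(S)  # symmetric matrix -> real eigenvalues
print(eigvals.min() > 0)         # True: Sigma_1 is positive definite
print(np.isclose(np.linalg.det(S), (A0**2 + B0**2) / 96))  # True
```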

Using the above lemma, we get the following relationship between the sinusoidal component of the model and its estimate:

$$\begin{aligned} {\tilde{A}} \cos ({\tilde{\alpha }} t) + {\tilde{B}} \sin ({\tilde{\alpha }} t) = A^0 \cos (\alpha ^0 t) + B^0 \sin (\alpha ^0 t) + o(1) \end{aligned}$$
(33)

Now to prove the consistency of \(\tilde{\varvec{\theta }}^{(2)} = ({\tilde{C}}, {\tilde{D}}, {\tilde{\beta }})\), we consider the following difference:

$$\begin{aligned}&\frac{1}{n}\left( Q_2\left( {\varvec{\theta }}^{(2)}\right) - Q_2\left( {\varvec{\theta }^0}^{(2)}\right) \right) \\&\quad = \frac{1}{n}\left[ \sum _{t=1}^{n}\left( y_1(t) - C\cos \left( \beta t^2\right) - D\sin \left( \beta t^2\right) \right) ^2\right. \\&\qquad \left. - \left( y_1(t) - C^0 \cos \left( \beta ^0 t^2\right) - D^0 \sin \left( \beta ^0 t^2\right) \right) ^2 \right] \\&\quad = \frac{1}{n} \sum _{t=1}^n\left( C^0 \cos \left( \beta ^0 t^2\right) \right. \\&\qquad \left. + D^0 \sin \left( \beta ^0 t^2\right) - C\cos \left( \beta t^2\right) - D \sin \left( \beta t^2\right) \right) ^2\\&\qquad + \frac{2}{n}\sum _{t=1}^n\left( A^0 \cos \left( \alpha ^0 t\right) + B^0 \sin \left( \alpha ^0 t\right) + X(t) \right) \left( C^0 \cos \left( \beta ^0 t^2\right) \right. \\&\qquad \left. + D^0 \sin \left( \beta ^0 t^2\right) - C\cos \left( \beta t^2\right) - D \sin \left( \beta t^2\right) \right) \\&\quad = f_2\left( {\varvec{\theta }}^{(2)}\right) + g_2\left( {\varvec{\theta }}^{(2)}\right) . \end{aligned}$$

Using Lemmas 3 and 4, we have

$$\begin{aligned} \sup \limits _{{\varvec{\theta }} \in N_c} |g_2\left( {\varvec{\theta }}^{(2)}\right) | \xrightarrow {a.s.} 0, \end{aligned}$$

and using straightforward but lengthy calculations, and splitting the set \(N_c\) in the same manner as the set \(M_c\) above, it can be shown that \(\liminf \inf \limits _{\varvec{\theta }^{(2)} \in N_c} f_2({\varvec{\theta }}^{(2)}) > 0\) a.s.

Thus, by Lemma 7, \(\tilde{\varvec{\theta }}^{(2)} \xrightarrow {a.s.} {\varvec{\theta }^0}^{(2)}\) as \(n \rightarrow \infty \). Hence, the result. \(\square \)
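The sequential scheme whose consistency was just established can be sketched in a few lines of simulation code. Everything below is illustrative: the parameter values, grids and noise level are assumptions made only for the sketch, and each frequency search uses the standard periodogram-type surrogate for the least squares criterion.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
t = np.arange(1, n + 1)

# Assumed true parameter values (illustrative only, not from the paper)
A0, B0, alpha0 = 3.0, 2.0, 1.5      # sinusoidal component
C0, D0, beta0 = 2.0, 1.0, 0.1       # chirp component
y = (A0 * np.cos(alpha0 * t) + B0 * np.sin(alpha0 * t)
     + C0 * np.cos(beta0 * t**2) + D0 * np.sin(beta0 * t**2)
     + rng.normal(0.0, 0.5, n))

def periodogram_peak(x, grid, phase):
    """Grid frequency maximizing |sum_t x(t) exp(i * phase(f)_t)|^2."""
    P = np.array([(x @ np.cos(phase(f)))**2 + (x @ np.sin(phase(f)))**2
                  for f in grid])
    return grid[P.argmax()]

# Step 1: estimate the sinusoidal component (coarse grid, then refinement);
# the amplitudes follow from asymptotic orthogonality of the regressors.
g = periodogram_peak(y, np.arange(0.05, np.pi, 1e-3), lambda f: f * t)
alpha_t = periodogram_peak(y, np.arange(g - 2e-3, g + 2e-3, 1e-5), lambda f: f * t)
A_t = 2.0 / n * np.sum(y * np.cos(alpha_t * t))
B_t = 2.0 / n * np.sum(y * np.sin(alpha_t * t))

# Step 2: subtract the fitted sinusoid, then estimate the chirp component
# from the residual y_1(t); the beta grid must be finer (peak width ~ 1/n^2).
y1 = y - A_t * np.cos(alpha_t * t) - B_t * np.sin(alpha_t * t)
g = periodogram_peak(y1, np.arange(0.01, 0.5, 5e-5), lambda f: f * t**2)
beta_t = periodogram_peak(y1, np.arange(g - 1e-4, g + 1e-4, 1e-6), lambda f: f * t**2)
C_t = 2.0 / n * np.sum(y1 * np.cos(beta_t * t**2))
D_t = 2.0 / n * np.sum(y1 * np.sin(beta_t * t**2))

print(round(alpha_t, 4), round(beta_t, 5))   # close to 1.5 and 0.1
```

Each stage is a one-dimensional frequency search, which is the computational saving of the sequential method over a joint search in \((\alpha , \beta )\).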

Proof of Theorem 4:

We first examine the asymptotic distribution of the sequential estimates of the sinusoidal component, that is, of \(\tilde{\varvec{\theta }}^{(1)}\). From (29), we have:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}^{(1)} - {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1^{-1} = - \mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1\left[ \mathbf{D }_1\mathbf{Q }_1''\left( \bar{\varvec{\theta }}^{(1)}\right) \mathbf{D }_1\right] ^{-1}. \end{aligned}$$

First, we show that \(\mathbf{Q }_1'({\varvec{\theta }^0}^{(1)})\mathbf{D }_1 \rightarrow N_3(0, 4 \sigma ^2 c \varvec{\varSigma }_1).\) We compute the elements of the derivative vector \(\mathbf{Q }_1'({\varvec{\theta }^0}^{(1)})\) and using Conjecture 2(e), (f), (g) and (h) (see Sect. A), we obtain:

$$\begin{aligned} \mathbf{Q }_1'\left( {\varvec{\theta }^0}^{(1)}\right) \mathbf{D }_1 \overset{a.eq.}{=} -2 \begin{pmatrix} \frac{1}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t) \cos \left( \alpha ^0 t\right) \\ \frac{1}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t) \sin \left( \alpha ^0 t\right) \\ \frac{1}{n\sqrt{n}}\sum \limits _{t=1}^{n} t X(t)\left( -A^0\sin \left( \alpha ^0 t\right) + B^0 \cos \left( \alpha ^0 t\right) \right) \end{pmatrix}. \end{aligned}$$
(34)

Here, \(\overset{a.eq.}{=}\) means asymptotically equivalent. Now again using the CLT, the right-hand side of (34) tends to a three-variate Gaussian distribution with mean 0 and variance–covariance matrix \(4 \sigma ^2 c \varvec{\varSigma }_1.\) Using this and (32), we have the desired result.

Next, we determine the asymptotic distribution of \(\tilde{\varvec{\theta }}^{(2)}.\) For this, we consider the error sum of squares, \(Q_2(\varvec{\theta }^{(2)})\) as defined in (12). Let \({\varvec{Q}}'_2({\varvec{\theta }}^{(2)})\) be the first-derivative vector and \({\varvec{Q}}''_2(\varvec{\theta }^{(2)})\), the second-derivative matrix of \(Q_2(\varvec{\theta }^{(2)})\). Using multivariate Taylor series expansion, we expand \({\varvec{Q}}'_2(\tilde{\varvec{\theta }}^{(2)})\) around the point \({\varvec{\theta }^0}^{(2)}\), and get:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}^{(2)} - {\varvec{\theta }^0}^{(2)}\right) = -{\varvec{Q}}'_2\left( {\varvec{\theta }^0}^{(2)}\right) [{\varvec{Q}}''_2(\bar{\varvec{\theta }}^{(2)})]^{-1}. \end{aligned}$$

Multiplying both sides by the matrix \({\varvec{D}}_2^{-1}\), where \({\varvec{D}}_2 = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}},\frac{1}{n^2\sqrt{n}})\), we get:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}^{(2)} - {\varvec{\theta }^0}^{(2)}\right) \mathbf{D }_2^{-1} = -{\varvec{Q}}'_2\left( {\varvec{\theta }^0}^{(2)}\right) \mathbf{D }_2 \left[ \mathbf{D }_2{\varvec{Q}}''_2\left( \bar{\varvec{\theta }}^{(2)}\right) \mathbf{D }_2\right] ^{-1}. \end{aligned}$$

Now, when we evaluate the first-derivative vector \({\varvec{Q}}'_2({\varvec{\theta }^0}^{(2)})\mathbf{D }_2\), using Conjecture 2(a) (see Sect. A), we obtain:

$$\begin{aligned} {\varvec{Q}}'_2\left( {\varvec{\theta }^0}^{(2)}\right) \mathbf{D }_2 \overset{a.eq.}{=} -2 \begin{pmatrix} \frac{1}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t) \cos \left( \beta ^0 t^2\right) \\ \frac{1}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t) \sin \left( \beta ^0 t^2\right) \\ \frac{1}{n^2\sqrt{n}}\sum \limits _{t=1}^{n} t^2 X(t)\left( -C^0\sin \left( \beta ^0 t^2\right) + D^0 \cos \left( \beta ^0 t^2\right) \right) \end{pmatrix}. \end{aligned}$$
(35)

Again using the CLT, the vector on the right-hand side of (35) tends to \(N_3(0, 4\sigma ^2 c \varvec{\varSigma }_2),\) where \(\varvec{\varSigma }_2 = \begin{pmatrix} \frac{1}{2} &{} \quad 0 &{} \quad \frac{D^0}{6} \\ 0 &{} \quad \frac{1}{2} &{} \quad \frac{-C^0}{6} \\ \frac{D^0}{6} &{} \quad \frac{-C^0}{6} &{} \quad \frac{{C^0}^2 + {D^0}^2}{10} \\ \end{pmatrix} > 0.\)

Note that:

$$\begin{aligned} \lim _{n \rightarrow \infty }\mathbf{D }_2{\varvec{Q}}''_2\left( \bar{\varvec{\theta }}^{(2)}\right) \mathbf{D }_2 = \lim _{n \rightarrow \infty }\mathbf{D }_2{\varvec{Q}}''_2\left( {\varvec{\theta }^0}^{(2)}\right) \mathbf{D }_2. \end{aligned}$$

On computing the \(3 \times 3\) second-derivative matrix \(\mathbf{D }_2{\varvec{Q}}''_2({\varvec{\theta }^0}^{(2)})\mathbf{D }_2\) and using Lemmas 2, 3 and 4(b), we get:

$$\begin{aligned} \lim _{n \rightarrow \infty }\mathbf{D }_2{\varvec{Q}}''_2\left( {\varvec{\theta }^0}^{(2)}\right) \mathbf{D }_2 = 2\varvec{\varSigma }_2. \end{aligned}$$
(36)

Combining results (35) and (36), we get the stated asymptotic distribution of \(\tilde{\varvec{\theta }}^{(2)}.\) Hence, the result. \(\square \)
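The constants \(\frac{1}{2}\), \(\frac{1}{6}\) and \(\frac{1}{10}\) populating \(\varvec{\varSigma }_1\) and \(\varvec{\varSigma }_2\) are limits of weighted averages of the regressors, of the form \(\frac{1}{n^{2k+1}}\sum _{t=1}^n t^{2k}\cos ^2(\beta ^0 t^2) \rightarrow \frac{1}{2(2k+1)}\) for \(k = 0, 1, 2\). A quick numerical check of these limits, with an arbitrary illustrative value of \(\beta ^0\):

```python
import numpy as np

n = 200_000
t = np.arange(1, n + 1, dtype=np.float64)
beta0 = 0.1                        # illustrative chirp rate

c2 = np.cos(beta0 * t**2)**2
m0 = c2.mean()                     # (1/n)   sum cos^2(beta0 t^2)      -> 1/2
m1 = np.sum(t**2 * c2) / n**3      # (1/n^3) sum t^2 cos^2(beta0 t^2)  -> 1/6
m2 = np.sum(t**4 * c2) / n**5      # (1/n^5) sum t^4 cos^2(beta0 t^2)  -> 1/10

print(m0, m1, m2)
```

The same limits with \(\cos ^2(\alpha ^0 t)\) in place of \(\cos ^2(\beta ^0 t^2)\) produce the \(\frac{1}{2}\) and \(\frac{1}{6}\) entries of \(\varvec{\varSigma }_1\).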

Multiple Component Chirp-like Model

1.1 Proofs of the Asymptotic Properties of the LSEs

Proof of Theorem 6:

Consider the error sum of squares, defined in (14). Let us denote \(\mathbf{Q }'(\varvec{\vartheta })\) as the \(3(p+q) \times 1\) first-derivative vector and \(\mathbf{Q }''(\varvec{\vartheta })\) as the \(3(p+q) \times 3(p+q)\) second-derivative matrix. Using multivariate Taylor series expansion, we have:

$$\begin{aligned} \mathbf{Q }'\left( \hat{\varvec{\vartheta }}\right) - \mathbf{Q }'\left( \varvec{\vartheta }^0\right) = \left( \hat{\varvec{\vartheta }} - \varvec{\vartheta }^0\right) \mathbf{Q }''\left( \bar{\varvec{\vartheta }}\right) . \end{aligned}$$

Here, \(\bar{\varvec{\vartheta }}\) is a point between \(\hat{\varvec{\vartheta }}\) and \(\varvec{\vartheta }^0.\) Now, using the fact that \(\mathbf{Q }'(\hat{\varvec{\vartheta }}) = 0\) and multiplying both sides of the above equation by \({\mathfrak {D}}^{-1}\), we have:

$$\begin{aligned} \left( \hat{\varvec{\vartheta }} - \varvec{\vartheta }^0\right) {\mathfrak {D}}^{-1} = - \mathbf{Q }'\left( \varvec{\vartheta }^0\right) {\mathfrak {D}}\left[ {\mathfrak {D}}\mathbf{Q }''\left( \bar{\varvec{\vartheta }}\right) {\mathfrak {D}}\right] ^{-1}. \end{aligned}$$

Also note that \((\hat{\varvec{\vartheta }} - \varvec{\vartheta }^0){\mathfrak {D}}^{-1} = \bigg (({\hat{\varvec{\theta }}_1}^{(1)} - {\varvec{\theta }_1^0}^{(1)}), \ldots , ({\hat{\varvec{\theta }}_p}^{(1)} - {\varvec{\theta }_p^0}^{(1)}), ({\hat{\varvec{\theta }}_{1}}^{(2)} - {\varvec{\theta }_{1}^0}^{(2)}), \ldots , ({\hat{\varvec{\theta }}_q}^{(2)} - {\varvec{\theta }_q^0}^{(2)}) \bigg ){\mathfrak {D}}^{-1}.\)

Now, we evaluate the elements of the vector \(\mathbf{Q }'(\varvec{\vartheta }^0)\) and the matrix \(\mathbf{Q }''(\bar{\varvec{\vartheta }})\):

$$\begin{aligned} \frac{\partial Q(\varvec{\vartheta })}{\partial A_j}\bigg |_{\varvec{\vartheta }^0}&= -2 \sum _{t=1}^{n} X(t)\cos \left( \alpha _j^0 t\right) , \quad \frac{\partial Q(\varvec{\vartheta })}{\partial B_j}\bigg |_{\varvec{\vartheta }^0} = -2 \sum _{t=1}^{n} X(t)\sin \left( \alpha _j^0 t\right) \quad \text {and} \\ \frac{\partial Q(\varvec{\vartheta })}{\partial \alpha _j}\bigg |_{\varvec{\vartheta }^0}&= -2 \sum _{t=1}^{n} t X(t)\left( -A_j^0 \sin \left( \alpha _j^0 t\right) + B_j^0 \cos \left( \alpha _j^0 t\right) \right) , \quad \text {for } j = 1, \ldots , p. \end{aligned}$$

Similarly, for \(k = 1, \ldots , q\),

$$\begin{aligned} \frac{\partial Q(\varvec{\vartheta })}{\partial C_k}\bigg |_{\varvec{\vartheta }^0}&= -2 \sum _{t=1}^{n} X(t)\cos \left( \beta _k^0 t^2\right) , \quad \frac{\partial Q(\varvec{\vartheta })}{\partial D_k}\bigg |_{\varvec{\vartheta }^0} = -2 \sum _{t=1}^{n} X(t)\sin \left( \beta _k^0 t^2\right) \quad \text {and} \\ \frac{\partial Q(\varvec{\vartheta })}{\partial \beta _k}\bigg |_{\varvec{\vartheta }^0}&= -2 \sum _{t=1}^{n} t^2 X(t)\left( -C_k^0 \sin \left( \beta _k^0 t^2\right) + D_k^0 \cos \left( \beta _k^0 t^2\right) \right) . \end{aligned}$$

For the second-derivative matrix, for \(j = 1, \ldots , p\) and \(k = 1, \ldots , q\):

$$\begin{aligned} \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial A_j^2}\bigg |_{\varvec{\vartheta }^0}&= 2\sum _{t=1}^{n}\cos ^2\left( \alpha _j^0 t\right) , \quad \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial B_j^2}\bigg |_{\varvec{\vartheta }^0} = 2\sum _{t=1}^{n}\sin ^2\left( \alpha _j^0 t\right) , \\ \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial C_k^2}\bigg |_{\varvec{\vartheta }^0}&= 2\sum _{t=1}^{n}\cos ^2\left( \beta _k^0 t^2\right) , \quad \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial D_k^2}\bigg |_{\varvec{\vartheta }^0} = 2\sum _{t=1}^{n}\sin ^2\left( \beta _k^0 t^2\right) , \\ \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial A_j \partial B_j}\bigg |_{\varvec{\vartheta }^0}&= 2 \sum _{t=1}^{n} \sin \left( \alpha _j^0 t\right) \cos \left( \alpha _j^0 t\right) , \\ \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial A_j \partial \alpha _j}\bigg |_{\varvec{\vartheta }^0}&= 2 \sum _{t=1}^{n}t X(t) \sin \left( \alpha _j^0 t\right) - 2 A_j^0 \sum _{t=1}^{n} t \cos \left( \alpha _j^0 t\right) \sin \left( \alpha _j^0 t\right) + 2 B_j^0 \sum _{t=1}^{n} t\cos ^2\left( \alpha _j^0 t\right) , \\ \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial A_j \partial C_k}\bigg |_{\varvec{\vartheta }^0}&= 2 \sum _{t=1}^{n} \cos \left( \beta _k^0 t^2\right) \cos \left( \alpha _j^0 t\right) , \quad \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial A_j \partial D_k}\bigg |_{\varvec{\vartheta }^0} = 2 \sum _{t=1}^{n} \sin \left( \beta _k^0 t^2\right) \cos \left( \alpha _j^0 t\right) , \\ \frac{\partial ^2 Q(\varvec{\vartheta })}{\partial A_j \partial \beta _k}\bigg |_{\varvec{\vartheta }^0}&= - 2 C_k^0 \sum _{t=1}^{n} t^2 \cos \left( \alpha _j^0 t\right) \sin \left( \beta _k^0 t^2\right) + 2 D_k^0 \sum _{t=1}^{n} t^2\cos \left( \alpha _j^0 t\right) \cos \left( \beta _k^0 t^2\right) . \end{aligned}$$

Similarly, the rest of the partial derivatives can be computed, and using Lemmas 1, 2, 3 and 4, it can be shown that:

$$\begin{aligned} {\mathfrak {D}}\mathbf{Q }''\left( \bar{\varvec{\vartheta }}\right) {\mathfrak {D}} \rightarrow 2 {\mathcal {E}}\left( \varvec{\vartheta }^0\right) . \end{aligned}$$

Now, applying the CLT to the first-derivative vector \(\mathbf{Q }'(\varvec{\vartheta }^0){\mathfrak {D}}\), it can be shown that it converges to a multivariate Gaussian distribution. Using routine calculations, and again using Lemmas 1, 2, 3 and 4, we compute the asymptotic variances and covariances of its elements and get:

$$\begin{aligned} \mathbf{Q }'\left( \varvec{\vartheta }^0\right) {\mathfrak {D}} \xrightarrow {d} N_{3(p+q)}\left( 0, 4c \sigma ^2 {\mathcal {E}}\left( \varvec{\vartheta }^0\right) \right) . \end{aligned}$$

Hence, the result. \(\square \)

1.2 Proofs of the Asymptotic Properties of the Sequential LSEs

To prove Theorems 7 and 8, we need the following lemmas:

Lemma 9

  1. (a)

    Consider the set \(M_c^{(j)} = \{\varvec{\theta }^{(1)}_j: |\varvec{\theta }^{(1)}_j - {\varvec{\theta }_j^0}^{(1)}| \geqslant 3 c; \varvec{\theta }_j^{(1)} \in \varvec{\varTheta }^{(1)}\},\ j = 1, \ldots , p\). If the following holds true:

    $$\begin{aligned} \liminf \inf \limits _{M_c^{(j)}} \frac{1}{n} \left( Q_{2j-1}\left( \varvec{\theta }^{(1)}_j\right) - Q_{2j-1}\left( {\varvec{\theta }_j^0}^{(1)}\right) \right) > 0 \quad \text {a.s.}, \end{aligned}$$
    (37)

    then \(\tilde{\varvec{\theta }}_j^{(1)} \xrightarrow {a.s.} {\varvec{\theta }_j^0}^{(1)}\) as \(n \rightarrow \infty \).

  2. (b)

    Let us define the set \(N_c^{(k)} = \{\varvec{\theta }_k^{(2)} : \varvec{\theta }_k^{(2)} \in \varvec{\varTheta }^{(2)} ;\ |\varvec{\theta }_k^{(2)} - {\varvec{\theta }_k^0}^{(2)}| \geqslant 3c\},\ k = 1, \ldots , q.\) If for any \(c>0\),

    $$\begin{aligned} \liminf \inf \limits _{\varvec{\theta }_k^{(2)} \in N_c^{(k)}} \frac{1}{n} \left( Q_{2k}\left( \varvec{\theta }_k^{(2)}\right) - Q_{2k}\left( {\varvec{\theta }_k^0}^{(2)}\right) \right) > 0 \quad \text {a.s.}, \end{aligned}$$
    (38)

    then \(\tilde{\varvec{\theta }}_k^{(2)} \xrightarrow {a.s.} {\varvec{\theta }_k^0}^{(2)}\) as \(n \rightarrow \infty .\)

Proof

This can be proved by contradiction along the same lines as Lemma 5. \(\square \)

Lemma 10

If Assumptions 1, 3 and 4 are satisfied, then for \(j \leqslant p\) and \(k \leqslant q\):

  1. (a)

    \((\tilde{\varvec{\theta }}_j^{(1)} - {\varvec{\theta }_j^0}^{(1)})(\sqrt{n}{\varvec{D}}_1)^{-1} \xrightarrow {a.s.} 0.\)

  2. (b)

    \((\tilde{\varvec{\theta }}_k^{(2)} - {\varvec{\theta }_k^0}^{(2)})(\sqrt{n}{\varvec{D}}_2)^{-1} \xrightarrow {a.s.} 0.\)

Here, \({\varvec{D}}_1 = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{n\sqrt{n}})\) and \({\varvec{D}}_2 = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{n^2\sqrt{n}})\).

Proof

This proof can be obtained along the same lines as Lemma 8. \(\square \)

Now the proofs of Theorems 7 and 8 can be obtained by using the above lemmas and following the same argument as in the proof of Theorem 3.

Next, we examine the situation when the number of components is overestimated (see Theorem 9). The proof of Theorem 9 follows from the lemmas stated below:

Lemma 11

If \(X(t)\) is the error component as defined before, and \({\tilde{A}}\), \({\tilde{B}}\) and \({\tilde{\alpha }}\) are obtained by minimizing the following function:

$$\begin{aligned} Q_{p+q+1}\left( \varvec{\theta }^{(1)}\right) = \frac{1}{n}\sum _{t=1}^{n}\left( X(t) - A \cos (\alpha t) - B \sin (\alpha t)\right) ^2, \end{aligned}$$

then \({\tilde{A}} \xrightarrow {a.s.} 0\) and \({\tilde{B}} \xrightarrow {a.s.} 0.\)

Proof

The sum of squares function \(Q_{p+q+1}(\varvec{\theta }^{(1)})\) can be written as:

$$\begin{aligned}&\frac{1}{n} \sum _{t=1}^{n}X^2(t) - \frac{2}{n} \sum _{t=1}^{n} X(t)\left( A \cos (\alpha t) + B \sin (\alpha t)\right) + \frac{A^2 + B^2}{2} + o(1)\\&\quad = R\left( \varvec{\theta }^{(1)}\right) + o(1). \end{aligned}$$

Since the difference between \(Q_{p+q+1}(\varvec{\theta }^{(1)})\) and \(R(\varvec{\theta }^{(1)})\) is o(1), replacing the former with the latter has a negligible effect on the estimators. Thus, we have

$$\begin{aligned} {\tilde{A}} = \frac{2}{n}\sum \limits _{t=1}^{n} X(t)\cos (\alpha t) + o(1) \quad \text {and} \quad {\tilde{B}} = \frac{2}{n}\sum _{t=1}^{n} X(t)\sin (\alpha t) + o(1). \end{aligned}$$

Now using Lemma 4(a), the result follows. \(\square \)
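Lemma 11 can be illustrated numerically: when a sinusoid is fitted to pure noise, the fitted amplitudes shrink to zero. The sketch below approximates \(\sup _{\alpha }\big |\frac{2}{n}\sum _t X(t)e^{-i\alpha t}\big |\) over the Fourier frequencies with an FFT, under the assumption of i.i.d. Gaussian noise; this supremum bounds \(|{\tilde{A}}|\) and \(|{\tilde{B}}|\) up to a constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_fitted_amplitude(n):
    """Largest amplitude a fitted sinusoid can extract from pure noise."""
    X = rng.normal(0.0, 1.0, n)                 # X(t): i.i.d. N(0, 1)
    # (2/n) |sum_t X(t) exp(-i alpha t)| evaluated at the Fourier frequencies
    return np.max(2.0 / n * np.abs(np.fft.rfft(X)))

amps = {n: max_fitted_amplitude(n) for n in (500, 5_000, 500_000)}
print(amps)  # shrinks toward 0 as n grows, at rate about sqrt(log(n) / n)
```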

Lemma 12

If X(t), is the error component as defined before, and if \({\tilde{C}}\), \({\tilde{D}}\) and \({\tilde{\beta }}\) are obtained by minimizing the following function:

$$\begin{aligned} \frac{1}{n}\sum _{t=1}^{N}\left( X(t) - C \cos \left( \beta t^2\right) - D \sin \left( \beta t^2\right) \right) ^2, \end{aligned}$$

then \({\tilde{C}} \xrightarrow {a.s.} 0\) and \({\tilde{D}} \xrightarrow {a.s.} 0.\)

Proof

The proof of this lemma follows along the same lines as Lemma 11. \(\square \)

Now, we provide the proof of the fact that the sequential LSEs have the same asymptotic distribution as the LSEs.

Proof of Theorem 10:

(a) By Taylor series expansion of \({\varvec{Q}}'_1(\tilde{\varvec{\theta }}_1^{(1)})\) around the point \({\varvec{\theta }_1^0}^{(1)}\), we have:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}_1^{(1)} - {\varvec{\theta }_1^0}^{(1)}\right) = -{\varvec{Q}}'_1\left( {\varvec{\theta }_1^0}^{(1)}\right) \left[ {\varvec{Q}}''_1\left( \bar{\varvec{\theta }}_1^{(1)}\right) \right] ^{-1} \end{aligned}$$

Multiplying both sides by the matrix \({\varvec{D}}_1^{-1}\), where \({\varvec{D}}_1 = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}},\frac{1}{n\sqrt{n}})\), we get:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}_1^{(1)} - {\varvec{\theta }_1^0}^{(1)}\right) {\varvec{D}}_1^{-1} = -{\varvec{Q}}'_1\left( {\varvec{\theta }_1^0}^{(1)}\right) {\varvec{D}}_1 \left[ {\varvec{D}}_1{\varvec{Q}}''_1\left( \bar{\varvec{\theta }}_1^{(1)}\right) {\varvec{D}}_1\right] ^{-1} \end{aligned}$$

First, we show that \({\varvec{Q}}'_1({\varvec{\theta }_1^0}^{(1)}){\varvec{D}}_1 \rightarrow N_3(0, 4 \sigma ^2 c \varvec{\varSigma }_1^{(1)}).\)

To prove this, we compute the elements of the derivative vector \({\varvec{Q}}'_1({\varvec{\theta }_1^0}^{(1)})\):

$$\begin{aligned}&\frac{\partial Q_1\left( {\varvec{\theta }_1^0}^{(1)}\right) }{\partial A_1}\\&\quad = -2 \sum _{t=1}^n \left( \sum _{j=2}^p\left( A_j^0 \cos \left( \alpha _j^0 t\right) + B_j^0 \sin \left( \alpha _j^0 t\right) \right) + \sum _{k=1}^q \left( C_k^0 \cos \left( \beta _k^0 t^2\right) + D_k^0 \sin \left( \beta _k^0 t^2\right) \right) \right. \\&\qquad \left. +X(t)\right) \cos \left( \alpha _1^0 t\right) ,\\&\quad \frac{\partial Q_1\left( {\varvec{\theta }_1^0}^{(1)}\right) }{\partial B_1} = -2 \sum _{t=1}^n \left( \sum _{j=2}^p\left( A_j^0 \cos \left( \alpha _j^0 t\right) + B_j^0 \sin \left( \alpha _j^0 t\right) \right) + \sum _{k=1}^q \left( C_k^0 \cos \left( \beta _k^0 t^2\right) \right. \right. \\&\qquad \left. \left. + D_k^0 \sin \left( \beta _k^0 t^2\right) \right) + X(t)\right) \sin \left( \alpha _1^0 t\right) ,\\&\quad \frac{\partial Q_1\left( {\varvec{\theta }_1^0}^{(1)}\right) }{\partial \alpha _1}\\&\quad = -2 \sum _{t=1}^n t \left( \sum _{j=2}^p\left( A_j^0 \cos \left( \alpha _j^0 t\right) + B_j^0 \sin \left( \alpha _j^0 t\right) \right) \right. \\&\qquad \left. + \sum _{k=1}^q \left( C_k^0 \cos \left( \beta _k^0 t^2\right) + D_k^0 \sin \left( \beta _k^0 t^2\right) \right) + X(t)\right) \\&\qquad \times \left( -A_1^0 \sin \left( \alpha _1^0 t\right) + B_1^0 \cos \left( \alpha _1^0 t\right) \right) . \end{aligned}$$

Using Conjecture 2 (see Sect. A), it can be shown that:

$$\begin{aligned} {\varvec{Q}}'_1({\varvec{\theta }_1^0}^{(1)}){\varvec{D}}_1 \overset{a.eq.}{=} -2 \begin{pmatrix} \frac{1}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t) \cos (\alpha _1^0 t) \\ \frac{1}{\sqrt{n}}\sum \limits _{t=1}^{n} X(t) \sin (\alpha _1^0 t) \\ \frac{1}{n\sqrt{n}}\sum \limits _{t=1}^{n} t X(t)(-A_1^0\sin (\alpha _1^0 t) + B_1^0 \cos (\alpha _1^0 t)) \end{pmatrix}. \end{aligned}$$

Now using CLT, we have:

$$\begin{aligned} {\varvec{Q}}'_1\left( {\varvec{\theta }_1^0}^{(1)}\right) {\varvec{D}}_1 \rightarrow N_3\left( 0, 4 \sigma ^2 c \varvec{\varSigma }_1^{(1)}\right) \end{aligned}$$

Next, we compute the elements of the second-derivative matrix \({\varvec{D}}_1 {\varvec{Q}}''_1({\varvec{\theta }_1^0}^{(1)}){\varvec{D}}_1\). By straightforward calculations and using Lemmas 1, 2, 3 and 4, it is easy to show that:

$$\begin{aligned} \lim _{n \rightarrow \infty } {\varvec{D}}_1 {\varvec{Q}}''_1\left( {\varvec{\theta }_1^0}^{(1)}\right) {\varvec{D}}_1 = 2\varvec{\varSigma }_1^{(1)}. \end{aligned}$$

Thus, we have the desired result.

(b) Consider the error sum of squares \(Q_2(\varvec{\theta }^{(2)}) = \sum \limits _{t=1}^{n}\bigg (y_1(t) - C\cos (\beta t^2) - D\sin (\beta t^2)\bigg )^2\). Here \(y_1(t) = y(t) - {\tilde{A}} \cos ({\tilde{\alpha }} t) - {\tilde{B}} \sin ({\tilde{\alpha }} t)\), \(t = 1, \ldots , n\). Let \({\varvec{Q}}'_2(\varvec{\theta }^{(2)})\) be the first-derivative vector and \({\varvec{Q}}''_2(\varvec{\theta }^{(2)})\), the second-derivative matrix of \(Q_2(\varvec{\theta }^{(2)})\). By Taylor series expansion of \({\varvec{Q}}'_2(\tilde{\varvec{\theta }}_1^{(2)})\) around the point \({\varvec{\theta }_1^0}^{(2)}\), we have:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}_1^{(2)} - {\varvec{\theta }_1^0}^{(2)}\right) = -{\varvec{Q}}'_2\left( {\varvec{\theta }_1^0}^{(2)}\right) \left[ {\varvec{Q}}''_2\left( \bar{\varvec{\theta }}_1^{(2)}\right) \right] ^{-1} \end{aligned}$$

Multiplying both sides by the matrix \({\varvec{D}}_2^{-1}\), where \({\varvec{D}}_2 = \hbox {diag}(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \frac{1}{n^2\sqrt{n}})\), we get:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}_1^{(2)} - {\varvec{\theta }_1^0}^{(2)}\right) {\varvec{D}}_2^{-1} = -{\varvec{Q}}'_2\left( {\varvec{\theta }_1^0}^{(2)}\right) {\varvec{D}}_2 \left[ {\varvec{D}}_2{\varvec{Q}}''_2\left( \bar{\varvec{\theta }}_1^{(2)}\right) {\varvec{D}}_2\right] ^{-1} \end{aligned}$$

Now using (33), and proceeding exactly as in part (a), we get:

$$\begin{aligned} \left( \tilde{\varvec{\theta }}_1^{(2)} -{\varvec{\theta }_1^0}^{(2)}\right) {\varvec{D}}_2^{-1} \xrightarrow {d} N_3\left( 0, \sigma ^2 c {\varvec{\varSigma }_1^{(2)}}^{-1}\right) . \end{aligned}$$

Hence, the result. \(\square \)


Cite this article

Grover, R., Kundu, D. & Mitra, A. Asymptotic Properties of Least Squares Estimators and Sequential Least Squares Estimators of a Chirp-like Signal Model Parameters. Circuits Syst Signal Process 40, 5421–5465 (2021). https://doi.org/10.1007/s00034-021-01724-7
