
Purely Sequential and k-Stage Procedures for Estimating the Mean of an Inverse Gaussian Distribution

Methodology and Computing in Applied Probability

Abstract

In the first part of this paper, we propose purely sequential and k-stage (k ≥ 3) procedures for estimating the mean μ of an inverse Gaussian distribution with prescribed ‘proportional closeness’. The problem is constructed in such a manner that the boundedness of the expected loss is equivalent to estimation of the parameter with given ‘proportional closeness’. We obtain the associated second-order approximations for both procedures. The second part of this paper develops the minimum risk and bounded risk point estimation problems for estimating the mean μ of an inverse Gaussian distribution having unknown scale parameter λ. We propose a useful family of loss functions for both problems, our aim being to control the associated risk functions. Moreover, we establish the failure of fixed-sample-size procedures for these problems and hence propose purely sequential and k-stage (k ≥ 3) procedures to estimate the mean μ. We also obtain the second-order approximations associated with our sequential procedures. Further, we provide extensive simulation studies and real data analyses to demonstrate the performance of the proposed procedures.


References

  • Bapat SR (2018) On Purely Sequential Estimation of an Inverse Gaussian Mean. Metrika 81(8):1005–1024


  • Birnbaum ZW, Saunders SC (1958) A Statistical Model for Life Length of Material. Journal of the American Statistical Association 53:151–160


  • Birnbaum ZW, Saunders SC (1969) Estimation for a Family of Life Distributions. Journal of Applied Probability 6:319–327


  • Chaturvedi A (1985) Sequential Estimation of an Inverse Gaussian Parameter with Prescribed Proportional Closeness. Calcutta Statistical Association Bulletin 34:215–219


  • Chaturvedi A, Bapat SR, Joshi N (2019a) Sequential Minimum Risk Point Estimation of the Parameters of an Inverse Gaussian Distribution. American Journal of Mathematical and Management Sciences. https://doi.org/10.1080/01966324.2019.1570883

  • Chaturvedi A, Bapat SR, Joshi N (2019b) Multi-stage point estimation of the mean of an inverse Gaussian distribution. Sequential Analysis 38(1):1–25

  • Chaturvedi A, Bapat SR, Joshi N (2019c) A k-Stage Procedure for Estimating the Mean Vector of a Multivariate Normal Population. Sequential Analysis 38(3):369–384

  • Chaturvedi A, Pandey SK, Gupta M (1991) On a Class of Asymptotically Risk-Efficient Sequential Procedures. Scandinavian Actuarial Journal 1:87–96


  • Chhikara RS, Folks JL (1989) The Inverse Gaussian Distribution: Theory, Methodology and Applications. Marcel Dekker Inc, New York


  • Chow YS, Robbins H (1965) On the Asymptotic Theory of Fixed Width Sequential Confidence Intervals for the Mean. Annals of Mathematical Statistics 36:457–462


  • Edgeman RL, Salzburg PM (1991) A Sequential Sampling Plan for the Inverse Gaussian Mean. Statistical Papers 32:45–53


  • Folks JL, Chhikara RS (1978) The Inverse Gaussian Distribution and its Statistical Application-a Review. Journal of the Royal Statistical Society, Series B 40:263–289


  • Ghosh M, Mukhopadhyay N (1979) Sequential Point Estimation of the Mean when the Distribution is Unspecified. Communications in Statistics, Series A 8:637–652


  • Ghosh M, Mukhopadhyay N, Sen PK (1997) Sequential Estimation. Wiley, New York


  • Hall P (1981) Asymptotic Theory of Triple Sampling for Sequential Estimation of a Mean. Annals of Statistics 9:1229–1238


  • Johnson N, Kotz S, Balakrishnan N (1994) Continuous Univariate Distributions, Volume 1. Wiley, New York


  • Joshi S, Shah M (1990) Sequential Analysis Applied to Testing the Mean of an Inverse Gaussian Distribution with Known Coefficient of Variation. Communications in Statistics 19(4):1457–1466


  • Leiva V, Hernandez H, Sanhueza A (2008b) An R Package for a General Class of Inverse Gaussian Distributions. Journal of Statistical Software 26(4):1–21

  • Liu W (1997) A k-stage sequential sampling procedure for estimation of normal mean. Journal of Statistical Planning and Inference 65:109–127


  • Mukhopadhyay N (1982) Stein’s two-stage procedure and exact consistency. Scandinavian Actuarial Journal 2:110–122


  • Mukhopadhyay N, Bapat SR (2016a) Multistage point estimation methodologies for a negative exponential location under a modified linex loss function: Illustrations with infant mortality and bone marrow data. Sequential Analysis 35(2):175–206. https://doi.org/10.1080/07474946.2016.1165532

  • Mukhopadhyay N, de Silva BM (2009) Sequential Methods and Their Applications. CRC, Boca Raton


  • Schrödinger E (1915) Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung. Physikalische Zeitschrift 16:289–295


  • Sen PK (1981) Sequential Nonparametrics. Wiley, New York


  • Seshadri V (1993) The Inverse Gaussian Distribution: A Case Study in Exponential Families. Clarendon Press, Oxford


  • Seshadri V (1999) The Inverse Gaussian Distribution: Statistical Theory and Applications. Springer, New York


  • Woodroofe M (1977) Second Order Approximation for Sequential Point and Interval Estimation. Annals of Statistics 5:984–995



Acknowledgments

We are grateful to the Editor-in-Chief and the anonymous referee(s) for their valuable suggestions, which led to an improved presentation of the manuscript.

Funding

In particular, the corresponding author, Neeraj Joshi, is indebted to the Department of Science and Technology, Government of India, for financial support (INSPIRE fellowship IF170889, 2018) to pursue his research.

Author information

Corresponding author

Correspondence to Neeraj Joshi.


Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proofs of the Selected Results

Proof of Theorem 1

\(N_{1}\) from (6) can be written as \(N_{1} = \inf\left \{n\geq m\geq 2 ; {\sum }_{j=1}^{n-1} Z_{j} \leq \frac {n^{2}}{n^{*}}\right \}\), where \(n^{*}\) comes from (5), \({\sum }_{j=1}^{n-1} Z_{j} = \frac {n\lambda }{\widehat {\lambda }_{n}} \sim \chi ^{2}_{n-1}\) and \(Z_{j} \sim {\chi ^{2}_{1}}\). \(N_{1}\) can also be written as J + 1 w.p. 1, where

$$ J = \inf\left\{n \geq m-1; {\sum}_{j=1}^{n} Z_{j} \leq \frac{n^{2}}{n^{*}}\left( 1+\frac{1}{n}\right)^{2}\right\} $$
(38)

Now, comparing (38) with equation (1.1) of Woodroofe (1977), we get α = 2, β = 1, \(c = {n^{*}}^{-1}\), μ = 1, \(\lambda = n^{*}\), \(L(n)=1+\frac {2}{n}+\frac {1}{n^{2}}\), \(L_{0} = 2\), \(\tau^{2} = 2\), a = 1/2. □

Result (7) now follows from his Theorem 2.4 for m > 2.
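For illustration, the stopping rule \(N_{1}\) is easy to simulate. The sketch below is a minimal Monte Carlo illustration, not the authors' code: it uses the equivalent form \(N_{1} = \inf\{n \geq m : n \geq z^{2}/(\widehat{\lambda}_{n} d^{2})\}\) obtained from the rewriting above, assuming (as the construction implies) that \(n^{*} = z^{2}/(\lambda d^{2})\) with Ψ(z²) = α, and taking \(\widehat{\lambda}_{n} = n/\sum_{i=1}^{n}(X_{i}^{-1} - \overline{X}_{n}^{-1})\), which satisfies \(n\lambda/\widehat{\lambda}_{n} \sim \chi^{2}_{n-1}\) as used above. The values of μ, λ, d, α and the pilot size m are hypothetical.

import numpy as np
from scipy.stats import chi2

def lambda_hat(x):
    # usual estimator of the IG scale parameter: n * lambda / lambda_hat ~ chi^2_{n-1}
    x = np.asarray(x)
    return len(x) / np.sum(1.0 / x - 1.0 / x.mean())

def purely_sequential_N1(mu, lam, d, alpha, m=5, rng=None):
    # one run of the purely sequential rule: stop when n >= z^2 / (lambda_hat * d^2)
    rng = np.random.default_rng() if rng is None else rng
    z2 = chi2.ppf(alpha, df=1)                      # z^2 with Psi(z^2) = alpha
    x = list(rng.wald(mu, lam, size=m))             # pilot sample of size m
    n = m
    while n < z2 / (lambda_hat(x) * d ** 2):
        x.append(rng.wald(mu, lam))                 # take one more observation
        n += 1
    return n, float(np.mean(x))

rng = np.random.default_rng(1)
mu, lam, d, alpha = 2.0, 4.0, 0.1, 0.95             # hypothetical parameter values
n_star = chi2.ppf(alpha, df=1) / (lam * d ** 2)     # optimal fixed sample size n*
Ns = np.array([purely_sequential_N1(mu, lam, d, alpha, rng=rng)[0] for _ in range(2000)])
print(f"n* = {n_star:.1f}, average N_1 = {Ns.mean():.1f}")

Averaging the realized values of \(N_{1}\) over many runs gives a quick numerical check of the first-order property that \(N_{1}\) estimates \(n^{*}\).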

Now, using the fact that \(\sqrt {n\lambda } (\overline {X}_{n} - \mu )/\mu \sqrt {\overline {X}_{n}}\) has a standard normal distribution, we can write,

$$ \begin{array}{@{}rcl@{}} E\left\{L(\mu, \overline{X}_{{N}_{1}})\right\} &= 1 - E\left[P\left\{{\chi^{2}_{1}} \leq N_{1}\lambda d^{2}\right\}\right] \\ &= 1 - E\left[{\Psi}\left\{N_{1}\lambda d^{2}\right\}\right] \end{array} $$
(39)

where Ψ(·) is the cdf of a \({\chi ^{2}_{1}}\) variate. We can expand equation (39) by a Taylor series expansion [see Hall (1981), page 1231] as follows:

$$ E\left\{L(\mu, \overline{X}_{{N}_{1}})\right\} = {\Psi}\left( \lambda d^{2}E(N_{1})\right) + \frac{\lambda^{2}d^{4}}{2} E\left\{N_{1} - E(N_{1})\right\}^{2}{\Psi}^{\prime\prime}\left( \lambda d^{2}E(N_{1})\right) + o(d^{2}) $$
(40)

where,

$$ \begin{array}{@{}rcl@{}} {\Psi}(x) &=& \frac{1}{\sqrt{2\pi}}{{\int}_{0}^{x}} e^{-y/2} y^{-1/2} dy \\ {\Psi}^{\prime}(x) &=& \frac{1}{\sqrt{2\pi}} e^{-x/2} x^{-1/2} = \xi(x) \\ {\Psi}^{\prime\prime}(x) &=& -\frac{1}{2} \xi(x)\left\{x^{-1} + 1\right\} \end{array} $$
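For completeness, the expression for Ψ′′ follows from the product rule applied to \(\xi(x) = (2\pi)^{-1/2}e^{-x/2}x^{-1/2}\):

$$ {\Psi}^{\prime\prime}(x) = \frac{d}{dx}\,\xi(x) = \frac{1}{\sqrt{2\pi}}\left( -\frac{1}{2}e^{-x/2}x^{-1/2} - \frac{1}{2}e^{-x/2}x^{-3/2}\right) = -\frac{1}{2}\,\xi(x)\left( 1 + x^{-1}\right). $$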

Now since we have,

\({\Psi }\left (\lambda d^{2}E(N_{1})\right ) = \alpha + \left (\lambda d^{2}E(N_{1}) - z^{2}\right ){\Psi }^{\prime }(z^{2}) + o(d^{2})\) and

\(\lambda d^{2}E(N_{1}) - z^{2} = \frac {n^{*}d^{4}}{z^{2}}\left \{n^{*}+\nu -3\right \} - z^{2}\)

we can now easily obtain result (8) from equation (40).

Proof of Theorem 2

We employ the Helmert transformation. Let

\(Y_{i} = \frac {\lambda }{\mu ^{2}} \frac {\left ({\sum }_{i=1}^{n} X_{i} - n\mu \right )^{2}}{{\sum }_{i=1}^{n} X_{i}}, i = 1,2,3,...,\) be independent \({\chi _{1}^{2}}\) r.v.'s. Denote \(\overline {Y}_{n} = n^{-1} {\sum }_{i=1}^{n} Y_{i}\) and \(S_{n} = {\sum }_{i=1}^{n} Y_{i}\).

Let \(c_{0} = c_{k-1} = 1\), \(T_{0} = m_{0} - 1\) and

$$ \begin{array}{@{}rcl@{}} T_{1} &=& max \left[\left\{c_{1}n^{*}\overline{Y}_{{T}_{0}}\right\}, T_{0}\right], L_{1} = \left\{c_{1}n^{*}\overline{Y}_{{T}_{0}}\right\} \\ &{\vdots} &{\vdots} \\ T_{k-2} &=& max \left[\left\{c_{k-2}n^{*}\overline{Y}_{{T}_{k-3}}\right\}, T_{k-3}\right], L_{k-2} = \left\{c_{k-2}n^{*}\overline{Y}_{{T}_{k-3}}\right\}\\ T_{k-1} &=& max \left[\left\{n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right\}, T_{k-2}\right], L_{k-1} = \left\{n^{*}\overline{\boldsymbol{Y}}_{{T}_{k-2}}+\eta\right\} \end{array} $$

It is clear that \(M_{i} = T_{i} + 1\), i = 1, 2,...,k − 1. We want to prove the following three results, which will be helpful in proving the main theorem.

$$ E\left( T_{k-1}\right) = n^{*} - \frac{Var\left( Y_{1}\right)}{c_{k-2}} - \frac{1}{2} + \eta + o(1); $$
(41)
$$ Var\left( T_{k-1}\right) = \frac{n^{*} Var\left( Y_{1}\right)}{c_{k-2}} + o\left( n^{*}\right) $$
(42)
$$ E\mid T_{k-1} - E\left( T_{k-1}\right)\mid^{3} = o\left( n^{{*}^{2}}\right) $$
(43)

Now we state some important lemmas and give their proofs wherever necessary. We will prove the results given in equations (41)-(43) with the help of these lemmas. □
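Before turning to the lemmas, the recursion \(T_{0},\ldots,T_{k-1}\) defined above is straightforward to simulate, which gives a convenient numerical check of (41). The sketch below is only an illustration under stated assumptions: it treats {·} in the recursion as the integer part, draws the \(Y_{i}\) directly as i.i.d. \(\chi^{2}_{1}\) variables, and uses hypothetical design constants \(n^{*}\), \(m_{0}\), η and \(c_{1},\ldots,c_{k-2}\).

import numpy as np

def k_stage_final_size(n_star, c, m0, eta, rng):
    # Simulates T_0, ..., T_{k-1} of the recursion above and returns M_{k-1} = T_{k-1} + 1.
    # c = (c_0, c_1, ..., c_{k-1}) with c_0 = c_{k-1} = 1; {.} is taken to be the integer part.
    k = len(c)
    Y = rng.chisquare(df=1, size=20 * int(n_star) + m0)   # pool of i.i.d. chi^2_1 variables
    T = m0 - 1                                             # T_0 = m_0 - 1
    for i in range(1, k - 1):                              # intermediate stages T_1, ..., T_{k-2}
        T = max(int(c[i] * n_star * Y[:T].mean()), T)
    T = max(int(n_star * Y[:T].mean() + eta), T)           # final stage T_{k-1}
    return T + 1                                           # M_{k-1}

rng = np.random.default_rng(7)
n_star, m0, eta = 200.0, 10, 0.5                           # hypothetical values
c = (1.0, 0.6, 1.0)                                        # three-stage design (k = 3)
M = [k_stage_final_size(n_star, c, m0, eta, rng) for _ in range(2000)]
# by (41) and M_{k-1} = T_{k-1} + 1, the average should be near n* - 2/c_{k-2} + 1/2 + eta
print(np.mean(M))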

Lemma 1

Let

$$ \begin{array}{@{}rcl@{}} A_{i}\left( \epsilon\right) &=& \left\{ \mid\overline{Y}_{{T}_{i}}-1\mid \geq \epsilon\right\}, i = 0,1,2,...,k-2, \\ B_{i} &=& \left\{T_{i} = T_{i-1}\right\}, i = 1,2,...,k-1,\\ C_{i}\left( \epsilon\right) &=& \left\{ \mid T_{i}-c_{i}n^{*}\mid \geq \epsilon c_{i}n^{*}\right\}, i = 1,2,...,k-2. \end{array} $$

Then, for any 𝜖 > 0, there exist \(0 < \delta_{0} < 1\) and \(0<\delta _{1}=\delta _{1}\left (\epsilon \right )<1\) such that, for all sufficiently large n,

\(B_{1} \subseteq \left \{ \mid \overline {Y}_{{T}_{0}}-1\mid \geq \delta _{0}\right \} = A_{0}\left (\delta _{0}\right ),\)

\(C_{i}\left (\epsilon \right ) \subset B_{i} \cup A_{i-1}\left (\delta _{1}\right ),\)

\(B_{i+1} \subseteq C_{i}\left (\delta _{0}\right ) \cup \left \{ \mid \overline {Y}_{\left (1-\delta _{0}\right )c_{i}n^{*}}-1\mid \geq \delta _{0}\right \},\)

\(A_{i}\left (\epsilon \right ) \subset C_{i}\left (\delta _{1}\right ) \cup \left \{ \mid \overline {Y}_{\left (1-\delta _{1}\right )c_{i}n^{*}}-1\mid \geq \delta _{1}\right \} \cup \left \{ \mid \overline {Y}_{\left (1+\delta _{1}\right )c_{i}n^{*}}-1\mid \geq \delta _{1}\right \},\)

for i = 1, 2, 3,...,k − 2. Using Markov inequality and the assumption \(n^{*} = O\left ({m_{0}^{r}}\right )\) of Theorem 2, we have as \(n^{*} \rightarrow \infty \), \(P\left \{ \mid \overline {Y}_{{T}_{0}}-1\mid \geq \delta _{0}\right \} = O\left ({n^{*}}^{-1}\right )\) and \(P\left \{ \mid \overline {Y}_{\left (1 \pm \delta _{1}\right )c_{i}n^{*}}-1\mid \geq \delta _{1}\right \} = O\left ({n^{*}}^{-1}\right )\) since \(E\left (Y_{1}^{2r}\right )<\infty \). Now it follows from Lemma 1 that, for all sufficiently small 𝜖 > 0, \(P\left (A_{i}\left (\epsilon \right )\right ) = O\left ({n^{*}}^{-1}\right ), P\left (B_{i}\right ) = O\left ({n^{*}}^{-1}\right )\) and \(P\left (C_{i}\left (\epsilon \right )\right ) = O\left ({n^{*}}^{-1}\right )\) as \(n^{*} \rightarrow \infty \) for all those \(A_{i}\left (\epsilon \right )\), Bi and \(C_{i}\left (\epsilon \right )\) defined in Lemma 1.

Lemma 2

For some constant \(C_{0}\),

\(E\left \{T_{k-3}\left (\overline {Y}_{{T}_{k-3}}-1\right )\right \} = 0,\)

\(E\left \{T_{k-3}\left (\overline {Y}_{{T}_{k-3}}-1\right )^{2}\right \} = Var\left (Y_{1}\right )+E\left [\left \{\left (S_{{T}_{k-4}}-T_{k-4}\right )^{2}-T_{k-4}Var\left (Y_{1}\right )\right \}/T_{k-3}\right ],\)

\(E\left [\left \{\surd {T_{i}}\left (\overline {Y}_{{T}_{i}}-1\right )\right \}^{4}\right ] \leq C_{0}, i=0,1,...,k-1,\)

\(E\left \{\left (S_{T_{i}}-T_{i}\right )^{2}-T_{i}Var\left (Y_{1}\right )\right \} = 0, i=0,1,...,k-1.\)

Proof of Lemma 2

The expectation on the right-hand side of the second assertion is taken to be 0 if k = 3. The first two assertions are clearly true if k = 3 and can be proved by using the conditional expectation formula, conditioning on \(Y_{1}, Y_{2},...,Y_{T_{k-4}}\), when k ≥ 4. We use mathematical induction to prove the third assertion. It is true when i = 0; assume that it is true when \(i=j\) \(\left (0 \leq j \leq k-2\right )\). Then, for i = j + 1, we have

$$ \begin{array}{@{}rcl@{}} &&E\left[\left\{\surd{T_{j+1}}\left( \overline{Y}_{{T}_{j+1}}-1\right)\right\}^{4}\right]\\ &=& E\left[E\left( \frac{1}{T_{j+1}^{2}}\left( S_{{T}_{j}}-T_{j}+S_{{T}_{j+1}}-S_{{T}_{j}}-\left( T_{j+1}-T_{j}\right)\right)^{4}\mid Y_{1},...,Y_{T_{j}}\right)\right] \\ &=& E\left[\frac{1}{T_{j+1}^{2}}\left( \left( S_{{T}_{j}}-T_{j}\right)^{4}+6\left( S_{{T}_{j}}-T_{j}\right)^{2}\left( T_{j+1}-T_{j}\right)Var\left( Y_{1}\right)\right.\right.\\ &&+\left.\left.4\left( S_{{T}_{j}}-T_{j}\right)\left( T_{j+1}-T_{j}\right)E\left( Y_{1}-1\right)^{3}+\left( T_{j+1}-T_{j}\right)E\left( Y_{1}-1\right)^{4}\right)\right] \\ &&\leq E\left[\left\{\surd{T_{j}}\left( \overline{Y}_{{T}_{j}}-1\right)\right\}^{4}\right] + 6E\left[\left\{\surd{T_{j}}\left( \overline{Y}_{{T}_{j}}-1\right)\right\}^{2}\right]Var\left( Y_{1}\right) \\ &&+ 4E\left( \surd{T_{j}} \mid \overline{Y}_{{T}_{j}}-1 \mid\right) E\left\{\left( Y_{1}-1\right)^{3}\right\} + E\left\{\left( Y_{1}-1\right)^{4}\right\}, \end{array} $$

which is bounded by the Cauchy-Schwarz inequality and the induction hypothesis that \(E\left [\left \{\surd {T_{j}}\left (\overline {Y}_{{T}_{j}}-1\right )\right \}^{4}\right ]\) is bounded. The fourth assertion also follows directly from a conditional argument. □

Lemma 3

As \(n^{*} \rightarrow \infty \), we have that for any \(0 \leq j \leq k-1\) and 𝜖 > 0

\(E\left \{{T_{j}^{3}}I\left (A_{i}\left (\epsilon \right )\right )\right \} = o(1), E\left \{{T_{j}^{3}}I\left (B_{i}\right )\right \} = o(1)\) and \(E\left \{{T_{j}^{3}}I\left (C_{i}\left (\epsilon \right )\right )\right \} = o(1)\)

for all those \(A_{i}\left (\epsilon \right )\), \(B_{i}\) and \(C_{i}\left (\epsilon \right )\) defined in Lemma 1.

Proof of Lemma 3

Let H be either \(A_{i}\left (\epsilon \right )\), \(B_{i}\) or \(C_{i}\left (\epsilon \right )\). We want to show that \(E\left \{{T_{j}^{3}}I\left (H\right )\right \} = o(1)\) by using mathematical induction on j. This is true when j = 0 by Lemma 1, and we assume it is true when \(j=l\) \(\left (0 \leq l \leq k-2\right )\). Then, when j = l + 1, it follows from the inductive assumption and Lemma 1 that

$$ \begin{array}{@{}rcl@{}} E\left\{T_{l+1}^{3}I\left( H\right)\right\} &\leq& E\left\{\left( c_{l+1}n^{*}\overline{Y}_{{T}_{l}}+\eta+T_{l}\right)^{3}I(H)\right\} \\ &\leq& 9E\left\{\left( c_{l+1}^{3}{n^{*}}^{3}\left( \overline{Y}_{{T}_{l}}\right)^{3}+\eta^{3}+{T_{l}^{3}}\right)I(H)\right\} \\ &\leq& 9c_{l+1}^{3}{n^{*}}^{3}E\left\{\left( \overline{Y}_{{T}_{l}}\right)^{3}I(H)\right\}+o(1). \end{array} $$

Now,

$$ \begin{array}{@{}rcl@{}} {n^{*}}^{3}E\left\{\left( \overline{Y}_{{T}_{l}}\right)^{3}I(H)\right\} &\leq& 4{n^{*}}^{3}E\left\{\mid \overline{Y}_{{T}_{l}}-1 \mid^{3}I(H)\right\} + 4{n^{*}}^{3}P(H) \\ &\leq& 4{n^{*}}^{3}\left[E\left\{\left( \overline{Y}_{{T}_{l}}-1\right)^{4}\right\}\right]^{3/4}\left\{P(H)\right\}^{1/4} + o(1) \\ &=& o(1), \end{array} $$

from Lemmas 1 and 2, and Lemma 3 follows. □

Lemma 4

As \(n^{*} \rightarrow \infty , E\left (n^{*}\overline {Y}_{{T}_{k-2}}\right ) = n^{*} - Var\left (Y_{1}\right )/c_{k-2} + o(1).\)

Proof of Lemma 4

Let

$$ D_{k-3} = \left\{\begin{array}{ll} B_{k-2} \cup A_{k-3}\left( \epsilon\right) \cup C_{k-3}\left( \epsilon\right) &\text{if } k\geq 4 \\ B_{k-2} \cup A_{k-3}\left( \epsilon\right) &\text{if } k=3 \end{array}\right. $$

with 𝜖 > 0. We shall now show that as \(n^{*} \rightarrow \infty \)

$$ \begin{array}{@{}rcl@{}} n^{*}E\left\{\overline{Y}_{{T}_{k-2}}I\left( D_{k-3}\right)\right\} &=& o(1) \\ n^{*}E\left\{\overline{Y}_{{T}_{k-2}}I\left( {D^{c}}_{k-3}\right)\right\} &=& n^{*} - Var\left( Y_{1}\right)/c_{k-2} + o(1), \end{array} $$
(44)

from which the lemma follows. First we have

$$ \begin{array}{@{}rcl@{}} n^{*}E\left\{\overline{Y}_{{T}_{k-2}}I\left( D_{k-3}\right)\right\} &=& n^{*}E\left\{I\left( D_{k-3}\right)E\left( \overline{Y}_{{T}_{k-2}} \mid Y_{1},...,Y_{{T}_{k-3}}\right)\right\} \\ &= &n^{*}E\left[I\left( D_{k-3}\right)\left\{\frac{T_{k-3}}{T_{k-2}}\left( \overline{Y}_{{T}_{k-3}}-1\right)+1\right\}\right] \\ &\leq &n^{*}E\left\{I\left( D_{k-3}\right)\mid \overline{Y}_{{T}_{k-3}}-1 \mid\right\} + n^{*}P\left( D_{k-3}\right) \\ &\leq& n^{*}\left\{P\left( D_{k-3}\right)E\left( \surd{T_{k-3}}\mid \overline{Y}_{{T}_{k-3}}-1 \mid\right)^{2}\right\}^{1/2} + n^{*}P\left( D_{k-3}\right), \end{array} $$

which is o(1) since \({n^{*}}^{2}P\left (D_{k-3}\right )=o(1)\) by Lemma 1 and \(E\left (\surd {T_{k-3}}\mid \overline {Y}_{{T}_{k-3}}-1 \mid \right )^{2}\) is bounded by Lemma 2. To prove the result given in equation (44) we note that

$$ \begin{array}{@{}rcl@{}} &&c_{k-2}n^{*}E\left\{\overline{Y}_{{T}_{k-2}}I\left( {D^{c}}_{k-3}\right)\right\}\\ &=& c_{k-2}n^{*}E\left\{I\left( D_{k-3}^{c}\right)E\left( \overline{Y}_{{T}_{k-2}} \mid Y_{1},...,Y_{{T}_{k-3}}\right)\right\} \\ &=& E\left\{I\left( D_{k-3}^{c}\right)c_{k-2}n^{*}\frac{T_{k-3}}{T_{k-2}}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} + c_{k-2}n^{*}P\left( D_{k-3}^{c}\right) \\ &=& E\left\{I\left( D_{k-3}^{c}\right)c_{k-2}n^{*}\frac{T_{k-3}}{T_{k-2}}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} + c_{k-2}n^{*} + o(1) \\ &=& E\left\{I\left( D_{k-3}^{c}\right)\left( 1-\left( \overline{Y}_{{T}_{k-3}}-1\right)+R_{k-3}\right)T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} \\ &+& c_{k-2}n^{*} + o(1), \end{array} $$
(45)

where \(R_{k-3} = \frac {c_{k-2}n^{*}}{T_{k-2}} - 1 + \left (\overline {Y}_{{T}_{k-3}}-1\right )\). Now on \(D_{k-3}^{c}\), we have \(R_{k-3} = \frac {c_{k-2}n^{*}}{c_{k-2}n^{*}\overline {Y}_{{T}_{k-3}}} - 1 + \left (\overline {Y}_{{T}_{k-3}}-1\right )\) and it is straightforward that \(\mid R_{k-3} \mid \leq C_{0}\left \{\left (\overline {Y}_{{T}_{k-3}}-1\right )^{2}+\left (c_{k-2}n^{*}\right )^{-1}\right \}\) for some constant \(C_{0}\). Consequently,

\(\mid E\left \{I\left (D_{k-3}^{c}\right )R_{k-3}T_{k-3}\left (\overline {Y}_{{T}_{k-3}}-1\right )\right \} \mid \)

$$ \begin{array}{@{}rcl@{}} &\leq& C_{0}E\left[I\left( D_{k-3}^{c}\right)\left\{\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}+\left( c_{k-2}n^{*}\right)^{-1}\right\} T_{k-3} \mid \overline{Y}_{{T}_{k-3}} \mid \right] \\ &\leq& C_{0}\epsilon E\left[I\left( D_{k-3}^{c}\right)\left\{\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}+\left( c_{k-2}n^{*}\right)^{-1}\right\} T_{k-3}\right] \\ &\leq& C_{0}\epsilon \left[E\left\{T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}\right\}+\left( 1+\epsilon\right)\frac{c_{k-3}}{c_{k-2}}\right] \\ &\rightarrow& 0 \text{ as } \epsilon \rightarrow 0, \end{array} $$
(46)

since \(E\left \{T_{k-3}\left (\overline {Y}_{{T}_{k-3}}-1\right )^{2}\right \}\) is bounded by Lemma 2. We also have

\(\mid E\left \{I\left (D_{k-3}^{c}\right )T_{k-3}\left (\overline {Y}_{{T}_{k-3}}-1\right )\right \} \mid \)

$$ \begin{array}{@{}rcl@{}} &=& \mid E\left\{T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} - E\left\{I\left( D_{k-3}\right)T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} \mid \\ &=& \mid E\left\{I\left( D_{k-3}\right)\surd{T_{k-3}}\surd{T_{k-3}}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} \mid \\ &\leq& \left[E\left\{I\left( D_{k-3}\right)T_{k-3}\right\}E\left\{T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}\right\}\right]^{1/2} \\ &=& o(1), \end{array} $$
(47)

by Lemmas 2 and 3. Finally, we show that

$$ E\left\{I\left( D_{k-3}^{c}\right)T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}\right\} = Var\left( Y_{1}\right) + o(1), $$
(48)

and the result given in equation (44) follows from equations (45)-(48). Now it follows from Lemma 2 that

\(E\left \{I\left (D_{k-3}^{c}\right )T_{k-3}\left (\overline {Y}_{{T}_{k-3}}-1\right )^{2}\right \}\)

$$ \begin{array}{@{}rcl@{}} &=& E\left\{T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}\right\} - E\left\{I\left( D_{k-3}\right)T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}\right\} \\ &=& E\left\{T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}\right\} + o(1) \\ &=& Var\left( Y_{1}\right) + E\left[\left\{\left( S_{{T}_{k-4}}-T_{k-4}\right)^{2}-T_{k-4}Var\left( Y_{1}\right)\right\}/T_{k-3}\right] \\ &+& o(1), \end{array} $$

and so equation (48) is obviously true when k = 3. Now we show that when k ≥ 4

\(E\left [\left \{\left (S_{{T}_{k-4}}-T_{k-4}\right )^{2}-T_{k-4}Var\left (Y_{1}\right )\right \}/T_{k-3}\right ] = o(1)\)

Let

$$ H = \left\{\begin{array}{ll} C_{k-3}\left( \epsilon\right) &\text{if } k=4 \\ C_{k-3}\left( \epsilon\right) \cup C_{k-4}\left( \epsilon\right) &\text{if } k\geq 5 \end{array}\right. $$

with 𝜖 > 0, then it follows from Lemmas 1 and 2 that

\(E\left [I(H)\left \{\left (S_{{T}_{k-4}}-T_{k-4}\right )^{2}-T_{k-4}Var\left (Y_{1}\right )\right \}/T_{k-3}\right ]\)

$$ \begin{array}{@{}rcl@{}} &\leq& T_{0}E\left[I(H)\left\{T_{k-4}\left( \overline{Y}_{{T}_{k-4}}-1\right)^{2}+Var\left( Y_{1}\right)\right\}\right] \\ &\leq& \left[E\left\{\left( T_{k-4}\left( \overline{Y}_{{T}_{k-4}}-1\right)^{2}\right)^{2}\right\}{T_{0}^{2}}P(H)\right]^{1/2} + Var\left( Y_{1}\right)T_{0}P(H) \\ &=& o(1). \end{array} $$
(49)

Also, for all sufficiently large n, \(E\left [I(H^{c})\left \{\left (S_{{T}_{k-4}}-T_{k-4}\right )^{2}-T_{k-4}Var\left (Y_{1}\right )\right \}/T_{k-3}\right ]\) is

$$ \begin{array}{@{}rcl@{}} &\leq& E\left[I(H^{c})\left( S_{{T}_{k-4}}-T_{k-4}\right)^{2}/\left\{\left( 1-\epsilon\right)c_{k-3}n^{*}\right\}\right]\\ &-& E\left[I(H^{c})T_{k-4}Var\left( Y_{1}\right)/\left\{\left( 1+\epsilon\right)c_{k-3}n^{*}\right\}\right]\\ &+&E\left[I(H^{c})T_{k-4}Var\left( Y_{1}\right)2\epsilon/\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-3}n^{*}\right\}\right]\\ \end{array} $$
$$ \begin{array}{@{}rcl@{}} &=& E\left[I(H)\left( \left( S_{{T}_{k-4}}-T_{k-4}\right)^{2}-T_{k-4}Var\left( Y_{1}\right)\right)/\left\{\left( 1-\epsilon\right)c_{k-3}n^{*}\right\}\right] \\ &+&E\left[I(H^{c})T_{k-4}Var\left( Y_{1}\right)2\epsilon/\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-3}n^{*}\right\}\right] \\ &\leq& E\left[I(H)\left( \left( S_{{T}_{k-4}}-T_{k-4}\right)^{2}-T_{k-4}Var\left( Y_{1}\right)\right) /\left\{\left( 1-\epsilon\right)c_{k-3}n^{*}\right\}\right] \\ &+&c_{k-4}Var\left( Y_{1}\right)2\epsilon/\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-3}\right\} \\ &=& c_{k-4}Var\left( Y_{1}\right)2\epsilon/\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-3}\right\}+o(1) \\ &\rightarrow& 0 \text{as} \epsilon \rightarrow 0, \end{array} $$
(50)

where the second equality follows from Lemma 2 and the last equality follows from the Cauchy-Schwarz inequality and Lemmas 2 and 3. By a similar argument, we establish that

\(E\left [I(H^{c})\left \{\left (S_{{T}_{k-4}}-T_{k-4}\right )^{2}-T_{k-4}Var\left (Y_{1}\right )\right \}/T_{k-3}\right ]\)

$$ \begin{array}{@{}rcl@{}} &\geq& o(1) - \frac{c_{k-4} Var\left( Y_{1}\right)2\epsilon}{\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-3}\right\}} \\ &\rightarrow& 0 \text{ as } \epsilon \rightarrow 0. \end{array} $$
(51)

Now equation (48) follows clearly from equations (49)-(51) and thus the proof is completed. □

Lemma 5

As \(n^{*} \rightarrow \infty , E\left (T_{k-1}\right ) = E\left (L_{k-1}\right ) + o(1)\). Lemma 5 can be proved easily by using Lemma 3.

Lemma 6

As \(n^{*} \rightarrow \infty , U_{n^{*}} \equiv n^{*}\overline {Y}_{{L}_{k-2}} + \eta - \left [n^{*}\overline {Y}_{{L}_{k-2}} + \eta \right ]\) is asymptotically uniform on (0, 1).

Proof of Lemma 6

Let \(J \equiv \left [c_{k-2}n^{*}\overline {Y}_{{T}_{k-3}}\right ], V \equiv c_{k-2}n^{*}\overline {Y}_{{T}_{k-3}}-J\). An argument similar to Hall (1981, p. 1237) shows that for 0 < x < 1, \(J > T_{k-3}\) and V ∈ (0, 1),

\(P\left \{U_{n^{*}} \leq x \mid J,V,T_{k-3}\right \} = x + r_{4n^{*}}\),

where \(\mid r_{4n^{*}} \mid \leq C_{0}\left (J^{1/2}/n^{*} + J/{n^{*}}^{2}\right )\) as \(n^{*} \rightarrow \infty \) uniformly in V ∈ (0, 1) and \(J>\left (1+\epsilon \right )T_{k-3}\) for 𝜖 > 0 and some constant C0. Consequently

$$ \begin{array}{@{}rcl@{}} P\left\{U_{n^{*}} \leq x\right\} &=& E\left\{P\left( U_{n^{*}} \leq x \mid J,V,T_{k-3}\right)\right\} \\ &=& E\left\{P\left( U_{n^{*}} \leq x \mid J,V,T_{k-3}\right)I\left( J>\left( 1+\epsilon\right)T_{k-3}\right)\right\} \\ &+& E\left\{P\left( U_{n^{*}} \leq x \mid J,V,T_{k-3}\right)I\left( J \leq \left( 1+\epsilon\right)T_{k-3}\right)\right\} \\ &=& E\left\{\left( x+r_{4n^{*}}\right)I\left( J>\left( 1+\epsilon\right)T_{k-3}\right)\right\} \\ &+& E\left\{P\left( U_{n^{*}} \leq x \mid J,V,T_{k-3}\right)I\left( J \leq \left( 1+\epsilon\right)T_{k-3}\right)\right\} \\ &=& x + r_{5n^{*}}, \end{array} $$

where \(\mid r_{5n^{*}} \mid \leq P\left \{J \leq \left (1+\epsilon \right )T_{k-3}\right \}+C_{0}E\left (J^{1/2}/n^{*} + J/{n^{*}}^{2}\right )+P\left \{J > \left (1+\epsilon \right )T_{k-3}\right \}\). It remains to show that as \(n^{*} \rightarrow \infty \) and for small 𝜖 > 0

\(P\left \{J \leq \left (1+\epsilon \right )T_{k-3}\right \} = o(1)\) and \(E\left (J/{n^{*}}^{2}\right ) = o(1).\)

Now, \(E\left (J/{n^{*}}^{2}\right ) \leq E\left (c_{k-2}n^{*}\overline {Y}_{{T}_{k-3}}/{n^{*}}^{2}\right ) \leq c_{k-2}{n^{*}}^{-1}\left (E \mid \overline {Y}_{{T}_{k-3}}-1 \mid +1\right ) = o(1)\) by Lemma 2 and for all sufficiently large n, small 𝜖 > 0 and k ≥ 4,

$$ \begin{array}{@{}rcl@{}} P\left\{J \leq \left( 1+\epsilon\right)T_{k-3}\right\} &\leq& P\left\{c_{k-2}n^{*}\overline{Y}_{{T}_{k-3}} \leq \left( 1+\epsilon\right)T_{k-3}\right\} \\ &\leq& P\left\{c_{k-2}n^{*}\overline{Y}_{{T}_{k-3}} \leq c_{k-2}n^{*}\left( 1-\epsilon\right)\right\} \\ &+& P\left\{c_{k-2}n^{*}\left( 1-\epsilon\right) \leq \left( 1+\epsilon\right)T_{k-3}+1\right\}\\ &\leq& P\left\{\overline{Y}_{{T}_{k-3}}-1 \leq -\epsilon\right\} \\ &+& P\left\{T_{k-3}-c_{k-3}n^{*} \geq \epsilon c_{k-3}n^{*}\right\}\left(\text{since } c_{k-2}>c_{k-3}\right) \\ &\leq& P\left\{A_{k-3}\left( \epsilon\right)\right\} + P\left\{C_{k-3}\left( \epsilon\right)\right\}, \end{array} $$
(52)

which is o(1) by Lemma 1. When k = 3, it follows directly from equation (52) that \(P\left \{J \leq \left (1+\epsilon \right )T_{k-3}\right \} = o(1)\) by noting that \(\lim \sup T_{0}/{c_{1}n^{*}} < 1\). The proof is thus completed. We can now prove the result given in equation (41). It follows from Lemmas 4-6 that

$$ \begin{array}{@{}rcl@{}} E\left( T_{k-1}\right) &=& E\left( L_{k-1}\right) + o(1) \\ &=& E\left( \left[n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right]\right) + o(1)\\ &=& E\left( \left[n^{*}\overline{Y}_{{L}_{k-2}}+\eta\right]-\left( n^{*}\overline{Y}_{{L}_{k-2}}+\eta\right)\right)\\ &+& E\left( n^{*}\overline{Y}_{{L}_{k-2}}+\eta-n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right)\\ &+& E\left( n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right) \\ &+& E\left( \left[n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right]-\left[n^{*}\overline{Y}_{{L}_{k-2}}+\eta\right]\right) + o(1) \\ & =& -\frac{1}{2} + E\left( n^{*}\overline{Y}_{{L}_{k-2}}-n^{*}\overline{Y}_{{T}_{k-2}}\right) + n^{*} - \frac{Var\left( Y_{1}\right)}{c_{k-2}} + \eta \\ &+& E\left( \left[n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right]-\left[n^{*}\overline{Y}_{{L}_{k-2}}+\eta\right]\right) + o(1), \end{array} $$

and so equation (41) will follow if we show that

$$ \begin{array}{@{}rcl@{}} E\left( n^{*}\overline{Y}_{{L}_{k-2}}-n^{*}\overline{Y}_{{T}_{k-2}}\right) &=& o(1), \\ E\left( \left[n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right]-\left[n^{*}\overline{Y}_{{L}_{k-2}}+\eta\right]\right) &=& o(1). \end{array} $$

Let \(B_{k-2}\) be the set defined in Lemma 1. Then

$$ \begin{array}{@{}rcl@{}} \Bigl| E\left( n^{*}\overline{Y}_{{L}_{k-2}}-n^{*}\overline{Y}_{{T}_{k-2}}\right) \Bigr| &=& \Bigl| {\int}_{B_{k-2}} \left( n^{*}\overline{Y}_{{L}_{k-2}}-n^{*}\overline{Y}_{{T}_{k-2}}\right) dP \Bigr| \\ &\leq& {\int}_{B_{k-2}} n^{*}\overline{Y}_{{L}_{k-2}} dP + {\int}_{B_{k-2}} n^{*}\overline{Y}_{{T}_{k-2}}dP\\ &\leq& {\int}_{B_{k-2}} n^{*}S_{{T}_{k-2}}dP + {\int}_{B_{k-2}} n^{*}\overline{Y}_{{T}_{k-2}}dP \end{array} $$

which is o(1) by the Cauchy-Schwarz inequality and Lemmas 2 and 3 as before. Hence equation (41) is proved.

The result given in equation (11) follows directly from equation (41) since \(M_{k-1} = T_{k-1} + 1\). Now we prove the results given in equations (42) and (43), for which we need some more lemmas. □

Lemma 7

As \(n^{*} \rightarrow \infty \), \(Var\left (T_{k-1}\right ) = Var\left (L_{k-1}\right ) + o(1)\). Note that this lemma can be proved by using Lemma 3.

Lemma 8

A simple conditional argument by conditioning on \(Y_{1},...,Y_{{T}_{k-3}}\) gives

$$ \begin{array}{@{}rcl@{}} Var\left( \overline{Y}_{{T}_{k-2}}\right) &=& Var\left\{\frac{T_{k-3}}{T_{k-2}}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} - Var\left( Y_{1}\right)E\left( \frac{T_{k-3}}{T_{k-2}^{2}}\right) \\ &+& Var\left( Y_{1}\right)E\left( \frac{1}{T_{k-2}}\right). \end{array} $$

Lemma 9

The following are true as \(n^{*} \rightarrow \infty \)

$$ E\left( T_{k-2}^{-1}\right) = \left( c_{k-2}n^{*}\right)^{-1}\left( 1+o(1)\right), $$
(53)
$$ E\left( T_{k-3}\right)/n^{*} \leq C_{0}, $$
(54)
$$ E\left( T_{k-2}^{-1}{\sum}_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right) = o\left( {n^{*}}^{-1/2}\right), $$
(55)
$$ E\left( \mid \overline{Y}_{{T}_{k-2}}-1 \mid^{3}\right) = o\left( {n^{*}}^{-1}\right). $$
(56)

Proof of Lemma 9

We first prove the result given in equation (53). Let \(c_{k-2}\left (\epsilon \right )\) be the set defined in Lemma 1 with 𝜖 > 0. Then it follows from Lemma 1 that

$$ \begin{array}{@{}rcl@{}} \Biggl| E\left( \frac{c_{k-2}n^{*}}{T_{k-2}}\right)-1 \Biggr| &\leq& {\int}_{{c^{c}_{k-2}}\left( \epsilon\right)} \Biggl| \frac{c_{k-2}n^{*}}{T_{k-2}}-1 \Biggr| dP + {\int}_{{c_{k-2}}\left( \epsilon\right)} \frac{c_{k-2}n^{*}}{T_{k-2}} dP + {\int}_{{c_{k-2}}\left( \epsilon\right)} 1 dP \\ &\leq& C_{0}\epsilon + c_{k-2}n^{*}P\left( c_{k-2}\left( \epsilon\right)\right) + P\left( c_{k-2}\left( \epsilon\right)\right) \rightarrow 0, \end{array} $$

as \(n^{*} \rightarrow \infty \) and then \(\epsilon \rightarrow 0\). This proves equation (53). The assertion (54) is true when k = 3 and can be proved in a similar way as (53) by using Lemma 3 when k ≥ 4. Assertion (55) follows from Lemmas 1-3 by noting that

$$ \begin{array}{@{}rcl@{}} \left| E\left\{\frac{\surd{n^{*}}}{T_{k-2}}{\sum}_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right\} \right| &=& \left| E\left\{\left( \frac{\surd{n^{*}}}{T_{k-2}}-\frac{1}{c_{k-2}\surd{n^{*}}}\right){\sum}_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right\} \right| \\ &=& \left| E\left[\left\{I\left( c_{k-2}^{c}\left( \epsilon\right)\right)+I\left( c_{k-2}\left( \epsilon\right)\right)\right\}\left( \frac{\surd{n^{*}}}{T_{k-2}}-\frac{1}{c_{k-2}\surd{n^{*}}}\right) \right.\right. \\ &\times&\left.\left. {\sum}_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right] \right| \\ &\leq& C_{0}\epsilon E\left( \surd{T_{k-3}} \mid \overline{Y}_{{T}_{k-3}}-1 \mid\right) \\ &+& C_{0}\surd{n^{*}}E\left\{I\left( c_{k-2}\left( \epsilon\right)\right)\left( \surd{T_{k-3}} \mid \overline{Y}_{{T}_{k-3}}-1 \mid\right)\right\} \\ &+& C_{0}/\surd{n^{*}}E\left\{\surd{T_{k-3}}\surd{T_{k-3}} \mid \overline{Y}_{{T}_{k-3}}-1 \mid I\left( c_{k-2}\left( \epsilon\right)\right)\right\} \\ &\leq& C_{0}\epsilon E\left( \surd{T_{k-3}} \mid \overline{Y}_{{T}_{k-3}}-1 \mid\right) \\ &+& C_{0}\surd{n^{*}}\left\{P\left( c_{k-2}\left( \epsilon\right)\right)\left( T_{k-3} \mid \overline{Y}_{{T}_{k-3}}-1 \mid^{2}\right)\right\}^{1/2} \\ &+& C_{0}/\surd{n^{*}}\left\{E\left( T_{k-3}I\left( c_{k-2}\left( \epsilon\right)\right)\right)E\left( T_{k-3} \mid \overline{Y}_{{T}_{k-3}}-1 \mid^{2}\right)\right\}^{1/2} \\ &=& C_{0}\epsilon E\left( \surd{T_{k-3}} \mid \overline{Y}_{{T}_{k-3}}-1 \mid\right) + o(1) \rightarrow 0 \text{as} \epsilon \rightarrow 0. \end{array} $$

To prove equation (56), note that

$$ \begin{array}{@{}rcl@{}} E\left( \mid \overline{Y}_{{T}_{k-2}}-1 \mid^{3}\right) &=& E\left\{\mid \overline{Y}_{{T}_{k-2}}-1 \mid^{3}I\left( c_{k-2}^{c}\left( \epsilon\right)\right)\right\} \\ &+& E\left\{\mid \overline{Y}_{{T}_{k-2}}-1 \mid^{3}I\left( c_{k-2}\left( \epsilon\right)\right)\right\} \\ &\leq& \frac{1}{\left\{\left( 1-\epsilon\right)c_{k-2}n^{*}\right\}^{3/2}}E\left\{\left( \surd{T_{k-2}} \mid \overline{Y}_{{T}_{k-2}}-1 \mid\right)^{3}\right\} \\ &+& \left\{E\left( \overline{Y}_{{T}_{k-2}}-1\right)^{4}\right\}^{3/4}\left\{P\left( c_{k-2}\left( \epsilon\right)\right)\right\}^{1/4} \\ &=& o(1). \end{array} $$

by Lemmas 1 and 2. This completes the proof of Lemma 9. □

Lemma 10

As \(n^{*} \rightarrow \infty \)

$$ \Bigl| Var\left\{\frac{1}{T_{k-2}}\sum\nolimits_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right\} - Var\left( Y_{1}\right)E\left( \frac{T_{k-3}}{T_{k-2}^{2}}\right) \Bigr| = o\left( {n^{*}}^{-1}\right) $$

Proof of Lemma 10

From Lemma 9, the LHS of the above equation can be written as

$$ \Biggl| E\left\{\frac{1}{T_{k-2}^{2}}\left( \sum\nolimits_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right)^{2}\right\} - \left\{E\left( \frac{1}{T_{k-2}}\sum\nolimits_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right)\right\}^{2} - Var\left( Y_{1}\right)E\left( \frac{T_{k-3}}{T_{k-2}^{2}}\right) \Biggr| $$
$$ = \Biggl| E\left\{\frac{1}{T_{k-2}^{2}}\left( \sum\nolimits_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right)^{2}\right\} - Var\left( Y_{1}\right)E\left( \frac{T_{k-3}}{T_{k-2}^{2}}\right) + o\left( {n^{*}}^{-1}\right) \Biggr|, $$

and so it remains to show that

$$E\left\{\frac{1}{T_{k-2}^{2}}\left( \sum\nolimits_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right)^{2}\right\} - Var\left( Y_{1}\right)E\left( \frac{T_{k-3}}{T_{k-2}^{2}}\right) = o\left( {n^{*}}^{-1}\right).$$

It follows from Lemmas 1-3 and Wald’s equation that

\(E\left \{\frac {1}{T_{k-2}^{2}}\left ({\sum }_{i=1}^{T_{k-3}}\left (Y_{i}-1\right )\right )^{2}\right \} - Var\left (Y_{1}\right )E\left (\frac {T_{k-3}}{T_{k-2}^{2}}\right )\)

$$ \begin{array}{@{}rcl@{}} &\leq& E\left\{T_{k-3}\left( \overline{Y}_{{T}_{k-3}}-1\right)^{2}I\left( c_{k-2}\left( \epsilon\right)\right)\right\} + \frac{E\left( {\sum}_{i=1}^{T_{k-3}}\left( Y_{i}-1\right)\right)^{2}}{\left\{\left( 1-\epsilon\right)c_{k-2}n^{*}\right\}^{2}} \\ &+& Var\left( Y_{1}\right)P\left( c_{k-2}\left( \epsilon\right)\right) - \frac{Var\left( Y_{1}\right)}{\left\{\left( 1+\epsilon\right)c_{k-2}n^{*}\right\}^{2}}E\left\{T_{k-3}I\left( c_{k-2}^{c}\left( \epsilon\right)\right)\right\} \\ &=& \frac{Var\left( Y_{1}\right)}{\left\{\left( 1-\epsilon\right)c_{k-2}n^{*}\right\}^{2}}E\left( T_{k-3}\right) - \frac{Var\left( Y_{1}\right)}{\left\{\left( 1+\epsilon\right)c_{k-2}n^{*}\right\}^{2}}E\left( T_{k-3}\right) + o\left( n{^{*}}^{-1}\right) \\ &=& \frac{4\epsilon Var\left( Y_{1}\right)}{\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-2}\right\}^{2}}\frac{E\left( T_{k-3}\right)}{{n^{*}}^{2}} + o\left( {n^{*}}^{-1}\right), \end{array} $$

which is \(o\left ({n^{*}}^{-1}\right )\) since 𝜖 can be arbitrarily small and \(E\left (T_{k-3}\right )/n^{*} \leq C_{0}\) by Lemma 9. A similar argument shows that

\(E\left \{\frac {1}{T_{k-2}^{2}}\left ({\sum }_{i=1}^{T_{k-3}}\left (Y_{i}-1\right )\right )^{2}\right \} - Var\left (Y_{1}\right )E\left (\frac {T_{k-3}}{T_{k-2}^{2}}\right )\)

$$ \begin{array}{@{}rcl@{}} &\geq& -\frac{4\epsilon Var\left( Y_{1}\right)}{\left\{\left( 1-\epsilon\right)\left( 1+\epsilon\right)c_{k-2}\right\}^{2}}\frac{E\left( T_{k-3}\right)}{{n^{*}}^{2}} + o\left( {n^{*}}^{-1}\right) \\ &=& o\left( {n^{*}}^{-1}\right) \text{ as } \epsilon \rightarrow 0. \end{array} $$

The proof of Lemma 10 is thus completed. Now we prove the result given in equation (42). From Lemma 7, it suffices to show that

\(Var\left (L_{k-1}\right ) = n^{*}Var\left (Y_{1}\right )/c_{k-2} + o\left (n^{*}\right ).\)

From Lemmas 8-10, we have

$$ \begin{array}{@{}rcl@{}} {n^{*}}^{-2}Var\left( n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right) &=& Var\left( \overline{Y}_{{T}_{k-2}}\right) \\ &=& Var\left\{\frac{T_{k-3}}{T_{k-2}}\left( \overline{Y}_{{T}_{k-3}}-1\right)\right\} - Var\left( Y_{1}\right)E\left( \frac{T_{k-3}}{T_{k-2}^{2}}\right) \\ &+& Var\left( Y_{1}\right)E\left( \frac{1}{T_{k-2}}\right) \\ &=& o\left( {n^{*}}^{-1}\right) + Var\left( Y_{1}\right)\frac{1}{c_{k-2}n^{*}}\left( 1+o(1)\right) \\ &=& Var\left( Y_{1}\right)\frac{1}{c_{k-2}n^{*}} + o\left( {n^{*}}^{-1}\right), \end{array} $$

and so it remains to show that

\(Var\left (L_{k-1}\right ) - Var\left (n^{*}\overline {Y}_{{T}_{k-2}}+\eta \right ) = o\left (n^{*}\right ).\)

This follows directly from the following three facts:

$$ \begin{array}{@{}rcl@{}} Var\left( L_{k-1}\right) &=& Var\left\{\left( n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right) + \left( L_{k-1}-n^{*}\overline{Y}_{{T}_{k-2}}-\eta\right)\right\} \\ &=& Var\left( n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right) + Var\left( L_{k-1}-n^{*}\overline{Y}_{{T}_{k-2}}-\eta\right) \\ &+& Cov\left( n^{*}\overline{Y}_{{T}_{k-2}}, L_{k-1}-n^{*}\overline{Y}_{{T}_{k-2}}\right), \end{array} $$

\(Var\left (L_{k-1}-n^{*}\overline {Y}_{{T}_{k-2}}-\eta \right ) \leq 1\) since \(\mid L_{k-1}-n^{*}\overline {Y}_{{T}_{k-2}}-\eta \mid \leq 1\), and

$$ \begin{array}{@{}rcl@{}} Cov\left( n^{*}\overline{Y}_{{T}_{k-2}}, L_{k-1}-n^{*}\overline{Y}_{{T}_{k-2}}\right) &\leq& \left\{Var\left( n^{*}\overline{Y}_{{T}_{k-2}}\right)Var\left( L_{k-1}-n^{*}\overline{Y}_{{T}_{k-2}}-\eta\right)\right\}^{1/2} \\ &\leq& \left\{Var\left( n^{*}\overline{Y}_{{T}_{k-2}}\right)\right\}^{1/2} \\ &=& \left\{n^{*}Var\left( Y_{1}\right)/c_{k-2} + o\left( n^{*}\right)\right\}^{1/2} \\ &=& o\left( n^{*}\right). \end{array} $$

The proof of equation (42) is thus completed. To prove the result given in equation (43), note that

\(E\mid T_{k-1}-E\left (T_{k-1}\right ) \mid ^{3} = E\mid T_{k-1}-n^{*}+Var\left (Y_{1}\right )/c_{k-2}+1/2-\eta +o(1)\mid ^{3},\)

and so it suffices to show that \(E\mid T_{k-1}-n^{*} \mid ^{3} = o\left ({n^{*}}^{2}\right )\). Let \(B_{k-1}\) be the set defined in Lemma 1. Then

\({\int \limits }_{B_{k-1}}\mid T_{k-1}-n^{*} \mid ^{3}dP \leq {\int \limits }_{B_{k-1}}\left (T_{k-1}+n^{*}\right )^{3}dP = o\left ({n^{*}}^{2}\right )\)

by Lemmas 1 and 3 and

$$ \begin{array}{@{}rcl@{}} {\int}_{B^{c}_{k-1}}\mid T_{k-1}-n^{*} \mid^{3}dP &=& {\int}_{B^{c}_{k-1}}\mid \left[n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right]-n^{*} \mid^{3}dP \\ &\leq& E\mid \left[n^{*}\overline{Y}_{{T}_{k-2}}+\eta\right]-n^{*} \mid^{3} \\ &\leq& E\left\{\left( \mid n^{*}\overline{Y}_{{T}_{k-2}}-n^{*} \mid+\eta+1\right)^{3}\right\} \\ &=& o\left( {n^{*}}^{2}\right). \end{array} $$

Since \(E\left \{\left ({n^{*}}\mid \overline {Y}_{{T}_{k-2}}-1 \mid \right )^{3}\right \} = o\left ({n^{*}}^{2}\right )\) by Lemma 9, we have \(E\mid T_{k-1}-n^{*} \mid ^{3} = o\left ({n^{*}}^{2}\right )\) and the proof is completed. Now, to prove result (12) of Theorem 2, we proceed as follows:

$$ \begin{array}{@{}rcl@{}} E\left\{L(\mu, \overline{X}_{M_{k-1}})\right\} &=& 1 - E\left[P\left\{{\chi^{2}_{1}} \leq M_{k-1}\lambda d^{2}\right\}\right] \\ &=& 1 - E\left[{\Psi}\left\{M_{k-1}\lambda d^{2}\right\}\right] \end{array} $$
(57)

where Ψ(·) is the cdf of a \({\chi ^{2}_{1}}\) variate. We can expand this equation by a Taylor series expansion [see Hall (1981), page 1231] as follows:

$$ \begin{array}{@{}rcl@{}} E\left\{L(\mu, \overline{X}_{{M}_{k-1}})\right\} &=& {\Psi}\left( \lambda d^{2}E(M_{k-1})\right)\\ &&+ \frac{\lambda^{2}d^{4}}{2} E\left\{M_{k-1} - E(M_{k-1})\right\}^{2}{\Psi}^{\prime\prime}\left( \lambda d^{2}E(M_{k-1})\right)\\ &&+ o(d^{2}) \end{array} $$
(58)

where,

$$ \begin{array}{@{}rcl@{}} {\Psi}(x) &=& \frac{1}{\sqrt{2\pi}}{{\int}_{0}^{x}} e^{-y/2} y^{-1/2} dy \\ {\Psi}^{\prime}(x) &=& \frac{1}{\sqrt{2\pi}} e^{-x/2} x^{-1/2} = \xi(x) \\ {\Psi}^{\prime\prime}(x) &=& -\frac{1}{2} \xi(x)\left\{x^{-1} + 1\right\} \end{array} $$

Now since we have,

\({\Psi }\left (\lambda d^{2}E(M_{k-1})\right ) = \alpha + \left (\lambda d^{2}E(M_{k-1}) - z^{2}\right ){\Psi }^{\prime }(z^{2}) + o(d^{2})\),

\(\lambda d^{2}E(M_{k-1}) - z^{2} = \frac {n^{*}d^{4}}{z^{2}}\left \{n^{*}-\frac {2}{c_{k-2}}+\frac {1}{2}+\eta \right \} - z^{2}\) and

\(\frac {\lambda ^{2}d^{4}}{2}E\left \{M_{k-1}-E\left (M_{k-1}\right )\right \}^{2} = \frac {\lambda ^{3}d^{2}z^{2}}{2c_{k-2}} + o(d^{2})\). Using these equations and equation (58), we can obtain result (12) of Theorem 2. □

Proof of Theorem 3

N2 from (19) can be written as,

$$ N_{2} = \inf\left\{n\geq m\geq 2 ; {\sum}_{j=1}^{n-1} Z_{j} \leq \frac{n^{\frac{2t+s}{2p+s}+1}}{n_{0}^{\frac{2t+s}{2p+s}}}\right\} $$

where \(n_{0}\) comes from (18), \({\sum }_{j=1}^{n-1} Z_{j} = \frac {n\lambda }{\widehat {\lambda }_{n}} \sim \chi ^{2}_{n-1}\) and \(Z_{j} \sim {\chi ^{2}_{1}}\). \(N_{2}\) can also be written as J + 1 w.p. 1, where

$$ J = \inf\left\{n \geq m-1; {\sum}_{j=1}^{n} Z_{j} \leq \frac{n^{\frac{2t+s}{2p+s}+1}}{{n_{0}}^{\frac{2t+s}{2p+s}}}\left( 1+\frac{1}{n}\right)^{\frac{2t+s}{2p+s}}\right\} $$
(59)

Now, comparing (59) with equation (1.1) of Woodroofe (1977), we get

\(\alpha = \frac{2t+s}{2p+s} + 1\), \(\beta = \frac{2p+s}{2t+s}\), \(c={n_{0}}^{-\frac{2t+s}{2p+s}}\), \(\mu = 1\), \(\lambda = n_{0}\), \(L(n)=1+\frac {2t+s}{(2p+s)n}+o\left (\frac {1}{n}\right )\), \(L_{0} = \frac{2t+s}{2p+s}\), \(\tau^{2} = 2\), \(a = 1/2\). Result (20) now follows from his Theorem 2.4 for m > (4p + 2t)/(2t + s). Now we have,

$$ R_{N_{2}}(c) = \frac{2ct\lambda^{p}{n_{0}^{t}}}{s} E\left\{\left( \frac{n_{0}}{N_{2}}\right)^{s/2}\right\} + c{n_{0}^{t}}\lambda^{p} E\left\{\left( \frac{N_{2}}{n_{0}}\right)^{t}\right\} $$
(60)

Let us evaluate \(E\left \{\left (\frac {n_{0}}{N_{2}}\right )^{s/2}\right \}\). Write \(\frac {N_{2}}{n_{0}} = x\), so that \(\frac {n_{0}}{N_{2}} = x^{-1}\). We can write,

$$ E\left\{\left( \frac{n_{0}}{N_{2}}\right)^{s/2}\right\} = E(x^{-s/2}) $$

Expanding \(f(x) = x^{-s/2}\) around x = 1 by a Taylor series, we obtain,

$$ E\left\{\left( \frac{n_{0}}{N_{2}}\right)^{s/2}\right\} = 1 - \frac{s}{2n_{0}} E(N_{2}-n_{0}) + \frac{s}{4}\left( \frac{s}{2}+1\right) E\left\{\frac{(N_{2}-n_{0})^{2}}{{n_{0}^{2}}}\right\} $$
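The coefficients here are simply the second-order Taylor coefficients of \(f(x) = x^{-s/2}\) at x = 1, since \(f^{\prime}(1) = -s/2\) and \(f^{\prime\prime}(1)/2 = \frac{s}{4}\left(\frac{s}{2}+1\right)\):

$$ x^{-s/2} = 1 - \frac{s}{2}\left(x-1\right) + \frac{s}{4}\left(\frac{s}{2}+1\right)\left(x-1\right)^{2} + \cdots , $$

and substituting \(x = N_{2}/n_{0}\) and taking expectations term by term gives the expression above.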

From Theorem 3 of Ghosh and Mukhopadhyay (1979) and Theorem 2.3 of Woodroofe (1977), we define V, where

$$ V = \frac{(J-n_{0})^{2}}{n_{0}} \rightarrow \frac{(2p+s)^{2}}{4t+2s} {\chi_{1}^{2}} $$
(61)

for J defined in (59); V is also uniformly integrable if \(m > (4p + 2s)/(2t + s)\), which suffices for our purpose. Moreover, following Chaturvedi et al. (2019), we can easily obtain

\(E\left [\frac {(J-n_{0})^{2}}{J}I\left (J > \frac {n_{0}}{2}\right )\right ] = \frac {(2p+s)^{2}}{2(2t+s)} +o(1)\) and \(E\left [\frac {(J-n_{0})^{2}}{J}I\left (J < \frac {n_{0}}{2}\right )\right ] = o(1)\). Now, for m > 2p + s, by Lemma 2.3 of Woodroofe (1977), we can obtain

$$ E\left\{\left( \frac{n_{0}}{N_{2}}\right)^{s/2}\right\} = 1 - \frac{s}{2n_{0}}\!\left[\frac{2p+s}{2t+s}\nu - \frac{2p+s}{2t+s}\left( \!1+\frac{2p+s}{2t+s}\!\right)\!\right] + \frac{(2p+s)^{2}}{2n_{0}(2t+s)}\frac{s}{4}\left( \frac{s+2}{2}\right) $$
(62)

Similarly, we can compute

$$ E\left\{\left( \frac{N_{2}}{n_{0}}\right)^{t}\right\} = 1 + \frac{t}{n_{0}}\left[\frac{2p+s}{2t+s}\nu - \frac{2p+s}{2t+s}\left( 1+\frac{2p+s}{2t+s}\right)\right] + \frac{(2p+s)^{2}}{2n_{0}(2t+s)}\frac{t(t-1)}{2} $$
(63)

Result (21) of Theorem 3 can now be obtained by putting (62) and (63) in (60). □
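As with \(N_{1}\), the distribution of \(N_{2}\) can be explored by simulation directly through its \(\chi^{2}_{1}\) representation above, without generating inverse Gaussian data. The following is a minimal sketch, not the authors' code, using hypothetical values of the loss exponents (p, s, t), the pilot size m and the optimal sample size \(n_{0}\).

import numpy as np

def simulate_N2(n0, p, s, t, m, rng):
    # N_2 = inf{ n >= m : sum_{j=1}^{n-1} Z_j <= n^(q+1) / n0^q },  q = (2t+s)/(2p+s),
    # with Z_1, Z_2, ... i.i.d. chi^2_1, as in the representation above.
    q = (2 * t + s) / (2 * p + s)
    n = m
    S = rng.chisquare(df=1, size=n - 1).sum()       # Z_1 + ... + Z_{n-1}
    while S > n ** (q + 1) / n0 ** q:
        S += rng.chisquare(df=1)                    # add Z_n and move from n to n + 1
        n += 1
    return n

rng = np.random.default_rng(11)
n0, p, s, t, m = 150.0, 1.0, 2.0, 1.0, 5            # hypothetical values
N2 = np.array([simulate_N2(n0, p, s, t, m, rng) for _ in range(5000)])
print(N2.mean(), n0)                                # E(N_2) is close to n_0 up to second-order terms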

Proof of Theorem 4

Result (24) of Theorem 4 can be obtained by using Lemmas 1-6 from the proof of Theorem 2 above, with some obvious changes in notation. Also, the proof of result (25) of Theorem 4 is similar to the proof of result (21) of Theorem 3, except that in Theorem 4 we work with a k-stage procedure (Subsection 3.2) instead of a purely sequential procedure (Subsection 3.1). Hence we omit the proof. □

Proof of Theorem 5

The proof of this theorem can be obtained along the lines of Theorem 3, with obvious changes in notation, by replacing the notation and preliminaries of Section 3 (Subsection 3.1) with those of Section 4 (Subsection 4.1). Hence we omit the proof. □

Proof of Theorem 6

The proof of this theorem is exactly similar to the proof of Theorem 4, with some obvious changes in notation and preliminaries. Hence we omit the proof. □



Cite this article

Chaturvedi, A., Bapat, S.R. & Joshi, N. Purely Sequential and k-Stage Procedures for Estimating the Mean of an Inverse Gaussian Distribution. Methodol Comput Appl Probab 22, 1193–1219 (2020). https://doi.org/10.1007/s11009-019-09765-x

