An empirical likelihood check with varying coefficient fixed effect model with panel data

Research Article · Published in the Journal of the Korean Statistical Society

Abstract

Semiparametric models are often used to analyze panel data because they offer a good trade-off between parsimony and flexibility. In this paper, we investigate a fixed effect model with a possibly varying coefficient component. On the basis of the empirical likelihood method, we estimate the coefficient functions and construct their confidence intervals. The estimation procedures are easily implemented. An important inference problem for the varying coefficient model is to check whether the regression coefficient functions are in fact constant. We develop checking procedures by constructing empirical likelihood ratio statistics and establishing the corresponding Wilks theorems. Finally, numerical simulations and a real data analysis are presented to assess the finite sample performance.


References

  • Ahkim, M., & Verhasselt, A. (2018). Testing for constancy in varying coefficient models. Communications in Statistics - Theory and Methods, 47(4), 890–911.
  • Chen, R., Li, G. R., & Feng, S. (2020). Testing for covariance matrices in time-varying coefficient panel data models with fixed effects. Journal of the Korean Statistical Society, 49, 82–116.
  • Chen, S. X., Härdle, W., & Li, M. (2003). An empirical likelihood goodness-of-fit test for time series. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(3), 663–678.
  • Chen, S. X., & Cui, H. J. (2006). On Bartlett correction of empirical likelihood in the presence of nuisance parameters. Biometrika, 93, 215–220.
  • DiCiccio, T., Hall, P., & Romano, J. (1991). Empirical likelihood is Bartlett-correctable. The Annals of Statistics, 19, 1053–1061.
  • Fan, J., & Zhang, J. T. (2000). Two-step estimation of functional linear models with applications to longitudinal data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(2), 303–322.
  • Fan, J., Zhang, C., & Zhang, J. (2001). Generalized likelihood ratio statistics and Wilks phenomenon. The Annals of Statistics, 29(1), 153–193.
  • Fan, J., & Huang, T. (2005). Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli, 11(6), 1031–1057.
  • Feng, S., Li, G., Peng, H., & Tong, T. (2021). Panel data varying coefficient models with interactive fixed effects. Statistica Sinica, 31, 935–957.
  • Green, C. D. A. (2015). Three essays on nonparametric and semiparametric methods and their applications (Doctoral dissertation).
  • He, B. Q., Hong, X. J., & Fan, G. L. (2017). Block empirical likelihood for partially linear panel data models with fixed effects. Statistics & Probability Letters, 123, 128–138.
  • Hoover, D. R., Rice, J. A., Wu, C. O., & Yang, L. P. (1998). Nonparametric smoothing estimates of time-varying coefficient models with longitudinal data. Biometrika, 85(4), 809–822.
  • Huang, J. Z., Wu, C. O., & Zhou, L. (2002). Varying-coefficient models and basis function approximations for the analysis of repeated measurements. Biometrika, 89(1), 111–128.
  • Li, D., Chen, J., & Gao, J. (2011). Non-parametric time-varying coefficient panel data models with fixed effects. The Econometrics Journal, 14(3), 387–408.
  • Li, G., Peng, H., & Tong, T. (2013). Simultaneous confidence band for nonparametric fixed effects panel data models. Economics Letters, 119(3), 229–232.
  • Li, G. R., Lian, H., Lai, P., & Peng, H. (2015). Variable selection for fixed effects varying coefficient models. Acta Mathematica Sinica, English Series, 31(1), 91–110.
  • Li, N., Xu, X., & Liu, X. (2011). Testing the constancy in varying-coefficient regression models. Metrika, 74(3), 409–438.
  • Owen, A. (1990). Empirical likelihood ratio confidence regions. The Annals of Statistics, 18(1), 90–120.
  • Owen, A. B. (1998). Empirical likelihood. Chapman and Hall/CRC.
  • Su, L., & Ullah, A. (2006). Profile likelihood estimation of partially linear panel data models with fixed effects. Economics Letters, 92(1), 75–81.
  • Su, L., & Lu, X. (2013). Nonparametric dynamic panel data models: Kernel estimation and specification testing. Journal of Econometrics, 176(2), 112–133.
  • Wang, H. J., Zhu, Z., & Zhou, J. (2009). Quantile regression in partially linear varying coefficient models. The Annals of Statistics, 37(6B), 3841–3866.
  • Wang, H., Zhong, P.-S., Cui, Y., & Li, Y. (2018). Unified empirical likelihood ratio tests for functional concurrent linear models and the phase transition from sparse to dense functional data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80, 343–364.
  • Wu, C. O., Chiang, C. T., & Hoover, D. R. (1998). Asymptotic confidence regions for kernel smoothing of a varying-coefficient model with longitudinal data. Journal of the American Statistical Association, 93(444), 1388–1402.
  • Xue, L., & Zhu, L. (2007). Empirical likelihood for a varying coefficient model with longitudinal data. Journal of the American Statistical Association, 102(478), 642–654.
  • Yang, Y., Li, G., & Peng, H. (2014). Empirical likelihood of varying coefficient errors-in-variables models with longitudinal data. Journal of Multivariate Analysis, 127, 1–18.
  • Zhao, P., & Xue, L. (2010). Empirical likelihood inferences for semiparametric varying-coefficient partially linear models with longitudinal data. Communications in Statistics - Theory and Methods, 39(11), 1898–1914.
  • Zhao, P., & Yang, Y. (2015). Semiparametric empirical likelihood tests in varying coefficient partially linear models with repeated measurements. Statistical Methodology, 23, 73–87.
  • Zhu, L., Qin, Y., & Xu, W. (2007). The empirical likelihood goodness-of-fit test for regression model. Science in China, Series A, 50, 829–840.

Funding

This work was supported by the National Social Science Foundation of China (No. 17BTJ026).

Author information

Correspondence to Wanbin Li.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Proofs of the lemmas and theorems

We begin with some preliminary results on the naive empirical likelihood ratio. The proofs of the theorems are obtained by extending these results, which we state as the following lemmas together with their proofs.
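
For concreteness, the quantity studied below can be computed directly: the empirical likelihood ratio \({\tilde{\ell }}(\theta (t_0))=2\sum _{i=1}^n\log \big (1+\lambda ^T{\tilde{\eta }}_i(\theta (t_0))\big )\) requires solving the dual equation \(\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))/\big (1+\lambda ^T{\tilde{\eta }}_i(\theta (t_0))\big )=0\) for the Lagrange multiplier \(\lambda\). The following Python sketch (our own illustration, not code from the paper) evaluates the ratio by damped Newton iteration, given the auxiliary vectors stacked as rows of an array:

```python
import numpy as np

def el_log_ratio(eta, n_iter=100, tol=1e-10):
    """Empirical likelihood log-ratio 2 * sum_i log(1 + lam' eta_i).

    eta : (n, p) array whose rows are the auxiliary vectors eta_i(theta(t0)).
    Solves sum_i eta_i / (1 + lam' eta_i) = 0 for lam by damped Newton steps.
    """
    n, p = eta.shape
    lam = np.zeros(p)
    for _ in range(n_iter):
        denom = 1.0 + eta @ lam
        if np.any(denom <= 1.0 / n):   # safeguard: keep the implied weights valid
            lam *= 0.5
            continue
        scaled = eta / denom[:, None]
        g = scaled.sum(axis=0)         # dual score: sum_i eta_i / (1 + lam' eta_i)
        if np.linalg.norm(g) < tol:
            break
        H = -scaled.T @ scaled         # dual Hessian: -sum_i eta_i eta_i' / (...)^2
        lam = lam - np.linalg.solve(H, g)
    return 2.0 * np.sum(np.log1p(eta @ lam))
```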

Lemma 1

Under conditions (C1)–(C6), as \(n\rightarrow \infty\), we have

$$\begin{aligned} \text {E}\Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]=b(t_0)+o_p(1), \end{aligned}$$

and

$$\begin{aligned} \text {Cov} \Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ] = \Sigma ^*(t_0)+o_p(1), \end{aligned}$$

where \(\Sigma ^*(t_0)=\nu ^2(t_0)\Gamma (t_0)\).

Proof

Denote \({\tilde{\zeta }}(t_0) = \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\) and denote its l-th component by \({\tilde{\zeta }}_l(t_0)= \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_{i,l}(\theta (t_0))\). Together with Eqs. (2.5) and (2.6), we have

$$\begin{aligned} \begin{aligned} {\tilde{\eta }}_{i,l}(\theta (t_0))&=\sum _{j=1}^{m_i}\sum _{k=1}^p \Big \{X_{ij,l}^*X_{ij,k}^{*T}\Big [\theta _k(t_{ij}) -\theta _k(t_0)\Big ]+X_{ij,l}^*\epsilon _{ij}\Big \}K_h(t_{ij}-t_0)\\&\triangleq \sum _{j=1}^{m_i} {\tilde{\zeta }}_{i,l}(t_{ij},t_0)K_h(t_{ij}-t_0), \end{aligned} \end{aligned}$$

and

$$\begin{aligned} E[{\tilde{\eta }}_{i,l}(\theta (t_0))] =m_ih\sum _{k=1}^p\int \Big [\theta _k(t_0+hu)-\theta _k(t_0)\Big ]\times \gamma _{lk}(t_0+hu)f(t_0+hu)K(u)du. \end{aligned}$$

By condition (C1) and a Taylor expansion, an algebraic calculation of the right-hand side of the above formula leads to

$$\begin{aligned} \text {E}\Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]=b(t_0)+o_p(1). \end{aligned}$$

For the second assertion of Lemma 1, the (l, k)-th component of the covariance matrix is

$$\begin{aligned} \text {Cov}\Big [{\tilde{\zeta }}_l(t_0),{\tilde{\zeta }}_k(t_0)\Big ] =E\Big [{\tilde{\zeta }}_l(t_0){\tilde{\zeta }}_k(t_0)\Big ]-E\Big [{\tilde{\zeta }}_l(t_0)\Big ]E\Big [{\tilde{\zeta }}_k(t_0)\Big ]. \end{aligned}$$

Plugging in the definition of \({\tilde{\zeta }}(t_0)\), the first term on the right-hand side is

$$\begin{aligned} \begin{aligned}&E\bigg \{ \bigg [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_{il}(\theta (t_0)) \bigg ] \bigg [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_{ik}(\theta (t_0)) \bigg ]\bigg \}\\&\quad =\frac{1}{Nh} \bigg \{ \sum _{i=1}^n E[{\tilde{\eta }}_{il}(\theta (t_0)){\tilde{\eta }}_{ik}(\theta (t_0))]+ \sum _{i_1 \ne i_2} E [{\tilde{\eta }}_{i_1l}(\theta (t_0)){\tilde{\eta }}_{i_2k}(\theta (t_0))]\bigg \}. \end{aligned} \end{aligned}$$
(A.1)

As to the first term in (A.1),

$$\begin{aligned} \begin{aligned} {\tilde{\eta }}_{i,l}(\theta (t_0)){\tilde{\eta }}_{i,k}(\theta (t_0))&=\sum _{j=1}^{m_i}{\tilde{\zeta }}_{i,l}(t_{ij},t_0){\tilde{\zeta }}_{i,k}(t_{ij},t_0) K_h^2(t_{ij}-t_0)\\&\quad + \sum _{j_1 \ne j_2}{\tilde{\zeta }}_{i,l}(t_{ij_1},t_0){\tilde{\zeta }}_{i,k}(t_{ij_2},t_0)K_h(t_{ij_1}-t_0)K_h(t_{ij_2}-t_0). \end{aligned} \end{aligned}$$
(A.2)

As \(n\rightarrow \infty\), a straightforward calculation shows that

$$\begin{aligned} \begin{aligned}&E\Big [{\tilde{\zeta }}_{i,l}(t_{ij},t_0){\tilde{\zeta }}_{i,k}(t_{ij},t_0)|t_{ij}=t\Big ] \\&\quad = \sum _{m=1}^p\bigg \{ \Big [ \theta _m(t)-\theta _m(t_0)\Big ]^2 \times E\Big [(X_{ij,l}^*X_{ij,m}^{*})(X_{ij,k}^*X_{ij,m}^{*})|t_{ij}=t\Big ]\bigg \}\\&\qquad +\sigma ^2(t)E\Big [(X_{ij,l}^*X_{ij,k}^{*})|t_{ij}=t\Big ]+\sum _{m_1 \ne m_2}\bigg \{\Big [ \theta _{m_1}(t)-\theta _{m_1}(t_0)\Big ] \Big [ \theta _{m_2}(t)-\theta _{m_2}(t_0)\Big ] \\&\qquad \times E\Big [(X_{ij,l}^*X_{ij,m_1}^{*})(X_{ij,k}^*X_{ij,m_2}^{*})|t_{ij}=t\Big ]\bigg \}. \end{aligned} \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{aligned}&E\bigg [ \sum _{j=1}^{m_i}{\tilde{\zeta }}_{i,l}(t_{ij},t_0){\tilde{\zeta }}_{i,k}(t_{ij},t_0)K_h^2(t_{ij}-t_0)\bigg ]\\&\quad =\sum _{j=1}^{m_i}\int E\Big [{\tilde{\zeta }}_{i,l}(t_{ij},t_0){\tilde{\zeta }}_{i,k}(t_{ij},t_0)|t_{ij}=t\Big ]K_h^2(t-t_0)f(t)dt\\&\quad ={m_i}h\sigma ^2(t_0)\gamma _{lk}(t_0)f(t_0)\int K^2(t)dt+o_p(h). \end{aligned} \end{aligned}$$
(A.3)

A similar derivation for the second term in (A.2) gives

$$\begin{aligned} E\Big [{\tilde{\zeta }}_{i,l}(t_{ij_1},t_0){\tilde{\zeta }}_{i,k}(t_{ij_2},t_0)|t_{ij_1}=t_1,t_{ij_2}=t_2\Big ]\rightarrow \rho _{\epsilon }(t_0) \gamma _{lk}(t_0), \end{aligned}$$

and taking a further expectation yields

$$\begin{aligned} \begin{aligned}&E\bigg [ \sum _{j_1 \ne j_2}{\tilde{\zeta }}_{i,l}(t_{ij_1},t_0){\tilde{\zeta }}_{i,k}(t_{ij_2},t_0)K_h(t_{ij_1}-t_0)K_h(t_{ij_2}-t_0)\bigg ]\\&\quad ={m_i}({m_i}-1)h^2\rho _{\epsilon }(t_0)\gamma _{lk}(t_0)f^2(t_0)+o_p({m_i}({m_i}-1)h^2). \end{aligned} \end{aligned}$$
(A.4)

Combining (A.3) and (A.4) with (A.2), when \(n\rightarrow \infty\), we have

$$\begin{aligned} \begin{aligned}&(Nh)^{-1}\sum _{i=1}^nE\Big [{\tilde{\eta }}_{i,l}(\theta (t_0)){\tilde{\eta }}_{i,k}(\theta (t_0))\Big ]\\&\quad =\sigma ^2(t_0)\gamma _{lk}(t_0) f(t_0)\int K^2(t)dt+ h^*\gamma _{lk}(t_0)\rho _{\epsilon }(t_0)f^2(t_0)+o_p(h^*), \end{aligned} \end{aligned}$$
(A.5)

where \(h^*=hN^{-1}(\sum _{i=1}^nm_i^2-N)\). With \(h=N^{-1/5}h_0\) and \(\lim _{n\rightarrow \infty }N^{-6/5}\sum _{i=1}^n m_i^2=\lambda\), we have

$$\begin{aligned} hN^{-1}\bigg (\sum _{i=1}^nm_i^2-N\bigg )\rightarrow \lambda h_0,~~(n\rightarrow \infty ). \end{aligned}$$

Following an argument similar to (A.13) in Wu et al. (1998), it follows that

$$\begin{aligned}&\bigg |(Nh)^{-1}\sum _{{i_1}\ne i_2} E\Big [{\tilde{\eta }}_{i_1,l}(\theta (t_0)){\tilde{\eta }}_{i_2,k}(\theta (t_0))\Big ]-E\Big [ (Nh)^{-1/2} \sum _{i=1}^n{\tilde{\eta }}_{i,l}(\theta (t_0))\Big ] \nonumber \\&\qquad \times E\Big [ (Nh)^{-1/2} \sum _{i=1}^n{\tilde{\eta }}_{i,k}(\theta (t_0))\Big ]\bigg |\rightarrow 0. \end{aligned}$$
(A.6)

Combining (A.5) and (A.6), the second result of Lemma 1 follows. \(\square\)

Lemma 2

Suppose that conditions (C1)–(C6) hold. Then

$$\begin{aligned} \max _{1\le i\le n} \Vert {\tilde{\eta }}_i(\theta (t_0))\Vert =o_p((Nh)^{\frac{1}{2}}). \end{aligned}$$

Proof

The proof is similar to that of Lemma A.1 in Xue and Zhu (2007). \(\square\)

Lemma 3

Suppose that conditions (C1)–(C6) hold. Then, for a given \(t_0\),

$$\begin{aligned} {\sqrt{Nh}}\Big ({\tilde{\theta }}_{\text {NAEL}}(t_0)-\theta (t_0)\Big )-B(t_0)\xrightarrow {{\mathcal {L}}} N(0,\Sigma (t_0)), \end{aligned}$$
(A.7)

where \(B(t_0)=f(t_0)^{-1}\Gamma ^{-1}(t_0)b(t_0)\), \(\Sigma (t_0)=f(t_0)^{-2}\nu ^2(t_0)\Gamma ^{-1}(t_0)\), \(\Gamma (t_0)\) is defined in condition (C6), \(b(t_0)=(b_1(t_0), \ldots ,b_p(t_0))^T\), and \(b_l(t_0)\) and \(\nu ^2(t_0)\) are defined by

$$\begin{aligned} b_l(t_0)=h_0^{5/2}\sum _{k=1}^p\Big [\theta _k'(t_0)\gamma _{lk}'(t_0)f(t_0)+\theta _k'(t_0)\gamma _{lk}(t_0)f'(t_0)+\frac{1}{2}\theta _k''(t_0)\gamma _{lk}(t_0)f(t_0)\Big ]\int u^2K(u)du, \end{aligned}$$

and

$$\begin{aligned} \nu ^2(t_0)=\sigma ^2(t_0)f(t_0)\int K^2(t)dt+\lambda h_0 \rho _{\epsilon }(t_0)f^2(t_0), \end{aligned}$$

where \(h_0\) and \(\lambda\) are defined in conditions (C1) and (C2).

Proof

By an argument similar to the proof of (A.1) in Wu et al. (1998) and a straightforward calculation, we can derive that

$$\begin{aligned} {\tilde{\theta }}_{\text {NAEL}}(t_0)-\theta (t_0) = (f(t_0))^{-1}\Gamma ^{-1}(t_0)\bigg [\frac{1}{\sqrt{Nh}}\sum _{i=1}^n {\tilde{\eta }}_i(\theta (t_0))\bigg ]+o_p(1). \end{aligned}$$
(A.8)

By Lemma 1, we can derive that

$$\begin{aligned} \frac{1}{\sqrt{Nh}}\sum _{i=1}^n {\tilde{\eta }}_i(\theta (t_0))\xrightarrow {{\mathcal {L}}}N(b(t_0),\Sigma ^*(t_0)). \end{aligned}$$
(A.9)

Combining (A.8) and (A.9), we obtain the result of Lemma 3. \(\square\)

Lemma 4

Suppose that conditions (C2)–(C6) hold, \(Nh\rightarrow \infty\), and \(h=o(N^{-1/5})\). If \(\theta (t_0)\) is the true parameter value, then

$$\begin{aligned} {\tilde{\ell }}(\theta (t_0)){\mathop {\longrightarrow }\limits ^{{\mathcal {L}}}}\chi _p^2, \end{aligned}$$
(A.10)

where \({\mathop {\longrightarrow }\limits ^{{\mathcal {L}}}}\) stands for convergence in distribution and \(\chi _p^2\) denotes the chi-squared distribution with p degrees of freedom.

Proof

The proof of Lemma 4 rests on the following three asymptotic results:

$$\begin{aligned}&\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\sim N(0,\Sigma ^*(t_0)),\end{aligned}$$
(A.11)
$$\begin{aligned}&\frac{1}{Nh}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0)){\tilde{\eta }}_i^T(\theta (t_0))\rightarrow \Sigma ^*(t_0), \end{aligned}$$
(A.12)
$$\begin{aligned}&{\tilde{\ell }} (\theta (t_0)) \approx \Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]^T {\tilde{D}}(\theta (t_0))^{-1}\Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]. \end{aligned}$$
(A.13)

First, by Lemma 1, when \(Nh\rightarrow \infty\) and \(Nh^5\rightarrow 0\), we obtain (A.11).

Second, according to the proof of (A.6) in Lemma 1, we have (A.12).

Third, applying a Taylor expansion to the equation \(\frac{1}{Nh}\sum _{i=1}^n\frac{{\tilde{\eta }}_i(\theta (t_0))}{1+\lambda ^T {\tilde{\eta }}_i(\theta (t_0))}=0\) gives

$$\begin{aligned} \begin{aligned} 0&=\sum _{i=1}^n\frac{{\tilde{\eta }}_i(\theta (t_0))}{1+\lambda ^T {\tilde{\eta }}_i(\theta (t_0))}\\&=\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))-\sum _{i=1}^n \lambda ^T {\tilde{\eta }}_i(\theta (t_0)){\tilde{\eta }}_i(\theta (t_0))+ \sum _{i=1}^n\frac{{\tilde{\eta }}_i(\theta (t_0))[\lambda ^T {\tilde{\eta }}_i(\theta (t_0))]^2}{1+\lambda ^T {\tilde{\eta }}_i(\theta (t_0))}. \end{aligned} \end{aligned}$$
(A.14)

By the proof of (2.16) in Owen (1998), the last term on the right-hand side of the foregoing equation has a norm bounded by

$$\begin{aligned} \Vert \lambda \Vert ^2\Vert {\tilde{\eta }}_i(\theta (t_0))\Vert ^3 |1+\lambda ^T {\tilde{\eta }}_i(\theta (t_0))|^{-1}= o_p((Nh)^{\frac{1}{2}})O_p((Nh)^{-1})O_p(1)=o_p((Nh)^{-\frac{1}{2}}). \end{aligned}$$

Then,

$$\begin{aligned} \lambda =\Big [\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0)){\tilde{\eta }}_i^T(\theta (t_0))\Big ]^{-1} \sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))+o_p((Nh)^{-\frac{1}{2}}). \end{aligned}$$
(A.15)

Applying a Taylor expansion to (2.8) again, we obtain

$$\begin{aligned} \begin{aligned} {\tilde{\ell }} (\theta (t_0))&=2\sum _{i=1}^n\log (1+\lambda ^T{\tilde{\eta }}_i(\theta (t_0)))\\&=2\sum _{i=1}^n\lambda ^T{\tilde{\eta }}_i(\theta (t_0))-\sum _{i=1}^n[\lambda ^T {\tilde{\eta }}_i(\theta (t_0))]^2+o_p(1)\\&=2\Big [\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]^T{\tilde{D}}(\theta (t_0))^{-1} \Big [\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]\\&\quad -\Big [\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]^T{\tilde{D}}(\theta (t_0))^{-1} {\tilde{D}}(\theta (t_0)){\tilde{D}}(\theta (t_0))^{-1} \Big [\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]+o_p(1). \end{aligned} \end{aligned}$$
(A.16)

Here \({\tilde{D}}(\theta (t_0))=(Nh)^{-1}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0)){\tilde{\eta }}_i^T(\theta (t_0))\). Following the proof of (A.12) in Wu et al. (1998), we obtain that \({\tilde{D}}(\theta (t_0))\xrightarrow {{\mathcal {P}}}\Sigma ^*(t_0)\). Combining (A.11), (A.12) and (A.13), it can be shown that

$$\begin{aligned} {\tilde{\ell }} (\theta (t_0)) \approx \Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]^T {\tilde{D}}(\theta (t_0))^{-1}\Big [ \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\tilde{\eta }}_i(\theta (t_0))\Big ]\xrightarrow {{\mathcal {L}}}\chi _p^2. \end{aligned}$$
(A.17)

\(\square\)
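
The Wilks-type limit in Lemma 4 also lends itself to a quick numerical sanity check: under the null, the empirical distribution of the ratio statistic should track \(\chi _p^2\). The sketch below is an illustration under simplified assumptions, with i.i.d. mean-zero vectors standing in for the auxiliary vectors \({\tilde{\eta }}_i(\theta (t_0))\) and el_log_ratio reused from above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, reps = 200, 2, 2000

el_stats = np.empty(reps)
for r in range(reps):
    # i.i.d. mean-zero vectors as stand-ins for eta_i(theta(t0)) under the null
    eta = rng.standard_normal((n, p)) * np.array([1.0, 2.0])
    el_stats[r] = el_log_ratio(eta)

crit = stats.chi2.ppf(0.95, df=p)                    # 95% quantile of chi2_p
print("empirical size:", (el_stats > crit).mean())   # should be close to 0.05
```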

To prove Theorem 5, we state the following Lemma 5, which can be verified similarly to the proof of Lemma 7.2 in Zhu et al. (2007).

Lemma 5

Suppose that conditions (C1)–(C6) hold. Then, under \(H_{1n}\):

  (a) if \(n^\gamma C_n\rightarrow a\) (a is a constant), \(0\le \gamma <\frac{1}{2}\), then \(\sqrt{n}{\tilde{\psi }}_1\rightarrow \infty\) as \(n \rightarrow \infty\);

  (b) if \(n^{\frac{1}{2}} C_n\rightarrow 1\), then \(\sqrt{n}{\tilde{\psi }}_1\xrightarrow {{\mathcal {L}}}N(\mu ,\sigma ^2)\), where \(\mu = E\left\{ W(X, \theta )X^{(2)T}\theta ^{(2)}(t)\right\}\).

\(\square\)

Proof of Theorem 1

First, by a proof similar to that of Lemma 1, we can derive that

$$\begin{aligned} \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\hat{\eta _i}}(\theta (t_0))\sim N(0,\Sigma ^*(t_0)). \end{aligned}$$
(A.18)

Specifically, by the definition of the estimator \({\hat{\theta }}_{\text {RAEL}}(t_0)\), we have

$$\begin{aligned} {\hat{\theta }}_{\text {RAEL}}(t_0)-\theta (t_0)={\hat{V}}^{-1}(t_0)\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\hat{\eta _i}}(\theta (t_0)). \end{aligned}$$

Furthermore, following the proof of Lemma 3, it can be shown that

$$\begin{aligned} {\hat{\theta }}_{\text {RAEL}}(t_0)-\theta (t_0) = (f(t_0))^{-1}\Gamma ^{-1}(t_0)\bigg [\frac{1}{\sqrt{Nh}}\sum _{i=1}^n {\hat{\eta }}_i(\theta (t_0))\bigg ]+o_p(1). \end{aligned}$$
(A.19)

This completes the proof of Theorem 1. \(\square\)

Proof of Theorem 2

We prove that, under the stated regularity conditions,

$$\begin{aligned} {\hat{\ell }}(\theta (t_0))\sim \chi _p^2. \end{aligned}$$

Now, we have

$$\begin{aligned} \begin{aligned}&\frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\hat{\eta }}_i(\theta (t_0)) \\&\quad = \frac{1}{\sqrt{Nh}}\sum _{i=1}^n\sum _{j=1}^{m_i}X_{ij}^*\Big \{Y_{ij}^*-X_{ij}^{*T}\theta (t_0)-X_{ij}^{*T}\big [{\tilde{\theta }}_{\text {NAEL}}(t_{ij})- {\tilde{\theta }}_{\text {NAEL}}(t_0)\big ]\Big \}K_h(t_{ij}-t_0)\\&\quad =\frac{1}{\sqrt{Nh}}\sum _{i=1}^n\sum _{j=1}^{m_i}X_{ij}^*\epsilon _{ij}+\frac{1}{\sqrt{Nh}}\sum _{i=1}^n\sum _{j=1}^{m_i} X_{ij}^*X_{ij}^{*T}\big [\theta (t_{ij})-\theta (t_0)\big ]K_h(t_{ij}-t_0)\\&\qquad -\frac{1}{\sqrt{Nh}}\sum _{i=1}^n\sum _{j=1}^{m_i}X_{ij}^{*}\Big \{ X_{ij}^{*T}\big [{\tilde{\theta }}_{\text {NAEL}}(t_{ij})- {\tilde{\theta }}_{\text {NAEL}}(t_0)\big ]\Big \}K_h(t_{ij}-t_0)\\&\quad \triangleq I_1+I_2+I_3. \end{aligned} \end{aligned}$$
(A.20)

By a Taylor expansion and the fact that \(\theta _l'(t_0)-{\hat{\theta }}_l'(t_0)\xrightarrow {P}0\), we derive \(I_2+I_3\xrightarrow {P}0\). Together with the facts that

$$\begin{aligned} \frac{1}{\sqrt{Nh}}\sum _{i=1}^n{\hat{\eta _i}}(\theta (t_0))\sim N(0,\Sigma ^*(t_0)). \end{aligned}$$
(A.21)

and

$$\begin{aligned} \frac{1}{Nh}\sum _{i=1}^n{\hat{\eta _i}}(\theta (t_0)){\hat{\eta _i^T}}(\theta (t_0))\rightarrow \Sigma ^*(t_0), \end{aligned}$$
(A.22)

Theorem 2 follows by the same argument as in the proof of Lemma 4. \(\square\)

Proof of Theorem 3

From the fixed-effect-corrected technique in Sect. 2, we have \(E\big [Y_{ij}^*-X_{ij}^{*T}\theta |X_{ij}^*\big ] = 0\), and we further define the auxiliary variable

$$\begin{aligned} \psi _i = \sum _{j=1}^{m_i} w(X_{ij}^*,\theta )(Y_{ij}-X_{ij}^{*T}\theta ). \end{aligned}$$

Let \({\tilde{\theta }}_n\) be an estimator of \(\theta\), and write \({\tilde{w}}_{ij} \triangleq w(X_{ij}^*,{\tilde{\theta }}_n)\) and \(w_{ij} \triangleq w(X_{ij}^*,\theta )\); then

$$\begin{aligned} {\tilde{\psi }}_i= \sum _{j=1}^{m_i} {\tilde{w}}_{ij}(Y_{ij}-X_{ij}^{*T}{\tilde{\theta }}_n). \end{aligned}$$
(A.23)

Consider

$$\begin{aligned} \begin{aligned} \frac{1}{\sqrt{N}} \sum _{i=1}^{n}{\tilde{\psi }}_i&= \frac{1}{\sqrt{N}} \sum _{i=1}^{n}\sum _{j=1}^{m_i} {\tilde{w}}_{ij}(Y_{ij}-X_{ij}^{*T}{\tilde{\theta }}_n)\\&= \frac{1}{\sqrt{N}} \sum _{i=1}^{n}\sum _{j=1}^{m_i} \big [ \varepsilon _{ij} + X_{ij}^{*T}(\theta _0-{\tilde{\theta }}_n)\big ] \big [{\tilde{w}}_{ij} + {\tilde{w}}_{ij}'(\theta _0-{\tilde{\theta }}_n)\big ]\\&= \frac{1}{\sqrt{N}} \sum _{i=1}^{n}\sum _{j=1}^{m_i}\big [ {\tilde{w}}_{ij}\varepsilon _{ij}+{\tilde{w}}_{ij}X_{ij}^{*T}(\theta _0-{\tilde{\theta }}_n)\big ]+o_P(1). \end{aligned} \end{aligned}$$

By an argument similar to the proof of Lemma 7.1 in Zhu et al. (2007), we have

$$\begin{aligned} \frac{1}{\sqrt{N}} \sum _{i=1}^{n} {\tilde{\psi }}_i\rightarrow N(0, \sigma ^2), \end{aligned}$$
(A.24)

where \(\sigma ^2 =\lim _{n\rightarrow \infty }n^{-1}\sum _{i=1}^{n}\sum _{j=1}^{m_i} E \big [ {\tilde{w}}_{ij}\varepsilon _{ij}-{\tilde{w}}_{ij}X_{ij}^{*T}L(X_{ij}^*,\theta _0)\big ]^2\) and \(L(X_{ij}^*,\theta _0)\) is the asymptotic expression of \(({\tilde{\theta }}_n-\theta _0)\) given in Zhu et al. (2007). Then, by the law of large numbers, we can obtain that

$$\begin{aligned} \frac{1}{ N} \sum _{i=1}^{n} {\tilde{\psi }}_i{\tilde{\psi }}_i^T\rightarrow \sigma _1^2, \end{aligned}$$
(A.25)

where \(\sigma _1^2 = \lim _{N\rightarrow \infty }N^{-1}\sum _{i=1}^{n}\sum _{j=1}^{m_i} E ( {\tilde{w}}_{ij}\varepsilon _{ij})^2\).

By the standard argument in the empirical likelihood literature, one can show that

$$\begin{aligned} \lambda = O_P(N^{-\frac{1}{2}}). \end{aligned}$$
(A.26)

Then, combined with (A.25), we derive that

$$\begin{aligned} \begin{aligned} {\tilde{R}}_n&= \sum _{i=1}^n \lambda ^T {\tilde{\psi }}_i {\tilde{\psi }}_i^T\lambda +o_p(1)\\&=\Big (N^{-1/2}\sum _{i=1}^{n} {\tilde{\psi }}_i\Big )^T\Big (N^{-1}\sum _{i=1}^n{\tilde{\psi }}_i{\tilde{\psi }}_i^T\Big )^{-1} \Big (N^{-1/2}\sum _{i=1}^{n} {\tilde{\psi }}_i\Big )+o_p(1)\\&=\frac{\sigma ^2}{\sigma _1^{2}}\Big (\sigma ^{-1}N^{-1/2} \sum _{i=1}^{n} {\tilde{\psi }}_i\Big )^2+o_p(1). \end{aligned} \end{aligned}$$
(A.27)

Therefore, by (A.24) and (A.27), we have

$$\begin{aligned} {\tilde{R}}_n \rightarrow \frac{\sigma ^2}{\sigma _1^{2}}\chi ^2_{1}, \end{aligned}$$
(A.28)

where \(\chi ^2_{1}\) denotes a chi-squared random variable with one degree of freedom. \(\square\)

Proof of Theorem 4

According to the definition of \({\hat{w}}_{ij}\) in Sect. 3, we have

$$\begin{aligned} {\hat{w}}_{ij} = 1 - \sum _{u=1}^n\sum _{v=1}^{m_u}\Big [K_h(t_{uv}-t_{ij})X_{uv}^{*T}\Big ] \Big [\sum _{u=1}^n\sum _{v=1}^{m_u}K_h(t_{uv}-t_{ij})X_{uv}^*X_{uv}^{*T}\Big ]^{-1}X_{ij}^*. \end{aligned}$$
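
In implementation terms, \({\hat{w}}_{ij}\) is one minus a kernel-weighted least-squares projection of \(X_{ij}^*\) evaluated at \(t_{ij}\). A minimal sketch over the pooled observations (our own variable names; the Epanechnikov kernel is an illustrative choice, not one prescribed by the paper):

```python
import numpy as np

def kernel(u):
    # Epanechnikov kernel, an illustrative choice
    return 0.75 * np.clip(1.0 - u ** 2, 0.0, None)

def projection_weights(t, X, h):
    """w_ij = 1 - [sum K_h(t_uv - t_ij) X_uv'] [sum K_h(t_uv - t_ij) X_uv X_uv']^{-1} X_ij.

    t : (N,) pooled observation times t_ij;  X : (N, p) pooled covariates X*_ij.
    """
    N, p = X.shape
    w = np.empty(N)
    for j in range(N):
        k = kernel((t - t[j]) / h) / h       # K_h(t_uv - t_ij) over all (u, v)
        kX = k[:, None] * X
        b = kX.sum(axis=0)                   # sum K_h X_uv' (as a vector)
        A = kX.T @ X                         # sum K_h X_uv X_uv'
        w[j] = 1.0 - b @ np.linalg.solve(A, X[j])
    return w
```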

Returning to the proof, we have

$$\begin{aligned} \begin{aligned} \frac{1}{\sqrt{N}}\sum _{i=1}^n{\hat{\psi }}_i&= \frac{1}{\sqrt{N}}\sum _{i=1}^n\sum _{j=1}^{m_i}{\hat{w}}_{ij}(Y_{ij}-X_{ij}^{*T}\theta _n)\\&= \frac{1}{\sqrt{N}}\sum _{i=1}^n\sum _{j=1}^{m_i}\big [{\hat{w}}_{ij}\varepsilon _{ij}-{\hat{w}}_{ij}X_{ij}^{*T}(\theta _n-\theta _0)\big ]+o_p(1). \end{aligned} \end{aligned}$$
(A.29)

Since \({\hat{w}}_{ij}\) is orthogonal to \(X_{ij}^*\) by construction, the second term in (A.29) tends to 0 as \(n\rightarrow \infty\). By the central limit theorem, we conclude that

$$\begin{aligned} \frac{1}{\sqrt{N}}\sum _{i=1}^n{\hat{\psi }}_i \rightarrow N(0,\sigma _1^2). \end{aligned}$$
(A.30)

Furthermore, invoking the proof of Theorem 3, one can show that

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^n{\hat{\psi }}_i{\hat{\psi }}_i^T \rightarrow \sigma _1^2~~\text {and}~~\lambda = O_p(N^{-1/2}). \end{aligned}$$

Hence, together with the decomposition

$$\begin{aligned} {\hat{R}}_n =\Big (\sigma _1^{-1}\frac{1}{\sqrt{N}}\sum _{i=1}^{n} {\hat{\psi }}_i\Big )^2+o_p(1), \end{aligned}$$

we complete the proof of Theorem 4. \(\square\)
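
Putting the pieces of this proof together, the pivotal statistic can be assembled in a few lines. The sketch below is our own illustration under simplifying assumptions (a scalar auxiliary variable, reusing projection_weights and el_log_ratio from the earlier sketches; the inputs theta_n and ids are names we introduce here):

```python
import numpy as np
from scipy import stats

def constancy_test(t, X, Y, theta_n, h, ids, level=0.05):
    """Sketch of the chi-squared calibrated constancy check behind Theorem 4.

    t, Y : (N,) pooled times and responses;  X : (N, p) pooled covariates X*_ij;
    theta_n : estimator of the constant coefficient vector under the null;
    ids : (N,) subject labels.  Returns (R_n, reject at the given level).
    """
    w = projection_weights(t, X, h)          # hat{w}_ij from the sketch above
    contrib = w * (Y - X @ theta_n)          # hat{w}_ij (Y_ij - X_ij' theta_n)
    # subject-level auxiliary variables hat{psi}_i = sum over j of contributions
    psi = np.array([contrib[ids == g].sum() for g in np.unique(ids)])
    R_n = el_log_ratio(psi[:, None])         # EL ratio for a scalar mean of zero
    return R_n, R_n > stats.chi2.ppf(1.0 - level, df=1)
```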

Proof of Theorem 5

By Lemma 5 and an argument similar to the proof of Theorem 4, we can obtain Theorem 5. \(\square\)

Cite this article

Li, W., Xue, L. & Zhao, P. An empirical likelihood check with varying coefficient fixed effect model with panel data. J. Korean Stat. Soc. 51, 198–222 (2022). https://doi.org/10.1007/s42952-021-00136-2
