
Estimation for partially linear additive regression with spatial data


Abstract

This paper studies a partially linear additive regression model with spatial data. A new procedure is developed for estimating the unknown parameters and the additive components of the regression. The proposed method is suitable for high-dimensional data: there is no need to solve a restricted minimization problem, and no iterative algorithm is required. Under mild regularity assumptions, the asymptotic distribution of the estimator of the unknown parameter vector is established, and the asymptotic distributions of the estimators of the unknown functions are derived as well. Finite-sample properties of our procedures are studied through Monte Carlo simulations. A real data example on spatial soil data is used to illustrate the proposed methodology.



Corresponding author

Correspondence to Tang Qingguo.


Appendix: Proofs

In this section, let \(C>0\) denote a generic constant whose value may change from line to line. For a matrix \(B=(b_{ij})\), set \(\Vert B\Vert _{\infty }=\max _{i}\sum _{j}|b_{ij}|\) and \(|B|_{\infty }=\max _{i,j}|b_{ij}|\). For a vector \(v=(v_{1},\ldots ,v_{k})^{T}\), set \(\Vert v\Vert _{\infty }=\sum _{j=1}^{k}|v_{j}|\) and \(|v|_{\infty }=\max _{1\le j\le k}|v_{j}|\).

Let \(f_{r\nu }(x_{r})=\chi _{r\nu }(x_{r})f_{r}(x_{r})\), \({\bar{f}}_{r\nu }=(\sum _{i=1}^{m}\sum _{j=1}^{n}f_{r\nu }(X_{ijr})) /(\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr}))\),

$$\begin{aligned} {\check{f}}_{r\nu }(x_{r})=f_{r}(x_{r\nu })\breve{A}_{r\nu 0}(x_{r})+h_{r0}f_{r}'(x_{r\nu })\breve{A}_{r\nu 1}(x_{r})+\cdots +h_{r0}^{p_{r}}f_{r}^{(p_{r})}(x_{r\nu })\breve{A}_{r\nu p_{r}}(x_{r})/p_{r}!, \end{aligned}$$

\(\bar{{\check{f}}}_{r\nu }=(\sum _{i=1}^{m}\sum _{j=1}^{n}{\check{f}}_{r\nu }(X_{ijr})) /(\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr}))\) and \(f_{r\nu }^{*}(X_{ijr})=[f_{r\nu }(X_{ijr})-\chi _{r\nu }(X_{ijr}){\bar{f}}_{r\nu }] -[{\check{f}}_{r\nu }(X_{ijr})-\chi _{r\nu }(X_{ijr})\bar{{\check{f}}}_{r\nu }]\). Noting that \(f_{r}(x_{r})=\sum _{\nu =1}^{M_{rN}}f_{r\nu }(x_{r})\) and \(f_{r\nu }(X_{ijr})=\chi _{r\nu }(X_{ijr}){\bar{f}}_{r\nu }+[{\check{f}}_{r\nu }(X_{ijr}) -\chi _{r\nu }(X_{ijr})\bar{{\check{f}}}_{r\nu }]+f_{r\nu }^{*}(X_{ijr})\), we get that

$$\begin{aligned} f_{r}(X_{ijr})=\pmb {A}_{r}^{T}(X_{ijr})\pmb {a}_{0r}+{\bar{F}}_{rM_{rN}}\chi _{rM_{rN}}(X_{ijr})+\sum _{\nu =1}^{M_{rN}}f_{r\nu }^{*}(X_{ijr}), \end{aligned}$$
(A.1)

where \(\pmb {a}_{0r}=(\pmb {a}_{0r1}^{T},\ldots ,\pmb {a}_{0rM_{rN}}^{T})^{T}\) with \(\pmb {a}_{0r\nu }=({\bar{f}}_{r\nu },h_{r0}f_{r}'(x_{r\nu }),\ldots ,h_{r0}^{p_{r}}f_{r}^{(p_{r})}(x_{r\nu })/p_{r}!)^{T}\) for \(\nu =1,\ldots ,M_{rN}-1\), \(\pmb {a}_{0rM_{rN}}=(h_{r0}f_{r}'(x_{rM_{rN}}),\ldots ,h_{r0}^{p_{r}}f_{r}^{(p_{r})}(x_{rM_{rN}})/p_{r}!)^{T}\), and \({\bar{F}}_{rM_{rN}}=(\sum _{i=1}^{m}\sum _{j=1}^{n}f_{r}(X_{ijr}))/(\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{rM_{rN}}(X_{ijr}))\). Let \(f_{kr\nu }(x_{r})=\chi _{r\nu }(x_{r})f_{kr}(x_{r})\), and let \(f_{kr\nu }^{*}(X_{ijr})\), \(k=1,\ldots ,d_{1}\), and \({\bar{F}}_{krM_{rN}}\) be defined analogously to \(f_{r\nu }^{*}(X_{ijr})\) and \({\bar{F}}_{rM_{rN}}\). Denote \(\vec {Y}={\bar{Y}}-E(Y)\), \(\vec {\pmb {Z}}=\bar{\pmb {Z}}-E(\pmb {Z})\) and \(\breve{\pmb {Z}}_{ij}=(\breve{Z}_{ij1},\ldots ,\breve{Z}_{ijd_{1}})^{T}\) with

$$\begin{aligned} \breve{Z}_{ijk}=\sum _{r=1}^{d_{2}}({\bar{F}}_{krM_{rN}}\chi _{rM_{rN}}(X_{ijr}) +\sum _{\nu =1}^{M_{rN}}f_{kr\nu }^{*}(X_{ijr}))-\vec {Z}_{k}+V_{ijk}, \ \ k=1,\ldots ,d_{1}.\nonumber \\ \end{aligned}$$
(A.2)

Let

$$\begin{aligned} \breve{Y}_{ij}=\sum _{r=1}^{d_{2}}({\bar{F}}_{rM_{rN}}\chi _{rM_{rN}}(X_{ijr}) +\sum _{\nu =1}^{M_{rN}}f_{r\nu }^{*}(X_{ijr}))+\vec {\pmb {Z}}^{T}\pmb {\beta }_{0} -\vec {Y}+\varepsilon _{ij}. \end{aligned}$$
(A.3)

Then, we have

$$\begin{aligned} \hat{\pmb {\beta }}-\pmb {\beta }_{0}=\breve{\pmb {\Gamma }}_{N}^{-1}\Big [\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{\pmb {Z}}_{ij}\breve{Y}_{ij}-\breve{\pmb {W}}_{N}\pmb {A}_{N}^{-1}\Big (\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})\breve{Y}_{ij}\Big )\Big ], \end{aligned}$$
(A.4)

with

$$\begin{aligned} \breve{\pmb {\Gamma }}_{N}=\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{\pmb {Z}}_{ij} \breve{\pmb {Z}}_{ij}^{T}-\breve{\pmb {W}}_{N} \pmb {A}_{N}^{-1}\breve{\pmb {W}}_{N}^{T}, \ \ \ \breve{\pmb {W}}_{N}=\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{\pmb {Z}}_{ij}\pmb {A} (\pmb {X}_{ij})^{T}. \end{aligned}$$
(A.5)
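
In matrix form, (A.4)–(A.5) amount to a partialled-out least squares: both \(\breve{Y}_{ij}\) and \(\breve{\pmb {Z}}_{ij}\) are regressed off the spline basis \(\pmb {A}(\pmb {X}_{ij})\) before the parametric step, which is why no iterative backfitting is needed. A minimal numerical sketch in Python, assuming the double index \((i,j)\) is stacked into rows and \(\pmb {A}_{N}=\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})\pmb {A}(\pmb {X}_{ij})^{T}\) (consistent with the quadratic form expanded in the proof of Lemma A.3 below); the arrays are synthetic placeholders, not the paper's data model:

```python
import numpy as np

# Minimal numerical sketch of (A.4)-(A.5); synthetic placeholder arrays.
rng = np.random.default_rng(0)
N, d1, K = 500, 2, 12                  # sites, dim(Z), total basis size
Z_breve = rng.normal(size=(N, d1))     # rows: Z-breve_{ij}^T, cf. (A.2)
A_mat = rng.normal(size=(N, K))        # rows: A(X_ij)^T
Y_breve = rng.normal(size=N)           # entries: Y-breve_{ij}, cf. (A.3)

A_N = A_mat.T @ A_mat                  # sum_ij A(X_ij) A(X_ij)^T (assumed)
W_breve = Z_breve.T @ A_mat            # (A.5): sum_ij Z-breve_ij A(X_ij)^T
Gamma_breve = Z_breve.T @ Z_breve - W_breve @ np.linalg.solve(A_N, W_breve.T)

# (A.4): beta-hat minus beta_0
rhs = Z_breve.T @ Y_breve - W_breve @ np.linalg.solve(A_N, A_mat.T @ Y_breve)
beta_hat_minus_beta0 = np.linalg.solve(Gamma_breve, rhs)
```

Equivalently, \(\breve{\pmb {\Gamma }}_{N}\) is \(\breve{\pmb {Z}}^{T}(I-P_{A})\breve{\pmb {Z}}\) with \(P_{A}\) the projection onto the columns of the stacked basis matrix.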

Lemma A.1

Under Assumptions 1–5, it holds that

$$\begin{aligned} \max _{1\le r\le d_{2},\, 1\le \nu \le M_{rN},\, 1\le k\le p_{r}}\Big |\frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{A}_{r\nu k}(X_{ijr})-E(\breve{A}_{r\nu k}(X_{r}))\Big |=o_{p}(M_{N}^{-2}). \end{aligned}$$

Proof

Let \(D_{N}=\{(i,j): 1\le i\le m, 1\le j\le n\}\), \({\tilde{A}}_{r\nu k}(X_{ijr})=\breve{A}_{r\nu k}(X_{ijr})-E(\breve{A}_{r\nu k}(X_{ijr}))\) and \(A^{*}=\max _{1\le r\le d_{2}, 1\le \nu \le M_{rN}, 1\le k\le p_{r}}\big |\frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}{\tilde{A}}_{r\nu k}(X_{ijr})\big |\). For any sufficiently small constant \(\varepsilon >0\), Chebyshev's inequality and the union bound give

$$\begin{aligned} P(A^{*}>\varepsilon M_{N}^{-2})\le \frac{M_{N}^{4}}{\varepsilon ^{2}N^{2}}\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}}\sum _{k}\Big [\sum _{(i,j)\in D_{N}}E(\breve{A}_{r\nu k}^{2}(X_{ijr}))+\mathop {\sum \sum }_{(i,j)\ne (i',j')\in D_{N}}|E({\tilde{A}}_{r\nu k}(X_{ijr}){\tilde{A}}_{r\nu k}(X_{i'j'r}))|\Big ]. \end{aligned}$$
(A.6)

Let \(c_{Nk}=[M_{N}^{\delta /((2+\delta )\tau )}]\) for \(k=1,2\), where \(\tau >2(4+\delta )/(2+\delta )\) is a constant, and split the set \(\{(i,j)\ne (i',j')\in D_{N}\}\) into the following two parts:

$$\begin{aligned} \begin{array}{l} {\mathbf {S}}_{1}=\{(i,j)\ne (i',j')\in D_{N}: |i-i'|\le c_{N1}, |j-j'|\le c_{N2}\},\\ {\mathbf {S}}_{2}=\{(i,j)\ne (i',j')\in D_{N}: |i-i'|> c_{N1} \; \text{ or } \; |j-j'|> c_{N2}\}.\\ \end{array} \end{aligned}$$
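
The split separates a small set of near pairs \(\mathbf {S}_{1}\) (at most about \(N(2c_{N1}+1)(2c_{N2}+1)\) of them) from the far pairs \(\mathbf {S}_{2}\), on which the mixing coefficient \(\varphi \) is small. A toy enumeration of the split, for intuition only (grid size and cut-offs are hypothetical):

```python
from itertools import product

def split_pairs(m, n, c1, c2):
    """Classify ordered pairs of distinct grid sites into S1 (near) / S2 (far)."""
    sites = list(product(range(1, m + 1), range(1, n + 1)))
    S1, S2 = [], []
    for (i, j), (ip, jp) in product(sites, repeat=2):
        if (i, j) == (ip, jp):
            continue
        near = abs(i - ip) <= c1 and abs(j - jp) <= c2
        (S1 if near else S2).append(((i, j), (ip, jp)))
    return S1, S2

# |S1| <= N*(2*c1+1)*(2*c2+1) = O(N*c1*c2); this cardinality bound is what
# makes the S1-part of (A.6) negligible in (A.7).
S1, S2 = split_pairs(20, 20, 3, 3)
print(len(S1), len(S2))
```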

By Assumption 5, we have

$$\begin{aligned} \frac{M_{N}^{4}}{\varepsilon ^{2}N^{2}}\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}}\sum _{k}\mathop {\sum \sum }_{((i,j),(i',j'))\in {\mathbf {S}}_{1}}|E({\tilde{A}}_{r\nu k}(X_{ijr}){\tilde{A}}_{r\nu k}(X_{i'j'r}))|\le CM_{N}^{4+2\delta /((2+\delta )\tau )}/N=o(1). \end{aligned}$$
(A.7)

Turning to \({\mathbf {S}}_{2}\), using Lemma 5.1 of Hallin et al. (2004b), we obtain that

$$\begin{aligned} |E({\tilde{A}}_{r\nu k}(X_{ijr}){\tilde{A}}_{r\nu k}(X_{i'j'r}))|&\le C\big (E|{\tilde{A}}_{r\nu k}(X_{ijr})|^{2+\delta }\big )^{2/(2+\delta )}\big (\varphi (\Vert (i',j')-(i,j)\Vert )\big )^{\delta /(2+\delta )}\\&\le CM_{N}^{-2/(2+\delta )}\big (\varphi (\Vert (i',j')-(i,j)\Vert )\big )^{\delta /(2+\delta )}. \end{aligned}$$

Therefore, by Assumptions 4 and 5, we get

$$\begin{aligned} \frac{M_{N}^{4}}{\varepsilon ^{2}N^{2}}\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}}\sum _{k}\mathop {\sum \sum }_{((i,j),(i',j'))\in {\mathbf {S}}_{2}}|E({\tilde{A}}_{r\nu k}(X_{ijr}){\tilde{A}}_{r\nu k}(X_{i'j'r}))|\le C\frac{M_{N}^{4}}{\varepsilon ^{2}N^{2}}N\sum _{l=1}^{2}c_{Nl}^{\tau }\sum _{t=c_{Nl}}^{N}t(\varphi (t))^{\delta /(2+\delta )}=o(1). \end{aligned}$$
(A.8)

Now Lemma A.1 follows from (A.6)–(A.8) and the fact that \(\sum _{(i,j)\in D_{N}}E(\breve{A}_{r\nu k}^{2}(X_{ijr}))\le NM_{N}^{-1}\). \(\square \)
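
For intuition, the last bound is a heuristic consequence of the properties of the basis established earlier in the paper (each \(\breve{A}_{r\nu k}\) is bounded and vanishes off the interval where \(\chi _{r\nu }=1\), which carries probability mass of order \(M_{N}^{-1}\)):

$$\begin{aligned} \sum _{(i,j)\in D_{N}}E(\breve{A}_{r\nu k}^{2}(X_{ijr}))\le C\sum _{(i,j)\in D_{N}}E(\chi _{r\nu }(X_{ijr}))\le CNM_{N}^{-1}. \end{aligned}$$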

Lemma A.2

Under Assumptions 1–5, it holds that

$$\begin{aligned} \Vert \breve{\pmb {W}}_{N}/N\Vert _{\infty }=o_{p}(M_{N}^{1/2+\delta /((2+\delta )\tau )}/N^{1/2}). \end{aligned}$$

Proof

We first prove

$$\begin{aligned} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})V_{ijl}\Vert _{\infty }=o_{p}(M_{N}^{1/2+\delta /((2+\delta )\tau )}/N^{1/2}), \ \ l=1,\ldots ,d_{1}. \end{aligned}$$
(A.9)

Let \(\xi _{ijr\nu 1}=\chi _{r\nu }(X_{ijr})-E(\chi _{r\nu }(X_{r}))\chi _{rM_{rN}}(X_{ijr}) /E(\chi _{rM_{rN}}(X_{r}))\),

$$\begin{aligned} \begin{array}{l} \xi _{r\nu 2}=\frac{\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})}{\sum _{i=1}^{m} \sum _{j=1}^{n}\chi _{rM_{rN}}(X_{ijr})}-\frac{E(\chi _{r\nu }(X_{r}))}{E(\chi _{rM_{rN}}(X_{r}))}, \\ \eta _{ijr\nu k1}=\big (\frac{X_{ijr}-x_{r\nu }}{h_{r0}}\big )^{k}-\frac{E(((X_{r}-x_{r\nu })/h_{r0})^{k} \chi _{r\nu }(X_{r}))}{E(\chi _{r\nu }(X_{r}))}, \\ \eta _{r\nu k2}=\frac{\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})((X_{ijr}-x_{r\nu }) /h_{r0})^{k}}{\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})} -\frac{E(\chi _{r\nu }(X_{r})((X_{r}-x_{r\nu })/h_{r0})^{k})}{E(\chi _{r\nu }(X_{r}))}. \end{array} \end{aligned}$$

Then

$$\begin{aligned} \begin{array}{l} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})V_{ijl}\Vert _{\infty } \\ \le \sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}-1}\frac{1}{N}\big |\sum _{i=1}^{m} \sum _{j=1}^{n}(\xi _{ijr\nu 1}+\chi _{rM_{rN}}(X_{ijr})\xi _{r\nu 2})V_{ijl}\big | \\ +\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}}\sum _{k=1}^{p_{r}}\frac{1}{N}\big | \sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})(\eta _{ijr\nu k1}+\eta _{r\nu k2})V_{ijl}\big |. \end{array} \end{aligned}$$
(A.10)

Similar to the proof of Lemma A.1, we obtain that \(E[\sum _{i=1}^{m}\sum _{j=1}^{n}\xi _{ijr\nu 1}V_{ijl}]^{2}\le CNM_{N}^{2\delta /((2+\delta )\tau )-1}\) and \(E[\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})\eta _{ijr\nu k1}V_{ijl}]^{2}\le CNM_{N}^{2\delta /((2+\delta )\tau )-1}\). Hence,

$$\begin{aligned}&\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}-1}\frac{1}{N}\Big |\sum _{i=1}^{m}\sum _{j=1}^{n}\xi _{ijr\nu 1}V_{ijl}\Big |=o_{p}(M_{N}^{1/2+\delta /((2+\delta )\tau )}/N^{1/2}), \end{aligned}$$
(A.11)
$$\begin{aligned}&\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}}\sum _{k=1}^{p_{r}}\frac{1}{N}\Big |\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})\eta _{ijr\nu k1}V_{ijl}\Big |=o_{p}(M_{N}^{1/2+\delta /((2+\delta )\tau )}/N^{1/2}). \end{aligned}$$
(A.12)

Lemma A.1 implies \(\max _{1\le r\le d_{2}, 1\le \nu \le M_{rN}-1}|\xi _{r\nu 2}|=o_{p}(M_{N}^{-1})\) and \(\max _{1\le r\le d_{2}, 1\le \nu \le M_{rN}}|\eta _{r\nu k2}|=o_{p}(M_{N}^{-1})\). Since \(E[\chi _{r\nu }(X_{ijr})V_{ijl}]=E[(\chi _{r\nu }(X_{ijr})-E\chi _{r\nu }(X_{ijr}))V_{ijl}]+E(\chi _{r\nu }(X_{ijr}))E(V_{ijl})=0\), by arguments similar to those used in the proof of Lemma A.1 we have

$$\begin{aligned} \frac{1}{N}\big |\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{rM_{rN}}(X_{ijr})V_{ijl}\big |=o_{p}(N^{-1/2}). \end{aligned}$$
(A.13)

Therefore,

$$\begin{aligned} \sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}-1}\frac{|\xi _{r\nu 2}|}{N}\big |\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{rM_{rN}}(X_{ijr})V_{ijl} \big |=o_{p}(N^{-1/2}), \end{aligned}$$
(A.14)
$$\begin{aligned} \sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}}\sum _{k=1}^{p_{r}}\frac{|\eta _{r\nu k2}|}{N}\big |\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})V_{ijl} \big |=o_{p}(N^{-1/2}). \end{aligned}$$
(A.15)

Now (A.9) follows from (A.10)–(A.12), (A.14) and (A.15). Similar to the proof of (A.9), we deduce that

$$\begin{aligned} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{rM_{rN}}(X_{ijr})\pmb {A} (\pmb {X}_{ij})\Vert _{\infty }=o_{p}(M_{N}^{-1}). \end{aligned}$$
(A.16)

Using the fact that \(E(f_{r}(X_{r}))=0\), we get that \({\bar{F}}_{rM_{rN}}=O_{p}(M_{N}^{3/2}/N^{1/2})\). Hence,

$$\begin{aligned} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}{\bar{F}}_{rM_{rN}}\chi _{rM_{rN}}(X_{ijr}) \pmb {A}(\pmb {X}_{ij})\Vert _{\infty }=o_{p}(M_{N}^{1/2}/N^{1/2}). \end{aligned}$$
(A.17)

Similar to the proof of (A.9) and using the regularity assumptions, we have

$$\begin{aligned} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\big [\sum _{r=1}^{d_{2}} \sum _{\nu =1}^{M_{rN}}f_{kr\nu }^{*}(X_{ijr})\big ]\pmb {A}(\pmb {X}_{ij}) \Vert _{\infty }=o_{p}(M_{N}^{-p_{r}}N^{-1/2}). \end{aligned}$$
(A.18)

Similar to the proof of (A.17), we get that

$$\begin{aligned} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\vec {Z}_{k}\pmb {A}(\pmb {X}_{ij}) \Vert _{\infty }=o_{p}(M_{N}^{-1/2}N^{-1/2}). \end{aligned}$$
(A.19)

Now Lemma A.2 follows from (A.9), (A.17)–(A.19) and Assumption 5. \(\square \)

Lemma A.3

Under Assumptions 1–5, it holds that

$$\begin{aligned} \breve{\pmb {\Gamma }}_{N}/N=\pmb {\Gamma }+o_{p}(1). \end{aligned}$$

Proof

We first prove that \(M_{N}\pmb {A}_{N}/N\) is invertible. Let \(\lambda _{min}\) be the minimum eigenvalue of \(M_{N}\pmb {A}_{N}/N\). By Lemma 3 of Stone (1985), Lemma A.1, Assumption 4 and the fact that \(\chi _{r\nu }(X_{ijr})\chi _{r\nu '}(X_{ijr})=0\) for \(\nu \ne \nu '\), we have

$$\begin{aligned} \lambda _{min}&=\inf _{\Vert \pmb {a}\Vert =1}\frac{M_{N}}{N}\pmb {a}^{T}\pmb {A}_{N}\pmb {a}\\&=\inf _{\Vert \pmb {a}\Vert =1}\frac{M_{N}}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\Big (\sum _{r=1}^{d_{2}}\Big [\sum _{\nu =1}^{M_{rN}-1}\sum _{k=0}^{p_{r}}a_{r\nu k}A_{r\nu k}(X_{ijr})+\sum _{k=1}^{p_{r}}a_{rM_{rN}k}A_{rM_{rN}k}(X_{ijr})\Big ]\Big )^{2}\\&\ge C\inf _{\Vert \pmb {a}\Vert =1}\sum _{r=1}^{d_{2}}\frac{M_{N}}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\Big [\sum _{\nu =1}^{M_{rN}-1}\Big (a_{r\nu 0}\chi _{r\nu }(X_{ijr})+\sum _{k=1}^{p_{r}}a_{r\nu k}A_{r\nu k}(X_{ijr})\Big )^{2}\\&\qquad +\Big (-\chi _{rM_{rN}}(X_{ijr})\sum _{\nu =1}^{M_{rN}-1}a_{r\nu 0}E_{rM_{rN}\nu }+\sum _{k=1}^{p_{r}}a_{rM_{rN}k}A_{rM_{rN}k}(X_{ijr})\Big )^{2}\Big ]\\&\ge C\inf _{\Vert \pmb {a}\Vert =1}\sum _{r=1}^{d_{2}}\Big (\sum _{\nu =1}^{M_{rN}-1}\pmb {a}_{r\nu }^{T}G_{r}\pmb {a}_{r\nu }+\pmb {a}_{rM_{rN}}^{T}G_{r}^{*}\pmb {a}_{rM_{rN}}\Big )+o_{p}(1), \end{aligned}$$

where \(E_{rM_{rN}\nu }=\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{r\nu }(X_{ijr})/\sum _{i=1}^{m}\sum _{j=1}^{n}\chi _{rM_{rN}}(X_{ijr})\), \(G_{r}=(g_{rij})_{(p_{r}+1)\times (p_{r}+1)}\) with \(g_{r11}=2\) and \(g_{rij}=\int _{-1}^{1}x_{r}^{i+j-2}dx_{r}-\int _{-1}^{1}x_{r}^{i-1}dx_{r}\int _{-1}^{1}x_{r}^{j-1}dx_{r}/2\) for \(i>1\) or \(j>1\), \(G_{r}^{*}=(g_{rij}^{*})_{p_{r}\times p_{r}}\) with \(g_{rij}^{*}=\int _{-1}^{1}x_{r}^{i+j}dx_{r}-\int _{-1}^{1}x_{r}^{i}dx_{r}\int _{-1}^{1}x_{r}^{j}dx_{r}/2\), and \(\pmb {a}_{r\nu }=(a_{r\nu 0},a_{r\nu 1},\ldots ,a_{r\nu p_{r}})^{T}\) for \(\nu =1,\ldots ,M_{rN}-1\) and \(\pmb {a}_{rM_{rN}}=(a_{rM_{rN}1},\ldots ,a_{rM_{rN}p_{r}})^{T}\). For fixed \(p_{r}\), it is easy to prove that \(G_{r}\) and \(G_{r}^{*}\) are positive definite (a numerical sanity check is sketched after this proof). Hence, there exists a positive constant \(C_{1}^{*}\) such that \(\lambda _{min}\ge C_{1}^{*}+o_{p}(1)\), and consequently \(M_{N}\pmb {A}_{N}/N\) is invertible. By arguments similar to those used to prove Lemma A.1 and using the fact that \(E(f_{kr}(X_{r}))=0\) for \(k=1,\ldots ,d_{1}\), we get that

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{\pmb {Z}}_{ij} \breve{\pmb {Z}}_{ij}^{T}=E(\pmb {V}\pmb {V}^{T})+o_{p}(1). \end{aligned}$$
(A.20)

Using Lemma A.2 and Assumption 5, we obtain that

$$\begin{aligned} |\breve{\pmb {W}}_{N} \pmb {A}_{N}^{-1}\breve{\pmb {W}}_{N}^{T}/N|_{\infty }\le M_{N}\Vert \breve{\pmb {W}}_{N}/N\Vert _{\infty }^{2}\cdot |(M_{N}\pmb {A}_{N}/N)^{-1}|_{\infty } =o_{p}(1). \end{aligned}$$
(A.21)

Now Lemma A.3 follows from (A.5), (A.20) and (A.21). \(\square \)
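
The positive definiteness of \(G_{r}\) and \(G_{r}^{*}\) asserted in the proof above can be checked numerically. A minimal sketch, using \(\int _{-1}^{1}x^{k}dx=2/(k+1)\) for even \(k\) and \(0\) for odd \(k\):

```python
import numpy as np

def mom(k):
    """Integral of x^k over [-1, 1]."""
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

def G(p):
    """(p+1) x (p+1) matrix G_r from the proof of Lemma A.3."""
    M = np.empty((p + 1, p + 1))
    for i in range(1, p + 2):
        for j in range(1, p + 2):
            M[i - 1, j - 1] = (2.0 if i == j == 1 else
                               mom(i + j - 2) - mom(i - 1) * mom(j - 1) / 2.0)
    return M

def G_star(p):
    """p x p matrix G_r^* from the proof of Lemma A.3."""
    return np.array([[mom(i + j) - mom(i) * mom(j) / 2.0
                      for j in range(1, p + 1)] for i in range(1, p + 1)])

for p in range(1, 6):
    # smallest eigenvalues stay strictly positive for each fixed p
    print(p, np.linalg.eigvalsh(G(p))[0], np.linalg.eigvalsh(G_star(p))[0])
```

The structure also explains the claim: for \(i,j\ge 2\), \(g_{rij}\) is twice the covariance of \(x^{i-1}\) and \(x^{j-1}\) under the uniform density on \([-1,1]\) (and \(g_{rij}^{*}\) likewise with powers \(i,j\)), so both matrices are Gram-type matrices built from linearly independent monomials.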

Proof of Theorem 3.1

Using (A.13), we obtain that

$$\begin{aligned} \sum _{r=1}^{d_{2}}\frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}{\bar{F}}_{rM_{rN}} \chi _{rM_{rN}}(X_{ijr})V_{ijk}=o_{p}(M_{N}^{3/2}N^{-1}). \end{aligned}$$
(A.22)

Similar to the proof of (A.18), we have

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\big [\sum _{r=1}^{d_{2}}\sum _{\nu =1}^{M_{rN}} f_{kr\nu }^{*}(X_{ijr})\big ]V_{ijk}=o_{p}(M_{N}^{-p_{r}}N^{-1/2}). \end{aligned}$$
(A.23)

Since

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}(\vec {\pmb {Z}}^{T}\pmb {\beta }_{0}-\vec {Y})V_{ijk}=(\vec {\pmb {Z}}^{T}\pmb {\beta }_{0}-\vec {Y})\frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}V_{ijk}=o_{p}(M_{N}N^{-1}), \end{aligned}$$
(A.24)

it follows from (A.3) that

$$\begin{aligned} N^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{Y}_{ij}V_{ijk}=N^{-1/2}\sum _{i=1}^{m} \sum _{j=1}^{n}\varepsilon _{ij}V_{ijk}+o_{p}(1). \end{aligned}$$
(A.25)

Similarly, \(N^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}(\breve{Z}_{ijk}-V_{ijk})\varepsilon _{ij} =o_{p}(1)\). Under the assumptions of Theorem 3.1, it is easy to prove that

$$\begin{aligned} N^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}(\breve{Z}_{ijk}-V_{ijk})(\breve{Y}_{ij} -\varepsilon _{ij})=o_{p}(1). \end{aligned}$$

Hence,

$$\begin{aligned} N^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}\breve{\pmb {Z}}_{ij}\breve{Y}_{ij} =N^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {V}_{ij}\varepsilon _{ij}+o_{p}(1). \end{aligned}$$
(A.26)

Similar to the proof of Lemma A.2, we deduce that

$$\begin{aligned} \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})\breve{Y}_{ij}\Vert _{\infty }=o_{p}(M_{N}^{1/2+\delta /((2+\delta )\tau )}/N^{1/2}). \end{aligned}$$
(A.27)

Therefore, using Lemma A.2 and (A.27), we conclude that

$$\begin{aligned}&N^{-1/2}|\breve{\pmb {W}}_{N}\pmb {A}_{N}^{-1}\big (\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})\breve{Y}_{ij}\big )|_{\infty } \nonumber \\&\quad \le N^{1/2}M_{N}\Vert \breve{\pmb {W}}_{N}/N\Vert _{\infty }\cdot |(M_{N}\pmb {A}_{N}/N)^{-1}|_{\infty }\cdot \Vert \frac{1}{N}\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})\breve{Y}_{ij}\Vert _{\infty }\nonumber \\&\quad =N^{1/2}M_{N}o_{p}(M_{N}^{1+2\delta /((2+\delta )\tau )}/N)=o_{p}(1). \end{aligned}$$
(A.28)

By arguments similar to those used in the proof of Lemma 6 of Tang and Cheng (2009), we can prove that \(N^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {V}_{ij}\varepsilon _{ij}\) is asymptotically normal. Therefore, (3.2) follows from (A.4), Lemma A.3, (A.26) and (A.28). This completes the proof of Theorem 3.1. \(\square \)
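
The asymptotic normality invoked above can also be illustrated numerically. A hedged Monte Carlo sketch of \(N^{-1/2}\sum _{i}\sum _{j}V_{ij}\varepsilon _{ij}\), where both fields are toy \(3\times 3\) moving averages of i.i.d. noise (an illustrative short-range-dependent model, not the paper's data-generating process):

```python
import numpy as np

rng = np.random.default_rng(1)

def ma_field(m, n):
    """3x3 moving average of iid N(0,1): a toy mixing random field."""
    w = rng.normal(size=(m + 2, n + 2))
    return sum(w[a:a + m, b:b + n] for a in range(3) for b in range(3)) / 3.0

m = n = 40
stats = []
for _ in range(2000):
    V, eps = ma_field(m, n), ma_field(m, n)    # independent fields
    stats.append((V * eps).sum() / np.sqrt(m * n))
stats = np.asarray(stats)
# Empirical mean is near 0 and the histogram of `stats` is close to a
# centered normal law, in line with the CLT step above.
print(stats.mean(), stats.std())
```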

Proof of Theorem 3.2

Let \(\pmb {a}_{0}=(\pmb {a}_{01}^{T},\ldots ,\pmb {a}_{0d_{2}}^{T})^{T}\). By arguments similar to those used to prove Lemma A.2, we deduce that

$$\begin{aligned} \Vert \tilde{\pmb {a}}-\pmb {a}_{0}\Vert ^{2}&\le CM_{N}^{2}\Big \Vert \sum _{i=1}^{m}\sum _{j=1}^{n}\pmb {A}(\pmb {X}_{ij})\big ({\bar{Y}}_{ij}-\bar{\pmb {Z}}_{ij}^{T}\hat{\pmb {\beta }}-\pmb {A}^{T}(\pmb {X}_{ij})\pmb {a}_{0}\big )\Big \Vert ^{2}\\&=o_{p}(M_{N}^{2+2\delta /((2+\delta )\tau )}/N). \end{aligned}$$

Hence, under the assumptions of Theorem 3.2, by arguments similar to those used to prove Lemma A.2, we obtain that

$$\begin{aligned} \begin{array}{l} \Big (\sum _{i=1}^{m}\sum _{j=1}^{n}K\big (\frac{X_{ijr}-x_{0r}}{h_{r}}\big )\big (\sum _{r'\ne r}{\tilde{f}}_{r'}(X_{ijr'})-\sum _{r'\ne r}f_{r'}(X_{ijr'})\big )\Big )^{2} \\ \le \Vert \tilde{\pmb {a}}-\pmb {a}_{0}\Vert ^{2}\big \Vert \sum _{i=1}^{m}\sum _{j=1}^{n}K\big (\frac{X_{ijr}-x_{0r}}{h_{r}}\big )\pmb {A}_{-r}(\pmb {X}_{ij})\big \Vert ^{2} \\ \quad +\big [\sum _{i=1}^{m}\sum _{j=1}^{n}K\big (\frac{X_{ijr}-x_{0r}}{h_{r}}\big )\sum _{r'\ne r}\big ({\bar{F}}_{r'M_{r'N}}\chi _{r'M_{r'N}}(X_{ijr'})+\sum _{\nu =1}^{M_{r'N}}f_{r'\nu }^{*}(X_{ijr'})\big )\big ]^{2} \\ =o_{p}(NM_{N}^{1+2\delta /((2+\delta )\tau )}h_{r}^{2})=o_{p}(Nh_{r}), \end{array} \end{aligned}$$

where \(\pmb {A}_{-r}(\pmb {X}_{ij})=(\pmb {A}_{1}(X_{ij1}),\ldots ,\pmb {A}_{r-1}(X_{ij(r-1)}),\pmb {A}_{r+1}(X_{ij(r+1)}),\ldots ,\pmb {A}_{d_{2}}(X_{ijd_{2}}))^{T}\). Therefore,

$$\begin{aligned} (Nh_{r})^{-1/2}\sum _{i=1}^{m}\sum _{j=1}^{n}K\big (\frac{X_{ijr}-x_{0r}}{h_{r}}\big )\pmb {B}_{ijr}\big (\sum _{r'\ne r}{\tilde{f}}_{r'}(X_{ijr'})-\sum _{r'\ne r}f_{r'}(X_{ijr'})\big )=o_{p}(1). \nonumber \\ \end{aligned}$$
(A.29)

Now by arguments similar to those used in the proof of Theorem 3.1 of Hallin et al. (2004) and using (A.29), we can easily complete the proof of Theorem 3.2. \(\square \)
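
The kernel weights \(K((X_{ijr}-x_{0r})/h_{r})\) appearing in (A.29) are the usual local polynomial weights. For concreteness, a generic local-linear smoothing step of this kind on toy data (the Epanechnikov kernel and the synthetic response are illustrative assumptions, not a restatement of the paper's second-stage estimator):

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local-linear fit at x0 with Epanechnikov weights K((x - x0)/h)."""
    u = (x - x0) / h
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)
    return coef[0]                  # intercept = fitted value at x0

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 2000)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=2000)   # toy responses
print(local_linear(x, y, 0.25, h=0.1), np.sin(np.pi * 0.25))
```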

About this article

Cite this article

Qingguo, T., Wenyu, C. Estimation for partially linear additive regression with spatial data. Stat Papers 63, 2041–2063 (2022). https://doi.org/10.1007/s00362-022-01326-8
