
Simplified Matrix Methods for Multivariate Edgeworth Expansions


Abstract

Simplified matrix methods are used to analyze the higher order asymptotic properties of \(k\times 1\) sample averages. Kronecker differentiation is used to define \(k^{j}\times 1\), \(j\)'th order moments \(\mu _j\), cumulants \(\kappa _j\) and Hermite polynomials \(H_j\). These are then used to derive valid multivariate Edgeworth expansions of arbitrary order having the same form as the standard univariate case: \(p(x) = \phi (x)\left[1 + N^{-1/2} \kappa _{3}' H_{3}(x)/6 + N^{-1}\left( 3 \kappa _{4}' H_{4}(x) + \kappa _3'^{\otimes 2} H_{6}(x) \right)/72+\cdots \right]\). All the usual steps in the development of a valid Edgeworth expansion are shown to be easily derived using matrix algebra.


Notes

  1. Papers by Phillips and Park (1988), Hansen (2006), Rilstone et al. (1996), Kollo and von Rosen (1998) and Kundhi and Rilstone (2012, 2013), amongst others, have somewhat streamlined this by employing some limited matrix algebra.

  2. A formal matrix-based third-order Edgeworth expansion is presented in Traat (1986), but the result is not generalized, nor are the derivations given that are needed to demonstrate that it is a valid asymptotic expansion.

  3. This designation is convenient and has been used by Jammalamadaka et al. (2006). We will similarly refer to K-moments and K-cumulants.

  4. e.g., Kollo and von Rosen (2005, p. 121)

  5. MacRae (1974) provides a matrix version of the chain rule although it uses star products and is not readily integrated with other matrix operations. Magnus (2010) states the result in Proposition 5, but does not supply a proof.

  6. Chambers (1967) derives a multivariate Edgeworth, but defines Hermite polynomials using multi-index notation. Kollo and von Rosen (1998) use the third Hermite in matrix form.

  7. See, e.g., Barndorff-Nielsen and Cox (1989, p. 174).

  8. See, e.g., Taniguchi (1984).

References

  • Balestra, Pietro. 1976. La derivation matricielle, collection de l’institut de Mathematiques Economiques. Paris: Sirey.

  • Barndorff-Nielsen, O.E., and D.R. Cox. 1989. Asymptotic techniques for use in statistics. New York: Chapman and Hall.

  • Bhattacharya, R.N., and J.K. Ghosh. 1978. On the validity of the formal Edgeworth expansion. The Annals of Statistics 6 (2): 434–451.

  • Chambers, J.M. 1967. On methods of asymptotic approximation for multivariate distributions. Biometrika 50 (3/4): 367–383.

  • Feller, William. 1971. An introduction to probability theory and its applications, vol. 2. New York: Wiley.

  • Hall, Peter. 1992. The bootstrap and Edgeworth expansion. New York: Springer.

  • Hansen, B.E. 2006. Edgeworth expansions for the Wald and GMM statistics for nonlinear restrictions. In Econometric Theory and Practice: Frontiers of Analysis and Applied Research, ed. Dean Corbae, Steven N. Durlauf, and Bruce E. Hansen. Cambridge: Cambridge University Press.

  • Holmquist, Bjorn. 1996. The \(d\)-variate vector Hermite polynomial of order \(k\). Linear Algebra and its Applications 237/238: 155–190.

  • Jammalamadaka, S. Rao, T. Subba Rao, and Gyorgy Terdik. 2006. Higher order cumulants of random vectors and applications to statistical inference and time series.

  • Kollo, T. 1991. Matrix derivatives in multivariate statistics. Tartu: Tartu University Press.

  • Kollo, Tonu, and Dietrich von Rosen. 1998. A unified approach to the approximation of multivariate densities. Scandinavian Journal of Statistics 25: 93–109.

  • Kollo, Tonu, and Dietrich von Rosen. 2005. Advanced Multivariate Statistics with Matrices. Berlin: Springer.

  • Kundhi, G., and Paul Rilstone. 2008. The third order bias of nonlinear estimators. Communications in Statistics 37 (16): 2617–2633.

  • Kundhi, G., and Paul Rilstone. 2012. Edgeworth expansions for GEL estimators. Journal of Multivariate Analysis 106: 118–146.

  • Kundhi, G., and Paul Rilstone. 2013. Edgeworth and saddlepoint expansions for nonlinear estimators. Econometric Theory 29: 1–22.

  • MacRae, E.C. 1974. Matrix derivatives with an application to an adaptive linear decision problem. Annals of Statistics 2: 337–346.

  • Magnus, Jan R. 2010. On the concept of matrix derivative. Journal of Multivariate Analysis 101: 2200–2206.

  • Magnus, Jan R., and Heinz Neudecker. 1988. Matrix differential calculus with applications in statistics and econometrics. New York: Wiley.

  • McCullagh, P. 1987. Tensor methods in statistics. New York: Chapman and Hall.

  • Phillips, P.C.B., and J.Y. Park. 1988. On the formulation of Wald tests of nonlinear restrictions. Econometrica 56 (5): 1065–1083.

  • Rilstone, Paul, V.K. Srivastava, and Aman Ullah. 1996. The second-order bias and mean squared error of nonlinear estimators. Journal of Econometrics 75 (2): 369–395.

  • Schild, A., and J.L. Synge. 1969. Tensor calculus. New York: Dover.

  • Taniguchi, Masanobu. 1984. Validity of Edgeworth expansions for statistics of time series. Journal of Time Series Analysis 5 (1).

  • Traat, I. 1986. Matrix calculus for multivariate distributions. Acta et Commentationes Universitatis Tartuensis 733: 64–84.


Author information

Corresponding author

Correspondence to Gubhinder Kundhi.

Additional information

The authors are grateful to two referees for their insightful comments. Any errors are the responsibility of the authors.

Appendix

This Appendix provides proofs of the propositions in the paper, along with numerous examples of the quantities used.

As examples of commutation matrices with \(k=2\) so that \( K_{k^2,k} = K_{4,2} \) and \( K_{k ,k^2} = K_{2,4}\), we have

$$\begin{aligned} K_{4,2} = \begin{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} \\ \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \\ \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \\ \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \end{pmatrix},\quad K_{2,4} = \begin{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{pmatrix} \\ \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} & \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix} \end{pmatrix}, \end{aligned}$$

noting that, if x, y and z are \(k\times 1 \) vectors, we have \(K_{k^2,k}(x\otimes y\otimes z ) = y\otimes z\otimes x\) and \(K_{k ,k^2}(x\otimes y\otimes z ) = z\otimes x\otimes y\).
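
To make these commutation matrices concrete, here is a minimal numerical sketch (in Python with numpy; the helper name commutation is ours) that builds \(K_{m,n}\) from its defining property \(K_{m,n}\text {Vec}(A) = \text {Vec}(A')\) and checks the two permutation identities above for \(k=2\):

```python
import numpy as np

def commutation(m, n):
    # K_{m,n} satisfies K_{m,n} vec(A) = vec(A') for any m x n matrix A,
    # where vec stacks columns.
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0  # vec(A')[i*n+j] = A[i,j] = vec(A)[j*m+i]
    return K

k = 2
rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, k))
xyz = np.kron(np.kron(x, y), z)
# K_{k^2,k}(x o y o z) = y o z o x   and   K_{k,k^2}(x o y o z) = z o x o y
assert np.allclose(commutation(k**2, k) @ xyz, np.kron(np.kron(y, z), x))
assert np.allclose(commutation(k, k**2) @ xyz, np.kron(np.kron(z, x), y))
```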

As examples of higher-order Kronecker derivatives, consider a function \(f:{{\mathbb {R}}}^2\rightarrow {{\mathbb {R}}}\). Here we indicate individual partial derivatives by superscripts:

$$\begin{aligned} f^{i_1 i_2\cdots i_j}(x) = \frac{\partial ^j f(x)}{\partial x_{i_j}\cdots \partial x_{i_2} \partial x_{i_1}}. \end{aligned}$$
(A.1)

To illustrate the first four Kronecker derivatives (note that these are with respect to \(x'\)) we have (suppressing the argument x of f)

$$\begin{aligned} f^{(1)}&= \bigtriangledown _{x'} f\nonumber \\&= \begin{pmatrix} f^1&f^2 \end{pmatrix},\nonumber \\ f^{(2)}&= \bigtriangledown _{x'} f^{(1)} \nonumber \\&= \begin{pmatrix} f^{11}&f^{12}&f^{21}&f^{22} \end{pmatrix},\nonumber \\ f^{(3)}&= \bigtriangledown _{x'} f^{(2)} \nonumber \\&= \begin{pmatrix} f^{111}&f^{112}&f^{121}&f^{122}&f^{211}&f^{212}&f^{221}&f^{222}\end{pmatrix}, \nonumber \\ f^{(4)}&= \bigtriangledown _{x'} f^{(3)}\nonumber \\&= \begin{pmatrix} f^{1111} & f^{1112} & f^{1121} & f^{1122} & f^{1211} & f^{1212} & \cdots \\ \cdots & f^{1221} & f^{1222} & f^{2111} & f^{2112} & f^{2121} & \cdots \\ \cdots & f^{2122} & f^{2211} & f^{2212} & f^{2221} & f^{2222} & \end{pmatrix}. \end{aligned}$$
(A.2)
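
The ordering of the partials in (A.2) can be generated mechanically: each new differentiation index varies fastest. A small symbolic sketch (Python with sympy; the helper name k_derivative and the test function are our own choices):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
f = sp.exp(x1) * sp.sin(x2) + x1**2 * x2        # an arbitrary smooth test function

def k_derivative(expr, xs, j):
    # j'th Kronecker derivative with respect to x': a row of k**j partials,
    # ordered exactly as in (A.2).
    terms = [expr]
    for _ in range(j):
        terms = [sp.diff(t, xi) for t in terms for xi in xs]
    return sp.Matrix([terms])

print(k_derivative(f, xs, 2))   # entries ordered as (f^{11}, f^{12}, f^{21}, f^{22})
```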

Proof of Proposition 3

We can write the J’th K-derivative as

$$\begin{aligned} \bigtriangledown ^J f(x)&= \sum _{i_1=1}^k\cdots \sum _{i_J=1}^k\frac{\partial ^J f(x)}{\partial x_{i_1}\cdots \partial x_{i_J}}\bigotimes _{l=1}^J \iota _{i_l}. \end{aligned}$$
(A.3)

Thus,

$$\begin{aligned}&(I_{k^{j-1}}\otimes K_{k^{J-j},k}) \bigtriangledown ^J f(x) \nonumber \\&\quad = \sum _{i_1=1}^k\cdots \sum _{i_J=1}^k\frac{\partial ^Jf(x)}{\partial x_{i_1}\cdots \partial x_{i_J}}(I_{k^{j-1}}\otimes K_{k^{J-j},k})\left(\bigotimes _{l=1}^{j-1} \iota _{i_l}\otimes \iota _{i_j}\otimes \bigotimes _{l=j+1}^{J} \iota _{i_l}\right)\nonumber \\&\quad = \sum _{i_1=1}^k\cdots \sum _{i_J=1}^k\frac{\partial ^Jf(x)}{\partial x_{i_1}\cdots \partial x_{i_J}} \bigotimes _{l=1}^{j-1} \iota _{i_l}\otimes \left( K_{k^{J-j},k} \left(\iota _{i_j}\otimes \bigotimes _{l=j+1}^{J} \iota _{i_l}\right)\right)\nonumber \\&\quad = \sum _{i_1=1}^k\cdots \sum _{i_J=1}^k\frac{\partial ^Jf(x)}{\partial x_{i_1}\cdots \partial x_{i_J}}\left( \bigotimes _{l=1}^{j-1} \iota _{i_l}\right)\otimes \left( \bigotimes _{l=j+1}^{J} \iota _{i_l} \right)\otimes \iota _{i_j}\nonumber \\&\quad = \bigtriangledown ^J f(x), \end{aligned}$$
(A.4)

the last equality arising from Young’s Theorem. \(\square \)

To illustrate the matrix version of Young’s Theorem stated in the paper, note that for \(k=2\) and second-order derivatives we need to show \( (I_{2^{j-1}}\otimes K_{2^{2-j},2}) \bigtriangledown ^2 f(x) = \bigtriangledown ^2 f(x) \) for \(j=1,2\). For \(j=2\) this is simply \( (I_{2 }\otimes K_{2^{0},2}) \bigtriangledown ^2 f(x) = \bigtriangledown ^2 f(x) \). For \(j=1\) we have

$$\begin{aligned} K_{k,k } \bigtriangledown ^2 f = K_{2,2}\begin{pmatrix} f^{ 11}\\ f^{12} \\ f^{21}\\ f^{22}\end{pmatrix} = \begin{pmatrix} f^{ 11}\\ f^{ 21} \\ f^{ 12}\\ f^{22}\end{pmatrix}= \bigtriangledown ^2 f. \end{aligned}$$
(A.5)

For third-order derivatives we illustrate by evaluating \( (I_{2^{j-1}}\otimes K_{2^{3-j},2}) \bigtriangledown ^3 f(x) \) for \(j=1,2,3 \). For \( j=3\) this is immediate. For \(j=2\)

$$\begin{aligned} (I_{2^{j-1}}\otimes K_{2^{3-j},2}) \bigtriangledown ^3f&= (I_{2 }\otimes K_{2 ,2}) \bigtriangledown ^3f\nonumber \\&= \begin{pmatrix} K_{2 ,2} & 0 \\ 0 & K_{2 ,2} \end{pmatrix} \bigtriangledown ^3f\nonumber \\&= \begin{pmatrix} K_{2 ,2} \begin{pmatrix} f^{111}\\ f^{112}\\ f^{121} \\ f^{122} \end{pmatrix} \\ K_{2 ,2} \begin{pmatrix} f^{211} \\ f^{212} \\ f^{221} \\ f^{222}\end{pmatrix}\end{pmatrix} \nonumber \\&= \begin{pmatrix} \begin{pmatrix} f^{111}\\ f^{121}\\ f^{112} \\ f^{122} \end{pmatrix} \\ \begin{pmatrix} f^{211} \\ f^{221} \\ f^{212} \\ f^{222}\end{pmatrix}\end{pmatrix}\nonumber \\&= \bigtriangledown ^3 f. \end{aligned}$$
(A.6)

Similarly the result holds when \(j=1\). With respect to fourth-order derivatives (\(J=4\)) the proposition states that \( (I_{2^{j-1}}\otimes K_{2^{4-j},2}) \bigtriangledown ^4 f = \bigtriangledown ^4 f \) for \(j=1,2,3,4 \). For \( j=4\) this is immediate. Consider when \(j=1\). We have

$$\begin{aligned} (I_{2^{j-1}}\otimes K_{2^{4-j},2}) \bigtriangledown ^4 f&= K_{8 ,2} \text {Vec}\begin{bmatrix} \begin{matrix} f^{1111}\\ f^{1112}\\ f^{1121}\\ f^{1122}\\ f^{1211} \\ f^{1212} \\ f^{1221} \\ f^{1222}\end{matrix}&\begin{matrix} f^{2111} \\ f^{2112} \\ f^{2121} \\ f^{2122} \\ f^{2211} \\ f^{2212} \\ f^{2221} \\ f^{2222}\end{matrix}\end{bmatrix}\nonumber \\&= \text {Vec}\begin{bmatrix} \begin{matrix} f^{1111} \\ f^{2111} \end{matrix}&\begin{matrix} f^{1112} \\ f^{2112} \end{matrix}&\begin{matrix}f^{1121} \\ f^{2121} \end{matrix}&\begin{matrix} f^{1122} \\ f^{2122} \end{matrix}&\begin{matrix} f^{1211} \\ f^{2211} \end{matrix}&\begin{matrix}f^{1212} \\ f^{2212} \end{matrix}&\begin{matrix} f^{1221} \\ f^{2221} \end{matrix}&\begin{matrix} f^{1222} \\ f^{2222} \end{matrix} \end{bmatrix}\nonumber \\&= \bigtriangledown ^4 f. \end{aligned}$$
(A.7)

Similarly the result also holds for \(j=2,3\).
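
The invariance asserted by Proposition 3 is also easy to confirm numerically. The following sketch (Python with numpy and sympy; the helper names and test function are ours) checks \((I_{2^{j-1}}\otimes K_{2^{3-j},2})\bigtriangledown ^3 f = \bigtriangledown ^3 f\) for \(j=1,2,3\) at an arbitrary point:

```python
import numpy as np
import sympy as sp

def commutation(m, n):
    # K_{m,n} vec(A) = vec(A') for an m x n matrix A
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
terms = [sp.exp(x1) * sp.cos(x2) + x1**3 * x2**2]
for _ in range(3):                       # build the third K-derivative as a column
    terms = [sp.diff(t, xi) for t in terms for xi in xs]
d3 = np.array([float(t.subs({x1: 0.3, x2: -1.2})) for t in terms])

for j in (1, 2, 3):                      # Proposition 3 with J = 3, k = 2
    M = np.kron(np.eye(2**(j - 1)), commutation(2**(3 - j), 2))
    assert np.allclose(M @ d3, d3)       # Young's Theorem in matrix form
```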

Proof of Proposition 4

Let \(y_i= x'\), \(i=1,\dots ,J\), so that \( x'^{\otimes J}= \bigotimes _{i=1}^J y_i\). Note that

$$\begin{aligned} \bigotimes _{i=1}^J y_i&= \left(\bigotimes _{i=1}^{J-1} y_i\right) \otimes y_{J }\nonumber \\&= \left( y_{J }\otimes \left(\bigotimes _{i=1}^{J-2} y_i\right)\otimes y_{J-1} \right) K_{k, k^{J-1}}\nonumber \\&= \left( \left(\bigotimes _{i=l+1}^{J}y_i\right)\otimes \left( \bigotimes _{i=1}^{l-1}y_i\right)\otimes y_{l}\right)K_{k, k^{J-1}} \end{aligned}$$
(A.8)

for any integer \(l\), \(1\le l\le J\), where we repeatedly use the fact that the \(y_i\) are identical, so that permuting the Kronecker factors leaves \(\bigotimes _{i=1}^J y_i\) unchanged. So,

$$\begin{aligned} \bigtriangledown _{x}\left( \bigotimes _{i=1}^J y_i\right)&= \sum _{l=1}^J \bigtriangledown _{x} \left(\left(\bigotimes _{i=l+1}^{J}y_i\right)\otimes \left( \bigotimes _{i=1}^{l-1}y_i\right)\otimes y_{l}\right) _{ y_i\, {\text {const}}\,i\ne l } K_{k, k^{J-1}}\nonumber \\&= \sum _{l=1}^J \left(\left(\bigotimes _{i=l+1}^{J}y_i\right)\otimes \left( \bigotimes _{i=1}^{l-1}y_i\right)\otimes \bigtriangledown _{x} y_{l} \right)K_{k, k^{J-1}} \nonumber \\&= J \left( x'^{\otimes (J-1)}\otimes I_k\right)K_{k, k^{J-1}}\nonumber \\&= J \left( I_k \otimes x'^{\otimes (J-1)} \right)K_{k,k^{J-1}}. \end{aligned}$$
(A.9)

Repeating this process another \(J-1\) times we have

$$\begin{aligned} \bigtriangledown _x^J x'^{\otimes J}&= J ! \prod _{j=1}^J \left( I_{k^{j-1}} \otimes K_{k, k^{J-j}} \right). \end{aligned}$$
(A.10)

\(\square \)

Proof of Proposition 5

$$\begin{aligned}&(\bigtriangledown _{\overrightarrow{Y'}} Z\otimes I_k )(I_q\otimes \bigtriangledown _X \vec {Y})\nonumber \\&\quad = \left[\left(\sum _{i=1}^p\sum _{j=1}^q \iota _i^p\iota _j^{q'}\otimes \bigtriangledown _{\overrightarrow{Y'}} Z_{ij}\right)\otimes I_k \right] \left[ I_q\otimes \left(\sum _{u=1}^k\sum _{v=1}^l\bigtriangledown _{X_{uv}}{\vec {Y}'} \otimes \iota _u^k\iota _v^{l'} \right)\right]\nonumber \\&\quad =\sum _{i=1}^p\sum _{j=1}^q \sum _{u=1}^k\sum _{v=1}^l \left[\left( \iota _i^p\iota _j^{q'}\otimes \bigtriangledown _{\overrightarrow{Y'}} Z_{ij}\right)\otimes I_k \right] \left[ I_q\otimes \left( \bigtriangledown _{X_{uv}}{\vec {Y}'} \otimes \iota _u^k\iota _v^{l'} \right)\right]\nonumber \\&\quad =\sum _{i=1}^p\sum _{j=1}^q \sum _{u=1}^k\sum _{v=1}^l \left[\left( \iota _i^p\iota _j^{q'}\otimes \bigtriangledown _{\overrightarrow{Y'}} Z_{ij}\right)\otimes I_k \right] \left[ \left(I_q\otimes \bigtriangledown _{X_{uv}}{\vec {Y}'} \right) \otimes \iota _u^k\iota _v^{l'} \right]\nonumber \\&\quad =\sum _{i=1}^p\sum _{j=1}^q \sum _{u=1}^k\sum _{v=1}^l \left[\left( \iota _i^p\iota _j^{q'}\otimes \bigtriangledown _{\overrightarrow{Y'}} Z_{ij}\right) \left(I_q\otimes \bigtriangledown _{X_{uv}}{\vec {Y}'} \right) \right] \otimes \left[ I_k \iota _u^k\iota _v^{l'} \right]\nonumber \\&\quad =\sum _{i=1}^p\sum _{j=1}^q \sum _{u=1}^k\sum _{v=1}^l \left[\left( \iota _i^p\iota _j^{q'} I_q\right) \otimes \left(\bigtriangledown _{\overrightarrow{Y'}} Z_{ij} \bigtriangledown _{X_{uv}}{\vec {Y}'} \right) \right] \otimes \left[ \iota _u^k\iota _v^{l'} \right]\nonumber \\&\quad =\sum _{i=1}^p\sum _{j=1}^q \sum _{u=1}^k\sum _{v=1}^l \left[ \iota _i^p\iota _j^{q'} \otimes \iota _u^k\iota _v^{l'} \right] \left(\bigtriangledown _{\overrightarrow{Y'}} Z_{ij} \bigtriangledown _{X_{uv}}{\vec {Y}'} \right) \nonumber \\&\quad =\sum _{i=1}^p\sum _{j=1}^q \sum _{u=1}^k\sum _{v=1}^l \left[ \iota _i^p\iota _j^{q'} \otimes \iota _u^k\iota _v^{l'} \right] \left( \frac{\partial Z_{ij}}{\partial X_{uv} } \right)\nonumber \\&\quad = \bigtriangledown _X Z. \end{aligned}$$
(A.11)

\(\square \)

Proof of Proposition 6

(a) By induction. Put \({\widetilde{x}} = x-x_0\). Consider the univariate function \(g(\tau ) = f(x_0+ \tau {\widetilde{x}} )\). We see by the chain rule that

$$\begin{aligned} \frac{d g(\tau )}{ d\tau } = f^{(1)}(x_0+ \tau {\widetilde{x}} ) {\widetilde{x}} \end{aligned}$$
(A.12)

and if

$$\begin{aligned} \frac{d^j g(\tau )}{ d\tau ^j}&= f^{(j)}(x_0+ \tau {\widetilde{x}} ) {\widetilde{x}}^{\otimes j} \end{aligned}$$
(A.13)

then

$$\begin{aligned} \frac{d^{j+1} g(\tau )}{ d\tau ^{j+1}}&= \bigtriangledown (f^{(j)}(x_0+ \tau {\widetilde{x}} ) {\widetilde{x}}^{\otimes j} )\nonumber \\&= \left( f^{(j+1)}(x_0+ \tau {\widetilde{x}} ) (I_{k^j}\otimes {\widetilde{x}} ) \right) {\widetilde{x}}^{\otimes j} \nonumber \\&= f^{(j+1)}(x_0+ \tau {\widetilde{x}} ) {\widetilde{x}}^{\otimes (j+1)}. \end{aligned}$$
(A.14)

From a Taylor expansion of \(g(\tau )\) we have

$$\begin{aligned} g(\tau )&= \sum _{j=0}^J \frac{1}{j!} g^{(j)}(\tau _0)(\tau - \tau _0)^j +\frac{1}{J!} \int _{\tau _0}^\tau g^{(J+1)}(t)(\tau - t)^J dt \end{aligned}$$
(A.15)

where the integral form of the remainder is standard and can be confirmed by induction. Setting \(\tau = 1\), \(\tau _0=0\) we get the Taylor series polynomial approximation. The integral form of the remainder term is given by

$$\begin{aligned} r_J(x)&= \frac{1}{J!} \int _{\tau _0}^\tau g^{(J+1)}(t)(\tau - t)^J dt \nonumber \\&= \frac{1}{J!} \int _0 ^1 f^{(J+1)}(x_0+ t(x-x_0) )(1 - t)^J dt \,(x-x_0)^{\otimes (J+1)}. \end{aligned}$$
(A.16)

To retrieve the Lagrange form of the remainder, note that \(g^{(J+1)}(t)\) is continuous and attains its maximum (\(\triangle \)) and minimum (\({\tilde{\triangle }}\)) on [0, 1]. Therefore,

$$\begin{aligned} {\tilde{\triangle }} \le g^{(J+1)}(t)\le \triangle , \end{aligned}$$
(A.17)
$$\begin{aligned} {\tilde{\triangle }} (1 - t)^J \le g^{(J+1)}(t)(1 - t)^J \le \triangle (1 - t)^J , \end{aligned}$$
(A.18)
$$\begin{aligned} \frac{{\tilde{\triangle }}}{J+1} = {\tilde{\triangle }} \int _0^1(1 - t)^J dt \le \int _0^1 g^{(J+1)}(t)(1 - t)^J dt \le \triangle \int _0^1 (1 - t)^J dt = \frac{\triangle }{J+1}, \end{aligned}$$
(A.19)

or

$$\begin{aligned} {\tilde{\triangle }} \le (J+1)\int _0^1 g^{(J+1)}(t)(1 - t)^J dt \le \triangle . \end{aligned}$$
(A.20)

By the intermediate value theorem, there is a c such that

$$\begin{aligned} g^{(J+1)}(c) = (J+1)\int _0^1 g^{(J+1)}(t)(1 - t)^J dt = f^{(J+1)}( x_0 + c(x- x_0)){\widetilde{x}}^{\otimes (J+1)} \end{aligned}$$
(A.21)

and

$$\begin{aligned} r_J(x )&= \frac{1}{(J+1)!} f^{(J+1)}( x_0 + c(x- x_0)) (x-x_0)^{\otimes (J+1)}. \end{aligned}$$
(A.22)

To prove (b) note that

$$\begin{aligned} \bigtriangledown ^l f_J(x)&= \sum _{j=l}^J\frac{1}{j!} \bigtriangledown ^l \left( f^{(j)}(x_0)(x-x_0)^{\otimes j}\right)\nonumber \\&= \sum _{j=l}^J\frac{1}{j!} \left( f^{(j)}(x_0)\otimes I_{k^l}\right)\bigtriangledown ^l\left( (x-x_0)^{\otimes j}\right) \end{aligned}$$
(A.23)

and

$$\begin{aligned} \bigtriangledown ^l f_J(x_0)&= \frac{1}{l!} \left( f^{(l)}(x_0)\otimes I_{k^l}\right)\bigtriangledown ^l\left( (x-x_0)^{\otimes l}\right)= \frac{1}{l!} \bigtriangledown ^l\left( (x-x_0)'^{\otimes l}\right) \bigtriangledown ^l f(x_0). \end{aligned}$$
(A.24)

By Propositions 3 and 4

$$\begin{aligned} \bigtriangledown ^l f_J(x_0)&= \frac{1}{l!} l! \prod _{j=1}^l \left( I_{k^{j-1}}\otimes K_{k,k^{l-j}} \right) (I_{k^{j-1}}\otimes K_{k^{l-j},k}) \bigtriangledown ^l f(x_0)\nonumber \\&= \bigtriangledown ^l f(x_0). \end{aligned}$$
(A.25)

\(\square \)

As an example of the matrix version of Taylor's Theorem, consider a function \(f:{{\mathbb {R}}}^2\rightarrow {{\mathbb {R}}}\). We refer back to the derivatives at the beginning of the Appendix. With

$$\begin{aligned} x'&= \begin{pmatrix} x_1&x_2 \end{pmatrix},\nonumber \\ x'^{\otimes 2}&= x'\otimes x'\nonumber \\&= \begin{pmatrix} x_1^2&x_1 x_2&x_2x_1&x_2^2 \end{pmatrix},\nonumber \\ x'^{\otimes 3}&= x'^{\otimes 2} \otimes x'\nonumber \\&= \begin{pmatrix} x_1^3&x_1^2 x_2&x_1 x_2 x_1&x_1 x_2 x_2&x_2 x_1 x_1&x_2 x_1 x_2&x_2^2 x_1&x_2^3\end{pmatrix},\nonumber \\ x'^{\otimes 4}&= x'^{\otimes 3} \otimes x'\nonumber \\&= \begin{pmatrix} x_1^4 & x_1^3 x_2 & x_1^2 x_2 x_1 & x_1^2 x_2 x_2 & x_1 x_2 x_1 x_1 & x_1 x_2 x_1 x_2 & \cdots \\ \cdots & x_1 x_2 x_2 x_1 & x_1 x_2 x_2 x_2 & x_2 x_1 x_1 x_1 & x_2 x_1 x_1 x_2 & x_2 x_1 x_2 x_1 & \cdots \\ \cdots & x_2 x_1 x_2 x_2 & x_2^2 x_1 x_1 & x_2^2 x_1 x_2 & x_2^3 x_1 & x_2^4 & \end{pmatrix}. \end{aligned}$$
(A.26)

By inspection we confirm that we can write a fourth-order Taylor series approximation at a point \(x= x_0\) as

$$\begin{aligned} f(x)&\approx f(x_0) + f^{(1)} (x_0) (x-x_0) + \frac{1}{2!} f^{(2)}(x_0) (x-x_0)^{\otimes 2} + \frac{1}{3!} f^{(3)}(x_0) (x-x_0)^{\otimes 3} \nonumber \\&\quad + \frac{1}{4!} f^{(4)}(x_0) (x-x_0)^{\otimes 4} \nonumber \\&= \sum _{j=0}^4 \frac{1}{j!} f^{(j)}(x_0)(x-x_0)^{\otimes j} \end{aligned}$$
(A.27)

where

$$\begin{aligned} f^{(j)}(x_0)(x-x_0)^{\otimes j}= & {} \sum _{i_1=1}^k \sum _{i_2=1}^k \cdots \sum _{i_j=1}^k f^{i_1 i_2\cdots i_j}(x_0) (x_{i_1} - x_{i_1,0}) \nonumber \\&(x_{i_2} - x_{i_2,0}) \cdots (x_{i_j} - x_{i_j,0}). \end{aligned}$$
(A.28)
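
As a numerical check of (A.27)-(A.28), the following sketch (Python with sympy and numpy; the test function, expansion point and evaluation point are arbitrary choices of ours) builds the Kronecker derivatives and powers directly and compares the fourth-order approximation with the exact value:

```python
import math
import numpy as np
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
f = sp.exp(x1 + x2 / 2)                        # an arbitrary smooth test function
x0 = np.array([0.1, -0.2])                     # expansion point
xx = np.array([0.25, 0.05])                    # evaluation point
dx = xx - x0

terms, approx = [f], 0.0
for j in range(5):                             # j = 0, ..., 4 as in (A.27)
    fj = np.array([float(t.subs({x1: x0[0], x2: x0[1]})) for t in terms])
    dxj = np.array([1.0])                      # (x - x0)^{otimes j}
    for _ in range(j):
        dxj = np.kron(dxj, dx)
    approx += fj @ dxj / math.factorial(j)
    terms = [sp.diff(t, xi) for t in terms for xi in xs]

exact = float(f.subs({x1: xx[0], x2: xx[1]}))
print(abs(exact - approx))    # on the order of 1e-5: the remainder is O(|x - x0|^5)
```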

Proof of Proposition 7

(By induction.) We make repeated use of Proposition 1, in particular 1 (b) and 1 (g). For \(J=1\)

$$\begin{aligned} \bigtriangledown ^1 A&= \bigtriangledown (C\otimes B)\nonumber \\&= C\otimes \bigtriangledown B + (K_{1,k}\otimes I_k) ( B\otimes \bigtriangledown C)\nonumber \\&= a^1_{0}(\bigtriangledown ^{0} C\otimes \bigtriangledown ^{1-0} B) + a^1_{1}( \bigtriangledown ^{ 1} C\otimes \bigtriangledown ^{1-1} B) \end{aligned}$$
(A.29)

setting \(a^1_{ 0}=I_{k^2}=I_k\otimes I_k\) and \( a^1_{1}= (I_k\otimes I_k)(I_{k^0}\otimes K_{k,k})\). Suppose the result holds for \( J= K\) so that

$$\begin{aligned} \bigtriangledown ^K A = \sum _{j=0}^K a^K_{j} (\bigtriangledown ^j C\otimes \bigtriangledown ^{K-j} B) . \end{aligned}$$
(A.30)

Then

$$\begin{aligned} \bigtriangledown ^{K+1} A&= \sum _{j=0}^K (a^K_{ j}\otimes I_k) \bigtriangledown ( \bigtriangledown ^j C \otimes \bigtriangledown ^{K-j} B )\nonumber \\&= \sum _{j=0}^K (a^K_{ j}\otimes I_k)\left[ (\bigtriangledown ^j C\otimes \bigtriangledown ^{K-j+1} B )\right. \nonumber \\&\quad \left. + ( K_{k^j,k^{K-j+1}}\otimes I_k)( \bigtriangledown ^{K-j} B\otimes \bigtriangledown ^{j+1} C ) \right] \nonumber \\&= \sum _{j=0}^K (a^K_{ j}\otimes I_k)\left[ (\bigtriangledown ^j C \otimes \bigtriangledown ^{K-j+1} B ) \right. \nonumber \\&\quad \left. + ( K_{k^j,k^{K-j+1}}\otimes I_k)K_{k^{K-j+1},k^{j+1}}(\bigtriangledown ^{j+1} C\otimes \bigtriangledown ^{K-j} B )\right] \nonumber \\&= \sum _{j=0}^K (a^K_{j}\otimes I_k)\left[ (\bigtriangledown ^j C\otimes \bigtriangledown ^{K-j+1} B )\right. \nonumber \\&\quad \left. + ( I_{k^j }\otimes K_{k^{K-j+1},k })(\bigtriangledown ^{j+1} C \otimes \bigtriangledown ^{K-j} B )\right] . \end{aligned}$$
(A.31)

By rearrangement of the \(a^K_{ j}\) coefficients we have

$$\begin{aligned} \bigtriangledown ^{K+1} A&= \sum _{j=0}^{K+1} a^{ K+1 }_j ( \bigtriangledown ^{j } C \otimes \bigtriangledown ^{K+1- j} B ) \end{aligned}$$
(A.32)

where \(a^{ K+1 }_0 =a^K_{ 0 }\otimes I_k\), \(a^{ K+1 }_{ K+1 }=I_{k^K }\otimes K _{k,k }\) and

$$\begin{aligned} a^{K+1 }_j= (a^K_{ j}\otimes I_k) + (a^K_{ j-1 }\otimes I_k)( I_{k^{j-1}}\otimes K_{k^{(K+1)- j +1},k }). \end{aligned}$$
(A.33)

\(\square \)

Proof of Proposition 8

(By induction.) For \(j=1\), by the chain rule and application of Proposition 1 (h),

$$\begin{aligned} \bigtriangledown _t f(t)&= (\bigtriangledown _{s} g(s) \otimes I_k) ( I_1\otimes \vec {B})\nonumber \\&=(\bigtriangledown _{s} g(s) \otimes I_k) \vec {B} \nonumber \\&=\text {Vec}[ { I_k {B}( \bigtriangledown _{s}g(s) ) }]\nonumber \\&= { {B} \bigtriangledown _{s }g(s) }. \end{aligned}$$
(A.34)

Suppose the result holds for j: \( \bigtriangledown ^j_t f(t) = B^{\otimes j} \bigtriangledown ^j_s g(s)\). Again by the chain rule and repeated application of Proposition 1, in particular 1 (h),

$$\begin{aligned} \bigtriangledown _t( B^{\otimes j} \bigtriangledown ^j_s g(s) )&= ( B^{\otimes j} \otimes I_k) \bigtriangledown _t(\bigtriangledown ^j_s g(s) ) \nonumber \\&= ( B^{\otimes j} \otimes I_k) [(\bigtriangledown _{s} \bigtriangledown ^j_s g(s)) \otimes I_k] ( I_1\otimes \vec {B})\nonumber \\&= ( B^{\otimes j} \otimes I_k) \text {Vec}[ { I_k B (\bigtriangledown _{s} \bigtriangledown ^j_s g(s)) }] \nonumber \\&= ( B^{\otimes j} \otimes I_k) \text {Vec}[ { B (\bigtriangledown _{s} \bigtriangledown ^j_s g(s)) }] \nonumber \\&= \text {Vec}[{ I_k B (\bigtriangledown _{s} \bigtriangledown ^j_s g(s)) B^{' \otimes j} }] \nonumber \\&= \text {Vec}[{ B (\bigtriangledown _{s} \bigtriangledown ^j_s g(s)) B^{' \otimes j} }] \nonumber \\&= ( B^{ \otimes j} \otimes B )\text {Vec}[{ (\bigtriangledown _{s} \bigtriangledown ^j_s g(s)) } ]\nonumber \\&= B^{ \otimes (j+1)} \bigtriangledown ^{j+1}_s g(s) \end{aligned}$$
(A.35)

noting that when h(s) is \(n\times 1\) and s is \(k\times 1\), \(\text {Vec}[(\bigtriangledown _{s} h(s)) ]= \bigtriangledown _{s } h(s)\). \(\square \)
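
Proposition 8 can likewise be checked symbolically. A brief sketch (Python with sympy and numpy; the matrix B, the function g and the evaluation point are arbitrary choices of ours), taking \(s = B't\) so that (A.34) gives \(\bigtriangledown _t f = B \bigtriangledown _s g\), and verifying \(\bigtriangledown ^2_t f(t) = B^{\otimes 2}\bigtriangledown ^2_s g(s)\):

```python
import numpy as np
import sympy as sp

t1, t2, s1, s2 = sp.symbols('t1 t2 s1 s2')
B = np.array([[1.0, 2.0], [0.5, -1.0]])
g = sp.exp(s1) * sp.cos(s2) + s1 * s2**2
subs_s = {s1: B[0, 0] * t1 + B[1, 0] * t2,     # s = B't
          s2: B[0, 1] * t1 + B[1, 1] * t2}
f = g.subs(subs_s)                             # f(t) = g(B't)

def k_deriv(expr, vars_, j, at):
    # j'th Kronecker derivative, evaluated numerically at the point 'at'
    terms = [expr]
    for _ in range(j):
        terms = [sp.diff(t, v) for t in terms for v in vars_]
    return np.array([float(t.subs(at)) for t in terms])

at_t = {t1: 0.3, t2: -0.2}
at_s = {v: float(e.subs(at_t)) for v, e in subs_s.items()}
lhs = k_deriv(f, [t1, t2], 2, at_t)
rhs = np.kron(B, B) @ k_deriv(g, [s1, s2], 2, at_s)
assert np.allclose(lhs, rhs)
```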

Proof of Proposition 9

Since

$$\begin{aligned} \bigtriangledown (Y\otimes Z) = Y\otimes \bigtriangledown Z+ (K_{s,p} \otimes I_k) (Z\otimes \bigtriangledown Y) (K_{q,t} \otimes I_l), \end{aligned}$$
(A.36)

then

$$\begin{aligned} \int \bigtriangledown (Y\otimes Z) dX= & {} \int Y\otimes \bigtriangledown Z \, dX \nonumber \\&+\int (K_{s,p} \otimes I_k) (Z\otimes \bigtriangledown Y) (K_{q,t} \otimes I_l)\,dX = 0 \end{aligned}$$
(A.37)

and the result follows. \(\square \)

Proof of Proposition 10

Using Proposition 9 (integration by parts)

$$\begin{aligned} {\mathcal {C}}_{f^{(1)}}(t)&= \int ( \bigtriangledown f (x)) e^{it'x} dx\nonumber \\&= -\int f (x)\bigtriangledown ( e^{it'x} )\, dx\nonumber \\&= -\int f (x) e^{it'x}\, it \, dx\nonumber \\&= (-it){\mathcal {C}}_f (t). \end{aligned}$$
(A.38)

Using Proposition 2 (differentiation of products),

$$\begin{aligned} \bigtriangledown ( ( \bigtriangledown ^{ r- 1}f(x)) e^{it'x})&= e^{it'x} \bigtriangledown ^{r }f(x)+ e^{it'x} ( \bigtriangledown ^{r-1}f(x))\otimes (it ) \end{aligned}$$
(A.39)

so

$$\begin{aligned} \int \bigtriangledown ( e^{it'x} \bigtriangledown ^{ r-1 }f(x))\, dx&= \int e^{it'x} ( \bigtriangledown ^{ r-1 }f(x))\, dx \otimes (it) + \int e^{it' x} ( \bigtriangledown ^{ r }f(x))\, dx =0. \end{aligned}$$
(A.40)
$$\begin{aligned} 0&= \int e^{it'x} ( \bigtriangledown ^{ r-1 }f(x))\, dx \otimes (it) + \int e^{it'x} \bigtriangledown ^{ r }f(x)\, dx \end{aligned}$$
(A.41)

or \( {\mathcal {C}}_{ f^{(r)}}(t) = (-it) \otimes {\mathcal {C}}_{ f^{(r-1 )}}(t) \). Repeating this \(r-1\) times we have \( {\mathcal {C}}_{ f^{(r)}}(t) =(-it)^{\otimes r} {\mathcal {C}}_{ f }(t).\) \(\square \)

Proof of Proposition 12

Use Proposition 7 putting \(A = C \otimes B = \bigtriangledown C\) and \(B=\bigtriangledown \log C\) so that

$$\begin{aligned} \bigtriangledown ^J C = \bigtriangledown ^{J-1} A&= \sum _{j=0}^{J -1}a^{J-1 }_j (\bigtriangledown ^j C\otimes \bigtriangledown ^{J-1-j} B)\nonumber \\&= C\bigtriangledown ^{J-1 } B + \sum _{j=1}^{J -1}a^{J-1 }_{ j} (\bigtriangledown ^j C\otimes \bigtriangledown ^{J-1-j} B) \end{aligned}$$
(A.42)

and

$$\begin{aligned} \bigtriangledown ^J \log C= \bigtriangledown ^{J-1 } B&= \triangle ^J - \sum _{j_i =1}^{J -1}a^{J-1}_{j_i } \left( \triangle ^{j_i } \otimes \bigtriangledown ^{J-1-j_i } B \right). \end{aligned}$$
(A.43)

Making \(l= J-1\) such substitutions we can write

$$\begin{aligned} \bigtriangledown ^{J-1 } B&= \triangle ^J + \sum _{l=1}^{J-1}(-1)^l \sum _{j_1=1}^{J_0 -1} \sum _{j_2=1}^{J_1 -1}\cdots \sum _{j_{l-1}=1}^{ J_l -1} \left( \overset{l-1}{ \underset{i=0}{\prod }} (I_{k^{J-J_i}}\otimes a^{ J _i -1}_{j_{i+1}} )\right) \left( \overset{l}{ \underset{i=1}{\bigotimes }} \triangle ^{j_i} \right) \otimes \triangle ^{ J_ l} \end{aligned}$$
(A.44)

recalling \(J_s= J -j_0- j_1-\cdots -j_{s }, j_0=0.\) \(\square \)

As examples, let C(t) be the moment generating function of a \(k\times 1\) random variable having finite moments \(\mu _J\), \(J\le 4\), and consider the first four derivatives of \(\log C(t)\). For \(J=2\) note that l only takes the value \(l=1\) and

$$\begin{aligned} \bigtriangledown ^2\log C&=\triangle ^2 - \sum _{j_1 =1}^{ 1 } \prod _{s=0}^{0}(I_{k^{J- J_s} } \otimes a_{j_{s+1}}^{ J_s -1 } ) \bigotimes _{s=1}^l \triangle ^{j_s} \otimes \triangle ^{J_1}\nonumber \\&=\triangle ^2 - \prod _{s=0}^{0}(I_{k^{0} } \otimes a_{1}^{ 1 } ) (\triangle ^{1} \otimes \triangle ^{1}) \end{aligned}$$
(A.45)

Note that \(a_1^1 =K_{k,k}\). Evaluating this expression at \(t=0\) we have \( \bigtriangledown ^2\log C (0)=\mu _2 - \mu _1^{\otimes 2}\). For \(J=3\),

$$\begin{aligned} \bigtriangledown ^3\log C&= {\triangle ^J }+\sum _{l=1}^{2} (-1)^l \sum _{j_{ 1} =1}^{2 } \sum _{j_{2 } =1}^{J-j _1 -1 } \prod _{s=0}^{l-1}(I_{k^{J- J_s} } \otimes a_{j_{s+1}}^{ J_s -1 } ) \bigotimes _{s=1}^l \triangle ^{j_s}\otimes \triangle ^{J_l}\nonumber \\&= {\triangle ^3}- (I_{k^{0} } \otimes a_{1 }^{2 } )( \triangle ^{1}\otimes \triangle ^{2})- (I_{k^{0} } \otimes a_{2}^{ 2 } ) ( \triangle ^{2}\otimes \triangle ^{ 1} )\nonumber \\&\quad + (I_{k^{0} } \otimes a_{1}^{ 2 } )(I_{k^{1} } \otimes a_{1 }^{ 1 }) ( \triangle ^{1}\otimes \triangle ^{ 1}\otimes \triangle ^{ 1}) \end{aligned}$$
(A.46)

and

$$\begin{aligned} t^{\top \otimes 3}\bigtriangledown ^3\log C(0)&= t^{\top \otimes 3}{\mu _3}- 3 t^{\top \otimes 3}( \mu _{1}\otimes \mu _{2}) +2 t^{\top \otimes 3} \mu _{1}^{\otimes 3}. \end{aligned}$$
(A.47)

For \(J=4\),

$$\begin{aligned} \bigtriangledown ^4\log C&= {\triangle ^4 }+\sum _{l=1}^{3} (-1)^l \sum _{j_{ 1} =1}^{3} \sum _{j_{2 } =1}^{J-j _1 -1} \sum _{j_{3 } =1}^{J-j _1-j_2 -1 } \prod _{s=0}^{l-1}(I_{k^{J- J_s} } \otimes a_{j_{s+1}}^{ J_s -1 } ) \bigotimes _{s=1}^l \triangle ^{j_s}\otimes \triangle ^{J_l}\nonumber \\&= {\triangle ^4 } - (I_{k^{0} } \otimes a_{1}^{ 3} ) ( \triangle ^{1}\otimes \triangle ^{3} ) - (I_{k^{0} } \otimes a_{2}^{ 3} ) ( \triangle ^ 2\otimes \triangle ^{ 2} ) \nonumber \\&\quad - (I_{k^{0} } \otimes a_{3}^{ 3} ) ( \triangle ^3 \otimes \triangle ^{1} )\nonumber \\&\quad + (I_{k^{0} } \otimes a_{1}^{ 3 } ) (I_{k^{1} } \otimes a_{1}^{2 } )( \triangle ^{1}\otimes \triangle ^{1}\otimes \triangle ^{2} ) \nonumber \\&\quad + (I_{k^{0} } \otimes a_{1}^{ 3 } ) (I_{k^{1} } \otimes a_{2}^{ 2} )( \triangle ^{1}\otimes \triangle ^{2}\otimes \triangle ^{1} )\nonumber \\&\quad + (I_{k^{0} } \otimes a_{2}^{ 3 } ) (I_{k^{1} } \otimes a_{1}^{ 1 } )( \triangle ^{2}\otimes \triangle ^{1}\otimes \triangle ^{1} )\nonumber \\&\quad - (I_{k^{0} } \otimes a_{1}^{3 } ) (I_{k^{1} } \otimes a_{1}^{2} ) (I_{k^{ 2} } \otimes a_{1}^{ 1} ) ( \triangle ^{1}\otimes \triangle ^{1}\otimes \triangle ^{1}\otimes \triangle ^{1} ) \end{aligned}$$
(A.48)

and

$$\begin{aligned} t^{\top \otimes 4} \bigtriangledown ^4\log C (0)&= t^{\top \otimes 4}{\mu _4 } - 4 t^{\top \otimes 4} ( \mu _{1}\otimes \mu _{3} ) - 3 t^{\top \otimes 4} \mu _2^{ \otimes 2} \nonumber \\&\quad + 12 t^{\top \otimes 4} ( \mu _1^{ \otimes 2} \otimes \mu _2 ) - 6 t^{\top \otimes 4} \mu _1^{ \otimes 4} . \end{aligned}$$
(A.49)
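
These moment-cumulant identities are straightforward to verify numerically. A short sketch (Python with sympy and numpy; the toy discrete distribution is our own choice) differentiates \(\log C(t)\) directly and compares with the K-moment expressions in (A.45) and (A.47):

```python
import numpy as np
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
ts = [t1, t2]
atoms = [np.array(a) for a in [(0.0, 1.0), (1.0, 2.0), (2.0, 0.0)]]  # toy 2-vector support
probs = [0.2, 0.5, 0.3]
C = sum(p * sp.exp(t1 * a[0] + t2 * a[1]) for p, a in zip(probs, atoms))  # the MGF
logC = sp.log(C)

def k_deriv_at_zero(expr, j):
    terms = [expr]
    for _ in range(j):
        terms = [sp.diff(e, ti) for e in terms for ti in ts]
    return np.array([float(e.subs({t1: 0, t2: 0})) for e in terms])

def mu(j):  # K-moment mu_j = E[X^{otimes j}]
    out = np.zeros(2**j)
    for p, a in zip(probs, atoms):
        m = np.array([1.0])
        for _ in range(j):
            m = np.kron(m, a)
        out += p * m
    return out

mu1, mu2, mu3 = mu(1), mu(2), mu(3)
# second K-cumulant: grad^2 log C(0) = mu_2 - mu_1^{otimes 2}, as in (A.45)
assert np.allclose(k_deriv_at_zero(logC, 2), mu2 - np.kron(mu1, mu1))
# third K-cumulant, contracted with t^{otimes 3} as in (A.47)
t = np.array([0.7, -0.4])
t3 = np.kron(np.kron(t, t), t)
lhs = t3 @ k_deriv_at_zero(logC, 3)
rhs = t3 @ (mu3 - 3 * np.kron(mu1, mu2) + 2 * np.kron(np.kron(mu1, mu1), mu1))
assert np.isclose(lhs, rhs)
```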

Proof of Proposition 15

As per Proposition 6, an \((s-1)\)’th order Taylor series expansion of \({{\mathcal {K}}} (t;T_N)\) at \(t=0\) yields

$$\begin{aligned} {{\mathcal {K}}} (t;T_N)&= \sum _{j=0}^s\frac{1}{j!} {{\mathcal {K}}} ^{(j)}( 0;T_N) t ^{\otimes j}+ r_s(t), \end{aligned}$$
(A.50)
$$\begin{aligned} r_s(t) = \frac{1}{s!}( {{\mathcal {K}}}^{(s)}(c t;T_N )-{{\mathcal {K}}}^{(s)}( 0;T_N)) t ^{\otimes s}. \end{aligned}$$
(A.51)

where c is between zero and one. Note that \({{\mathcal {K}}} (t;T_N)= N {{\mathcal {K}}} (t/\sqrt{N};X)\). By Proposition 14 we have \( \bigtriangledown ^j_t {{\mathcal {K}}} (t/\sqrt{N};X) = \sqrt{N}^{-j} \bigtriangledown ^j_s {{\mathcal {K}}} (s;X)\). By Proposition 13 we have \( \bigtriangledown ^j {{\mathcal {K}}} (0;X) = i ^j\kappa _j\). Substituting these results into the Taylor series and observing the first two terms are zero, we obtain the desired result. \(\square \)

In anticipation of quantifying the difference between \(e^{ {{\mathcal {K}}}_s}\) and \({\mathcal {C}}_s\), temporarily define

$$\begin{aligned} c_l = N^{-l/2} \frac{\kappa _{l+2}' (it)^{\otimes (l+2)}}{(l+2)!}, \end{aligned}$$
(A.52)

noting that \({{\mathcal {K}}} _s^\dagger (t;T_N)= \sum _{l=1}^s c_l\). Let \(P^\dagger (t)\) denote a generic polynomial in t of order less than 2s.

Lemma A1

Put \(s_j = s-l_1 -l_2-\cdots - l_{j-1}\) with \(s_1 =s \). For \(2\le j\le s\)

$$\begin{aligned} {{\mathcal {K}}} ^{\dagger j}_s =\sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_j=1}^{s_j} c_{l_1} c_{l_2}\cdots c_{l_j} + o\left(N^{-(s-2)/2}\right)P^\dagger (t). \end{aligned}$$

Proof

By induction. For \(j=2\) we have

$$\begin{aligned} {{\mathcal {K}}}_s ^{\dagger 2}&= \sum _{l_1=1}^{s } \sum _{l_2=1}^{s }c_{l_1} c_{l_2}. \end{aligned}$$
(A.53)

Note that \(c_{l_1} c_{l_2} = N^{-(l_1+l_2)/2}P^\dagger (t)\). Therefore

$$\begin{aligned} {{\mathcal {K}}} ^{\dagger 2}&= \sum _{l_1=1}^{s } \sum _{l_2=1}^{s } 1[{l_1} + {l_2} \le s] c_{l_1} c_{l_2} + o\left( N^{-(s-2)/2}\right)P^\dagger (t)\nonumber \\&= \sum _{l_1=1}^{s } \sum _{l_2=1}^{s- l_1 } c_{l_1} c_{l_2} + o\left( N^{-(s-2)/2}\right)P^\dagger (t)\nonumber \\&= \sum _{l_1=1}^{s_1 } \sum _{l_2=1}^{s_2} c_{l_1} c_{l_2} + o\left( N^{-(s-2)/2}\right)P^\dagger (t). \end{aligned}$$
(A.54)

Now, suppose the result holds for some \(j=k\) with \(2\le k< s\), so that

$$\begin{aligned} {{\mathcal {K}}} ^{\dagger k} =\sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_k=1}^{s_k} c_{l_1} c_{l_2}\cdots c_{l_k} + o\left(N^{-(s-2)/2}\right)P^\dagger (t). \end{aligned}$$
(A.55)

Then

$$\begin{aligned} {{\mathcal {K}}} ^{\dagger k+1}&=\left(\sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_k=1}^{s_k} c_{l_1} c_{l_2}\cdots c_{l_k} + o\left(N^{-(s-2)/2}\right)P^\dagger (t)\right)\sum _{l_{k+1}=1}^{s } c_{l_{k+1}} \nonumber \\&= \sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_k=1}^{s_k}\sum _{l_{k+1}=1}^{s } c_{l_1} c_{l_2}\cdots c_{l_k}c_{l_{k+1}} + o\left(N^{-(s-2)/2}\right)P^\dagger (t). \end{aligned}$$
(A.56)

Note that \(c_{l_1} c_{l_2}\cdots c_{l_k}c_{l_{k+1}}\propto N^{-(l_1+l_2+\cdots +l_{k+1})/2} \). Therefore

$$\begin{aligned} {{\mathcal {K}}} ^{\dagger k+1}&= \sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_k=1}^{s_k}\sum _{l_{k+1}=1}^{s } 1[{l_1} + {l_2}+\cdots +l_{k+1} \le s] c_{l_1} c_{l_2}\cdots c_{l_k}c_{l_{k+1}} + o\left(N^{-(s-2)/2}\right)P^\dagger (t)\nonumber \\&= \sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_k=1}^{s_k}\sum _{l_{k+1}=1}^{s_{k+1} } c_{l_1} c_{l_2}\cdots c_{l_k}c_{l_{k+1}} + o\left(N^{-(s-2)/2} \right)P^\dagger (t). \end{aligned}$$
(A.57)

\(\square \)

Lemma A2

\(\sum _{j=0}^s \frac{1}{j!} {{\mathcal {K}}} ^{\dagger j}= P_s(t) + o\left(N^{-(s-2)/2}\right)P^\dagger (t)\).

Proof

We see from Lemma A1 that

$$\begin{aligned} \sum _{j=0}^s \frac{1}{j!} {{\mathcal {K}}} ^{\dagger j}&= 1 +\sum _{j=1}^s \frac{1}{j!}\left( \sum _{l =1}^{s } c_{l } \right)^j\nonumber \\&= 1 +\sum _{j=1}^s \frac{1}{j!}\sum _{l_1=1}^{s_1} \sum _{l_2=1}^{s_2}\cdots \sum _{l_j=1}^{s_j} c_{l_1} c_{l_2}\cdots c_{l_j} + o\left(N^{-(s-2)/2}\right)P^\dagger (t)\nonumber \\&\equiv P _s(t) + o\left(N^{-(s-2)/2}\right)P^\dagger (t). \end{aligned}$$
(A.58)

\(\square \)

Proof of Proposition 16

By the triangle inequality,

$$\begin{aligned} \big | \int _{B_N^c(\delta )}\triangle (t)e^{-it' x} dt\big |\le&\int _{B_N^c(\delta )} \big |{\mathcal {C}} (t;T_N ) \big | dt + \int _{B_N^c(\delta )} \big | P_s(t) \big |e^{-t' V t/2} dt. \end{aligned}$$
(A.59)

Since X is a continuous random variable, by Cramér's condition, for any \(\delta >0\), the first integral is \(o(N^{-(s-2)/2})\). The second integral is \(o(N^{-(s-2)/2})\) since \(P_s(t)\) is a polynomial in t and \(N^\Delta \int _{B_N^c(\delta )} \big | P_s(t) \big |e^{-t'Vt/2} dt=o(1)\) for any positive \(\Delta \). \(\square \)

Proof of Proposition 17

To prove 17 (a) we first see, from Proposition 15, that, for some \(c_s\) in (0, 1),

$$\begin{aligned} e^{ {{\mathcal {K}}} (t;T_N)-{{\mathcal {K}}}_s (t;T_N)} -1&= e^{ r_s(t ) } -1 = r_s(t ) e^{ c_s r_s(t ) } \end{aligned}$$
(A.60)

where, again using Proposition 15,

$$\begin{aligned} | r_s(t)|&\le \Vert R _s(t)\Vert \frac{(t' V t)^{ s/2 }}{s!N^{(s-2)/2}}. \end{aligned}$$
(A.61)

Note that

$$\begin{aligned} { t'^{\otimes j} \kappa _j}&= t'^{\otimes j}( V^{1/2}V^{-1/2} )^{\otimes j}\kappa _j\nonumber \\&= ( V^{1/2}t)'^{\otimes j} \kappa _j^* \end{aligned}$$
(A.62)

where \(\kappa _j^*= ( V^{-1/2} )^{\otimes j}\kappa _j\) is the j’th cumulant of the standardized random variable \(Z=V^{-1/2}X\). Put \({\bar{\kappa }}^*_s = \max _{2\le j\le s} \Vert \kappa _j^*\Vert \), which has a finite upper bound and is bounded from below by 1.

$$\begin{aligned} | {(i V^{1/2}t)'^{\otimes j} \kappa _j^*}|&= | ( V^{1/2}t)'^{\otimes j} \kappa _j^* |\nonumber \\&\le \Vert ( V^{1/2} t )^{\otimes j}\Vert {\bar{\kappa }}^*_s\nonumber \\&= ( t' V t )^{j/2} {\bar{\kappa }}^*_s \end{aligned}$$
(A.63)

so that

$$\begin{aligned} \frac{ | {(i V^{1/2}t) ^{' \otimes j} \kappa _j^*}|}{N^{(j-2)/2}}&\le ( t' V t ) \left( \frac{t' V t}{N} \right)^{(j-2)/2} {\bar{\kappa }}^*_s \end{aligned}$$
(A.64)

and, putting \(\delta _3 = (8s{\bar{\kappa }}^*_s)^{-1}\), we see that, for \( \Vert V^{1/2} t\Vert \le \delta _3\sqrt{N}\),

$$\begin{aligned} \frac{ | {(i V^{1/2}t) ^{' \otimes j} \kappa _j^*}|}{N^{(j-2)/2}}&\le ( t' V t ) \left( \frac{t' V t}{N} \right)^{(j-2)/2} {\bar{\kappa }}^*_s \le \frac{t' V t}{8s} \end{aligned}$$
(A.65)

and for \(\Vert V^{1/2}t\Vert <\delta _3 \sqrt{N}\),

$$\begin{aligned} \Big | {{\mathcal {K}}}_s^\dagger (t;T_N)\Big |&\le \sum _{j=3}^s \frac{ ( t' V t ) }{j! 8 s} \le \frac{1}{8} t' Vt . \end{aligned}$$
(A.66)

Similarly, put \(\delta _2 =1\). For all t such that \(\Vert V^{1/2} t\Vert \le \delta _2\sqrt{N}\),

$$\begin{aligned} | e^{c_s r_s(t)} |&\le e^{ \Vert R _s(t)\Vert \frac{(t' V t)^{ s/2 }}{s!N^{(s-2)/2}}}\nonumber \\&\le e^{ \Vert R _s(t)\Vert t' Vt \left( \frac{ t' Vt }{N} \right)^{(s-2)/2 } }\nonumber \\&\le e^{ \Vert R _s(t)\Vert t' Vt }. \end{aligned}$$
(A.67)

Since the s’th moment of X exists, there exists a neighbourhood around zero in which \({{\mathcal {K}}}^{(s)}( t ; Z )\) is continuous. Thus, for any \(\epsilon >0\), there exists a \(\delta _1>0\) such that, for all t satisfying \(|V^{1/2}t| <\delta _1\), \(\Vert R_s (t) \Vert<\epsilon <\frac{1}{8}\). Choose \(\delta = \min [\delta _1,\delta _2,\delta _3]\) and since \(\bigtriangledown ^s{\mathcal {K}}(t)\) is continuous we can choose any \(\frac{1}{8}>\epsilon >0\) such that for all t with \(\Vert V^{1/2} t\Vert <\delta \sqrt{N}\),

$$\begin{aligned} | \triangle _1(t) |&\le \Vert R _s(t)\Vert \frac{(t' V t)^{ s/2 }}{s!N^{(s-2)/2}}e^{ \Vert R _s(t)\Vert t'Vt } e^{-\frac{1}{2} t' Vt} e^{ \frac{1}{8} t' Vt}\nonumber \\&\le \epsilon \frac{(t' V t)^{ s/2 }}{ N^{(s-2)/2}}e^{ \frac{1}{8} t' Vt } e^{-\frac{1}{2} t' Vt} e^{ \frac{1}{8} t' Vt}\nonumber \\&\le \epsilon \frac{(t' V t)^{ s/2 }}{ N^{(s-2)/2}} e^{-\frac{1}{4} t' Vt} \end{aligned}$$
(A.68)

and

$$\begin{aligned} \int _{B_N(\delta )} | \triangle _1(t) e^{-it'x}| dt&\le \frac{ \epsilon }{ N^{(s-2)/2}} \int (t' V t)^{ s/2 } e^{-\frac{1}{4} t' Vt} dt\nonumber \\&=o\left( N^{-(s-2)/2}\right). \end{aligned}$$
(A.69)

To show 17 (b) we use a Taylor series and Lemma A2 to rewrite

$$\begin{aligned} e^{ {{\mathcal {K}}}^\dagger _s (t;T_N)} - P_s(t)&= e^{c_{s }{{\mathcal {K}}}^\dagger _s (t;T_N)} \frac{ {{\mathcal {K}}}^{\dagger s+1}_s(t;T_N) }{(s+1)!} +\sum _{j=0}^s\frac{{{\mathcal {K}}}^{\dagger j}_s(t;T_N) }{j!} - P_s(t) \nonumber \\&= e^{c_{s }{{\mathcal {K}}}^\dagger _s (t;T_N)} O\left( N^{-(s+1)/2}\right) P^\dagger (t) + o\left( N^{-(s-2)/2}\right) P^\dagger (t), \end{aligned}$$
(A.70)

where \(c_s\) is between zero and one. As in the proof of (a), for t such that \(\Vert V^{1/2}t\Vert < \delta \sqrt{N}\), \(|{{\mathcal {K}}}^\dagger _s (t;T_N) | < t' V t/8\), so that \( | (e^{c_{s }{{\mathcal {K}}}^\dagger _s (t;T_N)} ) e^{-t' V t/2}|\le e^{-t' V t/4} \). Thus

$$\begin{aligned} |\triangle _2(t)|&= O\left( N^{-(s+1)/2}\right) |P^\dagger (t) | e^{-t' V t/4} + o\left( N^{-(s-2)/2}\right) |P^\dagger (t) | e^{-t' V t/2}\nonumber \\&= o\left( N^{-(s-2)/2}\right) |P^\dagger (t) | e^{-t' V t/4} \end{aligned}$$
(A.71)

and

$$\begin{aligned} \left| \int _{B_N(\delta )} \triangle _2(t) e^{-it'x} dt \right|&\le \int \left| \triangle _2(t) \right| dt = o\left(N^{-(s-2)/2}\right). \end{aligned}$$
(A.72)

\(\square \)

Proof of Theorem 18

From Propositions 16 and 17 we see that \((2\pi )^{-k} \int \triangle (t) e^{-it' x} dt= o(N^{-(s-2)/2})\). It remains to confirm the form of \(f_s(x;T_N)\).

$$\begin{aligned} (2\pi )^{-k} \int {\mathcal {C}} _s(t, T_N) e^{-it' x} dt&= (2\pi )^{-k} \int e^{- \frac{1}{2} t' Vt}P_s(t) e^{-it' x} dt. \end{aligned}$$
(A.73)

We recognize that

$$\begin{aligned} (-it)^{\otimes j} e^{- \frac{1}{2} t' Vt}= {\mathcal {C}}_{\bigtriangledown ^j \phi }(t;V), \end{aligned}$$
(A.74)

that is, the CF of the j'th K-derivative of the N(0, V) density, by the inversion properties of Fourier transforms. Note that each of the summands in \(P_s(t)\) is proportional to \((-it) ^{\otimes \sum _{k=1}^j(l_k+2)}e^{-t'Vt/2}\). By Proposition 10 and the properties of Hermite polynomials we have

$$\begin{aligned} \frac{1}{(2\pi )^k}\int e^{- \frac{1}{2} t' Vt }(-it)^{\otimes \sum _{k=1}^j(l_k+2)} e^{-it' x} dt&= H_{ \sum _{k=1}^j(l_k+2)}(x;V) \phi (x;V) \end{aligned}$$
(A.75)

leading to the stated definition of \(f_s(x;T_N)\,\). \(\square \)
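
For completeness, a small symbolic sketch (Python with sympy; we take \(V=I_2\) for simplicity and assume the usual sign convention \(\bigtriangledown ^j \phi (x;V) = (-1)^j H_j(x;V)\phi (x;V)\)) showing how K-Hermite polynomials emerge from K-derivatives of the normal density:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
phi = sp.exp(-(x1**2 + x2**2) / 2) / (2 * sp.pi)   # N(0, I_2) density

terms = [phi]
for _ in range(3):                                  # third K-derivative of phi
    terms = [sp.diff(t, xi) for t in terms for xi in xs]

# assumed convention: grad^3 phi = (-1)^3 H_3 phi, so H_3 = -grad^3 phi / phi
H3 = [sp.expand(sp.simplify(-t / phi)) for t in terms]
print(H3[0])   # x1**3 - 3*x1: the familiar univariate third Hermite polynomial
print(H3[1])   # x1**2*x2 - x2: the mixed (1,1,2) entry
```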


Cite this article

Kundhi, G., Rilstone, P. Simplified Matrix Methods for Multivariate Edgeworth Expansions. J. Quant. Econ. 18, 293–326 (2020). https://doi.org/10.1007/s40953-019-00184-w
